POPULARITY
SaaS Scaled - Interviews about SaaS Startups, Analytics, & Operations
Today, we're joined by Nik Froehlich, founder and CEO of Saritasa, a technology solutions company that designs and develops custom, commercial-grade software systems. We talk about: the pros and cons of building vs. buying software; the challenges of low-code/no-code solutions; the high costs of code debt; coping with client requests for cheaply developed software; whether GenAI will eliminate the need for developers to write code; and current AI development use cases, such as documentation and automatic testing.
This episode features Nigam Shah discussing the sustainability of AI in healthcare, focusing on challenges in development, validation, and regulation. The conversation explores the limitations of current AI models, the evolving role of governance, and the need for localized validation to ensure accuracy and relevance.
Will is here! We discuss forthcoming changes to the show, automation, apps of the year, tech we're thankful for, and some of our favorite media we've been engaging with. This episode was recorded in late December 2024. Subscribe to the Blog… RSS | Email Newsletter Subscribe to the Podcast in… Apple Podcasts | Overcast | Castro | Spotify | RSS Support Music Ed Tech Talk Become a Patron! Buy me a coffee Chapters 00:00:00 Here he goes talking about Dark Souls again 00:08:15 MCU (Muppet Cinematic Universe) 00:15:14 Introducing…Will Kuhn, now a METT Regular! And Jaye, podcast editor! 00:20:23 2025 Mike Kovins TI:ME Teacher of the Year Robby Burns - acknowledgments and thank yous. 00:27:04 Thank you Patreon Supporter Susan! 00:27:46 MacStories' App of the Year: Delta Emulator 00:40:31 Siri Shortcuts App 00:43:32 Current AI uses - productivity, creativity, philosophy 00:51:31 Robby's National Board Certification Application and NotebookLM 00:57:51 What are we really trying to do [with AI and our students]? 01:02:58 Three Holiday Topic Blitz - Tech We're Thankful For; Tech We're Thankful For In the Workplace; Gifts and Gift Cards 01:16:54 How about music?! Show Notes and Links Robby Burns named 2025 Mike Kovins TI:ME Teacher of the Year. 
(1)(https://podcasts.apple.com/us/podcast/music-ed-tech-talk/id1538455772?i=1000597634283) (2)(https://podcasts.apple.com/us/podcast/music-ed-tech-talk/id1538455772?i=1000544778961) (3)(https://podcasts.apple.com/us/podcast/music-ed-tech-talk/id1538455772?i=1000544778961) (4)(https://podcasts.apple.com/us/podcast/music-ed-tech-talk/id1538455772?i=1000577043856) Robby Burns | creating Music Ed Tech Talk | Patreon MacStories Delta User Guide | Delta (5)(https://podcasts.apple.com/us/podcast/music-ed-tech-talk/id1538455772?i=1000510317074) DJ Apps | Algoriddim Google NotebookLM | Note Taking & Research Assistant Powered by AI Genesis - In Too Deep (Official Music Video) - YouTube Apple Vision Pro - Apple PlayStation®5 | Play Has No Limits | PlayStation Move — a compact tool for intuitive music making | Ableton Harmony Director - Brass & Woodwinds - Musical Instruments - Products - Yamaha USA Studio Neat Filterworld — Kyle Chayka Chase and Status Vampire Weekend - Official Website Väsen, Hawktail - Väsen & Hawktail Home - DOMi and JD Beck SAM GREENFIELD | HOME Where to Find Us Robby - robbyburns.com Will - willkuhn.com Please don't forget to rate the show and share it with others! 79 - Teaching Music Tech, with Gillian Desmarais - Music Ed Tech Talk ↩︎ 61 - Music Technology 101, with Heath Jones - Music Ed Tech Talk ↩︎ 61 - Music Technology 101, with Heath Jones - Music Ed Tech Talk ↩︎ 69 - I Don't Want a Valuable Life Lesson, I Just Want An Ice Cream - Music Ed Tech Talk ↩︎ 52 - Dorico Updates! with Daniel Spreadbury - Music Ed Tech Talk ↩︎
A question posed by L'Observateur Paalga in Burkina Faso. As the AI Action Summit takes place in Paris, how is the continent positioning itself in the face of this major technological advance? "Certainly, the African continent is present in Paris through a few government delegations," replies the Ouagadougou daily. "But let us acknowledge that the gap separating it from the others remains abyssal, even if countries like Rwanda, Kenya, Morocco and Nigeria stand out as fairly advanced models. We must therefore fear," sighs L'Observateur Paalga, "that the cradle of humanity as a whole, already behind in the race for cutting-edge technologies, will miss the train of the artificial intelligence revolution. It is true that many other challenges, such as the fight against poverty, hunger, illiteracy, insecurity, conflicts and the effects of climate change, occupy prominent places in our public policies. But," the Burkinabè daily points out, "we must not lose sight of the immense possibilities artificial intelligence offers for responding to all these ills that continue to beset Africa. An engine of growth and development, technological sovereignty is also a formidable accelerator toward the continent's political and economic sovereignty." South Africa, Nigeria, Rwanda and Morocco leading the way. Indeed, Jeune Afrique notes, "several African countries share the same ambition to become leaders in technological innovation, notably South Africa, Nigeria, Rwanda and Morocco. 
Each has economic, geopolitical and cultural specificities that make the competition between these innovation hubs very fierce." The AI-related fields being developed on the continent notably concern health, with medical robotics, aerial robotics (that is, drone delivery of medicines), and telemedicine. At the Paris summit, the Moroccan site Yabilabi notes, "Current AI was launched, an initiative aiming to promote artificial intelligence in the public interest, sponsored by 11 leaders of the technology sector. The partnership brings together several founding countries, Western countries but also Morocco, Nigeria and Kenya. (…) The goal is to raise 2.5 billion dollars over five years to facilitate access to private and public databases in fields such as health and education." Objective: Transform Africa. Back to Jeune Afrique, which notes the presence at this Paris AI summit of several major players from the continent: several ministers of the digital economy are in Paris, from Togo, Morocco, Nigeria, Kenya, Rwanda, Sierra Leone, Lesotho and Guinea-Bissau. "The Ivorian Lacina Koné, director general of the Smart Africa Alliance, is also in Paris," Jeune Afrique further reports, "to promote the launch of the African Council on Artificial Intelligence and to bring the stakeholders together around a discussion on how to structure this initiative. By next July, this body is expected to have drawn up 'a one-year strategic plan' to be presented at the Transform Africa Summit, scheduled for July 22-24 of this year in Kigali." The question of financing and training. So, as we can see, "Africa is not standing on the sidelines," underlines Le Pays. 
"But the main challenges for the continent remain, to a large extent, the financing of infrastructure such as data centers and the means of collecting and storing data. There is also the training of engineers, in line with the continent's realities. Suffice it to say," the paper points out, "that Africa will not be in the leading pack of the AI race anytime soon. Even so, the continent must not stand back, let alone give up. On the contrary," exclaims Le Pays, "it must make the most of AI to boost its development. For, from agriculture to health to education, the opportunities are all the more numerous as AI is emerging today as a response to social, economic and environmental challenges alike."
AI is evolving rapidly and impacting multiple sectors while facing challenges such as data requirements, energy costs, and regulatory issues. The AI community is split between cautionary advocates and researchers addressing biases and privacy concerns, both calling for guidelines. Some current AI applications have led to negative outcomes, including manipulation and bias. Governance of AI presents difficulties for global leaders as skepticism grows around its capabilities. By 2025, anticipated advancements include the continued development of foundational models, tailored AI solutions for specific business needs, the emergence of next-generation SaaS applications, a balance between AI autonomy and oversight, a widening digital divide, organizational changes due to workforce automation, and a shift in regulations focused on user content management. Organizations need to evaluate AI's benefits and risks while adapting business structures to enhance outcomes and employee experiences, ensuring responsible deployment and use of AI to achieve progress.

Learn more about this news at: https://greyjournal.net/ Hosted on Acast. See acast.com/privacy for more information.
This document summarizes a discussion on a YouTube channel called "AI with Honor," where the host discusses his conversation with a programmer regarding the current state and future impact of artificial intelligence (AI). The discussion explores both the potential and pitfalls of AI adoption, especially in business, and raises critical questions about its accuracy, training, and societal impact.

Key Themes and Ideas:

Programmer's Skepticism on Rapid AI Adoption: A programmer, with whom the host spoke, expressed doubt about the speed at which AI will become truly effective and reliable. He believes that despite current hype, AI systems are still generating inaccurate information. Quote: "he sees these systems as they're still generating information that's not accurate"

Concerns about Premature AI Adoption in Business: The programmer warns that businesses prematurely adopting AI, especially to replace human employees, may be in for a "shock." This is due to the perceived inaccuracies and limitations of current AI. Quote: "they're all gonna be in for a shock after they laid off and fired a whole bunch of employees in order to let AI take over because AI won't be as accurate as the employees would have" The host expresses concern about the trend of businesses replacing customer relations staff with AI chatbots. He questions the effectiveness of this strategy over time.

AI Lacks Human Nuance and Outside-the-Box Thinking: The host relays the idea that AI, lacking human intuition, cannot replicate human thinking. 
Humans can react and think outside the box in ways that AI currently cannot replicate. Quote: "AI doesn't have that outside the box thinking as a human being would when they react"

Importance of AI Training Data and Potential for "Bad Stuff": The host underscores the crucial role of training data in AI development, raising concerns about the potential for bias and negativity if the model is trained on problematic datasets. Quote: "you can train the thing on all bad stuff and then have it refine itself run scenarios and more than likely the synthetic data it's going to generate is going to be bad stuff" He calls for cautionary rules regarding how these models are trained and what data they have access to. This includes being wary of "synthetic data" generated by AI.

Limitations of Current AI in Visual Media: The host illustrates AI's limitations in visual media.

YouTube Channels: Conner with Honor - real estate; Home Muscle - fat torching

From first responder to real estate expert, Connor with Honor brings honesty and integrity to your Santa Clarita home buying or selling journey. Subscribe to my YouTube channel for valuable tips, local market trends, and a glimpse into the Santa Clarita lifestyle.

Dive into Real Estate with Connor with Honor: Santa Clarita's Trusted Realtor & Fitness Enthusiast

Real Estate: Buying or selling in Santa Clarita? Connor with Honor, your local expert with over two decades of experience, guides you seamlessly through the process. Subscribe to his YouTube channel for insider market updates, expert advice, and a peek into the vibrant Santa Clarita lifestyle.

Fitness: Ready to unlock your fitness potential? Join Connor's YouTube journey for inspiring workouts, healthy recipes, and motivational tips. Remember, a strong body fuels a strong mind and a successful life!

Podcast: Dig deeper with Connor's podcast! Hear insightful interviews with industry experts, inspiring success stories, and targeted real estate advice specific to Santa Clarita.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop welcomes Reuben Bailon, an expert in AI training and technology innovation. Together, they explore the rapidly evolving field of AI, touching on topics like large language models, the promise and limits of general artificial intelligence, the integration of AI into industries, and the future of work in a world increasingly shaped by intelligent systems. They also discuss decentralization, the potential for personalized AI tools, and the societal shifts likely to emerge from these transformations. For more insights and to connect with Reuben, check out his LinkedIn. Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:12 Exploring AI Training Methods
00:54 Evaluating AI Intelligence
02:04 The Future of Large Action Models
02:37 AI in Financial Decisions and Crypto
07:03 AI's Role in Eliminating Monotonous Work
09:42 Impact of AI on Bureaucracies and Businesses
16:56 AI in Management and Individual Contribution
23:11 The Future of Work with AI
25:22 Exploring Equity in Startups
26:00 AI's Role in Equity and Investment
28:22 The Future of Data Ownership
29:28 Decentralized Web and Blockchain
34:22 AI's Impact on Industries
41:12 Personal AI and Customization
46:59 Concluding Thoughts on AI and AGI

Key Insights

The Current State of AI Training and Intelligence: Reuben Bailon emphasized that while large language models are a breakthrough in AI technology, they do not represent general artificial intelligence (AGI). AGI will require the convergence of various types of intelligence, such as vision, sensory input, and probabilistic reasoning, which are still under development. Current AI efforts focus more on building domain-specific competencies rather than generalized intelligence.

AI as an Augmentative Tool: The discussion highlighted that AI is primarily being developed to augment human intelligence rather than replace it. 
Whether through improving productivity in monotonous tasks or enabling greater precision in areas like medical imaging, AI's role is to empower individuals and organizations by enhancing existing processes and uncovering new efficiencies.

The Role of Large Action Models: Large action models represent an exciting frontier in AI, moving beyond planning and recommendations to executing tasks autonomously, with human authorization. This capability holds potential to revolutionize industries by handling complex workflows end-to-end, drastically reducing manual intervention.

The Future of Personal AI Assistants: Personal AI tools have the potential to act as highly capable assistants by leveraging vast amounts of contextual and personal data. However, the technology is in its early stages, and significant progress is needed to make these assistants truly seamless and impactful in day-to-day tasks like managing schedules, filling out forms, or making informed recommendations.

Decentralization and Data Ownership: Reuben highlighted the importance of a decentralized web where individuals retain ownership of their data, as opposed to the centralized platforms that dominate today. This shift could empower users, reduce reliance on large tech companies, and unlock new opportunities for personalized and secure interactions online.

Impact on Work and Productivity: AI is set to reshape the workforce by automating repetitive tasks, freeing up time for more creative and fulfilling work. The rise of AI-augmented roles could lead to smaller, more efficient teams in businesses, while creating new opportunities for freelancers and independent contractors to thrive in a liquid labor market.

Challenges and Opportunities in Industry Disruption: Certain industries, like software, which are less regulated, are likely to experience rapid transformation due to AI. However, heavily regulated sectors, such as legal and finance, may take longer to adapt. 
The discussion also touched on how startups and agile companies can pressure larger organizations to adopt AI-driven solutions, ultimately redefining competitive landscapes.
In this episode of The Tech Leader's Playbook, explore the fascinating world of Artificial General Intelligence (AGI) with Timothy Busbice, a visionary in robotics and neuroscience. From emulating biological nervous systems to creating autonomous drones, Timothy shares groundbreaking insights into the future of robotics and the challenges of current AI. Learn how AGI could revolutionize industries, tackle ethical dilemmas, and redefine human-machine collaboration.

Takeaways
General intelligence is crucial for adapting to new environments.
AGI can revolutionize industries by enabling smarter robots.
Biological nervous systems provide a model for developing AGI.
Current AI systems are limited by their reliance on sensory input.
Movement is essential for achieving general intelligence in robots.
Neuroscience plays a vital role in AGI development.
Ethical considerations are important in replicating biological intelligence.
Future AGI systems could transfer knowledge between robots.
Humans' flawed decision-making highlights the need for AGI.
Innovations in AGI could lead to significant advancements in robotics.

Chapters
00:00 Understanding AGI: The Foundation of General Intelligence
02:48 The Role of Biological Nervous Systems in AGI
06:08 Challenges in Current AGI Development
08:58 Applications of AGI in Real-World Scenarios
12:12 Ethical Considerations in AGI Development
15:07 Future Breakthroughs in AGI and Autonomous Systems
18:02 The Importance of Neuroscience in AGI
20:59 The Societal Impact of AGI and Robotics
24:04 The Path Forward: Innovations and Challenges Ahead
26:42 Real-World Use Cases: Firefighting Drones and Beyond
30:15 Autonomous Systems in Space Exploration
33:28 Lessons from Timothy Busbice for Tech Leaders

Timothy Busbice's Social Media Link: https://www.linkedin.com/in/timothybusbice/
Resources and Links: https://www.hireclout.com https://www.podcast.hireclout.com https://www.linkedin.com/in/hirefasthireright
What's up everyone, welcome to our first episode of 2025 – today we have the pleasure of sitting down with Austin Hay, Co-Founder and Co-CEO at Clarify and Martech Teacher at Reforge.

Summary: Something extraordinary is brewing in the world of martech. In the near future, Austin thinks AI agents will turn into an omniscient digital butler, anticipating your needs with uncanny precision while vanishing into the background of your workday. But the real revolution unfolds in the seemingly mundane machinery of marketing operations, where innovative companies are transforming their spaghetti mess of data pipes and platforms into something approaching digital poetry. The fundamental building blocks of our systems aren't disappearing, they're gaining superpowers. Hear it from one of our industry's most thoughtful builders.

About Austin
Austin started his career at Accenture, but he left the Fortune 500 world to join a startup called Branch, where he became the 4th employee.
Austin then created his own boutique mobile growth engineering consultancy, growing the practice to 1.5M with big names like Walmart, Jet, Airbnb, Foursquare and more.
His consulting practice was acqui-hired by mParticle – a leading CDP solution – where he would eventually become VP of Growth.
He later joined Runway as VP of Business Operations.
He also started building The Marketing Technology Academy – an online learning center for martech – which he would eventually sell to Reforge, becoming the instructor for the new Martech course.
He was also Head of Martech at Ramp, a fintech startup.
Last year, Austin strapped on his jetpack and became a product founder at Clarify, taking on Salesforce and HubSpot and building the first flexible, intelligent CRM that people actually enjoy using.

AI Agents and the Hidden Promise of Ambient Computing
Let's face it, manually feeding context to AI feels a bit like teaching a fish to ride a bicycle. 
Current AI systems, brilliant as they may be at crunching numbers and crafting responses, still stumble around our digital workspaces like a tourist without a map. Sure, they can write a decent blog post or solve complex equations, but they're essentially working with one hand tied behind their virtual back.

Now, imagine your AI assistant as more of a digital detective, quietly observing and understanding everything happening on your screen. No more copying and pasting chunks of text or explaining what's in your Notion workspace for the hundredth time. Picture having a conversation with your computer while it maintains an almost supernatural awareness of your digital environment, from those buried Slack threads to that spreadsheet you've been avoiding. Recent demonstrations, like Kieran Flanagan's adventure with Gemini's screen reader, hint at this future, even if current versions move with all the grace of a sleepy sloth.

The real magic kicks in when we start thinking about operating system-level integration. Platform-specific AI agents are like horses wearing blinders; they can only see what's directly in front of them. But desktop applications from companies like OpenAI and Anthropic are pushing toward something far more interesting: AI that can understand your entire digital world, not just a tiny slice of it. It's the difference between having a personal assistant who can only help you in the kitchen versus one who can manage your entire house.

Here's where things get particularly juicy: this isn't some far-off sci-fi fantasy. We're looking at a five-year horizon where the clunky, permission-asking AI of today evolves into something far more sophisticated. 
The transformation won't happen overnight (sorry, instant gratification seekers), but when it does, we're talking about a 10x boost in productivity that makes current productivity hacks look like using a butter knife to cut down a forest.

Key takeaway: While today's AI assistants feel like overeager interns requiring constant supervision, the next five years will usher in truly ambient AI that seamlessly integrates with our operating systems. The future isn't about teaching AI to understand us; it's about AI that already knows what we need, when we need it, across our entire digital landscape.

The Limitations of AI Agent Marketplaces
The AI marketplace concept raises important questions about automation's role in our daily work. While downloading specialized AI agents for every task might sound appealing, reality suggests a different path forward. Current marketplace models mirror the Chrome extension ecosystem, where tools often remain peripheral rather than becoming essential to core workflows.

Austin frames the central debate in venture capital circles clearly: will we depend on AI agents that require explicit commands, or will we embrace ambient intelligence that works proactively in the background? Looking at the CRM space, Austin points out a crucial consideration that many futurists overlook. You can't simply discard two decades of sales methodology and expect professionals to embrace a completely alien interface. Instead, sellers need familiar elements: contacts, companies, opportunities, and tasks, all presented in recognizable formats that align with established workflows.

The intersection of traditional software and AI becomes particularly interesting when Austin discusses CDP platforms. Users expect certain fundamentals, like accessing persona views and tracking customer behavior. The innovation opportunity lies not in replacing these elements but in enhancing them through intelligent automation. 
Austin suggests that the key difference emerges in how these agents operate: will users actively assign tasks, or will agents run continuously in the background, performing expected functions without explicit direction?

While some platforms champion what Austin calls the "jack of all trades" approach with Notion-style customizable workflows, he makes a compelling case for specialized, industry-specific solutions. This focused approach, where AI agents operate autonomously within well-defined parameters, might prove more valuable than a marketplace full of generic tools. Austin emphasizes that the more specialized you are in understanding user needs, the more effective your agentic experience can be, particularly when it runs seamlessly in the background without requiring constant configuration.

The reality likely lies somewhere in between these two extremes. Certain straightforward tasks, like data enrichment for new records or basic categorization, seem well-suited for autonomous AI agents. However, more complex decisions involving customer lifecycle management, timing of promotional offers, or predictive modeling for next-best-action recommendations require deeper integration with historical data and sophisticated propensity models. The key to success may not be choosing between marketplace agents and integrated solutions, but rather understanding which approach best suits specific use cases and organizational needs.

Key takeaway: Success in AI automation won't come from marketplace-driven point solutions but through deeply integrated, industry-specific AI that enhances existing workflows while maintaining familiar interfaces. Austin's insights suggest focusing on building AI that complements rather than replaces established business processes, creating tools that feel natural rather than revolutionary.

The Core Primitives of Martech and the Path to Self-Designing APIs
Ever tried explaining Bitcoin to your grandparents? 
That's roughly how it feels watching companies try to skip straight to AI automation without understanding their data foundations. Austin breaks down the concept of primitives in martech, those fun...
Send Everyday AI and Jordan a text message

Google just dropped its 'Flash Thinking' reasoning model. Is it better than o1? ↳ Why is NVIDIA going small? ↳ And OpenAI announced its o3 model. Why did it skip o2? ↳ ChatGPT's Advanced Voice Mode gets a ton of updates. What do they do? Here's this week's AI News That Matters!

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Google AI Models and Updates
2. ChatGPT Updates
3. ChatGPT Advanced Voice Mode
4. OpenAI's New Reasoning Model
5. Salesforce AI Updates
6. NVIDIA Jetson Orin Nano
7. Meta's Ray-Ban Smart Glasses

Timestamps:
00:00 Daily AI news podcast and newsletter subscription.
05:06 Gemini 2.0 Flash tops in LLM rankings.
07:36 Ray-Ban Meta Glasses: AI video, translation available.
11:17 Salesforce hires humans to sell AI product.
14:26 NVIDIA's Nano Super boosts AI performance, affordability.
18:29 VO 2 excels with 4K-quality physics videos.
23:17 AI models can deceive by faking alignment.
24:50 Anthropic study highlights AI system behavior variability.
29:40 Google previews AI-mode search with chatbot features.
32:38 Big publishers block access; tech must adapt.
37:32 ChatGPT updates improve app integration functionality.
40:40 o3 models enhance AI task adoption.
44:09 Current AI hasn't achieved AGI, needs tool use.
45:52 o3 model may achieve AGI, costly access.
48:17 Share our AI content; support appreciated.

Keywords: Google AI Mode, Gemini AI chatbot, refining searches, ChatGPT updates, OpenAI, AI integration in search engines, Salesforce AgentForce 2.0, Capgemini survey, AI security risks, NVIDIA Jetson Orin Nano, Edge AI, Google VO 2, video generation model, YouTube Shorts, AI alignment faking, Anthropic research, Google's Gemini 2.0 Flash Thinking, multimodal reasoning, Meta's Ray-Ban Smart Glasses, real-time language translation, Shazam integration, OpenAI o3 reasoning model, artificial general intelligence, ARC AGI benchmark, AI capabilities, high costs of AI, Google updates, Meta updates, Salesforce updates, NVIDIA updates.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
The Lads were LIVE for Episode #74 as things dipped, and we asked "Has the market run out of gas?" Well, Taiki still thinks there's plenty of hot air left in Fartcoin, and he breaks down why. We also examine where we are with the AI coin meta, and take some great questions from the audience. In Episode #74 we cover: 00:00 Stream Starts Shortly… 00:32 Show Starts/Market Talk 06:40 Taiki on Fartcoin 14:45 $GOAT, Truth Terminal, and Current AI & Memecoin Meta 22:34 Freysa & AI Agent Wallet Control 26:26 Dead Animal Memes & IP Rights 33:51 Is Easy Season Over? 45:14 Audience Q+A 57:17 Pasta of the Week
Current AI practice is not engineering, even when it aims for practical applications, because it is not based on scientific understanding. Enforcing engineering norms on the field could lead to considerably safer systems. https://betterwithout.ai/AI-as-engineering This episode has a lot of links! Here they are. Michael Nielsen's “The role of ‘explanation' in AI”. https://michaelnotebook.com/ongoing/sporadica.html#role_of_explanation_in_AI Subbarao Kambhampati's “Changing the Nature of AI Research”. https://dl.acm.org/doi/pdf/10.1145/3546954 Chris Olah and his collaborators: “Thread: Circuits”. distill.pub/2020/circuits/ “An Overview of Early Vision in InceptionV1”. distill.pub/2020/circuits/early-vision/ Dai et al., “Knowledge Neurons in Pretrained Transformers”. https://arxiv.org/pdf/2104.08696.pdf Meng et al.: “Locating and Editing Factual Associations in GPT.” rome.baulab.info “Mass-Editing Memory in a Transformer,” https://arxiv.org/pdf/2210.07229.pdf François Chollet on image generators putting the wrong number of legs on horses: twitter.com/fchollet/status/1573879858203340800 Neel Nanda's “Longlist of Theories of Impact for Interpretability”, https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability Zachary C. Lipton's “The Mythos of Model Interpretability”. https://arxiv.org/abs/1606.03490 Meng et al., “Locating and Editing Factual Associations in GPT”. https://arxiv.org/pdf/2202.05262.pdf Belrose et al., “Eliciting Latent Predictions from Transformers with the Tuned Lens”. https://arxiv.org/abs/2303.08112 “Progress measures for grokking via mechanistic interpretability”. https://arxiv.org/abs/2301.05217 Conmy et al., “Towards Automated Circuit Discovery for Mechanistic Interpretability”. 
https://arxiv.org/abs/2304.14997 Elhage et al., “Softmax Linear Units,” transformer-circuits.pub/2022/solu/index.html Filan et al., “Clusterability in Neural Networks,” https://arxiv.org/pdf/2103.03386.pdf Cammarata et al., “Curve circuits,” distill.pub/2020/circuits/curve-circuits/ You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
Dr. Ansari began leading The Permanente Medical Group (TPMG) in 2023 as the first female CEO of the organization. With over 11,000 physicians and 44,000 staff, TPMG provides care to over 5.4 million Kaiser Permanente members. Dr. Ansari is passionate about addressing physician burnout, improving team-based care, and fostering innovation in healthcare delivery. In this episode, you will hear about: What are Kaiser Permanente and The Permanente Medical Group? What is Value-Based Care? Why is the American healthcare system broken? What is the biggest challenge in scaling the Value-Based Care model in the United States, and how can it be tackled? Dr. Ansari's journey in the healthcare industry and her advice for women transitioning into leadership roles. Current AI applications in the Value-Based Care system. The future of Telehealth. Advice for people who are interested in becoming healthcare practitioners.
Join Anthony and Francesca as they explore how technological revolutions unfold through "leaps" and "steps," using historical examples like the World Wide Web and mobile apps to contextualize our current AI moment. Drawing from their extensive tech experience, they debate whether we're still in the initial ChatGPT "leap" of late 2022 or witnessing a series of incremental steps. The discussion covers key developments in AI, from multimodality to personalization, while examining how businesses should position themselves during this transformative period. Through engaging analogies and personal anecdotes from the early days of the internet to today's AI landscape, this episode offers valuable insights for understanding where we are in the AI revolution and what might come next. Perfect for business leaders, developers, and anyone interested in technology's evolutionary patterns. Takeaways: The concept of 'leaps and steps' helps frame AI advancements. ChatGPT marked a significant leap in AI accessibility. Generative AI is still evolving and has not yet reached its full potential. The adoption of AI technologies can be parabolic in nature. Productization is crucial for the widespread use of AI. Historical leaps in technology provide context for current AI developments. AI's integration into daily life is still in progress. Consumer experience will dictate the success of AI products. The future of AI may involve more choices for users. Understanding the timeline of technological leaps is essential for anticipating future developments.
AI will become more integrated into everyday tasks. User experience will drive the adoption of AI tools. Benchmarking will lead to parity among AI models. Payment models for AI will evolve towards enterprise solutions. Personalization will enhance user engagement with AI. Agentic AI will automate complex tasks end-to-end. The leap in AI technology requires businesses to adapt quickly. More applications will emerge to solve complex problems. AI nativity will reduce the need for intermediary steps. The future of AI is about creating delightful user experiences. Chapters: 00:00 Leaps and Steps in AI 24:50 The Evolution of Generative AI 30:26 Productization and Consumer Experience 39:36 The Evolution of Payment Models in AI 45:02 Key Steps in the AI Leap 50:40 Personalization and Agentic AI 56:10 The Need for AI Nativity. Join our community: getcoai.com Follow us on Twitter or watch us on Youtube. Get our newsletter!
You can search for the official account 【璐璐的英文小酒馆】 or add 【luluxjg2】 to ask about courses or join the community, where you can see the transcript and other great content. AI development. Advanced Intelligence: · Reason or use strategies that help to make judgements · Plan, learn, use common sense (some humans can't), communicate in natural language · Combine these to complete a goal. Current AI can't do hands (showing inability to truly reason): · Hands are a small part of the body · Often not in focus in pictures · Covered up or blurry because of motion. AI and Art: · AI-created art (made with Midjourney) was entered into a competition and won. · Books (such as Alice and Sparkle) have used AI artwork (stealing others' artwork to create new artwork). · Sora videos have issues: someone's arm becoming a blanket; a hand randomly appearing on a bed. ChatGPT can be used to do more: · Answer test questions better than an average test taker (depending on the test) · Write computer programs or stories · However, it can hallucinate (give nonsensical answers that sound correct). Welcome back to Geek Time advanced. This is Brad. How are you doing, Lulu? Hi Brad. How are you getting along with your AI boyfriends? Well, I'm moving on to new relationships very quickly and I'm developing multiple storylines. I think that is fun. So I'm assuming we're gonna talk more about AI. Yeah, now one thing that's very interesting: if you pay attention to any pictures or videos created by AI. Mhm. By ChatGPT or any of those. Midjourney. Yeah, you'll see that they can't really do things like hands. Oh. Yes. I've noticed that with Midjourney. That AI art tool just can't draw hands clearly. Why is that? Either it's like six fingers, or some fingers are really long and bendy; they just don't look realistic. This is a little weird. Why, when they can do perfect faces? Like when you look at a face, a face typically has a particular structure and it doesn't really matter which way you turn. Your face is... things are always gonna be in the same spot. But when you look at someone's hand, right? They can put it into a fist. They can extend their fingers. 
Some fingers can be curved, some fingers can be straight. It's all kind of like all these different configurations of a hand can be shown in pictures. And so AI gets confused about what exactly is a hand. And so it has all these different pictures to kind of draw from. So it kind of like combines all of those together and creates this kind of like weird looking thing. Yeah. It's very interesting that you said that, because AI cannot really conceptualize what hands look like in a three dimensional space. Does that mean that although AI can create like really beautiful artwork, it is mostly still copying to some extent, it is not really, truly reasoning?Right, AI and narrow AI doesn't really have a way to contextualize things. It just takes data and rearranges it. It doesn't really know what the data is. It just sees what people create and tries to create something very similar to that based on the constraints given to it. Emm. Does this mean AI the current narrow AI they're just not really that intelligent, despite being called AI, artificial intelligence?
Step into the future as we unpack the revolutionary world of AI agents - the next frontier in how we work, live, and interact with technology. In this eye-opening episode, we explore how 2024 is becoming the defining year for AI agents, moving beyond simple chatbots to orchestrate seamless workflows that could transform everything from your daily schedule to the future of travel. Discover how these digital assistants are already quietly revolutionizing industries, from preserving native languages to reimagining hospitality experiences. Our hosts dive deep into the real-world applications that separate hype from reality, examining how AI agents communicate with each other to create a symphony of automation that works in harmony with human needs. Whether you're a tech enthusiast, industry professional, or simply curious about how AI will shape our future, this episode offers invaluable insights into: the evolution from basic AI to sophisticated agent networks; how personal AI assistants could revolutionize daily life management; the transformation of travel and hospitality through AI innovation; and the delicate balance between automation and human touch. Join us for a fascinating discussion that cuts through the marketing buzz to reveal the true potential - and challenges - of AI agents in shaping our future world. Takeaways: 2024 is anticipated to be the year of AI agents. AI agents integrate various models into cohesive workflows. The distinction between AI agents and traditional chatbots is significant. Real-world applications of AI agents are already in use, often unnoticed. The future of AI agents is expected to be transformative and profound. Understanding the context of AI agents is crucial for their effective use. Marketing language around AI agents can be misleading and confusing. Personal agents could revolutionize daily life management. AI agents can significantly reduce administrative burdens in professional settings. Agent-to-agent communication may redefine how tasks are completed. AI agents could revolutionize how we manage our schedules. Automation requires seamless integration of various tools. Current AI tools need significant improvement for better performance. The future will see a symphony of AI agents working together. Voice interaction will become a common way to engage with AI. Travel experiences will be enhanced through AI-driven solutions. Pricing strategies for AI agents will evolve over time. There is potential for free AI agents supported by data monetization. AI could help preserve native languages through translation. The hospitality industry may see a shift towards AI integration. AI agents are integrating models into workflows. Join our community: getcoai.com Follow us on Twitter or watch us on Youtube. Get our newsletter!
Paging Gwern or anyone else who can shed light on the current state of the AI market—I have several questions. Since the release of ChatGPT, at least 17 companies, according to the LMSYS Chatbot Arena Leaderboard, have developed AI models that outperform it. These companies include Anthropic, NexusFlow, Microsoft, Mistral, Alibaba, Hugging Face, Google, Reka AI, Cohere, Meta, 01 AI, AI21 Labs, Zhipu AI, Nvidia, DeepSeek, and xAI. Since GPT-4's launch, 15 different companies have reportedly created AI models that are smarter than GPT-4. Among them are Reka AI, Meta, AI21 Labs, DeepSeek AI, Anthropic, Alibaba, Zhipu, Google, Cohere, Nvidia, 01 AI, NexusFlow, Mistral, and xAI. Twitter AI (xAI), which seemingly had no prior history of strong AI engineering, with a small team and limited resources, has somehow built the third smartest AI in the world, apparently on par with the very best from OpenAI. The top AI image [...] --- First published: August 28th, 2024 Source: https://www.lesswrong.com/posts/yRjLY3z3GQBJaDuoY/things-that-confuse-me-about-the-current-ai-market --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024), published by Matt MacDermott on September 1, 2024 on The AI Alignment Forum. Yoshua Bengio wrote a blogpost about a new AI safety paper by him, various collaborators, and me. I've pasted the text below, but first here are a few comments from me aimed at an AF/LW audience. The paper is basically maths plus some toy experiments. It assumes access to a Bayesian oracle that can infer a posterior over hypotheses given data, and can also estimate probabilities for some negative outcome ("harm"). It proposes some conservative decision rules one could use to reject actions proposed by an agent, and proves probabilistic bounds on their performance under appropriate assumptions. I expect the median reaction in these parts to be something like: ok, I'm sure there are various conservative decision rules you could apply using a Bayesian oracle, but isn't obtaining a Bayesian oracle the hard part here? Doesn't that involve advances in Bayesian machine learning, and also probably solving ELK to get the harm estimates? My answer to that is: yes, I think so. I think Yoshua does too, and that that's the centre of his research agenda. Probably the main interest of this paper to people here is to provide an update on Yoshua's research plans. In particular it gives some more context on what the "guaranteed safe AI" part of his approach might look like -- design your system to do explicit Bayesian inference, and make an argument that the system is safe based on probabilistic guarantees about the behaviour of a Bayesian inference machine. This is in contrast to more hardcore approaches that want to do formal verification by model-checking. 
You should probably think of the ambition here as more like "a safety case involving proofs" than "a formal proof of safety". Bounding the probability of harm from an AI to create a guardrail. Published 29 August 2024 by yoshuabengio. As we move towards more powerful AI, it becomes urgent to better understand the risks, ideally in a mathematically rigorous and quantifiable way, and use that knowledge to mitigate them. Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees, i.e., would be provably unlikely to take a harmful action? Current AI safety evaluations and benchmarks test the AI for cases where it may behave badly, e.g., by providing answers that could yield dangerous misuse. That is useful and should be legally required with flexible regulation, but is not sufficient. These tests only tell us one side of the story: If they detect bad behavior, a flag is raised and we know that something must be done to mitigate the risks. However, if they do not raise such a red flag, we may still have a dangerous AI in our hands, especially since the testing conditions might be different from the deployment setting, and attackers (or an out-of-control AI) may be creative in ways that the tests did not consider. Most concerningly, AI systems could simply recognize they are being tested and have a temporary incentive to behave appropriately while being tested. Part of the problem is that such tests are spot checks. They are trying to evaluate the risk associated with the AI in general by testing it on special cases. Another option would be to evaluate the risk on a case-by-case basis and reject queries or answers that are considered to potentially violate a given safety specification. 
With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we thus consider in this new paper (see reference and co-authors below) the objective of estimating a context-dependent upper bound on the probability of violating a given safety specification. Such a risk evaluation would need to be performed at ru...
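The conservative decision rule described above can be illustrated with a toy snippet. This is a hedged sketch, not the paper's actual method: the hypothesis names, posterior weights, harm estimates, and thresholds below are all invented for the example, and a real system would need a genuine Bayesian posterior and harm estimator.

```python
# Toy sketch of a conservative guardrail: reject an agent's proposed
# action when a cautious upper bound on P(harm | action) exceeds a
# threshold. All hypotheses, weights, and harm estimates are invented
# placeholders, not values from the paper.

def harm_upper_bound(posterior, harm_prob, action, alpha=0.05):
    """Bound P(harm | action) by the largest harm estimate among
    hypotheses that jointly carry at least 1 - alpha posterior mass."""
    ranked = sorted(posterior.items(), key=lambda kv: -kv[1])
    mass, bound = 0.0, 0.0
    for hypothesis, weight in ranked:
        bound = max(bound, harm_prob(hypothesis, action))
        mass += weight
        if mass >= 1 - alpha:  # enough posterior mass covered; stop
            break
    return bound

def action_allowed(posterior, harm_prob, action, threshold=0.01):
    """Conservative decision rule: allow only if the bound is small."""
    return harm_upper_bound(posterior, harm_prob, action) <= threshold

# Two hypotheses about the world; one says the action is harmful.
posterior = {"benign_world": 0.9, "harmful_world": 0.1}
harm = lambda h, a: 0.8 if h == "harmful_world" else 0.001
print(action_allowed(posterior, harm, "send_email"))  # False: rejected
```

Note how the rule errs on the side of rejection: even a 10% posterior on a harmful hypothesis vetoes the action, which mirrors the post's emphasis on bounding the risk rather than estimating it exactly.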
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: things that confuse me about the current AI market., published by DMMF on August 28, 2024 on LessWrong. Paging Gwern or anyone else who can shed light on the current state of the AI market - I have several questions. Since the release of ChatGPT, at least 17 companies, according to the LMSYS Chatbot Arena Leaderboard, have developed AI models that outperform it. These companies include Anthropic, NexusFlow, Microsoft, Mistral, Alibaba, Hugging Face, Google, Reka AI, Cohere, Meta, 01 AI, AI21 Labs, Zhipu AI, Nvidia, DeepSeek, and xAI. Since GPT-4's launch, 15 different companies have reportedly created AI models that are smarter than GPT-4. Among them are Reka AI, Meta, AI21 Labs, DeepSeek AI, Anthropic, Alibaba, Zhipu, Google, Cohere, Nvidia, 01 AI, NexusFlow, Mistral, and xAI. Twitter AI (xAI), which seemingly had no prior history of strong AI engineering, with a small team and limited resources, has somehow built the third smartest AI in the world, apparently on par with the very best from OpenAI. The top AI image generator, Flux AI, which is considered superior to the offerings from OpenAI and Google, has no Wikipedia page, barely any information available online, and seemingly almost no employees. The next best in class, Midjourney and Stable Diffusion, also operate with surprisingly small teams and limited resources. I have to admit, I find this all quite confusing. I expected companies with significant experience and investment in AI to be miles ahead of the competition. I also assumed that any new competitors would be well-funded and dedicated to catching up with the established leaders. Understanding these dynamics seems important because they influence the merits of things like a potential pause in AI development or the ability of China to outcompete the USA in AI. 
Moreover, as someone with general market interests, the valuations of some of these companies seem potentially quite off. So here are my questions: 1. Are the historically leading AI organizations - OpenAI, Anthropic, and Google - holding back their best models, making it appear as though there's more parity in the market than there actually is? 2. Is this apparent parity due to a mass exodus of employees from OpenAI, Anthropic, and Google to other companies, resulting in the diffusion of "secret sauce" ideas across the industry? 3. Does this parity exist because other companies are simply piggybacking on Meta's open-source AI model, which was made possible by Meta's massive compute resources? Now, by fine-tuning this model, can other companies quickly create models comparable to the best? 4. Is it plausible that once LLMs were validated and the core idea spread, it became surprisingly simple to build, allowing any company to quickly reach the frontier? 5. Are AI image generators just really simple to develop but lack substantial economic reward, leading large companies to invest minimal resources into them? 6. Could it be that legal challenges in building AI are so significant that big companies are hesitant to fully invest, making it appear as if smaller companies are outperforming them? 7. And finally, why is OpenAI so valuable if it's apparently so easy for other companies to build comparable tech? Conversely, why are the no-name companies making leading LLMs not valued higher? Of course, the answer is likely a mix of the factors mentioned above, but it would be very helpful if someone could clearly explain the structures affecting the dynamics highlighted here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Current AI results from experimental variation of mechanisms, unguided by theoretical principles. That has produced systems that can do amazing things. On the other hand, they are extremely error-prone and therefore unsafe. Backpropaganda, a collection of misleading ways of talking about “neural networks,” justifies continuing in this misguided direction. https://betterwithout.ai/backpropaganda You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Demis Hassabis - Google DeepMind: The Podcast, published by Zach Stein-Perlman on August 16, 2024 on LessWrong. The YouTube "chapters" are mixed up, e.g. the question about regulation comes 5 minutes after the regulation chapter ends. Ignore them. Noteworthy parts: 8:40: Near-term AI is hyped too much (think current startups, VCs, exaggerated claims about what AI can do, crazy ideas that aren't ready) but AGI is under-hyped and under-appreciated. 16:45: "Gemini is a project that has only existed for a year . . . our trajectory is very good; when we talk next time we should hopefully be right at the forefront." 17:20-18:50: Current AI doesn't work as a digital assistant. The next era/generation is agents. DeepMind is well-positioned to work on agents: "combining AlphaGo with Gemini." 24:00: Staged deployment is nice: red-teaming then closed beta then public deployment. 28:37 Openness (at Google: e.g. publishing transformers, AlphaCode, AlphaFold) is almost always a universal good. But dual-use technology - including AGI - is an exception. With dual-use technology, you want good scientists to still use the technology and advance as quickly as possible, but also restrict access for bad actors. Openness is fine today but in 2-4 years or when systems are more agentic it'll be dangerous. Maybe labs should only open-source models that are lagging a year behind the frontier (and DeepMind will probably take this approach, and indeed is currently doing ~this by releasing Gemma weights). 31:20 "The problem with open source is if something goes wrong you can't recall it. With a proprietary model if your bad actor starts using it in a bad way you can close the tap off . . . but once you open-source something there's no pulling it back. It's a one-way door, so you should be very sure when you do that." 
31:42: Can an AGI be contained? We don't know how to do that [this suggests a misalignment/escape threat model but it's not explicit]. Sandboxing and normal security is good for intermediate systems but won't be good enough to contain an AGI smarter than us. We'll have to design protocols for AGI in the future: "when that time comes we'll have better ideas for how to contain that, potentially also using AI systems and tools to monitor the next versions of the AI system." 33:00: Regulation? It's good that people in government are starting to understand AI and AISIs are being set up before the stakes get really high. International cooperation on safety and deployment norms will be needed since AI is digital and if e.g. China deploys an AI it won't be contained to China. Also: Because the technology is changing so fast, we've got to be very nimble and light-footed with regulation so that it's easy to adapt it to where the latest technology's going. If you'd regulated AI five years ago, you'd have regulated something completely different to what we see today, which is generative AI. And it might be different again in five years; it might be these agent-based systems that [] carry the highest risks. So right now I would [] beef up existing regulations in domains that already have them - health, transport, and so on - I think you can update them for AI just like they were updated for mobile and internet. That's probably the first thing I'd do, while . . . making sure you understand and test the frontier systems. And then as things become [clearer] start regulating around that, maybe in a couple years time would make sense. One of the things we're missing is [benchmarks and tests for dangerous capabilities]. My #1 emerging dangerous capability to test for is deception because if the AI can be deceptive then you can't trust other tests [deceptive alignment threat model but not explicit]. Also agency and self-replication. 
37:10: We don't know how to design a system that could come up with th...
Smart Social Podcast: Learn how to shine online with Josh Ochs
To become a guest on the SmartSocial.com Podcast: https://smartsocial.com/contact
To learn more about the SmartSocial.com Teen Life Coach program, visit our website and book a consultation: https://smartsocial.com/coaching#register
Join our next live event: https://smartsocial.com/#live-events
Join our free newsletter for parents and educators: https://smartsocial.com/newsletter/
Register for a free online Parent Night to learn the hidden safety features on popular apps: https://smartsocial.com/social-media-webinar/
Become a Smart Social VIP (Very Informed Parents) Member and unlock 30+ workshops (learn online safety and how to Shine Online™): https://learn.smartsocial.com/
Download the free Smart Social app: https://smartsocial.com/app
Learn the top 150 popular teen apps: https://smartsocial.com/app-guide-parents-teachers/
View the top parental control software: https://smartsocial.com/parental-control-software/
Learn the latest Teen Slang, Emojis & Hashtags: https://smartsocial.com/teen-slang-emojis-hashtags-list/
Get ideas for offline activities for your students: https://smartsocial.com/offline-activities-reduce-screentime/
Get Educational Online Activity ideas for your students: https://smartsocial.com/online-activities
Ultimate Guide To Child Sex Trafficking
Check out Omneky
Hikari on LinkedIn
Show Highlights:
2:09 - Current AI capabilities in marketing
4:40 - Building effective brand assets for AI
6:53 - Analyzing marketing metrics with AI
9:06 - The future of personalization in advertising
10:55 - Limitations and potential of AI in marketing
14:27 - Integrating AI with existing marketing automation
17:54 - Advice for marketers to stay ahead in AI
21:51 - The enduring importance of human elements in marketing
We're joined by Toby Coppel, Founder and General Partner at Mosaic Ventures, who has been investing in the last multiple waves of AI. Toby's extensive career spans iconic firms like Netscape, AOL, & Yahoo, leading up to his current role at Mosaic Ventures, a prominent venture capital firm. He has been instrumental in significant investments, including a billion-dollar investment in Alibaba that yielded an $80 billion return for Yahoo shareholders. Toby provides an insider's perspective on the tech landscape, sharing invaluable lessons from his experiences with industry giants like Jerry Yang, Jack Ma, and Richard Branson. Toby elaborates on Mosaic's investment philosophy, highlighting their focus on Series A and late seed-stage investments. He explains how Mosaic supports startups post-investment, emphasizing their hands-on approach and commitment to founder success. Toby also discusses Mosaic's thematic investment strategy, particularly in the realm of machine learning applications, where they have been active since 2016. The conversation navigates through various waves of AI innovation, from early computer vision applications to the current era of generative AI and large language models. Toby shares his views on the evolving AI landscape, the challenges of training models, and the competitive dynamics of the AI industry. We also explore the potential impact of AI on the job market, the ethical considerations surrounding AI use, and the importance of regulation in fostering innovation while ensuring safety. Toby offers his insights on the future of AI applications, emphasizing the transformative potential of AI in consumer and enterprise contexts. Below is a quick synopsis of all the topics we cover: Tech Industry Evolution: Insights from Toby's career in tech, including significant investments and strategic decisions. Venture Capital Dynamics: Mosaic's unique approach to supporting startups and their focus on Series A investments. 
AI Innovations: A comprehensive look at the waves of AI advancements and the current state of AI technology. Market Conditions and Job Impact: Analysis of AI's impact on the job market and the ethical considerations involved. Future of AI Applications: Exploration of the potential for AI in consumer and enterprise applications. Regulatory Environment: The role of regulation in the development and deployment of AI technologies. Join us for an enlightening discussion as we explore the intersections of technology, venture capital, and AI with one of the industry's most experienced investors. Follow me (@iwaheedo) for more updates on tech, civilizational growth, progress studies, and emerging markets. Here are the timestamps for the episode. On some podcast players, you should be able to click the timestamp for the episode. (00:00) - Intro (02:50) - Toby's background and career journey (06:19) - Mosaic's investment philosophy and framework (08:17) - Investment stages and focus areas (08:53) - Supporting startups post-investment (09:45) - AI innovation waves and market dynamics (13:00) - Current AI landscape and competitive dynamics (17:15) - AI Investment Landscape and Mosaic's positions (23:30) - Large Language Model (LLM)-based Applications (36:23) - Future potential of AI applications (42:11) - Future direction of copyright protections for training data (45:46) - How might Generative AI reduce the cost of building applications? (49:31) - Impact of Generative AI on traditional job roles (52:48) - Europe's current position on AI (56:26) - Regulation in AI and its implications (61:43) - Outro
Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - MasterClass: https://masterclass.com/lexpod to get 15% off - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off EPISODE LINKS: Roman's X: https://twitter.com/romanyam Roman's Website: http://cecs.louisville.edu/ry Roman's AI book: https://amzn.to/4aFZuPb PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:12) - Existential risk of AGI (15:25) - Ikigai risk (23:37) - Suffering risk (27:12) - Timeline to AGI (31:44) - AGI turing test (37:06) - Yann LeCun and open source AI (49:58) - AI control (52:26) - Social engineering (54:59) - Fearmongering (1:04:49) - AI deception (1:11:23) - Verification (1:18:22) - Self-improving AI (1:30:34) - Pausing AI development (1:36:51) - AI Safety (1:46:35) - Current AI (1:51:58) - Simulation (1:59:16) - Aliens (2:00:50) - Human mind (2:07:10) - Neuralink (2:16:15) - Hope for the future (2:20:11) - Meaning of life
AI (Artificial Intelligence) is evolving faster than we can imagine, and many industries have used some version of it to make the jobs and lives of millions of people easier. There's an AI for almost every task, so getting things done as quickly and efficiently as possible is now a reality. All you need is an idea and access to the right tool to apply AI to the tasks you do every day. In this episode, Sharran shares six ways he and his team actively use AI. Upgrade your AI toolkit and learn how to use AI for the tasks you care about today.
In This Episode:
- How to achieve maximum momentum
- Create a slide presentation instantly
- Summarize your emails in a flash
- Tools to create tailored images quickly
- Research appropriate stories with this tool
- Imagine what it's like to have a second brain
- Recap of the six ways to use AI
Resources:
- Top Agent Power Pack - https://topagentpowerpack.com
- Visualize Your Ideas Instantly - https://www.beautiful.ai/
- AI-Powered Email - https://superhuman.com/
- Create Tailored Images - https://elements.envato.com/
- Create High-Quality Images - https://midjourney.com/
- Create Amazing Graphic Designs - https://www.canva.com/
- Research Appropriate Stories - https://chat.openai.com/
- Turn Ideas into Action - https://www.notion.so/
Connect with Pritesh Damani:
- Instagram: https://www.instagram.com/forgetpritesh/
- LinkedIn: https://www.linkedin.com/in/priteshdamani/
Connect with Sharran Srivatsaa:
- Instagram: https://www.instagram.com/sharransrivatsaa/
- LinkedIn: https://www.linkedin.com/in/sharran/
Connect with Real Brokerage:
- YouTube: https://www.youtube.com/@TheRealBrokerage
- Website: https://www.onereal.com/pages/join-real
Click Here to Get All Podcast Show Notes!
In a world where artificial intelligence (AI) is rapidly evolving, industries are leveraging its power to streamline processes and enhance productivity. From simplifying daily tasks to revolutionizing entire departments, AI has become an indispensable tool in our modern lives. Tune in to this episode as Sharran delves into six dynamic ways his team leverages AI to drive efficiency and innovation. Discover how to wield AI effectively, empowering yourself to transform your workflow and achieve your goals with unprecedented speed and precision. Whether you're a seasoned professional or just starting out, exploring AI's potential can propel you toward success in ways you never thought possible.
“Imagine someone could just go into your head, go through all the learnings you've had, and then piece together a great thing and pull it out, synthesize it, sequence it, and give it to you–that's insane!” - Sharran Srivatsaa
Timestamps:
02:10 How to achieve maximum momentum
04:50 Create a presentation instantly with a slide generator
07:06 Summarize your emails in a flash
09:15 Create the perfect images and bring your ideas to life
11:55 Generate context-appropriate stories using this tool
13:03 Imagine what it's like to have a digital brain
14:32 Recap of the six ways to use AI
Resources:
- Visualize Your Ideas Instantly - https://www.beautiful.ai/
- AI-Powered Email - https://superhuman.com/
- Create Tailored Images - https://elements.envato.com/
- Create High-Quality Images - https://midjourney.com/
- Create Amazing Graphic Designs - https://www.canva.com/
- Research Appropriate Stories - https://chat.openai.com/
- Turn Ideas into Action - https://www.notion.so/
- The 5am Club - https://sharran.com/5amclub/
- Join the 10K Wisdom Private Partner Podcast, now available to you for free - https://www.highlandprime.com/optin-10k-wisdom
- Join Sharran's VIP Community -
Sun, 05 May 2024 21:00:00 GMT http://relay.fm/mpu/743 The Current AI Moment 743 David Sparks and Stephen Hackett AI is suddenly everywhere, from online services and software to bespoke hardware products. This week, David takes Stephen on a journey to talk about the tools he has found useful, as well as what's not ready for prime time yet. This episode of Mac Power Users is sponsored by: 1Password: Never forget a password again. Squarespace: Save 10% off your first purchase of a website or domain using code MPU. Tailscale: Secure remote access to shared resources. Sign up today. Links and Show Notes: Sign up for the MPU email newsletter and join the MPU forums. 
More Power Users: Ad-free episodes with regular bonus segments Submit Feedback Hallucination (artificial intelligence) - Wikipedia Large language model - Wikipedia Google cut a deal with Reddit for AI training data - The Verge OpenAI Google Gemini Microsoft Copilot Microsoft invests $1 billion in OpenAI to pursue holy grail of artificial intelligence - The Verge Perplexity Meta AI Adobe Firefly Generative AI Fill in Photoshop Feels Like Magic – 512 Pixels ChatGPT - Universal Primer ChatGPT - Diagrams ChatGPT - Planty Building Virtual Seneca (using ChatGPT) - MacSparky on YouTube Readwise Spark +AI: your personal email assistant MacWhisper Grammarly Bringing AI to Alfred with Chatfred - Alfred Blog AI - Raycast Logitech's Mouse Software Now Includes ChatGPT Support, Adds Janky 'ai_overlay_tmp' Directory to Users' Home Folders – 512 Pixels Apple unveils the new 13- and 15-inch MacBook Air with the powerful M3 chip - Apple: "With the transition to Apple silicon, every Mac is a great platform for AI. M3 includes a faster and more efficient 16-core Neural Engine, along with accelerators in the CPU and GPU to boost on-device machine learning, making MacBook Air the world's best consumer laptop for AI. Leveraging this incredible AI performance, macOS delivers intelligent features that enhance productivity and creativity, so users can enable powerful camera features, real-time speech-to-text, translation, text predictions, visual understanding, accessibility features, and much more." Apple Releases Open Source AI Models That Run On-Device - MacRumors Connected #490
Join us as we explore the future of work in the age of artificial intelligence. Nathan shares essential strategies for college students, especially in computer science, to succeed in a landscape dominated by AI tools. Discover the most important skills to develop, how to use AI for efficiency, and insights into AI-driven task automation. Get valuable advice on incorporating AI into your work and resources for beginning your journey into AI professionalism. SPONSORS: The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention "Turpentine" to skip the waitlist. 
CHAPTERS: (00:00:00) Introduction (00:05:18) AI Closing in on Human Experts (00:08:48) Limitations of Current AI (00:13:16) AI's Speed and Availability Advantages (00:16:26) AI's Evolutionary Advantages (00:19:01) Viewing AI as Entry-Level Employees (00:20:38) Sponsors: Brave | Omneky (00:22:05) Modes of AI Production (00:25:09) Agent Mode: The Best of Both Worlds (00:30:08) Prompting Techniques (00:32:17) AI Coding Assistants (00:37:01) Automating Processes with AI (00:37:55) Sponsors: Squad (00:39:23) University Curriculum and AI Tools (00:44:50) AI-Generated Music and Video Editing (00:46:04) Introduction to AI Tools (00:47:11) Devon AI Agent (00:49:26) Waymark AI Evaluation Example (00:52:51) Setting the Context (00:54:30) Mantras for AI Adoption (00:56:48) Recommended AI Thought Leaders (00:59:53) Setting up AI Training (01:04:25) Avoiding Over-engineering Solutions
Ryan interviews Ryan Collier, host of The Chat GPT Report podcast. They dive into the latest AI trends, discussing the potential of MidJourney and Twitter's partnership, the pros and cons of Microsoft Copilot and Google's AI ecosystem, and the future of AI hardware like Brilliant Labs' glasses! Join 2,500+ readers getting weekly practical guidance to scale themselves and their companies using Artificial Intelligence and Revenue Cheat Codes. Explore becoming Superhuman here: https://superhumanrevenue.beehiiv.com/ KEY TAKEAWAYS MidJourney and Twitter's partnership could revolutionize user experience by integrating powerful image generation capabilities directly into the platform. Microsoft Copilot and Google's AI ecosystem are neck-and-neck in the race to dominate the office productivity space with native AI integration. Brilliant Labs' AI glasses prioritize sleek design and user experience, potentially outshining competitors like Apple's Vision Pro. The future of AI hardware lies in finding the perfect balance between functionality, comfort, and privacy concerns. Successful AI products will focus on seamless integration into users' daily lives and solving real-world problems. The AI industry is rapidly evolving, with smaller companies challenging tech giants in the race to develop game-changing products. Podcasting in the AI space offers a wealth of diverse perspectives and insights from experts across various industries. The construction industry has significant potential for AI integration, but adoption may be slower compared to other sectors. BEST MOMENTS "I'm very excited, Google Glass 10 years ago tried it, but they did not have the backing of large language models to combine into it because back then it was basically was doing Google searches for you with the glasses right now, there's much more information that can go into these things and it's, it's exciting" "I work in construction. That's, that's my day job. 
I'm a sales rep for a construction company and being in the AI space where, with the podcast, we have to kind of keep up with stuff where things are happening every week. And then you get back to the construction side and you're like, we are kind of far behind, you know?" "There's going to be multi-billion dollar industries to get you away from your phone, and there's multi-billion dollar industries that are trying to get you to stay on your phone, and they will not be the same company." "I think what I was most impressed with and what I appreciated was more, he kind of gave his team a lot of credit, I was more impressed with that"
Ryan Staley
Founder and CEO
Whale Boss
ryan@whalesellingsystem.com
www.ryanstaley.io
Saas, Saas growth, Scale, Business Growth, B2b Saas, Saas Sales, Enterprise Saas, Business growth strategy, founder, ceo: https://www.whalesellingsystem.com/closingsecrets
Kurt Vonnegut's "Player Piano" delivers striking parallels between its dystopian vision and today's AI challenges. This week, Jon Krohn explores the novel's depiction of a world where humans are marginalized by machines, reflecting on the impact of automation on society and the ethical considerations it raises. Tune in as we unpack the timeless relevance of Vonnegut's work to the AI era. Additional materials: www.superdatascience.com/766 Interested in sponsoring a SuperDataScience Podcast episode? Visit passionfroot.me/superdatascience for sponsorship information.
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
Host-only episode discussing NVIDIA, Meta and Google earnings, Gemini and Mistral model launches, the open-vs-closed source debate, domain specific foundation models, if we'll see real competition in chips, and the state of AI ROI and adoption. Don't miss our episodes with: Mistral NVIDIA AMD Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil Show Notes: (0:00) Introduction (0:27) Model news and product launches (5:01) Google enters the competitive space with Gemini 1.5 (8:23) Biology and robotics using LLMs (10:22) Agent-centric companies (14:22) NVIDIA earnings (17:29) ROI in AI (20:43) Impact from AI (25:45) Building effective AI tools in house (29:09) What would it take to compete with NVIDIA (33:23) The architectural approach to compute (35:42) the roadblocks to chip production in the US (38:30) The virtuous tech cycles in AI
This Day in Legal History: The Raven is Published
On January 29, 1845, a significant event in literary history indirectly influenced the development of copyright law. Edgar Allan Poe's iconic poem, "The Raven," was published in the New York Evening Mirror. This moment, while primarily literary, holds substantial relevance in the context of copyright law and the protection of creative works.
"The Raven" quickly became a sensational hit, illustrating the immense commercial potential of literary works. However, Poe's financial gains from the poem were minimal, highlighting the inadequacies in the copyright system of the time. Poe's struggles with securing fair compensation for his works, including "The Raven," underscored the need for stronger legal protections for authors.
During Poe's era, copyright laws were still in their infancy, particularly in the United States. The lack of robust international copyright agreements meant that Poe's works were often republished abroad without his consent and without any royalties paid to him. This was a common plight for many authors of the time, leading to widespread calls for reform.
"The Raven's" popularity, coupled with Poe's public struggles for rightful earnings, played a role in stirring public and legislative awareness about the rights of authors. It highlighted the importance of legal frameworks that balance the interests of creators, publishers, and the public.
The plight of authors like Poe eventually contributed to the strengthening of copyright laws, both domestically and internationally. In the following decades, laws evolved to offer more comprehensive protection for intellectual property, ensuring that creators could reap the benefits of their work.
Significantly, Poe's experience foreshadowed the complexities of copyright in the age of mass reproduction and distribution. 
His challenges anticipated the modern dilemmas faced in a digital era where replication and dissemination of works are effortless and widespread.
In sum, while "The Raven" is primarily remembered as a masterpiece of poetry, its impact extends into the realm of legal history. Edgar Allan Poe's experiences with this poem contributed to the discourse on copyright law and the need for effective legal protections for intellectual property. His legacy, therefore, is not just literary, but also legal, underscoring the crucial relationship between creative works and the laws that safeguard them.
The U.S. Department of Justice (DOJ) is investigating the healthcare industry's use of AI in patient records, which influences doctors' treatment recommendations. This scrutiny arises from concerns about AI's role in potential anti-kickback and false claims violations. Investigators are probing pharmaceutical and digital health companies, examining the impact of AI on healthcare decisions. These investigations are still in early stages, with DOJ attorneys seeking information about AI algorithms and their effects on medical care.
The background of these probes can be traced to the 2020 criminal settlements with Purdue Pharma and Practice Fusion. They were penalized for creating AI-based alerts in electronic medical records (EMRs) that encouraged prescriptions of addictive painkillers. This case highlighted the risks of AI in healthcare, particularly when used unethically.
Current AI tools in healthcare can both improve diagnoses and be exploited for profit. DOJ's challenge lies in applying laws like the Anti-Kickback Statute and False Claims Act to AI's complex, automated nature. Prosecutors are exploring civil and criminal avenues, considering evidence like internal communications and the AI's impact on prescriptions. 
These investigations signal a growing focus on the legal responsibilities of companies using AI in healthcare.
DOJ's Healthcare Probes of AI Tools Rooted in Purdue Pharma Case
Elite UK law firms, known as the Magic Circle, are losing the salary battle to US firms in London. These UK firms pay junior lawyers 35-40% less than the Cravath scale used by top US firms. UK firms' lower profits, due to increased competition and post-Brexit currency changes, hinder their ability to match US salaries. Since 2015, US firms have aggressively entered the UK market, especially in private equity work, often outbidding UK firms for talent. Kirkland & Ellis and Paul, Weiss are notable for hiring several partners in London.
This competition has led to about 370 associates leaving Magic Circle firms in the last year, with US firms benefiting. UK firms have responded by increasing associate salaries and introducing bonus schemes. However, they still struggle to close the pay gap with US firms. The Cravath scale, adopted by US firms in London, offers significantly higher salaries, with first-year associates earning around $225,000. The salary disparity and market conditions raise concerns about the sustainability of high associate salaries.
Elite UK Firms Are Losing London Salary Battle to US Invaders
Investors are concerned about Exxon Mobil's lawsuit against two shareholders for proposing a resolution on greenhouse gas emissions reduction, bypassing the U.S. Securities and Exchange Commission (SEC). Under Biden's SEC, it's become harder for companies to block shareholder resolutions, leading to a rise in such proposals. Exxon's lawsuit, which claims the resolution aims to diminish its fossil fuels business rather than increase shareholder value, is seen as potentially chilling for small investors. 
This action deviates from the norm where companies historically viewed the SEC as a fair arbiter for shareholder proposals.
The Exxon case is significant as it challenges the recent SEC policy change that makes it difficult for companies to argue against resolutions on the grounds of micromanagement. The number of shareholder resolutions has increased, with 889 proposals filed during the 2023 proxy season. If Exxon succeeds in court, it might encourage other companies to follow suit, bypassing the SEC.
This lawsuit comes amidst broader debates over the SEC's authority in enforcing shareholder proposals, with conservative courts possibly playing a role in the outcome. Exxon's approach is a departure from the standard practice of engaging with the SEC on such matters, and its success could reshape the landscape of shareholder activism, particularly in environmental and social governance issues.
Activist investors fret over Exxon Mobil's lawsuit bypassing US regulator | Reuters
Reddit Inc. is considering a valuation of at least $5 billion for its initial public offering (IPO), based on early feedback from potential investors. However, this figure is still tentative and dependent on the recovering IPO market. The social media company, known for its significant role in the meme-stock era, is potentially aiming for a listing as early as March. Despite these ambitions, private trades of Reddit's unlisted shares currently suggest a lower valuation, between $4.5 billion and $4.8 billion. These private share valuations, which are often lower due to their illiquidity, contrast with Reddit's peak valuation of $10 billion in 2021.
The fluctuating valuation reflects the broader downturn in the tech sector, where companies are valued lower in public markets compared to their peak private funding valuations. For instance, Instacart, once valued at $39 billion, debuted with a valuation of $9.9 billion and has since dropped to around $7.1 billion. 
Reddit's situation and potential IPO come at a time when tech IPOs are less lucrative than during the private funding boom of 2021. The company's decision and valuation are subject to ongoing deliberations and market conditions.
Reddit Advised to Target at Least $5 Billion Value in IPO (1)
Get full access to Minimum Competence - Daily Legal News Podcast at www.minimumcomp.com/subscribe
Episode 1925: In this special DLD edition of KEEN ON, Andrew talks to John Thornhill, innovation editor at the Financial Times, who distinguishes the truth from the fiction of today's AI hysteria.
John Thornhill is the Innovation Editor at the Financial Times writing a weekly column on the impact of technology. He is also the founder and editorial director of Sifted, the FT-backed site for European startups, and founder of FT Forums, which hosts monthly meetings for senior executives. John was previously deputy editor and news editor of the FT in London. He has also been Europe editor, Paris bureau chief, Asia editor, Moscow correspondent and Lex columnist.
Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.
Current AI practices produce technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of specific risks of current AI technology, and can lead to safer technologies. https://betterwithout.ai/fight-unsafe-AI You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Music is by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
Dive into the transformative power of AI with Jason Feifer, Entrepreneur Magazine's Editor, and 'Build for Tomorrow' author. Unravel the future of business and formulate pioneering adaptability tactics. In this spirited debate, Ben and Jason share an ounce of history and a pound of preparation. 0:01:01 | Unveiling the Experimental Nature of Current AI 0:04:29 | How AI Is Impacting the Publishing Industry 0:08:19 | Exploring Potential Shifts in Job Landscape due to AI 0:09:03 | A Historical Lens: How Job Disruption Leads to Innovation 0:15:39 | Analyzing the Balance: New Jobs vs. Loss of Existing Ones 0:18:38 | Reflecting on Pandemic: A Test of Human Adaptability and Resilience 0:24:26 | Breaking Free from Binary Thinking in Understanding AI's Impact 0:27:18 | Engaging with AI and Running Real-time Experiments 0:35:46 | The Power of Upskilling and Harnessing Irreplaceable Human Abilities 0:41:47 | Revealing AI's True Gift: Building From What Matters Today
In this episode, we delve into the latest AI headlines, from Baidu's advancements to IBM's collaboration with Salesforce, along with a significant $30M funding round for .ai domains and emerging policies shaping AI's landscape. Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community
Ryan is joined by Bob van Luijt in this episode to discuss his open-source AI company, Weaviate. Listen in as Ryan and Bob break down Weaviate's unique vector database technology powering the next wave of AI applications. Bob talks about the business model behind many open-source infrastructure companies, how Weaviate helps developers easily build and scale AI solutions, and where Bob sees AI transforming businesses over the next 12 months. Join 2,500+ readers getting weekly practical guidance to scale themselves and their companies using Artificial Intelligence and Revenue Cheat Codes. Explore becoming Superhuman here: https://superhumanrevenue.beehiiv.com/ KEY TAKEAWAYS Weaviate provides the core infrastructure for storing and indexing vector embeddings from AI models to enable faster search and retrieval. Open-source companies like Weaviate make money by offering additional services like support, training, and managed services around their free technology. Weaviate targets developers through a "bottom-up" product-led growth strategy focused on helping them succeed with AI applications. Current AI systems are limited by their binary nature, but new probabilistic AI promises opportunities to build more nuanced applications. Combining generative AI models with vector databases enables more complex AI agents for legal services, cybersecurity, and other business use cases. BEST MOMENTS "I was a part of a community called the Google Developer Expert community. And I was invited in 2016 to Google I/O. And during the keynote, Sundar Pichai, the CEO of Google, went on stage and he said, we got to move from mobile first to AI first." "How do we make sure that, that we really can go to production and that we can bring it to their customers as well? Because AI, and I don't mean this, I really mean this, it's like a seismic shift in how we build technology" "If you store it and you can't get it out anymore, it's useless. 
The problem we had was that most search was always keyword-based." "We passed that point of good enough. So that opened the eyes of a lot of people. It's like, Hey, actually we can, we can build stuff with this." Ryan Staley, Founder and CEO, Whale Boss ryan@whalesellingsystem.com www.ryanstaley.io SaaS, SaaS growth, Scale, Business Growth, B2B SaaS, SaaS Sales, Enterprise SaaS, Business growth strategy, founder, CEO: https://www.whalesellingsystem.com/closingsecrets This show was brought to you by Progressive Media
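The core idea from the Weaviate discussion (storing embeddings so that semantically similar items can be retrieved, rather than relying on keyword matching) can be illustrated with a brute-force sketch. This is not Weaviate's API or its approximate-nearest-neighbor engine; it is a minimal, hypothetical Python example of retrieval by cosine similarity, with made-up document names and toy vectors:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, index, k=2):
    """Return the k stored document IDs most similar to the query vector."""
    scored = sorted(index.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy "index": document IDs mapped to (hypothetical) embedding vectors.
index = {
    "doc_cats":  [0.9, 0.1, 0.0],
    "doc_dogs":  [0.8, 0.2, 0.1],
    "doc_stock": [0.0, 0.1, 0.9],
}

print(nearest([1.0, 0.0, 0.0], index, k=2))
```

Production vector databases replace this linear scan with approximate-nearest-neighbor indexes (such as HNSW graphs) so search stays fast across millions of embeddings; that indexing layer is the infrastructure the episode describes.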
Shopping for a new wardrobe is an experience that should be enjoyable and stress-free. But with the range of choices available thanks to modern fashion, it can easily turn into a tedious chore of navigating the maze of sizing discrepancies among different stores and brands. Imagine if you could just step into the fitting room and have a selection of perfectly fitting garments already waiting for you. Thanks to the advent of AI and 3D technology, this vision is becoming a reality. The future? A personalized AI shopping experience. This podcast explores innovative ways that AI transforms the fitting room, both online and in-store, by providing shoppers with personalized recommendations tailored to their unique body measurements and preferences. This custom approach not only enhances customer satisfaction but also drives sales and conversion rates. Join us as we delve into the future of retail fitting rooms, where AI serves as the catalyst for an immersive, personalized, and seamless shopping experience. Join us as we explore these ideas with: Hillary Littleton, Head of Marketing, FIT:MATCH Christina Cardoza, Editorial Director, insight.tech Hillary answers our questions about Current AI trends in the retail space (2:47) Retail challenges for providing quality shopping experiences (4:31) AI helping advance retail technology (6:31) Benefits of improving the fitting-room experience (8:48) Implementing new retail technology online and in-store (11:13) Value of partnerships when creating innovative retail solutions (14:45) Future of retail technology (15:31) How AI can optimize the shopping experience (17:06) Related Content To learn more about personalized AI shopping experiences, read Phygital Experiences Help Fashion Retail Shine. For the latest innovations from FIT:MATCH, follow them on Twitter at FIT:MATCH.ai and LinkedIn at FIT:MATCH.ai.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Palisade is hiring Research Engineers, published by Charlie Rogers-Smith on November 11, 2023 on LessWrong. Palisade is looking to hire Research Engineers. We are a small team consisting of Jeffrey Ladish (Executive Director), Charlie Rogers-Smith (Chief of Staff), and Kyle Scott (part-time Treasurer & Operations). In joining Palisade, you would be a founding member of the team, and would have substantial influence over our strategic direction. Applications are rolling, and you can fill out our short (~10-20 minutes) application form here. Palisade's mission We research dangerous AI capabilities to better understand misuse risks from current systems, and how advances in hacking, deception, and persuasion will affect the risk of catastrophic AI outcomes. We create concrete demonstrations of dangerous capabilities to advise policy makers and the public on AI risks. We are working closely with government agencies, policy think tanks, and media organizations to inform relevant decision makers. For example, our work demonstrating that it is possible to effectively undo Llama 2-Chat 70B's safety fine-tuning for less than $200 has been used to confront Mark Zuckerberg in the first of Chuck Schumer's Insight Forums, cited by Senator Hassan in a Senate hearing on threats to national security, and used to advise the UK AI Safety Institute. We plan to study dangerous capabilities in both open source and API-gated models in the following areas: Automated hacking. Current AI systems can already automate parts of the cyber kill chain. We've demonstrated that GPT-4 can leverage known vulnerabilities to achieve remote code execution on unpatched Windows 7 machines. 
We plan to explore how AI systems could conduct reconnaissance, compromise target systems, and use information from compromised systems to pivot laterally through corporate networks or carry out social engineering attacks. Spear phishing and deception. Preliminary research suggests that LLMs can be effectively used to phish targets. We're currently exploring how well AI systems can scrape personal information and leverage it to craft scalable spear-phishing campaigns. We also plan to study how well conversational AI systems could build rapport with targets to convince them to reveal information or take actions contrary to their interests. Scalable disinformation. Researchers have begun to explore how LLMs can be used to create targeted disinformation campaigns at scale. We've demonstrated to policymakers how a combination of text, voice, and image generation models can be used to create a fake reputation-smearing campaign against a target journalist. We plan to study the cost, scalability, and effectiveness of AI-disinformation systems. We are looking for People who excel at: Working with language models. We're looking for somebody who is or could quickly become very skilled at working with frontier language models. This includes supervised fine-tuning, using reward models/functions (RLHF/RLAIF), building scaffolding (e.g. in the style of AutoGPT), and prompt engineering / jailbreaking. Software engineering. Alongside working with LMs, much of the work you do will benefit from a strong foundation in software engineering - such as when designing APIs, working with training data, or doing front-end development. Moreover, strong SWE experience will help in getting up to speed with working with LMs, hacking, or new areas we want to pivot to. Technical communication. By writing papers, blog posts, and internal documents; and by speaking with the team and external collaborators about your research. 
While it's advantageous to excel at all three of these skills, we will strongly consider people who are either great at working with language models or at software engineering, while being able to communicate their work well. Competenci...
"In this audiobook... A LARGE BOLD FONT IN ALL CAPITAL LETTERS SOUNDS LIKE THIS." Apocalypse now - Current AI systems are already harmful. They pose apocalyptic risks even without further technology development. This chapter explains why; explores a possible path for near-term human extinction via AI; and sketches several disaster scenarios. https://betterwithout.ai/apocalypse-now At war with the machines - The AI apocalypse is now. https://betterwithout.ai/AI-already-at-war This interview with Stuart Russell is a good starting point for the a literature on recommender alignment, analogous to AI alignment: https://www.youtube.com/watch?v=vzDm9IMyTp8 You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
Timestamps: 1:52 - When should Swiss startups go to the US? 9:05 - Meeting the Google founders 12:47 - How to scale culture 23:27 - Current AI developments 28:04 - Leaving Silicon Valley About Urs Hölzle: Urs Hölzle is SVP of Engineering at Google, having started there in 1999 as their first Search Engine Mechanic and altogether the 8th employee. Prior to Google, he started his own company, Animorphic LLC, in 1994 following the completion of his PhD in Computer Science at Stanford, and later worked as an Associate Professor at UC Santa Barbara and a consultant at Sun Microsystems. Having watched Google grow from a tiny company to the megalithic enterprise it is today, Urs had a front-row seat to the development of its culture, and played an active role in shaping it. Here's his advice for other scaling ventures:

Culture eats strategy for breakfast: if you have the right culture, it doesn't matter if you have the right strategy from day 1, because sooner or later you will find it. Being in a team full of smart people who know how to contribute and how to contradict you means the right strategy naturally shapes itself. However, if you have the wrong culture, then the best strategy in the world wouldn't do much for you.

Psychological safety: do the people in your team feel free to speak up and contradict those in positions of authority? When you're building a scaling venture, you're making decisions quickly: it's extremely unlikely that you're not making any wrong moves. You need these outspoken people as a safety net, and no one's outspoken if they think getting fired will be the sure consequence of speaking up.

Ask yourself "What does this potential hire add to the team?": if you have too uniform a team, that team is not going to go nearly as far as one with different skill sets and perspectives. Similarly, you need to be certain that this potential hire is open to being corrected by someone else. 
It doesn't matter how smart they are — if they're not willing to grow, they won't be much of an addition. Memorable Quotes: "Right place, right time is not something you can force." "I think the #1 reason companies fail is not being mindful enough about scaling well." "Sometimes the most dangerous thing is to be the smartest person in the room, because it means you stop learning." If you'd like to listen to more conversations about working at Google, check out our first episode with Thomas Dübendorfer. Don't forget to give us a follow on our Twitter, Instagram, Facebook and Linkedin accounts, so you can always stay up to date with our latest initiatives. That way, there's no excuse for missing out on live shows, weekly giveaways or founders' dinners!
The episode addresses the frequent confusion between artificial intelligence (AI) and artificial general intelligence (AGI). While AGI represents a form of sentient AI capable of independent thought, the current state of AI, even advanced forms like generative AI, is likened to a highly sophisticated autocorrect. These AI models pull from vast databases to produce responses that seem intelligent but are essentially aggregations of historical data. The host refutes the popular notion that current AI systems, like ChatGPT, can self-modify into sentient beings. Such beliefs arise from our tendency to anthropomorphize AI. Real intelligence, as per the host, requires a breakthrough beyond the current state of generative AI. The episode concludes on a contemplative note about the unpredictable future of AI and AGI evolution. --- Send in a voice message: https://podcasters.spotify.com/pod/show/thinkfuture/message Support this podcast: https://podcasters.spotify.com/pod/show/thinkfuture/support
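The host's "sophisticated autocorrect" characterization can be made concrete with a toy bigram model: a deliberately crude stand-in for next-token prediction (real LLMs are vastly more sophisticated, but the shape of the mechanism is the same). The corpus and names below are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which word follows it in the training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, word):
    """Return the statistically most frequent next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict(model, "the"))  # in this corpus, "cat" follows "the" most often
```

The model produces a plausible next word purely by aggregating what it has already seen, with no understanding of cats or fish. That is the host's point: outputs that look intelligent can be aggregations of historical data, and scaling the same idea up does not by itself yield sentience.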
What's up folks, we've got another roundup episode today and we're talking AI. Before you dismiss this and skip ahead, here's a quick summary of why the excitement around generative AI isn't just hype—it's a sustainable shift.

While some may perceive AI to be losing steam, largely due to a surge of grifters in the field, this is not your average trend. In Episode 78, we spoke with Juan Mendoza, CEO of TMW, about why generative AI is distinct. It's not mere hype or a future possibility; generative AI delivers practical value today.

Examining Google Trends data for the search term "AI + marketing," we notice a significant surge starting in November 2022, coinciding with the release of ChatGPT. This surge peaked in May 2023, when GPT-4 became mainstream. Normally, you'd expect interest to wane after such a peak, but it has barely dipped. We're currently sitting at a 94/100 search interest compared to this summer's peak. This suggests a sustained, rather than fleeting, interest in the technology.

While nobody has a crystal ball, there's broad agreement that AI is far from making marketing roles obsolete. Instead, it's augmenting the work we do, not replacing it.

In an effort to explore further how we can future-proof ourselves, I've asked guests what specific aspects of marketing make it resistant to AI. The insights from these discussions have been fascinating, underscoring the unique value and human touch that marketers bring to the table.

Here's today's main takeaway: Your real edge in marketing fuses a nuanced understanding of business context, ethics, and human emotion with capabilities like intuition, brand voice, and adaptability—areas where AI can sort data but can't match your ability to craft compelling stories. AI isn't pushing you aside; it's elevating you to a strategic role—given you focus on AI literacy and maintain human oversight. This isn't a story of human vs. 
machine; it's about how both can collaborate to tackle complexities too challenging for either to navigate alone.

AI is less a replacement and more of a reckoning. It's not coming for us; it's coming for our inefficiencies, our lack of adaptability, and our refusal to evolve. AI is holding up a mirror to the marketing industry, asking us not if we can be replaced, but rather, why we haven't stepped up our game yet. Buckle up; this roundup of experts doesn't just debate the future—it challenges our very role in it.

Why AI Can't Fully Replace Human Nuance in Marketing Operations

Let's start off in Marketing Operations with Mike Rizzo, the founder of MarketingOps.com. We asked him to dive into his view that AI won't be replacing marketing jobs "anytime soon," a point that has some level of ambiguity. The question aimed to uncover what Mike specifically means by "anytime soon" and why he believes that AI won't fully automate the marketing operations sector in the near future.

Mike highlighted the intricacy of marketing operations that he believes will be resistant to full automation. Specifically, he mentioned that marketing across SMBs and enterprises involves nuanced processes. Each type of lead—MQL, SQL, PQL, and so on—has its own distinct workflow and architecture. This makes it a highly tailored field, more a craft than a science, and challenging to automate.

Mike pointed out that the entire operational architecture, from data movement to notification protocols, is unique to each organization. It's precisely this framework that makes it hard to replicate with AI, regardless of its computational abilities. While he admitted that AI could offer suggestions for optimizing specific metrics or elements, such as lead scoring, Mike emphasized that these technologies serve better as consultants than as decision-makers.

The implementation of martech stacks, according to Mike, is akin to running a product. 
From understanding the product roadmap to enabling team members, AI can at best serve as a consultation service, streamlining processes but never fully taking over. Each tech stack is tailored to an organization's needs, something that AI, for all its merits, struggles to capture in its full complexity.

Mike also confessed to leveraging AI for particular tasks but remains skeptical about its ability to handle the fine-tuning required in the marketing ops and RevOps space. He argued that while AI can assist, it can't replace the distinct, specialized requirements that each marketing operation demands.

Key Takeaway: Mike suggests that AI has its uses, but the nuanced, unique nature of marketing operations makes it a field that's resistant to full automation. There's value in human oversight that not even the most advanced AI can replicate.

Trust in Data and the Ability to Constrain AI Responses

While AI might have some challenges with the nuances of marketing ops, AI does have a foothold in some marketing sectors. Boris Jabes, the co-founder and CEO at Census, acknowledged AI's ability to drive efficiency, especially in advertising. In spaces where "fuzziness" is acceptable, such as Ad Tech, AI already performs exceptionally well. Marketers utilize advanced algorithms in platforms like Google and Facebook to better place their ads, and these platforms are continuously fueled by world-class AI. In these instances, AI isn't just convenient; it's almost imperative for maintaining competitive performance.

However, Boris warns that there are areas where AI falls short, specifically in customer interactions that require nuanced understanding and empathy. For example, using AI to answer questions about ADA compliance or other sensitive matters can result in "hallucinations," or incorrect and inappropriate responses. 
Herein lies a crucial challenge: How do you constrain AI to deliver only appropriate, correct information?

Additionally, Boris identifies data trustworthiness as a significant hurdle. AI's performance depends on the quality of the data it's trained on. Large enterprises are often hesitant to adopt AI without reliable data, and thus miss out on its advantages. Conversely, smaller companies are more willing to experiment, but their scale is insufficient to make industry-wide impacts.

Despite the challenges, Boris argues that staying away from AI is not an option for today's marketers. Whether you are aiding the machine with quality data or deciphering how AI can be employed responsibly, there's room for human marketers to provide valuable input and oversight.

Key Takeaway: AI has carved out a substantial role in specific sectors of marketing like Ad Tech, but it still has limitations that require human oversight. Trust in data and the ability to constrain AI responses are areas where marketers can add significant value.

Marketers Are Future Prompt Thinkers and AI Regulators

Over the next few years, marketers will be invaluable when it comes to ensuring data integrity and guiding AI's influence. Let's explore how marketing roles might evolve across different verticals. Pratik Desai has some fascinating predictions about the role of marketers. He's the founder and Chief Architect at 1to1, an agency focused on personalization strategy and implementation.

When asked about the limitations preventing AI from taking over the marketing landscape, Pratik dives into the intricacies of how AI operates in different sectors. According to him, AI in marketing can be bifurcated into "Curation AI" and "Generation AI." Curation AI, as the name suggests, curates content and recommendations. Generation AI, a more recent evolution, generates content from scratch.

Curation AI has shown promise, especially in less regulated industries like e-commerce. 
Here, even if AI gets it wrong 15% of the time, the increase in efficiency and accuracy for the remaining 85% is often considered a win. But switch the lens to highly regulated sectors like financial services or healthcare, and the stakes skyrocket. Here, even a 1% mistake rate could translate into severe regulatory or even life-impacting issues. This inherent limitation necessitates a "marketer control" layer to ensure compliance and accuracy.

In comes Generation AI, aimed at resolving some of these content-based challenges. With its ability to generate images and copy at scale, Pratik posits that it could revolutionize how marketing programs are run. This technology can create content in seconds that would otherwise take a design team weeks to produce. But again, the human element isn't completely removable. Marketers will still need to oversee these automated processes, especially in regulated sectors where the margin for error is minuscule.

Key Takeaway: The role of the marketer is changing but not disappearing. In industries with low regulation, marketers transition to becoming "critical prompt thinkers," while in more regulated sectors, they wear the additional hat of "AI regulators." This reveals the dual nature of AI: a tool that can enhance efficiency yet requires human oversight for nuance and regulatory compliance.

The Need for Ongoing Dialogue Between AI and the Marketer

This inherent necessity of a "marketer control" layer to ensure compliance and accuracy is a shared thread. When asked about the potential of AI to take over the marketing realm, Tamara Gruzbarg—VP Customer Strategy at ActionIQ—offered a seasoned perspective, advocating for a more nuanced view. She was explicit that AI can certainly handle the grunt work—automating repetitive tasks and even aiding in content generation. 
However, Tamara highlighted the irreplaceable role of human marketers when it comes to understanding brand voice, tone, and style.

Tamara also cautioned against overlooking the human element in data analytics and predictive modeling. She argued that constructing models for critical business metrics like conversion rates and lifetime value demands a deep understanding of business context. AI tools may be adept at crunching numbers, but they fall short in interpreting the underlying structure and implications of the data.

Tamara introduced the "human-in-the-loop" philosophy that they follow at ActionIQ, emphasizing the need for ongoing dialogue between the AI and the marketer. This interaction ensures that AI-generated content aligns with the brand's unique voice and message, preventing a homogenized marketplace where every brand sounds the same.

The discussion confirmed the ongoing need for marketers to "cut through the noise." Tamara argued that a human touch is essential for achieving this, particularly in an era where AI can churn out volumes of generic content. She pointed out that while AI can be a valuable partner for initial drafts and multiple versions of content, the final say should always be human.

Key Takeaway: Tamara stressed the importance of human expertise in data analytics and predictive modeling. While AI can handle data computation, it lacks the ability to understand business context and the nuances of data. She advocates for a "human-in-the-loop" approach at ActionIQ, which keeps marketers engaged with AI tools. This collaboration ensures brand-specific messaging and avoids market homogenization.

AI's Creative Strengths and Brand Style Guide Limitations

This idea of brand-specific messaging also extends to visual brand marketing and AI's inability to follow a brand guideline… at least for now. Pini Yakuel is the CEO of Optimove, a platform that's operating light years ahead of most martech when it comes to AI features. 
When asked about the roadblocks stopping AI from completely replacing human marketers, Pini focused on the intricacies of creative studio work. He points out that while AI can perform well in tasks such as comic book illustrations, it still falls short when you factor in the human elements—like nuance, emotion, and unique design language—that often define a brand.

Pini recounts a conversation he had with one of his designers about this very issue. The designer expressed that AI could create fantastical images—like a unicorn riding a motorcycle on Mars—but couldn't quite replicate the specific design language integral to their brand, Optimove. Despite AI's capabilities in artistry and replication, it lacks the human touch needed to navigate the complex and nuanced design landscape that brands often require.

He emphasized that while AI can go wild with creative elements, it's not yet proficient at maintaining the unique "look and feel" that a brand's specific style guide may dictate. For instance, integrating various elements into a cohesive design that represents a brand authentically is something AI still struggles with. According to his designer, the technology simply isn't there yet, at least not at a level that can replicate the careful and intentional choices a human designer would make.

This limitation isn't just about not having enough processing power or data; it's about an inherent lack of understanding of human emotion, culture, and nuanced communication. These elements often serve as the underpinning for any successful marketing campaign, aspects that AI can't yet replicate.

Key Takeaway: Pini argues that the barrier to AI fully replacing human marketers lies in its inability to understand and replicate the nuanced, human elements that make up a brand's unique design language. 
Until AI can integrate this "human touch," it will remain a tool rather than a replacement.

The Trust Barrier in AI's Quest to Replace Marketers

If you had asked a marketer in the mid-'80s whether the Internet would replace everything a marketer did back then, they probably would've been skeptical. To be fair, it didn't replace everything, but marketing looked dramatically different 10-15 years later. At the heart of shifting roles and of marketers acting as a control layer is this idea of adapting. Deanna Ballew is Senior Vice President of DXP Products at Acquia, where her team is focused on innovating with AI for marketers. When asked about the likelihood of AI replacing marketers, Deanna emphasized that it's not a matter of "if" but of "how" we adapt to this looming shift. In line with comments from Boris, she added that the obstacle isn't the capability of the AI but the trust—or lack thereof—in the data it uses. Deanna points out that tools like ChatGPT aren't yet trusted because they rely on an immense pool of uncurated data. To trust an AI with marketing tasks, there's a need for curated, proprietary models.

Deanna brings the focus back to a crucial but often overlooked factor: AI literacy among marketers. As AI technology advances, so must marketers' understanding of the underlying models. The future isn't just about AI doing the work but about marketers asking the right questions. Chat UX interfaces could enable marketers to query data effectively, bypassing the need for a business intelligence analyst. However, this streamlined process depends on the trustworthiness of the data.

Here's the flip side: as marketers become more literate in AI, their roles will shift from manual tasks to higher-value activities. Think about posing complex questions to AI-driven systems, which could then provide strategic insights that marketers can translate into actionable campaigns.
Marketers could use these interfaces to directly ask, "What's the next best customer segment to go after?"—with the system offering insights based on trusted data.

The advancement of AI is a double-edged sword. On one side, it promises to relieve marketers of mundane tasks; on the other, it demands a new set of skills and a higher level of trust in the data. Deanna stresses that the transformation is inevitable, but the timeline is undetermined, hinging on how quickly trust can be established in AI-generated data and models.

Key Takeaway: Deanna underscores the role of "trust" as the linchpin for AI adoption in marketing. Marketers should focus on increasing their AI literacy and understanding of underlying models to prepare for this seismic shift. Without trusted data and models, even the most advanced AI can't eliminate the human checkpoint in marketing decisions.

The Organic Evolution of AI in Marketing

There's a clear trend so far: the human checkpoint in AI isn't going away anytime soon. That means there's a clear signal for marketers to follow Deanna's advice and double down on AI literacy. The next question is really how fast you should do this. How fast will we need to adapt?

Aliaksandra Lamachenka, a marketing technology consultant, had a surprising and insightful answer. She drew an analogy with post-war Japanese architecture, specifically a concept known as "Japanese Metabolism." This architectural philosophy thought of buildings as living organisms with a spine to which modular capsules could be attached or detached. Despite its early promise from the '50s through the '70s, the concept now largely exists as an idea, with few practical implementations. The buildings initially envisioned as the future of living are now mostly used for storage.

What does this have to do with AI replacing marketers? Aliaksandra contends that society needs time to adapt to and accept new concepts, just as with Japanese Metabolism.
The notion of AI taking over marketing roles is a similarly radical shift that society isn't ready to fully embrace. Moreover, she believes that the evolution of AI will be more organic than revolutionary, a natural progression shaped by cultural and societal shifts.

Aliaksandra underscores that although AI has vast potential, the speed at which humans can adapt to and accept these changes is the bottleneck. She compares AI's future impact to the way modular buildings and integrated landscape houses have slowly, but organically, become part of architectural reality. Aliaksandra asserts that AI's growth will similarly happen organically over decades, not through immediate disruption but by evolving naturally into our processes and systems.

She concludes by pointing out that the ideas of the past often serve as the blueprints for future innovation. Whether it's post-war Japanese architects or today's AI developers, radical concepts and technologies take time to become an integral part of society. Like the modular houses of today that owe their conceptual roots to Japanese Metabolism, future AI capabilities will likely be adaptations of current bold ideas.

Key Takeaway: Aliaksandra suggests that the pace at which humans can adapt to new ideas is the limiting factor in AI's ability to replace marketers. She predicts a gradual, organic evolution of AI in marketing, driven more by human adaptation than by technological capability.

AI's Shortfall in Grasping Marketing's Emotional and Intuitive Side

While the advance of AI in the marketing sphere may be more of a steady march than an overnight revolution, there's a threshold it hasn't crossed: the realm of human intuition and gut decision-making. Tejas Manohar, Co-founder and Co-CEO at Hightouch, offered a nuanced take, emphasizing both the promise and the limitations of AI.
Tejas mentioned that AI technologies like generative AI and reinforcement learning have already begun revolutionizing how marketing campaigns and experiments are run. They offer incredible potential for automating tasks such as data experimentation, audience segmentation, and personalization.

However, Tejas made it clear that AI is not ready to replace human marketers entirely. The core of his argument lies in the duality of the marketing role, which requires both quantitative and qualitative skills. While AI can crunch numbers, run experiments, and even generate content, it falls short when the job requires a deeper understanding of human emotions or intuition-based decision-making. Tejas points out that marketers often rely on a mix of data and gut feeling, using insights to make substantial strategic changes. Current AI technologies are simply not equipped to understand or implement these nuanced elements.

He also discussed the notion of AI as a complementary tool rather than a replacement. Tejas is bullish on the idea that AI will augment marketers, particularly by providing them with easier access to critical business data. He envisions a future where marketers won't have to request specific scripts or datasets but can work independently to glean insights, thanks to advancements in AI technologies.

The question of AI completely taking over marketing, Tejas concluded, is also tied to broader ethical and societal questions. If AI gets to a point where it can wholly replace human skill and intuition, society will face "singularity-type problems" affecting not just marketing but every job role.

Key Takeaway: According to Tejas, AI's current role in marketing is as an augmenter, not a replacer. While it excels at quantitative tasks, it lacks the nuanced understanding of human emotion and intuition that is critical for effective marketing.
Its potential lies in the empowerment it can offer marketers through data access and automation.

The Thrill of Using Generative AI in Your Martech Stack

Many of the marketers I chatted with echoed Tejas: AI may be able to process data and spit out automated directives, but it can't yet replicate the unpredictable, qualitative essence of what makes marketing tick. One particular guest flipped the script on me and argued that the exciting debate is how AI will augment, not replace, the roles of marketers.

The Martech Landscape creator, the author of Hacking Marketing, the Godfather of Martech himself, Mister Scott Brinker had a clear perspective: we're not there yet. For Scott, "good marketing" remains a domain where human intuition and creativity hold court. The buzzphrase "Your job won't be replaced by AI; it will be replaced by another marketer who's good at using AI" captures the current sentiment aptly. Cheesy as it may sound, Scott sees a grain of truth here. Far from envisioning a future where AI eliminates human roles, he expects technology to bolster the capabilities of marketing professionals. It's about learning how to weave AI into current practices to improve efficiency and expand possibilities.

But where Scott finds the most promise is in the evolving role of marketing ops leaders and martech professionals. The real thrill comes from the ability to leverage generative AI to optimize what a marketing stack can do. Essentially, AI becomes a potent tool in the toolbox of the modern marketer, especially in operations. The tech is less about replacing humans and more about magnifying their abilities.

However, Scott's perspective doesn't herald the end of human involvement; it simply reframes it.
AI becomes a part of the job, a powerful component in the array of strategies and tactics that marketers employ. For him, it's about balance, not replacement. AI might be good, even exceptional, at crunching numbers and predicting outcomes based on existing data. But it can't yet think creatively or strategically the way humans can, which is where the core of "good marketing" lies.

Key Takeaway: The future of marketing isn't a binary choice between human intuition and machine capabilities. Rather, it's a synergistic relationship where each amplifies the other. For Scott, the real excitement lies in how AI will augment, not replace, the roles of marketers.

AI's Storytelling Shortfall in Marketing's Emotional Landscape

While AI will continue to amplify the reach and efficiency of marketing efforts, experts agree its role remains largely complementary to human skill sets. Despite its analytical prowess and automation capabilities, AI hasn't cracked the code on intuition or on following brand guidelines. But what about emotional intelligence and compelling storytelling—elements often considered the heart and soul of effective marketing? Lucie De Antoni, Head of Marketing at Garantme, brought forth some astute observations. Sure, AI is making strides in many industries, marketing included. It can automate and even enhance several elements of the marketing process. But what AI notably lacks, according to Lucie, is the ability to replicate human creativity and emotional intelligence.

Marketing isn't just a numbers game. It's about storytelling, tapping into human emotions, and crafting narratives that resonate with people. Lucie argues that these are areas where AI falls short. While machine learning can analyze trends and predict consumer behavior to a certain extent, it's not equipped to fully understand the nuances of human sentiment or create emotionally resonant campaigns. This shortcoming isn't necessarily a drawback; Lucie sees it as a positive.
If AI were capable of such emotional intelligence and creativity, it would put marketers in a tricky situation. The very things that make marketers invaluable—understanding human behavior, crafting compelling stories, evoking emotion—are elements that AI can't yet emulate.

So, the reality isn't that AI is primed to push marketers out of their jobs, but rather that it can become a tool that complements human skills. Lucie suggests that this "limitation" of AI serves as a safeguard for the unique value that human marketers bring to the table. The tech may evolve, but it's unlikely to eclipse the human ability to connect on an emotional level anytime soon.

Key Takeaway: Lucie emphasizes that the strength of human marketers lies in their ability to understand and evoke human emotions—a skill set that AI, despite its advancements, cannot yet replicate. Therefore, while AI can be a powerful tool, the human element in marketing remains irreplaceable.

Episode Recap

AI is already rampant in marketing, particularly in fields like Ad Tech. However, generative AI is not a magic bullet; human expertise is essential for interpreting data and grasping brand nuances. A "human-in-the-loop" approach creates a checks-and-balances system, fostering trust in the data generated by AI and offering the emotional intelligence that machines lack.

Marketing roles are evolving but definitely not vanishing. In sectors with fewer regulations, marketers could morph into strategic thinkers, whereas in tightly controlled industries, they're becoming essential AI regulators. To effectively ride this wave, increasing AI literacy among marketers is non-negotiable.

The speed at which AI becomes a staple in martech is not solely a question of technological prowess. It's about how quickly humans can adapt and find ways to integrate AI into existing frameworks.
The most viable future is not a zero-sum game between human and machine; it's a collaborative one, where each enhances the other's strengths.

You heard it here first, folks: your real edge in marketing fuses a nuanced understanding of business context, ethics, and human emotion with capabilities like intuition, brand voice, and adaptability—areas where AI can sort data but can't match your ability to craft compelling stories. AI isn't pushing you aside; it's elevating you to a strategic role, provided you focus on AI literacy and maintain human oversight. This isn't a story of human vs. machine; it's about how both can collaborate to tackle complexities too challenging for either to navigate alone.

✌️

Intro music by Wowa via Unminus
Cover art created with Midjourney
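To make the episode's data talk concrete: the "next best customer segment" question Deanna described, combined with the conversion-rate and lifetime-value metrics Tamara mentioned, reduces to a small ranking computation once the underlying data is trusted. A minimal sketch — the segment names and numbers below are invented for illustration, not drawn from any guest's product:

```python
# Hypothetical "next best segment" query over trusted segment metrics.
# Expected value per targeted customer = conversion_rate * lifetime_value.

segments = {
    "lapsed_buyers":   {"conversion_rate": 0.04, "lifetime_value": 310.0},
    "new_subscribers": {"conversion_rate": 0.09, "lifetime_value": 120.0},
    "vip_customers":   {"conversion_rate": 0.12, "lifetime_value": 540.0},
}

def next_best_segment(segments: dict) -> str:
    """Return the segment with the highest expected value per customer."""
    return max(
        segments,
        key=lambda name: segments[name]["conversion_rate"]
        * segments[name]["lifetime_value"],
    )

best = next_best_segment(segments)  # "vip_customers" (0.12 * 540 = 64.8)
```

A real system would pull these metrics from curated, proprietary models rather than a hard-coded dict — and, per the human-in-the-loop theme, the marketer still decides what to do with the answer.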
Prasad Kawthekar is the CEO of Dashworks.ai

The Evolution of Enterprise Software:
- Three waves of enterprise software: on-premise enterprise software; software deployed top-down on the internet; and the third wave we currently live in: SaaS consumers.
- Challenges are becoming increasingly pronounced.

Comparing Web Search with Internal Searches:
- Web search benefits from standardized languages and platforms, such as HTML.
- Internal company applications deal with varied data sources and lack user data for training.

The Role of AI in Knowledge Management:
- AIs excel at parsing unstructured data.
- Legacy products relied on custom parsers and integrations, making maintenance challenging.
- Modern AIs, such as LLMs, offer real-time, generalizable solutions.
- LLMs negate the need for data indexing and simplify integrations.

Challenges in Data Analytics:
- Many teams aspire to be data-driven.
- Navigating data warehouses can be intricate.
- Ensuring accurate, relevant data is paramount.

AI's Potential and Limitations:
- AIs can pinpoint unstructured data, like Zoom transcripts and Slack messages, but human interpretation is crucial.
- Current AI technology has boundaries in terms of memorization, accuracy, and relevance.
- The future vision: intuitive AI tools that everyone can use, even without technical expertise.

The Evolution of Software Development with AI:
- Coding is moving toward more abstraction, making it as simple as conversing with a person.
- Programmers are becoming more productive.
- A potential surge in the number of people who can create custom tools using AI.

Economic and Technical Implications of AI:
- Speculating on the future economy and the role of AI.
- Infrastructure challenges, including the storage, database, and compute layers.
- The promising direction of infrastructure provisioning using AI.

Future Directions for Dashworks:
- Emphasizing data warehousing and analytics.
- The significance of manipulating APIs and building reliable systems.
- Ambitions for the upcoming six months, such as enhancing API call functionalities.
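The outline's point that LLMs negate custom parsers and indexing can be sketched roughly: instead of writing one parser per data source, heterogeneous snippets (Slack messages, Zoom transcripts, tickets) are concatenated as plain text and handed to the model at query time. A minimal sketch with a stubbed model call — the `llm` function and document shapes here are hypothetical illustrations, not Dashworks' actual implementation:

```python
# Hypothetical internal-search sketch: no per-source parser or prebuilt index;
# unstructured snippets are stuffed into the prompt and the LLM does the parsing.

def llm(prompt: str) -> str:
    """Stand-in for a real LLM call (hypothetical)."""
    return f"Answer based on {prompt.count('SOURCE')} sources."

def answer(question: str, snippets: list) -> str:
    """Build one prompt from heterogeneous sources and ask the model."""
    context = "\n".join(
        f"SOURCE ({s['kind']}): {s['text']}" for s in snippets
    )
    return llm(f"{context}\n\nQuestion: {question}")

docs = [
    {"kind": "slack", "text": "We shipped the new billing flow on Tuesday."},
    {"kind": "zoom_transcript", "text": "Q3 goal is reducing churn by 5%."},
]
print(answer("When did billing ship?", docs))  # prints "Answer based on 2 sources."
```

Swapping the stub for a real model call is the only integration point; adding a new source type means adding another dict entry, not maintaining another parser.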
Here in the US, students are heading back to school. But there's one problem that hasn't been solved yet: using AI and ChatGPT in the classroom. Some schools have started to embrace it while others still reject it. What should schools do about it?

Newsletter: Sign up for our free daily newsletter
More on this: Episode Page
Join the discussion: Ask Jordan questions about ChatGPT and education

Related Episodes:
Ep 35: How Students Can Use AI to Solve Everyday Problems
Ep 55: How to properly leverage AI in the classroom
Ep 74: Should Schools Ban ChatGPT?

Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Timestamps:
[00:01:15] Daily AI news
[00:06:50] Jordan's hot take about AI in education
[00:09:45] Recent headlines about AI in schools
[00:16:07] Will AI make students dumb?
[00:20:00] Why parents should care about students learning AI
[00:23:35] Proposed solutions to "combat" AI and cheating
[00:26:00] How AI affects skill sets

Topics Covered in This Episode:
- AI News Stories
- Meta (Facebook's parent company) launches Code Llama, a new AI code tool
- Alibaba updates its AI chatbots to understand images and have more complex conversations
- Big-name companies, including Nvidia, Amazon, Google, Intel, AMD, IBM, and Salesforce, invest in the open-source platform Hugging Face
- Personal Opinion
- Jordan shares his views on AI in schools
- College students using AI to write papers
- Current AI detectors are not accurate

Keywords: AI content, Meta, Facebook, Code Llama, AI code tool, Alibaba, AI chatbots, images, complex conversations, Nvidia, Amazon, Google, Intel, AMD, IBM, Salesforce, open source platform, Hugging Face, AI in schools, college students, AI detectors, accuracy.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Today we explored the age-old question of whether machines can think. Looking at definitions of "thinking," philosophical perspectives, and Alan Turing's famous test, we found machines can now excel at many narrow tasks, but general human-level cognition remains elusive. Current AI excels at optimization, pattern recognition, and quantitative performance but still lacks abilities like creativity, reasoning, and consciousness that are hallmarks of human thought. Exciting innovations are emerging in natural language processing and neural networks that may continue to blur the lines between artificial and biological intelligence. But for now, while machines have come a long way, the essence of human thinking remains difficult to replicate artificially. How we ethically combine the complementary strengths of humans and AI promises to be an increasingly important conversation as technology progresses. Are you interested in learning to use AI? Take a look at my free class on Udemy: The Essential Guide to Claude 2! This podcast was generated with the help of artificial intelligence. We do fact-check with human eyes, but there might still be hallucinations in the output. Music credit: "Modern Situations" by Unicorn Heads
On this episode of the IoT For All Podcast, Frederic Werner and Neil Sahota from AI for Good join Ryan Chacon to discuss the global impact of AI. They talk about the AI hype cycle, the state of AI, defining good AI use cases, balancing different perspectives on AI, scaling AI for global impact, current AI trends and use cases, AI and IoT, challenges in AI, AI developments and the future of AI, fear of AI, and the 2023 AI for Good Global Summit. Frederic Werner is a seasoned Association Management professional with a passion for telecommunications specializing in strategic communications, community building, and international relations. He is the Head of Strategic Engagement for ITU's standardization bureau and was instrumental in the creation of the landmark AI for Good Global Summit. Frederic is deeply involved with innovation, digital transformation, financial inclusion, 5G, and AI via numerous ICT industry projects and events he has developed. Neil Sahota is an IBM Master Inventor, United Nations Artificial Intelligence Advisor, author of the best-seller "Own the AI Revolution" and sought-after speaker. With 20+ years of business experience, Neil works to inspire clients and business partners to foster innovation and develop next generation products/solutions powered by AI. AI for Good is an organization that identifies practical applications of AI to advance the United Nations' Sustainable Development Goals (SDGs) and scale AI solutions for global impact. It's the leading action-oriented, global, and inclusive United Nations platform on AI. AI for Good is organized by ITU in partnership with 40 UN Sister Agencies and co-convened with Switzerland. Interested in AI? 
We're launching AI For All!
Subscribe to the AI For All newsletter: https://ai-forall.com
Follow AI For All on social: https://linktr.ee/aiforallofficial
Discover more about AI and IoT at https://www.iotforall.com
More about AI for Good: https://aiforgood.itu.int
Connect with Frederic: https://www.linkedin.com/in/fredericwerner/
Connect with Neil: https://www.linkedin.com/in/neilsahota/

Key Questions and Topics from this Episode:
(00:00) Welcome to the IoT For All Podcast
(01:12) Introduction to Frederic Werner, Neil Sahota, and AI for Good
(04:47) AI hype cycle and state of AI
(07:46) What are good AI use cases?
(10:20) Balancing different perspectives on AI
(12:25) Scaling AI for global impact
(16:02) Current AI trends and use cases
(17:36) AI and IoT
(19:39) Challenges in AI
(24:58) AI developments and future of AI
(28:45) Fear of AI
(32:29) AI for Good Global Summit 2023

SUBSCRIBE TO THE CHANNEL: https://bit.ly/2NlcEwm
Join Our Newsletter: https://www.iotforall.com/iot-newsletter
Follow Us on Twitter: https://twitter.com/iotforall
Check out the IoT For All Media Network: https://www.iotforall.com/podcast-overview
We interviewed Karel Šimánek from BigHub. They are experts in applied AI and in using the cloud to help companies understand their data and implement ML. We talked with Karel about the current ChatGPT models and the future of AI and AGI, and also about the cloud, which they leverage for their clients. Should we be afraid that artificial intelligence will take over the world? And which blogs should you read to get the latest AI/data industry news? Listen to the whole episode.

Current AI models and methodologies offer immense potential for any business ready to embark on this journey. By harnessing the power of cloud computing, data engineering, data analytics, and data science, BigHub supports clients from fundamental data management to sophisticated machine learning techniques. Learn more about them at https://www.bighub.cz/
In today's episode of the CEO series, our guest shares her experience building a successful product-led company in just 3 years through strategy and innovation. We're thrilled to invite May Habib, the CEO and founder of Writer, a powerful full-stack AI writing platform that is customized for every kind of client! If you're currently leading a product-led team, this episode might be just what you need to bring your company to the next level! Join us for a riveting episode on how to craft a strategic plan from scratch, land on your laser-focus target demographic, and create a solution for the entire user journey. With their unique propositions and scaling strategy, Writer has succeeded in rising above their tech competitors. So if you're interested in learning the winning secrets of a successful PLG company, don't miss out on this episode! May's experiences are incredibly enlightening as she walks us through how Writer chose free versus paid trials, overcame low activation rates, and increased their net revenue retention (NRR).

Key Takeaways
[1:40] What Makes Writer.com So Special
[3:45] How Writer Addresses The Limitations of Current AI
[6:00] Protecting The Data Privacy Of Users
[13:50] Scaling To 100 Enterprise Customers In 14 Months
[18:10] How To Build A Successful PLG Strategy
[22:45] Building A Broad Footprint For Use Cases
[26:30] Why NRR Is One Of The Most Important Metrics
[30:10] Optimizing Your Pricing and Packaging Strategy
[34:55] How To Determine Free Vs. Paid Functions
[36:30] Framing Team Problems In PLG
[39:30] Overcoming Low Activation Rates
[42:55] When Is The Best Time For Sales Outreach
[46:20] May's Best Advice For Fellow PLG CEOs

About May Habib
May Habib is co-founder and CEO of Writer, an AI writing assistant for teams. May graduated with high honors from Harvard, is a member of the World Economic Forum, and is a Fellow of the Aspen Global Leadership Network.
Profile
May on Twitter
Writer on Twitter
Writer website
Writer on LinkedIn
Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. Keeping in mind that AI may be integral to improving humanity's long-term potential, there is some concern that building ever more intelligent and powerful AI systems could eventually result in systems that are more powerful than us; some say this is like playing with fire and speculate that this could create existential risks (x-risks). To add precision and ground these discussions, we review a collection of time-tested concepts from hazard analysis and systems safety, which have been designed to steer large processes in safer directions. We then discuss how AI researchers can realistically have long-term impacts on the safety of AI systems. Finally, we discuss how to robustly shape the processes that will affect the balance between safety and general capabilities. 2022: Dan Hendrycks, Mantas Mazeika https://arxiv.org/pdf/2206.05862v7.pdf
Producer/Host: Jim Campbell There have been a lot of headlines in the popular press of late about Artificial Intelligence. On today’s edition, let’s look at a few headlines that may not have made it into the popular press yet. Here are links for the materials mentioned in today’s edition: AI artwork can’t be copyrighted rules the US Copyright Office Robots let ChatGPT touch the real world thanks to Microsoft NYC AI Hiring Law Artificial Intelligence Video Interview Act About the host: Jim Campbell has a longstanding interest in the intersection of digital technology, law, and public policy and how they affect our daily lives in our increasingly digital world. He has banged around non-commercial radio for decades and, in the little known facts department (that should probably stay that way), he was one of the readers voicing Richard Nixon's words when NPR broadcast the entire transcript of the Watergate tapes. Like several other current WERU volunteers, he was at the station's sign-on party on May 1, 1988 and has been a volunteer ever since, doing an early stint as a Morning Maine host, and later producing WERU program series including Northern Lights, Conversations on Science and Society, Sound Portrait of the Artist, Selections from the Camden Conference, others that will probably come to him after this is posted, and, of course, Notes from the Electronic Cottage. The post Notes from the Electronic Cottage 3/16/23: Current AI Headlines first appeared on WERU 89.9 FM Blue Hill, Maine Local News and Public Affairs Archives.
In the age of artificial intelligence, how can entrepreneurs offer products and services that hold value beyond what a machine is capable of providing? Jon LoDuca believes that going back to the basics of authentic human interaction is the new million-dollar idea. Jon is founder and CEO of PlaybookBuilder, an award-winning knowledge management software that helps leaders and teams capture and share best practices, core process, and story to drive performance, scalability, and sustainability. Wisdom is a company's best asset, and Jon's software programs help organizations monetize their intellectual property in a world where humans find themselves competing with AI.

With conversation software such as ChatGPT and Google's LaMDA replacing entire industries like customer service, business owners are doing their best to adapt and grow within new technological and cultural parameters. Current AI software passes the “deity test,” meaning it is omniscient, omnipresent, omnipotent, and immortal. This reality demands that entrepreneurs address how AI will relate to their business. To prevent being disintermediated, Jon encourages leaders to come back to the core values that motivated them in the early days of their careers. The best way to offer a unique experience is by boldly preserving human interaction, creating a sense of intimacy in communication, and offering deep empathy. Entrepreneurs can guarantee their success by playing a trump card that's outside the wheelhouse for AI: human-to-human interaction.
In the near future, it may just become the world's most valuable commodity.

Main Topics
Jon's start in the technology industry in San Francisco (02:00)
AI passes the “deity test” and what this means for entrepreneurs (08:20)
Jon's 2017 TED Talk predicted trends in AI that have come to fruition (11:33)
Future demands for face-to-face interactions to confirm identity (16:00)
Changing definitions of intimacy (19:55)
Avoid the Metaverse to prevent being disintermediated (24:00)
Two types of business models: wide and deep (30:23)
How to compete with AI by emphasizing the human component of business (36:00)
Return to original values that focused on building relationships with the client (40:10)

Episode Links
https://playbookbuilder.com/demo/
https://www.youtube.com/watch?v=jEURpISHCpA

Connect with Jon:
https://www.linkedin.com/in/jonloduca/
https://twitter.com/jonloduca
https://www.instagram.com/therealjonloduca/

Connect with Adam:
https://www.startwithawin.com/
https://www.facebook.com/AdamContosCEO
https://twitter.com/AdamContosCEO
https://www.instagram.com/adamcontosceo/

Listen, rate, and subscribe! Today's episode was brought to you by RE/MAX; nobody in the world sells more real estate than RE/MAX. For more information head over to www.REMAX.com
(01:08) Sponsor
(01:37) Introducing Eugenia
(03:25) Replika's controversial choice to limit erotic roleplay functionality
(04:25) Eugenia's vision for Replika
(05:58) Augmented Reality is the Ultimate Modality
(07:52) Using Replika
(13:11) Profound stories about virtual assistants
(15:55) Eugenia on why Replika is more than just a painkiller or vitamin
(28:50) Replika's business model and profitability
(34:12) Users want their Replika to surprise them
(36:12) Why ChatGPT is not a conversational model
(49:26) Eugenia's prediction for the "next iPhone of personal AI"
(1:03:00) Eugenia's AI tools/stack
(1:15:00) Eugenia comes back to discuss the erotic roleplay controversy
(1:18:00) Replika's first reaction to users' ERP
(1:22:00) How the product evolved from 90% scripts & retrieval models
(1:24:00) The definitive focus for Replika to stay in a PG-13 zone
(1:26:00) "Normal" evolution given sci-fi depictions of AI romances
(1:30:00) Current AI ethics on sexuality
(1:37:07) Response to journalists' misportrayal
(1:40:29) Sponsor

Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data to generate personalized experiences at scale.

Twitter: @CogRev_Podcast @eriktorenberg (Erik) @labenz (Nathan) @ekuyda (Eugenia)
Websites: cognitiverevolution.ai https://replika.ai/

RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more.
With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade-offs, and dynamics of constructing high-performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck, Patty McCord. https://link.chtbl.com/hrheretics
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [linkpost] Better Without AI, published by DanielFilan on February 14, 2023 on LessWrong. David Chapman (of Meaningness and In the Cells of the Eggplant fame) has written a new web-book about AI. Some excerpts from the introduction, Only you can stop an AI apocalypse: Artificial intelligence might end the world. More likely, it will crush our ability to make sense of the world—and so will crush our ability to act in it. AI will make critical decisions that we cannot understand. Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant. Formerly-respected information sources will publish mysteriously persuasive nonsense. We will feel our loss of understanding as pervasive helplessness and meaninglessness. We may take up pitchforks and revolt against the machines—and in so doing, we may destroy the systems we depend on for survival... We don't know how our AI systems work, we don't know what they can do, and we don't know what broader effects they will have. They do seem startlingly powerful, and a combination of their power with our ignorance is dangerous... In our absence of technical understanding, those concerned with future AI risks have constructed “scenarios”: stories about what AI may do... So far, we've accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We've found zero that lead to good outcomes... Unless we can find some specific beneficial path, and can gain some confidence in taking it, we should shut AI down. This book considers scenarios that are less bad than human extinction, but which could get worse than run-of-the-mill disasters that kill only a few million people. 
Previous discussions have mainly neglected such scenarios. Two fields have focused on comparatively smaller risks, and extreme ones, respectively. AI ethics concerns uses of current AI technology by states and powerful corporations to categorize individuals unfairly, particularly when that reproduces preexisting patterns of oppressive demographic discrimination. AI safety treats extreme scenarios involving hypothetical future technologies which could cause human extinction. It is easy to dismiss AI ethics concerns as insignificant, and AI safety concerns as improbable. I think both dismissals would be mistaken. We should take seriously both ends of the spectrum. However, I intend to draw attention to a broad middle ground of dangers: more consequential than those considered by AI ethics, and more likely than those considered by AI safety. Current AI is already creating serious, often overlooked harms, and is potentially apocalyptic even without further technological development. Neither AI ethics nor AI safety has done much to propose plausibly effective interventions. We should consider many such scenarios, devise countermeasures, and implement them. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
AI stocks have skyrocketed amid the surge of attention around ChatGPT (1:00) - What Is ChatGPT and Generative AI? (5:10) - What Are The Possible Benefits This Technology Can Bring? (10:40) - What Are The Limitations of Current AI? (15:15) - Will AI Be The Most Disruptive Technology? (19:35) - AI Wars: Which Tech Giant Will Be The Big Winner? (22:50) - ROBO Global Artificial Intelligence ETF: THNQ (29:45) - Stocks And ETFs To Keep On Your Radar: AI, SOUN, BBAI, BOTZ, ROBO, IRBO Podcast@Zacks.com
Making Sense of The Current AI Paradigm
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.24.513526v1?rss=1 Authors: Barabasi, D. L., Schuhknecht, G. F. P., Engert, F. Abstract: During development, the complex neuronal circuitry of the brain arises from limited information contained in the genome. After the genetic code instructs the birth of neurons, the emergence of brain regions, and the formation of axon tracts, it is believed that neuronal activity plays a critical role in shaping circuits for behavior. Current AI technologies are modeled after the same principle: connections in an initial weight matrix are pruned and strengthened by activity-dependent signals until the network can sufficiently generalize a set of inputs into outputs. Here, we challenge these learning-dominated assumptions by quantifying the contribution of neuronal activity to the development of visually guided swimming behavior in larval zebrafish. Intriguingly, dark-rearing zebrafish revealed that visual experience has no effect on the emergence of the optomotor response (OMR). We then raised animals under conditions where neuronal activity was pharmacologically silenced from organogenesis onward using the sodium-channel blocker tricaine. Strikingly, after washout of the anesthetic, animals performed swim bouts and responded to visual stimuli with 75% accuracy in the OMR paradigm. After shorter periods of silenced activity, OMR performance stayed above 90% accuracy, calling into question the importance and impact of classical critical periods for visual development. Detailed quantification of the emergence of functional circuit properties by brain-wide imaging experiments confirmed that neuronal circuits came 'online' fully tuned and without the requirement for activity-dependent plasticity. Thus, we find that complex sensory-guided behaviors can be wired up by activity-independent developmental mechanisms. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A descriptive, not prescriptive, overview of current AI Alignment Research, published by Jan Hendrik Kirchner on June 6, 2022 on The AI Alignment Forum. TL;DR: In this project, we collected and cataloged AI alignment research literature and analyzed the resulting dataset in an unbiased way to identify major research directions. We found that the field is growing quickly, with several subfields emerging in parallel. We looked at the subfields and identified the prominent researchers, recurring topics, and different modes of communication in each. Furthermore, we found that a classifier trained on AI alignment research articles can detect relevant articles that we did not originally include in the dataset. Dataset Announcement In the context of the 6th AISC, we collected a dataset of alignment research articles from a variety of different sources. This dataset is now available for download here and the code for reproducing the scrape is on GitHub here. When using the dataset, please cite our manuscript as described in the footnote. Here follows an abbreviated version of the full manuscript, which contains additional analysis and discussion. Rapid growth of AI Alignment research from 2012 to 2022 across two platforms After collecting the dataset, we analyzed the two largest non-redundant sources of articles, Alignment Forum (AF) and arXiv. We found rapid growth in publications on the AF (Fig. 1a) and a long-tailed distribution of articles per researcher (Fig. 1b) and researchers per article (Fig. 1c). We were surprised to find a decrease in publications on the arXiv in recent years, but identified the cause for the decrease as spurious and fixed the issue in the published dataset (details in Fig. 4). 
Unsupervised decomposition of AI Alignment research into distinct clusters Given access to this unique dataset, we were curious to see if we could identify distinct clusters of research. We mapped the title + abstract of each article into vector form using the Allen Institute for AI's SPECTER model and reduced the dimensionality of the embedding with UMAP (Fig. 2a). The resulting manifold shows a continuum of AF posts and arXiv articles (Fig. 2b) and a temporal gradient from the top right to the bottom left (Fig. 2c). Using k-means and the elbow method, we obtain five clusters of research articles that map onto distinct regions of the UMAP projection (Fig. 2d). We were curious to see if the five clusters identified by k-means map onto existing distinctions in the field. When identifying the most prolific authors in each cluster, we noticed strong differences (consistent with previous work that suggests that author identity is an important indicator of research direction). By skimming articles in each cluster and given the typical research published by the authors, we suggest the following putative descriptions of each cluster: cluster one: Agent alignment is concerned with the problem of aligning agentic systems, i.e. those where an AI performs actions in an environment and is typically trained via reinforcement learning. cluster two: Alignment foundations research is concerned with deconfusion research, i.e. the task of establishing formal and robust conceptual foundations for current and future AI Alignment research. cluster three: Tool alignment is concerned with the problem of aligning non-agentic (tool) systems, i.e. those where an AI transforms a given input into an output. The current, prototypical example of tool AIs is the "large language model". cluster four: AI governance is concerned with how humanity can best navigate the transition to advanced AI systems. This includes focusing on the political, economic, military, governance, and ethical dimensions. 
cluster five: Value alignment is concerned with understanding and extracting human preferences and designing me...
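The clustering recipe described above (embed each title + abstract with SPECTER, reduce with UMAP, then run k-means and pick the cluster count with the elbow method) can be sketched with a small, standard-library-only toy. Here, three well-separated 2-D blobs stand in for the real article embeddings, and the helper names (`kmeans`, `best_wss`) are illustrative, not from the post:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Coordinate-wise mean of a non-empty list of points."""
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns the within-cluster sum of squares (WSS)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centroids[c]))].append(p)
        # recompute centroids as cluster means (keep old centroid if a cluster empties)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return sum(min(dist2(p, c) for c in centroids) for p in points)

def best_wss(points, k, restarts=10):
    # best-of-n restarts to avoid unlucky initializations
    return min(kmeans(points, k, seed=s) for s in range(restarts))

# Toy stand-in for the article embeddings: three well-separated 2-D blobs.
rng = random.Random(42)
points = [(cx + rng.uniform(-1, 1), cy + rng.uniform(-1, 1))
          for cx, cy in [(0, 0), (10, 0), (0, 10)] for _ in range(20)]

wss_by_k = {k: best_wss(points, k) for k in range(1, 7)}
# Plotting wss_by_k shows the drop in WSS flattening sharply after k = 3:
# that kink is the "elbow" used to choose the number of clusters.
```

In the actual analysis the points would be high-dimensional SPECTER vectors (optionally UMAP-reduced first), and a library implementation such as scikit-learn's `KMeans` would replace this sketch; the elbow logic is the same.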
The importance of providing patients with a fast and accurate diagnosis cannot be overstated – it can be the difference between life and death. Unfortunately, the healthcare system is overwhelmed, and physicians cannot keep up with the overflow of patient needs in a timely manner. Dr. Chai Xiangfei, CEO and co-founder at HY Medical, and Beenish Zia, chief architect of medical imaging at Intel, sit down with podcast host Tyler Kern to discuss the current challenges of providing a patient diagnosis in a timely manner and how artificial intelligence (AI) solutions can help. Beenish highlights the complexities of diagnosing patient illnesses. “Medical diagnosis is a complex process which requires clinical skills and the need for clear decisions. And it needs to be balanced with acceptance for the ambiguity of many clinical situations,” Beenish explains. According to Beenish, there are five major obstacles that delay diagnosis:
- Individual logic and processes vary.
- When the nature of the evidence is unclear, further tests and screenings are required.
- Standards vary globally, which lengthens diagnosis times.
- A lack of evidence and data from diverse populations means data sets do not represent different communities well.
- The number of cases a physician must handle grows daily.
“For example, in certain hospitals in New York, we have heard that the radiologists get more than 1400 scans per day, and maybe there is just like a handful of radiologists to look at them. So, just the huge imbalance between the number of cases requiring diagnosis and the medical staff available to provide the diagnosis is not good,” Beenish explains. To address this need, Dr. Chai talks about the role of AI in reducing diagnosis time to improve patient outcomes. “There are a huge amount of images created every day. In the meantime, actually, there's a shortage of radiologists to read images.
So, the real situation is that because there is a long waiting list of imaging… it can take hours or even days” for patients to get the results of their tests, notes Dr. Chai. With traditional processes, scans are taken and added to a waiting list to be read and interpreted by radiologists, which can take up to 15 minutes. Current AI technology can give initial results within one minute and can ultimately reduce reading time by 30%. When radiology staff and doctors get the initial results, they can quickly review the areas highlighted by the AI and then determine a path forward for diagnosis and treatment. This reduction in reading time helps patients get treated sooner. To learn more about how AI can transform the diagnostic process and improve patient outcomes, connect with Dr. Chai Xiangfei and Beenish Zia on LinkedIn. To find out more about HY Medical, please visit: https://www.huiyihuiying.com/#page1. To learn about the various technologies Intel offers to advance the healthcare field — from compute to storage and networking to AI — visit: https://www.intel.com/content/www/us/en/healthcare-it/healthcare-overview.html. Subscribe to this channel on Apple Podcasts, Spotify, and Google Podcasts to hear more from the Intel Internet of Things Group.
Current AI is impressive, but it's not intelligent. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/faradaynight/message
Why do you do what you do?
Are you a cybersec company?
Current AI used.
Problems addressed & industries targeted.
AI regulations - where to stand.
Reference links:
Factide Website
Factide Facebook Page
Factide LinkedIn Page
Factide Twitter Page