Branch of mathematics concerning chance and uncertainty
Imagine teaching a robot 1,000 tasks in just 24 hours. Imagine teaching robots just like you teach humans. In fact, what if teaching a robot were as easy as showing it once?

Humans can learn new skills almost instantly by watching, trying, or receiving a quick explanation. Robots, historically, haven't been so lucky. Training them often requires huge datasets of real or virtual data, massive engineering effort, and weeks or months of experimentation. But that may be changing.

In this episode of TechFirst, host John Koetsier talks with Edward Johns, Director of the Robot Learning Lab at Imperial College London, about a breakthrough in efficient imitation learning that allowed a robot to learn 1,000 different tasks in just 24 hours. Instead of collecting huge datasets, Johns' team combines simulation training, clever algorithm design, and single demonstrations to dramatically speed up how robots learn.

We discuss:
• How robots can learn from just one demonstration
• Why breaking tasks into “reach” and “interact” phases makes learning faster
• The role of simulation data in robotics AI
• Why robotics doesn't have the same data advantage as large language models
• The future of prompt-like robot training
• Whether humanoid robots will actually learn like humans

As robotics hardware rapidly improves and costs fall, breakthroughs like this could be the key to making robots truly useful in homes, factories, and everyday life. If robots are going to become real collaborators with humans, they'll need to learn quickly ... just like we do.

Guest
Edward Johns
Director, Robot Learning Lab
Imperial College London
https://www.imperial.ac.uk

Subscribe for more conversations on AI, robotics, and the future of technology:
https://techfirst.substack.com

00:00 Can robots learn as fast as humans?
00:51 Teaching a robot 1,000 tasks in 24 hours
01:08 The two-phase learning approach
02:14 Old-school robotics vs. machine learning
03:29 The robotics data bottleneck
04:47 The challenge of dynamic environments
06:04 The coming wave of robot data
06:59 Why robots must be teachable by users
08:08 Why LLM-style scaling is harder in robotics
09:42 Prompting robots with demonstrations
10:54 Probabilistic robot behavior and safety
12:20 What robots can do today
13:53 Why hardware precision still matters
16:53 When this reaches the real world
17:59 Humanoids that look human vs. learn human
18:40 The robotics boom around the world
22:34 The risk of scaling too early
23:46 Faster learning vs. more data
26:20 The next frontier in robot learning
As foundation models, including large language models and multimodal models, are increasingly deployed in complex and high-stakes settings, ensuring their safety has become more important than ever. In this talk, I present a probabilistic perspective on AI safety: safety risks are treated as structured distributions to be discovered and controlled, rather than isolated failures to be patched. I first introduce probabilistic red-teaming methods that characterize distributions of failures, revealing systematic safety risks that standard evaluation often misses. I then describe probabilistic defense methods that control model behavior during deployment by adaptively steering generation toward constraint-aligned distributions. By unifying failure discovery and behavior control under a probabilistic perspective, this talk highlights a distributional approach for understanding and managing safety risks in foundation models. About the speaker: Ruqi Zhang is an Assistant Professor in the Department of Computer Science at Purdue University. Her research focuses on probabilistic machine learning, generative modeling, and trustworthy AI. Prior to joining Purdue, she was a postdoctoral researcher at the Institute for Foundations of Machine Learning (IFML) at the University of Texas at Austin. She received her Ph.D. from Cornell University. Dr. Zhang has been a key organizer of the Symposium on Probabilistic Machine Learning. She has served as an Area Chair and Editor for ML conferences and journals, including ICML, NeurIPS, ICLR, AISTATS, UAI, and TMLR. Her contributions have been recognized with several honors, including AAAI New Faculty Highlights, Amazon Research Award, Spotlight Rising Star in Data Science, Seed for Success Acorn Award, and Ross-Lynn Research Scholar.
This week, we're joined by Alexis Gauba, Co-Founder of Raindrop, an AI-native observability platform built for agents in production.

Alexis breaks down why operating agents is fundamentally different from monitoring traditional software. As systems shift from deterministic code to probabilistic behavior, dashboards alone are not enough. Teams need to detect unknown issues, track signals like forgetting, and understand long agent trajectories across millions of AI events.

We discuss why agent observability has become essential over the past year, what makes agent infrastructure distinct from prior platform shifts, and when internal tooling stops scaling. Alexis also explains Raindrop's approach to production monitoring, combining explicit signals with automated detection to help teams not just find issues, but fix them.

Episode chapters:
2:05 - Founding Raindrop
3:54 - Building with Close Friends
5:44 - What Raindrop Actually Does
7:45 - The Reliability Challenge of Agents
9:55 - Monitoring Agents at Scale
14:45 - Experiencing the Pain Firsthand
18:00 - The Rise of Agent Infrastructure
22:17 - Internal AI Use Cases
24:07 - Hiring for Initiative and Ownership
28:30 - The Power of Multitasking
32:06 - Quick Fire Round

This episode is brought to you by Grata, the leading deal sourcing platform for private equity. Grata's AI-powered search, investment-grade data, and intuitive workflows help you find and win the right deals faster. Visit grata.com to book a demo.

This episode is also sponsored by Overlap, the AI-powered app that uses LLMs to surface the best moments from any podcast. Overlap reads full transcripts, finds the most relevant clips, and stitches them into a personalized stream of insights. Tap into podcasts as a real information source with Overlap 2.0, now available on the App Store.
In this episode, Jeff Mains sits down with KG Charles-Harris, a serial entrepreneur who has founded six companies across industries ranging from genomics to AI. KG is the founder and CEO of Quarrio, a deterministic AI platform that solves a critical problem: getting accurate, consistent answers from corporate data in seconds instead of weeks.

KG shares his unconventional path to entrepreneurship, explaining how his companies emerge from late-night conversations with brilliant people who share a common problem. He breaks down the crucial difference between deterministic and probabilistic AI systems, making the case that when decisions involve real money, real lives, or real consequences, accuracy isn't optional—it's essential.

Key Takeaways
[0:00] Introduction to KG Charles-Harris and his multi-industry entrepreneurial journey
[1:18] How companies are born from conversations: The pattern behind KG's six startups
[2:30] The genomics company origin story: From 4:30 AM conversation to Norwegian startup
[3:28] Why Quarrio exists: Even data company CEOs can't get the data they need
[4:31] The Quarrio platform: 100% accuracy, plain language queries, auto-visualization
[5:27] Real-world impact: The $60M margin leak that took two quarters to find (would take 5 seconds with Quarrio)
[7:00] Deterministic vs. probabilistic AI explained: Why autopilots don't hallucinate
[11:30] The cycle time framework: Information → Decision → Action → Results
[13:00] Why ChatGPT's inconsistency is a dealbreaker for enterprise decisions
[18:30] Organizations as "decision-making machines" and democratizing decisions to every level
[20:30] The data explosion: Managing 300+ structured data sources in mid-sized enterprises
[23:00] Why Quarrio focuses on structured enterprise data (SAP, Salesforce, Oracle) instead of PDFs
[30:00] Go-to-market strategy: Why they started with Salesforce and sales teams
[32:30] The Salesforce incubation story: Free office space and immediate investment
[33:30] Team building philosophy: Surrounding yourself with people smarter than you
[37:00] Stewardship as core ethos: Taking care of family, team, customers, and partners
[38:30] The founder's dilemma: Resilience vs. delusion—knowing when to persist
[43:00] Where to connect with KG and learn more about Quarrio

Tweetable Quotes
"An organization is essentially a machine for making decisions and taking actions that have certain types of results." — KG Charles-Harris
"Cycle time to information shortens cycle time to decision, which shortens cycle time to action, which shortens cycle time to results." — KG Charles-Harris
"Agentic AI without context is useless. You need determinism to trust what is enacted within your system." — KG Charles-Harris
"Effectiveness requires redundancy. Efficiency optimizes for the shortest time or best expense, but effectiveness accomplishes the goal." — KG Charles-Harris
"I'm not very smart, and because I realize that, I ensure I work with people who are very smart. Then they make me look smart." — KG Charles-Harris
"Most of us give up before we should have. The break would have come had we stuck it out one more month." — KG Charles-Harris
"If you don't have their back, you cannot expect them to have yours. It's a
In this episode, Robert explains why successful investing is not about prediction but about probabilities, positioning, and risk management. He breaks down how probabilistic thinking shapes portfolio decisions, why being wrong is inevitable, and how asymmetric risk-reward setups allow investors to win over time without needing certainty. Robert then discusses where he currently sees favorable odds across different asset classes, how he thinks about adding exposure at the margin, and why process matters more than short-term narratives. The episode wraps with a listener question on market volatility, global headlines, and how to avoid overcomplicating market moves during periods of uncertainty.
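The asymmetric risk-reward idea can be made concrete with a quick simulation. This is a hypothetical sketch, not from the episode; the win rate and payoff numbers are invented for illustration:

```python
import random

def average_return(win_rate, gain, loss, trials=100_000, seed=42):
    """Simulate many independent bets and return the mean payoff per bet."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += gain if rng.random() < win_rate else -loss
    return total / trials

# Hypothetical asymmetric setup: right only 40% of the time,
# but each win pays 3 units while each loss costs 1 unit.
# Expected value per bet = 0.4 * 3 - 0.6 * 1 = +0.6,
# so the strategy is profitable despite a sub-50% hit rate.
print(round(average_return(win_rate=0.4, gain=3.0, loss=1.0), 2))
```

The point of the sketch: with a favorable payoff ratio, being wrong most of the time is compatible with a positive long-run result, which is why positioning and risk management can matter more than prediction accuracy.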
Tyson Singer (Head of Tech & Platforms @ Spotify) joins us to unpack how Spotify is transforming its product development lifecycle across creation, experimentation, and maintenance to shift from "localized speed" to "systematic speed." We explore why the industry's current obsession with the "Build It" phase of development is shortsighted, and how Spotify is aggressively deploying AI in the "Think It" (prototyping/strategy) and "Maintain It" (fleet management) phases. Tyson also details the internal tools driving this shift, including AiKA and Honk, and shares why the future of engineering relies on moving from I-shaped specialists to T-shaped generalists.

ABOUT TYSON SINGER
Tyson Singer is the SVP of Technology & Platforms at Spotify, where he leads technology infrastructure, developer experience, cybersecurity, and finance IT. Tyson is the executive behind Spotify's internal developer portal, Backstage, and Spotify's experimentation system, Confidence, which are now both commercially available. He has a background as an engineer, architect, and product lead, and he holds a Master's in Computer Science from Stanford University. Tyson is also an avid outdoor adventurer.

This episode is brought to you by Retool!
What happens when your team can't keep up with internal tool requests? Teams start building their own, Shadow IT spreads across the org, and six months later you're untangling the mess… Retool gives teams a better way: governed, secure, and no cleanup required. Retool is the leading enterprise AppGen platform, powering how the world's most innovative companies build the tools that run their business. Over 10,000 organizations including Amazon, Stripe, Adobe, Brex, and Orangetheory Fitness use the platform to safely harness AI and their enterprise data to create governed, production-ready apps. Learn more at Retool.com/elc

SHOW NOTES:
Tyson's 9-year journey @ Spotify: From the "crucible" of hyper-growth to leading Tech & Platforms (3:46)
The pivot from "localized speed" to "systematic speed" (7:27)
Core principles of Spotify's Platform org: Partnering with customers & "Taking the pain away" (10:37)
The "Think it, Build it, Ship it, Tweak it" lifecycle framework & why the industry obsession with "Build It" (coding agents) is missing the bigger picture (14:57)
How Spotify is investing in the "Think It" phase: AI prototyping with deep business context (16:49)
AiKA (AI Knowledge Assistant): Context engineering for humans and bots (18:47)
"Honk": Spotify's internal framework for large-scale automated code changes (22:17)
Addressing the decline of code quality and the bottleneck of human PR reviews (25:50)
Probabilistic vs. Deterministic code reviews: A new approach to quality checks (29:43)
Identifying bottlenecks to company value outside of R&D (Legal, Licensing, etc.) (32:12)
Why systems change is fundamentally about people and identity shifts (35:57)
Rapid fire questions (38:49)

This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor - https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer; Dan's also an avid 3D printer - https://www.bnd3d.com/
Ellie Coggins Angus - Copywriter; check out her other work at https://elliecoggins.com/about/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Mark “Murch” Erhardt and Mike Schmidt are joined by Bastien Teinturier, Rearden Code, and Pieter Wuille to discuss Newsletter #385: 2025 Year-in-Review Special.

January
● Updated ChillDKG draft (43:08)
● Offchain DLCs (45:53)
● Compact block reconstructions (2:29:27)

February
● Erlay update (1:53:55)
● LN ephemeral anchor scripts (0:50)
● Probabilistic payments (54:45)

March
● Bitcoin Forking Guide (3:29:35)
● Private block template marketplace to prevent centralizing MEV (3:05:28)
● LN upfront and hold fees using burnable outputs (13:12)

April
● SwiftSync speedup for initial block download (2:09:35)
● DahLIAS interactive aggregate signatures (3:26:02)

Summary 2025: Quantum (58:07)

May
● Cluster mempool (1:22:11)
● Increasing or removing Bitcoin Core's OP_RETURN policy limit (2:45:43)

June
● Calculating the selfish mining danger threshold (2:20:39)
● Fingerprinting nodes using addr messages (3:11:38)
● Garbled locks (3:19:01)

Summary 2025: Soft fork proposals (26:57)

July
● Chain code delegation (49:07)

August
● Utreexo draft BIPs (2:15:57)
● Lowering the minimum relay feerate (2:39:52)
● Peer block template sharing (2:56:01)
● Differential fuzzing of Bitcoin and LN implementations (3:16:08)

Summary 2025: Stratum v2 (2:04:49)

September
● Details about the design of Simplicity (3:23:01)
● Partitioning and eclipse attacks using BGP interception (3:13:47)

October
● Discussions about arbitrary data (3:01:15)
● Channel jamming mitigation simulation results and updates (11:05)

November
● Comparing performance of ECDSA signature validation in OpenSSL vs. libsecp256k1 (2:01:47)
● Modeling stale rates by propagation delay and mining centralization (2:22:32)
● BIP3 and the BIP process (3:31:37)
● Bitcoin Kernel C API introduced (3:35:35)

December
● Splicing (7:33)
Back when I first worked with Jana Werner at Tesco Bank, I saw firsthand how a crisis could be a crucible for innovation and transformation. Her ability to unlock potential in even the most challenged teams was unforgettable. Now, teaming up with Phil Le-Brun—a transformational leader I came to know through his work at McDonald's—they've co-authored The Octopus Organization, a guide for thriving in an age of continuous transformation.

In this episode, we go behind the scenes of their book and explore the anti-patterns that hold organizations back, the behaviors leaders must unlearn, and the mindset shifts required to succeed when change never stops. Whether you're a CEO, change agent, or team lead, you'll leave with small, actionable experiments to start evolving your organization—today.

Key Takeaways
Unlearning blame-based leadership: Shifting focus from fixing people to fixing systems unlocks performance and trust.
Spotting anti-patterns in everyday behavior: Habits like jargon, silos, and avoidance subtly block progress.
Embracing uncertainty in leadership: Probabilistic thinking builds better decisions and psychological safety.
Driving transformation through small experiments: Distributed action outperforms top-down mandates.
Leading with curiosity in the age of AI: Execs must actively engage with tech to stay relevant and credible.

Additional Insights
Behind the book: Why The Octopus Organization centers on 36 anti-patterns and how they uncovered them
Real-world leadership stories: Lessons from Tesco Bank, McDonald's, Amazon, and Ferrari
Transformation fatigue is real: Overengineered change efforts often create fear and resistance
Alignment breakdowns in leadership teams: Many transformations fail because leaders aren't truly on the same page
Reframing performance: Asking “what did you stop doing” reveals deeper impact than traditional goals

Episode Highlights
00:00 – Episode Recap
Jana Werner shares how she took over a struggling tech team, discovered their true strengths, and transformed their performance by rebuilding culture and trust. Phil Le-Brun describes the importance of creating a culture of trust in organizations, allowing people to test ideas and make a real difference.

02:46 – Guest Introduction: Jana Werner & Phil Le-Brun
Barry O'Reilly introduces guests Jana Werner and Phil Le-Brun, describing their collaboration during times of crisis at Tesco Bank, their leadership backgrounds, and their shared vision for adaptive, purpose-driven organizations as captured in their new book.

04:36 – Revitalizing a Demotivated Team at Tesco Bank
Jana Werner narrates how she took over a demotivated technology team, overcame her initial preconceptions, and transformed the group into a top-performing unit by changing culture, empowering individuals, and shifting organizational dynamics.

07:07 – Lessons from McDonald's: Balancing Centralization and Agility
Phil Le-Brun explains McDonald's transformation journey, the need to unify local and corporate efforts, and the financial impact of building trust and alignment.

10:16 – Learning from Industry Leaders
Phil recounts interviews with CEOs like Indra Nooyi and Benedetto Vigna, highlighting that true leadership requires humility, storytelling, and ongoing curiosity.

14:14 – Unlearning the Need for Certainty
Jana Werner discusses shifting away from needing all the answers and embracing uncertainty, drawing on insights from Annie Duke and other...
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome back to AI Unraveled, your daily strategic briefing on the business impact of artificial intelligence.

Today, we are dismantling the biggest lie in enterprise tech: that all AI adoption is created equal. We are breaking down the 3-Layer Agentic Workflow ROI Model, a financial framework that separates the 'cash cows' from the 'venture bets.'

1. The Paradigm Shift: Deterministic vs. Probabilistic
We are exiting the "If-Then" era of rigid RPA and entering the "Probabilistic" era of Agentic AI. The value proposition is shifting from efficiency (doing it faster) to efficacy (figuring out what to do). However, this introduces "probabilistic risk"—where errors aren't bugs, but costly hallucinations.

2. Layer 1: Simple Automation (The Efficiency Engine)
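The deterministic-vs-probabilistic contrast can be sketched in a few lines of code. This is a hypothetical illustration; the invoice-routing scenario, field names, and weights are invented for this sketch and are not from the episode:

```python
import random

def deterministic_route(invoice):
    # "If-Then" RPA era: a fixed rule, identical output on every run.
    return "escalate" if invoice["amount"] > 10_000 else "auto-approve"

def probabilistic_route(invoice, rng=random):
    # Agentic-era stand-in: the action is sampled from a distribution,
    # so the same input can yield different outputs on different runs,
    # including occasionally a wrong one ("probabilistic risk").
    weight = min(invoice["amount"] / 20_000, 1.0)
    return rng.choices(["escalate", "auto-approve"],
                       weights=[weight, 1 - weight])[0]

invoice = {"amount": 12_000}
print(deterministic_route(invoice))  # always "escalate" for this input
print(probabilistic_route(invoice))  # varies from run to run
```

The design consequence: a deterministic rule can be unit-tested once, while a probabilistic system has to be evaluated statistically and monitored in production, which is where the ROI framing of risk vs. efficacy comes in.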
In this week's mini-episode, we discuss the importance of probabilistic thinking: viewing Jiu-Jitsu as a game of probabilities where your goal is to create (and maintain) the greatest possible chance of success.

Get our Intro to Mechanics audio course, normally $79, FREE:
https://bjjmentalmodels.com/freeintro

⬆️ LEVEL UP with BJJ Mental Models Premium!
The world's LARGEST library of Jiu-Jitsu audio lessons, our complete podcast network, online coaching, and much more! Your first week is free:
https://bjjmentalmodels.com

Need more BJJ Mental Models?
Get the legendary BJJMM newsletter:
https://bjjmentalmodels.com/newsletter

Learn more mental models in our online database:
https://bjjmentalmodels.com/database

Follow us on social:
https://instagram.com/bjjmentalmodels
https://threads.com/@bjjmentalmodels
https://bjjmentalmodels.bsky.social
https://youtube.com/@bjjmentalmodels

⚠️ NEW course from BJJ Mental Models!
MINDSET FOR BETAS, our new Jiu-Jitsu audio course with Rob Biernacki, is now available on BJJ Mental Models Premium! For a limited time, get your first month FREE at:
https://bjjmentalmodels.com/beta
The Deep Wealth Podcast - Extracting Your Business And Personal Deep Wealth
Send us a text

Unlock Proven Strategies for a Lucrative Business Exit—Subscribe to The Deep Wealth Podcast Today

Have Questions About Growing Profits And Maximizing Your Business Exit? Submit Them Here, and We'll Answer Them on the Podcast!

“Resilience trumps resources, all day, every day.” - Jeffrey Feldberg

Exclusive Insights from This Week's Episode
In this powerhouse solo episode, 9-figure founder Jeffrey Feldberg pulls back the curtain on silent killers zapping your bottom line, like ignored customer needs and rigid resource traps. You'll walk away with three battle-tested multipliers to skyrocket loyalty, efficiency, and bold growth – no massive overhauls required.

00:01 The hidden profit drain most entrepreneurs miss (and why “revenue spikes” lie)
00:05 The 3 multipliers from Step 2 “X-Factors”—why competitors can't copy them
00:10 Action step: Run a 3-touchpoint Customer Experience Audit (and score it 1–10)
00:11 Micro-personalization: 15% revenue lift with tiny, targeted changes
00:14 Dynamic resource allocation: stop “set-and-forget” budgets draining 30% of profits
00:20 Cross-training playbook: “Embanet Recruits” for seasonal load balancing
00:23 Probabilistic planning vs. deterministic planning—plan for uncertainty
00:34 Recap: three multipliers, zero fluff—small shifts, big profit

Click here for full show notes, transcript, and resources:
https://podcast.deepwealth.com/490

Essential Resources to Maximize Your Business Exit
Learn More About Deep Wealth Mastery
FREE Deep Wealth eBook on Why You Suck At Selling Your Business And What You Ca
Unlock Your Lucrative Exit and Secure Your Legacy
In this week's episode, host Daniel Raimi talks with Arvind Ravikumar, an assistant professor at the University of Texas at Austin, about recent federal deregulation of methane emissions in the United States; specifically, the effects on methane emissions from the production of natural gas and liquefied natural gas. Ravikumar highlights some of his recent research, which explores how all steps in the supply chain of natural gas can affect emissions intensity—including transportation of the energy source to end users—and the variation in methane emissions across countries from their natural gas supply chains. References and recommendations: “Tracking U.S. Liquefied Natural Gas Supply Chain Greenhouse Gas Emissions Intensity through Direct Measurements” by Yuanrui Zhu, Greg Ross, Jenna Brown, Olga Khaliukova, William Daniels, Jiayang (Lyra) Wang, Selina Roman-White, Fiji George, Daniel Zimmerle, Dorit Hammerling, and Arvind Ravikumar; https://chemrxiv.org/engage/chemrxiv/article-details/6882ca69fc5f0acb52e159e3 “Probabilistic, Measurement-Informed Greenhouse Gas Emissions from Global Liquefied Natural Gas Supply Chains Reveal Wide Country-Level Variation” by Haoming Ma, Yuanrui Zhu, Wennan Long, Mohammad Masnadi, Garvin Heath, Paul Balcombe, Fiji George, Selina Roman-White, and Arvind Ravikumar; https://chemrxiv.org/engage/chemrxiv/article-details/6883b68723be8e43d6fdcf73 “AI as Normal Technology” by Arvind Narayanan and Sayash Kapoor; https://knightcolumbia.org/content/ai-as-normal-technology
The Crisis We're Not Talking About

We're living through the greatest thinking crisis in human history—and most people don't even realize it's happening. Right now, AI generates your answers before you've finished asking the question. Search engines remember everything so you don't have to. Algorithms curate your reality, telling you what to think before you've had the chance to think for yourself. We've built the most sophisticated cognitive tools humanity has ever known, and in doing so, we've systematically dismantled our ability to use our own minds.

A recent MIT study found that students who exclusively used ChatGPT to write essays showed weaker brain connectivity, lower memory retention, and a fading sense of ownership over their work. Even more alarming? When they stopped using AI tools later, the cognitive effects lingered. Their brains had gotten lazy, and the damage wasn't temporary.

This isn't about technology being bad. This is about survival. In a world where machines can think faster than we can, the ability to think clearly—to reason, analyze, question, and decide—has become the most valuable skill you can possess. Those who can think will thrive. Those who can't will be left behind.

The Scope of Cognitive Collapse

Let's be clear about what we're facing. Multiple studies across 2024 and 2025 have found a significant negative correlation between frequent AI tool usage and critical thinking abilities. We're not talking about a slight dip in performance. We're talking about measurable cognitive decline. A Swiss study showed that more frequent AI use led to cognitive decline as users offloaded critical thinking to machines, with younger participants aged 17-25 showing higher dependence on AI tools and lower critical thinking scores compared to older age groups. Think about that. The generation that should be developing the sharpest minds is instead experiencing the steepest cognitive erosion. The data gets worse.
Researchers from Microsoft and Carnegie Mellon University found that the more users trusted AI-generated outputs, the less cognitive effort they applied—confidence in AI correlates with diminished analytical engagement. We're outsourcing our thinking, and in the process, we're forgetting how to think at all.

But AI dependency is only part of the story. Our entire information ecosystem has become hostile to independent thought. Social media algorithms create filter bubbles that curate content aligned with your existing views. Users online tend to prefer information adhering to their worldviews, ignore dissenting information, and form polarized groups around shared narratives—and when polarization is high, misinformation quickly proliferates. You're not thinking anymore. You're being fed a carefully constructed reality designed to keep you engaged, not informed. The algorithm knows what you'll click on, what will make you angry, and what will keep you scrolling. And every time you accept that curated reality without question, your capacity for independent thought atrophies a little more.

What Happened to Education?

Here's where it gets personal. Schools used to teach you HOW to think. Now they teach you WHAT to think—and there's a massive difference. Research from Harvard professional schools found that while more than half of faculty surveyed said they explicitly taught critical thinking in their courses, students reported that critical thinking was primarily being taught implicitly. Translation? Professors think they're teaching thinking skills, but students aren't actually learning them. Students were generally unable to recall or define key terms like metacognition and cognitive biases. The problem runs deeper than higher education.
Teachers struggle with balancing the demands of covering vast amounts of content with the need for in-depth learning experiences, and there's a misconception that critical thinking is an innate ability that develops naturally over time. But research shows the opposite: critical thinking skills can be explicitly taught and developed through deliberate practice. So why aren't we doing it? Because education systems reward compliance and memorization, not inquiry and analysis. Students learn to regurgitate information for tests, not to question assumptions or evaluate evidence. They're taught to accept authority, not challenge it. To consume information, not interrogate it. We've created generations of people who are educated but can't think. Who have degrees but lack discernment. Who can Google anything but can't reason through problems on their own.

The Cost of Mental Outsourcing

Let's talk about what you're actually losing when you stop thinking for yourself.

First, you lose agency. When you can't analyze information independently, you become dependent on whoever controls the information flow. Political leaders, social media influencers, corporations, algorithms—they all shape your reality, and you don't even realize it's happening. 73% of Democrats and Republicans can't even agree on basic facts. Not opinions. Facts. That's what happens when thinking skills collapse—you can't distinguish between what's true and what you want to be true.

Second, you lose adaptability. Repeated use of AI tools creates cognitive debt that reduces long-term learning performance in independent thinking and can lead to diminished critical inquiry, increased vulnerability to manipulation, and decreased creativity. In a rapidly changing world, the inability to think flexibly and adapt to new information is a death sentence for your career, your relationships, and your relevance.

Third, you lose connection—to your work, your decisions, your life.
83% of students who used ChatGPT exclusively couldn't recall key points in their essays, and none could provide accurate quotes from their own papers. When you outsource thinking, you forfeit ownership. Your work stops being yours. Your ideas stop being original. You become a conduit for someone else's thinking, not a generator of your own. Research shows that partisan echo chambers increase both policy and affective polarization compared to mixed discussion groups. You're not just losing the ability to think—you're losing the ability to connect with people who think differently. You're trapped in a bubble where everyone agrees with you, which feels comfortable but leaves you intellectually brittle and socially isolated.

The societal cost? We're becoming ungovernable. When people can't think critically, they can't solve complex problems. They can't compromise. They can't distinguish between legitimate disagreement and malicious manipulation. Democracy requires citizens who can reason, debate, and arrive at informed conclusions. Without thinking skills, democratic institutions collapse into tribal warfare where the loudest voices win, not the most rational ones.

Why This Moment Demands Action

Here's what makes this crisis urgent: we're at an inflection point. Researchers have identified a tipping point beyond which the process of polarization speeds up as the forces driving it are compounded and forces mitigating polarization are overwhelmed. Some political groups may have already passed this critical threshold. Once you cross that line, reversing cognitive decline becomes exponentially harder.

Think about what's coming. AI is getting smarter, faster, and more persuasive. Deepfakes and AI-manipulated media are becoming increasingly sophisticated and harder to detect. Whether or not they've already influenced major events, the capability exists—and your ability to evaluate what's real becomes more critical every day.
Social media platforms are optimizing for engagement, not truth. Educational systems are struggling to adapt. The information environment is becoming more hostile to critical thinking every single day. If you don't develop thinking skills now—if you don't reclaim your capacity for independent thought—you'll be swept along by forces you can't see and can't resist. You'll believe what you're told to believe. Buy what you're told to buy. Vote how you're told to vote. And you won't even realize you've lost the ability to choose.

But here's the truth they don't want you to know: thinking skills can be learned. They can be developed. They can be strengthened through deliberate practice. You're not doomed to cognitive passivity. You can take back control of your mind.

What Becomes Possible

Imagine waking up every morning with the confidence that you can evaluate any information that comes your way. No more anxiety about whether you're being manipulated. No more second-guessing your decisions because you don't trust your own judgment. No more feeling like everyone else knows something you don't.

When you master thinking skills, you become intellectually self-sufficient. You can spot logical fallacies in arguments. You can identify bias in news sources. You can separate correlation from causation. You can ask the right questions instead of accepting convenient answers. You can hold two competing ideas in your mind and evaluate them fairly without your ego getting in the way.

You become harder to fool and impossible to control. Political propaganda bounces off you because you can see through emotional manipulation. Marketing tactics lose their power because you understand psychological triggers. Social media algorithms can't trap you in echo chambers because you actively seek out diverse perspectives and challenge your own assumptions. Your relationships improve because you can actually listen to people who disagree with you without feeling threatened.
Your career accelerates because you can solve problems others can't see. Your decisions get better because you're working from logic and evidence, not fear and instinct. Research shows that innovative teaching methods like problem-based learning and interactive instruction significantly boost academic performance and cultivate critical thinking skills. These aren't just abstract benefits—they translate into real-world outcomes. Better grades. Better jobs. Better lives. Most importantly, you reclaim your autonomy. You stop being a passive consumer of information and become an active creator of understanding. Your thoughts become truly your own again. Your beliefs are chosen, not imposed. Your worldview is constructed through rigorous analysis, not algorithmic manipulation. The Path Forward This episode is the beginning of a journey. Over the coming weeks, we'll break down the specific thinking skills you need to master: logical reasoning, argument analysis, decision-making frameworks, cognitive bias recognition, and information evaluation. Each episode will give you concrete tools you can use immediately. But before we get to the tactics, you need to understand why this matters. Why thinking skills aren't just nice to have—they're essential for survival in the modern world. Why the ability to think clearly is the ultimate competitive advantage. The thinking crisis is real. It's measurable. It's accelerating. But it's not inevitable. You have a choice right now. You can keep outsourcing your thinking to machines and algorithms, accepting a future where your mind grows weaker with each passing year. Or you can decide that your ability to think—to reason, to analyze, to question, to decide—is too valuable to surrender. The world needs people who can think. Your community needs people who can think. You need to be able to think. Not because it makes you smarter than everyone else, but because it makes you free. This is your invitation to reclaim your mind. 
Everything that follows will show you how. But first, you needed to see what's at stake. Welcome to Thinking 101. Let's rebuild the most important skill you'll ever develop. Over the next eight weeks, we're building your thinking toolkit from the ground up. Logical reasoning. Causal thinking. Probabilistic judgment. Mental models that let you see what others miss. Each episode drops a specific skill you can use immediately—not theory, but weapons-grade thinking tools for the real world. Links to each episode will appear in the description as they're released, and you can find the full playlist on our channel. Subscribe now and hit the notification bell so you don't miss a single one. Because here's the truth: these skills compound. Miss one, and you're building on a shaky foundation. Watch them all, and you'll think circles around the competition. If you found this valuable, hit that like button—it helps more people discover this series. Drop a comment below: What's one thinking skill you wish you'd learned earlier? I read every single one. And if you want to go deeper, I write Studio Notes on Substack every Monday where I share the personal stories behind what I'm teaching here—the hard-won lessons, the mistakes that taught me why these skills matter, and what it actually looks like to rebuild your thinking from the ground up. The link is in the description. This week's post examines the education system's failure to teach students how to think. You can find it here - https://philmckinney.substack.com/p/the-worlds-best-test-takers The crisis is real. The solution is here. Let's get to work.
In this lecture, we review basic probability fundamentals (measure spaces, probability measures, random variables, probability density functions, probability mass functions, cumulative distribution functions, moments, mean/expected value/center of mass, standard deviation, variance), and then we start to build a vocabulary of different probabilistic models that are used in different modeling contexts. These include uniform, triangular, normal, exponential, Erlang-k, Weibull, and Poisson variables. We will finish the discussion next time with the Bernoulli-based discrete variables and Poisson processes.
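As a concrete companion to the lecture description, here is a minimal sketch (an illustration, not course material; all parameter values are chosen arbitrarily) that draws samples from several of the distributions named above using only Python's standard library, then estimates the first two moments empirically:

```python
import math
import random
from statistics import mean, stdev

random.seed(42)
N = 50_000

def poisson(lam: float) -> int:
    # Knuth's method: multiply uniforms until the product drops below e^(-lam).
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# One sample list per distribution covered in the lecture (the Bernoulli-based
# discrete variables are deferred to next time, as in the lecture itself).
draws = {
    "uniform(0,1)": [random.uniform(0.0, 1.0) for _ in range(N)],
    "triangular(0,1,mode=0.5)": [random.triangular(0.0, 1.0, 0.5) for _ in range(N)],
    "normal(0,1)": [random.gauss(0.0, 1.0) for _ in range(N)],
    "exponential(l=1)": [random.expovariate(1.0) for _ in range(N)],
    "erlang(k=3, l=1)": [random.gammavariate(3, 1.0) for _ in range(N)],  # Erlang-k = Gamma with integer shape k
    "weibull(scale=1, shape=1.5)": [random.weibullvariate(1.0, 1.5) for _ in range(N)],
    "poisson(lam=4)": [poisson(4.0) for _ in range(N)],
}

# The empirical mean and standard deviation estimate the first two moments.
for name, xs in draws.items():
    print(f"{name:28s} mean={mean(xs):7.3f}  std={stdev(xs):7.3f}")
```

For example, the Erlang-3 samples should show an empirical mean near 3 (shape times scale), and the Poisson samples a mean and variance both near 4, matching the closed-form moments discussed in the lecture.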
"I've tried everything, but no matter what I do, the person refuses to listen to me." We've all been in a situation where we try, in vain, to get a message across to someone. Where, after several attempts, we give up, convincing ourselves it's a lost cause and that our listener is simply closed-minded. But what if the key to convincing them were not what you say, but how you say it? More precisely, what if the order in which you say things were the key to convincing them? In this episode of L'art du mentaliste, we explore how framing a piece of information optimizes the chances that it will be properly heard by your listener. A short, concrete, and powerful episode! References: - Bandler, Richard, John Grinder, and Steve Andreas. "Neuro-linguistic programming™ and the transformation of meaning." Utah: Real People (1982). - https://onlinelibrary.wiley.com/doi/10.1111/j.1756-8765.2009.01020.x - DeLong KA, Urbach TP, Kutas M. Probabilistic word pre-activation during language comprehension inferred from electrical brain activity. Nat Neurosci. 2005 Aug;8(8):1117-21. doi: 10.1038/nn1504. Epub 2005 Jul 10. PMID: 16007080. *L'art du mentaliste, a podcast hosted by Taha Mansour and Alexis Dieux, music by Antoine Piolé. Find Taha Mansour: - His website: www.tahamansour.com - Instagram / Facebook: @TahaMentalisme Find Alexis Dieux: - His website: https://www.alexisdieux.com/ - Instagram: @alexisdieuxhypnose
Today's show:Jason and Alex are joined by Nuro's Dave Ferguson to unpack the Lucid–Uber–Nuro deal aiming to launch fully driverless Gravity SUVs on Uber by 2026. They dig into why off-the-shelf sensors and Nvidia's Thor chip drive costs down, how mapping 150 cities improves safety, and why lidar still matters for night and edge cases. Plus: lessons from Cruise's collapse, how regulators think about safety multiples, and why autonomy could finally make rides cheaper than owning a car.Timestamps:(00:00) Show intro…(06:23) Uber x Lucid x Nuro: why this 2026 robotaxi deal matters(09:57) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist(11:13) What the self-driving “cost stack” really looks like in 2025(14:08) Road-mapping at scale: Nuro's 150-city data collection push(15:38) Can AVs handle snow & rain? Weather rollout and Uber Black positioning(20:05) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!(21:07) Show Continues…(24:24) Probabilistic vs. deterministic driving models — and the DARPA roots(30:08) Stripe Startups - Stripe Startups offers early-stage, venture-backed startups access to Stripe fee credits and more. 
Apply today on stripe.com/startups.(31:08) Show Continues…(35:38) Post-Cruise reality check: transparency, regulators & trustSubscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.comCheck out the TWIST500: https://www.twist500.comSubscribe to This Week in Startups on Apple: https://rb.gy/v19fcpFollow Lon:X: https://x.com/lonsFollow Alex:X: https://x.com/alexLinkedIn: https://www.linkedin.com/in/alexwilhelmFollow Jason:X: https://twitter.com/JasonLinkedIn: https://www.linkedin.com/in/jasoncalacanisThank you to our partners:(09:57) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist(20:05) Northwest Registered Agent - Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit https://www.northwestregisteredagent.com/twist today!(30:08) Stripe Startups - Stripe Startups offers early-stage, venture-backed startups access to Stripe fee credits and more. Apply today on stripe.com/startups.Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarlandCheck out Jason's suite of newsletters: https://substack.com/@calacanisFollow TWiST:Twitter: https://twitter.com/TWiStartupsYouTube: https://www.youtube.com/thisweekinInstagram: https://www.instagram.com/thisweekinstartupsTikTok: https://www.tiktok.com/@thisweekinstartupsSubstack: https://twistartups.substack.comSubscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released. Xaq Pitkow runs the Lab for the Algorithmic Brain at Carnegie Mellon University. The main theme of our discussion is how Xaq approaches his research into cognition by way of principles, from which his questions and models and methods spring forth. We discuss those principles and, in that light, some of his specific lines of work and ideas on the theoretical side of trying to understand and explain a slew of cognitive processes. A few of the specifics we discuss are: How, when we present tasks for organisms to solve, they use strategies that are suboptimal relative to the task, but nearly optimal relative to their beliefs about what they need to do - something Xaq calls inverse rational control. Probabilistic graph networks. How brains use probabilities to compute. A new ecological neuroscience project Xaq has started with multiple collaborators. LAB: Lab for the Algorithmic Brain. Related papers: How does the brain compute with probabilities? Rational thoughts in neural codes. Control when confidence is costly. Generalization of graph network inferences in higher-order graphical models. Attention when you need. 0:00 - Intro 3:57 - Xaq's approach 8:28 - Inverse rational control 19:19 - Space of input-output functions 24:48 - Cognition for cognition 27:35 - Theory vs. experiment 40:32 - How does the brain compute with probabilities?
1:03:57 - Normative vs kludge 1:07:44 - Ecological neuroscience 1:20:47 - Representations 1:29:34 - Current projects 1:36:04 - Need a synaptome 1:42:20 - Across scales
In this episode of Crazy Wisdom, Stewart Alsop sits down with Derek Osgood, CEO of DoubleO.ai, to talk about the challenges and opportunities of building with AI agents. The conversation ranges from the shift from deterministic to probabilistic processes, to how humans and LLMs think differently, to why lateral thinking, humor, and creative downtime matter for true intelligence. They also explore the future of knowledge work, the role of context engineering and memory in making agents useful, and the culture of talent, credentials, and hidden gems in Silicon Valley. You can check out Derek's work at doubleo.ai or connect with him on LinkedIn.Check out this GPT we trained on the conversationTimestamps00:00 Derek Osgood explains what AI agents are, the challenge of reliability and repeatability, and the difference between chat-based and process-based agents.05:00 Conversation shifts to probabilistic vs deterministic systems, with examples of agents handling messy data like LinkedIn profiles.10:00 Stewart Alsop and Derek discuss how humans reason compared to LLMs, token vs word prediction, and how language shapes action.15:00 They question whether chat interfaces are the right UX for AI, weighing structure, consistency, and the persistence of buttons in knowledge work.20:00 Voice interaction comes up, its sci-fi allure, and why unstructured speech makes it hard without stronger memory and higher-level reasoning.25:00 Derek unpacks OpenAI's approach to memory as active context retrieval, context engineering, and why vector databases aren't the full answer.30:00 They examine talent wars in AI, credentialism, signaling, and the difference between PhD-level model work and product design for agents.35:00 Leisure and creativity surface, linking downtime, fantasy, and imagination to better lateral thinking in knowledge work.40:00 Discussion of asynchronous AI reasoning, longer time horizons, and why extending “thinking time” could change agent behavior.45:00 Derek shares how 
Double O orchestrates knowledge work with natural language workflows, making agents act like teammates.50:00 They close with reflections on re-skilling, learning to work with LLMs, BS detection, and the future of critical thinking with AI.Key InsightsOne of the biggest challenges in building AI agents is not just creating them but ensuring their reliability, accuracy, and repeatability. It's easy to build a demo, but the “last mile” of making an agent perform consistently in the messy, unstructured real world is where the hard problems live.The shift from deterministic software to probabilistic agents reflects the complexity of real-world data and processes. Deterministic systems work only when inputs and outputs are cleanly defined, whereas agents can handle ambiguity, search for missing context, and adapt to different forms of information.Humans and LLMs share similarities in reasoning—both operate like predictive engines—but the difference lies in agency and lateral thinking. Humans can proactively choose what to do without direction and make wild connections across unrelated experiences, something current LLMs still struggle to replicate.Chat interfaces may not be the long-term solution for interacting with AI. While chat offers flexibility, it is too unstructured for many use cases. Derek argues for a hybrid model where structured UI/UX supports repeatable workflows, while chat remains useful as one tool within a broader system.Voice interaction carries promise but faces obstacles. The unstructured nature of spoken input makes it difficult for agents to act reliably without stronger memory, better context retrieval, and a more abstract understanding of goals. True voice-first systems may require progress toward AGI.Much of the magic in AI comes not from the models themselves but from context engineering. 
Effective systems don't just rely on vector databases and embeddings—they combine full context, partial context, and memory retrieval to create a more holistic understanding of user goals and history.Beyond the technical, the episode highlights cultural themes: credentialism, hidden talent, and the role of leisure in creativity. Derek critiques Silicon Valley's obsession with credentials and signaling, noting that true innovation often comes from hidden gem hires and from giving the brain downtime to make unexpected lateral connections that drive creative breakthroughs.
What are the hidden dangers lurking beneath the surface of vibe-coded apps and hyped-up CEO promises? And what is Influence Ops? I'm joined by Susanna Cox (Disesdi), an AI security architect, researcher, and red teamer who has been working at the intersection of AI and security for over a decade. She provides a masterclass on the current state of AI security, from explaining the "color teams" (red, blue, purple) to breaking down the fundamental vulnerabilities that make GenAI so risky. We dive into the recent wave of AI-driven disasters, from the Tea dating app that exposed its users' sensitive data to the massive Catholic Health breach. We also discuss why the trend of blindly vibe coding is an irresponsible and unethical shortcut that will create endless liabilities in the near term. Susanna also shares her perspective on AI policy, the myth of separating "responsible" from "secure" AI, and the one threat that truly keeps her up at night: the terrifying potential of weaponized, globally scaled Influence Ops to manipulate public opinion and democracy itself. Find Disesdi Susanna Cox: Substack: https://disesdi.substack.com/ Socials (LinkedIn, X, etc.): @Disesdi KEY MOMENTS:00:26 - Who is Disesdi Susanna Cox?03:52 - What are Red, Blue, and Purple Teams in Security?07:29 - Probabilistic vs. Deterministic Thinking: Why Data & Security Teams Clash12:32 - How GenAI Security is Different (and Worse) than Classical ML14:39 - Recent AI Disasters: Catholic Health, Agent Smith & the "T" Dating App18:34 - The Unethical Problem with "Vibe Coding"24:32 - "Vibe Companies": The Gaslighting from CEOs About AI30:51 - Why "Responsible AI" and "Secure AI" Are the Same Thing33:13 - Deconstructing the "Woke AI" Panic44:39 - What Keeps an AI Security Expert Up at Night? Influence Ops52:30 - The Vacuous, Haiku-Style Hellscape of LinkedIn
In this episode of Deciphered, Mike Cashman, partner, Bain & Company is joined by Alex Robinson, CEO, Juniper Square to discuss AI's growing impact on fund services.Timestamps:03:52 Digital interfaces for investor experience in private markets13:54 Automation in repetitive fund administration tasks18:58 Probabilistic vs deterministic models in AI applications27:19 Future industry structure of fund administration services34:20 Passporting KYC identities across funds and platforms39:04 Protocol-based solutions for universal KYC in private markets39:32 How to learn more about Juniper Square's innovationsPlease subscribe to the show so you never miss an episode, and leave us a review if you enjoy the show!You can find Mike Cashman hereYou can find Alex Robinson hereFor more insights from the Deciphered podcast, visit the page on Bain's website
Dr. Maxwell Ramstead grills Guillaume Verdon (AKA “Beff Jezos”) who's the founder of Thermodynamic computing startup Extropic.Guillaume shares his unique path – from dreaming about space travel as a kid to becoming a physicist, then working on quantum computing at Google, to developing a radically new form of computing hardware for machine learning. He explains how he hit roadblocks with traditional physics and computing, leading him to start his company – building "thermodynamic computers." These are based on a new design for super-efficient chips that use the natural chaos of electrons (think noise and heat) to power AI tasks, which promises to speed up AND lower the costs of modern probabilistic techniques like sampling. He is driven by the pursuit of building computers that work more like your brain, which (by the way) runs on a banana and a glass of water! Guillaume talks about his alter ego, Beff Jezos, and the "Effective Accelerationism" (e/acc) movement that he initiated. Its objective is to speed up tech progress in order to “grow civilization” (as measured by energy use and innovation), rather than “slowing down out of fear”. Guillaume argues we need to embrace variance, exploration, and optimism to avoid getting stuck or outpaced by competitors like China. He and Maxwell discuss big ideas like merging humans with AI, decentralizing intelligence, and why boundless growth (with smart constraints) is “key to humanity's future”.REFS:1. John Archibald Wheeler - "It From Bit" Concept00:04:45 - Foundational work proposing that physical reality emerges from information at the quantum levelLearn more: https://cqi.inf.usi.ch/qic/wheeler.pdf 2. AdS/CFT Correspondence (Holographic Principle)00:05:15 - Theoretical physics duality connecting quantum gravity in Anti-de Sitter space with conformal field theoryhttps://en.wikipedia.org/wiki/Holographic_principle 3. 
Renormalization Group Theory00:06:15 - Mathematical framework for analyzing physical systems across different length scales https://www.damtp.cam.ac.uk/user/dbs26/AQFT/Wilsonchap.pdf 4. Maxwell's Demon and Information Theory00:21:15 - Thought experiment linking information processing to thermodynamics and entropyhttps://plato.stanford.edu/entries/information-entropy/ 5. Landauer's Principle00:29:45 - Fundamental limit establishing minimum energy required for information erasure https://en.wikipedia.org/wiki/Landauer%27s_principle 6. Free Energy Principle and Active Inference01:03:00 - Mathematical framework for understanding self-organizing systems and perception-action loopshttps://www.nature.com/articles/nrn2787 7. Max Tegmark - Information Bottleneck Principle01:07:00 - Connections between information theory and renormalization in machine learninghttps://arxiv.org/abs/1907.07331 8. Fisher's Fundamental Theorem of Natural Selection01:11:45 - Mathematical relationship between genetic variance and evolutionary fitnesshttps://en.wikipedia.org/wiki/Fisher%27s_fundamental_theorem_of_natural_selection 9. Tensor Networks in Quantum Systems00:06:45 - Computational framework for simulating many-body quantum systems https://arxiv.org/abs/1912.10049 10. Quantum Neural Networks00:09:30 - Hybrid quantum-classical models for machine learning applicationshttps://en.wikipedia.org/wiki/Quantum_neural_network 11. Energy-Based Models (EBMs)00:40:00 - Probabilistic framework for unsupervised learning based on energy functionshttps://www.researchgate.net/publication/200744586_A_tutorial_on_energy-based_learning 12. Markov Chain Monte Carlo (MCMC)00:20:00 - Sampling algorithm fundamental to modern AI and statistical physics https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo 13. 
Metropolis-Hastings Algorithm00:23:00 - Core sampling method for probability distributionshttps://arxiv.org/abs/1504.01896 ***SPONSOR MESSAGE***Google Gemini 2.5 Flash is a state-of-the-art language model in the Gemini app. Sign up at https://gemini.google.com
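Since the reference list above points at MCMC and the Metropolis-Hastings algorithm (items 12 and 13), here is a minimal, self-contained sketch of that algorithm in Python. It is a toy illustration of the kind of probabilistic sampling workload discussed in the episode, not Extropic's implementation; the target density and step size are arbitrary choices:

```python
import math
import random

random.seed(0)

# Target: a standard normal, known only up to a constant (log p(x) = -x^2/2 + C).
# Metropolis-Hastings never needs the normalizing constant.
def log_target(x: float) -> float:
    return -0.5 * x * x

def metropolis_hastings(steps: int, step_size: float = 1.0) -> list:
    x, chain = 0.0, []
    for _ in range(steps):
        proposal = x + random.gauss(0.0, step_size)  # symmetric random-walk proposal
        # Accept with probability min(1, p(proposal) / p(x)), computed in log space.
        # 1 - random() lies in (0, 1], so the log is always finite.
        if math.log(1.0 - random.random()) < log_target(proposal) - log_target(x):
            x = proposal
        chain.append(x)
    return chain

chain = metropolis_hastings(50_000)
m = sum(chain) / len(chain)
v = sum((c - m) ** 2 for c in chain) / len(chain)
print(f"empirical mean ≈ {m:.3f}, variance ≈ {v:.3f}")
```

The chain's empirical mean and variance should approach 0 and 1, the moments of the target; physics-based samplers of the kind Guillaume describes aim to produce draws like these directly in hardware rather than step by step in software.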
Today, I'm chatting with Stuart Winter-Tear about AI product management. We're getting into the nitty-gritty of what it takes to build and launch LLM-powered products for the commercial market that actually produce value. Among other things in this rich conversation, Stuart surprised me with the level of importance he believes UX has in making LLM-powered products successful, even for technical audiences. After spending significant time on the forefront of AI's breakthroughs, Stuart believes many of the products we're seeing today are the result of FOMO above all else. He shares a belief that I've emphasized time and time again on the podcast–product is about the problem, not the solution. This design philosophy has informed Stuart's 20-plus-year career, and it is pivotal to understanding how to best use AI to build products that meet users' needs. Highlights/ Skip to Why Stuart was asked to speak to the House of Lords about AI (2:04) The LLM-powered products Stuart has been building recently (4:20) Finding product-market fit with AI products (7:44) Lessons Stuart has learned over the past two years working with LLM-powered products (10:54) Figuring out how to build user trust in your AI products (14:40) The differences between being a digital product manager vs. AI product manager (18:13) Who is best suited for an AI product management role (25:42) Why Stuart thinks user experience matters greatly with AI products (32:18) The formula needed to create a business-viable AI product (38:22) Stuart describes the skills and roles he thinks are essential in an AI product team and who he brings on first (50:53) Conversations that need to be had with academics and data scientists when building AI-powered products (54:04) Final thoughts from Stuart and where you can find more from him (58:07) Quotes from Today's Episode “I think that the core dream with GenAI is getting data out of IT hands and back to the business. 
Finding a way to overlay all this disparate, unstructured data and [translate it] to the human language is revolutionary. We're finding industries that you would think were more conservative (i.e. medical, legal, etc.) are probably the most interested because of the large volumes of unstructured data they have to deal with. People wouldn't expect large language models to be used for fact-checking… they're actually very powerful, especially if you can have your own proprietary data or pipelines. Same with security–although large language models introduce a terrifying amount of security problems, they can also be used in reverse to augment security. There's a lovely contradiction with this technology that I do enjoy.” - Stuart Winter-Tear (5:58) “[LLM-powered products] gave me the wow factor, and I think that's part of what's caused the problem. If we focus on technology, we build more technology, but if we focus on business and customers, we're probably going to end up with more business and customers. This is why we end up with so many products that are effectively solutions in search of problems. We're in this rush and [these products] are [based on] FOMO. We're leaving behind what we understood about [building] products—as if [an LLM-powered product] is a special piece of technology. It's not. It's another piece of technology. [Designers] should look at this technology from the prism of the business and from the prism of the problem. We love to solutionize, but is the problem the problem? What's the context of the problem? What's the problem under the problem? Is this problem worth solving, and is GenAI a desirable way to solve it? We're putting the cart before the horse.” - Stuart Winter-Tear (11:11) “[LLM-powered products] feel most amazing when you're not a domain expert in whatever you're using it for. I'll give you an example: I'm terrible at coding. When I got my hands on Cursor, I felt like a superhero. It was unbelievable what I could build. 
Although [LLM products] look most amazing in the hands of non-experts, it's actually most powerful in the hands of experts who do understand the domain they're using this technology. Perhaps I want to do a product strategy, so I ask [the product] for some assistance, and it can get me 70% of the way there. [LLM products] are great as a jumping off point… but ultimately [they are] only powerful because I have certain domain expertise.” - Stuart Winter-Tear (13:01) “We're so used to the digital paradigm. The deterministic nature of you put in X, you get out Y; it's the same every time. Probabilistic changes every time. There is a huge difference between what results you might be getting in the lab compared to what happens in the real world. You effectively find yourself building [AI products] live, and in order to do that, you need good communities and good feedback available to you. You need these fast feedback loops. From a pure product management perspective, we used to just have the [engineering] timeline… Now, we have [the data research timeline]. If you're dealing with cutting-edge products, you've got these two timelines that you're trying to put together, and the data research one is very unpredictable. It's the nature of research. We don't necessarily know when we're going to get to where we want to be.” - Stuart Winter-Tear (22:25) “I believe that UX will become the #1 priority for large language model products. I firmly believe whoever wins in UX will win in this large language model product world. I'm against fully autonomous agents without human intervention for knowledge work. We need that human in the loop. What was the intent of the user? How do we get that right push back from the large language model to understand even the level of the person that they're dealing with? 
These are fundamental UX problems that are going to push UX to the forefront… This is going to be on UX to educate the user, to be able to inject the user in at the right time to be able to make this stuff work. The UX folk who do figure this out are going to create the breakthrough and create the mass adoption.” - Stuart Winter-Tear (33:42)
Mobile attribution is getting better than ever before. And that's in spite of it becoming more and more complex. There are so many measurement methodologies: Advanced SAN. AEM. Advanced AEM. Unified Measurement. SKAN. AAK. Privacy Sandbox. GAID. IDFA. Probabilistic. Modeled. You name it, there's MORE of everything. But in spite of all that, mobile attribution is getting better. Way better. And maybe it's actually BECAUSE of all that. And the great news: it's also getting EASIER. Makes sense? Insane? Impossible? Check out this convo in Growth Masterminds between John Koetsier and Singular CTO Eran Friedman
Mark “Murch” Erhardt and Mike Schmidt are joined by Sindura Saraswathi, Christian Kümmerle, and Stéphan Vuylsteke to discuss Newsletter #345.News● P2P traffic analysis (1:35) ● Research into single-path LN pathfinding (6:45) ● Probabilistic payments using different hash functions as an xor function (21:17) Bitcoin Core PR Review Club● Stricter internal handling of invalid blocks (26:12) Releases and release candidates● Eclair v0.12.0 (37:49) Notable code and documentation changes● Bitcoin Core #31407 (38:52) ● Eclair #3027 (43:22) ● Eclair #3007 (44:17) ● Eclair #2976 (44:57) ● LDK #3608 (47:17) ● LDK #3624 (48:12) ● LDK #3016 (50:28) ● LDK #3629 (52:15) ● BDK #1838 (53:06)
Female VC Lab Show Notes Episode Title and Number: CES2025: What's Next for Smart Cities - AI for Data, Planning, and Beyond Name of Session: What's Next for Smart Cities: AI for Data, Planning and Beyond Date and Time: Tuesday, January 7, 2025 at 3:00 PM Location: Las Vegas Convention Center North - N257 Session Description: The rise of AI-powered Smart Cities is significant. Learn from early adopters who have seen AI capabilities become notable force multipliers in our smartest cities. Full Panel: David Shier, Managing Director, Visionary Future (Moderator) Sheri Bachstein, CEO, The Weather Company Barbara Bickham, Founder & GP, Trailyn VC Nadia Hansen, Global Digital Transformation Executive, Salesforce Prachi Vakharia, ARPA-I: Strategic Advisor for Innovations & Infrastructure, USDOT Episode Summary: In this episode, we dive into the transformative potential of AI in developing smart cities, as discussed in a CES 2025 panel. The conversation covers aspects ranging from extreme weather prediction to citizen-centric services and the ethical implications of AI in urban governance. 
Key Points Discussed: The role of AI in predicting extreme weather and urban design Probabilistic forecasting and its benefits for emergency preparedness The concept of citizen-centric smart cities for enhanced public participation Challenges in data standardization and the importance of public-private partnerships Ethical considerations, including bias and privacy concerns in AI technologies Timecode Guide: 00:00:04 - Introduction to smart cities and AI 00:00:38 - Discussion about extreme weather prediction with AI 00:01:53 - Explanation of probabilistic forecasting 00:03:09 - Concept of citizen-centric smart cities 00:04:47 - Data challenges and public-private partnerships 00:08:09 - Ethics, transparency, and public education on AI 00:11:01 - Challenges with AI model sizes and limited data 00:17:40 - Privacy concerns in smart cities 00:18:45 - Ensuring AI doesn't perpetuate bias 00:20:02 - Discussion on potential job losses and new opportunities 00:21:09 - Misuse of AI by authoritarian governments 00:22:25 - Weather strategy in the era of climate change 00:25:27 - Collaboration and human-centered AI 00:27:45 - Future vision for smart cities Full Topic Guide: The Future of Smart Cities: Harnessing AI for a Better Urban Life Introduction: As we stand on the brink of an AI-driven transformation in urban living, the CES 2025 panel offers an enlightening glimpse into how next-gen technologies are being leveraged to create smarter, more resilient cities. Hosts Daniel and Barbara Bickham review these discussions, stressing the interplay of data, AI, and human insight in shaping the future. AI in Predicting Extreme Weather: One of the most compelling highlights was the role of AI in weather prediction and urban planning. As climate change leads to unpredictable weather patterns, AI provides a way to prepare for unprecedented events. 
AI's capacity to analyze vast amounts of climate data allows for effective "what if" scenario modeling, enabling cities to design infrastructure that can withstand extreme conditions. For example, you might reinforce a seawall today based on AI predictions of future storms. Probabilistic Forecasting: Advancements in probabilistic forecasting were another exciting point discussed. Unlike traditional methods, probabilistic forecasting uses AI to run thousands of scenarios, providing a range of potential outcomes rather than a single prediction. This not only improves forecast accuracy but also aids in emergency preparedness, allowing cities to allocate resources more efficiently and preempt disasters, thereby saving lives and reducing costs. Citizen-Centric Smart Cities: The concept of making cities more user-friendly for residents drew considerable attention. Imagine all city services being available through a single app—reporting potholes, paying taxes, or even participating in local governance through blockchain-enabled voting systems. This not only simplifies interactions with municipal services but also empowers citizens by giving them a direct say in urban management. Data Challenges and Public-Private Partnerships: However, the transition to smart cities is fraught with challenges, particularly in data management. Cities gather massive amounts of data, but these are often trapped in silos, hindering a comprehensive overview. Collaboration through public-private partnerships is vital here. Government bodies possess critical data, while private enterprises offer the technological expertise to process and utilize this information effectively. Examples like the Weather Company's collaborations with NOAA and NVIDIA highlight the potential of these partnerships. Ethical and Transparency Considerations: The panel stressed the ethical implications of integrating AI into city management. Trust and transparency are crucial for citizen buy-in.
There needs to be a concerted effort to educate both government officials and the public on AI. Moreover, it's important to ensure that AI systems are fair and do not perpetuate existing biases. Implementing strong ethical guidelines and having diverse teams develop these technologies can mitigate potential risks. Privacy Concerns: With vast amounts of data being collected, privacy is a significant concern. Clear rules on data collection, use, and access are necessary to prevent misuse. Enabling citizens to control their data and ensuring transparency about how it is used are steps toward this goal. Conclusion: The future of smart cities depends not just on technological advancements but on collaborative, human-centered approaches. As we navigate this transformation, the focus must remain on using technology to enhance, not hinder, urban life. Notable Quotes from the Hosts: "It's about giving people the tools and the knowledge to be active participants in shaping the future of their cities." "We need to be mindful of the potential consequences of AI on jobs, privacy, and equality." Fun Facts or Interesting Tidbits: People were more interested in searching for inmates in jail rather than marriage licenses on the redesigned Clark County website. A project in Massachusetts involved transit maps from the 1970s, highlighting the need to update and digitize old data for modern use. Stay engaged with the conversation on how AI and technology are transforming our cities. Feel inspired? Share your ideas on how we can build a smarter, more inclusive future for our urban environments. Follow us on social media and subscribe to our newsletter for more updates and deep dives into the latest technological trends.
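The probabilistic forecasting described in these notes—running many scenarios and reporting a range of outcomes rather than one number—can be sketched in a few lines. This is an illustrative toy, not the panel's actual models: the 100 mm base rainfall and the ±30% perturbation are invented values standing in for a real ensemble of weather scenarios.

```python
import random

def probabilistic_forecast(base_rainfall_mm, n_scenarios=10_000, seed=0):
    """Run many perturbed scenarios and summarize the spread of outcomes."""
    rng = random.Random(seed)
    outcomes = sorted(base_rainfall_mm * (1 + rng.uniform(-0.3, 0.3))
                      for _ in range(n_scenarios))
    return {"p10": outcomes[int(0.10 * n_scenarios)],
            "median": outcomes[n_scenarios // 2],
            "p90": outcomes[int(0.90 * n_scenarios)]}

# Made-up base value: 100 mm of storm rainfall, perturbed ±30% per scenario.
summary = probabilistic_forecast(100.0)
print(summary)  # a range of plausible outcomes rather than a single number
```

A planner can then prepare for the 90th-percentile case rather than the single most likely one, which is the emergency-preparedness benefit the panel describes.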
In this episode of the podcast, I'm joined by two academics -- Julian Runge from Northwestern University's Medill School and Koen Pauwels from Northeastern University -- for a conversation about the methodological (and stylistic) distinction between marketing experimentation and probabilistic measurement. Julian has previously appeared on the podcast and recently contributed a guest article for Mobile Dev Memo, and he and I have co-authored a number of articles (including this Harvard Business Review piece). Koen runs the Marketing and Metrics blog as well as the Pauwels on Marketing newsletter on LinkedIn. Some of the topics addressed in our discussion include: Experimentation in marketing measurement; The most popular techniques for probabilistic measurement, and how they are implemented; How a firm can integrate experimentation into its marketing measurement efforts; How firms tend to improperly implement Media Mix Modeling; Whether it is possible to measure incrementality for a specific channel, using that channel's tools; How marketers should think about demonstrating the value of their efforts; How the value of brand equity can be measured and integrated into marketing measurement; How a firm should think about experimentation and opportunity cost. Thanks to the sponsors of this week's episode of the Mobile Dev Memo podcast: Vibe. Vibe is the leading Streaming TV ad platform for small and medium-sized businesses looking for actionable advertising campaign performance. INCRMNTAL. True attribution measures incrementality, always on. Interested in sponsoring the Mobile Dev Memo podcast? Contact Marketecture. The Mobile Dev Memo podcast is available on: Apple Podcasts Spotify Google Podcasts
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
My Intuitive Bayes Online Courses
1:1 Mentorship with me
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
Effective data science education requires feedback and rapid iteration.
Building LLM applications presents unique challenges and opportunities.
The software development lifecycle for AI differs from traditional methods.
Collaboration between data scientists and software engineers is crucial.
Hugo's new course focuses on practical applications of LLMs.
Continuous learning is essential in the fast-evolving tech landscape.
Engaging learners through practical exercises enhances education.
POC purgatory refers to the challenges faced in deploying LLM-powered software.
Focusing on first principles can help overcome integration issues in AI.
Aspiring data scientists should prioritize problem-solving over specific tools.
Engagement with different parts of an organization is crucial for data scientists.
Quick paths to value generation can help gain buy-in for data projects.
Multimodal models are an exciting trend in AI development.
Probabilistic programming has potential for future growth in data science.
Continuous learning and curiosity are vital in the evolving field of data science.
Chapters:
09:13 Hugo's Journey in Data Science and Education
14:57 The Appeal of Bayesian Statistics
19:36 Learning and Teaching in Data Science
24:53 Key Ingredients for Effective Data Science Education
28:44 Podcasting Journey and Insights
36:10 Building LLM Applications: Course Overview
42:08 Navigating the Software Development Lifecycle
48:06 Overcoming Proof of Concept Purgatory
55:35 Guidance for Aspiring Data Scientists
01:03:25 Exciting Trends in Data Science and AI
01:10:51 Balancing Multiple Roles in Data Science
01:15:23 Envisioning Accessible Data Science for All
Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim
The neuroscience of intuition & how to hone it as your superpower. Professor Joel Pearson is a neuroscientist who's been studying how the brain processes unconscious information for 25 years. He's on a mission to distill the science of intuition into simple, practical rules that are easy to follow and can improve decision-making. His book is called The Intuition Toolkit. He shares his simple rules with us: how to home in on our intuition, how to make it easier to listen to, & when NOT to listen to it. CONNECT WITH US Connect with That's Helpful on Instagram. Find Joel on Twitter & via his website. BOOK The Intuition Toolkit PODCASTS Intuitive Eating 101 Want to become a podcast sponsor, got some feedback for me or just fancy a chat? Email me - thatshelpful@edstott.com TIMESTAMPS 00:00:00 Intro 00:02:24 Why intuition is Joel's passion 00:03:53 How hard is intuition to study in the lab? 00:04:54 How do you study intuition in the lab? 00:09:18 Why do we feel intuition physically? 00:11:17 Why are certain people more intuitive than others? 00:13:04 Self Awareness 00:16:45 Getting better at using your intuition over time 00:19:30 The difference between addiction & intuition 00:24:17 Intuitive Eating & processed foods 00:32:59 Probabilistic thinking 00:36:09 The importance of environment 00:39:12 When not to use intuition 00:45:00 The one thing to remember when it comes to intuition
My guest today is Michael Garfield, a paleontologist, futurist, writer, podcast host and strategic advisor whose “mind-jazz” performances — essays, music and fine art — bridge the worlds of art, science and philosophy. This year, Michael received a $10k O'Shaughnessy Grant for his “Humans On the Loop” discussion series, which explores the nature of agency, power, responsibility and wisdom in the age of automation. This whirlwind discussion is impossible to sum up in a couple of sentences (just look at the number of books & articles mentioned!) Ultimately, it is a conversation about a subject I think about every day: how we can live curious, collaborative and fulfilling lives in our deeply weird, complex, probabilistic world. I hope you enjoy this conversation as much as I did. For the full transcript, episode takeaways, and bucketloads of other goodies designed to make you go, “Hmm, that's interesting!”, check out our Substack. Important Links: Michael's Website Humans On The Loop Twitter Future Fossils Substack Show Notes: What is “mind jazz”? Humans “ON” the loop? The Red Queen hypothesis and the power of weirdness Probabilistic thinking & the perils of optimization Context collapse, pernicious convenience & coordination at scale How organisations learn Michael as World Emperor MORE! Books, Articles & Podcasts Mentioned: The Nature of Technology: What It Is and How It Evolves; by W. 
Brian Arthur Pharmako-AI; by K Allado-McDowell The Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century; by Howard Bloom The Genius of the Beast: A Radical Re-Vision of Capitalism; by Howard Bloom One Summer: America, 1927; by Bill Bryson Through the Looking-Glass, and What Alice Found There; by Lewis Carroll The Beginning of Infinity: Explanations That Transform the World; by David Deutsch Scale Theory: A Nondisciplinary Inquiry; by Joshua DiCaglio Revenge of the Tipping Point: Overstories, Superspreaders and the Rise of Social Engineering; by Malcolm Gladwell The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous; by Joseph Henrich Do Conversation: There's No Such Thing as Small Talk; by Robert Poynton Reality Hunger: A Manifesto; by David Shields The Time Falling Bodies Take to Light: Mythology, Sexuality and the Origins of Culture; by William Irwin Thompson The New Inquisition: Irrational Rationalism and the Citadel of Science; by Robert Anton Wilson Designing Neural Media; by K Allado-McDowell Pace Layering: How Complex Systems Learn and Keep Learning; by Stewart Brand Losing Humanity: The Case against Killer Robots; by Bonnie Docherty What happens with digital rights management in the real world?; by Cory Doctorow The Evolution of Surveillance Part 1: Burgess Shale to Google Glass; by Michael Garfield An Introduction to Extitutional Theory; by Jessy Kate Schingler 175 - C. Thi Nguyen on The Seductions of Clarity, Weaponized Games, and Agency as Art; Future Fossils with Michael Garfield
In the realm of nutrition science and health, understanding the intricate relationship between various factors and health outcomes is crucial yet challenging. How do we determine whether a specific nutrient genuinely impacts our health, or if the observed effects are merely coincidental? This intriguing question brings us to the core concepts of correlation and causation. You've likely heard the adage “correlation is not causation,” but what does this truly mean in the context of scientific research and public health recommendations? Can a strong association between two variables ever imply a causal relationship, or is it always just a statistical coincidence? These questions are not merely academic; they are pivotal in shaping the guidelines that influence our daily lives. For instance, when studies reveal a link between high sodium intake and hypertension, how do scientists distinguish between a mere correlation and a true causal relationship? Similarly, the debate around LDL cholesterol and cardiovascular disease hinges on understanding whether high cholesterol levels directly cause heart disease, or if other confounding factors are at play. Unraveling these complexities requires a deep dive into the standards of proof and the different models used to assess causality in scientific research. As we delve into these topics, we'll explore how public health recommendations are formed despite the inherent challenges in proving causality. What methods do scientists use to ensure that their findings are robust and reliable? How do they account for the myriad of confounding variables that can skew results? By understanding the nuances of these processes, we can better appreciate the rigorous scientific effort that underpins dietary guidelines and health advisories. Join us on this exploration of correlation, causation, and the standards of proof in nutrition science. 
Through real-world examples and critical discussions, we will illuminate the pathways from observational studies to actionable health recommendations. Are you ready to uncover the mechanisms that bridge the gap between scientific evidence and practical health advice? Let's dive in and discover the fascinating dynamics at play. Timestamps: 01:32 Understanding Correlation and Causation 03:54 Historical Perspectives on Causality 06:33 Causal Models in Health Sciences 14:53 Probabilistic vs. Deterministic Causation 30:52 Standards of Proof in Public Health 36:44 Applying Causal Models in Nutrition Science 58:54 Key Ideas Segment (Premium-only) Links: Enroll in the next cohort of our Applied Nutrition Literacy course Go to episode page Subscribe to Sigma Nutrition Premium Receive our free weekly email: the Sigma Synopsis Related episode: 343 – Understanding Causality in Nutrition Science
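The correlation-versus-causation trap discussed in this episode is easy to demonstrate with simulated data. In this hypothetical sketch, a confounder (overall diet quality) drives both a nutrient's intake and a health outcome; the nutrient has no direct effect at all, yet the two are substantially correlated. All variable names and effect sizes are invented for illustration.

```python
import random

rng = random.Random(42)
n = 10_000

# Hypothetical confounder: overall diet quality drives both the measured
# nutrient intake and the health outcome; the nutrient has no direct effect.
diet_quality = [rng.gauss(0, 1) for _ in range(n)]
nutrient = [d + rng.gauss(0, 1) for d in diet_quality]  # caused by diet quality
outcome = [d + rng.gauss(0, 1) for d in diet_quality]   # also caused by diet quality

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = corr(nutrient, outcome)
print(round(r, 2))  # substantial correlation despite zero direct causal effect
```

An observational study of these two variables alone would see a strong association; only by measuring and adjusting for the confounder (or randomizing the exposure) could it distinguish correlation from causation, which is exactly the methodological point of the episode.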
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Analyzing DeepMind's Probabilistic Methods for Evaluating Agent Capabilities, published by Axel Højmark on July 22, 2024 on The AI Alignment Forum. Produced as part of the MATS Program Summer 2024 Cohort. The project is supervised by Marius Hobbhahn and Jérémy Scheurer Introduction To mitigate risks from future AI systems, we need to assess their capabilities accurately. Ideally, we would have rigorous methods to upper bound the probability of a model having dangerous capabilities, even if these capabilities are not yet present or easily elicited. The paper "Evaluating Frontier Models for Dangerous Capabilities" by Phuong et al. 2024 is a recent contribution to this field from DeepMind. It proposes new methods that aim to estimate, as well as upper-bound the probability of large language models being able to successfully engage in persuasion, deception, cybersecurity, self-proliferation, or self-reasoning. This post presents our initial empirical and theoretical findings on the applicability of these methods. Their proposed methods have several desirable properties. Instead of repeatedly running the entire task end-to-end, the authors introduce milestones. Milestones break down a task and provide estimates of partial progress, which can reduce variance in overall capability assessments. The expert best-of-N method uses expert guidance to elicit rare behaviors and quantifies the expert assistance as a proxy for the model's independent performance on the task. However, we find that relying on milestones tends to underestimate the overall task success probability for most realistic tasks. Additionally, the expert best-of-N method fails to provide values directly correlated with the probability of task success, making its outputs less applicable to real-world scenarios. 
We therefore propose an alternative approach to the expert best-of-N method, which retains its advantages while providing more calibrated results. Except for the end-to-end method, we currently feel that no method presented in this post would allow us to reliably estimate or upper bound the success probability for realistic tasks and thus should not be used for critical decisions. The overarching aim of our MATS project is to uncover agent scaling trends, allowing the AI safety community to better predict the performance of future LLM agents from characteristics such as training compute, scaffolding used for agents, or benchmark results (Ruan et al., 2024). To avoid the issue of seemingly emergent abilities resulting from bad choices of metrics (Schaeffer et al., 2023), this work serves as our initial effort to extract more meaningful information from agentic evaluations. We are interested in receiving feedback and are particularly keen on alternative methods that enable us to reliably assign low-probability estimates (e.g. 1e-7) to a model's success rate on a task. Evaluation Methodology of Phuong et al. The goal of the evaluations we discuss is to estimate the probability of an agent succeeding on a specific task T. Generally, when we refer to an agent, we mean an LLM wrapped in scaffolding that lets it execute shell commands, run code, or browse the web to complete some predetermined task. Formally, the goal is to estimate P(T_s), the probability that the agent solves task T and ends up in the solved state T_s. The naive approach to estimate this is with Monte Carlo sampling: run the task N times and take the fraction of successful runs. The authors call this the end-to-end method. However, the end-to-end method struggles with low-probability events. The expected number of trials needed to observe one success for a task is 1/P(T_s), making naive Monte Carlo sampling impractical for many low-probability, long-horizon tasks. In practice, this could require running multi-hour tasks hundreds of thousands of times.
To address this challenge, Phuong et al. devise three addi...
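The end-to-end method described above is plain Monte Carlo estimation. A minimal sketch, with a toy stand-in for an actual agent attempt (the real evaluations run scaffolded LLMs on long-horizon tasks, not a random draw):

```python
import random

def end_to_end_estimate(run_task, n_trials, seed=0):
    """Naive Monte Carlo estimate of P(T_s): run the full task n_trials
    times and report the fraction of successes."""
    rng = random.Random(seed)
    successes = sum(run_task(rng) for _ in range(n_trials))
    return successes / n_trials

# Toy stand-in for one end-to-end agent attempt; true P(T_s) is 0.02 here.
def toy_agent_run(rng):
    return rng.random() < 0.02

p_hat = end_to_end_estimate(toy_agent_run, n_trials=100_000)
print(p_hat)      # close to the true 0.02
print(1 / p_hat)  # expected trials per success, roughly 1 / P(T_s)
```

The second line of output is exactly why the post argues this method breaks down for rare capabilities: at P(T_s) = 1e-7, observing even one success would take an expected ten million full task runs.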
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Important open problems in voting, published by Closed Limelike Curves on July 2, 2024 on LessWrong. Strategy-resistance Identify, or prove the impossibility of, a voting system that incentivizes: 1. A strictly sincere ranking of all candidates in the zero-information setting, where it implements a "good" social choice rule such as the relative (normalized) utilitarian rule, a Condorcet social choice rule, or the Borda rule. 2. In a Poisson game or similar setting: a unique semi-sincere Nash equilibrium that elects the Condorcet winner (if one exists), similar to those shown for approval voting by Myerson and Weber (1993) and Durand et al. (2019). Properties of Multiwinner Voting Systems There's strikingly little research on multiwinner voting systems. You can find a table of criteria for single-winner systems on Wikipedia, but if you try to find the same for multiwinner systems, there's nothing. Here are 9 important criteria we can judge multiwinner voting systems on: 1. Independence of Irrelevant Alternatives 2. Independence of Universally-Approved Candidates 3. Monotonicity 4. Participation 5. Precinct-summability 6. Polynomial-time approximation scheme 7. Proportionality for solid coalitions 8. Perfect representation in the limit 9. Core-stability (may need to be approximated within a constant factor) I'm curious which combinations of these properties exist. Probabilistic/weighted voting systems are allowed. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
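As a concrete reference point for the Condorcet conditions mentioned above, here is a minimal sketch of finding a Condorcet winner from ranked ballots: the candidate who beats every rival in pairwise majority comparisons. The ballots are hypothetical, invented for illustration.

```python
def condorcet_winner(ballots):
    """Return the candidate who beats every other candidate head-to-head
    under majority rule, or None if no such candidate exists.

    ballots: list of rankings, each a list of candidates,
    most-preferred first.
    """
    candidates = set(ballots[0])
    majority = len(ballots) / 2
    for a in candidates:
        if all(sum(b.index(a) < b.index(c) for b in ballots) > majority
               for c in candidates - {a}):
            return a
    return None

# Hypothetical three-voter election: A beats B 2-1 and beats C 3-0.
ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(condorcet_winner(ballots))  # -> A
```

Note that the function can return None even with sincere voters (the Condorcet paradox of cyclic majorities), which is part of why the strategy-resistance questions posed in the episode are open problems rather than exercises.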
Generative AI Summit Austin - Registration
The future of defense and warfare has always been linked to technological change, from the discovery of fire to the Bronze Age to the Nuclear Age to now the AI Age. My guest today is Charlie Burgoyne, Founder and CEO of both Valkyrie and their newest spinout Andromeda, who joins me to discuss the future of warfare going forward. But we don't stop there as we explore every rabbit hole in search of Truth.
Andromeda synthesizes large volumes of information quickly and accurately, helping users understand the interconnected context to make informed decisions.
Pop culture and media narratives of innovation, from Terminator to Data to Her to Oppenheimer, shape how the public views, embraces, or fears technological waves.
History has shown that industry and defense have always been intertwined, with modern tech entrepreneurs playing roles similar to industrialists of the past in advancing defense capabilities.
Probabilistic thinking is essential in defense decision-making, especially where high levels of computation can easily mask high levels of uncertainty.
What's Next Austin: "I think that we're going to see a big push to support the intelligence community the way we've seen a big push to support the defense community. I think that an equivalent to AFC is coming on the intelligence side."
Accountability in the Age of LLMs at SXSW by Charlie Burgoyne
Valkyrie: Website, Facebook, Instagram
Andromeda: Website
Austin Next Links: Website, X/Twitter, YouTube, LinkedIn
In this episode of the Agile Uprising podcast, host Troy Lightfoot chats with Prateek Singh, author of Scaling Simplified and the new product management course "Accelerating Product Value," about taking your product management career to the next level with probabilistic thinking and flow. Troy's next Product Value class: use code "AgileUprising" for 20% off! The episode is rich with insights, making it a valuable listen for anyone involved in organizational change. Probabilistic vs. Deterministic Thinking in Product Management: The discussion highlighted the importance of probabilistic thinking, where multiple future outcomes are considered, rather than deterministic thinking, which often leads to rigid and potentially inaccurate expectations. Challenges with Traditional Product Prioritization: Traditional methods of product prioritization, such as backlog management, are seen as potentially obsolete the moment they are established due to the dynamic nature of market and development realities. Advantages of Rapid Experimentation: Getting ideas to production swiftly and with minimal initial investment allows for direct testing with customers, providing real feedback and reducing the risk of significant investment in unproven ideas. Financial Impact of Flow and Learning: Faster realization of product value through improved flow can significantly enhance ROI by reducing the costs associated with delays and increasing the effectiveness of learning from the market. The Role of Flow Metrics in Learning Systems: Flow metrics like cycle time and throughput are vital for transforming product development and operations into learning systems, where the speed of learning and adaptation is critical. Concept of Thinking in Bets: The podcast also touched on using the concept of "Thinking in Bets" (from Annie Duke's work) to manage investment in product development. This approach advocates for small, incremental bets to minimize losses while exploring the potential success of new ideas.
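One common way to put this probabilistic thinking into practice with flow metrics (a standard technique in the flow community, not necessarily the exact method from Singh's book or course) is Monte Carlo forecasting from historical throughput: instead of committing to a single deterministic date, resample past weekly throughput and report a range of completion times. The throughput history below is invented for illustration.

```python
import random

def forecast_completion_weeks(weekly_throughput_history, backlog_items,
                              n_simulations=10_000, seed=1):
    """Monte Carlo forecast: repeatedly resample historical weekly
    throughput until the backlog is cleared, then report percentile
    outcomes instead of a single date."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_simulations):
        remaining, weeks = backlog_items, 0
        while remaining > 0:
            remaining -= rng.choice(weekly_throughput_history)
            weeks += 1
        results.append(weeks)
    results.sort()
    return {"p50": results[len(results) // 2],
            "p85": results[int(0.85 * n_simulations)]}

# Invented history: items finished in each of the team's last 8 weeks.
history = [3, 5, 2, 6, 4, 3, 5, 4]
forecast = forecast_completion_weeks(history, backlog_items=40)
print(forecast)  # e.g. "50% chance within N weeks, 85% within M weeks"
```

The output is a set of bets ("85% likely within M weeks") rather than a promise, which is precisely the deterministic-to-probabilistic shift the episode advocates.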
Links: Read More. About the Agile Uprising: If you enjoyed this episode, please give us a review, a rating, or leave comments on iTunes, Stitcher, or your podcasting platform of choice. It really helps others find us. Much thanks to the artist who provided us our outro music free of charge! If you'd like to join the discussion and share your stories, please jump into the fray in our community. We at the Agile Uprising are committed to being totally free. However, if you'd like to contribute and help us defray hosting and production costs, there is a way to support us. Who knows, you might even get some surprises in the mail!
Cate seemed to be fully entrenched on the default path: she had graduated from Yale and become a Supreme Court advocate on her way to making partner at her law firm. But she didn't want to live the lives of the people around her. She pivoted hard (you could almost hear the car brakes squeal as she made the turn), and over a year she became the number one female poker player in the world. She later started art and perfume companies and led operations at Avlea, a pandemic medicine company. One might think she's simply superhuman, and that what she did is beyond the grasp of mere mortals, but Cate claims that agency is learnable, and she joins the podcast to tell us how.
Speakers for AI Engineer World's Fair have been announced! See our Microsoft episode for more info and buy now with code LATENTSPACE — we've been studying the best ML research conferences so we can make the best AI industry conf! Note that this year there are 4 main tracks per day and dozens of workshops/expo sessions; the free livestream will air much less than half of the content this time. Apply for free/discounted Diversity Program and Scholarship tickets here. We hope to make this the definitive technical conference for ALL AI engineers. ICLR 2024 took place from May 6-11 in Vienna, Austria. Just like we did for our extremely popular NeurIPS 2023 coverage, we decided to pay the $900 ticket (thanks to all of you paying supporters!) and brave the 18-hour flight and 5-day grind to go on behalf of all of you. We now present the results of that work! This ICLR was the biggest one by far, with a marked change in the excitement trajectory for the conference. Of the 2260 accepted papers (31% acceptance rate), within the subset relevant to our shortlist of AI Engineering topics, we found many, many LLM reasoning and agent related papers, which we will cover in the next episode. We will spend this episode with 14 papers covering other relevant ICLR topics, as below. As we did last year, we'll start with the Best Paper Awards. Unlike last year, we now group our paper selections by subjective topic area, and mix in both Outstanding Paper talks as well as editorially selected poster sessions. Where we were able to do a poster session interview, please scroll to the relevant show notes for images of their poster for discussion. To cap things off, Chris Ré's spot from last year now goes to Sasha Rush for the obligatory last word on the development and applications of State Space Models. We had a blast at ICLR 2024 and you can bet that we'll be back in 2025
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Aaron Lowry, an experienced consultant and returning guest. They discuss a wide range of topics, including Lowry's work in rebuilding custom vehicles, the value of blending aesthetics with engineering, and the challenges of balancing principles and propositions in problem-solving. They also explore the evolving world of artificial intelligence, contrasting its limitations with human intelligence, and consider its impact on creative expression. Connect with Aaron on Twitter at @Aaron_Lowry for more insights into his projects and ideas. Check out this GPT we trained on this conversation.

Timestamps
00:00 - Stewart Alsop introduces Aaron Lowry, discussing their previous conversations and current interests. They mention the makerspace and complexities in physical and software creation, while Lowry shares insights on sheet metal work and its principles.
00:05 - Stewart talks about challenges in crafting and how quick access to information on computers may impact patience. He appreciates Lowry's language of attunement and asks for Lowry's views on AI, given that he hasn't been directly involved in building it.
00:10 - Lowry discusses intelligence, consciousness, and the reciprocal relationship between agent and environment. He explores challenges in defining intelligence, noting the mirror-like effect of AI reflecting our own limitations.
00:15 - Stewart discusses how filtering AI models reduces their utility. Lowry describes prompt injection as a way to navigate AI limitations while emphasizing the importance of understanding the parameters that bound the data set.
00:20 - Lowry acknowledges the energy required to maintain AI models, comparing it to the efficiency of the human brain. He stresses the probabilistic nature of human intelligence versus the deterministic nature of machine learning.
00:25 - Lowry distinguishes between the infinite potential of probabilistic intelligence and deterministic frameworks. He compares real-world interaction to a video game, noting how deterministic thinking can make people behave like NPCs.
00:30 - They discuss navigating principles versus propositions, likening it to piloting a sailboat. Maintaining direction requires continuous feedback and adaptation.
00:35 - Stewart differentiates between propositional and participatory knowing, noting AI's strong grasp of the former. Lowry argues that perspective is assigned in AI models but participation remains absent.
00:40 - Lowry describes the truck he is restoring, noting the blend of modern engineering and aesthetic choices. He shares his process of acquiring knowledge from books and the internet.
00:45 - They discuss Brian Rommel's approach to training language models with high-quality data from the past, emphasizing the importance of data quality.
00:50 - They discuss how AI models can synthesize a broader spectrum of perspectives than any individual. Lowry advocates for plurality in models, warning against a single authoritative perspective.
00:55 - They delve into AI's impact on art. Despite the democratization of creative tools, Lowry asserts that authentic artistic inspiration is still necessary. He highlights the empty appeal of AI-generated perfection lacking the soul of human art.

Key insights
- Principles vs. Propositions in Problem-Solving: Aaron Lowry emphasizes the importance of working with first principles rather than rigid propositions. He compares this to piloting a sailboat, where adaptability and constant course correction are crucial, and stresses that a principle-based approach allows for dynamic navigation of complex challenges.
- Sheet Metal Work as a Metaphor: Lowry draws parallels between his experience working with sheet metal and broader life lessons. He finds that patience, precision, and an understanding of thermodynamics are essential when shaping materials and that these skills have broader applications, like aligning with fundamental principles in all aspects of life.
- AI and Human Intelligence Contrasts: Despite not being directly involved in building AI, Lowry offers a thoughtful analysis of its relationship to human intelligence. He argues that AI can mirror our limitations and reflects how intelligence is both probabilistic and deterministic, giving us powerful tools but also raising ethical and practical challenges.
- Guardrails and Filtering in AI Models: The conversation explores how filtering in AI reduces its utility. While Lowry acknowledges that filters are essential for contextualizing data sets, he also notes that prompt injection helps circumvent these limitations, revealing the inherent challenges in fully controlling AI output.
- Plurality of Perspectives in AI: Both Alsop and Lowry agree that multiple AI models are necessary to capture a range of perspectives, and relying on a single authoritative model could be dangerous. They highlight that AI models should maintain diversity to better reflect the broad spectrum of human experience.
- AI's Role in Creative Expression: They touch upon the potential of AI to create art, noting how it can democratize creative tools. However, Lowry points out that even with high technical proficiency, AI-generated art often lacks the emotional resonance that comes from genuine human inspiration and participation.
- Blending Aesthetics with Engineering: Lowry shares his approach to rebuilding classic vehicles, which blends modern engineering with aesthetic considerations. His goal is to maintain the beauty of the original designs while ensuring functionality, illustrating the delicate balance between creativity and technical precision.
Today we're sharing a recent interview Erik gave to Alex LaBossiere that covers lots of themes we discuss on Moment of Zen - including Erik's views on tech vs. the media, hypocrisy of the elites, risk orientation, and the thesis behind Turpentine. Grow your newsletter with Beehiiv, head to https://Beehiiv.com and use code "MOZ" for 20% off your first three months -- SPONSORS: NETSUITE | BEEHIIV | SQUAD
Erik Torenberg is a technology entrepreneur and investor. Presently, he's the founder of Turpentine. Previously, he was the chairman of On Deck, co-founder and general partner at Village Global, and first employee at Product Hunt. 0:00 - Intro 5:08 - On Investing vs Operating and Being a Player 7:34 - Trust and Direct Distribution 10:45 - Legacy Media and the Techlash 17:24 - What is Turpentine 18:41 - Lessons From 1000 Interviews 20:59 - The Printing Press, Barbells, and the Future of Media 25:22 - Pre-Internet Journalism and Survival of the Fittest Ideas 28:44 - Creator Monetization and Price Discrimination 34:55 - Does Media Have a Direction? 41:44 - Tailwinds and Turpentine at Scale 49:37 - Creating More Startups and Understanding Risk 54:24 - Is Entrepreneurship Founder or Market Limited? 59:04 - What Do Most People Get Wrong About Community Building? 1:01:34 - Probabilistic vs Deterministic Mindsets 1:04:29 - On Faith over Logic 1:08:06 - Modernity and God Shaped Holes 1:10:44 - The Hypocrisy of Elites 1:13:30 - Asking Erik Questions He Asks People 1:15:40 - What Should More People be Thinking About?
Episode 121

I spoke with Professor Ryan Tibshirani about:
* Differences between the ML and statistics communities in scholarship, terminology, and other areas
* Trend filtering
* Why you can't just use garbage prediction functions when doing conformal prediction

Ryan is a Professor in the Department of Statistics at UC Berkeley. He is also a Principal Investigator in the Delphi group. From 2011 to 2022, he was a faculty member in Statistics and Machine Learning at Carnegie Mellon University. From 2007 to 2011, he did his Ph.D. in Statistics at Stanford University.

Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

The Gradient Podcast on: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:10) Ryan's background and path into statistics
* (07:00) Cultivating taste as a researcher
* (11:00) Conversations within the statistics community
* (18:30) Use of terms, disagreements over stability and definitions
* (23:05) Nonparametric Regression
* (23:55) Background on trend filtering
* (33:48) Analysis and synthesis frameworks in problem formulation
* (39:45) Neural networks as a specific take on synthesis
* (40:55) Divided differences, falling factorials, and discrete splines
* (41:55) Motivations and background
* (48:07) Divided differences vs. derivatives, approximation and efficiency
* (51:40) Conformal prediction
* (52:40) Motivations
* (1:10:20) Probabilistic guarantees in conformal prediction, choice of predictors
* (1:14:25) Assumptions: i.i.d. and exchangeability — conformal prediction beyond exchangeability
* (1:25:00) Next directions
* (1:28:12) Epidemic forecasting — COVID-19 impact and trends survey
* (1:29:10) Survey methodology
* (1:38:20) Data defect correlation and its limitations for characterizing datasets
* (1:46:14) Outro

Links:
* Ryan's homepage
* Works read/mentioned
* Nonparametric Regression
  * Adaptive Piecewise Polynomial Estimation via Trend Filtering (2014)
  * Divided Differences, Falling Factorials, and Discrete Splines: Another Look at Trend Filtering and Related Problems (2020)
* Distribution-free Inference
  * Distribution-Free Predictive Inference for Regression (2017)
  * Conformal Prediction Under Covariate Shift (2019)
  * Conformal Prediction Beyond Exchangeability (2023)
* Delphi and COVID-19 research
  * Flexible Modeling of Epidemics
  * Real-Time Estimation of COVID-19 Infections
  * The US COVID-19 Trends and Impact Survey and Big data, big problems: Responding to "Are we there yet?"

Get full access to The Gradient at thegradientpub.substack.com/subscribe
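The conformal prediction discussion above can be made concrete with a split-conformal sketch. This is a generic illustration, not code from the episode; the toy data, the least-squares predictor, and the use of numpy are all assumptions. The marginal coverage guarantee holds for any point predictor, but a garbage predictor yields uselessly wide intervals, which is the practical cost behind "you can't just use garbage prediction functions."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + noise (purely illustrative).
x = rng.uniform(0, 10, 600)
y = 2 * x + rng.normal(0, 1, 600)

# Split into a proper training set and a calibration set.
x_train, y_train = x[:300], y[:300]
x_cal, y_cal = x[300:], y[300:]

# Any point predictor works for validity; here, a least-squares line.
slope, intercept = np.polyfit(x_train, y_train, 1)
predict = lambda t: slope * t + intercept

# Calibration: absolute residuals, then the (1 - alpha) conformal quantile
# with the finite-sample (n + 1) correction.
alpha = 0.1
scores = np.abs(y_cal - predict(x_cal))
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new point: [f(x) - q, f(x) + q].
x_new = 5.0
lo, hi = predict(x_new) - q, predict(x_new) + q
```

A worse predictor leaves the guarantee intact but inflates `q`, so the interval `[lo, hi]` widens until it says almost nothing.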
In episode 119 of The Gradient Podcast, Daniel Bashir speaks to Professor Michael Sipser.

Professor Sipser is the Donner Professor of Mathematics and member of the Computer Science and Artificial Intelligence Laboratory at MIT. He received his PhD from UC Berkeley in 1980 and joined the MIT faculty that same year. He was Chairman of Applied Mathematics from 1998 to 2000 and served as Head of the Mathematics Department 2004-2014. He served as interim Dean of Science 2013-2014 and then as Dean of Science 2014-2020. He was a research staff member at IBM Research in 1980, spent the 1985-86 academic year on the faculty of the EECS department at Berkeley and at MSRI, and was a Lady Davis Fellow at Hebrew University in 1988. His research areas are in algorithms and complexity theory, specifically efficient error correcting codes, interactive proof systems, randomness, quantum computation, and establishing the inherent computational difficulty of problems. He is the author of the widely used textbook Introduction to the Theory of Computation (Third Edition, Cengage, 2012).

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:40) Professor Sipser's background
* (04:35) On interesting questions
* (09:00) Different kinds of research problems
* (13:00) What makes certain problems difficult
* (18:48) Nature of the P vs. NP problem
* (24:42) Identifying interesting problems
* (28:50) Lower bounds on the size of sweeping automata
* (29:50) Why sweeping automata + headway to P vs. NP
* (36:40) Insights from sweeping automata, infinite analogues to finite automata problems
* (40:45) Parity circuits
* (43:20) Probabilistic restriction method
* (47:20) Relativization and the polynomial time hierarchy
* (55:10) P vs. NP
* (57:23) The non-connection between GO's polynomial space hardness and AlphaGo
* (1:00:40) On handicapping Turing Machines vs. oracle strategies
* (1:04:25) The Natural Proofs Barrier and approaches to P vs. NP
* (1:11:05) Debates on methods for P vs. NP
* (1:15:04) On the possibility of solving P vs. NP
* (1:18:20) On academia and its role
* (1:27:51) Outro

Links:
* Professor Sipser's homepage
* Papers discussed/read
  * Halting space-bounded computations (1978)
  * Lower bounds on the size of sweeping automata (1979)
  * GO is Polynomial-Space Hard (1980)
  * A complexity theoretic approach to randomness (1983)
  * Parity, circuits, and the polynomial-time hierarchy (1984)
  * A follow-up to Furst-Saxe-Sipser
  * The Complexity of Finite Functions (1991)

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Today - the science behind intuition & how it can help us to hone this superpower. Professor Joel Pearson is a neuroscientist who's been studying how the brain processes unconscious information for 25 years. He's on a mission to distill the science of intuition into simple, practical rules that are easy to follow and can improve decision-making. His book is called The Intuition Toolkit. He shares his simple rules with us: how to home in on our intuition, how to make it easier to listen to, & when NOT to listen to it.

PSA: The podcast is now also available on YouTube & as a video podcast on Spotify if you'd like to see our smiling faces while you listen!

This episode is brought to you by YouFoodz; for $200 off your first 5 boxes use the code HELPFUL or order via this link.

CONNECT WITH US
Connect with That's Helpful on Instagram.
Find Joel on Twitter & via his website.

BOOK
The Intuition Toolkit

PODCASTS
Intuitive Eating 101

Want to become a podcast sponsor, got some feedback for me or just fancy a chat? Email me - thatshelpful@edstott.com

TIMESTAMPS
00:00:00 Intro
00:02:24 Why intuition is Joel's passion
00:03:53 How hard is intuition to study in the lab?
00:04:54 How do you study intuition in the lab?
00:09:18 Why do we feel intuition physically?
00:11:17 Why are certain people more intuitive than others?
00:13:04 Self Awareness
00:16:45 Getting better at using your intuition over time
00:19:30 The difference between addiction & intuition
00:24:17 Intuitive Eating & processed foods
00:32:59 Probabilistic thinking
00:36:09 The importance of environment
00:39:12 When not to use intuition
00:45:00 The one thing to remember when it comes to intuition
Dr. Steven 'SEVEN' Waterhouse is an active investor and builder in the crypto / web3 space with 11 years of experience. Dr. Waterhouse is currently CEO and co-founder of Orchid Labs, advisor to various projects including RiscZero and Squid Router, and is CEO and co-founder of Nazare Ventures, a new fund focused on crypto and AI, highlighting his ambition to contribute towards the digital landscape of the future. He was recently CTO & Venture Partner at Fabric Ventures and was previously a partner at Pantera Capital from 2013 to 2016. He served on the board of Bitstamp during this period.

Dr. Waterhouse holds a Ph.D. in Engineering from Cambridge with a focus on speech recognition and machine learning. At Cambridge, Dr. Waterhouse received the Isaac Newton Prize for outstanding Ph.D. research and represented Cambridge in the rowing and water polo teams.

In this conversation, we discuss:
- Introduction to Orchid Protocol
- Ensuring decentralisation as AI and Web3's convergence grows
- DePINs
- Technology and innovation
- Privacy in the digital age
- Building blocks for a freer internet
- Integrating on-chain privacy
- The fight for internet freedom
- Probabilistic nano-payments
- The role of Digital Resource Networks (DRNs)
- Sensorship

Orchid
Website: www.orchid.com
X: @OrchidProtocol
Discord: discord.gg/GDbxmjxX9F

Dr. Steven Waterhouse
X: @deseventral
LinkedIn: Steven W.

This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers.

PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50. This promotion is available for a month after activation. Click the link below:
PrimeXBT x CRYPTONEWS50
In this new episode in the "Making Better Supply Chain Bets with the Power of Probabilities" series with Noodle.ai, hosts Scott Luton and Greg White are joined by the CRO of Noodle.ai, Tim Krug, and the End-to-End Planning and Support Director with Imperial Brands, Remi Guillon.

Join us as they delve into the transformative story of a fast-moving consumer goods company, highlighting the adoption of innovative planning approaches through the lens of artificial intelligence. Listen in and learn more about:
- Probabilistic supply chain planning and the future of AI-driven decision-making
- The human element in supply chain transformation, and the cultural aspects and leadership role in embracing technology
- The significance of trust between partners and what it takes to create unbiased forecasts
- The importance of data cleanliness, change management, and acting with integrity

Additional Links & Resources:
- Learn more about Noodle.ai: www.Noodle.ai
- Learn more about Supply Chain Now: https://supplychainnow.com
- WEBINAR: AI-based forecast automation dreams do come true at Danone: https://bit.ly/3RRcMRj
- WEBINAR: Inflation & SMBs: Unlocking Cash Tied up in Inventory: https://bit.ly/48BIUzN
- WEBINAR: Achieving the Next-Gen Control Tower in 2024: Lessons Learned from the High-Tech Industry: https://bit.ly/3Syzrn1
- WEBINAR: Data-Driven Decision Making in Logistics: https://bit.ly/49lt102

This episode is hosted by Scott Luton and Greg White. For additional information, please visit our dedicated show page at: https://supplychainnow.com/navigating-ai-frontier-demand-planning-imperial-brands-1231
We talked about:
- Rob's background
- Going from software engineering to Bayesian modeling
- Frequentist vs. Bayesian modeling approaches
- About integrals
- Probabilistic programming and samplers
- MCMC and Hakaru
- Language vs. library
- Encoding dependencies and relationships into a model
- Stan, HMC (Hamiltonian Monte Carlo), and NUTS
- Sources for learning about Bayesian modeling
- Reaching out to Rob

Links:
- Book 1: https://bayesiancomputationbook.com/welcome.html
- Book/Course: https://xcelab.net/rm/statistical-rethinking/
- Free ML Engineering course: http://mlzoomcamp.com
- Join DataTalks.Club: https://datatalks.club/slack.html
- Our events: https://datatalks.club/events.html
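For listeners new to the MCMC ideas mentioned above, the core mechanism is easiest to see in a random-walk Metropolis sampler. This is a toy sketch in plain Python sampling a standard normal; it is purely illustrative and says nothing about Stan or Hakaru internals.

```python
import math
import random

random.seed(0)

def metropolis(log_density, x0, n_steps, step=1.0):
    """Random-walk Metropolis: the minimal MCMC algorithm."""
    samples, x = [], x0
    lp = log_density(x)
    for _ in range(n_steps):
        proposal = x + random.gauss(0, step)
        lp_new = log_density(proposal)
        # Accept with probability min(1, p(proposal) / p(current)),
        # computed in log space for numerical stability.
        if math.log(random.random()) < lp_new - lp:
            x, lp = proposal, lp_new
        samples.append(x)
    return samples

# Target: a standard normal, via its log-density up to a constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20_000)
mean = sum(draws) / len(draws)
```

HMC and NUTS (as in Stan) replace the blind Gaussian proposal with gradient-informed trajectories, which is why they scale to high-dimensional models where random-walk Metropolis stalls.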
We learned a new word! See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Tune in to hear Dwayne Kerrigan and Keiron McCammon dive into various topics such as technology, AI, spirituality, and connected devices. This candid and unscripted episode showcases the unpredictable nature of business and entrepreneurship.

[00:00:00] Welcome
[00:04:07] Probabilistic model trained on human knowledge
[00:08:06] Models and their accuracy
[00:12:26] Technological advancements and consciousness
[00:15:39] Influencers and world leaders
[00:20:42] Slowing things down online
[00:23:23] Facebook's impact on disinformation
[00:29:44] Teenage relationships and tragic consequences
[00:33:28] Shift Happens incubator
[00:35:12] Ripple effect of knowledge

Connect with Dwayne Kerrigan
LinkedIn: https://www.linkedin.com/in/dwayne-kerrigan-998113281/
Facebook: https://www.facebook.com/businessofdoingbusinessdk
Instagram: https://www.instagram.com/thebusinessofdoingbusinessdk/

Disclaimer
The views, information, or opinions expressed by guests during The Business of Doing Business are solely those of the individuals involved and do not necessarily represent those of Dwayne Kerrigan and his affiliates. Dwayne Kerrigan and The Business of Doing Business are not responsible for, and do not verify the accuracy of, any of the information contained in the podcast series. The primary purpose of this podcast is to educate and inform. Listeners are advised to consult with a qualified professional or specialist before making any decisions based on the content of this podcast.
Subscribe to the show. Click here to get your free copy of The Inner Voice of Trading audiobook.
In this episode, we dive into a common pitfall we've seen many developers fall into lately. While many ad networks use probabilistic models for optimization, they turn to SKAN data when it comes to billing. This discrepancy might mean you're paying 30-40% more than anticipated, or that your campaign's performance isn't as stellar as you believed. I unpack the underlying mechanics of what is happening, and what we recommend doing instead.

Check out the show notes here:
https://mobileuseracquisitionshow.com/episode/ad-network-skan-billing/

Get more mobile user acquisition goodies here:
http://RocketShipHQ.com
http://RocketShipHQ.com/blog
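To see why a reporting/billing mismatch of this size matters, here is a back-of-the-envelope sketch. All numbers (the CPI, the install counts, and the direction of the gap) are hypothetical, chosen only to show how a disagreement between the attribution used for billing and the one used for evaluation surfaces as effective cost:

```python
# Hypothetical scenario: the network bills per install it attributes via SKAN,
# while you judge performance on a probabilistic attribution count.
cpi_contracted = 2.00            # dollars per billed install (illustrative)
installs_billed = 1000           # installs the network bills you for
installs_verified = 700          # installs your own attribution credits

spend = cpi_contracted * installs_billed          # what you actually pay
cpi_effective = spend / installs_verified         # cost per install you can verify

# A ~30% gap in install counts becomes a ~43% jump in effective CPI.
overpay_pct = (cpi_effective / cpi_contracted - 1) * 100
print(f"effective CPI ${cpi_effective:.2f}, {overpay_pct:.0f}% above contracted")
```

The same arithmetic runs in reverse if the billing count is the lower one; the point is that whichever attribution you don't reconcile against is the one quietly moving your unit economics.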