Summary
In this episode of the Develop This podcast, Dennis interviews Jason Archer, the Vice President of Business Development for the St. Louis Economic Development Partnership. They discuss the importance of business retention and expansion (BRE) programs, particularly the innovative regional BRE committee established during the pandemic. Jason shares insights on building trust among economic development partners, the structure of their outreach efforts, and the challenges faced in maintaining collaboration. The conversation highlights the significance of data collection, quantifying success, and practical advice for other organizations looking to implement similar initiatives.

Takeaways
* The St. Louis Partnership focuses on economic development for the city and county.
* Business retention and expansion (BRE) is a key pillar of economic development.
* The regional BRE committee was formed during the pandemic to enhance outreach efforts.
* Trust-building among economic development partners is crucial for collaboration.
* Companies appreciate streamlined communication from multiple agencies in one meeting.
* Regular meetings and centralized scheduling improve efficiency in outreach.
* Data collection is essential for understanding company needs and measuring success.
* Challenges include managing outreach frequency and overcoming company hesitance.
* Formalizing processes can help in establishing effective BRE programs.
* Sharing success metrics fosters accountability and collaboration among partners.
A string of robberies and assaults at an Arizona rest area has targeted truckers. The local sheriff joins the show to offer some advice. Also, some small carriers have trouble keeping customers due to federal safety scores. Alex Clark of CDL Legal says "formalizing" pre-trips helps. And Collin Long of OOIDA's Washington, D.C., office discusses the pro-trucker policies that make him optimistic about the near future.
0:00 – Newscast
10:01 – Sheriff offers advice on avoiding rest area robberies
24:27 – Formalizing pre-trip can prevent trouble later
39:25 – Pro-trucker policy discussions a reason for optimism
Death is one of the few sure things in life, but few of us talk through how we want our final days to go, or who we want to help us through them. Formalizing those things in an advance directive may be easier and more important than you think.
In this episode of CISO Tradecraft, host G Mark Hardy explores the top 10 cybersecurity predictions for 2025. From the rise of AI influencers to new standards in encryption, Hardy discusses significant trends and changes expected in the cybersecurity landscape. The episode delves into topics such as branding, application security, browser-based security, and post-quantum cryptography, aiming to prepare listeners for future challenges and advancements in the field.

Big Thanks to our Sponsor CruiseCon - https://cruisecon.com/
CruiseCon Discount Code: CISOTRADECRAFT10

Team8 Fixing AppSec Paper - https://bunny-wp-pullzone-pqzn4foj9c.b-cdn.net/wp-content/uploads/2024/11/Fixing-AppSec-Paper.pdf
Terraform and Open Policy Agent Example - https://spacelift.io/blog/terraform-best-practices#8-introduce-policy-as-code
Transcripts - https://docs.google.com/document/d/1u6B2PrkJ1D14d9HjQQHSg7Fan3M6n4dy

Chapters
01:19 1) AI Influencers become normalized
03:17 2) The Importance of Production Quality in Branding
05:19 3) Google and Apple Collaboration for Enhanced Security
06:28 4) Consolidation in Application Security and Vulnerability Management
08:36 5) The Rise of Models Committees
09:09 6) Formalizing the CISO Role
11:03 7) Exclusive CISO Retreats: The New Trend
12:12 8) Automating Cybersecurity Tasks with Agentic AI
13:10 9) Browser-Based Security Solutions
14:22 10) Post-Quantum Cryptography: Preparing for the Future
In this episode, I begin discussing the question and history of formalizing results in Programming Languages Theory using interactive theorem provers like Rocq (formerly Coq) and Agda.
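For listeners who haven't seen mechanized programming-languages theory before, here is a minimal sketch of the flavor of formalization discussed, written in Lean 4 purely for illustration (the episode's provers are Rocq and Agda); the toy language and theorem are invented for this example, not taken from the episode:

```lean
-- A toy expression language and a machine-checked fact about it.
-- Everything here is illustrative, not from the episode.

/-- A tiny arithmetic expression language. -/
inductive Expr where
  | lit  : Nat → Expr
  | plus : Expr → Expr → Expr

/-- Big-step evaluation, total by structural recursion. -/
def eval : Expr → Nat
  | .lit n    => n
  | .plus a b => eval a + eval b

/-- A checked theorem: evaluating `plus` is symmetric in its arguments. -/
theorem eval_plus_comm (a b : Expr) :
    eval (.plus a b) = eval (.plus b a) := by
  simp [eval, Nat.add_comm]
```

Real formalizations in Rocq or Agda follow the same shape, just at much larger scale: inductive syntax, a semantics, and machine-checked theorems such as type soundness.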
In this episode, we sit down with Jessica Snead, Program Coordinator and Lead Instructor at the Houston Community College 911 Dispatch Academy. With nearly a decade of hands-on dispatch experience, Jessica has worked across all emergency service disciplines, rising to a supervisory role and now actively training the next generation of dispatchers. Jessica shares her passion for elevating pre-agency training to better equip dispatchers before they step into their roles. She believes that formal training, which covers the essentials of emergency communications, can alleviate burnout, reduce turnover, and improve preparedness among trainees. Join us as Jessica discusses how her academy is addressing these issues and the positive impact she envisions for both trainees and agencies alike.

Learn more about the awesome program at Houston Community College here:

Make Sure to Visit our Partners: Prepared at Prepard911.com and Xybix Media xybix.com/media

Thank you for listening to Let's Talk Dispatch! Don't forget to subscribe and leave a 5 Star Review!

Follow Us on Social Media
Instagram | Follow Here!
Facebook | Follow Here!
Youtube | Subscribe Here!

Interested in being on an Episode of Let's Talk Dispatch?
Sign Up Here | Be My Next Guest!

Find additional resources and Dispatch Merch at: Theraspydispatcher.com
An employee handbook is one of the most important documents an employer can maintain. Why do so many employers consider it essential? We'll cover these reasons and dive into the ways an employee handbook can help you. Listen in to learn more about:
[0:57] Formalizing policies
[1:40] Meeting state and local policy and notice requirements
[3:24] Supporting the onboarding process
[4:04] Guiding employment decisions
[4:31] Reinforcing at-will status
[6:08] Informing employees if they have questions or concerns
[7:03] And policies considered must-have for a handbook

Learn more about our RUN Powered by ADP® bundles and how our Employee Handbook Wizard can help you create your employee handbook.

This content is based on generally accepted HR practices, is advisory in nature, and does not constitute legal advice or other professional services. ADP does not warrant or guarantee the accuracy, reliability, and completeness of the content. Employers are encouraged to consult with legal counsel for advice regarding their organization's compliance with applicable laws. This content is current as of the published date. Copyright © 2024 ADP, Inc. All Rights Reserved. The ADP logo, ADP, RUN Powered by ADP, and HR{preneur} are registered trademarks of ADP, Inc. and its affiliates. All other marks are the property of their respective owners. Privacy at ADP
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Formalizing the Informal (event invite), published by abramdemski on September 11, 2024 on LessWrong. Formalizing the Informal One way to view MIRI's Agent Foundations research is that it saw the biggest problem in AI safety as "human preferences are informal, but we need to somehow get formal guarantees about them" -- and so, in response, it set out to make a formal-informal bridge. Recently, I've been thinking about how we might formally represent the difference between formal and informal. My prompt is something like: if we assume that classical probability theory applies to "fully formal" propositions, how can we generalize it to handle "informal" stuff? I'm going to lead a discussion on this tomorrow, Wednesday Sept. 11, at 11am EDT (8am Pacific, 4pm UK). Discord Event link (might not work for most people): https://discord.com/events/1237103274591649933/1282859362125352960 Zoom link (should work for everyone): https://us06web.zoom.us/j/6274543940?pwd=TGZpY3NSTUVYNHZySUdCQUQ5ZmxQQT09 You can support my work on Patreon. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
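As a reference point for the discussion, the "classical probability theory over fully formal propositions" that the post takes as given can be stated compactly (this rendering is an editorial gloss, not notation from the post):

\[
\Pr(\top) = 1, \qquad \Pr(\lnot\phi) = 1 - \Pr(\phi), \qquad (\phi \vdash \psi) \;\Rightarrow\; \Pr(\phi) \le \Pr(\psi),
\]
\[
\Pr(\phi \lor \psi) = \Pr(\phi) + \Pr(\psi) - \Pr(\phi \land \psi).
\]

The question the event addresses is which of these coherence constraints can be kept, and which must be weakened, when \(\phi\) and \(\psi\) are informal claims with no agreed formalization.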
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Epistemic states as a potential benign prior, published by Tamsin Leake on August 31, 2024 on The AI Alignment Forum. Malignancy in the prior seems like a strong crux of the goal-design part of alignment to me. Whether your prior is going to be used to model: processes in the multiverse containing a specific "beacon" bitstring, processes in the multiverse containing the AI, processes which would output all of my blog (so I can make it output more for me), or processes which match an AI chatbot's hypotheses about what it's talking with, you have to sample hypotheses from somewhere; and typically, we want to use either Solomonoff induction or time-penalized versions of it such as Levin search (penalized by log of runtime) or what QACI uses (penalized by runtime, but with quantum computation available in some cases), or the implicit prior of neural networks (large sequences of multiplying by a matrix, adding a vector, and ReLU, often with a penalty related to how many non-zero weights are used). And the Solomonoff prior is famously malign. (Alternatively, you could have Knightian uncertainty about parts of your prior that aren't nailed down enough, and then do maximin over your Knightian uncertainty (like in infra-Bayesianism), but then you're not guaranteed that your AI gets anywhere at all; its Knightian uncertainty might remain so immense that the AI keeps picking the null action all the time because some of its Knightian hypotheses still say that anything else is a bad idea. Note: I might be greatly misunderstanding Knightian uncertainty!) (It does seem plausible that doing geometric expectation over hypotheses in the prior helps "smooth things over" in some way, but I don't think this particularly removes the weight of malign hypotheses in the prior? It just allocates their steering power in a different way, which might make things less bad, but it sounds difficult to quantify.) It does feel to me like we do want a prior for the AI to do expected value calculations over, either for prediction or for utility maximization (or quantilization or whatever). One helpful aspect of prior-distribution-design is that, in many cases, I don't think the prior needs to contain the true hypothesis. For example, if the problem that we're using a prior for is to model processes which match an AI chatbot's hypotheses about what it's talking with, then we don't need the AI's prior to contain a process which behaves just like the human user it's interacting with; rather, we just need the AI's prior to contain a hypothesis which:
* is accurate enough to match observations.
* is accurate enough to capture the fact that the user (if we pick a good user) implements the kind of decision theory that lets us rely on them pointing back to the actual real physical user when they get empowered - i.e. in CEV(user-hypothesis), user-hypothesis builds and then runs CEV(physical-user), because that's what the user would do in such a situation.
Let's call this second criterion "cooperating back to the real user". So we need a prior which:
* Has at least some mass on hypotheses which correspond to observations, cooperate back to the real user, and can eventually be found by the AI, given enough evidence (enough chatting with the user). Call this the "aligned hypothesis".
* Before it narrows down hypothesis space to mostly just aligned hypotheses, doesn't give enough weight to demonic hypotheses which output whichever predictions cause the AI to brainhack its physical user, or escape using rowhammer-type hardware vulnerabilities, or other failures like that.

Formalizing the chatbot model
First, I'll formalize this chatbot model. Let's say we have a magical inner-aligned "soft" math-oracle: which, given a "scoring" mathematical function from a non-empty set a to real numbers (not necessarily one that is tractably ...
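The Solomonoff and Levin priors mentioned at the top of this entry have compact standard definitions; as a reference gloss (editorial, not from the post), with \(K(h)\) the length of the shortest program outputting hypothesis \(h\) and \(t(h)\) its runtime:

\[
P_{\text{Solomonoff}}(h) \;\propto\; 2^{-K(h)}, \qquad
P_{\text{Levin}}(h) \;\propto\; 2^{-\left(K(h) + \log_2 t(h)\right)} \;=\; \frac{2^{-K(h)}}{t(h)},
\]

so "penalized by log of runtime" adds \(\log_2 t(h)\) bits to the description-length cost, while a direct runtime penalty (as in the QACI variant mentioned) adds \(t(h)\) itself.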
SUMMARY
Jeremy chats with his guest, Coach Jeff Goodrum, and discusses his love for martial arts and the open-mindedness it brings. He talks about the importance of being a student and constantly learning, as well as the impact of his instructors and upbringing on his mindset. Coach Goodrum also touches on his experiences with fighting and the entertainment aspect of martial arts. He reflects on his journey through different martial arts styles and the influence of his father. He talks about his journey from being a college dropout to starting his own martial arts school and mentoring kids. He shares how his mother's support and his passion for music and martial arts motivated him. Coach Goodrum also discusses the importance of having a mentor and how he wants to be that mentor for kids who don't have a support system. He emphasizes the need for kids to have positive role models and learn how to be heroes in their own lives. He also talks about the challenges of formalizing his training program and the importance of continuous learning and self-improvement. He discusses the importance of seeking help and mentorship in business and martial arts. He highlights the benefits of working with the Small Business Development Center and emphasizes the value of learning from experienced individuals. Lastly, he talks about the need for education and knowledge in entrepreneurship and the importance of finding a balance between different aspects of life. He shares his vision for his martial arts school and the impact he hopes to have on his students. Overall, the conversation explores themes of mentorship, education, personal growth, and community building.

TAKEAWAYS
* Martial arts fosters an open-mindedness and willingness to constantly learn and improve.
* The influence of instructors and upbringing plays a significant role in shaping one's mindset.
* Fighting can be both entertaining and a valuable learning experience.
* Being a student and constantly seeking knowledge is important in martial arts and in life.
* Following one's passion and making decisions based on personal fulfillment is crucial.
* Having a support system and a mentor can make a huge difference in a person's life.
* Kids who don't have a support system need positive role models and mentors to guide them.
* Formalizing a training program and starting a business can be challenging but rewarding.
* Continuous learning and self-improvement are essential for personal and professional growth.
* Seeking help and mentorship is crucial for success in business and martial arts.
* The Small Business Development Center offers valuable resources and support for small businesses.
* Education and knowledge are essential for entrepreneurship.
* Finding a balance between different aspects of life is important.
* Martial arts can teach valuable life skills and help individuals cope with trauma.
* Building a strong community is a key aspect of martial arts.
* Being open to learning and seeking guidance from experienced individuals is beneficial in personal and professional growth.
Summary: Sam Obletz, co-founder of Claim, shares his journey as an entrepreneur and the inspiration behind his tech business. He recounts his early experiences in entrepreneurship, from building custom computers as a kid to starting a freelance web design business. Sam's interest in technology and passion for learning new technologies led him to pursue a career in finance and impact investing. Eventually, he teamed up with his co-founder, Tapp Stevenson, to start Claim, a company focused on helping people navigate the complex world of insurance claims. Sam emphasizes the importance of finding a business partner with complementary skills and experiences. He also discusses the challenges and opportunities of raising capital and the role of narratives in explaining data and business plans. Claim is a brand discovery app for Gen Z college students that helps them save money while finding new brands and having social experiences with their friends. The app operates through a weekly drop, similar to a Nike sneaker drop, where users unwrap a brand of the week and receive cash back offers for making their first purchase. Claim aims to reinvent marketing and advertising by passing advertising budgets directly to consumers through low prices, creating a more authentic and cost-effective experience for both consumers and brands. The app is currently available at around 50 colleges and universities and is expanding regularly.

Keywords: entrepreneurship, tech business, custom computers, web design, impact investing, partnership, raising capital, narratives, data, brand discovery, Gen Z, college students, savings, social experiences, cash back offers, marketing, advertising, authenticity, cost-effective, expansion

Takeaways
* Early experiences in entrepreneurship can shape a person's passion for technology and learning new technologies.
* Finding a business partner with complementary skills and experiences can greatly benefit an entrepreneur.
* Raising capital requires storytelling and understanding the timing and market interest.
* Putting data into a narrative format can help others understand the thought process and decision-making behind a business.
* Formalizing business agreements and infrastructure is crucial for long-term success and dispute resolution.
* Claim is a brand discovery app that helps college students save money while finding new brands and having social experiences with their friends.
* The app operates through a weekly drop, where users unwrap a brand of the week and receive cash back offers for making their first purchase.
* Claim aims to reinvent marketing and advertising by passing advertising budgets directly to consumers through low prices, creating a more authentic and cost-effective experience.
* The app is currently available at around 50 colleges and universities and is expanding regularly.

Titles
* Navigating the Challenges of Raising Capital
* The Power of Complementary Partnerships in Entrepreneurship
* Discover New Brands and Save Money with Claim
* Reinventing Marketing and Advertising with Claim

Sound Bites
"I'm very happy to be here. Thanks for having me, Mitch."
"If you find a business partner that their skills and experiences are complementary, I would almost argue that there should be some overlap so that you can both jam on whatever the problems of the day are."
"If your product doesn't really require AI or it doesn't really meaningfully add to the user experience or the customer value, then you don't just slap the label on for fundraising because one, it's a distraction and two, people are going to see right through it." "Claim's mission is to create easy and affordable memories." "We nudge people to discover the world together without the brainwashing of social platforms." "It's gratifying to me to give people those social experiences while solving a very real business need for marketers out there." Chapters 00:00 Introduction and Welcome 03:12 Early Entrepreneurship Experiences 08:01 Finding a Complementary Business Partner 12:00 Challenges and Opportunities of Raising Capital 24:55 The Journey to Claim: From Childhood Memories to Entrepreneurship 27:04 Introduction to Claim and its Mission 37:25 How Claim Works: Brand Discovery and Cash Back Offers 46:06 Challenges and Lessons Learned in Running Claim 54:03 Expanding Claim's Reach to More Universities
unbillable hours - a podcast about better professional services marketing
Sure, your firm has frameworks - but does it have a "signature methodology," i.e., a proprietary process designed to help educate prospects and structure engagements? If not, you should design one ASAP - and listen to this to learn how you might do it. Voices, production, etc. by Ash and Flo. Creative and design advice by @calmar.creativ. Intro, outro voiceover by @iamthedakota. Music also by @iamthedakota.
This episode is also available as a blog post: https://lifetapestrycreations.wordpress.com/2024/05/20/formalizing-you/ Formalizing You is the title of this week's channeled blog. The shift happening this week is more profound than you're accustomed to. Until now, you were most likely accepting one new you piece at a time. In the next week, you'll combine these pieces into a greater whole. By the end, you won't care what anyone wants or needs because you'll have adapted to new you. Copyright 2009-2024. All rights reserved. Feel free to share this content with others, post it on your blog, etc. But please maintain the integrity of this channel by including the channel's name, Brenda Hoffman, and the source website link: LifeTapestryCreations.com. --- Send in a voice message: https://podcasters.spotify.com/pod/show/brenda-hoffman6/message
This episode is also available as a blog post: https://lifetapestrycreations.wordpress.com/2024/05/19/formalizing-you/ Formalizing You is the title of this week's channeled blog. Until now, you were most likely adapting to one new you piece at a time. At the beginning of this energy burst, you'll question your new you thoughts. By the end, you won't care what anyone thinks. Copyright 2009-2024. All rights reserved. Feel free to share this content with others, post it on your blog, etc. But please maintain the integrity of this channel by including the channel's name, Brenda Hoffman, and the source website link: LifeTapestryCreations.com. --- Send in a voice message: https://podcasters.spotify.com/pod/show/brenda-hoffman6/message
In this conversation, Alex and Ramli John discuss their experiences with podcasting, the evolution of content creation, and the importance of standing out in a crowded market. They also touch on the concept of product adoption and the goals of the Product Adoption Academy within AppCues. In this conversation, Ramli John and Alex discuss the concept of product-led growth and its misconceptions. They explore the role of marketing and sales in a product-led growth model and the importance of personalized user experiences. They also touch on the use of AI in content creation and the potential impact on the value of human creators.

Key Takeaways
* Building relationships with podcast guests can lead to valuable connections and friendships.
* Creating a unique and memorable brand for a podcast can help it stand out in a crowded market.
* There is value in embracing your own weirdness and personality when creating content.
* Creative and unconventional marketing approaches can help brands differentiate themselves and capture attention.
* Formalizing and educating around a specific role or craft can bring more weight and seriousness to it.
* The goals of the Product Adoption Academy include course completion, customer acquisition, and impact on marketing metrics.
* Being product-led means prioritizing the success of the end user and creating a positive user experience.
* Product-led growth is about making it easy for users to achieve what they want and be successful with the product.
* Product-led growth does not necessarily mean freemium or free trial, but rather a focus on user success.
* Sales and marketing play important roles in a product-led growth model, including guiding users at key moments and personalizing the user experience.
* AI tools can be used to repurpose content, create summaries, and enhance content creation workflows.
* The value of human creators in the face of AI lies in their ability to provide unique perspectives and experiences.

Show Links
Visit AppCues
Connect with Ramli John on LinkedIn
Connect with Alex Birkett on LinkedIn and Twitter
Connect with Omniscient Digital on LinkedIn or Twitter

Past guests on The Long Game podcast include: Morgan Brown (Shopify), Ryan Law (Animalz), Dan Shure (Evolving SEO), Kaleigh Moore (freelancer), Eric Siu (Clickflow), Peep Laja (CXL), Chelsea Castle (Chili Piper), Tracey Wallace (Klaviyo), Tim Soulo (Ahrefs), Ryan McReady (Reforge), and many more.

Some interviews you might enjoy and learn from:
* Actionable Tips and Secrets to SEO Strategy with Dan Shure (Evolving SEO)
* Building Competitive Marketing Content with Sam Chapman (Aprimo)
* How to Build the Right Data Workflow with Blake Burch (Shipyard)
* Data-Driven Thought Leadership with Alicia Johnston (Sprout Social)
* Purpose-Driven Leadership & Building a Content Team with Ty Magnin (UiPath)

Also, check out our Kitchen Side series where we take you behind the scenes to see how the sausage is made at our agency:
* Blue Ocean vs Red Ocean SEO
* Should You Hire Writers or Subject Matter Experts?
* How Do Growth and Content Overlap?

Connect with Omniscient Digital on social:
Twitter: @beomniscient
Linkedin: Be Omniscient

Listen to more episodes of The Long Game podcast here: https://beomniscient.com/podcast/
In this curated episode of the Revenue Builders Podcast, John McMahon and John Kaplan, sponsored by Force Management, delve into the crucial aspect of decision criteria in sales. Joined by Anne Gary, the team dissects the intricacies of aligning decision criteria with product differentiation to secure sales success. From handling competition to managing scope creep, they provide invaluable insights into navigating the sales landscape with precision.

KEY TAKEAWAYS
[00:00:35] Understanding how competitors may adopt your terminology highlights the importance of clarifying distinctions in customer conversations.
[00:01:26] Identifying who within the company influences decision criteria sheds light on the political dynamics of the sales process.
[00:02:03] Aligning product differentiators with decision criteria ensures that customers are truly buying what you're selling.
[00:03:04] Formalizing and quantifying decision criteria is essential to avoid ambiguity and ensure accountability.
[00:04:38] Scope creep occurs when additional criteria are introduced, increasing the risk of losing the deal.
[00:06:43] Failing to define criteria rigorously can lead to irreversible setbacks, especially evident in failed Proof of Value (POV) scenarios.
[00:07:48] After a failed POV, complaining appears futile as it disregards agreed-upon rules and may damage credibility with the evaluation team and economic buyer.

HIGHLIGHT QUOTES
[00:02:21] "If you're outside of the bullseye, they're not buying what you're selling."
[00:04:58] "It's a seller's job to get the criteria to be... in their bullseye."
[00:06:43] "If you execute the POV and you haven't locked all this down... it's nearly impossible to recover from a failed POV."
[00:07:48] "Even if you did it again... you'd wind up losing again."

Listen to the full episode with Anne Gary through this link:
https://revenue-builders.simplecast.com/episodes/decoding-decision-criteria-mastering-champions-blueprint-for-sales-success-with-anne-gary/
Check out John McMahon's book here:
Amazon Link: https://a.co/d/1K7DDC4
Check out Force Management's Ascender platform here: https://my.ascender.co/Ascender/
Breaking Math Website
Breaking Math Email: BreakingMathPodcast@gmail.com
Email us for copies of the transcript!
Resources on the LEAN theorem prover and programming language can be found at the bottom of the show notes (scroll to the bottom).

Summary
This episode is inspired by a correspondence the Breaking Math Podcast had with the editors of Digital Discovery, a journal by the Royal Society of Chemistry. In this episode the hosts review a paper about how the Lean Interactive Theorem Prover, which is usually used as a tool in creating mathematics proofs, can be used to create rigorous and robust models in physics and chemistry. The paper is titled "Formalizing chemical physics using the Lean theorem prover" and can be found in Digital Discovery, a journal of the Royal Society of Chemistry.

Also - we have a brand new member of the Breaking Math team! This episode is the debut episode for Autumn, CEO of Cosmo Labs, occasional co-host / host of the Breaking Math Podcast, and overall contributor who has been working behind the scenes on the podcast on branding and content for the last several months. Welcome, Autumn!

Autumn and Gabe discuss how the paper explores the use of interactive theorem provers to ensure the accuracy of scientific theories and make them machine-readable. The episode discusses the limitations and potential of interactive theorem provers and highlights the themes of precision and formal verification in scientific knowledge. This episode also provides resources (listed below) for listeners interested in learning more about working with the LEAN interactive theorem prover.

Takeaways
* Interactive theorem provers can revolutionize the way scientific theories are formulated and verified, ensuring mathematical certainty and minimizing errors.
* Interactive theorem provers require a high level of mathematical knowledge and may not be accessible to all scientists and engineers.
* Formal verification using interactive theorem provers can eliminate human error and hidden assumptions, leading to more confident and reliable scientific findings.
* Interactive theorem provers promote clear communication and collaboration across disciplines by forcing explicit definitions and minimizing ambiguities in scientific language.
* Lean theorem provers enable scientists to construct modular and reusable proofs, accelerating the pace of knowledge acquisition.
* Formal verification presents challenges in terms of transforming informal proofs into a formal language and bridging the reality gap.
* Integration of theorem provers and machine learning has the potential to enhance creativity, verification, and usefulness of machine learning models.
* The limitations and variables in formal verification require rigorous validation against experimental data to ensure real-world accuracy.
* Lean theorem provers have the potential to provide unwavering trust, accelerate innovation, and increase accessibility in scientific research.
* AI as a scientific partner can automate the formalization of informal theories and suggest new conjectures, revolutionizing scientific exploration.
* The impact of Lean theorem provers on humanity includes a shift in scientific validity, rapid scientific breakthroughs, and democratization of science.
* Continuous expansion of mathematical libraries in Lean theorem provers contributes to the codification of human knowledge.
* Resources are available for learning Lean theorem proving, including textbooks, articles, videos, and summer programs.

Resources / Links:
Email Professor Tyler Josephson about summer REU undergraduate opportunities at the University of Maryland, Baltimore County (or online!) at tjo@umbc.edu.
See below Professor Tyler Josephson's links on learning more about LEAN:
* The Natural Number Game: Start in a world without math, unlock tactics and collect theorems until you can beat a 'boss' level and prove that 2+2=4, and go further.
* Free LEAN Textbook and Course: Professor Josephson's most-recommended resource for beginners learning Lean - a free online course and textbook from Prof. Heather Macbeth at Fordham University.
* Quanta Magazine articles on Lean
* Prof. Kevin Buzzard of Imperial College London's lecture on the LEAN interactive theorem prover and the future of mathematics.

Become a supporter of this podcast: https://www.spreaker.com/podcast/breaking-math-podcast--5545277/support.
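To give a concrete sense of what "formalizing physics in Lean" looks like, here is a minimal sketch in Lean 4 with Mathlib; the structure, names, and lemma are illustrative inventions for this write-up, not taken from the paper:

```lean
import Mathlib.Data.Real.Basic

-- Illustrative only: encode a physical law as a hypothesis about a state,
-- then derive a consequence that Lean's kernel checks completely.

/-- An idealized gas state; fields are plain reals for simplicity. -/
structure GasState where
  P : ℝ   -- pressure
  V : ℝ   -- volume
  n : ℝ   -- amount of substance
  T : ℝ   -- temperature

/-- If a state satisfies the ideal gas law `P * V = n * R * T` and the
    volume is nonzero, the pressure is determined by the other quantities.
    Because the proof is machine-checked, no algebra slip can hide. -/
theorem pressure_from_ideal_gas (R : ℝ) (s : GasState)
    (law : s.P * s.V = s.n * R * s.T) (hV : s.V ≠ 0) :
    s.P = s.n * R * s.T / s.V := by
  rw [eq_div_iff hV]
  exact law
```

The paper applies this style to chemical physics derivations at much larger scale, where forcing every assumption (like `hV` above) to be explicit is exactly what exposes the hidden assumptions the episode discusses.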
Stephanie Berner is a Customer Success Executive at LinkedIn. Since 2018, Stephanie has spearheaded all post-sales functions at LinkedIn Sales Solutions through its period of rapid growth. With a background in building and scaling customer success teams at Box, Medallia, and Opower, Stephanie has extensive experience in delivering exceptional customer experiences across various company stages.

In this episode, we discuss:
* Common customer success mistakes
* Creating a world-class customer success org
* Tactics for hiring exceptional talent
* How to structure compensation packages
* Where customer success fits into the wider org
* Key early-stage customer success metrics and rituals
* Successful strategies from Box, Medallia, and LinkedIn

Referenced:
Aaron Levie: https://www.linkedin.com/in/boxaaron/
Box: https://www.box.com/
David Love: https://www.linkedin.com/in/david-s-love/
Gainsight: https://www.gainsight.com/
Jon Herstein: https://www.linkedin.com/in/jonherstein/
Jonathan Lister: https://www.linkedin.com/in/jonathanlister/
Ken Fine: https://www.linkedin.com/in/kmfine/
Medallia: https://www.medallia.com/
Nick Mehta: https://www.linkedin.com/in/nickmehta/
Opower: https://www.oracle.com/utilities/opower-energy-efficiency/

Where to find Stephanie Berner:
LinkedIn: https://www.linkedin.com/in/stephanieberner/

Where to find Brett Berson:
LinkedIn: https://www.linkedin.com/in/brett-berson-9986094/
Twitter/X: https://twitter.com/brettberson

Where to find First Round Capital:
Website: https://firstround.com/
First Round Review: https://review.firstround.com/
Twitter: https://twitter.com/firstround
YouTube: https://www.youtube.com/@FirstRoundCapital
This podcast on all platforms: https://review.firstround.com/podcast

Timestamps:
(00:00) Introduction
(02:21) Formalizing customer success at a startup
(05:01) Hiring ICs before CSMs
(06:22) Tactics for hiring standout talent
(11:39) 3 questions to ask candidates
(15:38) Fail-case patterns among customer success hires
(17:49) Considering candidates with non-traditional backgrounds
(21:21) Indexing toward a bias for action
(24:17) What v1 of customer success looks like
(26:03) Key early-stage customer success metrics
(28:21) Whether customer success or sales should own renewals
(30:40) Where customer success fits into the org
(32:14) Why customer success doesn't report to an executive
(33:48) Distinguishing a product problem from a customer success one
(35:18) Simple way to deal with customer churn
(39:21) Tactics to get customers to give honest feedback
(40:58) What happens when customer success and product teams collaborate
(44:14) Rituals for zero-to-one customer success
(48:23) How to structure an early customer success team
(52:01) Structuring compensation packages
(54:35) Aligning customer success with the business model
(60:14) The role of customer success in B2B software
(62:17) Common customer success mistakes
(67:44) People who had an outsized impact on Stephanie
In this week's episode of the Maximize Business Value Podcast, Tom delves into Chapter 26 of the Maximize Business Value Playbook - "Formalize Documentation." Join us as we explore the indispensable practice of putting commitments in writing and the profound impact it has on business stability and value.

Tom begins by highlighting the cultural significance of a handshake, especially in the South. However, in the realm of business, he underscores the necessity of memorializing everything in writing. The risk of different recollections arising from verbal agreements poses a significant threat, particularly for business owners engaged in crucial discussions, such as those involving compensation or ownership.

Through a compelling real-life example, Tom recounts a business acquisition where verbal promises to key managers went unfulfilled, leading to confusion and dissatisfaction post-transaction. Had these commitments been documented, the ensuing conflict could have been easily avoided.

The lesson is clear: "If it's worth saying, it's worth documenting." Formalizing agreements in writing not only provides clarity but also prevents potential conflicts that could impact both business owners and employees. This proactive approach fosters a sense of security and confidence, ultimately contributing to increased business value.

MAXIMIZE BUSINESS VALUE BOOK: https://amzn.to/2AvazXT
MAXIMIZE BUSINESS VALUE PLAYBOOK: https://amzn.to/3Vv2KWq
Schedule a time with Tom: http://calendly.com/tombronson

CONNECT WITH TOM
Facebook: https://www.facebook.com/masterypartners
LinkedIn: https://www.linkedin.com/in/tom-bronson/
Website: https://www.masterypartners.com/

Please be sure to like and follow for more great content to help YOU maximize YOUR business value!

Tom Bronson is the founder and President of Mastery Partners, a company that helps business owners maximize business value, design exit strategy, and transition their business on their terms. Mastery utilizes proven techniques and strategies that dramatically improve business value, developed during Tom's career of 100 business transactions as either a business buyer or seller. As a business owner himself, Tom has been in your situation hundreds of times, and he knows what it takes to create and implement the right strategy. Tom is passionate about helping business owners and has the experience to do it.

© 2023 Mastery Partners, LLC.
Smart Agency Masterclass with Jason Swenk: Podcast for Digital Marketing Agencies
Are you struggling to get a handle on your agency's growth? Have you established the right strategies for formalizing growth beyond referrals? Today's guest grew organically for a long time but eventually saw the need to clearly define his agency's direction and business goals. His team lacked a clear understanding of the overall direction, resulting in the constant need for guidance. He talks about the moment he knew it was time to hire an operator and how having clear processes and systems benefitted his team. He also shares how starting a podcast revived his agency, fueled their social media, and helped him become a better leader.

Paris Childress is the founder and CEO of Hop Online, a performance marketing agency for SaaS companies. Having run his agency for 14 years, he's seen many ups and downs and come out stronger. Today he shares how to identify and navigate the hard times of entrepreneurship, with advice on how to overcome challenges and make informed decisions. Tune in for an inspiring conversation with a seasoned agency owner.

In this episode, we'll discuss:
* Becoming a manager of systems, not people.
* Formalizing growth with a three-fold marketing strategy.
* Establishing a brand and leadership through podcasting.

Subscribe: Apple | Spotify | iHeart Radio | Stitcher | Radio FM

Sponsors and Resources
E2M Solutions: Today's episode of the Smart Agency Masterclass is sponsored by E2M Solutions, a web design and development agency that has provided white-label services for the past 10 years to agencies all over the world. Check out e2msolutions.com/smartagency and get 10% off for the first three months of service.

Learning to Manage Systems Instead of People to Grow Beyond Referrals
Paris is an accidental agency owner who rushed to create a company once he realized invoices were an important part of getting paid. At the time, he didn't fully understand what he was creating but luckily organic growth soon followed. As time went by, Paris realized the agency was not set up to see real growth past initial referrals. The challenge now was to get the systems and the structure in place to grow his small team. It was time to move from managing people to managing systems.

Basically, managing the agency was getting harder because he had to manage more and more people who didn't always know what to do. They needed systems to fall back on. He needed to clearly define their services, implement SOPs, and figure out what they were good at. Step one was hiring an operator. As the agency's visionary, he recognized it was time to get someone to focus on processes and execution. His agency greatly benefitted from that clarity and from setting up the systems for real growth.

More recently, the pandemic presented new growth opportunities as SaaS businesses were red hot during COVID-19. Now, Paris is getting ready to tackle new challenges as SaaS businesses are seeing their valuations go down. It's definitely a tougher environment with marketing budget cuts, so they're focusing on developing their brand.

Formalizing Growth with a Dedicated Team and a Partner for Outbound
One of the first steps toward a more serious focus on their marketing was formalizing a growth team working on generating more leads for the agency. It includes two full-time marketers, one salesperson, and himself. With a decrease of new business coming their way, Paris made the conscious decision to dedicate agency resources to the problem. This helped them avoid downsizing.
With margins way too low for way too long, he had been pondering that option. However, he realized it was not a marketing problem but rather a sales problem. In terms of strategy, Paris was clear he needed to develop three marketing channels: inbound, outbound, and partnerships.

To tackle their lack of an inbound strategy, he launched a podcast, which has become his favorite part of his week. It's still very small, but it's starting to attract the right audience. A podcast can make a huge difference in your brand even with 50 to 100 downloads per week. It's not about building a huge audience, but rather building a relevant audience.

As to the outbound piece, Paris is surprised at how well it's worked. They'd tried it in the past with internal teams, but this year they found a qualified partner to help them on this front. The result has been a steady pipeline with pretty good leads. Of course, it is a longer sales cycle than inbound or referrals. Discovery calls should take a much more consultative approach, guiding the conversation to get prospects talking about their business so they can start to realize they do need help in some areas.

Establishing Thought Leadership Through Podcasting
Starting a podcast has helped revitalize his agency in many ways. For instance, they now get to repurpose that content. It's fueling their social media and creating assets for salespeople. Additionally, and perhaps most importantly for Paris, it brought a lot of clarity for him as an agency owner. Since starting the podcast, he has had exciting new ideas and gets to have interesting conversations with influential entrepreneurs. It has made him more focused and a better leader for his team.

Paris strives to find guests he knows his audience will enjoy and benefit from. It can also be a strategy to invite people he'd like to have as clients, but this aspect is not a primary concern for him. He focuses on generating great conversations. In his opinion, if a guest is ready to work with him, they'll come back based on that first exchange during the podcast.

As a strategy, a podcast will take time to bear fruit. Building your brand and establishing your agency as a leader in your niche takes a lot of effort. In the end, you will have a machine that creates predictability, freedom, and wealth, but it all starts with creating content that helps and using that to build the business over time.

Real Growth Also Relies on Client Retention
One key aspect of building a successful agency is client retention. Once you have the strategies to maintain a full pipeline, you'll also need to work on having sound processes. Separate yourself on your processes and you'll see the positive impact on your client churn rate. It's not always about constantly getting new clients; maintaining those clients also requires a lot of work and will help you experience real growth.

The frustration of constantly having to replace lost clients is not conducive to the growth and success of an agency. If you're experiencing this, it may be time to think about what you can do differently in terms of service delivery and marketing. After all, bringing on two clients and losing one is better than a 1:1 win/loss ratio.

Do You Want to Transform Your Agency from a Liability to an Asset?
If you want to be around amazing agency owners who can see what you may not be able to see and help you grow your agency, go to Agency Mastery 360.
Our agency growth program helps you take a 360-degree view of your agency and gain mastery of the 3 pillar systems (attract, convert, scale) so you can create predictability, wealth, and freedom.
Is there some way we can detect bad behaviour in our AI system without having to know exactly what it looks like? In this episode, I speak with Mark Xu about mechanistic anomaly detection: a research direction based on the idea of detecting strange things happening in neural networks, in the hope that that will alert us to potential treacherous turns. We talk about the core problems of relating these mechanistic anomalies to bad behaviour, as well as the paper "Formalizing the presumption of independence", which formulates the problem of formalizing heuristic mathematical reasoning, in the hope that this will let us mathematically define "mechanistic anomalies".

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast
Episode art by Hamish Doodles: hamishdoodles.com/

Topics we discuss, and timestamps:
0:00:38 - Mechanistic anomaly detection
0:09:28 - Are all bad things mechanistic anomalies, and vice versa?
0:18:12 - Are responses to novel situations mechanistic anomalies?
0:39:19 - Formalizing "for the normal reason, for any reason"
1:05:22 - How useful is mechanistic anomaly detection?
1:12:38 - Formalizing the Presumption of Independence
1:20:05 - Heuristic arguments in physics
1:27:48 - Difficult domains for heuristic arguments
1:33:37 - Why not maximum entropy?
1:44:39 - Adversarial robustness for heuristic arguments
1:54:05 - Other approaches to defining mechanisms
1:57:20 - The research plan: progress and next steps
2:04:13 - Following ARC's research

The transcript: axrp.net/episode/2023/07/24/episode-23-mechanistic-anomaly-detection-mark-xu.html

ARC links:
Website: alignment.org
Theory blog: alignment.org/blog
Hiring page: alignment.org/hiring

Research we discuss:
Formalizing the presumption of independence: arxiv.org/abs/2211.06738
Eliciting Latent Knowledge (aka ELK): alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge
Mechanistic Anomaly Detection and ELK: alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk
Can we efficiently explain model behaviours? alignmentforum.org/posts/dQvxMZkfgqGitWdkb/can-we-efficiently-explain-model-behaviors
Can we efficiently distinguish different mechanisms? alignmentforum.org/posts/JLyWP2Y9LAruR2gi9/can-we-efficiently-distinguish-different-mechanisms
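The paper's central move can be compressed into one formula; this is an editorial gloss of the idea, not notation from the paper. A heuristic estimator presumes quantities are independent unless an explicit argument says otherwise:

\[
\hat{\mathbb{E}}[XY] \;=\; \hat{\mathbb{E}}[X]\,\hat{\mathbb{E}}[Y] \;+\; \widehat{\operatorname{Cov}}(X, Y),
\]

where \(\widehat{\operatorname{Cov}}(X, Y)\) defaults to zero and is revised only when a heuristic argument exhibits a correlation. The hope discussed in the episode is that behaviour explained by the "normal" arguments counts as normal, while behaviour that requires a new argument registers as a mechanistic anomaly.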
Many organizations want to formalize their thought leadership efforts and take them to the next level. But how do you turn a "casual thought leadership presence" into something more? Today, we discuss ways to harness a "casual" message and turn it into powerful thought leadership that you can take to scale. Our guests are Stacey Flax, Thought Leadership Communication Manager, and Carlos Williams, Applications Development Manager, from Hach, an organization that focuses on water analysis. Stacey and Carlos share how people at Hach had been doing thought leadership on their own before Hach chose to formalize it and amplify the expertise the organization had to offer. Carlos explains how part of the job of a thought leader is to convey your message in relatable terms but also somehow make it fun, through story or anecdotes. Stacey further explains the need to take a step back and think about your message. Who are you trying to reach? What's the core essence of your insights? They discuss how to convey your thought leadership message in different ways, using different media forms and different techniques. We also learn how Stacey aided in putting the formal structure in place, getting a baseline of subject matter experts, cataloging all of the previously produced content, and gaining further support from leadership by being able to show the impact thought leadership was having. As a bonus, we take a look back at John Snow's discovery of cholera in the water in the 1850s - perhaps some of the first thought leadership on the topic of water analysis - and how those insights still affect us today.

Three Key Takeaways:
* When formalizing thought leadership within an organization, start by discovering the experts in your organization, and curate the material they have already produced into a usable catalog.
* When creating thought leadership for technical or niche topics, it is important to use storytelling to spice up the content and make it somehow relatable.
* Ensure any information you distribute in your thought leadership aligns with the organization's strategy.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ARC is hiring theoretical researchers, published by Paul Christiano on June 12, 2023 on The AI Alignment Forum. The Alignment Research Center's Theory team is starting a new hiring round for researchers with a theoretical background. Please apply here.

What is ARC's Theory team?
The Alignment Research Center (ARC) is a non-profit whose mission is to align future machine learning systems with human interests. The high-level agenda of the Theory team (not to be confused with the Evals team) is described by the report on Eliciting Latent Knowledge (ELK): roughly speaking, we're trying to design ML training objectives that incentivize systems to honestly report their internal beliefs. For the last year or so, we've mostly been focused on an approach to ELK based on formalizing a kind of heuristic reasoning that could be used to analyze neural network behavior, as laid out in our paper on Formalizing the presumption of independence. Our research has reached a stage where we're coming up against concrete problems in mathematics and theoretical computer science, and so we're particularly excited about hiring researchers with relevant background, regardless of whether they have worked on AI alignment before. See below for further discussion of ARC's current theoretical research directions.

Who is ARC looking to hire?
Compared to our last hiring round, we have more of a need for people with a strong theoretical background (in math, physics or computer science, for example), but we remain open to anyone who is excited about getting involved in AI alignment, even if they do not have an existing research record. Ultimately, we are excited to hire people who could contribute to our research agenda. The best way to figure out whether you might be able to contribute is to take a look at some of our recent research problems and directions:
* Some of our research problems are purely mathematical, such as these matrix completion problems – although note that these are unusually difficult, self-contained and well-posed (making them more appropriate for prizes).
* Some of our other research is more informal, as described in some of our recent blog posts such as Finding gliders in the game of life.
* A lot of our research occupies a middle ground between fully-formalized problems and more informal questions, such as fixing the problems with cumulant propagation described in Appendix D of Formalizing the presumption of independence.

What is working on ARC's Theory team like?
ARC's Theory team is led by Paul Christiano and currently has 2 other permanent team members, Mark Xu and Jacob Hilton, alongside a varying number of temporary team members (recently anywhere from 0–3). Most of the time, team members work on research problems independently, with frequent check-ins with their research advisor (e.g., twice weekly). The problems described above give a rough indication of the kind of research problems involved, which we would typically break down into smaller, more manageable subproblems. This work is often somewhat similar to academic research in pure math or theoretical computer science. In addition to this, we also allocate a significant portion of our time to higher-level questions surrounding research prioritization, which we often discuss at our weekly group meeting.
Since the team is still small, we are keen for new team members to help with this process of shaping and defining our research. ARC shares an office with several other groups working on AI alignment such as Redwood Research, so even though the Theory team is small, the office is lively with lots of AI alignment-related discussion.

What are ARC's current theoretical research directions?
ARC's main theoretical focus over the last year or so has been on preparing the paper Formalizing the presumption of independence and on follo...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 'Fundamental' vs 'applied' mechanistic interpretability research, published by Lee Sharkey on May 23, 2023 on LessWrong. When justifying my mechanistic interpretability research interests to others, I've occasionally found it useful to borrow a distinction from physics and distinguish between 'fundamental' versus 'applied' interpretability research. Fundamental interpretability research is the kind that investigates better ways to think about the structure of the function learned by neural networks. It lets us make new categories of hypotheses about neural networks. In the ideal case, it suggests novel interpretability methods based on new insights, but is not the methods themselves. Examples include: A Mathematical Framework for Transformer Circuits (Elhage et al., 2021) Toy Models of Superposition (Elhage et al., 2022) Polysemanticity and Capacity in Neural Networks (Scherlis et al., 2022) Interpreting Neural Networks through the Polytope Lens (Black et al., 2022) Causal Abstraction for Faithful Model Interpretation (Geiger et al., 2023) Research agenda: Formalizing abstractions of computations (Jenner, 2023) Work that looks for ways to identify modules in neural networks (see LessWrong 'Modularity' tag). Applied interpretability research is the kind that uses existing methods to find the representations or circuits that particular neural networks have learned. It generally involves finding facts or testing hypotheses about a given network (or set of networks) based on assumptions provided by theory. Examples include Steering GPT-2-XL by adding an activation vector (Turner et al., 2023) Discovering Latent Knowledge in Language Models (Burns et al., 2022) The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable (Millidge et al., 2022) In-context Learning and Induction Heads (Olsson et al., 2022) We Found An Neuron in GPT-2 (Miller et al., 2023) Language models can explain neurons in language models (Bills et al., 2023) Acquisition of Chess Knowledge in AlphaZero (McGrath et al., 2021) Although I've found the distinction between fundamental and applied interpretability useful, it's not always clear cut: Sometimes articles are part fundamental, part applied (e.g. arguably 'A Mathematical Framework for Transformer Circuits' is mostly theoretical, but also studies particular language models using new theory). Sometimes articles take generally accepted 'fundamental' -- but underutilized -- assumptions and develop methods based on them (e.g. Causal Scrubbing, where the key underutilized fundamental assumption was that the structure of neural networks can be well studied using causal interventions). Other times the distinction is unclear because applied interpretability feeds back into fundamental interpretability, leading to fundamental insights about the structure of computation in networks (e.g. the Logit Lens lends weight to the theory that transformer language models do iterative inference). Why I currently prioritize fundamental interpretability Clearly both fundamental and applied interpretability research are essential. We need both in order to progress scientifically and to ensure future models are safe. But given our current position on the tech tree, I find that I care more about fundamental interpretability. 
The reason is that current interpretability methods are unsuitable for comprehensively interpreting networks on a mechanistic level. So far, our methods only seem to be able to identify particular representations that we look for or describe how particular behaviors are carried out. But they don't let us identify all representations or circuits in a network or summarize the full computational graph of a neural network (whatever that might mean). Let's call the ability to do these things 'comprehensive interpretability'. We ...
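The Logit Lens mentioned above has a compact concrete form; here is a minimal sketch (my own illustration, assuming the Hugging Face transformers library and the small GPT-2 checkpoint; this is not code from the post):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The Eiffel Tower is in", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# Logit lens: decode every layer's residual stream through the final layer
# norm and the unembedding, as if the network stopped computing there.
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    print(layer, repr(tok.decode(logits.argmax().item())))
```

If the top token stabilizes in the middle layers and then merely sharpens, that is the "iterative inference" picture the post alludes to.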
00:00 - Intro 01:40 - Context for Kyle's story 3:15 - 'Why me for blessing?' 10:30 - Rejecting a childhood faith 13:07 - 'A childhood of sunshine & rainbows' - Kyle's middle years 21:24 - Formalizing an old commitment & a prayer that changed Kyle's life 25:00 - A big guy feeling small & God's overwhelming love 27:11 - What's changed? 35:00 - Lordship & choosing Jesus when we don't know what it looks like 35:46 - Q & A: "Choosing the right path in the face of suffering" 39:54 - Wise words from Mother T 51:47 - Priorities & life moving forward for Kyle 55:27 - Closing Q, carrying legacy, leaving legacy 59:58 - Closing thoughts & practical advice 1:06:58 - Outro References: "Renovation of the Heart" - Dallas Willard, "God Has a Name" - John Mark Comer, "Invitation to a Journey" - Robert Mulholland @kyle_sinclair11 Ask us some Q&A on Insta: https://www.instagram.com/firestarters_for_jesus/ GO GET SOME MERCH! www.firestartersforjesus.com/shop Send us some mail: Address it to Fire Starters, 726 W Francis St, Aspen, CO 81611, United States Go watch the pod: https://www.youtube.com/channel/UCFJDeoosi4NVIQRZ-9DvC6w
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Formalizing the "AI x-risk is unlikely because it is ridiculous" argument, published by Christopher King on May 3, 2023 on LessWrong. There is a lot of good writing on technical arguments against AI x-risk (such as Where I agree and disagree with Eliezer (which mostly argues for more uncertainty) and others). However, in the wider world the most popular argument is more of the form "it is ridiculous" or "it is sci-fi" or some sort of other gut feeling. In particular, I think this is the only way people achieve extremely low credence on AI doom (low enough that they worry about other disasters instead). Although this seems like a fallacy, in this post I will attempt to formalize this argument. Not only is it good, but I think it turns out to be extremely strong! In my judgement, I still find the arguments for x-risk stronger, or at least balanced with the "it is ridiculous" argument, but it still deserves serious study. In particular, I think analyzing and critiquing it should become a part of the AI public discourse. For example, I think there are flaws in the argument that, when revealed, would cause people to become more worried about AI x-risk. I do not quite know what these flaws are yet. In any case, I hope this post will allow us to start studying the argument. The argument is actually a bunch of separate arguments that tend to be lumped together into one "it is ridiculous" argument. For the purposes of this article, Bob is skeptical of AI x-risk and Alice argues in favor of it. Existential risk would stop a 10,000-year trend Forgetting all of human history, Bob first assumes our priors for the long-term future are very much human-agnostic. The vast majority of outcomes have no humans, but are just arbitrary arrangements of matter (paperclips, diamonds, completely random, etc.). So our argument will need to build a case for the long-term future actually being good for humans, despite this prior. Next, Bob takes into account human history. The total value of the stuff humans consume tends to go up. In particular, it seems to follow a power law, which is a straight line on a log-log graph. Which means Bob has the gods of straight lines on his side! This should result in a massive update of the priors towards "the future will have lots of things that humans like". Most people of course don't track economics or think about power laws, but they have an intuitive sense of human progress. This progress is pretty robust to a wide variety of disasters, but not to x-risk, and thus the model is evidence that x-risk simply won't occur. Clever arguments fail in unexpected ways However, trends do break sometimes, and AI seems pretty dangerous. In fact, Alice has very good technical theories of why it is dangerous. But if you go through history, you find that even very good theories are hit and miss. It is good enough to locate the hypothesis, but still has a decent chance of being wrong. Alice might say "but if the theory fails, that might just mean AI is bad to humans in a way I didn't expect, not that AI is safe". But our prior does say AI is safe, thanks to the gods of straight lines. And Alice does not have a god of straight lines for AI doom; if anything, AI has tended to get more useful to humans over time, not less useful.
Out of the ideas Bob has heard, sci-fi-ness and bad-ness are correlated Science fiction is a pretty good predictor of the future (in that future progress has often been predicted by some previous sci-fi story). However, if Bob discovers that a new idea he heard previously occurred in sci-fi, on average this provides evidence against the idea. That's because if an idea is both bad and not from sci-fi, Bob is unlikely to hear it. And thus being sci-fi and being bad become correlated conditioned on Bob having heard about it. Bob should partially di...
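The selection effect in this last step is a Berkson's-paradox-style argument; a minimal simulation (my own illustration; every probability below is made up) shows how conditioning on "Bob heard of it" manufactures the correlation:

```python
import random

random.seed(0)

# Illustrative assumption: sci-fi-ness and badness are independent a priori,
# but bad ideas that never appeared in sci-fi rarely reach Bob's ears.
ideas = [
    {"scifi": random.random() < 0.5, "bad": random.random() < 0.5}
    for _ in range(100_000)
]

def bob_hears(idea):
    # Good ideas spread on their merits; bad ideas mostly spread via sci-fi.
    if not idea["bad"]:
        return random.random() < 0.8
    return random.random() < (0.5 if idea["scifi"] else 0.05)

heard = [i for i in ideas if bob_hears(i)]

def p_bad(pool):
    return sum(i["bad"] for i in pool) / len(pool)

print("P(bad | heard, sci-fi):    ", round(p_bad([i for i in heard if i["scifi"]]), 3))
print("P(bad | heard, not sci-fi):", round(p_bad([i for i in heard if not i["scifi"]]), 3))
# The first probability comes out far higher: among ideas Bob has heard,
# sci-fi origin is evidence of badness, even though the traits started independent.
```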
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prizes for matrix completion problems, published by paulfchristiano on May 3, 2023 on LessWrong. Here are two self-contained algorithmic questions that have come up in our research. We're offering a bounty of $5k for a solution to either of them—either an algorithm, or a lower bound under any hardness assumption that has appeared in the literature. Question 1 (existence of PSD completions): given m=Ω(n) entries of an n×n matrix, including the diagonal, can we tell in time ~O(nm) whether it has any (real, symmetric) positive semidefinite completion? Proving that this task is at least as hard as dense matrix multiplication or PSD testing would count as a resolution. Question 2 (fast “approximate squaring”): given A∈R^(n×n) and a set of m=Ω(n) entries of AA^T, can I find some PSD matrix that agrees with AA^T in those m entries in time ~O(nm)? We'll pay $5k for a solution to either problem. The offer is open for each problem for 3 months or until the problem gets solved (whichever happens first). Winners are welcome to publish solutions independently. Otherwise, if the result ends up being a significant part of a paper, we'll invite them to be a coauthor. We'll also consider smaller prizes for partial progress, or anything that we find helpful for either solving the problem or realizing we should give up on it. To understand the motivation for these questions, you can read our paper on Formalizing the presumption of independence and in particular Appendix D.7.2. ARC is trying to find efficient heuristic estimators as a formalization of defeasible reasoning about quantities like the variance of a neural network's output. These two questions are very closely related to one of the simplest cases where we haven't yet found any reasonable linear time heuristic estimator. We don't expect to receive many incorrect proposals, but if we receive more than 5 we may start applying a higher standard in order to save our time. If we can't understand a solution quickly, we may ask you to provide more details, and if we still can't understand it we may reject it. We expect a correct solution to be about as clear and easy to verify as a paper published at STOC. For both problems, it's OK if we incorrectly treat a matrix as PSD as long as all of its eigenvalues are at least −ε for a small constant ε. ~O hides polylogarithmic factors in n, ε, and the max matrix entry. Feel free to ask for other clarifications on our question on Math Overflow, on Facebook, or by email. To submit a solution, send an email to prize@alignment.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
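For readers who prefer notation to prose, the two problems restate as follows (this is purely a re-typesetting of what the episode already says, with the clarifications from the cross-posted version folded in):

```latex
\textbf{Question 1 (existence of PSD completions).} Given $m = \Omega(n)$ entries of an
$n \times n$ matrix, including the diagonal, decide in time $\tilde{O}(nm)$ whether it
admits a real, symmetric, positive semidefinite completion. (Proving the task at least as
hard as dense matrix multiplication or PSD testing also counts as a resolution.)

\textbf{Question 2 (fast approximate squaring).} Given $A \in \mathbb{R}^{n \times n}$ and
a set of $m = \Omega(n)$ entries of $AA^{T}$, find in time $\tilde{O}(nm)$ some PSD matrix
that agrees with $AA^{T}$ on those $m$ entries. Here $\tilde{O}$ hides polylogarithmic
factors in $n$, $\varepsilon$, and the maximum matrix entry, and a matrix may be treated
as PSD whenever all its eigenvalues are at least $-\varepsilon$ for a small constant
$\varepsilon$.
```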
The common saying is that the only thing you can get two dog trainers to agree on is that the third trainer doesn't know what they're doing. There are some topics or subjects in the dog world that all people just aren't going to agree on. That shouldn't keep people from having open and honest conversations about them, though. After Jeremy Moore (Dogbone Hunter) listened to the recent episode 172 I did with Bob Owens (Loneduck Outfitters), he reached out with some concerns and thoughts that warranted further discussion. Both trainers agreed to come on and try to lead by example by putting their cards on the table and openly discussing Force Fetch and E-collars for everyone to hear, in hopes that it provides some value overall to other people that may be conflicted on some of the same topics! -What is "Trainer Fights" intro -What is Force Fetch? -"Learning to learn" -Formalizing the retrieve with hold conditioning and building off obedience -Hold conditioning -What if a dog drops an object AFTER hold conditioning? -Avoid creating bad habits in the first place -Is it the same number of reps whether it's spaced out over 16 months as opposed to speeding it up in a 4-month program? -How influencers influence the average DIY trainer at home -We create these issues that we are fixing via Force Fetch later down the road -"Just a hunting dog" -Hunting vs. testing scenarios -How to handle a "no go" -Consequences vs. Incentives -"Electricity" vs. "Stimulation" -The "right tool" means absolutely nothing in the "wrong hands" -"The score doesn't matter at the end of the first quarter!" -You don't have to do MORE. Just do less, but better. -- Presented By: Standing Stone Supply "The perfect course for someone getting a new puppy!" DT Systems "Dog Tested. Dog Tough" North American Pudelpointer Society "More than just a breed club!" -- Other Partners: Eukanuba Ugly Dog Whiskey Bird Dog Society -- GDIY Links: Patreon (Extended Outro Subject: Thoughts on Puppy First "Hunts") Instagram Facebook Website
170 | Formalizing Design with Gabrielle Mérite and Alan Wilson
We take a step back from the coverage of the economic crisis and ongoing negotiations and focus on emerging businesses that are the future of the economy. In this episode, Uzair talks to Omer Khan, founder and CEO of PostEx, Pakistan's largest e-commerce service provider. PostEx offers easy access to capital with embedded logistics, and Uzair and Omer talk about the company's vision, what it takes to solve the issues faced by small businesses, and how young adults can succeed at startups and founding their own businesses. Chapters: 0:00 Introduction 1:20 What is PostEx? 10:45 Credit, trust, and small business 22:30 Formalizing and digitizing payments 29:30 Why is cash king? 35:30 Failing and entrepreneurship 39:50 Where is the startup ecosystem headed?
In this episode we talk with Gerwin Klein about the formal verification of the seL4 microkernel, which was done using Isabelle at NICTA / Data61 in Australia. We also talk a little about his PhD project verifying a piece of the Java Virtual Machine. Links: Gerwin's Twitter Gerwin's Website ProofCraft's Website
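seL4's proofs are written in Isabelle/HOL and run to hundreds of thousands of lines of proof script; purely to give a flavor of what machine-checked proof feels like, here is a toy example (my own illustration, in Lean 4 rather than Isabelle, and nothing like the scale of seL4):

```lean
-- Toy machine-checked proofs: the kernel checks every step, nothing on trust.

-- Appending to the empty list is the identity, by definitional unfolding.
theorem nil_append' (α : Type) (xs : List α) : ([] : List α) ++ xs = xs := rfl

-- Associativity of append, by structural induction on the first list.
theorem append_assoc' (α : Type) (xs ys zs : List α) :
    (xs ++ ys) ++ zs = xs ++ (ys ++ zs) := by
  induction xs with
  | nil => rfl
  | cons x xs ih => simp [List.cons_append, ih]
```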
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Research agenda: Formalizing abstractions of computations, published by Erik Jenner on February 2, 2023 on The AI Alignment Forum. Big thanks to Leon Lang, Jérémy Scheurer, Adam Gleave, and Shoshannah Tekofsky for their feedback on a draft of this post, to Euan McLean (via FAR AI) for his feedback and a lot of help with editing, and to everyone else who discussed this agenda with me, in particular Johannes Treutlein for frequent research check-ins! Summary My current agenda is to develop a formal framework for thinking about abstractions of computations. These abstractions are ways to partially describe the “algorithm” a neural network or other computation is using, while throwing away irrelevant details. Ideally, this framework would tell us 1) all possible abstractions of a given computation, and 2) which of these are most useful (for a specific purpose, such as detecting deception). “Useful” doesn't necessarily mean “easily human-understandable”—I see that as an open question. I anticipate the main applications to alignment to be automating interpretability or mechanistic anomaly detection. There are also potential connections to other alignment topics, such as natural abstractions or defining terms like “search process”. This agenda is at an early stage (I have been thinking about it for ~2 months, and about related topics for another ~2 months before that). So feedback now could change my future direction. I also list a few potential projects that seem self-contained. If you're interested in working on any of those, or collaborating in some other way, please get in touch! I encourage you to skip around and/or only read parts of the post. Here's an overview: Introduction and Q&A mostly talk about motivation and connections to alignment. What are abstractions of computations? discusses my current guess as to what the framework should look like. There's a list of Some potential projects. Appendix: Related work gives a quick overview of relevant work in academia, and the relation between this agenda and other alignment research. This post doesn't contain any actual theorems or experiments, so if you're only interested in that, you can stop reading. Introduction Humans can't just look at the weights of a neural network and tell what it's doing. There are at least two reasons for this: Neural network weights aren't a format we're great at thinking about. Neural networks are often huge. The second point would likely apply to any system that does well on complicated tasks. For example, neural networks are decision trees, but this doesn't mean we can look at the decision tree corresponding to a network and understand how it works. To reason about these systems, we will likely have to simplify them, i.e. throw away details that are irrelevant for whatever we want to find out. In other words, we are looking for abstractions of computations (such as neural networks). Abstractions are already how we successfully reason about many other complicated systems. For example, if you want to understand the Linux kernel, you wouldn't start by reading the entire source code top to bottom. Instead, you'd try to get a high-level understanding—what are the different modules, how do they interact? Similarly, we use pseudocode to more easily communicate how an algorithm works, abstracting away low-level details.
If we could figure out a general way to find useful abstractions of computations, or at least of neural networks, perhaps we could apply this to understand them in a similar way. We could even automate this process and mechanically search for human-understandable abstractions. Making things easier to understand for humans isn't the only application of abstractions. For example, abstractions have been used for more efficient theorem proving and in model checking (e.g. abstract interpretation). ...
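To make "abstractions of computations" slightly more concrete, here is a minimal toy sketch (my own illustration, not from the agenda): an abstraction maps concrete states to coarser ones, and it is faithful when abstracting-then-stepping agrees with stepping-then-abstracting.

```python
# A toy "computation": repeated application of a step function to a state.
# An abstraction pairs a map from concrete to abstract states with an
# abstract step function; it is faithful (a commutative square) when
# abstract_step(abstract(s)) == abstract(step(s)) for all reachable states.

def step(n: int) -> int:
    """Concrete computation: one step of a counter that adds 3."""
    return n + 3

def abstract(n: int) -> int:
    """Throw away detail: keep only the parity of the state."""
    return n % 2

def abstract_step(p: int) -> int:
    """Abstract computation on parities: adding 3 flips parity."""
    return (p + 1) % 2

# Check faithfulness on a range of states: both paths around the square agree.
assert all(abstract_step(abstract(n)) == abstract(step(n)) for n in range(1000))

# An unfaithful abstraction (e.g. a wrong abstract step) would fail the same
# check immediately; the research question is how to find and rank the
# faithful abstractions of something as large as a neural network.
```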
Have you used the Python Read-Eval-Print Loop (REPL) to explore the language and learn about how it operates? Would it help if it provided syntax highlighting, definitions, and code completion and behaved more like an IDE? This week on the show, Christopher Trudeau is here, bringing another batch of PyCoder's Weekly articles and projects.
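For the flavor of REPL-driven exploration the episode is about, an ordinary session in the stock interpreter looks like this (illustrative transcript; the exact dir() output can vary by Python version, and the tools discussed layer highlighting and completion on top of exactly this workflow):

```python
>>> import textwrap
>>> [name for name in dir(textwrap) if not name.startswith("_")]
['TextWrapper', 'dedent', 'fill', 'indent', 'shorten', 'wrap']
>>> help(textwrap.shorten)   # prints the docstring without leaving the terminal
>>> textwrap.shorten("The Read-Eval-Print Loop rewards curiosity", width=25)
'The Read-Eval-Print [...]'
```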
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we efficiently explain model behaviors?, published by Paul Christiano on December 16, 2022 on The AI Alignment Forum. ARC's current plan for solving ELK (and maybe also deceptive alignment) involves three major challenges: Formalizing probabilistic heuristic arguments as an operationalization of “explanation” Finding sufficiently specific explanations for important model behaviors Checking whether particular instances of a behavior are “because of” a particular explanation All three of these steps are very difficult, but I have some intuition about why steps #1 and #3 should be possible and I expect we'll see significant progress over the next six months. Unfortunately, there's no simple intuitive story for why step #2 should be tractable, so it's a natural candidate for the main technical risk. In this post I'll try to explain why I'm excited about this plan, and why I think that solving steps #1 and #3 would be a big deal, even if step #2 turns out to be extremely challenging. I'll argue: Finding explanations is a relatively unambitious interpretability goal. If it is intractable then that's an important obstacle to interpretability in general. If we formally define “explanations,” then finding them is a well-posed search problem and there is a plausible argument for tractability. If that tractability argument fails then it may indicate a deeper problem for alignment. This plan can still add significant value even if we aren't able to solve step #2 for arbitrary models. I. Finding explanations is closely related to interpretability Our approach requires finding explanations for key model behaviors like “the model often predicts that a smiling human face will appear on camera.” These explanations need to be sufficiently specific that they distinguish (the model actually thinks that a human face is in front of the camera and is predicting how light reflects off of it) from (the model thinks that someone will tamper with the camera so that it shows a picture of a human face). Our notion of “explanation” is informal, but I expect that most possible approaches to interpretability would yield the kind of explanation we want (if they succeeded at all). As a result, understanding when finding explanations is intractable may also help us understand when interpretability is intractable. As a simple caricature, suppose that we identify a neuron representing the model's beliefs about whether there is a person in front of the camera. We then verify experimentally that (i) when this neuron is on it leads to human faces appearing on camera, (ii) this neuron tends to fire under the conditions where we'd expect a human to be in front of the camera. I think that finding this neuron is the hard part of explaining the face-generating-behavior. And if this neuron actually captures the model's beliefs about humans, then it will distinguish (human in front of camera) from (sensors tampered with). So if we can find this neuron, then I think we can find a sufficiently specific explanation of the face-generating-behavior. In reality I don't expect there to be a “human neuron” that leads to such a simple explanation, but I think the story is the same no matter how complex the representation is.
If beliefs about humans are encoded in a direction then both tasks require finding the direction; if they are a nonlinear function of activations then both tasks require understanding that nonlinearity; and so on. The flipside of the same claim is that ARC's plan effectively requires interpretability progress. From that perspective, the main way ARC's research can help is by identifying a possible goal for interpretability. By making a goal precise we may have a better chance of automating it (by applying gradient descent and search, as discussed in section III), and even if we can't automate it then...
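The "beliefs encoded in a direction" case has a standard concrete form: fit a linear probe on activations and read off the direction it finds. A minimal sketch (my own toy illustration on synthetic data, not ARC's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 512-dim "activations" in which a hidden belief ("a face is
# really in front of the camera") is linearly encoded along one direction.
d = 512
true_direction = rng.normal(size=d)
true_direction /= np.linalg.norm(true_direction)

def make_activations(belief: int, n: int) -> np.ndarray:
    noise = rng.normal(size=(n, d))
    return noise + 2.0 * belief * true_direction  # belief is 0 or 1

X = np.vstack([make_activations(0, 500), make_activations(1, 500)])
y = np.array([0] * 500 + [1] * 500)

# Fit a linear probe by least squares; its weight vector estimates the direction.
w, *_ = np.linalg.lstsq(X - X.mean(axis=0), y - y.mean(), rcond=None)
w /= np.linalg.norm(w)

# Large in this toy setup: the probe recovers the encoded belief direction,
# which could then be used to separate the two mechanisms behind a behavior.
print("alignment with true direction:", abs(w @ true_direction))
```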
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ARC paper: Formalizing the presumption of independence, published by Erik Jenner on November 20, 2022 on The AI Alignment Forum. (I did not have anything to do with this paper and these are just my own takes.) The Alignment Research Center recently published their second report, Formalizing the presumption of independence. While it's not explicitly about AI alignment, it's probably still interesting for some people here. Summary The paper is about "heuristic arguments". These are similar to proofs, except that their conclusions are not guaranteed to be correct and can be overturned by counterarguments. Mathematicians often use these kinds of arguments, but in contrast to proofs, they haven't been formalized. The paper mainly describes the open problem of finding a good formalization of heuristic arguments. They do describe one attempt, "cumulant propagation", in Appendix D, but point out it can behave pathologically. So what's the "presumption of independence" from the title? Lots of heuristic arguments work by assuming that some quantities are independent to simplify things, and that's what the paper focuses on. Such an argument can be overturned by showing that there's actually some correlation we initially ignored, which should then lead to a more sophisticated heuristic argument with a potentially different conclusion. What does this have to do with alignment? The paper only very briefly mentions alignment (in Appendix F), more detailed discussion is planned for the future. But roughly: Avoiding catastrophic failures. Heuristic arguments can let us better estimate the probability of rare failures, or failures which occur only on novel distributions where we cannot easily draw samples. This can be used during validation to estimate risk, or potentially during training to further reduce risk. Eliciting latent knowledge. Heuristic arguments may let us see “why” a model makes its predictions. We could potentially use them to distinguish cases where similar behaviors are produced by very different mechanisms—for example distinguishing cases where a model predicts that a smiling human face will show up on camera because it predicts there will actually be a smiling human in the room, from cases where it makes the same prediction because it predicts that the camera will be tampered with. [...] Neither of these applications is straightforward, and it should not be obvious that heuristic arguments would allow us to achieve either goal. [...] Heuristic arguments can be seen as somewhere between interpretability and formal verification: unlike interpretability, heuristic arguments are meant to be machine-checkable and don't have to be human-understandable. But unlike formal proofs, they don't require perfect certainty and might be much easier to find. Readers here might also be reminded of Logical Induction. This paper is trying to do something somewhat different though: [Approaches to logical uncertainty] have primarily focused on establishing coherence conditions and on capturing inductive reasoning, i.e. ensuring that a reasoner eventually successfully predicts φ(n) given observations of φ(1), φ(2), ..., φ(n − 1). These systems would not automatically recognize intuitively valid heuristic arguments [...], although they would eventually learn to trust these arguments after observing them producing good predictions in practice.
Indeed, we can view ourselves as reasoners in exactly this situation, trying to understand and formalize a type of reasoning that appears to often make good predictions in practice. Formalizations of inductive reasoning may help clarify the standards we should use for evaluating a proposed heuristic estimator, but do not constitute a good heuristic estimator themselves. So should you read the paper? Given it's a 60-page report (though most of that's...
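The simplest instance of presuming independence (my own illustration, echoing the paper's motif of defeasible estimates that counterarguments can revise): estimate the variance of a sum by assuming the summands do not covary, then let a counterargument refine the estimate.

```latex
\text{Exact: } \operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y)
  + 2\operatorname{Cov}(X, Y).

\text{Heuristic estimate (presume independence): }
  \widetilde{\operatorname{Var}}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y).

\text{A counterargument exhibits } \operatorname{Cov}(X, Y) \neq 0
\text{ and yields the refined estimate }
  \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X, Y).
```

Roughly speaking, the "cumulant propagation" attempt described in Appendix D plays this same game with higher-order statistics.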
We spoke with 100 marketing leaders and asked "What's the most underrated tactic in B2B Marketing?" In this roundtable discussion, Benji, James, and Dan break down the findings. Discussed in this episode: - Tips for creating deep, personal, and fun marketing - The power of relationship in marketing - Formalizing a structure to collect insights from your ideal buyers
So, you're longing to launch your own venture but don't know how to get started? Health and business coach Shannon Petteruti has the answers and is sharing them on this episode of She Turned Entrepreneur. Step 1: Locate your passion. Having that vision will make even the toughest tasks and longest days a labor of love. Step 2: Habit. Habit. Habit. Establish foundational practices — then stick to them! It's all about staying focused, which instills the confidence to take those necessary chances that are key to growth. This dynamic entrepreneur has done it herself and knows what it takes to launch a successful enterprise. If we're not taking risks we're not moving forward, says Shannon: "You have to believe in who you are and then just put those pieces in place." And when some of those pieces don't work, that's just fine too. Do a post-mortem and move on. It's all part of the process! Shannon offers advice for aspiring entrepreneurs as well as actionable ideas about habits that will serve any business from inception through development and maturity. The work is endless but the rewards tremendous. "It's just so fun. I love having your thoughts, your input and your work 20 hours a day all going to your outcome and not someone else you're working for," says Shannon, also the author of "IV Vitamin Regulations: Your Guide for USP 797 Pharmacy Compliance." Join us for a lively chat with a coach who will inspire you to kick procrastination to the curb! It all starts with the non-negotiable! Inspired by Shannon's energy and ideas? You might want to book a discovery call to find out more. Just click here. Click here to listen to, rate and review this or previous She Turned Entrepreneur episodes.
Here are key takeaways from the conversation:
· Why is going it alone so much fun? All the effort comes back directly to benefit our own venture (and not someone else's).
· Take each task as it comes and complete it. Keep the focus! You'll get to the fun stuff eventually and be in a place to really enjoy it because your house is in order!
· Create goals and a checklist. Then institute the habit of consistently checking things off.
· Efforts are going to fail, but that can easily be turned into an opportunity. Do the post-mortem, then do better next time!
· There's nothing like the independence in owning your own business, but you don't have to go it alone. Seek advice and encouragement from a coach or some other supportive resource.
· Envision who you are, then methodically institute habits to put the pieces in place.
Here's a quick look into the episode:
· About the journey Shannon has taken from her childhood in New Jersey to a business degree in college to motherhood, which is where her entrepreneurial flair emerged.
· Shannon has just launched two new offshoots:
 o Formalizing her business coaching commitment.
 o Authoring a book on 797 pharmacy regulations ("IV Vitamin Regulations: Your Guide for USP 797 Pharmacy Compliance") as well as a course that decodes the regulations and requirements for small business owners seeking compliance.
· Shannon's focus is primarily (but not exclusively) on women with health-related businesses. Many are nurse practitioners going out on their own.
· Advice for small businesses in their formative stages:
 o Keep the focus one week out — and no more.
 o Drill down on the most pressing among the million things that need to be done.
 o Block non-negotiable time for tasks — and then execute accordingly.
 o Kick procrastination to the curb! If it needs to be done sooner or later, make it sooner! Especially if it's something you don't want to do. You'll feel better immediately!
· How Shannon works with clients:
 o She coaches exclusively one-to-one.
 o The process is enhanced by its interactive and intimate nature.
· Shannon's best advice for staying motivated on those inevitable long days:
 o It's all about habits. Put in place rituals around writing up goals and marching through that to-do list.
 o Don't rely on motivation. It can come and go. Grit is what's often required on the daily.
· Shannon's biggest lesson learned: Look at failures in a positive light. They are your friend!
· Advice for rising entrepreneurs:
 o Figure out what you're passionate about so that those 14-hour days (at least mostly) feel like a labor of love.
 o Consider starting a side hustle while getting your venture off the ground.
 o Seek advice and support through a coach or your network.
· Recommended reading:
 o "Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones," by James Clear.
Is it time to consider an advisory board? And if so, why? What skills do you need? And how do you find the right people who are willing to help? Aashish Agarwaal, founder and chairman of the Enerji Group, and Alexey Volynets of the International Finance Corporation share entrepreneurial perspectives and corporate governance advice to help you figure out what's right for your company. Rare is the entrepreneur who is an expert at everything. Turning to others outside your company for advice can be essential for success. Formalizing that process with an advisory board is helping Aashish Agarwaal strategically transform Enerji Group, the digital publishing enterprise he founded in India. But it took a while to figure out exactly what he needed, who could help, and how to run the board effectively. Alexey Volynets understands what Agarwaal had to go through to create the ideal advisory board. As an expert in corporate governance at the International Finance Corporation, he has been teaching companies about corporate governance for years. The most common manifestation of corporate governance is a board — fiduciary or advisory. Whereas fiduciary boards have financial liabilities, advisory boards are simply there to provide expertise that you may be lacking. “As you are growing, and when you are on the top of the world, it's important to have a check,” Volynets explains. “External advisors, especially very independent voices, will ask you the right questions and will challenge your assumptions.” Agarwaal figured out who he needed by first identifying the skill gaps in his company, what strategic initiatives he needed help with, and how often. He suggests: “I would say first, what are the gaps, and second, do you need that help on a consistent basis or an intermittent basis? Because again, you have to decide how much investment you're going to make in it.” Finding the right people isn't easy, either. Volynets suggests the best place to look is your own networks, to find business people you trust who meet the criteria you need. But Volynets cautions entrepreneurs to avoid adding friends, suppliers, contractors, etc. to an advisory board, even if they have the requisite skills. He says: “The most important characteristic is emotional independence.” Listen to Aashish's first-hand experience on creating an advisory board and Alexey's insights on how to do it strategically and successfully when it's time to tap into the experience and expertise of other business leaders.
This is Cognitive Revolution, my show about the personal side of the intellectual journey. Each week, I interview an eminent scientist, writer, or academic about the experiences that shaped their ideas. The show is available wherever you listen to podcasts. Tom Griffiths is Professor of Psychology and Computer Science at Princeton University, where he directs the Computational Cognitive Science Lab. Tom uses algorithms from AI to inform his work as a psychologist—testing the ways in which humans align with or deviate from the standards set by the AI models. He's a central figure in this field, and in this episode we go deep on how it first occurred to Tom to use computers to study the mind—as well as where this work has taken him over the years. Tom recently released a podcast series through Audible, co-hosted with Brian Christian, called Algorithms at Work. I finished it recently and can confidently say it's one of the best podcast series I'll listen to all year!
https://www.facebook.com/PearlsofGreatPriceEmpowermentServices Do you know the saying, Dawn, life begins at 40? Well, for me, a crisis happened at 40 that turned my entire world upside down. And I believe that crisis really put me on the path, uh, to what I'm doing now, being new. And as the midwife, is the community manager of ladies in winter. Formalizing Pearls of Great Price Empowerment Services, uh, a career woman. Yes. And so I had my plan and I worked my plan because I knew where I wanted to be by 40. I'm done, everything was working out seamlessly at every stage. Rung of the ladder up the organization that I climbed, it worked in my favor, but as I said, until the age of 40, where I lost my job and my world fell apart. For a career woman whose identity was wrapped up in my career. And at that point, I really had a moment. Where everything I knew about God at that time, I began to question. I grew up in the church, Dawn, uh, at the sea. I was the loudest baby in the church. So my faith was part and parcel of my journey, has always been. Email: dawnlongempowermentcoach@gmail.com Music The Inspiration by Keys of Moon | https://soundcloud.com/keysofmoon Attribution 4.0 International (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/ Music promoted by https://www.chosic.com/ About - Your Transformation Journey This podcast morphed from the Unstoppable Leadership podcast. The guests that were on had a running theme of transformation and why they started on their journey, and that is why the name Your Transformation Journey was born. You know there is more to you, your life, or your business, yet you're frustrated at being constantly held back by yourself. Stuck in fear, self-doubt, and overthinking. You keep playing it small. You've had enough now and are ready for change. Ready to step into the person you really are and create the life or business that you love, and I am here to help you do just that. Welcome to Heart-Centered When Nobody Else Will Committed- Client-Centered Your Transformation Journey Podcast Social Media Youtube: https://www.youtube.com/channel/UC6AE_PfvuUcHThDseXIMJXA Spotify: https://open.spotify.com/show/4mbHjqbPwHEYQrbbkDZNPC Instagram: https://www.instagram.com/yourtransformationjourney/ Website: https://www.dawnlongcoach.com LInkedin: www.linkedin.com/in/dawnlong --- Send in a voice message: https://anchor.fm/yourtransformationjourney/message Support this podcast: https://anchor.fm/yourtransformationjourney/support
In this episode of TGC Q&A, our third in a six-week series on faith and work, Amy Sherman answers the question, “How did followers of Jesus serve the poor?” She discusses: • Imitating Jesus (:32) • Formalizing the process (1:58) • John Calvin and the Reformation influence (2:47) • John Wesley and the Methodist tradition (4:04) • The influence of Octavia Hill (5:00) • Maggie Walker's bank (7:08) • Explore more from TGC on the topic of Serving the Poor.