Governance comprises all of the processes of governing, whether undertaken by a government, market, or network; whether over a family, tribe, formal or informal organization, or territory; and whether through the laws, norms, power, or language of an organized society.
To know what is good and not do it is a sin. We are called to be people who continually do good. In this week’s episode of Real Life with Jack Hibbs, we will see how that applies to government, citizenship, and our leaders. What good are we called to do for our nation, state, city, and community?
(00:00) Understanding God's Authority in Governance
(09:01) Divine Government and Human Responsibility
(17:20) The Influence of God in Governance
CONNECT WITH PASTOR JACK
Get Updates via Text: https://text.whisp.io/jack-hibbs-podcast
Website: https://jackhibbs.com/
Instagram: http://bit.ly/2FCyXpO
Facebook: https://bit.ly/2WZBWV0
YouTube: https://bit.ly/437xMHn
DAZE OF DECEPTION BOOK: https://jackhibbs.com/daze-of-deception/
Did you know we have a Real Life Network? Sign up for free for more exclusive content: https://bit.ly/3CIP3M99
In this episode, Lex speaks with Friederike Ernst, co-founder of Gnosis. Together, they explore the evolution of Gnosis from an Ethereum-based prediction market project into a major infrastructure provider powering over $100 billion in DAO treasuries and $10–15 billion in monthly DEX trading via CowSwap. They trace the company's journey from a 2017 ICO raising $12.5 million in ETH (now worth ~$450 million) to spinning out critical tools like Safe, CowSwap, and Zodiac, all originally built for internal use.
Despite their success, Gnosis recognizes that the crypto-native user base is limited and has now pivoted to building user-centric, mainstream products like the upcoming Gnosis App targeting Gen Z with real-world financial utility. The company emphasizes its founding mission of democratizing financial ownership and warns against complacency as incumbents like Stripe and Robinhood enter the space. Lastly, Gnosis sees a near-term opportunity in AI-agent-driven commerce, especially through reverse advertising models that could unlock trillion-dollar markets.
NOTABLE DISCUSSION POINTS:
The $12.5M ICO That Became a $450M Treasury: Gnosis raised $12.5 million in ETH during their 2017 ICO, when ETH was trading at $40. Through conservative treasury management and holding their ETH position, that initial raise has sustained the company for nearly a decade and grown to approximately $450 million today. Friederike attributes this to “conservative treasury management and sheer luck” — a remarkable case study in long-term crypto treasury stewardship.
Polymarket Runs on Gnosis Infrastructure: Despite Polymarket's $10B+ valuation and mainstream recognition, it still uses Gnosis's conditional token framework, written years ago. Friederike acknowledges being “a little salty” that infrastructure they built powers such a significant share of the on-chain prediction market economy without Gnosis directly benefiting financially.
It's a stark illustration of the “first up the mountain” dynamic, where pioneers clear the path but don't always capture the value.
The 19th Century German Banking Parallel: Friederike draws a compelling historical analogy: impoverished German farmers in the 1800s faced predatory moneylenders charging 25–40% interest. They responded by forming collective community banks, lending to each other at 4–6%. Within decades, tens of thousands of these banks existed, and one-third of Germans remain members today. She positions crypto's ownership model as the modern equivalent — a cooperative financial revolution for a generation economically disenfranchised by incumbent systems.
TOPICS: Gnosis, Gnosis Safe, CowSwap, Zodiac, CPK, Polymarket, Kalshi, ConsenSys, Ethereum, ETH, AI, AI Agents, ICO, Onchain, Governance, Crypto Treasury, Web3, Blockchain, Finance, Banking, Payments, Custody, Wallets
ABOUT THE FINTECH BLUEPRINT
Risky Women, the global network dedicated to connecting, celebrating, and championing women in Governance, Risk and Compliance (GRC), today announced an affiliate partnership with SoftSkillingIt to deliver its online training programme to the emerging and established risk and compliance workforce. The course is designed to address the growing need to equip individuals at all career stages with essential soft skills (also known as human or professional skills) for personal development and professional growth. In an era defined by rapid advancements in artificial intelligence and automation, the demand for uniquely human attributes such as critical thinking, problem-solving, emotional intelligence, adaptability, communication, and collaboration has surged.
Woke Capitalism: Origins, ESG, DEI, and the Power of BlackRock — Charles Gasparino — Gasparino traces the origins of "woke capitalism," detailing how corporate America became an active institutional agent for progressive social change. This ideological shift, accelerated through boardroom political calculations, led to widespread adoption of corporate acronyms including ESG (Environmental, Social, Governance) and DEI (Diversity, Equity, Inclusion), redirecting corporate focus from shareholder returns toward stakeholder capitalism models. Larry Fink's BlackRock strategically recognized that managing progressive-oriented investment funds could attract trillions in assets, positioning the firm as a powerful enforcer of these policies across corporate America.
Join us for this exciting episode of Hot Topics on The Edge of Show as host Josh Krieger dives deep into the latest developments in the world of cryptocurrency and blockchain technology. In this episode, we are joined by Jonathan Baha'i from TOTO and Michael Ros from Sleap.io, both of whom were key sponsors at the recent Future of Money, Governance, and Law Summit in Washington, D.C.
Episode Highlights:
The rise of altcoin ETFs and their potential to reshape the crypto landscape, especially in light of Bitcoin's recent struggles.
Polymarket receiving regulatory approval from the CFTC, paving the way for a new era of prediction markets.
How TOTO is leveraging blockchain to redefine civic engagement and governance.
The impact of the upcoming FIFA World Cup on travel trends and how Sleap.io is revolutionizing travel bookings with crypto.
Whether you're a crypto enthusiast, a travel lover, or just curious about the future of governance, this episode is packed with valuable insights and thought-provoking discussions.
Support us through our Sponsors! ☕
Finding it difficult to navigate the changing landscape of data protection? In this episode of the DMI podcast, host Will Francis speaks with Steven Roberts, Group Head of Marketing at Griffith College, Chartered Director, certified Data Protection Officer, and long-time marketing leader. Steven demystifies GDPR, AI governance, and the rapidly evolving regulatory environment that marketers must now navigate. He explains how GDPR enforcement has matured, why AI has created a new layer of complexity, and how businesses can balance innovation with compliance. He breaks down the EU AI Act, its risk-based structure, and its implications for organizations inside and outside the EU. Steven also shares practical guidance for building internal AI policies, tackling “shadow AI,” reducing data breach risks, and supporting teams with training and clear governance. For an even deeper look into how businesses can ensure data protection compliance, check out Steven's book, Data Protection for Business: Compliance, Governance, Reputation and Trust.
Steven's Top 3 Tips:
1. Build data protection into projects from the start, using tools like Data Protection Impact Assessments to uncover risks early.
2. Invest in regular staff training to avoid common mistakes caused by human error.
3. Balance compliance with business performance by setting clear policies, understanding your risk appetite, and iterating your AI governance over time.
The Ahead of the Game podcast is brought to you by the Digital Marketing Institute and is available on YouTube, Apple Podcasts, Spotify, and all other podcast platforms. If you enjoyed this episode, please leave a review so others can find us. If you have other feedback for us or would like to be a guest on the show, email the podcast team!
Timestamps:
01:29 – AI's impact on GDPR & the explosion of new global privacy laws
03:26 – Is GDPR the global gold standard?
05:04 – GDPR enforcement today: Who gets fined and why
07:09 – Cultural attitudes toward data: EU vs. US
08:51 – The EU AI Act explained: Risk tiers, guardrails & human oversight
10:48 – What businesses must do: DPIAs, fundamental rights assessments & more
13:38 – Shadow AI, risk appetite & internal governance challenges
17:10 – Should you upload company data to ChatGPT?
20:40 – How the AI Act affects countries outside the EU
24:47 – Will privacy improve over time?
28:45 – What teams can do now: Tools, processes & data audits
33:49 – Data enrichment tools: Targeting vs. legality
36:47 – Will anyone actually check your data practices?
40:06 – Steven's top tips for navigating GDPR & AI
In this final episode of the series, we join a round-table discussion among experts from the Smart Internet Lab at the University of Bristol. Join Dimitra Simeonidou, Simon Saunders, and Paul Wilson as they explore the future of telecommunications and the exciting advancements in network technology. Discover how AI and sensing capabilities are transforming networks into intelligent systems that not only communicate but also 'feel' the environment. This episode delves into the implications of these innovations for smart cities, crisis management, and the ethical considerations that come with them.
Chapters:
(00:00) Introduction
(00:30) The Evolution of Telecommunications
(10:15) AI and Sensing in Networks
(20:00) Future Networks Scenarios
(30:00) Ethical Considerations and Governance
(40:00) Closing Thoughts
This is an 18Sixty Production.
The Fusaka Files Podcast – Episode Overview
In the latest episode of the Fusaka Files podcast, Paul Brody and Pooja Ranjan engage in an insightful discussion with Tomasz Stańczak, Co-Executive Director of the Ethereum Foundation. They explore the concept of credible neutrality and its significance for businesses and institutions navigating the evolving landscape of blockchain technology. The Fusaka Files is a limited-episode podcast series exploring Ethereum's upcoming Fusaka upgrade through the lens of real-world use, ecosystem readiness, and enterprise impact.
This month we are delighted to have a conversation with Sabaheta Ramcilović-Suominen, Associate Professor in International Forest Policy and Governance at the Natural Resources Institute Finland (LUKE). Saba talks to us about her contributions to the recently published, open-access book, Socioecological Transformations: Linking Ontologies with Structures, Personal with Collective Change. Sabaheta has a dual role in this book, as both a chapter contributor and the editor. We start the conversation with some insights into Sabaheta's personal journey into research and how she came to be interested in concepts like non-duality and inner change in the context of working toward sustainability. This episode particularly covers the concept of separation, as a root cause of socioecological distress and alienation, and radical intraconnectedness, as a lens that tackles the illusion of separation, and how both inform the wider literature on socioecological transformations. Sabaheta also gives us a sneak peek into the five or six opening chapters and some of the topics and concepts tackled there; as a result, many of the book's other important contributions and chapters are not introduced or discussed here. If you are interested in learning more about the book, please check it out here: https://library.oapen.org/viewer/web/viewer.html?file=/bitstream/handle/20.500.12657/104337/9781040406724.pdf?sequence=1&isA. In addition, if you would like to hear more, there is a digital book launch on 9.12.2025, which will spotlight many of the chapters not covered in this episode. More information on the book launch here: https://justglobeproject.com/book-launch-socioecological-transformations-linking-ontologies-with-structures-personal-with-collective-change-part-ii/. If you would like to learn more about Sabaheta's work, please check out her latest project website https://justglobeproject.com/ and her research profile https://www.luke.fi/en/experts/sabaheta-ramciloviksuominen.
John Hearty leads AI Governance at Mastercard, ensuring that AI systems the company builds or buys are fair, transparent, and effective. He drives responsible AI adoption, engages regulators, and advances tools, training, and research to embed AI ethics across the organization. With a background in R&D and global payments infrastructure, John holds 40+ patents and serves on ISO AI standards committees.
On The Menu:
Trust as foundation: Mastercard's AI governance philosophy explained
Evaluating AI systems for efficacy, fairness, and transparency
Generative AI's transformative impact on marketing and creativity
Building responsible AI solutions with external suppliers effectively
The evolution from R&D to AI governance leadership
The future of marketing in chatbot and agent-based systems
Creating shared AI capabilities that elevate entire organizations
Are you unknowingly passing up five-figure tax savings available right now? Tait Duryea and Ryan Gibson sit down with oil and gas expert Tim Pawul to reveal why minerals, royalties, and non-operated working interests have become one of the most powerful tools for cash flow and tax strategy. Get ready to understand the real reason hydrocarbons aren't going anywhere, how minerals function like “underground real estate,” and why non-op wells offer significant first-year tax advantages for high-income earners. Discover how pilots can use this asset class to balance real estate, reduce taxable income, and access institutional-quality deals with transparency and scale.
Tim Pawul is the President of the Minerals and Royalties Authority, a leading connector, advisor, and investor in the oil and gas minerals space. With over a decade of experience mapping the institutional evolution of minerals, Tim has become one of the industry's loudest, most trusted voices. He hosts The Minerals and Royalties Podcast, works with institutional and private investors, and brings unmatched insight into minerals, royalties, and non-operated working interests.
His deep industry relationships make him a go-to resource for anyone serious about energy investing.
Show notes:
(0:00) Intro
(1:04) Fossil fuel dependence today
(2:07) Introducing guest expert Tim Pawul
(4:06) How Tim entered the minerals industry
(7:02) Why shale changed everything
(11:12) How minerals became institutional
(19:11) What mineral rights actually are
(32:39) How non-op working interests work
(44:54) Red flags and due diligence tips
(51:32) Outro
Connect with Tim Pawul:
LinkedIn: https://www.linkedin.com/in/tim-pawul-54aa9526/
The Minerals and Royalties Podcast:
Apple Podcasts: https://podcasts.apple.com/us/podcast/the-minerals-and-royalties-podcast/id1502759760
Spotify: https://open.spotify.com/show/2uiMdKFVMq7hWv1EBCDLsI
Learn more about:
Turbine Capital Iron Horse Energy Fund, providing an 80-85% tax deduction for 2025: https://turbinecap.investnext.com/portal/offerings/8798/
Turbine Capital Oil & Gas Tax Benefits eBook: https://drive.google.com/file/d/17jUYSuoJGCU2ea4SN6MLgazmp-3LE2Lt/view
Terms not explained during the episode:
ESG = Environmental, Social, and Governance. A set of standards used to evaluate a company's sustainable and ethical practices.
AFE = Authorization For Expenditure. A formal document used in the oil and gas industry to outline the detailed cost estimate and approval for a specific project, such as drilling, completing, or reworking a well. It serves as a budget and a project proposal, requiring partner approval before capital is committed and tracking expenditures against the budgeted amount.
G&A = General and Administrative.
NAPE Expo: https://registration.expologic.com/registration/types/668F3878-BC35-4659-A750-6491C592938D/3591
NAPE Registration Discount Code: MRA26 to get $75 off
Do you have questions or want to discuss this episode? Contact us at ask@passiveincomepilots.com
*Legal Disclaimer*
The content of this podcast is provided solely for educational and informational purposes.
The views and opinions expressed are those of the hosts, Tait Duryea and Ryan Gibson, and do not reflect those of any organization they are associated with, including Turbine Capital or Spartan Investment Group. The opinions of our guests are their own and should not be construed as financial advice. This podcast does not offer tax, legal, or investment advice. Listeners are advised to consult with their own legal or financial counsel and to conduct their own due diligence before making any financial decisions.
Many hospices established their single-entity corporate structure 40+ years ago and have left it largely untouched. But this structure, too, is worth a dust-off, especially as hospices expand into new geographies and new service lines and affiliate with additional organizations. When done thoughtfully, an expanded corporate organizational chart does not need to cause painful complexity and administrative burdens. In fact, it may allow you to streamline governance, better protect your assets, and create operational efficiencies. Husch Blackwell's Meg Pekarske and Adam Royal discuss what they are seeing across the country and key considerations when contemplating organizational changes.
In our November 2025 Community Chat, with members of the Caribbean tech community, ICT Consultant Natalie Maharaj of Trinidad and Tobago, and Dr Lyndell St. Ville of Saint Lucia and the tech firm Datashore, the panel discusses:
* the imperative of implementing digital ID systems in the Caribbean region;
* the current state of AI governance; and
* cloud versus local or on-premises facilities and services in the Caribbean.
The episode, show notes and links to some of the things mentioned during the episode can be found on the ICT Pulse Podcast Page (www.ict-pulse.com/category/podcast/)
Enjoyed the episode? Do rate the show and leave us a review!
Also, connect with us on:
Facebook – https://www.facebook.com/ICTPulse/
Instagram – https://www.instagram.com/ictpulse/
Twitter – https://twitter.com/ICTPulse
LinkedIn – https://www.linkedin.com/company/3745954/admin/
Join our mailing list: http://eepurl.com/qnUtj
Music credit: The Last Word (Oui Ma Chérie), by Andy Narrell
Podcast editing support: Mayra Bonilla Lopez
www.centerforgenerosity.com
www.gsbfundraising.com
Strengthening charitable causes to change the world through generosity. For nearly 50 years, GSB has provided effective counsel to a wide array of non-profit entities, including congregations, camps, healthcare systems, colleges, and other agencies. We have established a formidable reputation for ethical work and consistently meeting or exceeding fundraising goals. Our approach is custom-designed to the needs of the client as well as to the culture of the client organization and constituency.
Jessica Zuñiga is the Head of AI Readiness & Data Governance at BlueYeti and one of the most important voices in data governance in Mexico and Latin America. Tim and Juan talk with Jessica about when data and AI governance truly becomes a business enabler and how to use frameworks like DAMA effectively. See omnystudio.com/listener for privacy information.
This is the takeaway episode with Jessica Zuñiga, the Head of AI Readiness & Data Governance at BlueYeti and one of the most important voices in data governance in Mexico and Latin America. Tim and Juan talk with Jessica about when data and AI governance truly becomes a business enabler and how to use frameworks like DAMA effectively. If you like what you heard, you should check out the full episode! See omnystudio.com/listener for privacy information.
Mary O'Carroll welcomes Ben Campbell (General Counsel, Deloitte) to unpack how law firms can—and must—learn from consulting and advisory firms. With a career that spans the DOJ, BigLaw, and now a top in‑house role, Ben offers a unique vantage on how governance, compensation, pricing, and talent models are evolving.
In this episode:
Outcome‑based billing: Ben walks through how outcome‑based (versus hourly) billing shifts incentives, aligns with the client, and drives efficiency.
Governance at scale: At Deloitte, the partnership model combines with a layered board/CEO structure—getting buy‑in from hundreds of partners and deploying resources across businesses.
Talent & career flexibility: Moving beyond "lockstep" life‑path models, Ben discusses how allowing flexible progression and acknowledging different career goals helps retain and grow talent.
AI & disruption: The "pyramid" leverage model (many junior + a few senior) is under pressure. Routine tasks will be automated; strategic judgment will remain the premium play.
What law firms can borrow now: From shared back‑offices and staffing flexibility to outcome‑pricing and more dynamic governance—Ben makes the case for law firms to evolve before "behind" becomes the new norm.
If you're wondering how the next chapter of legal and professional services might look, this conversation is a must‑listen—smart, candid, and forward‑leaning.
Follow Mary on LinkedIn
Rate and review on Apple Podcasts
SRI360 | Socially Responsible Investing, ESG, Impact Investing, Sustainable Investing
Richard Brandweiner, Chair of Impact Investing Australia and a longtime institutional investor, joins the show to discuss the realities of impact investing at scale. He reflects on universal ownership, system-level risks, blended finance, and what it truly takes to align capital with real-world outcomes and fiduciary expectations.
Richard shares lessons from leadership roles at Perpetual, Aware Super, LeapFrog, Pendal, and Regnan, and why hope isn't a strategy when designing investment frameworks meant to deliver measurable impact.
A candid conversation for investors, asset owners, and practitioners who want an honest look at where sustainable finance is working, and where it isn't.
Intro (00:00)
Parents' WWII survival shaped Richard's moral compass (03:54)
Studied economics at the University of New South Wales (08:06)
Trading shares through the 1987 market crash in high school (10:44)
Career in Perpetual Investments and creating the first sustainable fund (13:15)
Becoming CIO at First State Super in 2013 (17:34)
Affordable housing fund idea sparked impact focus (19:34)
Structural issues in asset owner systems (33:29)
Transition from CIO to LeapFrog impact role (38:35)
Challenges launching institutional-grade impact fund (42:04)
Becoming BT CEO and integrating Regnan's early ESG legacy (43:54)
At Regnan, the impact case is the investment case (48:29)
Regnan's measurement approach and SDG taxonomy (54:18)
Impact Investing Australia – mission and focus (58:37)
Making impact the third axis in finance (01:04:55)
Ethical vs ESG vs impact investing (01:09:22)
How Australian Ethical outperforms with values-led investing (01:12:16)
Governance for Aboriginal community investment and autonomy (01:14:00)
Structural barriers to scaling impact investing globally (01:21:38)
Communication and accounting gaps in environmental costs (01:32:08)
Rapid-fire questions (01:35:37)
Contact info (01:47:14)
Discover More from SRI360°: Explore all episodes of the SRI360° PODCAST. Sign up for the free weekly email update.
Additional Resources:
Richard Brandweiner LinkedIn
Impact Investing Australia Website
John Maytham is joined by David Frost, CEO of the Southern Africa Tourism Services Association (SATSA). We’ll unpack what this investigation means for the sector, whether it can restore confidence, and what reforms are needed to prevent a repeat of this governance collapse.
Presenter John Maytham is an actor and author turned talk-radio veteran and seasoned journalist. His show serves up a round-up of local and international news coupled with the latest in business, sport, traffic, and weather. The host’s eclectic interests mean the programme often surprises the audience with intriguing book reviews and inspiring interviews profiling artists. A daily highlight is Rapid Fire, just after 5:30pm, when CapeTalk fans call in to stump the presenter with their general knowledge questions. Another firm favourite is the humorous Thursday crossing with award-winning journalist Rebecca Davis, called “Plan B”.
Thank you for listening to a podcast from Afternoon Drive with John Maytham. Listen live on Primedia+ weekdays between 15:00 and 18:00 (SA time) to Afternoon Drive with John Maytham, broadcast on CapeTalk: https://buff.ly/NnFM3Nk
For more from the show go to https://buff.ly/BSFy4Cn or find all the catch-up podcasts here: https://buff.ly/n8nWt4x
Subscribe to the CapeTalk Daily and Weekly Newsletters: https://buff.ly/sbvVZD5
Follow us on social media:
CapeTalk on Facebook: https://www.facebook.com/CapeTalk
CapeTalk on TikTok: https://www.tiktok.com/@capetalk
CapeTalk on Instagram: https://www.instagram.com/
CapeTalk on X: https://x.com/CapeTalk
CapeTalk on YouTube: https://www.youtube.com/@CapeTalk567
See omnystudio.com/listener for privacy information.
In today's episode, Steve sits down with journalist Nick Witchell for a conversation focused on what business leaders can learn from this year's major cyber attacks and the recent AWS outage. The two also discuss cyber regulations and the challenge of operating a global enterprise during significant geopolitical turmoil.
Key Takeaways:
Boards and senior executives understand there is a threat, but many still lack knowledge of how to deal with it.
We are too reliant on technology; for the sake of business continuity, a backup plan must be in place.
High-quality simulation exercises are a crucial step toward more cyber resilience.
Tune in to hear more about:
The role of policy and regulation (3:17)
Why cyber simulation exercises are so important (5:45)
Steve's thoughts on the recent AWS outage (7:54)
Standout Quotes:
“Now, in the boardroom itself, in companies themselves, we have seen over the past few years an increasing awareness of the threat that these kinds of things can bring to really the future of an organization. But the challenge I think we now face is really helping boards, senior executives to transition from, yes, I get there's a threat, but what should I actually be doing about it?” - Steve Durbin
“I think that in the main, cloud service providers are still probably far better equipped to provide the level of service that most companies need than you'd be able to do yourself. However, we do need to take into account that things will go wrong. And we have to plan for that. So if you are an organization that can quite happily exist without access to data in a cloud provider, it doesn't have to be Amazon, it could be anybody else, then fine. I would question why you're using them in that case.
If on the other hand, you are dependent on them, you have to have some backup in place.” - Steve Durbin “All too often I'm seeing people particularly in the area of, say, cyber simulation exercises, because they're viewing it as a compliance exercise, going for least cost. That to me is a bit like saying I've just moved into an area where I know the burglary rate is quite high. What's the cheapest lock and door that I can get on my front door? It's madness. Not many of us would do it. We would try to work within our budget. We'd try to really figure out how important things were in our house. That's the mentality we have to adopt. So yes, you can get some of these things done very cheaply and you can tick a box, but it's not going to help you when things go wrong.” - Steve Durbin Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.
HEADLINE: The Centrality of Violence: Babeuf, Marx, and the Paris Commune
GUEST AUTHOR: Professor Sean McMeekin
50-WORD SUMMARY: Communism relies exclusively on extreme political violence and the disintegration of governance norms, never the ballot box. Early radical Gracchus Babeuf established a violent precedent, advocating the abolition of private property and the extermination of class enemies. Karl Marx embraced the bloody Paris Commune (1871) as proof that a true revolution required killing class enemies.
How can you write science-based fiction without info-dumping your research? How can you use AI tools in a creative way, while still focusing on a human-first approach? Why is adapting to the fast pace of change so difficult and how can we make the most of this time? Jamie Metzl talks about Superconvergence and more. In the intro, How to avoid author scams [Written Word Media]; Spotify vs Audible audiobook strategy [The New Publishing Standard]; Thoughts on Author Nation and why constraints are important in your author life [Self-Publishing with ALLi]; Alchemical History And Beautiful Architecture: Prague with Lisa M Lilly on my Books and Travel Podcast. Today's show is sponsored by Draft2Digital, self-publishing with support, where you can get free formatting, free distribution to multiple stores, and a host of other benefits. Just go to www.draft2digital.com to get started. This show is also supported by my Patrons. Join my Community at Patreon.com/thecreativepenn Jamie Metzl is a technology futurist, professional speaker, entrepreneur, and the author of sci-fi thrillers and futurist nonfiction books, including the revised and updated edition of Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World. You can listen above or on your favorite podcast app or read the notes and links below. Here are the highlights and the full transcript is below. Show Notes How personal history shaped Jamie's fiction writing Writing science-based fiction without info-dumping The super convergence of three revolutions (genetics, biotech, AI) and why we need to understand them holistically Using fiction to explore the human side of genetic engineering, life extension, and robotics Collaborating with GPT-5 as a named co-author How to be a first-rate human rather than a second-rate machine You can find Jamie at JamieMetzl.com. 
Transcript of interview with Jamie Metzl

Jo: Jamie Metzl is a technology futurist, professional speaker, entrepreneur, and the author of sci-fi thrillers and futurist nonfiction books, including the revised and updated edition of Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World. So welcome, Jamie.

Jamie: Thank you so much, Jo. Very happy to be here with you.

Jo: There is so much we could talk about, but let's start with you telling us a bit more about you and how you got into writing.

From History PhD to First Novel

Jamie: Well, I think like a lot of writers, I didn't know I was a writer. I was just a kid who loved writing. Actually, just last week I was going through a bunch of boxes from my parents' house and I found my autobiography, which I wrote when I was nine years old. So I've been writing my whole life and loving it. It was always something that was very important to me.

When I finished my DPhil, my PhD at Oxford, and my dissertation came out, it just got scooped up by Macmillan in like two minutes. And I thought, “God, that was easy.” That got me started thinking about writing books. I wanted to write a historical novel set in the same period as my dissertation – my PhD was in Southeast Asian history – because I felt like the dissertation had missed the human element of the story I was telling, which was related to the Cambodian genocide and its aftermath.

So I wrote what became my first novel, and I thought, “Wow, now I'm a writer.” I thought, “All right, I've already published one book. I'm gonna get this other book out into the world.” And then I ran into the brick wall of: it's really hard to be a writer. It's almost easier to write something than to get it published. I had to learn a ton, and it took nine years from when I started writing that first novel, The Depths of the Sea, to when it finally came out.

But it was such a positive experience, especially to have something so personal to me as that story. I'd lived in Cambodia for two years, I'd worked on the Thai-Cambodian border, and I'm the child of a Holocaust survivor. So there was a whole lot that was very emotional for me.

That set a pattern for the rest of my life as a writer: in my nonfiction books, I'm thinking about whatever issues are most important to me – whether it was that historical book, which was my first book, or Hacking Darwin on the future of human genetic engineering, which was my last book, or Superconvergence, which, as you mentioned in the intro, is my current book. But in every one of those stories, the human element is so deep and so profound. You can get at some of that in nonfiction, but I've also loved exploring those issues in deeper ways in my fiction.

So in my more recent novels, Genesis Code and Eternal Sonata, I've looked at the human side of the story of genetic engineering and human life extension. And now my agent has just submitted my new novel, Virtuoso, about the intersection of AI, robotics, and classical music. With all of this, who knows what's the real difference between fiction and nonfiction? We're all humans trying to figure things out on many different levels.

Shifting from History to Future Tech

Jo: I knew that you were a polymath, someone who's interested in so many things, but the music angle with robotics and AI is fascinating. I do just want to ask you, because I was also at Oxford – what college were you at?

Jamie: I was in St. Antony's.

Jo: I was at Mansfield, so we were in that slightly smaller, less famous college group, if people don't know.

Jamie: You know, but we're small but proud.

Jo: Exactly. That's fantastic. You mentioned that you were on the historical side of things at the beginning and now you've moved into technology and also science, because this book Superconvergence has a lot of science.
So how did you go from history and the past into science and the future?

Biology and Seeing the Future Coming

Jamie: It's a great question. I'll start at the end and then back up.

A few years ago I was speaking at Lawrence Livermore National Laboratory, which is one of the big scientific labs here in the United States. I was a guest of the director and I was speaking to their 300 top scientists. I said to them, “I'm here to speak with you about the future of biology at the invitation of your director, and I'm really excited. But if you hear something wrong, please raise your hand and let me know, because I'm entirely self-taught. The last biology course I took was in 11th grade of high school in Kansas City.”

Of course I wouldn't say that if I didn't have a lot of confidence in my process. But in many ways I'm self-taught in the sciences. As you know, Jo, and as all of your listeners know, the foundation of everything is curiosity and then a disciplined process for learning. Even our greatest super-specialists in the world now – whatever their background – the world is changing so fast that if anyone says, “Oh, I have a PhD in physics/chemistry/biology from 30 years ago,” the exact topic they learned 30 years ago is less significant than their process for continuous learning.

More specifically, in the 1990s I was working on the National Security Council for President Clinton, which is the president's foreign policy staff. My then boss and now close friend, Richard Clarke – who became famous as the guy who had tragically predicted 9/11 – used to say that the key to efficacy in Washington and in life is to try to solve problems that other people can't see. For me, almost 30 years ago, I felt to my bones that this intersection of what we now call AI and the nascent genetics revolution and the nascent biotechnology revolution was going to have profound implications for humanity. So I just started obsessively educating myself.

When I was ready, I started writing obscure national security articles. Those got a decent amount of attention, so I was invited to testify before the United States Congress. I was speaking out a lot, saying, “Hey, this is a really important story. A lot of people are missing it. Here are the things we should be thinking about for the future.” I wasn't getting the kind of traction that I wanted.

I mentioned before that my first book had been this dry Oxford PhD dissertation, and that had led to my first novel. So I thought, why don't I try the same approach again – writing novels to tell this story about the genetics, biotech, and what later became known popularly as the AI revolution? That led to my two near-term sci-fi novels, Genesis Code and Eternal Sonata. On my book tours for those novels, when I explained the underlying science to people in my way, as someone who taught myself, I could see in their eyes that they were recognizing not just that something big was happening, but that they could understand it and feel like they were part of that story.

That's what led me to write Hacking Darwin, as I mentioned. That book really unlocked a lot of things. I had essentially predicted the CRISPR babies that were born in China before it happened – down to the specific gene I thought would be targeted, which in fact was the case. After that book was published, Dr. Tedros, the Director-General of the World Health Organization, invited me to join the WHO Expert Advisory Committee on Human Genome Editing, which I did. It was a really great experience and got me thinking a lot about the upside of this revolution and the downside.

The Birth of Superconvergence

Jamie: I get a lot of wonderful invitations to speak, and I have two basic rules for speaking:
- Never use notes. Never ever.
- Never stand behind a podium. Never ever.

Because of that, when I speak, my talks tend to migrate.
I'd be speaking with people about the genetics revolution as it applied to humans, and I'd say, “Well, this is just a little piece of a much bigger story.” The bigger story is that after nearly four billion years of life on Earth, our one species has the increasing ability to engineer novel intelligence and re-engineer life. The big question for us, and frankly for the world, is whether we're going to be able to use that almost godlike superpower wisely.

As that idea got bigger and bigger, it became this inevitable force. You write so many books, Jo, that I think it's second nature for you. Every time I finish a book, I think, “Wow, that was really hard. I'm never doing that again.” And then the books creep up on you. They call to you. At some point you say, “All right, now I'm going to do it.” So that was my current book, Superconvergence.

Like everything, every journey you take a step, and that step inspires another step and another. That's why writing and living creatively is such a wonderfully exciting thing – there's always more to learn and always great opportunities to push ourselves in new ways.

Balancing Deep Research with Good Storytelling

Jo: Yeah, absolutely. I love that you've followed your curiosity and then done this disciplined process for learning. I completely understand that. But one of the big issues with people like us who love the research – and having read your Superconvergence, I know how deeply you go into this and how deeply you care that it's correct – is that with fiction, one of the big problems with too much research is the danger of brain-dumping. Readers go to fiction for escapism. They want the interesting side of it, but they want a story first. What are your tips for authors who might feel like, “Where's the line between putting in my research so that it's interesting for readers, but not going too far and turning it into a textbook?” How do you find that balance?

Jamie: It's such a great question.

I live in New York now, but I used to live in Washington when I was working for the U.S. government, and there were a number of people I served with who later wrote novels. Some of those novels felt like policy memos with a few sex scenes – and that's not what to do.

To write something that's informed by science or really by anything, everything needs to be subservient to the story and the characters. The question is: what is the essential piece of information that can convey something that's both important to your story and your character development, and is also an accurate representation of the world as you want it to be? I certainly write novels that are set in the future – although some of them were a future that's now already happened because I wrote them a long time ago. You can make stuff up, but as an author you have to decide what your connection to existing science and existing technology and the existing world is going to be.

I come at it from two angles. One: I read a huge number of scientific papers and think, “What does this mean for now, and if you extrapolate into the future, where might that go?” Two: I think about how to condense things. We've all read books where you're humming along because people read fiction for story and emotional connection, and then you hit a bit like: “I sat down in front of the president, and the president said, ‘Tell me what I need to know about the nuclear threat.'” And then it's like: insert memo. That's a deal-killer.

It's like all things – how do you have a meaningful relationship with another person? It's not by just telling them your story. Even when you're telling them something about you, you need to be imagining yourself sitting in their shoes, hearing you.

These are very different disciplines, fiction and nonfiction. But for the speculative nonfiction I write – “here's where things are now, and here's where the world is heading” – there's a lot of imagination that goes into that too.
It feels in many ways like we're living in a sci-fi world because the rate of technological change has been accelerating continuously, certainly for the last 12,000 years since the dawn of agriculture. It's a balance. For me, I feel like I'm a better fiction writer because I write nonfiction, and I'm a better nonfiction writer because I write fiction. When I'm writing nonfiction, I don't want it to be boring either – I want people to feel like there's a story and characters and that they can feel themselves inside that story.

Jo: Yeah, definitely. I think having some distance helps as well. If you're really deep into your topics, as you are, you have to leave that manuscript a little bit so you can go back with the eyes of the reader as opposed to your eyes as the expert. Then you can get their experience, which is great.

Looking Beyond Author-Focused AI Fears

Jo: I want to come to your technical knowledge, because AI is a big thing in the author and creative community, like everywhere else. One of the issues is that creators are focusing on just this tiny part of the impact of AI, and there's a much bigger picture. For example, in 2024, Demis Hassabis of Google DeepMind and his colleague John Jumper won the Nobel Prize in Chemistry for their work on AlphaFold. It feels to me like there's this massive world of what's happening with AI in health, climate, and other areas, and yet we are so focused on a lot of the negative stuff. Maybe you could give us a couple of things about what there is to be excited and optimistic about in terms of AI-powered science?

Jamie: Sure. I'm so excited about all of the new opportunities that AI creates. But I also think there's a reason why evolution has preserved this very human feeling of anxiety: because there are real dangers. Anybody who's Pollyanna-ish and says, “Oh, the AI story is inevitably positive,” I'd be distrustful. And anyone who says, “We're absolutely doomed, this is the end of humanity,” I'd also be distrustful.

So let me tell you the positives and the negatives, and maybe some thoughts about how we navigate toward the former and away from the latter.

AI as the New Electricity

Jamie: When people think of AI right now, they're thinking very narrowly about these AI tools and ChatGPT. But we don't think of electricity that way. Nobody says, “I know electricity – electricity is what happens at the power station.” We've internalised the idea that electricity is woven into not just our communication systems or our houses, but into our clothes, our glasses – it's woven into everything and has super-empowered almost everything in our modern lives. That's what AI is.

In Superconvergence, the majority of the book is about positive opportunities.

In healthcare, moving from generalised healthcare based on population averages to personalised or precision healthcare based on a molecular understanding of each person's individual biology. As we build these massive datasets like the UK Biobank, we can take a next jump toward predictive and preventive healthcare, where we're able to address health issues far earlier in the process, when interventions can be far more benign. I'm really excited about that, not to mention the incredible new kinds of treatments – gene therapies, or pharmaceuticals based on genetics and systems-biology analyses of patients.

Then there's agriculture. Over the last hundred years, because of the technologies of the Green Revolution and synthetic fertilisers, we've had an incredible increase in agricultural productivity. That's what's allowed us to quadruple the global population. But if we just continue agriculture as it is, as we get towards ten billion wealthier, more empowered people wanting to eat like we eat, we're going to have to wipe out all the wild spaces on Earth to feed them. These technologies help provide different paths toward increasing agricultural productivity with fewer inputs of land, water, fertiliser, insecticides, and pesticides.
That's really positive. I could go on and on about these positives – and I do – but there are very real negatives. I was a member of the WHO Expert Advisory Committee on Human Genome Editing after the first CRISPR babies were very unethically created in China. I'm extremely aware that these same capabilities have potentially incredible upsides and very real downsides. That's the same as every technology in the past, but this is happening so quickly that it's triggering a lot of anxieties.

Governance, Responsibility, and Why Everyone Has a Role

Jamie: The question now is: how do we optimise the benefits and minimise the harms? The short, unsexy word for that is governance. Governance is not just what governments do; it's what all of us do. That's why I try to write books, both fiction and nonfiction, to bring people into this story. If people “other” this story – if they say, “There's a technology revolution, it has nothing to do with me, I'm going to keep my head down” – I think that's dangerous.

The way we're going to handle this as responsibly as possible is if everybody says, “I have some role. Maybe it's small, maybe it's big. The first step is I need to educate myself. Then I need to have conversations with people around me. I need to express my desires, wishes, and thoughts – with political leaders, organisations I'm part of, businesses.” That has to happen at every level. You're in the UK – you know the anti-slavery movement started with a handful of people in Cambridge and grew into a global movement. I really believe in the power of ideas, but ideas don't spread on their own. These are very human networks, and that's why writing, speaking, communicating – probably for every single person listening to this podcast – is so important.

Jo: Mm, yeah.

Fiction Like AI 2041 and Thinking Through the Issues

Jo: Have you read AI 2041 by Kai-Fu Lee and Chen Qiufan?

Jamie: No. I heard a bunch of their interviews when the book came out, but I haven't read it.

Jo: I think that's another good one because it's fiction – a whole load of short stories. It came out a few years ago now, but the issues they cover in the stories, about different people in different countries – I remember one about deepfakes – make you think more about the topics and help you figure out where you stand. I think that's the issue right now: it's so complex, there are so many things. I'm generally positive about AI, but of course I don't want autonomous drone weapons, you know?

The Messy Reality of “Bad” Technologies

Jamie: Can I ask you about that? Because this is why it's so complicated. Like you, I think nobody wants autonomous killer drones anywhere in the world. But if you right now were the defence minister of Ukraine, and your children are being kidnapped, your country is being destroyed, you're fighting for your survival, you're getting attacked every night – and you're getting attacked by the Russians, who are investing more and more in autonomous killer robots – you kind of have two choices. You can say, “I'm going to surrender,” or, “I'm going to use what technology I have available to defend myself, and hopefully fight to either victory or some kind of stand-off.”

That's what our societies did with nuclear weapons. Maybe not every American recognises that Churchill gave Britain's nuclear secrets to America as a way of greasing the wheels of the Anglo-American alliance during the Second World War – but that was our programme: we couldn't afford to lose that war, and we couldn't afford to let the Nazis get nuclear weapons before we did.

So there's the abstract feeling of, “I'm against all war in the abstract. I'm against autonomous killer robots in the abstract.” But if I were the defence minister of Ukraine, I would say, “What will it take for us to build the weapons we can use to defend ourselves?” That's why all this stuff gets so complicated. And frankly, it's why the relationship between fiction and nonfiction is so important.
If every novel had a situation where every character said, “Oh, I know exactly the right answer,” and then they just did the right answer and it was obviously right, it wouldn't make for great fiction. We're dealing with really complex humans. We have conflicting impulses. We're not perfect. Maybe there are no perfect answers – but how do we strive toward better rather than worse? That's the question.

Jo: Absolutely. I don't want to get too political on things.

How AI Is Changing the Writing Life

Jo: Let's come back to authors. In terms of the creative process, the writing process, the research process, and the business of being an author – what are some of the ways that you already use AI tools, and some of the ways, given your futurist brain, that you think things are going to change for us?

Jamie: Great question. I'll start with a little middle piece. I found you, Jo, through GPT-5. I asked ChatGPT, “I'm coming out with this book and I want to connect with podcasters who are a little different from the ones I've done in the past. I've been a guest on Joe Rogan twice and some of the bigger podcasts. Make me a list of really interesting people I can have great conversations with.” That's how I found you. So this is one reward of that process.

Let me say that in the last year I've worked on three books, and I'll explain how my relationship with AI has changed over those books.

Cleaning Up Citations (and Getting Burned)

Jamie: First is the highly revised paperback edition of Superconvergence. When the hardback came out – I don't normally work with research assistants because I like to dig into everything myself – the one thing I do use a research assistant for is that I can't be bothered, when I'm writing something, to do the full Chicago-style footnote if I'm already referencing an academic paper. So I'd just put the URL as the footnote and then hire a research assistant and say, “Go to this URL and change it into a Chicago-style citation. That's it.”

Unfortunately, my research assistant on the hardback used early-days ChatGPT for that work. He did the whole thing, came back, everything looked perfect. I said, “Wow, amazing job.” It was only later, as I was going through them, that I realised something like 50% of them were invented footnotes. It was very painful to go back and fix, and it took ten times more time.

With the paperback edition, I didn't use AI that much, but I did say things like, “Here's all the information – generate a Chicago-style citation.” That was better. One thing I noticed: I stopped using the thesaurus function on Microsoft Word because I'd just put the whole paragraph into the AI and say, “Give me ten other options for this one word,” and it would be like a contextual thesaurus. That was pretty good.

Talking to a Robot Pianist Character

Jamie: Then, for my new novel Virtuoso, I was writing a character who is a futurist robot that plays the piano very beautifully – not just humanly, but almost finding new things in the music we've written and composing music that resonates with us. I described the actions of that robot in the novel, but I didn't describe the inner workings of the robot's mind. In thinking about that character, I realised I was the first science-fiction writer in history who could interrogate a machine about what it was “thinking” in a particular context. I had the most beautiful conversations with ChatGPT, where I would give scenarios and ask, “What are you thinking? What are you feeling in this context?” It was all background for that character, but it was truly profound.

Co-Authoring The AI Ten Commandments with GPT-5

Jamie: Third, I have another book coming out in May in the United States. I gave a talk this summer at the Chautauqua Institution in upstate New York about AI and spirituality.
I talked about the history of our human relationship with our technology, about how all our religious and spiritual traditions have deep technological underpinnings – certainly our Abrahamic religions are deeply connected to farming, and Protestantism to the printing press. Then I had a section about the role of AI in generating moral codes that would resonate with humans. Everybody went nuts for this talk, and I thought, “I think I'm going to write a book.”

I decided to write it differently, with GPT-5 as my named co-author. The first thing I did was outline the entire book based on the talk, which I'd already spent a huge amount of time thinking about and organising. Then I did a full outline of the arguments and structures. Then I trained GPT-5 on my writing style.

The way I did it – which I fully describe in the introduction to the book – was that I'd handle all the framing: the full introduction, the argument, the structure. But if there was a section where, for a few paragraphs, I was summarising a huge field of data, even something I knew well, I'd give GPT-5 the intro sentence and say, “In my writing style, prepare four paragraphs on this.” For example, I might write: “AI has the potential to see us humans like we humans see ant colonies.” Then I'd say, “Give me four paragraphs on the relationship between the individual and the collective in ant colonies.” I could have written those four paragraphs myself, but it would've taken a month to read the life's work of E.O. Wilson and then write them. GPT-5 wrote them in seconds or minutes, in its thinking mode. I'd then say, “It's not quite right – change this, change that,” and we'd go back and forth three or four times. Then I'd edit the whole thing and put it into the text.

So this book that I could have written on my own in a year, I wrote a first draft of with GPT-5 as my named co-author in two days.

The whole project will take about six months from start to finish, and I'm having massive human editing – multiple edits from me, plus a professional editor. It's not a magic AI button. But I feel strongly about listing GPT-5 as a co-author because I've written it differently than previous books.

I'm a huge believer in the old-fashioned lone author struggling and suffering – that's in my novels, and in Virtuoso I explore that. But other forms are going to emerge, just like video games are a creative, artistic form deeply connected to technology. The novel hasn't been around forever – the current format is only a few centuries old – and forms are always changing. There are real opportunities for authors, and there will be so much crap flooding the market because everybody can write something and put it up on Amazon. But I think there will be a very special place for thoughtful human authors who have an idea of what humans do at our best, and who translate that into content other humans can enjoy.

Traditional vs Indie: Why This Book Will Be Self-Published

Jo: I'm interested – you mentioned that it's your named co-author. Is this book going through a traditional publisher, and what do they think about that? Or are you going to publish it yourself?

Jamie: It's such a smart question. What I found quickly is that when you get to be an author later in your career, you have all the infrastructure – a track record, a fantastic agent, all of that. But there were two things that were really important to me here:
- I wanted to get this book out really fast – six months instead of a year and a half.
- It was essential to me to have GPT-5 listed as my co-author, because if it were just my name, I feel like it would be dishonest. Readers who are used to reading my books – I didn't want to present something different than what it was.

I spoke with my agent, who I absolutely love, and she said that for this particular project it was going to be really hard in traditional publishing.
So I did a huge amount of research, because I'd never done anything in the self-publishing world before. I looked at different models. There was one hybrid model that's basically the same as traditional, but you pay for the things the publisher would normally pay for. I ended up not doing that. Instead, I decided on a self-publishing route where I disaggregated the publishing process. I found three teams: one for producing the book, one for getting the book out into the world, and a smaller one for the audiobook.

I still believe in traditional publishing – there's a lot of wonderful human value-add. But some works just don't lend themselves to traditional publishing. For this book, which is called The AI Ten Commandments, that's the path I've chosen.

Jo: And when's that out? I think people will be interested.

Jamie: April 26th. Those of us used to traditional publishing think, “I've finished the book, sold the proposal, it'll be out any day now,” and then it can be a year and a half. It's frustrating. With this, the process can be much faster because it's possible to control more of the variables. But the key – as I was saying – is to make sure it's as good a book as everything else you've written. It's great to speed up, but you don't want to compromise on quality.

The Coming Flood of Excellent AI-Generated Work

Jo: Yeah, absolutely. We're almost out of time, but I want to come back to your “flood of crap” and the “AI slop” idea that's going around. Because you are working with GPT-5 – and I do as well, and I work with Claude and Gemini – and right now there are still issues. Like you said about referencing, there are still hallucinations, though fewer. But fast-forward two, five years: it's not a flood of crap. It's a flood of excellent. It's a flood of stuff that's better than us.

Jamie: We're humans. It's better than us in certain ways. If you have farm machinery, it's better than us at certain aspects of farming. I'm a true humanist.

I think there will be lots of things machines do better than us, but there will be tons of things we do better than them. There's a reason humans still care about chess, even though machines can beat humans at chess. Some people are saying things I fully disagree with, like this concept of AGI – artificial general intelligence – where machines do everything better than humans. I've summarised my position in seven letters: “AGI is BS.” The only way you can believe in AGI in that sense is if your concept of what a human is and what a human mind is is so narrow that you think it's just a narrow range of analytical skills. We are so much more than that. Humans represent almost four billion years of embodied evolution. There's so much about ourselves that we don't know. As incredible as these machines are and will become, there will always be wonderful things humans can do that are different from machines.

What I always tell people is: whatever you're doing, don't be a second-rate machine. Be a first-rate human. If you're doing something and a machine is doing that thing much better than you, then shift to something where your unique capacities as a human give you the opportunity to do something better.

So yes, I totally agree that the quality of AI-generated stuff will get better. But I think the most creative and successful humans will be the ones who say, “I recognise that this is creating new opportunities, and I'm going to insert my core humanity to do something magical and new.” People are “othering” these technologies, but the technologies themselves are magnificent human-generated artefacts. They're not alien UFOs that landed here.

It's a scary moment for creatives, no doubt, because there are things all of us did in the past that machines can now do really well. But this is the moment where the most creative people ask themselves, “What does it mean for me to be a great human?” The pat answers won't apply. In my Virtuoso novel I explore that a lot.
The idea that “machines don't do creativity” – they will do incredible creativity; it just won't be exactly human creativity. We will be potentially huge beneficiaries of these capabilities, but we really have to believe in and invest in the magic of our core humanity.

Where to Find Jamie and His Books

Jo: Brilliant. So where can people find you and your books online?

Jamie: Thank you so much for asking. My website is jamiemetzl.com – and my books are available everywhere.

Jo: Fantastic. Thanks so much for your time, Jamie. That was great.

Jamie: Thank you, Joanna.

The post Writing The Future, And Being More Human In An Age of AI With Jamie Metzl first appeared on The Creative Penn.
What does it truly mean to surrender in leadership, marriage, and personal growth? In this episode of Seek Go Create, Benita Williams, author of "The Sustainable CEO," joins hosts Tim Winders and A.M. Williams for a vulnerable conversation about moving from burnout to overflow, the power of aligning with your divine design, and how couples can thrive together in life and business. Discover practical habits for sustainable growth, hear honest stories about overcoming adversity, and learn why surrender might be the key you're missing. If you want to lead with purpose and partnership, this episode is one you won't want to miss.

"Surrender is the starting point for change." - Benita Williams

Access all show and episode resources HERE

About Our Guest:
Benita Williams is the author of "The Sustainable CEO: Equipping Leaders to Move from Burnout to Overflow through Rest, Clarity and Stewardship." She specializes in helping leaders achieve sustainable growth by aligning leadership with stewardship principles, and is recognized for her expertise in strategy, consulting, and leadership development.

A.M. Williams is a renowned coach for founders and executives, known for his Leverage Leader approach, which focuses on identity architecture, governance, and expansion. Despite the challenges of paraplegia, he has built his coaching and training business from a bedbound condition and has inspired leaders globally to grow in governance and fulfill their divine design. Together, Benita and A.M. provide a powerful and complementary perspective on leadership, marriage, and legacy.

Reasons to Listen:
- Discover how Benita Williams and A.M. Williams have built a thriving marriage while navigating leadership challenges together, even in the face of disability and life-changing adversity.
- Uncover powerful insights on shifting from ambition-driven leadership to alignment and stewardship – learn what it really means to operate from your "divine design" for lasting success.
- Get a behind-the-scenes look at the unique ways these leaders collaborate, resolve conflict, and blend complementary strengths in business and life – with practical takeaways you won't find in traditional leadership books.

Episode Resources & Action Steps:

Resources Mentioned in This Episode:
- The Sustainable CEO: Unlocking the Divine Overflow Sequence for Leadership, Alignment and Legacy: Book by Benita Williams that outlines a faith-driven, sustainable approach to leadership and stewardship. (Referenced throughout, especially around [00:48:28], and recommended by A.M. Williams at [00:50:55].)
- Coach A.M. Williams Website: Website for A.M. Williams containing resources on leadership, governance, and coaching (shared by Tim Winders at [00:30:17]).
- (Bonus) Seek Go Create Podcast – Benita Williams Episode: Previous episode featuring Benita Williams, referenced by Tim Winders at [00:01:51]. Listeners are encouraged to check out that episode for deeper insight into her journey.

Action Steps for Listeners:
- Practice Surrender and Stewardship: As Benita Williams emphasized ([00:00:00], [00:53:54]), start by surrendering old patterns, identities, and ways of operating. Open yourself up to what God is calling you to next, and focus on...
What if the rules we write today could make tomorrow's technology more human, safer, and genuinely worth wanting? We sit down with Anna Aseeva, a legal strategist working at the intersection of sustainability, intellectual property, and AI, to map a smarter path for digital innovation that starts with design and ends with systems people trust.We dig into the significant shifts shaping tech governance right now. Anna explains a practical model for aligning IP and sustainability: protect early to nurture fragile ideas through sandboxes and investment, then open up mature solutions with licensing that shares benefits and safeguards intent. This conversation is equally about culture and code. We talk about legal design that reads like plain talk, citizen participation that turns evidence into policy input, and civic apps that could let communities steer platform rules. We cover digital sustainability beyond emissions—lighter websites, greener hosting, and product decisions that fight digital obesity and planned obsolescence. And we don't shy away from the realities of AI: hallucinated footnotes, invented coauthors, and the simple fixes that come from a careful human in the loop.If you're a builder or curious listener who wants technology to serve people and planet, you'll find clear takeaways: design for sustainability from day one, keep humans in charge of final decisions, protect what's fragile, open what's ready, and invite people into the process. Subscribe, share with a friend, and tell us: where should human review be non-negotiable?Send us a textCheck out "Protection for the Inventive Mind" – available now on Amazon in print and Kindle formats. The views and opinions expressed (by the host and guest(s)) in this podcast are strictly their own and do not necessarily reflect the official policy or position of the entities with which they may be affiliated. 
This podcast should in no way be construed as promoting or criticizing any particular government policy, institutional position, private interest or commercial entity. Any content provided is for informational and educational purposes only.
Service Management Leadership Podcast with Jeffrey Tefertiller
In this episode, Jeffrey discusses AI and AI governance. Each week, Jeffrey will be sharing his knowledge on Service Delivery (Mondays) and Service Management (Thursdays). Jeffrey is the founder of Service Management Leadership, an IT consulting firm specializing in Service Management, Asset Management, CIO Advisory, and Business Continuity services. The firm's website is www.servicemanagement.us. Jeffrey has been in the industry for 30 years and brings a practical perspective to the discussions. He is an accomplished author with seven acclaimed books in the subject area and a popular YouTube channel with approximately 1,500 videos on various topics. Also, please follow the Service Management Leadership LinkedIn page.
Professor McMeekin states clearly that communism, specifically Marxism-Leninism, prospers only in conjunction with extreme violence and the disintegration of governance norms. The discussion covers the French revolutionary Babeuf, who advocated for the overturning of private property, centralized rationing, and "cleansing political violence" against "class enemies." Babeuf set a precedent for the centrality of political violence to the communist project. Marx later embraced the Paris Commune of 1871, even though he did not organize it, seeing the Commune's violence—including the killing of class enemies and throwing women and children into battle—as proof of the veracity and sincerity of a true communist revolution.
The discussion centers on the book "Rewiring Democracy," authored by Bruce Schneier and Nathan E. Sanders, which explores the implications of artificial intelligence (AI) for governance, power distribution, and democratic principles. The authors highlight the risks associated with AI, particularly the concentration of power among a few corporations, primarily in Silicon Valley, which can undermine democratic values and lead to inefficiencies in government and business. They advocate for a vision of AI that democratizes power and enhances the efficiency of governance, emphasizing the need for transparency, fairness, and accountability in AI systems.
Schneier and Sanders argue that the democratization of AI technology is already underway, as the costs of developing AI models decrease, allowing smaller organizations to create their own systems. However, they caution that the opacity of these models poses significant challenges. They suggest that regulation and competition can play crucial roles in ensuring that AI systems are transparent and accountable to both the public and clients. The conversation also touches on the importance of diverse participation in policymaking, asserting that individuals bring valuable lived experiences that can inform AI governance.
The episode further addresses the issue of bias in AI systems, emphasizing that while complete neutrality is unattainable, transparency about inherent biases is essential. The authors discuss the legal implications of biased AI implementations, referencing a case involving a pharmacy chain that faced accountability for racially biased facial recognition technology.
They argue for a systemic approach to governance that considers the roles of both technology providers and the organizations that implement these systems.
For Managed Service Providers (MSPs) and IT service leaders, the insights from this episode underscore the importance of actively testing AI systems for bias and ensuring compliance with evolving regulations. The authors encourage IT providers to engage in the development of governance frameworks that prioritize transparency and accountability, ultimately fostering a more equitable technological landscape. As AI continues to evolve, the need for informed participation and robust regulatory frameworks will be critical for maintaining democratic values and addressing the challenges posed by emerging technologies.
Why do governments rely on coercion and punishment when voluntary cooperation often produces better, more sustainable outcomes?
Episode Summary:
On this episode, I'm joined once again by Professor Yuval Feldman, who returns to explore the core question behind his latest book: Can The Public Be Trusted? Instead of asking how much we trust our governments, Yuval flips the script, asking how much governments trust us, and whether that trust is deserved. Together, we dive into the concept of voluntary compliance, where people follow rules not because they're forced to, but because they believe in doing the right thing. We unpack the complexity of this idea through real-world examples, from tax compliance to environmental policy to COVID-19 interventions. Yuval explains why people who think they're ethical can actually be the hardest to regulate, and how misplaced trust can lead to serious regulatory blind spots. We also explore the psychological tension between intrinsic motivation and external enforcement, and why regulators often default to command-and-control, even when trust might offer a better solution. As ever, Yuval makes nuanced, sophisticated ideas feel accessible and immediately relevant. You'll hear about the role of culture, the limits of nudging, why economists might (sometimes!) actually be right about human behaviour, and how AI might help policymakers make better decisions.
Guest Bio:
Professor Yuval Feldman is a legal scholar and behavioural scientist at Bar-Ilan University in Israel. A returning guest and the podcast's very first interviewee, Yuval is internationally renowned for his work at the intersection of law, psychology, and behavioural economics. His new book, Can The Public Be Trusted?
The Promise and Perils of Voluntary Compliance is available open-access via Cambridge University Press (link below).
AI-Generated Timestamped Summary:
[00:00:00] Introduction: why this question of “can the public be trusted?” matters for regulation and risk
[00:03:42] Yuval's personal background: how he came into law + psychology and the origin of his VComp lab
[00:09:15] Defining voluntary compliance: what it means, how it differs from coercion
[00:14:52] Intrinsic motivation vs crowding out: when good intentions are undermined by heavy-handed regulation
[00:21:30] Designing regulatory systems for trust: frameworks and features that support voluntary compliance
[00:27:47] Case study: Covid-19 and public cooperation—what we learned about trust, compliance and enforcement
[00:34:10] Tax compliance as a trust test: how citizens respond when they believe the system treats them fairly
[00:39:58] Environmental regulation and the limits of voluntary strategies: when culture or technology create barriers
[00:45:22] Cross-cultural & technological dynamics: how digital reputation, culture and platforms impact compliance
[00:50:05] The perils of voluntary compliance: when trust can be misplaced, manipulated or simply ineffective
[00:55:30] Final reflections: what this means for risk professionals, policymakers and anyone designing systems of human behaviour
[01:00:12] Closing: how to reframe regulation to see the public not as a risk but as a resource
Links:
Yuval's academic profile - https://law.biu.ac.il/en/feldman
His profile on LinkedIn - https://www.linkedin.com/in/yuval-feldman-21942514/
His open-access book Can the Public Be Trusted?
(Cambridge University Press) – https://www.cambridge.org/core/books/can-the-public-be-trusted/B3E11831E3051D4E928B9252B6767A4B
Yuval's previous appearances on the show:
On The Law of Good People, or ‘why we should write rules for good people not bad people' (2019) - https://www.humanriskpodcast.com/professor-yuval-feldman-on-why/
On Trust & Voluntary Compliance (2022) - https://www.humanriskpodcast.com/professor-yuval-feldman-on-trust-compliance?
By Craig Scott - The gospel of the coming kingdom of God is a message of how governance on earth (and presumably all creation) will be set right… “When the righteous are in authority, the people rejoice.”
Gayatri Kalyanaraman is in conversation with Rupam Tandon, a financial executive with 30 years of experience across banking, asset management, and insurance. She is also a certified Independent Director (IoD), an ESG specialist, and a startup mentor. She talks about her journey, from starting out in a small town to traveling the globe and literally spreading her wings. Gayatri Kalyanaraman, host of the Software People Stories podcast and co-founder of Sangatna Angels, welcomes Rupam Tandon and sets the tone for an inspiring conversation about leadership, resilience, and governance. Highlights of the conversation:
01:00 – Early Career and Foundations in Finance: Rupam shares her beginnings in Dubai's financial industry during the 1990s — a period of rapid banking expansion — and lessons on trust, relationships, and financial discipline.
03:00 – London Years & the Global Financial Crisis: She moves to London to work with asset managers and hedge funds, witnessing firsthand the subprime crisis and the collapse of major institutions like Lehman Brothers and AIG.
07:00 – Return to India & Setting Up Global Operations: Rupam explains her decision to return to India for family reasons and how she helped transition a major bank's operations from Ireland to Bangalore — navigating SEC and FCA audits successfully.
10:00 – The Shift to Insurance and Building Communities: She discusses moving from investment banking to insurance, leading initiatives in Risk, ESG, and Cyber, and mentoring senior executives.
13:00 – Reflections on Technology & Automation: From manual processes to automation and bots — Rupam contrasts the early years of finance with today's data-driven governance and the rise of straight-through processing (STP).
16:00 – Navigating Change and Cultural Adaptation: Growing up in Lucknow and moving across continents, Rupam reflects on adapting to different work cultures, continuous learning, and seizing opportunities.
20:00 – Decision-Making and Balancing Life & Work: Rupam shares personal stories about taking career risks, balancing motherhood, and receiving mentorship and compassion from colleagues during turbulent times.
24:00 – Continuous Learning & Board Readiness: She outlines her professional upskilling journey — courses from ISB, IIM Bangalore, and IoD — and her belief in lifelong learning as a foundation for effective governance.
27:00 – Second Innings: Purpose and Governance for the Future: Rupam shares her vision for the next decade — mentoring startups, strengthening governance frameworks, and shaping ethical, transparent organizations.
Memorable Quotes:
“Success without ethics is incomplete.”
“Governance isn't about control — it's about responsibility.”
“It's time for stewardship — to give back, guide, and help the next generation grow with integrity.”
“Even the strongest institutions can collapse if governance, risk, and culture are not aligned.”
https://www.linkedin.com/in/rupam-tandon/
After nearly three decades of experience across industries and markets, she is now embarking on her second innings with a focus on startup governance and scaleups. Her journey has been a diverse one, spanning global roles (in the UAE, UK, and India) in business strategy, branding, communications, and stakeholder engagement. She is passionate about helping companies build strong, sustainable foundations for growth.
As an alumna of Lucknow University and the Indian School of Business (ISB), where she specialized in Negotiation as part of an Executive Management Program, she has honed her ability to adapt and thrive in fast-moving, high-pressure environments.
Her Core Strengths:
- Strategic Oversight: Managing boards/foundations and aligning teams with business goals
- Business Model Innovation: Identifying opportunities at the intersection of strategy, branding, product development, and customer experience
- Startup Consulting: Fintech and finance-based businesses
- Reputation Building: Creating narratives that enhance corporate reputation and build trust
- Communications Strategy: Developing integrated communication plans to engage stakeholders and drive impact
- Mentorship & Coaching: Supporting founders, leaders, and teams to grow through personalized guidance and coaching
- Crisis Communications: Navigating through challenges to protect and strengthen brand reputation
- Networking & Ecosystem Building: Cultivating strategic partnerships and growing influential networks
- CSR & Sustainability: Defining and driving impactful corporate social responsibility initiatives
Rupam Tandon is a certified board director and former investment banker with expertise in governance, risk, and sustainability. She advises startups and SMEs on ethical leadership and board resilience, teaches ESG and stakeholder strategy to executives, and mentors emerging leaders. She advocates for policy-led reforms in India's entrepreneurial governance landscape.
Jessica Swank, Chief People Officer at Box, joined us on The Modern People Leader. We talked about building an "org brain", preparing managers to lead teams of humans plus agents, avoiding agent sprawl and tech debt, and why every people leader needs to start experimenting with AI personally to stay ahead.---- Downloadable PDF with top takeaways: https://modernpeopleleader.kit.com/episode269Sponsor Links:
My guest today is Cosmo Jiang, Partner at Pantera Capital. Pantera was one of the earliest institutional voices to embrace digital asset treasury companies, or DATs, and helped catalyze the Solana DAT wave. We begin by unpacking the DAT investment thesis using a traditional fundamental investing framework. We then discuss the role of DATs in the crypto ecosystem, the convergence of crypto and traditional finance, and why Solana is their largest position. Please enjoy this conversation with Cosmo Jiang. For the full show notes, transcript, and links to the best content to learn more, check out the episode page HERE. ----- Making Markets is a property of Colossus, LLC. For more episodes of Making Markets, visit joincolossus.com/episodes. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here. Follow us on Twitter: @makingmkts | @ericgoldenx Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com).
Show Notes:
(00:00:00) Welcome to Making Markets
(00:01:04) Introduction to Digital Asset Treasury Companies (DATs)
(00:01:44) Understanding the Role of DATs in the Crypto Ecosystem
(00:02:48) The Investment Thesis Behind DATs
(00:04:13) Comparing DATs to Traditional Financial Models
(00:06:59) MicroStrategy's Strategy and Performance
(00:09:53) Active Management vs. Passive Investment in DATs
(00:13:40) Pantera Capital's Journey with DATs
(00:16:51) The Rise of Solana DATs
(00:20:20) Advocating for Solana and the Role of Spokespersons
(00:22:46) Guiding Investors on Solana Exposure
(00:23:28) Exploring AI Investment Strategies
(00:24:10) The Role of ETFs in Digital Asset Management
(00:25:19) Active Management vs. Spot Investments
(00:25:53) The Future of Digital Asset Treasuries
(00:27:43) The Intersection of TradFi and Crypto
(00:29:24) Governance and Decentralization in Crypto
(00:33:22) Solana: A Promising Layer 1 Blockchain
(00:43:21) Current State and Future of Crypto Markets
Learn more about your ad choices. Visit megaphone.fm/adchoices
EqualAI president & CEO Miriam Vogel, co-author of the new book Governing the Machine, joins Washington AI Network founder Tammy Haddad for a timely conversation on AI trust, governance, and global competition. Vogel breaks down the surge in AI litigation, why companies are missing ROI without governance, and how workforce fears are shaping adoption. She argues that America's long-held advantage in brand trust may be its most powerful asset in the global AI race — if leaders act now.
We weigh the promise and peril of the AI agent economy, pressing into how overprovisioned non-human identities, shadow AI, and SaaS integrations expand risk while go-to-market teams push for speed. A CMO and a CFO align on governance-first pilots, PLG trials, buyer groups, and the adoption metrics that sustain value beyond the sale.
• AI adoption surge matched by adversary AI
• Overprovisioned agents and shadow AI in SaaS
• Governance thresholds before budget scale
• PLG trials, sandbox, and POV sequencing
• Visualization to reach the aha moment
• Buying groups, ICP, and economic buyer alignment
• Post-sales usage, QBRs, NRR and churn signals
• Zero trust limits and non-human identities
• Breach disclosures as industry standards
• Co-sourcing MSSP with in-house oversight
Security isn't slowing AI down; it's the unlock that makes enterprise AI valuable. We dive into the AI agent economy with a CMO and a CFO who meet in the messy middle. The result is a practical blueprint for moving from hype to governed production without killing momentum.
We start by mapping where controls fail: once users pass SSO and MFA, agents often operate beyond traditional identity and network guardrails. That's how prompts pull sensitive deal data across Salesforce and Gmail, and how third-party API links expand the attack surface. From there, we lay out an adoption sequence that balances trust and speed. Think frictionless free trials and sandboxes that reach an immediate “aha” visualization of shadow AI and permissions, then progress to a scoped POV inside the customer's environment with clear policies and measurable outcomes. Along the way, we detail the buying group: economic buyers who sign and practitioners who live in the UI, plus the finance lens that sets pilot capital, milestones, and time-to-value expectations.
We also challenge sacred cows.
Zero trust is essential, but attackers increasingly log in with valid credentials and pivot through integrations, so verification must include non-human identities and agent-to-agent controls. Breach disclosures, far from being a greater threat than breaches, are foundational to ecosystem trust and faster remediation. And while MSSPs add critical scale, co-sourcing—retaining strategic oversight and compliance ownership—keeps accountability inside. If you care about ICP, PLG motions, PQLs, NRR, or simply reducing AI risk while driving growth, this conversation turns buzzwords into a playbook you can run.
Vamshi Sriperumbudur: https://www.linkedin.com/in/vamsri
Vamshi Sriperumbudur was recently the CMO for Prisma SASE at Palo Alto Networks, where he led a complete marketing transformation, driving an impact of $1.3 billion in ARR in 2025 (up 35%) and establishing it as the platform leader.
Chithra Rajagopalan: https://www.linkedin.com/in/chithra-rajagopalan-mba/
Chithra Rajagopalan is the Head of Finance at Obsidian Security and former Head of Finance at Glue, and she is recognized as a leader in scaling businesses. Chithra is also an investor and advisory board member for Campfire, serving as the President and Treasurer of Blossom Projects.
Website: https://www.position2.com/podcast/
Rajiv Parikh: https://www.linkedin.com/in/rajivparikh/
Sandeep Parikh: https://www.instagram.com/sandeepparikh/
Email us with any feedback for the show: sparkofages.podcast@position2.com
On 20 November 2025, the UK Covid-19 Inquiry will publish its second report. This module focuses on political and administrative decision making during the pandemic, examining how key decisions were made across government and how the civil service performed. The report will also include perspectives from Scotland, Wales, and Northern Ireland. In a rapid reaction webinar the following day, our expert IfG panel unpacked the report's findings and explored their implications for future governance: Tim Durrant, Programme Director for the IfG Ministers work programme; Stuart Hoddinott, Associate Director at the IfG; Akash Paun, Programme Director for the IfG Devolution work programme; and Alex Thomas, Programme Director for the IfG Civil Service work programme. The webinar was chaired by Dr Hannah White, Director and CEO of the Institute for Government.
Michael welcomes Jaelynn Williams, former CEO of Air Methods and a respected healthcare and business thought leader. Jaelynn has developed the Board AI Governance Index™, a groundbreaking framework that helps boards assess and strengthen their AI readiness—balancing innovation with accountability. They explore how this 100-point model gives leaders a structured way to oversee AI risk, capture opportunity, and ensure responsible use of technology in complex, high-stakes environments like healthcare.
Newton's 3rd Law states that for every action, there is an equal and opposite reaction. Outside of physics, this notion holds especially true in human behavior, namely, politics. Governance should be done in a manner to prevent the possibility of an inflamed reaction, not to incite it. Together, let's examine our current political landscape and what we can do as leaders to level out the drama.LEARN MORE:Website: https://stephenmansfield.tv/Instagram: https://instagram.com/mansfieldwrites/X: https://twitter.com/MansfieldWrites
Jay Chrawnna, director of The Kus, dives into the subtle ways people are already using Web3 in everyday transactions (often without realizing it) and what this means for the future of consumer Web3. He explores how gaming projects like FIFA Rivals are redefining digital asset ownership, and highlights how entities like the Indonesian government are leveraging blockchain to improve transparency, efficiency, and mainstream accessibility. Key Takeaways: How the next wave of Web3 could be completely invisible to users. Why Polkadot may be the only platform capable of scaling Web3 at full potential. What "Consumer Web3" really means, and why it's about more than wallets and tokens. How blockchain is being used to reimagine institutions at a massive scale. Guest Bio: Jay Chrawnna is the director of The Kus, a media broadcaster for Polkadot. On the Space Monkeys Podcast and weekly Attempts at Governance liveshows, the DOT eco's top movers and doers drive discourse and shape the second age of Polkadot. Jay exclusively uses the Brave browser. ---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte
This episode is a re-air of one of our most popular conversations from this year, featuring insights worth revisiting. Thank you for being part of the Data Stack community. Stay up to date with the latest episodes at datastackshow.com. This week on The Data Stack Show, John chats with Paul Blankley, Founder and CTO of Zenlytic, live from Denver! Paul and John discuss the rapid evolution of AI in business intelligence, highlighting how AI is transforming data analysis and decision-making. Paul also explores the potential of AI as an "employee" that can handle complex analytical tasks, from unstructured data processing to proactive monitoring. Key insights include the increasing capabilities of AI in symbolic tasks like coding, the importance of providing business context to AI models, and the future of BI tools that can flexibly interact with both structured and unstructured data. Paul emphasizes that the next generation of AI tools will move beyond traditional dashboards, offering more intelligent, context-aware insights that can help businesses make more informed decisions. It's an exciting conversation you won't want to miss.
Highlights from this week's conversation include:
Welcoming Paul Back and Industry Changes (1:03)
AI Model Progress and Superhuman Domains (2:01)
AI as an Employee: Context and Capabilities (4:04)
Model Selection and User Experience (7:37)
AI as a McKinsey Consultant: Decision-Making (10:18)
Structured vs. Unstructured Data Platforms (12:55)
MCP Servers and the Future of BI Interfaces (16:00)
Value of UI and Multimodal BI Experiences (18:38)
Pitfalls of DIY Data Pipelines and Governance (22:14)
Text-to-SQL, Semantic Layers, and Trust (28:10)
Democratizing Semantic Models and Personalization (33:22)
Inefficiency in Analytics and Analyst Workflows (35:07)
Reasoning and Intelligence in Monitoring (37:20)
Roadmap: Proactive AI by 2026 (39:53)
Limitations of BI Incumbents, Future Outlooks and Parting Thoughts (41:15)
The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Environmental, Social, Governance (ESG) initiatives aren't just “the right thing to do”; they can also save companies real dollars, particularly if they're investing in data centers and other infrastructure. Join Jonathan Ciccio, Continuous Improvement Manager for The Siemon Company, as we discuss The Siemon Company’s ESG initiatives. The Siemon Company has been in business for...
MAGAISM/BRO CULTURE CRONYISM/CEOs RULE!/ELONISM
'I am deeply ashamed': Larry Summers to step back from public commitments after new Epstein emails
Senator Elizabeth Warren: “[Summers] cannot be trusted to advise our nation's politicians, policymakers and institutions — or teach a generation of students at Harvard or anywhere else.”
And an unidentified Trump administration official told Politico that companies and organizations should end their association with Summers.
The former Treasury secretary, along with Bill Clinton and the Democratic megadonor Reid Hoffman, is among the Democratic figures whom the Justice Department is investigating over Epstein ties — at President Trump's behest.
Economist Warns That Trump's Investments in the Tech Industry Could Crash the Whole Economy
Italian economist Mariana Mazzucato, a University College London economics professor:
“I think the kind of capitalism Trump has is crony capitalism.”
“I would describe crony capitalism as Mafia-like. You're showing your upper hand. You're handing out favors to some. But then divide and conquer. Picking and choosing without a particular strategy.”
She argues that the Intel deal is poorly designed because it doesn't have any conditions to incentivize the company to build new products, while the government simply acts as a passive investor.
All SNAP recipients required to reapply as Trump admin cracks down on fraud: 'Business as usual is over'
SEC to Allow Companies to Block Shareholder Proposals
The U.S. Securities and Exchange Commission (SEC) announced that it will not express opinions on requests from companies asking to exclude shareholder proposals from their proxy voting materials during the 2025-2026 proxy season, effectively allowing companies to avoid voting on issues proposed by investors such as climate, sustainability and diversity at annual meetings.
Elon Musk is set to make more than every U.S. elementary teacher combined
Washington Post: Bezos
White nationalist talking points and racial pseudoscience: welcome to Elon Musk's Grokipedia: World's richest person wanted to ‘purge' propaganda from Wikipedia, so he created a compendium of racist disinformation
STAKEHOLDERS RULE!
‘We've probably made housing unaffordable for a whole generation of Americans': top real-estate CEO on the real cost of Covid economic firefighting
Sean Dobson, CEO of The Amherst Group
Ford CEO says he has 5,000 open mechanic jobs with 6-figure salaries from the shortage of manually skilled workers: ‘We are in trouble in our country'
CEO James D. Farley, Jr.: $24,861,866; 253:1
Ford Family Executive Chair William Clay Ford, Jr.: $20,379,912; 207:1
$519,845 for personal use of aircraft
$1,394,538 for security
Only 16% of Large Companies on Track for Net Zero
Missing at U.N.'s Climate Meeting: American Executives
WOKE DATA
Disney ditches 'diversity' and 'DEI' in business report for the first time since 2019
Nearly half of LGBTQ characters AXED from TV amid Trump-era rollback of woke DEI initiatives
According to Deadline, around 41% of the 489 LGBTQ characters that were on the small screen this year will not return due to series cancellations and endings.
AI
Jeff Bezos is putting $6.2 billion—and himself as co-CEO—behind a new AI startup
Project Prometheus
Vik Bajaj: Stanford
OpenAI accused of ‘consistent and dangerous pattern' rushing product to market that is ‘inherently unsafe or lacking in needed guardrails'
The nonprofit Public Citizen is now demanding OpenAI withdraw Sora 2 from the public, writing in a letter to the company and CEO Sam Altman that the app's hasty release so that it could launch ahead of competitors shows a “consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails.”
Sora 2, the letter says, shows a “reckless disregard” for product safety, as well as people's rights to their own likeness and the stability of democracy.
OpenAI didn't immediately respond to a request for comment.
In the age of AI, CEOs quietly signal that layoffs are a badge of honor
People Are Having AI “Children” With Their AI Partners
Eli Lilly CEO says he has 'at least 1 or 2 AIs running' during every meeting he's in
David Ricks: Ricks said he doesn't like OpenAI's ChatGPT for science-related questions — "It's too verbal," he said. Instead, he prefers Anthropic's Claude and xAI's Grok.
Still, he has to be careful to watch for hallucinations, an issue the frontier model companies are still trying to tamp down.
The CEO of $2.2 billion AI company Turing can't live without ChatGPT, swears by his Kindle, and has only taken 2 weeks of vacation in 7 years
Jonathan Siddharth; Stanford
STUPID
The CFO Centre names Natalie Garfield as new CFO
Heinz goes all-in on Thanksgiving leftovers with squeezable turkey gravy
Strong governance builds strong organizations — and ASBO International is leading the way.

In this episode of School Business Insider, host John Brucato talks with members of the ASBO International Governance Task Force — Jim Rowan, Lynn Knight, Sharon Bruce, and Jordan Ely — about how the group is modernizing ASBO's leadership structure to make it more transparent, inclusive, and effective. They discuss why the task force was created, how it conducted its review, and what its key recommendations — including the Candidate Eligibility Matrix and the Candidate Visibility and Engagement Guide — mean for the future of ASBO International. It's a conversation about process, purpose, and progress, and the importance of building governance systems that reflect the members they serve.

Contact School Business Insider: Check us out on social media: LinkedIn, Twitter (X). Website: https://asbointl.org/SBI Email: podcast@asbointl.org Make sure to like, subscribe, and share for more great insider episodes!

Disclaimer: The views, thoughts, and opinions expressed are the speaker's own and do not represent the views, thoughts, and opinions of the Association of School Business Officials International. The material and information presented here are for general information purposes only. The "ASBO International" name and all forms and abbreviations are the property of its owner, and their use does not imply endorsement of or opposition to any specific organization, product, or service. The presence of any advertising does not endorse, or imply endorsement of, any products or services by ASBO International. ASBO International is a 501(c)(3) nonprofit, nonpartisan organization and does not participate or intervene in any political campaign on behalf of, or in opposition to, any candidate for elective public office. The sharing of news or information concerning public policy issues or political campaigns and candidates is not, and should not be construed as, an endorsement by ASBO Internatio...
Environmental, Social, Governance (ESG) initiatives aren't just "the right thing to do"; they can also save companies real dollars, particularly if they're investing in data centers and other infrastructure. Join Jonathan Ciccio, Continuous Improvement Manager for The Siemon Company, as we discuss The Siemon Company's ESG initiatives. The Siemon Company has been in business for...
This week on I 501(c) You, Michael sits down with Sandra Bograd, Of Counsel at Genova Burns, to unpack the often-misunderstood world of nonprofit and corporate board governance. Drawing from decades of experience in law, compliance, and ethics, Sandra breaks down what makes boards effective, how governance in nonprofits and for-profits overlaps, and why board members must lean into their fiduciary roles, paid or not. If you've ever wondered what separates a high-impact board from a passive one, this episode is essential listening for nonprofit leaders, board members, and governance professionals.

Timestamps:
(00:00) Introducing Sandra Bograd, Esq., Of Counsel, Genova Burns, LLC
(08:00) How did you navigate stakeholder relationships?
(10:30) What is good governance?
(12:45) Ensuring good governance
(15:00) Getting the board on board
(18:40) Is governance different on for-profit and nonprofit boards?
(20:45) Role of the board chair
(23:05) Prepping for board meetings
(24:30) Benefits of strong board governance
(29:15) Risks of joining a board
(35:25) Thoughts when nonprofits are victims of fraud
(39:35) Board speaking with senior staff

Join us every other week as we release a new podcast with information about how you can be the best board member and provide great service to your organization. Listen to the podcast on any of the following platforms: YouTube, Apple Podcasts, Spotify Podcasts, Amazon, iHeartRadio. Visit us at: www.thecorleycompany.com/podcast
Welcome back to another episode of the EUVC Podcast, where we bring together Europe's venture family to share the stories, insights, and lessons that drive our ecosystem forward.

Today we dive into one of the most under-discussed — yet increasingly important — topics in European venture: Opportunity Funds. Joining Andreas Munk Holm is Lea Strumberger, Senior Investment Manager at KfW Capital, one of Europe's largest and most mission-driven LPs. KfW Capital co-operates several modules of Germany's €10B Future Fund (Zukunftsfonds) and deploys into VC funds to strengthen Europe's late-stage capital base. Within that framework, KfW Capital has launched an Opportunity Fund facility to back managers deploying Series B+ capital — often into their own breakouts — with a structure and governance playbook that preserves alignment and avoids "continuation-vehicle rescue" dynamics. Public examples of European Opportunity strategies include Notion Capital's Opportunities funds, built alongside its core franchise.

Here's what's covered:
00:17 — Mandate & why Series B+: Europe needs domestic late-stage capital
04:39 — Two OF archetypes: inside-only vs blended
08:15 — How KfW diligences emergent managers launching OFs
13:19 — Why a third-party lead (≥25%) matters
18:53 — Terms that matter: fees, carry, GP commit, duration
25:30 — GP commit reality for second-timers
33:19 — Governance: allocation policy, LPAC, down-rounds
36:10 — Hurdle rates: 6–8% standard, not the battleground
37:55 — Market pulse: ~10 OFs/year cross KfW's desk
On Wednesday, President Donald Trump signed a government funding bill ending a record 43-day shutdown. The legislation came after weeks of political gridlock that ended when a small group of Senate Democrats reached a compromise with Republicans.MPR News guest host Catharine Richert talks with two political scientists about what the deal to end the shutdown tells us about power, priorities and politics in Washington.Guests:Larry Jacobs is a political scientist and founder and director of the Center for the Study of Politics and Governance at the University of Minnesota. He is also the author of “Democracy Under Fire: Donald Trump and the Breaking of American History.”Cindy Rugeley is an associate professor of political science and head of the Political Science Program at the University of Minnesota Duluth.Subscribe to the MPR News with Angela Davis podcast on: Apple Podcasts, Spotify or RSS.Use the audio player above to listen to the full conversation.
SS&C Blue Prism's VP reveals how they achieved $200M annual savings and $600M revenue growth by deploying 3,000 AI agents, processing 6 million documents monthly as their own first customer.

Topics Include:
SS&C Blue Prism evolved from RPA leader to agentic automation provider over 25 years
Serves 22,000 clients in regulated industries like financial services, healthcare, manufacturing, and retail
Offers AI agents, governance gateway, and secure enterprise chat leveraging AWS Bedrock
As "customer zero," they deployed 3,000 agents processing 6 million documents monthly
Generated $200M annual savings and $600M revenue growth using their own technology
Financial services client unlocked unstructured document processing previously impossible with traditional automation
Healthcare client's AI processes MRIs more accurately than human radiologists
Key lesson: Focus on business outcomes first, not just implementing AI everywhere
Critical insight: Plan for scale on day one, not after pilots succeed
AWS Marketplace streamlined purchasing, especially in challenging Latin American markets
Future vision: B2A economy where agents negotiate parking, shopping, and services autonomously
Predicts agent-to-agent communication will revolutionize healthcare monitoring and wealth management

Participants:
Satish Shenoy – Global Vice President, Technology Alliances and GenAI GTM, SS&C Blue Prism

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
S1E4: Gripping the Hot Blade of AI: Risk, Trust, and Governance

Nate Couture, CISO of the University of Vermont Health System, joins hosts Tamer Baker and Steven Hajny to explore how healthcare organizations can manage shadow AI responsibly, secure sensitive data, and build governance frameworks to unlock AI's full potential.

Key Takeaways:
1. Discovering and managing shadow AI starts with visibility, DLP, and cross-functional collaboration.
2. Thoughtful AI governance is the key to balancing innovation with patient privacy and trust.
3. AI is a powerful tool to enhance, not replace, human productivity in healthcare.

To stream our Station live 24/7 visit www.HealthcareNOWRadio.com or ask your Smart Device to "…Play Healthcare NOW Radio". Find all of our network podcasts on your favorite podcast platforms and be sure to subscribe and like us. Learn more at www.healthcarenowradio.com/listen
New @greenpillnet pod out today!
Climate Change, Indigenous Demands, and Governance Challenges in the Amazon. Ernesto Araújo discusses how indigenous leaders at COP 30 highlighted demands tied to environmental issues in the Amazon, which spans many countries. Poverty drives illegal activities, like logging and mining, even on indigenous lands (14% of Brazilian territory), which are often exploited through bribery and organized crime. While Lula speaks of protecting the Amazon, deforestation and indigenous health figures remain poor. The complex solution requires enforcing existing laws and focusing on establishing law and order to fight pervasive corruption. 1958