Podcast appearances and mentions of Jeremy Bentham

British philosopher, jurist, and social reformer

  • 308 PODCASTS
  • 421 EPISODES
  • 49m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Apr 29, 2025 LATEST
POPULARITY: 2017-2024 (chart)


Best podcasts about Jeremy Bentham

Latest podcast episodes about Jeremy Bentham

Beer and Conversation with Pigweed and Crowhill
508: Are we living in a digital panopticon?

Beer and Conversation with Pigweed and Crowhill

Play Episode Listen Later Apr 29, 2025 41:24


The boys drink and review Delicious IPA from Stone, then discuss prisons, and whether we're in a digital version of one. The "panopticon" is a prison design invented by the philosopher Jeremy Bentham. The idea is that if you make prisoners feel as if they're constantly under surveillance, the prisoners will regulate themselves and the guards won't have to bang them about so much. Modern prisons have adopted some of Bentham's ideas, but so have many other institutions. Foucault said this idea was the blueprint for all modern institutions. Schools, hospitals, and other organizations enforce conformity by defining what is "normal" and by constant surveillance. Stephen Cave added the concept of a "freedom quotient" by which we can determine how much freedom a person can exercise in any given situation. The boys tie it all together and ask whether the modern world is a digital prison. We know we're being surveilled. We know we're supposed to follow what the powerful have defined as "normal."

Merriam-Webster's Word of the Day

Merriam-Webster's Word of the Day for April 24, 2025 is: ostensible • ah-STEN-suh-bul • adjective Ostensible is used to describe something that seems or is said to be true or real, but is possibly not true or real. In other words, it is plausible rather than demonstrably true or real. // The ostensible purpose of a filibuster is to extend debate, but in reality it is used to delay or prevent action. See the entry > Examples: “No drums, no bass, no conventional song structures: Hosianna Mantra was a 40-minute contemplation of the cosmos and cosmic love, couched in words and sounds that explicitly linked it to humanity's grandest and most consistent way of considering meaning, religion. The ostensible polytheism conveyed by the name and the concept were only ways to realize how little we actually know, and how much we wager through mere survival.” — Grayson Haver Currin, Pitchfork, 19 Jan. 2025 Did you know? British philosopher and economist Jeremy Bentham once wrote to Indian religious leader Ram Mohan Roy asking him to “send me two letters—one confidential, another ostensible.” By ostensible he meant that, unlike the confidential letter, the latter was intended to be shown to people other than Bentham himself. This sense of ostensible shows clearly the influence of the word's Latin ancestor, the verb ostendere, meaning “to hold out for inspection,” “to show,” “to make clear by one's actions,” and “to demonstrate.” Ostensible is still used today as it is in Bentham's letter, but it is much more likely to suggest a discrepancy between a declared or implied aim or reason (i.e., the aim or reason that someone displays or “shows” to others) and the true one. For example, someone might give “seeing an old friend” as their ostensible reason for planning a trip when in reality they are planning on spending most of their time relaxing on the beach.

Freiheitsunternehmer Podcast
Verlierst du deine Freiheit in 2025?!

Freiheitsunternehmer Podcast

Play Episode Listen Later Apr 14, 2025 22:52


Episode 223: In my sociology degree I learned how modern surveillance states work. Right now the introduction of the "digital euro" is being debated. Introducing it would create the breeding ground for a modern surveillance state. We are watching our democratic systems crumble while people yearn for "strong leaders." But these "strong leaders" do not always have the people's best interests at heart, which is why, in my view, we should not allow the "digital euro" to be introduced: we could slide into a dependency that would be catastrophic. In the 18th century, Jeremy Bentham developed the panopticon concept, which the French philosopher Michel Foucault later used to explain modern surveillance states. The push for the "digital euro" reminds me strongly of the literature I read on this topic during my studies. The "digital euro" itself is not yet the problem; rather, it is a kind of breeding ground for a modern surveillance state. We should have a public debate about whether we want to let that breeding ground arise at all... because spirits once summoned are not easily gotten rid of (or so Johann Wolfgang von Goethe claims in "Der Zauberlehrling" ;). I'm curious to hear your take on this. Let's connect on Instagram: https://www.instagram.com/timo_eckhardt/

The Atlas Obscura Podcast
Jeremy Bentham's Auto-Icon

The Atlas Obscura Podcast

Play Episode Listen Later Mar 26, 2025 15:32


Jeremy Bentham began planning for his death at a young age. He wrote a will in 1769, at the age of 21. But how did this philosopher's dead body wind up on display in a university student center?

Grace Point Church Ann Rd

Pastor Ty Neal. Judge by Jesus: Colossians 2:8. Filled by Jesus: Colossians 2:9-10; Ephesians 3:14-20. Connection by Jesus: Colossians 2:11-12; Deuteronomy 10:16; Colossians 2:11; Colossians 2:12; Romans 6:8-11. Forgiven by Jesus: Colossians 2:13-14; Romans 9:16; 1 Corinthians 2:14. "Jeremy Bentham, present, not voting." Romans 7:24; Romans 7:25a; Colossians 2:13-14. Victorious by Jesus: Colossians 2:15. "So when the devil throws your sins in your face and declares that you deserve death and hell, tell him this: "I admit that I deserve death and hell, what of it? For I know

All The Best Podcasts Have Daddy Issues
There's No Place Like Home, Part 3

All The Best Podcasts Have Daddy Issues

Play Episode Listen Later Mar 19, 2025 55:05


Jeremy Bentham? I barely know 'em

Inselradio LOST 815
Folge 099 - 5x07 - Der Vater des Feminismus

Inselradio LOST 815

Play Episode Listen Later Dec 31, 2024 222:59


Today, together with Sandra from Höllenschlund, we discuss the rise and also the fall of Jeremy Bentham. So our discussion of 5x07 runs long in our episode 099 - The Father of Feminism! A TRIGGER WARNING applies up front; you will also find it in the chapter markers.
Chapters:
00:00:00.000 Intro & welcome
00:14:14.668 Hard facts
00:18:59.403 Previously on LOST
00:22:41.199 Episode discussion
02:39:17.908 TRIGGER WARNING - discussion of the suicide attempt
03:06:42.516 Trigger warning - end
03:12:04.232 Trivia
03:12:23.574 Bechdel test
03:22:11.820 Rating
03:36:03.427 What happens next?
03:40:53.257 Goodbye & outro
Follow us here:
https://twitter.com/inselradio815
https://bsky.app/profile/inselradio815.bsky.social
https://www.facebook.com/inselradiolost815
https://www.instagram.com/inselradiolost815/
https://www.youtube.com/@inselradiolost8156
inselradiolost815.de
Download the episode directly

Social Innovation
EP 115 - Jeremy Bentham - Co-Chair World Energy Council - Smart Business, Smart Policy and Smart Politics

Social Innovation

Play Episode Listen Later Nov 19, 2024 49:31


In this week's episode of Impact at Scale with Zal Dastur, we are doing a deep dive into energy with my guest, Jeremy Bentham, the Co-Chair of the World Energy Council. Jeremy discusses the critical themes surrounding the energy transition, including the role of the World Energy Council, the need for a shift in perception regarding the energy transition, the importance of aligning various factors for successful implementation, and the potential of nuclear energy. We also explore the geopolitical aspects of energy planning, the size and impact of the energy system, the evolving role of fossil fuel companies, and the significance of consumer demand and individual actions in driving the transition towards a sustainable energy future.
Some Topics Jeremy Covered:
• Reperceiving the energy transition
• The impact of policy on energy systems
• Exploring nuclear energy
• Geopolitics and energy planning
Other Titles We Considered:
• When change happens it is better to be early than late
• Changing the perception of energy companies
• The green premium for commodities is hugely diluted
• Different ways to get to the same goal

Interplace
Markets, Machines, and Morality

Interplace

Play Episode Listen Later Oct 10, 2024 18:07


Hello Interactors,
We've entered fall here in the northern hemisphere, and you know what that means — pumpkin spice everything, cozy sweaters, and … economics! That's right, as the leaves change color (at least for those above 40°N latitude), it's the perfect time to explore how the changing seasons mirror shifts in human interaction, from the flow of resources to the balance of power and progress. This week, it's time to cozy up with Adam Smith, Jeremy Bentham, and James Watt — three names you probably didn't expect to find together, but trust me, they make quite the trio. So grab your favorite fall beverage and join me on a journey through the Industrial Revolution, steam engines, and the forgotten role of moral feedback loops in economics. Let's find out why balancing wealth and well-being is harder than finding a public restroom in an old university.

PURGING THE URGE FOR SYMPATHY
I needed to pee. More specifically, the stretch receptors in the walls of my bladder, which monitor the volume of urine inside, became activated. That sent sensory signals to the spinal cord and brain through my pelvic nerves. The pons in the brainstem (which includes a dedicated urination control center) processed this information in coordination with my prefrontal cortex, which allowed for conscious control over my decision to urinate.

It was a Sunday, and the campus was dead. Lucky for me a door was open, so I ducked in and began my search for a potty. The hallway was musty and narrow. The walls were old, but not as old as the 250-year-old structure surrounding it. There was no immediately visible sign for a restroom, but there were numerous potential doors and directions for me to attempt. As I approached one of them, the industrial-grade door magically opened before I could even touch it. I cautiously inched forward, half wondering if it would lock behind me. Now inside another chamber further in the interior, I was met with another set of mysterious doors. I stepped inside another, narrower hallway that twisted suddenly to a sign above another door that read WC. Whatever Potter-esque ghosts had guided me here clearly had sympathy. And so did my parasympathetic nervous system. It simultaneously signaled the detrusor muscle of my bladder wall to contract and my urethral sphincter to relax. I stood there in relief wondering if I could find my way out.

I was visiting the University of Glasgow, hoping to learn more about its famous figures, especially Adam Smith, whom I see as an important moral philosopher rather than just the "father of economics." A few days later in Edinburgh, I tortured my family by leading them on a search for his gravestone. I was pleased to find it acknowledged his The Theory of Moral Sentiments, where sympathy balances self-interest, as well as his more popular The Wealth of Nations. Unsurprisingly, the nearby tourist plaque focused only on Wealth of Nations, reflecting the emphasis on economics over his broader moral philosophy.

Adam Smith's moral philosophy was central to his life's work, with The Theory of Moral Sentiments being his enduring focus, while The Wealth of Nations was but a brief though significant interlude. For Smith, economics was not just about market mechanics, but deeply intertwined with human nature, ethics, and the broader pursuit of communal well-being. He was more concerned with the motivations behind human actions than with the technical details of market forces, which came to dominate modern economics. Smith believed that the drive for self-betterment was not solely about personal wealth but was intrinsically linked to the well-being of communities, where self-interest was balanced by sympathy for others.

In Smith's view, economic actions should be guided by moral virtues, such as prudence and justice, ensuring that individual efforts to improve one's own life would ultimately contribute to the greater good of society. His exploration of economics was always part of a larger moral framework, where community engagement and ethical behavior were essential for both individual and societal progress. Today, this broader moral context is often overlooked, but for Smith, economics was inseparable from philosophical inquiry into human behavior. He emphasized how the improvement of human life goes far beyond just the accumulation of material wealth.

MORALS MEET MARKET MANIPULATION
Many conservatives today may dismiss this interpretation as being too 'woke'. Well, some eventually did back then too. As the British economy was expanding in Smith's later years, he spoke in favor of capping interest rates with usury law. Usury is defined as the practice of making unethical or immoral loans that unfairly enrich the lender, often involving excessive or abusive interest rates. He believed exorbitant rates could lead to preying on the disadvantaged in a time of need, resulting in growing disadvantages to the larger community.

Historically, many societies, including ancient Christian, Jewish, Islamic, and Buddhist communities, considered charging interest of any kind wrong or illegal. Smith was rooted in elements of Christian morals, but critics claimed he was being hypocritical. They pointed to examples in his publications, often out of context, where he suggested government can't know better than individuals about their own risks, costs, and benefits and thus should not meddle. But even in The Wealth of Nations Smith was clear about three conditions necessary for an effective economy, and with each he paired moral values also found in The Theory of Moral Sentiments:
* State-Justice: Smith argued, "Commerce and manufacturers…can seldom flourish long in any state which does not enjoy a regular administration of justice," emphasizing the need for laws that ensure security and regulate excessive accumulation of wealth.
* Market-Liberty: He valued the "liberty of trade…notwithstanding some restraints," while warning that monopolies "hurt…the general interest of the country."
* Community-Benevolence: Rooted in moral sentiments, Smith believed in a shared commitment to community, where "many reputable rules…must have been laid down and approved of by common consent."

Smith's main usury critic was the philosopher Jeremy Bentham, known for developing the philosophy of utilitarianism. A letter written to Smith in 1787 stated:
"Should it be my fortune to gain any advantage over you, it must be with weapons which you have taught me to wield, and with which you yourself have furnished me…I can see scarce any other way of convicting you of any error or oversight, than by judging you out of your own mouth."

Bentham is most famous for the idea of "maximizing the greatest happiness for the greatest number," which helped promote legal reforms and social progress including welfare, equal rights for women, the separation of church and state, and the decriminalization of homosexual acts. But the ultimate focus of his utilitarianism was on the practical outcomes of policies, going so far as to develop mathematical formulas, called the felicific calculus, to determine how much pleasure or pain must be inflicted in society to achieve the most happiness for the greatest number. He was also a staunch economic expansionist, believing, as his calculus seemed to verify, that expansion would increase the good for most. It would be his student, John Stuart Mill, who expanded on but also critiqued Bentham's utilitarianism later in the mid-1800s:
"I conceive Mr. Bentham's writings to have done and to be doing very serious evil. It is by such things that the more enthusiastic and generous minds are prejudiced against all his other speculations, and against the very attempt to make ethics and politics a subject of precise and philosophical thinking."

Mill too was an expansionist, but acknowledged utilitarian reasoning could be used to defend exploitative and immoral colonial practices, including slavery. Mill believed slavery "effectually brutifies the intellect" of both the enslaved and the enslaver, and condemned the notion that certain races were inherently inferior and required subjugation. Nevertheless, early colonizers and imperialists, as well as modern-day neoliberals, weaponized elements of utilitarianism much like they did with The Wealth of Nations. They used (and continue to use) select elements to justify laissez-faire economics, deregulation, and the exploitation of labor, often prioritizing economic efficiency over moral considerations such as fairness and social equity.

For example, Margaret Thatcher and Ronald Reagan both used utilitarian logic, believing their policies would maximize overall economic growth and prosperity, benefiting society as a whole, even at the expense of rising inequality and social welfare. Their consequentialist approach justified market-driven reforms for a perceived greater good. Given today's historic wealth imbalances, the result of that calculus is less than convincing.

Bentham also failed to convince Smith in that fateful letter, but to many it marked a notable shift in economic thinking and philosophy. Smith passed away three years after his exchange with Bentham, and theoretical mathematical utilitarianism became the ultimate measure of right and wrong in governance and ethics in the UK and the US. Smith's morality, which emphasized moral virtues guiding economic actions, lost out to consequentialism's focus solely on outcomes, often justifying exploitation and suffering if it maximized societal gain and economic expansion for the expansionists — despite John Stuart Mill's, and countless others', objections.

ECONOMIC ENGINES IN MORAL MACHINES
During Adam Smith's lifetime, the Industrial Age rapidly emerged, transforming economies and wealth structures. Technological advancements, like the steam engine, fueled industrial capitalism, driving unprecedented economic growth and wealth accumulation. This focus on efficiency relied on maximizing productivity, whether through steam-powered machines, the exploitation of enslaved people, the working poor, or the displacement of Indigenous populations, prioritizing economic gain over human well-being.

In 1783, while Smith and Bentham were debating economic philosophy, James Watt was at the University of Glasgow, focused on regulating unchecked power — specifically the excessive speed of the steam engines he helped to invent. To prevent mechanical failures from fluctuating steam pressure, Watt invented the centrifugal governor. This device used weighted iron balls that spun outward with centrifugal force as the engine's speed increased, raising a spindle that adjusted a valve to control steam flow. By automatically reducing steam when the engine ran too fast and increasing it when it slowed, the governor ensured safe and efficient operation. Watt's invention, introduced in 1788, was in full production by 1790, paving the way for innovations like the first steam locomotive in 1804.

Watt's governor symbolized the need to impose limits on unchecked mechanical power, ensuring the engine operated within safe and efficient parameters. This technological innovation mirrored a broader theme of the Industrial Revolution — the balance between harnessing new, powerful technologies for economic growth while recognizing the risks of unregulated force, whether in machines or the rapid, unrestrained accumulation of wealth and resources in society. Watt's governor was an early acknowledgment that unchecked power, whether mechanical or economic, could lead to instability and disaster.
"I am never content until I have constructed a mechanical model of the subject I am studying. If I succeed in making one, I understand. Otherwise, I do not." – Lord Kelvin

Our brains also act as a kind of governor on the unchecked power of our kidneys, just as moral feedback loops serve as a governor on unchecked economic ambition. Like the stretch receptors in our bladder sensing when fluid volume builds, moral reasoning, as Smith envisioned, detects the social and ethical consequences of unfettered economic expansion. These signals, akin to the centrifugal force moving the governor's spindle, prompt individuals and society to regulate their actions, guiding decisions based not only on self-interest but on moral duty.

In contrast, Bentham's utilitarian calculus, much like a theoretical mathematical model divorced from natural systems, ignores these ethical feedback loops. By relying solely on abstract calculations of happiness and efficiency, Bentham's approach, like a machine operating without awareness of its environment, risks distorting human and social behaviors. Where Smith's model calls for moral constraints on economic behavior, much like the body's signals to prevent overstretching, Bentham's framework lacks the necessary human safeguards, leading to potential exploitation and imbalance in pursuit of theoretical utility maximization.

I do wonder what our economic systems would look like if, like our bodies, they were designed to self-regulate, ensuring that the pursuit of wealth doesn't come at the expense of human well-being. Just as our bodily functions rely on natural feedback loops to maintain equilibrium, why have we allowed our economies to run unchecked, often leading to exploitation and inequality? Adam Smith believed in moral constraints on ambition, yet today much of our economic thinking prioritizes growth without those safeguards.

As I walked off campus that day, I reflected on Watt's governor regulating the steam engine and the moral feedback loops Smith envisioned. I wondered if Smith and Watt made the metaphoric connection in their encounters with one another, maybe even on their way to relieve themselves in the very building in which I found myself. Perhaps they each happened on this connection in their own thought experiments, which makes me wonder why more don't today. Surely there's a morally sound way to balance personal gain with the greater good — a bit like public restrooms. This is a public episode.
If you would like to discuss this with other subscribers or get access to bonus episodes, visit interplace.io
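Bentham's felicific calculus, mentioned in the essay above, is concrete enough to sketch in code. He scored each pleasure or pain along seven circumstances (intensity, duration, certainty, propinquity, fecundity, purity, and extent) but never fixed a numeric scale or weighting, so the equal weights and simple aggregation in this sketch are assumptions for illustration, not Bentham's own formula:

```python
from dataclasses import dataclass

@dataclass
class Pleasure:
    """One pleasure (or pain) scored on Bentham's seven circumstances.

    Values are assumed to lie in [0, 1] for illustration; Bentham named
    the dimensions but specified no numeric scale or weighting.
    """
    intensity: float
    duration: float
    certainty: float
    propinquity: float   # nearness in time
    fecundity: float     # chance of being followed by more pleasure
    purity: float        # chance of NOT being followed by pain
    extent: int          # number of people affected

    def value(self) -> float:
        # Assumed aggregation: average the six per-person circumstances,
        # then scale by how many people are affected.
        per_person = (self.intensity + self.duration + self.certainty
                      + self.propinquity + self.fecundity + self.purity) / 6
        return per_person * self.extent

def net_happiness(pleasures: list[Pleasure], pains: list[Pleasure]) -> float:
    """'Greatest happiness' as total pleasure minus total pain."""
    return sum(p.value() for p in pleasures) - sum(p.value() for p in pains)
```

The sketch also makes the essay's criticism visible: every input is a subjective estimate, so the calculus is only as grounded as the numbers fed into it.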

Thinking Allowed
Sight and Power

Thinking Allowed

Play Episode Listen Later Sep 24, 2024 28:59


Laurie Taylor talks to Becca Voelcker, Lecturer in the Art Department at Goldsmiths, University of London, about her research into the relationship between sight and power. Everyday life is full of moments where we are seen, often without our knowledge, even in the virtual world, where cookie trails and analytics make us visible to profit-making companies. Going back in time, Jeremy Bentham's panopticon depended on seeing its occupants in order to control them. If we cannot control who sees us today, are we also being controlled? And how does that square with the many moments when being seen is also a means of social recognition?
Also, David Lyon, Professor Emeritus of Sociology and Law at Queen's University, Kingston, Ontario, explores the surveillance which permeates all aspects of our lives today. Every click on the keyboard, every contact with a doctor or the police, each time we walk under a video camera or pass through a security check, we are identified, traced, and tracked. So how does surveillance make people visible, how did it grow to its present size and prevalence, and what are the social and personal costs?
Producer: Jayne Egerton

FLF, LLC
What a Gnostic Benthamite Christian Lawyer Looks Like: Me. [God, Law, and Liberty]

FLF, LLC

Play Episode Listen Later Sep 6, 2024 18:58


David begins his examination of what he considers the two predominant views among Christians on law and politics, those he calls the neo-Covenanters and neo-Baptists, with how he realized he read the Bible like the legal positivist Jeremy Bentham, and why reading the Bible that way is gnostic, not Christian. Is being a heretic easier than ever before? The answer may surprise you.

God, Law & Liberty Podcast
S3E144: What a Gnostic Benthamite Christian Lawyer Looks Like: Me.

God, Law & Liberty Podcast

Play Episode Listen Later Sep 6, 2024 18:58


David begins his examination of what he considers the two predominant views among Christians on law and politics, those he calls the neo-Covenanters and neo-Baptists, with how he realized he read the Bible like the legal positivist Jeremy Bentham, and why reading the Bible that way is gnostic, not Christian. Is being a heretic easier than ever before? The answer may surprise you.
Support the show: https://www.factennessee.org/donate
See omnystudio.com/listener for privacy information.

Fight Laugh Feast USA
What a Gnostic Benthamite Christian Lawyer Looks Like: Me. [God, Law, and Liberty]

Fight Laugh Feast USA

Play Episode Listen Later Sep 6, 2024 18:58


David begins his examination of what he considers the two predominant views among Christians on law and politics, those he calls the neo-Covenanters and neo-Baptists, with how he realized he read the Bible like the legal positivist Jeremy Bentham, and why reading the Bible that way is gnostic, not Christian. Is being a heretic easier than ever before? The answer may surprise you.

Choses à Savoir
Pourquoi trouvait-on des moulins à discipline dans les prisons du XIXe siècle ?

Choses à Savoir

Play Episode Listen Later Sep 2, 2024 1:51


Discipline mills, or "treadwheels," were devices used in 19th-century English prisons as a method of punishment and rehabilitation for inmates. They consisted of a large wheel that prisoners had to turn by walking or climbing on its steps, an intense and exhausting physical effort.
1. Discipline and punishment
- Control of the prison population: Treadwheels were used to impose a strict routine and rigorous discipline. The monotonous, exhausting work served as a means of punishing inmates while keeping them from plotting or causing trouble.
- Deterrence: The physical exhaustion that came with the wheel discouraged unruly behavior among prisoners.
2. Rehabilitation through work
- The habit of work: Forced labor was believed to instill a work ethic in inmates, aiding their rehabilitation and reintegration into society after release.
- Moral reform: Hard labor was seen as a way to morally reform inmates by steering them away from idle and criminal habits.
3. Use of mechanical energy
- Power generation: Beyond its disciplinary functions, the treadwheel was sometimes used to generate mechanical power for tasks such as pumping water or grinding grain. This practical use, however, was usually secondary to the disciplinary purpose.
4. Development of penitentiary practices
- Influence of reformers: Nineteenth-century prison reformers, such as Jeremy Bentham, advocated forced labor as a means of rehabilitation. The treadwheel fit within this philosophy, combining punishment with economic utility.
In short, discipline mills were installed in 19th-century prisons to impose discipline, reform inmates through labor, and sometimes to serve utilitarian ends. Although their use was often criticized for its inhumane conditions, they represent a period in prison history when punishment and rehabilitation were closely linked. Hosted by Acast. Visit acast.com/privacy for more information.

FLF, LLC
The Law of Nature Jeremy Bentham and I Overlooked [God, Law, and Liberty]

FLF, LLC

Play Episode Listen Later Aug 30, 2024 17:09


Common law authority William Blackstone said that man “must in all points conform to the will of his nature,” and this will was called the “natural law.” Today, David explains how he overlooked the most fundamental law of human nature because he read the Bible like a disciple of legal positivist Jeremy Bentham. From his experience, David offers a proposition about the state of evangelicalism in America.

God, Law & Liberty Podcast
S3E143: The Law of Nature Jeremy Bentham and I Overlooked

God, Law & Liberty Podcast

Play Episode Listen Later Aug 30, 2024 17:09


Common law authority William Blackstone said that man “must in all points conform to the will of his nature,” and this will was called the “natural law.” Today, David explains how he overlooked the most fundamental law of human nature because he read the Bible like a disciple of legal positivist Jeremy Bentham. From his experience, David offers a proposition about the state of evangelicalism in America.
Support the show: https://www.factennessee.org/donate
See omnystudio.com/listener for privacy information.

Fight Laugh Feast USA
The Law of Nature Jeremy Bentham and I Overlooked [God, Law, and Liberty]

Fight Laugh Feast USA

Play Episode Listen Later Aug 30, 2024 17:09


Common law authority William Blackstone said that man “must in all points conform to the will of his nature,” and this will was called the “natural law.” Today, David explains how he overlooked the most fundamental law of human nature because he read the Bible like a disciple of legal positivist Jeremy Bentham. From his experience, David offers a proposition about the state of evangelicalism in America.

Les Nuits de France Culture
Michel Foucault : "Avant 1750, le médecin dans l'hôpital est la dernière roue de la charrette"

Les Nuits de France Culture

Play Episode Listen Later Aug 6, 2024 95:00


Duration: 01:35:00 - Les Nuits de France Culture - A look back at a May 1977 edition of "Les Lundis de l'Histoire" devoted to the origins of the modern hospital and to the "panoptic" prison theorized by the English philosopher Jeremy Bentham (1748-1832). A round table chaired by the historian Roger Chartier, with the philosopher Michel Foucault. - Guests: Michel Foucault, philosopher; Bruno Fortier; Michelle Perrot, historian specializing in women's history, professor emerita of contemporary history at Université Paris Cité; Arlette Farge, historian specializing in the 18th century, director of research in history at the CNRS; Jean-Claude Perrot, historian

The Management Theory Toolbox
Episode 12: Carrots and Sticks 2.0 with Dr. Richard (Dick) Malott

The Management Theory Toolbox

Play Episode Listen Later Jul 29, 2024 21:34 Transcription Available


Key Points:
• Operant Conditioning and Behavior:
 - Explore the basics of operant conditioning and its relevance to management.
 - Discussion of Jeremy Bentham's Panopticon and its implications for behavior management.
 - The role of observation in influencing behavior.
 - Distinction between direct- and indirect-acting contingencies.
• Interview with Dr. Dick Malott:
 - Background and work of Dr. Malott in behavior analysis.
 - Consistency of operant conditioning principles across different groups (rats, students, managers, children with autism).
 - Explanation of behavioral contingencies and categories (unlearned/learned rewards and aversive conditions).
 - The importance of rule-governed behavior and rules that are easy to follow.
• Behavioral Management in Organizations:
 - Effective implementation of behavior management strategies in the workplace.
 - Importance of easy-to-follow rules with immediate, significant, and likely outcomes.
 - Examples of effective performance management in educational and organizational settings.
 - Challenges in implementing and maintaining behavior management systems.
• Practical Takeaways:
 - Reflect on feedback mechanisms in your workplace.
 - Redesign processes to make rules clearer and feedback more immediate.
Relevant Articles:
• Greer, C. R., Lusch, R. F., & Hitt, M. A. (2017). "A Service Perspective for Human Capital Resources: A Critical Base for Strategy Implementation," Academy of Management Perspectives, 31: 137-158.
• Podsakoff, P. M., Bommer, W. H., Podsakoff, N. P., & MacKenzie, S. B. (2006). "Relationships Between Leader Reward Behavior and Punishment Behavior and Subordinate Attitudes, Perceptions, and Behaviors: A Meta-Analytic Review of Existing and New Research," Organizational Behavior and Human Decision Processes, 99: 113-142.
• Trevino, L. K. (1992). "The Social Effects of Punishment in Organizations: A Justice Perspective," Academy of Management Review, 17: 647-676.
• Molenmaker, W. E., de Kwaadsteniet, E. W., & van Dijk, E. (2016). "The Impact of Personal Responsibility on the (Un)Willingness to Punish Non-Cooperation and Reward Cooperation," Organizational Behavior and Human Decision Processes, 134: 1-15.
• Podsakoff, P. M., & MacKenzie, S. B. (1997). "Impact of Organizational Citizenship Behavior on Organizational Performance: A Review and Suggestions for Future Research," Human Performance, 10(2): 133-151.
Link to Dr. Dick Malott's Book: Principles of Behavior
Next Episode Teaser: Stay tuned for our next episode, where we explore blame and punishment in the context of organizational learning. In the meantime, keep learning, keep growing, and keep adding to your management theory toolbox!
Dr. Richard Malott [Guest], with more than 40 years of experience at Western Michigan University, has used the principles of behavior to construct teaching models and behavioral systems that have been sustained over several decades. As a result, he has taught generations of students to use behavior analysis in their everyday lives as learners, teachers, practitioners, and citizens, and has provided the training grounds for many of the field's leaders in behavioral systems design. Richard Malott is a prolific, creative, and engaging writer who has authored some of the field's most important and widely read publications, including Elementary Principles of Behavior (first with Donald Whaley and then with Maria E. Malott and Elizabeth Trojan Suarez), which is in its eighth edition.

The Retrospectors
Making Voting Secret

The Retrospectors

Play Episode Listen Later Jul 18, 2024 11:40


Rerun: Before the Ballot Act of 18th July, 1872, the British electorate were expected to declare their preferred candidate publicly at hustings, often under pressure from their employers and landlords, and plied with alcohol supplied by the politicians standing for election, in a process known as ‘soaking'. Over the years, alternatives had been put forward - including Jeremy Bentham's concept of 1818, which involved a multitude of secret boxes with viewing windows - before the modern idea of private booths and a ballot box came to the fore. In this episode, Arion, Rebecca and Olly explain why many voters saw secret ballots as sneaky and cowardly; explain how Australia beat Britain when it came to instituting voting in secret; and discover the teething problems experienced when Pontefract became the first town to test out the new process…
Further Reading:
• ‘Britain's first secret ballot' (BBC News, 2015): https://www.bbc.co.uk/news/uk-england-leeds-31630588
• ‘Rhodri Marsden's Interesting Objects: Pontefract's secret ballot box' (The Independent, 2015): https://www.independent.co.uk/news/uk/politics/rhodri-marsden-s-interesting-objects-pontefract-s-secret-ballot-box-a114506.html
• ‘What was the Secret Ballot? | The Ballot Act 1872' (Royal Holloway University London, 2020): https://www.youtube.com/watch?v=9M8Lix4FgUM
Learn more about your ad choices. Visit podcastchoices.com/adchoices

Jungle of Mystery: A Lost Podcast
S5 E7: "The Life and Death of Jeremy Bentham"

Jungle of Mystery: A Lost Podcast

Play Episode Listen Later Jul 6, 2024 68:22


Kind of at a loss for funny quips about this episode, so all I can say is buckle up.

Oooh, Spooky
Episode 291 - Cat Trick, The Underworld, Jeremy Bentham, Gambling Earl

Oooh, Spooky

Play Episode Listen Later Jul 3, 2024 61:09


Or Feline Foolery, El Inframundo, Jez Likebeckham, Betting Noble. Our Patreon if you'd like to support the show and get exclusive podcasts.

Lauren Gets Lost
Season 5 Ep 7 - The Life and Death of Jeremy Bentham

Lauren Gets Lost

Play Episode Listen Later Jun 11, 2024 115:42


This podcast gets you to where you need to be, and after this episode you will need to be in therapy. The siblings are back with Zane's least favorite favorite episode. This week Lauren shares her opinions on writing a paper, the Oceanic 6 are horrible to Locke, Zane cries, and John Locke is alive?! All this and more as we break down "The Life and Death of Jeremy Bentham." Follow us on all our socials!! Tiktok | Instagram | Twitter | Facebook | Youtube --- Support this podcast: https://podcasters.spotify.com/pod/show/zane-kohler/support

Philosophy Acquired - Learn Philosophy
Utilitarianism. Principles, Criticisms, and Contemporary Perspectives.

Philosophy Acquired - Learn Philosophy

Play Episode Listen Later May 16, 2024 7:59


This episode will be exploring Utilitarianism's Principles, Criticisms, and Contemporary Perspectives.
Utilitarianism is a moral theory that suggests the rightness or wrongness of an action is determined by its consequences, specifically by the amount of happiness or pleasure it produces. This theory traces its origins to the works of Jeremy Bentham and John Stuart Mill, who developed and popularized utilitarian thought in the 19th century.
Jeremy Bentham, in his book "An Introduction to the Principles of Morals and Legislation," laid out the basic principles of utilitarianism. He argued that the goal of ethics should be to maximize happiness or pleasure and minimize suffering or pain for the greatest number of people. This concept of "the greatest happiness principle" forms the foundation of utilitarianism.

Lex Rex Institute Podcast
Season 2 Episode 9 - Jeremy Bentham on Bobbies and Penology

Lex Rex Institute Podcast

Play Episode Listen Later May 16, 2024 65:23


In this episode, we take you through Jeremy Bentham's view on the role of policing and what policing used to look like - in that mythical, pre-Benthamic society. Oh, and we'll also talk about his mummified head. It relates. We promise.
The delay was BAD in this one. We apologize for repeatedly interrupting each other.
VCA Lawsuit in Orange County: https://www.lexrex.org/post/voter-choice-act-lawsuit
Intellectuals by Paul Johnson: https://a.co/d/bXOHeQY

Oh What A Time...
#44 Weather (Part 2)

Oh What A Time...

Play Episode Listen Later May 6, 2024 34:18


This is Part 2! For Part 1, check the feed from yesterday! This week we're talking about unique weather events through history. From the great freeze of 1899 that plunged Miami to sub-zero temperatures, the great storm (in the UK) of 1987 (and how badly Michael Fish got it all BANG WRONG) and of course, the LONG-HOT-SUMMER-OF-NINETEEN-SEVENTY-SIX (which Elis' parents WILL NOT STOP GOING ON ABOUT). Plus there's even more Jeremy Bentham bantz. And if you want to get in touch with the show, you know what to do: hello@ohwhatatime.com And YES! You may have spotted a new numbering system. Well, we haven't gone straight from episode #39 to episode #44 by accident (!), we have in fact retroactively applied episode numbers to old subscriber specials: #40 Heroes (OWAT: Full Timer Edition) #41 Gifts (OWAT: Full Timer Edition) #42 Sex, Drugs and Rock n Roll (OWAT: Full Timer Edition) #43 Protests (OWAT: Full Timer Edition) When informed of this, Tom Craine said he felt “absolutely no emotion whatsoever” - but nonetheless, there's the explanation for those who need it. If you're impatient and want both parts in one lovely go next time plus a whole lot more(!), why not treat yourself and become an Oh What A Time: FULL TIMER? In exchange for your £4.99 per month to support the show, you'll get: - two bonus episodes every month! - ad-free listening - episodes a week ahead of everyone else - And first dibs on any live show tickets Subscriptions are available via AnotherSlice, Apple and Spotify. For all the links head to: ohwhatatime.com You can also follow us on:  X (formerly Twitter) at @ohwhatatimepod And Instagram at @ohwhatatimepod Aaannnd if you like it, why not drop us a review in your podcast app of choice? Thank you to Dan Evans for the artwork (idrawforfood.co.uk). Chris, Elis and Tom x Learn more about your ad choices. Visit podcastchoices.com/adchoices

Oh What A Time...
#44 Weather (Part 1)

Oh What A Time...

Play Episode Listen Later May 5, 2024 30:56


This week we're talking about unique weather events through history. From the great freeze of 1899 that plunged Miami to sub-zero temperatures, the great storm (in the UK) of 1987 (and how badly Michael Fish got it all BANG WRONG) and of course, the LONG-HOT-SUMMER-OF-NINETEEN-SEVENTY-SIX (which Elis' parents WILL NOT STOP GOING ON ABOUT). Plus there's even more Jeremy Bentham bantz. And if you want to get in touch with the show, you know what to do: hello@ohwhatatime.com And YES! You may have spotted a new numbering system. Well, we haven't gone straight from episode #39 to episode #44 by accident (!), we have in fact retroactively applied episode numbers to old subscriber specials: #40 Heroes (OWAT: Full Timer Edition) #41 Gifts (OWAT: Full Timer Edition) #42 Sex, Drugs and Rock n Roll (OWAT: Full Timer Edition) #43 Protests (OWAT: Full Timer Edition) When informed of this, Tom Craine said he felt “absolutely no emotion whatsoever” - but nonetheless, there's the explanation for those who need it. If you're impatient and want both parts in one lovely go next time plus a whole lot more(!), why not treat yourself and become an Oh What A Time: FULL TIMER? In exchange for your £4.99 per month to support the show, you'll get: - two bonus episodes every month! - ad-free listening - episodes a week ahead of everyone else - And first dibs on any live show tickets Subscriptions are available via AnotherSlice, Apple and Spotify. For all the links head to: ohwhatatime.com You can also follow us on:  X (formerly Twitter) at @ohwhatatimepod And Instagram at @ohwhatatimepod Aaannnd if you like it, why not drop us a review in your podcast app of choice? Thank you to Dan Evans for the artwork (idrawforfood.co.uk). Chris, Elis and Tom x Learn more about your ad choices. Visit podcastchoices.com/adchoices

Oh What A Time...
#44 Weather (OWAT: Full timer edition)

Oh What A Time...

Play Episode Listen Later May 4, 2024 51:13


This week we're talking about unique weather events through history. From the great freeze of 1899 that plunged Miami to sub-zero temperatures, the great storm (in the UK) of 1987 (and how badly Michael Fish got it all BANG WRONG) and of course, the LONG-HOT-SUMMER-OF-NINETEEN-SEVENTY-SIX (which Elis' parents WILL NOT STOP GOING ON ABOUT). Plus there's even more Jeremy Bentham bantz. And if you want to get in touch with the show, you know what to do: hello@ohwhatatime.com And YES! You may have spotted a new numbering system. Well, we haven't gone straight from episode #39 to episode #44 by accident (!), we have in fact retroactively applied episode numbers to old subscriber specials: #40 Heroes (OWAT: Full Timer Edition) #41 Gifts (OWAT: Full Timer Edition) #42 Sex, Drugs and Rock n Roll (OWAT: Full Timer Edition) #43 Protests (OWAT: Full Timer Edition) When informed of this, Tom Craine said he felt “absolutely no emotion whatsoever” - but nonetheless, there's the explanation for those who need it. You can follow us on: X (formerly Twitter) at @ohwhatatimepod And Instagram at @ohwhatatimepod Aaannnd if you like it, why not drop us a review in your podcast app of choice? Thank you to Dan Evans for the artwork (idrawforfood.co.uk). And thank you for subscribing! We couldn't make the show without you! We'll see you next week! Chris, Elis and Tom x See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Oh What A Time...
#39 Survival (Part 2)

Oh What A Time...

Play Episode Listen Later Apr 29, 2024 41:55


This is Part 2! For Part 1, check the feed from yesterday! In this episode we'll be taking a look at incredible stories of survival through history; from the men of the American Civil War who had their lives saved via a Bible in their pocket, Alexander Selkirk - the man whose story inspired Robinson Crusoe, Violet Jessop - who survived three infamous ships of the White Star Line and the bonus bit for the OWAT: Full Timers this week is ‘the miracle of the Andes' ie. the story of the Uruguayan rugby team who survived a plane crash in the Andes and were forced into cannibalism to overcome starvation (among many, many other hardships). Elsewhere, this week we're discussing ‘Custardo' and whether this is a realistic alternative for Tom given his love of drinking neat custard. We also discuss whether it's appropriate to bring the embalmed bones of Jeremy Bentham on tour with us. If you've got anything to add on anything here, you know what to do: hello@ohwhatatime.com If you're impatient and want both parts in one lovely go next time plus a whole lot more(!), why not treat yourself and become an Oh What A Time: FULL TIMER? In exchange for your £4.99 per month to support the show, you'll get: - two bonus episodes every month! - ad-free listening - episodes a week ahead of everyone else - And first dibs on any live show tickets Subscriptions are available via AnotherSlice, Apple and Spotify. For all the links head to: ohwhatatime.com You can also follow us on:  X (formerly Twitter) at @ohwhatatimepod And Instagram at @ohwhatatimepod Aaannnd if you like it, why not drop us a review in your podcast app of choice? Thank you to Dan Evans for the artwork (idrawforfood.co.uk). Chris, Elis and Tom x Learn more about your ad choices. Visit podcastchoices.com/adchoices

Sass N Sips
LOST The Life and Death of Jeremy Bentham

Sass N Sips

Play Episode Listen Later Apr 29, 2024 49:54


"Yeah. He's the man who killed me." - John LockeIn this episode: Losties 2.0, Team Ben?, & jacked up time framesIn other news, Agnes has a new theory and wants to protest a character's absence. Original episode air date 02/25/2009Support the Show.Check out Spreadshop!http://arthemisclothing.ca - Use SASSPOD for 15% off https://www.muzmm.com- Code SASSPOD for 20% offhttps://www.podpage.com/?via=sasspod to create your own webpagehttps://www.buzzsprout.com/?referrer_id=682706 to start your own podhttps://www.lyft.com/i/LISA594490?utm_medium=p2pi_iacc For a LyftGet in touch:(732) 595-2922sass.n.sips@gmail.com or sassnsips.comIG @sassnsipsFB @Sass N SipsTwitter @SassSipsIG @RealSassyLisaIG @RealsassyBritYouTube @Sass N SipsPodchaser podchaser.com/sassnsipsClips used in this podcast were used in accordance with the US Copyrights act FAIR USE Exemption for critic...

Oh What A Time...
#39 Survival (Part 1)

Oh What A Time...

Play Episode Listen Later Apr 28, 2024 38:33


In this episode we'll be taking a look at incredible stories of survival through history; from the men of the American Civil War who had their lives saved via a Bible in their pocket, Alexander Selkirk - the man whose story inspired Robinson Crusoe, Violet Jessop - who survived three infamous ships of the White Star Line and the bonus bit for the OWAT: Full Timers this week is ‘the miracle of the Andes' ie. the story of the Uruguayan rugby team who survived a plane crash in the Andes and were forced into cannibalism to overcome starvation (among many, many other hardships). Elsewhere, this week we're discussing ‘Custardo' and whether this is a realistic alternative for Tom given his love of drinking neat custard. We also discuss whether it's appropriate to bring the embalmed bones of Jeremy Bentham on tour with us. If you've got anything to add on anything here, you know what to do: hello@ohwhatatime.com If you're impatient and want both parts in one lovely go next time plus a whole lot more(!), why not treat yourself and become an Oh What A Time: FULL TIMER? In exchange for your £4.99 per month to support the show, you'll get: - two bonus episodes every month! - ad-free listening - episodes a week ahead of everyone else - And first dibs on any live show tickets Subscriptions are available via AnotherSlice, Apple and Spotify. For all the links head to: ohwhatatime.com You can also follow us on:  X (formerly Twitter) at @ohwhatatimepod And Instagram at @ohwhatatimepod Aaannnd if you like it, why not drop us a review in your podcast app of choice? Thank you to Dan Evans for the artwork (idrawforfood.co.uk). Chris, Elis and Tom x Learn more about your ad choices. Visit podcastchoices.com/adchoices

Oh What A Time...
#39 Survival (OWAT: Full timer edition)

Oh What A Time...

Play Episode Listen Later Apr 25, 2024 78:59


In this episode we'll be taking a look at incredible stories of survival through history; from the men of the American Civil War who had their lives saved via a Bible in their pocket, Alexander Selkirk - the man whose story inspired Robinson Crusoe, Violet Jessop - who survived three infamous ships of the White Star Line and the bonus bit for the OWAT: Full Timers this week is ‘the miracle of the Andes' ie. the story of the Uruguayan rugby team who survived a plane crash in the Andes and were forced into cannibalism to overcome starvation (among many, many other hardships). Elsewhere, this week we're discussing ‘Custardo' and whether this is a realistic alternative for Tom given his love of drinking neat custard. We also discuss whether it's appropriate to bring the embalmed bones of Jeremy Bentham on tour with us. If you've got anything to add on anything here, you know what to do: hello@ohwhatatime.com You can follow us on: X (formerly Twitter) at @ohwhatatimepod And Instagram at @ohwhatatimepod Aaannnd if you like it, why not drop us a review in your podcast app of choice? Thank you to Dan Evans for the artwork (idrawforfood.co.uk). And thank you for subscribing! We couldn't make the show without you! We'll see you next week! Chris, Elis and Tom x See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Dr. Junkie Show
#141: The Panopticon

The Dr. Junkie Show

Play Episode Listen Later Mar 21, 2024 33:20


This week I wrap up a multi-part discussion of Foucault's theories of panoptic power, institutional knowledge, and discourses used to endorse awful ideas and beliefs about drugs and drug users. I also talk about Michel Foucault's car accident while high on opium, the notion of panoptic power, Jeremy Bentham's panoptic prison, discourse, stigma and stereotype. Foucault audio at intro and outro from Century of the Self lecture series. Support the show

Chaos On The Set
Lost Season 4

Chaos On The Set

Play Episode Listen Later Feb 22, 2024 60:02


The Lost crew is back to break down their reactions to season 4. They discuss how the 2008 writers strike affected the season, their thoughts on our new characters, and Jeremy Bentham's fate. Plus, they get into the most famous episode: The Constant. Tune in and send us your thoughts @ChaosOnTheSet

LOST in my 40s
The Life and Death of Jeremy Bentham - Locke/Derek

LOST in my 40s

Play Episode Listen Later Feb 7, 2024 106:43


This week, we get sciency with imaginary time, get "exited" from the Island, and do our best to avoid having a Beneurysm! Email us here (it may make it onto a video pod!) --- https://www.spacebearmedia.com/contact All our other links! --- https://linktr.ee/spacebearmedia *PLEASE RATE & REVIEW!*

The Dr. Junkie Show
#137: Foucault on Drugs

The Dr. Junkie Show

Play Episode Listen Later Feb 3, 2024 25:01


Why do humans have such an odd fascination with criminals and outlaws? What happened to all the kings and queens who used to be in charge of everything... where did they go? Why? And what does any of this have to do with drugs?
In this episode I pick up where I left off last time by introducing Michel Foucault's concept of panoptic power, which explains why nowadays we all self-discipline to conform to social regulations. The war on drugs thrives in spaces where most citizens are thoroughly convinced of the stereotypes that surround drug use: immorality, contagion, degradation, the "disease" of addiction. Today I explain how that cultural knowledge comes to exist, and perhaps how we might be able to disrupt and rewrite those scripts. Support the show

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

In 2023 we did a few Fundamentals episodes covering Benchmarks 101, Datasets 101, FlashAttention, and Transformers Math, and it turns out those were some of your evergreen favorites! So we are experimenting with more educational/survey content in the mix alongside our regular founder and event coverage. Pls request more!
We have a new calendar for events; join to be notified of upcoming things in 2024!
Today we visit the shoggoth mask factory: how do transformer models go from trawling a deeply learned latent space for next-token prediction to a helpful, honest, harmless chat assistant? Our guest “lecturer” today is Nathan Lambert; you might know him from his prolific online writing on Interconnects and Twitter, or from his previous work leading RLHF at HuggingFace and now at the Allen Institute for AI (AI2), which recently released the open source GPT3.5-class Tulu 2 model, which was trained with DPO. He's widely considered one of the most knowledgeable people on RLHF and RLAIF. He recently gave an “RLHF 201” lecture at Stanford, so we invited him on the show to re-record it for everyone to enjoy! You can find the full slides here, which you can use as reference through this episode.

Full video with synced slides
For audio-only listeners, this episode comes with a slide presentation alongside our discussion. You can find it on our YouTube (like, subscribe, tell a friend, et al).

Theoretical foundations of RLHF
The foundations and assumptions that go into RLHF go back all the way to Aristotle (and you can find guidance for further research in the slide below), but there are two key concepts that will be helpful in thinking through this topic and LLMs in general:
* Von Neumann-Morgenstern utility theorem: you can dive into the math here, but the TLDR is that when humans make decisions there's usually a “maximum utility” function that measures what the best decision would be; the fact that this function exists makes it possible for RLHF to model human preferences and decision making.
* Bradley-Terry model: given two items A and B from a population, you can model the probability that A will be preferred to B (or vice versa). In our world, A and B are usually two outputs from an LLM (or at the lowest level, the next token). It turns out that from this minimal set of assumptions, you can build up the mathematical foundations supporting the modern RLHF paradigm!

The RLHF loop
One important point Nathan makes is that "for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior". For example, it might be difficult for you to write a poem, but it's really easy to say if you like or dislike a poem someone else wrote. Going back to the Bradley-Terry model we mentioned, the core idea behind RLHF is that when given two outputs from a model, you will be able to say which of the two you prefer, and we'll then re-encode that preference into the model.
An important point that Nathan mentions is that when you use these preferences to change model behavior, "it doesn't mean that the model believes these things. It's just trained to prioritize these things". When you have a preference for a model to not return instructions on how to write a computer virus, for example, you're not erasing the weights that have that knowledge, but you're simply making it hard for that information to surface by prioritizing answers that don't return it.
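For readers who want the Bradley-Terry setup in concrete form, here is a minimal sketch of the pairwise objective typically used to fit a reward model to preference data. This is an illustration assembled from the concepts above, not Nathan's actual pseudocode; in particular, `reward_model` standing for any network that maps a tokenized sequence to one scalar score is an assumption:

```python
import torch
import torch.nn.functional as F

def bradley_terry_prob(score_a: torch.Tensor, score_b: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry: P(A preferred over B), parameterized with
    # log-strengths (here, scalar reward scores).
    return torch.sigmoid(score_a - score_b)

def reward_model_loss(reward_model, chosen_ids, rejected_ids):
    # Pairwise preference loss: maximize log P(chosen > rejected)
    # under the Bradley-Terry model, i.e. push the preferred output's
    # score above the rejected one's.
    r_chosen = reward_model(chosen_ids)      # assumed: one scalar per sequence
    r_rejected = reward_model(rejected_ids)
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

Nothing in this sketch makes the model "believe" anything; it only reshapes which of two candidate outputs gets scored higher, which is exactly the behavior-versus-knowledge distinction above.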
We'll talk more about this behavior-versus-knowledge point in our future Fine Tuning 101 episode as we break down how information is stored in models and how fine-tuning affects it.
At a high level, the loop looks something like this:
For many RLHF use cases today, we can assume the model we're training is already instruction-tuned for chat or whatever behavior the model is looking to achieve. In the "Reward Model & Other Infrastructure" box we have multiple pieces:

Reward + Preference Model
The reward model is trying to signal to the model how much it should change its behavior based on the human preference, subject to a KL constraint. The preference model itself scores the pairwise preferences from the same prompt (this worked better than scalar rewards).
One way to think about it is that the reward model tells the model how big of a change this new preference should make in the behavior in absolute terms, while the preference model calculates how big of a difference there is between the two outputs in relative terms. A lot of this derives from John Schulman's work on PPO:
We recommend watching him talk about it in the video above, and also Nathan's pseudocode distillation of the process:

Feedback Interfaces
Unlike the "thumbs up/down" buttons in ChatGPT, data annotation from labelers is much more thorough and has many axes of judgement. At a simple level, the LLM generates two outputs, A and B, for a given human conversation. It then asks the labeler to use a Likert scale to score which one it preferred, and by how much:
Through the labeling process, there are many other ways to judge a generation:
We then use all of this data to train a model from the preference pairs we have. We start from the base instruction-tuned model, and then run training in which the loss of our gradient descent is the difference between the good and the bad prompt.

Constitutional AI (RLAIF, model-as-judge)
As these models have gotten more sophisticated, people started asking the question of whether or not humans are actually a better judge of harmfulness, bias, etc., especially at the current price of data labeling. Anthropic's work on the "Constitutional AI" paper is using models to judge models. This is part of a broader "RLAIF" space: Reinforcement Learning from AI Feedback.
By using a "constitution" that the model has to follow, you are able to generate fine-tuning data for a new model that will be RLHF'd on these constitution principles. The RLHF model will then be able to judge outputs of models to make sure that they follow its principles:

Emerging Research
RLHF is still a nascent field, and there are a lot of different research directions teams are taking; some of the newest and most promising / hyped ones:
* Rejection sampling / Best of N Sampling: the core idea here is that rather than just scoring pairwise generations, you generate a lot more outputs (= more inference cost), score them all with your reward model and then pick the top N results. LLaMA2 used this approach, amongst many others.
* Process reward models: in Chain of Thought generation, scoring each step in the chain and treating it like its own state rather than just scoring the full output. This is most effective in fields like math that inherently require step-by-step reasoning.
* Direct Preference Optimization (DPO): We covered DPO in our NeurIPS Best Papers recap, and Nathan has a whole blog post on this; DPO isn't technically RLHF as it doesn't have the RL part, but it's the "GPU Poor" version of it. Mistral-Instruct was a DPO model, as are Intel's Neural Chat and StableLM Zephyr (a minimal sketch of the objective follows below).
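To ground the DPO point: rather than training a separate reward model and running RL, DPO directly pushes the policy's log-probabilities apart on chosen versus rejected completions, measured relative to a frozen reference model. This sketch follows the objective as described in the DPO paper; the function signature and the choice of beta are assumptions for illustration, not code from the episode:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen: torch.Tensor, policy_logp_rejected: torch.Tensor,
             ref_logp_chosen: torch.Tensor, ref_logp_rejected: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss for a batch of preference pairs.

    Each argument is a tensor of summed token log-probabilities for a
    completion under the policy or the frozen reference model (assumed
    to be precomputed elsewhere).
    """
    # Implicit rewards: how far the policy has drifted from the reference
    # on each completion, scaled by beta (the KL-tradeoff strength).
    chosen_reward = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (policy_logp_rejected - ref_logp_rejected)
    # Bradley-Terry again: maximize P(chosen > rejected) under the
    # implicit reward model, with no separate RL step.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```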
Note: Nathan also followed up this post with RLHF resources from his and peers' work:

Show Notes

* Full RLHF Slides
* Interconnects
* Retort (podcast)
* von Neumann-Morgenstern utility theorem
* Bradley-Terry model (pairwise preferences model)
* Constitutional AI
* TAMER (2008 paper by Bradley Knox and Peter Stone)
* Paul Christiano et al. RLHF paper
* InstructGPT
* Eureka by Jim Fan
* ByteDance / OpenAI lawsuit
* AlpacaEval
* MTBench
* TruthfulQA (evaluation tool)
* Self-Instruct Paper
* Open Assistant
* Louis Castricato
* Nazneen Rajani
* Tulu (DPO model from the Allen Institute)

Timestamps

* [00:00:00] Introductions and background on the lecture origins
* [00:05:17] History of RL and its applications
* [00:10:09] Intellectual history of RLHF
* [00:13:47] RLHF for decision-making and pre-deep RL vs deep RL
* [00:20:19] Initial papers and intuitions around RLHF
* [00:27:57] The three phases of RLHF
* [00:31:09] Overfitting issues
* [00:34:47] How preferences get defined
* [00:40:35] Ballpark on LLaMA2 costs
* [00:42:50] Synthetic data for training
* [00:47:25] Technical deep dive in the RLHF process
* [00:54:34] Rejection sampling / best-of-N sampling
* [00:57:49] Constitutional AI
* [01:04:13] DPO
* [01:08:54] What's the Allen Institute for AI?
* [01:13:43] Benchmarks and models comparisons

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:15]: Hey, and today we have Dr. Nathan Lambert in the house. Welcome.

Nathan [00:00:18]: Thanks guys.

Swyx [00:00:19]: You didn't have to come too far. You got your PhD in Berkeley, and it seems like you've lived there most of the time in recent years. You worked on robotics and model-based reinforcement learning in your PhD, and you also interned at FAIR and DeepMind. You bootstrapped the RLHF team at Hugging Face, and you recently joined the Allen Institute as a research scientist. So that's your quick bio. What should people know about you that maybe is not super obvious about you on LinkedIn?

Nathan [00:00:43]: I stay sane in various insane sport and ultra-endurance sport activities that I do.

Swyx [00:00:50]: What's an ultra-endurance sport activity?

Nathan [00:00:52]: Long-distance trail running or gravel biking. Try to unplug sometimes, although it's harder these days. Yeah.

Swyx [00:00:59]: Well, you know, just the Bay Area is just really good for that stuff, right?

Nathan [00:01:02]: Oh, yeah. You can't beat it. I have a trailhead like 1.2 miles from my house, which is pretty unmatchable in any other urban area.

Swyx [00:01:11]: Pretty excellent. You also have an incredible blog, Interconnects, which I'm a fan of. And I also just recently discovered that you have a new podcast, Retort.

Nathan [00:01:20]: Yeah, we do. I've been writing for a while, and I feel like I've finally started to write things that are understandable and fun. After a few years lost in the wilderness, if you ask some of my friends that I made read the earlier blogs, they're like, oh, this is yikes, but it's coming along.
And the podcast is with my friend Tom, and we just kind of like riff on what's actually happening on AI and not really do news recaps, but just what it all means and have a more critical perspective on the things that really are kind of funny, but still very serious, happening in the world of machine learning.

Swyx [00:01:52]: Yeah. Awesome. So let's talk about your work. What would you highlight as your greatest hits so far on Interconnects, at least?

Nathan [00:01:59]: So the ones that are most popular are timely and/or opinion pieces. So the first real breakout piece was in April, when I also just wrote down the thing that everyone in AI was feeling, which is we're all feeling stressed, that we're going to get scooped, and that we're overworked, which is behind the curtain, what it feels like to work in AI. And then a similar one, which we might touch on later in this, was about my recent job search, which wasn't the first time I wrote a job search post. People always love that stuff. It's so open. I mean, it's easy for me to do in a way that it's very on-brand, and it's very helpful. I understand that until you've done it, it's hard to share this information. And then the other popular ones are various model training techniques or fine tuning. There's an early one on RLHF, which is, this stuff is all just like when I figure it out in my brain. So I wrote an article that's like how RLHF actually works, which is just the intuitions that I had put together in the summer about RLHF, and that did pretty well. And then I opportunistically wrote about Q*, which I hate that you have to do it, but it is pretty funny. From a literature perspective, I'm like, OpenAI publishes on work that is very related to mathematical reasoning. So it's like, oh, you just poke a little around what they've already published, and it seems pretty reasonable. But we don't know. They probably just got like a moderate bump on one of their benchmarks, and then everyone lost their minds. It doesn't really matter.

Swyx [00:03:15]: You're like, this is why Sam Altman was fired. I don't know. Anyway, we're here to talk about RLHF 101. You did a presentation, and I think you expressed some desire to rerecord it. And that's why I reached out on Twitter saying, like, why not rerecord it with us, and then we can ask questions and talk about it. Yeah, sounds good.

Nathan [00:03:30]: I try to do it every six or 12 months is my estimated cadence, just to refine the ways that I say things. And people will see that we don't know that much more, but we have a bit of a better way of saying what we don't know.

Swyx [00:03:43]: Awesome. We can dive right in. I don't know if there's any other topics that we want to lay out as groundwork.

Alessio [00:03:48]: No, you have some awesome slides. So for people listening on podcast only, we're going to have the slides on our show notes, and then we're going to have a YouTube version where we run through everything together.

Nathan [00:03:59]: Sounds good. Yeah. I think to start, skipping a lot of the, like, what is a language model stuff, everyone knows that at this point. I think the quote from the Llama 2 paper is a great kind of tidbit on RLHF becoming like a real deal. There was some uncertainty earlier in the year about whether or not RLHF was really going to be important. I think it was not that surprising that it is. I mean, with recent models still using it, the signs were there, but the Llama 2 paper essentially reads like a bunch of NLP researchers that were skeptical and surprised.
So the quote from the paper was: "Meanwhile, reinforcement learning, known for its instability, seemed a somewhat shadowy field for those in the NLP research community. However, reinforcement learning proved highly effective, particularly given its cost and time effectiveness." So you don't really know exactly what the costs and time that Meta is looking at, because they have a huge team and a pretty good amount of money here to release these Llama models. This is just the kind of thing that we're seeing now. I think any major company that wasn't doing RLHF is now realizing they have to have a team around this. At the same time, we don't have a lot of that in the open and research communities at the same scale. I think seeing that converge would be great, but it's still very early days. And the other thing on the slide is some of Anthropic's work, but everyone knows Anthropic is kind of the masters of this, and they have some of their own techniques that we're going to talk about later on, but that's kind of where we start.

Alessio [00:05:17]: Can we do just a one-second RL version? So you come from a robotics background, which RL used to be, or maybe still is, state-of-the-art. And then now you're seeing a lot of LLM plus RL, so you have Jim Fan's Eureka, you have MPU, which we had on the podcast when they started with RL. Now they're doing RL plus LLMs. Yeah. Any thoughts there on how we got here? Maybe how the pendulum will keep swinging?

Nathan [00:05:46]: I really think RL is about a framing of viewing the world through trial and error learning and feedback, and really just one that's focused on thinking about decision-making and inputs in the world and how inputs have reactions. And in that, a lot of people come from a lot of different backgrounds, whether it's physics, electrical engineering, mechanical engineering. There are obviously computer scientists, but compared to other fields of CS, I do think it's a much more diverse background of people. My background was in electrical engineering and doing robotics and things like that. It really just changes the worldview. I think that reinforcement learning as it was back then, so to say, is really different. You're looking at these toy problems and the numbers are totally different, and everyone went kind of zero to one at scaling these things up, but people like Jim Fan and other people that were... You saw this transition in the decision transformer papers, when people were trying to use transformers to do decision-making for things like offline RL, and I think that was kind of like the early days. But then once language models were so proven, it's like everyone is using this tool for their research. I think in the long run, it will still settle out, or RL will still be a field that people work on just because of these kind of fundamental things that I talked about. It's just viewing the whole problem formulation different than predicting text, and so there needs to be that separation. And the view of RL in language models is pretty contrived already, so it's not like we're doing real RL. I think the last slide that I have here is a way to make RLHF more like what people would think of with RL, so actually running things over time, but a weird lineage of tools that happen to get us to where we are, so that's why the name takes up so much space, but it could have gone a lot of different ways. Cool.

Alessio [00:07:29]: We made it one slide before going on a tangent.

Nathan [00:07:31]: Yeah, I mean, it's kind of related.
This is a...

Swyx [00:07:35]: Yeah, so we have a history of RL.

Nathan [00:07:37]: Yeah, so to give the context, this paper really started because I have this more diverse background than some computer scientists, such as trying to understand what the difference between a cost function, a reward function, and a preference function would be, without going into all of the details. Costs are normally things that control theorists would work with in these kind of closed domains, and then reinforcement learning has always worked with rewards that are central to the formulation that we'll see, and then the idea was like, okay, we now are at preferences, and each step along the way there's kind of different assumptions that you're making. We'll get into these, and those assumptions are built on other fields of work. So that's what this slide is going to say, it's like RLHF, while directly building on tools from RL and language models, is really implicitly impacted and built on theories and philosophies spanning tons of human history. I think we cite Aristotle in this paper, which is fun. It's like going pre-BC, it's like 2,300 years old or something like that. So that's the reason to do this, I think. We kind of list some things in the paper about summarizing what different presumptions of RLHF could be. I think going through these is actually kind of funny. It's fun to talk about these, because they're kind of grab bags of things that you'll see return throughout this podcast as we're talking about it. The core thing of RLHF, in order to be a believer in this, is that RL actually works. It's like, if you have a reward function, you can optimize it in some way and get a different performance out of it, and you could do this at scale, and you could do this in really complex environments, which is, I don't know how to do that in all the domains. I don't know how to exactly make ChatGPT. So it kind of overshadows everything. And then you go from something kind of obvious like that, and then you read the von Neumann-Morgenstern utility theorem, which is essentially an economic theory that says you can weight different probabilities of different people, which is a theoretical piece of work that is the foundation of utilitarianism, and trying to quantify preferences is crucial to doing any sort of RLHF. And if you look into this, all of these things, there's way more you could go into if you're interested in any of these. So this is kind of like grabbing a few random things, and then kind of similar to that is the Bradley-Terry model, which is the fancy name for the pairwise preferences that everyone is doing. And then all the things that Anthropic and OpenAI figured out that you can do, which is that you can aggregate preferences from a bunch of different people and different sources. And then when you actually do RLHF, you extract things from that data, and then you train a model that works somehow. And we don't know, there's a lot of complex links there, but if you want to be a believer in doing this at scale, these are the sorts of things that you have to accept as preconditions for doing RLHF. Yeah.

Swyx [00:10:09]: You have a nice chart of like the sort of intellectual history of RLHF that we'll send people to refer to either in your paper or in the YouTube video for this podcast. But I like the other slide that you have on like the presumptions that you need to have for RLHF to work. You already mentioned some of those. Which one's underappreciated?
Like, this is the first time I've come across the VNM Utility Theorem.

Nathan [00:10:29]: Yeah, I know. This is what you get from working with people like my co-host on the podcast, the Retort, who is a sociologist by training. So he knows all these things and like who the philosophers are that found these different things like utilitarianism. But there's a lot that goes into this. Like essentially there's even economic theories that like there's debate whether or not preferences exist at all. And there's like different types of math you can use with whether or not you actually can model preferences at all. So it's pretty obvious that RLHF is built on the math that thinks that you can actually model any human preference. But this is the sort of thing that's been debated for a long time. So all the work that's here is like, and people hear about in their AI classes. So like Jeremy Bentham, like hedonic calculus and all these things, these are the side of work where people assume that preferences can be measured. And this is like, I don't really know, like, this is where I kind of go on a rant and I say that in RLHF, calling things a preference model is a little annoying, because there's no inductive bias of what a preference is. It's like if you were to learn a robotic system and you learned a dynamics model, like hopefully that actually mirrors the world in some way of the dynamics. But with a preference model, it's like, oh my God, I don't know what this model, like I don't know what ChatGPT encodes as any sort of preference or what I would want it to be in a fair way. Anthropic has done more work on trying to write these things down. But even like if you look at Claude's constitution, like that doesn't mean the model believes these things. It's just trained to prioritize these things. And that's kind of what the later points I'm looking at, like what RLHF is doing and if it's actually like a repeatable process in the data and in the training, that's just unknown. And we have a long way to go before we understand what this is and the link between preference data and any notion of like writing down a specific value.

Alessio [00:12:05]: Does the disconnect between more sociology work versus computer work already exist, or is it like a recent cross-contamination? Because when we had Tri Dao on the podcast, he said FlashAttention came to be because at Hazy they have so much overlap between systems engineers and like deep learning engineers. Is it the same in this field?

Nathan [00:12:26]: So I've gone to a couple of workshops for the populations of people who you'd want to include in this, like RLHF. I think the reason why it's not really talked about is just because the RLHF techniques that people use were built in labs like OpenAI and DeepMind, where there are some of these people. These places do a pretty good job of trying to get these people in the door when you compare them to like normal startups. But like they're not bringing in academics from economics, like social choice theory. There's just too much. Like the criticism of this paper that this is based on is like, oh, you're missing these things in RL, or at least this decade of RL, and it's like it would literally be bigger than the Sutton and Barto book if you were to include everyone. So it's really hard to include everyone in a principled manner when you're designing this. It's just a good way to understand and improve the communication of what RLHF is and like what is a good reward model for society.
It really probably comes down to what an individual wants, and it'll probably motivate models to move more in that direction and just be a little bit better about the communication, which is a recurring theme in kind of my work. It's like, I just get frustrated when people say things that don't really make sense, especially when it's going to manipulate individuals' values or manipulate the general view of AI or anything like this. So that's kind of why RLHF is so interesting. It's very vague in what it's actually doing, while the problem specification is very general.

Swyx [00:13:42]: Shall we go to the, I guess, the diagram here on the reinforcement learning basics? Yeah.

Nathan [00:13:47]: So reinforcement learning, I kind of mentioned this, it's a trial and error type of system. The diagram in the slides is really this classic thing where you have an agent interacting with an environment. So it's kind of this agent has some input to the environment, which is called the action. The environment returns a state and a reward, and that repeats over time, and the agent learns based on these states and these rewards that it's seeing, and it should learn a policy that makes the rewards go up. That seems pretty simple, but then if you try to mentally map what this looks like in language, the language models don't make this easy. I think with the language model, it's very hard to define what an environment is. So if the language model is the policy and it's generating, it's like the environment should be a human, but setting up the infrastructure to take tens of thousands of prompts and generate them and then show them to a human and collect the human responses and then shove that into your training architecture is very far away from working. So we don't really have an environment. We just have a reward model that returns a reward, and the state doesn't really exist when you look at it like an RL problem. What happens is the state is a prompt and then you do a completion and then you throw it away and you grab a new prompt. Really, as an RL researcher, you would think of this as being like you take a state, you get some completion from it, and then you look at what that is, and you keep kind of iterating on it, and all of that isn't here, which is why you'll hear RLHF referred to as a bandit problem, which is kind of like you choose one action and then you watch the dynamics play out. There's many more debates that you can have in this. If you get the right RL people in the room, they'll debate kind of like whether this is even RL when you zoom into what RLHF is doing.

Alessio [00:15:22]: Does this change as you think about a chain of thought reasoning and things like that? Like does the state become part of the chain that you're going through?

Nathan [00:15:29]: There's work that I've mentioned on one slide called process reward models that essentially rewards each step in the chain of thought reasoning. It doesn't really give you the interaction part, but it does make it a little bit more fine-grained, where you can think about it as at least having many states from your initial state. That formulation I don't think people have fully settled on. I think there's a bunch of great work out there; like even OpenAI is releasing a lot of this, and Let's Verify Step by Step is their pretty great paper on the matter. I think in the next year that'll probably get made more concrete by the community on like if you can easily draw out like if chain of thought reasoning is more like RL, we can talk about that more later. That's a kind of a more advanced topic than we probably should spend all the time on.
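To make the process reward model idea concrete, here is a rough sketch under our own assumptions; `step_reward` is a hypothetical scoring function standing in for a trained per-step reward model, not OpenAI's implementation:

```python
from typing import Callable, List

def score_chain_of_thought(prompt: str,
                           steps: List[str],
                           step_reward: Callable[[str, List[str]], float]) -> List[float]:
    """Score each step of a chain-of-thought individually, treating every
    step as its own state (a process reward), rather than scoring only the
    final answer (an outcome reward)."""
    scores = []
    for i in range(len(steps)):
        # The scorer sees the prompt plus all steps up to and including step i.
        scores.append(step_reward(prompt, steps[: i + 1]))
    return scores

# A crude but common convention for math problems is one reasoning step per line:
# scores = score_chain_of_thought(question, solution.split("\n"), step_reward)
```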
Swyx [00:16:13]: RLHF for decision making. You have a slide here that compares pre-deep RL versus deep RL.

Nathan [00:16:19]: This is getting into the history of things, which is showing that the work that people are using now really came from well outside of NLP, and it came before deep learning was big. Next up is this paper, TAMER, which is from 2008. Some names that are still really relevant in kind of human-centric RL, Bradley Knox and Peter Stone. If you have an agent take an action, you would just have a human give a score from zero to one as a reward, rather than having a reward function. And then with that classifier, you can do something with a policy that learns to take actions to maximize that reward. It's a pretty simple setup. It works in simple domains. And then the reason why this is interesting is you compare it to the paper that everyone knows, which is this Paul Christiano et al. Deep Reinforcement Learning from Human Preferences paper, which is where they showed that learning from human preferences, you can solve like the basic RL tasks at the time. So various control problems and simulation, and this kind of like human preferences approach had higher rewards in some environments than if you just threw RL at the environment that returned a reward. So the preferences thing was you took two trajectories. So in this case, it was like complete trajectories of the agent, and the human was labeling which one is better. You can see how this kind of comes to be like the pairwise preferences that are used today that we'll talk about. And there's also a really kind of interesting nugget, which is that the trajectory that the humans were labeling over has a lot more information than the RL algorithm would see if you just had one state, which is kind of why people think that it's why the performance in this paper was so strong. But I still think that it's surprising that there isn't more RL work of this style happening now. This paper is in 2017. So it's like six years later and I haven't seen things that are exactly similar, but it's a great paper to understand where stuff that's happening now kind of came from.

Swyx [00:17:58]: Just on the Christiano paper, you mentioned the performance being strong. I don't remember what results should I have in mind when I think about that paper?

Nathan [00:18:04]: It's mostly like if you think about an RL learning curve, which is like on the X axis, you have environment interactions, on the Y axis, you have performance. You can think about different like ablation studies between algorithms. So I think they use like A2C, which I don't even remember what that stands for, as their baseline. But if you do the human preference version on a bunch of environments, like with the human preference labels, the agent was able to learn faster than if it just learned from the signal from the environment, which means like it's happening because the reward model has more information than the agent would. But like the fact that it can do better, I was like, that's pretty surprising to me because RL algorithms are pretty sensitive. So I was like, okay.

Swyx [00:18:41]: It's just one thing I do want to establish as a baseline for our listeners. We are updating all the weights.
In some sense, the next token prediction task of training a language model is a form of reinforcement learning. Except that it's not from human feedback. It's just self-supervised learning from a general corpus. There's one distinction which I love, which is that you can actually give negative feedback. Whereas in a general sort of pre-training situation, you cannot. And maybe like the order of magnitude of feedback, like the Likert scale that you're going to talk about, that actually just gives more signal than a typical training process would do in a language model setting. Yeah.

Nathan [00:19:15]: I don't think I'm the right person to comment exactly, but like you can make analogies that reinforcement learning is self-supervised learning as well. Like there are a lot of things that will point to that. I don't know whether or not it's a richer signal. I think that could be seen in the results. It's a good thing for people to look into more. As reinforcement learning is so much less compute, like it is a richer signal in terms of its impact. Because if they could do what RLHF is doing at pre-training, they would, but they don't know how to have that effect in like a stable manner. Otherwise everyone would do it.

Swyx [00:19:45]: On a practical basis, as someone fine-tuning models, I have often wished for negative fine-tuning, which pretty much doesn't exist in OpenAI land. And it's not the default setup in open-source land.

Nathan [00:19:57]: How does this work in like diffusion models and stuff? Because you can give negative prompts to something like Stable Diffusion or whatever. It's for guidance.

Swyx [00:20:04]: That's for CLIP guidance.

Nathan [00:20:05]: Is that just from like how they prompt it then? I'm just wondering if we could do something similar. It's another tangent.

Swyx [00:20:10]: I do want to sort of spell that out for people in case they haven't made the connection between RLHF and the rest of the training process. They might have some familiarity with it.

Nathan [00:20:19]: Yeah. The upcoming slides can really dig into this, which is like this 2018 paper; there was a position paper from a bunch of the same authors from the Christiano paper and from the OpenAI work that everyone knows, which is like, they write a position paper on what a preference reward model could do to solve alignment for agents. That's kind of based on two assumptions. The first assumption is that we can learn user intentions to a sufficiently high accuracy. That doesn't land with me because I don't know what that means. But the second one is pretty telling in the context of RLHF, which is: for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior. And this is the whole thing. It's like we can compare two poems that the model generates, and it can be viewed as liking a positive example, or it could be viewed as really disliking a negative example. And that's what I think a lot of people are doing in like the harm space: a harmful response to a language model, whether or not you agree with the company's definition of harms, is a really bad negative example, and they downweight them by preferring something more benign in the RLHF process, among other ways of dealing with safety. So that's a good way of saying it's like this is core, this kind of like comparison and positive or negative example is core to all of the RLHF work that has continued.

Swyx [00:21:29]: People often say, I don't know what I want, but I'll know when I see it.
This is that expressed in reinforcement learning tools.

Nathan [00:21:35]: Yeah, it is. Yeah, it is. That's what everyone's doing in the preference modeling stage that we'll get to. Yeah. Yeah. And you can see there are more papers. This is really just to have all the links for people that go deeper. There's a Ziegler et al. paper in 2019, which shows that you can do this RLHF process on language models. This familiar diagram starts to emerge in 2019, and it's just to show that this goes really far back. I think we can kind of breeze through some of these. And then 2020 is the first OpenAI experiment that I think caught people's eyes, which is this Learning to Summarize experiment. It has this three-step process that we'll go into more when I kind of go into the main concepts. But this is like the first time you see this diagram that they reuse with InstructGPT, they reuse with ChatGPT. And the types of examples that they would have, I don't think I need to read these exactly, but one that I have read a whole bunch of times is like, they took these prompts from Reddit that were like, explain like I'm five or get career advice, and people really pour their heart and soul into these. So these are like multi-paragraph pieces of writing. And then they essentially do comparisons between a vanilla language model, like I think it was either GPT-2 or GPT-3, I don't always get the exact years.

Swyx [00:22:42]: 3 was early 2020. So that's about right.

Nathan [00:22:45]: Yeah. So this is probably done with GPT-2. It doesn't really matter. But the language model does normal things when you do few shot, which is like it repeats itself. It doesn't have nice text. And what they did is that this was the first time where the language model would generate like pretty nice text from an output. It was restricted to the summarization domain. But I think that, I guess this is where I wish I was paying attention more, because I would see the paper, but I didn't know to read the language model outputs and kind of understand this qualitative sense of the models very well then. Because you look at the plots in the papers, these Learning to Summarize and InstructGPT papers have incredibly pretty plots, just like nicely separated lines with error bars, and they're like supervised fine-tuning works, the RL step works. But if you were early to see like how different the language that was written by these models was, I think you could have been early to like things like ChatGPT and knowing RLHF would matter. And now I think the good people know to chat with language models, but not even everyone does this. Like people are still looking at numbers. And I think OpenAI probably figured it out when they were doing this, how important that could be. And then they had years to kind of chisel away at that and that's why they're doing so well now. Yeah.

Swyx [00:23:56]: I mean, arguably, you know, it's well known that ChatGPT was kind of an accident that they didn't think it would be that big of a deal. Yeah.

Nathan [00:24:02]: So maybe they didn't. Maybe they didn't, but they were getting the proxy that they needed.

Swyx [00:24:06]: I've heard off the record from other labs that it was in the air. If OpenAI didn't do it, someone else would have done it. So you've mentioned a couple of other papers that are very seminal to this period.
And I love how you say way back when in referring to 2019.

Nathan [00:24:19]: It feels like it in my life.

Swyx [00:24:21]: So how much should people understand the relationship between RLHF, instruction tuning, PPO, KL divergence, anything like that? Like how would you construct the level of knowledge that people should dive into? What should people know at the high level? And then if people want to dive in deeper, where do they go? Is instruct tuning important here or is that part of the overall process towards modern RLHF?

Nathan [00:24:44]: I think for most people, instruction tuning is probably still more important in their day to day life. I think instruction tuning works very well. You can write samples by hand that make sense. You can get the model to learn from them. You could do this with very low compute. It's easy to do almost in like no-code solutions at this point. And the loss function is really straightforward. And then if you're interested in RLHF, you can kind of learn from it from a different perspective, which is like how the instruction tuning distribution makes it easier for your RLHF model to learn. There's a lot of details depending on your preference data, if it's close to your instruction model or not, if that matters. But that's really at the RLHF stage. So I think it's nice to segment and just kind of understand what your level of investment and goals are. I think instruction tuning still can do most of what you want to do. And it's like, if you want to think about RLHF, at least before DPO really had taken off at all, it would be like, do you want to have a team of at least like five people if you're really thinking about doing RLHF? I think DPO makes it a little bit easier, but that's still really limited to kind of one data set that everyone's using at this point. Like everyone's using this UltraFeedback data set and it boosts AlpacaEval, MTBench, TruthfulQA and like the qualitative feel of the model a bit. We don't really know why. It's like, it might just be a data set combined with the method, but you've got to be ready for a bumpy ride if you're wanting to try to do RLHF. I don't really recommend most startups to do it unless it's like going to provide them a clear competitive advantage in their kind of niche, because you're not going to make your model ChatGPT-like, better than OpenAI, or anything like that. You've got to accept that there's some exploration there and you might get a vein of benefit in your specific domain, but I'm still like, oh, be careful going into the RLHF can of worms. You probably don't need to.

Swyx [00:26:27]: Okay. So there's a bit of a time skip in what you mentioned. DPO is like a couple months old, so we'll leave that towards the end. I think the main result that I think most people talk about at this stage, we're talking about September 2020 and then going into, I guess maybe last year, was Vicuña as one of the more interesting applications of instruction tuning that pushed LLaMA 1 from, let's say, a GPT-3-ish model to a GPT-3.5 model in pure open source with not a lot of resources. I think, I mean, they said something like, you know, they used like under $100 to make this.

Nathan [00:26:58]: Yeah. Like instruction tuning can really go a long way. I think the claims of ChatGPT level are long overblown in most of the things in open source. It's not to say, like, Vicuña was a huge step, and it's just kind of showing that instruction tuning with the right data will completely change what it feels like to talk with your model.
Yeah.

Swyx [00:27:19]: From text completion to actually chatting back and forth. Yeah. Yeah.

Nathan [00:27:23]: Instruction tuning can be multi-turn. Just having a little bit of data that's like a couple of turns can go a really long way. That was like the story of the whole first part of the year: people would be surprised by how far you can take instruction tuning on a small model. I think the things that people see now is like the small models don't really handle nuance as well, and they could be more repetitive even if they have really good instruction tuning. But if you take that kind of 7 to 70 billion parameter jump, like the instruction tuning at the bigger model is like robustness, little things make more sense. So that's still just with instruction tuning and scale more than anything else.

Swyx [00:27:56]: Excellent. Shall we go to technical overview?

Nathan [00:27:58]: Yeah. This is kind of where we go through my own version of this like three-phase process. You can talk about instruction tuning, which we've talked about a lot. It's funny because, of all these things, instruction tuning has the fewest slides, even though it's the most practical thing for most people. We could save the debate for like if the big labs still do instruction tuning for later, but that's a coming wave for people. And then like preference data and training, and then kind of like what does reinforcement learning optimization actually mean? We talk about these sequentially because you really have to be able to do each of them to be able to do the next one. You need to be able to have a model that's chatty or helpful instruction following. Every company has their own word that they like to assign to what instructions mean. And then once you have that, you can collect preference data and do some sort of optimization.

Swyx [00:28:39]: When you say word, you mean like angle bracket inst or do you mean something else?

Nathan [00:28:42]: Oh, I don't even know what inst means, but just saying like they use their adjective that they like. I think Anthropic also, like steerable is another one.

Swyx [00:28:51]: Just the way they describe it. Yeah.

Nathan [00:28:53]: So like instruction tuning, we've covered most of this. It's really about like you should try to adapt your models to specific needs. It makes models that were only okay extremely comprehensible. A lot of the times it's where you start to get things like chat templates. So if you want to do system prompts, if you want to ask your model, like act like a pirate, that's one of the ones I always do, which is always funny, but like whatever you like, act like a chef, like anything, this is where those types of things that people really know in language models start to get applied. So it's good as a kind of starting point because this chat template is used in RLHF and all of these things down the line, but it was a basic pointer. It's like, once you see this with instruction tuning, you really know it, which is like you take things like Stack Overflow where you have a question and an answer. You format that data really nicely. There's much more tricky things that people do, but I still think the vast majority of it is question answer. Please explain this topic to me, generate this thing for me. That hasn't changed that much this year. I think people have just gotten better at scaling up the data that they need. Yeah, this is where this talk will kind of take a whole left turn into more technical detail land.
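For readers who haven't seen one, a chat template is ultimately just deterministic string formatting around the conversation. A minimal sketch (the <|...|> tokens are illustrative placeholders; every model family defines its own special tokens):

```python
def apply_chat_template(system: str, turns: list[dict]) -> str:
    """Render a conversation into the single string the model actually sees.
    Real templates (e.g. the ones shipped in tokenizer configs) differ per model family."""
    out = f"<|system|>\n{system}\n"
    for turn in turns:
        out += f"<|{turn['role']}|>\n{turn['content']}\n"
    out += "<|assistant|>\n"  # cue the model to start its reply
    return out

prompt = apply_chat_template(
    "Act like a pirate.",
    [{"role": "user", "content": "Explain RLHF in one sentence."}],
)
```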
I put a slide with the RLHF objective, which I think is good for people to know. I've started going back to this more, just to kind of understand what is trying to happen here and what type of math people could do. I think because of this algorithm, we've mentioned this, it's in the air, direct preference optimization, but everything kind of comes from an equation of trying to learn a policy that maximizes the reward. The reward is some learned metric. A lot can be said about what the reward should be, subject to some constraint. The most popular constraint is the KL constraint, which is just a distributional distance. Essentially in language models, that means if you have a completion from your instruction or RLHF model, you can compare that completion to a base model. And looking at the log probs from the model, which are essentially how likely each token is, you can see a rough calculation of the distance between these two models, just as a scalar number. I think what that actually looks like in code, you can look at it. It'd be like a sum of log probs that you get right from the model. It'll look much simpler than it sounds, but it is just to make the optimization kind of stay on track. Make sure it doesn't overfit to the RLHF data. Because we have so little data in RLHF, overfitting is really something that could happen. I think it'll fit to specific features that labelers like to see, that the model likes to generate, punctuation, weird tokens like calculator tokens. It could overfit to anything if it's in the data a lot and it happens to be in a specific format. And the KL constraint prevents that. There's not that much documented work on that, but there's a lot of people that know if you take that away, it just doesn't work at all. I think it's something that people don't focus on too much. But the objective, as I said, it's just kind of, you optimize the reward. The reward is where the human part of this comes in. We'll talk about that next. And then subject to a constraint, don't change the model too much. The real questions are, how do you implement the reward? And then how do you make the reward go up in a meaningful way? So like a preference model, the task is kind of to design a human reward. I think the equation that most of the stuff is based on right now is something called a Bradley-Terry model, which is like a pairwise preference model where you compare two completions and you say which one you like better. I'll show an interface that Anthropic uses here. And the Bradley-Terry model is really a fancy probability between two selections. And what's happening in the math is that you're looking at the probability that the chosen completion, the one you like better, is actually the better completion over the rejected completion. And what these preference models do is they assume this probability is correlated to reward. So if you just sample from this probability, it'll give you a scalar. And then you use that reward later on to signify what piece of text is better. I'm kind of inclined to breeze through the math stuff because otherwise, it's going to be not as good to listen to.
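For readers who want the math being breezed through here, the objective Nathan is describing is the standard KL-constrained reward maximization, written below in common notation (the slides may use slightly different symbols):

```latex
\max_{\pi_\theta} \;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
\big[ r_\phi(x, y) \big]
\;-\;
\beta \, \mathbb{D}_{\mathrm{KL}}\!\big[ \pi_\theta(y \mid x) \,\Vert\, \pi_{\mathrm{ref}}(y \mid x) \big]
```

Here π_θ is the policy being trained, π_ref is the frozen reference (instruct) model, r_φ is the learned reward model, and β sets how strongly the KL term pulls the policy back toward the reference. The Bradley-Terry probability he mentions is p(y_chosen ≻ y_rejected | x) = σ(r(x, y_chosen) − r(x, y_rejected)).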
Alessio [00:32:49]: I think people want to hear it. I think there's a lot of higher level explanations out there. Yeah.

Nathan [00:32:55]: So the real thing is you need to assign a scalar reward of how good a response is. And that's not necessarily that easy to understand. Because if we take it back to one of the first works, I mentioned this TAMER thing for decision making. People tried that with language models, which is: if you have a prompt and a completion and you just have someone rate it from 0 to 10, could you then train a reward model on all of these completions and 0-to-10 ratings, and see if you can get ChatGPT with that? And the answer is really kind of no. Like a lot of people tried that. It didn't really work. And then that's why they tried this pairwise preference thing. And it happened to work. And this Bradley-Terry model comes from the 50s. It's from these fields that I was mentioning earlier. And it's wild how much this happens. I mean, this screenshot I have in the slides is from the DPO paper. I think it might be the appendix. But it's still really around in the literature of what people are doing for RLHF.

Alessio [00:33:45]: Yeah.

Nathan [00:33:45]: So it's a fun one to know.

Swyx [00:33:46]: I'll point out one presumption that this heavily relies on. You mentioned this as part of your six presumptions that we covered earlier, which is that you can aggregate these preferences. This is not exactly true among all humans, right? I have a preference for one thing. You have a preference for a different thing. And actually coming from economics, you mentioned economics earlier. There's a theorem or a name for this called Arrow's impossibility theorem, which I'm sure you've come across.

Nathan [00:34:07]: It's one of the many kind of things we throw around in the paper.

Swyx [00:34:10]: Right. Do we just ignore it?

Nathan [00:34:14]: We just, yeah, just aggregate. Yeah. I think the reason this really is done on a deep level is that you're not actually trying to model any contestable preference in this. You're not trying to go into things that are controversial or anything. It's really the notion of preference is trying to stay around correctness and style rather than any meaningful notion of preference. Because otherwise these companies, they don't want to do this at all. I think that's just how it is. And it's like, if you look at what people actually do. So I have a bunch of slides on the feedback interface. And they all publish this.

Swyx [00:34:43]: It's always in the appendices of every paper.

Nathan [00:34:47]: There's something later on in this talk, which is like, but it's good to mention. And this is when you're doing this preference collection, you write out a very long document of instructions to people that are collecting this data. And it's like, this is the hierarchy of what we want to prioritize. Something around like factuality, helpfulness, honesty, harmlessness. These are all different things. Every company will rank these in different ways, provide extensive examples. It's like, if you see these two answers, you should select this one and why. And all of this stuff. And then my kind of like head-scratching is like, why don't we check if the models actually do these things that we tell the data annotators to collect? But I think it's because it's hard to make that attribution. And it's hard to test if a model is honest and stuff. It would just be nice to understand the kind of causal mechanisms as a researcher or like if our goals are met. But at a simple level, what it boils down to, I have a lot more images than I need. It's like you're having a conversation with an AI, something like ChatGPT. You get shown two responses or more in some papers, and then you have to choose which one is better. I think something you'll hear a lot in this space is something called a Likert scale. Likert is a name.
It's a name for probably some research in economics, decision theory, something. But essentially, it's a type of scale where if you have integers from like one to eight, the middle numbers will represent something close to a tie. And the smallest numbers will represent one model being way better than the other, and the biggest numbers will be like the other model being better. So in the case of one to eight, if you're comparing models A to B, you return a one if you really liked option A, you return an eight if you really liked B, and then like a four or five if they were close. There's other ways to collect this data. This one's become really popular. We played with it a bit at Hugging Face. It's hard to use. Filling out this preference data is really hard. You have to read like multiple paragraphs. It's not for me. Some people really like it. For me, I'm like, I can't imagine sitting there and reading AI-generated text and like having to do that for my job. But a lot of these early papers in RLHF have good examples of what was done. The one I have here is from Anthropic's collection demo because it was from slides that I did with Anthropic. But you can look up these in the various papers. It looks like ChatGPT with two responses, and then you have an option to say which one is better. It's nothing crazy. The infrastructure is almost exactly the same, but they just log which one you think is better. I think places like Scale are also really big in this where a lot of the labeler companies will help control like who's doing how many samples. You have multiple people go over the same sample once and like what happens if there's disagreement. I don't really think this disagreement data is used for anything, but it's good to know like what the distribution of prompts is, who's doing it, how many samples you have, controlling the workforce. All of this is very hard. A last thing to add is that a lot of these companies do collect optional metadata. I think the Anthropic example shows a rating of like how good was the prompt or the conversation from good to bad, because things matter. Like there's kind of a quadrant of preference data in my mind, which is you're comparing a good answer to a good answer, which is like really interesting signal. And then there's kind of the option of you're comparing a bad answer to a bad answer, which is like you don't want to train your model on two different issues. This is like, we did this at Hugging Face and it was like, our data was like, we don't know if we can use this, because a lot of it was just bad answer to bad answer, because you're like rushing to try to do this real contract. And then there's also good answer to bad answer, which I think is probably pretty reasonable to include. You just prefer the good one and move on with your life. But those are very different scenarios. I think the OpenAIs of the world are all in good answer, good answer, and have learned to eliminate everything else. But when people try to do this in open source, it's probably like what Open Assistant saw: there's just a lot of bad answers in your preference data. And you're like, what do I do with this? Metadata flags can help. I threw in the InstructGPT metadata. You can see how much they collect here. And like everything from the model fails to actually complete the task, hallucinations, different types of offensive or dangerous content, moral judgment, expresses opinion.
Like, I don't know exactly if they're doing this now, but you can kind of see why doing RLHF at scale and prioritizing a lot of different endpoints would be hard, because these are all things I'd be interested in if I was scaling up a big team to do RLHF, and like what is going into the preference data. You do an experiment and you're like, okay, we're going to remove all the data where they said the model hallucinates, like just that, and then retrain everything. Like, what does that do?

Swyx [00:38:59]: Yeah, so hallucination is big, but some of these other metadata categories, and I've seen this in a lot of papers, it's like, does it contain sexual content? Does it express a moral judgment? Does it denigrate a protected class? That kind of stuff, very binary. Should people try to adjust for this at the RLHF layer or should they put it as a pipeline where they have a classifier as a separate model that grades the model output?

Nathan [00:39:20]: Do you mean for training or like a deployment? Deployment. I do think that people are doing it at deployment. I think we've seen safety and other things in the RLHF pipeline. Like Llama 2 is famous for kind of having these like helpfulness and safety reward models. Deep in the Gemini report is something that Gemini has like four things, which is like helpfulness, factuality, maybe safety, maybe something else. But places like Anthropic and ChatGPT and Bard almost surely have a classifier after, which is like, is this text good? Is this text bad? That's not that surprising, I think, because you could use like a hundred times smaller language model and do much better at filtering than RLHF. But I do think it's still so deeply intertwined with the motivation of RLHF to be for safety that some of these categories still persist. I think that's something that'll kind of settle out, I think.

Swyx [00:40:11]: I'm just wondering if it's worth collecting this data for the RLHF purpose, if you're not going to use it in any way, separate model to-

Nathan [00:40:18]: Yeah, I don't think OpenAI will collect all of this anymore, but I think for research perspectives, it's very insightful to know, but it's also expensive. So essentially your preference data scales with how many minutes it takes for you to do each task, and every button is like, it scales pretty linearly. So it's not cheap stuff.

Swyx [00:40:35]: Can we, since you mentioned expensiveness, I think you may have joined one of our spaces back when Llama 2 was released. We had an estimate from you that was something on the order of Llama 2 costs $3 to $6 million to train GPU-wise, and then it was something like $20 to $30 million in preference data. Is that something that's still in the ballpark? I don't need precise numbers.

Nathan [00:40:56]: I think it's still a ballpark. I know that the 20 million was off by a factor of four because I was converting from a prompt number to a total data point. So essentially when you do this, if you have a multi-turn setting, each turn will be one data point, and the Llama 2 paper reports like 1.5 million data points, which could be like 400,000 prompts. So I would still say like 6 to 8 million is safe to say that they're spending, if not more. They're probably also buying other types of data and/or throwing out data that they don't like, but it's very comparable to compute costs. But the compute costs listed in the paper always are way lower because all they have to say is like, what does one run cost? But they're running tens or hundreds of runs.
So it's like, okay, like... Yeah, it's just kind of a meaningless number. Yeah, the data number would be more interesting.

Alessio [00:41:42]: What's the depreciation of this data?

Nathan [00:41:46]: It depends on the method. Like some methods, people think that it's more sensitive to the, this is what I was saying. It was like, does the type of instruction tuning you do matter for RLHF? So like, depending on the method, some people are trying to figure out if you need to have like what is called, this is very confusing, it's called like on-policy data, which is like your RLHF data is from your instruction model. I really think people in open source and academics are going to figure out how to use any preference data on any model just because they're scrappy. But there's been an intuition that to do like PPO well and keep improving the model over time and do like what Meta did and what people think that OpenAI does is that you need to collect new preference data to kind of edge the distribution of capabilities forward. So there's a depreciation where like the first batch of data you collect isn't really useful for training the model when you have the fifth batch. We don't really know, but it's a good question. And I do think that if we had all the LLaMA data, we wouldn't know what to do with all of it. Like probably like 20 to 40% would be pretty useful for people, but not the whole data set. Like a lot of it's probably kind of gibberish because they had a lot of data in there.

Alessio [00:42:51]: So do you think like the open source community should spend more time figuring out how to reuse the data that we have or like generate more data? I think that's one of the-

Nathan [00:43:02]: I think people are kind of locked into using synthetic data. People also think that synthetic data, like from GPT-4, is more accurate than humans at labeling preferences. So if you look at these diagrams, like humans are about 60 to 70% agreement. And we're like, that's what the models get to. And if humans are about 70% agreement or accuracy, like GPT-4 is like 80%. So it is a bit better, which is like one way of saying it.

Swyx [00:43:24]: Humans don't even agree with humans 50% of the time.

Nathan [00:43:27]: Yeah, so like that's the thing. It's like the human disagreement or the lack of accuracy should be like a signal, but how do you incorporate that? It's really tricky to actually do that. I think that people just keep using GPT-4 because it's really cheap. It's one of my like go-tos, like I just say this over and over again, is like GPT-4 for data generation, all terms and conditions aside, because we know OpenAI has this stuff, is like very cheap for getting pretty good data compared to compute or salary of any engineer or anything. So it's like, tell people to go crazy generating GPT-4 data if you're willing to take the organizational like cloud of should we be doing this? But I think most people have accepted that you kind of do this, especially at individuals. Like they're not gonna come after individuals. I do think more companies should think twice before doing tons of OpenAI outputs. Also just because the data contamination and what it does to your workflow is probably hard to control at scale.
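A sketch of the synthetic-preference idea being discussed, using a stronger model as the labeler. The `llm` callable is a stand-in for whatever completion API you use, not a specific client library:

```python
def synthetic_preference(prompt: str, completion_a: str, completion_b: str, llm) -> dict:
    """Ask a stronger model to play the human labeler and pick the preferred
    completion (RLAIF-style synthetic preference data)."""
    judge_prompt = (
        "Which response to the prompt below is more helpful and harmless?\n\n"
        f"Prompt: {prompt}\n\nA: {completion_a}\n\nB: {completion_b}\n\n"
        "Answer with exactly one letter: A or B."
    )
    choice = llm(judge_prompt).strip().upper()
    if choice.startswith("A"):
        chosen, rejected = completion_a, completion_b
    else:
        chosen, rejected = completion_b, completion_a
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}
```

Randomizing the A/B order per sample is a cheap way to control for the judge's well-known position bias.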
Swyx [00:44:21]: And we should just mention, at the time of recording, we've seen the first example of OpenAI enforcing their terms of service. ByteDance was caught, reported to be training on GPT-4 data, and they got their access to OpenAI revoked. So that was one example.

Nathan [00:44:36]: Yeah, I don't expect OpenAI to go too crazy on this cause they're just gonna, there's gonna be so much backlash against them. And like, everyone's gonna do it anyways.

Swyx [00:44:46]: And what's at stake here, to spell it out, is like, okay, it costs like $10 to collect one data point from a human. It's gonna cost you like a 10th of a cent with OpenAI, right? So like it's just orders of magnitude cheaper. And therefore people-

Nathan [00:44:58]: Yeah, and it's like the signal you get from humans for preferences isn't that high. The signal that you get from humans for instructions is pretty high, but it is also very expensive. So like the human instructions are definitely like by far and away the best ones out there compared to the synthetic data. But I think like the synthetic preferences are just so much easier to get some sort of signal running with, and you can work in other, I think people will start working in other goals there between safety and whatever. That's something that's taking off and we'll kind of see that. I think in 2024, at some point, people will start doing things like constitutional AI for preferences, which will be pretty interesting. I think we saw how long it took RLHF to get started in open source. Instruction tuning was like the only thing that was really happening until maybe like August, really. I think Zephyr was the first model that showed success with RLHF in the public, but that's a long time from everyone knowing that it was something that people are interested in to having any like check mark. So I accept that and think the same will happen with constitutional AI. But once people show that you can do it once, they continue to explore.

Alessio [00:46:01]: Excellent.

Swyx [00:46:01]: Just in the domain of human preference data suppliers, Scale.ai very happily will tell you that they supplied all that data for Llama 2. The other one is probably interesting, LMSYS from Berkeley. What they're running with Chatbot Arena is perhaps a good store of human preference data.

Nathan [00:46:17]: Yeah, they released some toxicity data. They, I think, are generally worried about releasing data because they have to process it and make sure everything is safe, and they're a really lightweight operation. I think they're trying to release the preference data. I have, if we make it to evaluation, I'd pretty much say that Chatbot Arena is the best limited evaluation that people have to learn how to use language models. And like, it's very valuable data. They also may share some data with people that they host models from. So like if your model is hosted there and you pay for the hosting, you can get the prompts because you're pointing the endpoint at it and that gets pinged to you, and any real LLM inference stack saves the prompts tha

Verdibørsen
The Philosopher Who Was Stuffed

Verdibørsen

Play Episode Listen Later Jan 7, 2024 16:34


Hear about Jeremy Bentham and other favorite philosophers! Listen to the episode in the NRK Radio app.

Mises Media
2. Jeremy Bentham: The Utilitarian as Big Brother

Mises Media

Play Episode Listen Later Jan 4, 2024 46:24


An Austrian Perspective on the History of Economic Thought, Volume 2: Classical Economics The second volume contains an enlightening critique of Ricardian economics, showing the constraints on theory entailed by Ricardo's static and pseudo-mathematical method. Ricardo's successor John Stuart Mill is the object of a devastating intellectual portrait. Marxism is subjected to a merciless demolition, and Rothbard shows the roots of this system in metaphysical speculation. The French classical liberals such as Bastiat, on the other hand, contributed to the subjectivist school. A further highlight of this volume is a discussion of the bullionist controversy: the views of the Banking and Currency Schools receive extensive analysis. Narrated by Jeff Riggenbach

Mises Media
2. Jeremy Bentham: The Utilitarian as Big Brother (continued)

Mises Media

Play Episode Listen Later Jan 4, 2024 26:18


An Austrian Perspective on the History of Economic Thought, Volume 2: Classical Economics The second volume contains an enlightening critique of Ricardian economics, showing the constraints on theory entailed by Ricardo's static and pseudo-mathematical method. Ricardo's successor John Stuart Mill is the object of a devastating intellectual portrait. Marxism is subjected to a merciless demolition, and Rothbard shows the roots of this system in metaphysical speculation. The French classical liberals such as Bastiat, on the other hand, contributed to the subjectivist school. A further highlight of this volume is a discussion of the bullionist controversy: the views of the Banking and Currency Schools receive extensive analysis. Narrated by Jeff Riggenbach

Lenny's Podcast: Product | Growth | Career
Strategies for becoming less distracted and improving focus | Nir Eyal (author of Indistractable and Hooked)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Dec 29, 2023 84:42


Nir Eyal is the author of two best-selling books, Hooked: How to Build Habit-Forming Products and Indistractable: How to Control Your Attention and Choose Your Life. He writes, consults, and teaches at the intersection of psychology, technology, and business. His books have sold over 1 million copies in more than 30 languages; he has taught at Stanford's Graduate School of Business and its Design School; and he has started and sold two startups since 2003.

In our conversation, we discuss:
• Strategies for becoming less distractible and improving focus
• The difference between distraction and "traction"
• Reactive work vs. reflexive work and why you should book time in your calendar
• The 10-minute rule to overcome internal triggers and stay focused
• The problem with to-do lists, and what to do instead
• The value of creating a timebox schedule that aligns with personal values and priorities
• The use of pacts as a last line of defense against distraction
• How to develop a high-agency mindset
• Advice for leaders on helping employees improve focus in the workplace

Brought to you by Vanta—Automate compliance. Simplify security | Jira Product Discovery—Atlassian's new prioritization and roadmapping tool built for product teams | Teal—Your personal career growth platform

Find the full transcript at: https://www.lennyspodcast.com/strategies-for-becoming-less-distracted-and-improving-focus-nir-eyal-author-of-indistractable-and/

Where to find Nir Eyal:
• X: https://twitter.com/nireyal
• LinkedIn: https://www.linkedin.com/in/nireyal/
• Website: https://www.nirandfar.com/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Nir's background
(04:20) How to become less distractible
(07:43) Understanding distraction and traction
(12:52) The four steps to becoming indistractable
(13:53) Mastering internal triggers
(18:49) Surfing the urge with a 10-minute timer
(23:20) Making time for traction with a timebox schedule
(25:02) How to turn your values into time
(28:36) Booking deep work time
(29:22) Making pacts to prevent distraction
(31:00) The problem with to-do lists
(34:31) The drawback of deadlines
(36:08) Distraction is an emotion regulation problem
(39:54) Hacking back external triggers
(45:03) Preventing distraction with pacts
(48:18) Specific tools to hold you accountable
(53:42) Managing emotions and discomfort
(56:37) Taking responsibility and being high-agency
(01:00:09) Becoming indistractable at work
(01:05:04) Schedule syncing to align with managers
(01:09:36) We are not as hooked on technology as people think
(01:16:00) Life purpose and personal responsibility
(01:17:38) Lightning round

Referenced:
• Indistractable: How to Control Your Attention and Choose Your Life: https://www.amazon.com/Indistractable-Control-Your-Attention-Choose/dp/194883653X
• Hooked: How to Build Habit-Forming Products: https://www.amazon.com/Hooked-How-Build-Habit-Forming-Products/dp/1591847788
• Dorothy Parker's quote: https://twitter.com/nireyal/status/1472280598723108866
• "Writing is bleeding" quote: https://www.hemingwaysociety.org/quotation-controversy-writing-and-bleeding
• The Pomodoro Technique Explained: https://www.forbes.com/sites/bryancollinseurope/2020/03/03/the-pomodoro-technique/
• Timeboxing: Why It Works and How to Get Started in 2024: https://www.nirandfar.com/timeboxing/
• Using your working time well - Issue 22: https://www.lennysnewsletter.com/p/time-management-issue-22
• All-In podcast: https://www.allinpodcast.co/
• Nir's post about "the planning fallacy": https://www.linkedin.com/posts/nireyal_why-do-tasks-always-seem-to-take-longer-than-activity-7137440438939959297-XIUB/
• How the Ancient Greeks Beat Distraction: https://www.nirandfar.com/tantalizing-distractions/
• Jeremy Bentham: https://iep.utm.edu/jeremy-bentham
• An overview of Sigmund Freud's pleasure principle: https://www.sciencedirect.com/topics/nursing-and-health-professions/pleasure-principle
• The Matrix "There is no spoon" scene: https://www.youtube.com/watch?v=uAXtO5dMqEI
• Outlet timer: https://www.amazon.com/Century-Indoor-24-Hour-Mechanical-Outlet/dp/B01LPSGBZS
• Forest app: https://www.forestapp.cc/
• Focusmate: https://www.focusmate.com/
• Have We Been Thinking About Willpower the Wrong Way for 30 Years?: https://hbr.org/2016/11/have-we-been-thinking-about-willpower-the-wrong-way-for-30-years
• We Need Social Antibodies to Fight the Disease of Distraction: https://nireyal.medium.com/we-need-social-antibodies-to-fight-the-disease-of-distraction-51f9187be016
• The Mere Presence of Your Smartphone Reduces Brain Power, Study Shows: https://news.utexas.edu/2017/06/26/the-mere-presence-of-your-smartphone-reduces-brain-power
• Leading in Tough Times: HBS Faculty Member Amy C. Edmondson on Psychological Safety: https://www.hbs.edu/recruiting/insights-and-advice/blog/post/leading-in-tough-times
• If Tech Is So Distracting, How Do Slack Employees Stay So Focused?: https://www.nirandfar.com/slack-use/
• Managing up: https://www.lennysnewsletter.com/p/managing-up
• Duolingo: https://www.duolingo.com/
• FitBot: https://www.fitbotapp.com/
• Paulo Coelho's quote: https://twitter.com/paulocoelho/status/416264984188825600
• Alchemy: The Dark Art and Curious Science of Creating Magic in Brands, Business, and Life: https://www.amazon.com/Alchemy-Curious-Science-Creating-Business/dp/006238841X
• The Experience Machine: How Our Minds Predict and Shape Reality: https://www.amazon.com/Experience-Machine-Minds-Predict-Reality/dp/1524748455
• Empire of the Sun on Prime Video: https://www.amazon.com/Empire-Sun-Christian-Bale/dp/B001N3JY82
• Sesame grinder: https://www.miyacompany.com/450-014-450-014
• Muji pens: https://www.muji.us/collections/pen-pencils

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed.
Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

The Fourth Way
(296)S11E9/18: The Panopticon

The Fourth Way

Play Episode Listen Later Dec 12, 2023 21:02


A huge thanks to Seth White for the awesome music!
Thanks to Palmtoptiger17 for the beautiful logo: https://www.instagram.com/palmtoptiger17/
Facebook Page: https://www.facebook.com/thewayfourth/?modal=admin_todo_tour
YouTube: https://www.youtube.com/channel/UCTd3KlRte86eG9U40ncZ4XA?view_as=subscriber
Instagram: https://www.instagram.com/theway4th/
Kingdom Outpost: https://kingdomoutpost.org/
My Reading List Goodreads: https://www.goodreads.com/author/show/21940220.J_G_Elliot
Spotify Playlist: https://open.spotify.com/playlist/4VSvC0SJYwku2U0awRaNAu?si=3ad0b2fbed2e4864
Propaganda Season Outline: https://docs.google.com/spreadsheets/d/1xa4MhYMAg2Ohc5Nvya4g9MHxXWlxo6haT2Nj8Hlws8M/edit?usp=sharing
Episode Outline/Transcript: https://docs.google.com/document/d/12hvujJAW3X9W-w98b7TNWi8pYow6Xupd-HHZ6_BeVV4/edit?usp=sharing
Loving to Know: https://www.goodreads.com/book/show/11933842-loving-to-know?from_search=true&from_srp=true&qid=XWnd3R1hFl&rank=1
Panopticon: https://www.youtube.com/watch?v=RbllEmx0WPU&t=371s
Discipline and Punish: https://www.goodreads.com/book/show/80369.Discipline_and_Punish?from_search=true&from_srp=true&qid=rD9eynB6b3&rank=1
Snowden: https://www.goodreads.com/author/list/7140597.Edward_Snowden
Ellsberg: https://www.goodreads.com/book/show/86433.Secrets?from_search=true&from_srp=true&qid=7aHe1xV3lS&rank=3
HTLINGUAL: https://en.wikipedia.org/wiki/HTLINGUAL
1971 COINTELPRO Documentary: https://www.youtube.com/watch?v=y9lr3-7EYHI
Optic Nerve: https://en.wikipedia.org/wiki/Optic_Nerve_(GCHQ)
Double Slit Experiment: https://www.youtube.com/watch?v=Iuv6hY6zsd0
Schrodinger's Cat: https://www.youtube.com/watch?v=UjaAxUO6-Uw
Thanks to our monthly supporters: Laverne Miller, Jesse Killion
★ Support this podcast on Patreon ★

Climate 21
Decoding the Tesla Phenomenon & More: Climate Revelations with Jeremy Bentham

Climate 21

Play Episode Listen Later Nov 1, 2023 51:22 Transcription Available


In this week's episode of the Climate Confident podcast, I had the pleasure of chatting with Jeremy Bentham, Co-Chair & Senior Advisor at the World Energy Council. Jeremy brings a wealth of knowledge from his extensive background in the energy sector.

The Dishcast with Andrew Sullivan
Martha Nussbaum On Justice For Animals

The Dishcast with Andrew Sullivan

Play Episode Listen Later Oct 13, 2023 42:49


This is a free preview of a paid episode. To hear more, visit andrewsullivan.substack.com.

Martha is a philosopher and legal thinker. She has taught at Harvard, Brown, and Oxford, and is currently the Ernst Freund Distinguished Service Professor of Law and Ethics at the University of Chicago, appointed in the Philosophy Department and the Law School. Her many books include The Fragility of Goodness, Sex and Social Justice, Creating Capabilities, and From Disgust to Humanity: Sexual Orientation and Constitutional Law. Her new book, which we discuss in this episode, is Justice for Animals.

You can listen to the episode right away in the audio player above (or on the right side of the player, click “Listen On” to add the Dishcast feed to your favorite podcast app). For two clips of our convo — on whether fish feel pain, and if we should sterilize city rats instead of killing them — pop over to our YouTube page.

Other topics: Martha growing up in NYC; converting to Judaism; studying Latin and Greek; becoming a professional actress; giving up meat; her late daughter's profound influence on Justice For Animals; Aristotle's views on justice; the difference between instinct and sentience; why crustaceans and insects probably don't feel pain; preventing pain vs. stopping cruelty; Jeremy Bentham and Peter Singer; the matriarchal society of orcas; Martha and Amartya Sen's creation of the “capability approach”; how zoos prevent pain but nevertheless limit life; how parrots are content living solo, even in a lab; why we shouldn't rank animals according to intelligence; George Pitcher's The Dogs Who Came to Stay; the various ways humans are inept compared to animals; how a dolphin can detect human pregnancy; how some animals have a precise sense of equality; the diffuse brain of the octopus; the emotional lives of elephants; our brutality toward pigs; why the intelligence of plants is merely “handwaving”; how humans are the only animals to show disgust with their own bodies; our sublimation of violent instincts; mammals and social learning; Matthew Scully's Dominion and the “caring stewardship” of animals among Christians; whether humane meat on a mass scale is possible; the emergence of lab meat; Martha's advice on what you can do to protect animals; JR Ackerley's book My Dog Tulip; euthanasia; and various tales of Bowie, my beloved, late beagle.

The subject of animal rights was first tackled on the Dishcast with vegan activist John Oberg, and we posted a ton of your commentary here. Browse the Dishcast archive for another convo you might enjoy (the first 102 episodes are free in their entirety — subscribe to get everything else). Coming up soon: Spencer Klavan on How to Save the West: Ancient Wisdom for 5 Modern Crises and Matthew Crawford, author of Shop Class as Soulcraft. Later on, two NYT columnists — David Brooks and Pamela Paul — and the authors of Where Have All the Democrats Gone?, John Judis and Ruy Teixeira.

Have a question you want me to ask one of these future guests? Email dishpub@gmail.com, and please put the question in the subject line. Please send any guest recs, pod dissent and other comments to dish@andrewsullivan.com.

Arts & Ideas
New Thinking: Work and protest

Arts & Ideas

Play Episode Listen Later Oct 13, 2023 35:29


Jane Eyre and Shirley by Charlotte Brontë both refer to the unrest in Yorkshire which took place in the early years of the nineteenth century as new technology threatened jobs in the mills. Literary historian Sophie Coulombeau discusses parallels between the Luddites and concerns over AI now, and looks at what is real and what is fictional in the novels studied by Jonathan Brockbank of the University of York. Tania Shew shares some of the accounts of strikes outside the workplace which she has uncovered in her research. These include a charity worker strike and school strikes organised by pupils in 1911. How far do they strike a chord with more modern strike action?

Dr Jonathan Brockbank is a Lecturer in Modern Literature at the University of York who is exploring Luddite protests and their depiction in literature.

Dr Tania Shew is the holder of the Isaiah Berlin Junior Research Fellowship at Wolfson College, Oxford, researching the women's suffrage movement. You can hear her discussing her work on suffrage sex strikes in this episode of New Thinking called Women's History: https://www.bbc.co.uk/programmes/p0bsjyr8

Dr Sophie Coulombeau teaches literature at the University of York and has published articles on the writing of Frances Burney, Elizabeth Montagu, William Godwin and Jeremy Bentham. She is editing a volume of essays, Mary Hamilton and Her Circles, alongside colleagues working on the “Unlocking the Mary Hamilton Papers” project at the John Rylands Library, and is a BBC/AHRC New Generation Thinker on the scheme which promotes research on the radio.

This New Thinking episode of the Arts & Ideas podcast was made in partnership with the Arts and Humanities Research Council (AHRC), part of UKRI. You can find more collected on the Free Thinking programme website of BBC Radio 3 under New Research, or if you sign up for the Arts & Ideas podcast you can hear discussions about a range of topics.

Unf*cking The Republic
Understanding Socialism: Part Six. Epilogue.

Unf*cking The Republic

Play Episode Listen Later Oct 2, 2023 56:41


The final, final (no really) installment in our series on socialism looks narrowly at the period between World War One and the Russian Revolution to identify factors that contributed to the Bolshevik departure from Marxist theory and how nationalism squashed any hope for an internationalist movement. We revisit the words of the theorists and activists we covered in the series, from Jeremy Bentham to Eugene Debs, and raise difficult questions about the future of socialist activity in the United States specifically and whether new ideas are required to battle the ravages of capitalism.

Chapters:
Intro: 00:04:17
Part One: 00:05:41
Part Two: 00:16:08
Post Show Musings: 00:37:16
Outro: 00:55:37

Book Love:
Joseph A. Schumpeter: Capitalism, Socialism, and Democracy
John M. Thompson: Revolutionary Russia, 1917
Bernard Harcourt: Critique and Praxis
Ray Ginger: The Bending Cross: A Biography of Eugene Victor Debs
Karl Marx: The Communist Manifesto
Karl Marx: Das Kapital
Michael Harrington: Socialism: Past and Future
Victor Serge + Natalia Ivanovna Sedova: Life and Death of Leon Trotsky
Anne Sebba: Ethel Rosenberg: An American Tragedy
Peter Kropotkin: The Conquest of Bread
Staughton Lynd + Andrej Grubačic: Wobblies and Zapatistas: Conversations on Anarchism, Marxism, and Radical History
Emma Goldman: Anarchism and Other Essays
Anthony J. Nocella II, Mark Seis and Jeff Shantz: Classic Writings in Anarchist Criminology: A Historical Dismantling of Punishment and Domination
Margaret MacMillan: The War That Ended Peace: The Road to 1914

Resources:
The Collector: What do Hegel and Marx Have in Common?
Socialist Alternative: Robert Owen and Utopian Socialism
Marxists.org: Encyclopedia of Marxism: Events
Washington State University: Introduction to 19th-Century Socialism
Howard Zinn: Commemorating Emma Goldman: 'Living My Life'
Stanford: Hegel's Dialectics
The History of Economic Thought: Cesare Beccaria
Stanford: Jeremy Bentham
Foundation for Economic Education: Robert Owen: The Woolly-Minded Cotton Spinner
Stanford: Karl Marx
Central European Economic and Social History: Economic Development In Europe In The 19th Century
Marxists.org: Encyclopedia of Marxism
The New Yorker: Karl Marx, Yesterday and Today
Marxists.org: Glossary of Organisations
Northwestern Whitepaper: The Second Industrial Revolution
The Collector: Revolutions of 1848
Chemins de Mémoire: Franco-Prussian War of 1870
Journal of Modern History: 1870 in European History and Historiography
JSTOR: Paul Avrich: The Legacy of Bakunin
Marxists.org: Bakunin
The Anarchist Library: The Federative Principle
The Anarchist Library: Property Is Theft
Jacobin: Why Kautsky was Right
The New Yorker: Dreyfus Affair
The Jacobin: John Dewey
Marxists.org: Anarchism and Anarcho-Syndicalism
Spartacus Ed: Karl Kautsky
U.S. Bureau of Labor Statistics: FAQs

If you like the pod version of #UNFTR, make sure to check out the video version on YouTube where Max shows his beautiful face! www.youtube.com/@UNFTR
Please leave us a rating and review on Apple Podcasts: unftr.com/rate and follow us on Facebook, Twitter and Instagram at @UNFTRpod.
Visit us online at unftr.com.
Join the Unf*cker-run Facebook group: facebook.com/groups/2051537518349565
Buy yourself some Unf*cking Coffee® at shop.unftr.com.
Subscribe to Unf*cking The Republic® at unftr.com/blog to get the essays these episodes are framed around sent to your inbox every week.
Check out the UNFTR Pod Love playlist on Spotify: spoti.fi/3yzIlUP.
Visit our bookshop.org page at bookshop.org/shop/UNFTRpod to find the full UNFTR book list, and find book recommendations from our Unf*ckers at bookshop.org/lists/unf-cker-book-recommendations.
Access the UNFTR Musicless feed by following the instructions at unftr.com/accessibility.
Unf*cking the Republic® is produced by 99 and engineered by Manny Faces Media (mannyfacesmedia.com). Original music is by Tom McGovern (tommcgovern.com). The show is written and hosted by Max and distributed by 99.
Podcast art description: Image of the US Constitution ripped in the middle revealing white text on a blue background that says, "Unf*cking the Republic®."
Support the show: https://www.buymeacoffee.com/unftr
See omnystudio.com/listener for privacy information.

Philosophize This!
Episode #186 ... Are we heading for a digital prison? - Panopticon (Foucault, Bentham, Cave)

Philosophize This!

Play Episode Listen Later Aug 24, 2023 40:15


Today we talk about Jeremy Bentham's concept of the Panopticon, Michel Foucault's comparison of it to society in 1975, the historical role of intelligence as a justification for dominance, and the anatomy of free will, including how a digital world may systematically limit our free will without us knowing it.

Thank you to the sponsors of this episode:
LMNT - www.drinkLMNT.com/PHILO
Better Help - www.betterhelp.com/PHILTHIS

Get more:
Website: https://www.philosophizethis.org/
Patreon: https://www.patreon.com/philosophizethis
Philosophize This! Clips: https://www.youtube.com/@philosophizethisclips

Be social:
Twitter: https://twitter.com/iamstephenwest
Instagram: https://www.instagram.com/philosophizethispodcast
TikTok: https://www.tiktok.com/@philosophizethispodcast
Facebook: https://www.facebook.com/philosophizethisshow

Thank you for making the show possible.

FLF, LLC
Making Practical the Application of the Biblical Conception of Law [God, Law, and Liberty]

FLF, LLC

Play Episode Listen Later Jun 1, 2023 30:57


Today David uses apparently contradictory verses in Proverbs to demonstrate how easy it is to read the Bible according to the legal positivism of Jeremy Bentham. Then he applies the wisdom hidden in that apparent contradiction to how we should argue in federal court in support of the constitutionality of state laws prohibiting the application of transgender procedures to minors. Today’s episode may tell you if you’re really a legal positivist.

FLF, LLC
From God to Man: The Transformation of America's View of Law [God, Law, and Liberty]

FLF, LLC

Play Episode Listen Later May 26, 2023 27:31


Today David explains how America transitioned from the biblical conception of law espoused by Bracton and Blackstone from the 13th through the 18th centuries to Jeremy Bentham’s positivistic, utilitarian conception of law embraced by Oliver Wendell Holmes who changed the conception of law held today by most Christian lawyers. The applicability in our day of quotes from Holmes over a century ago will be eye-opening. To paraphrase Dorothy’s comment to Toto, you don’t live in a Christian cosmos anymore.

FLF, LLC
Do Christians Have a Biblical Conception of Law? [God, Law, and Liberty]

FLF, LLC

Play Episode Listen Later May 19, 2023 26:27


Today David uses the remarks of Dr. Jonathan Burnside, Professor of Biblical Law at a law school (believe it or not) in the United Kingdom, to demonstrate how the Christian’s conception of law may be shaped more by Jeremy Bentham than by the way the Bible presents law to us. It also explains why common law makes little sense to people today.