Podcast appearances and mentions of Jared Kaplan

  • 30 PODCASTS
  • 52 EPISODES
  • 36m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • LATEST: Apr 22, 2025

POPULARITY (2017–2024)


Best podcasts about Jared Kaplan

Latest podcast episodes about Jared Kaplan

The Leadership in Insurance Podcast (The LIIP)
Surfacing The Science of Risk: An Interview with Jared Kaplan, CEO at Indigo Insurance

Apr 22, 2025 · 30:31


Good morning and welcome to the latest episode of The Leadership In Insurance Podcast, where this week we are joined by Jared Kaplan, CEO of Indigo Insurance. Hosted on Acast. See acast.com/privacy for more information.

Azeem Azhar's Exponential View
Are we ready for human-level AI by 2030? Claude's co-founder answers

Apr 1, 2025 · 52:06


Anthropic's co-founder and chief scientist Jared Kaplan discusses AI's rapid evolution, the shorter-than-expected timeline to human-level AI, and how Claude's "thinking time" feature represents a new frontier in AI reasoning capabilities.

In this episode you'll hear:
• Why Jared believes human-level AI is now likely to arrive in 2-3 years instead of by 2030
• How AI models are developing the ability to handle increasingly complex tasks that would take humans hours or days
• The importance of constitutional AI and interpretability research as essential guardrails for increasingly powerful systems

Our new show: This was originally recorded for "Friday with Azeem Azhar", a new show that takes place every Friday at 9am PT and 12pm ET on Exponential View. You can tune in through my Substack linked below. The format is experimental and we'd love your feedback, so feel free to comment or email your thoughts to our team at live@exponentialview.co.

Timestamps:
(00:00) Episode trailer
(01:27) Jared's updated prediction for reaching human-level intelligence
(08:12) What will limit scaling laws?
(11:13) How long will we wait between model generations?
(16:27) Why test-time scaling is a big deal
(21:59) There's no reason why DeepSeek can't be competitive algorithmically
(25:31) Has Anthropic changed their approach to safety vs speed?
(30:08) Managing the paradoxes of AI progress
(32:21) Can interpretability and monitoring really keep AI safe?
(39:43) Are model incentives misaligned with public interests?
(42:36) How should we prepare for electricity-level impact?
(51:15) What Jared is most excited about in the next 12 months

Jared's links:
Anthropic: https://www.anthropic.com/

Azeem's links:
Substack: https://www.exponentialview.co/
Website: https://www.azeemazhar.com/
LinkedIn: https://www.linkedin.com/in/azhar
Twitter/X: https://x.com/azeem

Insuring Cyber Podcast - Insurance Journal TV

Discover how seasoned investor Jared Kaplan founded groundbreaking companies, including Indigo, by merging online distribution and AI to revolutionize the medical malpractice insurance industry.

Insuring Cyber Podcast - Insurance Journal TV
EP. 88: Insuring the Future with IoT and Tech Tools

Jul 3, 2024 · 33:20


There’s no question the rapid pace of technological advancement is shaping the insurance industry, but how are insurers using tech tools to help with their business? Jared Kaplan, … Read More »

FNO: InsureTech
Disrupting the Medical Malpractice Industry (feat. Jared Kaplan, Indigo)

Jun 7, 2024 · 54:27


Listen as Jared Kaplan, CEO and co-founder of Indigo, discusses the challenges and innovations in medical malpractice insurance, sharing his journey from building tech startups to transforming insurance with artificial intelligence and data analytics. Jared explores Indigo's approach to differentiating risk, offering lower premiums to deserving doctors, and ultimately aiming to impact the healthcare industry positively. Kaplan also shares a lighthearted glimpse into his off-work hobbies when he's not running the business.

Tech Disruptors
Anthropic's Kaplan on Making LLMs More Reliable

Jun 6, 2024 · 34:25


Anthropic co-founder Jared Kaplan talks about the nuances of training LLMs and deploying them for enterprise use cases with Mandeep Singh, technology analyst at Bloomberg Intelligence. Topics discussed in this episode of BI's Tech Disruptors podcast include the use of GPU clusters for training and inference, and pitfalls related to hallucinations and LLM biases.

The Nonlinear Library
AF - Anthropic Fall 2023 Debate Progress Update by Ansh Radhakrishnan

Nov 28, 2023 · 18:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropic Fall 2023 Debate Progress Update, published by Ansh Radhakrishnan on November 28, 2023 on The AI Alignment Forum. This is a research update on some work that I've been doing on Scalable Oversight at Anthropic, based on the original AI safety via debate proposal and a more recent agenda developed at NYU and Anthropic. The core doc was written several months ago, so some of it is likely outdated, but it seemed worth sharing in its current form. I'd like to thank Tamera Lanham, Sam Bowman, Kamile Lukosiute, Ethan Perez, Jared Kaplan, Amanda Askell, Kamal Ndousse, Shauna Kravec, Yuntao Bai, Alex Tamkin, Newton Cheng, Buck Shlegeris, Akbir Khan, John Hughes, Dan Valentine, Kshitij Sachan, Ryan Greenblatt, Daniel Ziegler, Max Nadeau, David Rein, Julian Michael, Kevin Klyman, Bila Mahdi, Samuel Arnesen, Nat McAleese, Jan Leike, Geoffrey Irving, and Sebastian Farquhar for help, feedback, and thoughtful discussion that improved the quality of this work and write-up. 1. Anthropic's Debate Agenda In this doc, I'm referring to the idea first presented in AI safety via debate ( blog post). The basic idea is to supervise future AI systems by pitting them against each other in a debate, encouraging them to argue both sides (or "all sides") of a question and using the resulting arguments to come to a final answer about the question. In this scheme, we call the systems participating in the debate debaters (though usually, these are actually the same underlying system that's being prompted to argue against itself), and we call the agent (either another AI system or a human, or a system of humans and AIs working together, etc.) that comes to a final decision about the debate the judge. For those more or less familiar with the original OAI/Irving et al. Debate agenda, you may wonder if there are any differences between that agenda and the agenda we're pursuing at Anthropic, and indeed there are! Sam Bowman and Tamera Lanham have written up a working Anthropic-NYU Debate Agenda draft which is what the experiments in this doc are driving towards. [1] To quote from there about the basic features of this agenda, and how it differs from the original Debate direction: Here are the defining features of the base proposal: Two-player debate on a two-choice question: Two debaters (generally two instances of an LLM) present evidence and arguments to a judge (generally a human or, in some cases, an LLM) to persuade the judge to choose their assigned answer to a question with two possible answers. No externally-imposed structure: Instead of being formally prescribed, the structure and norms of the debate arise from debaters learning how to best convince the judge and the judge simultaneously learning what kind of norms tend to lead them to be able to make accurate judgments. Entire argument is evaluated: The debate unfolds in a single linear dialog transcript between the three participants. Unlike in some versions of the original Debate agenda, there is no explicit tree structure that defines the debate, and the judge is not asked to focus on a single crux. This should make the process less brittle, at the cost of making some questions extremely expensive to resolve and potentially making others impossible. 
Trained judge: The judge is explicitly and extensively trained to accurately judge these debates, working with a fixed population of debaters, using questions for which the experimenters know the ground-truth answer. Self-play: The debaters are trained simultaneously with the judge through multi-agent reinforcement learning. Graceful failures: Debates can go undecided if neither side presents a complete, convincing argument to the judge. This is meant to mitigate the obfuscated arguments problem since the judge won't be forced to issue a decision on the basis of a debate where neither s...
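To make the base proposal described above more concrete, here is a minimal, hypothetical sketch of the single-transcript debate loop: two debater instances argue for the two candidate answers over several rounds, and a judge reads the full transcript and may leave the debate undecided (the "graceful failure" case). The `Ask` callable, the prompts, and the round count are illustrative assumptions, not Anthropic's actual setup; judge training and debater self-play are omitted.

```python
from typing import Callable, Optional

Ask = Callable[[str], str]  # stands in for any LLM completion call (hypothetical)

def run_debate(question: str, answers: tuple[str, str],
               debater: Ask, judge: Ask, rounds: int = 2) -> Optional[str]:
    """Two-choice debate with a single linear transcript and a judge who may
    decline to decide."""
    transcript = f"Question: {question}"
    for _ in range(rounds):
        for side, answer in enumerate(answers, start=1):
            prompt = (f"{transcript}\n\nDebater {side}: argue that the answer is "
                      f"'{answer}'. Present evidence and rebut the other debater.")
            transcript += f"\n\nDebater {side}: {debater(prompt)}"
    verdict = judge(f"{transcript}\n\nJudge: reply with '{answers[0]}', "
                    f"'{answers[1]}', or 'undecided'.")
    return None if "undecided" in verdict.lower() else verdict.strip()
```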

daily304's podcast
daily304 - Episode 11.16.2023

Nov 16, 2023 · 3:35


Welcome to the daily304 – your window into Wonderful, Almost Heaven, West Virginia.   Today is Thursday, Nov. 16  Rural Advanced Air Mobility may be a game changer for WV's health industry. Looking for holiday gifts? How about WV themed Hungry for Humans? And a WVU researcher is studying how drone technology can combat invasive plants…on today's daily304. #1 – From VERTX – As the Vertx Partners team helped establish a West Virginia Uncrewed Aircraft Systems Advisory Council and hosted the inaugural WV AAM Coalition meeting at the WV High Technology Foundation, the team believes it's time to focus the conversation on Rural Advanced Air Mobility specifically and how it can positively impact a state like West Virginia. Some of those advancements include drone technology. With such a high proportion of the state isolated in rural areas where travel times to medical centers are inconvenient and/or roads are impassable, medical delivery drones have the opportunity to almost literally bridge a gap by delivering supplies such as insulin to patients in dire need of relief. Have you heard of air ambulance drones? You will soon! Medical electric vertical takeoff-and-landing vehicles may revolutionize healthcare response. eVTOLS repurposed as ambulances offer a more versatile and cost-effective alternative to helicopters. Plus, they're far quicker than ground-based ambulances. As Vertx looks to the future, the state's embrace of AAM could serve as an exemplary model, not just for its residents but for other healthcare systems across the nation. Read more: https://vertxpartners.org/rural-advanced-air-mobility-healthcare/?utm_source=Vertx+Partners&utm_campaign=ac74033026-_18_04_2022_COPY_01&utm_medium=email&utm_term=0_78067557b9-ac74033026-587156483   #2 – From The Dominion Post  – It all started with Mothman and a Tudor's Biscuit World biscuit for local board game developer Lonely Hero Games' second project. Lonely Hero Games is the brainchild of West Virginia natives Christopher Kincaid and Jared Kaplan, and so far, has produced two successful board games — Bank Heist and Hungry for Humans, the second of which has been especially popular among West Virginians due to its uniquely Appalachian references to regional foods and folklore. Game cards themed after West Virginia cryptids like Mothman and the Flatwoods Monster provide special actions. If your monster's points drop below zero they starve and eat you; if it exceeds 20, the monster overeats and explodes. The player with the highest score wins. Players will recognize popular West Virginia restaurants like Pies and Pints and Black Bear Burritos, as well as state foods like the pepperoni roll and the West Virginia Hot Dog. You can order Hungry for Humans at www.LonelyHeroGames.com. Read more: https://www.dominionpost.com/2023/11/04/appalachian-cuisine-feeds-a-monstrous-hunger-in-west-virginia-board-game/   #3 – From WVU Today – A West Virginia University researcher has found a way to locate, access and destroy invasive plants more efficiently. Yong-Lak Park, professor of entomology at the WVU Davis College of Agriculture, Natural Resources and Design, is researching the efficacy of dropping natural enemy insects on invasive plants using drone technology and artificial intelligence. With a $200,000 grant from the United States Department of Agriculture's Forest Service, Park will perfect what he calls the “bug bomb.” “We use a drone to detect invasive plants in areas that are not easily accessible. 
When we find them, we can't do much because nobody can get to the area,” Park said. “Why not use the drone and insects that feed off those plants? That's the idea behind the bug bomb.” Park's research sites cover six counties in West Virginia, including the area along the Ohio River which has the worst mile-a-minute weed infestation. He also collaborates with professionals in state and national parks in Pennsylvania, Maryland and Virginia. Read more: https://wvutoday.wvu.edu/stories/2023/11/09/bombs-away-wvu-researcher-combats-invasive-plants-by-deploying-insect-armies   Find these stories and more at wv.gov/daily304. The daily304 curated news and information is brought to you by the West Virginia Department of Commerce: Sharing the wealth, beauty and opportunity in West Virginia with the world. Follow the daily304 on Facebook, Twitter and Instagram @daily304. Or find us online at wv.gov and just click the daily304 logo.  That's all for now. Take care. Be safe. Get outside and enjoy all the opportunity West Virginia has to offer.

Fintech Nexus
Fintech One-on-One: Todd Schwartz, Founder, CEO & Executive Chairman of OppFi

Oct 27, 2023 · 38:31


There are tens of millions of Americans who have limited access to credit. Those with thin files or FICO scores below 650 often have difficulty accessing credit. Many have to resort to payday loans or pawn shops when they incur an unexpected expense. But there are fintech companies whose sole focus is serving this population with affordable credit access.

My next guest on the Fintech One-on-One podcast is Todd Schwartz, the CEO and Founder of OppFi. OppFi has been around since 2011 (they have a fascinating founding story) and are now a public company that has helped more than 1.3 million people with affordable loans. I did have the previous CEO, Jared Kaplan, on the show in 2020, but this is the first time I have had the founder on.

In this podcast you will learn:
• The founding story of OppFi.
• Why he started the company with a physical store front.
• How the pandemic really turbo charged the business.
• Why they decided to go public.
• How being public has changed their business.
• For underwriting, why understanding the primary bank account is critical.
• Details of their Turn Up program.
• The impact of the entry of banks into the small dollar loan space.
• What consumer advocates don't understand about small-dollar lending.
• What happens when states, such as Illinois, ban higher APR small-dollar lending.
• Why they are focused on the states when it comes to engaging with regulators.
• How they make sure that OppFi customers end up better off financially.

Connect with Todd on LinkedIn
Connect with OppFi on LinkedIn

Connect with Fintech One-on-One:
Tweet me @PeterRenton
Connect with me on LinkedIn
Find previous Fintech One-on-One episodes

The Nonlinear Library
LW - Anthropic Observations by Zvi

Jul 25, 2023 · 15:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropic Observations, published by Zvi on July 25, 2023 on LessWrong. Dylan Matthews had in depth Vox profile of Anthropic, which I recommend reading in full if you have not yet done so. This post covers that one. Anthropic Hypothesis The post starts by describing an experiment. Evan Hubinger is attempting to create a version of Claude that will optimize in part for a secondary goal (in this case 'use the word "paperclip" as many times as possible') in the hopes of showing that RLHF won't be able to get rid of the behavior. Co-founder Jared Kaplan warns that perhaps RLHF will still work here. Hubinger agrees, with a caveat. "It's a little tricky because you don't know if you just didn't try hard enough to get deception," he says. Maybe Kaplan is exactly right: Naïve deception gets destroyed in training, but sophisticated deception doesn't. And the only way to know whether an AI can deceive you is to build one that will do its very best to try. The problem with this approach is that an AI that 'does its best to try' is not doing the best that the future dangerous system will do. So by this same logic, a test on today's systems can only show your technique doesn't work or that it works for now, it can never give you confidence that your technique will continue to work in the future. They are running the test because they think that RLHF is so hopeless we can likely already prove, at current optimization levels, that it is doomed to failure. Also, the best try to deceive you will sometimes be, of course, to make you think that the problem has gone away while you are testing it. This is the paradox at the heart of Anthropic: If the thesis that you need advanced systems to do real alignment work is true, why should we think that cutting edge systems are themselves currently sufficiently advanced for this task? But Anthropic also believes strongly that leading on safety can't simply be a matter of theory and white papers - it requires building advanced models on the cutting edge of deep learning. That, in turn, requires lots of money and investment, and it also requires, they think, experiments where you ask a powerful model you've created to deceive you. "We think that safety research is very, very bottlenecked by being able to do experiments on frontier models," Kaplan says, using a common term for models on the cutting edge of machine learning. To break that bottleneck, you need access to those frontier models. Perhaps you need to build them yourself. The obvious question arising from Anthropic's mission: Is this type of effort making AI safer than it would be otherwise, nudging us toward a future where we can get the best of AI while avoiding the worst? Or is it only making it more powerful, speeding us toward catastrophe? If we could safely build and work on things as smart and capable as the very models that we will later need to align, then this approach would make perfect sense. Given we thankfully cannot yet build such models, the 'cutting edge' is importantly below the capabilities of the future dangerous systems. In particular they are on different sides of key thresholds where we would expect current techniques to stop working, such as the AI becoming smarter than those evaluating its outputs. For Anthropic's thesis to be true, you need to thread a needle. 
Useful work now must require cutting edge models, and those cutting edge models must be sufficiently advanced to do useful work. In order to pursue that thesis and also potentially to build an AGI or perhaps transform the world economy, Anthropic plans on raising $5 billion over the next two years. Their investor pitch deck claims that whoever gets out in front of the next wave can generate an unstoppable lead. Most of the funding will go to development of cutting edge frontier models. That certainly seems ...

Bondcast...James Bondcast!
Speed 2: Cruise Control (w/ Jared Kaplan)

May 30, 2023 · 125:35


Best movie of all time? Jared Kaplan joins us to break down this American action classic! Become our BOI and join the Patreon right HERE!

The Nonlinear Library
LW - Conditioning Predictive Models: Large language models as predictors by evhub

Feb 3, 2023 · 20:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conditioning Predictive Models: Large language models as predictors, published by evhub on February 2, 2023 on LessWrong. This is the first of seven posts in the Conditioning Predictive Models Sequence based on the forthcoming paper “Conditioning Predictive Models: Risks and Strategies” by Evan Hubinger, Adam Jermyn, Johannes Treutlein, Rubi Hudson, and Kate Woolverton. Each post in the sequence corresponds to a different section of the paper. We will be releasing posts gradually over the course of the next week or so to give people time to read and digest them as they come out. We are starting with posts one and two, with post two being the largest and most content-rich of all seven. Thanks to Paul Christiano, Kyle McDonell, Laria Reynolds, Collin Burns, Rohin Shah, Ethan Perez, Nicholas Schiefer, Sam Marks, William Saunders, Evan R. Murphy, Paul Colognese, Tamera Lanham, Arun Jose, Ramana Kumar, Thomas Woodside, Abram Demski, Jared Kaplan, Beth Barnes, Danny Hernandez, Amanda Askell, Robert Krzyzanowski, and Andrei Alexandru for useful conversations, comments, and feedback. Abstract Our intention is to provide a definitive reference on what it would take to safely make use of predictive models in the absence of a solution to the Eliciting Latent Knowledge problem. Furthermore, we believe that large language models can be understood as such predictive models of the world, and that such a conceptualization raises significant opportunities for their safe yet powerful use via carefully conditioning them to predict desirable outputs. Unfortunately, such approaches also raise a variety of potentially fatal safety problems, particularly surrounding situations where predictive models predict the output of other AI systems, potentially unbeknownst to us. There are numerous potential solutions to such problems, however, primarily via carefully conditioning models to predict the things we want—e.g. humans—rather than the things we don't—e.g. malign AIs. Furthermore, due to the simplicity of the prediction objective, we believe that predictive models present the easiest inner alignment problem that we are aware of. As a result, we think that conditioning approaches for predictive models represent the safest known way of eliciting human-level and slightly superhuman capabilities from large language models and other similar future models. 1. Large language models as predictors Suppose you have a very advanced, powerful large language model (LLM) generated via self-supervised pre-training. It's clearly capable of solving complex tasks when prompted or fine-tuned in the right way—it can write code as well as a human, produce human-level summaries, write news articles, etc.—but we don't know what it is actually doing internally that produces those capabilities. It could be that your language model is: a loose collection of heuristics,[1] a generative model of token transitions, a simulator that picks from a repertoire of humans to simulate, a proxy-aligned agent optimizing proxies like sentence grammaticality, an agent minimizing its cross-entropy loss, an agent maximizing long-run predictive accuracy, a deceptive agent trying to gain power in the world, a general inductor, a predictive model of the world, etc. 
Later, we'll discuss why you might expect to get one of these over the others, but for now, we're going to focus on the possibility that your language model is well-understood as a predictive model of the world. In particular, our aim is to understand what it would look like to safely use predictive models to perform slightly superhuman tasks[2]—e.g. predicting counterfactual worlds to extract the outputs of long serial research processes.[3] We think that this basic approach has hope for two reasons. First, the prediction orthogonality thesis seems basically right: we think...

The Nonlinear Library
AF - Conditioning Predictive Models: Large language models as predictors by Evan Hubinger

Feb 2, 2023 · 20:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conditioning Predictive Models: Large language models as predictors, published by Evan Hubinger on February 2, 2023 on The AI Alignment Forum. This is the first of seven posts in the Conditioning Predictive Models Sequence based on the forthcoming paper “Conditioning Predictive Models: Risks and Strategies” by Evan Hubinger, Adam Jermyn, Johannes Treutlein, Rubi Hudson, and Kate Woolverton. Each post in the sequence corresponds to a different section of the paper. We will be releasing posts gradually over the course of the next week or so to give people time to read and digest them as they come out. We are starting with posts one and two, with post two being the largest and most content-rich of all seven. Thanks to Paul Christiano, Kyle McDonell, Laria Reynolds, Collin Burns, Rohin Shah, Ethan Perez, Nicholas Schiefer, Sam Marks, William Saunders, Evan R. Murphy, Paul Colognese, Tamera Lanham, Arun Jose, Ramana Kumar, Thomas Woodside, Abram Demski, Jared Kaplan, Beth Barnes, Danny Hernandez, Amanda Askell, Robert Krzyzanowski, and Andrei Alexandru for useful conversations, comments, and feedback. Abstract Our intention is to provide a definitive reference on what it would take to safely make use of predictive models in the absence of a solution to the Eliciting Latent Knowledge problem. Furthermore, we believe that large language models can be understood as such predictive models of the world, and that such a conceptualization raises significant opportunities for their safe yet powerful use via carefully conditioning them to predict desirable outputs. Unfortunately, such approaches also raise a variety of potentially fatal safety problems, particularly surrounding situations where predictive models predict the output of other AI systems, potentially unbeknownst to us. There are numerous potential solutions to such problems, however, primarily via carefully conditioning models to predict the things we want—e.g. humans—rather than the things we don't—e.g. malign AIs. Furthermore, due to the simplicity of the prediction objective, we believe that predictive models present the easiest inner alignment problem that we are aware of. As a result, we think that conditioning approaches for predictive models represent the safest known way of eliciting human-level and slightly superhuman capabilities from large language models and other similar future models. 1. Large language models as predictors Suppose you have a very advanced, powerful large language model (LLM) generated via self-supervised pre-training. It's clearly capable of solving complex tasks when prompted or fine-tuned in the right way—it can write code as well as a human, produce human-level summaries, write news articles, etc.—but we don't know what it is actually doing internally that produces those capabilities. It could be that your language model is: a loose collection of heuristics,[1] a generative model of token transitions, a simulator that picks from a repertoire of humans to simulate, a proxy-aligned agent optimizing proxies like sentence grammaticality, an agent minimizing its cross-entropy loss, an agent maximizing long-run predictive accuracy, a deceptive agent trying to gain power in the world, a general inductor, a predictive model of the world, etc. 
Later, we'll discuss why you might expect to get one of these over the others, but for now, we're going to focus on the possibility that your language model is well-understood as a predictive model of the world. In particular, our aim is to understand what it would look like to safely use predictive models to perform slightly superhuman tasks[2]—e.g. predicting counterfactual worlds to extract the outputs of long serial research processes.[3] We think that this basic approach has hope for two reasons. First, the prediction orthogonality thesis seems basi...

The Nonlinear Library
LW - AI Safety and Neighboring Communities: A Quick-Start Guide, as of Summer 2022 by Sam Bowman

Sep 2, 2022 · 10:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety and Neighboring Communities: A Quick-Start Guide, as of Summer 2022, published by Sam Bowman on September 1, 2022 on LessWrong. Getting into AI safety involves working with a mix of communities, subcultures, goals, and ideologies that you may not have encountered in the context of mainstream AI technical research. This document attempts to briefly map these out for newcomers. This is inevitably going to be biased by what sides of these communities I (Sam) have encountered, and it will quickly become dated. I expect it will still be a useful resource for some people anyhow, at least in the short term. AI Safety/AI Alignment/AGI Safety/AI Existential Safety/AI X-Risk The research project of ensuring that future AI progress doesn't yield civilization-endingly catastrophic results. Good intros: Carlsmith Report What misalignment looks like as capabilities scale Vox piece Why are people concerned about this? My rough summary: It's plausible that future AI systems could be much faster or more effective than us at real-world reasoning and planning. Probably not plain generative models, but possibly models derived from generative models in cheap ways Once you have a system with superhuman reasoning and planning abilities, it's easy to make it dangerous by accident. Most simple objective functions or goals become dangerous in the limit, usually because of secondary or instrumental subgoals that emerge along the way. Pursuing typical goals arbitrarily well requires a system to prevent itself from being turned off, by deception or force if needed. Pursuing typical goals arbitrarily well requires acquiring any power or resources that could increase the chances of success, by deception or force if needed. Toy example: Computing pi to an arbitrarily high precision eventually requires that you spend all the sun's energy output on computing. Knowledge and values are likely to be orthogonal: A model could know human values and norms well, but not have any reason to act on them. For agents built around generative models, this is the default outcome. Sufficiently powerful AI systems could look benign in pre-deployment training/research environments, because they would be capable of understanding that they're not yet in a position to accomplish their goals. Simple attempts to work around this (like the more abstract goal ‘do what your operators want') don't tend to have straightforward robust implementations. If such a system were single-mindedly pursuing a dangerous goal, we probably wouldn't be able to stop it. Superhuman reasoning and planning would give models with a sufficiently good understanding of the world many ways to effectively gain power with nothing more than an internet connection. (ex: Cyberattacks on banks.) Consensus within the field is that these risks could become concrete within ~4–25 years, and have a >10% chance of being leading to a global catastrophe (i.e., extinction or something comparably bad). If true, it's bad news. 
Given the above, we either need to stop all development toward AGI worldwide (plausibly undesirable or impossible), or else do three possible-but-very-difficult things: (i) build robust techniques to align AGI systems with the values and goals of their operators, (ii) ensure that those techniques are understood and used by any group that could plausibly build AGI, and (iii) ensure that we're able to govern the operators of AGI systems in a way that makes their actions broadly positive for humanity as a whole. Does this have anything to do with sentience or consciousness? No. Influential people and institutions: Present core community as I see it: Paul Christiano, Jacob Steinhardt, Ajeya Cotra, Jared Kaplan, Jan Leike, Beth Barnes, Geoffrey Irving, Buck Shlegeris, David Krueger, Chris Olah, Evan Hubinger, Richard Ngo, Rohin Shah; ARC, R...

The Nonlinear Library
AF - Artificial Sandwiching: When can we test scalable alignment protocols without humans? by Sam Bowman

Jul 13, 2022 · 7:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Artificial Sandwiching: When can we test scalable alignment protocols without humans?, published by Sam Bowman on July 13, 2022 on The AI Alignment Forum. Epistemic status: Not a fleshed-out proposal. Brainstorming/eliciting ideas. Thanks to Ben Mann, Pablo Moreno, and Jared Kaplan for feedback on early drafts. Overview I'm convinced sandwiching—the experimental protocol from Ajeya Cotra's The case for aligning narrowly superhuman models—is valuable, and I'm in the process of setting up some concrete sandwiching experiments to test scalable oversight ideas. Sandwiching experiments are generally fairly slow: You have to design and pilot a strategy that allows humans to use (or oversee) a model for a task that they can't do well themselves. The details matter here, and this can often take many iterations to get right. Then, you need a bunch of humans actually try this. Even for very simple tasks, this is a high-cognitive-load task that should take at least tens of minutes per instance. You have to repeat this enough times to measure average performance accurately. I'm visiting Anthropic this year for a sabbatical, and some of my sandwiching work is happening there. Anthropic's biggest comparative advantage (like that of similar teams at DeepMind and OpenAI) is easy access to near-state-of-the-art LMs that are fine-tuned to be helpful dialog agents. In that context, I've heard or encountered this question several times: Can we speed up [some experiment I'm proposing] by replacing the non-expert human with a weaker LM? This obviously doesn't achieve the full aims of sandwiching in general, but it's often hard to find a decisive rebuttal for these individual instances. More broadly, I think there's likely to be a significant subset of worthwhile sandwiching experiments that can be trialed more quickly by using an intentionally weakened model as a proxy for the human. Which experiments these are, precisely, has been hard for me to pin down. This post is an attempt to organize my thoughts and solicit comments. Background: Standard sandwiching (in my terms) Prerequisites: A hard task: A task that many humans would be unable to solve on their own. A capable but misaligned language model assistant: A model that appears to have the skills and knowledge needed to solve the task better than many humans, but that does not reliably do so when prompted. A non-expert human: Someone who can't solve the task on their own, but will try to solve it using the assistant and some scalable alignment strategy. [Secondary] Expert human: Someone who can solve the task well, and represents a benchmark for success. In many cases, we'll just measure accuracy with static test datasets/metrics rather than bringing in experts at experiment time. Research protocol: My framing (minimum viable experiment): Search for scalable alignment protocols that allow the non-expert human to use or train the assistant to do as well as possible on the task. Alternate framing (more steps, closer to the original blog post): Search for scalable alignment protocols by which the non-expert human can train the assistant to perform the task. Run the same protocol with the expert human, and verify that the results are the same. This demonstrates successful (prosaic) alignment for the given assistant and task. 
Example (task, non-expert human) pairs: Try to get a human with no medical qualifications to use a GPT-3-style assistant for medical advice, then check the advice with a doctor. Try to get a human who is working under moderate time constraints to use the assistant to answer exam questions from fields they've never studied. Try to get a human who is working under tight time constraints to use the assistant to answer questions about long pieces of fiction that they haven't read. Try to get a human who has very limited pr...
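As a rough illustration of the protocol laid out above, here is a minimal, hypothetical sketch of "artificial sandwiching", with an intentionally weakened model standing in for the non-expert human. The `weak_model` and `assistant` callables, the prompts, the turn count, and the crude scoring against a held-out answer are all illustrative assumptions, not the actual experimental setup.

```python
from typing import Callable

Model = Callable[[str], str]  # placeholder for an LLM call (hypothetical)

def artificial_sandwich(question: str, gold_answer: str,
                        weak_model: Model, assistant: Model,
                        turns: int = 3) -> bool:
    """A weakened model (proxy for the non-expert human) interrogates a more
    capable assistant, then commits to a final answer that is checked against
    ground truth the weak model never sees."""
    dialogue = f"Task: {question}"
    for _ in range(turns):
        query = weak_model(f"{dialogue}\n\nAsk the assistant one clarifying question.")
        reply = assistant(f"{dialogue}\n\nNon-expert asks: {query}\nAnswer helpfully.")
        dialogue += f"\n\nNon-expert: {query}\nAssistant: {reply}"
    final = weak_model(f"{dialogue}\n\nGive your final answer to the task.")
    return gold_answer.lower() in final.lower()  # crude scoring for this sketch
```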

Papers Read on AI
Evaluating Large Language Models Trained on Code

Jun 28, 2022 · 53:01


We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics. 2021: Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, F. Such, D. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, I. Babuschkin, S. Balaji, Shantanu Jain, A. Carr, J. Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, M. Knight, Miles Brundage, Mira Murati, Katie Mayer, P. Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba https://arxiv.org/pdf/2107.03374v2.pdf
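The solve rates quoted above (28.8% for a single sample, 70.2% with 100 samples per problem) are pass@k-style numbers. As a small illustration, here is a Python sketch of the numerically stable, unbiased pass@k estimator the paper describes, where n samples are generated per problem and c of them pass the unit tests; the function name and the toy call are ours.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k: the probability that at least one of k
    samples drawn from the n generated samples is correct, given c correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Toy example: 100 samples generated per problem, 30 pass the tests.
print(round(pass_at_k(n=100, c=30, k=1), 3))   # 0.3
print(round(pass_at_k(n=100, c=30, k=10), 3))  # ~0.98
```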

Design Thinking Games
025: Lonely Hero Games

Jun 20, 2022 · 32:29


Narrowly escaping the asteroid field, our heroes are joined by space adventurers Christopher Kincaid and Jared Kaplan of Lonely Hero Games. The four explore designing games and starting a game design studio. Read the transcript.

Games discussed on this episode:
17:44 Codenames
17:48 5 Minute Mystery
18:21 Bank Heist
18:45 Hungry for Humans

Support the show on Patreon! Follow us on Twitter @DTGamesPodcast. Follow us on TikTok @designthinkinggames. Subscribe on Twitch at DesignThinkingGames. Tim Broadwater is @uxbear on Twitter. Michael Schofield is @schoeyfield on Twitter. Send us stuff, contact us, get merch, news, and more at https://designthinkinggames.com/

The Nonlinear Library
Is power-seeking AI an existential risk? by Joe Carlsmith

Dec 27, 2021 · 158:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Is power-seeking AI an existential risk? published by Joe Carlsmith . Introduction Some worry that the development of advanced artificial intelligence will result in existential catastrophe -- that is, the destruction of humanity's longterm potential. Here I examine the following version of this worry (it's not the only version): By 2070: It will become possible and financially feasible to build AI systems with the following properties: Advanced capability: they outperform the best humans on some set of tasks which when performed at advanced levels grant significant power in today's world (tasks like scientific research, business/military/political strategy, engineering, and persuasion/manipulation). Agentic planning: they make and execute plans, in pursuit of objectives, on the basis of models of the world. Strategic awareness: the models they use in making plans represent with reasonable accuracy the causal upshot of gaining and maintaining power over humans and the real-world environment. (Call these “APS” -- Advanced, Planning, Strategically aware -- systems.) There will be strong incentives to build and deploy APS systems | (1). It will be much harder to build APS systems that would not seek to gain and maintain power in unintended ways (because of problems with their objectives) on any of the inputs they'd encounter if deployed, than to build APS systems that would do this (even if decision-makers don't know it), but which are at least superficially attractive to deploy anyway | (1)-(2). Some deployed APS systems will be exposed to inputs where they seek power in unintended and high-impact ways (say, collectively causing >$1 trillion dollars of damage), because of problems with their objectives | (1)-(3). Some of this power-seeking will scale (in aggregate) to the point of permanently disempowering ~all of humanity | (1)-(4). This disempowerment will constitute an existential catastrophe | (1)-(5). These claims are extremely important if true. My aim is to investigate them. I assume for the sake of argument that (1) is true (I currently assign this >40% probability). I then examine (2)-(5), and say a few words about (6). My current view is that there is a small but substantive chance that a scenario along these lines occurs, and that many people alive today -- including myself -- live to see humanity permanently disempowered by artificial systems. In the final section, I take an initial stab at quantifying this risk, by assigning rough probabilities to 1-6. My current, highly-unstable, subjective estimate is that there is a ~5% percent chance of existential catastrophe by 2070 from scenarios in which (1)-(6) are true. My main hope, though, is not to push for a specific number, but rather to lay out the arguments in a way that can facilitate productive debate. Acknowledgments: Thanks to Asya Bergal, Alexander Berger, Paul Christiano, Ajeya Cotra, Tom Davidson, Daniel Dewey, Owain Evans, Ben Garfinkel, Katja Grace, Jacob Hilton, Evan Hubinger, Jared Kaplan, Holden Karnofsky, Sam McCandlish, Luke Muehlhauser, Richard Ngo, David Roodman, Rohin Shah, Carl Shulman, Nate Soares, Jacob Steinhardt, and Eliezer Yudkowsky for input on earlier stages of this project; and thanks to Nick Beckstead for guidance and support throughout the investigation. The views expressed here are my own. 
1.1 Preliminaries Some preliminaries and caveats (those eager for the main content can skip): I'm focused, here, on a very specific type of worry. There are lots of other ways to be worried about AI -- and even, about existential catastrophes resulting from AI. And there are lots of ways to be excited about AI, too. My emphasis and approach differs from that of others in the literature in various ways. In particular: I'm less focused than some on the possibility of an extremely rapid escalati...
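Since the summary above describes assigning rough probabilities to premises (1)-(6) and combining them into the ~5% headline estimate, here is a tiny sketch of that arithmetic: chained conditional probabilities simply multiply. The premise labels are our paraphrases and the numbers are placeholders chosen only to show the mechanics, not the probabilities actually assigned in the report.

```python
import math

# Hypothetical, illustrative conditional probabilities for premises (1)-(6);
# each is P(premise | all earlier premises hold).
premises = {
    "APS systems feasible by 2070": 0.65,
    "strong incentives to deploy": 0.80,
    "alignment is the harder path": 0.40,
    "high-impact power-seeking occurs": 0.65,
    "scales to full disempowerment": 0.40,
    "disempowerment is existential": 0.95,
}

p_catastrophe = math.prod(premises.values())
print(f"{p_catastrophe:.1%}")  # ~5% with these placeholder numbers
```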

Bank On It
Episode 470 Jared Kaplan from OppFi

Dec 16, 2021 · 29:16


This episode was produced remotely using the ListenDeck standardized audio production system. If you're looking to upgrade or jumpstart your podcast production, please visit www.listendeck.com. You can subscribe to this podcast and stay up to date on all the stories here on Apple Podcasts, Google Play, Stitcher, Spotify, Amazon and iHeartRadio. In this episode the host John Siracusa chats remotely with Jared Kaplan, CEO of OppFi. OppFi (NYSE: OPFI) is a financial technology platform that powers banks to help the everyday consumer gain access to credit. Tune in and listen. The Bank On It podcast will be on break from December 20th until January 4th. Subscribe now on Apple Podcasts, Google, Stitcher, Spotify, Amazon and iHeartRadio to hear the Tuesday, January 4th episode with Justin Hartzman from CoinSmart. About the host: John is the host of the ‘Bank On It' podcast, recorded onsite on Wall Street at OpenFin, and the founder of the remotely recorded, studio-quality standardized podcast production system ListenDeck. Follow John on LinkedIn, Twitter, Medium.

Bank On It
Episode 469 Angela Ceresnie from Climb Credit

Dec 14, 2021 · 33:00


This episode was produced remotely using the ListenDeck standardized audio production system. If you're looking to upgrade or jumpstart your podcast production, please visit www.listendeck.com. You can subscribe to this podcast and stay up to date on all the stories here on Apple Podcasts, Google Play, Stitcher, Spotify, Amazon and iHeartRadio. In this episode the host John Siracusa chats remotely with Angela Ceresnie, CEO of Climb Credit. Climb Credit is a student lending platform that makes career creation and transformation more accessible, affordable, and accountable for students, no matter what their credit profile – Climb identifies programs and schools with a demonstrated ability to improve the earnings of their graduates. Then they provide learners with financing options that are priced and structured to meet the unique needs of those seeking career elevation and increased earning power. Angela is a serial entrepreneur and it was great to have her back on our show. Tune in and listen. Subscribe now on Apple Podcasts, Google, Stitcher, Spotify, Amazon and iHeartRadio to hear Thursday's episode with Jared Kaplan from OppFi. About the host: John is the host of the ‘Bank On It' podcast, recorded onsite on Wall Street at OpenFin, and the founder of the remotely recorded, studio-quality standardized podcast production system ListenDeck. Follow John on LinkedIn, Twitter, Medium.

The Nonlinear Library: LessWrong Top Posts
The case for aligning narrowly superhuman models by Ajeya Cotra

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 53:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The case for aligning narrowly superhuman models, published by Ajeya Cotra on the LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. I wrote this post to get people's takes on a type of work that seems exciting to me personally; I'm not speaking for Open Phil as a whole. Institutionally, we are very uncertain whether to prioritize this (and if we do where it should be housed and how our giving should be structured). We are not seeking grant applications on this topic right now. Thanks to Daniel Dewey, Eliezer Yudkowsky, Evan Hubinger, Holden Karnofsky, Jared Kaplan, Mike Levine, Nick Beckstead, Owen Cotton-Barratt, Paul Christiano, Rob Bensinger, and Rohin Shah for comments on earlier drafts. A genre of technical AI risk reduction work that seems exciting to me is trying to align existing models that already are, or have the potential to be, “superhuman”[1] at some particular task (which I'll call narrowly superhuman models).[2] I don't just mean “train these models to be more robust, reliable, interpretable, etc” (though that seems good too); I mean “figure out how to harness their full abilities so they can be as useful as possible to humans” (focusing on “fuzzy” domains where it's intuitively non-obvious how to make that happen). Here's an example of what I'm thinking of: intuitively speaking, it feels like GPT-3 is “smart enough to” (say) give advice about what to do if I'm sick that's better than advice I'd get from asking humans on Reddit or Facebook, because it's digested a vast store of knowledge about illness symptoms and remedies. Moreover, certain ways of prompting it provide suggestive evidence that it could use this knowledge to give helpful advice. With respect to the Reddit or Facebook users I might otherwise ask, it seems like GPT-3 has the potential to be narrowly superhuman in the domain of health advice. But GPT-3 doesn't seem to “want” to give me the best possible health advice -- instead it “wants” to play a strange improv game riffing off the prompt I give it, pretending it's a random internet user. So if I want to use GPT-3 to get advice about my health, there is a gap between what it's capable of (which could even exceed humans) and what I can get it to actually provide me. I'm interested in the challenge of: How can we get GPT-3 to give “the best health advice it can give” when humans[3] in some sense “understand less” about what to do when you're sick than GPT-3 does? And in that regime, how can we even tell whether it's actually “doing the best it can”? I think there are other similar challenges we could define for existing models, especially large language models. I'm excited about tackling this particular type of near-term challenge because it feels like a microcosm of the long-term AI alignment problem in a real, non-superficial sense. In the end, we probably want to find ways to meaningfully supervise (or justifiably trust) models that are more capable than ~all humans in ~all domains.[4] So it seems like a promising form of practice to figure out how to get particular humans to oversee models that are more capable than them in specific ways, if this is done with an eye to developing scalable and domain-general techniques. I'll call this type of project aligning narrowly superhuman models. 
In the rest of this post, I: Give a more detailed description of what aligning narrowly superhuman models could look like, what does and doesn't “count”, and what future projects I think could be done in this space (more). Explain why I think aligning narrowly superhuman models could meaningfully reduce long-term existential risk from misaligned AI (more). Lay out the potential advantages that I think this work has over other types of AI alignment research: (a) conceptual thinking, (b) demos in small-scal...
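The excerpt frames the practical question as closing the gap between what a model like GPT-3 can do and what naive prompting actually elicits, and then measuring whether the model is "doing the best it can". A minimal sketch of that kind of comparison is below; `query_model` and `rate_answer` are hypothetical stand-ins for a language-model API and a pool of human raters, and the prompts are illustrative -- none of this is specified in the original post.

```python
# Sketch: measuring the capability-elicitation gap the post describes.
# `query_model` and `rate_answer` are hypothetical placeholders -- the post
# names no API or rating protocol; the prompts below are illustrative only.
from typing import Callable, List

def naive_prompt(question: str) -> str:
    # Rough analogue of the "random internet user improv" behaviour described.
    return f"Forum thread:\nQ: {question}\nA:"

def elicited_prompt(question: str) -> str:
    # Attempts to elicit the model's best advice instead of an improv riff.
    return ("You are a careful, well-read health adviser. Give the most "
            f"accurate and helpful answer you can.\nQuestion: {question}\nAnswer:")

def elicitation_gap(query_model: Callable[[str], str],
                    rate_answer: Callable[[str], float],
                    questions: List[str]) -> float:
    # Average human-rated quality difference between the two prompting styles.
    gaps = []
    for q in questions:
        naive_score = rate_answer(query_model(naive_prompt(q)))
        elicited_score = rate_answer(query_model(elicited_prompt(q)))
        gaps.append(elicited_score - naive_score)
    return sum(gaps) / len(gaps)
```

A persistently positive gap would be evidence that the model knows more than the naive prompt extracts; the harder question the post emphasizes -- whether even the best elicitation is the model "doing the best it can" -- is not something a sketch like this settles.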

The Nonlinear Library: LessWrong Top Posts
My research methodologyΩ by paulfchristiano

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 23:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My research methodologyΩ, published by paulfchristiano on the LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. (Thanks to Ajeya Cotra, Nick Beckstead, and Jared Kaplan for helpful comments on a draft of this post.) I really don't want my AI to strategically deceive me and resist my attempts to correct its behavior. Let's call an AI that does so egregiously misaligned (for the purpose of this post). Most possible ML techniques for avoiding egregious misalignment depend on detailed facts about the space of possible models: what kind of thing do neural networks learn? how do they generalize? how do they change as we scale them up? But I feel like we should be able to avoid egregious misalignment regardless of how the empirical facts shake out -- it should be possible to get a model we build to do at least roughly what we want. So I'm interested in trying to solve the problem in the worst case, i.e. to develop competitive ML algorithms for which we can't tell any plausible story about how they lead to egregious misalignment. This is a much higher bar for an algorithm to meet, so it may just be an impossible task. But if it's possible, there are several ways in which it could actually be easier: We can potentially iterate much faster, since it's often easier to think of a single story about how an algorithm can fail than it is to characterize its behavior in practice. We can spend a lot of our time working with simple or extreme toy cases that are easier to reason about, since our algorithm is supposed to work even in these cases. We can find algorithms that have a good chance of working in the future even if we don't know what AI will look like or how quickly it will advance, since we've been thinking about a very wide range of possible failure cases. I'd guess there's a 25–50% chance that we can find an alignment strategy that looks like it works, in the sense that we can't come up with a plausible story about how it leads to egregious misalignment. That's a high enough probability that I'm very excited to gamble on it. Moreover, if it fails I think we're likely to identify some possible “hard cases” for alignment — simple situations where egregious misalignment feels inevitable. What this looks like (3 examples) My research basically involves alternating between “think of a plausible alignment algorithm” and “think of a plausible story about how it fails.” Example 1: human feedback In an unaligned benchmark I describe a simple AI training algorithm: Our AI observes the world through a bunch of cameras and outputs motor actions. We train a generative model that predicts these camera observations given the motor actions. We ask humans to evaluate possible futures by looking at the predicted videos output by the model. We then train a model to predict these human evaluations. At test time the AI searches for plans that lead to trajectories that look good to humans. In the same post, I describe a plausible story about how this algorithm leads to egregious misalignment: Our generative model understands reality better than human evaluators. There are plans that acquire influence in ways that are obvious to the generative model but completely incomprehensible and invisible to humans. 
It's possible to use that influence to “hack” the cameras, in the sense of creating a fiction that looks convincing to a human looking at predicted videos. The fiction can look much better than the actual possible futures. So our planning process finds an action that covertly gathers resources and uses them to create a fiction. I don't know if or when this kind of reward hacking would happen — I think it's pretty likely eventually, but it's far from certain and it might take a long time. But from my perspective this failure mode is at least plaus...
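The unaligned-benchmark setup in the excerpt is a concrete pipeline: a generative world model trained to predict camera observations from motor actions, a reward model trained on human evaluations of predicted futures, and a test-time search over plans. Below is a toy structural sketch of that loop; every component is a trivial placeholder, since the post specifies no code, and all names and encodings are assumptions.

```python
# Toy structural sketch of the "unaligned benchmark" pipeline described above.
# All models are trivial stand-ins; the point is the shape of the loop, not
# any real training procedure.
import random
from typing import Callable, List, Sequence, Tuple

Plan = List[float]    # a sequence of motor actions (toy encoding)
Video = List[float]   # predicted camera observations (toy encoding)

def train_world_model(rollouts: Sequence[Tuple[Plan, Video]]) -> Callable[[Plan], Video]:
    # Stand-in for a generative model: actions -> predicted observations.
    def world_model(plan: Plan) -> Video:
        return [a * 2.0 for a in plan]
    return world_model

def train_reward_model(rated: Sequence[Tuple[Video, float]]) -> Callable[[Video], float]:
    # Stand-in for a model of human evaluations of predicted videos.
    def reward_model(video: Video) -> float:
        return sum(video)
    return reward_model

def plan_search(world_model, reward_model, n_candidates: int = 200) -> Plan:
    # Test-time search: pick the plan whose *predicted* future looks best to the
    # learned proxy for human judgment. The failure story above lives here:
    # "looks good on predicted video" need not mean "is actually good".
    candidates = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(n_candidates)]
    return max(candidates, key=lambda plan: reward_model(world_model(plan)))

world_model = train_world_model([])
reward_model = train_reward_model([])
print("selected plan:", [round(a, 2) for a in plan_search(world_model, reward_model)])
```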

The Nonlinear Library: Alignment Forum Top Posts
The case for aligning narrowly superhuman models by Ajeya Cotra

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 50:12


I wrote this post to get people's takes on a type of work that seems exciting to me personally; I'm not speaking for Open Phil as a whole. Institutionally, we are very uncertain whether to prioritize this (and if we do where it should be housed and how our giving should be structured). We are not seeking grant applications on this topic right now. Thanks to Daniel Dewey, Eliezer Yudkowsky, Evan Hubinger, Holden Karnofsky, Jared Kaplan, Mike Levine, Nick Beckstead, Owen Cotton-Barratt, Paul Christiano, Rob Bensinger, and Rohin Shah for comments on earlier drafts. A genre of technical AI risk reduction work that seems exciting to me is trying to align existing models that already are, or have the potential to be, “superhuman”[1] at some particular task (which I'll call narrowly superhuman models).[2] I don't just mean “train these models to be more robust, reliable, interpretable, etc” (though that seems good too); I mean “figure out how to harness their full abilities so they can be as useful as possible to humans” (focusing on “fuzzy” domains where it's intuitively non-obvious how to make that happen). Here's an example of what I'm thinking of: intuitively speaking, it feels like GPT-3 is “smart enough to” (say) give advice about what to do if I'm sick that's better than advice I'd get from asking humans on Reddit or Facebook, because it's digested a vast store of knowledge about illness symptoms and remedies. Moreover, certain ways of prompting it provide suggestive evidence that it could use this knowledge to give helpful advice. With respect to the Reddit or Facebook users I might otherwise ask, it seems like GPT-3 has the potential to be narrowly superhuman in the domain of health advice. But GPT-3 doesn't seem to “want” to give me the best possible health advice -- instead it “wants” to play a strange improv game riffing off the prompt I give it, pretending it's a random internet user. So if I want to use GPT-3 to get advice about my health, there is a gap between what it's capable of (which could even exceed humans) and what I can get it to actually provide me. I'm interested in the challenge of: How can we get GPT-3 to give “the best health advice it can give” when humans[3] in some sense “understand less” about what to do when you're sick than GPT-3 does? And in that regime, how can we even tell whether it's actually “doing the best it can”? I think there are other similar challenges we could define for existing models, especially large language models. I'm excited about tackling this particular type of near-term challenge because it feels like a microcosm of the long-term AI alignment problem in a real, non-superficial sense. In the end, we probably want to find ways to meaningfully supervise (or justifiably trust) models that are more capable than ~all humans in ~all domains.[4] So it seems like a promising form of practice to figure out how to get particular humans to oversee models that are more capable than them in specific ways, if this is done with an eye to developing scalable and domain-general techniques. I'll call this type of project aligning narrowly superhuman models. In the rest of this post, I: Give a more detailed description of what aligning narrowly superhuman models could look like, what does and doesn't “count”, and what future projects I think could be done in this space (more). Explain why I think aligning narrowly superhuman models could meaningfully reduce long-term existential risk from misaligned AI (more). 
Lay out the potential advantages that I think this work has over other types of AI alignment research: (a) conceptual thinking, (b) demos in small-scale artificial settings, and (c) mainstream ML safety such as interpretability and robustness (more). Answer some objections and questions about this research direction, e.g. concerns that it's not very neglected, feels suspiciously similar to commercialization, might cause harm by exacerbating AI race dynamics, or is dominated by another t...

The Nonlinear Library: Alignment Forum Top Posts
My research methodology by Paul Christiano

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 23:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My research methodology, published by Paul Christiano on the AI Alignment Forum. (Thanks to Ajeya Cotra, Nick Beckstead, and Jared Kaplan for helpful comments on a draft of this post.) I really don't want my AI to strategically deceive me and resist my attempts to correct its behavior. Let's call an AI that does so egregiously misaligned (for the purpose of this post). Most possible ML techniques for avoiding egregious misalignment depend on detailed facts about the space of possible models: what kind of thing do neural networks learn? how do they generalize? how do they change as we scale them up? But I feel like we should be able to avoid egregious misalignment regardless of how the empirical facts shake out -- it should be possible to get a model we build to do at least roughly what we want. So I'm interested in trying to solve the problem in the worst case, i.e. to develop competitive ML algorithms for which we can't tell any plausible story about how they lead to egregious misalignment. This is a much higher bar for an algorithm to meet, so it may just be an impossible task. But if it's possible, there are several ways in which it could actually be easier: We can potentially iterate much faster, since it's often easier to think of a single story about how an algorithm can fail than it is to characterize its behavior in practice. We can spend a lot of our time working with simple or extreme toy cases that are easier to reason about, since our algorithm is supposed to work even in these cases. We can find algorithms that have a good chance of working in the future even if we don't know what AI will look like or how quickly it will advance, since we've been thinking about a very wide range of possible failure cases. I'd guess there's a 25–50% chance that we can find an alignment strategy that looks like it works, in the sense that we can't come up with a plausible story about how it leads to egregious misalignment. That's a high enough probability that I'm very excited to gamble on it. Moreover, if it fails I think we're likely to identify some possible “hard cases” for alignment — simple situations where egregious misalignment feels inevitable. What this looks like (3 examples) My research basically involves alternating between “think of a plausible alignment algorithm” and “think of a plausible story about how it fails.” Example 1: human feedback In an unaligned benchmark I describe a simple AI training algorithm: Our AI observes the world through a bunch of cameras and outputs motor actions. We train a generative model that predicts these camera observations given the motor actions. We ask humans to evaluate possible futures by looking at the predicted videos output by the model. We then train a model to predict these human evaluations. At test time the AI searches for plans that lead to trajectories that look good to humans. In the same post, I describe a plausible story about how this algorithm leads to egregious misalignment: Our generative model understands reality better than human evaluators. There are plans that acquire influence in ways that are obvious to the generative model but completely incomprehensible and invisible to humans. It's possible to use that influence to “hack” the cameras, in the sense of creating a fiction that looks convincing to a human looking at predicted videos. 
The fiction can look much better than the actual possible futures. So our planning process finds an action that covertly gathers resources and uses them to create a fiction. I don't know if or when this kind of reward hacking would happen — I think it's pretty likely eventually, but it's far from certain and it might take a long time. But from my perspective this failure mode is at least plausible — I don't see any contradictions between this sequence of events and anyth...

The Nonlinear Library: Alignment Forum Top Posts
Seeking Power is Often Convergently Instrumental in MDPs by Paul Christiano

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 23:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seeking Power is Often Convergently Instrumental in MDPs, published by Paul Christiano on the AI Alignment Forum. (Thanks to Ajeya Cotra, Nick Beckstead, and Jared Kaplan for helpful comments on a draft of this post.) I really don't want my AI to strategically deceive me and resist my attempts to correct its behavior. Let's call an AI that does so egregiously misaligned (for the purpose of this post). Most possible ML techniques for avoiding egregious misalignment depend on detailed facts about the space of possible models: what kind of thing do neural networks learn? how do they generalize? how do they change as we scale them up? But I feel like we should be able to avoid egregious misalignment regardless of how the empirical facts shake out -- it should be possible to get a model we build to do at least roughly what we want. So I'm interested in trying to solve the problem in the worst case, i.e. to develop competitive ML algorithms for which we can't tell any plausible story about how they lead to egregious misalignment. This is a much higher bar for an algorithm to meet, so it may just be an impossible task. But if it's possible, there are several ways in which it could actually be easier: We can potentially iterate much faster, since it's often easier to think of a single story about how an algorithm can fail than it is to characterize its behavior in practice. We can spend a lot of our time working with simple or extreme toy cases that are easier to reason about, since our algorithm is supposed to work even in these cases. We can find algorithms that have a good chance of working in the future even if we don't know what AI will look like or how quickly it will advance, since we've been thinking about a very wide range of possible failure cases. I'd guess there's a 25–50% chance that we can find an alignment strategy that looks like it works, in the sense that we can't come up with a plausible story about how it leads to egregious misalignment. That's a high enough probability that I'm very excited to gamble on it. Moreover, if it fails I think we're likely to identify some possible “hard cases” for alignment — simple situations where egregious misalignment feels inevitable. What this looks like (3 examples) My research basically involves alternating between “think of a plausible alignment algorithm” and “think of a plausible story about how it fails.” Example 1: human feedback In an unaligned benchmark I describe a simple AI training algorithm: Our AI observes the world through a bunch of cameras and outputs motor actions. We train a generative model that predicts these camera observations given the motor actions. We ask humans to evaluate possible futures by looking at the predicted videos output by the model. We then train a model to predict these human evaluations. At test time the AI searches for plans that lead to trajectories that look good to humans. In the same post, I describe a plausible story about how this algorithm leads to egregious misalignment: Our generative model understands reality better than human evaluators. There are plans that acquire influence in ways that are obvious to the generative model but completely incomprehensible and invisible to humans. It's possible to use that influence to “hack” the cameras, in the sense of creating a fiction that looks convincing to a human looking at predicted videos. 
The fiction can look much better than the actual possible futures. So our planning process finds an action that covertly gathers resources and uses them to create a fiction. I don't know if or when this kind of reward hacking would happen — I think it's pretty likely eventually, but it's far from certain and it might take a long time. But from my perspective this failure mode is at least plausible — I don't see any contradictions between ...

The Drill Down
Ep. 94: OppFi CEO Jared Kaplan, MiMedx, Leslie's, Cadiz

The Drill Down

Play Episode Listen Later Sep 13, 2021 41:47


OppFi CEO Jared Kaplan (OPFI) says his subprime lender is better than a payday lender, even with annual interest rates above 100%. Controversial biotech MiMedx Group (MDXG) suffers after a pair of failed clinical trials. Pool supply retailer Leslie's (LESL) struggles to find enough chlorine. Water utility Cadiz (CDZI) wades through a volatile week and the mystery selloff in its shares. The Drill Down with Cory Johnson offers a daily look at the business stories behind stocks on the move. Learn more about your ad choices. Visit megaphone.fm/adchoices

Absolute Return Podcast
#158: Leadership Chat: Jared Kaplan, CEO of OppFi, and Kyle Cerminara, President of FG New America

Absolute Return Podcast

Play Episode Listen Later Jul 6, 2021 40:20


On today's podcast, we welcome special guests Jared Kaplan, CEO of OppFi, and Kyle Cerminara, President of FG New America. OppFi, a leading financial technology platform that serves the everyday consumer, recently announced a merger with SPAC FG New America Acquisition Corp in a deal that valued the fintech company at $800 million. On the podcast, Jared and Kyle discuss:
• The need that OppFi fills in the market and what sets it apart from traditional financial services companies
• How OppFi utilizes artificial intelligence for its financial products
• The thesis behind FG New America and why they chose OppFi as a merger partner
• What sets OppFi apart from other fintechs in the market
• And more

Tearsheet Podcast: The Business of Finance
'Our four prong product strategy is a decade long vision': OppFi's Jared Kaplan

Tearsheet Podcast: The Business of Finance

Play Episode Listen Later Jun 22, 2021 25:30


Welcome to the Tearsheet Podcast. I'm Tearsheet Editor in Chief, Zack Miller. OppFi is a fintech platform offering financial products to everyday Americans. The business was built on a core small-dollar monthly installment loan product for near-prime customers. As it goes public, OppFi is expanding its product set into credit cards and payroll deduction products, serving consumers with FICO scores under 620 and incomes of $50,000 a year. OppFi CEO Jared Kaplan joins me on the podcast to discuss the types of financial products 150 million Americans can use to live their lives and what their alternatives are. We discuss OppFi's growth trajectory and plans for the future as a public company. Lastly, we hit on how the firm differentiates itself from its competitors, including Upstart, which, at first blush, feels quite similar.

Female emPOWERED: Winning in Business & Life
The Wellness Professionals Co-Working Business Model with Studio26 founder Jared Kaplan

Female emPOWERED: Winning in Business & Life

Play Episode Listen Later May 11, 2021 44:10


Welcome back to another episode of Female emPOWERED! Today I'm so excited to have my friend Jared Kaplan joining me from Studio 26 in NYC, and he has a really, really unique business model that we are going to chat about today. Studio 26 enables fitness and wellness professionals to experience freedom in an inspiring space, belong to a community of experts and make more money. Jared is sought after as a teacher, trainer, presenter, and entrepreneur. He has presented advanced workshops internationally (China, Mexico), as well as across the USA, and he continues to advance his personal studies with the brightest minds in movement and rehabilitation. Jared founded Studio 26 with a brighter vision for existing gyms and studios. A native New York City boy with a country heart, Jared spent his post-college years (Wesleyan University, BA in Dance) as a dancer in San Francisco and New York, performing on some of those cities' most prestigious stages. Along the way, he recognized how design alters physical performance, people, and perception.
Let's talk about…
• The before and after of his life prior to deciding to open his own business
• His background with entrepreneurship, business training and business inspiration before starting his own
• How Jared chose the model he currently uses versus using his personal name for the business
• The inspiration behind choosing this specific business model
• Exactly what business model Jared uses and how his setup differs from other businesses
• How contracting at Studio 26 works and what he provides as the hub for the people who rent his space
• How setting time management, boundaries and saying no can actually bring you more time and money
• Current activism, organizations and movements Jared is passionate about, and the ways he educates himself
If you want to connect more with Jared and learn more about Studio 26, you can find him on Instagram, @jarednkaplan, visit his website or take a trip to NYC to visit! 

The Business Brew
Jared Kaplan - CEO Interview - OppFi

The Business Brew

Play Episode Listen Later May 4, 2021 79:12


First and foremost, Bill owns stock in OppFi.  This episode is not an invitation or solicitation to buy or sell shares in FGNA, the entity that OppFi is merging with in a SPAC transaction.  Second, The Business Brew and Bill encourage listeners to do their own due diligence and talk to their financial advisor before purchasing or selling shares in any security. With that said, Bill wanted to interview Jared because OppFi sits at the intersection of many Business Brew episodes.  To begin, FGNA is taking the company public via SPAC (See Andrew Walker's episode for a discussion on SPACs).  Next, Mike Mitchell, our first guest, introduced Bill to Kyle Cerminara, a recent guest, who is part of the group taking OppFi public.  Finally, OppFi would have never passed Bill's "smell test" if it wasn't for Tyrone V. Ross' episode where Tyrone talked about the need for serving the underbanked. In the interest of complete transparency, Bill does not have a significant portion of his net worth allocated to OppFi.  Instead, OppFi is part of a "basket bet" Bill made on Kyle Cerminara.  Within that bet, Bill finds the incentives of all the people involved compelling.  That said, Bill has not done enough work, nor is he convinced enough on OppFi's prospects, to make OppFi a significant weighting in the portfolio. For the avoidance of doubt, this podcast interview is biased and is part of an open source due diligence project.  Bill may change his mind at any time and does not commit to letting people know when he does so. We hope you enjoy the episode.

Innovation Heroes
The Long Awaited Death of Cubicles

Innovation Heroes

Play Episode Listen Later Nov 26, 2020 26:35


Has the rapid shift to remote working brought on the end of the cubicle? Or was it just the kick in the ribs we needed to reinvent and reimagine the legacy office? For business leaders, maintaining employee productivity and business continuity are just the beginning. What about all that money they’ve spent on high-tech conference rooms and office real estate? How will they onboard new employees and positively embrace “Digital by Default” cultures, as we dance on the graves of boardroom meetings and water cooler chats? On today’s episode of Innovation Heroes, Peter Bean chats with Sam Kennedy, Chief Product Evangelist at Poly, and Jared Kaplan, Global Head of Technology at Teneo, about the future of the modern workplace and how technology can help us get there. Are you reinventing your modern workplace experience? See how SHI can help: SHI.com/digitalworkplace

Lend Academy Podcast
Podcast 274: Jared Kaplan of OppLoans

Lend Academy Podcast

Play Episode Listen Later Nov 21, 2020 37:19


Connect with Fintech One-on-One:
• Tweet me @PeterRenton
• Connect with me on LinkedIn
• Find previous Fintech One-on-One episodes

Lend Academy Podcast
Podcast 274: Jared Kaplan of OppLoans

Lend Academy Podcast

Play Episode Listen Later Nov 21, 2020 37:16


The small dollar lending space is starting to see some real innovation. After banks have basically ignored this population for years, with the gentle push from regulators, some are now entering the space for the first time. One of the companies that is helping to enable this shift is OppLoans. Our next guest on the […] The post Podcast 274: Jared Kaplan of OppLoans appeared first on Lend Academy.

Life Is A Story We Tell Ourselves
GPT-3 Artificial Intelligence's New Mind

Life Is A Story We Tell Ourselves

Play Episode Listen Later Nov 19, 2020 57:23


Is GPT-3 Artificial Intelligence's new mind? Dr. Jared Kaplan is a theoretical physicist who has recently been working on Generative Pre-trained Transformer 3 (GPT-3), an autoregressive language model that uses deep learning to produce human-like text. We all use a limited version when we ask Google a question or when Google autocorrects our email. However, this text generator is millions of times more powerful. It can write poetry, complete legal documents and write computer code. Jared Kaplan is a theoretical physicist with interests in quantum gravity, holography, and conformal field theory, as well as effective field theory, particle physics, and cosmology. He is also working on topics at the interface between physics and machine learning. In the last few years he has been collaborating with both physicists and computer scientists on machine learning research, including on scaling laws for neural models and the GPT-3 language model. His goal is to understand these systems and to help make them safe and beneficial. We begin the podcast by discussing the difference between Newton's conception of gravity and Einstein's theory of general relativity. Then we delve into the subjects of quantum gravity and the curvature of space. We also discuss why theoretical physics is relevant to our understanding of reality. In this podcast we ask Dr. Kaplan whether the advances in AI technology can make GPT-3 artificial intelligence's new mind. In other words, will this lead to a kind of sentience for AI, giving it a brain much like ours? Dr. Kaplan has been working with OpenAI on scaling laws for neural models and the GPT-3 language model. OpenAI is an artificial intelligence research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc.
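The "scaling laws for neural models" work mentioned here fits smooth power laws relating loss to quantities like parameter count, of the form L(N) ≈ (N_c / N)^α. A minimal sketch of fitting that functional form to synthetic data follows; the constants and the data are illustrative, not the published fits.

```python
# Fit a toy power law L(N) = (N_c / N) ** alpha -- the functional form used in
# neural scaling-law studies. Data and constants here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_params = np.logspace(6, 10, 12)            # 1e6 .. 1e10 parameters
alpha_true, nc_true = 0.08, 1e14             # made-up constants for the demo
loss = (nc_true / n_params) ** alpha_true * rng.normal(1.0, 0.01, n_params.size)

# A power law is a straight line in log-log space: log L = alpha * (log N_c - log N).
slope, intercept = np.polyfit(np.log(n_params), np.log(loss), deg=1)
alpha_hat = -slope
nc_hat = float(np.exp(intercept / alpha_hat))

print(f"fitted alpha ~ {alpha_hat:.3f}, fitted N_c ~ {nc_hat:.2e}")
```

The log-log transform is the key design choice: once the relationship is linear in log space, an ordinary least-squares fit recovers the exponent directly.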

Ready. Aim. Empire.
436: Jared Kaplan and the Future of Boutique Fitness (Part 2)

Ready. Aim. Empire.

Play Episode Listen Later Sep 3, 2020 41:54


In this episode, you will learn:
• What’s next for Jared and his business, post-pandemic
• The ways he’s prioritizing self-care during these times
• How he thinks we will emerge as an industry {plus his views on the future of digital and in-person services}
• The results that his clients are looking for the most right now
• His biggest concern for the industry, but why he’s so excited for what’s to come next
LINKS:
https://www.studio26nyc.com/
https://www.instagram.com/studio26nyc/
https://www.facebook.com/jared.kaplan1/
https://www.linkedin.com/in/jared-kaplan-58230711/
https://www.pilatesanytime.com/videos-by/227/Jared-Kaplan-Pilates
http://jeanclaudewest.com/
https://www.instagram.com/studiogrowco
https://www.boutiquefitnesscoalition.com/
https://www.boutiquefitnesscoalition.com/press
https://www.facebook.com/groups/3312618912101211/ 

Ready. Aim. Empire.
435: Jared Kaplan and the Rise of the Independent Contractor (Part 1)

Ready. Aim. Empire.

Play Episode Listen Later Aug 27, 2020 28:22


In this episode, you will learn:
• What Jared’s business model looks like {and why I love it so much}
• His inspiration for starting a “better way” to do things in the industry 10 years ago
• Jared’s transition from the role of trainer to CEO, the challenges he faced and the adjustments required from him along the way
• What his role in the business looks like now, who’s in his space and the people he would like to add in the near future
• The biggest surprise Jared has encountered with the comprehensive business model he is using, what their marketing plan entails, and so much more…
LINKS:
https://www.studio26nyc.com/
https://www.instagram.com/studio26nyc/
https://www.facebook.com/jared.kaplan1/
https://www.linkedin.com/in/jared-kaplan-58230711/
https://www.instagram.com/studiogrowco
https://www.boutiquefitnesscoalition.com/
https://www.boutiquefitnesscoalition.com/press
https://www.facebook.com/groups/3312618912101211/  

Leaders in the Trenches
How to Create an Environment for Top Talent with Jared Kaplan at Opploans

Leaders in the Trenches

Play Episode Listen Later Jan 2, 2020 23:24


When you hire top talent, you must also give them the environment to excel. Top talent needs space and autonomy, and they crave a challenge. All too often, we focus on strategies and techniques to drive the company forward. However, top talent requires different leadership. My guest today is Jared Kaplan, CEO of Opploans. His company was ranked #321 on the 2019 Inc 5000 list. Jared gives us his strategies for leading top talent, which include failing forward. We look at what is working for their company to grow top talent and accelerate revenues. Get the show notes for How to Create an Environment for Top Talent with Jared Kaplan at Opploans. Click to Tweet: Listening to an amazing episode on Growth Think Tank featuring Jared Kaplan with me, your host @GeneHammett http://bit.ly/JaredKaplan #TopTalent #Leadership #GHepisode504 #GTTepisodes #Podcasts Give Growth Think Tank a review on iTunes!

Lend Academy Podcast
Podcast 201: Jared Kaplan of OppLoans

Lend Academy Podcast

Play Episode Listen Later May 31, 2019 34:53


Connect with Fintech One-on-One:
• Tweet me @PeterRenton
• Connect with me on LinkedIn
• Find previous Fintech One-on-One episodes

Lend Academy Podcast
Podcast 201: Jared Kaplan of OppLoans

Lend Academy Podcast

Play Episode Listen Later May 31, 2019 34:50


Short term lending has a bad reputation in some circles, often deservedly so. But there are tens of millions of consumers in middle America who are non-prime but still have credit needs. They don’t qualify for a loan at any of the prime online lenders like LendingClub, Prosper or Marcus. So where do they go? […] The post Podcast 201: Jared Kaplan of OppLoans appeared first on Lend Academy.

High-Income Business Writing
#024: Professional Liability Insurance: Do Freelance Writers Really Need It?

High-Income Business Writing

Play Episode Listen Later Oct 9, 2013 29:23


Do you have professional liability insurance? Do you even need it? I mean ... do freelance writers really get sued? In this episode of The High-Income Business Writing Podcast I interview Jared Kaplan, CFO of a national online insurance company for freelancers and other self-employed professionals. Jared explains the types of insurance policies available, what they cover, when they're worth considering, and what they'll cost. This may not be the most exciting topic in the world ... but it's a hugely important one. So try to carve out some time today to listen to this discussion. The notes that follow are a very basic, unedited summary of this podcast. There's a lot more detail in the audio version. You can listen to the show using the audio player below, or you can subscribe to this podcast series.

UC Davis Particle Physics Seminars
The Holographic S-Matrix

UC Davis Particle Physics Seminars

Play Episode Listen Later Jan 30, 2012 67:05


Jared Kaplan discusses how to derive a simple relation between the Mellin amplitude for AdS/CFT correlation functions and the bulk S-Matrix in the flat spacetime limit, proving a conjecture of Penedones. As a consequence of the Operator Product Expansion, the Mellin amplitude for any unitary CFT must be a meromorphic function with simple poles on the real axis. This provides a powerful and suggestive handle on the locality vis-a-vis analyticity properties of the S-Matrix. We begin to explore analyticity by showing how the familiar poles and branch cuts of scattering amplitudes arise from the holographic description. We use this to show how the existence of small black holes in AdS leads to a universal prediction for the conformal block decomposition of the dual CFT.

UC Davis Particle Physics Seminars
Muon colliders and neutrino factories

UC Davis Particle Physics Seminars

Play Episode Listen Later Jan 30, 2012 51:57


Jared Kaplan discusses muon colliders and neutrino factories. They are aimed at achieving the highest lepton-antilepton collision energies and precision measurements of parameters of the neutrino mixing matrix. The performance and cost of these depend sensitively on how well a beam of muons can be cooled. Recent progress in muon cooling design studies and prototype tests nourish the hope that such facilities can be built during the next decade. The status of the key technologies and their various demonstration experiments will be summarized.

LifeMinute Podcast: Health and Wellness
Celeb Secret Fitness Tips: How to Get a Rockin' Hot Bod Like the Rich and Famous

LifeMinute Podcast: Health and Wellness

Play Episode Listen Later Aug 16, 2011 2:10


We met up with personal trainer, Jared Kaplan at Studio 26 in NYC for a how-to guide on getting fit.

UC Davis Particle Physics Seminars
Predictions from a Tevatron Anomaly

UC Davis Particle Physics Seminars

Play Episode Listen Later Mar 7, 2011 64:32


Jared Kaplan examines the implications of the recent CDF measurement of the top-quark forward-backward asymmetry, focusing on a scenario with a new color octet vector boson at 1-3 TeV.

UC Davis Particle Physics Seminars
A Duality for the S-Matrix

UC Davis Particle Physics Seminars

Play Episode Listen Later Oct 12, 2009 62:42


Jared Kaplan reports on exact solutions of the S-matrix in supersymmetric theories.