Podcasts about Gradient

Multi-variable generalization of the derivative of a function
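For reference, that definition written out in symbols (standard notation, not tied to any episode below): for a scalar function f of n variables, the gradient is the vector of its partial derivatives.

```latex
\nabla f(x_1, \dots, x_n) = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_n} \right)
```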

  • 427 PODCASTS
  • 961 EPISODES
  • 45m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Apr 7, 2025 LATEST

Latest podcast episodes about Gradient

Venture in the South
S4:E165 The Weekly Update and an interview with Josh Miller, CEO of Gradient Health

Venture in the South

Play Episode Listen Later Apr 7, 2025 37:07


S4:E165 David opens with The Weekly Update and then interviews Josh Miller, CEO of Gradient Health, a data provider for medical AI. It's timely because Gradient Health has been growing revenue and customers by over 100% YoY for several years and will soon reach profitability (again) in an era when AI has become the capital incinerator extraordinaire. I talk a lot about the AI Bubble on this pod and how so many big AI startups are not sustainable, so I wanted to flip the conversation to a positive perspective and talk about a type of AI startup that is sustainable and has a clear path to profitability and thus to liquidity. After all, that's what this show is about: informing founders, operators and investors about both the pitfalls and the opportunities of the innovation economy. (interview recorded 4.3.25) Follow David on LinkedIn or reach out to David on Twitter/X @DGRollingSouth for comments. Follow Paul on LinkedIn or reach out to Paul on Twitter/X @PalmettoAngel. We invite your feedback and suggestions at www.ventureinthesouth.com or email david@ventureinthesouth.com. Learn more about RollingSouth at rollingsouth.vc or email david@rollingsouth.vc.

JACC Podcast
Exploring High-Gradient Aortic Stenosis | JACC Baran

JACC Podcast

Play Episode Listen Later Mar 25, 2025 27:52


Hosts Mitsuaki Sawano, MD, and Shun Kohsaka, MD, FACC, welcome Saki Ito, MD, a physician scientist at the Mayo Clinic, to discuss key topics expected at the upcoming ACC.25 in Chicago and a nuanced phenotype of aortic stenosis (AS): high-gradient AS with an aortic valve area (AVA) greater than 1.0 cm². Drawing from a recent retrospective study at Mayo Clinic, Dr. Ito examines the heterogeneous nature of this group—including patients with bicuspid valves and elevated stroke volume due to larger body size—and the potential prognostic benefit of aortic valve replacement (AVR) despite the absence of classic severity markers.

Run it Red with Ben Sims
Ben Sims 'Run It Red' 119

Run it Red with Ben Sims

Play Episode Listen Later Mar 5, 2025 119:31


Run it Red 119 is here. This month's got killer sounds from the likes of D'Julz, Tal Fussmann, Scan 7, As One, Kr!z, Seddig and loads more. The full tracklist, as always, is below, so check out the labels/artists where you can.

Machine Learning Street Talk
Want to Understand Neural Networks? Think Elastic Origami! - Prof. Randall Balestriero

Machine Learning Street Talk

Play Episode Listen Later Feb 8, 2025 78:10


Professor Randall Balestriero joins us to discuss neural network geometry, spline theory, and emerging phenomena in deep learning, based on research presented at ICML. Topics include the delayed emergence of adversarial robustness in neural networks ("grokking"), geometric interpretations of neural networks via spline theory, and challenges in reconstruction learning. We also cover geometric analysis of Large Language Models (LLMs) for toxicity detection and the relationship between intrinsic dimensionality and model control in RLHF.

SPONSOR MESSAGES:
*** CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? Go to https://tufalabs.ai/ ***

Randall Balestriero:
https://x.com/randall_balestr
https://randallbalestriero.github.io/
Show notes and transcript: https://www.dropbox.com/scl/fi/3lufge4upq5gy0ug75j4a/RANDALLSHOW.pdf?rlkey=nbemgpa0jhawt1e86rx7372e4&dl=0

TOC:
- Introduction
- 00:00:00: Introduction
- Neural Network Geometry and Spline Theory
- 00:01:41: Neural Network Geometry and Spline Theory
- 00:07:41: Deep Networks Always Grok
- 00:11:39: Grokking and Adversarial Robustness
- 00:16:09: Double Descent and Catastrophic Forgetting
- Reconstruction Learning
- 00:18:49: Reconstruction Learning
- 00:24:15: Frequency Bias in Neural Networks
- Geometric Analysis of Neural Networks
- 00:29:02: Geometric Analysis of Neural Networks
- 00:34:41: Adversarial Examples and Region Concentration
- LLM Safety and Geometric Analysis
- 00:40:05: LLM Safety and Geometric Analysis
- 00:46:11: Toxicity Detection in LLMs
- 00:52:24: Intrinsic Dimensionality and Model Control
- 00:58:07: RLHF and High-Dimensional Spaces
- Conclusion
- 01:02:13: Neural Tangent Kernel
- 01:08:07: Conclusion

REFS:
[00:01:35] Humayun – Deep network geometry & input space partitioning: https://arxiv.org/html/2408.04809v1
[00:03:55] Balestriero & Paris – Linking deep networks to adaptive spline operators: https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf
[00:13:55] Song et al. – Gradient-based white-box adversarial attacks: https://arxiv.org/abs/2012.14965
[00:16:05] Humayun, Balestriero & Baraniuk – Grokking phenomenon & emergent robustness: https://arxiv.org/abs/2402.15555
[00:18:25] Humayun – Training dynamics & double descent via linear region evolution: https://arxiv.org/abs/2310.12977
[00:20:15] Balestriero – Power diagram partitions in DNN decision boundaries: https://arxiv.org/abs/1905.08443
[00:23:00] Frankle & Carbin – Lottery Ticket Hypothesis for network pruning: https://arxiv.org/abs/1803.03635
[00:24:00] Belkin et al. – Double descent phenomenon in modern ML: https://arxiv.org/abs/1812.11118
[00:25:55] Balestriero et al. – Batch normalization's regularization effects: https://arxiv.org/pdf/2209.14778
[00:29:35] EU – EU AI Act 2024 with compute restrictions: https://www.lw.com/admin/upload/SiteAttachments/EU-AI-Act-Navigating-a-Brave-New-World.pdf
[00:39:30] Humayun, Balestriero & Baraniuk – SplineCam: Visualizing deep network geometry: https://openaccess.thecvf.com/content/CVPR2023/papers/Humayun_SplineCam_Exact_Visualization_and_Characterization_of_Deep_Network_Geometry_and_CVPR_2023_paper.pdf
[00:40:40] Carlini – Trade-offs between adversarial robustness and accuracy: https://arxiv.org/pdf/2407.20099
[00:44:55] Balestriero & LeCun – Limitations of reconstruction-based learning methods: https://openreview.net/forum?id=ez7w0Ss4g9
(truncated, see shownotes PDF)

Studio Sessions
38. The Mystical State of Paradox: A Journey Beyond Binary Thinking

Studio Sessions

Play Episode Listen Later Jan 21, 2025 55:18 Transcription Available


This episode delves into the concept of paradox and its implications for navigating a binary world. We explore the limitations of either/or thinking and the importance of embracing duality, where seemingly opposing forces coexist and even necessitate each other. We discuss a range of paradoxes, including simplicity and complexity, certainty and doubt, ambition and contentedness, and how these concepts can be better understood as interconnected and complementary rather than mutually exclusive.Drawing on personal experiences and artistic examples, we examine how embracing paradox can lead to greater understanding and creativity. We discuss the importance of maintaining a beginner's mind, the dangers of ego-driven art, and the delicate balance between self-indulgence and audience engagement. The conversation also touches on the role of technology, language, and the mystical in shaping our perception of reality. -Ai If you enjoyed this episode, please consider giving us a rating and/or a review. We read and appreciate all of them. Thanks for listening, and we'll see you in the next episode. Links To Everything: Video Version of The Podcast: https://geni.us/StudioSessionsYT Matt's YouTube Channel: https://geni.us/MatthewOBrienYT Matt's 2nd Channel: https://geni.us/PhotoVideosYT Alex's YouTube Channel: https://geni.us/AlexCarterYT Matt's Instagram: https://geni.us/MatthewIG Alex's Instagram: https://geni.us/AlexIG

Engineering Kiosk
#179 MLOps: Machine Learning in die Produktion bringen mit Michelle Golchert und Sebastian Warnholz

Engineering Kiosk

Play Episode Listen Later Jan 21, 2025 76:51


Machine Learning Operations (MLOps) with Data Science Deep Dive. Machine learning, or rather the results of its predictions (so-called prediction models), has become impossible to imagine modern IT, or even our lives, without. Such models are probably used more often than you realize. Programming, building, and training these models is one thing; deploying and operating them is another. The latter is called Machine Learning Operations, or "MLOps" for short, and it is the topic of this episode. We discuss what MLOps actually is and how it differs from classic DevOps, how you bring your own machine learning model into production and which stages it has to pass through, what the difference between model training and model serving is, what a model registry is for, how you actually monitor and debug machine learning models in production, what model drift and drift detection are, whether methods like continuous delivery can keep the feedback cycle short, and also which skills matter for an MLOps engineer. Answering all these questions are Michelle Golchert and Sebastian Warnholz from the Data Science Deep Dive podcast. You can find our current advertising partners at https://engineeringkiosk.dev/partners
Quick feedback on the episode:
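As a loose illustration of the drift detection mentioned above (my own minimal sketch, not from the episode; the data and threshold are made up), one common approach is to compare a feature's training-time distribution against live traffic with a two-sample Kolmogorov-Smirnov test:

```python
# Minimal drift-detection sketch: compare a feature's distribution at training
# time with the distribution seen in production. Data here is synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # snapshot from training
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)   # shifted "live" data

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # alert threshold chosen purely for illustration
    print(f"Possible feature drift detected (KS statistic={stat:.3f}, p={p_value:.2e})")
```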

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Unlocking HR's Hidden Insights with Siadhal Magos [AI Corner]

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies

Play Episode Listen Later Jan 17, 2025 20:28


Think AI is just about robots summarizing your emails? Think again. Metaview co-founder and CEO Siadhal Magos reveals how AI's real impact in an org is making sense of workplace conversations, performance reviews, and all the messy human stuff that HR deals with. Start recognizing the opportunities presented by 'unstructured data' to transform HR from subjective guesswork to data-driven decisions, without losing the human touch.

*Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co
For coaching and advising, inquire at https://kellidragovich.com/
HR Heretics is a podcast from Turpentine.

Ideas Worth Exploring
Optimization algorithms and the life lessons they taught me

Ideas Worth Exploring

Play Episode Listen Later Jan 12, 2025 31:03


Optimization is about maximizing or minimizing something. The math behind how you do that has taught me some important life lessons, including:
(Gradient descent) Incremental progress with regular course corrections will eventually get you where you want to go.
(Soft constraints) Giving yourself some wiggle room can help you avoid black and white thinking.
(Local minima) Sometimes things have to get worse before they get better.
(Multi-objective optimization) Managing the tradeoffs between multiple conflicting goals can help you reach both goals better.
(Overfitting) Optimizing too hard can actually make you worse off, and so it's important to know when to quit.
Links:
Joggling: https://www.designreview.byu.edu/collections/being-the-best-in-the-world-is-easy
Exchanging money for time: https://www.lesswrong.com/posts/cvDtmPNCyrkpg4d4F/units-of-exchange and https://www.clearerthinking.org/tools/value-of-your-time-calculator
Russian internet trolls: https://en.wikipedia.org/wiki/Russian_web_brigades
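To make the gradient descent lesson concrete, here is a minimal sketch (my own toy example, not from the episode): repeated small steps against the derivative of f(x) = (x - 3)^2 walk toward its minimum.

```python
# Toy gradient descent on f(x) = (x - 3)^2: incremental progress with
# regular course corrections eventually reaches the minimum at x = 3.
def grad(x):
    return 2 * (x - 3)  # derivative of (x - 3)^2

x = 0.0                 # starting guess
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * grad(x)  # small step downhill

print(round(x, 4))      # close to 3.0
```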

The Gradient Podcast
2024 in AI, with Nathan Benaich

The Gradient Podcast

Play Episode Listen Later Dec 26, 2024 108:43


Episode 142Happy holidays! This is one of my favorite episodes of the year — for the third time, Nathan Benaich and I did our yearly roundup of all the AI news and advancements you need to know. This includes selections from this year's State of AI Report, some early takes on o3, a few minutes LARPing as China Guys……… If you've stuck around and continue to listen, I'm really thankful you're here. I love hearing from you. You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com. Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. Outline* (00:00) Intro* (01:00) o3 and model capabilities + “reasoning” capabilities* (05:30) Economics of frontier models* (09:24) Air Street's year and industry shifts: product-market fit in AI, major developments in science/biology, "vibe shifts" in defense and robotics* (16:00) Investment strategies in generative AI, how to evaluate and invest in AI companies* (19:00) Future of BioML and scientific progress: on AlphaFold 3, evaluation challenges, and the need for cross-disciplinary collaboration* (32:00) The “AGI” question and technology diffusion: Nathan's take on “AGI” and timelines, technology adoption, the gap between capabilities and real-world impact* (39:00) Differential economic impacts from AI, tech diffusion* (43:00) Market dynamics and competition* (50:00) DeepSeek and global AI innovation* (59:50) A robotics renaissance? robotics coming back into focus + advances in vision-language models and real-world applications* (1:05:00) Compute Infrastructure: NVIDIA's dominance, GPU availability, the competitive landscape in AI compute* (1:12:00) Industry consolidation: partnerships, acquisitions, regulatory concerns in AI* (1:27:00) Global AI politics and regulation: international AI governance and varying approaches* (1:35:00) The regulatory landscape* (1:43:00) 2025 predictions * (1:48:00) ClosingLinks and ResourcesFrom Air Street Press:* The State of AI Report* The State of Chinese AI* Open-endedness is all we'll need* There is no scaling wall: in discussion with Eiso Kant (Poolside)* Alchemy doesn't scale: the economics of general intelligence* Chips all the way down* The AI energy wars will get worse before they get betterOther highlights/resources:* Deepseek: The Quiet Giant Leading China's AI Race — an interview with DeepSeek CEO Liang Wenfeng via ChinaTalk, translated by Jordan Schneider, Angela Shen, Irene Zhang and others* A great position paper on open-endedness by Minqi Jiang, Tim Rocktäschel, and Ed Grefenstette — Minqi also wrote a blog post on this for us!* for China Guys only: China's AI Regulations and How They Get Made by Matt Sheehan (+ an interview I did with Matt in 2022!)* The Simple Macroeconomics of AI by Daron Acemoglu + a critique by Maxwell Tabarrok (more links in the Report)* AI Nationalism by Ian Hogarth (from 2018)* Some analysis on the EU AI Act + regulation from Lawfare Get full access to The Gradient at thegradientpub.substack.com/subscribe

AM Best Radio Podcast
AI in Insurance: Unlocking the Future With Gradient AI's Smith

AM Best Radio Podcast

Play Episode Listen Later Dec 19, 2024 15:07


Stan Smith, CEO of Gradient AI, explores the transformative impact of generative artificial intelligence on the insurance industry, predicting innovations in claims management, underwriting, and operational efficiency for 2025 and beyond.

Frontend First
Creating a background gradient from an image

Frontend First

Play Episode Listen Later Dec 12, 2024 48:39


Ryan talks to Sam about reproducing iOS's new image background treatment for his Open Graph Preview tool, opengraph.ing. They talk about different approaches for generating gradients from images, including finding the vibrant color of an image, luminosity-weighted averages, k-means clustering, and more.

Timestamps:
0:00 - Intro
3:07 - Apple's new OG image gradient
10:06 - Luminosity-weighted average
14:22 - Finding the vibrant color of an image
21:41 - Contrast ratios on favicons
32:21 - K-means clustering
41:25 - Refactoring UI tip about rotating the hue

Links:
Open Graph Preview
Refactoring UI
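For readers curious about the luminosity-weighted average idea discussed here, this is a rough sketch of how it might look (my own illustration; the file name, Rec. 601 luma weights, and the use of Pillow/NumPy are assumptions, not the tool's actual implementation):

```python
# Sketch: pick a base color for a background gradient by averaging an image's
# pixels, weighted by per-pixel luminosity so brighter pixels count more.
import numpy as np
from PIL import Image

pixels = np.asarray(Image.open("og-image.png").convert("RGB"), dtype=float).reshape(-1, 3)

# Rec. 601 luma as the weight for each pixel.
luma = 0.299 * pixels[:, 0] + 0.587 * pixels[:, 1] + 0.114 * pixels[:, 2]
weights = luma / (luma.sum() + 1e-9)
base_color = (pixels * weights[:, None]).sum(axis=0)

print([int(c) for c in base_color])  # e.g. the start color of a CSS gradient
```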

Open Source Startup Podcast
E160: Open Source Secrets Management with Infisical

Open Source Startup Podcast

Play Episode Listen Later Dec 12, 2024 39:33


Vlad Matsiiako is CEO & Co-Founder of Infisical, the open source secrets management platform. Their open source project, also called infisical, has 16K stars on GitHub and helps users sync secrets across their teams and infrastructure. Infisical has raised $3M from investors including Gradient and YC. In this episode, we dig into their path from closed to open source, their big user wins (including government users), the importance of reliability for products in and around this category, the organic growth that came from their community, their AI strategy & more!

Pod 4 Good
From 36 Degrees North to Gradient: Devon Laney's Journey in Tulsa

Pod 4 Good

Play Episode Listen Later Dec 12, 2024 60:56


In this episode, we explore Devon Laney's journey as a leader in promoting entrepreneurship in Tulsa. He talks about his vision to change the city's innovation scene through his work at Gradient, a hub for entrepreneurs. Devon's experience at 36 Degrees North set the stage for his current efforts to build a strong business community. Listeners will learn about his plans to make Tulsa a key player in innovation and business growth.Devon is passionate about supporting entrepreneurs and outlines how Gradient aims to boost Tulsa's business revival. His goal is to create a space where startups can thrive, focusing on teamwork, community support, and innovation as drivers of economic progress. This episode is ideal for anyone interested in the dynamics of building a solid innovation ecosystem.As the episode wraps up, Devon talks about the challenges and opportunities for Tulsa's entrepreneurial scene. His leadership is about growth and inspiring new creativity and resilience in business. Committed to developing talent and encouraging innovation, Devon's work highlights the power of visionary leadership. Tune in to see how Devon is shaping Tulsa's future and what lies ahead for local entrepreneurs.

The Gradient Podcast
Philip Goff: Panpsychism as a Theory of Consciousness

The Gradient Podcast

Play Episode Listen Later Dec 12, 2024 60:04


Episode 141I spoke with Professor Philip Goff about:* What a “post-Galilean” science of consciousness looks like* How panpsychism helps explain consciousness and the hybrid cosmopsychist viewEnjoy!Philip Goff is a British author, idealist philosopher, and professor at Durham University whose research focuses on philosophy of mind and consciousness. Specifically, it focuses on how consciousness can be part of the scientific worldview. He is the author of multiple books including Consciousness and Fundamental Reality, Galileo's Error: Foundations for a New Science of Consciousness and Why? The Purpose of the Universe.Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (01:05) Goff vs. Carroll on the Knowledge Arguments and explanation* (08:00) Preferences for theories* (12:55) Curiosity (Grounding, Essence) and the Knowledge Argument* (14:40) Phenomenal transparency and physicalism vs. anti-physicalism* (29:00) How Exactly does Panpsychism Help Explain Consciousness* (30:05) The argument for hybrid cosmopsychism* (36:35) “Bare” subjects / subjects before inheriting phenomenal properties* (40:35) Bundle theories of the self* (43:35) Fundamental properties and new subjects as causal powers* (50:00) Integrated Information Theory* (55:00) Fundamental assumptions in hybrid cosmopsychism* (1:00:00) OutroLinks:* Philip's homepage and Twitter* Papers* Putting Consciousness First* Curiosity (Grounding, Essence) and the Knowledge Argument Get full access to The Gradient at thegradientpub.substack.com/subscribe

TechCrunch Startups – Spoken Edition
Google's Gradient backs Cake, a managed open-source AI infrastructure platform

TechCrunch Startups – Spoken Edition

Play Episode Listen Later Dec 5, 2024 6:44


A new company is emerging from stealth today with backing from Google's AI-focused venture fund to help businesses compile their open-source AI infrastructure and reduce their engineering overheads. Cake integrates and secures more than 100 components for enterprises. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Quack 12 Podcast
We Beat Washington w/ Thomas (Gradient) Hiura

Quack 12 Podcast

Play Episode Listen Later Dec 3, 2024 48:11


Thomas Hiura (AKA Gradient) joins the show to help recap our victorious victory over Washington and what it means to us Duck fans!

Here's a link to some of Thomas' work: https://li.sten.to/byefornow

Check out our patreon for Duck-related content. Please, give us a five-star rating and review on apple podcasts! Follow us on twitter! @quack12podcast And our Youtube Channel!

PALADIN FINANCIAL TALK
Cybersecurity Tips with Michael Johnson

PALADIN FINANCIAL TALK

Play Episode Listen Later Nov 30, 2024


In this episode, Jeff welcomes Gradient's Chief Information Officer Michael Johnson to share cybersecurity tips that can help protect you and your information from scammers.

Gardeners' Question Time
Staffordshire: Dogs, Gradient Gardens and Aphrodisiacs

Gardeners' Question Time

Play Episode Listen Later Nov 29, 2024 42:45


How can I stop my dog from digging holes in my garden? What conditions do walnut trees grow best in? If you could lose one pest from your garden, what would it be? Peter Gibbs and a team of experts are in Staffordshire to solve the gardening conundrums of the audience. Returning to the National Memorial Arboretum with Peter are grow-your-own legend Bob Flowerdew, pest and disease expert Pippa Greenwood and garden designer Bunny Guinness. Later in the programme, is your garden on a gradient? Garden designer Matthew Wilson provides tricks and tips on the best way to effectively garden on an incline.

Producer: Dan Cocker
Assistant Producer: Rahnee Prescod
Executive Producer: Carly Maile
A Somethin' Else production for BBC Radio 4

Gamereactor TV - English
Philips Hue Lightstrip (Quick Look) - Comparing the Solo, Plus, and Gradient

Gamereactor TV - English

Play Episode Listen Later Nov 27, 2024 4:55


Gamereactor Gadgets TV – English
Philips Hue Lightstrip (Quick Look) - Comparing the Solo, Plus, and Gradient

Gamereactor Gadgets TV – English

Play Episode Listen Later Nov 27, 2024 4:55


Gamereactor TV - Italiano
Philips Hue Lightstrip (Quick Look) - Comparing the Solo, Plus, and Gradient

Gamereactor TV - Italiano

Play Episode Listen Later Nov 27, 2024 4:55


Gamereactor TV - Norge
Philips Hue Lightstrip (Quick Look) - Comparing the Solo, Plus, and Gradient

Gamereactor TV - Norge

Play Episode Listen Later Nov 27, 2024 4:55


Gamereactor TV - Español
Philips Hue Lightstrip (Quick Look) - Comparing the Solo, Plus, and Gradient

Gamereactor TV - Español

Play Episode Listen Later Nov 27, 2024 4:55


Gamereactor TV - Inglês
Philips Hue Lightstrip (Quick Look) - Comparing the Solo, Plus, and Gradient

Gamereactor TV - Inglês

Play Episode Listen Later Nov 27, 2024 4:55


The Gradient Podcast
Some Changes at The Gradient

The Gradient Podcast

Play Episode Listen Later Nov 21, 2024 34:25


Hi everyone!If you're a new subscriber or listener, welcome. If you're not new, you've probably noticed that things have slowed down from us a bit recently. Hugh Zhang, Andrey Kurenkov and I sat down to recap some of The Gradient's history, where we are now, and how things will look going forward. To summarize and give some context:The Gradient has been around for around 6 years now – we began as an online magazine, and began producing our own newsletter and podcast about 4 years ago. With a team of volunteers — we take in a bit of money through Substack that we use for subscriptions to tools we need and try to pay ourselves a bit — we've been able to keep this going for quite some time. Our team has less bandwidth than we'd like right now (and I'll admit that at least some of us are running on fumes…) — we'll be making a few changes:* Magazine: We're going to be scaling down our editing work on the magazine. While we won't be accepting pitches for unwritten drafts for now, if you have a full piece that you'd like to pitch to us, we'll consider posting it. If you've reached out about writing and haven't heard from us, we're really sorry. We've tried a few different arrangements to manage the pipeline of articles we have, but it's been difficult to make it work. We still want this to be a place to promote good work and writing from the ML community, so we intend to continue using this Substack for that purpose. If we have more editing bandwidth on our team in the future, we want to continue doing that work. * Newsletter: We'll aim to continue the newsletter as before, but with a “Best from the Community” section highlighting posts. We'll have a way for you to send articles you want to be featured, but for now you can reach us at our editor@thegradient.pub. * Podcast: I'll be continuing this (at a slower pace), but eventually transition it away from The Gradient given the expanded range. If you're interested in following, it might be worth subscribing on another player like Apple Podcasts, Spotify, or using the RSS feed.* Sigmoid Social: We'll keep this alive as long as there's financial support for it.If you like what we do and/or want to help us out in any way, do reach out to editor@thegradient.pub. We love hearing from you.Timestamps* (0:00) Intro* (01:55) How The Gradient began* (03:23) Changes and announcements* (10:10) More Gradient history! On our involvement, favorite articles, and some plugsSome of our favorite articles!There are so many, so this is very much a non-exhaustive list:* NLP's ImageNet moment has arrived* The State of Machine Learning Frameworks in 2019* Why transformative artificial intelligence is really, really hard to achieve* An Introduction to AI Story Generation* The Artificiality of Alignment (I didn't mention this one in the episode, but it should be here)Places you can find us!Hugh:* Twitter* Personal site* Papers/things mentioned!* A Careful Examination of LLM Performance on Grade School Arithmetic (GSM1k)* Planning in Natural Language Improves LLM Search for Code Generation* Humanity's Last ExamAndrey:* Twitter* Personal site* Last Week in AI PodcastDaniel:* Twitter* Substack blog* Personal site (under construction) Get full access to The Gradient at thegradientpub.substack.com/subscribe

The CleanTechies Podcast
#220 How This Top Investor Believes ClimateTech Startups Can Still Exit

The CleanTechies Podcast

Play Episode Listen Later Nov 15, 2024 31:19 Transcription Available


Listen Time: Full Show 38:51 (no ads) | Free Preview 31:19

Today, we are speaking with Shaun Abrahamson from Third Sphere. Third Sphere is a leading ClimateTech investor that also has a debt strategy for high-volume production startups in the hardware space. They have backed companies like ChargeLab, ClimateBase, Gradient, OneWheel, Therma, Revivn, & Wasted (check out the episode we did with Revivn). This is Shaun's third time on the pod; if this is your first time hearing him, you'll quickly understand why we keep inviting him back: he is very thoughtful. During NY Climate Week, the Third Sphere team put on a great event titled "Climate Tech Exits." It was very well received by many NY Climate Week attendees. They covered:
Why exits are so crucial to the success of Climate investing
The state of climate tech exits
How it looks compared to tech broadly
What to learn from the original tech winners like Google & Apple
What the paths forward might be
The patterns of successful founders

In today's episode, we recap the key points of their event and then dig into Shaun's advice to founders on how to ensure they are doing things right in order to see an exit. We are sure you'll find it educational.

45 Graus
#174 Zita Marinho - Como funcionam os algoritmos do ChatGPT e de outros Large Language Models?

45 Graus

Play Episode Listen Later Nov 1, 2024 78:12


Zita Marinho is a researcher at Google DeepMind, where she currently works on Reinforcement Learning (an area of Machine Learning). She holds a dual PhD in Robotics from the Robotics Institute at Carnegie Mellon University and from Instituto Superior Técnico, completed in 2018. Her research interests lie at the intersection of machine learning algorithms and Natural Language Processing.
-> Support this podcast and join the 45 Graus patron community at: 45grauspodcast.com
-> Critical Thinking Workshops.
_______________
Index:
(0:00) Introduction
(6:40) Neural network algorithms | 2024 Nobel Prize in Physics | Why multiple layers matter | Vanishing and exploding gradients
(20:20) How do models learn? Gradient descent and backpropagation | Recurrent networks | The Nobel and Ising models
(28:13) The Transformer revolution, as in ChatGPT. The paper "Attention is All You Need"
(37:16) What made ChatGPT innovative? | Comparison with the human brain | ChatGPT vs. other current LLMs (e.g. Gemini, Claude)
(44:23) Prompting tips
(48:14) Strengths and weaknesses of current models | Mysterious emergent properties, BCG report | Hallucination risks
(54:59) The future | reasoners (OpenAI's o1 Model) | Prompting tips | Data limitations | AlphaProof's silver medal at the Mathematical Olympiad | robotics | convolutional networks | From GPT models to the chatbot
(1:10:52) Rich Sutton's essay "The Bitter Lesson" | Deep Blue
Recommended books: The Learning Brain, Thad A. Polk; A Brief History of Mathematical Thought, Luke Heaton; A Brief History of Intelligence, Max Bennett; Language Models: A Guide for the Perplexed
_______________
This conversation was edited by: Hugo Oliveira
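As a tiny numeric aside on the vanishing gradients mentioned in the index (my own illustration, not from the episode): backpropagation multiplies local derivatives layer by layer, so a long chain of factors below 1 shrinks the learning signal exponentially.

```python
# Vanishing-gradient arithmetic: the sigmoid's derivative never exceeds 0.25,
# so 20 stacked sigmoid layers scale the backpropagated gradient by at most 0.25**20.
sigmoid_max_derivative = 0.25
gradient = 1.0
for _ in range(20):
    gradient *= sigmoid_max_derivative

print(gradient)  # ~9.1e-13: almost no signal reaches the early layers
```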

Everybody in the Pool
E67: You would not believe how much water went into making your iPhone

Everybody in the Pool

Play Episode Listen Later Oct 31, 2024 33:34


Usually on the show we talk about carbon-emitting sectors, but today we're talking about one of the planet's most precious resources: water. Specifically, industrial water, which is used in staggering amounts, horribly contaminated, and sometimes just put right back into the environment. Anurag Bajpayee, CEO of Gradient, discusses the company's technology-driven end-to-end water solutions and their goal to conserve water for future generations and give nature its water back. Gradient's approach is practical, driven by the outcome of cleaning and recycling water, rather than a specific technology or innovation (although there are plenty of innovations), and their latest achievement is a process to concentrate and destroy PFAS, aka forever chemicals.

LINKS:
Gradient: https://www.gradiant.com/
All episodes: https://www.everybodyinthepool.com/
Subscribe to the Everybody in the Pool newsletter: https://www.mollywood.co/
Become a member and get an ad-free version of the podcast: https://plus.acast.com/s/everybody-in-the-pool

Please subscribe and tell your friends about Everybody in the Pool! Send feedback or become a sponsor at in@everybodyinthepool.com! To support the show and get an ad-free listening experience, please jump in and become a member of Everybody in the Pool! https://plus.acast.com/s/everybody-in-the-pool. Hosted on Acast. See acast.com/privacy for more information.

HLTH Matters
AI @ HLTH : GradientAI Apply Context to Clinical Data that Provides Sophisticated Insights to Clinical Workflows

HLTH Matters

Play Episode Listen Later Oct 31, 2024 24:20


Host Sandy Vance is behind the mic in this episode with Chris Chang, CEO and Co-Founder at GradientAI. Have any questions for Chris? Reach out via email or check out their website. They explore why GradientAI focuses specifically on the healthcare industry and how its innovative solutions are enhancing patient care and operational efficiency. They also address common concerns surrounding AI adoption and share exciting use cases and future developments. Listen in as Sandy and Chris navigate the intersection of GradientAI and AI in healthcare, revealing the potential to reshape the future of medicine.

GradientAI enables enterprises to automate their complex data workflows using its AI-powered data-reasoning platform. They work with many integrated care providers and health tech companies to power processes across the organization, from medical note auditing to the improvement of care to patients, benefit management, claims processing, and data reconciliation. Enterprise AI automation in healthcare, powered by Nightingale, a domain-specific model designed specifically for healthcare.

In this episode, they talk about:
Why GradientAI Focuses on Healthcare and the Benefits of AI in Healthcare
How GradientAI Helps the Healthcare Industry
Pros and Cons of Workflow Tools That Limit Information in Healthcare
The Transformative Technology of AI in the Healthcare Industry
Use Cases, Trials, and Development of AI in Healthcare
Addressing Concerns of Those Wary About AI in Healthcare
The Future of GradientAI and AI in Healthcare

A Little About Chris:
Chris is the Co-Founder and CEO at Gradient, who most recently led Studio AI at Netflix and was an architect of early domain expert LLMs for content production and intelligence. Prior to joining Netflix, Chris held a variety of leadership roles at Pinterest, Opendoor, and Meta. Chris started his career in finance and holds a dual bachelor's degree in computer science and business and a master's degree in computer science from UPenn.

JACC Speciality Journals
JACC: Advances - A Novel Echocardiographic Parameter to Confirm Low-Gradient Aortic Stenosis Severity

JACC Speciality Journals

Play Episode Listen Later Oct 23, 2024 3:19


Darshan H. Brahmbhatt, Podcast Editor of JACC: Advances, discusses a recently published original research paper about a novel echocardiographic parameter to confirm low-gradient aortic stenosis severity.

JACC Speciality Journals
JACC: Advances - Impact of Residual Transmitral Mean Pressure Gradient on Outcomes After Mitral Transcatheter Edge-to-Edge Repair

JACC Speciality Journals

Play Episode Listen Later Oct 23, 2024 3:22


Darshan H. Brahmbhatt, Podcast Editor of JACC: Advances, discusses a recently published original research paper on the impact of residual transmitral mean pressure gradient on outcomes after mitral transcatheter edge-to-edge repair.

Dungeon Master of None
330 - Gradient Descent

Dungeon Master of None

Play Episode Listen Later Oct 19, 2024 63:51


Matt and Rob discuss Gradient Descent, a module for Mothership and the greatest sci-fi megadungeon made out of an abandoned android manufacturing facility in deep space (and also maybe the best sci-fi megadungeon period). https://www.patreon.com/DungeonMasterOfNone  Join the DMofNone Discord! Music: Pac Div - Roll the Dice Soen - Monarch

The Gradient Podcast
Jacob Andreas: Language, Grounding, and World Models

The Gradient Podcast

Play Episode Listen Later Oct 10, 2024 112:43


Episode 140I spoke with Professor Jacob Andreas about:* Language and the world* World models* How he's developed as a scientistEnjoy!Jacob is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar) and his B.S. from Columbia. He has received a Sloan fellowship, an NSF CAREER award, MIT's Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML and NAACL.Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. Subscribe to The Gradient Podcast:  Apple Podcasts  | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (00:40) Jacob's relationship with grounding fundamentalism* (05:21) Jacob's reaction to LLMs* (11:24) Grounding language — is there a philosophical problem?* (15:54) Grounding and language modeling* (24:00) Analogies between humans and LMs* (30:46) Grounding language with points and paths in continuous spaces* (32:00) Neo-Davidsonian formal semantics* (36:27) Evolving assumptions about structure prediction* (40:14) Segmentation and event structure* (42:33) How much do word embeddings encode about syntax?* (43:10) Jacob's process for studying scientific questions* (45:38) Experiments and hypotheses* (53:01) Calibrating assumptions as a researcher* (54:08) Flexibility in research* (56:09) Measuring Compositionality in Representation Learning* (56:50) Developing an independent research agenda and developing a lab culture* (1:03:25) Language Models as Agent Models* (1:04:30) Background* (1:08:33) Toy experiments and interpretability research* (1:13:30) Developing effective toy experiments* (1:15:25) Language Models, World Models, and Human Model-Building* (1:15:56) OthelloGPT's bag of heuristics and multiple “world models”* (1:21:32) What is a world model?* (1:23:45) The Big Question — from meaning to world models* (1:28:21) From “meaning” to precise questions about LMs* (1:32:01) Mechanistic interpretability and reading tea leaves* (1:35:38) Language and the world* (1:38:07) Towards better language models* (1:43:45) Model editing* (1:45:50) On academia's role in NLP research* (1:49:13) On good science* (1:52:36) OutroLinks:* Jacob's homepage and Twitter* Language Models, World Models, and Human Model-Building* Papers* Semantic Parsing as Machine Translation (2013)* Grounding language with points and paths in continuous spaces (2014)* How much do word embeddings encode about syntax? (2014)* Translating neuralese (2017)* Analogs of linguistic structure in deep representations (2017)* Learning with latent language (2018)* Learning from Language (2018)* Measuring Compositionality in Representation Learning (2019)* Experience grounds language (2020)* Language Models as Agent Models (2022) Get full access to The Gradient at thegradientpub.substack.com/subscribe

The Cartesian Cafe
Jay McClelland | Neural Networks: Artificial and Biological

The Cartesian Cafe

Play Episode Listen Later Oct 2, 2024 179:15


Jay McClelland is a pioneer in the field of artificial intelligence and is a cognitive psychologist and professor at Stanford University in the psychology, linguistics, and computer science departments. Together with David Rumelhart, Jay published the two volume work Parallel Distributed Processing, which has led to the flourishing of the connectionist approach to understanding cognition. In this conversation, Jay gives us a crash course in how neurons and biological brains work. This sets the stage for how psychologists such as Jay, David Rumelhart, and Geoffrey Hinton historically approached the development of models of cognition and ultimately artificial intelligence. We also discuss alternative approaches to neural computation such as symbolic and neuroscientific ones. Patreon (bonus materials + video chat): https://www.patreon.com/timothynguyen Part I. Introduction 00:00 : Preview 01:10 : Cognitive psychology 07:14 : Interdisciplinary work and Jay's academic journey 12:39 : Context affects perception 13:05 : Chomsky and psycholinguists 8:03 : Technical outline Part II. The Brain 00:20:20 : Structure of neurons 00:25:26 : Action potentials 00:27:00 : Synaptic processes and neuron firing 00:29:18 : Inhibitory neurons 00:33:10 : Feedforward neural networks 00:34:57 : Visual system 00:39:46 : Various parts of the visual cortex 00:45:31 : Columnar organization in the cortex 00:47:04 : Colocation in artificial vs biological networks 00:53:03 : Sensory systems and brain maps Part III. Approaches to AI, PDP, and Learning Rules 01:12:35 : Chomsky, symbolic rules, universal grammar 01:28:28 : Neuroscience, Francis Crick, vision vs language 01:32:36 : Neuroscience = bottom up 01:37:20 : Jay's path to AI 01:43:51 : James Anderson 01:44:51 : Geoff Hinton 01:54:25 : Parallel Distributed Processing (PDP) 02:03:40 : McClelland & Rumelhart's reading model 02:31:25 : Theories of learning 02:35:52 : Hebbian learning 02:43:23 : Rumelhart's Delta rule 02:44:45 : Gradient descent 02:47:04 : Backpropagation 02:54:52 : Outro: Retrospective and looking ahead Image credits: http://timothynguyen.org/image-credits/ Further reading: Rumelhart, McClelland. Parallel Distributed Processing. McClelland, J. L. (2013). Integrating probabilistic models of perception and interactive neural networks: A historical and tutorial review   Twitter: @iamtimnguyen   Webpage: http://www.timothynguyen.org

JACC Speciality Journals
JACC: Asia - Fractional Flow Reserve and Fractional Flow Reserve Gradient From CCTA for Predicting Future Coronary Events

JACC Speciality Journals

Play Episode Listen Later Oct 1, 2024 1:55


In this episode, Jian'an Wang examines a study on the predictive power of integrating fractional flow reserve computed tomography (FFR CT) and its local gradient in assessing future coronary events in patients. The findings suggest that this combined approach significantly enhances risk prediction, offering valuable insights for more informed clinical decision-making in managing coronary artery disease.

The Gradient Podcast
Evan Ratliff: Our Future with Voice Agents

The Gradient Podcast

Play Episode Listen Later Sep 26, 2024 79:59


Episode 139I spoke with Evan Ratliff about:* Shell Game, Evan's new podcast, where he creates an AI voice clone of himself and sets it loose. * The end of the Longform Podcast and his thoughts on the state of journalism. Enjoy!Evan is an award-winning investigative journalist, bestselling author, podcast host, and entrepreneur. He's the author of the The Mastermind: A True Story of Murder, Empire, and a New Kind of Crime Lord; the writer and host of the hit podcasts Shell Game and Persona: The French Deception; and the cofounder of The Atavist Magazine, Pop-Up Magazine, and the Longform Podcast. As a writer, he's a two-time National Magazine Award finalist. As an editor and producer, he's a two-time Emmy nominee and National Magazine Award winner.Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. Subscribe to The Gradient Podcast:  Apple Podcasts  | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (01:05) Evan's ambitious and risky projects* (04:45) Wearing different personas as a journalist* (08:31) Boundaries and acceptability in using voice agents* (11:42) Impacts on other people* (13:12) “The kids these days” — how will new technologies impact younger people?* (17:12) Evan's approach to children's technology use* (20:05) Techno-solutionism and improvements in medicine, childcare* (24:15) Evan's perspective on simulations of people* (27:05) On motivations for building tech startups* (30:42) Evan's outlook for Shell Game's impact and motivations for his work* (36:05) How Evan decided to write for a career* (40:02) How voice agents might impact our conversations* (43:52) Evan's experience with Longform and podcasting* (47:15) Perspectives on doing good interviews* (52:11) Mimicking and inspiration, developing style* (57:15) Writers and their motivations, the state of longform journalism* (1:06:15) The internet and writing* (1:09:41) On the ending of Longform* (1:19:48) OutroLinks:* Evan's homepage and Twitter* Shell Game, Evan's new podcast* Longform Podcast Get full access to The Gradient at thegradientpub.substack.com/subscribe

The Chris Voss Show
The Chris Voss Show Podcast – Unlocking the Power of Experiential Marketing with Pauline Oudin CEO and Partner of Gradient Experience

The Chris Voss Show

Play Episode Listen Later Sep 19, 2024 46:12


Unlocking the Power of Experiential Marketing with Pauline Oudin CEO and Partner of Gradient Experience Gradientexperience.com About the Guest(s): Pauline Oudin is the CEO and Partner of Gradient Experience, an experiential marketing agency that emphasizes elevating human connections to drive results. Throughout her career, Pauline has consistently delivered measurable successes for high-profile brands including Cartier, Beam Suntory, and the L'Oreal Group. A French native with over 30 years spent across the US and UK, she is celebrated for her innovative and creative approach to marketing by Fast Company. Episode Summary: In this engaging episode of The Chris Voss Show, host Chris Voss dives into the dynamic world of experiential marketing with his guest, Pauline Oudin. As the CEO and Partner of Gradient Experience, Pauline shares her expert insights on how transforming human interactions can powerfully elevate brand results. They explore the nuances of experiential marketing, distinguishing it from traditional marketing and discussing its long-term impact on consumer loyalty and brand recognition. Pauline illustrates the value of creating memorable experiences over passive advertising. Through detailed examples from high-profile clients such as Cartier and Beam Suntory, she explains how brands can foster emotional connections and enhance consumer engagement. Additionally, the conversation touches on the innovations in measurability within the field, demonstrating how modern tools can track the effectiveness of these interactive campaigns. Pauline emphasizes the importance of integrated, participatory, and community-building approaches in making these experiences truly impactful. Key Takeaways: Experiential Marketing Defined: Unlike traditional advertising, experiential marketing involves creating interactive and participatory experiences that foster emotional connections and brand loyalty. Measurability Innovations: New AI tools and technologies are enhancing the ability to measure the long-term impact of experiential marketing, providing deeper insights into customer engagement and ROI. Case Studies: Examples from major brands like Cartier and Beam Suntory highlight how well-executed experiential marketing campaigns can create lasting impressions and build community. Pandemic Adaptations: The episode discusses how brands adapted their strategies during the pandemic, emphasizing the continued need for live, interactive content. Priceless Brand Connections: Pauline underscores the importance of prioritizing long-term customer relationships and brand integrity over short-term sales tactics. Notable Quotes: “When you think about most marketing, brands tell you a story…experiential marketing is all about living the story.” - Pauline Oudin “Hearts move minds. We like to believe we are logical when we purchase products, but at the end of the day, it's all about emotional connections.” - Pauline Oudin “In experiential marketing, it's not just about who was affected on the day of the event, but the ripples that experience creates.” - Pauline Oudin “We live in such an exciting time with AI tools allowing us to measure things we couldn't before.” - Pauline Oudin “If you turn off your social media ads and your TV spots, what's left is what you've built in brand equity.” - Pauline Oudin

A Duck in a Tree
A Duck in a Tree 2024-09-14 | Estrangement in Effect

A Duck in a Tree

Play Episode Listen Later Sep 17, 2024 58:51


The 636th of a series of weekly radio programmes created by :zoviet*france: First broadcast 14 September 2024 by Resonance 104.4 FM and CJMP 90.1 FM. Thanks to the artists and sound recordists included here for their fine work.

track list
… :zoviet*france: - A Duck in a Tree Link 636
00 Michael Serafin-Wells (Bipolar Explorer) - Intro
01 :zoviet*france: - Lap Steel Phrase 2 [x2]
02 :zoviet*france: - The White Sea
03 Rabbitsquirrel - Your Favorite Until Blue in the Nails
04 Fletina - Late Shift [extract]
05 Mutant Beatniks - Drift
06 Rubbish Music - Huge and Disgusting 300 Tonne Fatberg
07 :zoviet*france: - Zerlest Before
08 [unknown sound recordist] - A '1400' Class 0-4-2 Tank Engine, No.1465, Propelling the Single Coach of a Push and Pull Train, Chatters Past, down the Gradient, Towards Rhosymedre Halt
09 Dave Phillips - [untitled – 'IIII' track 5]
10 [unknown sound recordist / BBC] - Morocco – Souk (Market) Rabat
11 Michael Scott Dawson - I'll Always Answer
++ Michael Serafin-Wells (Bipolar Explorer) - Outro
… :zoviet*france: - A Duck in a Tree Link 636

The Gradient Podcast
Meredith Ringel Morris: Generative AI's HCI Moment

The Gradient Podcast

Play Episode Listen Later Sep 12, 2024 97:45


Episode 138I spoke with Meredith Morris about:* The intersection of AI and HCI and why we need more cross-pollination between AI and adjacent fields* Disability studies and AI* Generative ghosts and technological determinism* Developing a useful definition of AGII didn't get to record an intro for this episode since I've been sick. Enjoy!Meredith is Director for Human-AI Interaction Research for Google DeepMind and an Affiliate Professor in The Paul G. Allen School of Computer Science & Engineering and in The Information School at the University of Washington, where she participates in the dub research consortium. Her work spans the areas of human-computer interaction (HCI), human-centered AI, human-AI interaction, computer-supported cooperative work (CSCW), social computing, and accessibility. She has been recognized as an ACM Fellow and ACM SIGCHI Academy member for her contributions to HCI.Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. Subscribe to The Gradient Podcast:  Apple Podcasts  | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Meredith's influences and earlier work* (03:00) Distinctions between AI and HCI* (05:56) Maturity of fields and cross-disciplinary work* (09:03) Technology and ends* (10:37) Unique aspects of Meredith's research direction* (12:55) Forms of knowledge production in interdisciplinary work* (14:08) Disability, Bias, and AI* (18:32) LaMPost and using LMs for writing* (20:12) Accessibility approaches for dyslexia* (22:15) Awareness of AI and perceptions of autonomy* (24:43) The software model of personhood* (28:07) Notions of intelligence, normative visions and disability studies* (32:41) Disability categories and learning systems* (37:24) Bringing more perspectives into CS research and re-defining what counts as CS research* (39:36) Training interdisciplinary researchers, blurring boundaries in academia and industry* (43:25) Generative Agents and public imagination* (45:13) The state of ML conferences, the need for more cross-pollination* (46:42) Prestige in conferences, the move towards more cross-disciplinary work* (48:52) Joon Park Appreciation* (49:51) Training interdisciplinary researchers* (53:20) Generative Ghosts and technological determinism* (57:06) Examples of generative ghosts and clones, relationships to agentic systems* (1:00:39) Reasons for wanting generative ghosts* (1:02:25) Questions of consent for generative clones and ghosts* (1:05:01) Labor involved in maintaining generative ghosts, psychological tolls* (1:06:25) Potential religious and spiritual significance of generative systems* (1:10:19) Anthropomorphization* (1:12:14) User experience and cognitive biases* (1:15:24) Levels of AGI* (1:16:13) Defining AGI* (1:23:20) World models and AGI* (1:26:16) Metacognitive abilities in AGI* (1:30:06) Towards Bidirectional Human-AI Alignment* (1:30:55) Pluralistic value alignment* (1:32:43) Meredith's perspective on deploying AI systems* (1:36:09) Meredith's advice for younger interdisciplinary researchersLinks:* Meredith's homepage, Twitter, and Google Scholar* Papers* Mediating Group Dynamics through Tabletop Interface Design* SearchTogether: An Interface for Collaborative Web Search* AI and Accessibility: A Discussion of Ethical Considerations* Disability, Bias, and AI* LaMPost: Design and Evaluation of an AI-assisted Email Writing Prototype for Adults with Dyslexia* Generative Ghosts* Levels of AGI Get full access to The Gradient at 
thegradientpub.substack.com/subscribe

The Gradient Podcast
Davidad Dalrymple: Towards Provably Safe AI

The Gradient Podcast

Play Episode Listen Later Sep 5, 2024 80:50


Episode 137I spoke with Davidad Dalrymple about:* His perspectives on AI risk* ARIA (the UK's Advanced Research and Invention Agency) and its Safeguarded AI ProgrammeEnjoy—and let me know what you think!Davidad is a Programme Director at ARIA. He was most recently a Research Fellow in technical AI safety at Oxford. He co-invented the top-40 cryptocurrency Filecoin, led an international neuroscience collaboration, and was a senior software engineer at Twitter and multiple startups.Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. Subscribe to The Gradient Podcast:  Apple Podcasts  | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (00:36) Calibration and optimism about breakthroughs* (03:35) Calibration and AGI timelines, effects of AGI on humanity* (07:10) Davidad's thoughts on the Orthogonality Thesis* (10:30) Understanding how our current direction relates to AGI and breakthroughs* (13:33) What Davidad thinks is needed for AGI* (17:00) Extracting knowledge* (19:01) Cyber-physical systems and modeling frameworks* (20:00) Continuities between Davidad's earlier work and ARIA* (22:56) Path dependence in technology, race dynamics* (26:40) More on Davidad's perspective on what might go wrong with AGI* (28:57) Vulnerable world, interconnectedness of computers and control* (34:52) Formal verification and world modeling, Open Agency Architecture* (35:25) The Semantic Sufficiency Hypothesis* (39:31) Challenges for modeling* (43:44) The Deontic Sufficiency Hypothesis and mathematical formalization* (49:25) Oversimplification and quantitative knowledge* (53:42) Collective deliberation in expressing values for AI* (55:56) ARIA's Safeguarded AI Programme* (59:40) Anthropic's ASL levels* (1:03:12) Guaranteed Safe AI — * (1:03:38) AI risk and (in)accurate world models* (1:09:59) Levels of safety specifications for world models and verifiers — steps to achieve high safety* (1:12:00) Davidad's portfolio research approach and funding at ARIA* (1:15:46) Earlier concerns about ARIA — Davidad's perspective* (1:19:26) Where to find more information on ARIA and the Safeguarded AI Programme* (1:20:44) OutroLinks:* Davidad's Twitter* ARIA homepage* Safeguarded AI Programme* Papers* Guaranteed Safe AI* Davidad's Open Agency Architecture for Safe Transformative AI* Dioptics: a Common Generalization of Open Games and Gradient-Based Learners (2019)* Asynchronous Logic Automata (2008) Get full access to The Gradient at thegradientpub.substack.com/subscribe

The Gradient Podcast
Clive Thompson: Tales of Technology

The Gradient Podcast

Play Episode Listen Later Aug 29, 2024 147:35


Episode 136I spoke with Clive Thompson about:* How he writes* Writing about the climate and biking across the US* Technology culture and persistent debates in AI* PoetryEnjoy—and let me know what you think!Clive is a journalist who writes about science and technology. He is a contributing writer forWired magazine, and is currently writing his next book about micromobility and cycling across the US.Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. Subscribe to The Gradient Podcast:  Apple Podcasts  | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (01:07) Clive's life as a Tarantino movie* (03:07) Boring life and interesting art, life as material for art* (10:25) Cycling across the US — Clive's new book on mobility and decarbonization* (15:07) Turning inward in writing* (27:21) Including personal experience in writing* (31:53) Personal and less personal writing* (36:08) Conveying uncertainty and the “voice from nowhere” in traditional journalism* (41:10) Finding the natural end of a piece* (1:02:10) Writing routine* (1:05:08) Theories of change in Clive's writing* (1:12:33) How Clive saw things before the rest of us* (1:27:00) Automation in software engineering* (1:31:40) The anthropology of coders, poetry as a framework* (1:43:50) Proust discourse* (1:45:00) Technology culture in NYC + interaction between the tech world and other worlds* (1:50:30) Technological developments Clive wants to see happen (free ideas)* (2:01:11) Clive's argument for memorizing poetry* (2:09:24) How Clive finds poetry* (2:18:03) Clive's pursuit of freelance writing and making compromises* (2:27:25) OutroLinks:* Clive's Twitter and website* Selected writing* The Attack of the Incredible Grading Machine (Lingua Franca, 1999)* The Know-It-All Machine (Lingua Franca, 2001)* How to teach AI some common sense (Wired, 2018)* Blogs to Riches (NY Mag, 2006)* Clive vs. Jonathan Franzen on whether the internet is good for writing (The Chronicle of Higher Education, 2013)* The Minecraft Generation (New York Times, 2016)* What AI College Exam Proctors are Really Teaching Our Kids (Wired, 2020)* Companies Don't Need to Be Creepy to Make Money (Wired, 2021)* Is Sucking Carbon Out of the Air the Solution to Our Climate Crisis? (Mother Jones, 2021)* AI Shouldn't Compete with Workers—It Should Supercharge Them (Wired, 2022)* Back to BASIC—the Most Consequential Programming Language in the History of Computing Wired, 2024) Get full access to The Gradient at thegradientpub.substack.com/subscribe

Fintech Leaders
Darian Shirazi from Gradient Ventures - Empathy, Curiosity, and the Immigrant Mindset

Fintech Leaders

Play Episode Listen Later Aug 27, 2024 40:04


Miguel Armaza interviews Darian Shirazi, General Partner at Gradient Ventures, an early-stage venture fund focused on AI and data-rich companies, with Google as its major LP.

Prior to Gradient, Darian founded and sold Radius, a B2B Customer Data Platform. He is also an early investor in companies including Lyft, Udemy, Carbon Health, Palantir, and Niva. Fun fact, he was Facebook's first intern back when the company had only 10 people.

We discuss:

* Joining Facebook as its first intern and working directly for Mark Zuckerberg. Why is Mark such a special tech leader?
* Gradient's three different strategies to invest in AI
* Why the evolution of AI will be exponential and not linear
* The incredible story of how Darian's family had to escape Iran and rebuilt itself into one of the strongest tech families in the US

… and a lot more!

Want more podcast episodes? Join me and follow Fintech Leaders today on Apple, Spotify, or your favorite podcast app for weekly conversations with today's global leaders that will dominate the 21st century in fintech, business, and beyond.

Do you prefer a written summary? Check out the Fintech Leaders newsletter and join ~70,000+ readers and listeners worldwide!

Miguel Armaza is Co-Founder and General Partner of Gilgamesh Ventures, a seed-stage investment fund focused on fintech in the Americas. He also hosts and writes the Fintech Leaders podcast and newsletter.

Miguel on LinkedIn: https://bit.ly/3nKha4Z
Miguel on Twitter: https://bit.ly/2Jb5oBc
Fintech Leaders Newsletter: bit.ly/3jWIp

The Gradient Podcast
Judy Fan: Reverse Engineering the Human Cognitive Toolkit

The Gradient Podcast

Play Episode Listen Later Aug 22, 2024 92:39


Episode 136

I spoke with Judy Fan about:

* Our use of physical artifacts for sensemaking
* Why cognitive tools can be a double-edged sword
* Her approach to scientific inquiry and how that approach has developed

Enjoy—and let me know what you think!

Judy is Assistant Professor of Psychology at Stanford and director of the Cognitive Tools Lab. Her lab employs converging approaches from cognitive science, computational neuroscience, and artificial intelligence to reverse engineer the human cognitive toolkit, especially how people use physical representations of thought — such as sketches and prototypes — to learn, communicate, and solve problems.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro
* (00:49) Throughlines and discontinuities in Judy's research
* (06:26) "Meaning" in Judy's research
* (08:05) Production and consumption of artifacts
* (13:03) Explanatory questions, why we develop visual artifacts, science as a social enterprise
* (15:46) Unifying principles
* (17:45) "Hard limits" to knowledge and optimism
* (21:47) Tensions in different fields' forms of sensemaking and establishing truth claims
* (30:55) Dichotomies and carving up the space of possible hypotheses, conceptual tools
* (33:22) Cognitive tools and projectivism, simplified models vs. nature
* (40:28) Scientific training and science as process and habit
* (45:51) Developing mental clarity about hypotheses
* (51:45) Clarifying and expressing ideas
* (1:03:21) Cognitive tools as double-edged
* (1:14:21) Historical and social embeddedness of tools
* (1:18:34) How cognitive tools impact our imagination
* (1:23:30) Normative commitments and the role of cognitive science outside the academy
* (1:32:31) Outro

Links:

* Judy's Twitter and lab page
* Selected papers (there are lots!)
* Overviews
* Drawing as a versatile cognitive tool (2023)
* Using games to understand the mind (2024)
* Socially intelligent machines that learn from humans and help humans learn (2024)
* Research papers
* Communicating design intent using drawing and text (2024)
* Creating ad hoc graphical representations of number (2024)
* Visual resemblance and interaction history jointly constrain pictorial meaning (2023)
* Explanatory drawings prioritize functional properties at the expense of visual fidelity (2023)
* SEVA: Leveraging sketches to evaluate alignment between human and machine visual abstraction (2023)
* Parallel developmental changes in children's production and recognition of line drawings of visual concepts (2023)
* Learning to communicate about shared procedural abstractions (2021)
* Visual communication of object concepts at different levels of abstraction (2021)
* Relating visual production and recognition of objects in the human visual cortex (2020)
* Collabdraw: an environment for collaborative sketching with an artificial agent (2019)
* Pragmatic inference and visual abstraction enable contextual flexibility in visual communication (2019)
* Common object representations for visual production and recognition (2018)

Get full access to The Gradient at thegradientpub.substack.com/subscribe

The Gradient Podcast
L.M. Sacasas: The Questions Concerning Technology

The Gradient Podcast

Play Episode Listen Later Aug 15, 2024 107:20


Episode 135

I spoke with L. M. Sacasas about:

* His writing and intellectual influences
* The value of asking hard questions about technology and our relationship to it
* What happens when we decide to outsource skills and competency
* Evolving notions of what it means to be human and questions about how to live a good life

Enjoy—and let me know what you think!

Michael is Executive Director of the Christian Study Center of Gainesville, Florida and author of The Convivial Society, a newsletter about technology and society. He does some of the best writing on technology I've had the pleasure to read, and I highly recommend his newsletter.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro
* (01:12) On podcasts as a medium
* (06:12) Michael's writing
* (12:38) Michael's intellectual influences, contingency
* (18:48) Moral seriousness
* (22:00) Michael's ambitions for his work
* (26:17) The value of asking the right questions (about technology)
* (34:18) Technology use and the "natural" pace of human life
* (46:40) Outsourcing of skills and competency, engagement with others
* (55:33) Inevitability narratives and technological determinism, the "Borg Complex"
* (1:05:10) Notions of what it is to be human, embodiment
* (1:12:37) Higher cognition vs. the body, dichotomies
* (1:22:10) The body as a starting point for philosophy, questions about the adoption of new technologies
* (1:30:01) Enthusiasm about technology and the cultural milieu
* (1:35:30) Projectivism, desire for knowledge about and control of the world
* (1:41:22) Positive visions for the future
* (1:47:11) Outro

Links:

* Michael's Substack: The Convivial Society and his book, The Frailest Thing: Ten Years of Thinking about the Meaning of Technology
* Michael's Twitter
* Essays
* Humanist Technology Criticism
* What Does the Critic Love?
* The Ambling Mind
* Waste Your Time, Your Life May Depend On It
* The Work of Art
* The Stuff of (a Well-Lived) Life

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Consensus in Conversation
Barclay Rogers of Graphyte on Durable Carbon Removal, Climate Science, and Biomass Solutions

Consensus in Conversation

Play Episode Listen Later Aug 8, 2024 50:20


The world currently removes less than 0.0001% of the carbon required to meet the IPCC's 2050 goal – so there's A LOT of work to be done – which is what makes Graphyte's ready-to-go solution all the more valuable.

Barclay Rogers, a former environmental lawyer, mechanical engineer, and multi-time founder, is bringing his unique set of experiences and leadership skills to making scalable, affordable CO2 removal an immediate reality with his new startup, Graphyte. Rather than rely on energy-intensive and still-developing technology like Direct Air Capture, Barclay and his team realized that they could use nature's own hyper-efficient carbon capture process – photosynthesis – to leverage natural resources in their revolutionary Carbon Casting process, a first-of-its-kind technique that traps carbon in easy-to-store bricks forged from the biomass waste generated by farms, logging camps, and paper mills. With their world-leading carbon removal operations already underway at their Pine Bluff, Arkansas facility, Graphyte is able to offer durable carbon removal that's scalable, affordable, and, maybe most importantly, ready right now.

Hear Barclay Rogers talk about Graphyte's origin story, the importance of scalability for climate solutions, and why his native Arkansas is the perfect home for biomass-based carbon removal.

Episode Highlights
00:00 Barclay Rogers on carbon removal in the heartland
00:32 Conor Gaughan introduces Barclay Rogers and Graphyte
04:37 Arkansas roots, natural resources, and mechanical engineering
09:36 Environmental law, government, and pivot to entrepreneurship
19:20 Startup career, the carbon industry, and the potential of biomass
25:25 The origin of Graphyte, durable carbon removal, and scalability
33:43 Innovation curves, public policy factors, and the value of carbon
40:24 Breakthrough Ventures, climate change, and growing a community
46:11 Leaving a legacy and finding motivation
47:43 Where to learn more and end credits

If you liked this episode, listen next to Dr. Vince Romanin of Gradient on Heat Pumps, Zero-Carbon Infrastructure, and the Triple Bottom Line.

More on Graphyte and Barclay Rogers:
graphyte.com
linkedin.com/company/graphytecarbon
linkedin.com/in/barclayrogers

Connect with Conor Gaughan on linkedin.com/in/ckgone and threads.net/@ckgone

Have questions, or a great idea for a potential guest? Email us at CiC@consensus-digital.com

If you enjoyed this episode, please rate and review the show on Apple Podcasts and Spotify – it really makes a difference!

Consensus in Conversation is a podcast by Consensus Digital Media produced in association with Reasonable Volume. Hosted on Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Thank you for 1m downloads of the podcast and 2m readers of the Substack!

Choses à Savoir
Pourquoi parle-t-on de vol de gradient ? (Why do we speak of "gradient soaring"?)

Choses à Savoir

Play Episode Listen Later Jul 23, 2024 1:48


Some seabirds are able to exploit the difference in speed between two air masses to fly at high speed without expending any energy. This technique of "gradient soaring" (dynamic soaring) is also used by pilots of radio-controlled gliders. Hosted on Acast. See acast.com/privacy for more information.
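As a rough back-of-the-envelope sketch (my own illustration, not from the episode): suppose a bird flying upwind with airspeed v in a slow layer of air climbs into a layer that moves Δw faster against its direction of flight. Relative to the new air mass its airspeed jumps by about Δw, so its kinetic energy in the air frame grows without a single wingbeat:

$$\Delta E \approx \tfrac{1}{2} m (v + \Delta w)^2 - \tfrac{1}{2} m v^2 = m\, v\, \Delta w + \tfrac{1}{2} m\, \Delta w^2$$

Turning and diving back into the slow layer repeats the gain, which is why albatrosses and radio-controlled gliders can keep picking up speed across a wind gradient at essentially no energetic cost.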

SuperDataScience
799: AGI Could Be Near: Dystopian and Utopian Implications, with Dr. Andrey Kurenkov

SuperDataScience

Play Episode Listen Later Jul 9, 2024 105:48


No-code games with GenAI, the creative possibilities of LLMs, and our proximity to AGI: In this episode, Jon Krohn talks to Andrey Kurenkov about what turned him from an AGI skeptic to a positivist. You'll also hear about his wildly popular podcast "Last Week in AI" and how the NVIDIA-backed startup Astrocade is helping videogame enthusiasts to create their own games through generative AI. A must-listen!

This episode is brought to you by AWS Inferentia (https://go.aws/3zWS0au) and AWS Trainium (https://go.aws/3ycV6K0). Interested in sponsoring a SuperDataScience Podcast episode? Email natalie@superdatascience.com for sponsorship information.

In this episode you will learn:
• All about The Gradient and Last Week in AI [10:42]
• All about Astrocade and Andrey's role at the startup [24:35]
• Balancing UX and creative control at Astrocade [42:00]
• The creative possibilities of LLMs [1:04:15]
• The rapid emergence of AGI [1:10:31]

Additional materials: www.superdatascience.com/799

Fularsız Entellik
AI 101: Eğitim Şart (Training Is a Must)

Fularsız Entellik

Play Episode Listen Later Jun 20, 2024 18:31


In the fifth episode of our artificial intelligence series, we focus on how neural networks are trained. We'll get into the details and look at three important concepts:

* the loss function (also called the cost or error function),
* gradient descent, a method for reducing that error,
* and backpropagation, the technique that allows gradient descent to be applied efficiently.

Topics:
(01:53) Cost function
(05:01) Gradient descent
(11:04) Backpropagation
(15:28) Vanishing gradient
(16:51) Test vs. training
(17:47) Patreon thanks

Sources:
Video: The Most Important Algorithm in Machine Learning
Video: Backpropagation explained | Part 1 - The intuition
Video: Watching Neural Networks Learn

------- Presented by Podbee -------
This podcast includes an advertisement for AgeSA. Invest with peace of mind with AgeSA BES: a wide range of high-yield fund options, a 30% state contribution, and financial advisory, all at AgeSA. Click to feel better about your investments.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
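To make those three concepts concrete, here is a minimal sketch of how they fit together (my own illustration, not material from the episode; the toy data, layer sizes, and learning rate are all assumptions): the loss (cost) function measures the error, backpropagation uses the chain rule to compute each parameter's gradient, and gradient descent nudges every parameter against its gradient.

```python
import numpy as np

# Toy setup: a one-hidden-layer network fit to y = sin(3x) with plain
# full-batch gradient descent. Names and sizes are made up for illustration.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3 * X)

W1, b1 = 0.5 * rng.normal(size=(1, 16)), np.zeros(16)   # hidden layer
W2, b2 = 0.5 * rng.normal(size=(16, 1)), np.zeros(1)    # output layer
lr = 0.1                                                 # learning rate

for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2

    # Loss (cost) function: mean squared error
    loss = np.mean((pred - y) ** 2)

    # Backpropagation: chain rule applied layer by layer, output to input
    d_pred = 2 * (pred - y) / len(X)          # dLoss/dPred
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)      # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient descent: step every parameter against its gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```

Deeper stacks of such layers are also where the vanishing-gradient problem mentioned in the topic list shows up: repeated multiplication by small tanh derivatives shrinks the gradients that reach the early layers.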