This week on The Common Bridge, Richard's guest is M.L. Elrick, a Pulitzer Prize winner. They discuss the closure of a media room at City Hall in Detroit, Michigan, and how media access to City Hall helped expose the crimes of former Detroit Mayor Kwame Kilpatrick. This ultimately led to his resignation from office and his criminal conviction in 2013. Additionally, they explore the responsibility of the press to report facts and push back against the spread of affirmation news reporting that is prevalent in the country today. They also touch on the role of national media in this context. Support the show. Engage the conversation on Substack at The Common Bridge!
In this episode, Dr. Geo provides valuable insights into the role of these often overlooked nutrients in reducing the risk of heart disease and strokes. By incorporating them into your daily routine, you can take proactive steps toward maintaining a healthy cardiovascular system. Don't miss out on this episode packed with evidence-based information and practical tips to improve your heart health. Subscribe to the podcast and join Dr. Geo as he delves deeper into the five essential nutrients that can significantly reduce the risk of heart attacks and strokes. --------------- Thank you to our sponsors. This episode is brought to you by the ExoDx™ Prostate Test. The ExoDx™ Prostate Test is a simple, non-DRE, urine-based, liquid biopsy test indicated for men 50 years of age and older with a prostate-specific antigen (PSA) of 2-10 ng/mL, or PSA in the “gray zone,” who may be considering a biopsy. The ExoDx Prostate test provides a risk score that determines a patient's potential risk of clinically significant prostate cancer (Gleason Score ≥7). The test is included in the National Comprehensive Cancer Network (NCCN) guidelines and has been clinically validated at the cut-point of 15.6 with a 91% sensitivity and 92% negative predictive value, meaning there is less than a 9% chance of having aggressive prostate cancer below the validated cut-point of 15.6. Ask your urologist about the ExoDx Prostate Test. This episode is also sponsored by Calroy Health and Sciences, presenting their revolutionary combination of products for vascular health. Arterosil HP® supports the structure and normal function of the endothelial glycocalyx, and their new Nitric Oxide Support with Calroy's proprietary Vascanox HP formula targets nitric oxide metabolism using multiple pathways. This Dr. Geo Podcast is supported by AG1 (Athletic Greens). AG1 contains 75 high-quality vitamins, minerals, whole-food sourced ingredients, probiotics, and adaptogens to help you start your day right.
This special blend of ingredients supports your gut health, nervous system, immune system, energy, recovery, focus, and aging. All the things. Enjoy AG1 (Athletic Greens). ---------------- Thanks for listening to this week's episode. Subscribe to The Dr. Geo YouTube Channel to get more content like this and learn how you can live better with age. You can also listen to this episode and future episodes of the Dr. Geo Podcast by clicking HERE. ---------------- Follow Dr. Geo on social media: Facebook, Instagram. Click here to become a member of Dr. Geo's Health Community. Improve your urological health with Dr. Geo's formulated supplement lines: XY Wellness for prostate cancer lifestyle and nutrition; Mr. Happy Nutraceutical Supplements for prostate health and male optimal living. You can also check out Dr. Geo's online dispensary for other supplement recommendations.
On this episode, we're joined by Jean Marc Alkazzi, Applied AI at idealworks. Jean focuses his attention on applied AI, leveraging the use of autonomous mobile robots (AMRs) to improve efficiency within factories and more. We discuss: - Use cases for autonomous mobile robots (AMRs) and how to manage a fleet of them. - How AMRs interact with humans working in warehouses. - The challenges of building and deploying autonomous robots. - Computer vision vs. other types of localization technology for robots. - The purpose and types of simulation environments for robotic testing. - The importance of aligning a robotic fleet's workflow with concrete business objectives. - What the update process looks like for robots. - The importance of avoiding your own biases when developing and testing AMRs. - The challenges associated with troubleshooting ML systems. Resources: Jean Marc Alkazzi - https://www.linkedin.com/in/jeanmarcjeanazzi/ | idealworks on LinkedIn - https://www.linkedin.com/company/idealworks-gmbh/ | idealworks website - https://idealworks.com/ Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation. #OCR #DeepLearning #AI #Modeling #ML
In this week's episode, Anna (https://twitter.com/annarrose) sits down with Yi Sun (https://twitter.com/theyisun), co-founder of Axiom (https://www.axiom.xyz/). Yi was recently on the show to discuss ZK ML; this time they take a closer look at the Axiom project and what it means to be a ZK coprocessor for Ethereum. During the interview they also explore what problems Axiom is trying to solve, how ZKPs are used to help bring historic data into smart contracts, and what new use cases this architecture can support. Here are some additional links for this episode: Axiom Demo Release (https://demo.axiom.xyz/account-age) Certifying Zero-Knowledge Circuits with Refinement Types by Junrui Liu, Ian Kretz, Hanzhi Liu, Bryan Tan, Jonathan Wang, Yi Sun, Luke Pearson, Anders Miltner, Isil Dillig, and Yu Feng (https://eprint.iacr.org/2023/547.pdf) Episode 265: Where ZK and ML intersect with Yi Sun and Daniel Kang (https://zeroknowledge.fm/265-2/) Check out the ZK Jobs Board here: ZK Jobs (https://jobsboard.zeroknowledge.fm/). Find your next job working in ZK! Aleo (https://www.aleo.org/) is a new Layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. Interested in building private applications? Check out Aleo's programming language called Leo that enables non-cryptographers to harness the power of ZKPs to deploy decentralized exchanges, hidden information games, regulated stablecoins, and more. Visit http://developer.aleo.org (http://developer.aleo.org/). For questions, join their Discord at aleo.org/discord (http://aleo.org/discord). If you like what we do: * Find all our links here! @ZeroKnowledge | Linktree (https://linktr.ee/zeroknowledge) * Subscribe to our podcast newsletter (https://zeroknowledge.substack.com) * Follow us on Twitter @zeroknowledgefm (https://twitter.com/zeroknowledgefm) * Join us on Telegram (https://zeroknowledge.fm/telegram) * Catch us on YouTube (https://zeroknowledge.fm/)
In this thought-provoking episode of Conf T, hosts Bryan Young and Tom Porto sit down with Joe Marshall, also known as Rooster Cogburn. Joe, a Senior Security Strategist with Talos, brings his expertise in artificial intelligence and machine learning to the table. AI vs ML: We kick off the episode by addressing a common confusion - what is the difference between Artificial Intelligence (AI) and Machine Learning (ML)? Thanks to an enlightening explanation from ChatGPT, we now know that AI is the broader concept of machines being able to carry out tasks in a way we would consider 'smart', while ML is a specific subset of AI that involves the idea of training machines on data to learn and make decisions. AI Confidence and Source Citation: A deep dive into the issues surrounding AI's confidence and its lack of source citation. We discuss why it's so important for AI to cite its sources and how it can be improved. Ethical Guardrails in AI: Joe gives an insightful perspective on the ethical guardrails in place for AI and how they can potentially be circumvented, sparking a thoughtful debate on the ethical use of this technology. AI Sentience: Is AI sentient? This question sparks a fascinating discussion about the nature of sentience and how it applies, or doesn't apply, to AI. AI and the Dark Web: We explore the sinister side of AI with a look at its use on the Dark Web. How is AI being leveraged by those operating outside the law, and what can be done about it? AI for Phishing and Other Nefarious Tasks: We delve into the concerning issue of AI being used for phishing and other malicious activities. What are the implications, and how can we defend against it? AI Pranks: On a lighter note, we discuss some of the more amusing uses of AI, sharing anecdotes about pranks and other humorous applications and even pulling one live on the show on Mr. Porto. 
Combating AI-Driven Cybersecurity Threats: Joe gives his expert advice on how to go on the offensive against AI-driven cybersecurity threats. Using Failure to Determine Personhood: An intriguing concept is presented - using failure as a way to determine whether an entity is a real person or AI. Future of AI: Finally, we contemplate the future of AI, now that the barrier to entry has been drastically lowered. What's next on the horizon for this rapidly evolving technology? Tune in to gain some valuable insights into these topics and much more. Whether you're a tech enthusiast or a professional in the field, this is an episode you won't want to miss.
Learn about the Coleman Young most folks never knew when legendary Free Press reporter Bill McGraw schools ML, Marc and […]
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Yann LeCun is VP & Chief AI Scientist at Meta and Silver Professor at NYU affiliated with the Courant Institute of Mathematical Sciences & the Center for Data Science. He was the founding Director of FAIR and of the NYU Center for Data Science. After a postdoc in Toronto he joined AT&T Bell Labs in 1988, and AT&T Labs in 1996 as Head of Image Processing Research. He joined NYU as a professor in 2003 and Meta/Facebook in 2013. He is the recipient of the 2018 ACM Turing Award for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing". Huge thanks to David Marcus for helping to make this happen. In Today's Episode with Yann LeCun: 1.) The Road to AI OG: How did Yann first hear about machine learning and make his foray into the world of AI? For 10-plus years, machine learning was in the shadows; how did Yann not get discouraged when the world did not appreciate the power of AI and ML? What does Yann know now that he wishes he had known when he started his career in machine learning? 2.) The Next Five Years of AI: Hope or Horror: Why does Yann believe it is nonsense that AI is dangerous? Why does Yann think it is crazy to assume that AI will even want to dominate humans? Why does Yann believe digital assistants will rule the world? If digital assistants do rule the world, what interface wins? Search? Chat? What happens to Google when digital assistants rule the world? 3.) Will Anyone Have Jobs in a World of AI: From speaking to many economists, why does Yann state "no economist thinks AI will replace jobs"? What jobs does Yann expect to be created in the next generation of the AI economy? What jobs does Yann believe are under more immediate threat/impact? Why does Yann expect the speed of transition to be much slower than people anticipate? Why does Yann believe Elon Musk is wrong to ask for the pausing of AI developments? 4.)
Open or Closed: Who Wins: Why does Yann believe that the open model will beat the closed model? Why is it superior for knowledge gathering and idea generation? What are some core historical precedents that have proved this to be true? What did Yann make of the leaked Google memo last week? 5.) Startup vs Incumbent: Who Wins: Who does Yann believe will win the next 5 years of AI: startups or incumbents? How important are large models to winning in the next 12 months? In what ways do regulation and legal constraints stop incumbents? How has he seen this at Meta? Has his role at Meta ever stopped him from being impartial? How does Yann deal with that?
Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan, Jonathan, and Matthew are all here this week to discuss the latest news and announcements in the world of cloud and AI - including New Relic Grok, Athena Provisioned Capacity from AWS, and updates to the Azure Virtual Desktop. Titles we almost went with this week: None! This week's title was SO GOOD we didn't bother with any alternates. Sometimes it's just like that, you know? A big thanks to this week's sponsor: Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.
How are companies leveraging IoT to improve sustainability? Jeffrey Hausman, Chief Product Officer at Samsara, joins Ryan Chacon on the IoT For All Podcast to discuss leveraging IoT for a sustainable future. They cover the role of electric vehicles in creating a sustainable supply chain, ESG goals, the value of workplace safety, the challenges of implementing IoT for sustainability initiatives, and the technologies and trends that will be part of a sustainable future. Jeffrey Hausman leads Samsara's global product organization where he oversees the company's platform, product vision, and development activities to help customers improve the safety, efficiency, and sustainability of their physical operations. With over 25 years of experience, he brings a proven track record for scaling large and transformative software companies. Prior to Samsara, Jeffrey led ServiceNow's Operations Management Portfolio as Senior Vice President and General Manager. Previously, he held senior executive positions at McAfee, Symantec, Hewlett-Packard, and Veritas, and has served as a CEO and COO for privately held companies. Earlier in his career, Jeffrey worked as a consultant to Fortune 500 companies as part of Booz & Co. Jeffrey received his MBA at Dartmouth's Tuck School of Business and holds a bachelor's degree in math and economics from Claremont McKenna College. Samsara is the pioneer of the Connected Operations™ Cloud, which is a system of record that enables organizations that depend on physical operations to harness Internet of Things (IoT) data to develop actionable insights and improve their operations. Samsara operates in North America and Europe and serves tens of thousands of customers across a wide range of industries including transportation, wholesale and retail trade, construction, field services, logistics, utilities and energy, government, healthcare and education, manufacturing, and food and beverage. 
The company's mission is to increase the safety, efficiency, and sustainability of the operations that power the global economy. Discover more about sustainability and IoT at https://www.iotforall.com More about Samsara: https://www.samsara.com Connect with Jeffrey: https://www.linkedin.com/in/jehausman/ Key Questions and Topics from this Episode: (00:00) Welcome to the IoT For All Podcast (00:52) Introduction to Jeffrey and Samsara (04:14) How IoT is being used to improve sustainability (07:55) The role of EVs in a sustainable supply chain (11:23) How does safety play a role in ESG goals? (13:50) The value of workplace safety (17:01) IoT challenges and advice for solving them (20:10) Technologies and trends to look out for (24:32) Learn more and follow up SUBSCRIBE TO THE CHANNEL: https://bit.ly/2NlcEwm Join Our Newsletter: https://www.iotforall.com/iot-newsletter Follow Us on Twitter: https://twitter.com/iotforall Check out the IoT For All Media Network: https://www.iotforall.com/podcast-overview
How do we prepare our kids for jobs that don't exist? Studies show that technology is progressing at such a rapid pace that up to 85% of the jobs that will be available in 2040 have not been created yet. Will AI, ML, and hardware advancements create a society where careers we take for granted today won't exist in the future? In this episode featuring hosts Grace Ewura-Esi and Amy Tobey, Producer John Taylor puts a personal face on this idea through his 13-year-old daughter, Ella, who wants to be a chef when she grows up. Together, they explore this issue with Executive Chef-turned-Dell Computer Advocate Tim Banks, as well as employment attorney Michael Lotito, whose Emma Coalition seeks solutions to TIDE, the technologically induced displacement of employment. Between trips to fully-automated restaurants and the latest advancements in 3D food replication, we discover that Gen Z's humanity may be their biggest asset in tomorrow's job market. Additional resources: Connect with Amy Tobey: LinkedIn or Twitter. Connect with Grace Andrews: LinkedIn or Twitter. Connect with John Taylor: LinkedIn. Connect with Alexander Kolchinsky: LinkedIn. Connect with Michael Lotito: LinkedIn. Connect with Tim Banks: LinkedIn. Visit Origins.dev for more information. Enjoyed this episode? If you did, be sure to follow and share it with your friends! Post a review and share it! If you enjoyed tuning in, then please leave us a review. We'd also appreciate it if you would share the podcast with your friends and colleagues, as you get to know the people and technologies at the center of our digital world. Traceroute is a podcast from Equinix and is a production of Stories Bureau. This episode was produced by John Taylor with help from Tim Balint and Cat Bagsic. It was edited by Joshua Ramsey and mixed by Jeremy Tuttle, with additional editing and sound design by Mathr de Leon. Our theme song was composed by Ty Gibbons.
On today's episode of the Dr. Geo podcast, we have a special guest, Dr. Mohit Khera, a renowned urologist and professor in the Scott Department of Urology at Baylor College of Medicine. He holds the F. Brantley Scott Chair in Urology and has extensive experience treating male and female sexual dysfunction, men's health, and hormone replacement therapy. This captivating episode reveals the intricate relationship between testosterone and the prostate. Throughout the episode, we explore the nature of testosterone, its receptors, and its effects on the body. We also address controversies surrounding testosterone and its relationship to the prostate, including the development of prostate cancer. Join us as we gain insights from Dr. Khera's wealth of knowledge on this fascinating topic. _________________ Thank you to our sponsors. This episode is brought to you by the ExoDx™ Prostate Test and by AG1 (Athletic Greens). AG1 contains 75 high-quality vitamins, minerals, whole-food sourced ingredients, probiotics, and adaptogens to help you start your day right.
This special blend of ingredients supports your gut health, your nervous system, your immune system, your energy, recovery, focus, and aging. All the things. Enjoy AG1 (Athletic Greens). ---------------- DISCLAIMER: This audio is educational and does not constitute medical advice. This...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding mesa-optimization using toy models, published by tilker on May 7, 2023 on LessWrong. Overview: Solving the problem of mesa-optimization would probably be easier if we understood how models do search internally. We are training GPT-type models on the toy task of solving mazes and studying them in both a mechanistic interpretability and behavioral context. This post lays out our model training setup, hypotheses we have, and the experiments we are performing and plan to perform. Experimental results will be forthcoming in our next post. We invite members of the LW community to challenge our hypotheses and the potential relevance of this line of work. We will follow up soon with some early results. Our main source code is open source, and we are open to collaborations. Introduction: Some threat models of misalignment presuppose the existence of an agent which has learned to perform a search over actions to effectively achieve goals. Such a search process might involve exploring different sequences of actions in parallel and evaluating the best sequence of actions to achieve some goal. To deepen our understanding of what it looks like when models are actually performing search, we chose to train simple GPT-2-like models to find the shortest paths through mazes. Maze-solving models provide a tractable and interesting object of study, as the structure of both the problem and solutions is extensively studied. This relative simplicity makes identifying and understanding search through the lens of mechanistic and behavioral experiments much more concrete than working with pre-trained LLMs, and more feasible in the context of limited computational resources. Connections to mesa-optimization: Mesa-optimizers are learned optimizers for an objective that can be distinct from the base objective.
Inner misalignment can occur when the AI system develops an internal optimization process that inadvertently leads to the pursuit of an unintended goal. In the context of search, the propensity for mesa-optimization may be increased as the system explores various future states, potentially identifying alternative objectives that appear at least as rewarding or efficient in achieving the desired outcome. Existing literature on search has highlighted the potential for unintended consequences of search in ML systems. One lens of viewing the problem of mesa-optimization is that the behavior of a system changes in an undesirable way upon a distributional shift, and we believe that mazes provide a number of mechanisms to create such distributional shifts. Training setup: We first aim to train a transformer model to predict the shortest path between a given start and end position in a maze. The maze exists as a 2D grid, with each position on the grid encoded as a single token. For example, a 5x5 maze has 25 coordinates that have corresponding tokens in the vocabulary. To the transformer, the maze is described as an adjacency list containing all connections between pairs of positions, for example, (0,0) (0,1). A "wall" in the maze is merely a missing connection between positions in the maze, but is otherwise not explicitly stated. The start and end positions are coordinates on the maze grid, such as (3,3) and (4,0), respectively. A training example contains a maze (as an adjacency list), start and end coordinates, and a path consisting of position tokens. We use an autoregressive decoder-only transformer model (implemented using TransformerLens), which (at inference) makes predictions one token at a time based on previously generated tokens. Our transformer models incorporate layer normalization and MLP layers by default. One training sample consists of a maze, as well as a unique path connecting randomly selected origin and target coordinates (circle and cross).
The solved maze shown abo...
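The serialization described above can be sketched in a few lines of Python. This is a minimal illustration only; the special-token names and exact ordering here are assumptions for the sketch, not the project's actual vocabulary:

```python
# Sketch of encoding one maze training example as a token sequence:
# adjacency list, then start/end coordinates, then the solution path.
# Special tokens (<ADJLIST_START>, <ORIGIN>, etc.) are illustrative
# assumptions, not the real vocabulary used by the authors.

def coord_token(pos):
    """Each grid position becomes a single token like '(0,1)'."""
    return f"({pos[0]},{pos[1]})"

def maze_to_tokens(adjacency, start, end, path):
    """Serialize a maze, its endpoints, and the shortest path as tokens."""
    tokens = ["<ADJLIST_START>"]
    for a, b in adjacency:  # a "wall" is simply an absent connection
        tokens += [coord_token(a), coord_token(b), ";"]
    tokens += ["<ADJLIST_END>",
               "<ORIGIN>", coord_token(start),
               "<TARGET>", coord_token(end),
               "<PATH_START>"]
    tokens += [coord_token(p) for p in path]
    tokens.append("<PATH_END>")
    return tokens

# Tiny 2x2 maze: three connections, so the (0,0)-(1,0) edge is a wall.
adjacency = [((0, 0), (0, 1)), ((0, 1), (1, 1)), ((1, 1), (1, 0))]
example = maze_to_tokens(adjacency, start=(0, 0), end=(1, 0),
                         path=[(0, 0), (0, 1), (1, 1), (1, 0)])
print(example)
```

An autoregressive model trained on such sequences conditions on the maze structure and endpoints before emitting the path tokens one at a time.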
Cronies voted to give Detroit elected officials big raises, so ML, Marc and Shawn talk to one of the reporters […]
In today's podcast, Dr. Geo discusses the importance of understanding successful aging. Defining aging is the first step in this process, as it provides a clear target to aim for. How can we age successfully? That's the question we'll be exploring in today's podcast. Join Dr. Geo as he dives into the factors contributing to successful aging and how you can optimize your health and well-being as you age. ______________ Thank you to our sponsors: the ExoDx™ Prostate Test and AG1 (Athletic Greens). ____________________________________ DISCLAIMER: This audio is educational and does not constitute medical advice. This audio's content is my opinion and not that of my employer(s) or any affiliated company. Use of this information is at your own risk. Geovanni Espinosa, N.D., will not assume any liability for any direct or indirect losses or damages that may result from the use of the information contained in this video, including but not limited to economic loss, injury, illness, or death.
Happy Anniversary to us, 2023 Rock & Roll Hall inductees, Jackson Mahomes arrested, Jamie Foxx's mystery illness, Britney Spears' tit pic, Trump on the Royal Family, Gwyneth Paltrow's past dongs, 90's music is the best, frosted tips, and another YouTube antagonizer. Milestones: Our YouTube hits 7500 subs in the first 7 years 2 months. It is our 7-year anniversary! Pray for Jamie Foxx. No one knows what's wrong with him... but pray for him. He allegedly "wrote" this breaking his silence. Nick Cannon is taking his Beat Shazam gig. Geoffrey Fieger is recovering from his stroke. ML recently did a piece on the attorney. Dana Nessel takes awesome legal and highly unethical tropical trips paid for by Traverse City lawyers/donors. Music: 2023 Rock and Roll Hall of Fame Class. Tons of snubs! Drew is really into Johnny Depp's vocals. Anybody in the mood for Al Stewart? Jackson Mahomes has finally been arrested for sexual battery. Richard Simmons needs to make a comeback. Texas Massacre shooter Francisco Oropesa has been arrested. Politricks: The White House doesn't want to talk about Joe Biden's bastard grandchild. Apparently there is some whistleblower out there exposing Joe Biden. Meanwhile, Donald Trump is coming to town. He's not even thinking about that E. Jean Carroll thing. Trump gave an interview to English TV and slammed Meghan Markle along the way. Britney Spears decided to post her boobs on Instagram. Her first husband, Jason Alexander, has remarried. Happy horse execution season! The Kentucky Derby is this weekend. We randomly say Dannielynn Birkhead's name... so we have to try and call Bobby Trendy. Mark McGrath is among the many stars who rocked frosted tips. Jack Nicholson finally showed his face in public. He looks better than the paparazzi photos. Bad news for Brad Falchuk as Gwyneth Paltrow went on Call Her Daddy to talk about nailing everybody that isn't her husband. Ed Sheeran is skipping his grandma's funeral to try and win his Marvin Gaye case. 
He promises to quit music if he loses the case. Kevin Costner's wife files for divorce. Some people are saying she's banging Cal Ripken Jr. Maria Menounos has been battling cancer. Gordon Chaffin doesn't mess with people in the bike lane. P.K. Subban made a joke at Lizzo's expense and the obese singer is angry. P.K. is trying to change the narrative with a hell of an excuse. Steve Martin has a new audiobook out. Hollywood writers are on strike. Rob Lowe is proud of his nepo-baby. Barstool Sports are not fans of James Corden. Music in the 90's was the best. Drew is still stuck in the 60's. Thanks for sticking with us for 7 years. Visit Our Presenting Sponsor Hall Financial – Michigan's highest rated mortgage company If you'd like to help support the show… please consider subscribing to our YouTube Page, Facebook, Instagram and Twitter (Drew and Mike Show, Marc Fellhauer, Trudi Daniels, Jim Bentley and BranDon). Or don't, whatever.
In episode 71 of The Gradient Podcast, Daniel Bashir speaks to Ted Underwood. Ted is a professor in the School of Information Sciences with an appointment in the Department of English at the University of Illinois at Urbana-Champaign. Trained in English literary history, he turned his research focus to applying machine learning to large digital collections. His work explores literary patterns that become visible across long timelines when we consider many works at once—often, his work involves correcting and enriching digital collections to make them more amenable to interesting literary research. Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS Follow The Gradient on Twitter Outline: * (00:00) Intro * (01:42) Ted's background / origin story * (04:35) Context in interpreting statistics, “you need a model,” the need for data about human responses to literature and how that manifested in Ted's work * (07:25) The recognition that we can model literary prestige/genre because of ML * (08:30) Distant reading and the import of statistics over large digital libraries * (12:00) Literary prestige * (12:45) How predictable is fiction? Scales of predictability in texts * (13:55) Degrees of autocorrelation in biography and fiction and the structure of narrative, how LMs might offer more sophisticated analysis * (15:15) Braided suspense / suspense at different scales of a story * (17:05) The Literary Uses of High-Dimensional Space: how “big data” came to impact the humanities, skepticism from humanists and responses, what you can do with word count * (20:50) Why we could use more time to digest statistical ML—how acceleration in AI advances might impact pedagogy * (22:30) The value in explicit models * (23:30) Poetic “revolutions” and literary prestige * (25:53) Distant vs.
close reading in poetry—follow-up work for “The Longue Durée”* (28:20) Sophistication of NLP and approaching the human experience* (29:20) What about poetry renders it prestigious?* (32:20) Individualism/liberalism and evolution of poetic taste* (33:20) Why there is resistance to quantitative approaches to literature* (34:00) Fiction in other languages* (37:33) The Life Cycles of Genres* (38:00) The concept of “genre”* (41:00) Inflationary/deflationary views on natural kinds and genre* (44:20) Genre as a social and not a linguistic phenomenon* (46:10) Will causal models impact the humanities? * (48:30) (Ir)reducibility of cultural influences on authors* (50:00) Machine Learning and Human Perspective* (50:20) Fluent and perspectival categories—Miriam Posner on “the radical, unrealized potential of digital humanities.”* (52:52) How ML's vices can become virtues for humanists* (56:05) Can We Map Culture? and The Historical Significance of Textual Distances* (56:50) Are cultures and other social phenomena related to one another in a way we can “map”? * (59:00) Is cultural distance Euclidean? 
* (59:45) The KL Divergence's use for humanists* (1:03:32) We don't already understand the broad outlines of literary history* (1:06:55) Science Fiction Hasn't Prepared us to Imagine Machine Learning* (1:08:45) The latent space of language and what intelligence could mean* (1:09:30) LLMs as models of culture* (1:10:00) What it is to be a human in “the age of AI” and Ezra Klein's framing* (1:12:45) Mapping the Latent Spaces of Culture* (1:13:10) Ted on Stochastic Parrots* (1:15:55) The risk of AI enabling hermetically sealed cultures* (1:17:55) “Postcards from an unmapped latent space,” more on AI systems' limitations as virtues* (1:20:40) Obligatory GPT-4 section* (1:21:00) Using GPT-4 to estimate passage of time in fiction* (1:23:39) Is deep learning more interpretable than statistical NLP?* (1:25:17) The “self-reports” of language models: should we trust them?* (1:26:50) University dependence on tech giants, open-source models* (1:31:55) Reclaiming Ground for the Humanities* (1:32:25) What scientists, alone, can contribute to the humanities* (1:34:45) On the future of the humanities* (1:35:55) How computing can enable humanists as humanists* (1:37:05) Human self-understanding as a collaborative project* (1:39:30) Is anything ineffable? 
On what AI systems can “grasp”* (1:43:12) OutroLinks:* Ted's blog and Twitter* Research* The literary uses of high-dimensional space* The Longue Durée of literary prestige* The Historical Significance of Textual Distances* Machine Learning and Human Perspective* The life cycles of genres* Can We Map Culture?* Cohort Succession Explains Most Change in Literary Culture* Other Writing* Reclaiming Ground for the Humanities* We don't already understand the broad outlines of literary history* Science fiction hasn't prepared us to imagine machine learning.* How predictable is fiction?* Mapping the latent spaces of culture* Using GPT-4 to measure the passage of time in fiction Get full access to The Gradient at thegradientpub.substack.com/subscribe
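One topic in the outline, the KL divergence's use for humanists as a "textual distance," is easy to make concrete. Below is a minimal sketch (not from the episode) that compares the word distributions of two texts, assuming whitespace tokenization and add-one smoothing so every word has nonzero probability:

```python
from collections import Counter
import math

def kl_divergence(text_p, text_q):
    """D(P || Q) between the word distributions of two texts, in bits.
    Add-one smoothing over the shared vocabulary keeps Q(w) > 0."""
    p_counts = Counter(text_p.lower().split())
    q_counts = Counter(text_q.lower().split())
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + len(vocab)
    q_total = sum(q_counts.values()) + len(vocab)
    total = 0.0
    for w in vocab:
        p = (p_counts[w] + 1) / p_total  # smoothed P(w)
        q = (q_counts[w] + 1) / q_total  # smoothed Q(w)
        total += p * math.log2(p / q)
    return total
```

Note that the measure is asymmetric (D(P || Q) generally differs from D(Q || P)), which bears directly on the outline's question of whether cultural "distance" behaves like a Euclidean metric.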
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How MATS addresses “mass movement building” concerns, published by Ryan Kidd on May 4, 2023 on LessWrong.

Recently, many AI safety movement-building programs have been criticized for attempting to grow the field too rapidly and thus:
* Producing more aspiring alignment researchers than there are jobs or training pipelines;
* Driving the wheel of AI hype and progress by encouraging talent that ends up furthering capabilities;
* Unnecessarily diluting the field's epistemics by introducing too many naive or overly deferent viewpoints.

At MATS, we think that these are real and important concerns and support mitigating efforts. Here's how we address them currently.

Claim 1: There are not enough jobs/funding for all alumni to get hired/otherwise contribute to alignment

How we address this: Some of our alumni's projects are attracting funding and hiring further researchers. Three of our alumni have started alignment teams/organizations that absorb talent (Vivek's MIRI team, Leap Labs, Apollo Research), and more are planned (e.g., a Paris alignment hub). With the elevated interest in AI and alignment, we expect more organizations and funders to enter the ecosystem. We believe it is important to install competent, aligned safety researchers at new organizations early, and our program is positioned to help capture and upskill interested talent. Sometimes, it is hard to distinguish truly promising researchers in two months, hence our four-month extension program. We likely provide more benefits through accelerating researchers than can be seen in the immediate hiring of alumni. Alumni who return to academia or industry are still a success for the program if they do more alignment-relevant work or acquire skills for later hiring into alignment roles.
Claim 2: Our program gets more people working in AI/ML who would not otherwise be doing so, and this is bad as it furthers capabilities research and AI hype

How we address this: Considering that the median MATS scholar is a Ph.D./Master's student in ML, CS, maths, or physics and only 10% are undergrads, we believe most of our scholars would have ended up working in AI/ML regardless of their involvement with the program. In general, mentors select highly technically capable scholars who are already involved in AI/ML; others are outliers. Our outreach and selection processes are designed to attract applicants who are motivated by reducing global catastrophic risk from AI. We principally advertise via word-of-mouth, AI safety Slack workspaces, the AGI Safety Fundamentals and 80,000 Hours job boards, and LessWrong/EA Forum. As seen in the figure below, our scholars generally come from AI safety and EA communities.

[Figure: MATS Summer 2023 interest form, “How did you hear about us?” (370 responses)]

We additionally make our program less attractive than comparable AI industry programs by introducing barriers to entry. Our grant amounts are significantly less than our median scholar could get from an industry internship, and the application process requires earnest engagement with complex AI safety questions. We additionally require scholars to have background knowledge at the level of AGI Safety Fundamentals, an additional barrier to entry that e.g. MLAB didn't require. We think that ~1 more median MATS scholar focused on AI safety is worth 5-10 more median capabilities researchers (because most do pointless stuff like image generation, and there is more low-hanging fruit in safety). Even if we do output 1-5 median capabilities researchers per cohort (which seems very unlikely), we likely produce far more benefit to alignment with the remaining scholars.
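The exchange-rate argument above amounts to simple arithmetic. Here is a back-of-envelope sketch with illustrative numbers; the cohort split is hypothetical, and only the 5-10x exchange rate comes from the post:

```python
def net_impact(safety_scholars, capabilities_leak, exchange_rate=5.0):
    """Value per cohort, measured in units of 'one median capabilities
    researcher'. The post values one median MATS safety scholar at 5-10
    such units, so a cohort is net positive for alignment when safety
    output times the exchange rate exceeds any leakage to capabilities."""
    return safety_scholars * exchange_rate - capabilities_leak

# Pessimistic case from the post: 5 scholars leak to capabilities work.
# The cohort size of 60 here is a hypothetical, not a MATS figure.
result = net_impact(55, 5)  # positive even at the low exchange rate
```

Even granting the unlikely worst case, the cohort comes out far ahead under the post's stated assumptions.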
Claim 3: Scholars might defer to their mentors and fail to critically analyze important assumptions, decreasing the average epistemic integrity of the field

How we address this: Our scholars are enc...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alignment Research @ EleutherAI, published by Curtis Huebner on May 3, 2023 on LessWrong.

The past and future of AI alignment at Eleuther

Initially, EleutherAI focused mainly on supporting open source research. AI alignment was something that was acknowledged by many of the core members as important, but it was not the primary focus. We mainly had discussions about the topic in the #alignment channel and other parts of our Discord while we worked on other projects. As EAI grew, AI alignment started to get taken more seriously, especially by its core members. What started off as a single channel turned into a whole host of channels about different facets of alignment. We also hosted several reading groups related to alignment, such as the modified version of Richard Ngo's curriculum and an interpretability reading group. Eventually alignment became the central focus for a large segment of EAI's leadership, so much so that all our previous founders went off to do full-time alignment research at Conjecture and OpenAI. Right now, the current leadership believes making progress in AI alignment is very important. The organization as a whole is involved in a mix of alignment research, interpretability work, and other projects that we find interesting. Moving forward, EAI remains committed to facilitating and enabling open source research, and plans to ramp up its alignment and interpretability research efforts. We want to increase our understanding and control of modern ML systems and minimize existential risks posed by artificial intelligence.

Our meta-level approach to alignment

It is our impression that AI alignment is still a very pre-paradigmatic field.
Progress in the field often matches the research pattern we see in the ELK report, where high-level strategies are proposed, and problems or counterexamples are found. Sometimes these issues can be fixed, but oftentimes fundamental issues are identified that make an initially promising approach less interesting. A consequence of this is that it's difficult to commit to an object-level strategy to make progress on AI alignment, and even harder to commit to any grand strategic plan to solve the problem. Instead it makes more sense to have a meta-level strategy that makes us better able to leverage our unique position within the AI research ecosystem, and pivot when we get new information. Going forward, this means we want to pursue interesting projects that meet a few general desiderata:
* Our volunteers, partners, and collaborators are enthusiastic about the project.
* We believe that pursuing the project won't lead to a net increase in existential risks from AI. We'll check for this even if the project is ostensibly one that will greatly increase our understanding of AI alignment.
* The project is something that EAI is better equipped to do than anyone else in the space, or the project seems interesting or important, but neglected by the broader community.

In order to pull this off, we aim to stay on top of both the latest developments in AI and alignment research. We'll also carefully consider new projects before we embark on or allocate resources for them.

Problems we are interested in and research we are doing right now

Given the current state of the AI landscape, there are a few directions that we find especially interesting. We'd love to collaborate with others to make progress on these issues.

Interpretability work

Interpretability work, especially with current models, seems like a very tractable and scalable research direction. It seems especially easy for current ML researchers to pick up and make progress on it.
EAI is well equipped to enable this kind of research, especially for larger language models that more closely resemble the ones we see in modern production systems. This is arguably where most of our recent eff...
AWS Inferentia2-based Amazon EC2 Inf2 instances can help you deploy your 100B+ parameter generative AI models at scale. Inf2 instances deliver up to 40% better price performance than comparable Amazon EC2 instances. Tune in to learn more about this new launch that helps you increase performance, reduce costs, and improve energy efficiency when deploying your ML applications.
Inf2 PDP https://go.aws/44oez5T
Neuron documentation https://bit.ly/44oLmrz
AWS Inferentia https://go.aws/3NAyhFr
AWS Trainium https://go.aws/3nkivnH
The recent explosion of AI technology has brought up many different responses from people. Whether it is fear, excitement, confusion, or disdain, for most of us, the ideas and possibilities elicit a very particular reaction. Tim Miner is on the show today to shed some light on this subject, as well as how his company, By The People, is helping its clients make use of the newest technology. Tim makes a strong argument for why we should, above all else, be hopeful and positive about these innovations. He talks about why some of the doomsday thinking about AI and ML is misguided, the precautions that are in place to prevent danger and offense, and how AI can improve all areas of human life. Ultimately, Tim sees the arrival of ChatGPT and comparable services as allowing us to dream bigger and imagine new futures. So, however you may wish to use AI, whether in your business or beyond, the message is a resounding one. Make sure to join us for this great conversation!

What you'll learn about in this episode:
* Tim gives a broad history and definition of AI technology.
* Some thoughts on the ethics of AI, ML, and ChatGPT.
* Tim weighs in on the process of machine learning and the safeguards against its supposed dangers.
* Balancing complex and useful answers with the need to steer clear of offense.
* Comparing the capabilities of the current prominent chatbots.
* Implementing AI for increased productivity.
* The power of facilitating a dialogue with a chatbot and allowing two-way conversation!
* AI and education; writing style, plagiarism, and more.
* The capabilities of the technology for increasing team productivity and efficiency.
* Tim unpacks the meaning of the name of his company and how he helps his clients.
* Fueling innovation with the help of AI through increased creativity and efficiency.
* The most prominent tools for using AI in design work.
* Information about Tim's webinars and how to access them!
Transcript: Here

Additional Resources:
Website: https://bythepeople.tech/
LinkedIn: https://www.linkedin.com/in/timminer
Tim Miner: https://twitter.com/tim_miner

Links Mentioned:
ChatGPT: https://openai.com/blog/chatgpt
Salesforce: https://www.salesforce.com/
IBM: https://www.ibm.com/
Tact.ai: https://www.tact.ai/
SpaceX: https://www.spacex.com/
Bard: https://bard.google.com/
60 Minutes: https://www.cbsnews.com/60-minutes/

Sharon Spano:
Website: sharonspano.com
Facebook: facebook.com/SharonSpanoPHD
Instagram: instagram.com/drsharonspano/
LinkedIn: linkedin.com/in/sharonspano/
Book: thetimemoneybook.com
Contact: sharon@sharonspano.com
Twitter: twitter.com/SharonSpano
The Other Side of Potential Podcast: sharonspano.com/podcast/
First up, Jason breaks down Uber's huge Q1 results! (1:21) Then, Unlearn.AI CEO Charles Fisher joins to discuss the advancements his company is making in fast-tracking clinical trials (9:49), how machine learning is used in drug development (16:24), Unlearn's business model (35:24), and more! (0:00) Jason kicks off the show (1:21) Uber's Q1 earnings (8:31) QuickNode - Get one month free by using code TWIST at https://go.quicknode.com/twist (9:49) The foundation of Unlearn.AI (12:42) Unlearn.AI's impact on the clinical trial process (14:24) Criticisms of the current clinical trial model (16:24) ML's Impact on drug discovery (24:16) CacheFly - Get 10 terabytes free by signing up at https://twist.cachefly.com (25:42) The data used in the medical system today (34:05) Microsoft for Startups Founders Hub - Apply in 5 minutes for six figures in discounts at http://aka.ms/thisweekinstartups (35:24) Unlearn.AI's business model (40:22) Building a foundational model for health (44:06) The pace of AI today vs. the previous decade (46:35) More on Unlearn.AI's business model (48:15) Creating foundational datasets in health FOLLOW Charles: https://twitter.com/charleskfisher FOLLOW Jason: https://linktr.ee/calacanis Subscribe to our YouTube to watch all full episodes: https://www.youtube.com/channel/UCkkhmBWfS7pILYIk0izkc3A?sub_confirmation=1 FOUNDERS! Subscribe to the Founder University podcast: https://podcasts.apple.com/au/podcast/founder-university/id1648407190
This episode features an interview with Maxim Fateev, Co-founder and CEO of Temporal, an open source, distributed, and scalable workflow orchestration engine capable of running millions of workflows. He has 20 years of experience architecting mission-critical systems at Uber, Google, Amazon, and Microsoft. In this episode, Sam sits down with Maxim to discuss workflow services, the power behind Temporal, and bringing determinism to highly complex environments.

-------------------

“[Temporal] has this notion of workflows, which can run for a very long time and handle external events, you can treat them as a durable actor. And they're very good at implementing a lifecycle. For example, you can have an object per model and let this object handle all the events. Like, new data came in, notify this object, this object will go and retrain it. Or, it'll run an activity to periodically check the status. So you can have an end-to-end lifecycle implemented fully in Temporal.” – Maxim Fateev

-------------------

Episode Timestamps:
(01:03): What's top of mind for Maxim in workflow services
(04:09): What open source data means to Maxim
(11:07): Maxim explains his time at AWS and building Cadence at Uber
(23:09): Use cases and the community of Temporal
(28:26): How Temporal is being used for ML workloads
(32:28): One question Maxim wishes to be asked
(36:38): Maxim's advice for those working with complex distributed systems
(39:11): Backstage takeaways with executive producer, Audra Montenegro

-------------------

Links:
LinkedIn - Connect with Maxim
Temporal.io
Watch Maxim's talk “Designing a Workflow Engine from First Principles”
Replay Conference 2023
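Maxim's "object per model" lifecycle can be sketched in plain Python. This is an illustrative sketch of the durable-actor idea only, not Temporal SDK code: in Temporal the object's state would persist inside a workflow that survives process restarts, and retraining would run as an activity. All names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ModelLifecycleActor:
    """Per-model 'durable actor': one long-lived object that receives
    external events and drives the model's end-to-end lifecycle."""
    model_id: str
    version: int = 0
    pending_batches: list = field(default_factory=list)

    def on_new_data(self, batch):
        """External event: new data arrived for this model."""
        self.pending_batches.append(batch)

    def on_retrain_requested(self):
        """Run the retraining 'activity' over the accumulated batches."""
        if self.pending_batches:
            self.version += 1
            self.pending_batches.clear()
        return self.version

    def status(self):
        """Status check, analogous to querying the workflow's state."""
        return {"model": self.model_id, "version": self.version,
                "pending": len(self.pending_batches)}
```

The point of the pattern is that callers only send events; the actor owns the lifecycle state, which is what Temporal makes durable.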
Today's interview is with Ryan McDonald, Chief Scientist at ASAPP. Ryan joins me today to talk about his experience over the last 20 years in the language technology space (AI: NLP, ML, LLMs), recent developments in the generative AI space, the challenges that enterprises face in embracing and leveraging this technology and how ASAPP is advancing AI to augment human activity to address real-world problems for enterprises, particularly in the area of customer care. This interview follows on from my recent interview – Well-being and the changing nature of management and leadership – Interview with Ray Biggs, Head of Customer Care at John Lewis & Waitrose – and is number 464 in the series of interviews with authors and business leaders that are doing great things, providing valuable insights, helping businesses innovate and delivering great service and experience to both their customers and their employees.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Systems that cannot be unsafe cannot be safe, published by Davidmanheim on May 2, 2023 on LessWrong. Epistemic Status: Trying to clarify a confusion people outside of the AI safety community seem to have about what safety means for AI systems. In engineering and design, there is a process that includes, among other stages, specification, creation, verification and validation, and deployment. Verification and validation are where most people focus when thinking about safety - can we make sure the system performs correctly? I think this is a conceptual error that I want to address. "Verification and validation (also abbreviated as V&V) are independent procedures that are used together for checking that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose." - Wikipedia Both of these terms are used slightly differently across fields, but in general, verification is the process of making sure that the system fulfills the design requirements and/or other standards. This pre-supposes that the system has some defined requirements or a standard, at least an implicit one, and that it could fail to meet that bar. That is, the specification of the system includes what it must and must not do, and if the system does not do what it should, or does something that it should not, it fails. Machine learning systems, especially language models, aren't well understood. The potential applications are varied and uncertain, entire classes of new and surprising failure modes are still being found, and we have nothing like a specification of what the system should or should not do, must or must not do, and where it can and cannot be used. 
To take a very concrete example, metal rods have safety characteristics, and they might be rated for use up to some weight limit, under some specific load for some amount of time, in certain temperature ranges. These can all be tested. If the bar does not stay within a predefined range of characteristics at a given temperature, with a given load, it fails. It can also be found to be acceptable in one temperature range, but not another, or similar. At the end of verification and validation, the bar is deemed to have passed or failed for a given application, based on what the requirements for that larger system are. At its best, red-teaming and safety audits of ML systems check lots of known failure modes, and determine whether they are susceptible. There is no pre-defined standard or set of characteristics that are checked, no real ability to consider application-specific requirements, and no ability to specify where the system should not or must not be used. Until we have some safety standard for machine learning models, they aren't "partly safe" or "assumed safe," or "good enough for consumer use." If we lack a standard for safety, ideally one where there is consensus that it is sufficient for a specific application, then exploration or verification of the safety of a machine learning model is meaningless. If a model is released to the public without a clear indication about what the system can safely be used for, with verification that it passed a relevant standard, and clear instruction that it cannot be used elsewhere, it is an unsafe model. Anyone who claims otherwise seems fundamentally confused about what safety means for such systems. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
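The contrast can be made concrete in code. Here is a minimal sketch, with hypothetical names and limits, of what verification against a spec looks like for the metal-rod example:

```python
from dataclasses import dataclass

@dataclass
class RodSpec:
    """Hypothetical rating for the essay's metal-rod example: explicit
    limits that a measured part can verifiably pass or fail against."""
    max_load_kg: float
    min_temp_c: float
    max_temp_c: float

def verify(spec, load_kg, temp_c):
    """Verification presupposes a spec: the part passes only if every
    measured characteristic is within the rated limits."""
    failures = []
    if load_kg > spec.max_load_kg:
        failures.append("load above rated limit")
    if not (spec.min_temp_c <= temp_c <= spec.max_temp_c):
        failures.append("temperature outside rated range")
    return (not failures, failures)
```

For an ML model there is currently no analogue of `RodSpec`, which is the essay's point: with no agreed limits on what the system must and must not do, there is nothing for a `verify` step to check against.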
MLOps Coffee Sessions #154 with Melissa Barr & Michael Mui, Machine Learning Education at Uber, co-hosted by Lina Weichbrodt.

// Abstract
Melissa and Michael discuss the education program they developed for Uber's machine learning platform service, Michelangelo. The program teaches employees how to use machine learning both in general and specifically for Uber. The platform team can obtain valuable feedback from users and use it to enhance the platform. The course was designed using engineering principles, making it applicable to other products as well.

// Bio
Melissa Barr
Melissa is a Technical Program Manager for ML & AI at Uber. She is based in New York City. She drives projects across Uber's ML platform, delivery, and personalization teams. She also built out the first version of the ML Education Program in 2021.

Michael Mui
Michael is a Staff Technical Lead Manager on Uber AI's Machine Learning Platform team. He leads the Distributed ML Training team which focuses on building elastic, scalable, and fault-tolerant distributed machine learning libraries and systems used to power machine learning development productivity across Uber. He also co-leads Uber's internal ML Education initiatives. Outside of Uber, Michael also teaches ML at the Parsons School of Design in NYC as an Adjunct Faculty (mostly for the museum passes!) and guest lectures at the University of California, Berkeley.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
https://www.uber.com/blog/ml-education-at-uber-program-design-and-outcomes/
https://www.uber.com/blog/ml-education-at-uber/
https://www.uber.com/en-PH/blog/ml-education-at-uber/

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Melissa on LinkedIn: https://www.linkedin.com/in/melissabarr1/
Connect with Michael on LinkedIn: https://www.linkedin.com/in/michael-c-mui/
Connect with Lina on LinkedIn: https://www.linkedin.com/in/lina-weichbrodt-344a066a/

Timestamps:
[00:00] Melissa and Michael's preferred coffee
[01:51] Takeaways
[05:40] Please subscribe to our newsletters and leave reviews on our podcasts!
[06:18] Machine learning at Uber education program
[07:45] The Uber courses
[10:03] Tailoring the Uber education system
[12:27] Growing out of the ML-Ed platform efforts
[14:14] Expanding the ML Market Size
[15:23] Relationship evolution
[17:36] Reproducibility best practices
[21:46] Learning development timeline
[26:29] Courses effectiveness evaluation
[29:57] Tracking Progress Challenge
[31:25] ML platforms for internal tools
[35:07] Impact of ML Education at Uber
[39:30] Recommendations to companies who want to start an ML-Ed platform
[41:12] Early ML Adoption Program
[42:11] Homegrown or home-built platform
[42:54] Feature creation to a course
[45:24] ML Education at Uber: Frameworks Inspired by Engineering Principles
[49:42] The Future of ML Education at Uber
[52:28] Unclear ways to spread ML knowledge
[54:20] Module for Generative AI and ChatGPT
[55:05] Measurement of success
[56:39] Wrap up
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Systems that cannot be unsafe cannot be safe, published by David Manheim on May 2, 2023 on The AI Alignment Forum. In engineering and design, there is a process that includes, among other stages, specification, creation, verification and validation, and deployment. Verification and validation are where most people focus when thinking about safety - can we make sure the system performs correctly? I think this is a conceptual error that I want to address. "Verification and validation (also abbreviated as V&V) are independent procedures that are used together for checking that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose." - Wikipedia Both of these terms are used slightly differently across fields, but in general, verification is the process of making sure that the system fulfills the design requirements and/or other standards. This pre-supposes that the system has some defined requirements or a standard, and that it could fail to meet that bar. That is, the specification of the system includes what it must and must not do, and if the system does not do what it should, or does something that it should not, it fails. Machine learning systems, especially language models, aren't well understood. The potential applications are varied and uncertain, entire classes of new and surprising failure modes are still being found, and we have nothing like a specification of what the system should or should not do, must or must not do, and where it can and cannot be used. To take a very concrete example, metal rods have safety characteristics, and they might be rated for use up to some weight limit, under some specific load for some amount of time, in certain temperature ranges. These can all be tested.
If the bar does not stay within a predefined range of characteristics at a given temperature, with a given load, it fails. It can also be found to be acceptable in one temperature range, but not another, or similar. At the end of verification and validation, the bar is deemed to have passed or failed for a given application, based on what the requirements for that larger system are. At its best, red-teaming and safety audits of ML systems check lots of known failure modes, and determine whether they are susceptible. There is no pre-defined standard or set of characteristics that are checked, no real ability to consider application specific requirements, and no ability to specify where the system should not or must not be used. Until we have some safety standard for machine learning models, they aren't "partly safe" or "assumed safe," or "good enough for consumer use." If we lack a standard for safety, one where there is consensus that it is sufficient for a specific application, exploration or verification of the safety of a machine learning model is meaningless. If a model is released to the public without a clear indication about what the system can safely be used for, with verification that it passed a relevant standard, and clear instruction that it cannot be used elsewhere, it is an unsafe model. Anyone who claims otherwise seems fundamentally confused about what safety means for such systems. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Joël has been integrating a third-party platform into a testing pipeline...and it has not been going well. Because it's not something she usually keeps up-to-date with, Stephanie is excited to learn about more of the open-source side of things in Ruby, what's new in the Ruby tooling world, and what folks are thinking about regarding the future of the language. Today's topic is inspired by an internal thoughtbot Slack thread about writing a custom matcher for RSpec. Stephanie and Joël contrast DSLs vs. Object APIs and also talk about:
* CanCanCan vs Pundit
* RSpec DSL
* When is a DSL helpful?
* Why not use both DSLs & Object APIs?
* Extensibility
* When does a DSL become a framework?

This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website): frictionless error monitoring and performance insight for your app stack.

RubyKaigi 2023 (https://rubykaigi.org/2023/)
Mystified by RSpec's DSL? by Jason Swett (https://www.codewithjason.com/mystified-rspecs-dsl-parentheses-can-add-clarity/)
Building Custom RSpec Matchers with Regular Objects (https://thoughtbot.com/blog/building-custom-rspec-matchers-with-regular-objects)
FactoryBot (https://github.com/thoughtbot/factory_bot)
Writing a Domain-Specific Language in Ruby by Gabe Berke-Williams (https://thoughtbot.com/blog/writing-a-domain-specific-language-in-ruby)
Capybara (https://teamcapybara.github.io/capybara/)
Acceptance Tests at a Single Level of Abstraction (https://thoughtbot.com/blog/acceptance-tests-at-a-single-level-of-abstraction)
CanCanCan (https://github.com/CanCanCommunity/cancancan)
Pundit (https://github.com/varvet/pundit)
Discrete Math and Functional Programming (https://www.amazon.com/Discrete-Mathematics-Functional-Programming-VanDrunen/dp/1590282604)

Transcript:

STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.

JOËL: And I'm Joël Quenneville. And together, we're here to share a little bit of what we've learned along the way.

STEPHANIE: So, Joël, what's new in your world?

JOËL: I've been integrating a third-party platform into our testing pipeline for my client. It has not been going well. We've been struggling a little bit, mostly just because tests just kind of crash. Our testing pipeline is pretty complex. It's a lot of one script, some environment variables, does a few things, shells out to another script, which is in a different language. Does a few more things, shells out to another script, maybe calls out to rake, calls out to a shell script. There are four or five of these in a chain, and it's a bit of a mess. Somewhere along in there, something is not compatible with this third-party service that we're trying to integrate with. I was pairing this week with a colleague. And we were able to reproduce a situation where we were able to get a failure under some conditions and a success under other conditions. So these are basically, if we run the whole chain of scripts that call each other from the beginning, we know we get a failure. And if we skipped entirely the chain of scripts that set up things and then just manually try to invoke a third-party service, that works. And so now we know that there's something in between that's incompatible, and now it's just about narrowing things down. There are a few different approaches we could take. We could try to sort of work our way forward. We know a known point where it breaks and then just try to start the chain one step further and see where it fails. We could try to get fancy and do a binary search, like split it in half and then half and half again. We ended up doing it the other way, where we started at the end. We had our known good point and then just stepping one step back and saying, okay, now we introduce the last script in the chain. Does that work? Okay, that pass is great.
Let's go one step further; two scripts up in the chain. And at some point, we find, okay, here's the one script that fails. Now, what is it within this script? And it was a really fun debugging session where we were just narrowing things down until we found the source of the bug. STEPHANIE: Wow, that sounds pretty complicated. It just seems like there are so many layers going on. And it was really challenging to pinpoint where the source of the issue was. JOËL: Definitely. I think all the layers made it really complicated. But having a process that we could follow and then kind of narrowing it down made it almost mechanical to figure out where the bug was once we got to a point where we had a known good point and a known bad point. STEPHANIE: Yeah, that makes sense. Kind of sounds like if you are using git bisect or something like that to narrow down the scope of where the issue could be. I'm curious because this is like a bunch of shell scripts and rake tasks or commands or whatever. What would have made this debugging process easier? JOËL: I think having fewer scripts in this chain. STEPHANIE: [laughs] That's fair. JOËL: We don't need so many scripts that call out to each other in different languages trying to share data via environment variables. So we've got a bit of a Rube Goldberg machine, and we're trying to patch in yet another piece in there. STEPHANIE: Yeah, that's really tough. I was curious if there was, I don't know, any logging or any other clues that you were getting along the way because I know from experience how painful it is to debug that kind of code. JOËL: It's interesting because I feel like normally logging is something that's really useful. In this particular case, we run into an exception at some point. So it's more of under what conditions does the exception happen? 
The important thing was to find that there is a point where it breaks, and there's a point where it doesn't, and realizing that if we ran some of these commands just directly without going through the whole pipeline, that things did work and that we were not triggering that exception. So all of a sudden, now that tells us, okay, something in our pipeline is wrong. And then we can just start narrowing things down. So yeah, adventures in debugging. Sometimes it's really frustrating, but then when you have a good process, and you find the bug, it's incredibly satisfying. STEPHANIE: I like that you used a process that can be applied to many different problems, in this particular case, debugging a testing pipeline. Maybe not something that we do every day, but certainly, it comes up, and now we have tools to address those kinds of issues as well. JOËL: So my week has been up and down with all of this debugging. What's been new in your world? STEPHANIE: I've been doing some travel planning because I'm going to RubyKaigi in Japan. JOËL: Whoa. STEPHANIE: This is actually going to be my first international conference, so I'm really looking forward to that. I just have never been compelled to travel abroad to go to a tech conference. But I'm really looking forward to going to RubyKaigi because now I've been to the U.S.-based conferences a few times. And I'm excited to see how things are different at an international conference and specifically a RubyKaigi because, obviously, there's a lot of really cool Ruby work happening over there in Japan. So I'm excited to learn about more of the open-source side of things of Ruby, what's new in the Ruby tooling world, and just what folks are thinking about in terms of the future of the language. That's not something I normally keep super up-to-date on. But I'm excited to be around people who do think and talk about these things a lot and maybe get some new insights into my own work. 
JOËL: Do you find that you tend to keep up more with some of the frameworks like Rails rather than the underlying language itself? STEPHANIE: Yeah, that's a good question. I do think because the framework changes a little more frequently, new releases are kind of more applicable to the work that I'm doing. Whereas language updates or upgrades are a little bit less top of mind for me because the point is that it doesn't have to change [laughs] all that much, and we can continue to work with things as expected and not be disrupted. So it is definitely like a whole new world for me, but I'm really looking forward to it. I think it will be really interesting and just kind of a whole other space to explore that I haven't really because I've usually been focused on more of the web development and industry work side of things. JOËL: What's a Ruby feature that either is coming out in the future or that came out in the last couple of releases that got you really excited? STEPHANIE: I think the conversation about typing in Ruby is something that has been on my radar but has also been ebbing and flowing over time. And I did see a few talks at RubyKaigi this year that are going to talk about how to introduce gradual typing in Ruby. And now that it has been out for a little bit and people have been using it, how people are feeling about it, pros and cons, and kind of where they're going to take it or not take it from there. JOËL: Have you done much TypeScript? STEPHANIE: I have been working more in TypeScript recently but did spend most of my front-end work coding days in JavaScript. And so that transition itself was pretty challenging for me where I suddenly felt a language that I did know pretty well. I was having to be in that...in somewhat of a beginner's mindset again. Even just reading the code itself, there were just so many new things to be looking at in terms of the syntax. 
And it was a difficult but ultimately pretty rewarding experience because the way I thought about JavaScript afterwards was much more refined, I think. JOËL: Types definitely, I think, change the way you think about code; at least, that's been my experience. STEPHANIE: Yeah, absolutely. I haven't gotten the pleasure to work with types in Ruby just yet, but I've just heard different experiences. And I'm excited to see what experts have to say about it. JOËL: That's the fun of going to a conference. STEPHANIE: Absolutely. So yeah, if any listeners are also headed to RubyKaigi, yeah, look out for me. JOËL: I was recently having a conversation with someone about the fact that a lot of languages provide ways to sort of embed many languages within them. So the Lisp family of languages are really big into macros and metaprogramming. Some other languages are big into giving you the ability to build your own ASTs or have really strong parsing capabilities so that you can produce your own, again, mini-language. And Ruby does this as well. It's pretty popular among the Ruby community to build DSLs, Domain-Specific Languages using some of Ruby's built-in abilities. But it seems to be a sort of universal need or at the very least a universal desire among programmers. Have you ever found yourself as a code author wanting to embed a sort of smaller language within your application? STEPHANIE: I don't think I have, to be honest. It's a very interesting question. Because I think the motivation to build your own mini-language using Ruby would have to be you'd have to have a really good reason for it, and in my experience, I haven't quite encountered that yet. Because, yeah, it seems like a lot of upfront work, a lot of overhead to introduce something like that, especially if it's not necessarily either a really, really particular domain that others might find a use for, or it just doesn't end up seeming worthwhile if I can just write regular, old Ruby code. 
JOËL: I think you're not alone. I think the Ruby community has been kind of a bit of a pendulum here where several years ago, everything that could be made into a DSL was. Now the pendulum kind of has been swinging the other way. And we see DSLs, but they're not quite as frequent. For those who maybe have not experienced a DSL or aren't quite familiar with the concept, how would you describe the idea? STEPHANIE: I think I would describe domain-specific languages as a bit of a mini-language that is created for a very particular problem space in mind to make development for that domain easier. Oftentimes, I've also kind of seen people describe the benefit of DSLs as being able to read that language as if it were plain English. And so, in my head, I have kind of, at least in the Ruby world, right? We see that a lot in different gems. RSpec, for example, has its own internal DSL, and many people really enjoy it because it took the domain of testing. And the way you write it kind of is how you might read or understand it in English. And so it's a bit easier to talk about what you're expecting in your tests. JOËL: Yeah, it's so high-level and minimal and domain-specific that it almost stops feeling like it's a programming language and can almost feel like it's a high-level configuration for this very particular domain, sometimes even to the point where the idea is that a non-programmer could read it and understand what's going on. STEPHANIE: I think RSpec is actually one of the first Ruby DSLs that you might encounter when you're learning Ruby for the first time. And I've definitely seen developers who are new to Ruby, you know, they're writing code, and they're like, okay, I'm ready to write a test now. And the project uses RSpec because that's what most of us use in our Rails applications. And then they see, like you said, almost a configuration language, and they are really confused. They're not really sure what they're reading. They struggle with the syntax a lot. 
And it ends up being a point of frustration when they're first starting out if they're not just copying and pasting other existing RSpec tests. I'm curious if you've seen that before. JOËL: I've definitely seen that. And it's a little bit ironic because oftentimes, an argument for DSL is that it makes things simpler that you don't even have to know Ruby; you can just write it. It's simpler. It's easier to write. It's easier to understand. And to a certain extent, maybe that's true. But for someone who does know Ruby and doesn't know your particular little domain language, now they're encountering something that they don't know. And they're having to learn it, and they're having to struggle with it. And it might behave a little bit weirdly compared to how Ruby normally works. And so sometimes it doesn't make it easier for adoption. But it does look really good in a README. STEPHANIE: That's totally fair. I think the other thing that's interesting about RSpec is that a lot of it is really just stylistic. I actually read a blog post by Jason Swett and the headline of it was "Mystified by RSpec's DSL? Some parentheses can add clarity." And he basically goes on to tell us that really RSpec is just leaning on some of Ruby's syntactic sugar of omitting parentheses for method calls. And if you just add the parentheses back in your it blocks or your describes, it can read a lot more like regular Ruby. And you might have a better time understanding what's going on when you realize that we're just passing our descriptors as arguments along with some blocks. JOËL: That's ironic given that oftentimes, the goal of these is to make it look like not Ruby. STEPHANIE: I agree; it is ironic. [laughs] MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. 
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! JOËL: I think another drawback that I've seen with DSLs is that they oftentimes are more limited in their capabilities. So if the designer of the gem didn't explicitly think of your use case, then oftentimes, it can be really hard to extend or to support edge cases that are not specifically designed for that language in the way that plain Ruby is often much more flexible. STEPHANIE: Yeah, that's really interesting because when a gem does have some kind of DSL, a lot of effort probably went into making that the main interface that you would work with or you would use. 
And when that isn't working for your use case, the design of the underlying objects may or may not be helpful for the changes that you want to make. JOËL: I think it's interesting that you mentioned the underlying objects because those are often sort of not meant for public consumption when you're building a gem that's DSL forward. I think, in many cases, my ideal gem would make those underlying objects the primary interface and then maybe offer DSL as a kind of nice-to-have layer on top for those situations that maybe aren't as complex where writing things in the domain language might actually be quite nice. But keeping those underlying objects as the interface, it's nice to use and well-documented for the majority of people. STEPHANIE: Yeah, I like that too because then you can get the best of both worlds. So speaking of trying to make a DSL work for you, have you ever experienced having to kind of work around the DSL to get the functionality you were hoping to achieve? JOËL: So I think we're talking about the idea of having both a DSL and the underlying objects. And RSpec is a great example of this with their custom matchers. RSpec itself is a DSL, but then they also offer a DSL to allow you to create custom matchers. And it's not super well documented. I always forget how to define them, and so I oftentimes don't bother. It's just kind of too much of a pain for something that doesn't always provide that much value. But if it were easy, I would probably do it more. Eventually, I realized that you could use just regular Ruby objects as custom matchers. And they just seemed to respond to certain methods, just regular old objects and polymorphism. And all of a sudden, now I'm back into all of the tools and mechanisms that I am familiar with, like the back of my hand. I can write objects all day. I can TDD them. I can apply any patterns that I want to if I'm doing something really complicated. I can extract helpers. 
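The object protocol Joël is alluding to is small: RSpec will accept anything that responds to `matches?` and the failure-message methods. Below is a minimal sketch of a custom matcher written as a regular object and exercised directly, with no RSpec involved; the `BeWithinRange` class is a made-up example, not from any real suite.

```ruby
# Hypothetical custom matcher written as a plain Ruby object. RSpec's
# matcher protocol is just a handful of methods (`matches?`,
# `failure_message`, `failure_message_when_negated`), so an ordinary
# class satisfies it.
class BeWithinRange
  def initialize(range)
    @range = range
  end

  def matches?(actual)
    @actual = actual
    @range.cover?(actual)
  end

  def failure_message
    "expected #{@actual} to be within #{@range}"
  end

  def failure_message_when_negated
    "expected #{@actual} not to be within #{@range}"
  end
end

matcher = BeWithinRange.new(1..10)
matcher.matches?(5)   # => true
matcher.matches?(42)  # => false
```

Because it's an ordinary class, it can be TDD'd, given extracted helpers, and composed like any other object; in a spec, an instance should be usable directly with `expect(value).to`.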
All of that works really well with the knowledge that I already have without having to sink a lot of time into trying to learn the built-in DSL. So, for the most part, now, when I define custom matchers, I'll often jump directly to creating a regular object and making it conform to the matcher interface rather than relying on the DSL for that. So once we go back to the test, now we're back in DSL land. Now we're no longer talking in terms of objects so much. We'll have some nice methods and they will all kind of read like English. So to pull a recent example that I worked on, I might say something like expect this policy object method to conform to this truth table. STEPHANIE: That's a really interesting example. It actually kind of sounds like it hits the sweet spot of what you were describing earlier in the sense that it has a really nice DSL, but also, you can create your own objects, and that has an interface that you can implement. And yes, have your cake and eat it too. [laughs] But the idea that then you're kind of converting it back to the DSL because that is just what we know, and it has become so normalized. I was talking earlier about okay; when is a DSL worthwhile? When is the use case a good reason to implement it? And especially for gems that I think that are really popular that we as a Ruby community have collectively used most of the time on our projects because we have oftentimes a lot of the same problems that we're solving. It seems like this has become its own shared language, right? JOËL: Yeah, there are definitely some DSLs that we all end up learning because they're just so prominent in the Ruby community, even Rails itself ships with several built-in DSLs. STEPHANIE: Yeah, absolutely. FactoryBot is another one, too. It is a gem by thoughtbot. And actually, in preparation to talk about DSLs with you today, I scoured our blog and found a really great blog post, "Writing a Domain-Specific Language in Ruby" by Gabe Berke-Williams. 
And it is basically like, here's how to write something like FactoryBot and creating your own little mini Ruby DSL for something that would be very similar to what FactoryBot does for fixtures. JOËL: That's a great resource, and we'll make sure to link that in the show notes. We've been talking about some of the limitations of DSLs or some aspects of them maybe that we personally don't like. What are maybe examples of DSLs that you do enjoy working with? STEPHANIE: Yeah, I have an example for this one. I really enjoy using Capybara's DSL for acceptance testing. I did have to go down the route of writing some custom selectors for...I just had some HTML elements within kind of a complicated table and was trying to figure out how to write some selectors so that I could write the test as if it were in, you know, quote, unquote, "plain English" like, within this table, expect some value. And that was an interesting journey. But I think that it really helped me have a better understanding of accessibility of just the underlying building blocks of the page that I was working with. And, yeah, I really appreciate being able to read those tests from a user perspective and kind of know exactly what they're doing when they're interacting with this virtual browser without having to run it in headful mode and see it for myself. JOËL: It's always great when a DSL can give you that experience of abstracting enough to where it makes the code delightful to work with while also not having too high a cost to learn or being too restrictive in what it allows you to do. Would you make a difference between something that's a DSL versus maybe just code that's written at a higher level of abstraction? So maybe to get back to your example with Capybara, it's really nice to have these nice custom matchers and all of these things to work with HTML pages. If I'm writing, let's say, a helper method at the bottom of a test, I don't think that feels quite like it's a DSL yet. 
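The factory-style DSL that post walks through can be sketched in a few dozen lines of plain Ruby. This is a hypothetical, stripped-down version, not FactoryBot's actual implementation; it uses `instance_eval` and `method_missing` to turn bare method calls inside the definition block into stored attributes.

```ruby
# Hypothetical mini factory DSL: `define` registers a blueprint via a
# small builder object, and `build` evaluates the stored attributes
# into a hash, applying any overrides.
module MiniFactory
  class Blueprint
    attr_reader :attributes

    def initialize
      @attributes = {}
    end

    # Any bare method call inside the definition block becomes an
    # attribute: a block is stored as-is, a plain value is wrapped.
    def method_missing(name, value = nil, &block)
      @attributes[name] = block || -> { value }
    end

    def respond_to_missing?(_name, _include_private = false)
      true
    end
  end

  @blueprints = {}

  def self.define(name, &block)
    blueprint = Blueprint.new
    blueprint.instance_eval(&block)
    @blueprints[name] = blueprint
  end

  def self.build(name, overrides = {})
    base = @blueprints.fetch(name).attributes.transform_values(&:call)
    base.merge(overrides)
  end
end

MiniFactory.define(:user) do
  name "Stephanie"
  email { "user#{rand(1000)}@example.com" }
end

user = MiniFactory.build(:user, name: "Joël")
```

The `instance_eval` is what makes the definition block read declaratively: inside it, `self` is the blueprint, so `name "Stephanie"` is just a method call caught by `method_missing`.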
But it's definitely a higher level than specifying CSS selectors. So would you make a difference between those two things? STEPHANIE: That's a good question. I think it's one of those you-know-it-when-you-see-it kinds of questions because it just depends on the amount of abstraction, like you mentioned, and maybe even metaprogramming, that takes something from the core language and morphs it into what you could qualify as a separate language. What do you think about this? JOËL: Yeah, part of me almost wonders if this exists kind of on a continuum, and the boundary might be a little bit fuzzy. I think there might be some other qualifications that come with it as well. Even though DSLs are typically higher-level helpers, it's usually more than just that. There are also slightly different semantics in the way that you would tend to use them, to the point where, while they may be just Ruby methods, we don't use them like Ruby methods, and even to the point that we don't think of them as Ruby methods. To go back to that article you mentioned from Jason: just reminding people, hey, if you put parens on this, all of a sudden, it helps you remember, oh, it's just a Ruby method instead of being like, oh, this is a language keyword or something. STEPHANIE: Yeah, I wonder if there's also something to the idea of domain specificity, where it should be self-service within the domain that you're working in. And then it has limitations once you are trying to do something separate from the domain. JOËL: Right, there's an element of focus to this. And I think it's probably also that a language is not just one helper; it's a collection, typically. So it's probably a series of high-level helpers, potentially. They might not be methods, even though that is ultimately one of the primary interfaces we use to run code in Ruby. So it's a collection of methods that are high-level, but the collection itself is focused.
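That "put the parens on" trick can be demonstrated with a pair of hypothetical stand-ins for `describe` and `it` (not RSpec itself): ordinary Ruby methods taking a string and a block, where both spellings are exactly the same method calls.

```ruby
# Hypothetical stand-ins for RSpec's `describe` and `it`, recording
# results so we can see both call styles behave identically.
$results = []

def describe(description, &block)
  block.call
end

def it(description, &block)
  $results << { description: description, passed: !!block.call }
end

# DSL style, parentheses omitted:
describe "Array#empty?" do
  it "returns true for an empty array" do
    [].empty?
  end
end

# Exactly the same calls with the parentheses written back in:
describe("Array#empty?") do
  it("returns true for an empty array") do
    [].empty?
  end
end
```

Seeing the second form makes it obvious that the "language keywords" are string arguments and blocks passed to plain methods.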
And oftentimes, they're meant to be used in a way where it's not just a traditional method call. STEPHANIE: Right. There's some amount of you bringing to the table your own use case in how you use those methods. JOËL: Yeah, so it might be mimicking a language keyword. It might be mimicking the idea of a configuration. We see that a little bit with ActiveRecord and some of the, let's say, the association and validation APIs. Those kind of feel like, yes, they're embedded in a class, but they feel like either keywords or even just straight-up configuration where you set key-value pairs of things to configure how a particular class is going to work. STEPHANIE: Yeah, that's true for a lot of things in Rails, too, if we're talking about routes and initializers as well. JOËL: So I've complained about some things I don't like about DSLs. I really like the routing DSL in Rails. STEPHANIE: Why is that? JOËL: I think it's very compact and readable. And that's an element that's really nice about DSLs is that it can make things feel very readable and, oftentimes, we read code more often than we write it. And routes have...I was going to say fewer edge cases, but I have seen some really gnarly route files that are pretty awful to work with, especially if you're mostly writing RESTful controllers, and I would recommend that people do. It's really nice to just be able to skim through a route file and be like, oh, these are the resources in my app and the actions I can do on each resource. And here are the ones that are nested. STEPHANIE: Yeah, it almost sounds like a DSL can provide guardrails towards the recommended way of tackling that particular domain. The routes DSL really discourages you from doing anything too complicated because they are encouraging you to follow the Rails convention. 
And so I think that goes back to the specificity piece: if you've written a DSL, it's because you've thought very deeply about this particular domain, how common problems show up, and how you would want people to be empowered by the language rather than inhibited by it. JOËL: I think, thinking more about that, the word that comes to mind is declarative. When you read code that's written with DSLs, typically, it's very declarative. It's more just describing a thing as opposed to either procedural, a series of commands to do, or even OO, where you're composing objects and sending messages to each other. And so problems that lend themselves to being implemented through more descriptive and declarative approaches are probably really good candidates for a DSL. STEPHANIE: Yeah, I like that a lot because when we talk about domains, we're not necessarily talking about a business domain, which is kind of the other way that some folks think about that word. We're talking about a problem space. And the idea of the language being declarative to describe the problem space makes a lot of sense to me because you want it to be flexible enough for different use cases but all within the idea of testing or browser navigation or whatever. JOËL: Yeah. I feel like there are probably more problems that can be converted to declarative solutions than might initially strike you. Sometimes the problem isn't quite as bounded, and when you want customizations that are not supported by your DSL, it kind of falls apart. So I think a classic situation that might feel like something declarative is authorization. Authorization is a series of rules for who can access what, and it would seem like this is a great case for a DSL. Wouldn't it be great to have just one file you can kind of skim to see all of the access rules? Access rules are basically asking to be done declaratively. And we have gems like that.
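A declarative abilities file of that sort can be sketched in plain Ruby. The following is a hypothetical mini rules engine — the `can` / `allowed?` names are made up, not CanCanCan's API — whose DSL layer just builds ordinary rule objects underneath.

```ruby
# Hypothetical declarative authorization sketch: the DSL is a thin
# layer that constructs plain Rule objects, so the object API remains
# available for rules the DSL can't express.
Rule = Struct.new(:action, :subject, :predicate) do
  def allows?(action, subject, user, record)
    self.action == action && self.subject == subject &&
      predicate.call(user, record)
  end
end

class Abilities
  attr_reader :rules

  def initialize(&block)
    @rules = []
    instance_eval(&block) if block
  end

  # DSL entry point: `can :update, :article { |user, record| ... }`
  def can(action, subject, &predicate)
    @rules << Rule.new(action, subject, predicate || ->(_u, _r) { true })
  end

  def allowed?(action, subject, user, record = nil)
    @rules.any? { |rule| rule.allows?(action, subject, user, record) }
  end
end

abilities = Abilities.new do
  can :read, :article
  can :update, :article do |user, article|
    article[:author_id] == user[:id]
  end
end

# An esoteric rule that doesn't fit the DSL can still be appended
# through the underlying object API:
abilities.rules << Rule.new(:destroy, :article, ->(user, _a) { user[:admin] })
```

Keeping the rules as plain objects is what leaves an escape hatch: the abilities file stays declarative, and the custom case drops into ordinary Ruby without embedding procedural code in the DSL.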
The original CanCan gem and then its successor CanCanCan try to follow that approach. Have you used either of those gems? STEPHANIE: I did use the CanCanCan gem a while ago. JOËL: What was your experience with that style of authorization? STEPHANIE: It has been a while, but I do remember having to check that original file of all the different authorizations, kind of repeatedly coming back to it to remember: okay, for this rule, what should be allowed to happen here? JOËL: So I think that's definitely one of the benefits: you have all of your rules stored in one place, and you can kind of scan through the list. My experience, though, is that in practice, it often balloons up and accumulates all of these edge cases. And in some earlier versions (I don't know if that's still a problem today), it could even be difficult to accomplish certain things. If you're going to say that access to this particular object depends not on properties of that object itself but on some custom join or association or something like that, that could be really clunky to do, or sometimes impossible, depending on how esoteric it is or if there's some really complex custom logic involved. And once you're doing something like that, you don't really want to have that logic inside your abilities file, because that's not really something you express via the DSL anymore. Now you're dropping into the OO or procedural world. STEPHANIE: Right. It seems a bit far removed from where we do actually care about the different abilities, especially for one-off cases. JOËL: That is interesting because I feel like there's a bit of a read-versus-write situation happening there as well. It's particularly nice to have, I think, everything in one abilities file for reading and for auditing. I've definitely been in code where there are three or four ways to authorize, and they're all being used inconsistently, and that's not nice at all.
On the other hand, it can sometimes be hard with DSLs to customize or to go beyond the rules that are built in. In the case of authorization, you've effectively built a little mini rules engine. And if you don't have a good way for people to add custom rules without just embedding procedural code into your abilities file, it's going to quickly get out of hand. STEPHANIE: Yeah, that makes sense. On the topic of authorization, you did mention an example earlier when you were writing a policy object. JOËL: I've generally found that that's been my go-to pattern for authorization. I enjoy the Pundit gem, which provides some light scaffolding around working with policy objects, but it's a general pattern, and you can absolutely write your own. You don't need a gem for that. Now we're definitely not in the DSL world. We're not doing this declaratively. We're leaning very heavily on OO and saying we're just going to create objects. They talk to each other. They can do anything that any Ruby object can do and be as simple or as complex as they need to be. So you have the full power of Ruby and all the patterns that you're used to using. The downside is that it is a little bit harder to read and to just audit what's happening in terms of permissions because there's no high-level overview anymore. Now you've got to look through a bunch of classes. So maybe that's the trade-off: flexibility and extensibility versus a more declarative style and an easy overview. STEPHANIE: That makes a lot of sense because we were talking earlier about guardrails. And because those boundaries do exist, that might not give us the flexibility we want compared to just writing regular Ruby objects. But yeah, we do get the benefit of, like you said, auditing, and at least if we don't try to do some really gnarly, custom stuff, [laughs] something that's easier to read and comprehend.
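A policy object in the style just described needs no gem at all. This hypothetical sketch follows the Pundit convention of one class per resource and one predicate method per action, with plain hashes standing in for real user and article models.

```ruby
# Hypothetical plain-Ruby policy object, Pundit-style but gem-free.
class ArticlePolicy
  def initialize(user, article)
    @user = user
    @article = article
  end

  # Anyone can see a published article; drafts are visible only to
  # their author and to admins.
  def show?
    @article.fetch(:published) || author? || @user.fetch(:admin)
  end

  def update?
    author? || @user.fetch(:admin)
  end

  private

  def author?
    @article.fetch(:author_id) == @user.fetch(:id)
  end
end

admin  = { id: 1, admin: true }
reader = { id: 2, admin: false }
draft  = { author_id: 3, published: false }

ArticlePolicy.new(admin, draft).update?  # => true
ArticlePolicy.new(reader, draft).show?   # => false
```

This is the trade-off in miniature: there is no single file to skim for an overview, but each rule is an ordinary method that can grow arbitrarily complex, be tested in isolation, and use the full language.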
JOËL: And, again, maybe that's where in the best of both worlds situation, you say, hey, I'm creating some form of rules engine, whether it's for describing routes, or authorization, permissions, or users can build custom business rules for a product or something like that. And it's all object-based under the hood. And then, we provide a DSL to make it nice to work with these rules. If a programmer using our gem wants to write a custom rule that just really extends what the ones we shipped can do, allow them to do that via the object API. We have all the objects available to you that underlie the DSL. Add more rules yourself. And then maybe those can be plugged back into the DSL like we saw with the RSpec and custom matchers. Or maybe you have to say, okay, if I have a custom rule object, now I have to just stay in the object space. And I think both of those solutions are okay. But now you've sort of kept those two worlds separate and still allowed people to extend. STEPHANIE: I like that as contributing to the language because language is never static. It changes over time. And that's a way that people can continue to evolve a language that may have been originally written at a certain time and place. JOËL: Moving on from DSLs, we got some listener feedback recently from James, who was listening to our episode on discrete math. And James really appreciated the episode and wanted to share a resource with us. This is the book "Discrete Math and Functional Programming" by Thomas VanDrunen. It's an introduction to discrete math as a theoretical concept taught side by side with the very practical aspect of learning to use the language standard ML, and both of those factor into each other. So you're kind of learning a little bit of theory and some practice, at the same time, getting to implement some discrete math concepts in standard ML to get a feel for them. Yeah, I've not read this book, but I love the concept of pairing a theoretical piece and a practical piece. 
So I'll drop a link to it in the show notes as well. Thank you, James. STEPHANIE: Yeah, thanks, James. And I guess this is just a little reminder that if our listeners have any feedback or questions they want to write in about, you can reach us at hosts@bikeshed.fm. JOËL: On that note. Shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
SemiWiki Founder Daniel Nenni joins to discuss GPU, CPU, and Semiconductor markets. [SPON: Get 10% off Tasty Vite Ramen with code BROKENSILICON: https://bit.ly/3wKx6v1 ] [SPON: dieshrink = 3% off Everything, brokensilicon = 25% off Windows: https://biitt.ly/shbSk ] 0:00 Welcoming back Daniel Nenni 4:03 What ultimately caused the shortages? What ended them? 13:09 Bringing Silicon Manufacturing Back to the West, Chips Act 25:05 Is TSMC 3nm delaying Nvidia Blackwell? 30:45 Does Nvidia NEED to move to chiplets? 39:17 RX 7600 Manufacturing Costs 47:59 AMD's TSMC Advantage 57:58 Will Intel Foundries beat TSMC? How bad were Intel's Q1 Earnings? 1:11:44 Can Pat Gelsinger save Intel from becoming another IBM? 1:22:59 The Importance of TSMC 3nm Nodes, 2nm Pricing, AMD Design Costs 1:32:55 Impact of ML on EDA and Chip Design, Jim Keller at Atomic Semi Check out the Semiconductor Insiders Podcast: https://semiwiki.com/podcast/ Last Time Daniel Nenni was on: https://youtu.be/w8JmHsKhP9g https://www.amd.com/en/press-releases/press-release-2017jan31 https://ir.amd.com/news-events/press-releases/detail/1115/amd-reports-fourth-quarter-and-full-year-2022-financial https://www.techpowerup.com/review/amd-rx-480/ https://www.adapteva.com/white-papers/silicon-cost-calculator/ https://wccftech.com/nvidia-next-gen-3nm-gpus-not-launching-until-2025-tsmc-report/ https://www.digitimes.com.tw/tech/dt/n/shwnws.asp?CnlID=1&Cat=40&id=0000662749_VR76FCFB51XKH38BW2VPS https://videocardz.com/newz/nvidia-geforce-rtx-4060-ti-ad106-350-gpu-has-been-pictured https://seekingalpha.com/article/4597564-intel-corporation-intc-q1-2023-earnings-call-transcript
A local man who helps undrafted football players realize their NFL dream tells ML, Marc and Shawn how he works […]
On this day of workers' struggle, the Má Língua crew labors hard for the people. The 25 de Abril commemorations were the topic under analysis by the country's least impartial panel of commentators. Rita Blanco confesses that she wrote Marcelo Rebelo de Sousa's speech, while Júlia Pinheiro teaches Rui Zink a new word: pelé. Meanwhile Manuel Serrão, carnation on his lapel, will offer a box of cotton swabs to the Chega bench so they can discreetly clean out their ears. One final note: this episode was recorded before the ruckus in the office of minister João Galamba became public knowledge. With the high patronage of the PS government, there will be no shortage of spice on these tongues in the coming episodes. See omnystudio.com/listener for privacy information.
Today's guest is Shahmeer Mirza, Director of R&D and IT Strategy at 7-Eleven in Irving, Texas. Founded in 1927 as the world's first convenience store, 7-Eleven now operates, franchises, and licenses more than 13,000 stores in the U.S. and Canada. Their top priority has always been to give customers the most convenient experience possible to consistently meet their needs. 7-Eleven aims to be a one-stop shop for consumers, a place people can always rely on to deliver what they want, when, where, and how they want it. Shahmeer joined 7-Eleven as their Director of R&D in December 2020, where he focuses on digital innovations that transform the future of convenience. He has a history of delivering customer-facing applied machine learning and computer vision solutions, and is experienced in growing large teams with deep subject matter expertise across a variety of engineering domains. Shahmeer is also a public speaker and inventor with a strong passion for shaping strategic roadmaps and delivering innovative products. In the episode, Shahmeer will talk about: An insight into the broader data organization at 7-Eleven, Day-to-day life of the R&D team, How the team is structured for success, Upcoming projects in ML and Computer Vision, Why 7-Eleven is a great place to work, & Career opportunities within the team
WBSRocks: Business Growth with ERP and Digital Transformation
AI is changing the world. You have an infusion of AI at every step in the process, whether you talk about the first mile or the last mile, whether you talk about AI being used to intelligently and automatically capture paper-based invoices, or to enrich, augment, and predict incomplete data. But you can't get results from AI if you have data silos. In fact, AI initiatives might backfire if you train your models with the wrong data. In today's episode, our guest, Claus Jepsen, discusses why AI and ML solutions are less effective if businesses still have their data siloed. He also discusses Unit4's story and their unique approach to the cloud. Finally, he discusses issues with the first and last mile of AI and how each offers unique challenges. For more information on growth strategies for SMBs using ERP and digital transformation, visit our community at wbs.rocks or elevatiq.com. To ensure that you never miss an episode of the WBS podcast, subscribe on your favorite podcasting platform.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we kick off our coverage of the 2023 ICLR conference joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper Hyperparameter Optimization through Neural Network Partitioning and a few of his colleagues' works from the conference. We discuss methods for speeding up attention mechanisms in transformers, scheduling operations for computation graphs, estimating channels in indoor environments, and adapting to distribution shifts at test time with neural network modules. We also talk through the benefits and limitations of federated learning, exploring sparse models, optimizing communication between servers and devices, and much more. The complete show notes for this episode can be found at https://twimlai.com/go/627.
Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan and Jonathan are your hosts this week as we discuss all the latest news and announcements in the world of the cloud and AI - including Amazon's new AI, Bedrock, as well as new AI tools from other developers. We also address the new updates to AWS's CodeWhisperer, and return to our Cloud Journey Series where we discuss *insert dramatic music* - Kubernetes! Titles we almost went with this week: ⭐I'm always Whispering to My Code as an Individual
A really unlucky review of a late goal left host Matt Perrault with a small losing day at 2-2 rather than a nice winning day at 3-1. It's a Friday, and Matt is feeling a 3-leg ML parlay plus an MLB bet to go along with 3 NHL plays for the Friday episode of the Daily Juice, presented by BetMGM.
In this inspiring episode, join Dr. Geo as he interviews Dr. Ralph Moss about his personal journey through prostate cancer. From their initial acquaintance to the friendship they share today, Dr. Moss opens up about the life-changing diagnosis and the emotional rollercoaster he experienced. Listen as he recounts the story of finding a comprehensive cancer center and his determination to seek the best care. This conversation is filled with heart, humor, and insights. Tune in to learn more about Dr. Ralph Moss's journey. _____________Thank you to our sponsors. This episode is brought to you by ExoDx™ Prostate Test for prostate tissue. The ExoDx™ Prostate Test is a simple, non-DRE, urine-based, liquid biopsy test indicated for men 50 years of age and older with a prostate-specific antigen (PSA) 2-10ng/mL, or PSA in the “gray zone,” who may be considering a biopsy. The ExoDx Prostate test provides a risk score that determines a patient's potential risk of clinically significant prostate cancer (Gleason Score ≥7). The test is included in the National Comprehensive Cancer Network (NCCN) guidelines and has been clinically validated at the cut-point of 15.6, with a 91% sensitivity and 92% negative predictive value, meaning there is less than a 9% chance of having aggressive prostate cancer below the validated cut-point of 15.6. Ask your urologist about the ExoDx Prostate Test. This episode is also brought to you by AG1 (Athletic Greens). AG1 contains 75 high-quality vitamins, minerals, whole-food sourced ingredients, probiotics, and adaptogens to help you start your day right. This special blend of ingredients supports your gut health, your nervous system, your immune system, your energy, recovery, focus, and aging. All the things. Enjoy AG1 (Athletic Greens).----------------Thanks for listening to this week's episode. Subscribe to The Dr. 
Geo Geo YouTube Channel to get more content like this and learn how you can live better with age. You can also listen to this episode and future episodes of the Dr. Geo Podcast by clicking HERE.----------------Follow Dr. Geo on social media: Facebook, Instagram. Click here to become a member of Dr. Geo's Health Community. Improve your urological health with Dr. Geo's formulated supplement lines: XY Wellness for prostate cancer lifestyle and nutrition, and Mr. Happy Nutraceutical Supplements for prostate health and male optimal living. You can also check out Dr. Geo's online dispensary for other supplement recommendations: Dr. Geo's Supplement Store____________________________________DISCLAIMER: This audio is educational and does not constitute medical advice. This audio's content is my opinion and not that of my employer(s) or any affiliated company. Use of this information is at your own risk. Geovanni Espinosa, N.D., will not assume any liability for any direct or indirect losses or damages that may result from the use of...
Neal Bloom is a Managing Partner at Interlock Capital, a community of founders, investors, and subject matter experts. Victoria talks to Neal about what he finds attractive about startups and companies he's excited about, out of all the pitches he receives, how many he gets to say yes to, and when working with a team, what he uses to manage information and contacts for investors. Interlock Capital (https://interlock.capital/) Follow Interlock Capital on LinkedIn (https://www.linkedin.com/company/interlock-capital/), or Twitter (https://twitter.com/InterlockCap). Follow Neal Bloom on LinkedIn (https://www.linkedin.com/in/nealbbloom/) or Twitter (https://twitter.com/NealBloom). Check out his website (https://withkoji.com/@Nealbloom) and blog (https://freshbrewedtech.com/)! Follow thoughtbot on Twitter (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots Podcast where we explore the design, development, and business of great products. I'm your host, Victoria Guido. And with me today is Neal Bloom, Managing Partner at Interlock Capital, a community of founders, investors, and subject matter experts. Neal, thank you for joining us. NEAL: Hey, thanks for having me. It's so great to be here with you. VICTORIA: Fantastic. I'm excited to finally get a chance to talk with you. I met you at an investor hike that you organize once a month. NEAL: A founders' hike, yeah. I get up nice and early on the first Wednesday of each month in Torrey Pines in San Diego. And we hike up and down the hill with ocean views. It's not a bad day. VICTORIA: It's a great way to start the morning, I think, and to meet other people, other builders of products in technology. So tell me more about your work at Interlock Capital. NEAL: Sure. 
It really kind of organically happened that I became an investor, but not planned at all. I have an aerospace background then built my own edtech and talent tech marketplace. I call it the LinkedIn for students is really what we built as our first startup called Portfolium. We sold it, and I got really into startup communities, especially because of some people who helped me with my first startup. I want to be a part of building an even better ecosystem for others. And that turned into a podcast, a blog, an event series. And once I had the capital from my exit, turned into angel investing as well, too, and really just found that as I got to know people over time, the more and more I got to know them, the more certain ones stood out that said, wow, I don't just want to help them for the good of it. I also just want to be along for the ride. And I started writing checks to other founders. So that was the beginning of my investor journey about five years ago. And over COVID, a whole bunch of other later-stage experience operators, either founder-level or executives at tech companies, said, "I want to learn to do this. Can I do it alongside you?" And we created Interlock Capital as an investment syndicate. A group of us can share and utilize our brainpower, our time, and our capital to help companies. It's kind of our focus. So that's why we call it a community because it's not just kind of a one-way pitch us, and we'll write you a check. It's very much get to know the people, find the exact right domain experts who have subject matter expertise, who've been there and done that before. If they like the company and they want to personally invest, then we go to the greater group and say, "Hey, everyone, who wants to join this deal specifically?" So 18 investments later from Interlock Capital, we now also have an investment fund. So now we write two checks into every company. 
We do our syndicated style, pass the hat, if you will, "Hey, everyone, anyone want to invest in just this deal?" And then match it from our fund. And we're writing between $300,000 to $500,000 checks into early-stage software or/and software plus hardware companies. VICTORIA: What an incredible journey. And I love that it's led you to creating a community as part of what you do as an investment capital group. What do you find interesting about these startups and these companies that you want to be interested in? NEAL: Part of it is how much you learn about yourself, to be honest. I get to meet three to five new founders a day in a variety of ways, whether it's straight Zoom and pitch, or grab a coffee, or see them on a hike. We're kind of constantly introducing ourselves to each other. There's a bit of learning about how to size someone up to a certain regard. So you're kind of building this inner algorithm of how to top-prank people and their ideas. That's one interesting way that I never thought I would be doing professionally. There's a lot that we say versus what we do, and that's a data point that I have to keep track of because I get pitched amazing ideas that will literally change the world for so much better. And you get really excited about it, and you get invested in it. And I call it founder love. You fall in love with these founders specifically and almost say, "I don't even care what you're working on. I just want to work more with you. How do we do it?" So there's a lot of that. So there are some dating aspects [laughs] in terms of founder dating, like getting to know people. There's the determining how do we date towards marriage? Meaning, I'll write you a check, and I'm along for the ride for the next ten years. And then there's the kind of relationship maintenance which is okay; I wrote the check, now what? Where can I be helpful to the company? How can I anticipate their needs so that they have to think one more thing of how to satisfy me? 
It's quite the opposite way around. I'm trying not to be a barrier. I'm trying to work for them while they're sleeping. So yeah, it's really interesting the kind of the relationship aspect that goes into getting to know and helping founders take their ideas and turn it into reality. VICTORIA: That's very cool. And I have talked to people who have met you and talked to your company and just how supportive and helpful you all are even if you choose not to invest. So I think that's a really valuable resource for people. And I wonder, do you think it's something unique about the San Diego community in particular that is exciting right now? NEAL: I think so. I think San Diego specifically has always had this culture of give-before-you-get mentality, and so we kind of lead with that. There are a lot of people moving here. And you could choose many places that could be great, like LA versus San Diego, and there's a certain kind of person that chooses here versus somewhere else. And what I have found is there's a certain kind of give-before-you-get cultural mentality here that somehow people register pretty quickly and come with. And so that's an underlying greatness about us here. There's also because of the great environment we live in, by the beach, healthy lifestyle. I think we choose to work on things that maybe are also satisfying, just like our personal lives, meaning we work on things that matter, that are going to change the world, that are life-changing. That's not to say that we don't need certain other kinds of technology. I'm sure at some point, we felt we needed Twitter, and maybe we don't feel like that now. [laughs] But here, it feels like everyone's working on very impactful things, and I think that's really special to think about. Some examples of that is we've got an interesting subset of the SaaS world in nonprofit tech. So GoFundMe was founded in San Diego. They have since acquired three other nonprofit tech SaaS companies in San Diego, like Classy. 
So that's kind of interesting. You've got people who want to build a business that services nonprofits, and now they're all under one roof. So yeah, I think there is something special. We can dive deeper into some of the other sub-industries or categories that are interesting here, too, if you're interested. VICTORIA: Well, I could talk about San Diego all day. NEAL: [laughs] VICTORIA: Because I'm a fairly new resident, and I'm in love with it, obviously. [laughs] But let's talk more about products that can change the world. Like, what's one that you're really excited about that you've heard recently? NEAL: Ooh. I would start a little high level in certain categories that I'm really liking. I like things I'm seeing in the infrastructure space right now, meaning, you know, whether it's pipes and our water utilities, and I would include that in energy and EV, you know, kind of a mobility piece. There's even the commercial side of mobility, so trucking and freight. That whole infrastructure layer is really interesting to me right now. A certain company that, full disclosure, we invested in recently is a company called EarthGrid. They have a product that is boring holes tunnel-wise underground, but they're using just electricity and air, so plasma. And it's fascinating. They can bore holes 100 times the norm right now. They don't need to potentially trench, meaning they don't need to cut above the surface. They can just dig for miles straight underneath the ground, so they can go under things with that. And really a lot of the expensive pieces, closing lanes on freeways or highways to put fiber in or plumbing and all that. So it's really interesting to see that. Now, one element is the technology is interesting. But they have a plan to actually own their own tunnels