Podcasts about Cannell

  • 111 podcasts
  • 151 episodes
  • 43m avg duration
  • 1 episode every other week
  • Latest: Mar 4, 2025

POPULARITY

[Popularity chart, 2017–2024]


Best podcasts about Cannell

Latest podcast episodes about Cannell

Thoroughbred Racing Radio Network
AmWager Mardi Gras ATR from Gulfstream-Part 2: Joe Kristufek/Fair Grounds, NYRA's Andy Serling, FL Horseman's Tom Cannell, Brendan Walsh

Thoroughbred Racing Radio Network

Play Episode Listen Later Mar 4, 2025


Gangland Wire
Mafia Cops: NYPD Corruption and Murder

Gangland Wire

Play Episode Listen Later Mar 3, 2025 38:11


Retired Intelligence Detective Gary Jenkins brings you the best in mob history with his unique perception of the mafia. The Mafia Cops: NYPD Corruption and Murder with Michael Cannell. In this explosive episode of Gangland Wire, I uncover the shocking true story of two NYPD detectives who became hitmen for the Mafia. Louis Eppolito and Stephen Caracappa weren't just dirty cops—they were fully embedded in the Lucchese crime family, leaking intelligence, setting up murders, and betraying the very system they swore to uphold. Joining me is Michael Cannell, author of a gripping account of their crimes. We break down how these officers, once respected members of law enforcement, used their badges to serve the mob. Eppolito's deep family ties to organized crime and Caracappa's access to high-level police intelligence made them the perfect duo for Gaspipe Casso and the Lucchese family. Their corruption ran so deep that they not only provided inside information but also carried out Mafia-ordered executions—including the tragic killing of an innocent man due to a case of mistaken identity. We discuss how their downfall unfolded, from a shocking whistleblower to the relentless detective work that finally exposed them. We dive into the role of Betty Hydell, a grieving mother determined to find justice for her murdered son, and Detective Tommy Dades, who helped piece together the case that brought Eppolito and Caracappa to justice. This is a story of power, betrayal, and the dark intersection between law enforcement and organized crime. Don't miss this deep dive into one of the NYPD's most astonishing corruption cases. Find Michael's book Blood and the Badge at this link. Subscribe to get new gangster stories every week.
Hit me up on Venmo for a cup of coffee or a shot and a beer @ganglandwire Click here to "buy me a cup of coffee" To go to the store or make a donation or rent Ballot Theft: Burglary, Murder, Coverup, click here To rent Brothers against Brothers, the documentary, click here. To rent Gangland Wire, the documentary, click here To buy my Kindle book, Leaving Vegas: The True Story of How FBI Wiretaps Ended Mob Domination of Las Vegas Casinos. To subscribe on iTunes click here. Please give me a review and help others find the podcast. Donate to the podcast. Click here! #TrueCrime #BostonMafia #OrganizedCrime #GanglandWire #AngiuloFamily #FBI #Surveillance #MafiaHistory Transcript [0:00] Well, hey, all you wiretappers out there. Good to be back here in Studio Gangland Wire. I have an author today, some stories about the mafia cops, the mob cops in New York City, Caracappa and Eppolito. Those two guys were bad dudes. So I have Michael Cannell. Welcome, Michael. Hey, it's great to be here. Thanks. Thanks. Great to see you again. Yeah, you too. Yeah, you've been on the show before, haven't you? I have, yes. For my previous book, I guess that we were here together three or so years ago. Was it that long? Was it Reles? Was it the Reles book? It was Reles, right, exactly. Yeah, that guy's a character. Abe Reles, also known as Kid Twist, who went out the window of a hotel in Coney Island. [0:48] Nobody knows exactly how he went out, but one thing's for sure, it wasn't voluntary. The canary could sing, but he couldn't fly, right? Exactly. So, guys, I know you all know me, but I'm retired intelligence unit detective Gary Jenkins, Kansas City Police Department. Got this show, Gangland Wire, and we deal with the mafia almost every week. So this story is Blood and the Badge: the Mafia, two killer cops, and a scandal that shocked the nation.
I know some of y'all will know this story about Steve Caracappa and Lou Eppolito. I want to tell you what Joe Pistone, who everybody knows as Donnie Brasco, said about this book: "Cannell pulls back the veil to reveal law enforcement's most lurid chapter, an entwined tale of decorated detectives on the mafia payroll, a true account of police depravity unearthed...

Arroe Collins
Michael Cannell's Blood And The Badge When Crime Families Team Up With The Law

Arroe Collins

Play Episode Listen Later Feb 12, 2025 17:33


Through the 1970s and 1980s, Louis Eppolito and Stephen Caracappa served in the NYPD, rising through the ranks, each becoming decorated detectives. They are also responsible for what may qualify as the department's darkest chapter. For years the two cops operated not only as paid informants for the Lucchese organized crime family, but served as mob henchmen, committed a multitude of crimes and were involved in at least fifteen murders. And they came remarkably close to getting away with all of it. Michael Cannell, a former editor at the New York Times and author of the critically acclaimed A Brotherhood Betrayed, has now written the definitive account of the crooked cops' escapades and the trail of terror they left - which included the deaths and wrongful imprisonment of wholly innocent people - in BLOOD AND THE BADGE: The Mafia, Two Killer Cops, and a Scandal That Shocked the Nation (January 14, 2025; SMP). "Cannell pulls back the veil to reveal law enforcement's most lurid chapter, an entwined tale of decorated detectives on the mafia payroll - a true account of police depravity unearthed with intensive reporting." -Joe Pistone, New York Times bestselling author of Donnie Brasco "Michael Cannell's Blood and the Badge details the extraordinary 'Killer Cops' investigation, a harrowing story of corruption and murder within law enforcement itself. Cannell misses nothing." -Nicholas Pileggi, bestselling author of Wiseguy and co-writer of the Academy Award-winner Goodfellas Become a supporter of this podcast: https://www.spreaker.com/podcast/arroe-collins-unplugged-totally-uncut--994165/support.

True Murder: The Most Shocking Killers
BLOOD AND THE BADGE-Michael Cannell

True Murder: The Most Shocking Killers

Play Episode Listen Later Feb 3, 2025 63:57


For the first time in forty years, former New York Times editor Michael Cannell has unearthed the full story behind two ruthless New York cops who acted as double agents for the Mafia. No episode in NYPD history surpasses the depravities of Louis Eppolito and Stephen Caracappa, two decorated detectives who covertly acted as mafia informants and paid assassins in the Scorsese world of 1980s Brooklyn. For more than ten years, Eppolito and Caracappa moonlighted as the mob's early warning alert system, leaking names of mobsters secretly cooperating with the government and crippling investigations by sharing details of surveillance, phone taps and impending arrests. The Lucchese boss called the two detectives his crystal ball: Whatever detectives knew, the mafia soon learned. Most grievously, Eppolito and Caracappa earned bonuses by staging eight mob hits, pulling the trigger themselves at least once. Incredibly, when evidence of their wrongdoing arose in 1994, FBI officials failed to muster an indictment. The allegations lay dormant for a decade and were only revisited due to relentless follow-up by Tommy Dades, a cop determined to break the cold case before his retirement. Eppolito and Caracappa were finally tried and then sentenced to life in prison in 2009, nearly thirty years after their crimes took place. Cannell's Blood and the Badge is based on entirely new research and never-before-released interviews with mobsters themselves, including Sammy “the Bull” Gravano. Joining me to discuss his new book, BLOOD AND THE BADGE: The Mafia, Two Killer Cops, and a Scandal That Shocked the Nation—N.Y. Times editor and author Michael Cannell. Follow and comment on Facebook-TRUE MURDER: The Most Shocking Killers in True Crime History https://www.facebook.com/profile.php?id=100064697978510 Check out TRUE MURDER PODCAST @ truemurderpodcast.com

Arroe Collins Like It's Live
Michael Cannell's Blood And The Badge When Crime Families Team Up With The Law

Arroe Collins Like It's Live

Play Episode Listen Later Jan 20, 2025 17:33


Through the 1970s and 1980s, Louis Eppolito and Stephen Caracappa served in the NYPD, rising through the ranks, each becoming decorated detectives. They are also responsible for what may qualify as the department's darkest chapter. For years the two cops operated not only as paid informants for the Lucchese organized crime family, but served as mob henchmen, committed a multitude of crimes and were involved in at least fifteen murders. And they came remarkably close to getting away with all of it. Michael Cannell, a former editor at the New York Times and author of the critically acclaimed A Brotherhood Betrayed, has now written the definitive account of the crooked cops' escapades and the trail of terror they left - which included the deaths and wrongful imprisonment of wholly innocent people - in BLOOD AND THE BADGE: The Mafia, Two Killer Cops, and a Scandal That Shocked the Nation (January 14, 2025; SMP). "Cannell pulls back the veil to reveal law enforcement's most lurid chapter, an entwined tale of decorated detectives on the mafia payroll - a true account of police depravity unearthed with intensive reporting." -Joe Pistone, New York Times bestselling author of Donnie Brasco "Michael Cannell's Blood and the Badge details the extraordinary 'Killer Cops' investigation, a harrowing story of corruption and murder within law enforcement itself. Cannell misses nothing." -Nicholas Pileggi, bestselling author of Wiseguy and co-writer of the Academy Award-winner Goodfellas Become a supporter of this podcast: https://www.spreaker.com/podcast/arroe-collins-like-it-s-live--4113802/support.

House of Mystery True Crime History
Michael Cannell - Blood and the Badge

House of Mystery True Crime History

Play Episode Listen Later Jan 15, 2025 27:39


For the first time in forty years, former New York Times editor Michael Cannell unearths the full story behind two ruthless New York cops who acted as double agents for the Mafia. No episode in NYPD history surpasses the depravities of Louis Eppolito and Stephen Caracappa, two decorated detectives who covertly acted as mafia informants and paid assassins in the Scorsese world of 1980s Brooklyn. For more than ten years, Eppolito and Caracappa moonlighted as the mob's early warning alert system, leaking names of mobsters secretly cooperating with the government and crippling investigations by sharing details of surveillance, phone taps and impending arrests. The Lucchese boss called the two detectives his crystal ball: Whatever detectives knew, the mafia soon learned. Most grievously, Eppolito and Caracappa earned bonuses by staging eight mob hits, pulling the trigger themselves at least once. Incredibly, when evidence of their wrongdoing arose in 1994, FBI officials failed to muster an indictment. The allegations lay dormant for a decade and were only revisited due to relentless follow-up by Tommy Dades, a cop determined to break the cold case before his retirement. Eppolito and Caracappa were finally tried and then sentenced to life in prison in 2009, nearly thirty years after their crimes took place. Cannell's Blood and the Badge is based on entirely new research and never-before-released interviews with mobsters themselves, including Sammy “the Bull” Gravano. Eppolito and Caracappa's story is more relevant than ever as police conduct comes under ever-increasing scrutiny. Support this show http://supporter.acast.com/houseofmysteryradio. Become a member at https://plus.acast.com/s/houseofmysteryradio. Hosted on Acast. See acast.com/privacy for more information.

Thecuriousmanspodcast
Michael Cannell Interview Episode 499

Thecuriousmanspodcast

Play Episode Listen Later Jan 14, 2025 55:40


Matt Crawford speaks with author Michael Cannell about his book, Blood and the Badge: The Mafia, Two Killer Cops, and a Scandal That Shocked the Nation. No episode in NYPD history surpasses the crimes committed by Louis Eppolito and Stephen Caracappa, two decorated detectives who covertly acted as mafia informants and paid assassins for the mob in 1980s Brooklyn. For more than ten years, Eppolito and Caracappa moonlighted as the mob's early warning system, leaking names of mobsters secretly cooperating with the government and crippling investigations by sharing details of surveillance, phone taps, and impending arrests. The Lucchese boss called the two detectives his crystal ball: Whatever detectives knew, the mafia soon learned. Most grievously, Eppolito and Caracappa earned bonuses by staging eight mob hits, pulling the trigger themselves at least once. Cannell takes us on a deep dive, grabs us by the throat and never lets go. Expertly researched and written, Blood and the Badge reads like a screenplay, almost too outlandish to believe. But make no mistake, these stories are true and so are their victims. Cannell makes sure we maintain our humanity as we read and wait to see if justice will prevail.

Thoroughbred Racing Radio Network
Monday NYRA Bets ATR-Part 2: FL Decoupling w/ Tom Cannell, David Terre, Rich Migliore

Thoroughbred Racing Radio Network

Play Episode Listen Later Jan 13, 2025


SportTalk Chattanooga
Kirby Cannell - November 6th 2024

SportTalk Chattanooga

Play Episode Listen Later Nov 6, 2024 13:41


See omnystudio.com/listener for privacy information.

In Her Business
Stephanie Cannell | Cannell Insurance Group

In Her Business

Play Episode Listen Later Oct 18, 2024 37:39


S1 | E14: In this episode, we discuss the insurance crisis and the difficulty of obtaining homeowners' insurance in California. We're joined by Stephanie Cannell, an experienced insurance broker, who offers insights into what's happening and how it affects us all.

MBC Grand Broadcasting, Inc.
2012 Cedaredge CJ Cannell 6 - 24 2020

MBC Grand Broadcasting, Inc.

Play Episode Listen Later Sep 23, 2024 6:50


2012 Cedaredge CJ Cannell 6 - 24 2020 by MBC Grand, Inc.

Folk on Foot
Melissa Harrison & Laura Cannell in Suffolk

Folk on Foot

Play Episode Listen Later Jul 11, 2024 59:02


On a beautiful day in May the novelist, nature writer and podcaster Melissa Harrison and the composer and multi-instrumentalist Laura Cannell take us for a walk in the glorious Suffolk countryside. Laura plays a recorder duet with a nightingale, Melissa reads from her acclaimed novel “All Among The Barley” - appropriately enough in a field of ripening barley - and we hunt for barn owl pellets “like dark Kinder Eggs” as Melissa has it. Then Laura takes out her fiddle and - using her distinctive “overbowing” technique - plays music inspired by ancient traditions and a deep sense of place. We rely on support from our listeners to keep this show on the road. If you like what we do please either become a member and get great rewards: patreon.com/folkonfoot or just buy us a coffee: ko-fi.com/folkonfoot Sign up for our newsletter at www.folkonfoot.com Follow us on Twitter/Facebook/Instagram: @folkonfoot Find out more about Melissa at https://melissaharrison.co.uk/ and Laura at https://lauracannell.com/ Hosted on Acast. See acast.com/privacy for more information.

UK Law Weekly
George v Cannell [2024] UKSC 19

UK Law Weekly

Play Episode Listen Later Jun 24, 2024 6:52


If a malicious falsehood is published, can damages be recovered for injury to feelings even if no financial loss was suffered? https://uklawweekly.substack.com/subscribe Music from bensound.com

Two Hundred A Day
Episode 137: This Case Is Closed

Two Hundred A Day

Play Episode Listen Later Jun 16, 2024 107:40


Nathan and Eppy finish the main tour through the first season with the 90-minute S1E6 This Case Is Closed. Jim's client sends him to Newark to check up on a soon-to-be son-in-law, and Jim starts getting heat from the mob, the cops and the feds - and he doesn't know why! With an unconventional narrative structure, plenty of good car chases and lots of Cannell-written memorable dialogue, this Roy Huggins script delivers a solid early-series Rockford experience. Note: This episode was split into two for syndication, and the syndicated versions are what's currently available on streaming services. We watched and talked about the original 90-minute version of the episode. We have another podcast: Plus Expenses. Covering our non-Rockford media, games and life chatter, Plus Expenses is available via our Patreon (https://www.patreon.com/twohundredaday) at ALL levels of support. Want more Rockford Files trivia, notes and ephemera? Check out the Two Hundred a Day Rockford Files Files (http://tinyurl.com/200files)! We appreciate all of our listeners, but offer a special thanks to our patrons (https://www.patreon.com/twohundredaday). In particular, this episode is supported by the following Gumshoe and Detective-level patrons: * Richard Hatem * Bill Anderson * Brian Perrera * Eric Antener * Jordan Bockelman * Michael Zalisco * Joe Greathead * Mitch Hampton's Journey of an Aesthete Podcast (https://www.jouneyofanaesthetepodcast.com) * Dael Norwood wrote a book! 
Trading Freedom: How Trade with China Defined Early America (https://press.uchicago.edu/ucp/books/book/chicago/T/bo123378154.html) * Chuck Suffel's comic Sherlock Holmes & the Wonderland Conundrum (http://whatchareadingpress.com) * Paul Townend recommends the Fruit Loops podcast (https://fruitloopspod.com) * Shane Liebling's Roll For Your Party dieroller app (https://rollforyour.party/) * Jay Adan's Miniature Painting (http://jayadan.com) * Brian Bernsen's Facebook page of Rockford Files filming locations (https://www.facebook.com/brianrockfordfiles/) * Brian Cummins, Robert Lindsey, Nathan Black, Jay Thompson, David Nixon, Colleen Kelly, Tom Clancy, Andre Appignani, Pumpkin Jabba Peach Pug, Dave P, Dave Otterson, Kip Holley and Dale Church! Thanks to: * Fireside.fm (https://fireside.fm) for hosting us * Audio Hijack (https://rogueamoeba.com/audiohijack/) for helping us record and capture clips from the show

Trivia Tracks With Pryce Robertson
TV Thursday: Renegade

Trivia Tracks With Pryce Robertson

Play Episode Listen Later May 16, 2024 3:15


One of the hottest shows on TV during the '90s, the action-adventure series starred Lorenzo Lamas as a bounty hunter framed for a murder he didn't commit.

Uncanny Landscapes
Uncanny Landscapes #23 - Laura Cannell

Uncanny Landscapes

Play Episode Listen Later Apr 26, 2024 59:43


Laura Cannell is a musician based in Suffolk, East Anglia, England. Her work combines experimental, folk, early and medieval music, as well as a number of unique and rare techniques. Her website is https://lauracannell.com/ where you can find out about recordings and gigs, or you can visit her Brawl Records bandcamp site to check out music and buy recordings. The music in this episode is from Laura's recent 'Lore' series, available through the bandcamp site.   Host Justin Hopper has an Uncanny Landscapes substack. The Substack is free, and includes the podcast + more. JH can be found via LinkTree or on Instagram.   Title sounds by The Belbury Poly, courtesy Ghost Box Records.   The Uncanny Landscapes icon is by Stefan Musgrove.

Each Other's Mothers Podcast
Episode 49 | Jream Big ft. Tessa Cannell

Each Other's Mothers Podcast

Play Episode Listen Later Apr 19, 2024 38:30


VOICEMAIL VOICEMAIL VOICEMAIL: https://www.speakpipe.com/EachOthersMothers WATCH JOH'S SPECIAL: https://youtu.be/55XHzQ_vG04 JohannaMedranda.com Produced by Mar | Recorded at HMC Studios IG @johanna.medranda @itchysnitchy @tessawrekt --- Support this podcast: https://podcasters.spotify.com/pod/show/eachothersmotherspodcast/support

The Every Day Novelist
Questions 1103: Cannell’s Secret Sauce

The Every Day Novelist

Play Episode Listen Later Apr 4, 2024 6:01


Indiana Jim asks: Stephen J. Cannell produced some of the best television shows ever, and I know you're an aficionado. What was his secret sauce for engaging and enduring episodic storytelling? Wiseguy, The Rockford Files, The A-Team, Silk Stalkings, The Commish. The post Questions 1103: Cannell's Secret Sauce appeared first on The Every Day Novelist.

Poem-a-Day
Skipwith Cannell: "The Coming of Night"

Poem-a-Day

Play Episode Listen Later Mar 16, 2024 4:41


Recorded by Academy of American Poets staff for Poem-a-Day, a series produced by the Academy of American Poets. Published on March 16, 2024. www.poets.org

Ian Talks Comedy
Babs Greyhosky (Magnum P.I., Riptide, A-Team)

Ian Talks Comedy

Play Episode Listen Later Mar 16, 2024 44:12


Babs Greyhosky joined me to discuss her love of early sitcoms; her and her father watching The Rockford Files; always wanting to be a writer; having a family friend in Don Bellisario; wrote spec scripts as a substitute teacher; moved to LA and became a secretary for Battlestar Galactica writers; typed the pilot of Magnum, P.I.; humor; first assignment; her name; memorable episodes; "story is king"; Greatest American Hero asked to write wedding episode and another fan favorite; Robert Culp; how writing The A-Team was part of the Cannell factory; left before the Peppard / Mr. T feud; Cannell v. Larson; Juanita Bartlett; Babs being the only female one-hour executive producer in Hollywood; Riptide; people wrote to TV Guide to find out if she was a woman; added depth to Riptide; parodied its competition "Moonlighting" in its last episode; Stepfanie Kramer; J.J. Starbuck; got George Clooney his SAG card; writing for Xena, Farscape, and Sheena; residuals; teaching at USC film school; teaching Fernando Kalife of Seven Days; becoming a therapist for veterans with PTSD; her selection of short stories, Hero Avenue; EMDR; Hero Avenue - Kindle edition by Greyhosky, Babs. Literature & Fiction Kindle eBooks @ Amazon.com.

Thoroughbred Racing Radio Network
Thursday Oaklawn Park ATR from NHC at Horseshoe Vegas-Part 2: Joe Clancy, Brian Troop, Thoro-Graph/Jeff Franklin, FHBPA's Tom Cannell, TP's Kaitlin Free

Thoroughbred Racing Radio Network

Play Episode Listen Later Mar 14, 2024


Manx Radio - Update
Steam Packet MD speaks, Lord Street plan is 'nothing', 32 years a GP now an MBE, MEDS closure questioned, Lib Van Bishop consultation & remembering Marilyn Cannell. It's Update with Andy Wint #iom #news #manxradio

Manx Radio - Update

Play Episode Listen Later Jan 4, 2024 26:21


Steam Packet MD speaks, Lord Street plan is 'nothing', 32 years a GP now an MBE, MEDS closure questioned, Lib Van Bishop consultation & remembering Marilyn Cannell. It's Update with Andy Wint #iom #news #manxradio

This Classical Life
Jess Gillam with... Laura Cannell

This Classical Life

Play Episode Listen Later Dec 30, 2023 27:03


Jess Gillam shares music with composer Laura Cannell, including a sublime Biber Requiem, new music by Kenya Grace, traditional Norwegian fiddle music and a Memphis Soul Stew! Playlist: Tchaikovsky – Piano Concerto No. 1, first movement [Vladimir Ashkenazy, London Symphony Orchestra, Lorin Maazel]; Kenya Grace – Strangers; John Mackey – Concerto for Soprano Saxophone and Wind Ensemble [Timothy McAllister, ASU Wind Ensemble]; Sven Nyhus – Fanitullen (The Devil's Dance); Domenico Scarlatti – Sonata in F minor, K. 466 [Vladimir Horowitz]; Tarquinio Merula – Ciaccona [His Majesty's Sackbuts and Cornetts]; King Curtis – Memphis Soul Stew; Heinrich Ignaz Franz von Biber – Requiem in F minor, Dies Irae [Gabrieli Consort, Paul McCreesh]

Thoroughbred Racing Radio Network
Tuesday AmWager ATR from UA RTIP Symposium-Part 2: Steve Crist, Lisa Lazarus/Mandy Minger, Patricia McQueen, Jessica Paquette/Bailey Armour, Herb Oster/Tom Cannell

Thoroughbred Racing Radio Network

Play Episode Listen Later Dec 5, 2023


Music Life
Don't wait for permission, with Kathryn Tickell, Laura Cannell, Amy Thatcher and Ruth Lyon

Music Life

Play Episode Listen Later Aug 18, 2023 37:03


British folk musicians Kathryn Tickell, Laura Cannell, Amy Thatcher and Ruth Lyon discuss their musical and personal identities, the music they made when they were younger, and whether or not place affects the music they create. Kathryn Tickell is from the North Tyne Valley of Northumberland and comes from a musical family of pipers, singers, fiddlers and accordion players. She took up the Northumbrian small pipes at the age of nine, and began learning tunes from old shepherd friends and family. Her work has evolved to traverse jazz, and music from around the world, to large-scale orchestral works. She has released 15 of her own albums to date, and has recorded and performed with Evelyn Glennie, the London Sinfonietta, Sting, and many others. In 2015 she was awarded an OBE for services to folk music. Laura Cannell is a composer and violinist whose music straddles the worlds of experimental, folk, chamber and medieval music. She came to prominence with her debut album, Quick Sparrows over the Black Earth, and is known for her compositions that draw on the emotional influences of landscapes, and explore the spaces between ancient and experimental music. She's also the founder of independent record label Brawl Records, and is curator of the Modern Ritual performance series. Amy Thatcher is one of the UK's leading folk accordionists, who's based in the North East of England. Her first album, Paper Bird, was recorded when she was just 16 years old, and she released her first album proper, Solo, in 2019. She's worked with the likes of the Royal Northern Sinfonia and Sting. Ruth Lyon is a folk and chamber-pop artist who has established herself as a key member of the music scene in Newcastle, UK. She grew up in the countryside of the North York Moors, inheriting a love of the outdoors as well as a sense of melancholy from the landscape, something that is instilled in the music she creates. 
Her most recent EP, Direct Debit to Vogue, showcases her soulful vocals and her witty, raw lyricism, expressing the power in fragility and the beauty in imperfection.

LessWrong Curated Podcast
"Brain Efficiency Cannell Prize Contest Award Ceremony" by Alexander Gietelink Oldenziel

LessWrong Curated Podcast

Play Episode Listen Later Jul 28, 2023 13:00


Previously Jacob Cannell wrote the post "Brain Efficiency", which makes several radical claims: that the brain is at the pareto frontier of speed, energy efficiency and memory bandwidth, and that this represents a fundamental physical frontier. Here's an AI-generated summary: The article “Brain Efficiency: Much More than You Wanted to Know” on LessWrong discusses the efficiency of physical learning machines. The article explains that there are several interconnected key measures of efficiency for physical learning machines: energy efficiency in ops/J, spatial efficiency in ops/mm^2 or ops/mm^3, speed efficiency in time/delay for key learned tasks, circuit/compute efficiency in size and steps for key low-level algorithmic tasks, and learning/data efficiency in samples/observations/bits required to achieve a level of circuit efficiency, or per unit thereof. The article also explains why brain efficiency matters a great deal for AGI timelines and takeoff speeds, as AGI is implicitly/explicitly defined in terms of brain parity. The article predicts that AGI will consume compute & data in predictable brain-like ways and suggests that AGI will be far more like human simulations/emulations than you'd otherwise expect and will require training/education/raising vaguely like humans. Jake has further argued that this has implications for FOOM and DOOM. Considering the intense technical mastery of nanoelectronics, thermodynamics and neuroscience required to assess the arguments here, I concluded that a public debate between experts was called for. This was the start of the Brain Efficiency Prize contest, which attracted over 100 in-depth, technically informed comments. Now for the winners! Please note that the criteria for winning the contest were based on bringing in novel and substantive technical arguments as assessed by me. In contrast, general arguments about the likelihood of FOOM or DOOM, while no doubt interesting, did not factor into the judgement. 
And the winners of the Jake Cannell Brain Efficiency Prize contest are Ege Erdil, DaemonicSigil, spxtr... and Steven Byrnes! Source: https://www.lesswrong.com/posts/fm88c8SvXvemk3BhW/brain-efficiency-cannell-prize-contest-award-ceremony Narrated for LessWrong by TYPE III AUDIO. Share feedback on this narration. [125+ Karma Post] ✓

The Nonlinear Library
LW - Brain Efficiency Cannell Prize Contest Award Ceremony by Alexander Gietelink Oldenziel

The Nonlinear Library

Play Episode Listen Later Jul 24, 2023 11:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brain Efficiency Cannell Prize Contest Award Ceremony, published by Alexander Gietelink Oldenziel on July 24, 2023 on LessWrong. Previously Jacob Cannell wrote the post "Brain Efficiency", which makes several radical claims: that the brain is at the pareto frontier of speed, energy efficiency and memory bandwidth, and that this represents a fundamental physical frontier. Here's an AI-generated summary: The article "Brain Efficiency: Much More than You Wanted to Know" on LessWrong discusses the efficiency of physical learning machines. The article explains that there are several interconnected key measures of efficiency for physical learning machines: energy efficiency in ops/J, spatial efficiency in ops/mm^2 or ops/mm^3, speed efficiency in time/delay for key learned tasks, circuit/compute efficiency in size and steps for key low-level algorithmic tasks, and learning/data efficiency in samples/observations/bits required to achieve a level of circuit efficiency, or per unit thereof. The article also explains why brain efficiency matters a great deal for AGI timelines and takeoff speeds, as AGI is implicitly/explicitly defined in terms of brain parity. The article predicts that AGI will consume compute & data in predictable brain-like ways and suggests that AGI will be far more like human simulations/emulations than you'd otherwise expect and will require training/education/raising vaguely like humans. Jake has further argued that this has implications for FOOM and DOOM. Considering the intense technical mastery of nanoelectronics, thermodynamics and neuroscience required to assess the arguments here, I concluded that a public debate between experts was called for. This was the start of the Brain Efficiency Prize contest, which attracted over 100 in-depth, technically informed comments. Now for the winners! 
Please note that the criteria for winning the contest were based on bringing in novel and substantive technical arguments as assessed by me. In contrast, general arguments about the likelihood of FOOM or DOOM, while no doubt interesting, did not factor into the judgement. And the winners of the Jake Cannell Brain Efficiency Prize contest are Ege Erdil, DaemonicSigil, spxtr... and Steven Byrnes! Each has won $150, provided by Jake Cannell, Eli Tyre and myself. I'd like to heartily congratulate the winners and thank everybody who engaged in the debate. The discussions were sometimes heated but always very informed. I was wowed and amazed by the extraordinary erudition and willingness for honest, compassionate intellectual debate displayed by the winners. So what are the takeaways? I will let you be the judge. Again, remember the choice of the winners was made on my (layman) assessment that the participant brought in novel and substantive technical arguments and thereby furthered the debate. Steven Byrnes: The jury was particularly impressed by Byrnes' patient, open-minded and erudite participation in the debate. He has kindly written a post detailing his views. Here's his summary, "Some ways that Jacob & I seem to be talking past each other": I will, however, point to some things that seem to be contributing to Jacob & me talking past each other, in my opinion. Jacob likes to talk about detailed properties of the electrons in a metal wire (specifically, their de Broglie wavelength, mean free path, etc.), and I think those things cannot possibly be relevant here. I claim that once you know the resistance/length, capacitance/length, and inductance/length of a wire, you know everything there is to know about that wire's electrical properties. All other information is screened off. 
For example, a metal wire can have a certain resistance-per-length by having a large number of mobile electrons with low mobility, or it could have the same resistance-per-length by having a smaller number of mobile...
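Byrnes' "screened off" claim above can be illustrated with a quick calculation: once a wire's lumped resistance and capacitance are fixed, so is its signaling delay, regardless of the microscopic electron properties that produced them. A minimal sketch with assumed ballpark values (not figures from the post):

```python
# Distributed-RC (Elmore) delay of an on-chip wire: delay ~ 0.38 * R * C.
# Two hypothetical wires with different carrier densities but identical R and C
# give the identical delay: microscopic details only enter through R and C.
# All values below are illustrative assumptions.
r_per_mm = 100.0       # total wire resistance per mm, ohms (assumed)
c_per_mm = 0.2e-12     # total wire capacitance per mm, farads (assumed)
length_mm = 1.0        # wire length, mm

R = r_per_mm * length_mm
C = c_per_mm * length_mm
delay_s = 0.38 * R * C   # Elmore delay estimate for a distributed RC line

print(f"Elmore delay: {delay_s:.2e} s")
```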

The Nonlinear Library: LessWrong
LW - Brain Efficiency Cannell Prize Contest Award Ceremony by Alexander Gietelink Oldenziel

The Nonlinear Library: LessWrong

Play Episode Listen Later Jul 24, 2023 11:28



Seek Travel Ride
Rory Mansfield and Ben Cannell: Introducing Les Lanternes Rouge Challenge - Cycling the Tour de France Self Supported

Seek Travel Ride

Play Episode Listen Later Jun 24, 2023 39:49


In this episode I speak with Rory Mansfield and Ben Cannell, two cyclists from the UK who are about to embark on their own cycling challenge, which they have named Les Lanternes Rouges. This July they will be looking to cycle the entire Tour de France 2023 route, including the stage transfers, fully self-supported. I spoke to them about how an idea conjured up almost ten years ago is now about to come to life, what their challenge involves, and what they are most looking forward to on their own 'lap of France'. You can follow along with Rory and Ben's challenge via the links below: Instagram Account - @Les_Lanternes_Rouges; Tracking and route: https://linktr.ee/les_lanternes_rouges. To find out more about their Charity Fundraiser head to: Props Bristol. Enjoying listening to Seek Travel Ride? Then be sure to follow the show in your podcast player to be notified when new episodes are released, and please give the show some love with a rating and review. Also follow us on social media: Instagram - @SeekTravelRide; Twitter - @BellaCycling; Website: Seek Travel Ride; Facebook - Seek Travel Ride. NEW! - Leave a Voice Message! Have something you'd like to tell me? Want to chat about this episode more, or tell me about your own bicycle adventures? You can now get in touch and leave a voice message! Just click here and record a voicemail message - I may even include it in future episodes! Join the Seek Travel Ride Facebook group - a place where you can discuss episodes in more detail, learn more about our guests, and share more about your own adventures on a bike!

The Nonlinear Library
LW - My side of an argument with Jacob Cannell about chip interconnect losses by Steven Byrnes

The Nonlinear Library

Play Episode Listen Later Jun 21, 2023 21:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My side of an argument with Jacob Cannell about chip interconnect losses, published by Steven Byrnes on June 21, 2023 on LessWrong. Context / How I came to write this Jacob Cannell (@jacob_cannell) made some claims about fundamental limits of interconnect losses on chips at Brain Efficiency: Much More Than You Wanted To Know, and in particular claimed that current chips have interconnect losses close to as low as they will ever get. When I read that claim I thought it was wrong, as was Jacob's discussion of interconnect losses more generally, but I didn't (and still don't) think the disagreement directly mattered for AI x-risk, so I mostly didn't want to spend time arguing about it. But then later Alexander Gietelink Oldenziel wrote $250 prize for checking Jake Cannell's Brain Efficiency, and I wound up in a 15-comment back-and-forth with Jacob about it, before ducking out. (Other people continued that thread afterwards). Unfortunately, I quit the discussion while still being confused about where Jacob was coming from. So this post will not be maximally good and useful, sorry. Nevertheless, here's a summary of my current perspective and understanding, in case anyone cares. I believe that Jacob plans to write at least one response comment in the comment section at the bottom of this post; if he hasn't yet, you should check back later. (Jargon level: medium-low maybe? There is still some unexplained physics & EE jargon, but hopefully I made the centrally important parts accessible to non-experts. DM or email me if something is confusing, and I will try to fix it.) (All numbers in this post should be treated as Fermi estimates.) (Thanks very much to Jacob for his extraordinary patience in trying to explain to me his perspective on this topic. And also his perspective on many other topics!) 
Background to the technical disagreement “Interconnects” send information from one point to another on a chip. The fundamental thermodynamic limit for the power required to send a bit of information from point A to point B is 0. As a stupid example, there is a lot of digital information on Earth, and it all travels 1e12 meters in orbit around the sun each year for roughly zero energy cost. Chip interconnect losses are obviously much much higher than the thermodynamic limit of “zero”—they might even constitute a majority of chip power consumption these days. Everyone knows that, and so does Jacob. So what is he saying? I think Jacob divides the world of interconnects into two categories, “reversible” and “irreversible” interconnects, with the former including optical interconnects and superconducting wires, and the latter including normal wires and brain axons. (I'm stating this categorization without endorsing it.) I think Jacob takes “reversible” interconnects (optical interconnects & superconducting wires) to have a fundamental interconnect loss limit of zero, but to have practical limits such that we're not expecting to cut orders of magnitude from the total interconnect loss budget this way. I agree with his conclusion here, although we had some disagreements in how we got there. But anyway, that's off-topic for this post. (See my brief discussion of optical interconnects here—basically, nobody seems to have even a roadmap to making optical interconnects with such low power that they could replace almost all (say, >90%) of the aggregate on-chip interconnect length.) Instead our main dispute was about voltages-on-wires, the workhorse of within-chip communication. Pause for: On-chip wire interconnects for dummies: As background, here is the oversimplified cartoon version of integrated circuits. There are a bunch of metal wires, and there are transistors that act as switches that connect or disconnect pairs of wires from each other. 
Depending on which transistors are “on” versus “off...
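The "voltages-on-wires" signaling described above has a simple energy model: driving a bit onto a wire costs on the order of C·V² per full charge/discharge cycle. A rough sketch, with assumed ballpark values rather than figures from the episode:

```python
# Rough switching-energy model for voltage signaling over an on-chip wire:
# ~C*V^2 per full charge/discharge cycle (1/2*C*V^2 per transition).
# Numbers are illustrative assumptions, not figures from the discussion.
cap_per_mm = 0.2e-12   # wire capacitance per mm, farads (assumed ballpark)
v_swing = 1.0          # logic voltage swing, volts (assumed)
length_mm = 1.0        # wire length, mm

c_total = cap_per_mm * length_mm
energy_per_bit = c_total * v_swing ** 2   # joules to signal one bit

print(f"~{energy_per_bit:.1e} J to signal one bit over {length_mm} mm")
```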

The Nonlinear Library
LW - Contra Yudkowsky on Doom from Foom #2 by jacob cannell

The Nonlinear Library

Play Episode Listen Later Apr 27, 2023 24:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Contra Yudkowsky on Doom from Foom #2, published by jacob cannell on April 27, 2023 on LessWrong. This is a follow-up to, and partial rewrite of, an earlier part #1 post critiquing EY's specific argument for doom from AI go foom, and a partial clarifying response to DaemonicSigil's reply on efficiency. AI go Foom? By Foom I refer to the specific idea/model (as popularized by EY, MIRI, etc) that near-future AGI will undergo a rapid intelligence explosion (hard takeoff) to become orders of magnitude more intelligent (e.g. from single-human capability to human-civilization capability) - in a matter of only days or hours - and then dismantle humanity (figuratively, as in disempower, or literally, as in "use your atoms for something else"). Variants of this idea still seem important/relevant drivers of AI risk arguments today: Rob Bensinger recently says "STEM-capable artificial general intelligence (AGI) is likely to vastly outperform human intelligence immediately (or very quickly)." I believe the probability of these scenarios is small, and the arguments lack technical engineering prowess concerning the computational physics of - and derived practical engineering constraints on - intelligence. During the Manhattan Project some physicists became concerned about the potential of a nuke detonation igniting the atmosphere. Even a small non-epsilon possibility of destroying the entire world should be taken very seriously. So they did some detailed technical analysis, which ultimately output a probability below their epsilon, allowing them to continue on their merry task of creating weapons of mass destruction. In the 'ideal' scenario, the doom foomers (EY/MIRI) would present a detailed technical proposal that could be risk evaluated. They of course have not provided that, and indeed it would seem to be an implausible ask. 
Even if they were claiming to have the technical knowledge on how to produce a fooming AGI, providing that analysis itself could cause someone to create said AGI and thereby destroy the world![1] In the historical precedent of the Manhattan Project, the detailed safety analysis only finally arrived during the first massive project that succeeded at creating the technology to destroy the world. So we are left with indirect, often philosophical arguments, which I find unsatisfying. To the extent that EY/MIRI have produced some technical work related to AGI[2], I honestly find it to be more philosophical than technical, and in the latter capacity more amateurish than expert. I have spent a good chunk of my life studying the AGI problem as an engineer (neuroscience, deep learning, hardware, GPU programming, etc), and reached the conclusion that fast FOOM is unlikely. Proving that of course is very difficult, so I instead gather much of the evidence that led me to that conclusion. However I can't reveal all of the evidence, as the process is rather indistinguishable from searching for the design of AGI itself.[3] The valid technical arguments for/against the Foom mostly boil down to various efficiency considerations. 
Quick background: Pareto optimality/efficiency. Engineering is complex and full of fundamental practical tradeoffs: larger automobiles are safer via higher mass, but have lower fuel economy; larger wings produce more lift but also more drag at higher speeds; highly parallel circuits can do more total work per clock and are more energy efficient, but the corresponding parallel algorithms are more complex to design/code, require somewhat more work to accomplish a task, and delay/latency becomes more problematic for larger circuits, etc.; adiabatic and varying degrees of reversible circuit designs are possible, but they are slower, larger, more complex, less noise tolerant, and still face largely unresolved design challenges with practical clock synchronization, etc.; quan...

The Nonlinear Library
LW - $250 prize for checking Jake Cannell's Brain Efficiency by Alexander Gietelink Oldenziel

The Nonlinear Library

Play Episode Listen Later Apr 26, 2023 1:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: $250 prize for checking Jake Cannell's Brain Efficiency, published by Alexander Gietelink Oldenziel on April 26, 2023 on LessWrong. This is to announce a $250 prize for spot-checking or otherwise in-depth reviewing Jacob Cannell's technical claims concerning thermodynamic & physical limits on computation, and the claim of biological efficiency of the brain, in his post Brain Efficiency: Much More Than You Wanted To Know. I've been quite impressed by Jake's analysis ever since it came out. I have been puzzled why there has been so little discussion of his analysis, since, if true, it seems to be quite important. That said, I have to admit I personally cannot assess whether the analysis is correct. This is why I am announcing this prize. Whether Jake's claims concerning DOOM & FOOM really follow from his analysis is up for debate. Regardless, it seems to me to have large implications for how the future might go and what future AI will look like. I will personally judge whether I think an entry warrants the prize. If you are also interested in seeing this situation resolved, I encourage you to increase the prize pool! As an example, DaemonicSigil's recent post is in the right direction. However, after reading Jacob Cannell's response I did not feel the post seriously engaged with the technical material, retreating to the much weaker claim that maybe exotic reversible computation could break the limits that Jacob posits, which I found unconvincing. The original post is quite clear that the limits are only for non-exotic computing architectures. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - Contra Yudkowsky on AI Doom by jacob cannell

The Nonlinear Library

Play Episode Listen Later Apr 24, 2023 13:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Contra Yudkowsky on AI Doom, published by jacob cannell on April 24, 2023 on LessWrong. Eliezer Yudkowsky predicts doom from AI: that humanity faces likely extinction in the near future (years or decades) from a rogue unaligned superintelligent AI system. Moreover he predicts that this is the default outcome, and that AI alignment is so incredibly difficult that even he failed to solve it. EY is an entertaining and skilled writer, but do not confuse rhetorical writing talent for depth and breadth of technical knowledge. I do not have EY's talents there, or Scott Alexander's poetic powers of prose. My skill points instead have gone near-exclusively towards extensive study of neuroscience, deep learning, and graphics/GPU programming. More than most, I actually have the depth and breadth of technical knowledge necessary to evaluate these claims in detail. I have evaluated this model in detail and found it substantially incorrect, and in fact brazenly, naively overconfident.
Intro: Even though the central prediction of the doom model is necessarily un-observable for anthropic reasons, alternative models (such as my own, or Moravec's, or Hanson's) have already made substantially better predictions, such that EY's doom model has low posterior probability. EY has espoused this doom model for over a decade, and hasn't updated it much from what I can tell. Here is the classic doom model as I understand it, starting first with its key background assumptions:
1. The brain inefficiency assumption: the human brain is inefficient in multiple dimensions/ways/metrics that translate into intelligence per dollar; inefficient as a hardware platform in key metrics such as thermodynamic efficiency.
2. The mind inefficiency or human incompetence assumption: in terms of software, he describes the brain as an inefficient, complex "kludgy mess of spaghetti-code". He derived these insights from the influential evolved modularity hypothesis as popularized in ev psych by Tooby and Cosmides. He boo-hooed neural networks, and in fact actively bet against them by hiring researchers trained in abstract math/philosophy, ignoring neuroscience and early DL, etc.
3. The more room at the bottom assumption: naturally dovetailing with points 1 and 2, EY confidently predicts there is enormous room for further hardware improvement, especially through strong Drexlerian nanotech.
4. The alien mindspace assumption: EY claims human mindspace is an incredibly narrow, twisty, complex target to hit, whereas AI mindspace is vast, and AI designs will be something like random rolls from this vast alien mindspace, resulting in an incredibly low probability of hitting the narrow human target.
Doom naturally follows from these assumptions: sometime in the near future some team discovers the hidden keys of intelligence and creates a human-level AGI, which then rewrites its own source code, initiating a self-improvement recursion which quickly results in the AGI developing strong nanotech and killing all humans within a matter of days or even hours. If assumptions 1 and 2 don't hold (relative to 3), then there is little to no room for recursive self-improvement. If assumption 4 is completely wrong, then the default outcome is not doom regardless. Every one of his key assumptions is mostly wrong, as I and others predicted well in advance. EY seems to have been systematically overconfident as an early futurist, and then perhaps updated later to avoid specific predictions, but without much updating of his mental models (specifically his nanotech-woo model, as we will see).
Brain Hardware Efficiency: EY correctly recognizes that thermodynamic efficiency is a key metric for computation/intelligence, and he confidently, brazenly claims (as of late 2021) that the brain is about 6 OOM from thermodynamic efficiency limits: Which brings me to...
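The "6 OOM from thermodynamic efficiency limits" claim quoted above can be sanity-checked with a back-of-envelope Landauer comparison. The sketch below uses rough, commonly cited estimates (20 W brain power, ~1e15 synaptic ops/s); whether the Landauer bound even applies per synaptic op is part of the dispute:

```python
import math

# Back-of-envelope check of the "~6 OOM from thermodynamic limits" claim.
# All inputs are rough, commonly cited estimates, not figures from the post.
k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 310.0                           # body temperature, K
landauer = k_B * T * math.log(2)    # minimum energy to erase one bit, ~3e-21 J

brain_power = 20.0                  # watts (typical estimate)
ops_per_sec = 1e15                  # synaptic events/s (order-of-magnitude assumption)
joules_per_op = brain_power / ops_per_sec   # ~2e-14 J per synaptic op

oom_gap = math.log10(joules_per_op / landauer)
print(f"Landauer bound: {landauer:.1e} J/bit; brain: {joules_per_op:.1e} J/op")
print(f"gap: ~{oom_gap:.1f} orders of magnitude")
```

Under these assumptions the gap comes out near 7 orders of magnitude, in the same ballpark as the claim being debated.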

Fanacek
S3 E5 Stephen J. Cannell

Fanacek

Play Episode Listen Later Mar 15, 2023 45:26


Stephen J. Cannell is a TV Gawd. His vanity card at the end of his programs feels like home to me. Many of us grew up on Mr. Cannell's awesome shows. The Rockford Files, Baretta, The A-Team, and 21 Jump Street are among his many contributions to the boob tube. As we discuss his life and career, we'll also make sure to talk about the first time I saw porn, my obsession with Robert Culp, getting hit on by a famous casting director, Ben Vereen in a rap song, and my possible involvement in the death of Robert Blake.

The Nonlinear Library
AF - My take on Jacob Cannell's take on AGI safety by Steve Byrnes

The Nonlinear Library

Play Episode Listen Later Nov 28, 2022 51:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My take on Jacob Cannell's take on AGI safety, published by Steve Byrnes on November 28, 2022 on The AI Alignment Forum. Jacob Cannell wrote some blog posts about AGI safety / alignment and neuroscience between 2010 and 2015, which I read and enjoyed quite early on when I was first getting interested in the same topics a few years ago. So I was delighted to see him reappear on LessWrong a year ago, where he has been a prolific and thought-provoking blogger and commenter (in his free time while running a startup!). See the complete list of Jacob's blog posts and comments. Having read a bunch of his writings, and talked to him in various blog comments sections, I thought it would be worth trying to write up the places where he and I seem to agree and disagree. This exercise will definitely be helpful for me, hopefully helpful for Jacob, and maybe helpful for people who are already pretty familiar with at least one of our two perspectives. (My perspective is here.) I'm not sure how helpful it will be for everyone else. In particular, I'm probably skipping over, without explanation, important areas where Jacob & I already agree—of which there are many! (Before publishing I shared this post with Jacob, and he kindly left some responses / clarifications / counterarguments, which I have interspersed in the text, in gray boxes. I might reply back to some of those—check the comments section in the near future.)
1. How to think about the human brain
1.1 "Evolved modularity" versus "Universal learning machine"
Pause for background: A. 
“Evolved modularity”: This is a school of thought wherein the human brain is a mishmosh of individual specific evolved capabilities, including a specifically-evolved language algorithm, a specifically-evolved “intuitive biology” algorithm, a specifically-evolved “intuitive physics” algorithm, an “intuitive human social relations” algorithm, a vision-processing algorithm, etc., all somewhat intermingled for sure, but all innate. Famous advocates of “evolved modularity” these days include Steven Pinker (see How the Mind Works) and Gary Marcus. I'm unfamiliar with the history but Jacob mentions early work by Cosmides & Tooby. B. “Universal learning machine”: Jacob made up this term in his 2015 post “The Brain as a Universal Learning Machine”, to express the diametrically-opposite school of thought, wherein the brain has one extremely powerful and versatile within-lifetime learning algorithm, and this one algorithm learns language and biology and physics and social relations etc. This school of thought is popular among machine learning people, and it tends to be emphasized by computational neuroscientists, particularly in the “connectionist” tradition. Here are two other things that are kinda related: “Evolutionary psychology” is the basic idea of getting insight into psychological phenomena by thinking about evolution. In principle, “evolutionary psychology” and “evolved modularity” are different things, but unfortunately people seem to conflate them sometimes. For example, I read a 2018 book entitled Beyond Evolutionary Psychology, and it was entirely devoted to (a criticism of) evolved modularity, as opposed to evolutionary psychology per se. Well, I for one think that evolved modularity is basically wrong (as usually conceived; see next subsection), but I also think that doing evolutionary psychology (i.e., getting insight into psychological phenomena by thinking about evolution) is both possible and an excellent idea. 
Not only that, but I also think that actual evolutionary psychologists have in fact produced lots of good insights, as long as you're able to sift them out from a giant pile of crap, just like in every field. “Cortical uniformity” is the idea—due originally to Vernon Mountcastle in the 1970s and popularized by Jeff Hawkins in On Intellig...

The Nonlinear Library
LW - Empowerment is All We Need by jacob cannell

The Nonlinear Library

Play Episode Listen Later Oct 24, 2022 29:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Empowerment is All We Need, published by jacob cannell on October 23, 2022 on LessWrong. Intro What/who would you like to become in a thousand subjective years? or a million? Perhaps, like me, you wish to become posthuman: to transcend mortality and biology, to become a substrate independent mind, to wear new bodies like clothes, to grow more intelligent, wise, wealthy, and connected, to explore the multiverse, perhaps eventually to split, merge, and change - to vasten. Regardless of who you are now or what specific values you endorse today, I suspect you too would at least desire these possibilities as options. Absent some culture specific social stigmas, who would not like more wealth, health, and power? more future optionality? As biological creatures, our fundamental evolutionary imperative is to be fruitful and multiply, so our core innate high level value should be inclusive genetic fitness. But for intelligent long lived animals like ourselves, reproduction is a terminal goal in the impossibly distant future: on the order of around 1e11 neural clock cycles from birth[1], to be more precise. Explicit optimization of inclusive genetic fitness through simulation and planning over such vast time horizons is simply implausible - especially for a mere 20 watt irreversible computer such as the human brain, no matter how efficient. Fortunately there exists an accessible common goal which is ultimately instrumentally convergent for nearly all final goals: power-seeking, or simply: empowerment. 
Omohundro proposed an early version of the instrumental convergence hypothesis as applied to AI in his 2008 paper the Basic AI Drives, however the same principle was already recognized by Klyubin et al in their 2005 paper "Empowerment: A Universal Agent-Centric Measure of Control"[2]: Our central hypothesis is that there exist a local and universal utility function which may help individuals survive and hence speed up evolution by making the fitness landscape smoother. The function is local in the sense that it doesn't rely on infinitely long history of past experience, does not require global knowledge about the world, and that it provides localized feedback to the individual. To a sugar-feeding bacterium, high sugar concentration means longer survival time and hence more possibilities of moving to different places for reproduction, to a chimpanzee higher social status means more mating choice and interaction, to a person more money means more opportunities and more options. The common feature of the above examples is the striving for situations with more options, with more potential for control or influence. To capture this notion quantitatively, as a proper utility function, we need to quantify how much control or influence an animal or human (an agent from now on) has. Salge et al later summarized these arguments into the Behavioral Empowerment Hypothesis[3]: The adaptation brought about by natural evolution produced organisms that in absence of specific goals behave as if they were maximizing their empowerment. Empowerment provides a succinct unifying explanation for much of the apparent complexity of human values: our drives for power, knowledge, self-actualization, social status/influence, curiosity and even fun[4] can all be derived as instrumental subgoals or manifestations of empowerment. 
Of course empowerment alone cannot be the only value, or organisms would never mate: sexual attraction is the principal deviation later in life (after sexual maturity), along with the related cooperative empathy/love/altruism mechanisms that align individuals with family and allies (forming loose hierarchical agents which empowerment also serves). The key central lesson that modern neuroscience gifted machine learning is that the vast apparent complexity of the adult human brain, with al...
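Klyubin-style empowerment, as quoted above, is a channel capacity: the maximum mutual information between an agent's action sequences and its resulting future state. For deterministic dynamics this reduces to the log of the number of distinct reachable future states. A toy sketch (the gridworld layout and horizon are assumptions for illustration, not from the post):

```python
# Toy n-step empowerment in a deterministic gridworld. For deterministic
# dynamics, the channel capacity max I(action sequence; end state) is achieved
# by a uniform distribution over action sequences reaching distinct states,
# giving empowerment = log2(#distinct reachable end states).
from itertools import product
import math

MOVES = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0), 'stay': (0, 0)}

def step(state, action, walls, size):
    dx, dy = MOVES[action]
    nx, ny = state[0] + dx, state[1] + dy
    if (nx, ny) in walls or not (0 <= nx < size and 0 <= ny < size):
        return state  # blocked moves leave the agent in place
    return (nx, ny)

def empowerment(state, n, walls, size):
    # Enumerate all n-step action sequences and count distinct end states.
    outcomes = set()
    for seq in product(MOVES, repeat=n):
        s = state
        for a in seq:
            s = step(s, a, walls, size)
        outcomes.add(s)
    return math.log2(len(outcomes))

# An agent in the open center has more reachable states (higher empowerment)
# than one boxed into a corner, matching the "striving for options" intuition.
print(empowerment((2, 2), 2, walls=set(), size=5))   # center of an open 5x5 grid
print(empowerment((0, 0), 2, walls=set(), size=5))   # corner: fewer options
```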

The Nonlinear Library
LW - AI Timelines via Cumulative Optimization Power: Less Long, More Short by jacob cannell

The Nonlinear Library

Play Episode Listen Later Oct 6, 2022 30:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Timelines via Cumulative Optimization Power: Less Long, More Short, published by jacob cannell on October 6, 2022 on LessWrong. TLDR: We can best predict the future by using simple models which best postdict the past (a la Bayes/Solomonoff). A simple model based on net training compute postdicts the relative performance of successful biological and artificial neural networks. Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032.
Cumulative Optimization Power[1]: a Simple Model of Intelligence. A simple generalized scaling model predicts the emergence of capabilities in trained ANNs (Artificial Neural Nets) and BNNs (Biological Neural Nets): perf ~= P = CT. For sufficiently flexible and efficient NN architectures and learning algorithms, the relative intelligence and capabilities of the best systems are simply proportional to net training compute, or intra-lifetime cumulative optimization power P, where P = CT (compute ops/cycle × training cycles), assuming efficient allocation of (equivalent uncompressed) model capacity bits N roughly proportional to data size bits D.
Intelligence Rankings: Imagine ordering some large list of successful BNNs (brains or brain modules) by intelligence (using some committee of experts), and from that deriving a relative intelligence score for each BNN. Obviously such a scoring will be noisy in its least significant bits: is a bottlenose dolphin more intelligent than an American crow? But the most significant bits are fairly clear: C. elegans is less intelligent than Homo sapiens. Now imagine performing the same tedious ranking process for various successful ANNs. Here the task is more challenging because ANNs tend to be far more specialized, but the general ordering is still clear: char-RNN is less intelligent than GPT-3. 
We could then naturally combine the two lists, and make more fine-grained comparisons by including specialized sub-modules of BNNs (vision, linguistic processing, etc). The initial theory is that P - intra-lifetime cumulative optimization power (net training compute) - is a very simple model which explains a large amount of the entropy/variance in a rank order intelligence measure: much more so than any other simple proposed candidates (at least that I'm aware of). Since P follow a predictable temporal trajectory due to Moore's Law style technological progress, we can then extrapolate the trends to predict the arrival of AGI. This simple initial theory has a few potential flaws/objections, which we will then address. Initial Exemplars I've semi-randomly chosen 15 exemplars for more detailed analysis: 8 BNNs, and 9 ANNs. Here are the 8 BNNs (6 whole brains and 2 sub-systems) in randomized order: Honey Bee Human Raven Human Linguistic Cortex Cat C. Elegans Lizard Owl Monkey Visual Cortex The ranking of the 6 full brains in intelligence is rather obvious and likely uncontroversial. Ranking all 8 BNNs in terms of P (net training compute) is still fairly obvious. Here are the 9 ANNs, also initially in randomized order: AlphaGo: First ANN to achieve human pro-level play in Go Deepspeech 2: ANN speech transcription system VPT: Diamond-level minecraft play Alexnet: Early CNN imagenet milestone, subhuman performance 6-L MNIST MLP: Early CNN milestone on MNIST, human level Chinchilla: A 'Foundation' Large Language Model GPT-3: A 'Foundation' Large Language Model DQN Atari: First strong ANN for Atari, human level on some games VIT L/14@336px: OpenAI CLIP 'Foundation' Large Vision Model Most of these systems are specialists in non-overlapping domains, such that direct performance comparison is mostly meaningless, but the ranking of the 3 vision systems should be rather obvious based on the descriptions. The DQN Atari and VPT agents are somewhat comparable to animal brains. 
How would you ran...
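The P = C · T model described in this episode can be sketched in a few lines of Python. This is a hedged illustration: all compute and training-time numbers below are invented order-of-magnitude placeholders, not figures from the post.

```python
# Hedged sketch of the post's P = C * T model: net training compute as a
# proxy for relative capability. All numbers are rough order-of-magnitude
# placeholders, not the post's actual estimates.

systems = {
    # name: (compute throughput in ops/s, training time in seconds) -- assumed
    "C. elegans": (1e6, 1e6),
    "Honey Bee": (1e10, 1e7),
    "GPT-3": (1e17, 3e6),
    "Human": (1e15, 1e9),
}

def cumulative_optimization_power(compute_per_s, train_seconds):
    """P = C * T: intra-lifetime cumulative optimization power."""
    return compute_per_s * train_seconds

# Rank systems by P, smallest to largest.
ranked = sorted(systems, key=lambda s: cumulative_optimization_power(*systems[s]))
for name in ranked:
    print(f"{name}: P = {cumulative_optimization_power(*systems[name]):.1e} ops")
```

Under these placeholder numbers the ranking comes out C. elegans, Honey Bee, GPT-3, Human, matching the post's intuition that rank-order intelligence tracks net training compute.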

BlockHash: Exploring the Blockchain
Travis Cannell - Head of Product for Orchid Labs

Play Episode Listen Later Aug 9, 2022 32:18


This week on episode 262 of the BlockHash Podcast, we have the Head of Product for Orchid Labs, Travis Cannell! Travis has been a marketing professional specializing in user acquisition for 10+ years, for companies including Intuit and Experian. He has repeatedly grown products from launch to market success through scalable customer acquisition. At the moment, he's laser-focused on demonstrating how Orchid's novel rollup model can enable nanopayments and help scale blockchains, no matter the ecosystem.

The podcast is available on…
Apple Podcasts: https://podcasts.apple.com/us/podcast/blockhash-exploring-the-blockchain/id1241712666
Amazon Music: https://music.amazon.com/podcasts/6dc84ee4-845b-4bea-b812-b876daab2c7e/BlockHash-Exploring-the-Blockchain
Spotify: https://open.spotify.com/show/4AGqU8qxIYVkxXM4q2XpO1
Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy9iNmNhNWM0L3BvZGNhc3QvcnNz
Website: www.blockhashpodcast.com

On Social Media…
Website: https://www.orchid.com/
Twitter: https://twitter.com/OrchidProtocol
LinkedIn: https://www.linkedin.com/company/orchid-labs/

Find Brandon Zemp & the podcast on Social Media…
Instagram: https://www.instagram.com/theblockhash/
Instagram: https://www.instagram.com/zempcapital/
Twitter: https://twitter.com/zempcapital
Facebook: https://www.facebook.com/theblockhash
LinkedIn: www.linkedin.com/in/brandonzemp

Sign up for the "Future Economy" newsletter…
Newsletter: https://futureeconomy.memberful.com/join

Polygon Alpha Podcast
The Decentralized VPN | Orchid Protocol | Travis Cannell

Play Episode Listen Later Aug 3, 2022 56:09


Audio from the July 27th, 2022 installment of “Polygon Alpha” with Travis Cannell, Head of Product at Orchid Protocol.

LinkTree - https://linktr.ee/polygonalphapodcast
Polygon Alpha Shorts - https://tinyurl.com/PolygonAlphaShorts
YouTube - https://www.youtube.com/c/PolygonTV
Apple - Follow the show on Apple Podcast!
Spotify - Follow the show on Spotify!
RSS feed - https://api.substack.com/feed/podcast/863588.rss

The Orchid Network enables a decentralized virtual private network (VPN), allowing users to buy bandwidth from a decentralized pool of service providers.
- Orchid uses an ERC-20 utility token called OXT, a new VPN protocol for token-incentivized bandwidth proxying, and smart contracts with algorithmic advertising and payment functions.
- Orchid's users connect to bandwidth sellers using a provider directory, and they pay using probabilistic nanopayments so Ethereum transaction fees on packets are acceptably low.
- Orchid Accounts: Orchid accounts are the decentralized entities that store digital currency on a blockchain to pay for services through nanopayments. The nanopayment smart contract governs Orchid accounts. The Orchid client requires an account in order to pay for VPN service.
- The Orchid Client: an open-source Virtual Private Network (VPN) client that supports decentralized Orchid accounts, as well as WireGuard and OpenVPN connections. The client can string together multiple VPN tunnels in an onion route and can provide local traffic analysis.
- The Orchid DApp: the Orchid dApp allows you to create and manage Orchid Accounts. The operations supported by the account manager are simply an interface to the decentralized smart contract that holds the funds and governs how they are added and removed.

Thank you so much for listening and watching the video; if you've not subscribed to the channel, please do! We'll continue to bring new videos to you!

Polygon offers scalable, affordable, secure and carbon-neutral web3 infrastructure built on Ethereum. 
Our products let developers create user-friendly applications #onPolygon with low transaction fees, without ever sacrificing security.

Polygon official channels:
Website: polygon.technology
Twitter: twitter.com/0xPolygon
Telegram Community: t.me/polygonofficial
Telegram announcements: t.me/PolygonAnnouncements
Reddit: www.reddit.com/r/0xPolygon/
Discord: discord.com/invite/polygon
Facebook: www.facebook.com/0xPolygon.Technology/

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit polygonalpha.substack.com
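The probabilistic nanopayments described in this episode can be illustrated numerically: rather than paying a tiny fee per packet on-chain, the sender issues tickets that win a larger face value with small probability, so the expected value equals the nano-sized payment while on-chain settlement (and its fees) only happens for winning tickets. This is a hedged sketch with invented parameters, not Orchid's actual contract logic.

```python
import random

# Each ticket pays `face_value` with probability `win_probability`,
# so its expected value is face_value * win_probability (the nanopayment).
def ticket_payout(face_value, win_probability, rng):
    """On-chain payout for one ticket: face_value if it wins, else 0."""
    return face_value if rng.random() < win_probability else 0.0

rng = random.Random(42)       # fixed seed for reproducibility
face_value = 1.0              # payout per winning ticket (assumed units)
win_probability = 1e-3        # expected value per ticket: 0.001

n_tickets = 1_000_000
total_paid = sum(ticket_payout(face_value, win_probability, rng) for _ in range(n_tickets))
avg = total_paid / n_tickets
print(f"average payout per ticket: {avg:.6f} (expected {face_value * win_probability})")
```

Only roughly one ticket in a thousand triggers an on-chain transaction, which is what keeps per-packet Ethereum fees acceptably low.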

Full Court Press
The Full Court Press - Dianna Cannell & Natalie Robbins

Play Episode Listen Later Apr 21, 2022 15:59


Dianna & Natalie join The Full Court Press to talk about 20 years of lacrosse and their upcoming Sky View Lacrosse Alumni event.

Embodied Faith: on Relational Neuroscience, Spiritual Formation, and Faith
028 Jesus: Man of Trauma, and Acquainted with Grief (with Jeff Cannell)

Play Episode Listen Later Apr 13, 2022 38:21


Did Jesus suffer trauma? Did he experience PTSD? We know Jesus suffered in his death. But did he suffer throughout his life? As Isaiah 53:3 says, the servant of God is “despised and rejected of men, a man of sorrows and acquainted with grief.” What if that pain, those sorrows, that grief is better expressed as trauma? That's what we are talking about today during Holy Week, as we remember Jesus' path to the cross. Our guest is pastor (and my good friend) Jeff Cannell, ministering at Central Vineyard in Columbus, OH.

Get the FREE ebook, The Brain God Gave Us, when you join the Embodied Faith community (connecting you to new episodes, posts, and other resources). Please subscribe and review on iTunes, Spotify, YouTube. If you would like coaching or spiritual direction that aligns with this podcast, then connect with Cyd Holsclaw here.

Follow the White Rabbit
All About Orchid and the Future of the Internet with Travis Cannell

Play Episode Listen Later Apr 13, 2022 44:19


This week host Derek E. Silva is joined by Travis Cannell, Head of Product & Marketing at Orchid, for a great conversation on how Travis first met co-founder Jay Freeman at the College of Creative Studies in Santa Barbara, why Orchid's VPN is different from other privacy tools, and his predictions on the future of the Internet.

Two Amazon Sellers and a Microphone
#204 - Selling Products On Amazon Using The Drop Shipping Method with Shayne Cannell-Cohen

Play Episode Listen Later Apr 13, 2022 45:55


Shayne Cannell-Cohen runs a multiple six-figure Amazon dropshipping business at the age of 23, having grossed over $280,000 within just 14 months. He also coaches other people on how to do the same, regardless of their experience with e-commerce or how many hours they have available to put towards a side hustle.

Instagram: https://www.instagram.com/shayne.ecom/
Website: https://leadingdigitalecom.com/
Ultimate Beginner's Guide to Selling on Amazon: https://leadingdgtalecom.kartra.com/page/nrQ11

Make sure to subscribe to the podcast so that you are notified of new episodes!

Independent Music Podcast
#363 – Laura Cannell, Shida Shahabi, Armon בשחור לבן, Darama, Fears, Mogambo - 21 March 2022

Play Episode Listen Later Mar 21, 2022 69:47


We're spending significant time in the northern hemisphere this week, starting off with legendary Afghan ghazal singer Dr. Mohammad Sadiq Fitrat a.k.a. Nashenas, and moving swiftly into contemporary hypnotic Thai sounds from Mogambo. Trips to Sweden, Israel, and the UK follow, including the fabulous new recorder-led record from Laura Cannell, and surf-infused pop music from Armon בשחור לבן. We have more South Asian-influenced sounds from London's Darama, whose debut EP tips a hat to UK funky and garage as well as Punjabi rhythms. We also have a stunning record from Sweden's Shida Shahabi. We dip under the equator briefly for a wonderful set of field recordings from Rio de Janeiro's Verónica Cerrotta. Elsewhere, there's the cracking spoken-word metronomic sounds of Reigns, heavy Canadian noise from BUÑUEL and much more.

Tracklisting
- Nashenas – The Way I Love My Beloved (Strut Records, Germany)
- Mogambo – Urvi (उर्वी) (Siamese Twins Records, Thailand)
- Reigns – The Horder (Wrong Speed Records, UK)
- Verónica Cerrotta – IV (Pakapi Records, Argentina)
- Shida Shahabi – Alice (FatCat Records, UK)
- Armon ארמון – בשחור לבן (Kame'a Music, Israel)
- BUÑUEL – Hornets (Profound Lore, Canada)
- Laura Cannell – For the Gatherers (Brawl Records, UK)
- Fears – 16 (TULLE, UK)
- Darama – Vera (More Time, UK)

This week's episode is sponsored by The state51 Conspiracy, a creative hub for music. Head to state51.com to find releases by JK Flesh vs Gnod, Steve Jansen, MrUnderwSood, Wire, Ghost Box, Lo Recordings, Subtext Records and many more

NEWS THAT MATTER
Why Amazon Dominates Online Shopping

Play Episode Listen Later Feb 13, 2022 44:42


Fake reviews are a big problem on Amazon, ranging from Facebook groups where bad actors solicit paid positive reviews to bots and click farms that upvote negative reviews in an effort to take out competition. These illegitimate reviews can boost sales of unsafe products and hurt business for legitimate sellers, causing brands like Nike and Birkenstock to sever ties with Amazon. Despite all the fake reviews on Amazon, there are professionals who earn a living reviewing products. Sean Cannell makes tens of thousands of dollars a month as a professional Amazon reviewer. He is part of the Amazon Affiliate program and reviews camera gear on his Think Media YouTube channel. Cannell makes a cut of every sale those reviews generate on Amazon.com. Meanwhile, Walmart is looking to catch up with Amazon's dominance in e-commerce as more and more shoppers take their business online. This is why Amazon dominates online shopping.

Young Blood (Men’s Health Matters)
To Be A Trans Man with Zac Cannell

Play Episode Listen Later Feb 7, 2022 52:29


Zac Cannell is a transgender man who lived the first 25 years of his life as a woman. Zac felt like he didn't belong in his body from a young age and it made growing into adulthood very confusing. Fortunately, he was supported by his family when he came out, but the same can't be said for everybody. Research shows people in the LGBTIQ+ community experience a disproportionately high rate of mental health challenges. As an advocate, facilitator and health professional, Zac has seen it all.

The Nonlinear Library
LW - Brain Efficiency: Much More than You Wanted to Know by jacob cannell

Play Episode Listen Later Jan 6, 2022 37:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brain Efficiency: Much More than You Wanted to Know, published by jacob cannell on January 6, 2022 on LessWrong.

What if the brain is highly efficient? To be more specific, there are several interconnected key measures of efficiency for physical learning machines:

- energy efficiency in ops/J
- spatial efficiency in ops/mm^2 or ops/mm^3
- speed efficiency in time/delay for key learned tasks
- compute/algorithmic efficiency in circuit size and steps for key learned tasks
- learning/data efficiency in samples/observations/bits required to achieve a level of compute efficiency, or per unit thereof

Why should we care? Brain efficiency matters a great deal for AGI timelines and takeoff speeds, as AGI is implicitly/explicitly defined in terms of brain parity. If the brain is about 6 OOM away from the practical physical limits of energy efficiency, then roughly speaking we should expect about 6 OOM of further Moore's Law hardware improvement past the point of brain parity: perhaps two decades of progress at current rates, which could be compressed into a much shorter time period by an intelligence explosion, a hard takeoff. But if the brain is already near said practical physical limits, then merely achieving brain parity in AGI at all will already require using up most of the optimizational slack, leaving not much left for a hard takeoff, and thus a slower takeoff. In worlds where brains are efficient, AGI is first feasible only near the end of Moore's Law (for non-exotic, reversible computers), whereas in worlds where brains are highly inefficient, AGI's arrival is more decorrelated, but would probably come well before any Moore's Law slowdown. 
In worlds where brains are ultra-efficient, AGI necessarily becomes neuromorphic or brain-like, as brains are then simply what economically efficient intelligence looks like in practice, as constrained by physics. This has important implications for AI safety: it predicts/postdicts the success of AI approaches based on brain reverse engineering (such as DL) and the failure of non-brain-like approaches, it predicts that AGI will consume compute & data in predictable brain-like ways, and it suggests that AGI will be far more like human simulations/emulations than you'd otherwise expect and will require training/education/raising vaguely like humans, and thus that neuroscience and psychology are perhaps more useful for AI safety than abstract philosophy and mathematics. If we live in such a world where brains are highly efficient, those of us interested in creating benevolent AGI should immediately drop everything and learn how brains work.

Energy

Computation is an organization of energy in the form of ordered state transitions transforming physical information towards some end. Computation requires an isolation of the computational system and its stored information from the complex noisy external environment. If state bits inside the computational system are unintentionally affected by the external environment, we call those bit errors due to noise, errors which must be prevented by significant noise barriers and/or potentially costly error correction techniques.

Thermodynamics

Information is conserved under physics, so logical erasure of a bit from the computational system entails transferring said bit to the external environment, necessarily creating waste heat. 
This close connection between physical bit erasure and thermodynamics is expressed by the Landauer Limit[1], which is often quoted as

E_b > k_B T ln 2

However, the full minimal energy barrier analysis involves both transition times and transition probability, and this minimal simple lower bound only applies at the useless limit of 50% success/error probability or infinite transition time. The key transition error probability α is constrained by the bit energy:

α = e^(−E_b / k_B T) [2][3]

Here's a range of ...
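The relations quoted in this transcript can be checked numerically. A small sketch (the physical constants are standard; the α relation is the one from the post, solved for the bit energy):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

def bit_energy(alpha):
    """Energy barrier E_b (in J) for a target transition error probability
    alpha, inverting alpha = exp(-E_b / (k_B * T))."""
    return k_B * T * math.log(1.0 / alpha)

# At alpha = 0.5 this reduces to the bare Landauer bound k_B * T * ln 2.
landauer = k_B * T * math.log(2)
assert math.isclose(bit_energy(0.5), landauer)

for alpha in (0.5, 1e-3, 1e-9):
    print(f"alpha = {alpha:g}: E_b = {bit_energy(alpha):.3e} J")
```

Lower error probabilities demand logarithmically higher bit energies, which is why the bare k_B T ln 2 bound at 50% error probability is only a loose lower limit.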

trialsitenews's podcast
Alan Cannell talks COVID-19: Nobody Likes Cheap Solutions

Play Episode Listen Later Nov 11, 2020 32:51


We sit down with Alan Cannell, a collaborator with TrialSite News down in Brazil, to talk about treatments for COVID-19. An engineer by training, he is originally from the Isle of Man, between Scotland and Ireland; he moved to Brazil in the 1970s and now has a family there. He has been applying his methodical engineering mindset to studying the trends of Ivermectin use, at least at the local level, in Brazil.

Going In Circles
Going in Circles LIVE Preakness preview with Tom Cannell and Jason Beides (part 1)

Play Episode Listen Later Sep 30, 2020 93:58


On today's show we previewed this Saturday's Preakness Stakes, which is in the unusual spot of being the final leg of the Triple Crown. Longtime owner and handicapper Tom Cannell leads off the show with his insights. He is followed by NY jockey agent Jason Beides, who offers his professional opinion of the Preakness and other races on the card.

---

Send in a voice message: https://anchor.fm/charles-simon6/message