Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Compleat Cybornaut, published by ukc10014 on May 19, 2023 on LessWrong.

A cluster of conceptual frameworks and research programmes has coalesced around a 2022 post by janus, which introduced language models as ‘simulators' (of other types of AIs, such as agents, oracles, or genies). One such agenda, cyborgism, was coined in a post by janus and Nicholas Kees and is being researched as part of the 2023 editions of AI Safety Camp and SERI MATS. The objective of this document is to provide an on-ramp to the topic, one that is hopefully accessible to people not hugely familiar with simulator theory or language models.

So what is cyborgism? Cyborgism proposes to use AIs, particularly language models (i.e. generative pre-trained transformers, or GPTs), in ways that exploit their (increasingly) general-purpose intelligence, while retaining human control over the ‘dangerous bits' of AI – i.e. agency, planning, and goal-formation. The overall objective is to leverage human cognitive ability while minimising the risks associated with agentic AI. Aside from agency, a core assertion of cyborgism is that certain commonly used language models are not well suited to many of the tasks human users throw at them, but that humans, if appropriately trained and equipped, might more effectively use GPTs in ways that are ‘natural' for the model, while dramatically increasing the productive and creative potential of the human. Specifically, some current systems, such as ChatGPT, are released or predominantly used in a ‘tuned' version, which has a host of shortcomings. One such tuning method, reinforcement learning from human feedback (RLHF), has a specific weakness relevant to cyborgism: the tuning process severely limits, or collapses, a valuable aspect of the GPT, namely its wild, unconstrained creativity.

Superficially, the cyborgism approach may resemble a human-plus-oracle setup, but there is a subtle and important distinction: an oracle, it is argued, might ‘smuggle in' some of the trappings of an agent. In contrast, the human cyborg embeds the output of the language model into their own workflow and thinking - model and human work as an integrated system. The cyborg leverages the model's creative, albeit non-agentic, potential while continuously ‘steering' or ‘course-correcting' the model to ensure its output remains relevant to the actual goal (a hypothetical sketch of such a steering loop is given below, after the background material). However, cyborgism might entail a high alignment tax: absent appropriate workflows and tools, a setup consisting of a human plus a non-agentic GPT might be considerably less productive than a purely agentic AI (as the human component becomes a bottleneck).

Background Concepts

Before getting into practical cyborgism, it is helpful to summarize some relevant theories and intuitions about how language models work.

Why is in-context learning relevant?
Neural networks generally, and language models specifically, go through several types of training: the large-scale (in terms of compute, time, and data) pre-training, during which all the neural weights are set in an end-to-end optimisation process; one or more fine-tuning rounds to focus the model on a specific use domain (during which the weights also change); and, in the case of certain models, including GPT-4, ChatGPT, and text-davinci-003, various types of supplementary tuning, which in the case of GPT-4 seems to include RLHF and rule-based reward modelling (RBRM). The final phase, known as ‘in-context learning', happens during the session with the user and doesn't involve actual changes to the neural weights, but it still significantly alters the kind of output the model generates, based on the accumulated context of its interaction with a user in a given session. The mechanisms by which this happens are debated, but from a cyborgism perspective, the context provides a powerful way of guiding or cont...
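To make the idea concrete, here is a minimal sketch of in-context learning in practice, using the legacy OpenAI completions endpoint with text-davinci-003 (one of the models mentioned above). The prompt, the few-shot examples, and the helper function are illustrative assumptions, not taken from the post. The weights never change between the two calls; only the text placed in the context window does, yet the second call is steered toward the demonstrated task and format.

```python
import openai  # assumes the pre-1.0 openai client and an OPENAI_API_KEY in the environment

def complete(prompt: str) -> str:
    """Return a single deterministic completion for `prompt` (illustrative helper)."""
    resp = openai.Completion.create(
        model="text-davinci-003",   # a model named in the text above
        prompt=prompt,
        max_tokens=20,
        temperature=0.0,
    )
    return resp.choices[0].text.strip()

instruction = "Translate English to French:\n"
few_shot = "sea otter => loutre de mer\ncheese => fromage\n"

# Zero-shot: the model must infer the task and output format from the instruction alone.
print(complete(instruction + "plush giraffe =>"))

# Few-shot: two worked examples placed in the context window steer the task and format,
# with no gradient updates anywhere -- this is what 'in-context learning' refers to here.
print(complete(instruction + few_shot + "plush giraffe =>"))
```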
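And, as referenced above, a hypothetical sketch of the cyborg-style steering loop: the model proposes several non-agentic continuations, and the human repeatedly chooses, edits, or overrides them, so planning and goal-formation stay on the human side. The function names, interface, and model choice are assumptions made for illustration, not a description of any existing cyborgism tool.

```python
import openai  # assumes the pre-1.0 openai client and an OPENAI_API_KEY in the environment

def propose(text: str, n: int = 3, max_tokens: int = 60) -> list[str]:
    """Ask the model for several candidate continuations of the working text."""
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=text,
        n=n,
        max_tokens=max_tokens,
        temperature=1.0,  # keep sampling loose; the human does the filtering
    )
    return [choice.text for choice in resp.choices]

def steering_loop(seed: str) -> str:
    """Human-in-the-loop drafting: the model only ever proposes, the human disposes."""
    text = seed
    while True:
        candidates = propose(text)
        for i, candidate in enumerate(candidates):
            print(f"[{i}] {candidate!r}")
        choice = input("Pick a branch number, type your own continuation, or 'q' to stop: ")
        if choice == "q":
            return text
        if choice.isdigit() and int(choice) < len(candidates):
            text += candidates[int(choice)]   # accept a model-proposed branch
        else:
            text += choice                    # the human overrides the model entirely
```

The design point this is meant to illustrate is that goal-formation never leaves the human branch of the loop: the model contributes raw generative variety, and the human supplies all the steering and course-correction.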