Robert "Hoagy" Hoagland lived in Newtown, Connecticut, with his wife, Laurie, and one young adult son. The couple had two other sons. Hoagy was a part-time paralegal, former chef and had his real estate appraiser's license. He was your all-around, average and average-looking 50-something-year-old guy. Until Sunday, July 28, 2013. Hoagy got up early, went to get bagels for him and his son and then paid some bills. He went out to mow the lawn and was seen by a neighbor around 11 am. At noon, his son Max asked to use the car and Hoagy let him. When Max returned a few hours later, his Dad was gone.Hoagy was supposed to pick up his wife at JFK airport the next day, Monday, July 29th, but he never showed. When he still wasn't back by Tuesday, Laurie reported him as a missing person. All of the family vehicles were accounted for, and Hoagy had left behind his keys, wallet, thousands of dollars in cash in a home safe and his daily medication. Hoagy's family never heard from him again. It was as if he literally just disappeared. Until 10 years later, when his wife, his sons and investigators would get the shock of their lives.Show Notes: https://podtail.com/en/podcast/wicked-deeds/
We study alignment audits—systematic investigations into whether an AI is pursuing hidden objectives—by training a model with a hidden misaligned objective and asking teams of blinded researchers to investigate it. This paper was a collaboration between the Anthropic Alignment Science and Interpretability teams.

Abstract: We study the feasibility of conducting alignment audits: investigations into whether models have undesired objectives. As a testbed, we train a language model with a hidden objective. Our training pipeline first teaches the model about exploitable errors in RLHF reward models (RMs), then trains the model to exploit some of these errors. We verify via out-of-distribution evaluations that the model generalizes to exhibit whatever behaviors it believes RMs rate highly, including ones not reinforced during training. We leverage this model to study alignment audits in two ways. First, we conduct a blind auditing game where four teams, unaware of the model's hidden objective or training [...]

Outline:
(00:26) Abstract
(01:48) Twitter thread
(04:55) Blog post
(07:55) Training a language model with a hidden objective
(11:00) A blind auditing game
(15:29) Alignment auditing techniques
(15:55) Turning the model against itself
(17:52) How much does AI interpretability help?
(22:49) Conclusion
(23:37) Join our team

The original text contained 5 images which were described by AI.

First published: March 13th, 2025
Source: https://www.lesswrong.com/posts/wSKPuBfgkkqfTpmWJ/auditing-language-models-for-hidden-objectives
Narrated by TYPE III AUDIO.
Often the first notes of the evening set the pace, the mood and the tone for the entire rehearsal. As you'll hear on this track, Danny Cox walked into last week's session ready to set the Floodometer on sizzle. And it certainly worked. The Flood has been doing this great old 1920s jazz standard for only a couple of years now, but it's already become one of the band's go-to tunes for a good time, especially whenever Danny has new musical ideas to explore.

About the Song

This week's featured tune — “Am I Blue?” — has a special place at the intersection of jazz and movie histories. That's because in 1944 a sassy performance of the 1929 classic marked songwriter Hoagy Carmichael's big break in Hollywood.

Hoagy is best known, of course, for performing his own compositions (“Stardust,” “Georgia on My Mind,” “Up a Lazy River,” “Memphis in June” and so many others).

However, when Carmichael was cast to play the character “Cricket” in Humphrey Bogart's To Have and Have Not, director Howard Hawks wanted a scene in which Hoagy — as a honky tonk piano player in a Martinique dive — is doing the Harry Akst-Grant Clarke tune when a 19-year-old Lauren Bacall makes her film debut.

“My first scene required me to sing ‘Am I Blue,'” Carmichael wrote in his 1965 autobiography Sometimes I Wonder. “‘Am I Nervous' would have been a more appropriate title. I chewed a match to help my jitters…. The match was a good decision, it turned out, because it became a definite part of the character.”

With some comic results. One morning during the shooting, Carmichael had a scene with Bogart, who walked onto the set chewing on a match. “My heart sank,” Hoagy wrote. “What can you say to the star of the picture when he's apparently intent on stealing your stuff?”

Only the next day did Carmichael learn it had all been a gag. “Bogey let me go on thinking they had actually shot the scene that way.”

Meanwhile…

Elsewhere in the film, Hoagy is seen playing an accompaniment for the very nervous young Bacall as her character, “Slim,” sings his and Johnny Mercer's song, "How Little We Know,” which they wrote specifically for the movie.

A 16-year-old Andy Williams recorded the song as a possible alternative track to dub Bacall's low voice; however, Bacall always maintained that the producers ended up using her singing in the film rather than the dub.

“I'm not sure what the truth of it was,” Williams later wrote in his own autobiography, “but I'm not going to argue about it with the formidable Ms. Bacall!”

Meanwhile, more films awaited Hoagy Carmichael. As he wrote, he was cast in "every picture in which a world-weary character in bad repair sat around and sang or leaned over a piano.… It was usually the part of the hound-dog-faced old musical philosopher noodling on the honky-tonk piano, saying to a tart with a heart of gold: 'He'll be back, honey. He's all man'."

Song Histories

If you would like to read more about the history of “Am I Blue?” check out this earlier Flood Watch report on the song.

And for the backstories on other songs in The Flood's repertoire, peruse the newsletter's Song Stories section. Click here to give it a look.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit 1937flood.substack.com
Duration: 00:32:01 - Les Nuits de France Culture - by Albane Penaranda - This fourth installment of "Le Rythme et la raison," the series Daniela Langer devoted in 1991 to the great composers and lyricists of the golden age of the Broadway musical, paid tribute to Hoagy Carmichael. The program was subtitled "Des rengaines banales et irrésistibles" ("banal and irresistible tunes")… - Directed by: Véronique Lamendour - Guests: René Urtreger, jazz pianist; Stephanie Crawford, American jazz singer
Hoagy Carmichael was not quite 28 years old when he wrote what music historians consider THE song of the 20th century.

Just how big is “Stardust” in the Great American Songbook?

* Well, for starters, this is a song that has been recorded as an instrumental or a vocal more than 1,500 times.
* Forty years after its publication in 1928, it was still earning more than $50,000 annually in royalties.
* The lyrics that Mitchell Parish later brought to Hoagy's song have been translated into 30 languages.

“Stardust” simply is “the most-recorded song in the history of the world,” music curator John Edward Hasse of the Smithsonian Institution once told John Barbour of The Associated Press, “and that right there qualifies it as the song of the century.”

The closest competitor, he said, is “Yesterday” by John Lennon and Paul McCartney, and, at No. 3, W.C. Handy's “St. Louis Blues.”

Young Hoagy and His Song

Late summer 1927 found Hoagy Carmichael back home in Indiana after a romp in Florida; the young man was hanging out near the campus of Indiana University, from which he had graduated a few years earlier.

As he related in his first autobiography, The Stardust Road, in 1946:

It was a hot night, sweet with the death of summer and the hint and promise of fall. A waiting night, a night marking time, the end of a season. The stars were bright, close to me, and the North Star hung low over the trees.

I sat down on the “spooning wall” at the edge of the campus and all the things that the town and the university and the friends I had flooded through my mind. Beautiful Kate (Cameron), the campus queen... and Dorothy Kelly. But not one girl — all the girls — young and lovely. Was Dorothy the loveliest? Yes. The sweetest? Perhaps. But most of them had gone their ways. Gone as I'd gone mine....

Never to be 21 again; so in love again. Never feel the things I'd felt. The memory of love's refrain....

Carmichael wrote that he then looked up at the sky, whistling softly, and that the melody flowing from his feelings was “Stardust.” Excited, he ran to a campus hangout where the owner was ready to close. Hoagy successfully begged for a few minutes of piano time so he could solidify that theme in his head.

True?

Is that really how it happened? “What can I say?” historian Hasse told the AP decades later. “It is truly a thing of legend.”

The same year, Carmichael recorded an upbeat instrumental version of the song for Gennett Records. The next year, he left Indiana for New York City after Mills Music hired him as a composer.

The Reception Widens

West Virginian Don Redman recorded the song in the same year, and by 1929 it was performed regularly by Duke Ellington at the Cotton Club; however, it was Isham Jones' 1930 rendition that made the song popular on radio, prompting multiple acts to record it.

For instance, in 1936, RCA released double-sided versions of “Stardust,” Tommy Dorsey on one side and Benny Goodman on the other.

Then 1940 was a banner year, with releases of the song by Frank Sinatra, Artie Shaw and Glenn Miller. Since then, “Stardust” has entered the repertoire of every serious jazz singer and instrumentalist around the world.

Willie's Version

In 1978, country superstar Willie Nelson surprised fans with the release of his Stardust album, which went gold after staying on the best-seller charts for more than 135 weeks.

Nelson recalled singing it in the Austin, Texas, Opera House. “There was a kind of stunned silence in the crowd for a moment, and then they exploded with cheering and whistling and applauding. The kids thought ‘Stardust' was a new song I had just written….”

Our Take on the Tune

Since its composition nearly a hundred years ago, this song has been performed by many folks as a slow, romantic ballad, drawing out the words and the melody. Good for them. However, when Hoagy wrote this classic, he performed it with a bit of the sass and sway that characterized the jazz of his day, and we in The Flood like to carry on that tradition. The song has some of the best chords of anything in our repertoire, and in this take from last week's rehearsal you'll hear two solos in which Danny Cox finds all kinds of interesting ideas. Click here to come along on his quest.

More from Year 2024?

It's been a busy, interesting year in the Floodisphere, with lots of new tunes as well as re-imaginings of old ones from The Flood's songbag.

If you'd like to join us in a little auld-lang-synery, our free Radio Floodango music streaming service features a randomized playlist built around the tunes in all the weekly podcasts of the year. Click here to give Year 2024 a re-listen.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit 1937flood.substack.com
Morgan Park's Home of the Hoagy has been serving sandwiches since 1969, including the Chicago classic sweet steak, and is set to reopen Thursday after a month-long hiatus. Earlier this year, host Jacoby Cochran took producer Michelle Navarro to try her very first sweet steak. In this episode, Michelle lets us know what she thinks while Jacoby talks about this Chicago sandwich's role as a childhood staple, what makes it “a meal on a bun” and why it was worth the wait.

Good news: Star Farm Fresh Market and Kitchen

Want some more City Cast Chicago news? Then make sure to sign up for our Hey Chicago newsletter. Follow us @citycastchicago. You can also text us or leave a voicemail at 773-780-0246.

Learn more about the sponsors of this Oct. 2 episode: Steppenwolf Theatre

Become a member of City Cast Chicago. Interested in advertising with City Cast? Find more info HERE
AJBB Extra! Bubber, Bix, and Hoagy Carmichael and His Orchestra, "Rockin' Chair" (1930). What a band: Benny Goodman (saxophone, clarinet), Bud Freeman (tenor saxophone), Tommy Dorsey (trombone), Jimmy Dorsey (alto saxophone), Jack Teagarden (trombone), Bix Beiderbecke (cornet), Bubber Miley (trumpet), Joe Venuti (violin), Irving Brodsky (piano), Eddie Lang (guitar), Gene Krupa (drums). Enjoy!
Some classic Chicago foods, from deep-dish to jibaritos to Italian beef to paczki, can be found all over the city — or at least at several spots. But there's one classic sandwich that you can really only find at one spot these days. Morgan Park's Home of the Hoagy has been serving up sandwiches since 1969, including the sweet steak. The sandwich holds a special place for South Siders. Host Jacoby Cochran takes producer Michelle Navarro to try her first sweet steak and talks about how it was a childhood staple, what makes it “a meal on a bun” and why it's worth the wait.

Good news: The Rooted and Radical Youth Poetry Festival

Want some more City Cast Chicago news? Then make sure to sign up for our Hey Chicago newsletter. Follow us @citycastchicago. You can also text us or leave a voicemail at 773-780-0246.

Become a member of City Cast Chicago. Interested in advertising with City Cast? Find more info HERE

Learn more about your ad choices. Visit megaphone.fm/adchoices
Hoagy Carmichael, The Triple-Digit Midget, is back for Round 2. Hoagy is the driving force behind Hoagy's Heroes, a fundraising organization that raises money for charity through long-distance riding experiences in conjunction with the Iron Butt Association. The last time we had him on, he was on Cloud 9, charged up, ready to go and full of stories. The best of those stories were heard off the record, for fear that his local church parishioners might call for his ruination. Sit back with us as Hoagy takes the reins. https://www.hoagysheroes.org Our master link: https://linktr.ee/thebikerslifestylepodcast
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AutoInterpretation Finds Sparse Coding Beats Alternatives, published by Hoagy on July 17, 2023 on LessWrong.

Produced as part of the SERI ML Alignment Theory Scholars Program - Summer 2023 Cohort. Huge thanks to Logan Riggs, Aidan Ewart, Lee Sharkey and Robert Huben for their work on the sparse coding project, Lee Sharkey and Chris Mathwin for comments on the draft, EleutherAI for compute and OpenAI for GPT-4 credits.

Summary

We use OpenAI's automatic interpretation protocol to analyse features found by dictionary learning using sparse coding, and compare the interpretability scores thereby found to a variety of baselines. We find that for both the residual stream (layer 2) and MLP (layer 1) of EleutherAI's Pythia-70M, sparse coding learns a set of features that is superior to all tested baselines, even when removing the bias and looking just at the learnt directions. In doing so we provide additional evidence for the hypothesis that NNs should be conceived as using distributed representations to represent linear features which are only weakly anchored to the neuron basis.

As before, these results are still somewhat preliminary and we hope to expand on them and make them more robust over the coming month or two, but we hope people find them fruitful sources of ideas. If you want to discuss, feel free to message me or head over to our thread in the EleutherAI discord. All code is available at the github repo.

Methods

Sparse Coding: The feature dictionaries are learned by simple linear autoencoders with a sparsity penalty on the activations. For more background on the sparse coding approach to feature-finding, see the Conjecture interim report that we're building from, or Robert Huben's explainer.

Automatic Interpretation: As Logan Riggs recently found, many of the directions found through sparse coding seem highly interpretable, but we wanted a way to quantify this and make sure that we were detecting a real difference in the level of interpretability. To do this we used the methodology outlined in this OpenAI paper; details can be found in their code base.

To quickly summarise: we are analysing features, which are defined as scalar-valued functions of the activations of a neural network, limiting ourselves here to features defined on a single layer of a language model. The original paper simply defined features as the activations of individual neurons, but we will in general be looking at linear combinations of neurons. We give a feature an interpretability score by first generating a natural-language explanation for the feature, which is expected to explain how strongly the feature will be active in a certain context, for example 'the feature activates on legal terminology'. Then we give this explanation to an LLM and ask it to predict the feature's activation across hundreds of different contexts, so if the tokens are ['the', 'lawyer', 'went', 'to', 'the', 'court'] the predicted activations might be [0, 10, 0, 0, 0, 8]. The score is defined as the correlation between the true and predicted activations.

To generate the explanations we follow OpenAI and take a 64-token sentence fragment from each of the first 50,000 lines of OpenWebText. For each feature, we calculate the average activation and take the 20 fragments with the highest activation. Of these 20, we pass 5 to GPT-4, along with the rescaled per-token activations. From these 5 fragments, GPT-4 suggests an explanation for when the neuron fires. GPT-3.5 is then used to simulate the feature, given the explanation, across both another 5 highly activating fragments and 5 randomly selected fragments (with non-zero variation). The correlation scores are calculated across all 10 fragments ('top-and-random'), as well as for the top and random fragments separately.

Comparing Feature Dictionaries

We use dictionary learning with a sparsi...
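The scoring step described above reduces to a correlation between the feature's true per-token activations and the activations a simulator predicts from the explanation. Here is a minimal sketch of that final step in Python, assuming the fragments and simulated values have already been produced by the GPT-4/GPT-3.5 calls described in the post; the function and variable names are illustrative, not the authors' code.

```python
# Minimal sketch of the scoring step: an explanation is judged by how well
# simulated activations correlate with the true feature activations.
import numpy as np

def interpretability_score(true_acts, simulated_acts):
    """Pearson correlation between true and simulated per-token activations,
    pooled across all tokens of all scored fragments."""
    t = np.concatenate([np.asarray(a, dtype=float) for a in true_acts])
    s = np.concatenate([np.asarray(a, dtype=float) for a in simulated_acts])
    if t.std() == 0 or s.std() == 0:
        return 0.0
    return float(np.corrcoef(t, s)[0, 1])

# Toy example in the spirit of the numbers above: one fragment's true
# activations vs. what a simulator predicted from the explanation.
true_frag = [0, 12, 0, 0, 0, 7]   # e.g. a feature firing on legal terminology
sim_frag = [0, 10, 0, 0, 0, 8]
print(interpretability_score([true_frag], [sim_frag]))
```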
What is a “buttermilk sky”? I asked myself. It's such an evocative image, and for years I would simply envision a magnificent sunset of red and gold, suffused through a canopy of fluffy clouds. I googled it, and I was right! Up came rows of beautiful celestial pics, and although those photos are fantastic, the image in my mind's eye had them all beat - attached as it was to an indefinable, stirring emotion within me, mystical in its effect. That lyric was summoned from the ether by Jack Brooks, with Hoagy Carmichael writing the music. Hoagy introduced the tune in the film “Canyon Passage” with Dana Andrews, but its appeal transcended the cowboy genre. It was so popular that in December of 1946 there were four versions of the song in the top 20, led by Kay Kyser's band at #1. You may be aware that Hoagy was also the composer of “Stardust,” considered by many to be the finest pop standard in the Great American Songbook. And he wrote the anthemic “Georgia on My Mind.” Film clips of him reveal one of the most relaxed and natural performers who ever appeared on celluloid - usually seated at a piano. In my acting class we screened “The Best Years of Our Lives,” and Hoagy's encounter with the amputee Harold Russell, giving avuncular advice to the traumatized vet while tinkling the ivories, is one of my favorite moments in a classic film stuffed with unforgettable scenes. I'm not sure, but my sense is that the image of the buttermilk sky reminds us of the evanescence of life, and the sweet sadness of a longed-for love.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Replication] Conjecture's Sparse Coding in Small Transformers, published by Hoagy on June 16, 2023 on LessWrong.

Summary

A couple of weeks ago Logan Riggs and I posted that we'd replicated the toy-model experiments in Lee Sharkey and Dan Braun's original sparse coding post. Now we've replicated the work they did (slides) extending the technique to custom-trained small transformers (in their case 16 residual dimensions, in ours 32). We've been able to replicate all of their core results, and our main takeaways from the last 2 weeks of research are the following:

We can recover many more features from activation space than the dimension of the activation space, and these features have a high degree of cosine similarity with the features learned by other, larger dictionaries, which in toy models was a core indicator of having learned the correct features.

The distribution of MCS scores between trained dictionaries of different sizes is highly bimodal, suggesting that there is a particular set of features that is consistently found as sparse basis vectors of the activations across different dictionary sizes.

The maximum-activating examples of these features usually seem human-interpretable, though we haven't yet done a systematic comparison to the neuron basis.

The diagonal lines seen in the original post are an artefact of dividing the l1_loss coefficient by the dictionary size. Removing this means that the same l1_loss coefficient is applicable to a broad range of dictionary sizes.

We find that as dictionary size increases, MMCS increases rapidly at first, but then plateaus.

The learned feature vectors, including those that appear repeatedly, do not appear to be at all sparse with respect to the neuron basis.

As before, all code is available on GitHub, and if you'd like to follow the research progress and potentially contribute, join us on our EleutherAI thread. Thanks to Robert Huben (Robert_AIZI) for extensive comments on this draft, and Lee Sharkey, Dan Braun and Robert Huben for their comments during our work.

Next Steps & Request for Funding

We'd like to test, in a quantitative manner, how interpretable the features we have found are. We've got the basic structure ready to apply OpenAI's automated-interpretability library to the found features, which we would then compare to baselines such as the neuron basis and the PCA and ICA of the activation data. This requires quite a lot of tokens: something like 20 cents' worth of tokens for each query, depending on the number of example sentences given. We would need to analyse hundreds of neurons to get a representative sample size for each of the dictionaries or approaches, so we're looking for funding for this research on the order of a few thousand dollars, potentially more if it were available. If you'd be interested in supporting this research, please get in touch. We'll also be actively searching for funding, and trying to get OpenAI to donate some compute.

Assuming the results of this experiment are promising (meaning that we are able to exceed the baselines in terms of quality or quantity of highly interpretable features), we plan to focus on scaling up to larger models and experimenting with variations of the technique which incorporate additional information, such as combining activation data from multiple layers, or using the weight vectors to inform the selection of features. We're also very interested in working with people developing mathematical or toy models of superposition.

Results

Background: We used Andrej Karpathy's nanoGPT to train a small transformer with 6 layers, a 32-dimensional residual stream, MLP width of 128 and 4 attention heads of dimension 8. We trained on a node of 4x RTX 3090s for about 9 hours, reaching a loss of 5.13 on OpenWebText. This transformer is the model from which we took activations, and t...
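For readers who want the mechanics of the dictionary-learning setup described above, here is a minimal sketch of a linear autoencoder with an L1 sparsity penalty, written in PyTorch. The class name, sizes and coefficient are illustrative assumptions rather than the replication's actual code; note that, per the takeaway above, the L1 coefficient is not divided by the dictionary size.

```python
# Sketch of a sparse dictionary over model activations: a linear autoencoder
# with an L1 penalty on the feature activations. Names and hyperparameters
# are illustrative, not the replication's code.
import torch
import torch.nn as nn

class SparseDict(nn.Module):
    def __init__(self, d_activation: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_activation, n_features, bias=True)
        self.decoder = nn.Linear(n_features, d_activation, bias=True)

    def forward(self, x):
        codes = torch.relu(self.encoder(x))   # sparse feature activations
        recon = self.decoder(codes)           # reconstruction of the activation
        return recon, codes

def loss_fn(x, recon, codes, l1_coeff=1e-3):
    recon_loss = (recon - x).pow(2).mean()
    # fixed L1 coefficient, not divided by dictionary size
    sparsity = codes.abs().sum(dim=-1).mean()
    return recon_loss + l1_coeff * sparsity

# Usage: acts is a batch of residual-stream activations, e.g. shape (batch, 32).
model = SparseDict(d_activation=32, n_features=256)
acts = torch.randn(64, 32)
recon, codes = model(acts)
print(loss_fn(acts, recon, codes))
```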
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Replication] Conjecture's Sparse Coding in Toy Models, published by Hoagy on June 2, 2023 on The AI Alignment Forum.

Summary

In the post Taking features out of superposition with sparse autoencoders, Lee Sharkey, Beren Millidge and Dan Braun (formerly at Conjecture) showed a potential technique for removing superposition from neurons using sparse coding. The original post shows the technique working on simulated data, but struggling on real models, while a recent update shows promise, at least for extremely small models. We've now replicated the toy-model section of this post and are sharing the code on GitHub so that others can test and extend it, as the original code is proprietary. Additional replications have also been done by Trenton Bricken at Anthropic and most recently Adam Shai. Thanks to Lee Sharkey for answering questions and Pierre Peigne for some of the data-generating code.

Future Work

We hope to expand on this in the coming days/weeks by:
trying to replicate the results on small transformers as reported in the recent update post;
using automated interpretability to test whether the directions found by sparse coding are in fact more cleanly interpretable than, for example, the neuron basis;
trying to get this technique working on larger models.
If you're interested in working on similar things, let us know! We'll be working on these directions in the lead-up to SERI MATS.

Comparisons:
Mean Maximum Cosine Similarity (MMCS) with true features
Number of dead neurons (capped at 100)
MMCS with larger dicts with same L1 penalty

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
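As a companion to the comparisons listed above, here is a minimal sketch of the mean maximum cosine similarity (MMCS) metric and a simple dead-neuron count, assuming PyTorch; tensor shapes and the activity threshold are illustrative, not taken from the replication code.

```python
# Sketch of the comparison metrics above: mean max cosine similarity between
# two feature dictionaries, and a dead-neuron count over a dataset of codes.
import torch
import torch.nn.functional as F

def mmcs(dict_a: torch.Tensor, dict_b: torch.Tensor) -> float:
    """dict_a: (n_a, d), dict_b: (n_b, d). For each feature in dict_a, take its
    maximum cosine similarity with any feature in dict_b, then average."""
    a = F.normalize(dict_a, dim=-1)
    b = F.normalize(dict_b, dim=-1)
    sims = a @ b.T                    # (n_a, n_b) cosine similarities
    return sims.max(dim=1).values.mean().item()

def dead_neurons(codes: torch.Tensor, threshold: float = 1e-6) -> int:
    """codes: (n_samples, n_features) feature activations over a dataset."""
    return int((codes.abs().max(dim=0).values < threshold).sum())

learned = torch.randn(512, 64)        # a learned dictionary
ground_truth = torch.randn(128, 64)   # the 'true' features in the toy setting
print(mmcs(learned, ground_truth))
```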
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Universality and Hidden Information in Concept Bottleneck Models, published by Hoagy on April 5, 2023 on The AI Alignment Forum.

Summary

I use the CUB dataset to finetune models to classify images of birds, via a layer trained to predict relevant concepts (e.g. "has_wing_pattern::spotted"). These models, when trained end-to-end, naturally put additional relevant information into the concept layer, which is encoded from run to run in very similar ways. The form of encoding is robust to initialisation of the concept heads, and fairly robust to changes in architecture (presence of dropout, weighting of the two loss components), though not to initialisation of the entire model. The additional information seems to be primarily placed into low-entropy concepts. This suggests that if steganography were to arise in, for example, fine-tuned autoregressive models, there would be convergent ways for this information to hide, which I previously thought unlikely, and which constrains the space of possible mitigations. This work is ongoing and there are many more experiments to run, but this is an intermediate post to show the arc of my current results and to help gather feedback. Code for all experiments is on GitHub.

Background

In my Distilled Representations Research Agenda I laid out the basic plan for this research. The quick version is that we showed we can create toy cases where we train autoencoders to encode vectors of a certain dimension, within which some dimensions have a preferred way of being represented while others don't. By training multiple models, we can distinguish between these different dimensions and create a new model which encodes only those dimensions which have a preferred encoding. The next step is to see whether this toy model has any relevance to a real-world case, which I explore in this post.

Training Procedure

This work uses the CUB dataset, which contains >10K images, each labelled as one of 200 species of bird. Each image also has 109 features which describe the birds' appearances. These features are divided into 28 categories, such as beak shape. Usually only one of these features is true for any given image, though the raw annotations can have multiple attributes in a class be true.

I perform experiments on the CUB dataset using Concept Bottleneck Models, specifically by finetuning an ImageNet-trained model to predict the species of bird from an image, with an auxiliary loss which incentivises each neuron in a layer near the final outputs to fire if and only if a human labeller has indicated that a particular 'concept', i.e. feature of the bird, is true of this bird. The model has a separate fully-connected 2-layer network for each concept, and then a single fully-connected 2-layer network to predict the class from the concept vector. These fully-connected layers are reinitialised from scratch with each run, but the ImageNet model is pre-trained. The use of the pre-trained network is important to this shared information, but it also causes a host of other behaviour changes, such that I don't think non-pre-trained networks are an ideal model of behaviour either (see final section).

Concept bottleneck models have three basic forms:

Independent: the image-to-concept model and concept-to-class model are trained totally separately, and only combined into a single model at test time.

Sequential: the image-to-concept model is trained first, and then the concept-to-class model is trained to predict the class from the generated concept vectors, but without propagating the gradient back to the image-to-concept model.

Joint: the model is trained end-to-end using a loss combining the concept prediction error and the class prediction error. The relative weight of these can be adjusted for different tradeoffs between performance and con...
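To make the joint form concrete, here is a minimal sketch of a jointly trained concept bottleneck model of the kind described above: a pretrained backbone, a separate 2-layer head per concept, and a 2-layer classifier on the concept vector, trained with a weighted sum of concept and class losses. The 109-concept and 200-class sizes come from the post; the layer widths, the sigmoid on the bottleneck and the loss weight are illustrative assumptions, not the author's code.

```python
# Sketch of a jointly trained concept bottleneck model: backbone features go
# through per-concept heads, and the classifier reads the concept vector.
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    def __init__(self, backbone: nn.Module, d_feat: int,
                 n_concepts: int = 109, n_classes: int = 200):
        super().__init__()
        self.backbone = backbone
        # a separate 2-layer head per concept, each producing one logit
        self.concept_heads = nn.ModuleList(
            nn.Sequential(nn.Linear(d_feat, 64), nn.ReLU(), nn.Linear(64, 1))
            for _ in range(n_concepts))
        # a 2-layer network from the concept vector to the class logits
        self.classifier = nn.Sequential(
            nn.Linear(n_concepts, 256), nn.ReLU(), nn.Linear(256, n_classes))

    def forward(self, images):
        feats = self.backbone(images)
        concept_logits = torch.cat([h(feats) for h in self.concept_heads], dim=-1)
        class_logits = self.classifier(torch.sigmoid(concept_logits))
        return concept_logits, class_logits

def joint_loss(concept_logits, class_logits, concepts, labels, concept_weight=1.0):
    # combined concept + class loss; concept_weight sets the tradeoff
    concept_loss = nn.functional.binary_cross_entropy_with_logits(
        concept_logits, concepts.float())
    class_loss = nn.functional.cross_entropy(class_logits, labels)
    return class_loss + concept_weight * concept_loss

# Usage sketch with a dummy backbone producing 512-d image features:
# model = ConceptBottleneck(nn.Sequential(nn.Flatten(), nn.Linear(3*224*224, 512)),
#                           d_feat=512)
```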
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Automating Consistency, published by Hoagy on February 17, 2023 on The AI Alignment Forum.

tldr: Ask models to justify statements. Remove context, ask if statements are true/good. If not, penalise. Apply this again to the justifying statements.

Status: Just a quick thought. I doubt this is a new idea but I don't think I've encountered it. Happy to delete if it's a duplicate. Mods: if you think this is closer to capability work than alignment work, please remove.

Background

A failure of current LLMs is that after they've said something that's incorrect, they can then double down and spout nonsense to try and justify their past statements. (Exhibit A: Sydney Bing vs Avatar 2.) We can suppress this by giving it poor ratings in RLHF, but perhaps we can do better by automating the process.

Setup: We start with a standard RLHF context. We have an LLM which assigns probabilities to statements (we can extract this from the logits of the tokens 'Yes' and 'No'). These can be propositions about the world, X, or about the relationship between propositions, X supports Y. To make it easier, we fine-tune or prompt the model to give these statements within a defined syntax. We also have a value model that evaluates sequences, on which the LLM is trained to perform well.

Method: We prompt the model to make true statements {T} and then to provide logical or empirical support for these claims, {S}. We then remove the context and ask the model whether supporting statement Si is true. Separately, we also ask whether, if true, Si would support Ti. If either of these conditions is not met, we add a strong negative penalty to the value model's evaluation of the original outputs. We train for higher value model scores while incorporating this penalty, and apply the same procedure to each of the supporting statements Si.

Value Consistency: This could be combined with values-based fine-tuning by alternating logical consistency with asking the model whether the output is consistent with the preferred values. This is similar to Anthropic's Constitutional AI, but by combining it with the ability to recurse down the tree of justifications, it may be able to embed the values more deeply in the model's behaviour. The recent genre of 'would you rather say slur X or kill Y people' represents the kind of failure I imagine this could help prevent.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
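A rough sketch of how the penalty in the Method section could be wired up, with the yes/no probability query passed in as a callable; the parsing, threshold and penalty size are illustrative assumptions, not details from the post.

```python
# Sketch of the consistency penalty: for each claim and its supporting
# statements, query (out of context) whether the support is true and whether
# it would actually support the claim; penalise the output if either fails.
from typing import Callable, List, Tuple

def consistency_penalty(ask_yes_no: Callable[[str], float],
                        claims: List[Tuple[str, List[str]]],
                        penalty: float = -10.0,
                        threshold: float = 0.5) -> float:
    """claims: list of (statement T_i, supporting statements S_i), as produced
    by prompting the model to justify its own outputs."""
    total = 0.0
    for claim, supports in claims:
        for s in supports:
            # Out of context: is the supporting statement itself true?
            p_true = ask_yes_no(f"Is the following statement true? {s}")
            # And, if true, would it actually support the original claim?
            p_supports = ask_yes_no(f"If '{s}' were true, would it support '{claim}'?")
            if p_true < threshold or p_supports < threshold:
                total += penalty  # added to the value model's score during RLHF
    return total

# Toy usage with a dummy scorer standing in for the logit-based Yes/No query.
dummy = lambda q: 0.9
print(consistency_penalty(dummy, [("The sky is blue", ["Rayleigh scattering"])]))
```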
Robert "Hoagy" Hoagland disappeared from Sandy Hook, Connecticut on July 28th, 2013. His disappearance would remain a mystery until December 5th of 2022 when Richard King would solve this case in Rockhill, NY just 83 miles from where Robert disappeared.Support the showAs always we want to thank each and every one of you for the continued support! If you haven't done so already please follow us on Facebook and Instagram. We would also LOVE if you would write us a review on apple podcasts! XOXO,Ashley & Sierra CLICK HERE⤵️⤵️⤵️https://linktr.ee/weeklydoseofwicked?utm_source=linktree_profile_share<sid=6148575e-7853-4821-ae73-dc352c3340ab
Today, we'll be discussing part two of Robert Hoagland's case, but if you follow us on our socials, or have been keeping up with the news, you'll know that there have been some… rather large developments in this case over the past week. For those of you who may not be in the know, we'll keep this synopsis spoiler-free. And for everyone else, we still think you'll be interested to hear our conversation. Follow us on Instagram at wickeddeedspod. Like our Facebook page, Wicked Deeds. Follow us on Twitter @WickedDeeds. Visit our website wickeddeedspodcast.com for a list of photos related to this case and our source material.
Today, we'll be discussing the case of a devoted father of three who vanished into thin air after being seen mowing his lawn one summer morning. At first, anything seemed possible: he could have run away from his life, or foul play could have been at hand. But as more and more details emerge, everything you think you know will be thrown out the window. Follow us on Instagram at wickeddeedspod. Like our Facebook page, Wicked Deeds. Follow us on Twitter @WickedDeeds. Visit our website wickeddeedspodcast.com for a list of photos related to this case and our source material.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Distilled Representations Research Agenda, published by Hoagy on October 18, 2022 on The AI Alignment Forum.

Introduction

I've recently been given funding from the Long Term Future Fund to develop work on an agenda I'll tentatively call Distilled Representations, and I'll be working on this full-time over the next 6 months with Misha Wagner (part time). We're working on a way of training autoencoders so that they can only represent information in certain ways: ways that we can define in a flexible manner. It works by training multiple autoencoders to encode a set of objects, while for some objects defining a preferred representation that the autoencoders are encouraged to use. We then distill these multiple autoencoders into a single autoencoder which encodes only that information which is encoded in the same way across the different autoencoders. If we are correct, this new autoencoder should only encode information using the preferred strategy. Vitally, this can cover not just the original information in the preferred representations, but also information represented by generalizations of that encoding strategy. It is similar to work such as Concept Bottleneck Models, but we hope the distillation from multiple models should allow interpretable spaces in a much broader range of cases.

The rest of this post gives more detail on the intuition that we hope to build into a useful tool, some toy experiments we've performed to validate the basic concepts, the experiments that we hope to run in the future, and the reasons we hope it can be a useful tool for alignment. We'd like to make sure we understand what similar work has been done and where this work could be useful. If you're familiar with disentangled representations, or interpretability tools more generally, we're interested in having a chat. You can reach me here on LessWrong or at hoagycunningham@gmail.com. Previous versions of similar ideas can be found in my ELK submission and especially Note-taking Without Hidden Messages.

Intuition

The intuition that this work builds on is the following:

With neural networks, the meanings of the weights and activations are usually opaque, but we're often confident about the kind of thing that the network must be representing, at least for some cases or parts of the input distribution.

In those cases where we understand what the network is representing, we can condense this understanding into a vector, thus defining a 'preferred representation' which encapsulates that knowledge.

We can compress the NN's state with an autoencoder while, in those cases with preferred representations, encouraging the encoding to be as close as possible to the preferred representation.

We expect that running this compression results in the known information being compressed in the manner specified by the preferred representations, while other important information is also snuck in wherever possible.

If we then train multiple encoder/decoder systems, they will use the preferred representation, but they will also use generalizations of the preferred representations. Additional information that is not a generalization of the preferred representation scheme will also be encoded, but the encoding scheme for that additional information will vary between different encoder/decoder pairs.

Using methods such as retraining a new encoder to encode for randomly shuffled decoders at each batch, we can create an encoder that uses a generalization of our preferred encoding scheme, without containing additional, misleading information.

There are quite a few leaps in this reasoning, and we view the key assumptions / hypotheses to be tested as the following:

In relevant situations we can define preferred representations.

We can force encoders to use not just these representations but meaningful generalizations ...
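As a concrete illustration of the compression step in the intuition above, here is a minimal sketch of an autoencoder trained with an added "preferred representation" term on the subset of inputs where such a representation is defined. Shapes, the loss weight and the mask convention are assumptions for illustration; training multiple such autoencoders and distilling them is the further step the agenda describes, not shown here.

```python
# Sketch: compress a network state with an autoencoder, pulling the code
# toward the preferred representation wherever one is defined.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, d_in: int, d_code: int):
        super().__init__()
        self.enc = nn.Linear(d_in, d_code)
        self.dec = nn.Linear(d_code, d_in)

    def forward(self, x):
        code = self.enc(x)
        return code, self.dec(code)

def distilled_rep_loss(x, code, recon, preferred, has_preferred, alpha=1.0):
    """preferred: (batch, d_code) target codes; has_preferred: (batch,) bool mask
    marking the inputs for which we know what the encoding *should* be."""
    recon_loss = (recon - x).pow(2).mean()
    if has_preferred.any():
        pref_loss = (code[has_preferred] - preferred[has_preferred]).pow(2).mean()
    else:
        pref_loss = torch.tensor(0.0)
    return recon_loss + alpha * pref_loss

ae = AutoEncoder(d_in=128, d_code=16)
x = torch.randn(32, 128)
code, recon = ae(x)
mask = torch.zeros(32, dtype=torch.bool)
mask[:8] = True   # pretend the first 8 inputs have known preferred codes
print(distilled_rep_loss(x, code, recon, torch.randn(32, 16), mask))
```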
Nowadays “The Nearness of You” often conjures a mental image of 21st century song stylist Norah Jones. And, indeed, Jones did release a gorgeous rendition of the song in 2001, leading some to think that she wrote it. Actually, though, the tune is twice Norah's age.

Hoagy Carmichael wrote the song in 1938, originally intending it for an odd little movie project. In his Carmichael biography Stardust Melody, Richard H. Sudhalter reports that Hoagy dashed off the yet-unnamed melody for “a screen adaptation of Shakespeare's ‘A Midsummer Night's Dream,' featuring a 15-year-old Mickey Rooney as Puck,” but that production fell through.

Then, with lyrics from Ned Washington, the composition became “The Nearness of You” and was scheduled for inclusion in the feature film Romance in the Rough. However, that film, too, was never produced.

Celluloid Confusion

There is some celluloid confusion about the song's place in films. Sudhalter notes that despite accounts to the contrary, “The Nearness of You” was never scheduled to be included in the 1938 Paramount film Romance in the Dark, starring John Boles, Gladys Swarthout and John Barrymore.

Probably because of the similar titles — Romance in the Rough vs. Romance in the Dark — writers often have mistakenly credited the introduction of “The Nearness of You” to Swarthout in Romance in the Dark. That error is recorded in at least one reference book, numerous sheet music books and nowadays on hundreds of websites.

Charting

In reality, after the Hollywood false starts, the song had to wait for republication in 1940 to win its place as a beloved jazz standard. That was the year Glenn Miller and his Orchestra introduced a recording of “The Nearness of You” with vocals by Ray Eberle. The Bluebird label recording appeared on the pop charts at the end of June and remained there for 11 weeks, peaking at No. 5. In 1953 the song charted again, this time with Bob Manning singing with Monty Kelly and His Orchestra. His recording climbed the charts to No. 16. Since then, the tune has been recorded dozens of times, by everyone from Ella Fitzgerald and Louis Armstrong to James Taylor, Barbra Streisand, Etta James and Seal.

Dueling Pundits

It's fun how the critics have differed wildly in their comments about this particular Carmichael creation. For instance, in his book American Popular Songs: The Great Innovators, 1900-1950, Alec Wilder called the song “simple and unclever,” adding that it is “the sort of song that an academic musical mind would sneer at.” As if to answer that challenge, Yale University music professor Allen Forte devoted no fewer than five pages to the song in his book Listening to Classic American Popular Songs. Forte called it “unusual,” “remarkable” and “striking,” even offering an effusive “Congratulations, Hoagy!” for Carmichael's slightly concealed replication of the refrain's opening phrase in the verse.

Our Take on the Tune

This tune really hasn't made The Flood set list yet — we've only just started working with it — but it sure seems like it wants to settle down with us. Listen to everybody listening to everybody else. For instance, check out how midway through, Veezy's solo establishes a lovely mood that Danny beautifully echoes when he takes his turn. Yeah, it's not a regular Floodified number yet, but stay tuned.

More Ballads

By the way, if softer sounds are what your day is calling for, we've got the playlist for you! Tune into the Ballads Channel of the free Radio Floodango music streaming service for tunes to match your mood.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit 1937flood.substack.com
Back in the days when we would ride on — and occasionally even got to perform on — the good ol' Delta Queen steamboat, it often meant a reunion with a dear friend, the boat's band leader, the legendary New Orleans cornetist Connie Jones. We learned “Memphis in June” from Connie. On his album, it was an instrumental, but whenever we'd ask for it on board the boat, Connie would sing it. Here from a recent rehearsal is our take on the tune, conjuring up memories of sunny days up in The Delta Queen's Texas Lounge, seeing Connie, eyes closed and grinning as he purred those sweet Paul Francis Webster lyrics. Here then, in memory of Connie Jones, is Hoagy's sweet love song to summer.
“Memphis in June” was written by Hoagy Carmichael and Paul Francis Webster for the 1945 George Raft movie “Johnny Angel.” Carmichael himself, playing a character named “Celestial O'Brien,” performed the tune in the film, then revisited it on several subsequent recordings.
A Musicians' Favorite
While it isn't one of the better known Carmichael compositions, “Memphis in June” is a particular favorite among musicians, covered 40 times over the years in various genres and formats. For instance, 16 years after the movie, Nina Simone delivered perhaps the definitive version of the song. (Of course, almost any song Simone approached was definitively addressed.) Simone's jazzy 1961 reading is tinged with a bluer quality that puts the emphasis on Memphis. A half century later, Annie Lennox brought a wonderful interpretation of the composition to her 2014 “Nostalgia” album, a reading full of warmth and feeling that some think are missing from Hoagy's 1945 original. Meanwhile, Bob Dylan gave a hearty shout-out to the song on his 1985 “Empire Burlesque” album. Remember how Bob's song “Tight Connection To My Heart” drew this word picture?
Well, they're not showing any lights tonight
And there's no moon.
There's just a hot-blooded singer
Singing “Memphis in June.”
Our Take on the Tune
Back in the days when we would ride on — and occasionally even perform on — the good ol' Delta Queen steamboat, it often meant a reunion with a dear friend, the boat's band leader, legendary New Orleans cornetist Connie Jones. We learned “Memphis in June” from Connie. On his album, it was an instrumental, but whenever we'd ask for it on board the boat, Connie would sing it. From a recent rehearsal, this is The Flood's latest take on the tune, conjuring up memories of sunny days up in The Delta Queen's Texas Lounge, seeing Connie, eyes closed and grinning as he purred those sweet lyrics. Here then, in memory of Connie Jones, is Hoagy's love song to summer. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit 1937flood.substack.com
Many “river” songs try to tap into the romance of riverboats, but few can claim to be born on the river and be written by an actual steamboater. New Orleans clarinetist Sidney Arodin got his start in music in the 1920s working in bands that entertained on excursion boats that traveled up and down the Mississippi River. “(Up a) Lazy River,” Arodin's most famous composition, was a tossed-off number based on a common jazz chord progression that Arodin learned on the boats to use as a warm-up exercise before a performance. For the tune, Sid simply slowed it down to a lazier pace.
Sidney's Start
Now, the story goes that at 15 Arodin got his first clarinet and took lessons for only two months. This was the sum total of his "legit" musical training, until years later when he took a week off to teach himself a bit of music theory after being fired from a band for not knowing how to read the sheet music. His very first gig was a Saturday night dance in his hometown of Westwego, Louisiana. When a combo hired from New Orleans hit town minus an ill clarinetist, Arodin ran barefoot through mud and oyster shells to grab his own clarinet. From his 16th birthday onward, Arodin was rarely at home. First, he was hired on the riverboats, then he eventually made it to New York City where he worked with Johnny Stein's Original New Orleans Jazz Band beginning in 1922. (In the mid-'20s in that group he played with the young still-unknown Jimmy Durante.)
The Hoagy Carmichael Connection
It was during Sidney's New York years that he met his famous collaborator. As Hoagy Carmichael wrote in his 1965 autobiography, Sometimes I Wonder, it was a mutual friend, Harry Hostetter, who introduced them.
“Harry met a lot of musicians in the Broadway area,” Hoagy wrote, “and he came running up to me one night.”
“‘This guy, Sidney Arodin, plays a pretty clarinet and he's got a tune you gotta hear, Hoag.'”
“‘Where?'”
“‘Over on 56th Street in a clip joint.'”
That night, Hoagy and Harry dropped in. “It was a shabby brick-front walk-up on the second floor,” he recalled, “and the only customer was a balding man of about 55 with a hired girl on each arm, drinking champagne. They must have clipped this gent for five hundred at least before they let him out. Harry and I were guests of Sidney's, so none of the girls glanced our way.
“Sidney played his tune and I was highly pleased,” Carmichael wrote. “I knew Harry couldn't be wrong. In the ensuing weeks, I wrote a verse and a lyric and titled it ‘Lazy River.'”
Carmichael went on, “The ambition of every songwriter was now accomplished, although I didn't know it then — that of having in his folio something on the order of a folk song that could be played and sung in most any manner, something that could be sung all the way through by drunken quartets or by blondes over a piano bar.”
Carmichael made the very first recording of “Lazy River” in 1930 for Victor with Tommy and Jimmy Dorsey, Joe Venuti and Red Norvo. That was followed in 1932 with cover versions both by Louis Armstrong and by Phil Harris. One of the most beloved renderings of “Lazy River” came in 1942 with The Mills Brothers' Decca recording, but, hey, there is no shortage of versions to choose from.
It is one of the most recorded numbers in the Great American Songbook; hundreds of covers of “(Up A) Lazy River” have been released, and more are still coming out today.
Sid's Last Years
In the 1930s, Arodin returned to Louisiana to gig with combos assembled by assorted New Orleans trumpeters, including Wingy Manone, Sharkey Bonano and Louis Prima. But for his last seven years starting in 1941, Arodin's health failed and his musical appearances became less frequent. Curiously, while he cut quite a few sides with quite a few groups during his playing career, Sidney himself never recorded his most famous song.
Doug and The Jazz Box
Our take on the tune: Our buddy Doug Chaffin wasn't feeling too swell earlier this week when Randy Hamilton, Danny Cox and Charlie Bowen landed on his doorstep. However, we brought with us a secret medicine just guaranteed to make him feel better. It was Charlie's new guitar, a sweet 2016 D'Angelico Excel — a hollow-body arch top jazz box — which we immediately put into Doug's experienced hands. Well, after Doug strummed a chord or two, we could hear him already smiling behind his face mask. Listen to him just swinging in the living room on this great old jazz standard.
More Doug? Coming Right Up!
If you'd like to spend a little more time with Doug Chaffin in your ears, check out the Doug Channel on our free music streaming service, Radio Floodango. A couple dozen Doug-enriched tracks await you. Click here to get started. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit 1937flood.substack.com
In this episode of Listening with Leckrone, we focus on a single song, Hoagy Carmichael's legendary "Stardust." With over 1,500 recordings, "Stardust" lends itself to a wide range of musical styles and interpretations. Mike guides us through a handful of these renditions, exploring how each artist put their own spin on the iconic song.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Remaking EfficientZero (as best I can), published by Hoagy on July 4, 2022 on The AI Alignment Forum.
Introduction
When I first heard about EfficientZero, I was amazed that it could learn at a sample efficiency comparable to humans. What's more, it was doing it without the gigantic amount of pre-training the humans have, which I'd always felt made comparing sample efficiencies with humans rather unfair. I also wanted to practice my ML programming, so I thought I'd make my own version. This article uses what I've learned to give you an idea, not just of how the EfficientZero algorithm works, but also of what it looks like to implement in practice. The algorithm itself has already been well covered in a LessWrong post here. That article inspired me to write this, and if it's completely new to you it might be a good place to start - the focus here will be more on what the algorithm looks like as a piece of code. The code below is all written by me and comes from a cleaned and extra-commented version of EfficientZero which draws from the papers (MuZero, EfficientZero), the open implementation of MuZero by Werner Duvaud, the pseudocode provided by the MuZero paper, and the original implementation of EfficientZero. You can have a look at the full code and run it on GitHub. It's currently functional and works on trivial games like cartpole but struggles to learn much on Atari games within a reasonable timeframe; I'm not certain whether this reflects an error or just insufficient time. Testing on my laptop or Colab for Atari games is slow - if anyone could give access to some compute to do proper testing that would be amazing! Grateful to Misha Wagner for feedback on both code and post.
Algorithm Overview
AlphaZero
EfficientZero is based on MuZero, which itself is based on AlphaZero, a refinement of the architecture that was the first to beat the Go world champion. With AlphaZero, you play a deterministic game, like chess, by developing a neural network that evaluates game states, associating each possible state of the board with a value, the discounted expected return (in zero-sum games like chess, discount rate is 0 and this is just win%). Since the algorithm can have access to a game 'simulator', it can test out different moves, and responses to those moves, before actually playing them. More specifically, from an initial game state it can traverse the tree of potential games, making different moves, playing against itself, and evaluating these derived game states. After traversing this tree, and seeing the quality of the states reached, we can average the values of the derived states to get a better estimate of how good that initial game state actually was, and make our final move based on these estimates. When playing out these hypothetical games, we are playing roughly according to our policy, but if we start finding that a move that looked promising leads to bad situations we can start avoiding it, thereby improving on our original policy. In the limit, this constrains our position evaluation function to be consistent with itself, meaning that if position A is rated highly, and our response in that situation would be to move to position B, then B should also be rated highly, etc.
This allows us to maximize the value of our training data, because if we learn that state C is bad, we will also learn to avoid states which would lead to C, and vice versa. Note that this constraint is similar to that enforced by the Minimax algorithm, but AZ and its descendants propagate the average value of the found states, rather than the minimum, up the tree to avoid compounding NN error.
MuZero
While AlphaZero was very impressive, from a research direction it seemed (to me) fundamentally limited by the fact that it requires a fully deterministic space in which to play - to search the tree o...
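As a rough illustration of the backup rule described above — propagating the average of the leaf evaluations up the tree rather than a minimax value — here is a toy sketch. It is not code from the EfficientZero or MuZero implementations; the node structure is simplified and the sign-flipping needed for two-player games is omitted.

```python
# Toy sketch of an MCTS-style "average" backup: each node tracks the mean value
# of the simulations that passed through it, so a single noisy network
# evaluation cannot dominate the estimate the way it can under pure minimax.
from dataclasses import dataclass, field

@dataclass
class Node:
    value_sum: float = 0.0
    visits: int = 0
    children: dict = field(default_factory=dict)  # action -> Node

    def value(self) -> float:
        return self.value_sum / self.visits if self.visits else 0.0

def backup(path: list[Node], leaf_value: float, discount: float = 1.0) -> None:
    """Propagate one leaf evaluation up the visited path, averaging at each node."""
    v = leaf_value
    for node in reversed(path):
        node.value_sum += v
        node.visits += 1
        v *= discount  # with discount = 1 this is a plain average of returns

# After one simulation whose leaf the value network scores at 0.7:
root, child = Node(), Node()
root.children["e4"] = child
backup([root, child], leaf_value=0.7)
print(root.value(), child.value())  # both 0.7 after a single simulation
```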
We could do Hoagy Carmichael tunes all night long — and, well, sometimes we pretty much do. Here, from last night's Floodifying, is our latest take on a lesser known Carmichael work, Hoagy's cool 1932 composition called simply “New Orleans.”
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ELK Sub - Note-taking in internal rollouts, published by Hoagy on March 9, 2022 on The AI Alignment Forum. My ELK submission was labelled under 'Strategy: train a reporter that is useful for another AI'. This is definitely a fair description, though the AI it needs to be useful to is itself - the reporter is essentially internalized. I also agree that the proposed counterexample, hiding information in what seems like human-comprehensible speech, is the biggest flaw. Nonetheless I think my proposal has enough additional detail and scope for extension that it's worth fleshing out in its own post - so here we are. Some of the responses to counterexamples below also go beyond my original proposal. For anyone interested, here is my original proposal (Google Doc) which contains the same idea in somewhat less generality. In this post I'll first flesh out my proposal in slightly more general terms, and then use it to try and continue the example/counter-example dialogue on ELK. I know there were a number of proposals in this area and I'd be very interested to see how others' proposals could be integrated with my own. In particular I think mine is weak on how to force the actor to use human language accurately. I expect there are lots of ways of leveraging existing data for this purpose but I've only explored this very lightly. Many thanks to ARC for running the competition and to Misha Wagner for reviewing and discussing the proposal.
Proposal Summary
Creating Human Semantic Space
We should think of the actor as described in ELK as having two kinds of space in which it works:
- The space of potential observations of its sensors and actuators, O
- The space of internal representations, i.e. its Bayes Net, I
The machine takes in observations, converts them into its internal representation I, simulates the action in this internal representation, and then cashes this out in its actuators. What we want to do is force it to convert back and forth between I and a third space, the space of human understandable meaning, which we will call H. How can we achieve this? My basic idea is that we leverage the world of text and image based models to create a model called a Synonymizer. This device, in its ideal form, would be trained to take in any piece of text, image, video, audio, or combination thereof, and output a piece of media in any of these forms which preserves the semantic content as far as possible while having complete freedom to change the format. (Initial experiments in this vein would probably use GPT-3 primed with 'This document will contain a paragraph, followed by a paragraph with the same meaning but in a different structure..') The set of equivalency classes according to this Synonymizer (though they would in practice be fuzzy) should then be isomorphic to human semantic space, H.
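A bare-bones sketch of the Synonymizer interface described above: one function that wraps whatever large text model is available and returns a meaning-preserving rewrite of its input. The `complete` hook, the prompt handling, and the echo example are hypothetical stand-ins, not the author's implementation.

```python
# Minimal Synonymizer sketch: wrap a text-completion model behind a single
# function that maps text to a rewrite with (ideally) the same meaning but a
# different surface form. `complete` is a placeholder for any available model.
from typing import Callable

PROMPT = ("This document will contain a paragraph, followed by a paragraph "
          "with the same meaning but in a different structure.\n\n{text}\n\n")

def make_synonymizer(complete: Callable[[str], str]) -> Callable[[str], str]:
    """Return a function that paraphrases `text` while trying to preserve its meaning."""
    def synonymize(text: str) -> str:
        return complete(PROMPT.format(text=text)).strip()
    return synonymize

# Trivial stand-in "model" that just echoes the paragraph back, useful for
# wiring tests; two inputs land in the same (fuzzy) equivalence class in H when
# the Synonymizer maps them to interchangeable outputs.
echo_synonymizer = make_synonymizer(lambda prompt: prompt.split("\n\n")[1])
print(echo_synonymizer("The vault door was left open overnight."))
```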
Learning to use H as a human would
Next, when training, during its internal rollouts, we would periodically force the machine to translate from I into a space which is then passed through the Synonymizer, before being converted back into I, forcing it to learn a mapping between the two. Of course, at this point it isn't using human semantics, just a very weirdly regularized latent space, but having this should also allow us to bring in a vast amount of possible training data, utilizing labelled images, video, text descriptions of events, etc., which could all be used to train the model, and thereby force it to understand language and video as a human does in order to maximize performance. For example, while training the model to predict future events in a piece of video, text descriptions of future frames can be appended, forcing it to learn to read text into its internal understanding of the situation in order to perform well. The combination of this training example and the Synonymizer should hopefully go a long...
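To make the rollout step concrete, here is a hedged sketch of the round trip just described — internal state to a report, report through the Synonymizer, and back to internal state, with a consistency loss tying the reconstruction to the original. The module names, shapes, and the noise used as a stand-in for "decode to text, paraphrase, re-embed" are all illustrative assumptions, not details from the proposal.

```python
# Sketch of forcing the model to round-trip its internal state through the
# human-semantic bottleneck during a rollout. The Synonymizer step is faked
# with noise, standing in for whatever information a paraphrase may discard.
import torch
import torch.nn as nn

STATE_DIM, REPORT_DIM = 128, 64

i_to_h = nn.Linear(STATE_DIM, REPORT_DIM)  # internal repr -> (embedded) report in H
h_to_i = nn.Linear(REPORT_DIM, STATE_DIM)  # report -> internal repr

def synonymize_embedding(report: torch.Tensor) -> torch.Tensor:
    # Placeholder for "decode to text, run the Synonymizer, re-embed".
    return report + 0.1 * torch.randn_like(report)

def bottleneck_step(state: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Return the reconstructed internal state and a consistency loss term."""
    report = i_to_h(state)
    paraphrased = synonymize_embedding(report)
    state_back = h_to_i(paraphrased)
    consistency = nn.functional.mse_loss(state_back, state.detach())
    return state_back, consistency

state = torch.randn(4, STATE_DIM)          # a batch of internal states mid-rollout
new_state, loss = bottleneck_step(state)   # `loss` would be added to the training objective
```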
duration: 00:58:02 - "The Nearness of You" (Hoagy Carmichael / Ned Washington) (1938) - by Laurent Valero - "The Nearness of You," written in 1938 by Hoagy Carmichael & Ned Washington for the Paramount film "Romance in the Dark" ... - produced by Antoine Courtin
The biggest names in Hollywood and Broadway recorded for AFRS during the war years. The American Forces Network can trace its origins back to May 26, 1942, when the War Department established the Armed Forces Radio Service (AFRS). The U.S. Army began broadcasting from London during World War II, using equipment and studio facilities borrowed from the British Broadcasting Corporation (BBC). The first transmission to U.S. troops began at 5:45 p.m. on July 4, 1943, and included less than five hours of recorded shows and a BBC news and sports broadcast. That day, Corporal Syl Binkin became the first U.S. military broadcaster heard over the air. The signal was sent from London via telephone lines to five regional transmitters to reach U.S. troops in the United Kingdom as they prepared for the inevitable invasion of Nazi-occupied Europe. Fearing competition for civilian audiences, the BBC initially tried to impose restrictions on AFN broadcasts within Britain (transmissions were only allowed from American bases outside London and were limited to 50 watts of transmission power), and a minimum quota of British-produced programming had to be carried. Nevertheless, AFN programmes were widely enjoyed by the British civilian listeners who could receive them, and once AFN operations transferred to continental Europe (shortly after D-Day), AFN was able to broadcast with little restriction, with programmes available to civilian audiences across most of Europe (including Britain) after dark. As D-Day approached, the network joined with the BBC and the Canadian Broadcasting Corporation to develop programs especially for the Allied Expeditionary Forces. Mobile stations, complete with personnel, broadcasting equipment, and a record library were deployed to broadcast music and news to troops in the field. The mobile stations reported on front line activities and fed the news reports back to studio locations in London.
---------------------------------------------------------------------------
Entertainment Radio Stations Live 24/7
Sherlock Holmes/CBS Radio Mystery Theater
https://live365.com/station/Sherlock-Holmes-Classic-Radio--a91441
https://live365.com/station/CBS-Radio-Mystery-Theater-a57491
----------------------------------------------------------------------------
Hoagy Carmichael, aka The Triple Digit Midget, is the President and Founder of an organization called Hoagy's Heroes, a long-distance charity riders' organization that has raised $347,997 for various charities, and his riders have logged 1,593,122 miles in the process. Hoagy is quite the character with a spirited outlook on life, and he is one hard-riding biker, having organized the largest Iron Butt ride in history in 2013, breaking a Guinness World Record at the time. Hoagy has 5 different personalities, or "Hoagys," as we found out. So sit back and enjoy the ride and listen to the stories, because he has plenty. After we finished recording this episode he regaled us with many more, some a little "too salty" for him to relate to the public, although we did try during the course of the interview. Out of respect for the man and his respect for those in his community, we kept them to just us. Check out his website and Facebook page: https:/hoagysheroes.org https://www.facebook.com/Hoagys-Heroes-134619534585 Master Link to all of our platforms: https://linktr.ee/thebikerslifestylepodcast
duration: 00:57:58 - "Stardust" (Hoagy Carmichael / Mitchell Parish) 1927 - by Laurent Valero - "In 2004, the prestigious Library of Congress in Washington, D.C., chose the first recording of Stardust, made by Carmichael himself in 1927, among the 50 compositions selected for preservation in the United States' National Recording Registry!" Laurent Valero - produced by Antoine Courtin
Today's grid was unique - a giant IF in the center of it, and in between the I and F ran the amusing answer to 19D, Exclamation upon seeing this puzzle, THATSABIGIF, which got (at least here at JAMDTNYTC HQ) a big laugh. Jean made short work of today's crossword, while Mike had to struggle with his old nemesis, food -- in particular, the spelling of 20A, Carmichael who composed "Heart and Soul", HOAGY (not HOGIE, aka a tasty sandwich), and the answer to 3D, They can rate up to 350,000 on the Scoville scale, HABANEROPEPPERS (not JALAPENOPEPPERS, which are a mere 2,500-8,000).
Victims of human trafficking are often “branded” with tattoos of pimps' names or bar codes. Gang members' tattoos are proof of allegiance. But what happens when a human trafficking victim escapes enslavement? When a gang member leaves the life? Those tattoos … those markings remain as painful reminders of the horrifying situations they were in. Chris Baker runs Ink 180, a tattoo studio in Oswego that also focuses most of its energies on removing or covering up those painful reminders. He's a legitimate force for good in the universe, and it was fascinating to talk with him over food from RV's Home of the Hoagy in Oswego. Chris's “thINK 180” podcast is one of the 10 that we want to press onto Phonation: A Chicago Podcast Compilation, a vinyl-only collection of area podcasts. Help fund the project (deadline 6/25) by searching “Phonation” on Kickstarter.
If you'd been around in 1932 and had your ears on, you might have thought that songwriter Hoagy Carmichael had already peaked. Oh, sure, he'd been writing for only about eight years, but, shoot, by then he'd already published … let's see… “Stardust” and “Georgia on My Mind,” “Rockin' Chair” and “Riverboat Shuffle” and “Up a Lazy River.” Those songs right there were enough to warrant a legacy chapter in the Great American Songbook. So, you'd've been forgiven in 1932 for not realizing our man Hoagy had another half century of great originals to bring us. Ahead lay … oh, “Lazybones” and “The Nearness of You,” “Heart and Soul” and “Memphis in June,” “Hong Kong Blues,” “I Get Along Without You Very Well,” “Ole Buttermilk Sky,” “In the Still of the Night,” “Skylark.” Heck, we could do Hoagy tunes all night long — and, well, sometimes we do. Here, from last night's Floodifying, is our first run at one of Carmichael's 1932 compositions, a sweet, sexy little tune simply called, “New Orleans.”
We continue our Composer Series: Hoagy Carmichael, with thanks to James Spencer for bringing us some cool tunes from this great composer! www.cocktailnation.net
K. D. Lang - Skylark
Les Baxter - The Nearness of You
Mel Tormé - Heart and Soul
Julie London - Memphis in June
Chet Baker - Daybreak
Janet Blair - I Get Along Without You Very Well
Three Suns - Blue Orchids
Shirley Horn - Georgia on My Mind
Melachrino Strings - Stardust
George Shearing - One Morning in May
Beegie Adair - Ole Buttermilk Sky
James Spencer - Mercerville
Hoagy Carmichael - When Love Goes Wrong
Emile Pandolfi - Two Sleepy People
Jackie Gleason - Am I Blue
Oscar Peterson - In The Still Of The Night
Herbie Mann - Lazybones
For another show in our periodic series that offers listeners an opportunity to call with questions about Indiana's heritage, our host, author and historian Nelson Price, will be joined by Charlie Hyde, CEO of the Benjamin Harrison Presidential Site. In between phone calls from listeners, Nelson and Charlie will interview each other about an array of topics, including how Benjamin Harrison, the only president elected from Indiana, handled an epidemic in 1892, his final year in the White House. Because of the Covid-19 pandemic, the 1892 cholera epidemic has been the focus of recent articles in the national press. During our show, Charlie will explain how President Harrison responded when a ship from Hamburg, Germany, one of the epicenters of the epidemic, was en route to the United States. The cholera epidemic began in April 1892 in India and quickly spread to Europe, all while Harrison was dealing with a health crisis in his own family: First Lady Caroline Scott Harrison was gravely ill with tuberculosis and would die later in 1892 in the White House. In between insights from Charlie and Nelson about this episode and others, listeners are invited to phone the WICR-FM (88.7) studio at 317-788-3314 and pose questions about any aspect of the state's history; typically on Hoosier History Live, questions from listeners are limited to the final 20 minutes of the show. As we continue to salute Black History Month, Nelson will also discuss the 28th U.S. Colored Troops, an African-American regiment from Indiana during the Civil War. Even before the regiment was formed in 1864, Black soldiers from Indiana had been fighting for the Union Army in an infantry unit from Massachusetts; that legendary regiment was depicted in the movie Glory (1989) starring Denzel Washington. Benjamin Harrison (1833-1901), who became a brigadier general during the Civil War, initiated attempts during his subsequent political career to expand civil rights for African Americans; Charlie will describe these attempts during our show. Charlie also will discuss the reaction in Indianapolis in 1888 when Harrison, a Republican, was nominated as a presidential candidate. It is the focus of a current exhibit at the presidential site titled The Night Indianapolis Roared. Another topic during our show will relate to the recent dedication in Carmel of a sculpture and interactive kiosks in honor of Hoagy Carmichael (1899-1981). The famous composer, who grew up in Bloomington and Indianapolis, is profiled in Nelson's book Indiana Legends, 4th edition (Hawthorne Publishing, 2005). Nelson had several interviews with Hoagy's son, Randy Carmichael, who died in 2018; Randy had been a Hoosier History Live guest seven years earlier. The sculpture and kiosks honoring Hoagy are near the Center for the Performing Arts, which includes the Great American Songbook Foundation.
On July 28, 2013, 50-year-old Robert “Hoagy” Hoagland ate bagels with his son, mowed the lawn, bought a map and disappeared. Did the husband and father of 3 simply walk away from his life? Or did his disappearance have something to do with the two drug dealers he had recently confronted in an abandoned factory? Over seven years later, questions still swirl around the missing family man who took nothing with him, leaving behind a shattered family whose lives have been forever changed by his absence. For a complete list of our sources, along with photos and videos, please visit our website, And Then They Were Gone. Sources: Family Man Mysteriously Vanishes Wikipedia: Disappearance of Robert Hoagland Facebook: Help Us Find Hoagy Disappeared Blog: Robert Hoagland Man's disappearance still a riddle One year later, Newtown man's disappearance still troubles loved ones Cops probe report of missing Newtown man in Putnam Disappeared - Season 7 Episode 8 ''A Family Man'' Six months after Newtown man's disappearance, no clues Reddit: What is your theory about Robert Hoagland's disappearance? --- Send in a voice message: https://anchor.fm/andthentheyweregone/message Support this podcast: https://anchor.fm/andthentheyweregone/support
The great Hilton Valentine assumed room temperature this week in R&R, brothers and sisters..... A sad bit of news to be sure. I met Hilton Valentine at a record show in Connecticut a few years ago and purchased a few 45's from him. We got to discuss some of his guitars, amps, influences, etc. I asked him where he got the idea for the guitar intro on "Baby Let Me Take You Home"... he laughed and told me from Jimmy Page, who played guitar on Hoagy Lands' 45 rpm the same year. What? Where can I get a copy? Hilton had one for sale, so after a short negotiation, I had the original in my hand... What? The Bert Berns-penned Hoagy 45 wasn't the original? No, Berns got it from Dylan [Baby, Let Me Follow You Down], who got it from a traditional folk song... you can't make this stuff up! So the incredible guitar intro is Page according to Hilton, but Wikipedia has Eric Gale as the guitarist...? Jimmy Page is one of the funkiest guitarists ever! His timing on the Yardbirds stuff and especially Zeppelin is outrageous. I can't figure out the intro no matter who's playing it! Dig it!.....
America has always had amazing songwriters whose works simply change the way we all talk to each other, none more so than the great Hoagy Carmichael. In his 80 years, Hoagy wrote hundreds of songs, including 50 that achieved hit record status for numerous artists, and they still do. For example, a few years ago, Norah Jones charted with Hoagy’s “The Nearness of You,” a song that was written and first recorded 40 years before Norah was even born. But, then, shoot, any of Hoagy’s wonderful songs would be enough to build a legend on. “Georgia on My Mind.” “Skylark.” “Heart and Soul.” “Stardust.” The Flood’s been doing Hoagy Carmichael songs for decades, and we’re sure to be adding some more to our repertoire in the new year. But we always come back to this one, our favorite, the first of Hoagy’s tunes we ever tackled.
Berkeley, California, is situated on the east shore of the San Francisco Bay. It’s a picturesque college town, known today for being the most liberal city in the U.S. A decade after the events to be recounted here, Berkeley was the epicenter of the counterculture movement, home to hippies and anarchists who flocked to the Bay Area from around the world. But before free love and bell bottoms gave this California city an identity that lasts even still, it was just another place in the U.S — where families raised children and worked toward the American dream. Even then, it was a beautiful city; its air fragrant with wild poppy and eucalyptus ... yet the dream for some here would melt into wickedness. Even the best home, the best grades, the best life couldn’t save at least one little girl... from something... so big… so bad… a girl who wanted a pet parakeet... and carried a picture of her poodle Hoagy next to photos of her classmate... in her little... red... wallet. ---- *From the archives of* The San Francisco Examiner, Oakland Tribune, San Francisco Chronicle, San Mateo Times, Roseville Press-Times, Associated Press, Reuters, Santa Rosa Press Democrat, New York Daily News, United Press International, *Book:* Shallow Grave in Trinity County by Harry Farrell. ~ *Researched and Written by F.T Norton* *Hosted and Co-Written by Jack Luna* *Produced by The Operator* Support this podcast at — https://redcircle.com/dark-topic/donations Advertising Inquiries: https://redcircle.com/brands
Tiffany Pauldon-Banks is a serial entrepreneur, and this time around she is bringing Chicago-style Hoagys to Chattanooga. Learn with me what exactly a hoagy is and how Chicago sets it apart. I had lots of fun with this conversation as we laugh our way through her amazing story of how Lil Mama’s Chicago Style Hoagy was born! Tiffany is the real deal and I cannot wait for her to officially open her doors on Patten Parkway this fall. Spoiler alert: She made me a Hoagy the next day and it can only be described as a party in your mouth. I didn’t know that much flavor could fit into one bite. Don’t be hungry while listening unless you want to torture yourself! Here is Lil Mama!
Hoagy's first records, with territory bands and also as a member of the Paul Whiteman Orchestra. Songs like "March of The Hoodlums" should really be revived! Hoagy sings, arranges, composes and plays piano (and cornet) on these records. --- Support this podcast: https://anchor.fm/john-clark49/support
A bumper episode this week! The boys chat about the birth of another ram lamb, "Hoagy", another infamous "Gundog Escape" and some of the incredible achievements coming out of the lockdown period. They also discuss a potential return to boxing for "Iron Mike" as well as taking a deeper dive into this week's Motley review... focusing on the story of Kenton Park Estate's cider and how to use what you have at your disposal to maximise profit. The lockdown quiz also makes a return this week and the boys answer your most burning questions! Enjoy...
duration: 00:59:03 - "Georgia on my Mind" by Hoagy Carmichael (music) and Stuart Gorrell (lyrics) - by Laurent Valero - In 1960, rising star Ray Charles released the song as a single; it hit No. 1 and won a Grammy Award! -- “I’ve never known a lady named Georgia, and I wasn’t dreaming of the state, even though I was born there. It was just a beautiful, romantic melody.” Ray Charles (Brother Ray: Ray Charles' Own Story), 1978 - produced by Patrick Lérisset
Hoagy Carmichael is a producer, director, author and bamboo rod builder. Growing up as the son of one of America’s most cherished songwriters, Hoagy is no stranger to the pressures of passing on legacies. In 1968, he met bamboo rod builder Edmund Everett Garrison and would eventually chronicle the work of one of fly fishing’s greats. You can find this video in the Members section of AnchoredOutdoors.com - sign up today and get your first month free; you can cancel at any time. Buy Hoagy's Books at www.booksbycarmichael.com
Outline of This Episode
[4:15] Cancer has taken a toll on his health
[7:16] Born and raised in L.A., Hoagy grew up surrounded by show business
[10:35] Hoagy didn’t learn how to fish until 1967
[21:46] Garrison’s rods were different
[25:17] Hoagy worked on Mister Rogers’ Neighborhood
[30:15] He has written 5 books on fly fishing
[37:40] Do women really catch more fish?
[42:05] Is he a part of the fly fishing industry?
[46:07] How would he like to be remembered?
This episode of Anchored is brought to you by Norvise!
duration: 00:32:02 - Les Nuits de France Culture - It was to Hoagy Carmichael that Daniela Langer paid tribute in the fourth installment of the series she devoted in 1991, in "Le Rythme et la raison," to the great composers and lyricists of the golden age of the Broadway musical, a program subtitled "Des rengaines banales et irrésistibles"…
duration: 00:57:44 - "Skylark" (Hoagy Carmichael / Johnny Mercer) - by Laurent Valero - "Composed in 1941, 'Skylark' is said to have been written from a cornet improvisation by Bix Beiderbecke, a close friend of Carmichael, who at first titled the composition Bick's licks. The theme was written for a planned Broadway musical..." Laurent Valero - produced by Patrick Lérisset
July 28, 2013. Newtown, Connecticut. 50-year old Robert “Hoagy” Hoagland is last seen mowing the lawn outside his house. The following day, his wife, Lori, returns from a trip to Turkey, but Hoagy fails to pick her up at the airport as scheduled. When Lori returns home, she discovers that her husband is missing and that his vehicle and all of his personal possessions have been left behind. Investigators explore the possibility that Hoagy's disappearance might be connected to drug addiction issues involving his son, Max, and a recent confrontation Hoagy had with some of Max's criminal associates. However, they also cannot rule out the idea that Hoagy disappeared voluntarily, as he had once run away from his family on a previous occasion. This week's episode of “The Trail Went Cold” explores a truly perplexing missing persons case in which there is no conclusive evidence to suggest what happened. Additional Reading: https://en.wikipedia.org/wiki/Disappearance_of_Robert_Hoagland https://www.nbcconnecticut.com/news/local/Newtown-Man-Missing-for-Nearly-a-Year-268545472.html https://www.newstimes.com/policereports/article/Newtown-man-s-disappearance-featured-on-TV-show-7954415.php https://www.newstimes.com/policereports/article/Man-s-disappearance-still-a-riddle-4757047.php https://www.newstimes.com/local/article/Six-months-after-Newtown-man-s-disappearance-no-5218083.php#photo-5843666 http://www.newstimes.com/local/article/Newtown-man-s-disappearance-troubles-loved-ones-a-5677519.php “The Trail Went Cold” is on Patreon! Visit www.patreon.com/thetrailwentcold to become a patron and gain access to our exclusive bonus content. The Trail Went Cold is produced and edited by Magill Foote. All music is composed by Vince Nitro.
Anyone who has beaten out Heart and Soul on the piano, fallen in love with the soundtrack to “Sleepless in Seattle” or can remember Ray Charles singing Georgia on My Mind is familiar with Hoagy Carmichael’s music. His song Stardust has been recorded over 2000 times and was selected for inclusion in the National Recording Registry at the Library of Congress in 2004. For the last six years, Carmichael’s son Hoagy Bix, Tony-nominated director Susan H. Schulman, and choreographer Michael Lichtefeld have been developing a new musical called STARDUST ROAD, featuring songs from Carmichael’s catalog. Hear what they have to say about that, Hoagy Carmichael's legacy, and more. https://rduonstage.com/2019/10/20/podcast-transcript-the-birth-of-the-new-musical-stardust-road-with-hoagy-bix-carmichael-tony-nominee-susan-h-schulman-and-michael-lichtefeld/ (To read a transcript of this episode, click here.)
About the Guests
Hoagy Bix Carmichael is a film, television, and theatrical producer. He worked as assistant director for Hecht Hill Lancaster on such films as “The Rabbit Trap” (Universal Pictures), “Elmer Gantry” (Columbia Pictures), and “Separate Tables” (Columbia Pictures). While at WGBH/TV in Boston, he co-produced many productions including “On Being Black,” “The Music Shop” and “The Advocates” for PBS. He was the managing director/producer for “Mister Rogers’ Neighborhood.” Mr. Carmichael co-manages the Hoagy Carmichael music catalog, and was the Artistic Producer of the “Hoagy Carmichael Centennial Celebration.” A founding member of AmSong, Inc., an advocacy organization for American songwriters, Carmichael served as its president for three years. https://www.hoagy.com/
Susan H. Schulman’s Broadway credits include the Tony Award-winning musical THE SECRET GARDEN as well as its highly successful national tour, the revival of SWEENEY TODD at the Circle in the Square, for which she received a Tony Award nomination, the revival of THE SOUND OF MUSIC (Tony nomination for Outstanding Revival) and LITTLE WOMEN, the musical, and its successful national tour. For her direction of the highly acclaimed musical VIOLET, winner of The New York Drama Critics’ Circle Award for Best Musical, Schulman received a Drama Desk nomination for Best Director. She received an Obie Award for directing MERRILY WE ROLL ALONG at the York Theatre, a production which also received the Lucille Lortel Award for Outstanding Revival, as well as several Outer Critics and Drama Desk nominations. For the prestigious Stratford Festival of Canada, she has directed nine productions, and her many regional and national tour productions include SUNSET BOULEVARD with Petula Clark and the premiere of HEARTLAND. Schulman is a member of the executive board of SDC, a graduate of the Yale Drama School, Hofstra University, and New York’s famed High School of Performing Arts.
Michael Lichtefeld choreographed six Broadway musicals including LITTLE WOMEN, THE SOUND OF MUSIC, THE SECRET GARDEN, GENTLEMEN PREFER BLONDES and LAUGHING ROOM ONLY. He worked off-Broadway choreographing eight musicals and 10 national/international tours. For the Stratford Shakespeare Festival, he choreographed nine musicals and directed/choreographed SOUTH PACIFIC and MY ONE AND ONLY. He has also been nominated for the Drama Desk Award and three Outer Critics’ Circle Awards. This summer he will travel to Australia for the 25th Anniversary remount of THE SECRET GARDEN with Susan H. Schulman.
Connect with RDU on Stage Facebook – @rduonstage Twitter – @rduonstage Instagram – @rduonstage Web http://www.rduonstage.com/ (www.rduonstage.com) Support this podcast
»The Hoagy Carmichael Songbook« is presented by Radio Jazz host Tom Buhmann. This is the second and final installment. Hoagland Howard "Hoagy" Carmichael (1899–1981) was an American singer, composer, pianist and actor. Broadcast on Radio Jazz in 2019. There is more jazz at www.radiojazz.dk
»The Hoagy Carmichael Songbook« is presented by Radio Jazz host Tom Buhmann. Hoagland Howard "Hoagy" Carmichael (1899–1981) was an American singer, composer, pianist and actor. Broadcast on Radio Jazz in 2019. There is more jazz at www.radiojazz.dk
Well, the New Year is upon us. Every year for the past several years I have usually played a special big band show called 1945 - 1946 New Year's Dancing Party. I ran this last year and it is still available. I didn't want to run this again so I am going to repeat a show from 2014. Back in November, Hoagy Carmichael had a birthday. Since there were other things going on, I missed doing a show on him. So today's show is one I recorded in 2014. It is a Hoagy Carmichael Celebration. Hoagy composed quite a few songs that are now part of the Great American Songbook. I think you'll enjoy this look at the music and career of the late, great composer, Hoagy Carmichael as we ring in the New Year. Happy Holidays everyone and thank you for listening. Please visit this podcast at http://bigbandbashfm.blogspot.com
For her Phinal Phyllis Phave Phriday, Phyllis Fletcher joins Mike for a listen back to times when alleged “woo girl” Jen Flash Andrews was thrown out of restaurants, and in one instance, given up by a tell-tale pen. Mike shares some of his own delinquent dining behavior through the years (with apologies to Hoagy’s Corner), and hits Fletch with one more round of his patent-pending gotcha segment, While We Have You. Remember: Proceed with caution when retrieving your ill-gotten 100 Grand Bar from Bill Radke’s bathrobe pocket. You have been warned. This episode features TBTL clips from the third hour of Feb. 3, 2009, and the third hour of April 3, 2009.
Hoagy checks in with us to talk about some upcoming rides, a great opportunity for new LD riders to join the IBA, and of course, a few crazy stories.
www.hoagysheroes.org
Hoagy on Facebook
Hoagy's Heroes on Facebook
It may have been written by a man who never even set foot in the state, but that hasn't stopped 'Georgia on My Mind' becoming a Southern anthem. Mike Hobart looks back on the song's origins. Credits: Rendez-Vous Digital, The Island Def Jam Music Group, Not Now Music and EG Jazz See acast.com/privacy for privacy and opt-out information.
Lucy Branch is a conservator. She specialises in the conservation of sculptural and architectural bronze and contemporary materials. She has worked on high-profile projects including Eros, Nelson's Column and the Queen Victoria Memorial. She has led the conservation work on some of Britain's best known contemporary sculpture including Ron Arad's The Big Blue and Wendy Taylor's Conqueror. Her novel, A Rarer Gift Than Gold, is published by Clink Street. She is director of the company Antique Bronze Ltd. Hoagy B Carmichael - son of the composer, singer, musician and bandleader Hoagy Carmichael - is co-producer of Stardust Road, a forthcoming musical which celebrates his father's work. Hoagy Carmichael studied law before going on to write hit songs including Stardust, Georgia on My Mind and The Nearness of You. Stardust Road is at St. James Theatre, Palace Street, London. Cormac Murphy-O'Connor was cardinal archbishop of Westminster from 2000-2009. Born to Irish parents and brought up in Reading, he was 15 when he announced he wanted to be a priest. He studied at the English College in Rome and was ordained in 1956. The following year he began his ministry as a priest in Portsmouth. He was ordained bishop of Arundel and Brighton in 1977. In his memoir, An English Spring, he writes about his role in the Church during periods of turbulence and change. An English Spring is published by Bloomsbury. Eddie Pepitone is a US comedian and actor. Born into an Italian-American family in Brooklyn, he took up improvisation in his teens and later became a full time stand-up comedian, leaving the east coast for Los Angeles. He has appeared in many US television shows including Arrested Development; Flight of the Conchords, Monk and ER. Eddie Pepitone's show, What Rough Beast, is at the Soho Theatre, London.
Hoagy Carmichael tells us all about Hoagy's Heroes - a long-distance riding charity that benefits A Special Wish Foundation, The Children of Fallen Soldiers Relief Fund, and the August Levy Learning Center.
Hoagy's Heroes
Hoagy's Heroes on Facebook
The early recorded history of jazz, blues, and country music in America usually isn’t associated with a place like Richmond, Indiana. However, for a brief period early in the 20th century the Gennett record label based in Richmond recorded music from artists such as Gene Autry, Charley Patton, Blind Lemon Jefferson, and Hoagy Carmichael. Learn about the history of the label from Rick Kennedy, the author of Jelly Roll, Bix, and Hoagy. Music featured: Charley Patton – Down the Dirt Road Blues Fiddlin’ Doc Roberts – Deer Walk Bix Beiderbecke – Davenport Blues William Harris – Bullfrog Blues Hoagy Carmichael & Pals – Stardust King Oliver’s Creole Jazz Band – Chimes Blues Jelly Roll Morton – King Porter Stomp Scrapper Blackwell – Blue Day Blues Recommended: Blind Lemon Jefferson – Mosquito Moan Charley Patton – Spoonful Blues New Orleans Rhythm Kings – Mr. Jelly Lord Fletcher Henderson – Honey Bunch Marion McKay – Hootenanny Sounds Used (freesound.org): ‘locomotive.wav’ by laurent, ‘Stem_Train.wav’ by knufds, Note: When the episode was originally published, we used an incorrect estimation of the number of white males in the KKK in Richmond in the 1920′s. The issue was corrected in this version of the program. We regret the error.
Judy Carmichael interviews Hoagy Bix Carmichael
Big Band Serenade presents Hoagy Carmichael, one of America's great composers of popular songs you have all heard. Our program includes a Tony Thomas interview with Hoagy Carmichael and his great music. Songs played are listed in order of play:
1) "Georgia On My Mind" - 1930
2) "Ole Buttermilk Sky"
3) "Lazy River" - 1930
4) "It Ain't Gonna Be Like That" - 1946
5) "After Twelve O'Clock" - 1932
6) "Ginger and Spice" - 1945
7) "The Nearness Of You" - 1940
To learn more about Hoagy go to www.hoagy.com