Jaran Mellerud breaks down why Finland, Iceland, Norway, and Sweden have captured almost all of Europe's bitcoin mining industry.

Welcome back to The Mining Pod! Today, the CEO of Hashlabs Mining, Jaran Mellerud, joins Colin to discuss Bitcoin mining in Finland, Iceland, Norway, and Sweden. Bitcoin mining on the European market is almost exclusively confined to these Nordic countries, and this episode explores why. Jaran and Colin discuss power markets in these countries, why mining has gotten more expensive in Sweden and Norway, the toothless attempt to ban mining in Norway, how bitcoin miners are helping with district heating in Finland, and more.

Timestamps:
00:00 Start
01:53 Jaran intro
02:31 Why is hashrate concentrated in Nordic countries?
05:42 How has mining there changed recently?
16:33 Cooling Hashlabs
20:37 Finland nuclear surplus
23:06 Finland district heating
24:53 Is Norway banning mining?
29:02 Iceland vs Norway AI data centers
31:25 Public vs private mining
33:16 Bitdeer expanding Norway
36:59 Iceland & Finland hydro & geothermal
38:57 Mining elsewhere in the EU?
41:20 Green manufacturing failing & excess power

Published twice weekly, "The Mining Pod" interviews the best builders and operators in the Bitcoin and Bitcoin mining landscape. Subscribe to get notified when we publish interviews on Tuesdays and the news show on Fridays!
We've arrived in 2025, and assuming President-Elect Trump is inaugurated, he will be the first president to take office convicted of felony crimes. Since we know that an organization's tone is often set by its leadership, we can't help wondering: how will Trump's leadership impact the culture of the United States, and how will this play out in our collective futures?

Let's be real about where we are in this moment in history, courtesy of The Atlantic: "According to a report last year by the Varieties of Democracy Institute at the University of Gothenburg, in Sweden, when it comes to global freedom, we have returned to a level last seen in 1986. About 5.7 billion people, 72 percent of the world's population, now live under authoritarian rule. Even the United States, vaunted beacon of democracy, is about to inaugurate a president who openly boasts of wanting to be a 'dictator on day one,' who regularly threatens to jail his opponents and sic the military on the 'enemy within,' and who jokes about his election being the country's last... Many Americans understand today what political exhaustion and complacency look and feel like. But the dissident is the one who hopes against hope."

We can't imagine it'll be particularly easy, but we do believe we have reason to hope. Hope is the consequence of action, and is often self-fulfilling (we act, we hope, we act some more). This is why today, we're asking you this: How are YOU Trump-proofing your life?

What to listen for:
- Putting self care first (like, REAL self care), and the story about Sara's thunderclap headaches
- How to stay informed while keeping your sanity
- Necessary mindsets, including trusting yourself, grieving, choosing your lane and letting go of the rest, getting real about power, and asking ourselves what we're willing to sacrifice (comfort, convenience, or more) to stand up for what's right
- A simple example: Do you believe fact-checking and real people are important parts of social media platforms? If so, will you get yourself off Meta's platforms?
- A reminder not to reinvent the wheel, but to find and support organizations doing the work with your time, money, and energy. Here's a list to start with; let us know what organizations you support and we'll add them to our list!

Connect with Us! To give us input on what you want from our newsletter, and/or share your Asian immigration stories, reach us via email at hello@dearwhitewomen.com. Follow Dear White Women so you don't miss these conversations! Like what you hear? Don't miss another episode and subscribe! Catch up on more commentary between episodes by following us on Facebook, Instagram, and Twitter, and find even more opinions and resources if you join our email list.
New year, new opportunities: Game-changing developments are on the horizon for Arctic Minerals (STO: ARCT). In this exclusive interview, newly appointed Executive Director Peter George reveals the company's recent strategic breakthroughs, from highlighting the potential of the Hennes Bay project in Sweden to key financial decisions aimed at boosting shareholder value. He also discusses the fresh talent joining the team and explains why investors should take note of the company's rising stock potential.

Discover more about Arctic Minerals' projects by visiting their website: https://www.arcticminerals.se/en/
Watch the full YouTube interview here: https://youtu.be/V2toq4MMfFk
And follow us to stay updated: https://www.youtube.com/@GlobalOneMedia?sub_confirmation=1
Proposal: stricter requirements for becoming a Swedish citizen. / Many police cars have no defibrillators. / A record number of children called Bris over Christmas to get support. / Den sista resan won two prizes at Guldbaggegalan. Listen to all episodes in Sveriges Radio Play. By Ingrid Forsberg and Jenny Pejler.
376: Brian Beckstead | Cadbury Marathon | Leanne Pompeani

This episode is sponsored by Altra Running. Check out their latest shoe, the LONE PEAK 9+, and listen in later to Moose's interview with Brian Beckstead, Altra Footwear co-founder. Visit altrarunning.com.au or your local specialty running store to shop now.

Leanne Pompeani returns to catch us up on her last few races and makes a huge announcement about her debut marathon and her training toward it. Julian finds some positives in the rebuild, even when he gets lost. Brad plays the long game as he works around the heat.

This week's running news is presented by Axil Coffee: https://axilcoffee.com.au/

Valencia 10k: Andreas Almgren of Sweden takes the win in a new European record of 26:53, just ahead of Dominic Lobalu of Switzerland and Vincent Langat of Kenya. Hellen Ekale Lobun set a massive personal best of 29:31 ahead of Girmawit Gebrzihair and Fotyen Hailyu of Ethiopia. Official Results

Fraser Darcy won the 2025 Cadbury Marathon in Hobart, Tasmania, in 2:26, while Camille O'Donaghue won in 2:54:46. Sarah Klein won the half marathon in 1:17:25, and Richie Egan won in 1:09:52. Official Results

Adelaide University Athletic Club hosted the inaugural Run the Loop, with Caitlin Adams and Jonothan Harris posting the fastest times on the 2.2km loop. Official Results

Julian reviews the recently released Altra Lone Peak 9+, a trail shoe available through all running specialty shops. He goes a little into the history of Altra and then delves into how he's been finding the Lone Peak 9+.

Listener Question: is it beneficial to do a little more intensity with fewer days of running?

Moose on the Loose muses on convicted dopers re-entering professional racing, then the whispers speak about potential moves between training groups and races being lined up.

This week's guest is CEO and co-founder of Altra Running, Brian Beckstead. Brian chats with Julian about his origins in Orem, Utah, meeting his future team, and using their observations on barefoot running to inform the philosophy that went into developing the first Altra models, as well as building up their distribution on the cusp of the barefoot trend. Brian reflects on some of the significant moments and mistakes that have characterised Altra's growth as a running brand, from committing to trail running to getting adopted by the hiking community, then talks about innovation in running shoe foam and how Altra positions itself in mainstream running culture. He breaks down some truths and beliefs of running brands, talks about his current involvement, then gives further detail on the Altra Lone Peak 9+ before rounding out with the future direction of Altra.
Many people are falling and injuring themselves on icy streets. / Sweden is sending warships to a NATO mission in the Baltic Sea. / Customers are not paying for their taxi rides. / Christmas tree plundering (julgransplundring) in Skellefteå on Tjugondag Knut. Listen to all episodes in Sveriges Radio Play. By Ingrid Forsberg and Jenny Pejler.
As the crew of the Crescendo get to know the old/new captain, tensions are running high... We're an actual play podcast where professional actors in Sweden play the best of Swedish RPGs! Led by one of Sweden's most experienced and appreciated podcast game masters, we play Coriolis, a game published by Fria Ligan (Free League Publishing).

Starring: Anneli Heed, Ingela Lundh, Mattias Redbo, Amanda Stenback and Jakob Hultcrantz Hansson.
Game master: Andreas Lundström
Character art by: Moa Frithiofsson
San Jose Sharks legend and NHL Network star Jason "Daddy" Demers is here, and boy does he have some great stories: Sweden being a thorn in his side every time he put on a Team Canada jersey, playing alongside Jumbo Joe, and of course the very cool saga of him making it to his 700th game. Now he's a media darling crushing it across all kinds of platforms. Plus, the boys put him through the wringer in a game of Pass Shoot Score.

NEW EPISODES EVERY MONDAY & WEDNESDAY!

PRESENTED by BetMGM. Download the BetMGM app and use code "NETTERS" and enjoy up to $1500 in bonus bets if you lose your first wager!

SUPPORT OUR SPONSORS:
BAUER. Bauer is the go-to destination for all your training needs. Head to http://www.bauer.com/training to explore tools like the Digital Reactor Danger for stickhandling or the Reactor Slide Board to add strength to your stride.
CASHAPP. Download Cash App and take control of your finances! https://apps.apple.com/us/app/cash-ap...
RIKI. Head to https://rikispirits.com/ to find out where to get RIKI near you. Follow @friday.beers and @rikispirits to stay up to date with upcoming RIKI contests and giveaways.
FUNKAWAY. To check out the full family of FunkAway products, go to http://www.funkaway.com to learn more funk'in cool stuff. And head over to Amazon right now and grab FunkAway products with just a few clicks.
FIREBALL. Fireball's iconic cinnamon flavor tastes fire and goes down easy, making it the ultimate crowd pleaser. Go pick up some from your local liquor store and join us in drinking Fireball during our game days this season! #IgniteYourRivalry
EVERYMANJACK. Give Every Man Jack a shot today: go to http://www.everymanjack.com and use code "NETTERS" at checkout for 25% off your first order.
CBDMD. Visit http://www.cbdmd.com to explore their extensive range of products and find the perfect solution for your needs. Don't forget to use code "FRIDAY" at checkout to get 30% OFF + free shipping.
DOLLAR SHAVE CLUB. Dollar Shave Club products are now available everywhere, so you can order from their website, Amazon, or get them at your favorite retailer near you. Visit http://www.dollarshaveclub.com/netters and use promo code NETTERS for 20% off $20 or more.
CHOMPS. If you are looking for the PERFECT on-the-go snack with zero grams of sugar, packed with high-quality protein, then Chomps is for you. To learn more about Chomps, click here: http://www.chomps.com/emptynetters

Learn more about your ad choices. Visit megaphone.fm/adchoices
Discover the magic of Sweden as we explore the vibrant streets of Stockholm, the coastal charm of Gothenburg, and the breathtaking beauty of Swedish Lapland. For travel tips and inspiration, start planning your journey at VisitSweden.com. Sponsored by: Visit Sweden
A round-up of the main headlines in Sweden on January 13th 2025. You can hear more reports on our homepage www.radiosweden.se, or in the app Sveriges Radio Play. Presenter/Producer: Michael Walsh
Support Rendering Unconscious by becoming a paid subscriber to Patreon/Substack, where we post exclusive content regularly. All paid subscribers receive a link to our Discord server, where you can chat with us and others in our community with similar interests. So join us and join in the conversation!

Vanessa & Carl's Patreon: https://www.patreon.com/c/vanessa23carl
Vanessa's Substack: https://vanessa23carl.substack.com
Carl's Substack: https://thefenriswolf.substack.com

RU326: EWELL J. JULIANA ON THE INTERNAL BLACK STAR, SATURN, CREATIVITY, WRITING, MAGIC, OCCULTURE: http://www.renderingunconscious.org/art/326-ewell-j-juliana-on-the-internal-black-star-saturn-creativity-writing-magic/

Watch this discussion at YouTube: https://youtu.be/7Ztpn8pMNJs?si=8-SHgSuY4HP1Z7Hh

Ewell J. Juliana is a multifaceted author, philosopher, and social worker whose work spans multiple disciplines, including philosophy, esotericism, music, and visual arts. As the founder of The Internal Black Star concept in 1994, which was formally established in 2000, and The Art of Mystification in 2001, Juliana has seamlessly blended influences from occultism and esotericism with artistic and spiritual expression. His creative endeavors and philosophical insights are further enriched by his dedication as a social worker, where he has focused on the well-being of individuals and communities. Follow at Instagram: https://www.instagram.com/theinternalblack/

Join us for Kenneth Anger: American Cinemagician with Carl Abrahamsson, beginning February 2: https://www.morbidanatomy.org/classes/

Watch all of Carl's films at The Fenris Wolf Substack: https://thefenriswolf.substack.com

Join us in London for the book launch of Meetings with Remarkable Magicians: Life in the Occult Underground by Carl Abrahamsson at Watkins Books, February 27th: https://www.watkinsbooks.com/event-details/meetings-with-remarkable-magicians-life-in-the-occult-underground-carl-abrahamsson

Then on February 28th, join us at the Freud Museum, London, for "Be Careful What You Wish For: Female & Male Existential Malaise and Hysteric Approaches in 'The Substance' and 'Seconds'": https://www.freud.org.uk/event/be-careful-what-you-wish-for-female-male-existential-malaise-and-hysteric-approaches-in-the-substance-and-seconds/

Rendering Unconscious is also a book series! The first two volumes are now available: Rendering Unconscious: Psychoanalytic Perspectives vols. 1 & 2 (Trapart Books, 2024). https://amzn.to/4eKruV5

Rendering Unconscious Podcast is hosted by Dr. Vanessa Sinclair, a psychoanalyst based in Sweden who works with people internationally: http://www.drvanessasinclair.net
Instagram: https://www.instagram.com/renderingunconscious/
TikTok: https://www.tiktok.com/@renderingunconscious
Bluesky: https://bsky.app/profile/drsinclair.bsky.social

Support Rendering Unconscious Podcast:
Patreon: https://www.patreon.com/vanessa23carl
Substack: https://vanessa23carl.substack.com
Make a Donation: https://www.paypal.com/donate/?business=PV3EVEFT95HGU&no_recurring=0&currency_code=USD

The song at the end of the episode is "Freedom" by Ewell J. Juliana, available at Bandcamp. Check out TAOM Records: https://taomrecords.bandcamp.com/music

Image: Ewell J. Juliana
Welcome back to the Magician On Duty Journey Series! On this edition we welcome AndArc (@andarc).

AndArc: Charting a Dark and Groovy Path Through the Electronic Music Scene

Swedish DJ AndArc (Andreas Arcombe) has taken the electronic music world by storm, amassing hundreds of thousands of followers across social media platforms. Known for his Berlin-inspired Down/Midtempo Techno, Andreas has established himself as one of Sweden's most exciting emerging artists. His sets are renowned for their dark, groovy beats and signature "dirty basslines," delivering a unique sonic experience that keeps audiences captivated.

AndArc's versatility as a DJ shines through in his mastery of multiple techno subgenres, including Melodic Techno, Indie Dance, and Dark Disco, all laced with a distinct bouncy energy. While his sound is predominantly rooted in midtempo grooves, he also thrives in the deeper, darker downtempo atmospheres of after-hours sets. This dynamic range has earned him a growing reputation as a rising star in the international electronic music community.

"Expect heavy basslines that bring a sense of darkness, paired with sparkling melodic highs that transport you to another world," says AndArc. "It's a journey, one that guarantees butterflies in your stomach and keeps you moving until dawn."

AndArc's Musical Journey

For Andreas, music is more than a profession; it's a deeply personal passion. Each set tells a story, a journey through life's highs and lows. "My sets often start in a darker place," he shares. "From there, you'll navigate bouncy, chaotic roads before finding moments of calm. But not for too long; time flies, and it's all about having fun. I want listeners to feel every twist and turn, ending with something unforgettable."

AndArc's latest podcast showcases this philosophy, blending his signature style with unreleased gems. "Podcasts allow me to fully express myself and share my story through music. It's an incredible way to connect with people," Andreas reflects.

Follow AndArc here:
https://soundcloud.com/andarc
https://www.instagram.com/andreasarcombe
https://www.tiktok.com/@familjenarcombe
Last week, we gave beginners to the world of strength training our ten best insights and tips, to make the journey to success in the gym easier. Today, we answer some of the most common questions beginners have about starting to lift weights!

Timestamps:
03:40 - Question 1: Protein is important. But how important are carbohydrates?
07:30 - Question 2: Do I really need to follow a program from day 1 as a beginner?
11:20 - Question 3: When can I count myself as strong?
15:00 - Question 4: Should I stretch before/after I lift?
19:10 - Question 5: Beginner programs often only have like three sets of five for a few exercises. Is that really enough?
22:30 - Question 6: I can't progress. Week after week I can only manage to lift the same weights for the same reps. Why?
27:00 - Question 7: How do I warm up before lifting weights?
32:20 - Question 8: Machines or free weights?
36:50 - Question 9: My 13-year-old nephew started strength training recently. What advice would you give him for long-term success?

***

Do you like what you hear so far? Please leave a five-star review in your podcast player. And hit that follow button! You can also follow us on Instagram. You'll find Daniel at @strengthdan, and Philip at @philipwildenstam. Become a part of our community on Facebook here.

***

This podcast is brought to you by Styrkelabbet AB, Sweden. To support us, download the world's best gym workout tracker app, StrengthLog, here. It's completely ad-free and the most generous fitness app on the market, giving you access to unlimited workout logging, lots of workouts and training programs, and much, much more, even if you stay a free user for life. If you want a t-shirt with "Train hard, eat well, die anyway", check out our shop here.
Police in Sweden have received a report filed against an Israeli soldier who was visiting Sweden. There were nearly 600,000 fewer cinema visits in 2024 compared with the year before. This year's licensed wolf hunt has now been halted in all five districts.
From TreasureIslandOldies.com, this is the Rock & Roll News for the Week of January 12, 2025. This weekly podcast covers events that took place this week in rock & roll history: who was in the studio recording what would become a big hit, and which artists are celebrating birthdays this week.

Join me for the entire weekly four-hour radio show, Treasure Island Oldies, The Home of Lost Treasures, at www.treasureislandoldies.com.

On the air every week since 1997, TreasureIslandOldies.com is one of the longest continuously running radio shows on the Internet, and this year we are celebrating our 28th anniversary! The show is hosted by veteran record label executive and broadcaster Michael Godin. During his career at A&M Records, he became Vice-President of A&R and discovered and signed Bryan Adams to the label, along with multi-award-winning songwriter and recording artist Paul Janz. Michael also signed The Payolas, whose Eyes Of A Stranger has become a classic. He returned to his radio roots in 1997 when Treasure Island Oldies began, and the show continues to this day.

The Treasure Island Oldies Broadcast Partners Network is always interested in welcoming new stations to its ever-growing network of stations around the world, including Canada, USA, England, Scotland, New Zealand, Sweden, and Ireland. If you'd like to air Treasure Island Oldies or the Rock & Roll News Podcast on your station, contact michael@treasureislandoldies.com.

Keep up to date with late-breaking news by coming to the Treasure Island Oldies Blog. And follow Michael Godin on Facebook.
We're shifting our focus across the Baltic Sea to Estonia, Finland and Russia, to see what happens when war once again inevitably breaks out between Sweden and Russia. This time, it's personal! Tsar Ivan the Terrible and King Johan of Sweden really don't get along, and their differences spill over into a drawn-out conflict. One particularly dramatic event involves a dispute between Johan's German and Scottish mercenaries!
Due to overwhelming demand (>15x applications:slots), we are closing CFPs for AI Engineer Summit NYC today. Last call! Thanks, we'll be reaching out to all shortly!

The world's top AI blogger and friend of every pod, Simon Willison, dropped a monster 2024 recap: Things we learned about LLMs in 2024. Brian of the excellent TechMeme Ride Home pinged us for a connection and a special crossover episode, our first in 2025. The target audience for this podcast is a tech-literate, but non-technical one. You can see Simon's notes for AI Engineers in his World's Fair Keynote.

Timestamps

* 00:00 Introduction and Guest Welcome
* 01:06 State of AI in 2025
* 01:43 Advancements in AI Models
* 03:59 Cost Efficiency in AI
* 06:16 Challenges and Competition in AI
* 17:15 AI Agents and Their Limitations
* 26:12 Multimodal AI and Future Prospects
* 35:29 Exploring Video Avatar Companies
* 36:24 AI Influencers and Their Future
* 37:12 Simplifying Content Creation with AI
* 38:30 The Importance of Credibility in AI
* 41:36 The Future of LLM User Interfaces
* 48:58 Local LLMs: A Growing Interest
* 01:07:22 AI Wearables: The Next Big Thing
* 01:10:16 Wrapping Up and Final Thoughts

Transcript

[00:00:00] Introduction and Guest Welcome

[00:00:00] Brian: Welcome to the first bonus episode of the TechMeme Ride Home for the year 2025. I'm your host as always, Brian McCullough. Listeners to the pod over the last year know that I have made a habit of quoting from Simon Willison when new stuff happens in AI, from his blog. Simon has become a go-to for many folks in terms of, you know, analyzing things, criticizing things in the AI space.

[00:00:33] Brian: I've wanted to talk to you for a long time, Simon. So thank you for coming on the show. No, it's a privilege to be here. And the person that made this connection happen is our friend Swyx, who has been on the show before, even going back to the Twitter Spaces days, but also an AI guru in his own right. Swyx, thanks for coming on the show also.

[00:00:54] swyx (2): Thanks. I'm happy to be on and have been a regular listener, so just happy to [00:01:00] contribute as well.

[00:01:00] Brian: And a good friend of the pod, as they say. Alright, let's go right into it.

[00:01:06] State of AI in 2025

[00:01:06] Brian: Simon, I'm going to do the most unfair, broad question first, so let's get it out of the way. The year 2025. Broadly, what is the state of AI as we begin this year? [00:01:20] Whatever you want to say, I don't want to lead the witness.

[00:01:22] Simon: Wow. So many things, right? I mean, the big thing is everything's got really good and fast and cheap. Like, that was the trend throughout all of 2024. The good models got so much cheaper, they got so much faster, they got multimodal, right? The image stuff isn't even a surprise anymore. [00:01:39] They're growing video, all of that kind of stuff. So that's all really exciting.

[00:01:43] Advancements in AI Models

[00:01:43] Simon: At the same time, they didn't get massively better than GPT-4, which was a bit of a surprise. So that's sort of one of the open questions: are we going to see a huge jump? But I kind of feel like that's a bit of a distraction, because GPT-4, but way cheaper, with much larger context lengths, and it [00:02:00] can do multimodal, is better, right? That's a better model, even if it's not.

[00:02:05] Brian: What people were expecting, or hoping, maybe not expecting is not the right word, but hoping, is that we would see another step change, right? Right.
From, like, GPT-2 to 3 to 4, we were expecting or hoping that maybe we were going to see the next evolution in that sort of, yeah.

[00:02:21] Brian: We

[00:02:21] Simon: did see that, but not in the way we expected. We thought the model was just going to get smarter, and instead we got massive drops in price. We got all of these new capabilities. You can talk to the things now, right? They can do simulated audio input, all of that kind of stuff. And so it's interesting to me that the models improved in all of these ways we weren't necessarily expecting. [00:02:43] I didn't know it would be able to do an impersonation of Santa Claus, like, you know, talk to it through my phone and show it what I was seeing, by the end of 2024. But yeah, we didn't get that GPT-5 step. And that's one of the big open questions: is that actually just around the corner, and we'll have a bunch of GPT-5 class models drop in the [00:03:00] next few months? Or is there a limit?

[00:03:03] Brian: If you were a betting man and wanted to put money on it, do you expect to see a phase change, step change in 2025?

[00:03:11] Simon: I don't particularly expect that for, like, the models getting much smarter. I think all of the trends we're seeing right now are going to keep on going, especially the inference-time compute, right? [00:03:21] The trick that o1 and o3 are doing, which means that you can solve harder problems, but they cost more and churn away for longer. I think that's going to happen, because that's already proven to work. I don't know. Maybe there will be a step change to a GPT-5 level, but honestly, I'd be completely happy if we got what we've got right now, but cheaper and faster, with more capabilities and longer contexts and so forth. That would be thrilling to me.

[00:03:46] Brian: Digging into what you've just said: one of the things that, by the way, I hope to link in the show notes, is Simon's year-end post about what things we learned about LLMs in 2024. Look for that in the show notes.

[00:03:59] Cost Efficiency in AI

[00:03:59] Brian: One of the things that you did say, that you alluded to even right there, was that in the last year, you felt like the GPT-4 barrier was broken, i.e., other models, even open source ones, are now regularly matching sort of the state of the art.

[00:04:13] Simon: Well, it's interesting, right? So the GPT-4 barrier: a year ago, the best available model was OpenAI's GPT-4, and nobody else had even come close to it. [00:04:22] And they'd been in the lead for like nine months, right? That thing came out in what, February, March of 2023. And for the rest of 2023, nobody else came close. And so at the start of last year, like a year ago, the big question was: why has nobody beaten them yet? Like, what do they know that the rest of the industry doesn't know? [00:04:40] And today, I've counted 18 organizations other than OpenAI who've put out a model which clearly beats that GPT-4 from a year ago. Like, maybe they're not better than GPT-4o, but that barrier got completely smashed. And yeah, a few of those I've run on my laptop, which is wild to me. [00:04:59] Simon: Like, [00:05:00] it was very, very wild. It felt very clear to me a year ago that if you want GPT-4, you need a rack of 40,000 GPUs just to run the thing. And that turned out not to be true.
Like, this is that big trend from last year of the models getting more efficient, cheaper to run, just as capable with smaller weights and so forth. [00:05:20] Simon: And I ran another GPT-4 class model on my laptop this morning, right? Microsoft's Phi-4 just came out. And if you look at the benchmarks, it's definitely up there with GPT-4o. It's probably not as good when you actually get into the vibes of the thing, but it's a 14 gigabyte download and I can run it on a MacBook Pro. [00:05:38] Simon: Like, who saw that coming?
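(For readers who want to try the local-model trick Simon describes: below is a minimal sketch using the llama-cpp-python library to run a quantized GGUF build on a laptop. The model filename is a hypothetical placeholder, not something from the episode; point it at whichever quantized checkpoint, e.g. a Phi-4 build, you have actually downloaded.)

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path is a hypothetical placeholder for a quantized GGUF file on disk.
from llama_cpp import Llama

llm = Llama(model_path="./phi-4-Q4_K_M.gguf", n_ctx=4096)  # hypothetical filename

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the transformer architecture in two sentences."}],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```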
The most exciting thing, like the close of the year, on Christmas day just a few weeks ago, was when DeepSeek dropped their DeepSeek v3 model on Hugging Face without even a readme file. It was just like a giant binary blob that I can't run on my laptop, it's too big. But in all of the benchmarks, it's now by far the best available [00:06:00] open weights model. [00:06:01] Simon: Like, it's beating the Meta Llamas and so forth. And that was trained for five and a half million dollars, which is a tenth of the price that people thought it costs to train these things. So everything's trending smaller and faster and more efficient.

[00:06:15] Brian: Well, okay.

[00:06:16] Challenges and Competition in AI

[00:06:16] Brian: I kind of was going to get to that later, but let's combine this with what I was going to ask you next, which is, you know, you're also talking in the piece about the LLM prices crashing, which I've even seen in projects that I'm working on. But explain that to a general audience, because we hear all the time that LLMs are eye-wateringly expensive to run. What you're suggesting, and we'll come back to the cheap Chinese LLM, but first of all, for the end user, what you're suggesting is that we're starting to see the cost come down sort of in the traditional technology way of costs coming down over time.

[00:06:49] Simon: Yes, but very aggressively. [00:06:51] I mean, my favorite example here is if you look at GPT-3, OpenAI's GPT-3, which was the best available model in [00:07:00] 2022 and through most of 2023: the models that we have today, the OpenAI models, are a hundred times cheaper. So there was a 100x drop in price for OpenAI from their best available model, like two and a half years ago, to today.

[00:07:13] Simon: And

[00:07:14] Brian: just to be clear, not to train the model, but for the use of tokens and things. Exactly,

[00:07:20] Simon: for running prompts through them. And then when you look at the really top-tier model providers right now, I think they are OpenAI, Anthropic, Google, and Meta. And there are a bunch of others that I could list there as well. Mistral are very good. The DeepSeek and Qwen models have gotten great. There's a whole bunch of providers serving really good models. But even if you just look at the sort of big brand name providers, they all offer models now that are a fraction of the price of the models we were using last year. [00:07:49] Simon: I think I've got some numbers that I threw into my blog entry here. Yeah, like Gemini 1.5 Flash, that's Google's fast high-quality model: [00:08:00] how much is that? It's $0.075 per million tokens. Like, these numbers are getting, so we just do cents per million now.

[00:08:09] swyx (2): Cents per million.

[00:08:10] Simon: Cents per million makes a lot more sense. [00:08:12] Yeah, they have one model, Gemini 1.5 Flash 8B, the absolute cheapest of the Google models, that is 27 times cheaper than GPT-3.5 Turbo was a year ago. That's it. And GPT-3.5 Turbo, that was the cheap model, right? Now we've got something 27 times cheaper, and this Google one can do image recognition, it can do million-token context, all of those tricks. [00:08:36] Simon: But it really is startling how inexpensive some of this stuff has got.
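(To make those per-token prices concrete, here's a back-of-envelope sketch. The $0.075-per-million input price is the Gemini 1.5 Flash figure quoted above; the output price and token counts are illustrative assumptions, not published rates.)

```python
# Back-of-envelope cost of a single prompt at per-million-token pricing.
# Input price is the $0.075/M figure quoted above; the output price and
# token counts are illustrative assumptions only.
def prompt_cost_usd(input_tokens: int, output_tokens: int,
                    usd_per_m_input: float, usd_per_m_output: float) -> float:
    return (input_tokens * usd_per_m_input + output_tokens * usd_per_m_output) / 1_000_000

# A hefty 10,000-token prompt with a 500-token answer:
cost = prompt_cost_usd(10_000, 500, usd_per_m_input=0.075, usd_per_m_output=0.30)
print(f"${cost:.6f}")  # about $0.0009, i.e. roughly a tenth of a cent per call
```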
[00:08:41] Brian: Now, are we assuming that this happening is directly the result of competition? Because again, you know, OpenAI, and probably they're doing this for their own almost political reasons, strategic reasons, keeps saying, we're losing money on everything, even the $200 plan. [00:08:56] Brian: So they probably wouldn't, the prices wouldn't be [00:09:00] coming down, if there wasn't intense competition in this space.

[00:09:04] Simon: The competition is absolutely part of it, but I have it on good authority from sources I trust that Google Gemini is not operating at a loss. Like, the amount of electricity to run a prompt is less than they charge you. [00:09:16] And the same thing for Amazon Nova. Like, somebody found an Amazon executive and got them to say, yeah, we're not losing money on this. I don't know about Anthropic and OpenAI, but clearly that demonstrates it is possible to run these things at these ludicrously low prices and still not be running at a loss, if you discount the army of PhDs and the training costs and all of that kind of stuff.

[00:09:36] Brian: One more for me before I let Swyx jump in here. To come back to DeepSeek and this idea that you could train, you know, a cutting-edge model for 6 million: I was saying on the show, like six months ago, that if we are getting to the point where each new model would cost a billion, ten billion, a hundred billion to train, at some point only nation states would be able to train the new models. Do you [00:10:00] expect what DeepSeek and maybe others are proving to sort of blow that up? Or is there like some sort of a parallel track here that maybe I'm not technical enough to understand the difference? [00:10:11] Brian: Are the models going to go, you know, up to a hundred billion dollars, or can we get them down, sort of like DeepSeek has proven?

[00:10:18] Simon: So I'm the wrong person to answer that, because I don't work in the lab training these models. So I can give you my completely uninformed opinion, which is, I felt like the DeepSeek thing, that was a bombshell. That was an absolute bombshell when they came out and said, hey, look, we've trained one of the best available models and it cost us five and a half million dollars to do it. And one of the reasons it's so efficient is that we put all of these export controls in to stop Chinese companies from buying giant numbers of GPUs. [00:10:44] So they were forced to go as efficient as possible. And yet they've demonstrated that that's possible to do. I think it does completely tear apart this mental model we had before, that the training runs just keep on getting more and more expensive and the number of [00:11:00] organizations that can afford to run these training runs keeps on shrinking. [00:11:03] That's been blown out of the water. So yeah, again, this was our Christmas gift. This was the thing they dropped on Christmas day. Yeah, it makes me really optimistic: it feels like there was so much low-hanging fruit in terms of the efficiency of both inference and training, and we spent a whole bunch of last year exploring that and getting results from it. [00:11:22] Simon: I think there's probably a lot left. I would not be surprised to see even better models trained spending even less money over the next six months.

[00:11:31] swyx (2): Yeah. So I think there's an unspoken angle here on what exactly the Chinese labs are trying to do, because DeepSeek made a lot of noise around the fact that they trained their model for six million dollars, and nobody quite believes them. Like, it's very, very rare for a lab to trumpet the fact that they're doing it for so cheap. They're not trying to get anyone to buy them. So why [00:12:00] are they doing this? They make it very, very obvious. [00:12:05] swyx (2): DeepSeek is about 150 employees. It's an order of magnitude smaller than at least Anthropic, and maybe more so for OpenAI. And so what's the end game here? Are they just trying to show that the Chinese are better than us?

[00:12:21] Simon: So DeepSeek, it's the arm of a hedge fund, it's a quant fund, right? It's an algorithmic quant trading thing. So I would love to get more insight into how that organization works. My assumption from what I've seen is it looks like they're basically just flexing. They're like, hey, look at how utterly brilliant we are with this amazing thing that we've done. And it's working, right? [00:12:43] But is that it? Is this just their kind of "this is why our company is so amazing, look at this thing that we've done," or? I don't know. I'd love to get some insight from within that industry as to how that's all playing out.

[00:12:57] swyx (2): The prevailing theory among the Local Llama [00:13:00] crew and the Twitter crew that I index for my newsletter is that there is some amount of copying going on. [00:13:06] swyx (2): It's like Sam Altman, you know, tweeting about how they're being copied. And then also there are other OpenAI employees that have said stuff that is similar: that DeepSeek's rate of progress is how U.S. intelligence estimates the number of foreign spies embedded in top labs. [00:13:22] swyx (2): Because a lot of these ideas do spread around, but they surprisingly have a very high density of them in the DeepSeek v3 technical report. So it's interesting. We don't know how many tokens. I think that, you know, people have run analysis on how often DeepSeek thinks it is Claude or thinks it is OpenAI's GPT-4. [00:13:40] swyx (2): And we don't know. I think for me, we basically will never know as external commentators. I think what's interesting is, where does this go? Is there a logical floor or bottom? By my estimations, from the start of last year to the end of last year, cost went down by a thousand X for the same amount of ELO, for [00:14:00] GPT-4 intelligence. [00:14:02] swyx (2): Do they go down a thousand X this year?

[00:14:04] Simon: That's a fascinating question.
Yeah.

[00:14:06] swyx (2): Is there a Moore's law going on, or did we just get a one-off benefit last year for some weird reason?

[00:14:14] Simon: My uninformed hunch is low-hanging fruit. I feel like up until a year ago, people hadn't been focusing on efficiency at all. You know, it was all about, what can we get these weird-shaped things to do? [00:14:24] And now, once we've sort of hit that, okay, we know that we can get them to do what GPT-4 can do, thousands of researchers around the world all focus on, okay, how do we make this more efficient? How do we strip out all of the weights that have stuff in that doesn't really matter? All of that kind of thing. So yeah, maybe that was it. Maybe 2024 was a freak year of all of the low-hanging fruit coming out at once, and we'll actually see a reduction in that rate of improvement in terms of efficiency. I wonder, I mean, I think we'll know for sure in about three months' time if that trend's going to continue or not.

[00:14:58] swyx (2): I agree. You know, I [00:15:00] think the other thing that you mentioned, that DeepSeek v3 was the gift that was given from DeepSeek over Christmas, but I feel like the other thing that might be underrated was DeepSeek R1,

[00:15:11] Speaker 4: which is

[00:15:13] swyx (2): a reasoning model you can run on your laptop. And I think that's something that a lot of people are looking ahead to this year.

[00:15:18] swyx (2): Oh, did they

[00:15:18] Simon: release the weights for that one?

[00:15:20] swyx (2): Yeah.

[00:15:21] Simon: Oh my goodness, I missed that. I've been playing with Qwen. So the other great, the other big Chinese AI app is Alibaba's Qwen. Actually, yeah, sorry, R1 is API-available. Yeah. Exactly. Qwen, that's really cool. So Alibaba's Qwen have released two reasoning models that I've run on my laptop. [00:15:38] Simon: The first one was QwQ, and then the second one was QVQ, because the second one's a vision model, so you can give it vision puzzles and a prompt. These things, they are so much fun to run, because they think out loud. The OpenAI o1 sort of hides its thinking process; the Qwen ones don't. [00:15:59] Simon: They just churn away. And so you'll give it a problem and it will output literally dozens of paragraphs of text about how it's thinking. My favorite thing that happened with QwQ is I asked it to draw me a pelican on a bicycle in SVG. That's like my standard stupid prompt. And for some reason it thought in Chinese. [00:16:18] It spat out a whole bunch of Chinese text onto my terminal on my laptop, and then at the end it gave me quite a good sort of artistic pelican on a bicycle. And I ran it all through Google Translate, and yeah, it was contemplating the nature of SVG files as a starting point. And the fact that my laptop can think in Chinese now is so delightful. It's so much fun watching it do that.

[00:16:43] swyx (2): Yeah, I think Andrej Karpathy was saying, you know, we know that we have achieved proper reasoning inside of these models when they stop thinking in English, and perhaps the best form of thought is in Chinese.
But yeah, for listeners who don't know Simon's blog: whenever a new model comes out, I don't know how you do it, but you're always the first to run Pelican Bench on these models.

[00:17:02] swyx (2): I just did it for Phi-4.

[00:17:05] Simon: Yeah.

[00:17:07] swyx (2): So I really appreciate that. You should check it out. These are not theoretical. Simon's blog actually shows them.

[00:17:12] Brian: Let me put on the investor hat for a second.

[00:17:15] AI Agents and Their Limitations

[00:17:15] Brian: Because from the investor side of things, a lot of the VCs that I know are really hot on agents, and this is the year of agents, but last year was supposed to be the year of agents as well. Lots of money flowing towards agentic startups. [00:17:32] Brian: But in your piece that, again, we're hopefully going to have linked in the show notes, you sort of suggest there's a fundamental flaw in AI agents as they exist right now. Let me quote you, and then I'd love to dive into this. You said: "I remain skeptical as to their ability, based once again on the challenge of gullibility. [00:17:49] LLMs believe anything you tell them. Any systems that attempt to make meaningful decisions on your behalf will run into the same roadblock. How good is a travel agent, or a digital assistant, or even a research tool, if it [00:18:00] can't distinguish truth from fiction?" So, essentially, what you're suggesting is that the state of the art now that allows agents is still that sort of 90 percent problem, the edge problem, getting there. Or is there a deeper flaw? [00:18:14] Brian: What are you saying there?

[00:18:16] Simon: So this is the fundamental challenge here, and honestly my frustration with agents is mainly around definitions. Like, if you ask anyone who says they're working on agents to define agents, you will get a subtly different definition from each person, but everyone always assumes that their definition is the one true one that everyone else understands. So I feel like in a lot of these agent conversations, people are talking past each other, because one person's talking about the sort of travel-agent idea of something that books things on your behalf, somebody else is talking about LLMs with tools running in a loop with a cron job somewhere, and all of these different things. You ask academics and they'll laugh at you, because they've been debating what agents mean for over 30 years at this point. It's like this long-running, almost sort of in-joke in that community. [00:18:57] But if we assume, for the purpose of this conversation, that an [00:19:00] agent is something which you can give a job and it goes off and does that thing for you, like booking travel or things like that, the fundamental challenge is the reliability thing, which comes from this gullibility problem. [00:19:12] Simon: And a lot of my interest in this originally came from when I was thinking about prompt injections as a source of this form of attack against LLM systems, where you deliberately lay traps out there for this LLM to stumble across.

[00:19:24] Brian: And, which I should say, you have been banging this drum, and no one's gotten very far, at least on solving this, that I'm aware of, right? [00:19:31] Brian: Like, that's still an open problem, two years on.

[00:19:33] Simon: Yeah. Right.
We've been talking about this problem, and a great illustration of this was Claude. So Anthropic released Claude Computer Use a few months ago. Fantastic demo. You could fire up a Docker container, and you could literally tell it to do something and watch it open a web browser, navigate to a webpage, click around and so forth. [00:19:51] Simon: Really, really, really interesting and fun to play with. And then, um, one of the first demos somebody tried was: what if you give it a web page that says "download and run this executable"? And it did, and the executable was malware that added it to a botnet. So the very first, most obvious dumb trick that you could play on this thing just worked, right? [00:20:10] Simon: So that's obviously a really big problem. If I'm going to send something out to book travel on my behalf, I mean, it's hard enough for me to figure out which airlines are trying to scam me and which ones aren't. Do I really trust a language model that believes the literal truth of anything that's presented to it to go out and do those things?

[00:20:29] swyx (2): Yeah, I definitely think it's interesting to see Anthropic doing this, because they used to be the safety arm of OpenAI that split out and said, you know, we're worried about letting this thing out in the wild, and here they are enabling computer use for agents. It feels like things have merged. [00:20:49] swyx (2): You know, I'm also fairly skeptical about, you know, this always being "the year of Linux on the desktop." And this is the equivalent: "the year of agents" is something people [00:21:00] are not predicting so much as wishfully thinking and hoping and praying for, for their companies and agents to work. [00:21:05] swyx (2): But I feel like things are coming along a little bit. To me, it's kind of like self-driving. I remember in 2014 saying that self-driving was just around the corner. And I mean, it kind of is, you know, like in the Bay Area. You

[00:21:17] Simon: get in a Waymo and you're like, oh, this works. Yeah, but it's a slow

[00:21:21] swyx (2): cook. It's a slow cook over the next 10 years. We're going to hammer out these things, and the cynical people can just point to all the flaws, but like, there are measurable, concrete progress steps that are being made by these builders.

[00:21:33] Simon: There is one form of agent that I believe in. I mostly believe in the research-assistant form of agents. [00:21:39] The thing where you've got a difficult problem, and, I'm on the beta for Google Gemini 1.5 Pro with Deep Research, I think it's called. These names, these names. Right. But I've been using that. It's good, right? You can give it a difficult problem, and it tells you, okay, I'm going to look at 56 different websites, [00:22:00] and it goes away and it dumps everything into its context, and it comes up with a report for you. [00:22:04] Simon: And it won't work against adversarial websites, right? If there are websites with deliberate lies in them, it might well get caught out. Most things don't have that as a problem. And so I've had some answers from that which were genuinely really valuable to me.
And that feels to me like, I can see how, given existing LLM tech, especially with Google Gemini with its, like, million-token contexts, and Google with their crawl of the entire web, they've got search, they've got a cache of every page and so forth. [00:22:35] Simon: That makes sense to me. And what they've got right now, I don't think it's as good as it can be, obviously, but it's a real, useful thing, which they're going to start rolling out. So, you know, Perplexity have been building the same thing for a couple of years. That, I believe in. You know, if you tell me that you're going to have an agent that's a research assistant agent, great. The coding agents: I mean, ChatGPT Code Interpreter, nearly two years [00:23:00] ago, that thing started writing Python code, executing the code, getting errors, rewriting it to fix the errors. That pattern obviously works. That works really, really well. So, yeah, coding agents that do that sort of error-message loop thing, those are proven to work. And they're going to keep on getting better, and that's going to be great. The research assistant agents are just beginning to get there. The things I'm critical of are the ones where you trust this thing to go out and act autonomously on your behalf and make decisions on your behalf, especially involving spending money. [00:23:31] I don't see that working for a very long time. That feels to me like an AGI-level problem.

[00:23:37] swyx (2): It's funny, because I think Stripe actually released an agent toolkit, which is one of the things I featured, that is trying to enable these agents each to have a wallet that they can go and spend from. Basically, it's a virtual card. [00:23:49] swyx (2): It's not that difficult with modern infrastructure. You can

[00:23:51] Simon: stick a $50 cap on it, then at least, you know, you can't lose more than $50.

[00:23:56] Brian: You know, I don't know if either of you know Rafat Ali. [00:24:00] He runs Skift, which is a travel news vertical. And he constantly laughs at the fact that every agent pitch is "we're gonna get rid of booking a plane flight for you," you know? [00:24:11] Brian: And I would point out that, like, historically, when the web started, the first thing everyone talked about is: you can go online and book a trip, right? So it's funny how, for each generation of technological advance, the thing they always want to kill is the travel agent. And now they want to kill the webpage travel agent.

[00:24:29] Simon: Like, I use Google flight search. It's great, right? If you gave me an agent to do that for me, it would save me, I mean, maybe 15 seconds of typing in my things, but I still want to see what my options are and go, yeah, I'm not flying on that airline, no matter how cheap they are.

[00:24:44] swyx (2): Yeah. For listeners, I think, you know, both of you are pretty positive on NotebookLM. And you know, we actually interviewed the NotebookLM creators, and there are actually two internal agents going on internally. The reason it takes so long is because they're running an agent loop [00:25:00] inside that is fairly autonomous, which is kind of interesting.

[00:25:01] swyx (2): For one,

[00:25:02] Simon: for a definition of agent loop. You picked that particularly well. For one definition.
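(The "write code, run it, feed the error back" loop Simon endorses above is simple enough to sketch. In this hedged version, call_llm is a hypothetical stand-in for whatever chat-completions client you use; the loop just retries until the generated script exits cleanly.)

```python
# Minimal sketch of the error-correction coding-agent loop described above.
# `call_llm` is a hypothetical placeholder: wire it to any chat-completions API.
import subprocess
import sys
import tempfile

def call_llm(prompt: str) -> str:
    """Hypothetical stub; replace with a real model call that returns Python source."""
    raise NotImplementedError

def run_script(code: str) -> subprocess.CompletedProcess:
    # Write the generated code to a temp file and execute it in a subprocess.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    return subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=60)

def solve(task: str, max_attempts: int = 4) -> str:
    code = call_llm(f"Write a Python script that {task}. Reply with only the code.")
    for _ in range(max_attempts):
        result = run_script(code)
        if result.returncode == 0:
            return result.stdout  # success: the script ran without errors
        # Feed the traceback back to the model and ask for a corrected version.
        code = call_llm(
            f"This script failed.\n\nCode:\n{code}\n\nstderr:\n{result.stderr}\n"
            "Return a fixed version, code only."
        )
    raise RuntimeError("No working script after max_attempts")
```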
And you're talking about the podcast side of this, right?

[00:25:07] swyx (2): Yeah, the podcast side of things. There's going to be a new version coming out that we'll be featuring at our conference.

[00:25:14] Simon: That one's fascinating to me. Like, NotebookLM, I think it's two products, right? On the one hand, it's actually a very good RAG product, right? You dump a bunch of things in, you can run searches; it does a good job of that. And then they added the podcast thing. It's a bit of a total gimmick, right? [00:25:30] But that gimmick got them attention, because they had a great product that nobody paid any attention to at all. And then you add the unfeasibly good voice synthesis of the podcast. Like, it's just, it's the lesson.

[00:25:43] Brian: It's the lesson of Midjourney and stuff like that. If you can create something that people can post on socials, you don't have to lift a finger again to do any marketing for what you're doing.

[00:25:53] Brian: Let me dig into NotebookLM just for a second, as a podcaster. As a [00:26:00] gimmick, it makes sense, and then obviously, you know, you dig into it, and it sort of has problems around the edges. It does the thing that all sort of LLMs kind of do, where it's like, oh, we want to wrap up with a conclusion.

[00:26:12] Multimodal AI and Future Prospects

[00:26:12] Brian: I always call that the eighth-grade book report problem, where it has to have an intro and then, you know. But that's sort of a thing where, I think you spoke about this again in your piece at the year end, about how things are going multimodal in ways you didn't expect, like, you know, vision and especially audio, I think. So that's another thing where, at least over the last year, there's been progress made that maybe you didn't think was coming as quick as it came.

[00:26:43] Simon: I don't know. I mean, a year ago, we had one really good vision model. We had GPT-4 Vision, which was very impressive. And Google Gemini had just dropped Gemini 1.0, which had vision, but nobody had really played with it yet. Like, Google hadn't, people weren't taking Gemini [00:27:00] seriously at that point. I feel like it was 1.5 Pro when it became apparent that actually they got over their hump and they were building really good models. And to be honest, the video models are mostly still using the same trick: the thing where you divide the video up into one image per second and you dump that all into the context. [00:27:16] So maybe it shouldn't have been so surprising to us that long-context models plus vision meant that video was starting to be solved. Of course, it isn't quite: what you really want with video is to be able to do the audio and the images at the same time. And I think the models are beginning to do that now. [00:27:33] Simon: Like, originally, Gemini 1.5 Pro ignored the audio. It just did the one-frame-per-second video trick. As far as I can tell, the most recent ones are actually doing pure multimodal. But the things that opens up are just extraordinary. Like, the ChatGPT iPhone app feature that they shipped as one of their 12 Days of OpenAI: I really can be having a conversation and just turn on my video camera and go, hey, what kind of tree is [00:28:00] this? [00:28:00] Simon: And so forth. And it works.
And for all I know, that's just snapping a picture once a second and feeding it into the model. The things that you can do with that as an end user are extraordinary. Like, I don't think most people have cottoned on to the fact that you can now stream video directly into a model, because it's only a few weeks old.

[00:28:22] Simon: Wow. That's a big boost in terms of what kinds of things you can do with this stuff. Yeah.

[00:28:30] swyx (2): For people who are not that close: I think Gemini Flash's free tier allows you to do something like capture a photo, one photo every second or a minute, and leave it on 24/7, and you can prompt it to do whatever. [00:28:45] swyx (2): And so you can effectively have your own camera app or monitoring app that you just prompt, and it detects when things change, it detects, you know, alerts or anything like that, or describes your day. And the fact that this is free, I think, [00:29:00] also leads into the previous point about the prices having come down a lot.

[00:29:05] Simon: And even if you're paying for this stuff: a thing that I put in my blog entry is, I ran a calculation on what it would cost to process 68,000 photographs in my photo collection, and for each one just generate a caption. Using Gemini 1.5 Flash 8B, it would cost me $1.68 to process 68,000 images, which is, I mean, that doesn't make sense. [00:29:28] Simon: None of that makes sense. Like, it's about one four-hundredth of a cent per image to generate captions now. So you can see why feeding in a day's worth of video just isn't even very expensive to process.
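(Simon's $1.68 figure is easy to sanity-check. A rough sketch, assuming Gemini 1.5 Flash 8B's published prices at the time, roughly $0.0375 per million input tokens and $0.15 per million output tokens, plus assumed per-image token counts; all four numbers are assumptions, not quotes from the episode.)

```python
# Rough sanity check of the "68,000 captions for $1.68" figure.
# Prices and token counts are assumptions: ~$0.0375/M input, ~$0.15/M output,
# ~258 image tokens in and ~100 caption tokens out per photo.
N_IMAGES = 68_000
TOKENS_IN_PER_IMAGE = 258
TOKENS_OUT_PER_IMAGE = 100
USD_PER_M_IN = 0.0375
USD_PER_M_OUT = 0.15

input_cost = N_IMAGES * TOKENS_IN_PER_IMAGE * USD_PER_M_IN / 1_000_000
output_cost = N_IMAGES * TOKENS_OUT_PER_IMAGE * USD_PER_M_OUT / 1_000_000
print(f"total ~= ${input_cost + output_cost:.2f}")  # comes out to about $1.68
```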
[00:29:40] swyx (2): Yeah, I'll tell you what is expensive. It's the other direction. So here we're talking about consuming video, and this year we also had a lot of progress there. Probably one of the most anticipated launches of the year was Sora. We actually got Sora. And it was less exciting.

[00:29:55] Simon: We did, and then Veo 2, Google's Sora, came out like three days later and upstaged it. Sora was exciting until Veo 2 landed, which was just better.

[00:30:05] swyx (2): In general, I feel the media, or the social media, has been very unfair to Sora, because what was released to the world, generally available, was Sora Lite. It's the distilled version of Sora, right?

[00:30:16] Simon: I did not realize that.

[00:30:18] swyx (2): So you're absolutely comparing the most cherry-picked version of Veo 2, the one that they published on the marketing page, to the most embarrassing version of Sora. So of course it's going to look bad.

[00:30:27] Simon: Well, I got access to Veo 2. I'm in the Veo 2 beta, and I've been poking around with it and getting it to generate pelicans on bicycles and stuff. I would absolutely believe that Veo 2 is actually better. So is full-fat Sora coming soon? Do you know when we get to play with that one?

[00:30:43] swyx (2): No one's mentioned anything. I think basically the strategy is: let people play around with Sora Lite and gather info there, but keep developing Sora with the Hollywood studios. That's what they actually care about. The rest of us don't really know what to do with the video anyway.

[00:30:59] Simon: Right. I mean, that's my thing. I realized that for generative images and video (images we've had for a few years now), I don't feel like they've broken out into the talented artist community yet. Lots of people are having fun with them and producing stuff that's kind of cool to look at. But, you know, that movie Everything Everywhere All at Once, right? Won a ton of Oscars, utterly amazing film. The VFX team for that were five people, some of whom were watching YouTube videos to figure out what to do. My big question for Sora and Midjourney and stuff is: what happens when a creative team like that starts using these tools? I want the creative geniuses behind Everything Everywhere All at Once. What are they going to be able to do with this stuff in a few years' time? Because that's really exciting to me. That's where you take artists who are at the very peak of their game, give them these new capabilities, and see what they can do with them.

[00:31:52] swyx (2): I should... I know a little bit here, so I should mention that that team actually used RunwayML.

[00:31:57] Simon: Yeah.

[00:31:59] swyx (2): I don't know how much, so it's possible to overstate this, but there are people integrating generated video within their workflow, even pre-Sora.

[00:32:09] Brian: Right, because it's not the thing where it's like, okay, tomorrow we'll be able to do a full two-hour movie that you prompt with three sentences. It's like the very first era of video effects in film: if you can get that three-second clip, if you can get that 20-second thing that they did in The Matrix that blew everyone's minds and took a million dollars or whatever to do. It's the little bits and pieces that they can fill in now that it's probably already there for.

[00:32:34] swyx (2): Yeah. I think actually having a layered view of what assets people need and letting AI fill in the low-value assets, right, like the background video, the background music, and sometimes the sound effects, that maybe makes it more palatable, and maybe also changes the way that you evaluate the stuff that's coming out. Because people tend to, on social media, emphasize foreground stuff, main-character stuff. So you really care about consistency, and you really are bothered when, for example, Sora botches the generation of a gymnast doing flips, which is horrible. But for background crowds, who cares?

[00:33:18] Brian: And by the way, again, I was a film major way, way back in the day; that's how it started. Like things like Braveheart, where they filmed 10 people on a field, and then the computer could turn it into 1,000 people on a field. That's always been the way: it's around the margins and in the background that it first comes in.

[00:33:36] Simon: The Lord of the Rings movies were over 20 years ago, and they have those giant battle sequences, which were very early. I mean, you could almost call it a generative AI approach, right? They were using very sophisticated algorithms to model out those different battles and all of that kind of stuff. Yeah, I know very little.
I know basically nothing about film production, so I try not to commentate on it. But I am fascinated to see what happens when these tools start being used by the real people at the top of their game.

[00:34:05] swyx (2): I would say there's a cultural war being fought here more than a technology war. Most of the Hollywood people are against any form of AI anyway, so they're busy fighting that battle instead of thinking about how to adopt it, and it's very fringe. I participated here in San Francisco in one generative AI video creative hackathon, where the AI-positive artists actually met with technologists like myself, and we collaborated together to build short films. That was really nice, and I think I'll be hosting some of those in my events going forward. One thing that I want to leave people with: this is a recap of last year, but sometimes it's useful to walk away as well with what we can expect in the future. I don't know if you've got anything. I would also call out that the Chinese models here have made a lot of progress. Hailuo and Kling and who knows who else in the video arena are also making a lot of progress. I think China is surprisingly ahead, with regard to open weights at least, but also with specific forms of video generation.

[00:35:12] Simon: Wouldn't it be interesting if a film industry sprang up in a country that we don't normally think of as having a really strong film industry, that was using these tools? That would be a fascinating sort of angle on this.

[00:35:25] swyx (2): Agreed. I... oh, sorry. Go ahead.

[00:35:29] Exploring Video Avatar Companies

[00:35:29] swyx (2): Just to put it on people's radar as well: HeyGen. There's a category of video avatar companies that don't specialize in general video; they only do talking heads, let's just say. And HeyGen does this very well.

[00:35:45] Brian: Swyx, you know that that's what I've been using, right? So, if you see some of my recent YouTube videos and things like that: the beauty part of the HeyGen thing is, I don't want to use the robot voice, so I record the MP3 file on my computer, and then I put that into HeyGen with the avatar that I've trained it on, and all it does is the lip sync. So it's not 100 percent past the uncanny valley, but it's good enough that if you weren't looking for it, it's just me sitting there doing one of my clips from the show. And, yeah, so, by the way: HeyGen. Shout out to them.

[00:36:24] AI Influencers and Their Future

[00:36:24] swyx (2): So, in terms of the look ahead, reviewing 2024 and looking at trends for 2025, I would basically call this out: Meta tried to introduce AI influencers and failed horribly because they were just bad at it.
But at some point there will be more and more, basically, AI influencers: not in the way that Simon is one, but in the way that they are not human.

[00:36:50] Simon: The few of those that have done well, I always feel like they're doing well because it's a gimmick, right? It's novel and fun, like the AI Seinfeld thing from last year, the Twitch stream. If you're the only one, or one of just a few doing that, you'll attract an audience because it's an interesting new thing. But I just don't know if that's going to be sustainable longer term or not.

[00:37:12] Simplifying Content Creation with AI

[00:37:12] Brian: I'm going to tell you, because I've had discussions (I can't name the companies or whatever), but think about the workflow for this. Now, we all know that on TikTok and Instagram, holding up a phone to your face and doing an in-my-car video, or a walk-and-talk, that's very common. But also, if you want to do a professional sort of talking-head video, you still have to sit in front of a camera, you still have to do the lighting, you still have to do the video editing. Versus: if you can just record what I'm saying right now, the last 30 seconds, clip that out as an MP3, and you have a good enough avatar, then you can put that avatar in front of Times Square, on a beach, or whatever. So, again, for creators, the reason I think, Simon, we're on the verge of something: it's not that we're going to have AI avatars take over, it'll be one of those things where it takes another piece of the workflow out and simplifies it.

[00:38:07] Simon: I'm all for that. I always love this stuff. I like tools, tools that help human beings do more, do more ambitious things. That's what excites me about this entire field.

[00:38:17] swyx (2): Yeah. We're looking into basically creating one for my podcast. We have this guy Charlie; he's Australian. He's not real, but he opens every show, and we're going to have him present all the shorts.

[00:38:29] Simon: Yeah, go ahead.

[00:38:30] The Importance of Credibility in AI

[00:38:30] Simon: The thing that I keep coming back to is this idea of credibility. In a world that is full of AI-generated everything and so forth, it becomes even more important that people find the sources of information that they trust, find people and sources that are credible. And I feel like that's the one thing that LLMs and AI can never have: credibility, right? ChatGPT can never stake its reputation on telling you something useful and interesting, because that means nothing, right? It's a matrix multiplication. It depends on who prompted it and so forth. So I'm always, and this is when I'm blogging as well, I'm always looking for: okay, who are the reliable people who will tell me useful, interesting information, who aren't just going to tell me whatever somebody's paying them to say, who aren't going to type a one-sentence prompt into an LLM, spit out an essay, and stick it online? To me, earning that credibility is really important. That's why a lot of my ethics around the way that I publish are based on the idea that I want people to trust me.
I want to do things that gain credibility in people's eyes, so they will come to me for information as a trustworthy source. And it's the same for the sources that I'm consulting as well. So that's something I've been thinking a lot about, that sort of credibility focus, for a while now.

[00:39:40] swyx (2): Yeah, you can layer or structure credibility, or decompose it. So one thing I would put in front of you (I'm not saying that you should agree with this or accept this at all) is that you can use AI to generate different variations, and then you, as the final last-mile person, pick the final output and put your stamp of credibility behind it. Everything's human-reviewed instead of human-origin.

[00:40:04] Simon: Yeah, if you publish something, you need to be able to stand behind it. You need to say: I will put my name to this. I will attach my credibility to this thing. And if you're willing to do that, then that's great.

[00:40:16] swyx (2): For creators, this is huge, because there's a fundamental asymmetry between starting with a blank slate versus choosing from five different variations.

[00:40:23] Brian: Right. And also, the key thing that you just said: if everything that I do, if all of the words were generated by an LLM, if the voice is generated by an LLM, if the video is also generated by the LLM, then I haven't done anything, right? But if on one or two of those you take a shortcut, and it's still something I'm willing to sign off on... I feel like that's where people are coming around to: this is maybe acceptable, sort of.

[00:40:53] Simon: This is where I've been pushing the definition. I love the term slop. I've been pushing the definition of slop as AI-generated content that is both unrequested and unreviewed, and the unreviewed thing is really important. That's the thing that elevates something from slop to not-slop: a human being has reviewed it and said, you know what, this is actually worth other people's time. And again, I'm willing to attach my credibility to it and say, hey, this is worthwhile.

[00:41:16] Brian: It's the curatorial and editorial part of it. No matter what the tools are to do shortcuts, to, as swyx is saying, choose between different edits or different cuts, in the end there's a curatorial or editorial mind behind it.
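(A minimal sketch of the workflow swyx describes, generate variations and let a human pick one to sign off on, assuming the official OpenAI Python SDK; the model name, prompt, and draft count are placeholders.)

```python
# Sketch: generate several candidate drafts, then let a human pick the one
# they are willing to put their name to ("human reviewed, not human origin").
from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Draft a two-sentence episode teaser."}],
    n=3,  # ask for three independent variations in one call
)

for i, choice in enumerate(completion.choices):
    print(f"--- draft {i} ---\n{choice.message.content}\n")

picked = int(input("Which draft do you sign off on? "))
print("Publishing:", completion.choices[picked].message.content)
```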
[00:41:32] Brian: Let me wedge this in before we start to close.

[00:41:36] The Future of LLM User Interfaces

[00:41:36] Brian: One of the things, coming back to your year-end piece, that has been something I've been banging the drum about, is when you're talking about LLMs getting harder to use. You said most users are thrown in at the deep end: the default LLM chat UI is like taking brand-new computer users, dropping them into a Linux terminal, and expecting them to figure it all out. I mean, it's literally going back to the command line. The command line was defeated by the GUI. And this is what I've been banging the drum about: what we have now cannot be the end result. Do you see any hints or seeds of a GUI moment for LLM interfaces?

[00:42:17] Simon: I mean, it has to happen. It absolutely has to happen. The usability of these things is turning into a bit of a crisis, and we are at least seeing some really interesting innovation in little directions. Like OpenAI's ChatGPT Canvas thing that they just launched: that is at least going somewhere a little more interesting than just chats and responses. They're exploring that space where you're collaborating with an LLM, you're both working on the same document. That makes a lot of sense to me; that feels really smart. One of the best things is still the one where they had a drawing UI: you draw an interface and click a button, and tldraw would then make it a real thing. That was spectacular, absolutely spectacular, an alternative vision of how you'd interact with these models. So I feel like there is so much scope for innovation there, and it is beginning to happen. I feel like most people do understand that we need to do better in terms of interfaces that both help explain what's going on and give people better tools for working with models.

[00:43:25] Brian: I was going to say, I want to dig a little deeper into this. Think of the conceptual idea behind the GUI: instead of typing "open word.exe" into a command line, you click an icon, right? So that's abstracting away the programming stuff, such that a child can tap on an iPad and make a program open. The problem, it seems to me, right now with how we're interacting with LLMs is it's sort of like a dumb robot: you poke it and it goes over here, but no, I want it to go over here, so you poke it this way, and you can't get it exactly right. What can we abstract away from what's currently going on that makes it more fine-tuned and easier to get more precise? You see what I'm saying?

[00:44:13] Simon: Yes. And this is the other trend that I've been following from the last year, which I think is super interesting: the prompt-driven UI development thing. Basically, this is the pattern where Claude Artifacts was the first thing to do it really well. You type in a prompt and it goes: oh, I should answer that by writing a custom HTML and JavaScript application for you that does a certain thing. And since then it turns out this is easy, right? Every decent LLM can produce HTML and JavaScript that does something useful. So we've actually got this alternative way of interacting, where they can respond to your prompt with an interactive custom interface that you can work with. People haven't quite wired those back up again, though. Ideally, I'd want the LLM to ask me a question by building me a custom little UI for that question, and then it gets to see how I interacted with it. I don't know why, but that's just such a small step from where we are right now, and it feels like such an obvious next step. Why should you just be communicating with text, when the LLM can build interfaces on the fly that let you select a point on a map or move sliders up and down? It's going to create knobs and dials. I keep saying knobs and dials, right.
We can do that, and Claude Artifacts will build you a knobs-and-dials interface.

[00:45:34] Simon: But at the moment they haven't closed the loop. When you twiddle those knobs, Claude doesn't see what you were doing. They're going to close that loop; I'm shocked that they haven't done it yet. So yeah, I think there's so much scope for innovation, and so much scope for doing interesting stuff with that model, where anything you can represent in SVG, which is almost everything, can now be part of that ongoing conversation.

[00:45:59] swyx (2): Yeah, I would say the best-executed version of this I've seen so far is Bolt, where you can literally type in "make a Spotify clone" or "make an Airbnb clone", and it actually just does that for you, zero-shot, with a nice design.

[00:46:14] Simon: There's a benchmark for that now. The LMArena people now have a benchmark that is zero-shot app generation, because all of the models can do it. I've started figuring it out; I'm building my own version of this for my own project, because I think within six months it'll just be an expected feature. If you have a web application, why wouldn't you have a thing where you can add something custom? So for my Datasette data-exploration project, I want you to be able to do things like conjure up a dashboard just via a prompt. You say: I need a pie chart and a bar chart, put them next to each other, and then have a form where submitting the form inserts a row into my database table. And this is all suddenly feasible. It's not even particularly difficult to do, which is great. Utterly bizarre that these things are now easy.
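(A minimal sketch of that prompt-to-UI pattern, assuming the official Anthropic Python SDK; the model alias and prompt are placeholders, and it deliberately stops short of the loop-closing step Simon says nobody has shipped yet, feeding the user's interactions back to the model.)

```python
# Sketch: ask a model to answer a prompt with a small self-contained web UI,
# then open the generated page locally (the Claude Artifacts pattern).
import pathlib
import webbrowser
import anthropic  # assumes `pip install anthropic` and ANTHROPIC_API_KEY set

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model alias
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": "Reply with only a self-contained HTML page (inline JS/CSS, "
                   "no external requests) showing a pie chart next to a bar "
                   "chart, plus a form whose submit handler logs the row it "
                   "would insert into a database.",
    }],
)

page = pathlib.Path("dashboard.html")
page.write_text(message.content[0].text)
webbrowser.open(page.resolve().as_uri())
# Closing the loop, i.e. sending the user's clicks and form entries back to
# the model as the next message, is the part that has not been wired up yet.
```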
[00:47:00] swyx (2): I think for a general audience, that is what I would highlight: that software creation is becoming easier and easier. Gemini is now available in Gmail and Google Sheets. I don't write my own Google Sheets formulas anymore; I just tell Gemini to do it. And so I almost wanted to somewhat disagree with your assertion that LLMs got harder to use. Yes, we exposed more capabilities, but they're in minor forms: using Canvas, web search in ChatGPT, Gemini being in Google Sheets...

[00:47:37] Simon: No, no, no, no. Those are the things that make it harder. Because the problem is that each of those features is amazing if you understand the edges of the feature. If you're like, okay, so in Gemini Google Sheets formulas, I can get it to do a certain amount of things, but I can't get it to go and read a web page (you probably can't get it to read a web page, right?), then there are things it can do and things it can't do, which are completely undocumented. If you ask these things what they can and can't do, they're terrible at answering questions about that. My favorite example is Claude Artifacts: you can't build a Claude Artifact that can hit an API somewhere else, because the CORS headers on that iframe prevent it accessing anything outside of cdnjs. So, good luck learning CORS headers as an end user in order to understand why. I've seen people saying, oh, this is rubbish, I tried building an artifact that would run a prompt, and it couldn't, because Claude didn't expose an API with CORS headers. All of this stuff is so weird and complicated, and the more tools we add, the more expertise you need to really understand the full scope of what you can do. The question really comes down to: what does it take to understand the full extent of what's possible? And honestly, that's just getting more and more involved over time.

[00:48:58] Local LLMs: A Growing Interest

[00:48:58] swyx (2): I have one more topic that I think you're kind of a champion of, and we've touched on it a little bit, which is local LLMs, and running AI applications on your desktop. I feel like you are an early adopter of many, many things.

[00:49:12] Simon: I had an interesting experience with that over the past year. Six months ago, I almost completely lost interest, and the reason is that six months ago, there was no point in using the best local models you could run at all, because the best hosted models were so much better. There was no point at which I'd choose to run a model on my laptop if I had API access to Claude 3.5 Sonnet; they just weren't even comparable. And that changed, basically, in the past three months, as the local models had this step change in capability. Now I can run some of these local models, and they're not as good as Claude 3.5 Sonnet, but they're not so far away that it's not worth me even using them. The other continuing problem is I've only got 64 gigabytes of RAM, and if you run, like, Llama 3 70B, most of my RAM is gone, so I have to shut down my Firefox tabs and my Chrome and my VS Code windows in order to run it. But it's got me interested again. The efficiency improvements are such that now, if you were to stick me on a desert island with my laptop, I'd be very productive using those local models. And that's pretty exciting. And if those trends continue... also, I think my next laptop, if and when I buy one, is going to have twice the amount of RAM, at which point maybe I can run almost the top-tier open-weights models and still be able to use it as a computer as well. NVIDIA just announced their $3,000, 128-gigabyte monstrosity. That's a pretty good price, you know, if you're going to buy it.

[00:50:42] swyx (2): Custom OS and all.

[00:50:46] Simon: If I get a job, if I have enough of an income that I can justify blowing $3,000 on it, then yes.

[00:50:52] swyx (2): Okay, let's do a GoFundMe to get Simon one. Come on. You know you can get a job anytime you want. This is just purely discretionary.

[00:50:59] Simon: I want a job that pays me to do exactly what I'm doing already and doesn't tell me what else to do. That's the challenge.

[00:51:06] swyx (2): I think Ethan Mollick does pretty well.
Whatever it is he's doing.

[00:51:11] swyx (2): But yeah, basically I was trying to bring in not just local models: Apple Intelligence is on every Mac machine. You seem skeptical.

[00:51:21] Simon: It's rubbish. Apple Intelligence is so bad. It's like, it does one thing well.

[00:51:25] swyx (2): Oh yeah, what's that? It summarizes notifications. And sometimes it's humorous.

[00:51:29] Brian: Are you sure it does that well? And also, by the way, again, from a normie point of view: there's no indication from Apple of when to use it. Everybody upgrades their thing, and it's like, okay, now you have Apple Intelligence, and you never know when to use it ever again.

[00:51:47] swyx (2): Oh yeah, you consult the Apple docs, which is MKBHD.

[00:51:51] Simon: The one thing I'll say about Apple Intelligence is, one of the reasons it's so disappointing is that the models are just weak. But now, like, Llama 3B is such a good model in a 2-gigabyte file. I think give Apple six months and hopefully they'll catch up to the state of the art on the small models, and then maybe it'll start being a lot more interesting.

[00:52:10] swyx (2): Yeah. Anyway, this was year one. And, you know, just like the first year of the iPhone, maybe not that much of a hit, and then year three they had the App Store. So, hey, I would say give it some time. And I think Chrome is also shipping Gemini Nano this year, which means that every web app will have free access to a local model that just ships in the browser, which is kind of interesting. And then I also wanted to open the floor for any of us: what are the AI applications that we've adopted that we really recommend, apps that are running in our browser or running locally that other people should be trying? I feel like that's always one thing that is helpful at the start of the year.

[00:53:00] Simon: Okay. So for running local models, my top picks: firstly, on the iPhone, there's this thing called MLC Chat, which works, and it's easy to install, and it runs Llama 3B, and it's so much fun. It's not necessarily a capable enough model that I use it for real things, but my party trick right now is I get my phone to write a Netflix Christmas movie plot outline where, like, a jeweller falls in love with the King of Sweden or whatever. And it does a good job, and it comes up with pun names for the movies. That's deeply entertaining. On my laptop, most recently, I've been getting heavily into Ollama, because the Ollama team are very, very good at finding the good models and patching them up and making them work well. It gives you an API; my little LLM command-line tool has a plugin that talks to Ollama, which works really well. So Ollama is, I think, the easiest on-ramp to running models locally. If you want a nice user interface, LM Studio is, I think, the best user-interface thing for that. It's not open source, but it's good; it's worth playing with. The other one that I've been trying recently is a thing called, what's it called? Open WebUI or something. Yeah. The UI is fantastic.
If you've got Ollama running and you fire this thing up, it spots Ollama and gives you an interface onto your Ollama models.
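(For anyone who wants to try that on-ramp: a minimal sketch of calling a local Ollama server from Python's standard library, assuming `ollama serve` is running and a model has already been pulled; the model tag and prompt are placeholders.)

```python
# Sketch: talk to a local Ollama server over its REST API
# (default port 11434), e.g. after `ollama pull llama3.2`.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3.2",  # any local model tag you have pulled
        "prompt": "Outline a Netflix Christmas movie plot in two sentences.",
        "stream": False,      # one JSON blob instead of streamed chunks
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```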
Fluent Fiction - Swedish: Finding Her Voice: Elin's Brave Stand at Ericsson Globe Find the full episode transcript, vocabulary words, and more:fluentfiction.com/sv/episode/2025-01-12-08-38-19-sv Story Transcript:Sv: Det var en kall vinterdag i Stockholm.En: It was a cold winter day in Stockholm.Sv: Snön föll försiktigt över staden och täckte gatorna i ett vitt täcke.En: Snow was gently falling over the city, covering the streets in a white blanket.Sv: Skolbussen körde längs de snöklädda vägarna, fylld med förväntansfulla elever från klass 6B.En: The school bus drove along the snow-covered roads, filled with eager students from class 6B.Sv: De var på väg till Ericsson Globe.En: They were on their way to Ericsson Globe.Sv: Ericsson Globe, en stor och imponerande byggnad, stack ut mot den gråa himlen.En: Ericsson Globe, a large and impressive building, stood out against the gray sky.Sv: Den liknade en gigantisk golfboll i snölandskapet.En: It resembled a gigantic golf ball in the snowy landscape.Sv: Läraren, fru Andersson, hade berättat att de skulle lära sig om svensk arkitektur.En: The teacher, Mrs. Andersson, had mentioned that they would learn about Swedish architecture.Sv: Elin satt längst bak i bussen och tittade ut genom fönstret.En: Elin sat at the back of the bus, looking out the window.Sv: Hon kände ett pirr i magen.En: She felt a tingling in her stomach.Sv: Hon älskade design och arkitektur, men kände sig ofta bortglömd mellan sina mer högljudda klasskamrater som Lars och Maja.En: She loved design and architecture, but often felt overlooked among her louder classmates like Lars and Maja.Sv: "Någon som vet något om Globen?En: "Does anyone know anything about the Globen?"Sv: " frågade fru Andersson när de klev av bussen.En: asked Mrs. Andersson as they stepped off the bus.Sv: "Lars och Maja?En: "Lars and Maja?Sv: Kanske ni vet?En: Perhaps you know?"Sv: "Lars och Maja började genast prata ivrigt om byggnadens höjd och hur den byggts.En: Lars and Maja immediately began to talk excitedly about the building's height and how it was constructed.Sv: Elin, som brukade hålla sig i bakgrunden, kände plötsligt en stark önskan att dela med sig av sina egna tankar.En: Elin, who usually stayed in the background, suddenly felt a strong desire to share her own thoughts.Sv: Men rädslan för att misslyckas höll henne tillbaka.En: But the fear of failure held her back.Sv: När de kom in i Globen, fascinerad av dess inre struktur, tog Elin ett djupt andetag.En: When they entered the Globen, fascinated by its interior structure, Elin took a deep breath.Sv: "Jag vill också säga något," sa hon tyst för sig själv.En: "I want to say something too," she said quietly to herself.Sv: När gruppen samlades runt en modell av Globen, tog Elin mod till sig.En: As the group gathered around a model of the Globen, Elin gathered the courage.Sv: Hon räckte upp handen.En: She raised her hand.Sv: "Jag.En: "I...Sv: jag kan berätta mer om designen.En: I can tell more about the design."Sv: "Alla vände sig mot henne.En: Everyone turned towards her.Sv: Elins hjärta slog snabbt.En: Elin's heart beat fast.Sv: Hon började prata om Globen som en symbol för svensk innovation.En: She began talking about the Globen as a symbol of Swedish innovation.Sv: Hon berättade hur den representerade enheten och skaparkraften i svensk kultur.En: She explained how it represented unity and creativity in Swedish culture.Sv: Till hennes förvåning lyssnade alla noga.En: To her surprise, everyone listened intently.Sv: Lars och Maja såg imponerade ut.En: 
Lars and Maja looked impressed.Sv: Elin fann sin röst och fortsatte med nyvunnen trygghet, hennes ord flöt fritt.En: Elin found her voice and continued with newfound confidence, her words flowing freely.Sv: När hon var klar, applåderade hennes klasskamrater.En: When she was done, her classmates applauded.Sv: "Bra jobbat, Elin!"En: "Well done, Elin!"Sv: ropade fru Andersson och klappade henne på axeln.En: shouted Mrs. Andersson, patting her on the shoulder.Sv: På vägen hem kände Elin sig stolt.En: On the way home, Elin felt proud.Sv: Hon hade vågat ta plats och hennes idéer hade blivit hörda.En: She had dared to take her place, and her ideas had been heard.Sv: Hon visste nu att hennes röst var viktig och betydelsefull.En: She now knew that her voice was important and meaningful.Sv: När bussen rullade tillbaka genom den vintriga staden, visste Elin att detta bara var början på hennes resa att dela sina idéer med världen.En: As the bus rolled back through the wintry city, Elin knew this was just the beginning of her journey to share her ideas with the world.Sv: Hon log för sig själv och tittade ut genom fönstret med ett lugnt hjärta.En: She smiled to herself and looked out the window with a calm heart.

Vocabulary Words:
gentle: försiktigt
impressive: imponerande
covering: täcker
eager: förväntansfulla
blanket: täcke
gray: gråa
tingling: pirr
overlooked: bortglömd
background: bakgrunden
desire: önskan
failure: misslyckas
interior structure: inre struktur
fascinated: fascinerad
gathered: samlades
courage: mod
unity: enheten
creativity: skaparkraften
intently: noga
newfound: nyvunnen
confidence: trygghet
flowing freely: flöt fritt
applauded: applåderade
dared: vågat
meaningful: betydelsefull
calm: lugnt
journey: resa
symbol: symbol
innovation: innovation
share: dela
ideas: idéer
Good evening and a huge welcome back to the show. I hope you've had a great day and you're ready to kick back and relax with another episode of Brett's Old Time Radio Show. Hello, I'm Brett, your host for this evening, and welcome to my home in beautiful Lyme Bay, where it's a lovely December night. I hope it's just as nice where you are. You'll find all of my links at www.linktr.ee/brettsoldtimeradioshow A huge thank you for joining me once again for our regular late night visit to those dusty studio archives of old time radio shows, right here at my home in the United Kingdom. Don't forget I have an Instagram page and YouTube channel, both called Brett's Old Time Radio Show, and I'd love it if you could follow me. Feel free to send me some feedback on this and the other shows if you get a moment: brett@tourdate.co.uk

#sleep #insomnia #relax #chill #night #nighttime #bed #bedtime #oldtimeradio #drama #comedy #radio #talkradio #hancock #tonyhancock #hancockshalfhour #sherlock #sherlockholmes #radiodrama #popular #viral #viralpodcast #podcast #podcasting #podcasts #podtok #podcastclip #podcastclips #podcasttrailer #podcastteaser #newpodcastepisode #newpodcast #videopodcast #upcomingpodcast #audiogram #audiograms #truecrimepodcast #historypodcast #truecrime #podcaster #viral #popular #viralpodcast #number1 #instagram #youtube #facebook #johnnydollar #crime #fiction #unwind #devon #texas #texasranger #beer #seaton #seaside #smuggler #colyton #devon #seaton #beer #branscombe #lymebay #lymeregis #brett #brettorchard #orchard #greatdetectives #greatdetectivesofoldtimeradio #detectives #johnnydollar #thesaint #steptoe #texasrangers

The Man Called X

An espionage radio drama that aired on CBS and NBC from July 10, 1944, to May 20, 1952. The radio series was later adapted for television and was broadcast for one season, 1956–1957.

People

Herbert Marshall had the lead role of agent Ken Thurston/"Mr. X", an American intelligence agent who took on dangerous cases in a variety of exotic locations. Leon Belasco played Mr. X's comedic sidekick, Pegon Zellschmidt, who always turned up in remote parts of the world because he had a "cousin" there. Zellschmidt annoyed and helped Mr. X. Jack Latham was an announcer for the program, and Wendell Niles was the announcer from 1947 to 1948. Orchestras led by Milton Charles, Johnny Green, Felix Mills, and Gordon Jenkins supplied the background music. William N. Robson was the producer and director. Stephen Longstreet was the writer.

Production

The Man Called X replaced America — Ceiling Unlimited on the CBS schedule.

Television

The series was later adapted to a 39-episode syndicated television series (1956–1957) starring Barry Sullivan as Thurston for Ziv Television.

Episodes

Season 1 (1956)
1. "For External Use Only" (dir. Eddie Davis; story by Ladislas Farago, teleplay by Stuart Jerome, Harold Swanton, and William P. Templeton), January 27, 1956
2. "Ballerina Story" (dir. Eddie Davis; written by Leonard Heideman), February 3, 1956
3. "Extradition" (dir. Eddie Davis; written by Ellis Marcus), February 10, 1956
4. "Assassination" (dir. William Castle; written by Stuart Jerome), February 17, 1956
5. "Truth Serum" (dir. Eddie Davis; written by Harold Swanton), February 24, 1956
6. "Afghanistan" (dir. Eddie Davis; written by Leonard Heideman), March 2, 1956
7. "Embassy" (dir. Herbert L. Strock; written by Laurence Heath and Jack Rock), March 9, 1956
8. "Dangerous" (dir. Eddie Davis; written by George Callahan), March 16, 1956
9. "Provocateur" (dir. Eddie Davis; written by Arthur Weiss), March 23, 1956
10. "Local Hero" (dir. Leon Benson; written by Ellis Marcus), March 30, 1956
11. "Maps" (dir. Eddie Davis; written by Jack Rock), May 4, 1956
Planes" Eddie Davis William L. Stuart April 13, 1956 13 13 "Acoustics" Eddie Davis Orville H. Hampton April 20, 1956 14 14 "The General" Eddie Davis Leonard Heideman April 27, 1956 Season 2 (1956–1957) 15 1 "Missing Plates" Eddie Davis Jack Rock September 27, 1956 16 2 "Enemy Agent" Eddie Davis Teleplay by : Gene Levitt October 4, 1956 17 3 "Gold" Eddie Davis Jack Laird October 11, 1956 18 4 "Operation Janus" Eddie Davis Teleplay by : Jack Rock and Art Wallace October 18, 1956 19 5 "Staff Headquarters" Eddie Davis Leonard Heideman October 25, 1956 20 6 "Underground" Eddie Davis William L. Stuart November 1, 1956 21 7 "Spare Parts" Eddie Davis Jack Laird November 8, 1956 22 8 "Fallout" Eddie Davis Teleplay by : Arthur Weiss November 15, 1956 23 9 "Speech" Eddie Davis Teleplay by : Ande Lamb November 22, 1956 24 10 "Ship Sabotage" Eddie Davis Jack Rock November 29, 1956 25 11 "Rendezvous" Eddie Davis Ellis Marcus December 5, 1956 26 12 "Switzerland" Eddie Davis Leonard Heideman December 12, 1956 27 13 "Voice On Tape" Eddie Davis Teleplay by : Leonard Heideman December 19, 1956 28 14 "Code W" Eddie Davis Arthur Weiss December 26, 1956 29 15 "Gas Masks" Eddie Davis Teleplay by : Jack Rock January 3, 1957 30 16 "Murder" Eddie Davis Lee Berg January 10, 1957 31 17 "Train Blow-Up" Eddie Davis Ellis Marcus February 6, 1957 32 18 "Powder Keg" Jack Herzberg Les Crutchfield and Jack Rock February 13, 1957 33 19 "Passport" Eddie Davis Norman Jolley February 20, 1957 34 20 "Forged Documents" Eddie Davis Charles Mergendahl February 27, 1957 35 21 "Australia" Lambert Hill Jack Rock March 6, 1957 36 22 "Radio" Eddie Davis George Callahan March 13, 1957 37 23 "Business Empire" Leslie Goodwins Herbert Purdum and Jack Rock March 20, 1957 38 24 "Hungary" Eddie Davis Fritz Blocki and George Callahan March 27, 1957 39 25 "Kidnap" Eddie Davis George Callahan April 4, 1957 sleep insomnia relax chill night nightime bed bedtime oldtimeradio drama comedy radio talkradio hancock tonyhancock hancockshalfhour sherlock sherlockholmes radiodrama popular viral viralpodcast podcast brett brettorchard orchard east devon seaton beer lyme regis village condado de alhama spain murcia The Golden Age of Radio Also known as the old-time radio (OTR) era, was an era of radio in the United States where it was the dominant electronic home entertainment medium. It began with the birth of commercial radio broadcasting in the early 1920s and lasted through the 1950s, when television gradually superseded radio as the medium of choice for scripted programming, variety and dramatic shows. Radio was the first broadcast medium, and during this period people regularly tuned in to their favourite radio programs, and families gathered to listen to the home radio in the evening. According to a 1947 C. E. Hooper survey, 82 out of 100 Americans were found to be radio listeners. A variety of new entertainment formats and genres were created for the new medium, many of which later migrated to television: radio plays, mystery serials, soap operas, quiz shows, talent shows, daytime and evening variety hours, situation comedies, play-by-play sports, children's shows, cooking shows, and more. In the 1950s, television surpassed radio as the most popular broadcast medium, and commercial radio programming shifted to narrower formats of news, talk, sports and music. Religious broadcasters, listener-supported public radio and college stations provide their own distinctive formats. 
Origins

A family listening to the first broadcasts around 1920 with a crystal radio. The crystal radio, a legacy from the pre-broadcast era, could not power a loudspeaker, so the family had to share earphones.

During the first three decades of radio, from 1887 to about 1920, the technology of transmitting sound was undeveloped; the information-carrying ability of radio waves was the same as a telegraph: the radio signal could be either on or off. Radio communication was by wireless telegraphy; at the sending end, an operator tapped on a switch which caused the radio transmitter to produce a series of pulses of radio waves which spelled out text messages in Morse code. At the receiver these sounded like beeps, requiring an operator who knew Morse code to translate them back to text. This type of radio was used exclusively for person-to-person text communication for commercial, diplomatic and military purposes and hobbyists; broadcasting did not exist.

The broadcasts of live drama, comedy, music and news that characterize the Golden Age of Radio had a precedent in the Théâtrophone, commercially introduced in Paris in 1890 and available as late as 1932. It allowed subscribers to eavesdrop on live stage performances and hear news reports by means of a network of telephone lines. The development of radio eliminated the wires and subscription charges from this concept.

Between 1900 and 1920 the first technology for transmitting sound by radio was developed, AM (amplitude modulation), and AM broadcasting sprang up around 1920.

On Christmas Eve 1906, Reginald Fessenden is said to have broadcast the first radio program, consisting of some violin playing and passages from the Bible. While Fessenden's role as an inventor and early radio experimenter is not in dispute, several contemporary radio researchers have questioned whether the Christmas Eve broadcast took place, or whether the date was, in fact, several weeks earlier. The first apparent published reference to the event was made in 1928 by H. P. Davis, Vice President of Westinghouse, in a lecture given at Harvard University. In 1932 Fessenden cited the Christmas Eve 1906 broadcast event in a letter he wrote to Vice President S. M. Kinter of Westinghouse. Fessenden's wife Helen recounts the broadcast in her book Fessenden: Builder of Tomorrows (1940), eight years after Fessenden's death. The issue of whether the 1906 Fessenden broadcast actually happened is discussed in Donna Halper's article "In Search of the Truth About Fessenden"[2] and also in James O'Neal's essays.[3][4] An annotated argument supporting Fessenden as the world's first radio broadcaster was offered in 2006 by Dr. John S. Belrose, Radioscientist Emeritus at the Communications Research Centre Canada, in his essay "Fessenden's 1906 Christmas Eve broadcast."

It was not until after the Titanic catastrophe in 1912 that radio for mass communication came into vogue, inspired first by the work of amateur ("ham") radio operators. Radio was especially important during World War I as it was vital for air and naval operations. World War I brought about major developments in radio, superseding the Morse code of the wireless telegraph with the vocal communication of the wireless telephone, through advancements in vacuum tube technology and the introduction of the transceiver. After the war, numerous radio stations were born in the United States and set the standard for later radio programs.
The first radio news program was broadcast on August 31, 1920, on the station 8MK in Detroit; owned by The Detroit News, the station covered local election results. This was followed in 1920 by the first commercial radio station in the United States, KDKA, being established in Pittsburgh. The first regular entertainment programs were broadcast in 1922, and on March 10, Variety carried the front-page headline: "Radio Sweeping Country: 1,000,000 Sets in Use." A highlight of this time was the Rose Bowl being broadcast for the first time on January 1, 1923, on the Los Angeles station KHJ.

Growth of radio

Broadcast radio in the United States underwent a period of rapid change through the decade of the 1920s. Technology advances, better regulation, rapid consumer adoption, and the creation of broadcast networks transformed radio from a consumer curiosity into the mass media powerhouse that defined the Golden Age of Radio.

Consumer adoption

Through the decade of the 1920s, the purchase of radios by United States homes continued and accelerated. The Radio Corporation of America (RCA) released figures in 1925 stating that 19% of United States homes owned a radio. The triode and regenerative circuit made amplified vacuum tube radios widely available to consumers by the second half of the 1920s. The advantage was obvious: several people at once in a home could now easily listen to their radio at the same time. In 1930, 40% of the nation's households owned a radio,[8] a figure that was much higher in suburban and large metropolitan areas. The superheterodyne receiver and other inventions refined radios even further in the next decade; even as the Great Depression ravaged the country in the 1930s, radio would stay at the centre of American life. 83% of American homes would own a radio by 1940.

Government regulation

Although radio was well established with United States consumers by the mid-1920s, regulation of the broadcast medium presented its own challenges. Until 1926, broadcast radio power and frequency use was regulated by the U.S. Department of Commerce, until a legal challenge rendered the agency powerless to do so. Congress responded by enacting the Radio Act of 1927, which included the formation of the Federal Radio Commission (FRC). One of the FRC's most important early actions was the adoption of General Order 40, which divided stations on the AM band into three power level categories, which became known as Local, Regional, and Clear Channel, and reorganized station assignments. Based on this plan, effective 3:00 a.m. Eastern time on November 11, 1928, most of the country's stations were assigned to new transmitting frequencies.

Broadcast networks

The final element needed to make the Golden Age of Radio possible focused on the question of distribution: the ability for multiple radio stations to simultaneously broadcast the same content, and this would be solved with the concept of a radio network. The earliest radio programs of the 1920s were largely unsponsored; radio stations were a service designed to sell radio receivers. In early 1922, American Telephone & Telegraph Company (AT&T) announced the beginning of advertisement-supported broadcasting on its owned stations, and plans for the development of the first radio network using its telephone lines to transmit the content. In July 1926, AT&T abruptly decided to exit the broadcasting field and signed an agreement to sell its entire network operations to a group headed by RCA, which used the assets to form the National Broadcasting Company.
Four radio networks had formed by 1934. These were:

National Broadcasting Company Red Network (NBC Red), launched November 15, 1926. Originally founded as the National Broadcasting Company in late 1926, the company was almost immediately forced to split under antitrust laws to form NBC Red and NBC Blue. When, in 1942, NBC Blue was sold and renamed the Blue Network, this network would go back to calling itself simply the National Broadcasting Company Radio Network (NBC).

National Broadcasting Company Blue Network (NBC Blue), launched January 10, 1927, split from NBC Red. NBC Blue was sold in 1942 and became the Blue Network, which in turn transferred its assets to a new company, the American Broadcasting Company, on June 15, 1945. That network identified itself as the American Broadcasting Company Radio Network (ABC).

Columbia Broadcasting System (CBS), launched September 18, 1927. After an initially struggling attempt to compete with the NBC networks, CBS gained new momentum when William S. Paley was installed as company president.

Mutual Broadcasting System (Mutual), launched September 29, 1934. Mutual was initially run as a cooperative in which the flagship stations owned the network, not the other way around as was the case with the other three radio networks.

Programming

In the period before and after the advent of the broadcast network, new forms of entertainment needed to be created to fill the time of a station's broadcast day. Many of the formats born in this era continued into the television and digital eras. In the beginning of the Golden Age, network programs were almost exclusively broadcast live, as the national networks prohibited the airing of recorded programs until the late 1940s because of the inferior sound quality of phonograph discs, the only practical recording medium at that time. As a result, network prime-time shows would be performed twice, once for each coast.

Rehearsal for the World War II radio show You Can't Do Business with Hitler with John Flynn and Virginia Moore. This series of programs, broadcast at least once weekly by more than 790 radio stations in the United States, was written and produced by the radio section of the Office of War Information (OWI).

Live events

Coverage of live events included musical concerts and play-by-play sports broadcasts.

News

The capability of the new medium to get information to people created the format of modern radio news: headlines, remote reporting, sidewalk interviews (such as Vox Pop), panel discussions, weather reports, and farm reports. The entry of radio into the realm of news triggered a feud between the radio and newspaper industries in the mid-1930s, eventually culminating in newspapers trumping up exaggerated reports of a mass hysteria from the (entirely fictional) radio presentation of The War of the Worlds, which had been presented as a faux newscast.

Musical features

The sponsored musical feature soon became one of the most popular program formats. Most early radio sponsorship came in the form of selling the naming rights to the program, as evidenced by such programs as The A&P Gypsies, Champion Spark Plug Hour, The Clicquot Club Eskimos, and King Biscuit Time; commercials, as they are known in the modern era, were still relatively uncommon and considered intrusive.
During the 1930s and 1940s, the leading orchestras were heard often through big band remotes, and NBC's Monitor continued such remotes well into the 1950s by broadcasting live music from New York City jazz clubs to rural America. Singers such as Harriet Lee and Wendell Hall became popular fixtures on network radio beginning in the late 1920s and early 1930s. Local stations often had staff organists such as Jesse Crawford playing popular tunes.

Classical music programs on the air included The Voice of Firestone and The Bell Telephone Hour. Texaco sponsored the Metropolitan Opera radio broadcasts; the broadcasts, now sponsored by the Toll Brothers, continue to this day around the world, and are one of the few examples of live classical music still broadcast on radio. One of the most notable of all classical music radio programs of the Golden Age of Radio featured the celebrated Italian conductor Arturo Toscanini conducting the NBC Symphony Orchestra, which had been created especially for him. At that time, nearly all classical musicians and critics considered Toscanini the greatest living maestro. Popular songwriters such as George Gershwin were also featured on radio. (Gershwin, in addition to frequent appearances as a guest, had his own program in 1934.) The New York Philharmonic also had weekly concerts on radio. There was no dedicated classical music radio station like NPR at that time, so classical music programs had to share the network they were broadcast on with more popular ones, much as in the days of television before the creation of NET and PBS.

Country music also enjoyed popularity. National Barn Dance, begun on Chicago's WLS in 1924, was picked up by NBC Radio in 1933. In 1925, WSM Barn Dance went on the air from Nashville. It was renamed the Grand Ole Opry in 1927 and NBC carried portions from 1944 to 1956. NBC also aired The Red Foley Show from 1951 to 1961, and ABC Radio carried Ozark Jubilee from 1953 to 1961.

Comedy

Radio attracted top comedy talents from vaudeville and Hollywood for many years: Bing Crosby, Abbott and Costello, Fred Allen, Jack Benny, Victor Borge, Fanny Brice, Billie Burke, Bob Burns, Judy Canova, Eddie Cantor, Jimmy Durante, Burns and Allen, Phil Harris, Edgar Bergen, Bob Hope, Groucho Marx, Jean Shepherd, Red Skelton and Ed Wynn. Situational comedies also gained popularity, such as Amos 'n' Andy, Easy Aces, Ethel and Albert, Fibber McGee and Molly, The Goldbergs, The Great Gildersleeve, The Halls of Ivy (which featured screen star Ronald Colman and his wife Benita Hume), Meet Corliss Archer, Meet Millie, and Our Miss Brooks.

Radio comedy ran the gamut from the small town humor of Lum and Abner, Herb Shriner and Minnie Pearl to the dialect characterizations of Mel Blanc and the caustic sarcasm of Henry Morgan. Gags galore were delivered weekly on Stop Me If You've Heard This One and Can You Top This?,[18] panel programs devoted to the art of telling jokes. Quiz shows were lampooned on It Pays to Be Ignorant, and other memorable parodies were presented by such satirists as Spike Jones, Stoopnagle and Budd, Stan Freberg and Bob and Ray. British comedy reached American shores in a major assault when NBC carried The Goon Show in the mid-1950s.

Some shows originated as stage productions: Clifford Goldsmith's play What a Life was reworked into NBC's popular, long-running The Aldrich Family (1939–1953) with the familiar catchphrases "Henry! Henry Aldrich!," followed by Henry's answer, "Coming, Mother!" Moss Hart and George S.
Kaufman's Pulitzer Prize-winning Broadway hit, You Can't Take It with You (1936), became a weekly situation comedy heard on Mutual (1944) with Everett Sloane and later on NBC (1951) with Walter Brennan. Other shows were adapted from comic strips, such as Blondie, Dick Tracy, Gasoline Alley, The Gumps, Li'l Abner, Little Orphan Annie, Popeye the Sailor, Red Ryder, Reg'lar Fellers, Terry and the Pirates and Tillie the Toiler. Bob Montana's redheaded teen of comic strips and comic books was heard on radio's Archie Andrews from 1943 to 1953. The Timid Soul was a 1941–1942 comedy based on cartoonist H. T. Webster's famed Caspar Milquetoast character, and Robert L. Ripley's Believe It or Not! was adapted to several different radio formats during the 1930s and 1940s. Conversely, some radio shows gave rise to spinoff comic strips, such as My Friend Irma starring Marie Wilson.

Soap operas

The first program generally considered to be a daytime serial drama by scholars of the genre is Painted Dreams, which premiered on WGN on October 20, 1930. The first networked daytime serial is Clara, Lu, 'n Em, which started in a daytime time slot on February 15, 1932. As daytime serials became popular in the early 1930s, they became known as soap operas because many were sponsored by soap products and detergents. On November 25, 1960, the last four daytime radio dramas—Young Dr. Malone, Right to Happiness, The Second Mrs. Burton and Ma Perkins, all broadcast on the CBS Radio Network—were brought to an end.

Children's programming

The line-up of late afternoon adventure serials included Bobby Benson and the B-Bar-B Riders, The Cisco Kid, Jack Armstrong, the All-American Boy, Captain Midnight, and The Tom Mix Ralston Straight Shooters. Badges, rings, decoding devices and other radio premiums offered on these adventure shows were often allied with a sponsor's product, requiring the young listeners to mail in a boxtop from a breakfast cereal or other proof of purchase.

Radio plays

Radio plays were presented on such programs as 26 by Corwin, NBC Short Story, Arch Oboler's Plays, Quiet, Please, and CBS Radio Workshop. Orson Welles's The Mercury Theatre on the Air and The Campbell Playhouse were considered by many critics to be the finest radio drama anthologies ever presented. They usually starred Welles in the leading role, along with celebrity guest stars such as Margaret Sullavan or Helen Hayes, in adaptations from literature, Broadway, and/or films. They included such titles as Liliom, Oliver Twist (a title now feared lost), A Tale of Two Cities, Lost Horizon, and The Murder of Roger Ackroyd. It was on Mercury Theatre that Welles presented his celebrated-but-infamous 1938 adaptation of H. G. Wells's The War of the Worlds, formatted to sound like a breaking news program. Theatre Guild on the Air presented adaptations of classical and Broadway plays. Their Shakespeare adaptations included a one-hour Macbeth starring Maurice Evans and Judith Anderson, and a 90-minute Hamlet, starring John Gielgud.[22] Recordings of many of these programs survive.

During the 1940s, Basil Rathbone and Nigel Bruce, famous for playing Sherlock Holmes and Dr. Watson in films, repeated their characterizations on radio on The New Adventures of Sherlock Holmes, which featured both original stories and episodes directly adapted from Arthur Conan Doyle's stories.
During the 1940s, Basil Rathbone and Nigel Bruce, famous for playing Sherlock Holmes and Dr. Watson in films, repeated their characterizations on radio on The New Adventures of Sherlock Holmes, which featured both original stories and episodes adapted directly from Arthur Conan Doyle's stories. None of the episodes in which Rathbone and Bruce starred on the radio program were filmed with the two actors as Holmes and Watson, so radio became the only medium in which audiences could experience Rathbone and Bruce in some of the more famous Holmes stories, such as "The Speckled Band". There were also many dramatizations of Sherlock Holmes stories on radio without Rathbone and Bruce.

During the latter part of his career, the celebrated actor John Barrymore starred in a radio program, Streamlined Shakespeare, which featured him in a series of one-hour adaptations of Shakespeare plays, many of which he had never performed either on stage or in films, such as Twelfth Night (in which he played both Malvolio and Sir Toby Belch) and Macbeth.

Lux Radio Theatre and The Screen Guild Theater presented adaptations of Hollywood movies, performed before a live audience, usually with cast members from the original films. Suspense, Escape, The Mysterious Traveler and Inner Sanctum Mystery were popular thriller anthology series. Leading writers who created original material for radio included Norman Corwin, Carlton E. Morse, David Goodis, Archibald MacLeish, Arthur Miller, Arch Oboler, Wyllis Cooper, Rod Serling, Jay Bennett, and Irwin Shaw.

Game shows

Game shows saw their beginnings in radio. One of the first was Information Please in 1938, and one of the first major successes was Dr. I.Q. in 1939. Winner Take All, which premiered in 1946, was the first to use lockout devices and feature returning champions.

A relative of the game show, which contemporary media would call the giveaway show, typically involved giving sponsored products to studio audience members, to people randomly called by telephone, or to both. An early example of the type was the 1939 show Pot o' Gold, but the breakout hit was ABC's Stop the Music in 1948. Winning a prize generally required knowing what was being aired on the show at that moment, which led to criticism of the giveaway show as a form of "buying an audience". Giveaway shows were extremely popular through 1948 and 1949. They were often panned as low-brow, and the FCC even made an unsuccessful attempt to ban them (as an illegal lottery) in August 1949.[23]

Broadcast production methods

The RCA Type 44-BX microphone had two live faces and two dead ones, so actors could face each other and react. An actor could give the effect of leaving the room by simply turning their head toward a dead face of the microphone. The scripts were paper-clipped together. Whether actors and actresses dropped finished pages to the carpeted floor after use has been disputed.

Radio stations

Despite a general ban on the use of recordings in broadcasts by the radio networks through the late 1940s, "reference recordings" were made on phonograph disc of many programs as they were being broadcast, for review by the sponsor and for the network's own archival purposes. With the development of high-fidelity magnetic wire and tape recording in the years following World War II, the networks became more open to airing recorded programs, and the prerecording of shows became more common. Local stations, however, had always been free to use recordings, and sometimes made substantial use of pre-recorded syndicated programs distributed on pressed (as opposed to individually recorded) transcription discs.
Recording was done using a cutting lathe and acetate discs. Programs were normally recorded at 33⅓ rpm on 16-inch discs, the standard format used for such "electrical transcriptions" from the early 1930s through the 1950s. Sometimes the groove was cut starting at the inside of the disc and running to the outside. This was useful when the program to be recorded was longer than 15 minutes and so required more than one disc side: by recording the first side outside-in, the second inside-out, and so on, the sound quality at the disc change-over points would match, resulting in more seamless playback. An inside start also had the advantage that the thread of material cut from the disc's surface, which had to be kept out of the path of the cutting stylus, was naturally thrown toward the center of the disc and was automatically out of the way. When cutting an outside-start disc, a brush could be used to sweep the thread toward the middle of the disc. Well-equipped recording lathes used the vacuum from a water aspirator to pick up the thread as it was cut and deposit it in a water-filled bottle. In addition to convenience, this served a safety purpose: the cellulose nitrate thread was highly flammable, and a loose accumulation of it combusted violently if ignited.

Most recordings of radio broadcasts were made at a radio network's studios, or at the facilities of a network-owned or affiliated station, which might have four or more lathes. A small local station often had none. Two lathes were required to capture a program longer than 15 minutes without losing parts of it while discs were flipped over or changed, along with a trained technician to operate them and monitor the recording while it was being made. However, some surviving recordings were produced by local stations. When a substantial number of copies of an electrical transcription were required, as for the distribution of a syndicated program, they were produced by the same process used to make ordinary records: a master recording was cut, then electroplated to produce a stamper from which pressings in vinyl (or, in the case of transcription discs pressed before about 1935, shellac) were molded in a record press.
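To make the alternating-start scheme concrete, here is a minimal sketch in Python, assuming a nominal 15 minutes per side; the function name and constants are illustrative and not from the source.

    import math

    def plan_sides(program_minutes, side_minutes=15):
        """Plan the disc sides for a program, alternating cut direction.

        Cutting side 1 outside-in and side 2 inside-out makes the groove
        radius (and therefore the fidelity) match at each change-over.
        """
        n_sides = math.ceil(program_minutes / side_minutes)
        return [(side, "outside-in" if side % 2 else "inside-out")
                for side in range(1, n_sides + 1)]

    # A one-hour program fills four sides, i.e. two 16-inch discs:
    for side, direction in plan_sides(60):
        print(f"side {side}: cut {direction}")

The alternation falls out of the parity of the side number: odd sides start at the rim, even sides at the center, so each change-over lands at a matching groove radius.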
Armed Forces Radio Service

[Photo: Frank Sinatra and Alida Valli converse over Armed Forces Radio Service during World War II.]

The Armed Forces Radio Service (AFRS) had its origins in the U.S. War Department's quest to improve troop morale. This quest began with short-wave broadcasts of educational and information programs to troops in 1940. In 1941, the War Department began issuing "Buddy Kits" (B-Kits) to departing troops, consisting of radios, 78 rpm records and electrical transcription discs of radio shows. With the entrance of the United States into World War II, however, the War Department decided that it needed to improve both the quality and the quantity of its offerings. It began by broadcasting its own original variety programs. Command Performance was the first of these, produced for the first time on March 1, 1942. On May 26, 1942, the Armed Forces Radio Service was formally established. Originally its programming comprised network radio shows with the commercials removed, but it soon began producing original programming, such as Mail Call, G.I. Journal, Jubilee and GI Jive. At its peak in 1945, the Service produced around 20 hours of original programming each week.

From 1943 until 1949, the AFRS also broadcast programs developed through the collaborative efforts of the Office of the Coordinator of Inter-American Affairs and the Columbia Broadcasting System in support of America's cultural diplomacy initiatives and President Franklin Roosevelt's Good Neighbor policy. Among the popular shows was Viva America, which showcased leading musical artists from both North and South America for the entertainment of America's troops. Regular performers included Alfredo Antonini, Juan Arvizu, Nestor Mesta Chayres, Kate Smith,[26] and John Serry Sr.

After the war, the AFRS continued providing programming to troops in Europe. During the 1950s and early 1960s it presented performances by the Army's only symphony orchestra, the Seventh Army Symphony Orchestra. It continued to provide programming during the United States' later wars, and it survives today as a component of the American Forces Network (AFN). All of the shows aired by the AFRS during the Golden Age were recorded as electrical transcription discs, vinyl copies of which were shipped to stations overseas to be broadcast to the troops. People in the United States rarely heard programming from the AFRS,[31] though AFRS recordings of Golden Age network shows were occasionally broadcast on some domestic stations beginning in the 1950s. In some cases, the AFRS disc is the only surviving recording of a program.

Home radio recordings in the United States

There was some home recording of radio broadcasts in the 1930s and 1940s; examples from as early as 1930 have been documented. During these years, home recordings were made with disc recorders, most of which could store only about four minutes of a radio program on each side of a twelve-inch 78 rpm record, and most home recordings were made on even shorter-playing ten-inch or smaller discs. Some home disc recorders offered the option of the 33⅓ rpm speed used for electrical transcriptions, allowing a recording more than twice as long to be made, although with reduced audio quality. Office dictation equipment was sometimes pressed into service for recording radio broadcasts, but the audio quality of these devices was poor, and the resulting recordings were in odd formats that had to be played back on similar equipment. Because of the expense of recorders and the limitations of the recording media, home recording of broadcasts was not common during this period, and it was usually limited to brief excerpts.

The lack of suitable home recording equipment was somewhat relieved in 1947 with the availability of magnetic wire recorders for domestic use. These were capable of recording an hour-long broadcast on a single small spool of wire, and if a high-quality radio's audio output was recorded directly, rather than by holding a microphone up to its speaker, the recorded sound quality was very good. However, because the wire cost money and, like magnetic tape, could be repeatedly re-used to make new recordings, only a few complete broadcasts appear to have survived on this medium. In fact, there was little home recording of complete radio programs until the early 1950s, when increasingly affordable reel-to-reel tape recorders for home use were introduced to the market.
Recording media

Electrical transcription discs

[Photo: The War of the Worlds radio broadcast by Orson Welles, on electrical transcription disc.]

Before the early 1950s, when radio networks and local stations wanted to preserve a live broadcast, they did so by means of special phonograph records known as "electrical transcriptions" (ETs), made by cutting a sound-modulated groove into a blank disc. At first, in the early 1930s, the blanks varied in both size and composition, but most often they were simply bare aluminum, and the groove was indented rather than cut. Typically, these very early recordings were made not by the network or radio station but by a private recording service contracted by the broadcast sponsor or one of the performers. The bare aluminum discs were typically 10 or 12 inches in diameter and recorded at the then-standard speed of 78 rpm, which meant that several disc sides were required to accommodate even a 15-minute program.

By about 1936, 16-inch aluminum-based discs coated with cellulose nitrate lacquer, commonly known as acetates, and recorded at a speed of 33⅓ rpm had been adopted by the networks and individual radio stations as the standard medium for recording broadcasts. The making of such recordings, at least for some purposes, then became routine. Some discs were recorded using a "hill and dale" vertically modulated groove rather than the "lateral" side-to-side modulation found on the records being made for home use at that time. The large slow-speed discs could easily contain fifteen minutes on each side, allowing an hour-long program to be recorded on only two discs. The lacquer was softer than shellac or vinyl and wore more rapidly, allowing only a few playbacks with the heavy pickups and steel needles then in use before deterioration became audible.

During World War II, aluminum became a necessary material for the war effort and was in short supply, so an alternative base for the lacquer coating had to be found. Glass, despite its obvious disadvantage of fragility, had occasionally been used in earlier years because it could provide a perfectly smooth and even supporting surface for mastering and other critical applications. Glass-based recording blanks came into general use for the duration of the war.
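The fifteen-minutes-per-side figure follows from simple groove arithmetic: each turntable revolution lays down one turn of the groove, so playing time is the recordable radial band times the groove pitch, divided by the rotational speed. The sketch below illustrates this with assumed values for pitch and radii; they are plausible illustrative numbers, not figures from the source.

    def side_minutes(outer_radius_in, inner_radius_in, grooves_per_inch, rpm):
        """Approximate playing time of one disc side, in minutes.

        One revolution cuts one turn of the groove, so the total number
        of turns is the recordable radial band times the groove pitch.
        """
        turns = (outer_radius_in - inner_radius_in) * grooves_per_inch
        return turns / rpm

    # A 16-inch blank, assuming a ~3.5-inch recordable band and ~140
    # grooves per inch (both assumed values):
    print(side_minutes(7.5, 4.0, 140, 100 / 3))  # ~14.7 min per side
    print(side_minutes(7.5, 4.0, 140, 78))       # ~6.3 min at 78 rpm

The same arithmetic explains why home disc recorders with a 33⅓ rpm option could record "more than twice as long" per side: at the same groove pitch, playing time scales inversely with speed, and 78 divided by 33⅓ is roughly 2.3.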
Magnetic wire recording

In the late 1940s, wire recorders became a readily obtainable means of recording radio programs. On a per-minute basis, it was less expensive to record a broadcast on wire than on discs: the one-hour program that required the four sides of two 16-inch discs could be recorded intact on a single spool of wire less than three inches in diameter and about half an inch thick. The audio fidelity of a good wire recording was comparable to that of acetate discs, and by comparison the wire was practically indestructible, but it was soon rendered obsolete by the more manageable and easily edited medium of magnetic tape.

Reel-to-reel tape recording

Bing Crosby was the first major proponent of magnetic tape recording for radio, and the first to use it on network radio, after a demonstration program in 1947. Tape had several advantages over earlier recording methods. Running at a sufficiently high speed, it could achieve higher fidelity than either electrical transcription discs or magnetic wire. Discs could be edited only by copying parts of them to a new disc, and the copying entailed a loss of audio quality. Wire could be divided up and the ends spliced together by knotting, but wire was difficult to handle and the crude splices were too noticeable. Tape could be edited by cutting it with a blade and neatly joining the ends with adhesive tape.

By early 1949, the transition from live performances preserved on discs to performances pre-recorded on magnetic tape for later broadcast was complete for network radio programs. However, for the physical distribution of pre-recorded programming to individual stations, 16-inch 33⅓ rpm vinyl pressings, less expensive than tape to produce in quantities of identical copies, remained standard throughout the 1950s.

Availability of recordings

The great majority of pre-World War II live radio broadcasts are lost. Many were never recorded, and few recordings antedate the early 1930s. From the early 1930s on, several of the longer-running radio dramas have complete or nearly complete archives; in general, the earlier the date, the less likely it is that a recording survives. However, a good number of syndicated programs from this period have survived because copies were distributed far and wide. Recordings of live network broadcasts from the World War II years were preserved in the form of pressed vinyl copies issued by the Armed Forces Radio Service (AFRS) and survive in relative abundance. Syndicated programs from World War II and later years have nearly all survived. The survival of network programming from this time frame is more inconsistent: the networks started prerecording their formerly live shows on magnetic tape for subsequent broadcast, but did not physically distribute copies, and the expensive tapes, unlike electrical transcription ("ET") discs, could be "wiped" and re-used, especially since, in the age of emerging trends such as television and music radio, such recordings were believed to have virtually no rerun or resale value. Thus, while some prime-time network radio series from this era exist in full or almost in full, especially the most famous and longest-lived of them, less prominent or shorter-lived series (such as serials) may have only a handful of extant episodes. Airchecks, off-the-air recordings of complete shows made by, or at the behest of, individuals for their own private use, sometimes help to fill in such gaps. The contents of privately made recordings of live broadcasts from the first half of the 1930s can be of particular interest, as little live material from that period survives. Unfortunately, the sound quality of very early private recordings is often very poor, although in some cases this is largely due to the use of an incorrect playback stylus, which can also badly damage some unusual types of discs.

Most of the Golden Age programs in circulation among collectors, whether on analog tape, CD, or MP3, originated from analog 16-inch transcription discs, although some are off-the-air AM recordings. In many cases, however, the circulating recordings are degraded in quality, because lossless digital recording for the home market did not arrive until the very end of the twentieth century. Collectors made and shared recordings on analog magnetic tape, the only practical and relatively inexpensive medium, first on reels, then on cassettes. "Sharing" usually meant making a duplicate tape by connecting two recorders, playing the recording on one and recording it on the other. Analog recordings are never perfect, and copying an analog recording multiplies the imperfections. With the oldest recordings, a copy may even have been made by playing the sound out of one machine's speaker and capturing it with the other machine's microphone. The muffled sound, dropouts, sudden changes in sound quality, unsteady pitch, and other defects heard all too often are almost always accumulated tape-copy defects. In addition, magnetic recordings gradually deteriorate unless they are preserved archivally.
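The way copying multiplies imperfections can be made concrete with a toy model: if every dub adds roughly the same amount of independent noise, noise power grows linearly with each generation and the signal-to-noise ratio falls accordingly. The figures below (a 50 dB source, 45 dB of added noise per copy) are assumptions chosen for illustration, not measurements.

    import math

    def snr_after_copies(source_snr_db, copy_snr_db, generations):
        """SNR after n analog tape-to-tape copies, assuming each pass
        adds independent noise at a fixed level relative to the signal."""
        noise = 10 ** (-source_snr_db / 10)               # source noise power
        noise += generations * 10 ** (-copy_snr_db / 10)  # noise added per dub
        return 10 * math.log10(1.0 / noise)

    # Five collector generations from a clean transcription dub:
    for n in range(6):
        print(n, round(snr_after_copies(50, 45, n), 1), "dB")
    # Prints 50.0 dB at n=0, 43.8 dB after one copy, falling to 37.7 dB
    # by the fifth generation: each dub is audibly noisier.

Speaker-to-microphone copying, as described above for the oldest recordings, would correspond to a much lower per-copy SNR in this model, which is why those dubs degrade so much faster.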
The audio quality of the source discs, when they have survived unscathed and are accessed and dubbed anew, is usually found to be reasonably clear and undistorted, sometimes startlingly good, although like all phonograph records they are vulnerable to wear and the effects of scuffs, scratches, and ground-in dust. Many shows from the 1940s have survived only in edited AFRS versions, although some exist in both the original and AFRS forms. As of 2020, the Old Time Radio collection at the Internet Archive contains 5,121 recordings. An active group of collectors makes large collections of programs available digitally, via CD or download. RadioEchoes.com offers 98,949 episodes in its collection, though not all of it is old-time radio.

Copyright status

Unlike film, television, and print items from the era, the copyright status of most recordings from the Golden Age of Radio is unclear. This is because, prior to 1972, the United States left the copyrighting of sound recordings to the individual states, many of which offered more generous common-law copyright protections than the federal government offered for other media. Some states offered perpetual copyright, which has since been abolished; under the Music Modernization Act of September 2018, any sound recording 95 years old or older enters the public domain regardless of state law. The only exceptions are AFRS original productions, which are considered works of the United States government and thus both ineligible for federal copyright and outside the jurisdiction of any state; these programs are firmly in the public domain (this does not apply to programs carried by AFRS but produced by the commercial networks). In practice, most old-time radio recordings are treated as orphan works: although there may still be a valid copyright on a program, it is seldom enforced.

The copyright on an individual sound recording is distinct from the federal copyright for the underlying material (such as a published script, music, or, in the case of adaptations, the original film or television material), and in many cases it is impossible to determine where or when the original recording was made or whether the recording was copyrighted in that state. The U.S. Copyright Office states that "there are a variety of legal regimes governing protection of pre-1972 sound recordings in the various states, and the scope of protection and of exceptions and limitations to that protection is unclear."[39] For example, New York has issued contradictory rulings on whether common-law copyright in sound recordings exists in that state; the most recent, 2016's Flo & Eddie, Inc. v. Sirius XM Radio, holds that there is no such copyright in New York in regard to public performance.[40] Further complicating matters, certain examples in case law have implied that radio broadcasts (and faithful reproductions thereof), because they were distributed freely to the public over the air, may not be eligible for copyright in and of themselves. The Internet Archive and other organizations that distribute public domain and open-source audio recordings maintain extensive archives of old-time radio programs.
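As a back-of-the-envelope illustration of the simplified "95 years" reading described above (the statute itself has a tiered schedule with transition periods, so this is a sketch of the text's summary, not legal guidance):

    def earliest_public_domain_year(publication_year):
        """Year a pre-1972 recording would enter the public domain under
        the simplified 95-year rule described in the text; copyright
        terms run through the end of the calendar year, so expiry falls
        on the following January 1."""
        return publication_year + 95 + 1

    # A recording first broadcast and fixed in 1938:
    print(earliest_public_domain_year(1938))  # 2034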
Legacy

United States

Some old-time radio shows continued on the air, although in ever-dwindling numbers, throughout the 1950s, even after their television equivalents had conquered the general public. One factor that helped to kill off old-time radio entirely was the evolution of popular music (including the development of rock and roll), which led to the birth of the top 40 radio format. A top 40 show could be produced in a small studio at a local station with minimal staff. This displaced full-service network radio and hastened the end of the golden age of radio drama by 1962. (Radio as a broadcast medium survived, thanks in part to the proliferation of the transistor radio and permanent installations in vehicles, which made the medium far more portable than television.) Full-service stations that did not adopt top 40 or the mellower beautiful music or MOR formats eventually developed all-news radio in the mid-1960s.

Scripted radio comedy and drama in the vein of old-time radio has a limited presence on U.S. radio. Several radio theatre series are still in production in the United States, usually airing on Sunday nights. These include original series such as Imagination Theatre and a radio adaptation of The Twilight Zone TV series, as well as rerun compilations such as the popular daily series When Radio Was and USA Radio Network's Golden Age of Radio Theatre, and weekly programs such as The Big Broadcast on WAMU, hosted by Murray Horwitz. These shows usually air late at night or on weekends on small AM stations. Carl Amari's nationally syndicated radio show Hollywood 360 features five old-time radio episodes each week during its five-hour broadcast; the show is heard on more than 100 radio stations coast to coast and in 168 countries on American Forces Radio. Local rerun compilations are also heard, primarily on public radio stations. Sirius XM Radio maintains a full-time Radio Classics channel devoted to rebroadcasts of vintage radio shows.

Starting in 1974, Garrison Keillor, through his syndicated two-hour program A Prairie Home Companion, provided a living museum of the production, tone and listener's experience of this era of radio for several generations after its demise. Produced live in theaters throughout the country, using the sound effects and techniques of the era, it ran through 2016 with Keillor as host. The program included segments that were close renditions (in the form of parody) of specific genres of the era, including Westerns ("Dusty and Lefty, The Lives of the Cowboys"), detective procedurals ("Guy Noir, Private Eye") and even advertising, through fictional commercials. Keillor also wrote a novel, WLT: A Radio Romance, based on a radio station of this era, including a personally narrated audiobook version for the ultimate in verisimilitude. Upon Keillor's retirement, replacement host Chris Thile chose to reboot the show (renamed Live from Here after the syndicator cut ties with Keillor) and eliminated much of the format's old-time radio trappings; the show was ultimately canceled in 2020 due to financial and logistical problems.

Vintage shows and new audio productions in America are now more widely accessible from recordings or via satellite and web broadcasters than over conventional AM and FM radio. The National Audio Theatre Festival is a national organization and yearly conference that keeps the audio arts, especially audio drama, alive, and it continues to involve long-time voice actors and OTR veterans in its ranks.
Its predecessor, the Midwest Radio Theatre Workshop, was first hosted by Jim Jordan of Fibber McGee and Molly fame, and Norman Corwin advised the organization. One of the longest-running radio programs celebrating this era is The Golden Days of Radio, which Frank Bresee, who as a child actor played "Little Beaver" on the Red Ryder program, hosted for more than 50 years overall, including more than 20 years on the Armed Forces Radio Service.

One of the very few shows from the earlier era of radio still running is a Christian program entitled Unshackled! The weekly half-hour show, produced in Chicago by Pacific Garden Mission, has been broadcast continuously since 1950. The shows are created using techniques from the 1950s (including home-made sound effects) and are broadcast across the U.S. and around the world by thousands of radio stations.

Today, radio performers of the past appear at conventions that feature re-creations of classic shows, as well as music, memorabilia and historical panels. The largest of these events was the Friends of Old Time Radio Convention, held in Newark, New Jersey, which held its final convention in October 2011 after 36 years. Others include REPS in Seattle (June), SPERDVAC in California, the Cincinnati OTR & Nostalgia Convention (April), and the Mid-Atlantic Nostalgia Convention (September). Veterans of the Friends of Old Time Radio Convention, including chairperson Steven M. Lewis of The Gotham Radio Players, Maggie Thompson, publisher of the Comics Buyer's Guide, Craig Wichman of the audio drama troupe Quicksilver Audio Theater, and long-time FOTR publicist Sean Dougherty, launched a successor event, Celebrating Audio Theater – Old & New, scheduled for October 12–13, 2012. Radio dramas from the golden age are sometimes recreated as live stage performances at such events. One such group, led by director Daniel Smith, has been performing re-creations of old-time radio dramas at Fairfield University's Regina A. Quick Center for the Arts since 2000.

The 40th anniversary of what is widely considered the end of the old-time radio era (the final broadcasts of Yours Truly, Johnny Dollar and Suspense on September 30, 1962) was marked with a commentary on NPR's All Things Considered. A handful of radio programs from the old-time era remain in production, all from the genres of news, music, or religious broadcasting: the Grand Ole Opry (1925), Music and the Spoken Word (1929), The Lutheran Hour (1930), the CBS World News Roundup (1938), King Biscuit Time (1941) and the Renfro Valley Gatherin' (1943). Of those, all but the Opry maintain their original short-form length of 30 minutes or less. The Wheeling Jamboree counts an earlier program on a competing station as part of its history, tracing its lineage back to 1933. Western revival/comedy act Riders in the Sky produced the radio serial Riders Radio Theatre in the 1980s and 1990s and continues to provide sketch comedy on existing radio programs, including the Grand Ole Opry, Midnite Jamboree and WoodSongs Old-Time Radio Hour.

Elsewhere

Regular broadcasts of radio plays are also heard in, among other countries, Australia, Croatia, Estonia,[46] France, Germany, Ireland, Japan, New Zealand, Norway, Romania, and Sweden. In the United Kingdom, such scripted radio drama continues on BBC Radio 3 and (principally) BBC Radio 4, the second-most popular radio station in the country, as well as on the rerun channel BBC Radio 4 Extra, which is the seventh-most popular station there.