POPULARITY
Wherein we talk about a seminal Christmas classic, Nothing Lasts Forever by Roderick Thorp, and its movie adaptation Die Hard. We also talk about: whether this is a Christmas movie, 80's movie trope nostalgia, the inspiration for the book, the much darker ending in the book, favorite scenes, and other Christmas movies to check out this year.
Wherein we talk about the movie version of The Postman and we also talk about: restarting social media after the apocalypse, Kevin Costner and the female gaze, just rolling all the women into one character, Tom Petty as himself, Bill the pony - master swordsman, Giovanni Ribisi as himself, getting a statue made of you for the wrong thing, and Danielle saying her famous catchphrase.
Wherein we talk about the second half of The Postman and we also talk about: Olympic foley artists, if the Society of Cincinnatus was a real thing, emperor naming conventions, women in STEM roles, conmen and the Big Lie, and what to do with super soldiers after the war.
Wherein we talk about the first half of The Postman by David Brin and we also talk about: Patreon rewards (that no one wants), Predictions of the Future from the Past, reactions to the book, Deceased postmen as ultimate wingmen, Gordon constantly rolling for deception checks, Vague advice from powerful computers, and what might happen next.
Wherein we chat about our new book - The Postman by David Brin, the movie, principally by Kevin Costner, and we also talk about: which government functions we might restart after the apocalypse, Other Chris' ad algorithm, what kind of vibes we can expect from the book, the movie completely bombing, and the absolutely stacked film year that was 1997.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RTFB: California's AB 3211, published by Zvi on July 30, 2024 on LessWrong.

Some in the tech industry decided now was the time to raise alarm about AB 3211. As Dean Ball points out, there are a lot of bills out there. One must do triage.

Dean Ball: But SB 1047 is far from the only AI bill worth discussing. It's not even the only one of the dozens of AI bills in California worth discussing. Let's talk about AB 3211, the California Provenance, Authenticity, and Watermarking Standards Act, written by Assemblymember Buffy Wicks, who represents the East Bay.

SB 1047 is a carefully written bill that tries to maximize benefits and minimize costs. You can still quite reasonably disagree with the aims, philosophy or premise of the bill, or its execution details, and thus think its costs exceed its benefits. When people claim SB 1047 is made of crazy pills, they are attacking provisions not in the bill.

That is not how it usually goes. Most bills involving tech regulation that come before state legislatures are made of crazy pills, written by people in over their heads. There are people whose full-time job is essentially pointing out the latest bill that might break the internet in various ways, over and over, forever. They do a great and necessary service, and I do my best to forgive them the occasional false alarm. They deal with idiots, with bulls in china shops, on the daily. I rarely get the sense these noble warriors are having any fun.

AB 3211 unanimously passed the California assembly, and I started seeing bold claims about how bad it would be. Here was one of the more measured and detailed ones.

Dean Ball: The bill also requires every generative AI system to maintain a database with digital fingerprints for "any piece of potentially deceptive content" it produces. This would be a significant burden for the creator of any AI system. And it seems flatly impossible for the creators of open weight models to comply.

Under AB 3211, a chatbot would have to notify the user that it is a chatbot at the start of every conversation. The user would have to acknowledge this before the conversation could begin. In other words, AB 3211 could create the AI version of those annoying cookie notifications you get every time you visit a European website. …

AB 3211 mandates "maximally indelible watermarks," which it defines as "a watermark that is designed to be as difficult to remove as possible using state-of-the-art techniques and relevant industry standards."

So I decided to Read the Bill (RTFB). It's a bad bill, sir. A stunningly terrible bill. How did it unanimously pass the California assembly? My current model is:

1. There are some committee chairs and others that can veto procedural progress.
2. Most of the members will vote for pretty much anything.
3. They are counting on Newsom to evaluate and if needed veto.
4. So California only sort of has a functioning legislative branch, at best.
5. Thus when bills pass like this, it means a lot less than you might think.

Yet everyone stays there, despite everything. There really is a lot of ruin in that state. Time to read the bill.

Read The Bill (RTFB)

It's short - the bottom half of the page is all deleted text. Section 1 is rhetorical declarations. GenAI can produce inauthentic images, they need to be clearly disclosed and labeled, or various bad things could happen. That sounds like a job for California, which should require creators to provide tools and platforms to provide labels. So we all can remain 'safe and informed.' Oh no.

Section 2 22949.90 provides some definitions. Most are standard. 
These aren't: (c) "Authentic content" means images, videos, audio, or text created by human beings without any modifications or with only minor modifications that do not lead to significant changes to the perceived contents or meaning of the cont...
Wherein we talk about the fourth year of our little podcast and the best things we read and watched and also evaluate last year's goals and set some new ones. An automated transcript is available at this link
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Schumer Report on AI (RTFB), published by Zvi on May 25, 2024 on LessWrong.

Or at least, Read the Report (RTFR). There is no substitute. This is not strictly a bill, but it is important. The introduction kicks off balancing upside and avoiding downside, utility and risk. This will be a common theme, with a very strong 'why not both?' vibe.

Early in the 118th Congress, we were brought together by a shared recognition of the profound changes artificial intelligence (AI) could bring to our world: AI's capacity to revolutionize the realms of science, medicine, agriculture, and beyond; the exceptional benefits that a flourishing AI ecosystem could offer our economy and our productivity; and AI's ability to radically alter human capacity and knowledge. At the same time, we each recognized the potential risks AI could present, including altering our workforce in the short-term and long-term, raising questions about the application of existing laws in an AI-enabled world, changing the dynamics of our national security, and raising the threat of potential doomsday scenarios. This led to the formation of our Bipartisan Senate AI Working Group ("AI Working Group").

They did their work over nine forums:

1. Inaugural Forum
2. Supporting U.S. Innovation in AI
3. AI and the Workforce
4. High Impact Uses of AI
5. Elections and Democracy
6. Privacy and Liability
7. Transparency, Explainability, Intellectual Property, and Copyright
8. Safeguarding Against AI Risks
9. National Security

Existential risks were always given relatively minor time, with it being a topic for at most a subset of the final two forums. By contrast, mundane downsides and upsides were each given three full forums. This report was about response to AI across a broad spectrum.

The Big Spend

They lead with a proposal to spend 'at least' $32 billion a year on 'AI innovation.' No, there is no plan on how to pay for that. In this case I do not think one is needed. I would expect any reasonable implementation of that to pay for itself via economic growth. The downsides are tail risks and mundane harms, but I wouldn't worry about the budget. If anything, AI's arrival is a reason to be very not freaked out about the budget. Official projections are baking in almost no economic growth or productivity impacts.

They ask that this money be allocated via a method called emergency appropriations. This is part of our government's longstanding way of using the word 'emergency.' We are going to have to get used to this when it comes to AI. Events in AI are going to be happening well beyond the 'non-emergency' speed of our government and especially of Congress, both opportunities and risks. We will have opportunities that appear and compound quickly, projects that need our support. We will have stupid laws and rules, both that were already stupid or are rendered stupid, that need to be fixed. Risks and threats, not only catastrophic or existential risks but also mundane risks and enemy actions, will arise far faster than our process can pass laws, draft regulatory rules with extended comment periods and follow all of our procedures.

In this case? It is May. The fiscal year starts in October. I want to say, hold your damn horses. But also, you think Congress is passing a budget this year? We will be lucky to get a continuing resolution. Permanent emergency. Sigh.

What matters more is, what do they propose to do with all this money? A lot of things. And it does not say how much money is going where. If I was going to ask for a long list of things that adds up to $32 billion, I would say which things were costing how much money. But hey. 
Instead, it looks like he took the number from NSCAI, and then created a laundry list of things he wanted, without bothering to create a budget of any kind? It also seems like they took the origin...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RTFB: On the New Proposed CAIP AI Bill, published by Zvi on April 10, 2024 on LessWrong.

A New Bill Offer Has Arrived

Center for AI Policy proposes a concrete actual model bill for us to look at. Here was their announcement:

WASHINGTON - April 9, 2024 - To ensure a future where artificial intelligence (AI) is safe for society, the Center for AI Policy (CAIP) today announced its proposal for the "Responsible Advanced Artificial Intelligence Act of 2024." This sweeping model legislation establishes a comprehensive framework for regulating advanced AI systems, championing public safety, and fostering technological innovation with a strong sense of ethical responsibility.

"This model legislation is creating a safety net for the digital age," said Jason Green-Lowe, Executive Director of CAIP, "to ensure that exciting advancements in AI are not overwhelmed by the risks they pose."

The "Responsible Advanced Artificial Intelligence Act of 2024" is model legislation that contains provisions for requiring that AI be developed safely, as well as requirements on permitting, hardware monitoring, civil liability reform, the formation of a dedicated federal government office, and instructions for emergency powers. The key provisions of the model legislation include:

1. Establishment of the Frontier Artificial Intelligence Systems Administration to regulate AI systems posing potential risks.
2. Definitions of critical terms such as "frontier AI system," "general-purpose AI," and risk classification levels.
3. Provisions for hardware monitoring, analysis, and reporting of AI systems.
4. Civil and criminal liability measures for non-compliance or misuse of AI systems.
5. Emergency powers for the administration to address imminent AI threats.
6. Whistleblower protection measures for reporting concerns or violations.

The model legislation intends to provide a regulatory framework for the responsible development and deployment of advanced AI systems, mitigating potential risks to public safety, national security, and ethical considerations.

"As leading AI developers have acknowledged, private AI companies lack the right incentives to address this risk fully," said Jason Green-Lowe, Executive Director of CAIP. "Therefore, for advanced AI development to be safe, federal legislation must be passed to monitor and regulate the use of the modern capabilities of frontier AI and, where necessary, the government must be prepared to intervene rapidly in an AI-related emergency."

Green-Lowe envisions a world where "AI is safe enough that we can enjoy its benefits without undermining humanity's future." The model legislation will mitigate potential risks while fostering an environment where technological innovation can flourish without compromising national security, public safety, or ethical standards. "CAIP is committed to collaborating with responsible stakeholders to develop effective legislation that governs the development and deployment of advanced AI systems. Our door is open."

I discovered this via Cato's Will Duffield, whose statement was:

Will Duffield: I know these AI folks are pretty new to policy, but this proposal is an outlandish, unprecedented, and abjectly unconstitutional system of prior restraint.

To which my response was essentially: I bet he's from Cato or Reason. Yep, Cato. Sir, this is a Wendy's. Wolf.

We need people who will warn us when bills are unconstitutional, unworkable, unreasonable or simply deeply unwise, and who are well calibrated in their judgment and their speech on these questions. 
I want someone who will tell me 'Bill 1001 is unconstitutional and would get laughed out of court, Bill 1002 has questionable constitutional muster in practice and unconstitutional in theory, we would throw out Bill 1003 but it will stand up these days because SCOTUS thinks the commerc...
Wherein we talk about Paul's Return to Arrakis as he grapples with survival and embracing or rejecting revenge for his family and we also talk about: banning TikTok, European holidays, performance enhancement for darts, designing a messiah, infamous popcorn buckets and our own terrible purpose.
Wherein we talk about one of the most banned books in America - the classic coming of age tale, Are You There, God? It's Me, Margaret by Judy Blume and we also talk about: The benefits of being a regular at McDonald's, 50 years of Hip Hop, first experiences with periods, the death penalty, how Judy Blume is cool AF, Jewish sleep away camp, trying out churches, and make our parties.
Wherein we talk about THE Christmas Adaptation - A Christmas Carol by Charles Dickens and we also talk about: where our ghosts of Christmas past would take us, other Hallmark Movies Danielle watched, shifting the idea of how to celebrate Christmas, why was Scrooge such a dick anyway?, how hard is it to make pudding?, and the various movie versions that we've enjoyed.
Wherein we talk about not one, but two John Carpenter movies - The Thing and They Live, based on Who Goes There? by John W. Campbell Jr. and Eight O'Clock in the Morning by Ray Nelson, respectively, and we also discuss: if Impossible Burgers are Kosher, selling out your species for money, the best fight scene in cinema history, dunking on 2001 yet again, box office success via scheduling, hating on aliens for being ugly, and some other movies to check out if you liked these. An automated transcript is available at this link
Wherein we talk about Kenneth Branagh's latest Hercule Poirot movie - A Haunting in Venice, the Agatha Christie book it was mostly based on - Hallowe'en Party and we also discuss: options for Shofar horns, what Travis can remember from the book, mustaches and Dutch angles, Historic quarantine techniques, comfort shows, and potential business ventures in Venice. An automated transcript is available at this link
Wherein we talk about the movie version of Starship Troopers and we also discuss: medical treatment biases, Danielle's thoughts on bugs, Football in the future, Neil Patrick Harris' evolving wardrobe, ineffectual training videos, and some bugs we did like.
Wherein we talk about the second half of Starship Troopers and we also discuss: how the government in this book was established, Parallels between bug and military brains, Fighting like a gentleman in the shower, how many lives justify a war, and the Dark Forest theory of alien contact.
Wherein we talk about the first 9 chapters of Starship Troopers and we also discuss: earning citizenship via military service, a modern vision of the military from the 1960s, using force to discourage using force, biblical advice not to follow, and saying 'thank you' as incentive to do jobs we don't want to.
Wherein we introduce our new book, Starship Troopers by Robert Heinlein, and we also discuss: microchip bit compatibility, Old Man moments at the post office, thoughts on bugs, kids toys for R Rated movies, was Heinlein a fascist?, filming in recognizable cities where the story doesn't actually take place, and how Danielle might react to the movie.
Part 2: The Fellas and Friends dive deeper into their thoughts on the future of trade jobs, the things you learn, and why quality beats quantity. Short jokes... how many can be slipped into the conversation?
Wherein we talk about the doomed Hulu adaptation of Kindred and we also discuss: Ancestry, Bluey, linguistic surveys, major changes in the show, TV accents, practical reminders about searches and warrants, if the show pulled its punches, and what we might've seen in a season 2.
Wherein we talk about the second section of Kindred and we also discuss: Marvel Fatigue, things that lasted longer than the Confederacy, condescending doctors, the miracle of the ballpoint pen, abusive behavior patterns, when it all just becomes too much (for Dana), and a surprise cameo.
Wherein we talk about the first section of Kindred and we also discuss: bygone bookstores and the death of memory, Reactions to the book, Casual brutality in a casually brutal time, Time travel pranks, Biblical names as plot hints, and the cunning use of not-yet-ancient coins.
Wherein we talk about our new book, Kindred by Octavia Butler and we also discuss: Go Karts, Alumni benefits, movies we watched over break, our prior engagement with Octavia Butler and her works, her life/background and the inspiration for this book.
Wherein we talk about the books we read and movies we watched in our third year, we give ourselves a performance review, and we announce a special giveaway.
Wherein we unwrap our early Xmas / mid-Hanukkah gift, The Noel Diary by Richard Paul Evans, and we also discuss: age gap issues, Other Chris's notes, who these books are even for, filler food, if this is even a Christmas book, road trips as a seduction technique, AirBnB'ing your movie sets, and connections to Dexter.
Wherein we talk about the movie, Francis Ford Coppola's Bram Stoker's Dracula and we also discuss: wedding talk, abandoned malls, favorite movie vampires, failed dark universe, the movie being all sexed up, cool costumes, getting married for real for a movie scene, and a more Dracula-y ending.
Wherein we talk about the final third of Dracula by Bram Stoker and we also discuss: Origins of Trick or Treating, other ways they could've organized the story, heavenly fair treatment for non-consenting vampires, Book-to-book connections, exciting slang from the Americas, and where Dracula messed up his escape plan.
Wherein we talk about the middle third of Dracula by Bram Stoker and we also discuss: alternate costumes for Ren Faires, taking your sweet time in telling very important news to people, mourning techniques for men, decapitation as kindness, and creative uses for communion bread.
Wherein we talk about the first third of Dracula by Bram Stoker and we also discuss: approaches to learning math, already knowing about vampires as a spoiler, extensive contemporaneous note taking and letter writing, and MLM schemes for eating flies.
Wherein we talk about our new book, Dracula by Bram Stoker and we also discuss: St Louis claims to fame, why vampires are so interesting, the 125th anniversary of the book, who is Dracula based on anyway?, the billion adaptations of the book, and the 30th anniversary of this particular adaptation.
Wherein we chat about the movie Valerian and the City of A Thousand Planets and we also discuss: Everwood and other CW/WB shows getting shafted during mergers, hygiene implications of animal-based duplication, the lead actors and shifting character traits, Space Bono and Jell-o Rihanna and what's in a name anyway?
Wherein we chat about the first four story arcs of the comic Valerian and Laureline and we also discuss: recent travels and new kittens, the original format of the comic and how we would've coped having to read it weekly, time travel rules in this universe, specific reasons why Laureline is the best, presidential orbital escape plans, French kissing standards and things that happen off screen, and various Star Trek maneuvers.
Wherein we chat about this season's book (RTFB's first comic book), the long-running French comic classic Valerian and Laureline, which was later adapted into the movie Valerian and the City of a Thousand Planets, and we also discuss: testing the validity of religions via Highlander combat, Zambian Heroes Day, Boy Scout camps, and some of our other favorite cult classics.
Wherein we review Agatha Christie's Death on the Nile and Kenneth Branagh's recent movie adaptation and we also discuss: The life and times of Agatha Christie (and Dr Who), movie making as a vacation package, reactions to the Egyptian sets from someone who's actually been there, sharing an "intimate" dance with your brand new boss, if stalking is actually a crime, and why Viking river cruises are just the best.
Wherein we talk about the 2011 US movie version of The Girl With the Dragon Tattoo and we also discuss: How to know when you have strep throat, tattoo chat, knowing 'whodunnit' based on the cast of actors, 'quitting' smoking, other book clubs, horrifying coffee mugs, and 'good' hacking scenes in movies.
Wherein we talk about the rest of The Girl With the Dragon Tattoo and we also discuss: Murder as a family pastime, Quitting while you're ahead, Walking, just, straight into a dungeon, what DID happen to Harriet?, nonsense hacking, and a goodnight kiss of suffering.
Wherein we talk about the middle section of The Girl With the Dragon Tattoo and we also discuss: Achieving grimdark, solidifying our mental maps of this place, more modern technology, household revenge methods, different ways tattoos can be mementos, the vital importance of newspaper archives, and breaking a cold case via bible study.
Wherein we talk about the first ten chapters of The Girl With the Dragon Tattoo and we also discuss: Kidz Bop, Reactions to the reading section, needing to draw ourselves a map, the Good Way to write a whodunit, luxury jails in Scandinavia, open relationships with your married coworkers, sexual harassment from government appointed custodians, and the health of the different relationships between men and women in the book so far.
Wherein we introduce our new book - The Girl With the Dragon Tattoo by Stieg Larsson and we also discuss: Nordic Noir, crazy tattoos we might or might not get, some details about the book and movies, and general expectations.
Wherein we talk about our favorite moments from the show's sophomore year, we give ourselves a performance review on last year's goals, and set new ones for year 3.
Dane Mizutani of the Pioneer Press - and a cool golf podcast called Bunker 2 Bunker which you should all definitely check out here - https://podcasts.apple.com/us/podcast/bunker-2-bunker/id1557287821 - drops in to break down the first two games of the Wild/Vegas series. And Brandon makes himself edit… himself. (recorded 5-19-21)
Dane stops by to run down the suddenly intriguing Wild. Plus he gives us a reco on his favorite eats in the TC. (recorded 3-3-21)
Hockey and Pizza January wraps up with none other than Kevin Falness who brings all the energy, info, and stories that you could possibly ask for in a Wild-centric podcast. Supported by Manscaped (https://www.manscaped.com/) Use Promo Code “MSFU” for 20% off and FREE shipping!
Pizza and Hockey January week 4 features Dane Mizutani of the Pioneer Press. We discuss his background, and a TON of Wild stuff. Supported by Manscaped (https://www.manscaped.com/) Use Promo Code “MSFU” for 20% off and FREE shipping!
Hockey and Pizza January rolls on as Dave Schwartz of KARE 11 is back, talking tons of Wild, and previewing NFL conference championship games. Supported by Manscaped (https://www.manscaped.com/) Use Promo Code “MSFU” for 20% off and FREE shipping!
Peter Campbell, chef and owner of Red Wagon Pizza Company stops by to talk about how he got into the restaurant business, his philosophy on what his actual product is, and what running a restaurant during a pandemic is like. Really cool conversation about a food everyone loves. Supported by Manscaped (https://www.manscaped.com/) Use Promo Code “MSFU” for 20% off and FREE shipping!
Hockey season is here, and Kevin Gorg stopped by to break it all down. Tons and tons of Wild stuff, plus World Juniors recap. Supported by Manscaped (https://www.manscaped.com/) - Promo Code “MSFU” for 20% off and FREE shipping!