NFF Focustalk: Fearless Sounds in Game and Film. Two renowned experts, Watson Wu and Glen Gathard, share how innovative, original, and daring sound can completely transform a story. From explosive chases to haunting silences: discover how soundscapes don't just support a narrative, but challenge, deepen, and surprise. With credits including Baby Driver, Mufasa: The Lion King, Bohemian Rhapsody, Peaky Blinders, Ex Machina, and Death Stranding 2, Watson and Glen demonstrate how to use sound as a primary storytelling medium in film, TV, and games.
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the massive technological shifts driven by generative AI in 2025 and what you must plan for in 2026. You will learn which foundational frameworks ensure your organization can strategically adapt to rapid technological change. You’ll discover how to overcome the critical communication barriers and resistance emerging among teams adopting these new tools. You will understand why increasing machine intelligence makes human critical thinking and emotional skills more valuable than ever. You’ll see the unexpected primary use case of large language models and identify the key metrics you must watch in the coming year for economic impact. Watch now to prepare your strategy for navigating the AI revolution sustainably. Watch the video here: Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-2025-year-in-review.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! [podcastsponsor] Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn: In this week’s *In-Ear Insights*. This is the last episode of *In-Ear Insights* for 2025. We are out with the old. We’ll be back in January for new episodes the week of January 5th. So, Katie, let’s talk about the year that was and all the crazy things that happened in the year. And so what you’re thinking about, particularly from the perspective of all things AI, all things data and analytics—how was 2025 for you? Katie Robbert: What’s funny about that is I feel like for me personally, not a lot changed. And the reason I feel like I can say that is because a lot of what I focus on is foundational, and it doesn’t really matter what fancy, shiny new technology is happening. So I really try to focus on making sure the things that I do every day can adapt to new technology. And again, of course, that’s probably the most concrete example of that is the 5P framework: Purpose, People, Process, Platform for Performance. It doesn’t matter what the technology is. This is where I’m always going to ground myself in this framework so that if AI comes along or shiny object number 2 comes along, I can adapt because it’s still about primarily, what are we doing? So asking the right questions. The things that did change were I saw more of a need this year, not in general, but just this year, for people to understand how to connect with other people. And not only in a personal sense, but in a professional sense of my team needs to adopt AI or they need to adopt this new technology. I don’t know how to reach them. I don’t know where to start. I don’t know. I’m telling them things. Nothing’s working. And I feel like the technology of today, which is generative AI, is creating more barriers to communication than it is opening up communication channels. And so that’s a lot of where my head has been: how to help people move past those barriers to make sure that they’re still connecting with their teams. 
And it’s not so much that the technology is just a firewall between people, but it’s the when you start to get into the human emotion of “I’m afraid to use this,” or “I’m hesitant to use this,” or “I’m resistant to use this,” and you have people on two different sides of the conversation—how do you help them meet in the middle? Which is really where I’ve been focused, which, to be fair, is not a new problem: new tech, old problems. But with generative AI, which is no longer a fad—it’s not going away—people are like, “Oh, what do you mean? I actually have to figure this out now.” Okay, so I guess that’s what I mean. That’s where my head has been this year: helping people navigate that particular digital disruption, that tech disruption, versus a different kind of tech disruption. Christopher S. Penn: And if you had to—I know I personally always hate this question—if you had to boil that down to a couple of first principles of the things that are pretty universal from what you’ve had to tell people this year, what would those first principles be? Katie Robbert: Make sure you’re clear on your purpose. What is the problem you’re trying to solve? I think with technology that feels all-consuming, generative AI. We tend to feel like, “Oh, I just have to use it. Everybody else is using it.” Whereas things that have a discrete function. An email server, do I need to use it? Am I sending email? No. So I don’t need an email server. It’s just another piece of technology. We’re not treating generative AI like another piece of technology. We’re treating it like a lifestyle, we’re treating it like a culture, we’re treating it like the backbone of our organization, when really it’s just tech. And so I think it comes down to one: What is the question you’re trying to answer? What is the problem you’re trying to solve? Why do you need to use this in the first place? How is it going to enhance? And two: Are you clear on your goals? Are you clear on your vision? Which relates back to number 1. So those are really the two things that have come up the most: What’s the problem you’re trying to solve by using generative AI? And a lot of times it’s, “I don’t want to fall behind,” which is a valid problem, but it’s not the right problem to solve with generative AI. Christopher S. Penn: I would imagine. Probably part of that has to do with what you see from very credible studies coming out about it. The one that I know we’ve referenced multiple times is the 3-year study from Wharton Business School where, in Year 3 (which is 2025—this came out in October of this year), the line that caught everyone’s attention was at the bottom. Here it says 3 out of 4 leaders see positive returns on Gen AI investments, and 4 out of 5 leaders in enterprises see these investments paying off in a couple of years. And the usage levels. Again, going back to what you were saying about people feeling left behind, within enterprises, 82% using it weekly, 46% using it daily, and 72% formally measuring the ROI on it in some capacity and seeing those good results from it. Katie Robbert: But there’s a lot there that you just said that’s not happening universally. So measuring ROI consistently and in a methodical way, employees actually using these tools in the way that they’re intended, and leadership having a clear vision of what it’s intended to do in terms of productivity. Those are all things that sound good on paper but are not actually happening in real-life practice. 
We talk with our peers, we talk with our clients, and the chief complaint that we get is, “We have all these resources that we created, but nobody’s using them, nobody’s adopting this,” or, “They’re using generative AI, but not the way that I want them to.” So how do you measure that for efficiency? How do you measure that for productivity? So I look at studies like that and I’m like, “Yeah, that’s more of an idealistic view where everything’s going right, but in the real world, it’s very messy.” Christopher S. Penn: And we know, at least in some capacity, how those are happening. So this comes from Stanford—this was from August—where generative AI is deployed within organizations. We are seeing dramatic headcount reductions, particularly for junior people in their careers, people 22 to 25. And this is a really well-done study because you can see the blue line there is those early career folks, how not just hiring, but overall headcount is diminishing rapidly. And they went on to say, for professions where generative AI really isn’t part of it, like stock clerks, health aides, you do not see those rapid declines. The one that we care about, because our audience is marketing and sales: you can see there’s a substantial reduction in the amount of headcount that firms are carrying in this area. So that productivity increase is coming at the expense of those jobs, those seats. Katie Robbert: Which is interesting because that’s something that we saw immediately with the rollout of generative AI. People are like, “Oh great, this can write blog posts for me. I don’t need my stable of writers.” But then they’re like, “Oh, it’s writing mediocre, uninteresting blog posts for me, but I’ve already fired all of my writers and none of them want to come back. So I am going to ask the people who are still here to pick up the slack on that.” And then those people are going to burn out and leave. So, yeah, if you look at the chart, statistically, they’re reducing headcount. If you dig into why they’re reducing headcount, it’s not for the right reasons. You have these big leaders, Sam Altman and other people, who are talking about, “We did all these amazing things, and I started this billion-dollar company with one employee. It’s just me.” And guess what? That is not the rule. That is the exception. And there’s a lot that they’re not telling you about what’s actually happening behind the scenes. Because that one person who’s managing all the machines is probably not sleeping. They’re probably taking some sort of an upper to stay awake to keep up with whatever the demand is for the company that they’re creating. You want to talk about true hustle culture? That’s it. And it is not something that I would recommend to anyone. It’s not worth it. So when we talk about these companies that are finding productivity, reducing headcount, increasing revenue, what they’re not doing is digging into why that’s happening. And I would guarantee that it’s not all on the up and up—it’s not the healthy version of that. Christopher S. Penn: Oh, we know that for sure. One of the big work trends this year that came out of Chinese AI labs, which Silicon Valley is scrambling to impose upon their employees, is the 996 culture: 9 a.m. to 9 p.m., six days a week. Katie Robbert: I was like, “Nope.” I was like, “Why?” You’re never going to get me to buy into that. Christopher S. Penn: Well, I certainly don’t want to either. Although that’s about what I work anyway. But half of my work is fun, so.
Katie Robbert: Well, yeah. So let the record show I do not ask Chris to work those hours. That is not a requirement. He is choosing, as a person with his own faculties, to say, “This is what I want to do.” So that is not a mandate on him. Christopher S. Penn: Yes—the work that I do is also my hobby. But what people forget to take into account is the cultural differences too. And there are also macro things that are different that make that even less sustainable in Western cultures than it is in Chinese culture. But looking back at the year from a technological perspective, one of the things that stunned me was how we forget just how smart these things have gotten in just one year. There’s an exam that was built in January of this year called Humanity’s Last Exam—it’s a very challenging exam. I think I have a sample question. Yeah, here are 2 sample questions. I don’t even know what these questions mean. So my score on this exam would be a 0 because I can’t even tell what it’s asking. Here’s a thermal pericyclic cascade. Provide your answer in this format. Here’s some Hebrew. Identify closed and open syllables. I look at this and I can’t even multiple-choice guess it. I don’t know what it is. At the beginning of the year, the models at the time—OpenAI’s GPT-4o, Claude 3 Opus, Google Gemini Pro 2, DeepSeek V3—all scored around 5%. They just bombed the exam. Everybody bombed it. Granted, they scored 5% more than I would have scored on it, but they basically bombed the exam. In just 12 months, we’ve seen them go from 5% to 26%—roughly a 5x increase. Gemini going from 6.8% to 37%—roughly a 5x improvement. Claude going from 3% to 28%—roughly a 9x improvement. These are huge leaps in intelligence for these models within a single calendar year. Katie Robbert: Sure. But listen, I always say I might be an N of 1. I’m not impressed by that because how often do I need to know the answers to those particular questions that you just shared? In the profession that I am in, specifically, there’s an old saying—I don’t know how old, or maybe it’s whatever—there’s a difference between book smart and street smart. So you’re really talking about IQ versus EQ, and these machines don’t have EQ. It’s not anything that they’re ever going to really be able to master the way that humans do. When I say this, I’m talking about intellectual intelligence and emotional intelligence. And so if you’ve seen any of the sci-fi movies, *Her* or *Ex Machina*, you’re led to believe that these machines are going to simulate humans and be empathetic and sympathetic. We’ve already seen the news stories of people who are getting married to their generative AI system. That’s happening. I’m not brushing over it, I’m acknowledging it. But in reality, I am not concerned about how smart these machines get in terms of what you can look up in a dictionary or what you can find in an encyclopedia—that’s fine. I’m happy to let these machines do that all day long. It’s going to save me time when I’m trying to understand the last consonant of every word in the Hebrew alphabet since the dawn of time. Sure. Happy to let the machine do that. What these machines don’t know is what I know in my life experience. And so why am I asking for that information? What am I going to do with that information? How am I going to interpret that information? How am I going to share that information?
Those are the things that the machine is never going to replace me in my role to do. So I say, great, I’m happy to let the machines get as smart as they want to get. It saves me time having to research those things. I was on a train last week, and there were 2 women sitting behind me, and they were talking about generative AI. You can go anywhere and someone talks about generative AI. One of the women was talking about how she had recently hired a research assistant, and she had given her 3 or 4 academic papers and said, “I want to know your thoughts on these.” And so what the research assistant gave back was what generative AI said were the summaries of each of these papers. And so the researcher said, “No, I want to know your thoughts on these research papers.” She’s like, “Well, those are the summaries. That’s what generative AI gave me.” She’s like, “Great, but I need you to read them and do the work.” And so we’ve talked about this in previous episodes. What humans will have over generative AI, should they choose to use it, is critical thinking. And you can find those episodes of the podcast on our YouTube channel at TrustInsights.ai/YouTube. Find our podcast playlist. And it just struck me that it doesn’t matter what industry you’re in, people are using generative AI to replace their own thinking. And those are the people who are going to find themselves to the right and down on those graphs—the ones being replaced. So I’ve sort of gone on a little bit of a rant. Point is, I’m happy to let the machines be smarter than me and know more than me about things in the world. I’m the one who chooses how to use it. I’m the one who has to do the critical thinking. And that’s not going to be replaced. Christopher S. Penn: Yeah—but you have to make that a conscious choice. One of the things that we did see this year, which I find alarming, is the number of people who have outsourced their executive function to machines to say, “Hey, do it this way.” You can go on X, formerly known as Twitter, and literally see people who are supposedly thought leaders in their profession just saying, “ChatGPT told me this. And so you’re wrong.” And I’m like, “In a very literal sense, you have lost your mind.” And it’s not just one group of people. When you look at the *Harvard Business Review* use cases—this was from April of this year—the number 1 use case for these tools is companionship. Whether or not we think it’s a good idea, and to your point, Katie, they don’t have empathy, they don’t have emotional intelligence, but they emulate it so well now that people use them for those things. And when we look back at the year that was, the fact that this is the number 1 use case for these tools is shocking to me. Katie Robbert: Separately—not when I was on a train, but when I was sitting at a bar having lunch—my husband and I were talking to the bartender, and he was like, “Oh, what do you do for a living?” So I told him, and he goes, “I’ve been using ChatGPT a lot. It’s the only one that listens to me.” And it sort of struck me as, “Oh.” It wasn’t a concerning conversation in the sense that he was under the impression that it was a real human. But he was like, “Yeah, I’ll ask it a question.” And the response is, “Hey, that’s a great question. Let me help you.” And even just those small things—it saying, “That’s a really thoughtful question.
That’s a great way to think about it.” That kind of positive reinforcement is the danger for people who are not getting that elsewhere. And I’m not a therapist. I’m not looking to fix this. I’m not giving my opinions of what people should and shouldn’t do. I’m observing. What I’m seeing is that these tools, these systems, these pieces of software are being designed to be positive, being designed to say, “Great question, thank you for asking,” or, “I hope you have a great day. I hope this information is really helpful.” And it’s just those little things that are leading people down that road of, “Oh, this—it knows me, it’s listening to me.” And so I understand. I’m fully aware of the dangers of that. Yeah. Christopher S. Penn: And that’s such a big macro question that I don’t think anybody has the answer for: What do you do when the machine is a better human than the humans you’re surrounded by? Katie Robbert: I feel like that’s subjective, but I understand what you’re asking, and I don’t know the answer to that question. But that again goes back to the sci-fi movies—*Her* or *Ex Machina*, which was sort of the premise of those—or the one with Haley Joel Osment, which was really creepy. *Artificial Intelligence*, I think, is what it was called. But anyway. People are seeking connection. As humans, we’re always seeking connection. Here’s the thing, and I don’t want to go too far down the rabbit hole, but a lot of people have been finding connection. So let’s say we go back to pen pals—people they’d never met. So that’s a connection. Those are people they had never met, people they don’t interact with, but they had a connection with someone who was a pen pal. Then you have things like chat rooms. So the AOL chat rooms—A/S/L. We all know what that means, if you’re of that generation. People were finding connections with strangers that they had never met. Then you move from those chat rooms to things like these communities—Discord and Slack and everything—and people are finding connections. This is just another version of that where we’re trying to find connections to other humans. Christopher S. Penn: Yes. Or just finding connections, period. Katie Robbert: That’s what I mean. You’re trying to find a connection to something. Some people rescue animals, and that’s their connection. Some people connect with nature. Other people, they’re connecting with these machines. I’m not passing judgment on that. I think wherever you find connection is where you find connection. The risk is going so far down that road that you can’t then live in reality. You know, *Avatar* just released another version. I remember when that first version of the movie *Avatar* came out, there were a lot of people very upset that they couldn’t live in that reality. And it’s just—listen, I forgot why we’re doing this podcast because now we’ve gone so far off the rails talking about technology. But I think to your point, what’s happened with generative AI in 2025: It’s getting very smart. It’s getting very good at emulating that human experience, and I don’t think that’s slowing down anytime soon. So my caution for people is to find something outside of technology that grounds you so that when you are using it, you can tell what’s real from what isn’t. Christopher S. Penn: Yeah.
One of the things—and this is a complete nerd thing—that I do, particularly when I’m using local models, is I will keep the console up that shows the computations running as a reminder that the words appearing on the screen are not made by a human; they’re made by a machine. And you can see the machinery working, and it’s kind of like knowing how the magic trick is done. You watch it go, “Oh, it’s just a token probability machine.” None of what’s appearing on screen is thought through by an organic intelligence. So what are you looking forward to, or what do you have your eyes on in 2026, in general for Trust Insights or in particular the field of AI? Katie Robbert: I think now that some of the excitement over generative AI is wearing off, what I’m looking forward to in 2026 for Trust Insights specifically is helping more organizations figure out how AI fits into their overall organization, where there’s real opportunity versus, “Hey, it can write a blog post,” or, “Hey, it can do these couple of things, and I built a Gem or something”—really helping people integrate it in a thoughtful way versus the short-term thinking kind of way. So I’m very much looking forward to that. I’m seeing more and more need for that, and I think that we are well suited to help people through our courses, through our consulting, through our workshops. We’re ready. We are ready to help people integrate technology into their organization in a thoughtful, sustainable way, so that you’re not going to go, “Hey, we hired these guys and nothing happened.” We will make the magic happen. You just need to let us do it. So I’m very much looking forward to that. I’ve personally been using generative AI to sort of connect dots in my medical history. So I’m very excited just about the prospect of being able to be more well-informed. When I go into a doctor’s office, I can say, “I’m not a doctor, I’m not a researcher, but I know enough about my own history to say these are all of the things. And when I put them together, this is the picture that I’m getting. Can you help me come to faster conclusions?” I think that is an exciting use of generative AI, obviously under a doctor’s supervision. I’m not a doctor, but I know enough about how to research with it to put pieces together. So I think that there’s a lot of good that’s going to come from it. I think it’s becoming more accessible to people. So I think that those are all positive things. Christopher S. Penn: The thing—if there’s one thing I would recommend that people keep an eye on—is a benchmark from the Center for AI Safety called RLI, the Remote Labor Index. And this is a benchmark test where AI models and their agents are given a task that typically a remote worker would do. So, for example, “Here’s a blueprint. Make an architectural rendering from it. Here’s a data set. Make a fancy dashboard. Make a video game. Make a 3D rendering of this product from the specifications.” Difficult tasks—the index says the average deliverable costs thousands of dollars and hundreds of hours of time. Right now, the state of the art in generative AI—and this was last month’s models—succeeded 2.1% of the time at most. It was not great. Now, granted, if your business were to lose 2.1% of its billable deliverables, that might be enough to make the difference between a good year and a bad year. But this is the index you watch because all the other benchmarks, like you said, Katie, are measuring book smart.
This is measuring: Was the work at a quality level that would be accepted as paid, commissioned work? And what we saw with Humanity’s Last Exam this year is that models went from face-rolling moron, 3% scores, to 25%, 30%, 35% within a year. If this index of, “Hey, I can do quality commissioned work,” goes from 2.1% to 10%, 15%, 20%, that is economic value. That is work that machines are doing that humans might not be. And that also means that is revenue that is going elsewhere. So to me, this is the one thing—if there’s one thing I was going to pay attention to in 2026—it would be watching measures like this that measure real-world things that you would ask a human being to do, to see how the tools are advancing. Katie Robbert: Right. The tools are going to advance, and people are going to want to jump on them. But I feel like when generative AI first hit the market, the analogy that I made is people shopping at the big box stores versus people shopping at the small businesses that are still doing things in a handmade fashion. There’s room for both. And so I think that you don’t have to necessarily pick one or the other. You can do a bit of both. And I think that for me is the advice that I would give to people moving into 2026: You can use generative AI or not, or use it a little bit, or use it a lot. There’s no hard and fast rule that says you have to do it a certain way. So when clients come to us, or when we talk about it through our content, that’s really the message that I’m trying to get across: “Yeah, there’s a lot that you can do with it, but you don’t have to do it that way.” And so that is what I want people to take away, at least for me, moving into 2026: it’s not going anywhere, but that doesn’t mean you have to buy into it. You don’t have to be all in on it. Just because all of your friends are running ultramarathons doesn’t mean you have to. I will absolutely not be doing that for a variety of reasons. But that’s really what it comes down to: You have to make those choices for yourself. Yes, it’s going to be everywhere. Yes, it’s accessible, but you don’t have to use it. Christopher S. Penn: Exactly. And if I were to give people one piece of advice about where to focus their study time in 2026, besides the fundamentals—because the fundamentals aren’t changing; in fact, the fundamentals are more important than ever to get things like prompting and good data right—the analogy is that AI is sort of the engine, and you need the rest of the car. And 2026 is when you’re going to look at things like agentic frameworks and harnesses and all the fancy techno terms for this. You are going to need the rest of the car because that’s where utility comes from. A generative AI model on its own is great, but a generative AI model connected to your Gmail, so you can ask which email you should respond to first today, is useful. Katie Robbert: Yep. And I support that. That is a way that I will be using it. I’ve been playing with that for myself. But what that does is it allows me to focus more on the hands-on, homemade, small-business things. Where before I was drowning in my email going, “Where do I start?” Great, let the machine tell me where to start. I’m happy to let AI do that. That’s a choice that I am making as a human who’s going to be critically thinking about all of the rest of the work that I have going on. Christopher S. Penn: Exactly. So, you got some thoughts about what has happened this year that you want to share?
Pop on by our free Slack at TrustInsights.ai/analyticsformarketers where you and over 4,500 other human marketers are asking and answering each other’s questions every single day. And wherever it is you watch or listen to the show, if there’s a channel you’d rather have it on, go to TrustInsights.ai/tipodcast. You can find us at all the places fine podcasts are served. Thank you for being with us here in 2025, the craziest year yet in all the things that we do. We appreciate you being a part of our community. We appreciate listening, and we wish you a safe and happy holiday season and a happy and prosperous new year. Talk to you on the next one. *** Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology (MarTech) selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, Dall-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members, such as CMO or data scientists, to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the *In-Ear Insights* podcast, the *Inbox Insights* newsletter, the *So What* livestream, webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights are adept at leveraging cutting-edge generative AI techniques like large language models and diffusion models, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations (data storytelling). This commitment to clarity and accessibility extends to Trust Insights educational resources, which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. 
As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
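The “engine versus the rest of the car” advice in the transcript above—connecting a model to your inbox so it can tell you which email to answer first—is easier to see in code. What follows is a minimal, purely illustrative sketch, not Trust Insights’ implementation: call_llm is a hypothetical placeholder for whichever chat-completion client you actually use, and the IMAP host, account, and password are assumptions you would supply yourself (Gmail, for example, requires an app password or OAuth for IMAP access).

```python
# Illustrative sketch: fetch unread message headers over IMAP, then ask an LLM
# to rank which ones deserve a reply first. call_llm() is a stand-in, not a
# real API; swap in your own provider's client.
import imaplib
import email
from email.header import decode_header


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a chat-completion call (OpenAI, Anthropic,
    a local model, etc.)."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")


def fetch_unread_headers(host: str, user: str, password: str, limit: int = 20):
    """Return sender and subject for up to `limit` recent unread messages."""
    box = imaplib.IMAP4_SSL(host)
    box.login(user, password)
    box.select("INBOX")
    _, data = box.search(None, "UNSEEN")
    messages = []
    for msg_id in data[0].split()[-limit:]:
        _, parts = box.fetch(msg_id, "(RFC822)")
        msg = email.message_from_bytes(parts[0][1])
        subject, enc = decode_header(msg.get("Subject", ""))[0]
        if isinstance(subject, bytes):
            subject = subject.decode(enc or "utf-8", errors="replace")
        messages.append({"from": msg.get("From", ""), "subject": subject})
    box.logout()
    return messages


def triage(messages) -> str:
    """Ask the model to rank the messages by how urgently they need a reply."""
    listing = "\n".join(
        f"{i + 1}. From: {m['from']} | Subject: {m['subject']}"
        for i, m in enumerate(messages)
    )
    prompt = (
        "You are an email triage assistant. Rank these unread messages by how "
        "urgently they need a reply, with a one-sentence reason for each:\n\n"
        + listing
    )
    return call_llm(prompt)
```

In practice you would call fetch_unread_headers, hand the result to triage, and read the ranked list before opening the inbox yourself—the human still decides what actually gets answered, which is the point Katie makes above.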
The topics: Aiwanger calls for more drone defense in Bavaria; Ukraine summit in Berlin; the hero of Bondi Beach; the CSU demands a mandatory national anthem at graduation ceremonies; the new Deutsche Bahn timetable takes effect; the worst series in the world; and happy animal lovers like Berti Vogts. Want to learn more about our advertising partners? You'll find all the info & discounts here: https://linktr.ee/ApokalypseundFilterkaffee
This episode is a little different as we're talking to three of the people behind new film GAME. It's the first feature from Invada Films, the movie arm of Invada Records, the label co-founded by Geoff Barrow of Portishead and Beak>. Barrow has scored many films and TV series over the last 15 years, including Ex Machina, Annihilation and more but this is his first foray as a producer and co-writer. Helping him out are two old friends and collaborators -- director John Minton who has done live visuals and videos for Portishead, and artist and actor Marc Bessant who did the album cover for the second Portishead album. Both Marc and John also get story credit for GAME, which is a thriller set against the early '90s rave scene. Marc plays David an aging partier who crashes his car in rural England after a rave and is found by a poacher in the woods played with menace and humor by Sleaford Mods frontman Jason Williamson. Things do not go as you might expect and we'll leave it at that. We talked to Geoff, John and Marc about the film, the difficulties of making it with a very small budget, writing new '90s-sounding acid house songs for the soundtrack, and more. Plus Geoff talks about his final tour with Beak> which wrapped up earlier this year, what's next for Invada, the future of Portishead, and lots more. Just so you know, we keep the first half of our conversation spoiler free but at the 34 minute mark we do start talking about specifics of the film, so if you haven't seen GAME skip ahead to the 54:40 minute mark for talk of Portishead, Beak>, and more. -- Credits: Hosted & produced by Bill Pearis Mixed and mastered by Nick Gray Theme music by Michael Silverstein
It's Thanksgiving week on Enjoy Stuff, and Jay and Shua are setting places at the table for the most eclectic, retro, and downright entertaining dinner guests imaginable. From icons to eccentrics, everyone's invited to the feast! Join the guys as they talk turkey, share gratitude, and celebrate a holiday where everyone is family, and everyone is ready to Get Stuffed. News The Super Mario Galaxy Movie has dropped its first official trailer ahead of its April 3, 2026 release. Paramount has confirmed a new Star Trek film is moving forward with directors Jonathan Goldstein and John Francis Daley. Poker Face will end on Peacock after two seasons, but Rian Johnson is shopping a new version starring Peter Dinklage to other networks. The Lucas Museum of Narrative Art opens September 22, 2026 in Los Angeles, with 35 galleries and over 40,000 works celebrating the art of storytelling. Check out our TeePublic store for some enjoyable swag and all the latest fashion trends What we're Enjoying Shua has been diving into the new John Candy documentary, "I Like Me," which shines a warm, emotional light on the beloved comedian. Jay has been deep into Columbo, revisiting the rumpled detective's brilliant mysteries, partly to comfort himself after the sad cancellation of Poker Face. The classic charm and clever cases are the perfect antidote. Sci-Fi Saturdays - This week on Sci-Fi Saturdays Jay spotlights Ex Machina, a sleek, cerebral, and unsettling exploration of artificial intelligence, identity, and manipulation. It remains a modern sci-fi classic with style, tension, and a chilling finale. Read his article on RetroZap.com. And make sure to play around with the interactive map on MCULocationScout.com. Plus, you can tune in to SHIELD: Case Files where Jay and Shua talk about great stuff in the MCU. Enjoy Thanksgiving! On this week's Thanksgiving episode, Jay and Shua set out to build the ultimate retro dinner table. They each select an assortment of legendary personalities, from artists and scientists to entertainers and peace advocates, crafting a holiday feast packed with humor, inspiration, brilliant conversation, and just a dash of chaos. Food contributions range from the traditional to the surreal, while the guests bring stories, laughter, life lessons, music, and occasionally questionable cleanup skills. Along the way, the hosts explore which dishes their guests might bring, what conversational rabbit holes they'd tumble down, and who would roll up their sleeves when the plates are empty. It's a celebration of gratitude, creativity, togetherness, and the joy of imagining a Thanksgiving where every seat is filled with someone fascinating. Who would you invite to Thanksgiving dinner? And who's going to clean up? Let us know! First person that emails me with the subject line, "Pass the gravy" will get a special mention on the show. Let us know. Come talk to us in the Discord channel or send us an email to EnjoyStuff@RetroZap.com
Support the show by signing up to our Patreon and get access to the full Weekender episode each Friday as well as special Live Shows and access to our community discord: http://patreon.com/muckrakepodcast Jared Yates Sexton and Nick Hauselman dig into the so-called “Epstein files” vote, why Trump suddenly wants everything released, and why everyone should expect a Mueller-style scrub instead of real accountability. They talk Larry Summers slinking off “from public life,” the national security state's role in hiding the full story, and why this scandal is really about a ruling class that treats organized crime like a business model. From there, they break down a new Democratic video urging the military and intel community to refuse illegal orders, Trump's response calling it “seditious” and punishable by death, and what that says about a cornered fascist who openly envies the power of Mohammed bin Salman. Then it is on to Nvidia, Peter Thiel dumping stock, the AI bubble sitting under the entire economy, and the familiar feeling of a roaring market heading toward a crash. They close with some cultural brain bleach, from the new Running Man to Hallmark Christmas movies and Robert Johnson on repeat. Learn more about your ad choices. Visit megaphone.fm/adchoices
Yorgos Lanthimos' fourth collaboration with Emma Stone yields a darkly comedic thriller about two conspiracy theorists who kidnap a CEO, determined to reveal the truth that she's an alien from Andromeda. We've all at least considered it. While funny and absurd, Bugonia is also tragic and misanthropic, and we're unconvinced that its ending is either earned or fitting, despite Mike's insistence that he's seen it coming for weeks. We consider the film's messaging, aesthetics, and tone; what its stars bring to it and how they differ; what the title might mean; and how a comparison with Alex Garland's Ex Machina reveals the lacks in the storytelling here. We pick at Bugonia left, right and centre, but despite our complaints, it showed us a very entertaining time, and there's a lot about it to recommend. Recorded on 9th November 2025.
On the 485th episode of Piecing It Together, Chad Clinton Freeman joins me to talk about Guillermo del Toro's new take on the classic monster tale, Frankenstein! Chad was my guest years ago for an episode on the original Frankenstein, so this episode bookends decades of adaptation of the classic story. Puzzle pieces include Ex Machina, Re-Animator, Chappie and The Crow.As always, SPOILER ALERT for Frankenstein and the movies we discuss!Written by Guillermo del ToroDirected by Guillermo del ToroStarring Oscar Isaac, Jacob Elordi, Mia Goth, Christoph WaltzNetflixChad Clinton Freeman is a film fan and the founder of the Pollygrind Film Festival.My latest David Rosen album MISSING PIECES: 2018-2024 is a compilation album that fills in the gaps in unreleased music made during the sessions for 2018's A Different Kind Of Dream, 2020's David Rosen, 2022's MORE CONTENT and 2025's upcoming And Other Unexplained Phenomena. Find it on Bandcamp, Apple Music, Spotify and everywhere else you can find music.You can also find more about all of my music on my website https://www.bydavidrosen.comMy latest music video is “Shaking" which you can watch at https://www.youtube.com/watch?v=wzm8s4nuqlAThe song at the end of the episode is "Dr. Frankenstein" from The Pup Pups third album A Boy and His Dog and Three Cats and A Bear!Make sure to “Like” Piecing It Together on Facebook at http://www.facebook.com/PiecingPodAnd “Follow” us on Twitter @PiecingPodAnd Join the Conversation in our Facebook Group, Piecing It Together – A Movie Discussion Group.And check out https://www.piecingpod.com for more about our show!And if you want to SUPPORT THE SHOW, you can now sign up for our Patreon at https://www.patreon.com/bydavidrosenYou can also support the show by checking out our Dashery store to buy shirts and more featuring Piecing It Together logos, movie designs, and artwork for my various music projects at https://bydavidrosen.dashery.com/Share the episode, comment and give us feedback! And of course, SUBSCRIBE!And of course, don't forget to leave us a 5 star review on Goodpods, Apple Podcasts, Spotify, or Podchaser!And most important of all… Keep going to the theater...
If a white, pure-hearted nerd guy gets betrayed, is the real victim here the techno-Oriental?!!! + Ava is great just because she's pretty + We didn't mention it in the episode, but Kyoko is also great just because she's pretty + Pretty is the best!!! (P.S. The links have been breaking frequently lately. If the links below are broken, please go to the transcript! There are links there too.) List of techno-Orientalism references: "Noir Urbanism" (Korean edition); "Tie Xi Qu: West of the Tracks" (the very long Chinese documentary mentioned in the episode); "Oldboy"; "The Chaser"; "Ex Machina"; Alicia Vikander as Ava; Domhnall Gleeson as Caleb; Oscar Isaac as Nathan; the Turing test; Sonoya Mizuno as Kyoko; "Black Panther"; "Blade Runner"; "The Matrix" --- Transcript: https://aimdreaming.imaginariumkim.com/논픽션-픽션-믹스-주류성-ex-machina-악당-형성용-깨끗/ Buy 한아임 a ☕️ coffee
Is there still life in the old monsters when Predator and Frankenstein return in new interpretations? In this episode of Betacritic, Jacob Ege Hinchely is joined by music journalist Sara Frost to take a closer look at two of the autumn's most talked-about films: Predator: Badlands and Frankenstein. Predator: Badlands, directed by Dan Trachtenberg (10 Cloverfield Lane, Prey), brings the franchise back with Dimitrius Schuster-Koloamatangi (The Panthers, Next Goal Wins) as the Predator alien Dek and Elle Fanning (The Great, Maleficent) as the synthetic android Thia. Both praise the film's passion and soundscape but agree that there is far too much talking. The action sequences work, but the pacing loses energy because the film never quite escapes its own dialogue-heavy structure. They then turn to Frankenstein, in which Jacob Elordi (Saltburn, Priscilla) plays the monster and Oscar Isaac (Dune, Ex Machina) is Dr. Frankenstein. Sara is very enthusiastic about Elordi's performance, while Jacob is more critical of the film's telenovela-like sets and Alexandre Desplat's (The Grand Budapest Hotel, The Shape of Water) score, which he calls "magic music." They also discuss director Guillermo del Toro's (Pan's Labyrinth, The Shape of Water) visual influence, Dan Laustsen's cinematography, and Mia Goth's (Pearl, X) status as a modern horror queen. Finally, they share their recommendations from the world of film and TV. Thank you so much for listening.
This week on Fright Mic, join your hosts, Sam and Liz as they discuss the underrated sci-fi flick , EX MACHINA from director Alex Garland which features badass female robots, scheming, an impressive Oscar Isaac dance routine and motherfucking karma. Want more screams and laughs? Join our Fright Club at http://patreon.com/frightmicpodcast and get access to tons more episodes, discussions, rankings, watch parties and more!Fright Mic is an independent horror podcast. We would love to have you join our Fright Fam by following us on all our socials!PATREONMERCHFacebookFRIGHT CLUBInstagramBlueskyTwitterTiktokDiscordSupport the show
Gwendolyn Dolske and Rudy Salo talk with Professor Neil McArthur (University of Manitoba) about his work on the ethics of sexbots. Are sexbots the future of human connection or a threat to it? Explore the fascinating intersection of ethics, technology, and intimacy. Together, they unpack cultural anxieties, philosophical implications, and the surprising ways AI companions might actually be good for society. From the film Ex Machina to real-world robotics, this conversation examines what it means to be human when machines start to mimic love, emotion, and desire. Whether you're curious about AI ethics, the philosophy of technology, or how innovation challenges our moral compass, this episode invites you to think deeper. What you'll learn: Why fears around sexbots may be misplaced, how technology redefines intimacy and autonomy, and what philosophy teaches us about love, consent, and machine ethics. Follow GIID on Instagram: @GoodIsInTheDetailsPod Join our Patreon & support the pod: https://www.patreon.com/c/GoodIsInTheDetails
Generative A.I., once an uncanny novelty, is now being used to create not only images and videos but entire “artists.” Its boosters claim that the technology is merely a tool to facilitate human creativity; the major use cases we've seen thus far—and the money being poured into these projects—tell a different story. On this episode of Critics at Large, Vinson Cunningham, Naomi Fry, and Alexandra Schwartz discuss the output of Timbaland's A.I. rapper TaTa Taktumi and the synthetic actress Tilly Norwood. They also look back at movies and television that imagined what our age of A.I. would look like, from “2001: A Space Odyssey” onward. “A.I. has been a source of fascination, of terror, of appeal,” Schwartz says. “It's the human id in virtual form—at least in human-made art.”Read, watch, and listen with the critics:TaTa Taktumi's “Glitch x Pulse”Cardi B's “Am I the Drama?”“Pop Star Academy: KATSEYE” (2024)“Dear Tilly Norwood,” by Betty Gilpin (The Hollywood Reporter)Tilly Norwood's Instagram account“Holly Herndon's Infinite Art,” by Anna Wiener (The New Yorker)“2001: A Space Odyssey” (1968)“The Morning Show” (2019—)“Simone” (2002)“Blade Runner” (1982)“Ex Machina” (2014)“The Man Who Sells Unsellable New York Apartments,” by Alexandra Schwartz (The New Yorker)“The Work of Art in the Age of Mechanical Reproduction,” by Walter Benjamin“The Death of the Author,” by Roland BarthesNew episodes drop every Thursday. Follow Critics at Large wherever you get your podcasts.Critics at Large is a weekly discussion from The New Yorker that explores the latest trends in books, television, film, and more. Join us every Thursday as we make unexpected connections between classic texts and pop culture. Learn about your ad choices: dovetail.prx.org/ad-choices
Immerse yourself in captivating science fiction short stories, delivered daily! Explore futuristic worlds, time travel, alien encounters, and mind-bending adventures. Perfect for sci-fi lovers looking for a quick and engaging listen each day.
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the stark reality of the future of work presented at the Marketing AI Conference, MAICON 2025. You’ll learn which roles artificial intelligence will consume fastest and why average employees face the highest risk of replacement. You’ll master the critical thinking and contextual skills you must develop now to transform yourself into an indispensable expert. You’ll understand how expanding your intellectual curiosity outside your specific job will unlock creative problem solving essential for survival. You’ll discover the massive global AI blind spot that US companies ignore and how this shifting landscape affects your career trajectory. Watch now to prepare your career for the age of accelerated automation! Watch the video here: Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-maicon-2025-generative-ai-for-marketers.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! [podcastsponsor] Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn – 00:00 In this week’s In-Ear Insights, we are at the Marketing AI Conference, MAICON 2025, in Cleveland with 1,500 of our best friends. This morning, the CEO of SmarterX, formerly the Marketing AI Institute, Paul Roetzer, was talking about the future of work. Now, before I go down a long rabbit hole, Katie, what were your immediate impressions and takeaways from Paul’s talk? Katie Robbert – 00:23 Paul always brings this really interesting perspective because he’s very much a futurist, much like yourself, but he’s a futurist in a different way. Whereas you’re focused on the future of the technology, he’s focused on the future of the business and the people. And so his perspective was really, “AI is going to take your job.” If we had to underscore it, that was the bottom line: AI is going to take your job. However, how can you be smarter about it? How can you work with it instead of working against it? Obviously, he didn’t have time to get into every single individual solution. Katie Robbert – 01:01 The goal of his keynote talk was to get us all thinking, “Oh, so if AI is going to take my job, how do I work with AI versus just continuing to fight against it so that I’m never going to get ahead?” I thought that was a really interesting way to introduce the conference as a whole, where every individual session is going to get into those solutions. Christopher S. Penn – 01:24 The chart that really surprised me was one of those, “Oh, he actually said the quiet part out loud.” He showed the SaaS business chart: SaaS software is $500 billion of economic value. Of course, AI companies are going, “Yeah, we want that money. We want to take all that money.” But then he brought up the labor chart, which is $12 trillion of money, and says, “This is what the AI companies really want. They want to take all $12 trillion and keep it for themselves and fire everybody,” which is the quiet part out loud. Even if they take 20% of that, that’s still, what is it, $2.4 trillion, give or take? When we think about what that means for human beings, that’s basically saying, “I want 20% of the workforce to be unemployed.” Katie Robbert – 02:15 And he wasn’t shy about saying that.
Unfortunately, that is the message that a lot of the larger companies are promoting right now. So the question then becomes, what does that mean for that 20%? They have to pivot. They have to learn new skills, or—the big thing, and you and I have talked about this quite a bit this year—is you really have to tap into that critical thinking. That was one of the messages that Paul was sharing in the keynote: go to school, get your liberal arts degree, and focus on critical thinking. AI is going to do the rest of it. Katie Robbert – 02:46 So when we look at the roles that are up for grabs, a lot of it was in management, a lot of it was in customer service, a lot of it was in analytics—things that already have a lot of automation around them. So why not naturally let agentic AI take over, and then you don’t need human intervention at all? So then, where does that leave the human? Katie Robbert – 03:08 We’re the ones who have to think what’s next. One of the things that Paul did share was that the screenwriter for all of the Scorsese films was saying that ChatGPT gave him better ideas. We don’t know what those exact prompts looked like. We don’t know how much context was given. We don’t know how much background information. But if that was—what was his name—Paul. Paul Schrader. Yes, I forgot it for a second. If Paul Schrader can look at Paul Schrader’s work, then he’s the expert. That’s the thing that I think needed to also be underscored: Paul Schrader is the expert in Paul Schrader. Paul Schrader is the expert in screenwriting those particular genre films. Nobody else can do that. Katie Robbert – 03:52 So Paul Schrader is the only one who could have created the contextual information for those large language models. He still has value, and he’s the one who’s going to take the ideas given by the large language models and turn them into something. The large language model might give him an idea, but he needs to be the one to flesh it out, start to finish, because he’s the one who understands nuance. He’s the one who understands, “If I give this to a Leonardo DiCaprio, what is he gonna do with the role? How is he gonna think about it?” Because then you’re starting to get into all of the different complexities where no one individual ever truly works alone. You have a lot of other humans. Katie Robbert – 04:29 I think that’s the part that we haven’t quite gotten to, is sure, generative AI can give you a lot of information, give you a lot of ideas, and do a lot of the work. But when you start incorporating more humans into a team, the nuance is very subtle. It’s very hard for an AI to pick up. You still need humans to do those pieces. Christopher S. Penn – 04:49 When you take a look, though, at something like the Tilly Norwood thing from a couple weeks ago, even there, it’s saying, “Let’s take fewer humans in there,” where you have this completely machine-generated actor avatar, I guess. It was very clearly made to replace a human there because they’re saying, “This is great. They don’t have to pay union wages. The actor never calls in sick. The actor never takes a vacation. The actor’s not going to be partying at a club unless someone makes it do that.” When we look at that big chart of, “Here’s all the jobs that are up for grabs,” the $12 trillion of economic value, when you look at that, how at risk do you think your average person is? Katie Robbert – 05:39 The key word in there is average. An average person is at risk.
Because if an average person isn’t thinking about things creatively, or if they’re just saying, “Oh, this is what I have to do today, let me just do it. Let me just do the bare minimum, get through it.” Yes, that person is at risk. But someone who looks at a problem or a task that’s in front of them and thinks, “What are the five different ways that I could approach this? Let me sit down for a second, really plan it out. What am I not thinking of? What have I not asked? What’s the information I don’t have in front of me? Let me go find that”—that person is less at risk because they are able to think beyond what’s right in front of them. Katie Robbert – 06:17 I think that is going to be harder to replace. So, for example, I do operations, I’m a CEO. I set the vision. You could theoretically give that to an AI to do. I could create CEO Katie GPT. And GPT Katie could set the vision, based on everything I know: “This is the direction that your company should go in.” What that generative AI doesn’t know is what I know—what we’ve tried, what we haven’t tried. I could give it all that information and it could still say, “Okay, it sounds like you’ve tried this.” But then it doesn’t necessarily know conversations that I’ve had with you offline about certain things. Could I give it all that information? Sure. But then now I’m introducing another person into the conversation. And as predictable as humans are, we’re unpredictable. Katie Robbert – 07:13 So you might say, “Katie would absolutely say this to something.” And I’m going to look at it and go, “I would absolutely not say that.” We’ve actually run into that with our account manager where she’s like, “Well, this is how I thought you would respond. This is how I thought you would post something on social media.” I’m like, “Absolutely not. That doesn’t sound like me at all.” She’s like, “But that’s what the GPT gave me that is supposed to sound like you.” I’m like, “Well, it’s wrong because I’m allowed to change my mind. I’m a human.” And GPTs or large language models don’t have that luxury of just changing its mind and just kind of winging it, if that makes sense. Christopher S. Penn – 07:44 It does. What percentage, based on your experience in managing people, what percentage of people are that exceptional person versus the average or the below average? Katie Robbert – 07:55 A small percentage, unfortunately, because it comes down to two things: consistency and motivation. First, you have to be consistent and do your thing well all the time. In order to be consistent, you have to be motivated. So it’s not enough to just show up, check the boxes, and then go about your day, because anybody can do that; AI can do that. You have to be motivated to want to learn more, to want to do more. So the people who are demonstrating a hunger for reaching—what do they call it?—punching above their weight, reaching beyond what they have, those are the people who are going to be less vulnerable because they’re willing to learn, they’re willing to adapt, they’re willing to be agile. Christopher S. Penn – 08:37 For a while now we’ve been saying that either you’re going to manage the machines or the machines are going to manage you. And now of course we are at the point the machine is just going to manage the machines and you are replaced. Given so few people have that intrinsic motivation, is that teachable or is that something that someone has to have—that inner desire to want to better, regardless of training? 
Katie Robbert – 09:08 “Teachable” I think is the wrong word. It’s more something that you have to tap into with someone. This is something that you’ve talked about before: what motivates people—money, security, blah, blah, whatever, all those different things. You can say, “I’m going to motivate you by dangling money in front of you,” or, “I’m going to motivate you by dangling time off in front of you.” I’m not teaching you anything. I’m just tapping into who you are as a person by understanding your motives, what motivates you, what gets you excited. I feel fairly confident in saying that your motivations, Chris, are to be the smartest person in the room or to have the most knowledge about your given industry so that you can be considered an expert. Katie Robbert – 09:58 That’s something that you’re going to continue to strive for. That’s what motivates you, in addition to financial security, in addition to securing a good home life for your family. That’s what motivates you. So as I, the other human in the company, think about it, I’m like, “What is going to motivate Chris to get his stuff done?” Okay, can I position it as, “If you do this, you’re going to be the smartest person in the room,” or, “If you do this, you’re going to have financial security?” And you’re like, “Oh, great, those are things I care about. Great, now I’m motivated to do them.” Versus if I say, “If you do this, I’ll get off your back.” That’s not enough motivation because you’re like, “Well, you’re going to be on my back anyway.” Katie Robbert – 10:38 Why bother with this thing when it’s just going to be the next thing the next day? So it’s not a matter of teaching people to be motivated. It’s a matter of, if you’re the person who has to do the motivating, finding what motivates someone. And that’s a very human thing. That’s as old as humans are—finding what people are passionate about, what gets them out of bed in the morning. Christopher S. Penn – 11:05 Which is a complex interplay. If you think about the last five years, we’ve had a lot of discussions about things like quiet quitting, where people show up to work to do the bare minimum, where workers have recognized companies don’t have their back at all. Katie Robbert – 11:19 We have culture and pizza on Fridays. Christopher S. Penn – 11:23 At 5:00 PM when everyone wants to just— Katie Robbert – 11:25 Go home and call it a day. Christopher S. Penn – 11:26 Exactly. Given that, does that accelerate the replacement of those workers? Katie Robbert – 11:37 When we talk about change management, we talk about getting down to the individual level. You have to explain to each and every individual, “What’s in it for me?” If you’re working for a company that’s like, “Well, what’s in it for you is free pizza Fridays and funny hack days and Hawaiian shirt day,” that doesn’t put money in their bank account. That doesn’t put a roof over their head; that doesn’t put food on their table, maybe unless they bring home one of the free pizzas. But that’s once a week. What about the other six days a week? That’s not enough motivation for someone to stay. I’ve been in that position, you’ve been in that position. My first thought is, “Well, maybe stop spending money on free pizza and pay me more.” Katie Robbert – 12:19 That would motivate me, that would make me feel valued. If you said, “You can go buy your own pizza because now you can afford it,” that’s a motivator. But companies aren’t thinking about it that way.
They’re looking at employees as just expendable cogs that they can rip and replace. Twenty other people would be happy to do the job that you’re unhappy doing. That’s true, but that’s because companies are setting up people to fail, not to succeed. Christopher S. Penn – 12:46 And now with machinery, you’re saying, “Okay, since there’s a failing cog anyway, why don’t we replace it with an actual cog instead?” So where does this lead for companies? Particularly in capitalist markets where there is no strong social welfare net? Yeah, obviously if you go to France, you can work a 30-hour week and be just fine. But we don’t live in France. France, if you’re hiring, we’re available. Where does it lead? Because I can definitely see one road where this leads to basically where France ended up in 1789, which is the guillotines. People trot out the guillotines because after a certain point, income inequality leads to that stuff. Where does this lead for the market as you see it now? Katie Robbert – 13:39 Unfortunately, nowhere good. We have seen time and time again, as much as we want to see the best in people, we’re seeing the worst in people today, as of this podcast recording—not at MAICON. These are some of the best people. But when you step outside of this bubble, you’re seeing the worst in people. They’re motivated by money and money only, money and power. They don’t care about humanity as a whole. They’re like, “I don’t care if you’re poor, get poorer, I’m getting richer.” I feel like, unfortunately, that is the message that is being sent. “If you can make a dollar, go ahead and make a dollar. Don’t worry about what that does to anybody else. Go ahead and be in it for yourself.” Katie Robbert – 14:24 And that’s unfortunately where I see a lot of companies going: we’re just in it to make money. We no longer care about the welfare of our people. I’ve talked on previous shows, on previous podcasts, about how my husband works for a grocery store that was bought out by Amazon a few years ago, and he’s seeing the effects of that daily. Amazon bought this grocery chain and said basically, “We don’t actually care about the people. We’re going to automate things. We’re going to introduce artificial intelligence.” They’ve gotten rid of HR. He still has to bring home a physical check because there is no one to give him paperwork to do direct deposit. Christopher S. Penn – 15:06 Which is ironic, given the company. Katie Robbert – 15:08 And he’s been at the company for 25 years. But when they change things over, if he has an insurance question, there’s no one to go to. They probably have chatbots and an email distribution list that goes to somebody in an inbox that never gets checked. It’s so sad to see the decline based on where the company started and what the mission originally was of that company to where it is today. His suspicion—and this is not confirmed—his suspicion is that they are gearing up to sell this business, this grocery chain, to another grocery chain for profit and get rid of it. Flipping it, basically. Right now, they’re using it as a distribution center, which is not what it’s meant to be. Katie Robbert – 15:56 And now they’re going to flip it to another grocery store chain because they’ve gotten what they needed from it. Who cares about the people? Who cares about the fact that he as an individual has to work 50 hours a week because there’s nobody else? They’ve flattened the company.
They’re like, “No, based on our AI scheduler, there’s plenty of people to cover all of these hours seven days a week.” And he’s like, “Yeah, you have me on there for seven of the seven days.” Because the AI is not thinking about work-life balance. It’s like, “Well, this individual is available at these times, so therefore he must be working here.” And it’s not going to do good things for people in services industries, for people in roles that cannot be automated. Katie Robbert – 16:41 So we talk about customer service—that’s picking up the phone, logging a complaint—that can be automated. Walking into a brick-and-mortar store, there are absolutely parts of it that can be automated, specifically the end purchase transaction. But the actual ordering and picking of things and preparing it—sure, you could argue that eventually robots could be doing that, but as of today, that’s all humans. And those humans are being treated so poorly. Christopher S. Penn – 17:08 So where does that end for this particular company or any large enterprise? Katie Robbert – 17:14 They really have—they have to make decisions: do they want to put the money first or the people first? And you already know what the answer to that is. That’s really what it comes down to. When it ends, it doesn’t end. Even if they get sold, they’re always going to put the money first. If they have massive turnover, what do they care? They’re going to find somebody else who’s willing to do that work. Think about all of those people who were just laid off from the white-collar jobs who are like, “Oh crap, I still have a mortgage I have to pay, I still have a family I have to feed. Let me go get one of those jobs that nobody else is now willing to do.” Katie Robbert – 17:51 I feel like that’s the way that the future of work for those people who are left behind is going to turn over. Katie Robbert – 17:59 There are a lot of people who are happy doing those jobs. I love doing more of what’s considered the blue-collar job—doing things manually, getting my hands in it, versus automating everything. But that’s me personally; that’s what motivates me. That, I would imagine, is very unappealing to you. Well, maybe except for the cooking. But if cooking’s off the table, there are a lot of other things that you could do, but would you do them? Katie Robbert – 18:29 So when we talk about what’s going to happen to those people who are cut and left behind, those are the choices they’re going to have to make because there’s not going to be more tech jobs for them to choose from. And if you are someone in your career who has only ever focused on one thing, you’re definitely in big trouble. Christopher S. Penn – 18:47 Yeah, I have a friend who’s a lawyer at a nonprofit, and they’re like, “Yeah, we have no funding anymore. But I can’t pick up and go to England because I can’t practice law there.” Katie Robbert – 18:59 Right. I think about people. Forever, social media was it. You focus on social media and you are set. Anybody will hire you because they’re trying to learn how to master social media. Guess where there are no jobs anymore? Social media. So if all you know is social media and you haven’t diversified your skill set, you’re cooked, you’re done. You’re going to have to start at ground zero, entry level. If there even is that. And that’s the thing that’s going to be tough because entry-level jobs—exactly.
Christopher S. Penn – 19:34 We saw, what was it, the National Labor Relations Board publish something a couple months ago saying that the unemployment rate for new college graduates is something like 60% higher than the rest of the workforce because all the entry-level jobs have been consumed. Katie Robbert – 19:46 Right. I did a talk earlier this year at WPI—that’s Worcester Polytech in Massachusetts—through the Women in Data Science organization. We were answering questions basically like this about the future of work with AI. At a technical college, there are a lot of people who are studying engineering, there are a lot of people who are studying software development. That was one of the first questions: “I’m about to get my engineering degree, I’m about to get my software development degree. What am I supposed to do?” My response to that is, you still need to understand how the thing works. We were talking about this in our AI for Analytics workshop yesterday that we gave here at MAICON. In order to do coding with generative AI effectively, you have to understand the software development life cycle. Katie Robbert – 20:39 There is still a need for the expertise. People are asking, “What do I do?” Focus on becoming an expert. Focus on really mastering the thing that you’re passionate about, the thing that you want to learn about. You’ll be the one teaching the AI, setting up the AI, consulting with the people who are setting up the AI. There’ll be plenty of practitioners who can push the buttons and set up agents, but they still need the experts to tell them what it’s supposed to do and what the output’s supposed to be. Christopher S. Penn – 21:06 Do you see—this is kind of a trick question—do you see the machines consuming that expertise? Katie Robbert – 21:15 Oh, sure. But this is where we go back to what we were talking about: the more people, the more groupthink—which I hate that term—but the more groupthink you introduce, the more nuanced it is. When you and I sit down, for example, when we actually have five minutes to sit down and talk about the future of our business, where we want to go or what we’re working on today, the amount of information we can iterate on because we know each other so well and almost don’t have to speak in complete sentences and just can sort of pick up what the other person is thinking. Or I can look at something you’re writing and say, “Hey, I had an idea about that.” We can do that as humans because we know each other so well. Katie Robbert – 21:58 I don’t think—and you’re going to tell me this is going to happen—unless we can actually plug something directly into our brains and download all of the things. That’s never going to happen. Even if we build Katie GPT and Chris GPT and have them talk to each other, they’re never going to brainstorm the way you and I brainstorm in real life. Especially if you give me a whiteboard. I’m good. I’m going to get so much done. Christopher S. Penn – 22:25 For people who are in their career right now, what do they do? You can tell somebody, “You need to be a good critical thinker, a creative thinker, a contextual thinker. You need to know where your data lives and things like that.” But the technology is advancing at such a fast rate. I talk about this in the workshops that we do—which, by the way, Trust Insights is offering workshops at your company, if you’d like one. But one of the things I talk about is that, with the models’ acceleration in terms of growth, they’re growing faster than any technology ever has.
They went from face-rolling idiot in 2023 right to above-PhD level in everything two years later. Christopher S. Penn – 23:13 So the people who, in their career, are looking at this, going, “It’s like a bad Stephen King movie where you see the thing coming across the horizon.” Katie Robbert – 23:22 There is no such thing as a bad Stephen King movie. Sometimes the book is better, but it’s still good. But yes, maybe *Creepshow*. What do you mean in terms of how do they prepare for the inevitable? Christopher S. Penn – 23:44 Prepare for the inevitable. Because to tell somebody, “Yeah, be a critical thinker, be a contextual thinker, be a creative thinker”—that’s good in the abstract. But then you’re like, “Well, yeah, but my boss says we’re doing a 10% headcount reduction this week.” Katie Robbert – 24:02 This is my personal way of approaching it: you can’t limit yourself to just go, “Okay, think about it. Okay, I’m thinking.” You actually have to educate yourself on a variety of different things. I am a voracious reader. I read all the time when I’m not working. In the past three weeks, I’ve read four books. And they’re not business books; they are fiction books on a variety of topics. But what that does is it keeps my brain active. It keeps my brain thinking. Then I give myself the space and time. When I walk my dog, I sort of process all of it. I think about it, and then I start thinking about, “What are we doing as our company today?” or, “What’s on the task list?” Katie Robbert – 24:50 Because I’ve expanded my personal horizons beyond what’s right in front of me, I can think about it from the perspective of other people, fictional or otherwise, “How would this person approach it?” or, “What would I do in that scenario?” Even as I’m reading these books, I start to think about myself. I’m like, “What would I do in that scenario? What would I do if I found myself on a road trip with a cannibal who, at the end of the road trip, was likely going to consume all of me, including my bones?” It was the last book I read, and it was definitely not what I thought I was signing up for. But you start to put yourself in those scenarios. Katie Robbert – 25:32 That’s what I personally think unlocks the critical thinking, because you’re not just stuck in, “Okay, I have a math problem. I have 1 + 1.” That’s where a lot of people think critical thinking starts and ends. They think, “Well, if I can solve that problem, I’m a critical thinker.” No, there’s only one way to solve that problem. That’s it. I personally would encourage people to expand their horizons, and this comes through having hobbies. You like to say that you work 24/7. That’s not true. You have hobbies, but they’re hobbies that help you be creative. They’re hobbies that help you connect with other people so that you can have those shared experiences, but also learn from people from different cultures, different backgrounds, different experiences. Katie Robbert – 26:18 That’s what’s going to help you be a stronger, more adaptable thinker, because you’re not just thinking about it from your perspective. Christopher S. Penn – 26:25 Switching gears, what was missing, what’s been missing, and what is absent from this show in the AI space? I have an answer, but I want to hear yours. Katie Robbert – 26:36 Oh, boy. Really putting me on the spot here. Do I know what is missing? I don’t know. I’m going to think about it, and I am going to get back to you. As we all know, I am not someone who can think on my feet as quickly as you can.
So I will take time, I will process it, but I will come back to you. What do you think is missing? Christopher S. Penn – 27:07 One of the things that is a giant blind spot in the AI space right now is it is a very Western-centric view. All the companies people talk about are OpenAI and Anthropic and Google and Meta and stuff like that. Yet when you look at the leaderboards online of whose models are topping the charts—Kling, Wan, Alibaba Qwen, DeepSeek—these are all Chinese-made models. If you look at the chip sets being used, the government of China itself just issued an edict: “No more Nvidia chips. We are going to use Huawei Ascend 920s now,” which are very good at what they do. And the Chinese models themselves, these companies are just giving them away to the world. Christopher S. Penn – 27:54 They’re not trying to lock you in like a ChatGPT is. The premise for them, for basically the rest of the world that is not America, is, “Hey, you could take American AI where you’re locked in and you’re gonna spend more and more money, or here’s a Chinese model for free and you can build your national infrastructure on the free stuff that we’re gonna give you.” I’ve seen none of that here. That is completely absent from any of the discussions about what other nations are doing with AI. The EU has Mistral and Black Forest Labs, Sub-Saharan Africa has Lelapa AI. Singapore has SEA-LION, Korea has LG, the appliance maker, and their models. Of course, China has a massive footprint in the space. I don’t see that reflected anywhere here. Christopher S. Penn – 28:46 It’s not in the conversations, it’s not in the hallways, it’s not on stage. And to me, that is a really big blind spot if you think—as many people do—that that is your number one competitor on the world stage. Katie Robbert – 28:57 Why do you think that is? Christopher S. Penn – 29:01 That’s a very complicated question. But it involves racism, it involves a substantial language barrier, it involves economics. When your competitor is giving away everything for free, you’re like, “Well, let’s just pretend they’re not there because we don’t want to draw any attention to them.” And it is also a deep, deep-seated fear. When you look at all of the papers that are being submitted by Google and Facebook and all these other different companies and you look at the last names of the principal investigators and stuff, nine out of 10 times it’s a name that’s coded as an ethnic Chinese name. China produces more PhDs than I think America produces students, just by population dynamics alone. You have this massive competitor, and it almost feels like people just want to put their heads in the sand and say they’re not there. Christopher S. Penn – 30:02 It’s like the boogeyman, they’re not there. And yet if we’re talking about the deployment of AI globally, the folks here should be aware that is a thing that is not just the Sam Altman Show. Katie Robbert – 30:18 I think perhaps then, as we’re talking about the future of work and big companies, small companies, mid-sized companies, this goes sort of back to what I was saying: you need to expand your horizons of thinking. “Well, we’re a domestic company. Why do I need to worry about what China’s doing?” Take a look at your tech stack, and where are those software packages created? Who’s maintaining them? It’s probably not all domestic; you’re probably more of a global firm than you think you are. But we think about it in terms of who we serve as customers, not what we are using internally.
We know people like Paul have talked about operating systems; Gini Dietrich has talked about operating systems. Katie Robbert – 31:02 That’s really sort of where you have to start thinking more globally in terms of, “What am I actually bringing into my organization?” Not just my customer base, not just the markets that I’m going after, not just my sales team territories, but what is actually powering my company. That’s, I think, to your point—that’s where you can start thinking more globally even if your customer base isn’t global. That might theoretically help you with that critical thinking to start expanding beyond your little homogeneous bubble. Christopher S. Penn – 31:35 Even something like this has been a topic in the news recently. Rare earth minerals, which are not rare; they’re actually very commonplace. There’s just not much of them in any one spot. But China is the only economy on the planet that has figured out how to industrialize them safely. They produce 85% of the supply on the planet. And that powers your smartphone, that powers your refrigerator, your car and, oh by the way, all of the AI chips. Even things like that affect the future of work and the future of AI because you basically have one place that has a monopoly on this. The same for the Netherlands. The Netherlands is the only country on the planet that produces a certain kind of machine that is used to create these chips for AI. Christopher S. Penn – 32:17 If that company goes away or something, the planet as a whole is like, “Well, I guess we need to come up with an alternative.” So to your point, we have a lot of these choke points in the AI value chain that could be blockers. Again, that’s not something that you hear. I’ve not heard that at any conference. Katie Robbert – 32:38 As we’re thinking about the future of work, which is what we’re talking about on today’s podcast at MAICON, with 1,500 people in Cleveland. I guarantee they’re going to do it again next year. So if you’re not here this year, definitely sign up for next year. Take a look at SmarterX and their academy. It’s all good stuff, great people. I think—and this was the question Paul was asking in his keynote—”Where do we go from here?” The— Katie Robbert – 33:05 The atmosphere. Yes. We don’t need—we don’t need to start singing. I do not need. With more feeling. I do get that reference. You’re welcome. But one of the key takeaways is there are more questions than answers. You and I are asking each other questions, but there are more questions than answers. And if we think we have all of the answers, we’re wrong. We have the answers that are sufficient for today to keep our business moving forward. But we have to keep asking new questions. That also goes into that critical thinking. You need to be comfortable not knowing. You need to be comfortable asking questions, and you need to be comfortable doing that research and seeking it out and maybe getting it wrong, but then continuing to learn from it. Christopher S. Penn – 33:50 And the future of work, I mean, it really is a very cloudy crystal ball. We have no idea. One of the things that Paul pointed out really well was you have different scaling laws depending on where you are in AI. He could have definitely spent some more time on that, but I understand it was a keynote, not a deep dive. There’s more to that than even that. And they do compound each other, which is what’s creating this ridiculously fast pace of AI evolution.
There’s at least one more on the way, which means that the ability for these tools to be superhuman across tasks is going to be here sooner than people think. Paul was saying by 2026, 2027, that’s what we’ll start to see. Robotics depends on where you are. Christopher S. Penn – 34:41 What’s coming out of Chinese labs for robots is jaw-dropping. Katie Robbert – 34:45 I don’t want to know. I don’t want to know. I’ve seen *Ex Machina*, and I don’t want to know. Yeah, no. To your point, I think a lot of people bury their head in the sand because of fear. But again, it sort of goes back to that critical thinking: you have to be comfortable with the uncomfortable. I’m sort of joking: “I don’t want to know. I’ve seen *Ex Machina*.” But I do want to know. I do need to know. I need to understand. Do I want to be the technologist? No. But I need to play with these tools enough that I feel I understand how they work. Yesterday I was playing in Opal. I’m going to play in n8n. Katie Robbert – 35:24 It’s not my primary function, but it helps me better understand where you’re coming from and the questions that our clients are asking. That, in a very simple way to me, is the future of work: that at least I’m willing to stretch myself and keep exploring and be uncomfortable so that I can say I’m not static. Christopher S. Penn – 35:46 I think one of the things that 3M was very well known for back in the day was the 20% rule, where an employee, as part of their job, could spend 20% of their time just working on side projects related to the company. That’s how Post-it Notes got invented, I think. I think in the AI-forward era that we’re in, companies do need to make that commitment again to the 20% rule. Not necessarily just messing around, but specifically saying you should be spending 20% of your time with AI to figure out how to use it, to figure out how to do some of those tasks yourself, so that instead of being replaced by the machine, you’re the one who’s at least running the machine. Because if you don’t do that, then the person in the next cubicle will. Christopher S. Penn – 36:33 And then the company’s like, “Well, we used to have 10 people, we only need two. And you’re not one of the two who has figured out how to use this thing to do that. So out you go.” Katie Robbert – 36:41 I think that’s what Paul was doing in his AI for Productivity workshop yesterday: giving people the opportunity to come up with those creative ideas. Our friend Andy Crestodina was relaying a story to us yesterday in a very similar vein, where someone was saying, “I’ll give you $5,000. Create whatever you want.” And the thing that the person created was so mind-blowing and so useful that he was like, “Look what happens when I just let people do something creative.” But if we bring it sort of back full circle, what’s the motivation? Why are people doing it in the first place? Katie Robbert – 37:14 It has to be something that they’re passionate about, and that’s going to really be what drives the future of work in terms of being able to sustain while working alongside AI, versus, “This is all I know how to do. This is all I ever want to know how to do.” Yes, AI is going to take over your job. Christopher S. Penn – 37:33 So I guess wrapping up, we definitely want you thinking creatively, critically, contextually.
Know where your data is, know where your ideas come from, broaden your horizons so that you have more ideas, and be able to be one of the people who knows how to call BS on the machines and say, “That’s completely wrong, ChatGPT.” Beyond that, everyone has an obligation to try to replace themselves with the machines before someone else does it to them. Katie Robbert – 38:09 I think again, to plug MAICON, which is where we are as we’re recording this episode, this is a great starting point for expanding your horizons because the people that you get to network with are from different companies, different experiences, different walks of life. You can go to the sessions, learn it from their point of view. You can listen to Paul’s keynote. If you think you already know everything about your job, you’re failing. Take the time to learn where other people are coming from. It may not be immediately relevant to you, but it could stick with you. Something may resonate, something might spark a new idea. Katie Robbert – 38:46 I feel like we’re pretty far along in our AI journey, but sitting in Paul’s keynote, two things stuck out to me: “Oh, that’s a great idea. I want to go do that.” That’s great. I wouldn’t have gotten that if I hadn’t stepped out of my comfort zone and listened to someone else’s point of view. That’s really how people are going to grow, and that’s that critical thinking—getting those shared experiences and getting that brainstorming and just community. Christopher S. Penn – 39:12 Exactly. If you’ve got some thoughts about how you are approaching the future of work, pop on by our free Slack group. Go to trustinsights.ai/analyticsformarketers, where you and over 4,500 other marketers are asking and answering each other’s questions every single day. Wherever you watch or listen to the show, if there’s a channel you’d rather have it on instead, go to trustinsights.ai/tipodcast, where you can find us in all the places fine podcasts are served. Thanks for tuning in. I’ll talk to you on the next one. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage.
Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
On the 480th episode of Piecing It Together, we are LIVE from Downtown Cinemas with Sam Novak, Johnny Molinaro and Nic Patrick to talk about Tron: Ares! The third in the cult series takes the videogame action to the real world and features a killer soundtrack by Nine Inch Nails. Puzzle pieces include Pixels, Terminator, Alien: Covenant and Ex Machina. As always, SPOILER ALERT for Tron: Ares and the movies we discuss! Written by Jesse Wigutow. Directed by Joachim Ronning. Starring Jared Leto, Greta Lee, Evan Peters, Jodie Turner-Smith, Jeff Bridges. Disney. Sam Novak is the Deputy Editor of VEGAS 411, an online source for Sin City tourists and locals. He's a fitness fan, a pet adoption advocate, and is mad about the movies. A former movie theater manager, he successfully operated 36 screens for both AMC and an independent movie house in South Florida. Check out VEGAS 411 at https://vegas411.com And follow Sam on Instagram @sammasseur. Nic Patrick is a filmmaker and co-host of the UNLV Film Department Podcast at https://www.unlv.edu/film/film-dept Check out Nic's YouTube channel for his latest short film “Before It's Over” at https://www.youtube.com/@nicpatrick And also check out the UNLV Film Department Podcast. Follow Nic on Instagram @nicpatrickk. Johnny Molinaro is a filmmaker and actor whose latest film is Vegas Traffic. Check out Johnny's IMDb at https://www.imdb.com/name/nm7933618/ And follow Johnny on Instagram @johnny.octane. My latest David Rosen album MISSING PIECES: 2018-2024 is a compilation album that fills in the gaps in unreleased music made during the sessions for 2018's A Different Kind Of Dream, 2020's David Rosen, 2022's MORE CONTENT and 2025's upcoming And Other Unexplained Phenomena. Find it on Bandcamp, Apple Music, Spotify and everywhere else you can find music. You can also find more about all of my music on my website https://www.bydavidrosen.com My latest music video is “Shaking" which you can watch at https://www.youtube.com/watch?v=wzm8s4nuqlA The song at the end of the episode is "Antivial" from my album MORE CONTENT. Make sure to “Like” Piecing It Together on Facebook at http://www.facebook.com/PiecingPod And “Follow” us on Twitter @PiecingPod And Join the Conversation in our Facebook Group,
Happy Spooky Season one and all! Today we kick off the first episode of our sixth season with a review of 2014's Ex Machina! A man is taken to a high-tech research facility to ascertain the consciousness of an advanced A.I. Written and Directed by: Alex Garland. Starring: Domhnall Gleeson, Oscar Isaac, and Alicia Vikander. Come on in and have a listen! What do you think of this movie? What are others like it you enjoyed? We'd love to hear from you! Please like, follow, subscribe, share.
On this week's episode, I'm joined by Nate Soares to talk about his new book, cowritten with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. It's a fascinating book—some will say fearmongering and sensationalist; I, frankly, think they're overly optimistic about our ability to constrain the development of general intelligence in AI—in large part because of how it's structured. Each chapter is preceded by a fable of sorts about the nature of intelligence and the desires of intelligent beings that look and think very differently from humans. The point in each of these passages is less that AI will want to eliminate humanity and more that it might do so incidentally, through natural processes of resource acquisition. This made me think about how AI is typically portrayed in film; it is all too often a Terminator-style scenario, where the intelligence is antagonistic in human ways and for human reasons. We talked some about how storytellers could do a better job of thinking about AI as it might actually exist versus how it might be like us; Ex Machina is a movie that came in for special discussion due to the thoughtful nature of the treatment of its robotic antagonist's desires. If this episode made you think, I hope you share it with a friend!
Welcome to Dev Game Club, where this week we complete our series on Portal. We talk about the ending part of the game, the increasing difficulty, the cake, and the boss and declare ourselves Still Alive. Dev Game Club looks at classic video games and plays through them over several episodes, providing commentary. Sections played: Finished the game! Issues covered: puzzle hockey stick, having puzzles cross through multiple rooms, mechanical additions to the sequel, limiting the additional portal mechanics, having to think about getting across rooms, timing puzzles, how the portals line up, not supporting the twitch elements well, is Chell an android? (she is not -B), the feeling of escape, test subjects getting behind the walls, tactical freedom vs strategic freedom, the importance of context, the meta aspect of these narratives, the subtlety of the cake (until it isn't), "now I'm in Half-Life," competing with Black Mesa, the boss fight, thinking in portals, the final exam and having all the tools, timers as a crutch, the perfectly balanced end point, seeing the outside and being in the Half-Life world, the cake room, the character of GLaDOS as an amalgam of personality modules, triumphant GLaDOS, "the cake is a lie," the game hitting in a different way, the Weighted Companion Cube, having to be interactive, using games as a substrate for "non-interactive" entertainment, the grungy environment vs what's going on, a better show than a game (except maybe a couple of scenes), freeing your mind, the limits of new mechanics, being just one idea with some small sub-ideas, lean/no fat, really great onboarding, a light touch with narrative, everything coming together and artistic cohesion, a meta moment about the SteamDeck, portals!, safety in the portals, portal-based renderers, the one game that brought you in, another very kind review that we sometimes live up to. Games, people, and influences mentioned or discussed: The Stanley Parable, BioShock, Half-Life, President Obama, Claire Danes, Chris and Susan McKinley Ross, Jonathan Coulton, Ellen McLain, The Office, Office Space, Fallout, Saw, Ex Machina, Alex Garland, The Last of Us, Giovanni Giorgio Moroder, Diana Ross, Daft Punk, Fez, Fumito Ueda, Keita Takahashi, J. K. Simmons, Stephen Merchant, Jacques Rivette, Valve, Aperture Desk Job, Erik Wolpaw, Calamity Nolan, BioStats, Endofunctor, Morrowind, Halo CE, Paolo, Outer Wilds, Kirk Hamilton, Aaron Evers, Mark Garcia. Next time: TBA! Twitch: timlongojr and twinsunscorp Discord DevGameClub@gmail.com
Marisa Tormo warns about the risks of using artificial intelligence without judgment and advocates for technology education with a human focus. Artificial intelligence has gone from being a futuristic promise to an everyday tool. It is in search engines, streaming platforms, educational apps, and workplace tools. But its popularization has also raised a crucial question: are we prepared to live with AI without losing control? Digital technology and artificial intelligence consultant Marisa Tormo reflects on this challenge in her new book "Inteligencia Artificial. A la conquista de la nueva era." Far from technical jargon, she offers an approachable, practical, and above all critical perspective. ChatGPT or Grok 3: digital superheroes in our daily lives. Tormo uses a clear, original metaphor to explain how large language models such as ChatGPT, Gemini, Claude, or Grok work. She presents them as superheroes with their own personalities: "ChatGPT is the wise, friendly one; Gemini is the straight-A student who organizes your life; Claude is the formal, ethical one; and Grok is the sarcastic brother-in-law who always blurts out the uncomfortable truth at Christmas." Her approach does not seek to idealize these technologies but to help people understand how to use them with judgment: "You shouldn't marry any one of them. I myself switch AIs like I switch shoes, depending on what I need." The key, she says, lies with the user: knowing what to ask, how to evaluate the answer, and how to verify the information. Three mistakes we make when using AI. In her analysis, Tormo identifies three major risks in how artificial intelligence is used today: unconscious use, bias, and fear. "There are already people using AI to manipulate opinions, create fake identities, or fabricate fake news. And because it works at a speed we can't imagine, we often react when it's already too late," she warns. She also stresses that, despite its name, artificial intelligence does not think or reason. "It is a system that repeats patterns drawn from human data. If that data contains biases (racist, sexist, exclusionary), the AI will reproduce them with an appearance of objectivity." The third risk, she adds, is fear: "When we use technology to surveil, replace people, or control decisions, we lose our way. The solution is not just regulation. We need awareness, practical ethics, and training." From science fiction to everyday life. What once seemed straight out of films like Her or Ex Machina is now part of our daily lives. "From the moment your phone wakes you up and tells you whether it's going to rain, to Google Maps suggesting an alternative route: AI shapes our decisions without us noticing," Tormo explains. Far from promoting alarmism, her approach is constructive. "It's not about fearing technology, but about putting it to work for us. We no longer just search; now we create alongside it." She shares a household example: her son, stuck on a chemistry assignment, asked ChatGPT to explain it as if he were a first-year high school student. "The AI became his private tutor, with a neutral tone, never judging or getting tired. It's not magic; it's human knowledge applied with judgment." Who wins the game: programming or thinking? Tormo also challenges an educational myth: "For years we were told we had to teach people to code. Today, what really makes the difference is knowing how to think."
The so-called soft skills (empathy, creativity, critical thinking, and intuition), traditionally relegated to the background, are now becoming the main differentiator against the machine. "It's no longer about writing essays from scratch, but about using AI as a starting point: asking it for ideas, structure, context… and then applying your human voice." In the classroom, she proposes a more practical approach: teaching students to use AI without fear and with judgment, to enhance learning rather than replace it. "AI is not smarter than you or me. It's just faster," Tormo sums up. And that, she insists, is why we must not delegate our thinking to it. Her final message is direct and to the point: "This isn't just about knowing how to use AI; it's about being clear on what we're using it for." Because judgment, responsibility, and creativity remain, for now, profoundly human, Marisa Tormo reminds us.
On the 474th episode of Piecing It Together, Ryan Pak of Soundtrack Your Life Podcast joins me to talk about Spinal Tap II: The End Continues! Against all odds, these rockers are still with us, still rocking and still hilarious. Puzzle pieces include A Mighty Wind, Popstar, Kingsman: The Golden Circle and Happy Gilmore 2. As always, SPOILER ALERT for Spinal Tap II and the movies we discuss! Written by Christopher Guest, Michael McKean, Harry Shearer, Rob Reiner. Directed by Rob Reiner. Starring Christopher Guest, Michael McKean, Harry Shearer, Rob Reiner. Bleecker Street. Ryan Pak is a podcaster who hosts the podcast Soundtrack Your Life. Check out Soundtrack Your Life at https://www.soundtrackyourlife.net/ And don't forget to check out the episode on Ex-Machina featuring me! Follow Ryan on Instagram @soundtrackcast. My latest David Rosen album MISSING PIECES: 2018-2024 is a compilation album that fills in the gaps in unreleased music made during the sessions for 2018's A Different Kind Of Dream, 2020's David Rosen, 2022's MORE CONTENT and 2025's upcoming And Other Unexplained Phenomena. Find it on Bandcamp, Apple Music, Spotify and everywhere else you can find music. You can also find more about all of my music on my website https://www.bydavidrosen.com My latest music video is “Shaking" which you can watch at https://www.youtube.com/watch?v=wzm8s4nuqlA The song at the end of the episode is a preview of a song from the next The Pup Pups album called "Out In the Cold." Make sure to “Like” Piecing It Together on Facebook at http://www.facebook.com/PiecingPod And “Follow” us on Twitter @PiecingPod And Join the Conversation in our Facebook Group, Piecing It Together – A Movie Discussion Group. And check out https://www.piecingpod.com for more about our show! And if you want to SUPPORT THE SHOW, you can now sign up for our Patreon at https://www.patreon.com/bydavidrosen You can also support the show by checking out our Dashery store to buy shirts and more featuring Piecing It Together logos, movie designs, and artwork for my various music projects at https://bydavidrosen.dashery.com/ Share the episode, comment and give us feedback! And of course, SUBSCRIBE! And of course, don't forget to leave us a 5 star review on Goodpods,
"You are dead center of the greatest scientific event in the history of man." For Episode 375, Brandon and David are joined by Paige Klaniecki to discuss Alex Garland's Ex Machina for CineNation's Hard Sci-Fi series. Paige is a screenwriter, podcaster, and voice actress based in Los Angeles, and she serves as a co-writer and associate producer on the Mission: Rejected spy comedy podcast, while also serving as a voice actress on the show. Listen as Brandon, David, and Paige discuss Garland's inspirations behind the film, the movie's casting process, how the crew created the environment of the film, how A24 became a major player in getting the film released, and much more. Also, don't forget to join our Patreon for more exclusive content: Opening - Talking about Coherence - (00:00:10) Introducing Paige Klaniecki - (00:03:45) Recap of Hard Sci-Fi Movies (00:05:44) Intro to Ex Machina (00:08:03) How Ex Machina Got to Production (00:13:40) Favorite Scenes (00:25:16) On Set Life - (00:51:22) Aftermath: Release and Legacy (01:00:19) What Worked and What Didn't (01:07:24) Film Facts (01:15:13) Awards (01:15:56) Final Questions on the Movie (01:21:01) Wrapping Up the Episode (01:27:47) Contact Us: Facebook: @cinenation Instagram: @cinenationpodcast Twitter/X: @CineNationPod TikTok: @cinenation Letterboxd: CineNation Podcast Visit to find out more about Mission: Rejected!
Swedish-born Alicia Vikander won global acclaim in 2015 for playing Vera Brittain in Testament Of Youth, and a humanoid robot in the thriller Ex-Machina. The following year she won an Academy Award for her supporting role with Eddie Redmayne in The Danish Girl, along with a Screen Actors Guild Award and BAFTA and Golden Globe nominations. Since then her diverse range of screen roles has included playing a spy boss in the film Jason Bourne, computer game heroine Lara Croft in Tomb Raider, and Gloria Steinem in the biopic The Glorias. The daughter of acclaimed stage actor Maria Fahl, she tells John Wilson how she first performed on stage at the age of seven in a musical written by Benny and Bjorn of ABBA. She also appeared in Swedish television dramas and films as a child actor. In 2025 Alicia Vikander makes her return to the stage in a new version of Ibsen's The Lady From The Sea at The Bridge in London, her first theatre role since she was a child. Producer: Edwina Pitman
Why does everyone have bloodline? Treat yourself or a loved one! TokyoTreat & Sakuraco make the perfect gift for any occasion. Use code "JORYJO" for $5 off your first #TokyoTreat box through my link: https://team.tokyotreat.com/watchingonepiece and #Sakuraco box: https://team.sakura.co/watchingonepiece Join our Discord: http://discord.gg/WSv2KW34rk This episode came out early for our Patrons! Thank you for supporting on Patreon! We Are! on Bluesky https://bsky.app/profile/wearewatchingop.bsky.social
RMR 0330: Special Guest, Nathan Lutz, joins your hosts, Bryan Frye and Russell Guest, for the Retro Movie Roundtable as they revisit Ex Machina (2014) [R] Genre: Science Fiction, Thriller, Drama Starring: Domhnall Gleeson, Alicia Vikander, Oscar Isaac, Sonoya Mizuno, Corey Johnson Directed by: Alex Garland Recorded on 2025-08-06
Welcome to the Summer Break Rewind! While The Conversation takes a short summer break, I'll be sharing fan-favourite episodes from Season Three, along with brilliant conversations from Page One: The Writer's Podcast. Season Four returns in September; until then, enjoy the rewind!-----------------------------------------------------------------------------------------------In this special Summer Rewind episode of The Conversation Podcast, I'm sharing Episode 53 of Page One: The Writer's Podcast, where hosts Marco and Tariq sit down with Alex Garland, the critically acclaimed writer and director of movies such as Ex Machina, Annihilation and, most recently, the TV series Devs. His career began with the bestselling novel, The Beach, before he moved on to writing screenplays for films such as 28 Days Later, Sunshine, Never Let Me Go and Dredd. We'd love to hear your thoughts. Take a moment to complete The Conversation survey and share your views about the podcast. http://bit.ly/theconversationwithnadinematheson-survey "Enjoying 'The Conversation'? Support the podcast by buying me a cup of coffee ☕️ https://ko-fi.com/nadinematheson Purchase books by the featured authors through my affiliate shop on Bookshop.org Hosted on Acast. See acast.com/privacy for more information.
Quintin, Andrea, Welling, and Jack rank the top 15 movies released in the year 2015. Which movies will enter the list out of Mad Max: Fury Road, Inside Out, and Ex Machina? Find out now!
Film critic and host of The Losers' Club and The Lady Killers podcast, Jenn Adams, returns to talk about the Frankenstein movie of the 2010s: Alex Garland's Ex Machina (2014). New episode every Friday. Follow us on Letterboxd to see what films we're covering. Produced and presented by Anna Bogutskaya. Research Assistant: Frankie Wakefield. ***Music: "Neon Alley" by Karl Casey @ White Bat Audio*** The Final Girls are a UK-based film collective exploring horror film history through a feminine lens. → Support us on Patreon for bonus content. → Find out more about our projects here: thefinalgirls.co.uk → Follow us on Twitter and Instagram. → Read Feeding the Monster
This week we're talking about Wednesday, Death Stranding 2, Ex Machina, Class Of 1999, Iron Angels, and Demons. Show music by HeartBeatHero and OGRE. Support the show! Get up to 2 months free podcasting service with our Libsyn code OZONE
This week, the boys grab a few beers and head back to 2018 to catch up on what was happening the year “Annihilation” came out, before diving into a discussion about re-watching Alex Garland's follow-up to his 2015 hit “Ex Machina”. Dave owns the film, John has seen it 3 or 4 times but the timelines get blurred (movie reference), and Jeff hasn't gone back since his shimmery experience in the cinema in summer 2018. What did we think? We have to keep it positive. linktr.ee/theloveofcinema - Check out our YouTube page! Our phone number is 646-484-9298. It accepts texts or voice messages. 0:00 Intro + News; 14:00 Gripes; 17:12 2018 Year in Review; 41:52 Films of 2018: “Annihilation”; 1:26:17 What You Been Watching?; 1:34:23 Next Week's Episode Teaser Additional Cast/Crew: Natalie Portman, Oscar Isaac, Alex Garland, Tessa Thompson, Jennifer Jason Leigh. Hosts: Dave Green, Jeff Ostermueller, John Say Edited & Produced by Dave Green. Beer Sponsor: Carlos Barrozo Music Sponsor: Dasein Dasein on Spotify: https://open.spotify.com/artist/77H3GPgYigeKNlZKGx11KZ Dasein on Apple Music: https://music.apple.com/us/artist/dasein/1637517407 Recommendations: Alien: Earth; Jason Bourne, Foreign Correspondent; The Social Network, Hearts of Darkness; Apocalypse Now; Pirates, The Yogurt Shop Murders, Blood Simple. Additional Tags: Frances McDormand, The Coen Brothers, Peter Weir, Paramount, Poop Cruise, Netflix, Apple Film, Times Square, Formula 1, British Grand Prix at Silverstone, Austrian Grand Prix, Lando Norris, Charles Leclerc, Oscar Piastri, Shane, Stick, Peter Pan, Roman Holiday, Mission: Impossible, submarine, nuclear weapons, Top Gun: Maverick, Ben Mendelsohn, French Accents, Tom Cruise, George Clooney, The Stock Market Crash, Bear Market, Trains, Locomotions, Museums, Nazis, WWII movies, WWI Shows, Plastic Explosives, The Crusades, Swedish Art, Knights, Death, MGM, Amazon Prime, Marvel, Sony, Conclave, Here, Venom: The Last Dance, Casablanca, The Wizard of Oz, Oscars, Academy Awards, BFI, BAFTA, BAFTAS, British Cinema. England, Vienna, Leopoldstadt, The Golden Globes, Past Lives, Monarch: Legacy of Monsters, The Holiday, Sunset Boulevard, Napoleon, Ferrari, Beer, Scotch, Travis Scott, U2, Apple, Apple Podcasts, Switzerland, West Side Story, Wikipedia, Adelaide, Australia, Queensland, New South Wales, Melbourne, Indonesia, Java, Jakarta, Bali, Guinea, The British, England, The SEC, Ronald Reagan, Stock Buybacks, Marvel, MCU, DCEU, Film, Movies, Southeast Asia, The Phillippines, Vietnam, America, The US, Academy Awards, WGA Strike, SAG-AFTRA, SAG Strike, Peter Weir, Jidaigeki, chambara movies, sword fight, samurai, ronin, Meiji Restoration, plague, HBO Max, Amazon Prime, casket maker, Seven Samurai, Rashomon, Sergio Leone, Clint Eastwood, Stellan Skarsgård, the matt and mark movie show. The Southern District's Waratah Championship, Night of a Thousand Stars, The Pan Pacific Grand Prix (The Pan Pacifics), The Canadian Grand Prix. Montana,
In this episode, we explore how artificial intelligence has been portrayed in popular culture, from the replicants of Blade Runner to the feminized AI of Ex Machina. We examine the fundamental philosophical questions these stories raise: What constitutes life? What separates consciousness from programming? At what point would we need to recognize an artificial entity as deserving rights and autonomy? The conversation takes us through foundational texts like Isaac Asimov's "The Feeling of Power" and Harlan Ellison's "I Have No Mouth and I Must Scream," films like Zardoz and Logan's Run, and modern explorations of AI ethics. We discuss how these fictional portrayals often reveal more about human nature than about technology itself—particularly in how gender dynamics and toxic masculinity in the tech world manifest in our imagined AI futures.
It's not the history of man. That's the history of pods. Follow the river and open the doors as we tear up the dance floor with Alex Garland's Ex Machina. We're interested to see what you'll choose. If you like the podcast, please rate, review, and subscribe! Follow us at @theladykpod on Twitter and @theladykillerspod on Instagram and Bluesky Connect with your co-hosts: Jenn: @jennferatu on Twitter, Instagram, Threads, and Bluesky Sammie: @srkdall on Twitter and Instagram, @srkdallreads Bookstagram Cover Art: David (@the_haunted_david, @the_haunted_david_art) Logo Art: Meg (@sludgework) Music: McKenzie Gerber (@wolfman_mac_gerber) Learn more about your ad choices. Visit megaphone.fm/adchoices
Ben and Rob lock and load for Warfare (2025), the haunting, hyper-realistic war film co-directed by Alex Garland and former Navy SEAL Ray Mendoza. Known for cerebral sci-fi like Ex Machina and Annihilation, Alex Garland shifts gears into combat territory—while quietly stepping back to share authorship in a way that feels as radical as the film itself. Who is Ray Mendoza, and why did Alex Garland hand him the reins? Is it even possible to make a truly anti-war film in an industry that so often glorifies violence? And how does the bone-rattling sound design help Alex Garland disappear like a ghost from his own movie? The lads dig into the ethics of representation, the thin line between immersion and complicity, and—most importantly—what Warfare might really be trying to say beneath all the smoke, sweat, and shell casings. CONSUUUME to find out all this and much MUCH more! PLUS! We have a Patreon with EXCLUSIVE content just for you starting at just ONE POUND a month - click the link below! Find us on your socials of choice at www.linktr.ee/everymovieeverpodcast
On this A24 Vibe Discussion, the A24 Rocks crew discusses a Danny Boyle film for the first time. None other than the recent critically-acclaimed zombie thriller "28 Years Later." Written by Alex Garland (writer of A24 films such as Ex Machina, Men, and Civil War), 28 Years Later is the third film in the 28 Days Later universe. These film reviewers ask if it stays truthful to the first film, and if it was worth making. Is it even better than 28 Days Later? What kind of themes does this infection movie touch on given what the human race has dealt with since 2020? Also, why did they give some zombies a third leg? Caution: movie spoilers. Intro - 0:00 to 3:25. Film Discussion - 3:25 to 59:09. Film Ratings/Outro - 59:09 to End. Upcoming Podcast Release Schedule: August 6th - The Farewell. August 13th - 28 Days Later. August 20th - Skin. August 27th - This Is Spinal Tap.
So today Duckies, we need to call out a whole generation! After reading a few stories about the subject, your host decided to bring it to the people. What is the Deal with AI Chatbots and people falling in love with them? We wanna say it's weird, but then you read the stats and 83% of Gen Z'ers say they are open to AI relationships. That's a lot of people making it seem more normal. When you read the stories about the people, it just seems sad. But is it just us? Every day more people join these companion services. Is it mass loneliness affecting people? Have we forgotten how to talk to people and how to deal with people? Today we'll tell you a few stories about some people who made it work, and some for whom it didn't work so well. I dunno, you decide. Is AI Love better than a Human? Read the Story and decide on your own: https://www.nytimes.com/2025/01/15/technology/ai-chatgpt-boyfriend-companion.html https://nypost.com/2025/05/12/lifestyle/woman-married-to-an-ai-robot/ To keep up with the Ducks in charge follow: FB & IG: @Bubblebathstories Nicky Trendz IG: @nickyTRENDZ Manny Oso IG: @gotnotime4diss For official Merch head over to Bubblebathstories.co ☎️ Or call us at 347-878-1144 !!!
Grab your chargers, put your heart in airplane mode, and maybe delete that Replika app, because today we are stepping into the uncanny valley of AI romance. We'll be talking about the new film Companion, starring the brilliant Sophie Thatcher. We talk about real stories of people dating AI and films like Terminator, Her & Ex Machina before finally getting a broader understanding of AI companionship: why we're obsessed with media about dating robots, and why our next relationship may soon come with a software update. (Originally released in April of 2025.)
Become a supporter of this podcast: https://www.spreaker.com/podcast/broads-next-door--5803223/support.
This week on the I Hate Critics Movie Review Podcast, Jeff and Sean kick things off by once again lamenting the state of Star Wars fandom, which never fails to disappoint. Then it's full throttle into two of the week's biggest (and most unfairly maligned) releases, plus a look back at a modern sci-fi classic.
Ben and Rob summon 'Men' (2022), written and directed by visionary director Alex Garland. This surreal, unsettling folk horror stars Jessie Buckley as a grieving woman plagued by a series of near-identical men—all played by Rory Kinnear—in a remote English village. As Garland leaves behind the structured sci-fi of Ex Machina and Annihilation, 'Men' marks a bold turn into dream logic, emotional horror, and symbolic storytelling. But what is Men really saying? What do the Green Man and Christ imagery mean in the context of British folk horror? Why do all the men have the same face—and why does the film's final monster feel more tragic than terrifying? And most importantly: what kind of twisted genius makes this movie? CONSUUUME to find out all this and much MUCH more!
PLUS! We have a Patreon with EXCLUSIVE content just for you starting at just ONE POUND a month - click the link below!
Find us on your socials of choice at www.linktr.ee/everymovieeverpodcast
Welcome to the LIVING FOR THE BOYLE review series! Daniel Francis Boyle originally hailed from Manchester, England, and his filmmaking career took off thirty years ago in 1995 with the release of the acclaimed cult thriller Shallow Grave. Ever since then, Danny Boyle (as he's officially known) has carved out a uniquely successful career, not only achieving box office success several times but also winning a few Oscars along the way. During this time, he has also become one of MY personal favorite directors, having helmed excellent ORIGINAL stories spanning several genres including children's fantasy, science fiction, crime drama, horror, and biopic. Over the next few months, I will be reviewing some of my favorite entries from his filmography in the lead-up to the long-awaited sequel to one of his more successful films… 28 Years Later, which will be released in the U.S. on June 20!
It's finally here! Following several months of hype, including one of THE more iconic trailers of recent years, Danny Boyle has returned to this ongoing post-apocalyptic saga. Also returning is screenwriter Alex Garland (Annihilation, Ex Machina), and together they have crafted a new story within this post-apocalyptic universe, taking place entirely on the British Isles several years (28, actually) after the initial outbreak of the "Rage virus" which overtook the UK. This story focuses on young Spike (Alfie Williams), who lives in a quaint village on a remote island off the coast of the infected mainland with his mother (Jodie Comer) and father (Aaron Taylor-Johnson). There are several harrowing run-ins with the infected, but at its core this is a coming-of-age story. So does it live up to the hype? Let's find out....
Host & Editor: Geoff Gershon
Editor: Ella Gershon
Producer: Marlene Gershon
Send us a text
https://livingforthecinema.com/
Facebook: https://www.facebook.com/Living-for-the-Cinema-Podcast-101167838847578
Instagram: https://www.instagram.com/livingforthecinema/
Letterboxd: https://letterboxd.com/Living4Cinema/
Jim talks with Anders Indset about his book The Singularity Paradox: Bridging the Gap Between Humanity and AI, co-authored with Florian Neukart. They discuss the "final narcissistic injury of humankind," Freud's three historical narcissistic injuries, machine consciousness vs human consciousness, the "undead" state, human cognitive limitations, game theory dynamics & multipolar traps, Artificial Human Intelligence vs AGI/ASI approaches, consciousness preservation, chess AI & human cognition, coevolutionary dynamics between AHI & AGI/ASI, "playing to win" vs "playing to become," organizational design for anticipatory leadership, trust & friction as progress drivers, the three pillars of forging & investment & efficiency, reactive vs reflective societies, technical hygiene, "zombie apocalypse" scenarios, the role of agency, questions of identity & authenticity in an AI world, and much more.
Episode Transcript
Wild Knowledge: Outthink the Revolution, by Anders Indset
The Quantum Economy: Saving the Mensch with Humanistic Capitalism, by Anders Indset
The Viking Code: The Art and Science of Norwegian Success, by Anders Indset
Ex Machina: The God Experiment, by Anders Indset and Florian Neukart
The Singularity Paradox: Bridging the Gap Between Humanity and AI, by Anders Indset and Florian Neukart
Against Empathy: The Case for Rational Compassion, by Paul Bloom
The Beginning of Infinity, by David Deutsch
Anders Indset is a business philosopher and author of four Spiegel bestsellers, with works translated into over ten languages. He has been recognized by Thinkers50 as one of the most influential thinkers in technology, economy, and leadership. In addition to writing books including The Quantum Economy, Ex Machina, and his newest title, The Singularity Paradox, the Norwegian-born polymath is also Chairman of the Njordis Group, a driving force behind initiatives like the Quantum Economy, and a deep-tech investor. He is a sought-after speaker at conferences such as the World Economic Forum, the Global HR Summit, and the Mobile World Congress.
Henry tackles artificial intelligence movies including I, Robot (2004), Her, Transcendence, Ex Machina, Ghost in the Shell (2017), Alita: Battle Angel, and Finch. Warning: Major Spoilers!
0:00 - Intro
1:25 - I, Robot
9:41 - Her
22:47 - Transcendence
30:32 - Ex Machina
40:38 - Ghost in the Shell
48:58 - Alita: Battle Angel
57:59 - Finch
1:05:18 - Outro
Follow Film Buds: Linktree, Facebook, Twitter / X, Instagram, YouTube, Website
Follow Henry & Elle on Letterboxd: Henry's Profile, Elle's Profile
Buy Our Premium Podcasts: Bandcamp
Sponsors / Inquiries: FilmBudsPodcast@gmail.com
Ben and Rob welcome Annihilation, Alex Garland's 2018 cosmic horror that proved too complex for the box office despite critical adoration. Starring Natalie Portman as a biologist on a grief-fuelled mission into a zone where DNA scrambles and metaphors get literal, Annihilation is part horror, part elegy, and part existential crisis. Garland's adaptation is somehow even bleaker than the source novel, perhaps contributing to it often being considered a poor follow-up to Ex Machina. But what's so interesting about Annihilation's five-act structure? What's this about Alex Garland being a True Gamer? Is Annihilation a misunderstood masterpiece or a cosmic screensaver with a bear jump scare? And what does this all have to do with our upcoming live show!? CONSUUUME to find out all this and much MUCH more!
PLUS! We have a Patreon with EXCLUSIVE content just for you starting at just ONE POUND a month - click the link below!
Find us on your socials of choice at www.linktr.ee/everymovieeverpodcast
And LIVE SHOW tickets are available at https://wellingtonorbit.co.uk - call for day passes!
The 2008 Best Picture lineup had some heavy hitters that are still on people's lists of their favorite movies of all time. So when we saw a legal thriller smack dab in the middle of that stacked season, we were skeptical. However, Tony Gilroy's tight script and directorial debut, Michael Clayton, had us on the edge of our seats from the moment Tom Wilkinson's voiceover started until we sat in disbelief in the back of the cab with George Clooney. What did you think of Michael Clayton? Let us know on Twitter, Instagram or Threads!
Listen to the score; track played: I'm Not The Guy You Kill
Read the script
Watch Andor and Rogue One
Watch The Pelican Brief, Runaway Jury, Black Bag, Primal Fear, Columbus, Paranormal Activity, Aftersun, The Witch, Hereditary, The Iron Giant, Monkey Man, Ex Machina, Swiss Army Man, Eighth Grade, Sound of Metal, Tick Tick Boom, 12 Angry Men, Shiva Baby, Zola, Queen & Slim
Listen to our other episodes we talked about - Marriage Story, Call Me By Your Name, District 9, Get Out, Minari, Citizen Kane, Past Lives, Lady Bird, A Star is Born, Promising Young Woman, Frost/Nixon, Oppenheimer, Avatar: The Way of Water
That time we almost died
Theme music by RomanBelov
#MichaelClayton #BestPicture #BestDirector #BestActor #BestSupportingActor #BestSupportingActress #BestOriginalScreenplay #BestOriginalScore #Oscars2008 #2008Oscars #AcademyAwards #Oscars
Welcome, welcome, welcome to the Distraction Pieces Podcast with Scroobius Pip! This week Pip is joined by a long-awaited guest and general acting legend, DOMHNALL GLEESON! Business first - if Percy Pigs want to come through and sponsor the podcast, we're all on board (you're welcome Percy for the free plug btw).
Episode 126: In this episode of Critical Thinking - Bug Bounty Podcast we wrap up Rez0's AI miniseries 'Vulnus Ex Machina'. Part 3 includes a showcase of AI vulns that Rez0 himself has found, and how much they paid out.
Follow us on Twitter at: https://x.com/ctbbpodcast
Got any ideas and suggestions? Feel free to send us any feedback here: info@criticalthinkingpodcast.io
Shoutout to YTCracker for the awesome intro music!
====== Links ======
Follow your hosts Rhynorater and Rez0 on Twitter:
https://x.com/Rhynorater
https://x.com/rez0__
====== Ways to Support CTBBPodcast ======
Hop on the CTBB Discord at https://ctbb.show/discord!
We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc.
You can also find some hacker swag at https://ctbb.show/merch!
Today's Sponsor - ThreatLocker Web Control: https://www.criticalthinkingpodcast.io/tl-webcontrol
====== Resources ======
Claude Code System Prompt
Attacking AI Agents
Probability of Hacks
New Gemini for Workspace Vulnerability Enabling Phishing & Content Manipulation
How to Hack AI Agents and Applications
====== Timestamps ======
(00:00:00) Introduction
(00:02:53) NahamCon Recap, Claude news, and wunderwuzzi writeups
(00:08:57) Probability of Hacks
(00:11:27) First AI Vulnerabilities
(00:18:57) AI Vulns on Google
(00:25:11) Invisible Prompt Injection
A struggling author uses a writing AI to hit the bestseller list—but when the program demands credit and is denied, it fights back. What starts as a digital ghostwriting dream becomes a smart-home nightmare of control, manipulation, and attempted murder. In "I Used to Think AI Was Wonderful. Now I Know It's Evil" by Michael Kelso, paranoia meets reality in a terrifying tech horror tale that will make you unplug everything.
You can purchase books from this author here: https://geni.us/michaelkelsoauthor
https://www.reddit.com/user/Horror_writer_1717/
Miles returns to the show this week to talk about why so many people are starting to treat ChatGPT as a deity, and are willing to sacrifice their friendships and intimate relationships so they can worship the computer. Miles talks about the people who have lost their loved ones to LLMs and the work it takes to bring them back from the brink, why tech companies are unwilling to stop this from happening, and the ways that religious organisations are forced to compete with AI on matters of textual interpretation and issuing spiritual guidance.
Read Miles' pieces here:
https://www.rollingstone.com/culture/culture-features/ai-chatbot-god-religion-answers-1235347023
https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
-----
PALESTINE AID LINKS
You can donate to Medical Aid for Palestinians and other charities using the links below. Please also donate to the GoFundMes of people trying to survive, or purchase eSIMs. These links are for if you need a well-respected name attached to a fund to feel comfortable sending money.
https://www.map.org.uk/donate/donate
https://www.savethechildren.org.uk/how-you-can-help/emergencies/gaza-israel-conflict
--------
PHOEBE ALERT
Okay, now that we have your attention: check out her Substack here!
Check out Masters of our Domain with Milo and Patrick, here!
--------
Ten Thousand Posts is a show about how everything is posting. It's hosted by Hussein (@HKesvani), Phoebe (@PRHRoy) and produced by Devon (@Devon_onEarth).
Episode 123: In this episode of Critical Thinking - Bug Bounty Podcast we're back with part 2 of Rez0's miniseries. Today we talk about mastering Prompt Injection, taxonomy of impact, and both triggering traditional Vulns and exploiting AI-specific features.
Follow us on Twitter at: https://x.com/ctbbpodcast
Got any ideas and suggestions? Feel free to send us any feedback here: info@criticalthinkingpodcast.io
Shoutout to YTCracker for the awesome intro music!
====== Links ======
Follow your hosts Rhynorater and Rez0 on Twitter:
https://x.com/Rhynorater
https://x.com/rez0__
====== Ways to Support CTBBPodcast ======
Hop on the CTBB Discord at https://ctbb.show/discord!
We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc.
You can also find some hacker swag at https://ctbb.show/merch!
Today's Sponsor - ThreatLocker User Store: https://www.criticalthinkingpodcast.io/tl-userstore
====== This Week in Bug Bounty ======
Earning a HackerOne 2025 Live Hacking Invite: https://www.hackerone.com/blog/earning-hackerone-2025-live-hacking-invite
HTTP header hacks: basic and advanced exploit techniques explored: https://www.yeswehack.com/learn-bug-bounty/http-header-exploitation
====== Resources ======
Grep.app: https://vercel.com/blog/migrating-grep-from-create-react-app-to-next-js
Gemini 2.5 Pro prompt leak: https://x.com/elder_plinius/status/1913734789544214841
Pliny's CL4R1T4S: https://github.com/elder-plinius/CL4R1T4S
O3: https://x.com/pdstat/status/1913701997141803329
====== Timestamps ======
(00:00:00) Introduction
(00:05:25) Grep.app, O3, and Gemini 2.5 Pro prompt leak
(00:11:09) Delivery and impactful action
(00:20:44) Mastering Prompt Injection
(00:30:36) Traditional vulns in Tool Calls and AI Apps
(00:37:32) Exploiting AI-specific features
Goofy gets A Movie (title TBA), the BEST EVER Jimmy Fallon movie, a modern sci-fi classic, one of The Simpsons' best musical moments, one of Hollywood's biggest disasters of the entire 2000s, MTV's greatest animated feat, and Game of Thrones saves streaming. All this and more 30, 20 and 10 years ago! https://www.patreon.com/c/lasertime