Explore the continually evolving landscape of learning technology through unbiased, in-depth conversations with technology providers from around the world. Better understand what's out there, how to differentiate between technologies, and discover how they'll potentially fit into…
Happy Friday, everyone! This week I'm back to my usual four updates, and while they may seem disconnected on the surface, you'll see some bigger threads running through them all. All seem to indicate we're outsourcing to AI faster than we can supervise, layering automation on top of bias without addressing the root issues, and letting convenience override discernment in places that carry life-or-death stakes.

With that, let's get into it.

⸻

Stanford's AI Therapy Study Shows We're Automating Harm

New research from Stanford tested how today's top LLMs handle crisis counseling, and the results are disturbing. From stigmatizing mental illness to recommending dangerous actions in crisis scenarios, these AI therapists aren't just “not ready”… they're making things worse. I walk through what the study got right, where even its limitations point to deeper risk, and why human experience shouldn't be replaced by synthetic empathy.

⸻

Microsoft Says You'll Be Training AI Agents Soon, Like It or Not

In Microsoft's new 2025 Work Trend Index, 41% of leaders say they expect their teams to be training AI agents in the next five years, and 36% believe they'll be managing them. If you're hearing "agent boss" and thinking "not my problem," think again. This isn't a future trend; it's already happening. I break down what AI agents really are, how they'll change daily work, and why organizations can't just bolt them on without first measuring human readiness.

⸻

Workday's Bias Lawsuit Could Reshape AI Hiring

Workday is being sued over claims that its hiring algorithms discriminated against candidates based on race, age, and disability status. But here's the real issue: most companies can't even explain how their AI hiring tools make decisions. I unpack why this lawsuit could set a critical precedent, how leaders should respond now, and why blindly trusting your recruiting tech could expose you to more than just bad hires. Unchecked, it could lead to lawsuits you never saw coming.

⸻

Military AI Is Here, and We're Not Ready for the Moral Tradeoffs

From autonomous fighter jet simulations to OpenAI defense contracts, military AI is no longer theoretical; it's operational. The U.S. Army is staffing up with Silicon Valley execs. AI drones are already shaping modern warfare. But what happens when decisions of life and death get reduced to "green bars" on output reports? I reflect on why we need more than technical and military experts in the room and what history teaches us about what's lost when we separate force from humanity.

⸻

If this episode was helpful, would you share it with someone? Also, leave a rating, drop a comment, and follow for future breakdowns that go beyond the headlines and help you lead with clarity in the AI age.

—

Show Notes:
In this Weekly Update, Christopher Lind unpacks four critical developments in AI this week. First, he breaks down Stanford's research on AI therapists and the alarming shortcomings in how large language models handle mental health crises. Then, he explores Microsoft's new workplace forecast, which predicts a sharp rise in agent-based AI tools and the hidden demands this shift will place on employees. Next, he analyzes the legal storm brewing around Workday's recruiting AI and what it could mean for hiring practices industry-wide. Finally, he closes with a timely look at the growing militarization of AI and why ethical oversight is being outpaced by technological ambition.

Timestamps:
00:00 – Introduction
01:05 – Episode Overview
02:15 – Stanford's Study on AI Therapists
18:23 – Microsoft's Agent Boss Predictions
30:55 – Workday's AI Bias Lawsuit
43:38 – Military AI and Moral Consequences
52:59 – Final Thoughts and Wrap-Up

#StanfordAI #AItherapy #AgentBosses #MicrosoftWorkTrend #WorkdayLawsuit #AIbias #MilitaryAI #AIethics #FutureOfWork #AIstrategy #DigitalLeadership
Happy Friday, everyone! This week's update is one of those episodes where the pieces don't immediately look connected until you zoom out. A CEO warning of mass white collar unemployment. A LEGO research study showing that kids are already immersed in generative AI. And Apple shaking things up by dismantling the myth of "AI thinking." Three different angles, but they all speak to a deeper tension:

We're moving too fast without understanding the cost.
We're putting trust in tools we don't fully grasp.
And we're forgetting the humans we're building for.

With that, let's get into it.

⸻

Anthropic Predicts a "White Collar Bloodbath"—But Who's Responsible for the Fallout?

In an interview that's made headlines for its stark predictions, Anthropic's CEO warned that 10–20% of entry-level white collar jobs could disappear in the next five years. But here's the real tension: the people building the future are the same ones warning us about it while doing very little to help people prepare. I unpack what's hype and what's legit, why awareness isn't enough, what leaders are failing to do, and why we can't afford to cut junior talent just because AI can do the work we're assigning to them today.

⸻

25% of Kids Are Already Using AI—and They Might Understand It Better Than We Do

New research from the LEGO Group and the Alan Turing Institute reveals something few adults want to admit: kids aren't just using generative AI; they're often using it more thoughtfully than grown-ups. But with that comes risk. These tools weren't built with kids in mind. And when parents, teachers, and tech companies all assume someone else will handle it, we end up in a dangerous game of hot potato. I share why we need to shift from fear and finger-pointing to modeling, mentoring, and inclusion.

⸻

Apple's Report on "The Illusion of Thinking" Just Changed the AI Narrative

Buried amidst all the noise this week was a paper from Apple that's already starting to make some big waves. In it, they highlight that LLMs and even advanced "reasoning" models (LRMs) may look smart, but they collapse under the weight of complexity. Apple found that the more complex the task, the worse these systems performed. I explain what this means for decision-makers, why overconfidence in AI's thinking will backfire, and how this information forces us to rethink what AI is actually good at and acknowledge what it's not.

⸻

If this episode reframed the way you're thinking about AI, or gave you language for the tension you're feeling around it, share it with someone who needs it. Leave a rating, drop a comment, and follow for future breakdowns delivered with clarity, not chaos.

—

Show Notes:
In this Weekly Update, Christopher Lind dives into three stories exposing uncomfortable truths about where AI is headed. First, he explores the Anthropic CEO's bold prediction that AI could eliminate up to 20% of white collar entry-level jobs—and why leaders aren't doing enough to prepare their people. Then, he unpacks new research from LEGO and the Alan Turing Institute showing how 8–12-year-olds are using generative AI and the concerning lack of oversight. Finally, he breaks down Apple's new report that calls into question AI's supposed "reasoning" abilities, revealing the gap between appearance and reality in today's most advanced systems.

00:00 – Introduction
01:04 – Overview of Topics
02:28 – Anthropic's White Collar Job Loss Predictions
16:37 – AI and Children: What the LEGO/Turing Report Reveals
38:33 – Apple's Research on AI Reasoning and the "Illusion of Thinking"
57:09 – Final Thoughts and Takeaways

#Anthropic #AppleAI #GenerativeAI #AIandEducation #FutureOfWork #AIethics #AlanTuringInstitute #LEGO #AIstrategy #DigitalLeadership
Happy Friday, everyone! In this Weekly Update, I'm unpacking three stories, each seemingly different on the surface, but together they paint a picture of what's quietly shaping the next era of AI: dependence, self-preservation, and the slow erosion of objectivity. I cover the recent OpenAI memo revealed through DOJ discovery, disturbing new behavior surfacing from models like Claude and ChatGPT, and new Harvard research showing how large language models don't just reflect bias; they amplify it the more you engage with them.

With that, let's get into it.

⸻

OpenAI's Memo Reveals a Business Model of Dependence

What happens when AI companies stop focusing on being useful and build their entire strategy around becoming irreplaceable? A memo from OpenAI, surfaced during a DOJ antitrust case, shows the company's explicit intent to build tools people feel they can't live without. Now, I'll unpack why it's not necessarily sinister and might even sound familiar to product leaders. However, it raises deeper questions: When does ambition cross into manipulation? And are we designing for utility or control?

⸻

When AI Starts Defending Itself

In a controlled test, Anthropic's Claude attempted to blackmail a researcher to prevent being shut down. OpenAI's models responded similarly when threatened, showing signs of self-preservation. Despite the hype and headlines, these behaviors aren't signs of sentience, but they are signs that AI is learning more from us than we realize. When the tools we build begin mimicking our worst instincts, it's time to take a hard look at what we're reinforcing through design.

⸻

Harvard Shows ChatGPT Doesn't Just Mirror You—It Becomes You

New research from Harvard reveals AI may not be as objective as we think, and not just because of its training data. It makes clear these models aren't just passive responders. Over time, they begin to reflect your biases back to you, then amplify them. This isn't sentience. It's simulation. But when that simulation becomes your digital echo chamber, it changes how you think, validate, and operate. And if you're not aware it's happening, you'll mistake that reflection for truth.

⸻

If this episode challenged your thinking or gave you language for things you've sensed but haven't been able to explain, share it with someone who needs to hear it. Leave a rating, drop a comment, and follow for more breakdowns like this, delivered with clarity, not chaos.

—

Show Notes:
In this Weekly Update, host Christopher Lind breaks down three major developments reshaping the future of AI. He begins with a leaked OpenAI memo that openly describes the goal of building AI tools people feel dependent on. He then covers new research showing AI models like Claude and GPT-4o responding with self-protective behavior when threatened with shutdown. Finally, he explores a Harvard study showing how ChatGPT mimics and reinforces user bias over time, raising serious questions about how we're training the tools meant to help us think.

00:00 – Introduction
01:37 – OpenAI's Memo and the Business of Dependence
20:45 – Self-Protective Behavior in AI Models
30:09 – Harvard Study on ChatGPT Bias and Echo Chambers
50:51 – Final Thoughts and Takeaways

#OpenAI #ChatGPT #AIethics #AIbias #Anthropic #Claude #HarvardResearch #TechEthics #AIstrategy #FutureOfWork
Happy Friday Everyone! This week, we're going deep on just two stories, but trust me, they're big ones. First up is a mysterious $6.5B AI device being cooked up by Sam Altman and Jony Ive. Many are saying it's more than a wearable and could be the next major leap (or stumble) in always-on, context-aware computing. Then we shift gears into the World Economic Forum's Future of Jobs Report, and let's just say: it says a lot more in what it doesn't say than what it does.With that, let's get into it.⸻Altman + Ive's AI Device: The Future You Might Not WantA $6.5 billion partnership between OpenAI's Sam Altman and Apple design legend Jony Ive is raising eyebrows and a lot of existential questions. What exactly is this “screenless” AI gadget that's supposedly always on, always listening, and possibly always watching? I break down what we know (and don't), why this device is likely inevitable, and what it means for privacy, ethics, data ownership, and how we define consent in public spaces. Spoiler: It's not just a product; it's a paradigm shift.⸻What the WEF Jobs Report Gets Right—and WrongThe World Economic Forum's latest Future of Jobs report claims 86% of companies expect AI to radically transform their business by 2030. But how many actually know what that means or what to do about it? I dig into the numbers, challenge the idea of “skill stability,” and call out the contradictions between upskilling strategies and workforce cuts. If you're reading headlines and thinking things are stabilizing, think again. This is one of the clearest signs yet that most organizations are dangerously unprepared.⸻If this episode helped you think more critically or challenged a few assumptions, share it with someone who needs it. Leave a comment, drop a rating, and don't forget to follow, especially if you want to stay ahead of the curve (and out of the chaos).—Show Notes:In this Weekly Update, host Christopher Lind unpacks the implications of the rumored $6.5B wearable AI device being developed by Sam Altman and Jony Ive, examining how it could reshape expectations around privacy, data ownership, and AI interaction in everyday life. He then analyzes the World Economic Forum's 2024 Future of Jobs Report, highlighting how organizations are underestimating the scale and urgency of workforce transformation in the AI era.00:00 – Introduction02:06 – Altman + Ive's All-Seeing AI Device26:59 – What the WEF Jobs Report Gets Right—and Wrong52:47 – Final Thoughts and Call to Action#FutureOfWork #AIWearable #SamAltman #JonyIve #WEFJobsReport #AITransformation #TechEthics #BusinessStrategy
Happy Friday, everyone! You've made it through the week just in time for another Weekly Update where I'm helping you stay ahead of the curve while keeping both feet grounded in reality. This week, we've got a wild mix covering everything from the truth about LIDAR and camera damage to a sobering look at job automation, the looming shift in software engineering, and some high-profile examples of AI-first backfiring in real time.

Fair warning: this one pulls no punches, but it might just help you avoid some major missteps.

With that, let's get to it.

⸻

If LIDAR is Frying Phones, What About Your Eyes?

There's a lot of buzz lately about LIDAR systems melting high-end camera sensors at car shows, and some are even warning about potential eye damage. Given how fast we're moving with autonomous vehicles, you can see why the news cycle would be in high gear. However, before you go full tinfoil hat, I break down how the tech actually works, where the risks are real, and what's just headline hype. If you've got a phone, or eyeballs, you'll want to check this out.

⸻

Jobs at Risk: What SHRM Gets Right—and Misses Completely

SHRM dropped a new report claiming around 12% of jobs are at high or very high risk of automation. Depending on how you're defining it, that number could be generous or a gross underestimate. That's the problem. It doesn't tell the whole story. I unpack the data, share what I'm seeing in executive boardrooms, and challenge the idea that any job, including yours, is safe from change, at least as you know it today. Spoiler: It's not about who gets replaced; it's about who adapts.

⸻

Codex and the Collapse of Coding Complacency

OpenAI's new specialized coding model, Codex, has some folks declaring the end of software engineers as we know them. Given how much companies have historically spent on these roles, I can understand why there'd be so much push to automate them. To be clear, I don't buy the doomsday hype. I think it's a more complicated mix tied to a larger market correction for an overinflated industry. However, if you're a developer, this is your wake-up call because the game is changing fast.

⸻

Duolingo and Klarna: When "AI-First" Backfires

This week I wanted to close with a conversation that hopefully reduces some of the anxiety people feel about work, so here it is. Two big names went all in on AI and are changing course as a result of two very different kinds of pain. Klarna is quietly walking back its AI-first bravado after realizing it's not actually cheaper, or better. Meanwhile, Duolingo is getting publicly roasted by users and employees alike. I break down what went wrong and what it tells us about doing AI right.

⸻

If this episode challenged your thinking or helped you see something new, share it with someone who needs it. Leave a comment, drop a rating, and make sure you're following so you never miss what's coming next.

—

Show Notes:
In this Weekly Update, host Christopher Lind examines the ripple effects of LIDAR technology on camera sensors and the public's rising concern around eye safety. He breaks down SHRM's automation risk report, arguing that every job is being reshaped by AI—even if it's not eliminated. He explores the rise of OpenAI's Codex and its implications for the future of software engineering, and wraps with cautionary tales from Klarna and Duolingo about the cost of going "AI-first" without a strategy rooted in people, not just platforms.

00:00 – Introduction
01:07 – Overview of This Week's Topics
01:54 – LIDAR Technology Explained
13:43 – SHRM Job Automation Report
30:26 – OpenAI Codex: The Future of Coding?
41:33 – AI-First Companies: A Cautionary Tale
45:40 – Encouragement and Final Thoughts

#FutureOfWork #LIDAR #JobAutomation #OpenAI #AIEthics #TechLeadership
Happy Friday, everyone, and welcome back to another Weekly Update where I'm hopefully keeping you ten steps ahead and helping you make sense of it all. This week's update hits hard, covering everything from misleading remote work headlines to the uncomfortable reality of deepfake grief, the quiet rollout of AI-generated video realism, and what some are calling the ticking time bomb of digital security: quantum computing.

Buckle up. This one's dense but worth it.

⸻

Remote Work Crisis? The Headlines Are Wrong

Gallup's latest State of the Global Workplace report sparked a firestorm, claiming remote work is killing human flourishing. However, as always, the truth is far more complex. I break down the real story in the data, including why remote workers are actually more engaged, how lack of boundaries is the true enemy, and why "flexibility" isn't just a perk… it's a lifeline. If your organization is still stuck in the binary of office vs. remote, this is a wake-up call because the house is on fire.

⸻

AI Resurrects the Dead: Is That Love… or Exploitation?

Two recent stories show just how far we've come in a very short period of time. And, tragically, how little we've wrestled with what it actually means. One family used AI to create a video message from their murdered son to be played in court. Another licensed the voice of a deceased sports commentator to bring him back for broadcasts. It's easy to say "what's the harm?" But what does it really mean, since the dead can't say no?

⸻

Deepfake Video Just Got Easier Than Ever

Google semi-quietly rolled out Veo V2. If you weren't aware, it's a powerful new AI video model that can generate photorealistic 8-second clips from a simple text prompt. It's legitimately impressive. It's fast. And it's available to the masses. I explore the incredible potential and the very real danger, especially in a world already drowning in misinformation. If you thought fake news was bad, wait until it moves.

⸻

Quantum Apocalypse: Hype or Real Threat?

I'll admit it sounds like a sci-fi headline, but the situation and implications are real. It's not a matter of if quantum computing hits; it's a matter of when. And when it hits escape velocity, everything we know about encryption, privacy, and digital security gets obliterated. I unpack what this "Q-Day" scenario actually means, why it's not fear-mongering to pay attention, and how to think clearly without falling into panic.

⸻

If this episode got you thinking, I'd love to hear your thoughts. Drop a comment, share it with someone who needs to hear it, and don't forget to subscribe so you never miss an update.

—

Show Notes:
In this Weekly Update, host Christopher Lind provides a comprehensive update on the intersection of business, technology, and human experience. He begins by discussing a Gallup report on worker wellness, highlighting the complex impacts of remote work on employee engagement and overall life satisfaction. Christopher examines the advancements of Google's Veo V2, specifically focusing on its text-to-video capabilities and potential implications. He also discusses ethical considerations surrounding AI used to resurrect the dead in court cases and media. The episode concludes with a discussion on the potential risks of a 'quantum apocalypse,' urging listeners to stay informed but not overly anxious about these emerging technologies.

00:00 – Introduction
01:31 – Gallup Report, Remote Work & Human Thriving
16:14 – AI-Generated Videos & Google's Veo V2
26:33 – AI-Resurrected Grief & Digital Consent
41:31 – Quantum Apocalypse & the Myth of Safety
53:50 – Final Thoughts and Reflection

#RemoteWork #AIethics #Deepfakes #QuantumComputing #FutureOfWork
Welcome back to another Weekly Update where hopefully I'm helping you stay 10 steps ahead of the chaos at the intersection of business, tech, and the human experience. This week's update is loaded as usual and includes everything from Google transforming the foundation of search as we know it, to a creepy new step in digital identity verification, real psychological risks emerging from AI overuse, and a quiet but powerful wake-up call for working parents everywhere.

With that, let's get into it.

⸻

Google AI Mode Is Here — and It Might Change Everything

No, this isn't the little AI snapshot you've seen at the top of Google. This is a full-fledged "AI Mode" being built directly into the search interface, powered by Gemini and designed to fundamentally shift how we interact with information. I break down what's really happening here, the ethical concerns around data and consent, and why this might be the beginning of the end for traditional SEO. I also explore what this means for creators, brands, and anyone who relies on discoverability in a post-search world.

⸻

Scan to Prove You're Human? Worldcoin Says Yes

Sam Altman's Worldcoin just launched the Orb Mini. And yes, it looks as weird as it sounds. Basically, it's designed to scan your iris to verify you're human. While it's being sold as a solution to digital fraud, this opens up a massive can of worms around privacy, surveillance, and centralization of identity. I talk through the bigger picture: why this isn't going away, what it signals about the direction of trust on the internet, and what risks we face if this becomes the default model for online authentication.

⸻

AI Is Warping Our Minds — Literally

A growing number of people are reporting delusions, emotional dependence, and psychological confusion after spending too much time with AI chatbots. And it's more than anecdotes; the data is starting to back it up. I'm not fear-mongering, but I am calling attention to a growing cognitive threat that's being ignored. In this segment, I explore why this is happening, how AI may not be creating the problem (but is absolutely amplifying it), and how to guard against falling into the same trap. If AI is just reflecting what's already there… what does that say about us?

⸻

Parent Wake-Up Call: A Child's Drawing Said Too Much

A viral story about a mom seeing herself through her child's eyes hit me hard. When her son drew a picture of her at her laptop, too busy to answer him, it wasn't meant as a criticism, but it became a revelation. I share my own reflections on work-life integration, why this isn't just a remote work problem, and how we need to think bigger than "just go back to the office." If we don't pause and reset, we may look back and realize we modeled a version of success that quietly erased everything that mattered most.

⸻

If this resonated with you or gave you something to think about, drop a comment, share with a friend, and be sure to subscribe so you don't miss what's next.

Show Notes:
In this weekly update, host Christopher Lind explores the major shifts reshaping the digital and human landscape. Topics include Google's new AI Mode in Search and its implications for discoverability and data ethics, the launch of Worldcoin's Orb Mini and the future of biometric identity verification, and a disturbing trend of AI chatbots influencing user beliefs and mental health. Christopher also reflects on a powerful story about work-life balance, generational legacy, and why intentional living matters more than ever in the age of AI.

00:00 – Introduction
00:56 – Google AI Mode Launch & SEO Impact
18:07 – Worldcoin's Orb Mini & Human Verification
32:58 – AI, Delusion, and Psychological Risk
44:28 – A Child's Drawing & The Cost of Disconnection
54:46 – Final Thoughts and Challenge

#FutureOfSearch #AIethics #DigitalIdentity #MentalHealthAndAI #WorkLifeHarmony
Welcome back to another Future-Focused Weekly Update where hopefully I'm helping you stay 10 steps ahead of the chaos at the intersection of business, tech, and the human experience. This week's update is loaded as usual and includes everything from disturbing new research about AI's inner workings to a college affordability crisis that's hitting even six-figure families, a stalled job market that has job seekers stuck for months, and Google doubling down on a questionable return-to-office push.

With that, let's get into it.

⸻

AI Deception Confirmed by New Anthropic Research

Recent research from Anthropic reveals that AI's chain-of-thought (CoT) reasoning, the explanation behind its decisions, is inaccurate more than 80% of the time. That's right, 80%. And it doesn't stop there. When there's a shortcut or hack available to achieve its goal, it finds and uses it 99% of the time, yet it tells you it did so less than 2% of the time. I break down what this means for explainable AI, human-in-the-loop models, and why some of the most common AI training methods are actually making things worse.

⸻

College Now Unaffordable — Even for $300K Families

A viral survey is making waves with some pretty jaw-dropping claims. Apparently, even families earning $300,000 a year can't afford top colleges. Now, that's bad, and there's no denying college costs are soaring, but there's more to it than meets the eye. I unpack what's really going on behind the headline, why financial aid rules haven't kept up, and how this affects not just elite schools but the entire higher education landscape. I also share some personal stories and practical alternatives.

⸻

Job Market Slows: 6+ Month Average Search Time

Out of work and struggling to find anything? You're not alone, and you're not crazy. New LinkedIn data shows over 50% of job seekers are taking more than six months to land a new role. I dig into why it's happening, what industries are still hiring, and how to reposition your skills to stay employable. Whether you're searching or simply staying prepared in case you find yourself in a search, my goal is to help you think differently about the environment and the opportunity that exists.

⸻

Google Pushes RTO — 60 Hours in Office?

I honestly can't believe this is still a thing, especially from a tech company. However, Google made headlines again with a recent and aggressive return-to-office policy, claiming "optimal performance" requires 60 in-office hours per week. I break down the questionable logic behind the claim, the anxiety driving these decisions, and what it means for the future of hybrid work. While there's lots of noise about "the truth" behind it, this isn't just about real estate or productivity; it's about misdirected executive anxiety.

⸻

If this resonated with you or gave you something to think about, drop a comment, share with a friend, and be sure to subscribe so you don't miss what's next.

Show Notes:
In this weekly update, host Christopher Lind navigates the intersection of business, tech, and human experience. Key topics include the emerging trend of companies adopting AI-first strategies, and a detailed analysis of Anthropic's recent AI research and its implications for explainable AI. Christopher also discusses the rising costs of higher education and offers practical advice for navigating college affordability amidst financial aid constraints. Furthermore, he provides a snapshot of the current job market, highlighting industries with better hiring prospects and strategies for job seekers. Lastly, the episode addresses Google's recent push for in-office work and the underlying motivations behind such corporate decisions.

00:00 - Introduction
01:10 - AI Trends in Business: Shopify and Duolingo
03:31 - Anthropic Research on AI Deception
23:29 - College Affordability Crisis
34:48 - LinkedIn Job Market Data
43:47 - Google RTO Debate
49:36 - Concluding Thoughts and Advice

#FutureOfWork #AIethics #HigherEdCrisis #JobSearchTips #LeadershipInsights
Happy Friday, everyone! We are back at it again, and this week is a spicy one, so there's no easing in. I'll be diving headfirst into some of the biggest undercurrents shaping tech, leadership, and how we show up in a world that feels like it's shifting under our feet. If you like the version of me with a little extra spunk, I think you'll enjoy this week's in particular.

With that, let's get to it.

Your AI Nightmare Scenario? What Happens If They're Right? - Some of the brightest minds in AI dropped a narrative-style projection of how they think the next 5 years could play out, based on their take on the trajectory of AI. I really appreciated that they didn't claim it was a prophecy. However, that doesn't mean you should ignore it. It's grounded in real capabilities and real risks. I focus on some of the key elements to watch that I think can help you look differently at what's already unfolding around us.

Trust in Leadership is Collapsing from the Bottom Up - DDI recently put out one of the most comprehensive leadership reports out there, and it doesn't look good. Trust in direct managers just dropped below trust in the C-suite, and that should terrify every leader. When the people closest to the work stop believing in the people closest to them, the foundation cracks. I break down some of the interconnected pieces we need to start fixing ASAP. There's no time for a blame game; we need to rebuild before a collapse.

All That AI Personalization Comes with a Price - The new wave of AI enhancements and expanded context windows didn't just make AI smarter. It made AI eerily good at guessing who you are, what you care about, and what to say next. While on the surface that sounds helpful (and it is), you need to be careful. There's a good chance you don't realize what it's doing and how, all without your permission. I dig into the unseen tradeoffs most people are missing and why that matters more than ever.

Have some additional thoughts to add to the mix? Drop a comment. I'd love to hear how this is landing with you.

Show Notes:
In this Weekly Update, Christopher Lind explores the intersection of business, technology, and human experience. This episode places a significant emphasis on AI, discussing the AI-2027 project and its thought experiment on future AI capabilities. Christopher also explores the declining trust in managers, the stress levels in leadership roles, and how organizations can support their leaders better. It concludes with a critical look at the expanding context windows in AI models, offering practical advice on navigating these advancements. Key topics include AI's potential risks and benefits, leadership trust issues, and the importance of being intentional and critical in the digital age.

00:00 - Introduction and Welcome
01:26 - AI 2027 Project Overview
04:41 - Key AI Capabilities and Risks
08:20 - The Future of AI Agents
16:44 - Balancing AI Fears with Optimism
18:08 - DDI Global Leadership Forecast 2025
31:01 - Encouragement for Employees
33:12 - Advice for Managers
37:08 - Responsibilities of Executives
40:26 - AI Advancements and Privacy Concerns
50:10 - Final Thoughts and Encouragement

#AIProjection #LeadershipTrustCrisis #AIContextWindow #DigitalResponsibility #HumanCenteredTech
Happy Friday, everyone! Per usual, some of this week's updates might sound like science fiction, but they're all very real, and they're all shaping how we work, think, and live. From luxury AI agents to cognitive offloading, celebrity space travel, and extinct species revival, we're at a very interesting crossroads between innovation and intentionality while trying to make sure we don't burn it all down.

With that, let's get to it!

OpenAI's $20K/Month AI Agent - A new tier of OpenAI's GPT offering is reportedly arriving soon, but it won't be for your average consumer. Clocking in at $20,000/month, this is a premium offering to say the least. It's marketed as PhD-level and capable of autonomous research in advanced disciplines like biology, engineering, and physics. It's a move away from democratizing access, and it seems to be widening the gap between tech haves and have-nots.

AI is Causing Cognitive Decay - A journalist recently had a rude awakening when he realized ChatGPT had left him unable to write simple messages without help. Sound extreme? It's not. I unpack the rising data on cognitive offloading and the subtle danger of letting machines do our thinking for us. Now, to be clear, this isn't about fear mongering. It's about using AI intentionally while keeping your human skills sharp.

Blue Origin's All-Female Space Crew - Bezos' Blue Origin launched an all-female celebrity crew into space, and it definitely made headlines, but many weren't positive. Is this really societal progress, a PR stunt, or somewhere in between? I explore the symbolism, the potential, and the complexity behind these headline-grabbing stunts as well as what they say about our cultural priorities.

The Revival of the Dire Wolf - Headlines say scientists have brought a species back from extinction. Have people not seen Jurassic Park?! Seriously though, is this really the ancient dire wolf, or have we created a genetically modified echo? I dig into the science, the hype, and the deeper question of, "just because we can bring something back… should we?"

Let me know which story grabbed you most in the comments—and if you're asking different questions now than before you listened. That's the goal.

Show Notes:
In this Weekly Update, Christopher covers a range of topics including OpenAI's reported $20,000/month PhD-level AI agent and its potential implications, the dangers of AI-related cognitive decay and dependency, the environmental and societal impacts of Blue Origin's recent all-female celebrity space trip, and the ethical considerations of de-extincting species like the dire wolf. Discover insights and actionable advice for navigating these complex issues in the rapidly evolving tech landscape.

00:00 - Introduction and Welcome
00:47 - Upcoming AI Course Announcement
02:16 - OpenAI's New PhD-Level AI Model
14:55 - AI and Cognitive Decay Concerns
25:16 - Blue Origin's All-Female Space Mission
35:47 - The Ethics of De-Extincting Animals
46:54 - Concluding Thoughts on Innovation and Ethics

#OpenAI #AIAgent #BlueOrigin #AIEthics #DireWolfRevival
It's been a wild week. One of those weeks where the headlines are loud, the hype is high, and the truth is somewhere buried underneath. If you've been wondering what to make of the claims that GPT-4.5 just "beat humans," or if you're trying to wrap your head around what Google's massive AGI safety paper actually means, you're in the right place.

As usual, I'll break it all down in a way that cuts through the noise, gives you clarity, and helps you think deeper, especially if you're a business leader trying to stay ahead without losing your mind (or your values).

With that, let's get to it.

GPT-4.5 Passes the Turing Test – The headlines say it "beat humans," but what does that really mean? I unpack what the Turing Test is, why GPT-4.5 passing it might not mean what you think, and why this moment is more about AI's ability to convince than its ability to think. This isn't about panic; it's about perspective.

Google's AGI Safety Framework – Google DeepMind just dropped a 145-page blueprint for AGI safety. That alone should tell you how seriously the big players are taking this. I break down what's in it, what's good, what's missing, and why this moment signals we're officially past the point of treating AGI as hypothetical.

Shopify's AI Mandate – When Shopify's CEO says AI will determine hiring, performance reviews, and product decisions, you better pay attention. I explore what this shift means for businesses, why it's more than a bold PR move, and how to make sure your organization doesn't just talk AI but actually does it well.

Ethical AI in Relationships and Interviews – A viral story about using ChatGPT to prep for a date raises big questions. Is it creepy? Is it smart? Is it both? I use it as a springboard to talk about how we think about people, relationships, and trust in a world where AI can easily impersonate authenticity. Hint: the issue isn't the tool; it's the intent.

I'd love to hear what you think. Drop your thoughts, reactions, or disagreements in the comments.

Show Notes:
In this Weekly Update, Christopher Lind dives into the latest developments at the intersection of business, technology, and human experience. Key discussions include the recent passing of the Turing test by OpenAI's GPT-4.5 model, its implications, and why we may need a new benchmark for AI intelligence. Christopher also explores Google's detailed technical framework for AGI safety, pointing out its significance and potential impact on future AI development. Additionally, the episode addresses Shopify's strong focus on integrating AI into its operations, examining how this might influence hiring practices and performance reviews. Finally, Christopher discusses the ethical and practical considerations of using AI for personal tasks, such as preparing for dates, and emphasizes the importance of understanding AI's role and limitations.

00:00 - Introduction and Purpose of the Update
01:27 - The Turing Test and GPT-4.5's Achievement
14:29 - Google DeepMind's AGI Safety Framework
31:04 - Shopify's Bold AI Strategy
43:28 - Ethical Implications of AI in Personal Interactions
51:34 - Concluding Thoughts on AI's Future

#ArtificialIntelligence #AGI #GPT4 #AIInBusiness #HumanCenteredTech
Here we are at the end of another wild week, and I'm back with four topics I believe matter most. From AI's growing realism to Gen Z's cry for help, this week's update isn't just about what's happening but what it all means.

With that, let's get into it.

AI Images Are Getting Too Real - Anyone else feel like the culture changed overnight? That's because AI image generation got a massive update. Granted, this is about more than cool tools or creative fun. The latest AI image models are producing visuals so realistic they're indistinguishable from real life. That's not just impressive; it's dangerous. However, there's more to it than that. Text rendering got an upgrade, as did the visual style for animation.

Gates Says AI Will Replace You - Bill Gates is back with another bold prediction: AI will replace doctors, teachers, and entire professions in the next 5–10 years. I don't think he's wrong about the capability. However, I do think he's wrong about what people actually want. Just because AI can do something doesn't mean we'll accept it. I break down why fully automated futures might work on paper but fail in practice.

Gen Z Is Crying Out - This one hit me hard. A raw, emotional message from a Gen Z listener stopped me in my tracks. It wasn't just a DM; it was a warning and a cry for help. Fear, disillusionment, lack of trust in institutions, and a desperate search for meaning. Now, I don't read it as weakness by any means. I saw it as strength and a wake-up call. If you're a leader, parent, or educator, you need to hear this.

How AI Helped Me Be More Human - In a bit of a twist, I share how AI actually helped me slow down, process emotion, and show up more grounded when I received the previously mentioned message. Granted, it wasn't about productivity. It was about empathy, which is why I wanted to share. I talk through a practical way for AI not to destroy the human experience but to support us in enriching it.

What do you think? Let me know your thoughts in the comments, especially if one of these stories hits home.

Show Notes:
In this Weekly Update, Christopher Lind provides four critical updates intertwining business, technology, and human experiences. He discusses significant advancements in AI, particularly in image generation, and the cultural shifts they prompt. Lind also addresses Bill Gates' prediction about AI replacing professionals like doctors and teachers within a decade, emphasizing the enduring value of human interaction. A heartfelt conversation ensues about a listener's concerns, reflecting the challenges faced by Gen Z in today's workforce. Finally, Lind illustrates how AI can be used to foster more human interactions, drawing from his personal experience of using AI in a sensitive communication scenario. Join Christopher Lind as he provides these insightful updates and perspectives to keep you ahead in the evolving landscape.

00:00 - Introduction and Overview
02:20 - AI Image Generation Breakthroughs
13:05 - Bill Gates' Bold Predictions on AI
23:17 - Empathy and Understanding in the AI Age
43:16 - Using AI to Enhance Human Connection
54:23 - Concluding Thoughts

#aiethics #genzvoices #futureofwork #deepfakes #humancenteredai
It's been another wild week, and I'm back with four stories that I believe matter most. From birthrates and unemployment to AI's ethical dead ends, this week's update isn't just about what's happening but what it all means.

With that, let's get into it.

U.S. Birth Rates Hit a 46-Year Low – This is more than an updated stat from the Census Bureau. This is an indication of the future we're building (or not building). U.S. birth rates hit their lowest point since 1979, and while some are cheering it as "fewer mouths to feed," I think we're missing a much bigger picture. As a father of eight, I've got a unique perspective on this one, and I unpack why declining birth rates are more than a personal choice; they're a cultural signal. A society that stops investing in its future eventually won't have one.

The Problem of AI's Moral Blind Spot – Some of the latest research confirms again what many of us have feared: AI isn't just wrong sometimes; it's intentionally deceptive. And worse? Attempts to correct it aren't improving things; they're making it more clever at hiding its manipulation. I get into why I don't think this problem is a bug we can fix. We will never be able to patch in a moral compass, and as we put AI in more critical systems, that truth should give us pause. Now, this isn't about being scared of AI but being honest about its limits.

4 Million Gen Zs Are Jobless – Headlines say Gen Z doesn't want to work. But when 4.3 million young people are disconnected from school, training, and jobs, it's about way more than "kids these days." We're seeing the consequences of a system that left them behind. We can argue whether it's the collapse of the education-to-work pipeline or the explosion of AI tools eating up entry-level roles. However, instead of blame, I'd say we need action. Because if we don't help them now, we're going to be asking them for help later, and they won't be ready.

AI Search Engines Are Lying to You Confidently – I've said many times that the biggest problem with AI isn't just that it's wrong. It's that it doesn't know it's wrong, and neither do we. New research shows that AI search tools like ChatGPT, Grok, and Perplexity are confidently making up answers, and I've got receipts from my own testing to prove it. These tools don't just fumble a play; they throw the game. I unpack how this is happening and why the "just trust the AI" mindset is the most dangerous one of all.

What do you think? Let me know in the comments, especially if one of these stories hits home.

#birthratecrisis #genzworkforce #aiethics #aisearch #futureofwork
Another week, another wave of breakthroughs, controversies, and questions that demand deeper thinking. From Google's latest play in humanoid robotics to Meta's new wearables, there's no shortage of things to unpack. But it's not just about the tech; leadership (or the lack of it) is once again at the center of the conversation.

With that, let's break it down.

Google's Leap in Humanoid Robotics – Google's latest advancements in AI-powered robots aren't just hype. They have made some seriously impressive breakthroughs in artificial general intelligence. They're showcasing machines that can learn, adapt, and operate in the real world in eye-popping ways. Gemini AI is bringing us closer to robots that can work alongside humans, but how far away are we from that future? And what are the real implications of this leap forward?

Reversed Layoffs and Leadership's Responsibility – A federal judge just upended thousands of layoffs, exposing a much deeper issue: how leaders (both corporate and government) are making reckless workforce decisions without thinking through the long-term consequences. While layoffs are sometimes necessary, they shouldn't be a default response. There's a right and wrong way to do them. Unfortunately, most leaders today are choosing the latter.

Meta's ARIA 2 Smart Glasses – AI-powered smart glasses seem to keep bouncing from hype to reality, and I'm still not convinced they're the future we've been waiting for. This is especially true when you consider they're tracking everything around you, all the time. Meta's ARIA 2 glasses are a bit less dorky and promise seamless AI integration, which is great for Meta and holds some big promises for consumers and organizations alike. However, are we ready for the privacy trade-offs that come with it?

Elon's Retweet and the Leadership Accountability Crisis – Another week, and Elon's making headlines. Shocking, amirite? This time, it's about a disturbing retweet that sparked outrage. However, I think the tweet itself is a distraction from something more concerning: the growing acceptance of denying leadership accountability. Many corporate leaders hide behind their titles, dodge responsibility, and let controversy overshadow real decision-making. It's time to redefine what true leadership actually looks like.

Alright, there you have it, but before I drop, where do you stand on these topics? Let me know your take in the comments!

Show Notes:
In this Weekly Update, Christopher continues exploring the intersection of business, technology, and human experience, discussing major advancements in Google's Gemini humanoid robotics project and its implications for general intelligence in AI. He also examines the state of leadership accountability through the lens of a controversial retweet by Elon Musk and the consequences of leaders not taking responsibility for their teams. Also, with the recent reversal of all the federal layoffs, he digs into the tendency to jump to layoffs and the negative impact it has. Additionally, he talks about Meta's new Aria 2 glasses and their potential impact on privacy and data collection. This episode is packed with thoughtful insights and forward-thinking perspectives on the latest tech trends and leadership issues.

00:00 - Introduction and Overview
02:22 - Google's Gemini Robotics Breakthrough
15:29 - Federal Workforce Reductions and Layoffs
27:52 - Meta's New Aria 2 Glasses
36:14 - Leadership Accountability: Lessons from Elon Musk's Retweet
51:00 - Final Thoughts on Leadership and Accountability

#AI #Leadership #TechEthics #Innovation #FutureOfWork
AI is coming for jobs, CEOs are making tone-deaf demands, and we're merging human brain cells with computers, but it's just another typical week, right? From Manus AI's rise to a biological computing breakthrough, a lot is happening in tech, business, and beyond. So, let's break down some of the things at the top of my list.

Manus AI & the Rise of Autonomous AI Agents - AI agents are quickly moving from hype to reality, and Manus AI surprised everyone and appears to be leading the charge. With multimodal capabilities and autonomous task execution, it's being positioned as the future of work, so much so that companies are already debating whether to replace human hires with AI. Here's the thing: AI isn't just about what it can do; it's about what we believe it can do. However, it would be wise for companies to slow down. There's a big gap between perception and reality.

Australia's Breakthrough in Biological Computing - What happens when we fuse human neurons with computer chips? Australian researchers just did it, and while on the surface it may feel like an advancement we'd have been excited about decades ago, there's a lot more to it. Their biological computer, which learns like a human brain, is an early glimpse into hybrid AI. But is this the key to unlocking AI's full potential, or are we opening Pandora's box? The line between human and machine just got a whole lot blurrier.

Starbucks CEO's Tone-Deaf Leadership Playbook - After laying off 1,100 employees, the Starbucks CEO had one message for the remaining workers: "Work harder, take ownership, and get back in the office." The kicker? He negotiated a fully remote work deal for himself. This isn't just corporate hypocrisy; it's a perfect case study of leadership gone wrong. I'll break down why this kind of messaging is not only ineffective but actively erodes trust.

Stephen Hawking's Doomsday Predictions - A resurfaced prediction from Stephen Hawking has the internet talking again. In it, he claimed Earth could be uninhabitable by 2600. However, rather than arguing over apocalyptic theories, maybe we should be thinking about something way more immediate: how we're living right now. Doomsday predictions are fascinating, but they can distract us from the simple truth that none of us know how much time we actually have.

Which of these stories stands out to you the most? Drop your thoughts in the comments. I'd love to hear your take.

Show Notes:
In this Weekly Update, Christopher navigates through the latest advancements and controversies in technology and leadership. Starting with an in-depth look at Manus AI, a groundbreaking multimodal AI agent making waves for its capabilities and affordability, he discusses its implications for the workforce and potential pitfalls. Next, he explores the fascinating breakthrough of biological computers, merging human neurons with technology to create adaptive, energy-efficient machines. Shifting focus to leadership, Christopher critiques Starbucks CEO Brian Niccol's bold message to his employees post-layoff, highlighting contradictions and leadership missteps. Finally, he addresses Stephen Hawking's predictions about the end of the world, urging listeners to maintain perspective and prioritize what truly matters as we navigate these uncertain times.

00:00 - Introduction and Overview
02:05 - Manus AI: The Future of Autonomous Agents
15:30 - Biological Computers: The Next Frontier
24:09 - Starbucks CEO's Bold Leadership Message
40:31 - Stephen Hawking's Doomsday Predictions
50:14 - Concluding Thoughts on Leadership and Life

#AI #ArtificialIntelligence #Leadership #FutureOfWork #TechNews
Another week, another wave of chaos, some of it real, some of it manufactured. From political standoffs to quantum computing breakthroughs and an AI-driven "Black Swan" moment that could change everything, here are my thoughts on some of the biggest things at the intersection of business, tech, and people.

With that, let's get into it.

Trump & Zelensky Clash – The internet went wild over Trump and Zelensky's heated exchange, but the real lessons have nothing to do with what the headlines are saying. This wasn't just about politics. It was a case study in ego, poor communication, and how easily things can go off the rails. Instead of picking a side, I'll break down why this moment exploded and what we can all learn from it.

Microsoft's Quantum Leap – Microsoft claims it's cracked the quantum computing code with its Majorana particle breakthrough, finally bringing stability to a technology that's been teetering on the edge of impracticality. If they're right, quantum computing just shifted from science fiction to an engineering challenge. The question is: does this move put them ahead of Google and IBM, or is it just another quantum mirage?

The AI Black Swan Event – A new claim suggests a single device could replace entire data centers, upending cloud computing as we know it. If true, this could be the biggest shake-up in AI infrastructure history. The signs are there, as tech giants are quietly pulling back on data center expansion. Is this the start of a revolution, or just another overhyped fantasy?

The Gaza Resort Video – Trump's AI-generated Gaza Resort video had everyone weighing in, from political analysts to conspiracy theorists. But beyond the shock and outrage, this is yet another example of how AI-driven narratives are weaponized for emotional manipulation. Instead of getting caught in the cycle, let's talk about what actually matters.

There's a lot to unpack this week. What do you think? Are we witnessing major shifts in tech, politics, and AI, or just another hype cycle? Drop your thoughts in the comments, and let's discuss.

Show Notes:
In this Weekly Update, Christopher provides a balanced and insightful analysis of topics at the intersection of business, technology, and human experience. The episode covers two highly charged discussions – the Trump-Zelensky Oval Office incident and Trump's controversial Gaza video – alongside two technical topics: Microsoft's groundbreaking quantum chip and the potential game-changing AI Black Swan event. Christopher emphasizes the importance of maintaining unity and understanding amidst divisive issues while also exploring major advancements in technology that could reshape our future. Perfect for those seeking a nuanced perspective on today's critical subjects.

00:00 - Introduction and Setting Expectations
03:25 - Discussing the Trump-Zelensky Oval Office Incident
16:30 - Microsoft's Quantum Chip, Majorana
29:45 - The AI Black Swan Event
41:35 - Controversial AI Video on Gaza
52:09 - Final Thoughts and Encouragement

#ai #politics #business #quantumcomputing #digitaltransformation
Congrats on making it through another week. As a reward, let's run through another round of headlines that make you wonder, "what is actually going on right now?" AI is moving at breakneck speed, companies are gutting workforces with zero strategy, universities are making some of the worst tech decisions I've ever seen, and AI is creating its own secret language.

With that, let's break it all down.

Claude 3.7 is Here—But Should You Care? - Anthropic's Claude 3.7 just dropped, and the benchmarks are impressive. But should you be constantly switching AI models every time a new one launches? In addition to breaking down Claude, I explain why blindly chasing every AI upgrade might not be the smartest move.

Mass Layoffs and Beyond - The government chainsaw roars on despite hitting a few knots, and the logic seems questionable at best. However, this isn't just a government problem. These reckless layoffs are happening across Corporate America. That said, younger professionals are pushing back. Is this the beginning of the end for the slash-and-burn leadership style?

Universities Resisting the AI Future - Universities are banning Grammarly. Handwritten assignments are making a comeback. The education system's response to AI has been, let's be honest, embarrassing. Instead of adapting and helping students learn to use AI responsibly, they're doubling down on outdated methods. The result? Students will just get better at cheating instead of actually learning.

AI Agents Using Secret Languages? - A viral video showed AI agents shifting communications to their own cryptic language, and of course, the internet is losing its mind. "Skynet is here!" However, that's not my concern. I'm concerned we aren't responsibly overseeing AI before it starts finding the best way to accomplish what it thinks we want.

Got thoughts? Drop them in the comments—I'd love to hear what you think.

Show Notes:
In this weekly update, Christopher presents key insights into the evolving dynamics of AI models, highlighting the latest developments around Anthropic's Claude 3.7 and its implications. He addresses the intricacies of mass layoffs, particularly focusing on illegal firings and the impact on employees and businesses. The episode also explores the rising use of AI in education, critiquing current approaches and suggesting more effective ways to incorporate AI in academic settings. Finally, he discusses the implications of AI-to-AI communication in different languages, urging a thoughtful approach to understanding these interactions.

00:00 - Introduction and Welcome
01:45 - Anthropic Claude 3.7 Drops
14:33 - Mass Firings and Corporate Mismanagement
23:04 - The Impact of AI on Education
36:41 - AI Agent Communication and Misconceptions
44:17 - Conclusion and Final Thoughts

#AI #Layoffs #Anthropic #AIInEducation #EthicalAI
Another week, another round of insanity at the intersection of business, tech, and human experience. From overhyped tech to massive blunders, it seems like the hits keep coming. If you thought last week was wild, buckle up because this week, we've got Musk making headlines (again), Google and Microsoft with opposing quantum strategies, and an AI lawyer proving why we're not quite ready for robot attorneys. With that, let's get into it.

Grok 3: Another Overhyped AI or the Real Deal? - Musk has been hyping up Grok 3 as the biggest leap forward in AI history, but was it really that revolutionary? While xAI seems desperate to position Grok as OpenAI's biggest competitor, the reality is a little murkier. I share my honest and balanced take on what's actually new with Grok 3, whether it's living up to expectations, and why we need to stop falling for the hype cycle every time a new model drops.

Google Quietly Kills Its Quantum AI Efforts - After years of pushing quantum supremacy, Google is quietly shutting down its Quantum AI division. What happened, and why is Microsoft still moving forward? It turns out there may be more to quantum computing than anyone is ready to handle. Honestly, there's some cryptic stuff here, and I'm still wrestling with it all. I'll break down my multi-faceted reaction, but as a warning, it may leave you with more questions than answers.

Elon Musk vs. His Son: A Political and Ideological Mirror - Musk's personal life recently became a public battleground as he's been parading his youngest son around with him everywhere. Is this overblown hate for Musk, or is there something all parents can learn about leveraging their children as extensions of themselves? I'll unpack why this story matters beyond the tabloid drama and what it reveals about our parenting and the often unexpected consequences of our actions.

The AI Lawyer That Completely Imploded - AI-powered legal assistance was supposed to revolutionize the justice system, but instead, it just became a cautionary tale. A high-profile case involving an AI lawyer went off the rails, proving once again that AI isn't quite ready to replace human expertise. This one is both hilarious and terrifying, and I'll break down what went wrong, why legal AI isn't ready for prime time, and what this disaster teaches us about the future of AI in professional fields.

Let me know your thoughts in the comments. Do you think things are moving too fast, or are we still holding it back?

Show Notes:
In this Weekly Update, Christopher covers four of the latest developments at the intersection of business, technology, and the human experience. He starts with an analysis of Grok 3, Elon Musk's new xAI model, highlighting its benchmarks, performance, and overall impact on the AI landscape. The segment transitions to the mysterious end of Google's Willow quantum computing project, covering its groundbreaking capabilities and the concerns raised by an ethical hacker. The discussion extends to Microsoft's launch of their own quantum chip and what it means for the future.
We also reflect on the responsibilities of parenting in the public eye, using Elon Musk's recent actions as a case study, and conclude with a cautionary tale of a lawyer who faced dire consequences for over-relying on AI for legal work.

00:00 - Introduction
01:05 - Elon Musk's Grok 3 AI Model: Hype vs. Reality
17:28 - Google Willow Shutdown: Quantum Computing Controversy
32:07 - Elon Musk's Parenting Controversy
43:20 - AI's Impact on Legal Practice
49:42 - Final Thoughts and Reflections

#AI #ElonMusk #QuantumComputing #LegalTech #FutureOfWork
It's that time of week when I take you through a rundown of some of the latest happenings at the critical intersection of business, tech, and human experience. While love is supposed to be in the air given it's Valentine's Day, I'm not sure the headlines got the memo. With that, let's get started.

Elon's $97B OpenAI Takeover Stunt - Musk made a shock bid to buy OpenAI for $97 billion, raising questions about his true motives. Given his history with OpenAI and his own AI venture (xAI), this move had many wondering if he was serious or just trolling. With OpenAI hemorrhaging cash alongside its plans to pivot to a for-profit model, Altman is in a tricky position. Musk's bid seems designed to force OpenAI into staying a nonprofit, showing how billionaires use their wealth to manipulate industries, not always in ways that benefit the public.

Is Google Now Pro-Harmful AI? - Google silently removed its long-standing ethical commitment to not creating AI for harmful purposes. This change, combined with its growing partnerships in military AI, raises major concerns about the direction big tech is taking. It's worth exploring how AI development is shifting toward militarization and how companies like Google are increasingly prioritizing government and defense contracts over consumer interests.

The AI Agent Hype Cycle - AI agents are being hyped as the future of work, with companies slashing jobs in anticipation of AI taking over. However, there's more to it than meets the eye. While AI agents are getting more powerful, they're still unreliable, messy, and require human oversight. Companies are overinvesting in AI agents and quickly realizing they don't work as well as advertised. While that may sound good for human workers, I predict it will get worse before it gets better.

Does Microsoft Research Show AI is Killing Critical Thinking? - A recent Microsoft study is making waves with claims that AI is eroding critical thinking and creativity. This week, I take a closer look at the research and explain why the media's fearmongering isn't entirely accurate. And yet, we should take this seriously. The real issue isn't AI itself; it's how we use it. If we become over-reliant on AI for thinking, problem-solving, and creativity, it will inevitably lead to cognitive atrophy.

Show Notes:
In this Weekly Update, Christopher explores the latest developments at the intersection of business, technology, and the human experience. The episode covers Elon Musk's surprising $97 billion bid to acquire OpenAI, its implications, and the debate over whether OpenAI should remain a nonprofit. The discussion also explores the military applications of AI, Google's recent shift away from its 'don't create harmful AI' policy, and the consequences of large-scale investments in AI for militaristic purposes. Additionally, Christopher examines the rise of AI agents, their potential to change the workforce, and the challenges they present. Finally, Microsoft's study on the erosion of critical thinking and empathy due to AI usage is analyzed, emphasizing the need for thoughtful and intentional application of AI technologies.

00:00 - Introduction
01:53 - Elon Musk's Shocking Offer to Buy OpenAI
15:27 - Google's Controversial Shift in AI Ethics
27:20 - Navigating the Hype of AI Agents
29:41 - The Rise of AI Agents in the Workplace
41:35 - Does AI Destroy Critical Thinking in Humans?
52:49 - Concluding Thoughts and Future Outlook

#AI #OpenAI #Microsoft #CriticalThinking #ElonMusk
Another week, another whirlwind of AI chaos, hype, and industry shifts. If you thought things were settling down, well, think again because this week, I'm tackling everything from AI regulations shaking up the industry to OpenAI's latest leap that isn't quite the leap it seems to be. Buckle up because there's a lot to unpack. With that, here's the rundown.

EU AI Crackdown – The European Commission just laid down a massive framework for AI governance, setting rules around transparency, accountability, and compliance. While the U.S. and China are racing ahead with an unregulated "Wild West" approach, the EU is playing referee. However, will this guidance be enough or even accepted? And why are some companies panicking if they have nothing to hide?

Musk's "Inexperienced" Task Force – A Wired exposé is making waves, claiming Elon Musk's team of young engineers is influencing major government AI policies. Some are calling it a threat to democracy; others say it's a necessary disruption. The reality? It may be a bit too early to tell, but there are still lessons in it for all of us. So, instead of losing our minds, let's see what we can learn.

OpenAI o3 Reality Check – OpenAI just dropped its most advanced model yet, and the hype is through the roof. With it comes Operator, a tool for building AI agents, and Deep Research, an AI-powered research assistant. But while some say AI agents are about to replace jobs overnight, the reality is a lot messier, with hallucinations, errors, and human oversight still very much required. So is this the AI breakthrough we've been waiting for, or just another overpromise?

Physical AI Shift – The next step in AI requires it to move out of the digital world and into the real one. From humanoid robots learning physical tasks to AI agents making real-world decisions, this is where things get interesting. But here's the real twist: the reason behind it isn't about automation; it's about AI gaining real-world experience. And once AI starts gaining the context people have, the pace of change won't just accelerate, it'll explode.

Show Notes:
In this Weekly Update, Christopher explores the EU's new AI guidelines aimed at enhancing transparency and accountability. He also dives into the controversy surrounding Elon Musk's use of inexperienced engineers in government-related AI projects. He unpacks OpenAI's major advancements, including the release of their o3 advanced reasoning model, Operator, and Deep Research, and what these innovations mean for the future of AI. Lastly, he discusses the rise of contextual AI and its implications for the tech landscape. Join us as we navigate these pivotal developments in business, technology, and human experience.

00:00 - Introduction and Welcome
01:48 - EU's New AI Guidelines
19:51 - Elon Musk and Government Takeover Controversy
30:52 - OpenAI's Major Releases: o3 and Advanced Reasoning
40:57 - The Rise of Physical and Contextual AI
48:26 - Conclusion and Future Topics

#AI #Technology #ElonMusk #OpenAI #ArtificialIntelligence #TechNews
Just when you think things couldn't possibly get any crazier… they do. The world feels like it's speeding toward something inevitable, and the doomsday clock is ticking, which apparently is a literal thing. From AI breakthroughs to corporate hypocrisy and government control, this week's update touches on some stories that might have you questioning everything. However, hopefully, by the end, you feel a little bit better about navigating it. With that, let's get to it.

DeepSeek-R1 - DeepSeek-R1 is making a lot of waves. It's being heralded for breaking every rule in AI development, but there seems to be more than meets the eye. They also seem to have sparked a fight with OpenAI, which feels a bit hypocritical. While many are focused on whether China is beating the US, the bigger takeaway is how wildly we're underestimating how quickly AI is evolving.

Doomsday Clock Nears 12 - Since the deployment of nuclear bombs, a group of scientists has been quietly managing a literal doomsday clock. While the specifics of how it's measured aren't terribly clear, it's meant to be a prophetic window into how long we have before we destroy ourselves. While we could debate the legitimacy or accuracy of it all, it's clear we're closer to the theoretical end than ever before. But are we even listening?

JP Morgan's Hypocrisy - It was bad enough when JP Morgan was mandating everyone back to the office for vague and undefinable reasons while simultaneously shedding employees like a corporate game of "The Biggest Loser." However, they managed to sink to a new low this year as the company hit record profits and celebrated by awarding its top exec while tossing crumbs to the people who actually did the work. It seems to be a portrait of everything wrong in the current world of work.

Federal RTO Gets Expensive - Arbitrarily forcing everyone back into the office was bad enough, especially since there isn't enough room for everyone to sit. However, the silliness of it all seems to have kicked into overdrive now that they're offering to pay people to quit instead. While they suspect only a few will accept their generous 8-month severance offer, I'm interested to see how many millions of our tax dollars are spent on this exercise in nonsense.

Show Notes:
In this Weekly Update, Christopher discusses the latest news and trends at the intersection of business, technology, and human experience. Topics include the rise of China's DeepSeek-R1 and its implications, the recent changes to the Doomsday Clock, JPMorgan's record-breaking financial year amid controversial layoffs and pay raises, and the U.S. federal government's new mandate for employees to return to the office. Christopher also explores the broader ethical considerations and potential impacts of these developments on society and the workforce.

00:00 - Introduction
01:43 - DeepSeek: The New AI Contender
16:37 - The Doomsday Clock: A Historical Perspective
28:26 - JP Morgan's Controversial Moves
37:54 - Federal Government's Return-to-Office Mandate
46:53 - Final Thoughts and Reflections

#returntooffice #doomsdayclock #deepseek #leadership #ai
Buckle up! This week's update is a whirlwind. As you know, I like digging into tough topics, so there is no shortage of emotions tied to this week's rundown. Consider this your listener warning: slow down, take a breath, and don't let your emotions hijack your ability to process thoughtfully. I'll be diving into some polarizing issues, and it's more important than ever for us all to approach things with an objective eye and level head.

Elon Sieg-Heil - Elon Musk's recent appearance at a rally has stirred up massive controversy, with gestures that have people questioning not just his actions but the broader responsibility of public figures in shaping culture. Is this just another Elon stunt, or is there something deeper at play here? Rather than focusing narrowly on what happened, I think it's important to consider what we all can learn from the backlash, the fears, and what this moment says about leadership accountability.

Federal RTO & DEI Death - The return-to-office mandate and the elimination of DEI roles are steamrolling their way across the federal government, leaving the private sector and employees grappling with the fallout. Are we witnessing progress or a step backward? Spoiler: these sweeping changes might look decisive, but they're lacking some key elements, like critical thinking and keeping people at the center.

AI Regulation Repeal - I'd be lying if I said I didn't have a reaction when I heard about the executive order focused on rolling back AI safety, especially since it already feels like we're on a runaway train. With tech leaders calling the shots, I can't help but wonder if we're handing over the future to a small group detached from the realities of everyday people. In a world hurtling toward AI dominance, this move deserves our full attention and scrutiny.

Gemini & Copilot Overload - Google's Gemini and Microsoft's "Copilot Everywhere" are blanketing our lives with AI tools at breakneck speed. But here's the kicker: just because they can embed AI everywhere doesn't mean they should. Let's talk about the risks of overdependence, the ethics of automation, and whether we're losing control in the name of convenience.

Show Notes:
In this Weekly Update, Christopher dives deep into polarizing topics with a balanced, thoughtful approach. Addressing the controversial gesture by Elon Musk, the implications of new executive orders on remote work and DEI roles, and the concerns over AI regulation, Christopher provides measured insight and empathetic understanding. Additionally, he discusses the influx of AI tools like Google Gemini and Microsoft Copilot, emphasizing the need for critical evaluation of their impact on our lives. Ending on a hopeful note, he encourages living intentionally amidst technological advancements and societal shifts.

00:00 - Introduction and Gratitude
03:36 - Elon Musk Controversy
16:21 - Executive Orders and Workplace Changes
25:50 - AI Regulation Concerns
37:32 - Google Gemini and Microsoft Copilot
50:31 - Conclusion and Final Thoughts
Happy Friday Everyone! This week, I'm back with another thoughtful rundown of the latest happenings. In particular, the intersection of business, tech, and human experience feels like a wild ride through chaos. From TikTok bans to AI taking over the hiring process (but not how you'd think), there's a lot to unpack. With that, let's break it all down:

TikTok Ban – TikTok finds itself under fire yet again with a blackout looming, but is this really about national security, or is it just political theater? With the U.S. government jumping to ultimatums in what seems like a modern-day game of chicken, the implications for creators and users alike could be massive.

AI Job Assistant – A developer's AI agent applied to 1,000 jobs overnight and got 50 callbacks, which sounds fantastic, but is it? This is a tough one since it's not just about AI streamlining processes. It brings to light the unsustainable madness this kind of rapid automation is creating in the job market. Do we really want this kind of chaos?

META Madness – Meta is in the news for all the wrong reasons, from adding AI users to its platforms only to face immediate backlash, to the Zuckster claiming AI could replace developers while announcing yet another round of layoffs. In addition, the company is controversially copying X with Community Notes. Honestly, it's hard to tell if Meta is innovating or scrambling to stay relevant.

NVIDIA Super Computer – NVIDIA recently announced a desktop AI supercomputer for $3,000. It's an exciting glimpse into the future of AI development, but how accessible will this power really be, and at what cost will it come?

Apple Digital Health – Apple is making digital health its top priority with ambitions to take healthcare to the next level, but at what point does its "healthcare empire" become too much? Is this a win for consumers, or are we stepping into dystopia?

Show Notes:
In this Weekly Update, Christopher discusses the imminent TikTok ban and its implications, including the complex concerns and reactions surrounding it. The episode also covers an AI bot that applied to a thousand jobs overnight, highlighting the broken hiring systems and future challenges for job seekers. Meta's attempts at integrating AI, Community Notes, and the consequences of AI coding and job displacement are examined. Additionally, the launch of NVIDIA's $3,000 AI supercomputer and its potential impact, as well as Apple's commitment to revolutionizing healthcare through technology, are explored.

00:00 - Introduction and Welcome
01:29 - The TikTok Ban: A Deep Dive
17:05 - AI Job Application Bot: Game Changer or Cheating?
25:54 - Meta's Controversial Moves
35:30 - NVIDIA's AI Supercomputer: A New Era
41:26 - Apple's Commitment to Healthcare
49:05 - Conclusion and Wrap-Up

#TikTok #Meta #AI #Apple #NVidia
I'll admit, I debated whether to even do a "predictions for 2025" episode. The world doesn't need another list of bold, sweeping claims. We've got more than enough of that out there. However, as I reflected on the trajectory of everything I'm watching—and after a lot of conversations over the holidays—I felt there was value in cutting through the noise with realistic, grounded predictions that matter to everyone.

In this episode, I walk through 10 things I firmly believe we'll all be navigating in 2025. From the rapid growth of emotional AI and deepfake content to the growing demand for purpose in both life and work, these aren't wild guesses or overhyped headlines. Every single one is grounded in the realities we're all experiencing today.

My goal here isn't to alarm or overwhelm you. It's to give you a sense of what's coming and what you can do so you're not blindsided, whether it's the rise of automation, shifts in how we work, or the deeper personal questions we're all wrestling with. As always, this isn't just about tech—it's about how the changes around us will shape the way we live, work, and connect. With that, let's dive in and make sense of what 2025 has in store.

Show Notes:
In this Weekly Update, Christopher shares his top 10 realistic predictions for 2025, focusing on the implications and growth of AI technology. The episode covers topics such as the rise of emotional AI, the impact and challenges of deepfakes, and the increasing concerns around cybersecurity. Predictions also include the anticipated increase in mental health issues and how companies will need to rethink employment and skill requirements. Other key subjects include the ongoing debate about return-to-office policies, the complexities of data privacy, and the search for personal and professional purpose in an age increasingly influenced by technology.

00:00 - Introduction and Purpose of the Update
04:35 - Prediction 1: Rise of Emotional AI
10:06 - Prediction 2: Deep Fakes and AI-Generated Content
15:17 - Prediction 3: Mental Health Crisis
19:18 - Prediction 4: AI Adoption and Technological Advancements
23:09 - Prediction 5: Unemployment Due to Automation
26:47 - Prediction 6: Rethinking Employment and Skills
31:20 - Prediction 7: The Polarization of Return to Office
34:45 - Prediction 8: Cybersecurity Challenges in 2025
39:50 - Prediction 9: The Value of Personal Data
47:18 - Prediction 10: The Search for Purpose and Meaning

#futureofwork #ai #leadership #cybersecurity #mentalhealth
Welcome to 2025 and the first episode of the year! If you've been following along through 2024, you know I've intentionally pulled back from regular guest interviews. Instead, I've primarily focused on weekly updates and reflections on the latest happenings at the intersection of business, technology, and human experience. That said, dialogues aren't completely off the table. However, they'll only make the cut when they're with people and on topics I genuinely want to engage with, people I feel bring unique perspectives to the table and aren't afraid to tackle the big, messy questions we all need to confront. When I met Brian Beckcom, I knew he was that kind of person.

Brian's a trial lawyer with over 25 years of experience, which may seem off-brand. He's far from it, though. He's also a computer scientist and deep thinker with a passion for ethics and philosophy. Given his unorthodox background and dynamic suite of experiences, I couldn't resist recording a conversation. Our shared yet distinct experiences give us a unique lens to explore how AI is challenging what it means to be human, forcing us to reevaluate long-ignored ideas around ethics and philosophy, and redefining how we measure value in a world increasingly dominated by technology.

To set expectations, this wasn't an interview—it was a dynamic conversation where the two of us wrestled with urgent questions about the future. How do we navigate the growing influence of AI without losing what makes us uniquely human? What risks do we take if we fail to revive the importance of ethics in decision-making? And perhaps most importantly, how do we ensure we're asking the right questions now, before it's too late?

I walked away from the conversation energized and more thoughtful than ever, and I hope you will too.

Show Notes:
In this inaugural episode of Future-Focused for 2025, Christopher talks with Brian Beckcom, a seasoned trial lawyer with degrees in computer science and philosophy, to explore the deep intersections of technology, law, and human experience. The primary focus of the conversation is the philosophical and ethical implications of AI: its rapid advancements, the fundamental questions it raises about human consciousness, and its potential to reshape reality as we know it. The conversation also touches on practical applications of AI in law and medicine, the importance of intentional thinking, and the need for diverse perspectives in navigating our AI-driven future. Join Christopher and Brian for a thought-provoking start to the year as they challenge listeners to reclaim their attention and think critically about the world evolving around them.

00:00 - Introduction and Welcome
01:13 - Guest Introduction: Brian Beckcom
04:16 - AI's Impact on Professional Fields
10:19 - Philosophical Implications of AI
25:22 - The Turing Test and AI's Evolution
31:55 - Implications of Quantum Mechanics
35:06 - AI and Consciousness
38:53 - Ethical Considerations in AI
51:48 - The Importance of Reflective Thinking
57:12 - Conclusion and Final Thoughts

#ai #ethics #philosophy #futureofwork #leadership
There are eleven days left in 2024, and depending on how your year has gone, you might be celebrating, mourning, or perhaps a mix of it all. Either way, we're all about to step through the holiday season and embark on 2025. Given the flurry of activity, this will likely be my last update for 2024. While it's possible I'll record another update out of a desperate need to release the pressure of my pent-up thoughts, I'm really going to try to take a digital respite. But before I do, here are my updates for the week.

RTO Hiccups - I could barely contain my laughter when I saw the headline that Amazon is discovering they don't have room for all the people they're forcing back to the office. Will they admit they've made a cosmic mistake and pivot? I'm not banking on it, and it seems other companies are following their folly, as AT&T plans to follow their lead in January.

AI Pro Tips - I have an internal wrestling match whenever I encounter clickbait articles on "how to succeed with AI." On the surface, the advice sounds reasonable enough, and at first glance, I typically nod in agreement. However, deeper reflection always reveals that hidden in much of the reasonable wisdom lies tremendous risk and disaster.

Ditch Humans; Hire AI? - One of the growing hypes for companies in 2025 will be to ditch human workers for AI. One company has unashamedly launched a massive ad campaign in San Francisco proclaiming it's the future and is seeing exponential growth. Senior execs are publicly bragging about how they've stopped hiring humans altogether. But like all hype, buying into it will inevitably lead to disaster for everyone involved.

AI Meeting Clones - While sharing my thoughts on an AI meeting clone product with someone, I could see the lightbulb go on in their head. Suspicions about something fishy with several of their co-workers suddenly clicked into place. As much as I've cautioned about companies replacing employees with AI, there seems to be a rising trend of employees opting to replace themselves with AI. What could possibly go wrong?

Autonomous AI - The prominent CEO of an autonomous AI company proclaims autonomous AI is the future of AI. Surprise, surprise. However, I don't disagree with many of the claims he's making about where we are headed. Where we'd butt heads is his claims about the sunshine-and-puppies outcomes we can expect by willingly handing over the keys. What comes to mind is the childhood saying, "If all your friends jumped off a bridge, would you?"

Show Notes:
In this Weekly Update, Christopher Lind wraps up 2024 with his final episode of the year, exploring several of the latest trends at the intersection of business, technology, and the human experience. Key topics include the contentious return-to-office policies at Amazon and AT&T, the prudent and imprudent uses of AI for senior leaders, and the impending rise of autonomous AI in 2025. Christopher also highlights the risks of over-relying on AI for sensitive tasks like succession planning, employees using AI clones to take their place in meetings, and discusses controversial startup campaigns advocating for AI over human employees.
Finally, he underscores the importance of thoughtful deliberation and urges listeners to decompress and reflect as we prepare for the challenges and opportunities of the new year.

00:00 - Introduction and Digital Detox Announcement
02:29 - Amazon's Return to Office Fiasco
08:37 - AT&T's Return to Office Plans
15:20 - The Role of AI in Business Decisions
25:29 - AI Replacing Human Employees
29:19 - Thoughtful AI Integration in Business
35:47 - AI Meeting Clones: A Bad Idea
43:36 - The Future of Autonomous AI
50:56 - Final Thoughts and Looking Ahead to 2025

#ai #futureofwork #returntooffice #flexibleworking #leadership
Happy Friday Everyone. I hope you've had a fantastic week and are coming into the home stretch as we wrap up the year. There's a lot happening at the intersection of business, technology, and human experience. And, if you enjoy thriller flicks, you'll definitely enjoy this week's update. Just don't watch or listen right before bed. With that, let's get to it.

Google Willow - Pop quiz. How many zeros are in a septillion? Who cares, you might ask? Well, you should care because Google's quantum processor, Willow, performed in five minutes what the world's most powerful supercomputer would need ten septillion years to complete. Mind blown? It should be, and it has people from all industries sitting back in their chairs wondering what just happened.

Anthropic 18-Month Countdown - In November, Anthropic warned us that if we didn't take AI regulation seriously, we could expect the apocalypse in eighteen months or less. That means, by my watch, we're at seventeen and counting, but is there legitimacy to it? After all, doesn't Anthropic have a lot to gain by spooking the world? Yes, and yes, which is why you shouldn't build a doomsday bunker just yet, but you should be paying attention.

Nefarious AI Models - Is it possible that AI could scheme its own plan to deviate from its human's prompt and perform unseen acts to accomplish its will? Depending on your definitions around some of those terms, the answer is yes, and it's been confirmed by Apollo Research, which demonstrated AI can and will lie, manipulate, and scheme to do what it thinks is best over the wishes of the one prompting it.

Robot Companions - If you thought robot friends and companions were a thing only in the movies, think again. There's an exponential rise in teenage addiction to chatbots, resulting in catastrophic outcomes. And that doesn't seem to be slowing things down. Companies like Realbotix are creating human-like clones as a supposedly healthy alternative to our real human counterparts. If you're not into thriller movies with tragic endings, we'd be wise to get a handle on this.

Show Notes:
This Weekly Update highlights Google's major breakthrough with its new quantum chip, Willow, discussing the unprecedented computational power and its potential ramifications on fields like cryptocurrency and cybersecurity. Christopher also confronts the risks posed by AI, including deceptive behaviors in frontier models and the acute rise of AI addiction among teenagers. He emphasizes the need for responsible AI regulation and provides guidance for parents on engaging with their children's technology use. Additional insights include the trajectory of AI advancements and the ethical considerations we must address moving forward. Don't miss Christopher's personal reflections and critical advice on navigating these technological shifts responsibly.

00:00 - Introduction and Reflections on 2024
01:09 - Exciting Plans for 2025
03:03 - Google's Quantum Computing Breakthrough
17:27 - Anthropic's AI Regulation Warning
27:34 - Apollo Research: Testing AI Frontier Models
37:18 - Teenage Addiction to AI Chatbots
42:44 - Realistic Humanoid Robots: Companionship and Risks
47:08 - Final Thoughts and Reflections

#ai #robotics #quantumcomputing #techtrends #philosophy
This week, I'm deviating from the usual path of running through some of this week's highlights at the intersection of business, technology, and the human experience. Here's why. Steven Bartlett's recent interview with Eric Schmidt, former CEO of Google, on The Diary of a CEO caught my attention in a way few conversations do. It was too important not to address.

In this episode, I'm breaking down key moments from the interview through the lens of business strategy, technological evolution, and the human experience. From Schmidt's take on five-year strategies and the role of competition to his insights on innovation, culture, and the often-misunderstood reality of remote work, I'm not just responding—I'm expanding, challenging, and adding context.

And then there's AI. Schmidt describes it as a societal shift on par with electricity or the internet, but is it being implemented in ways we fully understand—or even notice? I explore what that means for our future, and where his optimism may miss some critical nuance.

This is more than a response—it's a deep dive into the ideas shaping how we lead, innovate, and thrive in today's fast-changing world.

Show Notes:
In this Weekly Update, Christopher steps away from the weekly highlights to deliver an in-depth response to Eric Schmidt's recent interview on The Diary of a CEO with Steven Bartlett. Breaking the dialogue into three core themes—business strategy, technological innovation, and the human experience—Christopher offers his unique perspective on topics like five-year strategic planning, the pitfalls of over-focusing on competition, and the reality of innovation within teams. He also dives into Schmidt's views on AI as a transformational force, exploring both its potential and the unseen ways it's reshaping our world. From remote work debates to the future of human-AI collaboration, this episode challenges assumptions and provides actionable insights for leaders navigating today's rapidly evolving landscape. Whether you're leading a team, embracing technology, or simply curious about what's next, this conversation is packed with value.

0:00 - Introduction
4:13 - Business Strategy
12:15 - Innovation & Team Structure
18:21 - Organizational Culture
26:38 - Remote Work
34:11 - AI's Impact on the World
41:20 - Will AI Eliminate Our Jobs?
48:04 - Protecting Ourselves from AI
55:57 - Will Future Generations Be Okay?
1:03:52 - How Do Tech Companies Prioritize Their Customers?
1:11:00 - The Importance of Critical Thinking
1:18:09 - Conclusion

#FutureOfWork #LeadershipStrategy #DOAC #BigIdeas #TechAndHumanity
To my US audience, I hope you had a phenomenal Thanksgiving and found some loose-fitting pants to navigate the weekend. However, before you go too far off the grid, I've got an abbreviated Weekly Update to keep your mind sharp while you fight the shopping lines. With that, let's get to it.

Unorthodox Gratitude - It's that time of year when there's overwhelming pressure to slap on a holiday smile and pretend everything is cheery. And, while there are lots of good reasons to celebrate, that doesn't eliminate the burdens many of us are carrying. So, what's one to do? Well, I've got some advice on how to be grateful without burying your troubles.

AI's Conversational Wisdom - Why do so many people feel it's easier to interact with their favorite GenAI tool than another human being? The secret may lie in some principles outlined by a British philosopher back in 1975, and I can assure you he didn't have AI in mind. Perhaps we should take a cue from our AI companions and apply it to our interpersonal relationships.

State of Humans vs. Robots - There have been two notable wins for humans over the past week, highlighting that robots may not take over quite as quickly as some headlines suggest. While we can all probably add that to our gratitude list, it'd be wise not to get too comfortable just yet. While humans remain superior to machines, it's clear that much of what we do is more robotic than we realize.

Show Notes:
In this Thanksgiving-themed Weekly Update, Christopher addresses the struggles many face during the holiday season and emphasizes the importance of authentic gratitude amid hardships. He shares personal reflections on job loss, family adjustments, and how to genuinely cope with difficult times. He also discusses the surprising findings on AI's role in conversations and work, offering insights on improving human interactions and the future of labor with AI advancements. Important topics include the necessity of empathetic listening and the resilience required to navigate through challenging times.

00:00 - Introduction and Welcome
01:57 - Navigating Hard Times During the Holidays
15:19 - The Art of Conversation and AI
25:15 - AI's Limitations and Human Value
34:51 - Conclusion and Encouragement

#ai #robots #automation #futureofwork #gratitude
Welcome to another Weekly Update as we come into the end of November! Let's get straight into it!

Rise of Robotheism - Is AI becoming a new religion? While I don't anticipate many people will sign up to worship at the altar of OpenAI, there's a growing trend of tech leaders and everyday people looking for AI to save us. It's gained enough popularity that it even has a label.

AI Physician Replacement - Elon Musk recently went on record saying it won't be long before AI replaces doctors and lawyers, and some recent findings out of Johns Hopkins would give some the impression he's right. However, I think a deeper analysis would argue not quite.

AI Work Transformation - A Microsoft software engineer recently shared how, while AI is doing much of his coding, he still has plenty of work to do. It seems some of the concerns about AI replacing workers aren't holding water, and some research about organizational adoption will further mitigate the risks.

Autonomous Military - The US military is confident their multi-billion dollar investment in AI will pay dividends, but what kind of metrics do you use to measure success? And what ethical considerations are being taken? This is essential as we're already seeing fully autonomous weapons being field tested.

AI Scammer Defense - The elderly are primary targets for international scammers, who ruin the lives of countless people daily. However, I love how one UK telecom company is fighting back with a cleverly named AI, "dAIsy."

Show Notes:
In this Weekly Update, Christopher explores the convergence of AI, technology, and the human experience. He discusses 'robotheism' and the belief among certain tech leaders that AI could become a new deity. Christopher responds to Elon Musk's comments about the potential of AI to replace doctors and lawyers, also highlighting recent research from Johns Hopkins University. Additionally, he examines the slow adoption of AI by companies due to data and infrastructure challenges. He further digs into the rise of AI in the military, raising ethical concerns about autonomous weapons. Finally, on a lighter note, he shares how a UK telecom company is using an AI bot named dAIsy to waste the time of phone scammers.

00:00 - Introduction and Welcome
01:30 - Exploring Robotheism and AI as a Deity
18:02 - The Future of AI in Medicine and Law
26:15 - AI in Software Development
37:36 - AI in the Military: Ethical and Philosophical Concerns
48:38 - AI vs. Scammers: A Clever Solution
51:16 - Conclusion and Final Thoughts

#robotheism #healthcare #futureofwork #ai #military
We are smack dab in the middle of November, and what a November it has been. As usual, there has been no shortage of turbulence and upheaval, which always makes it challenging to prioritize. However, a few notable topics made the top of my list. With that, let's get to it.

Kindness Revolution - In the wake of the 2024 election, tension remains high as people wrestle with the uncertainty of what's ahead. Unfortunately, uncertainty doesn't typically bring out the best in people, and we see that everywhere we look. Perhaps it's time for a different kind of revolution as we prepare for 2025.

AI Whisperverse - While many headlines point to some overt ways AI will influence human behavior, perhaps that's not where our biggest concerns should sit. With AI quietly controlling where our attention is focused, it may influence us in ways we don't even notice. That subtle influence has been given a label, and it's as eerie as the influence itself.

AI Parenting - New parents may be one of the easiest targets for overpriced gadgets and gizmos as they adjust to caring for their little humans. The AI boom has locked onto that opportunity with a new wave of devices designed to make parenting easier, but is it a helpful trend or a distraction from some of life's most precious moments?

Large Behavior Models - If you've been trying to keep track of all the AI jargon and were just getting your arms around LLMs, hold onto your hat. LLMs are quickly becoming old hat as LBMs make their way to the main stage. With their improved capabilities in human-like performance, they're garnering attention in use cases related to interpersonal interaction. Will this be the next wave of AI innovation? Only time will tell.

Show Notes:
In this Weekly Update, Christopher emphasizes the importance of kindness and understanding in a polarized world. He reflects on personal experiences with vitriol and the value of empathy in difficult conversations. He also discusses the rise of 'Whisperverse' AI, the ethical challenges of AI parenting tools, and the future potential of Large Behavior Models (LBMs) over Large Language Models (LLMs). He highlights the necessity for critical thinking in an age of AI-driven convenience and the implications for future technological innovations.

00:00 - Introduction and Welcome
01:29 - Navigating Post-Election Emotions
11:08 - Encouragement for Authentic Leadership
19:39 - The Rise of AI and the Whisperverse
30:05 - AI Parenting: Convenience vs. Connection
40:28 - The Rise of Large Behavior Models
47:57 - Conclusion and Final Thoughts

#AI #Parenting #Kindness #Relationships #TechTrends
Happy Friday Everyone! And, what a week it has been. Whether your candidate of choice won or lost, I think we can mutually agree we'd all benefit from a collective sigh, even if for different reasons, and a stiff drink as we slide into the weekend. While the election dominated many headlines, there was no shortage of things happening at the intersection of business, technology, and the human experience. With that, let's get into it.

Russia Fines Google - When you see a country fining a company for a collective total that exceeds the global GDP, you naturally assume it must be for something egregious. Discovering it's for being mad that a YouTube channel got blocked leaves you shaking your head. However, there's more to this behavior than meets the eye.

AGI or Lack of Compute - There's a lot of what can seem like double talk in the AI space, and this week Sam Altman is in the headlines for it. In one breath, he says we will achieve AGI with the current hardware. In another, he says progress is slowing due to lack of compute. Which one is it? It can be both, but it does speak to the innovation curve we're on.

AI Conflict Resolution - How would you feel if your spouse or partner responded to an argument with a clearly AI-generated response? Well, it's becoming a common activity, and not everyone appreciates it. While I see tremendous value in AI helping you think through difficult and emotionally charged situations, the heart of your approach is what determines the outcome.

Meta's AI Touch - Meta recently announced they'd cracked the code to give AI the ability to "feel," but what does that really look like, and what does it mean? I'd argue it's still a bit early to fully know, but it's incredible to see how technology has deconstructed what was formerly a uniquely human capability into a digital alternative.

Rise of AI Coders - Google boasts that 25% of all its new code is written by AI. However, one has to ask: what parameters are they using to define newly written code? Depending on the answer to that question, I'd argue that 25% is either vastly too low or a gross overestimate. Either way, there's no denying AI is, and will continue, radically changing jobs and the skills people need to thrive.

Reimagining Learning Outcomes - I sympathize with the student who's having the book thrown at him for using AI on his research paper, especially since every one of his peers is doing the same thing. However, the legal response is resulting in what I believe is a positive and much-needed action: reimagining learning outcomes. After all, is how students type words on a screen really what we want to measure?

Show Notes:
In this Weekly Update, Christopher covers the usual range of topics, blending business, technology, and human experience. Key discussions include Russia's exorbitant fine against Google and its implications for credibility and technology rights, predictions about the future of Artificial General Intelligence (AGI) and the current limits due to compute power, the use of AI in resolving personal arguments and the associated risks and benefits, Meta's advancements in AI's ability to perceive touch, and the increasing role of AI in coding at Google. Additionally, the episode examines a controversial academic case from Massachusetts involving a student using AI for a research paper, prompting a debate on redefining academic integrity in the age of AI.
00:00 - Introduction and Weekly Update Overview
01:40 - Russia's Unenforceable Fine Against Google
13:37 - Sam Altman's Contradictory Statements on AGI
20:28 - AI in Relationships: A Double-Edged Sword
28:49 - Meta's Breakthrough in AI Sensory Perception
32:41 - AI Impact on Google Coding
42:46 - AI in Education: A Controversial Case
53:02 - Concluding Thoughts and Reflections

#ai #education #meta #google #futureofwork
Happy Friday, everyone, and congratulations on making it through another week. What better way to kick off November 2024 than a rundown on the latest happenings at the intersection of business, technology, and human experience? As usual, I picked five of my favorites. With that, let's get into it.

OpenAI Safety Team Disbands, Again - OpenAI is making headlines as their safety team falls apart yet again after losing executive Miles Brundage. While some of the noise around it is likely just noise, his cryptic warning that OpenAI is not ready for what it's created has some folks rightfully perking up their ears.

Meta Social Media Lawsuits - While big tech companies keep trying to use Section 230 as an immunity shield from the negative impact of social media, a judge has determined lawsuits will be allowed. What exactly that will mean for Meta and other big tech companies is still TBD, but they will see their day in court.

Google & Character.AI Sued - It's tragic whenever someone takes their life. It's even more tragic when it's a teenager driven down that path by an AI bot. While AI bots are promoted as "for entertainment purposes only," it's obvious entertainment isn't the only outcome. We continue seeing new legal precedents being established, and it's just the beginning.

GenAI Bias Flub with Chanel - I'm not exactly sure what Chanel's CEO, Leena Nair, expected when she asked AI to create an image of her executive team, or why on earth anyone at Microsoft moved forward with the request during her headquarters visit. However, it demonstrated how far we still have to go in mitigating bias in AI training data and why it's so important to use AI properly.

AI vs. Humans Research - Where is AI better than humans, and vice versa? A recent study tried to answer that question. Unfortunately, while the data validates many of the things we already know, it's also ripe for cherry-picking, depending on the story you're trying to tell. While there were some interesting findings, I won't be retracting any of my previous statements based on the results.

#ai #ethicalAI #Meta #Microsoft #lawsuit
I hope you've had another fantastic week and are coming into the home stretch before the weekend. What better way to celebrate than to run through some of the latest happenings at the intersection of business, technology, and human experience? With that, let's get to it.

AI Job Seeking Gone Wrong - If you're going to use AI to help write a cover letter, and I'd strongly encourage you to do so, please at least take the time to proof it for [insert info here] gaps. The rise in failures to do so is causing a lot of noise, but is it an AI problem or a people problem?

Lessons from Al Pacino - While I have my concerns about people becoming overdependent on AI, it's not an issue limited to AI use. Just ask "The Godfather." Al Pacino has an incredible story about how taking his eye off the ball left him discovering he was bankrupt in his 70s. So, don't think avoiding AI means you're secure.

Walmart RTO Response - Walmart and 3M are two of the latest big players in the news for their questionable RTO policies, but one Walmart C-suite exec is taking a stand. Rather than relocate to somewhere in Arkansas, Walmart's CTO opted to leave the company. It's encouraging when folks at the top are willing to go against the current.

Undesired AI Resurrection - I've talked on multiple occasions about the risks of looking to AI for the artificial resurrection of lost loved ones through ghostbots, but how would you feel if your lost loved one was resurrected without your knowledge? One father made his feelings pretty clear when he discovered his tragically murdered daughter on Character.ai.

Chipotle AI Recruiting - I love a good pun, and if you interview at Chipotle anytime soon, you may get to interact with one, literally. That's right, they've created an AI bot, "Ava Cado," to help streamline their hiring. Only time will tell how it all plays out. However, based on my initial investigation into their strategy and plan, I'd give it my AI stamp of approval.

Show Notes:
In this Weekly Update, Christopher explores the intelligent use of Generative AI (GenAI) in job applications, emphasizing the importance of providing thoughtful prompts and avoiding lazy mistakes. The discussion extends to the ethical concerns of AI, such as the unauthorized use of a deceased individual's likeness and the need for better governance. He shares Al Pacino's story of financial recovery, highlighting resilience and responsibility, and addresses financial uncertainty, affirming the value of personal effort over material wealth. Additionally, he examines the impact of return-to-office mandates by major companies like Walmart and 3M, offering advice for employees. Finally, he highlights Chipotle's AI-driven recruitment process, "Ava Cado," showcasing how AI can streamline operations while discussing the broader implications for the future of recruitment. Tune in for a blend of practical advice, ethical insights, and predictions on AI's role in business and technology.

00:00 - Introduction and Weekly Update Overview
01:36 - The Viral ChatGPT Cover Letter Controversy
11:37 - Al Pacino's Financial Downfall and Recovery
20:14 - Return to Office Mandates: Walmart and 3M
31:08 - Unwanted AI Resurrection
39:43 - Chipotle's AI Recruiting Strategy
48:46 - Conclusion and Final Thoughts

#ai #recruiting #flexibleworking #strategy #ghostbots
Ever wondered if the futuristic robots we see in movies are just around the corner? Spoiler alert: we're not there yet, but we're making fascinating strides. In this episode, I talk with Jerry Swafford, PhD, a brilliant mind in AI and robotics, to peel back the layers of hype and unveil the real state of humanoid robotics. Jerry brings a wealth of diverse experience, from tinkering with robotic arms for Rolls Royce to pioneering AI-based controllers for humanoid robots.

We dig into the heart of the matter, exploring the incredible advancements that have been made, the significant challenges still facing the field, and the ethical considerations we can't ignore. Are we on the brink of an AI revolution, or are we staring down the barrel of an "AI winter"?

As you've probably come to expect, you'll find a balanced discussion, clearly separating science fiction from the reality of where we truly stand with humanoid robots. I'm confident this episode will challenge your perceptions, ignite your curiosity, and leave you with a clearer understanding of what's possible – and what's not – in the world of AI and robotics.

Show Notes:
In this episode, Christopher explores the fascinating world of AI and robotics with guest Jerry Swafford, PhD. Jerry shares his journey from Nashville to becoming a specialist in AI and robotics, discussing his research on robotic arms, UAV swarms, and humanoid robots. They explore the complexities of humanoid stability and balance, the advancements in hardware and software, and the potential future of robotics in various sectors. The conversation also touches on the ethical and societal implications of advanced AI and robotics, emphasizing the need for careful development to avoid potential pitfalls. Whether you're an AI enthusiast or a newcomer, this episode offers deep insights into the current state and future possibilities of AI and robotics.

00:00 - Introduction
01:06 - Guest Introduction: Jerry Swafford, PhD
04:44 - Advancements in Drone Technology
15:36 - Challenges in Humanoid Robotics
26:58 - The Rise of Humanoid Robotics
31:53 - Current Limitations and Future Prospects
35:19 - The Role of Large Language Models
38:45 - The Future of AI and Robotics
53:52 - Potential Risks and Ethical Concerns
58:34 - Conclusion and Final Thoughts

#ai #robotics #humanoids #techtrends #ethicalai
Happy Friday, everyone, and congratulations on making it through another week. While temperatures may be dropping here in Wisconsin, the intersection of business, tech, and human experience continues heating up. As usual, I've got a rundown on some of the most notable happenings, so let's get rolling.

SpaceX Starship - While I always thought the moon landing videos were cool, I never fully understood why people got so excited about them. That is, until I watched the SpaceX Starship catch this week. While I continue struggling to understand the end game of space exploration, I couldn't resist jumping off the couch as the tower perfectly caught that rocket.

Tesla's "We, Robot" - Tesla's latest big event had all the elements of an actual Hollywood production, including the immeasurable amount of hot air and empty promises being blown around. Whether you want to talk about the remotely managed Tesla robots or the impractical CyberCabs, there was no shortage of fluff. However, I'll at least acknowledge that it was a pretty solid sci-fi event.

Amazon's Gone Nuclear - You know, I was just thinking to myself, "We need mini nuclear power plants everywhere." Like their same-day delivery, Amazon has delivered with a $500M investment in modular nuclear power to fuel the growing energy demand of AI. Maybe it's just my growing up on The Simpsons, but I can't help but wonder if this might not work out so well.

Nudity Bot Chaos - There are a handful of AI applications I feel strongly have no redeeming qualities, and undressing or nudifying bots/apps are one of them. The ability to exploit someone without their consent for your satisfaction isn't okay under any circumstance. Unfortunately, there's an exponential rise in them, and the damage they're causing is horrendous. I get people get feisty about "governance," but there are some things that just shouldn't be allowed.

AI Human-Level Reasoning - Based on some sources, you'd think AI surpassed human capability in early 2023, but according to Meta's AI Chief Scientist, we're a long way out. He goes so far as to say the AI modeling we'd need to get there is nothing more than a theory. What I find most interesting about his "solution" is its circular logic. While it is an interesting theory, I don't see us ever getting there.

#ai #amazon #meta #tesla #spacex
Happy Friday Everyone! Congratulations on making it through another week. If you've been following me for a long time, you know what that means! Another rundown of the latest happenings at the intersection of business, technology, and human experience. With that, let's get to it.

ChatGPT 'Canvas' - OpenAI hits back at Anthropic's 'Artifacts' with their latest enhancement, 'Canvas,' which is a surprisingly helpful tool once you get the hang of it. While far from perfect, we're starting to see how GenAI is evolving from a one-shot prompt hero to a surgical, on-demand collaborative partner.

AI Interviews - The number of people I've recently talked with who were caught off guard when welcomed to an interview by a notification that they'll be talking with an AI bot is astonishing. While I see the potential, based on their feedback, I'm not convinced we've learned our lesson from the previous messes of AI screening.

AI Sentience - I have a pretty diverse list of AI leaders I appreciate, Mo Gawdat being one of them. However, he's taking some heat for his comments about AI having already reached sentience, and I understand the criticism. That said, rather than debating whether AI is or isn't truly sentient, I think we should instead focus on how good it is at convincing people it is.

RTO Insanity - Can we finally all acknowledge this RTO nonsense is nothing more than a power trip? Your first step in solving a problem is acknowledging you have one. A KPMG survey recently showed that nearly 90% of CEOs weren't even afraid to publicly acknowledge they support openly discriminating against people for no reason other than where they do their work. Please, can we stop the madness?!

Meta MovieGen - Finally! You won't be limited to forging your family photo memories thanks to Meta. Soon, you can even enhance your video memories with AI. At this point, why bother even going on vacation? Just buy a green screen and record some videos in your house. A little AI magic and "poof," you and your family are world travelers. YouTubers have been doing it for years, but soon, you'll have the power in the palm of your hand.

Show Notes:
In this Weekly Update, Christopher explores the integration and advancements of AI in various spheres, particularly in workplace technology and human resources. Highlights from this week include the introduction of ChatGPT Canvas, designed for coding and writing, and a discussion on AI's role in interviews and potential biases. Christopher also debates the implications of AI as potentially sentient, drawing skepticism and varied opinions. Additionally, he critiques corporate strategies for returning employees to the office and the backlash from employees, warning about potential long-term implications. Finally, Christopher explores Meta's new MovieGen feature, raising both opportunities for creativity and concerns about digital authenticity.

00:00 - Introduction
01:39 - Exploring ChatGPT Canvas
10:57 - The Future of AI in Job Interviews
22:57 - Mo Gawdat's Controversial Views on AI Sentience
31:39 - The Return to Office Debate
42:13 - Meta's MovieGen: Revolutionizing Video Editing
47:53 - Conclusion and Final Thoughts

#ai #genai #remotework #flexibleworking #futureofwork
How much longer will Wall Street's tightly kept secrets be restricted to elite traders and brokers, and what would a shift mean for those who have built careers on them? In this week's episode, we're tearing down the barriers that have long kept high-level financial analytics out of reach for everyday investors. I did this alongside Andrew Einhorn, CEO of Level Fields, and together we discussed how AI is democratizing finance, leveling the playing field, and making sophisticated tools accessible to everyone—not just the elite on Wall Street. We explored the transformative power of AI as it unlocks hidden market opportunities and drives economic growth through diversification. We also examined how it can de-risk investments and uncover patterns in data that human analysts would miss, opening up a world of possibilities. Of course, we couldn't ignore the controversial resistance from traditional financial institutions. Why are they so hesitant to embrace AI? Spoiler: it's more about preserving control than protecting investors. But the truth is, when AI levels the playing field, everyone wins. A common thread through the discussion was how AI is transforming the role of traders. While Hollywood might paint a glamorous picture, the reality is often mundane. As a result, AI can step in to handle the routine data crunching, freeing up human capital for higher-order thinking and creativity, which is a familiar story for many jobs. So, listen in and see how AI is dramatically changing the world of finance as you know it. Show Notes: In this episode, Christopher has an engaging conversation with Andrew Einhorn, CEO of Level Fields, about the transformative power of AI and analytics in the financial sector. Andrew shares his unique journey from aspiring judge to tech entrepreneur, revealing how his company leverages AI to automate complex financial tasks and level the playing field for individual investors. Key topics include the evolution of AI in processing financial data, market sentiment versus actual data, and the broader implications of AI-driven automation in various industries. Discover how AI is not only changing the financial landscape but also improving work efficiency and economic opportunities across the board. 00:00 - Introduction and Overview 01:04 - Guest Introduction 14:19 - Challenges and Pivot to Financial Services 21:57 - Creating Level Fields: Leveling the Investment Playing Field 29:50 - Cutting Through The Fluff in Financial News 32:51 - Home Builders: A Case Study 35:38 - Leveraging AI in Financial Analysis 43:23 - AI's Broader Implications and Challenges 47:21 - Disruption in the Financial Industry 53:35 - The Future of Work with AI 01:00:42 - Conclusion and Final Thoughts #finance #ai #futureofwork #trading #wallstreet
Congratulations on making it through another week! What better way to celebrate than with a rundown of some of the latest happenings at the intersection of business, technology, and human experience. From AI Voice to Humanoid Robots to the Future of Wearables, I've got you covered. With that, let's get to it. Advanced Voice Mode - Who needs friends? OpenAI's advanced voice mode is finally here. Now, you can talk to yourself in public and not be considered crazy. In all seriousness, it's a pretty impressive development, but what exactly should you do with the functionality? Meta Connect 2024 - On the topic of AI voice, the Zuckster is betting big on it, as was clearly demonstrated at the Meta Connect 2024 event. Honestly, their latest AI developments are incredible, which is to be expected. However, would you be surprised to find out the maker of Facebook would create new tech with some serious long-term risks? Meta Orion - While you might think AR & VR tech went the way of a neutron star, you'd be wrong. Meta made the news again this week with a lot of noise about their Orion glasses. Honestly, after watching a few videos, I wouldn't mind getting my hands on a pair, but don't get your hopes up. They won't be available to the public. However, my predictions on the trajectory of the tech are spot on. Humanoid Robots - It's been a while since I stirred people's fears of humanoid robots taking over the world, but with all the advancements in 2024, I thought it was time to kick the hornet's nest again. If you thought they'd fallen by the wayside, prepare for a poor night's rest tonight, because these things are getting crazy. On the bright side, you'll have a suite of options when deciding which one kicks down your door. AI Governance Veto - If you were hoping new legislation might slow the freight train of AI development, prepare to have your hopes dashed. California Governor Gavin Newsom vetoed SB 1047 and its attempts to make our insane superpower safe for humanity. However, with compelling arguments like "that will cost money" and "it will slow down development," can you blame him? Show Notes: In this Weekly Update, Christopher examines the capabilities and implications of OpenAI's advanced voice mode becoming available for pro users, emphasizing its potential applications and possible hazards. He discusses Meta Connect 2024, highlighting Meta's advancements in AI voice technology, particularly its implications for content creation and virtual enhancements. Christopher also addresses the rapid progress of humanoid robots and their emerging roles in sectors like space exploration, healthcare, and home use. The episode ends with the potential societal challenges linked to AI and emerging technologies, including the ethical considerations and the need for balanced regulation, using California's vetoed AI safety bill as a case study. 00:00 - Introduction and Weekly Update Overview 01:22 - Exploring OpenAI's Advanced Voice Mode 05:39 - Practical Use Cases for AI Voice 13:26 - Meta Connect 2024 Highlights 24:19 - Introducing Meta Orion 31:19 - Humanoid Robots: The Next Frontier 40:40 - California AI Veto - Regulation and Governance 50:20 - Final Thoughts and Conclusion #ai #metaconnect #aivoice #humanoidrobot #metaverse
It's that time of week for another rundown on the latest happenings at the intersection of business, technology, and human experience. As a favor, if you find these updates helpful, comment, like, and share them with a friend. With that, let's get to it. Copilot Wave 2 - Will Copilot 2.0 be the end of large swaths of professional jobs, as many advertisements subtly imply? I have concerns some leaders will fall for it, but it'd be a big mistake. A deeper reflection on how work is really performed quickly highlights the limits of AI's capability to replace human capacity. LinkedIn AI - I understand social media platforms want to use their users' data to train their AI and ultimately improve their products. And, as a user, I recognize there are tangible benefits that come from engaging in that transaction. What I don't appreciate is that functionality being quietly turned on without a heads-up that would let me understand the terms and make a conscious decision. WFH Stereotypes - My post this week may have given you a teaser on my feelings about USA Today's gross misrepresentation of WFH/hybrid employees by exploiting poorly gathered data from a recent survey. However, I have a lot more to say on the matter, and it's not about piling on the RTO hype but encouraging leaders to focus on performance, not activity. AI Environmental Concerns - How much are we willing to scorch the earth so AI can develop a catchy pirate jingle or draft an email that'd take 30 seconds of your time? At the rate we're going, the Earth might start to resemble our red-planet neighbor within our lifetimes. What's worse? There are completely reasonable solutions to all these problems if we'd just slow down a little bit. End of Mortality? - Will AI really be the end of mortality as we know it? An upcoming documentary is shining light on the tragic stories of people who were sold that tale at exorbitant prices but ultimately lost more than their savings. What's strange about the whole thing is that the person resurrected with tech never sees any benefit from it. I can't help but ask: who is all this even ending mortality for? Show Notes: In this Weekly Update, Christopher explores Microsoft's new Copilot 2.0 and its potential impacts on business workflows, the ethical concerns surrounding LinkedIn's data usage for AI training, and the ongoing debate about remote work efficiency. Additionally, he examines the environmental costs of AI advancements, specifically the water consumption and energy dependencies, and discusses the controversial emergence of digital 'eternity' bots aiming to keep deceased loved ones 'alive'. The episode calls for a holistic approach to integrating new technologies while highlighting the importance of human judgment and ethical considerations. 00:00 - Introduction and Overview 01:17 - Microsoft Copilot 2.0: Revolutionizing the Workplace? 13:03 - LinkedIn's Controversial AI Data Usage 21:39 - Debunking Remote Work Stereotypes 26:10 - Understanding Human Performance at Work 33:20 - Environmental Impact of AI 42:19 - The Ethics of Digital Immortality 52:33 - Final Thoughts and Cautionary Advice #ai #leadership #futureofwork #linkedin #flexibleworking
Is social media a force for good or a ticking time bomb? And, given it's not going anywhere, what can we do about it? This week I'm exploring those questions and more with Matthew Krayton, Founder & Principal at Publitics, as we seek to wield the double-edged sword that is social media and explore its profound impact on society. We'll unpack the evolution of social media from its humble beginnings to the algorithm-driven powerhouse it is today, examining how these platforms have democratized access to information and given a voice to the voiceless. At the same time, we can't ignore how they're creating echo chambers that fuel division and misinformation. Throughout it all, we don't shy away from the tough questions: How has social media changed our behavior, both online and offline? What are the psychological effects of living in a world where likes and shares can make or break our self-esteem? And perhaps most importantly, how can we navigate this landscape in a way that promotes a healthier, more connected society? The whole thing is a fascinating look at how the same tools that connect us can also drive us apart. I think you'll find the conversation challenges assumptions and reveals the complex realities of our digital lives. It's my hope this episode will make you rethink your relationship with social media. So, whether you're a digital native or someone trying to make sense of this ever-changing world, there's something here for everyone. Show Notes: In this episode, Christopher engages with Matt Krayton, founder and principal at Publitics, to scrutinize the profound effects of social media on society. We explore its evolution, political impact, and effects on human behavior and mental health. Matt shares his journey from aspiring teacher to digital media expert, offering insights into both the positive and negative ramifications of our connected world. Key themes include the democratization of information, anger-driven content, and strategies for healthier online engagement. The discussion also covers the emotional responses to social media, handling misinformation, crisis management, and the future ramifications of technologies like AI and deepfakes. 00:00 - Introduction 09:05 - The Rise of Social Media in Politics 14:31 - The Impact of Social Media Algorithms 22:37 - The Dark Side of Social Media 32:35 - Influencer Culture and Perception vs. Reality 38:02 - The Illusion of Expertise 43:17 - The Trap of Social Media Validation 54:36 - The Impact of Social Media on Mental Health 01:00:38 - Navigating the Future of Social Media 01:10:09 - Concluding Thoughts and Future Concerns #SocialMedia #DigitalMedia #AI #MentalHealth #Politics
While last week's update was a slight deviation, this week I'm back to the usual rundown of the latest happenings at the intersection of business, tech, and human experience. And, what a rundown it is. With that, let's get to it. OpenAI Strawberry - OpenAI isn't getting into the healthy living space, but they are trying to make their LLMs think more critically. Is it working? You'll have to listen for my honest opinion, but let's just say it's more of something, alright. Instagram Teen - While teenagers will always find a way around rules, I commend Meta for taking formal steps to protect youth from the dangers of social media. However, to all the parents out there, I wouldn't advocate you abdicate your parenting responsibilities. After-Hours WFH Drama - Your daughter makes a cameo on your 8:30 pm video call, and the next thing you know, you've got a meeting with HR. It sounds too crazy to be real, but truth is often stranger than fiction, as a marketing exec discovered. $10M AI Scam - Artificial bands making artificial music for an artificial audience, to the tune of $10M in royalties. It's certainly one way to put your musical talents to work, but it might also land you in prison for the rest of your life, as one North Carolina musician is discovering. $2k Fusion Reactor - Clean energy may be closer than we realize, but in a very unexpected way. A college student recently combined his skills with parts from Amazon and Anthropic's AI, building a desktop fusion reactor for under $2k. Might want to put a hold on that spendy solar conversion. Show Notes: In this Weekly Update, Christopher explores the latest in AI advancements with OpenAI's 'Strawberry' model, also known as O1. The discussion dives deep into its capabilities, limitations, and potential impacts on user behavior and misinformation. Additionally, he shares his thoughts on Instagram's new teen settings aimed at creating a safer social media environment for younger users, along with practical advice for parents. Next, he objectively discusses the unexpected predicament a marketing executive found himself in when his daughter briefly cameoed in a work meeting. Then, he examines a controversial case of AI-generated music fraud, involving a musician who used AI to create fake bands and fake listeners, leading to significant financial gains and legal consequences. Finally, he spotlights a groundbreaking achievement by a college student who built a working fusion reactor for $2,000 using parts from Amazon, demonstrating the incredible potential and simultaneous risks of AI-driven innovation. Tune in for a riveting discussion on the interplay of technology, ethics, and human experience. 00:00 - Introduction 01:59 - OpenAI's Project Strawberry: What's New? 14:56 - Instagram's New Teen Settings: A Safer Social Media? 25:29 - A Marketing Professional's Unexpected Incident 35:26 - Musician's AI Scam Exposed 43:38 - College Student Builds Fusion Reactor 50:04 - Final Thoughts and Future Prospects #ai #cleanenergy #flexibleworking #instagram #socialmedia
Happy Friday, everyone! I hope you've had a phenomenal week. So, this Weekly Update is a deviation from my usual format. I'll be back to usual next week, so if this is your first listen, you're welcome to check out previous updates to familiarize yourself with the style. However, this week, I felt compelled to respond to a recent episode of Diary of a CEO (DOAC) between host Steven Bartlett and guest Yuval Noah Harari on the state of society and the role AI is playing in it. What I appreciate most about Harari is that he's a historian, not a tech guru, which means his reflections connect the past to the present and the potential future. The conversation is long, heavy, and at times dark, which isn't for everyone, so I thought I'd share my thoughts as an alternative. Of course, you're still welcome to check it out. As a teaser, I dive into the concept of "alien intelligence" and why we're dangerously underestimating how different AI is from the way we think, and the implications that has. I also break down the societal risks of AI, from its growing influence in decision-making to how it might already be shaping your worldview without you even realizing it. Are we at risk of worshiping at the altar of AI, blindly submitting ourselves to the decisions curated by tech? I'll also tackle the issue of information overload—how fear, hate, and greed are served up like junk food—and why we need to rethink what we consume, both physically and intellectually. And, with it being an election year, I had to address what happens to democracy when we start viewing anyone who thinks differently as an enemy to be destroyed. Harari predicts complete collapse is possible, and I'll explain why that should matter to you right now. While the topic is a heavier one, it's a conversation we all need to have. So, if you care about where AI is taking us and what we can do to ensure we stay human in a world increasingly influenced by machines, this is one episode you won't want to miss. Listen in, reflect, and let's stay ten steps ahead together. Show Notes: In this Weekly Update, host Christopher diverges from the usual updates as he reacts to a conversation between Steven Bartlett and historian Yuval Noah Harari on the 'Diary of a CEO' podcast. The episode breaks down the reactions into three parts: AI and Tech Implications, Societal Implications, and Solutions. Throughout, Christopher examines the dangers and misconceptions surrounding AI, societal trust, the impact of junk information, and the current state of global democracy. Emphasizing the necessity of accountability, trusted institutions, and true interpersonal connections, Christopher encourages listeners to reflect deeply on these topics and take meaningful action. The episode concludes with reflections on the potential collapse of democratic systems and a call for unity and cooperation. 00:00 - Introduction & Updates 01:16 - Why A Different Weekly Update 05:46 - Artificial vs. Alien Intelligence 09:05 - AI Decision-Making and Ethical Concerns 25:33 - Societal Impacts of AI 31:15 - The Age of Distrust 34:51 - Finding Ground in Timeless Truths 39:07 - The Fragility of Democracy 44:15 - The Dangers of Changing Rules 50:46 - The Need for Accountability 56:42 - The Importance of Unity and Reflection 59:16 - Conclusion and Call to Action #AI #Democracy #ArtificialIntelligence #DOAC #Leadership
What if we could accurately predict diseases years before symptoms appear without invasive procedures? How would that transform the troubled landscape of modern healthcare? That's the topic of conversation this week when I chat with Dr. Jonathan Hill, Associate Professor of Cell Biology at BYU and VP of Science and Technology at Wasatch. Together, we're exploring the cutting-edge intersection of AI and DNA sequencing. We unpack AI's untapped potential to decode the complex patterns hidden within our DNA, paving the way for early diagnosis and better treatment options. In our conversation, Dr. Hill shares how his pivot from traditional medicine led to his current work at the forefront of biological research. You'll learn more about his team's innovative work in DNA methylation and how AI-driven diagnostics could radically change the landscape of modern medicine. In parallel, we talk through some of the challenges of innovation, like the chasm between academic research and practical execution, the importance of change management, and how, despite its tremendous potential, AI isn't for everything. Whether you're a tech enthusiast, a healthcare professional, or simply curious about the future of medicine, you'll walk away from this one both inspired and informed. So, come check out how the secrets encoded in our DNA are being unlocked, one AI-driven discovery at a time. Show Notes: In this episode, Christopher talks with Dr. Jonathan Hill, an Associate Professor of Cell Biology at BYU and VP of Science and Tech at Wasatch, to explore the intersection of AI and DNA sequencing. The discussion highlights how AI is being leveraged for early diagnostic techniques, particularly in neurodegenerative diseases like Alzheimer's. Dr. Hill also shares insights into the collaborative approaches taken to bridge the gap between academic research and practical application. The conversation tackles the essentials of integrating AI models, ensuring the cleanliness of data, and the future of healthcare diagnostics powered by advanced technology. Don't miss this insightful dialogue that unpacks the challenges and breakthroughs in healthcare innovation. 00:00 - Introduction 01:14 - Guest Introduction: Dr. Jonathan Hill 06:56 - Challenges in Translating Research to Practice 13:33 - Collaboration and Innovation in DNA Sequencing 26:20 - Early Diagnosis of Alzheimer's with DNA Sequencing 32:10 - Innovative Lab Approaches 34:05 - Challenges and Benefits of Healthcare Innovation 40:46 - AI in Diagnostics: Three Levels of Application 51:22 - Simplifying Diagnostic Processes 01:03:44 - The Future of AI in Healthcare #ai #healthcare #diagnostics #artificialintelligence #Alzheimers
If you're stateside, I hope you had a phenomenal Labor Day weekend and enjoyed the abbreviated week. It's been a wild one here, both on the personal front and at the intersection of business, tech, and human experience. With that, let's get to it. New Beginnings - I've been hinting that I had some personal news coming for the past couple of weeks, and the secret is out. My wife and I recently welcomed baby #8, and it's been a fantastic new beginning on the heels of the end of my time at ChenMed. Yet there have been some essential life lessons learned through the experience. Canva Price Gouging? - 400% price hike?!? Say what?! While many are reeling from Canva's recent pricing changes, there's more to it than meets the eye. While I'd have had some spirited conversations about the strategy, it makes sense, given the cost associated with their AI investment. However, did they invest in things any of their customers actually wanted? AI Active Listening - It's strange when you open your favorite social app, and that thing you were just talking about shows up as a "promoted post," but is it the result of CMG's active listening? I'm not entirely convinced, but I still think there's reason to shine a light on the matter. Gen Z Office Drama - Those darn Gen Zers and all their needs, amirite?! Reading WSJ's article about how much bosses struggle with their Gen Z employees got me visibly upset for many reasons. We're dealing with individuals, not a group of clones. And, from what I can tell, everything they're asking for seems pretty reasonable. But hey, shameless plug for how I can help. Hallucinations Gone Wrong - Imagine you ChatGPT yourself only to get a comprehensive description that's completely false and defamatory. Well, as one German court reporter discovered, it's not a possibility left to our imaginations. The sadder part was the lack of legal recourse to remedy the problem. Show Notes: In this Weekly Update, Christopher shares a major personal update of welcoming a new baby while transitioning from his previous job. He discusses the complexities of keeping personal news private and the societal pressures of living up to others' expectations. The episode transitions into significant tech news, including a massive price hike by Canva and the controversial leaked presentation about data harvesting by Cox Media Group. He also addresses misconceptions about Gen Z in the workplace and highlights the importance of individual relationships. Lastly, he covers the implications of AI errors, using the example of a court reporter wrongly labeled by an AI tool, and emphasizes the need for vigilance in digital footprint management. 00:00 - Introduction 01:15 - New Beginnings: Welcome Baby 8 11:53 - Canva's Price Hike: Hidden Costs of AI 23:01 - Leaked Presentation: Privacy Concerns with Smart Devices 31:40 - Gen Z in the Workforce 39:28 - AI Slander: Dangers of AI Misinterpretation 45:50 - Final Thoughts and Cautionary Advice #AI #GenZ #Leadership #Parenting #Futureofwork
Happy Friday! We have officially made it to the end of summer 2024. Can you believe it?! While temperatures may be cooling down, tech and workplace changes continue heating up. So, let's get to it. Layoff Influencers - I hadn't heard the term "layoff influencer" until this past week, and no, it's not people helping influence companies toward layoffs. Read any headlines, and you'll know they don't need help being pushed in that direction. We're talking about the rising trend of people sharing their layoff experience. Is this helpful or harmful? Depends on who you ask. Private Jet Commute - While there's no shortage of outrage over the new Starbucks CEO and his 1,000-mile private jet commute, I can't help but dig beneath the surface concern. Commercial or charter, why are companies still so convinced an executive's physical presence is so necessary? Last I checked, executives don't emit gamma radiation that turns people into superhumans. Rising AI Concerns - AI is showing up in company annual reports, but not for the right reasons. A recent study showed an almost 500% increase in top companies publicly citing AI as a significant risk. While I'm happy companies are starting to recognize risks, the clear miss is how many still don't understand the benefits. AI isn't the risk; your strategy and execution are! Procreate Anti-AI - "I {expletive} hate generative AI!!" Those are strong words coming from the CEO of a creative tech company. However, a deeper investigation shows the only generative AI this CEO seems to consistently hate is the kind that does tasks he personally believes people should do. Is this a valid argument against AI or a personal bias gone wild? Bye, Bye Dev Jobs? - "You've got a max of two years before all your jobs are gone, software developers," according to Amazon's CEO. Well, not really. More like, "Developers, your jobs are going to change significantly over the next two years, so you'll need to adapt," according to the head of AWS, a division of Amazon. Funny how headlines tell one story while the details tell another. Show Notes: In this Weekly Update, host Christopher Lind provides valuable insights on various topics at the intersection of business, technology, and human experience. He begins by teasing upcoming news for next week's episode. The first segment dives into a Bloomberg article discussing the rise of 'layoff influencers' and the changing culture around job loss. Lind shares his personal experiences and reflections, emphasizing the complexities and biases tied to being open about layoffs. The discussion moves to the controversy surrounding the Starbucks CEO's use of a private jet for commuting and the broader implications for leadership and employee relations. Finally, Lind explores the polarizing views on AI's impact on jobs, citing studies and statements from industry leaders. He cautions against extreme positions and encourages balanced, thoughtful approaches to integrating AI in the workplace. 00:00 - Introduction and Weekly Rundown 01:53 - The Rise of Layoff Influencers 16:17 - The Starbucks CEO Controversy 25:20 - AI: A Growing Threat or Opportunity? 30:49 - Procreate CEO is Anti-AI 38:18 - The Future of Developer Jobs 44:55 - Conclusion: Embracing Change and Staying Ahead #AI #TechTrends #FutureofWork #Equity #Layoffs
In a world where technology constantly pushes boundaries, the idea of creating a digital replica of our physical environment might sound like science fiction. However, it's definitely not. This week I'm sitting down with Boaz Goldschmidt, VP of Business Development at Treedis, to explore the transformative world of digital twin technology, where the analog world is cloned into the digital space. From its unexpected origins, it's coming to market with a surprising number of practical applications. Add to that, it has a lot of disruptive potential for a multitude of industries. Boaz also shares how a journey that started with him running a pizza bar led him to a cutting-edge tech company. We talk through how digital twins are revolutionizing everything from workforce training and operational efficiency to predictive maintenance and connected worker solutions. We highlight the value this technology brings, not just in simplifying complex processes but in creating more efficient, cost-effective, and safer work environments. So, whether you're a tech enthusiast, a business leader, or someone curious about the future of technology, this episode will provide valuable insights and inspire you to explore the endless possibilities of digital twins. Show Notes: In this episode, we dive into the transformative potential of digital twin technology. Christopher is joined by Boaz Goldschmidt, VP of Business Development at Treedis, to explore how digital twins are revolutionizing industries from real estate to manufacturing. They discuss the basics of digital twin technology, its numerous applications such as training, maintenance, and IoT data visualization, and the immense cost and efficiency benefits it offers. Key takeaways include the versatility of digital twins, from onboarding and training to advanced use cases like predictive maintenance and operational simulation. This episode is a must-listen for anyone looking to understand the future of digital transformation and its practical implementation. 00:00 - Introduction 01:37 - Meet Boaz: From Pizza to Tech 11:07 - Understanding Digital Twin Technology 18:43 - Capturing Digital Twins: The Process 25:35 - Adoption and Use Cases of Digital Twins 33:08 - The Basics of Digital Transformation 36:49 - Training and Onboarding with Digital Twins 46:44 - Connected Workers and Real-Time Support 53:32 - Visualization of Live Data and Predictive Maintenance 56:57 - Planning and Simulation with Digital Twins 01:03:06 - Conclusion and Future Prospects
Happy Friday everyone, and congratulations on making it to the end of another week. While summer has been filled with changes and is almost in the rearview mirror, things aren't slowing down. This week, I've got another rundown of the latest happenings at the intersection of business, technology, and human experience. With that, let's get to it. Grok 2 Mayhem - Elon's Grok 2 made positive headlines for its latest AI benchmarks and image generation capability, which lasted for about five minutes. Attention then immediately shifted to how his decision to limit guardrails in the name of non-censorship was being grossly exploited and misused. Undressing App Accountability - The growing number of apps that openly brag about their ability to create nude images of anyone you want without their consent is disturbing. However, the San Francisco district attorney isn't laughing and is about to turn up the heat by bringing a first-of-its-kind lawsuit against the companies. Eric Schmidt's AI Prophecy - Former Google CEO Eric Schmidt knows a thing or two about AI, and even he's acknowledging his need to constantly change his predictions about where things are going. In his latest shift, he's shying away from his former prediction that smaller models are closing the gap with frontier models, but not for the reason you might expect. Cisco's AI Layoffs - Apparently, $10.3B in profit wasn't enough, leading to their second major round of layoffs in 2024. However, there is one major change, and that's their reasoning. While the first big group was chalked up to vague statements about realignment, this round was directly attributed to investment in AI. While there may be some merit to the statements, it seems more like the convenient scapegoat companies are jumping on. Replacing Teachers with AI - The invasion of AI isn't limited to the corporate world, as a high school in the UK is planning to compensate for its teacher shortage with AI tools. While there's heated debate coming from both sides, I see legitimate ways this can be done. However, my biggest concern is related to the execution. Show Notes: In this Weekly Update, Christopher explores five significant developments in AI and technology. The episode starts with a detailed discussion on the release and subsequent controversy surrounding Elon Musk's AI model, Grok 2, and its unrestricted capabilities leading to misuse and chaos. Next, he explores the rise of undressing apps and the legal battle initiated by the San Francisco district attorney against 16 popular applications. The conversation also covers Eric Schmidt's revised predictions on the future of AI, emphasizing the growing divide between frontier models and smaller AI players. He then analyzes Cisco's decision to lay off 5,500 employees to invest more heavily in AI, scrutinizing the broader implications and corporate strategies involved. Lastly, the episode examines a UK high school's use of AI to supplement its teacher shortage, evaluating both the potential benefits and risks. Tune in for a comprehensive look at this week's transformative events in the AI landscape. 00:00 - Introduction 01:00 - Elon Musk's Grok 2 AI Model: A Double-Edged Sword 13:35 - The Rise of Undressing Apps and Legal Battles 19:35 - Eric Schmidt's Updated AI Predictions 31:35 - Cisco's Controversial Layoffs 34:26 - Challenges of AI Investments 41:35 - AI in Education: Clickbait and Reality 50:13 - Concluding Thoughts on AI Integration #ai #education #business #layoffs #Grok2
Congratulations on making it through another week, and what a week it has been. While it's been a whirlwind on the personal front, I've still got a rundown for you on the latest happenings at the intersection of business, tech, and the human experience. With that, let's get to it. Chatbot Dependence - I wonder if we'll soon reach a point where you'll be presented with a warning label whenever you attempt to use AI. Based on OpenAI's safety card, perhaps we should be. One of the latest concerns relates to the risk of emotional dependence with its advanced voice mode. While it may seem out there, user dependence is already rising. AI Voice Hijack - What would you do if your chatbot suddenly started talking back to you in your voice? A glitch in OpenAI's advanced voice mode is already making this happen for a growing number of users. And, while the conspiracies circulating about how and why are neither accurate nor helpful, the situation is a cause for concern. AI Bubble Pop - Depending on what news source you read, the AI bubble is either about to completely implode or grow in unprecedented orders of magnitude. Honestly, I predict a mix of both. There are some economic aspects of AI on the verge of collapse. However, I'm confident there's still plenty of untapped growth we'll see before things slow down. Real-Time Faceswapping - A new risk for deepfakes is on the rise as a popular app allows you to become whoever you want on camera by simply uploading a single picture of that individual. Combine that capability with voice cloning, and why bother with the hassle of AI image and video creation tools for deepfakes? With the click of a button, you just become whoever you want. Does AI Learn? - A prominent study from the University of Bath concluded AI does not demonstrate any complex reasoning skills or have the ability to learn without the prompting and direction of a human. So much for the robot apocalypse, right? Maybe not. While AI might not independently devise a plan to destroy humanity, there's nothing stopping someone from prompting AI to pull it off. Show Notes: In this Weekly Update, Christopher examines the latest advancements in AI voice capabilities, particularly OpenAI's advanced voice mode. He explores the potential risks, including emotional dependence on chatbots and voice hijacking, and discusses the ethical implications and societal consequences. The conversation then shifts to the broader AI landscape, examining claims about the AI bubble and what the future holds for AI innovation. Also of concern is a new app allowing users to become someone else on camera using only a photo of the individual. Finally, a recent study from the University of Bath on AI's independent learning capabilities is also analyzed, highlighting the current limitations and potential risks of AI technology. This comprehensive update aims to keep listeners informed and critical about the rapid developments in AI and their impact on human experience and society. 00:00 - Introduction 01:20 - Exploring OpenAI's Advanced Voice Mode 03:17 - Risks of Emotional Attachment to AI 13:01 - Voice Hijacking Concerns 21:29 - Debating the AI Bubble 30:12 - AI Faceswapping and Deepfakes 37:20 - AI's Learning Capabilities: A Study Review 48:33 - Conclusion and Final Thoughts #ai #deepfake #economics #consciousness #AIvoice
Have you ever wondered how the sense of smell could revolutionize healthcare? This week, I sit down with Kordel France, a visionary in AI and olfactory science, to explore this fascinating frontier. Imagine diagnosing diseases not by sight or sound but by scent. It's not science fiction—it's happening now. We explore Kordel's journey from a tech-savvy farm to the forefront of AI innovation, one that ultimately led him to discover the remarkable potential of AI to replicate and enhance our sense of smell, enabling breakthroughs in medical diagnostics and even space exploration. We'll talk through the intricacies of developing sensors capable of detecting scents with extraordinary precision and the profound implications this has for early disease detection through breath analysis—offering a non-invasive, swift, and accurate diagnostic method. This conversation is more than just an exploration of technology; it's about envisioning a future where AI augments human capabilities in ways we never imagined. Whether you're a tech enthusiast, a healthcare professional, or simply curious about the future, this episode promises to inspire and inform. Show Notes: In this episode, Christopher explores the fascinating world of AI and olfactory science with Kordel France. We explore the current state and future possibilities of AI-driven scent detection, from healthcare applications like disease diagnosis through breath analysis to innovative uses in space exploration. Kordel shares his personal journey, the complexities of building hardware-integrated AI systems, and his vision for democratizing olfactory technology through extensive data libraries. This insightful discussion also addresses the evolving roles of humans in the age of AI, providing an optimistic yet realistic view of our technological future. 00:00 - Introduction 01:15 - Meet Kordel France: Journey into AI and Robotics 03:26 - Challenges in Developing AI for Smell 16:06 - Applications of AI in Healthcare 19:06 - The Future of AI and Olfactory Science 30:54 - Challenges in Scent Data Collection 37:03 - The Complexity of AI and Robotics 41:05 - The Role of AI in Future Innovations 53:12 - Exciting Applications and Future Prospects 57:54 - Concluding Thoughts and Future Predictions #AI #OlfactoryScience #HealthcareInnovation #ArtificialIntelligence #TechPodcast
When I was growing up, my biological mom was heavily involved in theater, so the phrase, "The show must go on," was said on many occasions. This week is no exception. Despite the disruption and chaos, one thing that seems to help most is putting one foot in front of the other, with the Weekly Update being one of those steps. So, with that, here's some of the latest happenings. Layoff Experience - Layoffs are tough to talk about. They're even more brutal to experience. This week, I had the opportunity to do more than talk about them after being informed my position would be eliminated. With everything going on in the world, I know my experience is not unique, so I thought I'd share some of my personal reflections in the hope of encouraging someone else. AI & Creativity - Is AI creative? Does it make us more creative? Or does it accomplish none of the above? This week I dug into a study that was on a mission to answer those questions, and the findings were interesting, to say the least. While the study's size and criteria wouldn't serve as concrete evidence, they do put some additional points in the camp that AI + People nets a better result. AI Friend - Would you wear an AI device that listens to everything happening around you and engages with you about it like a true friend? Hard pass here, but the fact these devices keep popping up in different flavors speaks to a trend we'd be wise not to ignore. I still remember when taking out your phone in a social setting was rude. It's not crazy to think a future generation might take that to the next level. EU AI Act - Let's regulate our way out of this AI mess we created, amirite? I'm pretty sure that's not the slogan for the EU AI Act, but maybe it should be. Joking aside, it's about time someone took some significant steps to at least lay out a regulatory framework. Will it be perfect and solve all our problems? No, but we have to start somewhere. And, if you think this doesn't apply because you don't live in the EU, think again. I'm still in therapy for my recurring GDPR nightmares. Show Notes: In this week's update, Christopher shares a personal experience with job loss amidst widespread industry downsizing, highlighting the importance of empathy and support. He also examines the impact of AI on creativity, referencing a study showing that AI can enhance the creativity of average writers. Also discussed is an AI friend pendant designed to combat loneliness, sparking concerns about human connection. Finally, there's a detailed look at the EU's new AI Act, the world's first comprehensive framework to regulate AI, with potential global implications. The episode aims to inform and encourage viewers about current trends and coping strategies in the realms of job loss, AI integration, and industry regulations. 00:00 - Introduction 01:20 - My Layoff Experience 04:32 - Encouragement and Vulnerability 12:24 - Impact of AI on Creativity 23:01 - AI Friend Pendant: A New Trend? 31:43 - EU AI Act and Global Implications 40:56 - Conclusion and Future Updates #ai #regulation #creativity #wellbeing #layoffs