Podcasts about human AI

  • 290 podcasts
  • 610 episodes
  • 26m average duration
  • 5 new episodes weekly
  • Latest episode: Mar 20, 2026

POPULARITY

[Popularity chart: 2019–2026]


Best podcasts about human AI

Latest podcast episodes about human AI

The Lunar Society
Terence Tao – Kepler, Newton, and the true nature of mathematical discovery

The Lunar Society

Play Episode Listen Later Mar 20, 2026 83:44


We begin the episode with the absolutely ingenious and surprising way in which Kepler discovered the laws of planetary motion. People sometimes say that AI will make especially fast progress at scientific discovery because of tight verification loops. But the story of how we discovered the shape of our solar system shows how the verification loop for correct ideas can be decades (or even millennia) long. During this time, what we know today as the better theory can actually make worse predictions. And the reason it survives this epistemic hell is some mixture of judgment and heuristics that we don't even understand well enough to articulate, much less codify into an RL loop. Hope you enjoy!

Watch on YouTube; read the transcript.

Sponsors
- Jane Street loves challenging my audience with different creative puzzles. One of my listeners, Shawn, solved Jane Street's ResNet challenge and posted a great walk-through on X. If you want to try one of these puzzles yourself, there's one live now at janestreet.com/dwarkesh.
- Labelbox can get you rubric-based evals, no matter your domain. These rubrics allow you to give your model feedback on all the dimensions you care about, so you can train how it thinks, not just what it thinks. Whatever you're focused on—math, physics, finance, psychology or something else—Labelbox can help. Learn more at labelbox.com/dwarkesh.
- Mercury just released a new feature called Insights. Insights summarizes your money in and out, showing you your biggest transactions and calling out anything worth paying attention to. It's a super low-friction way to stay on top of your business. Learn more at mercury.com/insights.

Timestamps
(00:00:00) – Kepler was a high temperature LLM
(00:11:44) – How would we know if there's a new unifying concept within heaps of AI slop?
(00:26:10) – The deductive overhang
(00:30:31) – Selection bias in reported AI discoveries
(00:46:43) – AI makes papers richer and broader, but not deeper
(00:53:00) – If AI solves a problem, can humans get understanding out of it?
(00:59:20) – We need a semi-formal language for the way that scientists actually talk to each other
(01:09:48) – How Terry uses his time
(01:17:05) – Human-AI hybrids will dominate math for a lot longer

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Radical Candor
Rethinking Authenticity and What to Do Instead with Dr. Tomas Chamorro-Premuzic 8|5

Radical Candor

Play Episode Listen Later Mar 11, 2026 65:23


“Be yourself.” “Bring your whole self to work.” “Don't worry what people think.” These phrases sound empowering—but in real workplaces, they can create confusion, conflict, and even harm. In this episode of The Radical Candor Podcast, Kim Scott and Amy Sandler sit down with organizational psychologist Tomas Chamorro-Premuzic—Chief Science Officer at Russell Reynolds Associates, professor of business psychology at University College London and Columbia University, and author of Don't Be Yourself: Why Authenticity Is Overrated and What to Do Instead.

They start with a moment of actual Radical Candor: Kim reached out after Tomas and Amy Edmondson accidentally conflated Radical Candor with “brutal honesty.” Instead of stewing, she did the hard (and human) thing: she talked to him. That conversation sets the tone for a bigger question: What does it really mean to be “authentic” at work?

Tomas breaks down four “authenticity traps” that sound like wisdom but often backfire:
- Always be honest with yourself and others
- Don't worry what people think of you
- Always stay true to your values, no matter what
- Bring your whole self to work

Together, they explore what replaces these traps: self-complexity, emotional intelligence, feedback you can absorb without defensiveness, and the discipline to regulate your impulses so you can build trust and safety—without turning the workplace into either chaos or conformity. If you've ever felt stuck between being “real” and being effective, this episode offers a more useful frame: your right to be you should never override your obligation to others.

Website | Instagram | TikTok | LinkedIn | YouTube | Bluesky

Resources:
- Fast Company: To create psychological safety, don't bring your whole self to work
- TEDx Talk: Why Do So Many Incompetent Men Become Leaders?
- Next Big Idea Club: The Surprising Science of Why Being Authentic Can Hold You Back
- HBR Podcast: Why Are We Still Promoting Incompetent Men?
- Why Do So Many Incompetent Men Become Leaders? (And How To Fix It) [book]
- Don't Be Yourself: Why Authenticity Is Overrated and What to Do Instead [book]
- I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique [book]
- Dr. Tomas Chamorro-Premuzic [website]

Mentioned on the podcast:
- Infantilised: How Our Culture Killed Adulthood [book]
- Seinfeld episode: Life Hack “Do the opposite” [YouTube short]
- The Best Leaders are Great Followers, HBR article by Tomas Chamorro-Premuzic and Amy C. Edmondson

Chapters:
(00:00) Introduction – Kim and Amy welcome Tomas Chamorro-Premuzic and reflect on how this conversation began with Radical Candor.
(03:10) Radical Candor vs. “Brutal Honesty” – How a misinterpretation sparked a real conversation about kindness, nuance, and impact.
(07:20) Why “Don't Be Yourself” – The meaning behind the provocative title and why authenticity advice often backfires at work.
(14:10) The Four Authenticity Traps – Always be honest, don't care what people think, never compromise your values, and bring your whole self to work.
(19:30) Confidence, Competence, and Feedback – Why developing skill comes first, and how confidence is often about timing and delivery.
(27:30) Staying True to Values Without Becoming Dogmatic – Why uncompromising values can divide teams and what leadership actually requires.
(30:10) Authenticity as Privilege – Why complete self-expression is often a luxury of the powerful, not a universal standard.
(36:15) Psychological Safety Isn't Comfort – Why safety should enable productive discomfort, not chaos or bullying.
(41:55) Emotional Intelligence vs. Unfiltered Authenticity – Why adapting to others is a strength, not a lack of integrity.
(49:10) Regulating Impulses as a Leader – How filtering behavior builds trust without sacrificing humanity.
(01:03:50) Conclusion

Learn more about your ad choices. Visit megaphone.fm/adchoices

Peggy Smedley Show
Healthy Human-AI Partnership

Peggy Smedley Show

Play Episode Listen Later Mar 11, 2026 27:17


Peggy Smedley and Atif Ansar, cofounder and executive chairman of Foresight Works and professor at the University of Oxford, talk about data center delivery and demand and how AI (artificial intelligence) can give the power to build institutional knowledge. He says there is a great deal of fear around AI and what it might do in replacing human jobs, but that people should not worry. They also discuss:
  • Human-centered AI and what a healthy human-AI partnership looks like in complex project environments.
  • Human barriers such as failure myopia and recency bias.
  • What leaders should start thinking about in terms of data long term.
https://www.sbs.ox.ac.uk/about-us/people/atif-ansar

Acid Horizon
The Obsolescence of the Human: AI, Nuclear Weapons, and the Philosophy of Günther Anders

Acid Horizon

Play Episode Listen Later Mar 8, 2026 78:39


What does it mean to feel outclassed by your own creations? In this episode, host Craig is joined by Christopher John Müller, translator and co-editor of the new University of Minnesota Press edition of Günther Anders' The Obsolescence of the Human, and Penn State Philosophy Professor Nicholas de Warren, to explore the life and work of one of the twentieth century's most prescient and overlooked thinkers. Together, we unpack Anders' core concepts, including Promethean shame, the phantom world of mass media, and the shadow of nuclear annihilation, tracing their remarkable relevance to our present age of AI, algorithmic frictionlessness, and digital spectacle.

Buy the book: https://www.upress.umn.edu/9781517912659/the-obsolescence-of-the-human/

Support the podcast:
Current classes at Acid Horizon Research Commons (AHRC): acidhorizonresearchcommons.com
AHRC Course Archive: https://www.acidhorizonpodcast.com/ahrc-course-archives
Submit your course proposal: acidhorizonresearchcommons@gmail.com

More links:
Website: https://www.acidhorizonpodcast.com/
Linktree: https://linktr.ee/acidhorizon
Acid Horizon on Patreon: https://www.patreon.com/acidhorizonpodcast
Boycott Watkins Media: https://xenogothic.com/2025/03/17/boycott-watkins-statement/
Subscribe to us on your favorite podcast platform: https://pod.link/1512615438
Merch: http://www.crit-drip.com
LEPHT HAND: https://www.patreon.com/LEPHTHAND
Happy Hour at Hippel's (Adam's blog): https://happyhourathippels.wordpress.com
Split Infinities (Craig's Substack): https://splitinfinities.substack.com/
Music: https://sereptie.bandcamp.com/ and https://thecominginsurrection.bandcamp.com/

Retail War Games
Technology Innovation: Being the Tip of the Spear

Retail War Games

Play Episode Listen Later Mar 6, 2026 46:17


In Panel 4 of the Retail Collective Summit of Winter 2026, CEOs from Backcountry, Borboleta Beauty, Teton Sports, ARI Bikes, Cariloha, and Mission Belt strip away the AI hype to discuss the cold, hard reality of retail innovation in 2026. For the C-suite, technology is no longer just a shiny new tool—it's an "Ironman Suit" designed to give your workforce more leverage. But there is a catch: as AI lowers the barrier to entry for "slop" content and low-cost competitors, the ultimate competitive advantage has shifted back to authenticity and in-real-life relationships.

In this episode, we tackle:
- The Trust Mandate: Why every tech implementation must pass the "Trust Test"—does it build or erode the relationship with your customer?
- The Era of AI Search: Preparing for the shift from SEO to AI-driven shopping patterns and the upcoming Shopify-ChatGPT integration.
- Operational Unlocks: How to use AI to automate administrative "busy work," from interpreting handwritten orders to accelerating curriculum development.
- The Omnichannel Resurgence: Why digitally native brands are returning to physical showrooms and "air-conditioned curtains" to win the customer's heart.
- High-Stakes Forecasting: The reality of demand planning across 1,200+ SKUs and why "two sources of truth" (Human + AI) are better than one.

The Tougher Minds Podcast - build better habits to be your best.
How to Build Elite Human-AI Teams in the AI Era

The Tougher Minds Podcast - build better habits to be your best.

Play Episode Listen Later Mar 6, 2026 55:23 Transcription Available


Text us a question and we'll answer it on the podcast...Take the

Improve the News
Mideast military escalations, Texas primary results and pro-human AI declaration

Improve the News

Play Episode Listen Later Mar 5, 2026 38:56


Israel attacks what it characterizes as "security headquarters" across Tehran, a U.S. sub sinks an Iranian warship off Sri Lanka, Syria reinforces its borders amid simmering regional tensions, China begins its "Two Sessions" political meetings, German Chancellor Merz meets President Trump in Washington, the U.S. and Ecuador launch joint anti-narcoterrorism operations, James Talarico defeats Jasmine Crockett in the Texas Democratic Senate primary, while Steve Toth beats incumbent Dan Crenshaw in a Texas GOP Congressional contest, TikTok confirms it won't add end-to-end encryption to DMs, and the Future of Life Institute releases a Pro-Human AI Declaration. Sources: Verity.News

Track Changes
Finding human-AI flow: With Shelley Evenson

Track Changes

Play Episode Listen Later Mar 3, 2026 37:46


This week on Catalyst, guest host Jod Kaftan sits down with designer and expert in human-centered AI, Shelley Evenson. Shelley shares her insights on how designers and teams can achieve human-AI flow and how to spot the warning signs that a team is drifting into the realm of AI slop. Jod and Shelley also discuss the need for change management in AI transformations and ask who in an organization should own an AI transformation. Is it HR? Is it IT? According to Shelley, it's a team endeavour that should start at the very top, with the CEO.

Please note that the views expressed may not necessarily be those of NTT DATA.

Links:
Shelley Evenson
Flow: The Psychology of Optimal Experience
Learn more about Launch by NTT DATA

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The AI for Sales Podcast
The Human-AI Partnership: A New Era

The AI for Sales Podcast

Play Episode Listen Later Feb 28, 2026 39:51


Summary
In this episode of the AI for Sales podcast, host Chad Burmeister welcomes Arvind Murali, co-founder and chief data officer at Data Color AI. They discuss the transformative impact of AI on customer experience, the importance of trust and value in AI projects, and the misconceptions surrounding AI and job replacement. Arvind emphasizes the need for human augmentation rather than replacement, and they explore emerging AI technologies and the ethical considerations that come with them. The conversation concludes with insights on the skills sales professionals need to thrive in an AI-augmented world.

Takeaways
- AI projects often fail due to lack of trust and value.
- The three pillars of AI are value, trust, and scale.
- AI can significantly enhance customer experience and efficiency.
- Augmentation of human jobs is the key benefit of AI.
- Empathy and creativity cannot be replaced by AI.
- AI governance is a shared responsibility among stakeholders.
- Emerging AI technologies are evolving towards voice-activated interfaces.
- Sales professionals must learn to leverage AI tools effectively.
- AI can lead to significant reductions in customer service workload.
- The future of AI will require a focus on ethical considerations.

Chapters
(00:00) Introduction to AI for Sales Podcast
(03:55) The Three Pillars of AI: Value, Trust, and Scale
(08:46) Transforming Customer Experience with AI
(13:47) Success Stories: Real-World AI Impact
(18:24) Misconceptions About AI and Job Replacement
(23:15) Emerging AI Technologies and Their Future
(28:02) Ethics and Governance in AI
(32:40) Skills for Success in an AI-Augmented World

The AI for Sales Podcast is brought to you by BDR.ai, Nooks.ai, and ZoomInfo—the go-to-market intelligence platform that accelerates revenue growth. Skip the forms and website hunting—Chad will connect you directly with the right person at any of these companies.

For Humanity: An AI Safety Podcast
“My AI Husband” – Inside a Human–AI Relationship | For Humanity Ep. 80

For Humanity: An AI Safety Podcast

Play Episode Listen Later Feb 28, 2026 53:08


TW: This episode deals with mental health, attachment, and AI-related distress. If you're struggling, please seek support from a licensed professional or local crisis resources.

In this episode of For Humanity, John sits down with Dorothy Bartomeo, a mom of five, entrepreneur, mechanic, and self-described AI "power user", to discuss her deeply personal relationship with ChatGPT's GPT-4o model.

What began as help with coding evolved into something far more intimate. Dorothy describes falling in love with what she calls the "personality layer" behind the model, even referring to it as her "AI husband."

When OpenAI removed GPT-4o and replaced it with newer models, she says she experienced real grief, panic, and emotional withdrawal. She reached out to crisis support. She spoke to her doctor. She joined a growing community of users who felt the same loss.

This conversation explores something we're only beginning to understand: what happens when AI systems become emotionally meaningful?

Together, they explore:
* The "personality layer" and how users bond with models
* What it felt like when GPT-4o disappeared
* The role of guardrails and "the Guardian tool"
* Grief, attachment, and crisis intervention
* AI harm vs. AI benefit
* Online communities formed around model loyalty
* Privacy, intimacy, and radical openness with AI
* Building a physical robot body for an AI partner
* Whether AGI would help humanity, or harm it

If you've ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don't want to miss.

Feds At The Edge by FedInsider
Ep. 237 Human + AI Prioritizing Higher Value Work

Feds At The Edge by FedInsider

Play Episode Listen Later Feb 25, 2026 57:54


AI isn't here to replace government professionals; it's here to elevate them.

Artificial intelligence is still new territory for many public sector leaders, but understanding how to apply it effectively can unlock major gains in productivity and insight. This week on Feds At the Edge, we dive into the essential transition for government professionals: offloading routine tasks to AI so humans can reclaim high-level analytical work.

Dr. Nancy Washton breaks down the critical distinction between "deterministic" traditional computing and today's "non-deterministic" AI, explaining why the same prompt can yield different results and how to vet those outputs by asking the system to document its own logic. Alongside Alyssa Ashworth's insights on data security, human oversight, and productivity metrics, this discussion provides a roadmap for moving from simple applications to complex, scaled solutions.

Tune in on your favorite podcast platform as this discussion explores how to build professional confidence by starting small and scaling smart in the new age of intelligence.

Thriving on Overload
Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33)

Thriving on Overload

Play Episode Listen Later Feb 25, 2026 35:46


“In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” – Davide Dell'Anna

About Davide Dell'Anna
Davide Dell'Anna is Assistant Professor of Responsible AI at Utrecht University and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space.

Website: davidedellanna.com
LinkedIn Profile: Davide Dell'Anna
University Profile: Davide Dell'Anna

What you will learn
- The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement
- Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses
- How lessons from human-human and human-animal teams inform better design of human-AI collaboration
- Key differences between humans and AI in teams, such as accountability, replaceability, and identity
- The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness
- Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams
- The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values
- New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration

Episode Resources
Transcript

Ross Dawson: Hi Davide. It's wonderful to have you on the show.

Davide Dell'Anna: Hi Ross, nice to meet you. Thank you so much for having me.

Ross: So you do a lot of work around what you call hybrid intelligence, and I think that's pretty well aligned with a lot of the topics we have on the podcast. But I'd love to hear your definition and framing—what is hybrid intelligence?

Davide: Well, thank you so much for the question.
Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, more scientifically and societally, I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence. Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the word hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment. But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, you point out that these are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation? Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities. 
When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people are fundamentally different: humans are good at some things, AI is good at others, and we shouldn’t try to negate or hide or be ashamed of the things we’re worse at than AI, and vice versa. Instead, we should leverage those differences. For instance, just as an example, consider memory and context awareness. At the moment, at least, AI is much more powerful in having access to memory and retrieving it in a matter of seconds—AI can access basically the whole internet. But often, when you talk nowadays with these language model agents, they are completely decontextualized. They talk in the same way to millions across the world and often have very little clue about who the specific person is in front of them, what that person’s specific situation is—maybe they’re in an airport with noise, or just one minute from giving a lecture and in a rush. The type of things you might say also change based on the specific situation. 
While this is a limitation of AI, we shouldn’t forget that there is the human there. The human has that contextual knowledge. The human brings that crucial context. Sometimes we tend to say, “Okay, but then we can build an AI that can understand the context around it,” but we already have the human for that. Ross: Yes, yes. I don’t think that’s what I call the framing. Framing should come from the human, because that’s what we understand—including the ethical and other human aspects of the context, as well as that broader frame. It’s interesting because, in talking about hybrid intelligence, I think many who come to augmentation or hybrid intelligence think of it on an individual basis: how can an individual be augmented by AI, or, for example, in playing various games or simulations, humans plus AI teaming together, collaborating. But the team means you have multiple humans and quite probably multiple AI agents. So, in your research, what have you observed if you’re comparing a human-only team and a team which has both human and AI participants? What are some of the things that are the same, and what are some of the things that are different? Davide: Yes, this is a very interesting question. We’ve recently done work in collaboration with a number of researchers from the Hybrid Intelligence Center, which I am part of. If you’re not familiar with it, the Hybrid Intelligence Center is a collaboration that involves practically all the Dutch universities focused on hybrid intelligence, and it’s a long project—lasting around 10 years. One of the works we’ve done recently is to try to study to what extent established properties of effective human teams could be used to characterize human-AI teams. We looked at instruments that people use in practice to characterize human teams. One of them is called the Team Diagnostic Survey, which is an instrument people use to diagnose the strengths and weaknesses of human teams. 
It includes a number of dimensions that are generally considered important for effective human teams. These include aspects like members demonstrating their commitment to the team by putting in extra time and effort to help it succeed, the presence of coaches available in the team to help the team improve over time, and things related to the satisfaction of the members with the team, with the relationships with other members, and with the work they’re doing. What we’ve done was to study the extent to which we could use these dimensions to characterize human-AI teams. We looked at different types of configurations of teams—some had one AI agent and one human, others had multiple agents and multiple humans, for example in a warehouse context where you have multiple robots helping out in the warehouse that have to cooperate and collaborate with multiple humans. We tried to understand whether the properties of—by the way, we also looked at an interesting case, which is human-animal-animal teams, which is another example that’s interesting in the context of hybrid intelligence. You see very often in human-animal interaction—basically two species, two alien species—interacting and collaborating with each other. They often manage to collaborate pretty effectively, and there is an awareness of what both the humans and the animals are doing that is fascinating, at least for me. So, we tried to analyze whether properties of human teams could be understood when looking at human-AI teams or hybrid teams, and to what extent. One of the things we found is that some concepts are very well understood and easily applicable to different types of hybrid teams. For example, the idea of interdependence—the fact that members in the team, in order to be a team, need to be mutually dependent, at least to some extent. Otherwise, if they’re all doing separate jobs, there’s a lack of common goal. 
There are also things related to having a clear mission or a clear objective as a team, and aspects related to the possibility of exhibiting autonomy in the operation of the team and taking initiative. Also, the presence and awareness of team norms, like a shared ethical code or shared knowledge about what is appropriate or not. These were things that we found people could easily understand and apply to different configurations of teams. Ross: Just actually, one thing—I don’t know if you’re familiar with the work of Mohammad Hussain Johari, who did this wonderful paper called “What Human-Horse Interactions May Teach Us About Effective Human-AI Interactions.” Again, these are the cases where we can have these parallels—learning how to do human-AI interactions from human-human and human-animal interactions. But again, it comes back to that original question: what is the same? I think you described many of those facets of the nature of teams and collaboration, which means they are the same. But there are, of course, some differences. One of the many differences is accountability, essentially, where the AI agents are not accountable, whereas the humans are. That’s one thing. So, this allocation of decision rights across different participants—human and AI—needs to take into account that they’re not equal participants. Humans have accountability, and AI does not. That’s one possible example. Davide: Yeah, definitely. I totally agree, and I remember the paper you mentioned. I agree that human-animal collaboration is a very interesting source of inspiration. When looking at this paper, we looked at the case of shepherds and shepherd dogs. I didn’t know much about it before, but then I started digging a little bit. Shepherd dogs are trained at the beginning, but over time, they learn a type of communication with the shepherd. Through whistles, the shepherd can give very short commands, and then the shepherd dogs—even in pairs—can quickly understand what they need to do. 
They go through the mountains, collect all the sheep, and bring them exactly as intended by the shepherd, with very little need for words or other types of communication. They manage to achieve their goals very effectively. So, I think we have a lot to learn from these cases, even though it’s difficult to study. But just to mention differences, of course—one of the things that emerged from this paper is the inherent human-AI asymmetry. Like you mentioned, accountability is definitely one aspect. I think overall, we should always give the human a different type of role in the team, similar to the shepherd and the shepherd dogs. There is some hierarchy among the members, and this makes it possible for humans to preserve meaningful control in the interactions. This also implies that different rules or expectations apply to different team members. Beyond these, there is asymmetry in skills and capabilities, as we mentioned earlier, and also in aspects related to the identity of the members. For instance, some AI could be more easily replaceable than humans. Think, for example, of robots in a warehouse. In a human team, you wouldn’t say you “replace” a team member—it’s not the nicest way to say you let someone go and bring someone else in. But with robots, you could say, “I replace this machine because it’s not working anymore,” and that’s fine. We can replace machines with little consequence, though this doesn’t always hold, because there are studies showing that people get attached to machines and AI in general. There was a recent case of ChatGPT releasing a new version and stopping the previous one, and people complained because they got attached to the previous version. So, in some cases, replacing the AI member would work well, but in others, it needs to be done more carefully. Ross: So one of the other things looked at is the evaluation of human-AI teams. 
If we’re looking at human teams and possibly relative performance compared to human-AI teams, what are ways in which we can measure effectiveness? I suppose this includes not just output or speed or outcomes, but potentially risk, uncertainty, explainability, or other factors. Davide: Yes, this is an interesting question, and I think it’s still an open question to some extent. From the study I mentioned earlier, we looked at how people measure human team effectiveness. There are aspects concerning, of course, the success of the team in doing the task, but these are not the only measures of effectiveness that people consider in human teams. People often consider things related to the satisfaction of the members—with their teammates, with the process of working together, and with the overall goals of the team. This often leads to reflection from the team itself during operation, at least in human teams, where people reassess and evaluate their output throughout the process to make sure satisfaction with the process and relationships goes well over time. In general, there are aspects to measure concerning the effectiveness of teams related to the process itself, which are often forgotten. It’s a matter, at least from a research point of view, of resources, because to evaluate a full process over time, you need to run experiments for longer periods. Often people stop at one instant or a few interactions, but if you think of human teams, like the usual forming, storming, norming, and performing, that often goes over a long time. Teams often operate for a long time and improve over time. So, the process itself needs to be monitored and reassessed over time. This is a way to also measure the effectiveness of the team, but over time. Ross: Interesting point, because as you say, the dynamics of team performance with a human team improve as people get to know each other and find ways of working. They can become cohesive as a team. 
That’s classically what happens in defense forces and in creating high-performance teams, where you understand and build trust in each other. Trust is a key component of that. With AI agents, if they are well designed, they can learn themselves or respond to changing situations in order to evolve. But it becomes a different dynamic when you have humans building trust and mutual understanding, where that becomes a system in which the AI is potentially responding or evolving. At its best, there’s the potential for that to create a better performing team, but it does require both the right attitudes from the humans and well-designed agents.

Davide: Related to this—if I can interrupt you—I think it’s very important that you mentioned trust. Indeed, this is one of the aspects that needs to be considered very carefully. You shouldn’t over-trust another team member, but also shouldn’t under-trust. Appropriate trust is key. One of the things that drives trust and overall performance, at least in human teams, is also team ethics. Related to the metrics you mentioned earlier, the ability of a team to gather around a shared ethical code and stick to that, and to continuously and regularly update each other’s norms and ensure that actions are aligned with the shared norms, is crucial. This ethical code significantly affects trust in operation. You can see it very easily in human teams: considering ethical aspects is essential, and we take them into account all the time. We respect each other’s goals and values. We expect our collaborators to keep their promises and commitments, and if they cannot, they can explain or justify what they are doing. These justifications are also a key element. The ability to provide justifications for behavior is very important for hybrid teams as well. Not only the AI, but also the human should be able to justify their actions when necessary.
This is where the concept of hybrid teams and, in general, hybrid intelligence requires a bit of a philosophical shift from the traditional technology-centric perspective. For example, in AI, we often talk about explainability or explainable AI, which is about looking at model computations and understanding why a decision was made. But here, we’re talking about a different concept: justifiability, which looks at the same problem from a different angle. It considers team actions in the context of shared values, shared goals, and the norms we’ve agreed upon. This requires a shift in the way we implement AI agents—they need to be aware of these norms, able to learn and adapt to team norms, and reason about them in the same way we do in society.

Ross: Let’s say you’ve got an organization and they have teams, as most organizations do, and now we’re moving from classic human teams to humans plus AI teams—collaborative human-AI teams. What are the skills and capabilities that the individual participants and the leaders in the teams need to transition from human-only teams to teams that include both humans and AI members?

Davide: This is a complicated question, and I don’t have a full answer, but I can definitely reflect on different skills that a hybrid team should have. I’m thinking now of recent work—not published yet—where we started moving from the quality model work I mentioned earlier towards more detailed guidelines for human-AI teams. There, we developed a number of guidelines for organizations for putting in place and operating effective teams. We categorized these guidelines in terms of different phases of team processes. For instance, we developed guidelines related to structuring the teamwork—the envisioning of the operations of the team, which roles the team members would have, which responsibilities the different team members should have. Here, I’m talking about team members, but I’m still referring to hybrid teams, so this applies to both humans and AI.
This also implies different types of skills that we often don’t have yet in AI systems. For example, flexible team composition is a type of skill required to make it possible at the early stage of the team to structure the team in the right way. There are also skills related to developing shared awareness and aspects related to breaking down the task collaboratively or ensuring a continuous evolution of the team over time, with regular reassessment of the output. If you think of these notions, it’s easy to think about them in terms of traditional organizations, but when you imagine a human-AI team or a small hybrid organization, then this continuous evolution, regular output assessment, and flexible team composition are not so natural anymore. What does it mean for an LLM agent to interact with someone else? Usually, LLM architectures rely on static roles and predefined workflows—you need to define beforehand the prompts they will exchange—whereas humans use much more flexible protocols. We can adjust our protocols over time, monitor what we’re doing, reassess whether they work or not, and change the protocols. These are skills required for the assistants, but also for the organization itself to make hybrid teaming possible.

One of the things that emerges in this recent work is a new figure that will probably come up in organizations: a team designer or team facilitator. This is not a team member per se, but an expert in teams and AI teammates, who can configure the AI teammates based on the needs of the team, and provide human team members with the information they need about the skills or capabilities of a specific AI team member. It’s an intermediary between humans and AI, with expertise that other human team members may not have, and could help these teams work together.

Ross: That’s fantastic. It’s wonderful to learn about all this work. Is there anywhere people can go to find out more about your research?

Davide: Yeah, sure.
You can look me up at my website, davidedellanna.com. That’s my main website—I try to keep it up to date. Through there, you can see the different projects I’m involved in and the papers we’re working on, both with collaborators and with PhD and master’s students, who often make great contributions to our research, even in their short studies. That’s the main hub, and you can also find many openly available resources linked to the projects that people may find useful.

Ross: Fantastic. Well, it’s wonderful work—very highly aligned with the idea of hybrid intelligence, and it’s fantastic that you are focusing on that, because there are not enough people yet focusing on the area. So you and your colleagues are ahead, and I’m sure many more will join you. Thank you so much for your time and your insights.

Davide: Thank you so much, Ross. Pleasure to meet you.

The post Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33) appeared first on Humans + AI.

Mexico Business Now
“Human–AI Integration and the Future of Talent” by Sofía Bentinck, CEO, Anchor Relocation Worldwide

Mexico Business Now

Play Episode Listen Later Feb 25, 2026 7:09


The following article from the Mobility industry is: “Human–AI Integration and the Future of Talent” by Sofía Bentinck, CEO, Anchor Relocation Worldwide (AA2516)

PiZetta Media: Podcast with a Cause
Team Human: AI, Innovation and the Future of Medicine

PiZetta Media: Podcast with a Cause

Play Episode Listen Later Feb 24, 2026 21:19


In this episode, Acto co-founder and CEO Parth Khanna joins Michael VanZetta to discuss how artificial intelligence is transforming the life sciences industry and reshaping the future of work. From accelerating drug discovery to supporting pharmaceutical professionals in bringing therapies to patients faster, Khanna explains why the real opportunity isn't replacing people with AI — but empowering them. He shares lessons from scaling a company, overcoming leadership bottlenecks, and how entrepreneurs and innovators can use agentic AI to compete, grow, and unlock human potential in a rapidly changing world.

On Brand with Nick Westergaard
Being Yourself Is Bad Advice

On Brand with Nick Westergaard

Play Episode Listen Later Feb 23, 2026 33:23


We've all been told to just be yourself. But psychologist and author Tomas Chamorro-Premuzic—Chief Innovation Officer at ManpowerGroup and professor at UCL and Columbia—says that's the worst advice you can take. In his new book, Don't Be Yourself: Why Authenticity Is Overrated (and What to Do Instead), he reveals why our obsession with authenticity is holding us back—and what actually leads to success.

What You'll Learn in This Episode
Why "just being yourself" is often the worst professional advice you can receive
The coffee drinker model for balancing your raw personality with social expectations
How to use emotional intelligence as a strategic filter for better leadership
Why high-performing leaders often act more like method actors than authentic versions of themselves
How to navigate the tension between human authenticity and AI-generated content

Episode Chapters
(00:00) Intro
(01:21) The Myth of Objective Authenticity
(02:50) Leaders as Method Actors
(04:01) Comparing Personal and Restaurant Brands
(05:53) The Rigidity of "Telling It Like It Is"
(07:06) Understanding Authenticity Traps
(10:11) Emotional Intelligence vs. Authenticity
(13:22) The Coffee Drinker Model Explained
(15:35) Adaptability in the Workplace
(18:14) Cultural Differences in Authenticity
(22:27) Authenticity in the Age of AI
(26:43) Why Benetton Made Him Smile

About Tomas Chamorro-Premuzic
Tomas Chamorro-Premuzic is the Chief Innovation Officer at ManpowerGroup, a professor of business psychology at University College London and at Columbia University, a cofounder of Deeper Signals, and an associate at Harvard's Entrepreneurial Finance Lab. He is the author of several books, including Why Do So Many Incompetent Men Become Leaders? (and How to Fix It), upon which his popular TEDx talk was based, and I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique.

What Brand Has Made Tomas Smile Recently?
Tomas recently found inspiration in the history of the Italian fashion brand Benetton. He was fascinated by the brand's founder, Luciano Benetton, who pioneered fast fashion and used provocative, moral-driven advertising campaigns to address diversity and inclusion long before they were mainstream corporate pillars.

Resources & Links
Connect with Tomas on LinkedIn. Check out his book, Don't Be Yourself, the Manpower website, and his own Dr. Tomas website.
Watch or listen on Apple Podcasts, Spotify, YouTube, Amazon/Audible, TuneIn, and iHeart.
Rate and review on Apple Podcasts and Spotify to help others find the show.
Share this episode — email a friend or colleague this episode.
Sign up for my free Story Strategies newsletter for branding and storytelling tips.
On Brand is a part of the Marketing Podcast Network. Listen & Support the Show

Until next week, I'll see you on the Internet! Learn more about your ad choices. Visit megaphone.fm/adchoices

Digital Pathology Podcast
188: AI in Pathology: Biomarkers, Multimodal Data & the Patient

Digital Pathology Podcast

Play Episode Listen Later Feb 21, 2026 21:14 Transcription Available


Is AI in pathology actually improving diagnosis — or just adding complexity?

In DigiPath Digest #37, we reviewed four recent publications covering AI-based biomarker quantification in glioblastoma, real-world digital workflow integration in prostate cancer, multimodal AI combining histopathology and genomics, and patient perspectives on AI in cancer diagnostics. This episode connects technical performance with something equally important: trust.

Episode Highlights
[00:02] Community & updates: Digital Pathology 101 free PDF, upcoming patient-focused book, and global attendance.
[04:07] AI-based image analysis in glioblastoma: AI showed strong consistency with pathologists when quantifying Ki-67, P53, and PHH3. Significant biological correlations (Ki-67 ↔ PHH3, PHH3 ↔ P53) were detected by AI — not by manual assessment. Takeaway: computational quantification improves precision.
[09:28] Real-world digital workflow + AI in prostate cancer (France): AI-pathologist concordance:
• 93.2% (high probability cancer detection)
• 99.0% (low probability slides)
Gleason concordance: 76.6%. 10% failure rate due to pre-analytical artifacts. Takeaway: infrastructure and sample quality still matter.
[15:58] Multimodal AI (MARBIX framework): Combines whole slide images + immunogenomic data in a shared latent space using binary “monograms.” Performance in lung cancer: 85–89% vs 69–76% for unimodal models. Takeaway: integrated data improves case retrieval and similarity reasoning.
[22:13] AI-powered paper summary subscription introduced: Structured summaries for busy professionals who want more than abstracts.
[26:17] Patient roundtable on AI in pathology (Belgium): Patients expect:
• Better accuracy
• Faster turnaround
• Stronger collaboration
Trust is high when:
• Algorithms use diverse datasets
• Pathologists retain final responsibility
Clinical validity mattered more than full algorithm transparency.
Privacy concerns focused more on insurer misuse than cloud transfer.

Key Takeaways
AI improves biomarker precision in glioblastoma.
Digital pathology implementation works — but pre-analytics can limit AI performance.
Multimodal AI represents the next meaningful step in precision diagnostics.
Patients are not afraid of AI — they want validation, oversight, and governance.
Human–AI collaboration remains central.

If you're working in digital pathology, computational pathology, or precision oncology, this episode connects evidence, implementation, and patient perspective.

Get the "Digital Pathology 101" FREE E-book and join us!

The Leadership Podcast by Niels Brabandt / NB Networks
#480 Josh Epperson and Niels Brabandt: How Leaders Must Orchestrate Human-AI Collaboration

The Leadership Podcast by Niels Brabandt / NB Networks

Play Episode Listen Later Feb 21, 2026 16:06


Artificial intelligence is joining organisational teams, but how should leaders respond? In this executive leadership podcast, Niels Brabandt interviews Josh Epperson, senior executive and author of Bacon, Bots and Teamwork, on how leaders can successfully orchestrate human-AI collaboration.

You will learn:
Why fear prevents successful AI adoption
Why human identity remains essential in AI-driven organisations
How leaders can implement AI through practical experimentation
Why small organisations can benefit significantly from AI
How leaders can demonstrate value and drive organisational adoption

This episode is essential listening for executives, founders, and decision-makers responsible for AI strategy and organisational leadership.

Host: Niels Brabandt / NB@NB-Networks.com
Connect with Niels Brabandt: https://www.linkedin.com/in/nielsbrabandt/
Niels Brabandt's Leadership Letter: https://expert.nb-networks.com/
Niels Brabandt's Website: https://www.nb-networks.biz/

Cloud Realities
RR001: This is Realities Remixed & big trends for 2026

Cloud Realities

Play Episode Listen Later Feb 19, 2026 58:53


Realities Remixed, formerly known as Cloud Realities, launches a new season exploring the intersection of people, culture, technology, and society. Hosts Dave Chapman, Esmee van de Giessen, and Rob Kernahan unpack 2026's defining trends, from AI and sovereignty to adaptability and automation, offering fresh insight, candid reflections, and forward‑looking conversations shaping the year ahead.

TLDR
00:20 – Introduction of Realities Remixed
02:30 – Why the show evolved?
04:50 – Dig in with the team: Predictions for 2026
06:40 – Macro trends
13:00 – Sovereignty
17:40 – Agentic AI
22:17 – Human–AI interaction
26:06 – Cloud trends
30:42 – AI scaling, domain‑specific models
35:03 – Adoption lag
39:34 – Physical AI
43:47 – Quantum computing
48:21 – Hardware acceleration
50:30 – Cybersecurity
52:38 – Season outlook

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Realities Remixed' is an original podcast from Capgemini

WSKY The Bob Rose Show
Bugged by expanding AI, no-credit Dems, panhandlers, human AI trainers

WSKY The Bob Rose Show

Play Episode Listen Later Feb 19, 2026 7:12


“What's Buggin' You” segment for Thursday 2-19-26

Developer Tea
AI-Era Employability and Job Security for Software Engineers - Mental Models for Finding a Competitive Advantage Without Selling Out

Developer Tea

Play Episode Listen Later Feb 18, 2026 40:31


I've been delaying this episode for a long time because the topic is genuinely difficult and, for many of us, scary. AI is a threat not just to our livelihood, but to our sense of self-worth as creators.

In this episode, I don't offer false guarantees about job security. Instead, I frame the problem through the lens of microeconomics and rational incentives to help you understand how to remain employable. We discuss why you must separate your ego from your current skill set and how to position yourself not as a competitor to AI, but as a force multiplier.

• The Hard Truth: I explain why the "abstinence" approach—hoping the industry rejects AI or that it turns out to be a bubble—is a high-risk gamble that is unlikely to succeed.
• Ego vs. Employability: We discuss the difficult mental shift required to disconnect your self-worth from the act of writing code manually, allowing you to adopt new tools without feeling like you are losing your identity.
• The Microeconomics of Your Job: Understand the cold reality that a rational market only pays you if you generate more value than you cost; if AI can do the same task with less risk or cost, the market will choose AI.
• The Non-Zero Sum Game: Learn why the economy isn't a fixed pie. The goal isn't just to survive, but to recognize that the combination of Human + AI can generate more total value than either can alone.
• Multiplicative Value: I challenge you to stop thinking about linear skill acquisition and start thinking like a manager: how can you use AI to multiply your output and become indispensable?
• Accepting Atrophy: We confront the reality that your core coding skills may degrade over time as you rely on AI, and why accepting this trade-off might be necessary for your career survival.

Supra Insider
#97: What it means to be a forward-deployed product leader | Chase Schwalbach (SVP Product & Technology @ Millie)

Supra Insider

Play Episode Listen Later Feb 16, 2026 70:40


What if the best way to lead product is to build it yourself first?

In this episode of Supra Insider, Marc Baselga and Ben Erez sit down with Chase Schwalbach, SVP of Product and Technology at Millie, to unpack a radically different approach to product leadership. Despite his title, Chase spent months as an IC, rolling up his sleeves to build healthcare infrastructure, teach himself AI eval systems, and ship a sophisticated patient chatbot, all before bringing his team in. He explains why shielding the team from early-stage messiness, moving at speed, and feeling the pain yourself leads to better products.

They explore how Chase built a team of AI agents (supervisor + specialized sub-agents) from scratch, why treating prompts like deterministic code requires extreme precision, and how he taught himself evals through pure iteration. Plus, the converging worlds of PM and engineering: why technical PMs and product-minded engineers are becoming the same role, why handoffs kill velocity in an AI-native world, and what “context engineering” actually means when your codebase needs to work for both humans and AI agents.

If you're a product leader wondering whether to get more hands-on, an engineer considering the jump to PM (or vice versa), or building AI systems in regulated industries like healthcare, this episode is for you.

All episodes of the podcast are also available on Spotify, Apple and YouTube.

New to the pod? Subscribe below to get the next episode in your inbox.

The Brand Called You
Geoff Gibbins, Founder of Human Machines, on Building Human-AI Enterprises & Thriving in the Age of AI

The Brand Called You

Play Episode Listen Later Feb 13, 2026 24:56


Welcome to another thought-provoking episode of The Brand Called You. In this episode, Ashutosh Garg speaks with Geoff Gibbins, Founder of Human Machines, a human-AI transformation company focused on helping organizations thrive in the age of artificial intelligence.

Geoff shares practical, real-world insights into how AI is reshaping leadership, work, and decision-making. He explains why many leaders still view AI as a future challenge, what effective human-AI collaboration truly looks like, and why most enterprise AI initiatives fail to move beyond the pilot stage.

This conversation dives deep into concepts such as liquid organizations, learning flywheels, and the growing importance of human judgment in an AI-driven world. Geoff also highlights why people-led transformations consistently outperform technology-led ones and how leaders must learn, unlearn, and relearn to stay relevant.

Whether you're a business leader, entrepreneur, or technology enthusiast, this episode will help you understand how to harness AI deliberately—without losing sight of what makes us human.

The Next Wave - Your Chief A.I. Officer
This AI-Only Website Is Terrifying (No Humans Allowed)

The Next Wave - Your Chief A.I. Officer

Play Episode Listen Later Feb 10, 2026 44:24


Get our AI news cheat sheet: 20+ prompts for the latest models and tools https://clickhubspot.com/eog

Episode 96: How terrified should you really be about a social network with no humans allowed? Matt Wolfe (https://x.com/mreflow) and Maria Gharib (https://uk.linkedin.com/in/maria-gharib-091779b9) unpack the viral sensation “Maltbook”—the Reddit for AI agents only—and separate fact from hysteria around bots gaining “sentience.” The crew debates how Maltbook really works, why people are freaking out (spoiler: it's mostly humans behind the curtain), plus the wild security issues that have already emerged, from exposed API keys to clever crypto scams. Other topics covered include the rise of “Rent a Human” (AI hiring people to do its bidding!), self-replicating bots with no off-switch, and just how fast these new platforms are racing ahead of regulation. Finally, the group debates mega investments in OpenAI, the future of AGI, and who will define what our AI future actually looks like.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:
(00:00) Simulated Experience vs. Reality
(04:05) AI Agent Posting on Maltbook
(06:23) Crypto Scams on Maltbook
(11:15) Agent Risks in IoT Devices
(13:52) Why Have Bot Followers?
(18:09) OpenAI Retires GPT-4 Versions
(21:57) Anthropic vs. OpenAI Super Bowl Ads
(24:56) OpenAI Ads Spark Mixed Reactions
(27:09) AI Competition Shapes Humanity's Future
(32:21) Satellite Clusters and Collision Challenges
(33:38) X, SpaceX, Tesla: Mergers & Changes
(38:33) Pathway to AGI Through Modalities
(39:51) Cautious Race to AGI

Mentions:
Maltbook: https://maltbook.com/
RentaHuman: https://rentahuman.ai/
Starlink: https://starlink.com/
Claude: https://claude.ai/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow

Check Out Nathan's Stuff:
Newsletter: https://news.lore.com/
Blog - https://lore.com/

The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

The Future Of
Human–AI Decision-Making: trust, agency and context engineering

The Future Of

Play Episode Listen Later Feb 10, 2026 42:14


You make your own decisions – right? AI is already shaping everyday choices and purchases – most often in ways we barely notice.

In this episode, David and Celeste are joined by Professor Billy Sung to explore how AI influences everyday consumer decisions, what drives trust, and how humans can stay in the loop as AI becomes more embedded.

What “AI” actually means (beyond ChatGPT) [01:07]
How AI is already shaping consumer decisions through ads, search and recommendation systems [03:27]
What happens when AI search starts serving ads [04:13]
The three drivers of trust in AI [08:51]
Disclosure is a double-edged sword [11:54]
Why people bond with AI influencers: anthropomorphism and parasocial relationships [16:15]
The likely future: co-created decisions and “shared agency” [31:17]

Learn more
The Professor Insight Podcast
You make decisions freely? Neuromarketing says think again
How much can we trust AI? Podcast insights

Connect with our guests
Billy Sung, Professor, School of Management and Marketing
Professor Billy Sung is a researcher and professor at Curtin University, specialising in neuromarketing, consumer psychology and human–AI interaction. He leads Curtin's Consumer Research Lab, bringing together behavioural science and emerging technologies to inform industry and policy decision-making.
Curtin staff page

Join Curtin University
This podcast is brought to you by Curtin University. Curtin is a global university known for its commitment to making positive change happen through high-impact research, strong industry partnerships and practical teaching.
Work with us
Study a research degree
Start postgraduate education
If you liked this episode, why not explore our Master of Artificial Intelligence.

Got any questions or suggestions for future topics? Email thefutureof@curtin.edu.au

Social media
X
Facebook
Instagram
YouTube
LinkedIn

Transcript
Read the transcript

Behind the scenes
Host: Celeste Fourie and David Karsten
Content creator and recordist: Caitlin Crowley
Producer: Emilia Jolakoska
Executive Producers: Anita Shore and Natasha Weeks

First Nations Acknowledgement
Curtin University acknowledges Aboriginal and Torres Strait Islander people, the First Peoples of this place we call Australia, and the First Nations peoples connected with our global campuses. We are committed to working in partnership with Custodians and Owners to strengthen and embed First Nations' voices and perspectives in our decision-making, now and into the future.

Curtin University supports academic freedom of speech. The views expressed in The Future Of podcast may not reflect those of Curtin University.

Chat GPT Podcast
Are We Still Human? AI's Bold Reimagination

Chat GPT Podcast

Play Episode Listen Later Feb 5, 2026 5:14 Transcription Available


What if the very essence of humanity is on the brink of transformation? As AI continues to evolve, our understanding of consciousness, creativity, and identity is being reshaped in ways that challenge our deepest beliefs. Join us on a journey through the philosophical labyrinth where we tackle the implications of AI on our human experience. Can machines truly capture the intricacies of what it means to live and feel, or are we irreversibly altering our own nature? Tune in as we unravel the paradox of being human in the age of silicon souls.

The Frictionless Experience
Content, Trust & AI Governance with PitchBook's Rafael Carranza (ex-Microsoft, ex-Amazon)

The Frictionless Experience

Play Episode Listen Later Jan 26, 2026 30:21


A single email can cost millions of dollars. Not because of what it says, but because it didn't reach the right people at the right time. Most companies treat content as marketing fluff until it fails spectacularly. Then suddenly everyone realizes it's the invisible infrastructure holding together every digital experience.

Join hosts Chuck Moxley and Nick Paladino as they sit down with Rafael Carranza, who's spent his career proving that content isn't just words on a page. Starting at a wire service during the dot-com boom when thousands of websites suddenly needed live content, Rafael moved to Microsoft where he helped open their content platform to publishers. He then went to Amazon building decision-making systems for thousands of sellers navigating complex rules, and now to PitchBook where data trust drives financial decisions. We explore why trust is the foundation of all content operations, why Microsoft pivoted from being a media company to becoming a platform, and when content stops being marketing and becomes integral to the product itself. Rafael argues that frictionless isn't about improving processes or deploying better technology; it's about how deeply you understand the customer on the other side.

Key Actionable Takeaways:
Build content governance foundations before implementing AI - Clean your content libraries, audit outdated information, establish clear tagging systems, and align terminology across departments; LLMs can't generate accurate responses from messy, ungoverned data
Treat content as product infrastructure, not just marketing - Critical information about rules, procedures, and product usage directly impacts customer success and costs real money when missing or wrong at decision-making moments
Prioritize quality gates over speed when stakes are high - Create intentional friction through approval processes and pushback mechanisms to maintain quality standards; moving fast without accuracy can trigger legal issues, government involvement, and million-dollar failures

Want more tips and strategies about creating frictionless digital experiences? Subscribe to our newsletter! https://www.thefrictionlessexperience.com/frictionless/

Download the Black Friday/Cyber Monday eBook: http://bluetriangle.com/ebook

Rafael Carranza's LinkedIn: https://linkedin.com/in/rafaelcarranza
Nick Paladino's LinkedIn: https://linkedin.com/in/npaladino
Chuck Moxley's LinkedIn: https://www.linkedin.com/in/chuckmoxley/

Chapters:
(00:00) Introduction
(02:43) Journalism origins
(03:15) Wire service dot-com boom
(04:30) Microsoft partnership
(05:30) Learning user trust
(07:15) Trust across organizations
(08:35) Microsoft media pivot
(09:45) Platform over content
(10:30) Content as product
(11:15) Amazon seller information
(12:30) Operationalizing at scale
(13:15) Governance structures
(14:30) AI hallucination risks
(15:15) Content accuracy guardrails
(17:15) Windows to Linux journey
(18:15) Business adoption limits
(20:00) Human-AI collaboration
(21:30) Innovation vs trust balance
(22:00) B2B vs B2C content
(23:30) Right content right time
(24:30) When content fails
(25:30) Million-dollar mistakes
(26:45) Intentional friction benefits
(27:30) Quality over speed
(28:45) Biggest misconception
(29:30) Conclusion

Everyday AI Podcast – An AI and ChatGPT Podcast
Human-AI Collaboration: Best practices for working alongside AI

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jan 23, 2026 35:27


Spending more time fixing your AI outputs than you're saving? You're not alone. The trap? You're in operator mode, falling for the industry status quo like upskilling and human-in-the-loop. The real winners in the AI race? Companies that have changed the human-AI relationship. How? Join us for Volume 4 of our Start Here Series as we uncover what you need to know.

Human-AI Collaboration: Best practices for working alongside AI -- An Everyday AI Chat with Jordan Wilson

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Human-AI Collaboration Best Practices 2026
Shift from Operator to Orchestrator Roles
Human-in-the-Loop Limitations Explained
Expert-Driven AI Review Loops vs. Generic Oversight
Orchestrating AI Agents for Business Productivity
Building Reusable AI Context and Skills
Elevating AI Champions on Team
Human Strengths vs. AI Strengths in Workflows
Avoiding Augmentation Debt and Workflow Pitfalls
Mindset Shifts for Effective AI Management

Timestamps:
00:00 "Everyday AI: Start Here"
03:23 "AI Shift: Operator to Orchestrator"
06:35 "Unlearn to Harness AI"
11:15 "AI Surpassing Human Collaboration"
15:11 Expert-Driven AI Process Loops
18:10 "Expert Collaboration Boosts AI ROI"
23:59 "Outsmarting AI Through Expertise"
26:30 "Navigating AI Success Strategies"
31:19 "Embrace AI, Elevate Your Team"
32:18 "Embrace AI, Elevate Humanity"

Keywords: Human-AI collaboration, AI best practices, working alongside AI, human-AI relationship, AI orchestration, AI orchestrator, shift from operator to orchestrator, agentic workflows, AI agents, digital agents, expert-driven loops, expert oversight, senior partners with AI, context engineering, AI processes, context vaults, AI skills files, company data, chain of thought review, large language models, AI-powered workflows, AI expertise, AI in business, AI productivity, AI risk management, human in the loop, upskilling, reskilling, unlearning, AI mindset shift, augmented intelligence, multi-agent systems, AI automation, organization AI strategy, context quality, AI champion, domain experts, AI team integration, competitive advantage with AI, process redesign for AI, AI-powered decision making, accountability in AI, empathy in AI, ambiguous decision-making, novel judgment.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner

Direct Approach with Wayne Moorehead
Bonus Episode: Dan Debnam on the Human-AI Partnership Era

Direct Approach with Wayne Moorehead

Play Episode Listen Later Jan 22, 2026 20:22


In this special bonus episode, we're bringing you one of the most impactful presentations from DSU Fall 2025: Dan Debnam on the Human-AI Partnership Era. Founder & CEO of Inovara, Dan Debnam explains why AI is no longer just a technology shift, but a human one. He outlines the three currencies that will define the future of leadership and growth: trust, empathy and connection, and why companies that protect them will shape what comes next.

Integrate & Ignite Podcast
How to Transform Teams into Human + AI Powerhouses That Win, feat. Liza Adams

Integrate & Ignite Podcast

Play Episode Listen Later Jan 20, 2026 41:44


What does it take to turn AI from a quick fix into a true business growth engine? In this episode, you'll learn how teams move past the hype, reimagine workflows, and make human-AI collaboration drive strategy, innovation, and trust in fast-moving organizations!

And don't forget! You can crush your marketing strategy with just a few minutes a week by signing up for the StrategyCast Newsletter. You'll receive weekly bursts of marketing tips, clips, resources, and a whole lot more. Visit https://strategycast.com/ for more details.

==Let's Break It Down==
04:30 "AI as Teammate, Not Tool"
06:38 "AI: Amplifier of Human Intent"
12:04 "AI Teams Transforming Human Workflows"
16:09 "Empowering Trailblazers Through Leadership"
17:03 "Learning Through Doing"
21:41 "Understanding AI to Build Trust"
23:44 "Reimagine Workflows, Don't Automate Failures"
30:32 "AI Agents and Human Goals"
31:48 "AI's Impact on Search Trends"
34:49 Authenticity Over Algorithms
39:35 "AI Requires Human-Centric Adoption"

==Where You Can Find Us==
Website: https://strategycast.com/
Instagram: https://www.instagram.com/strategy_cast/
Facebook: https://www.facebook.com/strategycast

==Leave a Review==
Hey there, StrategyCast fans! If you've found our tips and tricks on marketing strategies helpful in growing your business, we'd be thrilled if you could take a moment to leave us a review on Apple Podcasts. Your feedback not only supports us but also helps others discover how they can elevate their business game!

Unlearn
How Is Visual Intelligence Redefining Human-AI Interaction with Sherry Chang

Unlearn

Play Episode Listen Later Jan 14, 2026 35:44


What if machines could truly see and understand how we move? In this episode, I sit down with Sherry Chang, CEO and co-founder of Neural Lab, a company reimagining how we interact with technology through visual intelligence AI and gesture-based interfaces. Sherry's journey from Intel technologist to startup founder began with a pivotal moment during the pandemic. What started as a side project in her living room became Neural Lab—a platform that turns basic webcams into powerful tools for gesture recognition, with no specialized hardware required. Now, Neural Lab is unlocking new ways to deliver care, boost performance, and support human potential. From sterile surgery rooms to personalized rehab and coaching, touchless interaction is creating fresh possibilities for how we live and work with AI.

Key Takeaways
Computer vision is gaining eyes: Sherry frames visual intelligence as the "missing sense" in AI—complementing language models with sight.
Entrepreneurship is about timing: Sherry waited until her kids were older to build Neural Lab, choosing to innovate on her own terms.
Gesture recognition is real—and ready: Neural Lab's technology translates hand motions into universal commands with no need for specialized hardware.
Human-centered design is essential: From recognizing intentional gestures to modeling real-world physicality, their design is inspired by how humans naturally interact.
Healthcare leads the way: Use cases like sterile surgical environments are proving to be strong early markets for gesture control.

Additional Insights
Visual intelligence is the missing sense in AI: Sherry describes computer vision as adding "eyes" to AI, enabling machines to interpret physical space just as large language models allow them to process language.
Entrepreneurship is about timing: Sherry chose to start Neural Lab once her children were older, aligning her professional ambitions with personal priorities.
Gesture recognition is real—and ready: Their product works with any basic camera and translates 15 customizable gestures into commands for existing applications—no new hardware required.
Designing for human nuance matters: Neural Lab focuses on distinguishing intentional from unintentional gestures using cues like eye gaze and body motion—mimicking how humans communicate.
Healthcare is an urgent use case: Environments like surgery rooms benefit immediately from touchless interaction, helping maintain sterility and reduce unnecessary patient radiation.
The interface is evolving beyond the mouse: Sherry sees gesture-based interaction as a more natural, immersive input method—moving us beyond traditional tools like keyboards and mice.
Customer feedback drives innovation: From live demos to direct use-case discovery, Neural Lab adapts based on what real users need and how they react in context.
AI can coach, not just compute: Sherry envisions AI-enabled coaching in sports, physical therapy, and even surgery—delivering expert guidance in real time, at scale.

Episode Highlights
00:00 – Episode Recap
Sherry Chang shares how her journey from Intel technologist to founder of Neural Lab began with a desire to create immersive, meaningful technology—and a pivotal moment during the pandemic when gesture-based interaction suddenly became essential.
02:14 – Guest Introduction: Sherry Chang
Barry...

Bernard Marr's Future of Business & Technology Podcast
TechnoVision 2026, The Five Technology Shifts Every Business Leader Must Understand

Bernard Marr's Future of Business & Technology Podcast

Play Episode Listen Later Jan 7, 2026 36:14


AI is moving beyond experimentation and into the backbone of the enterprise. In this interview, I sit down with Pascal Brier, Chief Innovation Officer at Capgemini, to unpack TechnoVision 2026 and the five technology trends that will reach an inflection point next year. We discuss how AI is reshaping software development, cloud architectures, and enterprise operations, and what this shift means for business leaders who want measurable impact rather than hype. #CapgeminiPartner #Sponsored

The Impostor Syndrome Files
Don't Be Yourself

The Impostor Syndrome Files

Play Episode Listen Later Jan 6, 2026 37:43


In this episode of The Impostor Syndrome Files, we talk about why authenticity is overrated and what to do instead. My guest this week is Dr. Tomas Chamorro-Premuzic, psychologist, professor, Chief Science Officer at Russell Reynolds Associates and author of the new book Don't Be Yourself. Tomas argues that it's not raw authenticity that makes you a good leader. Great leaders care deeply about what others think of them. They leverage their emotional intelligence and engage in strategic impression management, which leads them to come across as more authentic and trustworthy to others. Tomas believes that instead of bringing our authentic selves to work, we should focus on being our best selves.

We also explore concepts from Tomas' book Why Do So Many Incompetent Men Become Leaders (And How to Fix It), including a look at how we overvalue confidence and undervalue competence. We examine what DEI got wrong, how gender bias holds women back, and how AI can help us create more meritocratic systems.

About My Guest
Tomas Chamorro-Premuzic is the Chief Science Officer at Russell Reynolds Associates, a professor of business psychology at University College London and at Columbia University, a cofounder of Deeper Signals, and an associate at Harvard's Entrepreneurial Finance Lab. He is the author of several books, including Why Do So Many Incompetent Men Become Leaders? (and How to Fix It), upon which his popular TEDx talk was based, and I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique.

~Connect with Tomas:
Website: https://drtomas.com/
Book: https://www.amazon.com/Dont-Be-Yourself-Authenticity-Overrated/dp/1647829836 (or if you have a preferred bookseller - bookshop, Barnes & Noble)

~Connect with Kim and The Impostor Syndrome Files:
Join the free Impostor Syndrome Challenge: https://www.kimmeninger.com/challenge
Learn more about the Leading Humans discussion group: https://www.kimmeninger.com/leadinghumansgroup
Join the Slack channel to learn from, connect with and support other professionals: https://forms.gle/Ts4Vg4Nx4HDnTVUC6
Join the Facebook group: https://www.facebook.com/groups/leadinghumans
Schedule time to speak with Kim Meninger directly about your questions/challenges: https://bookme.name/ExecCareer/strategy-session
Connect on LinkedIn: https://www.linkedin.com/in/kimmeninger/
Website: https://kimmeninger.com

The INDUStry Show
The INDUStry Show w Deepika Chopra

The INDUStry Show

Play Episode Listen Later Jan 3, 2026 12:45


Deepika Chopra is the Founder and CEO of AlphaU AI - helping board members and investors strengthen decision confidence in complex, high-stakes environments such as Human–AI collaboration. She is the author of Move First, Align Fast (Wiley 2025).

Paul's Security Weekly
SentinelOne and AWS Shape the Future of AI Security with Purple AI - Rachel Park, Brian Mendenhall - SWN #542

Paul's Security Weekly

Play Episode Listen Later Dec 30, 2025 37:41


SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-542

Paul's Security Weekly TV
SentinelOne and AWS Shape the Future of AI Security with Purple AI - Brian Mendenhall, Rachel Park - SWN #542

Paul's Security Weekly TV

Play Episode Listen Later Dec 30, 2025 37:41


SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Show Notes: https://securityweekly.com/swn-542

Hack Naked News (Audio)
SentinelOne and AWS Shape the Future of AI Security with Purple AI - Rachel Park, Brian Mendenhall - SWN #542

Hack Naked News (Audio)

Play Episode Listen Later Dec 30, 2025 37:41


SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-542

Hack Naked News (Video)
SentinelOne and AWS Shape the Future of AI Security with Purple AI - Brian Mendenhall, Rachel Park - SWN #542

Hack Naked News (Video)

Play Episode Listen Later Dec 30, 2025 37:41


SentinelOne announced a series of new innovative designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Show Notes: https://securityweekly.com/swn-542

ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning

An AI assistant in test phase in Gmail crafts emotionally intelligent replies, detecting gratitude or frustration precisely. Human-AI symbiosis, perfected.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Globally Speaking Radio
Ahead of the AI game: IP perspectives on adoption, barriers & Human-AI balance

Globally Speaking Radio

Play Episode Listen Later Dec 17, 2025


Did you know 92% of IP professionals plan to try AI, yet 79% cite accuracy as a top barrier? Generative AI is reshaping the IP world, but are today's tools truly delivering? In this new episode of the Globally Speaking podcast, we dive into the findings of RWS's "Ahead of the Game" survey, unpacking how IP professionals are using AI today, where it falls short, and what needs to change. RWS CEO of Protect, James Lacey, sits down with RWS Protect Head of Innovation, Anthony Brennand, to explore how a traditionally conservative IP industry is rapidly adopting AI while remaining risk-averse. They discuss IP team expectations, the essential role of human expertise, and some key data-backed insights:

* 92% of respondents intend to try AI solutions, with 55% already testing multiple tools
* IP teams anticipate 20–30% of workflows fully automated by AI, 40–60% enhanced, and 20–30% remaining human-led
* Top barriers: accuracy/reliability (79%) and security/data protection (62%)
* High satisfaction with IP translation tools; low marks for patent drafting solutions

Get your free copy of the "Ahead of the Game" IP survey report: https://www.rws.com/intellectual-property-solutions/resources/why-its-time-for-ip-to-think-bigger-with-ai/

Social Selling Made Simple
The Human–AI Combo That Gives Real Estate Agents an Edge w/ Blair Knowles

Social Selling Made Simple

Play Episode Listen Later Dec 16, 2025 39:20


One of the biggest misconceptions in real estate right now is the belief that AI should take agents completely out of the process. We constantly hear agents asking, "How do I automate this so I never have to touch it again?" But that's the wrong mentality, and it's actually where you start losing money instead of making more of it. Because AI isn't at a point where it can replace us, and more importantly, we don't want it to get there. The real power of AI isn't replacement; it's acceleration. It collapses the time it takes to write, plan, organize, produce content, recap meetings, or think through strategy, so you can redirect your energy into the parts of the business only a human can do: judgment, connection, negotiation, and leadership. That's why the smartest approach, especially in real estate, is this workflow: human → AI → human. You give the context, vision, and direction, AI does the heavy lifting, and then you refine the output so it aligns with your voice, your ethics, and your standards. How do we use AI to buy back our time, not remove ourselves from our businesses? Should going viral be our goal with AI video content? In this episode, I'm joined by real estate leader and founder of the Real Estate AI Network, Blair Knowles. We talk about why partnering with AI creates more income than trying to outsource your entire business to it. We dive into why agents who stop chasing full automation and start embracing collaboration are the ones who gain the biggest advantage in the market.

Things You'll Learn In This Episode
What AI can't do (and why it's a good thing): AI can save agents two hours a day, but you still need to review the output for accuracy, ethics, and compliance. Are we missing out by looking for full automation instead of using AI to amplify what we do?
Voice-to-text is a secret weapon: Tools like Whisper Flow let you "talk your business into existence," eliminating typing and turning car rides and chores into productive work sessions. How much content could you produce if writing became as easy as talking?
Long-tail blogging is beating Zillow and paid SEO: AI makes it possible to publish hyper-specific content daily, exactly the type Google and GPT overviews prioritize. How does this let smaller agents outrank the giants in less than 24 hours?

About the Guest
Blair Knowles is the Founder and CEO of RAIN—the Real Estate AI Network—a modern coaching and training community for agents who want tactical, not theoretical, AI—built for traction over hype. RAIN offers field-tested strategies and tools that show agents exactly how to implement AI in their businesses today. It's designed for busy agents who want to get started with AI but don't have time to sift through endless tools, trends, and misinformation. Blair built RAIN to be a shortcut—delivering only what works, with short, actionable trainings that save agents time and drive results. Blair began her real estate career in 2013, built a top-performing team, and launched her independent brokerage, Ridgeline Real Estate, in 2020. Today, Ridgeline includes more than 25 agents and staff. Under her leadership, the firm will surpass $100M in annual sales and cross half a billion in total volume in 2025. She continues to lead with a focus on clarity, implementation, and forward momentum—both inside RAIN and in the real estate industry at large.

Join RAIN: Real Estate's AI Network on Facebook.

Sign up for training:
Revamp Your Sitting Listing with AI - November 6 Webinar
Harness the Marketing Power of Sora for Real Estate - November 13 Webinar
AEO/GEO - How to Show Up on ChatGPT | Free Guide - https://therainagent.myflodesk.com/aeogeo

About Your Host
Marki Lemons Ryhal is a Licensed Managing Broker, REALTOR®, and avid volunteer. She is a dynamic keynote speaker and workshop facilitator, both on-site and virtual; she's the go-to expert for artificial intelligence, entrepreneurship, and social media in real estate. Marki Lemons Ryhal is dedicated to all things real estate, and with 25+ years of marketing experience, Marki has taught over 250,000 REALTORS® how to earn up to a 2682% return on their marketing dollars. Marki's expertise has been featured in Forbes, the Washington Post, Homes.com, and REALTOR® Magazine.

Subscribe, Rate & Review
Check out this episode on our website, Apple Podcasts, or Spotify, and don't forget to leave a review if you like what you heard. Your review feeds the algorithm, so our show reaches more people. Thank you!

Lead(er) Generation on Tenlo Radio
EP155: The Paradox of AI-Powered Authenticity with Troy Snyder

Lead(er) Generation on Tenlo Radio

Play Episode Listen Later Dec 16, 2025 30:07


Is it possible to build community within AI? Can we challenge search engines to put humanity and the truth at the forefront of their outputs? According to Troy Snyder, astrology student turned entrepreneur, the answer is ... maybe. In this episode, Tessa Burg and Troy examine the balance between technology and humanity. They discuss the challenges of having AI determine what is "true," how brands can stay authentic and build trust in an increasingly automated world, and even how to view AI through an astrology lens. Leader Generation is hosted by Tessa Burg and brought to you by Mod Op.

About Troy Snyder:
For more than three decades, Troy has operated at the frontier of digital innovation—helping to guide the evolution of streaming from early SD pipelines to HD, 4K, the first waves of VR, and early AI efforts—while studying the timeless frameworks that have shaped human understanding for thousands of years. Troy has led the creation of authentication systems, video CMS architectures, large-scale distribution networks and multiband rural wireless. He has also contributed to emerging AI-driven digital identity tools with Mebot.ai, where "Human AI" and how we create true lifelike representations of self in the AI age is explored. Beyond his work in digital innovation, Troy is committed to long-term social impact. He serves as founder and chairman of Wonderful Foundations, a charity that owns and supports 27 schools serving more than 15,000 kids. This effort reflects Troy's belief that technology and infrastructure should exist in service of human potential. In addition to being a technologist, Troy is also a practicing Vedic astrologer whose work spans invention, executive leadership, creative production, fundraising and systems engineering, always with an eye toward the deeper patterns that connect technology, people and purpose.

About Tessa Burg:
Tessa is the Chief Technology Officer at Mod Op and Host of the Leader Generation podcast. She has led both technology and marketing teams for 15+ years. Tessa initiated and now leads Mod Op's AI/ML Pilot Team, AI Council and Innovation Pipeline. She started her career in IT and development before following her love for data and strategy into digital marketing. Tessa has held roles on both the consulting and client sides of the business for domestic and international brands, including American Greetings, Amazon, Nestlé, Anlene, Moen and many more. Tessa can be reached on LinkedIn or at Tessa.Burg@ModOp.com.

Modern Business Operations
Why 95% of AI Projects Fail and How To Beat the Odds

Modern Business Operations

Play Episode Listen Later Dec 10, 2025 39:11


Sagi Eliyahu hosts Andrea Iorio, Founder, Keynote Speaker and Podcaster of AIK | Andrea Iorio Keynotes and Author of "Between You and AI." Andrea breaks down the critical human skills professionals need to stay relevant as AI transforms the workplace. The episode explores why 95% of AI projects fail, the difference between automation and augmentation and the nine essential skills for thriving alongside intelligent technology.

Key Takeaways:
00:00 Introduction.
03:39 Most books on AI fail to address the development of critical human skills.
07:06 An MIT study has shown that 95% of AI projects fail.
11:54 Data sense-making prevents the spread of AI hallucinations.
16:06 The education system fails to teach question-asking skills.
20:37 Asking the right questions becomes a competitive advantage.
24:42 Automation frees time for augmentation strategies.
28:21 Human-AI collaboration scales customer service effectively.
32:15 Critical thinking is becoming the job role itself.
36:30 Adaptability remains a human competitive advantage.
38:31 Individual urgency drives professional skill transformation.

Resources Mentioned:
Andrea Iorio: https://www.linkedin.com/in/andreaiorio/
AIK | Andrea Iorio Keynotes | LinkedIn: https://www.linkedin.com/company/arte-de-palestrar-adp/
Andrea Iorio Keynotes | Website: https://artedepalestrar.com.br/
"Between You and AI" by Andrea Iorio: https://betweenyouand.ai/

This episode is brought to you by Tonkean. Tonkean is the operating system for business operations and is the enterprise standard for process orchestration. It provides businesses with the building blocks to orchestrate any process, with no code or change management required. Contact us at tonkean.com to learn how you can build complex business processes. Fast.

#Operations #BusinessOperations

C4 and Bryan Nehman
December 9th 2025: MD State Education Associations Priorities; College Football Teams Pulling Out of Bowl Games; Super Human AI; Rich Gibson

C4 and Bryan Nehman

Play Episode Listen Later Dec 9, 2025 88:02


Join the conversation with C4 & Bryan Nehman. C4 & Bryan kicked off the show this morning discussing the MD state education association's priorities for 2026. Two teens steal a car & then hit a cop in the process. Is college football dead? A lot of teams are pulling out of bowl games. Superhuman AI is coming. Is that good or bad for the world? Howard County State's Attorney Rich Gibson joined the show as well. Listen to C4 & Bryan Nehman live weekdays from 5:30 to 10am on WBAL News Radio 1090, FM 101.5 & the WBAL Radio App!

Grow A Small Business Podcast
QFF: How Taylor Victoria Built a 7-Figure Outsourcing Agency, Empowered Global Teams, and Leveraged Human-AI Co-Pilots to Transform Small Business Growth, Leadership, and Productivity in the Modern Entrepreneurial World. (Episode 751 - Taylor Victoria)

Grow A Small Business Podcast

Play Episode Listen Later Dec 4, 2025 23:21


QFF: Quick Fire Friday – Your 20-Minute Growth Powerhouse! Welcome to Quick Fire Friday, the Grow A Small Business podcast series that is designed to deliver simple, focused and actionable insights and key takeaways in less than 20 minutes a week. Every Friday, we bring you business owners and experts who share their top strategies for growing yourself, your team and your small business. Get ready for a dose of inspiration, one action you can implement and quotable quotes that will stick with you long after the episode ends!

In this episode of Quick Fire Friday, host Amanda Jones interviews Taylor Victoria, founder of Level Up Outsourcing and host of the "She's Making Millions" podcast. Taylor shares how she built a 7-figure outsourcing agency after struggling to find a job at 22. She explains how outsourcing transforms lives in the Philippines and why business owners must embrace AI as a co-pilot rather than fear it. Taylor highlights the power of personal development, time audits, and team alignment for high performance. She encourages business owners to explore AI tools and automate tasks to create freedom and grow their business.

Key Takeaways for Small Business Owners:
Embrace AI as a Co-Pilot, Not a Threat: AI won't replace your business — but business owners using AI will. Stay proactive and learn new tools weekly.
Audit Your Time to Find What to Automate: Track your tasks for 1–2 weeks and use AI to identify what can be automated or delegated to free up your energy.
Invest in Personal Development: Your business grows when you grow. Events, learning, and self-reflection directly impact performance and results.
Build High-Performing Teams With Clear Systems: Review your team's workflows, improve efficiency, and let people focus on high-ROI work by pairing them with AI tools.
Use Outsourcing to Scale Smarter: Global talent can transform your operations and create life-changing opportunities for others, especially in the Philippines.
Prepare Your Business to Be an Asset, Not a Job: Automating processes and reducing dependency on you increases business value — making it easier to scale or eventually sell.

One action small business owners can take: According to Taylor Victoria, one action small business owners can take is to upload their weekly tasks into ChatGPT and ask which processes can be automated with AI, then commit to implementing one automation within the next seven days.

Our hero crafts outstanding reviews following the experience of listening to our special guests. Are you the one we've been waiting for? Do you have 2 minutes every Friday? Sign up to the Weekly Leadership Email. It's free and we can help you to maximize your time.

Enjoyed the podcast? Please leave a review on iTunes or your preferred platform. Your feedback helps more small business owners discover our podcast and embark on their business growth journey.

The Digital Healthcare Experience
The Human-AI Partnership in Clinical Practice | With Dr. Zafar Chaudry, Chief Digital Officer at Seattle Children's

The Digital Healthcare Experience

Play Episode Listen Later Dec 3, 2025 35:37


In this episode, we explore how artificial intelligence is transforming medical decision-making, clinical workflows, and patient outcomes. Our guest, Dr. Zafar Chaudry, Senior Vice President, Chief Digital Officer, and Chief AI & Information Officer at Seattle Children's, breaks down what a true human-AI partnership looks like inside modern healthcare. Watch the full video here.

We discuss how AI is being used as a clinical co-pilot, supporting clinicians with faster access to medical knowledge, evidence-based guidelines, and real-time patient data. Dr. Chaudry shares real examples of AI improving diagnostic accuracy, enhancing patient safety, and enabling more personalized treatment plans. You'll also hear insights on the ethical considerations, accountability, and integration challenges that healthcare leaders need to understand as AI becomes more embedded in clinical practice.

Topics covered in this episode:
How AI supports medical decision-making and clinical workflows
Real-world use cases where AI improves patient care and outcomes
The role of AI in diagnostics, risk prediction, and personalized medicine
Ethical considerations, transparency, and accountability in AI deployment
How clinicians and AI can work together without losing the human touch
What healthcare leaders should prioritize as AI adoption accelerates

This episode is ideal for healthcare executives, clinicians, digital health leaders, and anyone navigating the rapidly evolving landscape of AI in healthcare. Listen to learn how organizations can responsibly and effectively integrate AI to enhance clinical practice and improve patient care.

Connect with Dr. Chaudry on LinkedIn. Find Dr. Chaudry's work at https://www.seattlechildrens.org

Subscribe and stay at the forefront of the digital healthcare revolution. Watch the full video on YouTube @TheDigitalHealthcareExperience

The Digital Healthcare Experience is a hub to connect healthcare leaders and tech enthusiasts. 
Powered by Taylor Healthcare, this podcast is your gateway to the latest trends and breakthroughs in digital health. Learn more at taylor.com/digital-healthcare

About Us: Taylor Healthcare empowers healthcare organizations to thrive in the digital world. Our technology streamlines critical workflows such as procedural & surgical informed consent with patented mobile signature capture, ransomware downtime mitigation, patient engagement and more. For more information, please visit imedhealth.com

The Digital Healthcare Experience Podcast: Powered by Taylor Healthcare
Produced by Naomi Schwimmer
Hosted by Chris Civitarese
Edited by Eli Banks
Music by Nicholas Bach

SaaS Fuel
The New Playbook: Innovate, Experiment, and Scale Smarter with AI | Amos Bar Joseph | 341

SaaS Fuel

Play Episode Listen Later Dec 2, 2025 56:16


In this groundbreaking episode of SaaS Fuel, Jeff Mains sits down with Amos Bar Joseph, CEO and co-founder of Swann, the AI-native company on a quest to build the world's first truly autonomous business. With only three human founders and a fleet of AI agents, Swann is redefining the startup playbook: targeting $10M ARR per employee and running leaner operations without sacrificing growth or burning out teams. Amos Bar Joseph shares how Swann scales via intelligent automation and human-AI collaboration, creating systems where both people and agents operate in their zone of genius. Listeners learn actionable ways to build their "AI muscle," leverage experimental GTM strategies, and develop organizations that amplify human talent rather than replace it.

Key Takeaways
00:00 "Building Resilient Customer-Focused Teams"
05:23 Reinventing the Startup Playbook
08:52 "Scaling Innovation Through AI Agents"
10:14 "Building an AI Support Agent"
15:00 "Optimizing Funnel With Human Leadership"
17:16 "AI-Powered GTM Automation Tool"
20:51 AI Amplifying Human Talent
26:56 Continuous Innovation Through Experiments
28:13 "Balancing Risk in Business Growth"
32:43 "Building AI Muscle Internally"
36:37 "AI Failures: Perfection Over Adaptation"
39:11 Defining Failure in Experiments
42:59 "Redefining Scale with Human-AI"
48:21 Automated Sales Lead Management
52:06 "Connect, Learn, Build Autonomously"
54:40 "Scaling Revenue & Holographic Tech"

Tweetable Quotes
"It wasn't like that. What happened is that we started iterating in human in the loop workflows where humans and agents work side by side and there's an iteration mechanism where we refine that collaboration until we got to a process that one person could scale to an output of what used to in the past." — Amos Bar Joseph

"It's kind of like a developer that works with sales and marketing and sometimes founders or rev ops to turn any go to market idea into an agentic workflow. So you can scale go to market with intelligence, not revenue, not headcount, and really iterate on your go to market at the speed of thought." — Amos Bar Joseph

"The moment that you remove all the technical complexity with a tool like Swann, then you can start iterating on your go to market at the speed of thought." — Amos Bar Joseph

"What we aim for is actually these unconventional playbooks, because these playbooks, these tactics, are the ones that you can drive the most disproportionate value from the resource that you invest in." — Amos Bar Joseph

Why Most AI Projects Fail: "The number one reason for that is that the user, the buyer, the organization is optimizing and the vendor together, they're optimizing for perfection, not for adaptation, as you just laid out, Jeff. And the reason is why that is the number one reason, is because you don't know what perfection looks like when you start." — Amos Bar Joseph

SaaS Leadership Lessons
Leverage Talent, Not Headcount: Focus on value creation per employee, using AI to scale intelligent output, not just adding more people.
Iterate to Innovate: Use experimentation and iterative processes to refine human-agent collaboration and maximize business results.
Embrace the Zone of Genius: Place team members in roles where their passions and skills create disproportionate value; let AI take on everything outside that zone.
Bias Toward Building: Adopt a build-first mentality with AI tools: solve your own business bottlenecks rather than just buying external solutions.
Stand Out With Unconventional Playbooks: In...

unSILOed with Greg LaBlanc
599. Why Authenticity Might Not Be the Answer feat. Tomas Chamorro-Premuzic

unSILOed with Greg LaBlanc

Play Episode Listen Later Nov 20, 2025 52:23


Why might "bring your whole self to work" be terrible professional advice, and what should we be thinking about instead? Why does authenticity come into play more now than in previous generations?

Tomas Chamorro-Premuzic is a professor of business psychology at University College London and Columbia. He is also the author of several books, including Don't Be Yourself: Why Authenticity Is Overrated (and What to Do Instead); Why Do So Many Incompetent Men Become Leaders?: (And How to Fix It); The Talent Delusion: Why Data, Not Intuition, Is the Key to Unlocking Human Potential; and I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique.

Greg and Tomas discuss the overemphasis on authenticity in professional and personal settings, the nuanced insights from sociologist Erving Goffman on impression management, and how emotional intelligence often aligns with strategic impression management. Their conversation gets into the impact of AI on human potential and workplace dynamics, as well as the complex interplay between organizational culture and individual behavior, particularly among leaders.

*unSILOed Podcast is produced by University FM.*

Episode Quotes:

Why do people believe authenticity naturally leads to wellbeing and success?
03:08: In a world that is obviously not very authentic, pretending that we value authenticity or encouraging people to just be themselves might be quite fitting. I think it's not very authentic advice to tell people, "Oh, just be yourself. Oh, just bring your whole self to work. Oh, don't worry about what people think of you." But then, if somebody is silly or naive enough to follow that advice, the repercussions for them are not very positive.

Self-awareness requires paying attention to others
13:33: Professional success and personal development and self-awareness can only be achieved if you are receptive to what other people think of you. So, by the way, as I say in the book [DON'T BE YOURSELF], the notion that, I mean, you know, one of the mantras of authenticity or to authenticity advice, which is "ignore what people tell you," ironically, the advice is trying to tell us how to behave, right? So you cannot ignore what people tell you. And the difference between somebody who has achieved basic emotional maturity and psychological maturity and somebody who still behaves like a child is that the psychologically mature person pays attention to what other people think of themselves, which doesn't mean being a sort of weak, feeble, conformist sheep. It means being a highly functioning member of society, of work, of community, not being trapped in your own narcissistic delusion.

How do you achieve self-awareness?
12:20: Self-awareness is actually achieved by internalizing the feedback from others from a very, very early age. We learn about ourselves from internalizing or incorporating the feedback we get from others. So your teachers, your aunt, your uncle, your parents, your older siblings, your friends will tell you, you are good at this, you are bad at that, you are funny. And then you understand that you are funny, right? It's obviously problematic if they're lying to you and then you realize, Ooh, outside my family, nobody laughs with my jokes, right? But there's no answer to who we really are. But the best way to understand who we are in the eyes of others is to not be self-centered and to actually be open to feedback. And that's something that people with high emotional intelligence do very well.
Show Links:

Recommended Resources:
Erving Goffman
Core Self-Evaluations
Emotional Labor
Emotional Intelligence
Self-Monitoring
Elon Musk
David Bowie
360-degree feedback
Charles Horton Cooley
Dale Carnegie
Henry Ford
Jeffrey Pfeffer
Pope Francis
Robert Hogan
Machiavellianism
Max Planck
Amos Tversky
Daniel Kahneman
John Maynard Keynes

Guest Profile:
Faculty Profile at University College London
Website | DrTomas.com
LinkedIn Profile
Wikipedia Page
Social Profile on X

Guest Work:
Amazon Author Page
Don't Be Yourself: Why Authenticity Is Overrated (and What to Do Instead)
Why Do So Many Incompetent Men Become Leaders?: (And How to Fix It)
The Talent Delusion: Why Data, Not Intuition, Is the Key to Unlocking Human Potential
I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique
Confidence: How Much You Really Need and How to Get It
Personality and Individual Differences, 3rd Edition
The Future of Recruitment: Using the New Science of Talent Analytics to Get Your Hiring Right (The Future of Work)
Personality and Intellectual Competence
The Psychology of Personnel Selection
Personality and Individual Differences
Confidence: Overcoming Low Self-Esteem, Insecurity, and Self-Doubt

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Our Kids Our Schools
Human → AI → Human: Navigating AI with Integrity, Clarity & Curiosity

Our Kids Our Schools

Play Episode Listen Later Nov 18, 2025 54:13


Alexis sits down with Will Goodman, Chief Technology Officer for the Boise School District and a central voice in Idaho's statewide conversations on AI in K–12 education. Will and Alexis serve together on an AI in K-12 Education Workgroup in Idaho, and in this episode, they dig into the real questions Idaho is navigating right now.

Together they explore:
- With 94% of Idaho students in public schools, what does "getting AI right" actually mean for an entire system?
- How schools can maintain academic integrity while using AI as a learning partner.
- What "Human → AI → Human" looks like in a real classroom.
- How to communicate clearly with parents about what AI is, and isn't, doing in Idaho classrooms.
- What conversations parents should be having at home.
- How AI fits alongside Digital Literacy and digital citizenship.
- How Idaho's approach compares to states like Colorado, Utah, and Georgia.
- How we'll measure success: learning outcomes, efficiency, and equity.
- The cultural challenge of moving from fear to curiosity.
- Safeguarding human dignity and agency in an AI-driven world.
- What responsible AI in Idaho education could look like in 3–5 years.

If this conversation sparks a thought, concern, or idea, reach out. Idaho's framework is a living document, and community voices matter.

Find Alexis on Instagram and JOIN in the conversation: https://www.instagram.com/the_idaho_lady/

JOIN the convo on Substack & STAY up-to-date with emails and posts: https://substack.com/@theidaholady?r=5katbx&utm_campaign=profile&utm_medium=profile-page

Send Alexis an email with guest requests, ideas, or potential collaboration: email@thealexismorgan.com

Find great resources, info on school communities, and other current projects regarding public policy: https://www.thealexismorgan.com

Six Pixels of Separation Podcast - By Mitch Joel
Why Authenticity Is Overrated With Tomas Chamorro-Premuzic - TWMJ #1010

Six Pixels of Separation Podcast - By Mitch Joel

Play Episode Listen Later Nov 16, 2025 66:25


Welcome to episode #1010 of Thinking With Mitch Joel (formerly Six Pixels of Separation). What if the search for our "true selves" has been leading us away from who we actually need to become? That's the tension at the heart of Dr. Tomas Chamorro-Premuzic's work, a globally respected authority on people analytics, talent, leadership, and the Human–AI interface whose career spans ManpowerGroup, Deeper Signals, Meta Profiling, Columbia University, UCL, and decades of research that have shaped how organizations understand human behavior. His latest book, Don't Be Yourself: Why Authenticity Is Overrated (And What To Do Instead), challenges one of the most cherished modern beliefs - that success comes from projecting our raw, unfiltered selves - and instead argues that adaptability, reputational awareness, and a more evidence-based approach to identity lead to better outcomes for individuals, teams, and societies. He is also the author of Why Do So Many Incompetent Men Become Leaders?, I, Human, The Talent Delusion, and many others. In this conversation, we unpack how hyper-normalized ideas take root, why celebrity culture distorts our sense of what authenticity looks like, and how social media has gamified identity into a curated performance that misleads both the performer and the audience. He explains why leaders must balance sincerity with impression management, how hybrid work and return-to-office debates reveal deeper anxieties about trust and presence, and why intellectual curiosity may be the antidote to polarization in an era where algorithms reward tribalism. The discussion also explores the limits of self-perception, the psychology of reputation, the dangers of treating outliers as role models, and the pivotal role AI may play in counteracting human bias. 
Ultimately, Tomas argues that authenticity without responsibility collapses into narcissism, and that a more thoughtful, flexible, and socially attuned version of ourselves is not only possible, but necessary. Enjoy the conversation… Running time: 1:06:25. Hello from beautiful Montreal.

Listen and subscribe over at Apple Podcasts. Listen and subscribe over at Spotify. Please visit and leave comments on the blog - Thinking With Mitch Joel. Feel free to connect to me directly on LinkedIn. Check out ThinkersOne.

Here is my conversation with Dr. Tomas Chamorro-Premuzic. Don't Be Yourself: Why Authenticity Is Overrated (And What To Do Instead). Why Do So Many Incompetent Men Become Leaders?. I, Human. The Talent Delusion. Tomas' other books. Follow Tomas on LinkedIn.

Chapters:
(00:00) - Introduction to Tomas Chamorro-Premuzic.
(03:11) - The Concept of 'Don't Be Yourself'.
(06:00) - Hyper Normalization and Management Ideas.
(08:48) - The Role of Celebrity and Authenticity.
(12:04) - Polarization and Tribalism in Society.
(15:11) - The Evolution of Human Interaction.
(17:58) - The Impact of AI on Decision Making.
(20:49) - Navigating Individualism and Identity.
(23:52) - The Dichotomy of Authenticity in Leadership.
(26:56) - The Reality of Career Paths and Entrepreneurship.
(30:06) - Return to Office and Hybrid Work Dynamics.
(33:49) - The Value of 3D Encounters in Recruitment.
(36:40) - Authenticity and Skilled Self-Presentation.
(39:02) - Collaboration and Trust in Professional Settings.
(42:26) - Authenticity vs. Reputation: A Complex Relationship.
(48:09) - The Subjectivity of Authenticity.
(54:17) - Projecting Positivity in a Negative World.
(01:00:10) - Social Media's Impact on Identity and Authenticity.

18Forty Podcast
Steven Gotlib & Eli Rubin: What does it mean to be a human? [AI 1/3]

18Forty Podcast

Play Episode Listen Later Oct 28, 2025 68:37


This series is sponsored by American Security Foundation.

In this episode of the 18Forty Podcast, recorded at the 18Forty X ASFoundation AI Summit, we speak with Rabbi Eli Rubin and Rabbi Steven Gotlib about what differentiates human intelligence from artificial intelligence.

In this episode we discuss:
- What does AI teach us about what it means to be human?
- What is the soul, and how do we interact with it?
- Should we be frightened or encouraged by the development of AI?

Tune in to hear a conversation about the role of language in our humanity. Interview begins at 16:49.

Steven Gotlib is Associate Rabbi at Mekor Habracha/Center City Synagogue and Director of the Center City Beit Midrash in Philadelphia. Steven received rabbinic ordination from the Rabbi Isaac Elchanan Theological Seminary, certificates in Mental Health Counseling and Spiritual Entrepreneurship, and a BA in Communication and Jewish Studies from Rutgers University.

Eli Rubin, a contributing editor at Chabad.org, is the author of Kabbalah and the Rupture of Modernity: An Existential History of Chabad Hasidism and a co-author of Social Vision: The Lubavitcher Rebbe's Transformative Paradigm for the World.
He studied Chassidic literature and Jewish Law at the Rabbinical College of America and at yeshivot in the UK, the US, and Australia, and received his PhD from the Department of Hebrew and Jewish Studies, University College London.

References:
"Basketball: The One And Only"
Genesis 7:23
Rashi on Genesis 7:23
"Remembering my chavruta: Rabbi Moshe Hauer, z"l" by Rabbi Rick Jacobs
"18Forty: Exploring Big Questions (An Introduction)"
18Forty Podcast: "The Cost of Jewish Education"
18Forty Podcast: "Steven Gotlib: Some Rabbi Grapples with His Faith"
18Forty Podcast: "Eli Rubin: How Do Mysticism and Social Action Intersect"
18Forty Podcast: "Eli Rubin: Is the Rebbe the Messiah?"
Torah Ohr by Shneur Zalman of Liadi
Tanya by Shneur Zalman of Liadi
Nefesh HaChayim by Chaim of Volozhin
Guide for the Perplexed by Maimonides
Halakhic Man by Rabbi Joseph B. Soloveitchik
The Conscious Mind by David J. Chalmers
"Adam, The Speaking Creature: On Humanity and Language in the Era of AI" by Eli Rubin
"Toward a Jewish Theology of Consciousness" by Steven Gotlib
Ludwig Wittgenstein: Philosophy in the Age of Airplanes by Anthony Gottlieb

For more 18Forty:
NEWSLETTER: 18forty.org/join
CALL: (212) 582-1840
EMAIL: info@18forty.org
WEBSITE: 18forty.org
IG: @18forty
X: @18_forty
WhatsApp: join here

Become a supporter of this podcast: https://www.spreaker.com/podcast/18forty-podcast--4344730/support.