Of course AI will continue to make waves, but what other important legal technologies do you need to be aware of in 2025? Dennis and Tom give an overview of legal tech tools—both new and old—you should be using for successful, modernized legal workflows in your practice. They recommend solutions for task management, collaboration, calendars, projects, legal research, and more. Later, the guys answer a listener's question about online prompt libraries. Are there reputable, useful prompts available freely on the internet? They discuss their suggestions for prompt resources and share why these libraries tend to quickly become outdated. As always, stay tuned for the parting shots, that one tip, website, or observation that you can use the second the podcast ends. Have a technology question for Dennis and Tom? Call their Tech Question Hotline at 720-441-6820 for the answers to your most burning tech questions.

Show Notes:

A Segment: What's Happening in LegalTech Other than AI?
OmniFocus
ToDoist
Microsoft Planner+Project
Calendly
Microsoft Bookings
Microsoft Teams
Practical Law

B Segment: A Voicemail from Our Listeners - Online Prompt Libraries
Anthropic Prompt Library - https://docs.anthropic.com/en/prompt-library/library
Google Prompting Essentials - https://grow.google/prompting-essentials/
Copilot Prompt Library - https://copilot.cloud.microsoft/en-US/prompts/all
Ethan Mollick - https://www.moreusefulthings.com/prompts

Parting Shots: Personal Strategy Compass - https://dennis538.substack.com/p/personal-strategy-compass

Learn more about your ad choices. Visit megaphone.fm/adchoices
Nuri Cankaya, the Vice President of Commercial Marketing at Intel and author of AI in Marketing, joins the show to discuss the transformative impacts of AI on various marketing functions, particularly product and partner marketing, as well as how to implement AI effectively within teams and organizations. Also in this episode: the importance of AI assessment, implementation, and measurement, along with practical advice on leveraging AI tools while maintaining data security. Nuri and Itir also dig into the emergence of agentic AI and artificial general intelligence (AGI), and even touch on the possibility of artificial superintelligence in the not-too-distant future. With over twenty years of experience in marketing and innovation, Nuri Cankaya has established a profound career in AI Product Marketing at Intel. Dedicated to aiding esteemed clients in navigating their business challenges and exceeding objectives with AI's transformative capabilities, Nuri is a true futurist. His enthusiasm for the subject is evident in his engaging presentations on “AI and the Future,” delivered at various customer and community events. His passion not only drives him to share his vast knowledge and insights but has also inspired him to author books on the forefront of technology. Nuri's works delve into topics such as AI, Web 3.0, the Internet of Things (IoT), and Blockchain, reflecting his deep commitment to exploring and shaping the future of the digital world. Nuri's favorite coffee spot in Kirkland is Zoka Coffee Roasters: https://www.zokacoffee.com/pages/kirkland-zoka. He recommends reading Winning the Week by Demir and Carey Bentley (https://www.amazon.com/Winning-Week-Plan-Successful-Every/dp/1544530234) and Co-Intelligence by Ethan Mollick (https://www.amazon.com/Co-Intelligence-Living-Working-Ethan-Mollick/dp/059371671X).
Connect with Nuri Cankaya on LinkedIn: https://www.linkedin.com/in/nuricankaya If you have any questions about brands and marketing, connect with the host of this channel, Itir Eraslan, on LinkedIn: https://www.linkedin.com/in/itireraslan/
After a quick spring break, Paul Roetzer and Mike Kaput are back, and the AI world definitely didn't take a vacation. In this episode of The Artificial Intelligence Show, our hosts catch up on two weeks of major developments, including OpenAI's surprising release of o3 and o4-mini, the accelerating wave of quiet AI-driven layoffs, and a new federal executive order on AI education. Access the show notes and show links here.

Timestamps:
00:05:49 — o3 and o4-mini, and AGI
00:17:21 — AI-Caused “Quiet Layoffs” and Impact on Jobs
00:31:46 — White House Plan for AI Education
00:36:04 — Other OpenAI Updates
00:43:04 — Ethan Mollick's Criticism of Microsoft Copilot
00:46:43 — Era of Experience Paper
00:54:23 — Chief AI Officers at Companies
00:58:54 — Anthropic Researcher Says There Is a Chance Claude Is Conscious
01:07:03 — xAI Funding and Updates
01:11:07 — Other AI Product Updates
01:13:40 — Listener Questions

This episode is brought to you by our AI for B2B Marketers Summit: Join us and learn valuable insights and practical knowledge on how AI can revolutionize your marketing efforts, enhance customer experiences, and drive business growth. The Summit takes place virtually from 12:00pm - 4:45pm ET on Thursday, June 5. There is a free registration option, as well as paid ticket options that also give you on-demand access after the event. To register, go to b2bsummit.ai

This week's episode is also brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types. For more information on MAICON and to register for this year's conference, visit www.MAICON.ai.

Visit our website. Receive our weekly newsletter. Join our community: Slack, LinkedIn, Twitter, Instagram, Facebook. Looking for content and resources? Register for a free webinar, come to our next Marketing AI Conference, or enroll in our AI Academy.
As AI becomes more integrated into everyday business and personal life, the biggest question leaders face is: when should we rely on AI, and when must the human touch prevail? In this episode, Knownwell CMO Courtney Baker joins CEO David DeWolf and Chief Product and Technology Officer Mohan Rao to explore the fine line between AI augmentation and human intuition. The team uses a recent LinkedIn post from Ethan Mollick as a leaping-off point to unpack how leaders can responsibly scale with AI while keeping people empowered, not replaced. In the post, Mollick shares the results of a recent study that shows students who use AI as a tutor benefit from it, whereas those who use AI to do their work for them end up faring worse on standardized tests. Special guest Richard Lin, CEO of Anyreach.ai, speaks with Pete Buer about how AI voice agents are transforming customer service and sales. Lin reveals how cloning the “top 1%” of reps isn't about replacement, but about elevating consistent performance while keeping humans in the loop to label, train, and improve AI systems. In our news segment, Pete dives into the fascinating world of AI therapists. A recent study shows that bots can actually help patients with anxiety and depression, raising the provocative question: are people more honest with machines than with humans? See how Knownwell's platform empowers your team with actionable insights. Visit Knownwell.com. AI Knowhow is brought to you by the team at Knownwell.
"Humans, walking and talking bags of water and trace chemicals that we are, have managed to convince well-organized sand to pretend to think like us." - Ethan Mollick

In this episode, Ana Melikian challenges the common "us vs. them" mindset when it comes to technology, especially in the age of Artificial Intelligence. Ana takes listeners on a journey through humanity's long history with technology, highlighting how our survival and progress have always been intertwined with new tools—from controlling fire to inventing computers and now, navigating the waves of AI. Ana shares why seeing humanity and technology as allies, rather than adversaries, is essential. She encourages listeners to drop the outdated "humans vs. machines" narrative and instead adopt an experimentation mindset, inviting AI to the table as a collaborator. Drawing wisdom from Ethan Mollick's "Co-Intelligence," Ana presents a practical framework for thriving in this era: always invite AI to the table, be the human in the loop, treat AI like a person (but guide it with intention), and remember that today's AI will soon be considered primitive. Throughout the episode, Ana offers insightful historical context and practical advice for increasing your "AI literacy" without getting overwhelmed or burned out. She emphasizes that we don't need to face the AI revolution alone—working as a team is key to using these powerful tools to shape a better future. Let's dive in!

This week on the MINDSET ZONE podcast:
00:00 Introduction and Expanding Our View of Technology
01:31 The Essential Mindset Shift: Humanity and Technology as Allies
03:33 Technology's Deep Roots in Human History
07:58 The Emergence of AI in Our Everyday Lives
10:09 Shift from “Us vs. Them” to “Teamwork with Technology”
12:17 Learning AI as a Strategic Advantage
13:41 Combating Overwhelm: Teamwork and Shared Learning
15:27 Ethan Mollick's Framework for Co-Intelligence
18:49 Practical Prompts: How to Effectively Engage with AI
22:30 Embracing Growth: The Future of AI Evolution
23:33 Reflections, Invitations, and Looking Ahead
25:02 Resources, Book Info, and Gratitude

Meet Your Host: Ana Melikian, Ph.D., advises leaders on how to amplify impact while avoiding burnout. She is passionate about teaching others how to unlock their human potential using simple and powerful approaches such as her P.I.E. method.
The era of artificially intelligent large language models is upon us and isn't going away. Rather, AI tools like ChatGPT are only going to get better and better and affect more and more areas of human life. If you haven't yet felt both amazed and unsettled by these technologies, you probably haven't explored their true capabilities. My guest today will explain why everyone should spend at least 10 hours experimenting with these chatbots, what it means to live in an age where AI can pass the bar exam, beat humans at complex tests, and even make us question our own creative abilities, what AI might mean for the future of work and education, and how to use these new tools to enhance rather than detract from your humanity. Ethan Mollick is a professor at the Wharton business school and the author of Co-Intelligence: Living and Working with AI. Today on the show, Ethan explains the impact of the rise of AI and why we should learn to utilize tools like ChatGPT as a collaborator — a co-worker, co-teacher, co-researcher, and coach. He offers practical insights into harnessing AI to complement your own thinking, remove tedious tasks from your workday, and amplify your productivity. We'll also explore how to craft effective prompts for large language models, maximize their potential, and thoughtfully navigate what may be the most profound technological shift of our lifetimes.

Connect With Ethan Mollick:
Ethan's faculty page
One Useful Thing Substack
Ethan on LinkedIn
Ethan on Bluesky
Ethan on X
Ramzi Fawaz is an award-winning queer cultural critic, public speaker, and educator. He is the author of two books: "The New Mutants: Superheroes and the Radical Imagination of American Comics" (2016) and "Queer Forms" (2022). In 2019-2020, Fawaz was a Stanford Humanities Center fellow. He is currently a Romnes Professor of English at the University of Wisconsin, Madison. Please be warned: this conversation is a firehose of brilliance. We cover a frankly outrageous number of topics, including: The politics and poetics of gender/ The radical imagination of the 1960s and 70s/ What happens when college students of today read manifestos from the 1970s and discover just how fiery and fearless those voices actually were/ How feminist and gay liberation were deeply intertwined... and yet different/ The dark seduction of wounded identity and the political dead-end of suffering as a personality/ What the Beatles, postwar masculinity, and femme androgyny have to do with trans desire and cultural anxiety/ How trans liberation actually predates gay liberation in the U.S. / Teaching as ego dissolution: what it means to use the classroom like a psychedelic space. / And the idea that pluralism — true, radical pluralism — begins by accepting that you will be changed by contact with people who are radically different from you. Ramzi Fawaz is bold, funny, passionate about teaching, absurdly articulate, and I think you'll find he is deeply attuned to the moment we're living in. https://www.ramzifawaz.com/ Ramzi's Esalen offering: Thinking Like a Multiverse: Embracing a Diverse World June 23–27, 2025 Register now: https://www.esalen.org/workshops/thinking-like-a-multiverse-embracing-a-diverse-world-06232025 A quick note on AI: I use LLMs (often the multi-purpose ChatGPT, sometimes other models) to help me with various tasks associated with podcast production, including help with writing my intros, generating questions for my guests, and episode titles.
Occasionally I create episode graphics, too. I almost never take the AI output as-is; I subscribe to Ethan Mollick's notion of co-intelligence, in that I edit what's been given me, add my own creativity, and aim for the best possible output in the end. My hope is that this will create a better Voices of Esalen. - SS
With Ethan Mollick, professor at Wharton and author of the bestselling “Co-Intelligence”, we explore how generative AI tools like ChatGPT can enhance scientific creativity. Ethan emphasizes that AI excels at idea generation through sheer volume and recombination, outperforming most humans in many creativity tasks – though it does have odd obsessions with VR and crypto. However, AI is most effective when integrated into a collaborative human–machine workflow rather than used as a replacement. Ethan describes AI as your tireless science buddy that never gets bored or judgmental during brainstorming. We discuss how AI's "hallucinations" can be used for creativity, how AI can bridge disciplines by revealing hidden connections across fields, and how prompting strategies – such as chain-of-thought or playful personas – can guide AI toward more original outputs. Ethan stresses the need for scientists to actively experiment with these tools, share their methods openly, and reconsider scientific workflows in light of rapid AI progress. For more information on Night Science, visit https://www.biomedcentral.com/collections/night-science.
It's official: AI has arrived and, from here on out, will be a part of our world. So how do we begin to learn how to coexist with our new artificial coworkers? Ethan Mollick is an associate professor at University of Pennsylvania's Wharton School and the author of Co-Intelligence: Living and Working with AI. The book acts as a guide to readers navigating the new world of AI and explores how we might work alongside AI. He and Greg discuss the benefits of anthropomorphizing AI, the real impact the technology could have on employment, and how we can learn to co-work and co-learn with AI.

*unSILOed Podcast is produced by University FM.*

Episode Quotes:

The result of an experiment identifying the impact of Gen AI
07:35 We went to the Boston Consulting Group, one of the elite consulting companies, and we gave them 18 realistic business tasks we created with them, and these were judged to be very realistic. They were used to do actual evaluations of people in interviews and so on. And we got about 8 percent of the global workforce of BCG, which is a significant investment. And we had them do these tasks first on their own without AI, and then we had them do a second set of tasks either with or without AI. So, random selection to those two. The people who got access to AI, and by the way, this is just plain vanilla GPT-4 as of last April. No special fine-tuning, no extra details, no special interface, no RAG, nothing else. And they had a 40 percent improvement in the quality of their outputs on every measure that we had. We got work done about 25 percent faster, about 12.5 percent more work done in the same time period. Pretty big results in a pretty small period of time.

Is AI taking over our jobs?
20:30 The ultimate question is: How good does AI get, and how long does it take to get that good? And I think if we knew the answer to that question, which we don't, that would teach us a lot about what jobs to think about and worry about.

Will there be a new data war where different LLM and Gen AI providers chase proprietary data?
11:17 I don't know whether this becomes like a data fight in that way because the open internet has tons of data on it, and people don't seem to be paying for permission to train on those. I think we'll see more specialized training data potentially in the future, but things like conversations, YouTube videos, and podcasts are also useful data sources. So the whole idea of LLMs is that they use unsupervised learning. You throw all this data at them; they figure out the patterns.

Could public data be polluted by junk and bad actors?
16:39 Data quality is obviously going to be an issue for these systems. There are lots of ways of deceiving them, of hacking them, of working like a bad actor. I don't necessarily think it's going to be by poisoning the datasets themselves because the datasets are the Internet, Project Gutenberg, and Wikipedia. They're pretty resistant to that kind of mass poisoning, but I think data quality is an issue we should be concerned about.

Show Links:

Recommended Resources:
“Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality” | Harvard Business School
Geoffrey Hinton
Project Gutenberg
Gemini AI
“Google's Gemini Controversy Explained: AI Model Criticized By Musk And Others Over Alleged Bias” | Forbes
Devin AI
Karim Lakhani

Guest Profile:
Faculty Profile at University of Pennsylvania

His Work:
Co-Intelligence: Living and Working with AI
This week we talk about Studio Ghibli, Andrej Karpathy, and OpenAI. We also discuss code abstraction, economic repercussions, and DOGE.

Recommended Book: How To Know a Person by David Brooks

Transcript

In late-November of 2022, OpenAI released a demo version of a product they didn't think would have much potential, because it was kind of buggy and not very impressive compared to the other things they were working on at the time. This product was a chatbot interface for a generative AI model they had been refining, called ChatGPT. This was basically just a chatbot that users could interact with, as if they were texting another human being. And the results were good enough—both in the sense that the bot seemed kinda sorta human-like, but also in the sense that the bot could generate convincing-seeming text on all sorts of subjects—that people went absolutely gaga over it, and the company went full-bore on this category of products, dropping an enterprise version in August the following year, a search engine powered by the same general model in October of 2024, and by 2025, upgraded versions of their core models were widely available, alongside paid, enhanced tiers for those who wanted higher-level processing behind the scenes: that upgraded version basically tapping a model with more feedstock, a larger training library and more intensive and refined training, but also, in some cases, a model that thinks longer, that can reach out and use the internet to research stuff it doesn't already know, and increasingly, to produce other media, like images and videos.

During that time, this industry has absolutely exploded, and while OpenAI is generally considered to be one of the top dogs in this space, still, they've got enthusiastic and well-funded competition from pretty much everyone in the big tech world, like Google and Amazon and Meta, while also facing upstart competitors like Anthropic and Perplexity, alongside burgeoning Chinese competitors, like Deepseek, and established Chinese tech giants like Tencent and Baidu.

It's been somewhat boggling watching this space develop, as while there's a chance some of the valuations of AI-oriented companies are overblown, potentially leading to a correction or the popping of a valuation bubble at some point in the next few years, the underlying tech and the output of that tech really has been iterating rapidly, the state of the art in generative AI in particular producing just staggeringly complex and convincing images, videos, audio, and text, but the lower-tier stuff, which is available to anyone who wants it, for free, is also valuable and useable for all sorts of purposes.

Just recently, at the tail-end of March 2025, OpenAI announced new multimodal capabilities for its GPT-4o language model, which basically means this model, which could previously only generate text, can now produce images, as well. And the model has been lauded as a sort of sea change in the industry, allowing users to produce remarkable photorealistic images just by prompting the AI—telling it what you want, basically—with usually accurate, high-quality text, which has been a problem for most image models up till this point. It also boasts the capacity to adjust existing images in all sorts of ways.

Case-in-point, it's possible to use this feature to take a photo of your family on vacation and have it rendered in the style of a Studio Ghibli cartoon; Studio Ghibli being the Japanese animation studio behind legendary films like My Neighbor Totoro, Spirited Away, and Princess Mononoke, among others.

This is partly the result of better capabilities by this model, compared to its precursors, but it's also the result of OpenAI loosening its policies to allow folks to prompt these models in this way; previously they disallowed this sort of power, due to copyright concerns.
And the implications here are interesting, as this suggests the company is now comfortable showing that their models have been trained on these films, which has all sorts of potential copyright implications, depending on how pending court cases turn out, but also that they're no longer being as precious with potential scandals related to how their models are used.

It's possible to apply all sorts of distinctive styles to existing images, then, including South Park and the Simpsons, but Studio Ghibli's style has become a meme since this new capability was deployed, and users have applied it to images ranging from existing memes to their own self-portrait avatars, to things like the planes crashing into the Twin Towers on 9/11, JFK's assassination, and famous mass-shootings and other murders.

It's also worth noting that the co-founder of Studio Ghibli, Hayao Miyazaki, has called AI-generated artwork “an insult to life itself.” That so many people are using this kind of AI-generated filter on these images is a jarring sort of celebration, then, as the person behind that style probably wouldn't appreciate it; many people are using it because they love the style and the movies in which it was born so much, though.
An odd moral quandary that's emerged as a result of these new AI-provided powers.

What I'd like to talk about today is another burgeoning controversy within the AI space that's perhaps even larger in implications, and which is landing on an unprepared culture and economy just as rapidly as these new image capabilities and memes.

In February of 2025, Andrej Karpathy, the former AI head at Tesla, a founding team member at OpenAI, and the founder of an impending new, education-focused project called Eureka Labs, coined the term ‘vibe coding' to refer to a trend he's noticed in himself and other developers, people who write code for a living: developing new projects using code-assistant AI tools in a manner that essentially abstracts away the code, allowing the developer to rely more on vibes in order to get their project out the door, using plain English rather than code or even code-speak.

So while a developer would typically need to invest a fair bit of time writing the underlying code for a new app or website or video game, someone who's vibe coding might instead focus on a higher, more meta-level of the project, worrying less about the coding parts, and instead just telling their AI assistant what they want to do. The AI then figures out the nuts and bolts, writes a bunch of code in seconds, and then the vibe coder can tweak the code, or have the AI tweak it for them, as they refine the concept, fix bugs, and get deeper into the nitty-gritty of things, all, again, in plain-spoken English.

There are now videos, posted in the usual places, all over YouTube and TikTok and such, where folks—some of whom are coders, some of whom are purely vibe coders, who wouldn't be able to program their way out of a cardboard box—produce entire functioning video games in a matter of minutes.

These games typically aren't very good, but they work.
And reaching even that level of functionality would previously have taken days or weeks for an experienced, highly trained developer; now it takes mere minutes or moments, and can be achieved by the average, non-trained person, who has a fundamental understanding of how to prompt AI to get what they want from these systems.

Ethan Mollick, who writes a fair bit on this subject and who keeps tabs on these sorts of developments in his newsletter, One Useful Thing, documented his attempts to make meaning from a pile of data he had sitting around, and which he hadn't made the time to dig through for meaning. Using plain English he was able to feed all that data to OpenAI's Deep Research model, interact with its findings, and further home in on meaningful directions suggested by the data.

He also built a simple game in which he drove a firetruck around a 3D city, trying to put out fires before a competing helicopter could do the same. He spent a total of about $13 in AI token fees to make the game, and he was able to do so despite not having any relevant coding expertise.

A guy named Pieter Levels, who's an experienced software engineer, was able to vibe-code a video game, which is a free-to-play, massively multiplayer online flying game, in just a month. Nearly all the code was written by Cursor and Grok 3, the first of which is a code-writing AI system, the latter of which is a ChatGPT-like generalist AI agent, and he's been able to generate something like $100k per month in revenue from this game just 17 days post-launch.

Now an important caveat here is that, first, this game received a lot of publicity, because Levels is a well-known name in this space, and he made this game as part of a ‘Vibe Coding Game Jam,' which is an event focused on exactly this type of AI-augmented programming, in which all of the entrants had to be at least 80% AI generated.
But he's also a very skilled programmer and game-maker, so this isn't the sort of outcome the average person could expect from these sorts of tools. That said, it's an interesting case study that suggests a few things about where this category of tools is taking us, even if it's not representative of all programming spaces and would-be programmers.

One prediction that's been percolating in this space for years, even before ChatGPT was released, but especially after generative AI tools hit the mainstream, is that many jobs will become redundant, and as a result many people, especially those in positions that are easily and convincingly replicated using such tools, will be fired. Because why would you pay twenty people $100,000 a year to do basic coding work when you can have one person working part-time with AI tools vibe-coding their way to approximately the same outcome?

It's a fair question, and it's one that pretty much every industry is asking itself right now. And we've seen some early waves of firings based on this premise, most of which haven't gone great for the firing entity, as they've then had to backtrack and start hiring to fill those positions again—the software they expected to fill the gaps not quite there yet, and their offerings suffering as a consequence of that gambit.

Some are still convinced this is the way things are going, though, including people like Elon Musk, who, as part of his Department of Government Efficiency, or DOGE, efforts in the US government, is basically stripping things down to the bare minimum, in part to weaken agencies he doesn't like, but also, ostensibly at least, to reduce bloat and redundancy, the premise being that a lot of this work can be done by fewer people, and in some cases can be automated entirely using AI-based systems.

This was the premise of his mass-firings at Twitter, now X, when he took over, and while there have been a lot of hiccups and issues resulting from that decision, the company is managing to operate, even if less optimally than before, with about 20% of the staff it had before he took over—something like 1,500 people compared to 7,500.

Now, there are different ways of looking at that outcome, and Musk's activities since that acquisition will probably color some of our perceptions of his ambitions and level of success with that job-culling, as well. But the underlying theory that a company can do even 90% as well as it did before with just a fifth of the workforce is a compelling argument to many people, and that includes folks running governments, but also those in charge of major companies with huge rosters of employees that make up the vast majority of their operating expenses.

A major concern about all this, though, is that even if this theory works in broader practice, and all these companies and governments can function well enough with a dramatically reduced staff using AI tools to augment their capabilities and output, we may find ourselves in a situation in which the folks using said tools are more and more commodified—they'll be less specialized and have less education and expertise in the relevant areas, so they can be paid less, basically, the tools doing more and the humans mostly being paid to prompt and manage them. And as a result we may find ourselves in a situation where these people don't know enough to recognize when the AI are doing something wrong or weird, and we may even reach a point where the abstraction is so complete that very few humans even know how this code works, which leaves us increasingly reliant on these tools, but also more vulnerable to problems should they fail at a basic level, at which point there may not be any humans left who are capable of figuring out what went wrong, since all the jobs that would incentivize the acquisition of such knowledge and skill will have long since disappeared.

As I mentioned in the intro, these tools are being applied to images, videos, music, and everything else, as well.
Which means we could see vibe artists, vibe designers, vibe musicians, and vibe filmmakers. All of which is arguably good in the sense that these mediums become more accessible to more people, allowing more voices to communicate in more ways than ever before.

But it's also arguably worrying in the sense that more communication might be filtered through the capabilities of these tools—which, by the way, are predicated on previous artists' and writers' and filmmakers' work, arguably stealing their styles and ideas and regurgitating them, rather than doing anything truly original—and that could lead to less originality in these spaces, but also a similar situation in which people forget how to make their own films, their own art, their own writing; a capability drain that gets worse with each new generation of people who are incentivized to hand those responsibilities off to AI tools. We'll all become AI prompters, rather than all the things we are currently.

This has been the case with many technologies over the years—how many blacksmiths do we have in 2025, after all? 
And how many people actually hand-code the 1s and 0s that all our coding languages eventually write for us, after we work at a higher, more human-optimized level of abstraction?

But because our existing economies are predicated on a certain type of labor and a certain number of people being employed to do said labor, even if those concerns ultimately don't end up being too big a deal (because the benefits are just that much more impactful than the downsides, and other incentives to develop these or similar skills and understandings arise), it's possible we could experience a moment, years or decades long, in which the whole of the employment market is disrupted, perhaps quite rapidly, leaving a lot of people without income, and thus a lot fewer people who can afford the products and services that are generated more cheaply using these tools.

That's a situation ripe with potential for those in a position to take advantage of it, but also one that could be devastating to those reliant on the current state of employment and income, which is the vast, vast majority of human beings on the planet.

Show Notes
https://en.wikipedia.org/wiki/X_Corp
https://devclass.com/2025/03/26/the-paradox-of-vibe-coding-it-works-best-for-those-who-do-not-need-it/
https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/
https://www.wired.com/story/anthropic-benevolent-artificial-intelligence/
https://arstechnica.com/tech-policy/2025/03/what-could-possibly-go-wrong-doge-to-rapidly-rebuild-social-security-codebase/
https://en.wikipedia.org/wiki/Vibe_coding
https://www.newscientist.com/article/2473993-what-is-vibe-coding-should-you-be-doing-it-and-does-it-matter/
https://nmn.gl/blog/dangers-vibe-coding
https://x.com/karpathy/status/1886192184808149383
https://simonwillison.net/2025/Mar/19/vibe-coding/
https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gnarly-or-reckless-maybe-some-of-both/
https://www.creativebloq.com/3d/video-game-design/what-is-vibe-coding-and-is-it-really-the-future-of-app-and-game-development
https://arstechnica.com/ai/2025/03/openais-new-ai-image-generator-is-potent-and-bound-to-provoke/
https://en.wikipedia.org/wiki/Studio_Ghibli

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
In this episode:
Apple announces WWDC will be June 9-13
Jeff is experimenting with Gentler Streak and having a bit of nostalgia with Notebook Artillery
A subtle AirPods Pro experience
The boys continue to argue over AI
Ethan Mollick highlights study that finds legit benefits of AI at work

Links from the show:
Mac - Lumon Terminal Pro
'Severance' editor was all-in on Apple hardware, but not Final Cut Pro
Workout Tracker Gentler Streak
Notebook Artillery
ChatGPT Starts Blocking Studio Ghibli-Style Images After Trend Goes Viral
The Cybernetic Teammate
Artificial Intelligence Systems and Copyright
The Tech Fantasy That Powers A.I. Is Running on Fumes

Question or Comment? Send us a Text Message!
Contact Us: Drop us a line at feedback@basicafshow.com
You'll find Jeff at @reyespoint on Threads and reyespoint.bsky.social on Bluesky
Find Tom at @tomanderson on Threads
Join Tom's newsletter, Apple Talk, for more Apple coverage and tips & tricks.
Tom has a new YouTube channel
Show artwork by the great Randall Martin Design
Enjoy Basic AF? Leave a review or rating! Review on Apple Podcasts, Rate on Spotify, Recommend in Overcast
Intro Music: Psychokinetics - The Chosen (Apple Music, Spotify)
Show transcripts and episode artwork are AI generated and likely contain errors and general si...
What do you get when Harvard Business School, Wharton's Ethan Mollick, and 776 Procter & Gamble professionals team up to test the real-world power of generative AI? The kind of data-backed proof that finally silences the skeptics.

In this episode of Instant Expertise: Marketing, Yvette Brown and Shari Nomady break down the groundbreaking study that's reshaping how we think about AI, productivity, collaboration, and the future of work.

✨ You'll learn:
• How AI boosted individual performance to match full team output
• Why emotional support from AI is a real thing (yep, really)
• How functional silos are being smashed in top-tier organizations
• Why NOW is the time to upskill—or risk being left behind

This isn't just theory. It's the wake-up call your team can't ignore.
News Stories Covered in the Episode

AI Typing Like a Human? – Graham Clay's LinkedIn post on ChatGPT Operator mimicking human keystrokes https://www.linkedin.com/feed/update/urn:li:activity:7292163775963557888/

The Vatican on AI Ethics – "Antiqua et Nova" report on AI, human intelligence, and ethics in education. Sacerdotus: Vatican's New Document on AI: Ethical Guidelines and Human Responsibility

Australian Government's AI Study – Treasury report on Microsoft 365 Copilot, estimating it pays for itself if it saves 13 minutes per week. Treasury M365 Copilot review estimates 13-minute efficiency gain needed to justify licence cost - Software - iTnews

California State University & AI – Providing 500,000 students access to ChatGPT Edu https://openai.com/index/openai-and-the-csu-system/

Estonia's AI Strategy – National AI policy giving all students access to AI tutors https://openai.com/index/estonia-schools-and-chatgpt/

Australia's New National AI Centre Director – ex-podcast co-host Lee Hicken appointed to lead the centre. For those keeping count, that's the second of the podcast hosts that's now at NAIC, with current hosts Dan & Ray still waiting for the call
The advances in AI have skyrocketed, with more and more people beginning to make use of it in everyday life. In time, AI will have a monumental effect on society at virtually every level. As such, questions about the ethics and theology of artificial intelligence are no longer speculative, but are right here on our doorstep. How should Christians respond? What positives are there in AI? Where can it help relieve unnecessary burdens? Where are the increasing dangers too? As AI gets smarter, do we get dumber? How do we think theologically about AI? How does sin factor into AI? If we create AI in the image of sinful humans, are we unleashing something capable of ever greater destruction? Could AI become "self-aware" at some point? If so, how would we categorise it? Is AI capable of "good" or "bad" moral actions? Questions truly do abound! We address many of them, and more, in this jam-packed episode of Pod of the Gaps!

**** RESOURCES MENTIONED ****

AI Tools:
* ChatGPT (from OpenAI): https://chatgpt.com
* Claude (from Anthropic): https://claude.ai
* Perplexity: https://www.perplexity.ai

* Matthew Berman, 'OpenAI's New o1 Is LYING ON PURPOSE?! (Thinking For Itself)', https://www.youtube.com/watch?v=GlZfndaO01c
* George M. Coghill, 'Artificial Intelligence (and Christianity): Who? What? Where? When? Why? and How?', Studies in Christian Ethics 36.3 (2023) 604-619 (online at https://doi.org/10.1177/09539468231169462)
* Ethan Mollick, "Co-Intelligence: Living and Working with AI" (London: WH Allen, 2024)
* Alan M. Turing, 'Computing Machinery and Intelligence', Mind LIX.236 (1950) 433-460
* C. R. Wiley, 'Discerning the Spirits, Part 1: When it comes to AI, nobody's home--except you', https://crwiley.substack.com/p/discerning-the-spirits-part-1
Jim talks with Josh Bernoff, author of Writing Without Bullshit, about the impact of AI on writing education and professional writing. They discuss Josh's background and career, Stephen Lane's recent op-ed arguing that AI should take over writing mechanics, problems with AI-generated writing, the role of writing in thinking, ChatGPT's "deep research," Jim's ScriptHelper project, the decline in math & navigation skills, the importance of memos for corporate decision-making, literacy as a fundamental life skill, Ethan Mollick's approach to AI in education, writing as art, the PowerPoint problem, the Idiocracy scenario, and much more. Episode Transcript "Could AI Replace the Teaching of Writing?: Why the Boston Globe op-ed is dead wrong" - Josh's blog post "AI in the classroom could spare educators from having to teach writing" - Stephen Lane's Boston Globe op-ed Writing Without Bullshit, by Josh Bernoff The Age of Intent: Using Artificial Intelligence to Deliver a Superior Customer Experience, by P.V. Kannan with Josh Bernoff Josh Bernoff is an expert on how business books can propel thinkers to prominence. He is the author of Build a Better Business Book: How to Plan, Write, and Promote a Book That Matters – A Comprehensive Guide for Authors and Writing Without Bullshit: Boost Your Career by Saying What You Mean, as well as coauthor of Groundswell: Winning in a World Transformed by Social Technologies. He works closely with nonfiction authors as an advisor, coach, editor, or ghostwriter.
Book Club Podcast? Before we even got to the News and Research, this week we discussed the AI-related books we're currently reading: Dan's reading: Where Good Ideas Come From, by Steven Johnson (TED Talk) Why Data Science Projects Fail, by Douglas Gray and Evan Shellshear (An interview with Evan) Ray's reading The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, by Kate Crawford (Wikipedia page) News Links Links to the reports and news we discuss in the episode: OpenAI's new Education newsletter https://openaiforeducation.substack.com/ Ethan Mollick's new "AI in Education: Leveraging ChatGPT for Teaching" course on Coursera https://www.coursera.org/learn/wharton-ai-in-education-leveraging-chatgpt-for-teaching World Economic Forum "Future of Jobs report" https://www.weforum.org/publications/the-future-of-jobs-report-2025/infographics-94b6214b36/ Student expelled and deported because they were accused of using ChatGPT by their professor. So they're suing their professor https://www.fox9.com/video/1574324 Digital Education Council Global AI Faculty Survey 2025 https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-faculty-survey We'll discuss this report with one of the authors in next week's episode UK government policy paper on "Generative artificial intelligence (AI) in education" https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education Year13 Case Study on AI use https://news.microsoft.com/en-au/2024/12/13/guiding-school-leavers-with-ai-support-year13s-mission-to-democratise-opportunities-for-young-people/ AI Use by industry employees - US, 2024 https://www.nber.org/papers/w32966 In the discussion of energy use by AI, Ray mentioned some stats from this research report: "The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans" https://arxiv.org/ftp/arxiv/papers/2303/2303.06219.pdf Research Papers 
And finally, links to the research papers we discussed this week ChatGPT and Its Educational Impact: Insights from a Software Development Competition https://arxiv.org/abs/2409.03779 How to Align Large Language Models for Teaching English? Designing and Developing LLM based-Chatbot for Teaching English Conversation in EFL, Findings and Limitations https://arxiv.org/abs/2409.04987 AI Meets the Classroom: When Does ChatGPT Harm Learning? https://arxiv.org/abs/2409.09047 Are Large Language Models Good Essay Graders? https://arxiv.org/abs/2409.13120 An Education Researcher's Guide to ChatGPT https://osf.io/spbz3 A Step Towards Adaptive Online Learning: Exploring the Role of GPT as Virtual Teaching Assistants in Online Education https://osf.io/preprints/edarxiv/rw45b The AI Assessment Scale (AIAS) in action: A pilot implementation of GenAI-supported assessment https://ajet.org.au/index.php/AJET/article/view/9434
Curious, excited, or even concerned about AI's impact on your career and the future of work? Whether you're eager to embrace AI or cautious about its effects, this episode with Reid Hoffman, LinkedIn co-founder and author of the new book Superagency: What Could Possibly Go Right With Our AI Future, is packed with insights that will help you navigate the AI revolution, no matter your perspective. Jessi and Reid discuss: Practical strategies for using AI to boost career growth Leveraging AI to enhance your job search Supercharging creativity with AI How to use AI to make better decisions Why we should approach AI with curiosity rather than fear This episode was filmed live in-studio. Check out the full video version on LinkedIn Premium. Continue the conversation with us at Hello Monday Office Hours! Join us here, on the LinkedIn News page, this Wednesday at 3 PM EST. Want to learn more about using AI at work and in life? Check out Jessi's conversation with Ethan Mollick on Apple Podcasts, Spotify, or wherever you listen to podcasts.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
DeepSeek R1 dominated the conversation this week, but should it be the model you're using? NLW reads and discusses an opinionated essay by Ethan Mollick and adds his perspective on what models he uses. Original: https://www.oneusefulthing.org/p/which-ai-to-use-now-an-updated-opinionated Brought to you by: KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions. Vanta - Simplify compliance - https://vanta.com/nlw The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown
Creative thinker/innovation facilitator Curtis Michelson explores crucial trends and predictions for 2025. Curtis shares his insights on generative AI, its impacts on various sectors, the concept of BANI (Brittle, Anxious, Nonlinear, Incomprehensible), and how it updates VUCA to describe the contemporary world. The discussion highlights the ethical implications of AI, the need for creative and decentralized innovation, and the importance of slowing down to enhance strategic decision-making. Curtis provides a practical use of AI in education and explores how leaders can harness AI to support agility and resilience without sacrificing ethical integrity.

Find Curtis on LinkedIn: https://www.linkedin.com/in/curtismichelson/

References mentioned:
"Co-Intelligence" book by Ethan Mollick https://www.penguinrandomhouse.com/books/741805/co-intelligence-by-ethan-mollick/
Gen AI Card Deck by Alexandre Eichenstetter - Used for workshops and understanding AI concepts https://cards.ai-tinkerers.club/
BANI (Brittle, Anxious, Non-linear, Incomprehensible) concept - Coined by futurist Jamais Cascio https://ageofbani.com/
Custom GPTs by OpenAI - Mentioned as a tool for creating and storing AI prompts
Context Explorer: https://chatgpt.com/g/g-DKj02DkNp-context-explorer
Hyperbolizer: this is an actual prompt you can put into your own favorite tool (ChatGPT, Claude, Gemini, etc.). Test this out yourself. https://www.dropbox.com/scl/fi/c9rxnegcg7yerbe6dg7z2/Hyperbolizer-Prompt.txt?rlkey=8ltey2fuq6jss1jpvnlqib5v1&dl=0
Led Zeppelin 4 album reimagined in 1940s style using AI - Mentioned as an example of AI's creative capabilities on YouTube https://youtu.be/gBOVr1zEvaE?si=RTc1FXgryyKt4OVd

Subscribers to Dawna's Navigating Uncertainty on Substack get the preview and thought-provoking posts to raise the level of human and business decision-making leadership and provide insights into reviving and restoring emotional and mental health. 
Subscribe here: https://dawnajones.substack.com/

Contact or follow host Dawna Jones on one or more of these channels:
LinkedIn: https://www.linkedin.com/in/dawnahjones/
X: https://www.X.com/EPDawna_Jones
Instagram: https://www.instagram.com/insightful_dawna/
Website: https://www.dawnajones.com

Intro music provided by Mark Romero Music. The track is called Alignment.

Support this show: http://supporter.acast.com/insight-to-action-inspirational-insights-podcast. Hosted on Acast. See acast.com/privacy for more information.
Tom and Nate sit down for a classic discussion of the role of AI in the modern philosophy of science. Much of this discussion is based on Thomas Samuel Kuhn's influential book The Structure of Scientific Revolutions. We ask: is AI a science in the Kuhnian sense? Will the "paradigm" worldview apply to other sciences post-AI? How will scientific institutions manage the addition of AI?

We promised an AI for science reading list, so here it is:
Dario interview with Lex: https://youtu.be/ugvHCXCOmm4?si=1hnlvue8M4pV2TqC
Levers for biological progress: https://open.substack.com/pub/cell/p/levers?r=68gy5&utm_medium=ios
X thread on theories of change in the sciences: https://x.com/AdamMarblestone/status/1845158919523664019
Whitepaper linked by Seb Krier
Dwarkesh physics pod: https://open.substack.com/pub/dwarkesh/p/adam-brown?r=68gy5&utm_medium=ios — Nobel in physics went to AI
AI Policy Perspectives piece, "A new golden age of discovery": https://www.aipolicyperspectives.com/p/a-new-golden-age-of-discovery
Owl Posting checking recent NeurIPS papers: https://www.owlposting.com/p/can-o1-preview-find-major-mistakes, based on an idea from Ethan Mollick: https://x.com/emollick/status/1868329599438037491
Also another post on the subject: https://open.substack.com/pub/amistrongeryet/p/the-black-spatula-project?r=68gy5&utm_medium=ios
Kuhn's The Structure of Scientific Revolutions
Intrinsic Perspective: https://open.substack.com/pub/erikhoel/p/great-scientists-follow-intuition?r=68gy5&utm_medium=ios

Get The Retort (https://retortai.com/)
… on YouTube: https://www.youtube.com/@TheRetortAIPodcast
… on Spotify: https://open.spotify.com/show/0FDjH8ujv7p8ELZGkBvrfv?si=fa17a4d408f245ee
… on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-retort-ai-podcast/id1706223190
… Follow Interconnects: https://www.interconnects.ai/
… email us: mail@retortai.com
AI is going to change the world in good ways and bad ways. We have to figure out how to use it, not just responsibly, but effectively. Today, Ryan Holiday talks about how he has used AI as a learning opportunity for his kids, teaching them about AI's potential, the importance of accurate communication, and various ethical considerations. Check out Ethan Mollick's Substack: https://www.oneusefulthing.org/ Pick up a copy of From Under The Truck by Josh Brolin: https://www.thepaintedporch.com ✉️ Sign up for the Daily Dad email: DailyDad.com
What a great conversation about the new book, The Artificial Intelligence Playbook: Time-Saving Tools for Teachers that Make Learning More Engaging! Jenn got to talk to all three of the authors: Meghan Hargrave, Douglas Fisher, and Nancy Frey and learned so much. We discussed everything from what AI is and isn't, to the reasons leaders should address teachers' emotions around AI, to whether AI is going to take over our jobs! (Spoiler alert, the authors quoted Ethan Mollick who says, "AI won't take your job, but someone who uses AI will!" ... so listen to this podcast to be the person who knows how to use AI!) There are, however, things to be careful of — like students using AI for plagiarism — so we discussed some great ideas to address this. We also got into some concrete examples of the ways AI can help teachers with the important work they're doing:

- managing content
- fostering student engagement
- meeting students' instructional needs
- assessing student learning
- providing effective feedback, and
- lifelong learning for educators

The authors share examples of prompts you can feed into AI and some of the tips they have for making sure you get the best possible answers from AI. Since our listeners are mostly ed leaders, they also shared that on Corwin's website there's a school leader's guide to the book. There's also a study guide for teachers and a boot camp with self-paced modules on the website. The authors are all over social media. You can't miss them. If you've been shy about diving into AI, this conversation and this book are two great places to start! As always, send your comments, questions, and show ideas to mike@schoolleadershipshow.com. Consider rating the podcast in iTunes and leaving a comment. And please pass the show along to your colleagues. Additionally, if you have other non-education books with implications for school leaders, send those suggestions our way, too. 
And finally, if you or someone you know would like to sponsor the show, send Mike an email at mike@schoolleadershipshow.com.
A heap of news stories this week means we didn't cover any research at all!

AI isn't a tool, it's an environment, by Josh Thorpe https://wonkhe.com/blogs/ai-isnt-a-tool-its-an-environment/
Australian Senate report into Adoption of AI https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Adopting_Artificial_Intelligence_AI/AdoptingAI/Report
Ethan Mollick's course on Coursera: "AI in Education: Leveraging ChatGPT for Teaching" https://www.coursera.org/learn/wharton-ai-in-education-leveraging-chatgpt-for-teaching (I'll also recommend his book, Co-Intelligence, as a great Christmas gift for yourself or a friend)
OpenAI announced ChatGPT Pro @ $200/m https://openai.com/index/introducing-chatgpt-pro/ (We like Ethan Mollick's example that showed what it could do, solving this problem and creating a working app in 15 minutes: https://bsky.app/profile/emollick.bsky.social/post/3lcldsn2grk2z)
Microsoft Copilot with vision for consumers https://www.microsoft.com/en-us/microsoft-copilot/blog/2024/12/05/copilot-vision-now-in-preview-a-new-way-to-browse (Demo example: https://youtu.be/H3-hHiITH_g)
Sora released https://openai.com/index/sora-is-here/
ChatGPT's Advanced Voice Mode finally gets visual context on the 6th day of OpenAI https://www.zdnet.com/article/chatgpts-advanced-voice-mode-finally-gets-visual-context-on-the-6th-day-of-openai/
Apple released iOS 18.2 with integrated AI https://www.zdnet.com/article/ios-18-2-rolls-out-to-iphones-try-these-6-new-ai-features-today/
Google Gemini 2.0 with real-time speech and vision https://aistudio.google.com/live
We're on Bluesky https://bsky.app/profile/aiineducation.bsky.social
"The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans" https://arxiv.org/ftp/arxiv/papers/2303/2303.06219.pdf
Microsoft's Zero-Water Solution for Data Centre Cooling https://sustainabilitymag.com/articles/microsoft-unveils-zero-water-cooling-for-ai-data-centres
In this episode, Provost Kimberly D. McCorkle talks with Dr. Melanie B. Richards, interim director of ETSU's new School of Marketing and Media, about how her experience in the corporate world led to a career in academia – and how she is harnessing that experience to make sure her students get hands-on, project-based learning opportunities in her classroom. Dr. Richards also discusses how she incorporates AI in her instruction and recommends a book that she has used to guide her research and teaching in this area: Co-Intelligence: Living and Working with AI by Ethan Mollick. Listen to more episodes of Why I Teach, where Dr. Kimberly D. McCorkle explores stories of impact and success of ETSU faculty. Subscribe at https://why-i-teach-conversation-with-etsu-faculty.podbean.com/. Dr. Richards' Bio: https://www.etsu.edu/cbat/media-communication/facstaff/richardsm.php ETSU's Master of Arts in Brand and Media Strategy: https://www.etsu.edu/cbat/media-communication/academics/graduate-programs/brand-strategy.php School of Marketing and Media News: https://www.etsu.edu/etsu-news/schools/marketing-media.php/ ETSU's Approach to Community-Engaged Learning: https://www.etsu.edu/teaching/teaching_community/cel_qep.php
Choosing a one-word theme is one of the most fun and thought-provoking exercises within the “Design Your Year” set. Here, we review our themes for 2024 and reveal the themes we've chosen for 2025. We also share a hack for choosing a powerful theme, and include many of the themes chosen by listeners. Resources and links related to this episode: Happiness Project Shop Tips for making a "25 for '25" list Print your own "25 for '25" list One-Sentence Journal Happier in Hollywood newsletter sign-up Habits for Happiness quiz: What's the next new habit that will make you happier? Four Tendencies quiz: Are you an Upholder, Questioner, Obliger, or Rebel? Gift-Giving quiz: What kind of gift makes you happy? "5 Things Making Me Happy" newsletters Ethan Mollick's newsletter “One Useful Thing" Hard Fork podcast Muse Machine Gretchen is reading: Ink Blood Sister Scribe by Emma Torzs (Amazon, Bookshop) Get in touch: podcast@gretchenrubin.com Visit Gretchen's website to learn more about Gretchen's best-selling books, products from The Happiness Project Collection, and the Happier app. Find the transcript for this episode on the episode details page in the Apple Podcasts app. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
As 2024 comes to an end, we take a look back at some of the biggest themes that emerged on Behind the Tech over this incredibly exciting year for tech and AI: creativity, education, and transformation. And we take a stroll through some of Kevin's obsessions – from ceramics to Maker YouTube to classical piano – alongside guests like Xyla Foxlin, Lisa Su, Ben Laude, Ethan Mollick, Refik Anadol, and more. Kevin Scott Behind the Tech with Kevin Scott Discover and listen to other Microsoft podcasts.
Can AI agents make you better at your job? Listen to Ethan Mollick, Co-Director of the Generative AI Lab at Wharton, for tips on how to use AI agents to be more creative and efficient. The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
David De Cremer: The AI-Savvy Leader

David De Cremer is the Dunton Family Dean of the D'Amore-McKim School of Business and professor of management and technology at Northeastern University. He's also an affiliated faculty member at the Institute for Experiential AI at Northeastern University and an affiliated researcher at the Center for Collective Intelligence at MIT. His newest book is titled The AI-Savvy Leader: Nine Ways to Take Back Control and Make AI Work*. We've all heard the warnings that AI is going to take our jobs. That's certainly a possibility in the long term, but the story emerging, at least for now, is looking a little different. In this episode, David and I discuss how leaders can use AI to augment, not replace, human intelligence.

Key Points
- AI is substantially different than prior digital transformations, and adoption efforts are failing at alarming rates.
- Instead of leading, too often leaders are being too deferential to data and analytics teams.
- Your expertise is exactly what your organization needs to deploy AI successfully.
- Leaders who learn the fundamentals of AI will play an essential role in narrating dialogue between the technology experts and everyone else.
- Get just enough foundational knowledge with statistics and modeling to communicate with the data and analytics folks better.

Resources Mentioned
The AI-Savvy Leader: Nine Ways to Take Back Control and Make AI Work* by David De Cremer

Interview Notes
Download my interview notes in PDF format (free membership required).

Related Episodes
How to Solve the Toughest Problems, with Wendy Smith (episode 612)
How to Begin Leading Through Continuous Change, with David Rogers (episode 649)
Principles for Using AI at Work, with Ethan Mollick (episode 674)

Discover More
Activate your free membership for full access to the entire library of interviews since 2011, searchable by topic. To accelerate your learning, uncover more inside Coaching for Leaders Plus.
Ethan Mollick, Associate Professor at the Wharton School of the University of Pennsylvania, dives into how we can shape the future of AI. Ethan explores why organizations need to rethink their approach to AI adoption, the importance of disciplined experimentation, and how imagination—not just technology—will unlock AI's true potential. From understanding the psychology of LLMs to embracing R&D as an everyday practice, Ethan shares practical advice for making AI work for you, not the other way around. Enhance your listening experience with C&C Chat at data.world/podcasts
Mike Schmitz, Bart Busschots, Marty Jencius, and host Chuck Joiner finish up the first MacVoices Gift Guide of 2024 with picks that go from apparel to hardware and software. (Part 2)

This edition of MacVoices is supported by MacVoices Magazine, our free magazine on Flipboard. Updated daily with the best articles on the web to help you do more with your Apple gear and adjacent tech, access MacVoices Magazine content on Flipboard, on the web, or in your favorite RSS reader.

Show Notes:

Links:

Picks by Marty Jencius:
Scottevest Pack Windbreaker https://amzn.to/3AV738B
SCOTTeVEST Best Travel Vest for Men - 26 Hidden Pockets https://amzn.to/4fQpvOu
SCOTTeVest EDC Jacket https://www.scottevest.com/products/edc-jacket-mens
Workona project/tab organizer https://workona.com/

Picks by Mike Schmitz:
Mode Sonnet Keyboard https://modedesigns.com/products/sonnet
Co-Intelligence by Ethan Mollick https://amzn.to/4ftKgjv

Picks by Bart Busschots:
KU XIU Magnetic Wireless Charging Stand for iPad Pro https://amzn.to/3YVZuXl
UGREEN 65W USB-C Charging Station https://amzn.to/3YWaXpV
UGREEN Uno Charger 100W USB C Charger, Nexode 4-Port GaN Charger Compact Fast USB C Power Adapter https://amzn.to/4fM8fKq

Picks by Chuck Joiner:
Sunco 12 Pack 11W/65W Equivalent BR30 Indoor Area Recessed LED Flood Light, Dimmable 850 Lumens Selectable CCT 2700K/3000K/5000K https://amzn.to/3CDaOQR
eufy Security Indoor Cam E220, Camera for home Security, Pan & Tilt, 2K https://amzn.to/3YUVaHW

Guests:

By day, Bart Busschots is a Linux sysadmin and Perl programmer, and a keen amateur photographer whenever he gets the time. Bart hosts and produces the Let's Talk podcast series - a monthly Apple show that takes a big-picture look at the last month in Apple news, and a monthly photography show focusing on the art and craft of photography. Every second week Bart is the guest for the Chit Chat Across the Pond segment on Allison Sheridan's NosillaCast. 
You can get links to everything Bart gets up to, including a link to his photography and his personal blog.

Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession 'firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. He is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC).

Mike Schmitz is an Apple fanboy, coffee snob, and productivity junkie who is intent on teaching people how to be more productive. He is the Executive Editor for The Sweet Setup, a site dedicated to reviewing and recommending the very best Mac and iOS apps, and is the creator of LifeHQ, where he teaches his personal approach to getting more done. Mike lives in Wisconsin with his wife and 4 crazy boys and is the author of Thou Shalt Hustle. He is also the co-host of the Bookworm podcast and (probably) spends too much time on Twitter. You can find all his projects on his personal web site, MikeSchmitz.com, including his new podcast with his wife Rachel at IntentionalFamily.fm. Follow him on Twitter as _MikeSchmitz. 
Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss
Ethan Mollick, Associate Professor at the Wharton School of the University of Pennsylvania, dives into how we can shape the future of AI. Ethan explores why organizations need to rethink their approach to AI adoption, the importance of disciplined experimentation, and how imagination—not just technology—will unlock AI's true potential. From understanding the psychology of LLMs to embracing R&D as an everyday practice, Ethan shares practical advice for making AI work for you, not the other way around. Enhance your listening experience with C&C Chat at data.world/podcasts
As you may have discovered on your own, genAI tools are ready and enthusiastic with their outputs, but may be woefully ill-informed, in spite of the snappy replies they spew out with unfettered confidence. So, what is being done to remedy this issue? Dennis and Tom explain how Retrieval-Augmented Generation (RAG) supplements an LLM with more current, external information, addressing problems that arise from outdated LLM training data. The guys talk through some of their favorite tools that employ RAG effectively and offer insights into their uses for attorneys. Later, could AI adoption be diminishing a lawyer's hard-earned expertise? Dennis and Tom dive into this common fear shared by many traditionally minded attorneys, focusing on ways to leverage AI not to replace, but to enhance, their legal practice. As always, stay tuned for the parting shots, that one tip, website, or observation you can use the second the podcast ends. Have a technology question for Dennis and Tom? Call their Tech Question Hotline at 720-441-6820 for the answers to your most burning tech questions. Show Notes - Kennedy-Mighell Report #372 A Segment: AI and RAG: Hate the Name, Love the Application Google Notebook LM https://notebooklm.google/ Perplexity.ai - https://www.perplexity.ai/ Practical Law with AI: https://legal.thomsonreuters.com/en/products/practical-law B Segment: A question from our AI Chatbot Parting Shots: Android search with your camera - https://theintelligence.com/34456/android-search-google-lens/ Ethan Mollick, “Thinking Like an AI” - https://www.oneusefulthing.org/p/thinking-like-an-ai Learn more about your ad choices. Visit megaphone.fm/adchoices
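The RAG idea the episode describes boils down to two steps: retrieve relevant, current text first, then ground the model's prompt in it. Here is a minimal, self-contained sketch in Python; word-overlap retrieval stands in for the vector search a real system would use, and the example documents are invented for illustration:

```python
# Toy illustration of the Retrieval-Augmented Generation (RAG) pattern:
# fetch the most relevant document for a question, then prepend it to the
# prompt so the language model answers from current, grounded text rather
# than from its (possibly stale) training data.

def retrieve(question, documents):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())

    def overlap(doc):
        return len(q_words & set(doc.lower().split()))

    return max(documents, key=overlap)

def build_rag_prompt(question, documents):
    """Augment the user's question with the retrieved context."""
    context = retrieve(question, documents)
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

documents = [
    "The firm's filing deadline for Q3 motions is October 15, 2024.",
    "Office parking passes are renewed every January.",
]
prompt = build_rag_prompt("When is the Q3 filing deadline?", documents)
print(prompt)
```

Tools like NotebookLM and Perplexity apply the same pattern at scale: the retrieval step supplies fresh sources, so the model's answer is anchored to them instead of to outdated training data.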
"We're generating assessments faster than ever, but our real test is ensuring that these tools are fair and reliable across diverse candidate groups." –Louis Hickman

In this episode I welcome my friend, super dad, and ex-professional wrestler Louis Hickman for a killer conversation about the ins and outs of using LLMs to create and score assessments. Louis is a professor at Virginia Tech specializing in research on AI and large language models in assessment and hiring processes. He knows a thing or two about this stuff and we waste no time tackling some really great topics centering around the cutting edge of research and practice on the subject of LLMs and assessments. This is a must-listen episode for anyone developing, or considering developing, LLM-based assessments. Or anyone who wants to educate themselves about how LLMs behave when asked to be I/O psychologists.

Topics Covered:
* LLMs in Assessment Center Role-Plays: Using LLMs to simulate realistic role-play scenarios for assessments, with the challenge of ensuring consistent, replicable candidate experiences.
* Evaluating Open-Ended Text with LLMs: How LLMs score open-ended responses and the observed biases, especially when diversity prompts only partially reduce disparities.
* Consistency in AI Scoring: Ensuring LLMs apply scoring criteria consistently across diverse candidates and settings.
* Applicant Reactions to AI Interviews: How candidates perceive AI-driven interviews, with many expressing discomfort due to the perceived inability to influence AI decisions compared to human interactions.
* Predicting Responses to Assessment Items: The potential for LLMs to predict candidate responses without actual data, though accuracy remains limited by model training and inherent biases.
* Impact on Academic Research: LLMs' influence on research publications, with concerns over AI tools favoring self-generated content and potentially amplifying biases in academic discourse.

Listen to the episode to hear the skinny on
these topics and more! And of course we have fun with this episode's “Take it or Leave it” articles.

Article 1: “The Impact of Generative AI on Labor Market Matching,” from An MIT Exploration of Generative AI, explores the use of LLMs in matching job seekers and employers.

Article 2: “Four Singularities for Research: The Rise of AI Is Creating Both Crisis and Opportunity.” This article from Ethan Mollick's Substack blog One Useful Thing discusses the positive and negative impacts of LLMs on academic research.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com
BONUS: AI-Driven Agile, Speeding Up Feedback Cycles for Better Product Iteration, And More AI Transformations with Jurgen Appelo In this BONUS episode, leadership expert and entrepreneur Jurgen Appelo joins us to dive into the transformative power of AI in today's workplaces. Creator of the unFIX model and author of Management 3.0, Jurgen shares his insights on how AI is revolutionizing team collaboration, creativity, and innovation. This engaging conversation covers practical examples, personal stories, and thought-provoking ideas for anyone interested in leveraging AI to thrive in their career and business. AI and the Future of Collaboration "AI gives me more time to focus on the things I really enjoy." Jurgen kicks off by discussing the major changes AI is bringing to how teams collaborate and get work done. He highlights how AI tools like ChatGPT are enhancing feedback loops in product development, allowing teams to gain insights faster and more efficiently. Jurgen shares how he's used AI to improve his own writing, helping his editor focus more on storytelling rather than grammar corrections. For teams, AI is already making client interactions smoother and boosting productivity. "AI gives teams more time to focus on creativity and innovation by automating repetitive tasks and improving workflow efficiency." AI as an Assistant or Creative Partner? "We need to learn to delegate to AI." Jurgen dives deeper into his personal experience of managing multiple AI systems to develop a library of use cases and patterns. He sees AI as a powerful assistant, capable of generating creative ideas and enhancing human work, but stresses that we're still in the early stages. To truly maximize AI's potential, people need to learn how to delegate tasks to AI more effectively, while AI systems evolve to help us think beyond our usual patterns. "Delegating to AI allows us to break free from old habits and explore new creative possibilities." 
AI's Role in Personal Development "AI is a general-purpose technology, like the internet was in the beginning." Jurgen sees vast potential for AI to enhance personal and professional growth, though many of its future applications are still unknown. He compares AI to the early days of the internet, a tool with endless possibilities yet to be fully realized. Right now, AI can help individuals automate simple tasks, but it has the potential to do so much more, including reshaping how we approach learning and career development. "AI could revolutionize personal development by helping people organize and prioritize their learning journeys." AI and Creativity: Can It Be a True Collaborator? "AI can give you instant feedback on whatever you create." Jurgen discusses how AI can enhance creativity within teams, providing immediate feedback on ideas and helping teams refine their concepts without leaving their desks. He mentions real-world examples, such as using AI to generate designs and suggestions in creative fields, giving people access to insights they might not have considered otherwise. "AI can act as a creative collaborator, offering immediate, actionable feedback that pushes innovation forward." The Exciting Future of AI in the Workplace "I'm an optimist—AI frees us up to do more of what we love." Looking ahead, Jurgen expresses optimism about AI's potential to change the way we work. While AI will inevitably displace some jobs, he believes it will also enable people to focus on tasks they truly enjoy. AI levels the playing field between small entrepreneurs and large enterprises by making high-quality tools accessible to everyone. This shift will create new opportunities and competition in the market. "AI will free up time for the tasks that matter most while leveling the playing field for entrepreneurs and businesses alike." Resources for Further Exploration Looking to dive deeper into the AI revolution?
Jurgen recommends the book Co-Intelligence by Ethan Mollick for those curious about AI's collaborative potential and Rebooting AI by Gary Marcus for a more skeptical view of its impact. "If you're looking to learn more about AI, these books will give you both the optimistic and cautious perspectives." About Jurgen Appelo Jurgen Appelo is a writer, speaker, and entrepreneur who helps organizations thrive in the 21st century. Creator of the unFIX model, he focuses on organization design, continuous innovation, and enhancing the human experience. Jurgen is also the author of Management 3.0 and is recognized by Inc.com as a leadership expert. You can link with Jurgen Appelo on LinkedIn.
October 14, 2024 Discussion on the book "Co-Intelligence" by Ethan Mollick by Dr. Farid Holakouee
In this episode of the Research Like a Pro Genealogy podcast, Diana and Nicole discuss using AI in locality research, focusing on the Isabella Weatherford project. They emphasize the importance of locality guides in genealogical research, as they provide essential historical context, help researchers understand available records, and shed light on migration patterns and local events that may have impacted ancestors' lives. The hosts explore how AI tools like ChatGPT, Claude, Gemini, and Perplexity can be used to create locality guides more efficiently. Diana shares her experience using AI to create a locality guide for Dallas County, Texas, in the 1870s, demonstrating how AI helped her gather historical and geographical information, create a timeline of major events, and identify relevant record collections. Diana and Nicole also discuss the strengths and limitations of different AI tools and offer tips for effectively using AI in locality research. They emphasize the importance of verifying information from AI sources and using AI as a tool to complement, rather than replace, traditional research methods. This summary was generated by Google Gemini. Links Post-apocalyptic education by Ethan Mollick - https://www.oneusefulthing.org/p/post-apocalyptic-education Using AI in Locality Research: Isabella Weatherford Project Part 3 - https://familylocket.com/using-ai-in-locality-research-isabella-weatherford-project-part-3/ Custom GPT - Diana's Genealogy Locality Guide Builder by Diana Elder - https://chatgpt.com/g/g-Y7oqvFVmP-diana-s-genealogy-locality-guide-builder Custom GPT - Locality Guide for Genealogical Research by Mark Thompson - https://chatgpt.com/g/g-TpLAIvCzD-locality-guide-for-genealogical-research Sponsor – Newspapers.com For listeners of this podcast, Newspapers.com is offering new subscribers 20% off a Publisher Extra subscription so you can start exploring today. Just use the code “FamilyLocket” at checkout. 
Research Like a Pro Resources Airtable Universe - Nicole's Airtable Templates - https://www.airtable.com/universe/creator/usrsBSDhwHyLNnP4O/nicole-dyer Airtable Research Logs Quick Reference - by Nicole Dyer - https://familylocket.com/product-tag/airtable/ Research Like a Pro: A Genealogist's Guide book by Diana Elder with Nicole Dyer on Amazon.com - https://amzn.to/2x0ku3d 14-Day Research Like a Pro Challenge Workbook - digital - https://familylocket.com/product/14-day-research-like-a-pro-challenge-workbook-digital-only/ and spiral bound - https://familylocket.com/product/14-day-research-like-a-pro-challenge-workbook-spiral-bound/ Research Like a Pro Webinar Series 2024 - monthly case study webinars including documentary evidence and many with DNA evidence - https://familylocket.com/product/research-like-a-pro-webinar-series-2024/ Research Like a Pro eCourse - independent study course - https://familylocket.com/product/research-like-a-pro-e-course/ RLP Study Group - upcoming group and email notification list - https://familylocket.com/services/research-like-a-pro-study-group/ Research Like a Pro with DNA Resources Research Like a Pro with DNA: A Genealogist's Guide to Finding and Confirming Ancestors with DNA Evidence book by Diana Elder, Nicole Dyer, and Robin Wirthlin - https://amzn.to/3gn0hKx Research Like a Pro with DNA eCourse - independent study course - https://familylocket.com/product/research-like-a-pro-with-dna-ecourse/ RLP with DNA Study Group - upcoming group and email notification list - https://familylocket.com/services/research-like-a-pro-with-dna-study-group/ Thank you Thanks for listening! We hope that you will share your thoughts about our podcast and help us out by doing the following: Write a review on iTunes or Apple Podcasts. If you leave a review, we will read it on the podcast and answer any questions that you bring up in your review. Thank you! Leave a comment or question in the comment section below.
Share the episode on Twitter, Facebook, or Pinterest. Subscribe on iTunes, Stitcher, Google Podcasts, or your favorite podcast app. Sign up for our newsletter to receive notifications of new episodes - https://familylocket.com/sign-up/ Check out this list of genealogy podcasts from Feedspot: Top 20 Genealogy Podcasts - https://blog.feedspot.com/genealogy_podcasts/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #81: Alpha Proteo, published by Zvi on September 12, 2024 on LessWrong. Following up on Alpha Fold, DeepMind has moved on to Alpha Proteo. We also got a rather simple prompt that can create a remarkably not-bad superforecaster for at least some classes of medium term events. We did not get a new best open model, because that turned out to be a scam. And we don't have Apple Intelligence, because it isn't ready for prime time. We also got only one very brief mention of AI in the debate I felt compelled to watch. What about all the apps out there, that we haven't even tried? It's always weird to get lists of 'top 50 AI websites and apps' and notice you haven't even heard of most of them.

Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. So many apps, so little time.
4. Language Models Don't Offer Mundane Utility. We still don't use them much.
5. Predictions are Hard Especially About the Future. Can AI superforecast?
6. Early Apple Intelligence. It is still early. There are some… issues to improve on.
7. On Reflection It's a Scam. Claims of new best open model get put to the test, fail.
8. Deepfaketown and Botpocalypse Soon. Bots listen to bot music that they bought.
9. They Took Our Jobs. Replit agents build apps quick. Some are very impressed.
10. The Time 100 People in AI. Some good picks. Some not so good picks.
11. The Art of the Jailbreak. Circuit breakers seem to be good versus one-shots.
12. Get Involved. Presidential innovation fellows, Oxford philosophy workshop.
13. Alpha Proteo. DeepMind once again advances its protein-related capabilities.
14. Introducing. Google to offer AI podcasts on demand about papers and such.
15. In Other AI News. OpenAI raising at $150b, Nvidia denies it got a subpoena.
16. Quiet Speculations.
How big a deal will multimodal be? Procedural games?
17. The Quest for Sane Regulations. Various new support for SB 1047.
18. The Week in Audio. Good news, the debate is over, there might not be another.
19. Rhetorical Innovation. You don't have to do this.
20. Aligning a Smarter Than Human Intelligence is Difficult. Do you have a plan?
21. People Are Worried About AI Killing Everyone. How much ruin to risk?
22. Other People Are Not As Worried About AI Killing Everyone. Moving faster.
23. Six Boats and a Helicopter. The one with the discord cult worshiping MetaAI.
24. The Lighter Side. Hey, baby, hey baby, hey.

Language Models Offer Mundane Utility

ChatGPT has 200 million active users. Meta AI claims 400m monthly active users and 185m weekly actives across their products. Meta has tons of people already using their products, and I strongly suspect a lot of those users are incidental or even accidental. Also note that less than half of monthly users use the product weekly! That's a huge drop off for such a useful product. Undermine, or improve by decreasing costs? Nate Silver: A decent bet is that LLMs will undermine the business model of boring partisans, there's basically posters on here where you can 100% predict what they're gonna say about any given issue and that is pretty easy to automate. I worry it will be that second one. The problem is demand side, not supply side. Models get better at helping humans with translating if you throw more compute at them; economists think this is a useful paper. Alex Tabarrok cites the latest paper on AI 'creativity,' saying obviously LLMs are creative reasoners, unless we 'rule it out by definition.' Ethan Mollick has often said similar things. It comes down to whether to use a profoundly 'uncreative' definition of creativity, where LLMs shine in what amounts largely to trying new combinations of things and vibing, or to No True Scotsman that and claim 'real' creativity is something else beyond that.
One way to interpret Gemini's capabilities tests is ...
In this episode of the Research Like a Pro Genealogy podcast, Diana and Nicole discuss the use of Artificial Intelligence (AI) in genealogy. Diana shares that she took a course on AI and read the book "Co-Intelligence: Living and Working with AI" by Ethan Mollick, finding it to be helpful and informative. The book discusses the history of AI and how it can be used. The author emphasizes the importance of experimenting with AI to learn its capabilities and limitations. He provides four rules for working with AI: always invite AI to the table, be the human in the loop, treat AI like a person, and assume this is the worst AI you'll ever use. Diana and Nicole then discuss the different ways AI can be used in genealogy, such as brainstorming ideas, transcribing documents, and providing feedback on research reports. They emphasize the importance of human oversight when using AI and stress that it should be seen as a tool to enhance, not replace, human expertise. Listeners will learn about the potential benefits and limitations of using AI in genealogy and gain practical tips for incorporating it into their research process. This summary was generated by Google Gemini. Links Diana Elder, "AI and Family History: Review of 'Co-Intelligence: Living and Working With AI'," blog post, 21 July 2024, Family Locket, https://familylocket.com/ai-and-family-history-review-of-co-intelligence-living-and-working-with-ai/. Co-Intelligence: Living and Working with AI by Ethan Mollick - affiliate link to Amazon - https://amzn.to/473BMfD Sponsor – Newspapers.com For listeners of this podcast, Newspapers.com is offering new subscribers 20% off a Publisher Extra subscription so you can start exploring today. Just use the code “FamilyLocket” at checkout.
Let's face it, dealing with the future isn't something we can kick down the road. And at the intersection of people, technology, and brand, change is coming faster than ever before. We could blame Generative AI. Or we can embrace it and use it as an accelerant for our teams, employers, customers, and brands... In this episode of The Trending Communicator, host Dan Nestle welcomes award-winning digital innovator, public speaker, and agency executive Rob Davis, the Chief Digital Innovation Officer at MSL, for an engaging conversation on the future of communications and marketing. Rob, a digital pioneer with a career spanning broadcast TV and leading agencies, shares his journey of invention and innovation (and his good fortune at attaining a role that Dan not-so-secretly covets). From his early days at MTV, where he developed the world's first interactive game show, to his groundbreaking work at Ogilvy and now MSL Global, Rob has consistently been at the forefront of digital innovation. His accolades, including the Provoke Media Innovator 25 and two consecutive appearances in the PRWeek Dashboard 25, merely scratch the surface of his influence and expertise. He and Dan explore how Generative AI is reshaping how we approach communications and marketing and what that means for brands. They cover the importance of understanding and utilizing AI prompts, the evolving nature of authority in content, and the critical need for brands to adapt to the rapid advancements in technology. They address how AI impacts brand communication—shifting the dynamics between brands, media, and consumers. Rob stresses the need for brands to curate content that directly engages AI models to enhance discoverability and relevance. Moving on to the future of work, Rob notes that AI is reshaping job paths in communications, offering opportunities to integrate new talent and innovative roles. He encourages professionals to embrace AI to remain competitive in the workforce. 
Whether you're a communications novice or a seasoned professional, this episode offers lessons on embracing change, leveraging technology for enhanced communication strategies, and staying ahead in a fast-paced digital world. There's no doubt that AI will significantly impact the future of communications and marketing - listen in to hear what you can do about it for your future and the brands you represent. Listen in and hear about... How generative AI is revolutionizing communications and marketing. Mastering AI prompts to boost content creation and engagement. AI search algorithms reshaping content discoverability and relevance. Workforce dynamics shifting with AI, creating new job opportunities. Brands adapting content strategies to thrive in an AI-driven world. Opportunities and challenges of AI and emerging technologies in business. Notable Quotes On the Evolution of Technology and Career: - "I came out of college at a time in the early nineties when everything was starting to change and everything I was passionate about, music, photography, video, storytelling, all individual passions, all of a sudden had a place to go, and I've just kind of let the industry take me along." — Rob Davis [00:03:42 → 00:04:02] On the Thrill of Innovation: - "The thrill of that. It's funny, it wasn't an ego building thrill. It was a hunger building thrill. I said, I got to find more of this. Where does this come from? How do I tap into this bucket over and over again? And that's kind of what's driven me all along." — Rob Davis [00:06:24 → 00:06:42] On the Acceleration of Technology: - "I look at the years between advances from that era and it's like those advances are happening in months, if not weeks." — Rob Davis [00:09:31 → 00:09:47] On the Power of AI and Prompting: - "You can go to chat GPT the first time and you're probably going to treat it like this is a fortune telling arcade game. Zoltan. 
Zoltar, you're asking a question and you're going to get an answer, and as you said, you say, oh, that's cool, and you're going to walk away without realizing that you can start a chain of prompts that's going to develop something much deeper than that." — Rob Davis [00:16:38 → 00:16:58] On the Creativity in AI: - "You are creating. Every prompt you write, no matter how basic or complex it is, is creating something. And I think that that's part of the adrenaline rush for me." — Rob Davis [00:21:01 → 00:21:14] On the Future of Content Creation: - "We don't have to wait. We can do more than we can possibly imagine right now, and it's only going to get better, but we don't have to wait for the next piece. We can revolutionize almost everything that we're doing right now." — Rob Davis [00:22:14 → 00:22:31] On the Changing Nature of Authority in Content: - "The judge of authority is the consumer. So if the content, to your point, is not appealing to the consumer, it's not in their vernacular, it's not something that they feel ready to consume, they're not going to interact with it, and it is not going to get that algorithmic imprimatur of authoritativeness." — Rob Davis [00:41:08 → 00:41:34] On the Power Shift in Content Creation: - "Because the consumer is now, or the customer…is getting more and more answers from AI, from generative AI. Well, now that puts a little bit more power in the hands of the brands, doesn't it?" — Dan Nestle [00:45:49 → 00:46:09] On the Future of Communication: - "The roles that are going to be created, the tasks that are going to be created are very different. It's going to create a whole bundle of new tasks. Those new tasks will have to be apportioned to somebody which become new jobs also."
— Dan Nestle [00:54:27 → 00:54:59] Resources and Links Dan Nestle The Trending Communicator | Website Daniel Nestle | LinkedIn Dan Nestle | Twitter/X Rob Davis Rob Davis's Website Rob Davis | LinkedIn Timestamped key moments from this episode (as generated by Fireflies.ai)
What happens when machines become funnier, kinder, and more empathetic than humans? Do robot therapists save lives? And should Angela credit her virtual assistant as a co-author of her book?

SOURCES:
Robert Cialdini, professor emeritus of psychology at Arizona State University.
Reid Hoffman, co-founder and executive chairman of LinkedIn; co-founder and board member of Inflection AI.
Kazuo Ishiguro, novelist and screenwriter.
Ethan Mollick, professor of management and co-director of the Generative A.I. Lab at the Wharton School of the University of Pennsylvania.
Ann Patchett, author.
Kevin Roose, technology columnist for The New York Times and co-host of the podcast Hard Fork.
Niko Tinbergen, 20th-century Dutch biologist and ornithologist.
Lyle Ungar, professor of computer and information science at the University of Pennsylvania.
E. B. White, 20th-century American author.

RESOURCES:
Co-Intelligence: Living and Working with AI, by Ethan Mollick (2024).
"Meet My A.I. Friends," by Kevin Roose (The New York Times, 2024).
"Loneliness and Suicide Mitigation for Students Using GPT3-Enabled Chatbots," by Bethanie Maples, Merve Cerit, Aditya Vishwanath, and Roy Pea (NPJ Mental Health Research, 2024).
"AI Can Help People Feel Heard, but an AI Label Diminishes This Impact," by Yidan Yin, Nan Jia, and Cheryl J. Wakslak (PNAS, 2024).
"Romantic AI Chatbots Don't Have Your Privacy at Heart," by Jen Caltrider, Misha Rykov and Zoë MacDonald (Mozilla Foundation, 2024).
Klara and the Sun, by Kazuo Ishiguro (2021).
The Study of Instinct, by Niko Tinbergen (1951).
Pi.

EXTRAS:
"Are Our Tools Becoming Part of Us?" by People I (Mostly) Admire (2024).
"Is GPS Changing Your Brain?" by No Stupid Questions (2023).
"How to Think About A.I.," series by Freakonomics Radio (2023).
"Would You Rather See a Computer or a Doctor?" by Freakonomics, M.D. (2022).
Ethan Mollick joins us today to share his insights into the rapidly evolving world of artificial intelligence. Ethan is an associate professor at the Wharton School of the University of Pennsylvania, specializing in innovation and entrepreneurship. He also co-directs the Generative AI Lab at Wharton, which focuses on developing prototypes and conducting research to explore how AI can help humans thrive while reducing risks. His body of work includes the book Co-Intelligence, a New York Times bestseller that delves into AI's current state and future, as well as numerous published papers in top academic journals. In this episode, Ethan takes us through his journey from working at MIT's Media Lab with AI pioneer Marvin Minsky to becoming a leading voice on the impact of AI on work and education. He shares practical advice on how creatives, including game designers, can wield AI to enhance their work while navigating its ethical complexities. Ethan and I reflect on co-designing the Breakthrough Game, which has been used by organizations like Google and Twitter to boost innovation and creativity. There's a lot to learn from this episode, so get those notebooks out. Enjoy! Get full access to Think Like A Game Designer at justingarydesign.substack.com/subscribe
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Ethan Mollick is the Co-Director of the Generative AI Lab at Wharton, which builds prototypes and conducts research to discover how AI can help humans thrive while mitigating risks. Ethan is also an Associate Professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship, and also examines the effects of artificial intelligence on work and education. His papers have been published in top journals and his book on AI, Co-Intelligence, is a New York Times bestseller.

In Today's Episode with Ethan Mollick We Discuss:

1. Models: Is More Compute the Answer? How has Ethan changed his mind on whether we have a lot of room to run in adding more compute to increase model performance? What will happen with models in the next 12 months that no one expects? Why will open models immediately be used by bad actors, and what should happen as a result? Data, algorithms, compute: what is the biggest bottleneck, and how will this change with time?

2. OpenAI: The Missed Opportunity, Product Roadmap and AGI: Why does Ethan believe that OpenAI is completely out of touch with creating products that consumers want to use? Which product did OpenAI shelve that will prove to be a massive mistake? How does Ethan analyse OpenAI's pursuit of AGI? Why does Ethan think the heuristic from Brad, COO @ OpenAI, that "startups should be threatened if they are not excited by a 100x improvement in model" is total BS?

3. VCs, Startups and AI Labs: What the World Does Not Understand: What do big AI labs not understand about big companies? What are the biggest mistakes companies are making when implementing AI? Why are startups not being ambitious enough with AI today? What are the single biggest ways consumers can and should be using AI today?
Hosts Will Larry and Chad Pytel interview Brock Dubbels, Principal UX and AI Researcher at CareTrainer.ai. Brock discusses how CareTrainer.ai leverages AI to address the current care crisis in elderly populations. He highlights the growing demographic of individuals over 70 and the significant shortage of caregivers, exacerbated by COVID-19. CareTrainer.ai aims to alleviate this by automating routine tasks, allowing caregivers to focus on building meaningful relationships and providing personalized, compassionate care. The platform utilizes AI to manage tasks such as documentation, communication, and monitoring, which helps caregivers spend more time engaging with patients, ultimately enhancing the quality of care and reducing caregiver burnout. Brock elaborates on the specific tasks that CareTrainer.ai automates, using an example from his own experience. He explains how AI can transform transactional interactions into conversational ones, fostering trust and authenticity between caregivers and patients. By automating repetitive tasks, caregivers are freed to engage more deeply with patients, encouraging them to participate in their own care. This not only improves patient outcomes but also increases job satisfaction and retention among caregivers. Brock mentions the alarming attrition rates in caregiving jobs and how CareTrainer.ai's approach can help mitigate this by creating more rewarding and relational caregiving roles. Additionally, Brock discusses the apprenticeship model CareTrainer.ai employs to train caregivers. This model allows new caregivers to learn on the job with AI assistance, accelerating their training and integrating them more quickly into the workforce. 
He emphasizes the importance of designing AI tools that are user-friendly and enhance the caregiving experience rather than replace human interaction. By focusing on customer obsession and continuously iterating based on feedback, CareTrainer.ai aims to create AI solutions that are not only effective but also enrich the entire caregiving profession. CareTrainer.ai (https://www.caretrainer.ai/) Follow CareTrainer.ai on LinkedIn (https://www.linkedin.com/company/caretraining-ai/). Follow Brock Dubbels on LinkedIn (https://www.linkedin.com/in/brockdubbels/). Visit his website: brockdubbels.com (https://brockdubbels.com/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Transcript: WILL: This is the Giant Robots Smashing Into Other Giant Robots podcast, where we explore the design, development, and business of great products. I'm your host, Will Larry. CHAD: And I'm your other host, Chad Pytel. And with us today is Brock Dubbels, Principal UX and AI Researcher at CareTrainer.ai, which is transforming health care and caregiving with a human-first approach to artificial intelligence. Brock, thank you for joining us. BROCK: Hey, thanks for having me, guys. I'm excited to talk about this. CHAD: Brock, let's get started with just diving into what CareTrainer.ai actually does. You know, so many businesses today are getting started with or incorporating artificial intelligence into their product offerings. And I know that it's been something that you've been working on for a long time. So, what is CareTrainer? BROCK: Well, CareTrainer is an opportunity in the midst of a crisis. So, right now, we have what's called a care crisis for the elderly populations. If you were to look at the age of the North American population and look at it over the next 10 years, about 65% of our population will be over the age of 70. And right now, we are understaffed in caregiving by almost 20%.
Caregivers, especially after COVID, are leaving at about a 40% clip. And enrollment in these care programs is down 9%, but yet that older population is growing. And in the midst of this, we've just recently had an executive order called the Older Americans Act, which states that we actually have to reduce the ratio of caregivers to patients, and we need to give more humane interaction to the patients in these facilities, in homes and help them to retain their dignity. Many of them lose their identity to diagnosis, and they're often referred to as the tasks associated with them. And what CareTrainer attempts to do is take many of the tasks out of the hands of the caregivers so that they can focus on what they're good at, which is building relationships, learning and understanding, acting with curiosity and compassion, and demonstrating expert knowledge in the service to caring for patients, either in homes, facilities or even post-acute care. WILL: You mentioned your hope is to take some of the tasks away from the caregivers. Can you go a little bit deeper into that? What tasks are you referring to? BROCK: Let's think about an example. My mom was a public health nurse, and she worked in child maternal health. And these were oftentimes reluctant counseling sessions between she and a young mother or a potential mother. And if she were sitting there with a clipboard or behind a computer screen and looking at the screen, or the clipboard, and doing the interview with questions, she would probably not get a very good interview because she's not making a relationship. It's not conversational; it's transactional. And when we have these transactional relationships, oftentimes, we're not building trust. We're not expressing authenticity. We're not building relationships. It's not conversational. And we don't get to know the person, and they don't trust us. So, when we have these transactional relationships, we don't actually build the loyalty or the motivation. 
And when we can free people of the tasks associated with the people that they care for by automating those tasks, we can free them up to build relationships, to build trust, and, in many cases, become more playful, expose their own vulnerability, their own past, their own history, and, hopefully, help these patients feel a little bit more of their worth. Many of these people worked meaningful lives as school teachers, working at the fire department, working at the hardware store. And they had a lot of friends, and they did a lot for their community. And now they're in a place where maybe there's somebody taking care of them that doesn't know anything about them, and they just become a person in a chair that, you know, needs to be fed at noon. And I think that's very sad. So, what we help to do is generate the conversations people like to have, learn the stories. But more importantly, we do what's called restorative care, which is, when we have a patient who becomes much more invested in their own self-care, the caregiver can actually be more autonomous. So, let's say it's an elderly person, and, in the past, they wouldn't dress themselves. But because they've been able to build trust in a relationship, they're actually putting on their own blouse and slacks now. For example, a certified nursing assistant or a home health aide can actually make the bed while they're up dressing because the home health aide or certified nursing assistant is not dressing them or is not putting the toothpaste on the toothbrush. So, what we're doing is we're saying, "Let's get you involved in helping with restorative care." And this also increases retention amongst the caregivers. One of the things that I learned in doing an ethnography of a five-state regional healthcare system was that, among these caregivers, there was an attrition rate of about 45% within the first 30 days of work. So, it's a huge expense for the facility, that attrition rate.
One of the reasons why they said they were leaving is because they felt like they weren't building any relationships with the people that they were caring for, and it was more like a task than it was a care or a relationship. And, in fact, in many cases, they described it as maid service with bedpans for grumpy people [chuckles]. And many of them said, "I know there's somebody nice down there, but I think that they've just become a little bit hesitant to engage because of the huge number of people that come through this job, and the lack of continuity, the lack of relationship, the lack of understanding that comes from building a relationship and getting to know each other." And when we're talking about taking the tasks away, we're helping with communication. We're actually helping with diagnosis and charting. We're helping with keeping the care plan updated and having more data for the care plan so that nurse practitioners and MDs can have a much more robust set of data to make decisions upon when they meet with this patient. And this actually reduces the cost for the care facilities because there's less catastrophic care in the form of emergency rooms, prescriptions, assisted care, as well as they actually retain their help. The caregivers stay there because it's a good quality of life. And when those other costs go down, some of the institutions that I work for actually put that money back into more patient care, hiring more people to have more meaningful, humane interactions. And that's what I mean about taking the tasks off of the caregiver so that they can have the conversations and the relational interactions, rather than the transactional interactions. 
CHAD: One thing I've heard from past guests and clients that we've had in this space, too, is, to speak more to the problem, the lack of staff and the decline in the quality of care and feeling like it's very impersonal causes families to take on that burden or family members to take on that burden, but they're not necessarily equipped to do it. And it sort of causes this downward spiral of stress and quality of care that impacts far more than just the individual person who needs the care. It often impacts entire families. BROCK: Oh yeah. Currently, they're estimating that family, friends, and communities are providing between $90,000 and $260,000 worth of care per person per year. And this is leading to, you know, major financial investments that many of these people don't have. It leads to negative health outcomes. So, in a lot of ways, what I just described is providing caregiver respite, and that is providing time for a caregiver to actually engage with a person that they're caring for, teaching them communication skills. And one of the big things here is many of these institutions and families are having a hard time finding caregivers. Part of that is because we're using old systems of education in new days that require new approaches to the problem. And the key thing that CareTrainer does is it provides a guided apprenticeship, which means that you can earn while you learn. And what I mean by that is, rather than sitting in a chair in front of a screen doing computer-based training off of a modified PowerPoint with multiple-choice tests, you can actually be in the context of care and earning while you learn rather than learning to earn. CHAD: Well, at thoughtbot, we're a big believer in apprenticeships as a really solid way of learning quickly from an experienced mentor in a structured way. I was excited to hear about the apprenticeship model that you have. BROCK: Well, it's really exciting, isn't it?
I mean, when you begin looking at what AI can do as...let's call it a copilot. I thought some of the numbers that Ethan Mollick at Wharton Business School shared on his blog and his study with Boston Consulting Group, which is that an AI copilot can actually raise the quality of work, raise the floor to 82%, what he calls mediocrity. 82% was a pretty good grade for a lot of kids in my classes back when I was a Montessori teacher. But, in this case, what it does is it raises the floor to care by guiding through apprenticeship, and it allows people to learn through observation and trial and error. And people who are already at that 82nd percentile, according to Mollick's numbers, increase their productivity by 40%. The thing that we're not clear on is if certain people have a greater natural proficiency or proclivity for using these care pilots or if it's a learned behavior. CHAD: So, the impact that CareTrainer can have is huge. The surface area of the problem and the size of the industry is huge. But often, from a product perspective, what we're trying to do is get to market, figure out the smallest addressable, minimum viable product. Was that a challenge for you to figure out, okay, what's the first thing that we do, and how do we bring that to market and without getting overwhelmed with all the potential possibilities that you have? BROCK: Yeah, of course. I start out with what I call a GRITS model. I start out with, what are my goals? Then R, let's review the market. How is this problem being addressed now? I, what are my ideas for addressing these goals, and what's currently being done? And T, what tasks need to be completed in order to test these ideas? And what steps will I take to test them and iterate as far as a roadmap? And what that allowed me to do is to begin saying, okay, let's take the ideas that I can bring together first that are going to have the first initial impact because we're bootstrapping. 
And what we need to be able to do is get into a room with somebody who realizes that training caregivers and nursing is something that needs a review, maybe some fresh ideas. And getting that in front of them, understanding that that's our MVP 1 was really important. And what was really interesting is our MVP 2 through 5, we've begun to see that the technology is just exponential, the growth and progress. Our MVP 2 we thought we're going to be doing a heck of a lot of stuff with multimedia reinforcement learning. But now we're finding that some of the AI giants have actually done the work for us. So, I have just been very happy that we started out simple. And we looked at what is our core problem, which is, you know, what's the best way to train people? And how do we do that with the least amount of effort and the most amount of impact? And the key to it is customer obsession. And this is something I learned at Amazon as their first principle. And many of the experiences that I brought from places like Amazon and other big tech is, how do I understand the needs of the customer? What problems do they have, and what would make this a more playful experience? And, in this case, I wanted to design for curiosity. And the thing that I like to say about that is AI chose its symbol of the spark really smartly. And I think the spark is what people want in life. And the spark is exploring, and it's finding something. And you see this kind of spark of life, this learning, and you discover it. You create more from it. You share it. It's enlightening. It's inspirational. It makes people excited. It's something that they want to share. It's inventing. It's creation. I think that's what we wanted to have people experience in our learning, rather than my own experience in computer-based training, which was sitting in front of a flashified PowerPoint with multiple choice questions and having the text read to me. And, you know, spending 40 hours doing that was kind of soul-killing. 
And what I really wanted to do was be engaged and start learning through experience. And that's what came down to our MVP 1 is, how do we begin to change the way that training occurs? How can we change the student experience and still provide for the institutional needs to get people on the floor and caring for people? And that was our first priority. And that's how we began to make hard decisions about how we were going to develop from MVP 1, 2, 3, 4, and 5 because we had all the big ideas immediately. And part of that is because I had created a package like this back in 2004 for a five-state regional care provider in the Midwest. Back then, I was designing what could only be called a finite game. I'm designing in Flash for web. I'm doing decision trees with dialogue, and it's much like a video game, but a serious game. It's getting the assessment correct in the interactions and embedding the learning in the interaction and then being able to judge that and provide useful feedback for the player. And what this did was it made it possible for them to have interactive learning through doing in the form of a video game, which was a little bit more fun than studying a textbook or taking a computer-based test. It also allowed the health system a little bit more focus on the patients because what was happening is that they would be taking their best people off the floor and taking a partial schedule to train these new people. But 45% of those that they were training were leaving within the first 30 days. So, the game was actually an approach to providing that interaction as a guided apprenticeship without taking their best people off the floor into part-time schedules and the idea that they might not even be there in 30 days. So, that's kind of a lot to describe, but I would say that the focus on the MVP 1 was, this is the problem that we're going to help you with. We're going to get people out of the seats and onto the floor, off the screen, caring for people. 
And we're going to guide them through this guided apprenticeship, which allows for contextual computing and interaction, as we've worked with comparing across, like, OpenAI, Anthropic, Google, Mistral, Grok, trying these different approaches to AI, figuring out which models work best within this context. And, hopefully, when we walk in and we're sitting with an exec, we get a "Wow," [laughs]. And that's the big thing with our initial technology. We really want a wow. I shared this with a former instructor at the University of Minnesota, Joe Gaugler, and I said...I showed him, and he's like, "Wow, why isn't anybody doing this with nursing and such?" And I said, "Well, we are," you know, that's what I was hoping he would say. And that's the thing that we want to see when we walk into somebody's office, and we show them, and they say, "Wow, this is cool." "Wow, we think it's cool. And we hope you're going to want to go on this journey with us." And that's what MVP 1 should do for us is solve what seems like a little problem, which is a finite game-type technology, but turn it into an infinite game technology, which is what's possible with AI and machine learning. WILL: I love, you know, you're talking about your background, being a teacher, and in gaming, and I can see that in your product, which is awesome. Because training can be boring, especially if it's just reading or any of those things. But when you make it real life, when you put someone, I guess that's where the quote comes from, you put them in the game, it's so much better. So, for you, with your teacher background and your gaming background, was there a personal experience that you had that brought out your passion for caregiving? BROCK: You know, my mom is a nurse. She has always been into personal development. By the time I was in sixth grade, I was going to CPR classes with her while she was [inaudible 19:22] her nursing thing [laughs]. 
So, I was invited to propose a solution for the first version of CareTrainer, which had a different name back in 2004, which we sold. That led to an invitation to work on and support the virtual clinic for the University of Minnesota Medical School, which is no longer a thing (the virtual clinic, that is; the medical school is still one of the best in the country): a virtual stethoscope, writing grants as an academic for elder care. And I would have to say my personal story is that at the end of their lives, I took care of both my maternal grandmother in her home while I was going to college. And then, I took care of my paternal grandfather while I was going to college. And, you know, those experiences were profound for me because I was able to sit down and have coffee with them, tell jokes, learn about their lives. I saw the stories that went with the pictures. And I think one of the greatest fears that I saw in many of the potential customers that I've spoken to is at the end of a loved one's life that they didn't learn some of the things that they had hoped from them. And they didn't have the stories that went with all the pictures in the box, and that's just an opportunity missed. So, I think those are some of the things that drive me. It's just that connection to people. And I think that's what makes us humane is that compassion, that wanting to understand, and, also, I think a desire to have compassion and to be understood. And I think that's where gaming and play are really important because making mistakes is part of play. And you can make lots of mistakes and have lots of ways to solve a problem in a game. Whereas in computer-based training and standardized tests, which I used to address as a teacher, there's typically one right answer, and, in life, there is rarely a right answer [laughs]. CHAD: Well, and not really an opportunity to learn from mistakes either.
Like, you don't necessarily get an opportunity on a standardized test to review the answers you got wrong in any meaningful way and try to learn from that experience. BROCK: Have you ever taken one of those tests and you're like, well, that's kind of right, but I think my answer is better, but it's not here [laughter]? I think what we really want from schools is creativity and innovation. And when we're showing kids that there's just a right answer, we kind of take the steam out of their engine, which is, you know, well, what if I just explore this and make mistakes? And I remember, in high school, I had an art teacher who said, "Explore your mistakes." Maybe you'll find out that the mistake is intentional. Maybe it's a feature, not a bug [laughs]. I think when I say inculcate play or inspire play, there's a feeling of psychological safety that we can be vulnerable, that we can explore, we can discover; we can create, and we can share. And when people say, "Oh, well, that's stupid," you can say, "Well, I was just playing. I'm just exploring. I discovered this. I kind of messed around with it a little bit, and I wanted to show you." And, hopefully, the person backs off a little bit from their strong statement and says, "Oh, I can see this and that." And, hopefully, that's the start of a conversation and maybe a startup, right [laughs]? CHAD: Well, there are so many opportunities in so many different industries to have an impact by introducing play. Because, in some ways, I feel like that may have been lost a little bit in so many sort of like addressing problems at scale or when scaling up to particular challenges. I think we trend towards standardization and lose a little bit of that. BROCK: I agree. I think humans do like continuity and predictability. But what we find in product is that when we can pleasantly surprise, we're going to build a customer base, you know, that doesn't come from, you know, doing the same thing all the time that everybody else does.
That's kind of the table stakes, right? It works. But somebody is going to come along that does it in a more interesting way. And people are going to say, "Oh." It's like the arts and crafts effect in industrialization, right? Everybody needs a spoon to eat soup, a lot of soup [laughs]. And somebody can make a lot of spoons. And somebody else says, "Well, I can make spoons, too." "And how do I differentiate?" "Well, I've put a nice scrollwork design on my spoon. And it's beautiful, versus this other very plain spoon. I'll sell it to you for a penny more." And most people will take the designed thing, the well-designed thing that provides some beauty and some pleasure in their life. And I think that's part of what I described as the spark is that realization that we live in beauty, that we live in this kind of amazing place that inspires wonder when we're open to it. MID-ROLL AD: When starting a new project, we understand that you want to make the right choices in technology, features, and investment but that you don't have all year to do extended research. In just a few weeks, thoughtbot's Discovery Sprints deliver a user-centered product journey, a clickable prototype or Proof of Concept, and key market insights from focused user research. We'll help you to identify the primary user flow, decide which framework should be used to bring it to life, and set a firm estimate on future development efforts. Maximize impact and minimize risk with a validated roadmap for your new product. Get started at: tbot.io/sprint. WILL: You mentioned gamifying the training and how users are more involved. It's interesting because I'm actually going through this with my five-year-old. We're trying to put him in kindergarten, and he loves to play. And so, if you put him around a game, he'll learn it. He loves it. But most of the schools are like, workbooks, sit down; focus, all of those things. 
And it probably speaks to your background as being a Montessori teacher, but how did you come up with gamifying it for the trainee, I guess you could say? Like, how did you come up with that plan? Because I feel like in the school systems, a lot of that is missing because it's like, like you said, worksheets equal that boring PowerPoint that we have to sit down and read and stuff like that. So, how did you come up with the gamifying it when society is saying, "Worksheets, PowerPoints. Do it this way." BROCK: I think that is something I call the adult convenience model. Who's it better for: the person who has to do the grading and the curriculum design, or the kid doing the learning? And I think that, in those cases, the kid doing the learning misses out. And the way that we validate that behavior is by saying, "Well, you've got to learn how to conform. You've got to learn how to put your own interests and drives aside and just learn how to focus on this because I'm telling you to do it." And I think that's important, to be able to do what you're asked to do in a way that you're asked to do it. But I think that the instructional model that I'm talking about takes much more up-front thought. And where I came from with it is studying the way that I like to learn. I struggled in school. I really did. I was a high school dropout. I went to junior college in Cupertino, and I was very surprised to find out that I could actually go to college, even though I hadn't finished high school. And I began to understand that it's very different when you get to college, so much more of it is about giving you an unstructured problem that you have to address. And this is the criteria under which you're going to solve the problem and how I'm going to grade you. And these are the qualities of the criteria, and what this is, is basically a rubric. We actually see these rubrics and such in products. 
So, for example, when I was at American Family, we had this matrix of different insurance policies and all the different things in the column based upon rows that you would get underneath either economy, standard, or performance. And I think it was said by somebody at Netflix years ago; there's only two ways to sell bundled and unbundled. The idea is that there were these qualities that changed as a gradient or a ratio as you moved across this matrix. And the price went up a little bit for each one of those qualities that you added into the next row or column, and that's basically a rubric. And when we begin to create a rubric for learning, what we're really doing is moving into a moment where we say, "This is the criteria under which I'm going to assess you. These are the qualities that inform the numbers that you're going to be graded with or the letter A, B, or C, or 4, 3, 2, 1. What does it mean to have a 4? Well, let me give you some qualities." And one of the things that I do in training companies and training teams is Clapping Academy. You want to do that together? WILL: Yeah, I would love to. BROCK: Would you like to try it here? Okay. Which one of you would like to be the judge? WILL: I'll do it. BROCK: Okay. As the judge, you're going to tell me thumbs up or thumbs down. I'm going to clap for you. Ready? [Claps] Thumbs up or thumbs down? CHAD: [laughs] WILL: I say thumbs up. It was a clap [laughs]. BROCK: Okay. Is it what you were expecting? WILL: No, it wasn't. BROCK: Ah. What are some of the qualities of clapping that we could probably tease out of what you were expecting? Like, could volume or dynamics be one? WILL: Yeah, definitely. And then, like, I guess, rhythm of it like music, like a music rhythm of it. BROCK: Okay. In some cases, you know, like at jazz and some churches, people actually snap. They don't clap. So, hands or fingers or style. 
So, if we were to take these three categories and we were to break them 4, 3, 2, 1 for each one, would a 4 be high volume, or would it be middle volume for you? WILL: Oh, wow. For that, high volume. BROCK: Okay. How about rhythm? Would it be 4 would be really fast; 1 would be really slow? I think slow would be...we have this cultural term called slow clapping, right [laughter]? So, maybe that would be bad, right [laughter]? A 1 [laughter]? And then, style maybe this could be a non-numerical category, where it could just be a 1 or a 2, and maybe hands or slapping a thigh or snapping knuckles. What do you think? WILL: I'm going off of what I know. I guess a clap is technically described as with hands. So, I'll go with that. BROCK: Okay, so a 4 would be a clap. A 3 might be a thigh slap [laughter]. A 2 might be a snap, and a 1 would be air clap [laughter]. WILL: Yep. BROCK: Okay. So, you can't see this right now. But let's see, if I were to ask you what constitutes a 12 out of 12 possible, we would have loud, fast, hand-to-hand clap. I think we could all do it together, right [Clapping]? And that is how it works. What I've just done is I've created criteria. I've created gradients or qualities. And then, we've talked about what those qualities mean, and then you have an idea of what it might look like into the future. You have previewed it. And there's a difference here in video games. A simulation is where I copy you step by step, and I demonstrate, in performance, what's been shown to me to be accurate to what's been shown to me. Most humans don't learn like that. Most of us learn through emulation, which is we see that there's an outcome that we want to achieve, and we see how it starts. But we have to improvise between the start and the end. In a book by Michael Tomasello on being human...he's an anthropologist, and he studies humans, and he studied other primates like great apes. 
And he talks about emulation as like the mother using a blade of grass, licking it, and putting it down a hole to collect ants so that she can eat the ants. And oftentimes, the mother may have their back to her babies. And the babies will see the grass, and they'll see that she's putting it in her mouth, but they won't see the whole act. So, they've just [inaudible 33:29] through trial and error, see if they can do it. And this is the way an earlier paper that I wrote in studying kids playing video games was. We start with trial and error. We find a tactic that works for us. And then, in a real situation, there might be multiple tactics that we can use, and that becomes a strategy. And then, we might choose different strategies for different economic benefits. So, for example, do I want to pay for something with pennies or a dollar, or do I want a hundred pennies to carry around? Or would I rather have a dollar in a game, right? We have to make this decision of, what is the value of it, and what is the encumbrance of it? Or if it's a shooting game, am I going to take out a road sign with a bazooka when I might need that bazooka later on? And that becomes economic decision-making. And then, eventually, we might have what's called topsight, which is, I understand that the game has these different rules, opportunities, roles, and experiences. How do I want to play? For example, Fallout 4 was a game that I really enjoyed. And I was blown away when I found out that a player had actually gone through the Final Boss and never injured another non-player character in the game. They had just done the whole thing in stealth. And I thought that is an artistic way to play. It's an expression. It's creative. It's an intentional way of moving through the game. 
And I think that when we provide that type of independent, individual expression of learning, we're allowing people to have a unique identity, to express it creatively, and to connect in ways that are interesting to other people so that we can learn from each other. And I think that's what games can do. And one of the hurdles that I faced back in 2004 was I was creating a finite game, where what I had coded in decision trees, in dialogue, in video interactions, once that was there, that was done. Where we're at now is, I can create an infinite game because I've learned how to leverage machine learning in order to generate lots of different contexts using the type of criteria and qualities that I described to you in Clapping Academy, that allow me to evaluate many different variations of a situation, but with the same level of expectation for professionalism, knowledge and expertise, communication, compassion, curiosity. You know, these are part of the eight elements of what is valued in the nursing profession. And when we have those rubrics, when we have that matrix, we begin to move into a new paradigm in teaching and learning because there's a much greater latitude and variety of how we get up the mountain. And that's one of the things that I learned as a teacher is that every kid comes in differently, but they're just as good. And every kid has a set of gifts that we can have them, you know, celebrate in service to warming up cold spots. And I think that sometimes kids are put into situations, and so are adults, where they're told to overcome this cold spot without actually leveraging the things that they're good at. And the problem with that is, in learning sciences, it's a transfer problem, which is if I learn it to pass the test, am I ever going to apply it in life, or is it just going to be something that I forget right away? And my follow-ups on doing classroom and learning research is that it is usually that. They learned it for the test. 
They forgot it, and they don't even remember ever having learned it. And the greatest gift that I got, having been a teacher, was when my wife and I would, I don't know, we'd be somewhere like the grocery store or walking out of a Target, and a couple of young people would come up and say, "Yo, Mr. Dubbs," And I'd be like, "Hey [laughs]!" And they're like, "Hey, man, you remember when we did that video game class and all that?" And I was like, "Yeah, you were so good at that." Or "Remember when we made those boats, and we raced them across the pool?" "Yeah, yeah, that was a lot of fun, wasn't it?" And I think part of it was that I was having as much fun doing the classes and the lessons as they were doing it. And it's kind of like a stealth learning, where they are getting the experience to populate these abstract concepts, which are usually tested on these standardized choice tests. And it's the same problem that we have with scaling a technology. Oftentimes, the way that we scale is based on conformity and limited variation when we're really scaling the wrong things. And I think it's good to be able to scale a lot of the tasks but provide great variety in the way that we can be human-supported around them. So, sure, let's scale sales and operations, but let's also make sure that we can scope out variation in how we do sales, and how we do customer service, and how we do present our product experience. So, how do we begin to personalize in scope and still be able to scale? And I think that's what I'm getting at as far as how I'm approaching CareTrainer, and how I'm approaching a lot of the knowledge translation that we're doing for startups, and consulting with larger and medium-sized businesses on how they can use AI. CHAD: That's awesome. Bringing it back to CareTrainer, what are some of the hurdles or cold spots that are in front of you and the business? What are the next steps and challenges in front of you? 
BROCK: I think the big thing is that I spend a good two to three [laughs] hours a day reading about the advances in the tech, you know, staying ahead of the knowledge translation and the possible applications. I mean, it's hard to actually find time to do the work because the technology is moving so fast. And, like I said, we were starting to build MVP 2, and we realized, you know what, this is going to be done for us in a little while. You know, it'd be cool if we can do this bespoke. But why not buy the thing that's already there rather than creating it from scratch, unless we're going to do something really different? I think that the biggest hurdle is helping people to think differently. And with the elder care crisis and the care crisis, I think that we really have to help people think differently about the things that we've done. I think regulation is really important, especially when it comes to health care, treatment, prescription safety. I think, though, that there are a lot of ways that we can help people to understand those regulations rather than put them in a seat in front of a monitor. CHAD: I think people respond to, you know, when there's a crisis, different people respond in different ways. And it's a natural tendency to not want to rock the boat, not introduce new things because that's scary. And adding more, you know, something that is scary to a difficult situation already is hard for some people. Whereas other people react to a crisis realizing that we got into the crisis for a reason. And the old ways of doing things might not necessarily be the thing to get us out of it. BROCK: Yeah, I totally agree. When I run into that, the first thought that comes to my head is, when did you stop learning [laughs]? When did you stop seeking learning? Because, for me, if I were to ever stop learning, I'd realize that I'd started dying. 
And that's what I mean by the spark, is, no matter what your age, as long as you're engaged in seeking out learning opportunities, life is exciting. It's an adventure. You're discovering new frontiers, and, you know, that's the spark. I think when people become complacent, and they say, "Well, this is the way we've always done it," okay, has that always served us well? And there are a lot of cultural issues that go with this. So, for example, there are cultural expectations about the way kids learn in class. Like, kids who come from blue-collar families might say, "Hey, you know what? My kid is going to be doing drywall, or he's going to be working fixing cars, or he's going to be in construction, or why does he need to do this? Or why does she need to do that? And, as a parent, I don't even understand the homework." And then, there are the middle-class folks who say, "You know what? I'm given these things. They need to be correct, accurate, and easy to read. And that's my job. And I don't see this in my kids' curriculum." And then, there are the creatives who say, "Hey, you know, this has nothing to do with where my kid is going. My kids are creative. They're going to have ambiguous problems that they have to come up with creative solutions for." Then you get to the executive class where, like, these elite private schools, where they say, "My kid is going to be a leader in the industry, and what they should be doing is leading groups of people through an activity in order to accomplish a goal." And those are four different pedagogical approaches to learning. So, I'm wondering, what is it that we expect from our caregivers? And I've got kind of a crazy story from that, where this young woman, [SP] Gemma, who was a middle school student, I gave her the option, along with my other kids, to either take a standardized test on Greek myths, or they could write their own myth. And she wrote this myth about a mortal who fell in love with a young goddess. 
Whenever they would wrap and embrace and kiss, a flame would occur. One day the mother found out and says, "Oh, you've fallen in love with a mortal. Well, here you shall stay. This shall be your penance." And she wrapped her in this thread, this rope, and dipped them in wax so they would be there forever. But then the flame jumped to the top, and that is how candles were created. And I read that, and I was...and this is, like, you know, 30 years ago, and I still have this at the top of my head. And I was like, "Gemma, that was amazing. Are you going to go to college?" And she says, "No." "No? Really? What are you going to do?" "I want to be a hairstylist." And, in my mind, my teacher mind is like, oh no, no, no, no. You [laughs] need to go to college. But then I thought about it. I thought, why wouldn't I want a smart, skilled, creative person cutting my hair? And, you know, people who cut hair make really good money [laughter]. And the whole idea is, are we actually, you know, empowering people to become their best selves and be able to explore those things? Or are we, you know, scaring them out of their futures with, you know, fear? Those are the big hurdles, which is, I'm afraid of the future. And the promise is, well, it's going to be different. But I can't assure you that it's not going to come without problems that we're going to have to figure out how to solve. And there are some who don't want the problems. They just want how it's always been. And I think the biggest hurdle we face is innovation and convincing people that trying something new may not be perfect, but it's a step in the right direction. And I think Hans Rosling in Factfulness said it very well. He said, "Things are better than they were before, but they're not great." Can we go from good to great? Sure. And what do we need to do? 
But we always are getting better, as long as we're continuing to adapt and create and be playful and look at different ways of doing things because now people are different, but just as good. CHAD: Brock, I really appreciate you stopping by and bringing your creativity, and energy, and playfulness to this difficult problem of caregiving. I'm excited for what the future holds for not only CareTrainer but the impact that you're going to have on the world. I really appreciate it. BROCK: Well, thank you for having me and letting me tell these stories, and, also, thanks for participating in Clapping Academy [laughter]. WILL: It was great. CHAD: If folks want to get in touch with you or follow along with you, or if they work in a healthcare organization where they think CareTrainer might be right for them, where are all the places that they can do that? BROCK: You can reach me at brock@caretrainer.ai. They can express interest on our website at caretrainer.ai. They can reach me at my personal website, brockdubbels.com, or connect with me on LinkedIn, because, you know, life is too short not to have friends. So, let's be friends [laughs]. CHAD: You can subscribe to the show and find notes for this entire episode along with a complete transcript at giantrobots.fm. WILL: If you have questions or comments, email us at hosts@giantrobots.fm. CHAD: You can find me on Mastodon at cpytel@thoughtbot.social. WILL: And you can find me on Twitter @will23larry. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. CHAD: Thank you again, Brock. And thank you all for listening. See you next time. AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.
Is AI part of your business strategy? Well, if it's not, it probably should be. Ethan Mollick, Wharton School professor of innovation and entrepreneurship, and Arun Jagannathan, two-time entrepreneur, enthusiastically agree on that. In this episode you'll gain strategic insights and practical tools from an AI visionary and hear how one intrepid entrepreneur is pushing himself and his company to embrace AI. Arun Jagannathan is the founder of not one, but two, startups in India. CrackVerbal helps students prepare for exams and make smarter career decisions, and Yzerly enhances corporate communication through innovative training programs. Jagannathan says, “Many employees today are asking: What is our AI strategy? Because nobody is in a bubble. Everybody is hearing this, right? And they know that if we are on a growth path, on a growth trajectory, then AI has to be a part of the strategy.” So, he's experimenting and adapting across different facets of his business to reap the full benefits of AI. Ethan Mollick is here to help. He's a professor, blogger, and best-selling author of Co-Intelligence: Living and Working with AI, a practical guide for thinking and working with AI. Mollick's practical experience, deep research, and endless curiosity enable him to guide entrepreneurs on the AI journey so they can tackle it more practically, systematically, and creatively. He begins by asking entrepreneurs four questions in the face of AI: What special thing have you done that is no longer important? What impossible thing can you now do? What can you move downmarket or democratize? What can you move upmarket or personalize? “I think if you think about those sets of ideas, you end up in pretty good shape,” Mollick says. He also places great importance on keeping “humans in the loop” and so does Jagannathan. 
“What AI does is, it makes good very easy, but great is still very hard,” Jagannathan explains. Hear how Jagannathan answers those four important questions and learn how to ask them of yourself and your company while navigating the challenges that companies and employees face when integrating AI into their businesses. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
José Antonio Bowen: Teaching With AI José Antonio Bowen has won teaching awards at Stanford and Georgetown and is past president of Goucher College. He has written over 100 scholarly articles and has appeared as a musician with Stan Getz, Bobby McFerrin, and others. He is the author of multiple books in higher education and is a senior fellow for the American Association of Colleges and Universities. He is the author with C. Edward Watson of Teaching With AI: A Practical Guide to a New Era of Human Learning*. AI will change how we work, but it's also going to change how we think. In this conversation, José and I explore where to begin working with AI and why those who can use it will serve a critical role in shaping what's next. Key Points Physical maps make you smarter than GPS, but GPS is more practical for daily use. AI isn't inherently good or bad, but like the internet, it will change how we work. AI will eliminate some jobs, but it will change every job. Those who can work with AI will replace those who can't. Rather than thinking about creativity through the lens of responses from AI, focus on bringing creativity into your prompts. Most of the AI progress for companies is coming from non-tech folks who are figuring out how specific tasks get more efficient. AI is very good at some things and not good at others. You'll discover how this relates to your work by experimenting with different prompts. Resources Mentioned Teaching With AI: A Practical Guide to a New Era of Human Learning* by José Antonio Bowen and C. 
Edward Watson Example AI Prompts by José Antonio Bowen The Human Side of Generative AI: Creating a Path to Productivity by Aaron De Smet, Sandra Durth, Bryan Hancock, Marino Mugayar-Baldocchi, and Angelika Reich Moderna and OpenAI partner to Accelerate the Development of Life-Saving Treatments The State of AI in Early 2024: Gen AI Adoption Spikes and Starts to Generate Value by Alex Singla, Alexander Sukharevsky, Lareina Yee, Michael Chui, and Bryce Hall Interview Notes Download my interview notes in PDF format (free membership required). Related Episodes Make Your Reading More Meaningful, with Sönke Ahrens (episode 564) Principles for Using AI at Work, with Ethan Mollick (episode 674) How to Enhance Your Credibility (Audio course) Discover More Activate your free membership for full access to the entire library of interviews since 2011, searchable by topic. To accelerate your learning, uncover more inside Coaching for Leaders Plus.
Ep. 242 Can AI assistants really turn raw data into insightful dashboards in under a minute? Kipp and Kieran dive into how tools like Claude are revolutionizing content and data visualization in business. Learn more on why interactive content such as apps and dashboards are catching investor attention, how AI-powered tools are shifting the focus from traditional data visualization to effective data management, and the transformative potential of using Claude for everything from financial reporting to personalizing web apps. Mentions Claude https://claude.ai/ Anthropic https://www.anthropic.com/ Ethan Mollick https://mgmt.wharton.upenn.edu/profile/emollick/ Resource [Free] Steal our favorite AI Prompts featured on the show! Grab them here: https://clickhubspot.com/aip We're on Social Media! Follow us for everyday marketing wisdom straight to your feed YouTube: https://www.youtube.com/channel/UCGtXqPiNV8YC0GMUzY-EUFg Twitter: https://twitter.com/matgpod TikTok: https://www.tiktok.com/@matgpod Join our community https://landing.connect.com/matg Thank you for tuning into Marketing Against The Grain! Don't forget to hit subscribe and follow us on Apple Podcasts (so you never miss an episode)! https://podcasts.apple.com/us/podcast/marketing-against-the-grain/id1616700934 If you love this show, please leave us a 5-Star Review https://link.chtbl.com/h9_sjBKH and share your favorite episodes with friends. We really appreciate your support. Host Links: Kipp Bodnar, https://twitter.com/kippbodnar Kieran Flanagan, https://twitter.com/searchbrat ‘Marketing Against The Grain' is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Produced by Darren Clarke.
https://passionstruck.com/passion-struck-book/ - Order a copy of my new book, "Passion Struck: Twelve Powerful Principles to Unlock Your Purpose and Ignite Your Most Intentional Life," today! Picked by the Next Big Idea Club as a must-read for 2024. In this episode of Passion Struck, host John R. Miles sits down with Ethan Mollick, a Wharton professor and author of the groundbreaking book Co-Intelligence. They delve into the rapidly evolving world of artificial intelligence (AI) and its impact on various aspects of life and work. Ethan Mollick shares insights on the potential benefits and risks of AI, including its role in enhancing productivity and creativity, job security concerns, and broader implications for humanity. Full show notes and resources can be found here: https://passionstruck.com/ethan-mollick-the-impact-of-ai-on-life-and-work/ In this episode, you will learn: The importance of setting boundaries and clear roles when working with AI to ensure it operates within desired scopes. The evolving role of human judgment as AI becomes more integrated into decision-making processes. Addressing biases in AI systems and the challenges of ensuring accountability in AI-driven decision-making. Recommendations for individuals preparing for a future where AI capabilities are constantly evolving, emphasizing the need to adapt to uncertainty and plan for potential advancements in AI technology. All things Ethan Mollick: https://mgmt.wharton.upenn.edu/profile/emollick/ Sponsors Brought to you by Indeed. Head to https://www.indeed.com/passionstruck, where you can receive a $75 credit to attract, interview, and hire in one place. Brought to you by Nom Nom: Go Right Now for 50% off your no-risk two week trial at https://trynom.com/passionstruck. Brought to you by Cozy Earth. Cozy Earth provided an exclusive offer for my listeners. 35% off site-wide when you use the code “PASSIONSTRUCK” at https://cozyearth.com/ This episode is brought to you by BetterHelp. 
Give online therapy a try at https://www.betterhelp.com/PASSIONSTRUCK, and get on your way to being your best self. This episode is brought to you by Constant Contact: Helping the Small Stand Tall. Just go to Constant Contact dot com right now. So get going, and start GROWING your business today with a free trial at Constant Contact dot com. --► For information about advertisers and promo codes, go to: https://passionstruck.com/deals/ Catch More of Passion Struck My solo episode on Why We All Crave To Matter: Exploring The Power Of Mattering: https://passionstruck.com/exploring-the-power-of-mattering Watch my interview with Robert Waldinger On What Are The Keys To Living A Good Life. Can't miss my episode with Oksana Masters On How The Hard Parts Lead To Triumph. Listen to my interview with Richard M. Ryan On Exploring The Heart Of Human Motivation. Catch my episode with Coach Matt Doherty On How You Rebound From Life's Toughest Moments. Listen to my solo episode On 10 Benefits Of Meditation For Transforming The Mind And Body. Like this show? Please leave us a review here -- even one sentence helps! Consider including your Twitter or Instagram handle so we can thank you personally! How to Connect with John Connect with John on Twitter at @John_RMiles and on Instagram at @john_R_Miles. Subscribe to our main YouTube Channel Here: https://www.youtube.com/c/JohnRMiles Subscribe to our YouTube Clips Channel: https://www.youtube.com/@passionstruckclips Want to uncover your profound sense of Mattering? I provide my master class with five simple steps to achieving it. Want to hear my best interviews? Check out my starter packs on intentional behavior change, women at the top of their game, longevity and well-being, and overcoming adversity. Learn more about John: https://johnrmiles.com/