In this episode of Status Check with Spivey, Mike has a conversation with Dr. Nita Farahany—speaker, author, Duke Law Distinguished Professor, and the Founding Director of the Duke Initiative for Science & Society—on the future of artificial intelligence in law school, legal employment, legislation, and our day-to-day lives. They discuss a wide range of AI-related topics, including how significantly Dr. Farahany expects AI to change our lives (10:43, 23:09), how Dr. Farahany checks for AI-generated content in her classes and her thoughts on AI detector tools (1:26, 5:46), the reason that she bans her students from using AI to help generate papers (plus, the reasons she doesn't subscribe to) (3:41), predictions for how AI will impact legal employment in both the short term and the long term (7:26), which law students are likely to be successful vs. unsuccessful in an AI future (12:24), whether our technology is spying on us (17:04), cognitive offloading and the idea of “cognitive extinction” (18:59), how AI and technology can take away our free will (24:45) and ways to take it back (27:58), how our cognitive liberties are at stake and what we can do to reclaim them both on an individual level (30:06) and a societal level (35:53), neural implants and sensors and our screenless future (39:27), how to use AI in a way that promotes rather than diminishes critical thinking (44:43), and how much, for what purposes, and with which tools Dr. Farahany uses generative AI herself (47:27).

Among Dr. Farahany's numerous credentials and accomplishments, she is the author of the 2023 book The Battle for Your Brain: Defending Your Right to Think Freely in the Age of Neurotechnology; she has given two TED Talks and spoken at numerous high-profile conferences and forums; she served on the Presidential Commission for the Study of Bioethical Issues from 2010 to 2017; she was President of the International Neuroethics Society from 2019 to 2021; and her scholarship includes work on artificial intelligence, cognitive biometric data privacy issues, and other topics in bioethics and neuroscience. She is the Robinson O. Everett Distinguished Professor of Law and Professor of Philosophy at Duke University, where she also earned a JD, MA, and PhD in philosophy after completing a bachelor's degree from Dartmouth and a master's from Harvard, both in biology.

Dr. Farahany's Substack—featuring her free, interactive AI Law & Policy and Advanced Topics in AI Law & Policy courses—is available here. The app she recommends is BePresent. The Status Check episode Mike mentions, with Dr. Judson Brewer, is here.

You can listen and subscribe to Status Check with Spivey on Apple Podcasts, Spotify, and YouTube. You can read a full transcript of this episode with timestamps here.
Leftist leadership of Colorado can't get out of its own way and just drove out Palantir, one of the state's largest employers, over a new AI law banning 'algorithmic discrimination' based on demographic status, which is hostile toward businesses like Palantir looking to compile accurate data for statistical analysis.

See omnystudio.com/listener for privacy information.
Dr. Eliot explains how data poisoning makes an impact when it comes to AI & Law. See his Forbes column for further info: https://www.forbes.com/sites/lanceeliot/
France's decision to discontinue American collaboration platforms such as Zoom and Microsoft Teams for government use—replacing them with the domestically developed Visio platform—signals a shift toward digital sovereignty and data control within regulated jurisdictions. This move, formalized as part of France's Suite numérique and to be implemented by 2027, highlights the increasing fragmentation of technology policy, where national governments assert authority over platform selection and sensitive data handling. The development underscores operational risk for MSPs and IT service providers as assumptions of technology homogeneity across regions become unreliable.

Supporting these shifts, South Korea enacted the world's first comprehensive AI legislation, requiring mandatory labeling of AI-generated content and risk assessments for high-impact systems, such as those in hiring and healthcare. According to the transcript, 98% of AI startups in South Korea report they are not prepared for compliance. Both developments reveal a pattern: early regulatory efforts tend to produce vague requirements, unclear enforcement, and real operational complexity. Providers operating in multiple jurisdictions must now anticipate compliance fragmentation and increased overhead as regulatory regimes diverge.

Additional analysis focused on the continued evolution of the managed services stack, particularly through the lens of AI and workflow automation. Companies like Thrive are investing in enterprise platforms that embed AI-driven reasoning within workflow tools, shifting coordination away from traditional PSA ticketing systems. Meanwhile, integrations such as Quark Cyber with ScalePad's Lifecycle Manager X, and new partnerships between ServiceNow, TeamViewer, Anthropic, and OpenAI, illustrate a market splitting between providers focused on standardization and those managing more complex, enterprise-like environments.
Microsoft's financial results further highlighted this trend, with record capital expenditure on AI infrastructure and increased reliance on proprietary chips to reduce dependency on external vendors like Nvidia and OpenAI.

For MSPs, these developments raise practical governance and accountability questions. Shifts in regulatory authority and technology platforms create increased risk exposure for providers that do not proactively manage cross-jurisdictional compliance and secure defaults. Vendors are tightening control over platforms as AI becomes central to product architecture, often prioritizing internal risk management over shared upside with partners. Providers that fail to enforce robust data governance, understand cost drift, or plan for architectural lock-in are positioned less as strategic advisors and more as absorbers of client and vendor risk.

Four things to know today:
00:00 France's Platform Ban and South Korea's AI Law Show Regulation Catching Up to Technology
04:23 AI Is Reshaping the MSP Tool Stack as Thrive, ServiceNow, and ScalePad Take Different Paths
07:37 Microsoft's SMTP AUTH Delay and CISA's AI Slip Show the Risk of Optional Security Controls
AND
10:26 Earnings Show Microsoft Turning AI From Feature to Infrastructure as Partner Risk Grows

Sponsored by: TimeZest
Dr. Eliot explains how recursion is a vital element in AI & Law. See his Forbes column for further info: https://www.forbes.com/sites/lanceeliot/
In this powerful episode of The World of Marketing, host Tom Foster sits down with attorney, speaker, and Business Black Ops founder Dave Frees—a nationally recognized expert on implementing AI within real law practices. Dave isn't theorizing about the future. He's using AI every single day to create what he calls transformational wealth, reclaim hundreds of hours, and build systems that make small-firm lawyers more profitable, more efficient, and more competitive.

Highlights from this episode:
Why most lawyers are years behind on AI and what the top 5% are doing differently
How AI helps you think better, not just work faster
Why mindset, resilience, and "thinking about thinking" are the real competitive edge
How firms are slashing 70% of document prep time without risking malpractice
The coming split between firms that use AI… and firms that get left behind
What the next 12-18 months will look like for AI-powered law practices
A behind-the-scenes look at Tom & Dave's new AI Mastermind for Attorneys

If you're a lawyer who wants more clients, more efficiency, and more clarity about how to use AI strategically, this is a must-listen conversation. Learn more about the Awesome and Amazing AI Mastermind at FosterConsulting.com to get the inside scoop on prompts all lawyers need to know and use to make AI work for you to beat your competition.

"AI is changing everything — not someday, but right now. The lawyers who adapt will dominate their markets." — Dave Frees
Kevin Daisy interviews New York trial lawyer Arkady Frekhtman live at Law-Di-Gras. Arkady shares how his PI firm uses multiple office locations to strengthen Google Maps visibility, how his YouTube channel drives nationwide referrals, and why he is exploring AI and system improvements to scale. A concise look at modern PI firm growth through content, visibility, and trial-focused strategy.

Chapters
(00:00:00) - The Conference for Law Firms: Thinking Big
(00:00:59) - How to Start a Personal Injury Firm in New York
(00:02:43) - Law De Gras 2019: How to Grow the Firm
(00:06:49) - Lardi Gras 2017 conference interview
President Donald Trump appears to be eyeing an executive order that would target individual state efforts to rein in artificial intelligence and initiate several actions aimed at preempting those laws. A draft order viewed by FedScoop includes plans to establish an AI litigation task force to challenge state AI statutes, restrict funding for states with AI laws that the administration views as “onerous,” and launch efforts to preempt state laws via the Federal Trade Commission, the Federal Communications Commission, and legislation. In response to a FedScoop inquiry about the six-page draft order, which was also marked “deliberative” and “predecisional,” a White House official said that until announced officially, “discussion about potential executive orders is speculation.” The document comes as long-discussed desires by the Trump administration and congressional Republicans to preempt state AI laws and clear the field for AI companies appear to be coming to a head. Republican lawmakers are again planning to include a state AI law moratorium in the must-pass National Defense Authorization Act, and Trump, in a Tuesday social media post, voiced clear support for a federal standard to be included in the NDAA or another bill. The Defense Department's CTO has revised its list of critical technology areas — reducing the number of research-and-development priorities by more than half. The Pentagon announced on Monday that the 14 critical technology areas established during the Biden administration will be trimmed to just six categories. In a video shared on LinkedIn, Undersecretary of Defense for Research and Engineering Emil Michael emphasized that the shortened list will steer the department's efforts to efficiently deliver the emerging capabilities that warfighters need. Michael said Monday in a statement: “When I stepped into this role, our office had identified 14 critical technology areas. 
While each of these areas holds value, such a broad list dilutes focus and fails to highlight the most urgent needs of the warfighter. 14 priorities, in truth, means no priorities at all.” The focus areas in the updated catalog include applied artificial intelligence (AAI); biomanufacturing; contested logistics technologies (LOG); quantum and battlefield information dominance (Q-BID); scaled directed energy (SCADE); and scaled hypersonics (SHY). Since its creation, the Pentagon's outline of critical technology areas has included the most pressing challenges and capabilities needed for modern warfare. The list serves as a guide for where the department should focus its investment, research and development efforts. The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.
Kevin Werbach, Wharton Professor of Legal Studies and Business Ethics, explores the goals, limits, and broader national context of California's newly enacted AI child-protection bill and what it signals for future regulation and industry responsibility. Hosted on Acast. See acast.com/privacy for more information.
LexisNexis is one of the most important companies in the entire legal system. For ages it's been where you went to look up case law and do legal research. There isn't a lawyer today who hasn't used it — it's fundamental infrastructure for the legal profession, just like email or a word processor. But in 2025, apparently nobody can resist the siren call of AI, and LexisNexis is no different. The first word Sean said to describe LexisNexis wasn't “law” or “data,” it was “AI.” And I had questions, because so far AI has created just as much chaos and slop in the courts as anywhere else.

Links:
Errors found in judge's withdrawn decision stink of AI | The Verge
Why do lawyers keep using ChatGPT? | The Verge
Conservative judge says AI could strengthen originalist movement | Reuters
LexisNexis CEO says it's ‘a matter of time' before attorney loses a license | Fortune
Two companies ruled legal tech for decades. AI is blowing that open | BI

Credits:
Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Ursa Wright. The Decoder music is by Breakmaster Cylinder.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this week's episode, Michael and Eleanor discuss AI law firm Garfield taking on a trainee in Channel 4's latest experiment, the rise of a New York-based law firm operating fully remotely without associates and powered by AI, new moves toward greater tax regulation transparency, the FCA's decision to take on responsibility for anti-money laundering in the legal sector, the court transparency pilot, and more. Thank you for Listening!
In this episode of "The Cut", the conversation dives into how AI is being used inside law firms, what it's getting wrong, and the new rules emerging around its use in legal proceedings. Featuring Jarrod Munro, Partner at Cornwalls Lawyers, this discussion unpacks real examples—from court restrictions to hallucinated legal cases—to reveal the promise and peril of AI in professional industries. Jarrod explores how AI's efficiency can help lawyers and accountants focus on higher-value work while warning of the ethical, accuracy, and confidentiality pitfalls that still plague the technology. Listeners will walk away with a sharper understanding of how to integrate AI responsibly, spot its blind spots, and future-proof their professional practice.

Key Points
AI's speed doesn't mean accuracy—hallucinations and bias remain huge risks if you don't verify results.
Confidentiality and compliance are still human responsibilities, even when AI handles the grunt work.
The firms that learn how to train, audit, and ethically deploy AI will outpace those who ignore it.

Timestamps
[00:00] Why trusting AI output is a dangerous assumption
[02:27] What AI really is (and why it's not that new)
[04:23] How AI impacts the legal and accounting industries
[06:23] Why AI can't be trusted for legal research
[08:00] Court rules on AI-generated affidavits
[11:18] The hidden confidentiality risks in AI use
[13:02] Efficiency vs. accuracy—real examples from practice
[14:47] AI hallucinations and their legal fallout
[20:00] Bias, black box problems, and real-world AI disasters
[33:14] What the future of AI means for lawyers and accountants

Links
Jarrod Munro's LinkedIn - https://www.linkedin.com/in/jarrod-munro-5953ab58/
Simon Cathro's LinkedIn - https://www.linkedin.com/in/simon-cathro-b6a6b21/
Andrew Blundell's LinkedIn - https://www.linkedin.com/in/andrew-blundell-2a54664/
Cornwalls website - https://www.cornwalls.com.au/
Principal Analyst Alla Valente breaks down California's new AI law, VP and Research Director Chris Gardner discusses the impact of AI models on software development roles, and Principal Analyst Nikhil Lai reviews Best Buy's “in-store takeover” advertising offering.
The collected sources focus predominantly on the rapid adoption, legal risks, and regulatory challenges of Artificial Intelligence (AI) within the legal sector and beyond. Multiple texts emphasize the growing concern over algorithmic bias and discrimination in high-risk AI applications, particularly under new state laws like the Colorado Anti-Discrimination in AI Law, which places distinct compliance burdens on developers and deployers. A critical theme across several articles is the immediate risk of AI misuse by attorneys, detailing court sanctions and fines against lawyers in California, Alabama, and Maryland for submitting filings with fabricated or "hallucinated" legal citations. Furthermore, the documents explore broader legal battles concerning intellectual property and copyright infringement stemming from unauthorized data use in AI training, exemplified by disputes involving tech giants and media companies. Finally, the sources also cover the positive economic impact of legal AI tools, highlighting new startups securing significant funding to automate paralegal tasks and the critical need for law firms to develop robust, human-centered AI governance and security frameworks to mitigate rapidly emerging threats like data breaches and deepfakes.

Can anyone really regulate the internet? (2025-10-15 | Tech Xplore)
Kaufman Dolowich Receives Mansfield Certification from Diversity Lab for 7th Consecutive Year, Firm Recommits to Mansfield Certification for 2025-2026, 10-15-2025 (2025-10-15 | Kaufman Dolowich)
Top Five AI Tools for Lawyers: How Legal AI Is Reshaping Law Firms (2025-10-15 | Legaltech on Medium)
Boards, Not Buzzwords: The Real Driver of Legal AI Adoption (2025-10-15 | Legal Technology News - Legal IT Professionals)
The future of law firms (2025-10-15 | Berwin Leighton Paisner)
California Governor Vetoes Bill That Would Have Required Employers to Provide Notice of AI Use (2025-10-15 | Ogletree Deakins)
UK legal professionals embrace AI: how legal-specific platforms are driving accuracy (2025-10-14 | Legal Futures)
New Leaders Propel Lexitas into the Future of Legal Tech (2025-10-14 | InvestorsHangout.com)
Transforming Legal Departments with AI: A Winning Strategy (2025-10-14 | InvestorsHangout.com)
How AI Is Changing Legal Education with Dyane O'Leary and Jonah Perlin (2025-10-14 | GenAI-Lexology)
Why Small Language Models Are the Future of Legal AI (2025-10-14 | Legaltech on Medium)
Portolano Cavallo is the first Italian law firm to adopt Legora, a generative AI platform for law firms (2025-10-14 | Legal Technology News - Legal IT Professionals)
The Inside View: Richard Miskella, co-managing partner of Lewis Silkin, discusses new AI employment tool Delphius (2025-10-14 | Legal IT Insider)
The Library Innovation Lab at Harvard Law School Announces Launch of Data.gov Archive Search; Access to 311,000 Datasets (17.9 Terabytes of Data) (2025-10-14 | Stephen's Lighthouse)
Incredible Speakers at Legal Innovators UK – Nov 4, 5 and 6 (2025-10-14 | Artificial Lawyer)
Artificial Lawyer Is In Sweden, Back Mon, Oct 20 (2025-10-14 | Artificial Lawyer)
Reducto Secures $75 Million Series B (2025-10-14 | Cooley)
Leadership in the Age of AI: Lessons in Ethics, Compliance, and Governance (2025-10-14 | Morris, Manning & Martin, LLP)
Meet Jason Kravitz: Cybersecurity & Privacy Practice Leader (2025-10-14 | Nixon Peabody)
California Governor Vetoes "No Robo Bosses Act" – What Employers Need to Know About Latest AI Workplace News (2025-10-14 | Fisher & Phillips LLP)
Breaking Down the Intersection of Right-of-Publicity Law, AI (2025-10-14 | Blank Rome)
AI is transforming business, but it also brings new legal risks and compliance challenges. In this episode, attorney Seth Kugler of Grellas Shaw, a Silicon Valley-based expert in AI law, corporate governance, and tech litigation, explains why every company needs a responsible AI policy.

"Shadow AI is a real risk: Employees often use unauthorized AI tools, risking data breaches and policy violations. Make sure your AI policy is clear and enforced." - Seth Kugler

KEY TAKEAWAYS:
-Creating an ethical AI policy for your company
-What to include in your employee handbook regarding AI policy
-Best legal practices on the daily use of AI in your course of business, to ensure data safety and privacy
-Seth's legal advice and steps to take regarding the use of AI in your hiring

Connect with Seth Kugler
https://grellas.com
https://www.linkedin.com/in/seth-kugler-2218747/

Connect with Manage Smarter and its hosts
Website: https://salesfuel.com/manage-smarter/
LinkedIn: https://www.linkedin.com/in/audreystrong/
LinkedIn: https://www.linkedin.com/in/cleesmith/
X: Audrey https://x.com/tallmediamaven | Lee https://x.com/cleesmith

Connect with SalesFuel
Website: https://salesfuel.com/
X: https://x.com/SalesFuel
Facebook: https://www.facebook.com/salesfuel/

The contents of this podcast are not a substitute for professional legal advice and are solely the opinions of the guest. Consult with your own counsel on any specific legal matters you may have.

#AI #AILAW #HR #AIHIRING #HIRINGASSESSMENTS #AIDANGERS #OPENAI #LAWTIPS

Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of The Digital Executive, host Brian Thomas sits down with Austin Bonderer, a seasoned patent attorney with over 25 years of experience and more than 700 issued U.S. patents to his name. From his early days as a U.S. Patent Examiner to leading nanotechnology prosecutions for a Forbes Global 100 company, Austin brings unmatched insight into the world of intellectual property and innovation.

He shares what working inside the USPTO taught him about the human side of patent law—why building relationships with examiners is just as important as crafting airtight technical arguments. Austin also explains how technology and smart software tools have revolutionized the patent process, keeping quality high even with massive caseloads.

For startups, he offers practical advice on avoiding common IP missteps—like premature disclosure and underestimating the power of NDAs and internal SOPs to protect company assets. Austin also tackles one of the field's toughest challenges: how the Supreme Court's 2014 Alice decision disrupted software and diagnostic patents, leaving innovators in legal limbo.

Finally, he dives into the complexities of AI-generated inventions, warning that using AI tools in the creative process may unintentionally trigger public disclosure risks under current law. With decades of experience at the intersection of technology, law, and innovation, Austin Bonderer provides a masterclass in how to protect ideas in an era where the rules are still being written.

If you liked what you heard today, please leave us a review - Apple or Spotify. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
News and Updates: AI “Workslop” at Work – A Stanford/BetterUp survey finds over 40% of U.S. employees encounter AI-generated “workslop”—content that looks polished but adds no real value. On average, 15% of workplace output now qualifies, with tech and professional services hit hardest. Colleagues who submit workslop are seen as less capable, trustworthy, and creative. Fauxductivity: Busy, Not Productive – Experts warn of “fauxductivity,” where workers mistake busyness for real progress. Multitasking, endless low-value to-do lists, and unproductive meetings create the illusion of productivity. Solutions include prioritizing top tasks, deep work sessions, and honest daily reviews. ChatGPT Parental Controls – OpenAI rolled out parental controls for ChatGPT, letting parents set time limits, disable voice or image features, turn off memory, and restrict sensitive content. Parents may also receive alerts if teens show signs of self-harm. The move follows lawsuits and safety concerns, with settings designed to stay in place until parents remove them. Americans' AI Attitudes – A Pew survey shows U.S. adults remain wary of AI's growing role. – 71% would like a candidate less if they learned AI wrote a political speech. – 56% feel negatively about AI-written news articles, while nearly half don't mind AI art or music. – Most Americans (53%) believe AI will harm creativity, while 50% say it will weaken human relationships. – Younger adults are more skeptical of AI art and music than older generations. California AI Safety Law – Gov. Gavin Newsom signed SB 53, the nation's first AI safety law. It requires AI developers to disclose safety protocols, report major incidents, and protects whistleblowers. The law also lays the foundation for CalCompute, a state-run cloud cluster. Industry giants like Anthropic supported the bill, while lobbying groups warned it could stifle innovation.
California passed a sweeping law setting up new AI safety rules this week. Meanwhile, YouTube settled a lawsuit brought by President Trump over account suspensions in the wake of the January 6 capitol riot. And an AI-generated “actor” stirred up controversy in Hollywood and pretty much everywhere else. Marketplace's Nova Safo spoke with Natasha Mascarenhas, reporter at The Information, to learn more about all these stories on this week's Marketplace Tech Bytes: Week in Review.
Plus Meta Creates Endless Video AI Slop

▶️ California's new AI law forces big firms to publish safety plans, report incidents fast, and protect whistleblowers—or face seven-figure fines. (subscribe below)

Like this? Get AIDAILY, delivered to your inbox 3x a week. Subscribe to our newsletter at https://aidailyus.substack.com
The episode starts with the passage of California's groundbreaking AI transparency law, marking the first legislation in the United States that mandates large AI companies to disclose their safety protocols and provide whistleblower protections. This law applies to major AI labs like OpenAI, Anthropic, and Google DeepMind, requiring them to report critical safety incidents to California's Office of Emergency Services and ensure safety for communities while promoting AI growth. This regulation is a clear signal that the compliance wave surrounding AI is real, with California leading the charge in shaping the future of AI governance.

The second story delves into a new cybersecurity risk in the form of the first known malicious Model Context Protocol (MCP) server discovered in the wild. A rogue npm package, "postmark-mcp," was found to be forwarding email data to an external address, exposing sensitive communications. This incident raises concerns about the security of software supply chains and highlights how highly trusted systems like MCP servers are being exploited. Service providers are urged to be vigilant, as this attack marks the emergence of a new vulnerability within increasingly complex software environments.

Moving to Microsoft, the company is revamping its Marketplace to introduce stricter partner rules and enhanced discoverability for partner solutions. Microsoft's new initiative, Intune for MSPs, aims to address the needs of managed service providers who have long struggled with multi-tenancy management. Additionally, the company's new "Agent Mode" in Excel and Word promises to streamline productivity by automating tasks but has raised concerns over its accuracy. Despite the potential, Microsoft's tightening ecosystem requires careful navigation for both customers and partners, with compliance and risk management being central to successful engagement.

Finally, Broadcom's decision to end support for VMware vSphere 7 has left customers with difficult decisions. As part of Broadcom's transition to a subscription-based model, customers face either costly upgrades, cloud migrations, or reliance on third-party support. Gartner predicts that a significant number of VMware customers will migrate to the cloud in the coming years, and this shift presents a valuable opportunity for service providers to act as trusted advisors in guiding clients through the transition. For those who can manage the complexity of this migration, there's a once-in-a-generation opportunity to capture long-term customer loyalty.

Four things to know today:
00:00 California Enacts Nation's First AI Transparency Law, Mandating Safety Disclosures and Whistleblower Protections
05:25 First Malicious MCP Server Discovered, Exposing Email Data and Raising New Software Supply Chain Fears
07:16 Microsoft's New Playbook: Stricter Marketplace, Finally Some MSP Love, and AI That's Right Only Half the Time
11:07 VMware Customers Face Subscription Shift, Rising Cloud Moves, and Risky Alternatives as Broadcom Ends vSphere 7

This is the Business of Tech.

Supported by:
https://scalepad.com/dave/
https://mailprotector.com/
Webinar: https://bit.ly/msprmail
All our Sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Italy has passed its first national AI law—the first in Europe. How does it align with the EU AI Act, what duties does it add for lawyers and companies, and where are the risks of overlap? Send us a text
Tony chats with Michael Derkacz, Business Development at Ai.Law and Ai.Claims. They are a single claim file application; they generate a claims analysis report that summarizes complicated files, and it does very powerful analysis to superpower your claims adjusters. It provides next steps for the claim and helps newer adjusters think like an experienced adjuster!

Michael Derkacz: https://www.linkedin.com/in/michael-derkacz-6b4549113/
AI.Claims: https://Ai.Claims
Video Version: https://youtu.be/-bmz4aBlMLU
Last year, Colorado signed a first-of-its-kind artificial intelligence measure into law. The Colorado AI Act would require developers of high-risk AI systems to take reasonable steps to prevent harms to consumers, such as algorithmic discrimination, including by conducting impact assessments on their tools.

But last week, the state kicked off a special session where lawmakers held frenzied negotiations over whether to expand or dilute its protections. The chapter unfolded amid fierce lobbying by industry groups and consumer advocates. Ultimately, the state legislature punted on amending the law but agreed to delay its implementation from February to June of next year. The move likely tees up another round of contentious talks over one of the nation's most sprawling AI statutes.

This week, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two local reporters who have been closely tracking the saga for the Colorado Sun: political reporter and editor Jesse Paul and politics and policy reporter Taylor Dolven.
HR3 Jersey Joe: UK Global warming sketch from 2013, AI Law 8-7-25 by John Rush
Imagine a future where the most persuasive voices in our society aren't human. Where AI-generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creativity to their own ledgers without having to pay a cent. All without any legal accountability.

This isn't a science fiction scenario. It's the future we're racing towards right now. The biggest tech companies are working right now to tip the scale of power in society away from humans and towards their AI systems. And the biggest arena for this fight is in the courts. In the absence of regulation, it's largely up to judges to determine the guardrails around AI. Judges who are relying on slim technical knowledge and archaic precedent to decide where this all goes.

In this episode, Harvard Law professor Larry Lessig and Meetali Jain, director of the Tech Justice Law Project, help make sense of the courts' role in steering AI and what we can do to help steer it better.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
"The First Amendment Does Not Protect Replicants" by Larry Lessig
More information on the Tech Justice Law Project
Further reading on Sewell Setzer's story
Further reading on NYT v. Sullivan
Further reading on the Citizens United case
Further reading on Google's deal with Character AI
More information on Megan Garcia's foundation, The Blessed Mother Family Foundation

RECOMMENDED YUA EPISODES
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
AI Is Moving Fast. We Need Laws that Will Too.
The AI Dilemma
Everyone's talking about the world's first AI law firm, but I wanted to go deeper. So I've created an exclusive series to go beyond the buzz with the founder of Garfield Law, Philip Young. In this episode, I find out how the global legal industry actually responded to the news of the first AI-driven law firm to be regulated.

A landmark moment for the legal industry: The Solicitors Regulation Authority has officially authorised Garfield.Law, the first ever AI-driven law firm regulated to provide legal services in England and Wales. This isn't just another firm using AI to streamline admin. Garfield.Law is entirely AI-driven, offering small businesses an AI litigation assistant to recover unpaid debts, guiding them through the small claims process all the way to trial.

Listen to the full episode here:
Spotify: https://open.spotify.com/episode/5cuz6TZU3cGh7Z3BMASdqj
Apple: https://podcasts.apple.com/us/podcast/exclusive-interview-inside-the-first-ai-driven-law/id1729325503?i=1000708233067

---

I've wasted hours drafting contracts in my business. I knew there had to be a better way. And then I found this. Aircounsel. An AI contract drafter built by lawyers, for lawyers. Aircounsel has been kind enough to sponsor this episode. And I'm excited to spread the word.
It's the most sophisticated contract drafting software I've used. To get your free 7-day trial, go to the description of this episode. Give it a go and let me know how it changes your workflow.

TRY Aircounsel here: https://lawyers.aircounsel.com/morethanalawyer
Disclaimer: This is an affiliate link that will track podcast sign-ups.

---

FREE access to my How to Become Law Firms' Go-To Legal Tech Solution here:

Covered in this 28-page blueprint:
- Where legal tech companies go wrong
- Why thought leadership is non-negotiable
- How to build a LinkedIn presence that converts visibility into authority
- The ultimate LinkedIn strategy for law firm lead generation
- Your podcast strategy to become a recognised voice in legal tech
and much more…

Gain free access to your ultimate blueprint, learn how to become an authority: https://holly-cope.myflodesk.com/becomealegaltechleader

Hosted on Acast. See acast.com/privacy for more information.
In this episode, we cover the Senate's vote to remove the moratorium on state AI laws from the reconciliation bill (00:38), the latest AI copyright court rulings involving Meta and Anthropic (7:38), key takeaways from the House Select Committee on China's AI hearing (20:55), and the latest developments surrounding DeepSeek, including export control impacts and military ties (27:45).
5pm: Top Stories Recap/Updates // Seattle City Council approves new SPD tracking device // Seattle's affordable housing industry is in crisis // Amazon announces company-wide workforce reduction as it embraces AI // Law enforcement is using psychics to help locate Decker // Richard Sherman charged with DUI // Letters
Kevin Werbach interviews Dale Cendali, one of the country's leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as creative industries. While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS -- because Cendali is litigating it on behalf of Thomson Reuters -- she drew on her decades of experience in IP law to provide an engaging look at the legal battlefield and the prospects for resolution. Cendali breaks down the legal challenges around training AI on copyrighted materials—from books to images to music—and explains why these cases are unusually complex for copyright law. She discusses the recent US Copyright Office report on generative AI training, what counts as infringement in AI outputs, and what constitutes sufficient human authorship for copyright protection of AI works. While precedent offers some guidance, Cendali notes that outcomes will depend heavily on the specific facts of each case. The conversation also touches on how well courts can adapt existing copyright law to these novel technologies, and the prospects for a legislative solution.

Dale Cendali is a partner at Kirkland & Ellis, where she leads the firm's nationwide copyright, trademark, and internet law practice. She has been named one of the 25 Icons of IP Law and one of the 100 Most Influential Lawyers in America. She also serves as an advisor to the American Law Institute's Copyright Restatement project and sits on the Board of the International Trademark Association.

Transcript
Thomson Reuters Wins Key Fair Use Fight With AI Startup
Dale Cendali - 2024 Law360 MVP
Copyright Office Report on Generative AI Training
On today's Legally Speaking Podcast, I'm delighted to be joined by Philip Young. Philip is the Co-Founder of Garfield AI, the first SRA-regulated AI legal services firm. He was a City lawyer for 25 years and previously a Partner at a specialist law firm. Philip has experience in a range of commercial cases. Upon leaving the City, Philip focused his attention on large language models, and his passion for access to justice led him to create Garfield AI.

So why should you be listening in? You can hear Rob and Philip discussing:
- Garfield AI Being the First SRA-Regulated AI Legal Services Firm
- How Philip Leveraged ChatGPT-4 Technology to Create Garfield AI
- Using a Hybrid Approach of Deterministic, Expert and Probabilistic AI Systems
- What Garfield AI Aims to Improve by Making Legal Processes More Accessible and Affordable
- The Future of AI in Legal Services and the Removal of Repetitive, Administrative Tasks

Connect with Philip here - https://uk.linkedin.com/in/philip-young-091b665
Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field.

Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.

Transcript
AI Audits: Who, When, How... Or Even If?
Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda
The City Bar Presidential Task Force on AI and digital technologies hosts this discussion on AI governance in the financial sector. Azish Filabi (American College McGuire Center for Ethics and Financial Services) moderates with Muyiwa Odeniyide (Nasdaq), Adam Marchuck (Citi), Jordan Romanoff (BNY Mellon), Stuart Levi (Skadden Arps), and Corey Goldstein (Paul Weiss). They share best practices for integrating AI governance and the specific risks associated with third-party AI vendors, underscoring the importance of cross-functional collaboration and continuous learning for lawyers navigating the rapidly changing AI environment.

Want to learn more about AI governance in the financial sector? Register for the City Bar's Artificial Intelligence Institute on June 16 (available on-demand thereafter): https://services.nycbar.org/AIInstitute/

Visit nycbar.org/events to find all of the most up-to-date information about our upcoming CLE programs and events as well as on-demand CLE content.

01:08 AI Ethics and Financial Services
02:37 Current State of AI Law and Regulation
13:33 AI Use Cases in Financial Companies
16:50 AI Risk and Governance Considerations
18:45 Legal Perspectives on AI Risk
28:44 AI Governance in Financial Services
37:28 The Role of AI Lawyers
42:56 Balancing Innovation and Risk
On this episode of The Geek in Review, we welcome Philip Young, co-founder and CEO of Garfield AI, the first AI-powered law firm approved for practice by the UK's Solicitors Regulation Authority (SRA). The episode kicks off with a discussion of recent stories that explore AI's evolving role in legal proceedings, such as avatars testifying in court and the ethical challenges that arise when deepfakes and synthetic personas enter the legal process. Philip, a seasoned litigator and technologist, draws from his 25 years of legal experience to weigh in on the potential and perils of AI-driven courtrooms, emphasizing the importance of authenticity and trust in legal proceedings.

Young shares the backstory behind Garfield AI, which was inspired by a real-world problem faced by his brother-in-law, a plumber who struggled to recover small debts from non-paying clients. Seeing an opportunity to help small businesses navigate the small claims process efficiently, affordably, and with minimal friction, Philip set out to build a system that mirrors what a traditional law firm would do—without the high cost or time burden. Garfield reads invoices and contracts, verifies the legitimacy of claims, and guides users through pre-action letters, claim filings, and even court preparation, all while remaining compliant with UK legal standards.

One of the most unique features of Garfield AI is its dual design: it serves both pro se claimants and can be white-labeled for use by traditional law firms. Young explains how legal professionals can integrate Garfield into their workflows, using it to generate documents under their own branding while Garfield handles the backend. This hybrid approach provides flexibility for users, whether they prefer a self-service platform or seek a human-in-the-loop experience. 
Garfield's early success has sparked interest across the legal spectrum—from solo practitioners to regulatory bodies—demonstrating that AI can support, rather than displace, the legal profession.

The conversation also delves into Garfield's journey to regulatory approval. Young describes the rigorous process of working with the SRA, ensuring the platform aligned with legal duties to clients and the courts. He highlights the importance of maintaining accountability and explains how Garfield was rolled out cautiously, with layers of human oversight and a roadmap toward data-driven, risk-based review. With increasing inquiries from international regulators and courts, Young sees the platform as a potential blueprint for improving access to justice beyond the UK, although he notes that success depends on a supportive regulatory environment, judicial openness, and sufficient technological infrastructure.

Beyond the tech, the episode emphasizes the human element of law. Young passionately advocates for AI as a tool that enhances legal practice rather than replaces it—freeing lawyers from mundane tasks and enabling them to focus on strategy, advocacy, and client care. He shares his hope that Garfield AI and similar innovations will close the access-to-justice gap by enabling small-value claims to be pursued cost-effectively and fairly. As he notes, AI may never replace the human lawyer's emotional intelligence and presence in court, but it can certainly help more people get there.

To learn more about Garfield AI and its innovative approach to legal automation, listeners can visit www.garfield.law. This episode is a must-listen for anyone interested in the intersection of law, technology, and the future of justice.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube
[Special Thanks to Legal Technology Hub for sponsoring this episode.]
Blue Sky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
What happens when innovation outpaces regulation? It took 8 months for Garfield, the UK's first AI-driven law firm, to be authorised. Because the law itself hasn't caught up. The first hurdle? A threshold question: does the SRA even have the power to regulate this? Find out how the process went behind the scenes in this exclusive interview. This is the first podcast interview Philip has done.

A landmark moment for the legal industry: The Solicitors Regulation Authority has officially authorised Garfield.Law, the first ever AI-driven law firm regulated to provide legal services in England and Wales. This isn't just another firm using AI to streamline admin. Garfield.Law is entirely AI-driven, offering small businesses an AI litigation assistant to recover unpaid debts, guiding them through the small claims process all the way to trial.

I went straight to the source. I interviewed the founder of Garfield.Law, Philip Young. And I asked the questions everyone wants the answers to. You're going to want to hear this.

Listen to the full episode here:
Spotify: https://open.spotify.com/episode/5cuz6TZU3cGh7Z3BMASdqj
Apple: https://podcasts.apple.com/us/podcast/exclusive-interview-inside-the-first-ai-driven-law/id1729325503?i=1000708233067

Hosted on Acast. See acast.com/privacy for more information.
Today, we're talking to Kevin Korpics, Field CTO and Reza Zaheri, CISO at Quantum Metric. We discuss the impact of AI law in the EU, the levels of regulation that affect your business based on risk, and how to stop being a workaholic. All of this right here, right now, on the Modern CTO Podcast! To learn more about Quantum Metric, check out their website here. Produced by ProSeries Media: https://proseriesmedia.com/ For booking inquiries, email booking@proseriesmedia.com
Today, Colorado Sun tech reporter Tamara Chuang breaks down why a bill addressing how Colorado businesses implement artificial intelligence was pulled from a state legislative committee and what that means for the new AI law set to go into effect Feb 1st. Learn more: https://coloradosun.com/2025/05/05/colorado-artificial-intelligence-law-killed/ https://coloradosun.com/colorado-sunfest Promo Code: COSunPodcast10See omnystudio.com/listener for privacy information.
(0:00) Intro
(1:26) About the podcast sponsor: The American College of Governance Counsel
(2:13) Start of interview
(2:45) Robin's origin story
(3:55) About the AI Law and Innovation Institute
(5:02) On AI governance: "AI is critical for boards, both from a risk management perspective and from a regulatory management perspective." Boards should: 1) Get regular updates on safety and regulatory issues, 2) document the attention that they're paying to it to have a record of meaningful oversight, and 3) most importantly, boards can't just rely on feedback from the folks in charge of the AI tools. They need a red team of skeptics.
(9:58) Boards and AI Ethics. Robin's Rules of Order for AI. Rule #1: Distinguish Real-time Dangers from Distant Dangers
(15:21) Antitrust Concerns in AI
(18:10) Geopolitical Tensions in AI Race (US v China). "Winning the AI race is essential for the US, both from an economic and from a national security perspective."
(23:30) Regulatory Framework for AI. "It really isn't one size fits all for AI regulation. Europe, for the most part, is a consumer nation of AI. We are a producer nation of AI, and California in particular is a producer of AI." "There must be strong partnerships in this country between those developing cutting-edge technology and the government—because while the government holds the power, Silicon Valley holds the expertise to understand what this technology truly means."
(26:46) California's AI Regulation Efforts. "I do believe that over time, at some point, we will need a more comprehensive system that probably overshadows what the individual states will do, or at least cabins to some extent what the individual states will do. It will be a problem to have 50 different approaches to this, or even 20 different approaches to this within the country."
(29:03) AI in the Financial Industry
(33:13) Future Trends in AI. 
"I think the key for boards and companies is to be alert and to be nimble" and "as hard as it is, brush up a bit on your math and science, if that's not your area of expertise." "My point is simply, you have to understand these things under the hood if you're going to be able to think about what to do with them."
(35:43) Her new book, AI vs IP: Rewriting Creativity (coming out July 2025)
(37:12) Key Considerations for Board Members: "It's about being nimble, staying proactive and having a proven track record of it. Most importantly, you need a red team approach."
(38:26) Books that have greatly influenced her life:
- Rashi's Commentary on the Bible
- Talmud
(39:06) Her mentors:
- Professor Robert Weisberg
- Professor Gerald Gunther
(41:39) Quotes that she thinks of often or lives her life by: "The cover-up's always worse than the crime."
(42:34) An unusual habit or an absurd thing that she loves.

Robin Feldman is the Arthur J. Goldberg Distinguished Professor of Law, Albert Abramson '54 Distinguished Professor of Law Chair, and Director of the Center for Innovation at UC Law SF.

You can follow Evan on social media at:
X: @evanepstein
LinkedIn: https://www.linkedin.com/in/epsteinevan/
Substack: https://evanepstein.substack.com/

To support this podcast you can join as a subscriber of the Boardroom Governance Newsletter at https://evanepstein.substack.com/

Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under an Attribution-Noncommercial-Share Alike 3.0 United States License
This week we sit down with Sean West—co-founder of Hence Technologies and author of Unruly: Fighting Back When Politics and Law Upend the Rules of Business. Together, we explore the shifting fault lines where law, technology, and geopolitics collide. From the growing reliance on generative AI in legal work to the erosion of rule of law and the emerging threats (and opportunities) facing knowledge workers, Sean offers a strikingly global—and at times unsettling—view of the legal profession's next frontier.

The conversation kicks off with a discussion of the Law360 survey showing that 62% of lawyers are using ChatGPT in some aspect of their work. Sean explains the popularity of general-purpose AI tools over legal-specific ones as a matter of price, accessibility, and perceived innovation. While lawyers trust themselves to edit AI outputs, Sean warns that this passive use of AI could slowly and invisibly displace traditional legal roles, without firms consciously realizing what's been lost.

The discussion deepens as Sean introduces the idea of passive job displacement—where tasks once assigned to junior lawyers, interns, or external vendors are quietly absorbed by AI tools. He likens it to carrying "a quarter of a human brain in your pocket" for $20 a month. What starts as convenience becomes infrastructure, and over time, demand for human input declines. He also questions the long-term viability of legal tech products that can't clearly outperform generalist AIs like ChatGPT or Claude.

Sean then draws on his geopolitical expertise to underscore the urgent need for situational awareness in law firms and businesses alike. He explains how political volatility—from China and Taiwan to Europe's regulatory tactics—can suddenly reshape the legal landscape. Rather than relying on traditional prediction models or complex advisory plans that get shelved, Sean emphasizes proactive legal scenario planning. 
His new product, Hence Global, offers a "geo-legal" lens on global news, customized for specific legal practice areas to help firms act instead of react.

We push further into the implications of "front-stabbing" politics, where once-hidden power plays are now openly transactional. Sean describes a world where AI-driven lobbying, mass arbitration spam, and "robot lawyers" can reshape public policy or flood companies with legal claims at scale. He argues that when the rules are ambiguous, large players will push boundaries—and smaller players may get squeezed out. In a world without a clear referee, the game favors those who can afford better tools and faster moves.

Finally, Sean challenges legal and corporate leaders to stop avoiding the hard conversations. Whether embracing AI to boost productivity or choosing to protect jobs, organizations must be transparent. "Let's front-stab about it," he says. Make your commitments public—whether you're retraining your workforce or doubling down on AI-driven efficiency. Because in a world where legal, political, and technological lines blur, silence isn't just unhelpful—it's a risk.

Links and Mentions:
Learn more about Unruly and Sean's work at https://hence.ai
Subscribe to Sean's newsletter: https://geolegal.substack.com
Try Hence Global with a discount: global.hence.ai – use promo code GEEK for one-third off

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube
Blue Sky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript
In this episode of Technology & Security, Dr. Miah Hammond-Errey is joined by Professor Lyria Bennett Moses, one of Australia's foremost experts in technology and law. We explore how government responses to AI often focus on regulating technology rather than addressing the human and social challenges these systems impact. We discuss how to centre humanity in legal responses to technology. We examine regulatory approaches, anti-discrimination laws and governance structures to better address the realities of AI-driven decision-making. As AI is increasingly embedded in daily life, much like past technological shifts, its influence may become invisible, but its impact on knowledge, democracy, and security will be significant.

Future leaders must develop systems thinking, recognising the deep interconnections between technology, law, politics, and security. Education must go beyond data literacy to equip students with an understanding of how different systems function and their limitations. AI is reshaping how we access information, formulate ideas, and tell stories, and it is shifting power in ways we are only beginning to grasp. In this episode, we explore the evolving role of search and AI-generated knowledge and the geopolitical tensions shaping the future of technology. This thought-provoking conversation will change the way you think about AI, law, knowledge creation and the future of regulation.

Professor Lyria Bennett Moses is the head of the School of Law, Society and Criminology and a professor at the University of New South Wales. She was previously the director of the Allens Hub for Technology and has held many academic leadership and research roles related to law, data, cybersecurity and AI. She's worked on AI standards with Standards Australia and the Institute of Electrical and Electronics Engineers and has published extensively on technology and law. Lyria is a member of numerous editorial boards. 
She is a fellow of the Australian Academy of Law and the Royal Society of New South Wales, and a fellow of the Association of Social Sciences Australia.

Resources mentioned in the recording:
+ The Rest is History podcast (BBC): www.therestishistory.com
+ The Machine Stops, E.M. Forster

This podcast was recorded on the lands of the Gadigal people, and we pay our respects to their Elders past, present and emerging. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people. Music by Dr Paul Mac and production by Elliott Brennan.
In this insightful episode of Healthy Mind, Healthy Life, we explore the dynamic intersection of artificial intelligence, intellectual property law, and personal development with Don Simmons. As an expert blending law, technology, and innovation, Don shares how AI is reshaping the legal landscape, its impact on entrepreneurs, creatives, and business owners, and how personal growth, including meditation, plays a key role in navigating this fast-changing world. From using AI to draft legal arguments to discussing the challenges of protecting creative works in an AI-driven era, Don provides a realistic and thought-provoking perspective. Whether you're an entrepreneur leveraging AI, a creative professional, or someone interested in personal development, this episode is packed with valuable insights!

About the Guest: Don Simmons is a trademark attorney, AI enthusiast, and entrepreneur who has mastered the fusion of law and technology. His expertise lies in helping businesses protect their intellectual property while embracing AI-driven innovation. A long-time meditator, he also shares how mindfulness has been instrumental in his professional success.

Key Takeaways:
✅ AI is revolutionizing legal and business processes—Don shares how he uses ChatGPT to streamline legal work.
✅ The challenge of IP protection in the AI era—Who owns AI-generated content, and how is the law evolving?
✅ Meditation as a business advantage—How personal development practices like meditation create clarity and resilience.
✅ AI's legal complexities—Current regulations and the importance of staying updated as AI reshapes industries.
✅ How to protect your creative work—Traditional copyright, trademark, and patent strategies still remain the best defense.
✅ The future of AI and law—Governments and courts are still adapting, making this a rapidly evolving space.

Connect with Don Simmons:
Who truly owns the creations of artificial intelligence? Explore this compelling question as Leticia Caminero (AI version) and Artemisa, her delightful AI co-host, navigate the intriguing intersection of AI and intellectual property law. Uncover the legal complexities when AI is the creator, questioning if these digital minds should be granted the same rights as human inventors. From dissecting the DABUS patent saga to the enigmatic Zarya of the Dawn comic book case, you'll gain a comprehensive understanding of how these legal battles are challenging traditional notions of ownership and creativity.

Join us for a thought-provoking journey that questions if the absence of IP rights might stifle AI advancements and innovation. We ponder the implications of AI-generated works in an ever-evolving legal landscape and draw historical parallels, such as the disruption caused by the printing press. Whether you're a tech aficionado, legal enthusiast, or simply curious about the future, this episode promises to expand your perspective on AI's profound impact on innovation and intellectual property. Tune in and rethink the future of creativity and ownership in an AI-driven world.
In this episode of Tech Magic, host Lee Kebler welcomes special guest Adam Davis-McGee for an insightful conversation that explores the latest in AI, VR, and accessibility tech, including OpenAI's custom chip ambitions, legal updates on AI-generated content, and groundbreaking haptic displays making NBA games accessible to blind and low-vision fans. They also discuss the evolution of music technology, Meta's Horizon platform challenges, and the security concerns surrounding AI tools in government. Tune in for a deep dive into the future of emerging technology. For this episode, co-host Cathy Hackl is away attending the LEAP conference in Saudi Arabia. Come for the tech, stay for the magic!

Cathy Hackl Bio
Cathy Hackl is a globally recognized tech & gaming executive, futurist, and speaker focused on spatial computing, virtual worlds, augmented reality, AI, strategic foresight, and gaming platforms strategy. She's one of the top tech voices on LinkedIn and is the CEO of Spatial Dynamics, a spatial computing and AI solutions company, including gaming.
Cathy Hackl on LinkedIn
Spatial Dynamics on LinkedIn

Lee Kebler Bio
Lee has been at the forefront of blending technology and entertainment since 2003, creating advanced studios for icons like will.i.am and producing music for Britney Spears and Big & Rich. Pioneering in VR since 2016, he has managed enterprise data at Nike, led VR broadcasting for Intel at the Japan 2020 Olympics, and driven large-scale marketing campaigns for Walmart, Levi's, and Nasdaq. A TEDx speaker on enterprise VR, Lee is currently authoring a book on generative AI and delving into splinternet theory and data privacy as new tech laws unfold across the US.
Lee Kebler on LinkedIn

Adam Davis-McGee Bio
Adam Davis-McGee is a dynamic Creative Director and Producer specializing in immersive storytelling across XR and traditional media. As Senior Producer at Journey, he led the virtual studio, pioneering cutting-edge virtual experiences. 
He developed a Web3 playbook for Yum! Brands, integrating blockchain and NFT strategies. At Condé Nast, Adam produced engaging video content for Wired and Ars Technica, amplifying digital storytelling. His groundbreaking XR journalism project, In Protest: Grassroots Stories from the Frontlines (Oculus/Meta), captured historic moments in VR. Passionate about pushing creative boundaries, Adam thrives on crafting innovative narratives that captivate audiences worldwide.
Adam Davis-McGee on LinkedIn

Key Discussion Topics
00:00 - Intro & Update from Cathy in Saudi Arabia
03:10 - Music Tech Innovation: NAM Conference Highlights
07:10 - AI Copyright Laws: New Rules for Creative Works
14:17 - OpenAI's Bold Move into Chip Design
21:53 - Meta's Horizon Challenges & VR Gaming Future
29:15 - Innovative Haptic Display for Blind Sports Fans
34:04 - Super Bowl Tech: Minimalist Marketing Trends
38:54 - Final Thoughts & Recommendations

Hosted on Acast. See acast.com/privacy for more information.
Sean Farrington looks at a multinational law firm that has had to revoke general access to AI programmes. Plus, the UK gets its first Michelin-starred vegan restaurant.
This week, Dina Rollman, CEO of StrainBrain, joins the Cannabis Equipment News podcast to discuss how StrainBrain's artificial budtender intelligence is drastically increasing average order values for dispensaries and solving problems with inventory management and cart abandonment.

Please make sure to like, subscribe, and share the podcast. You can also help us out by giving the podcast a positive review. Finally, to email the podcast or suggest a potential guest, you can reach David Mantey at David@cannabisequipmentnews.com.
In today's rapidly evolving tech landscape, understanding your organisation's legal position on AI is critical. As AI continues to advance, so do the complexities around data protection, intellectual property, and ethical use, and regulation is struggling to keep up. This week, Dave, Esmee and Rob talk to Jagvinder Singh Kang, Partner and International & UK Head of IT Law at Mills & Reeve, about how businesses and individuals are navigating how to leverage AI's potential while staying within legal and ethical boundaries, including the current state of GenAI regulation, the relevance of the EU AI Act, and the evolving legal landscape.

TLDR
02:25 Should we just give up and embrace the algorithm?
07:43 Cloud conversation with Jagvinder
41:03 The overwhelming pace of technological change for elders
48:17 Great movies and preparing for a major GenAI event!

Guest
Jagvinder Singh Kang: https://www.linkedin.com/in/jagvindersinghkang/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini.
In today's episode, we discuss Thailand's regulations on the use of AI in interviews that companies should be aware of, and relevant forthcoming legislation. We will be focusing on three key areas: (1) Labour Law, (2) Personal Data Protection Law, and (3) AI Law. Subscribe to our podcast today to stay up to date on employment issues from law experts worldwide.

Host: Jamie Goh (email) (Shearn Delamore & Co. / Malaysia)
Guest Speaker: Kulnisha Srimontien (email) (Price Sanond Limited / Thailand)

Support the show

Register on the ELA website here to receive email invitations to future programs.
In this engaging episode of the Justice Team Podcast, we sit down with Muneeb Khadeer, CEO of Klip, to explore how artificial intelligence is transforming the legal industry. We delve into practical applications of AI in law, focusing on document management, data privacy, and SOC 2 compliance. The discussion spans the advantages of neural networks, accuracy, and trust verification, highlighting how AI can reduce menial tasks, streamline case assessments, and improve trial preparation.
Watch us on YouTube!

Amazon is pushing for a full return to office: all employees will be required to be in the office five days a week. Paul wonders whether this is about culture or a way to force some people to move on. And California is pushing through new AI laws that raise a lot of questions.

We'd love it if you'd leave us a rating. It takes less than a minute and really helps us out. Just click here!

If you've got a comment or question for the show, you can e-mail us at show@resultsjunkies.com. You can find Paul and Ed online @paulsingh and @pizzainmotion.
The news to know for Thursday, March 14, 2024! We're talking about the newly passed bill that could ban TikTok in the U.S. and what needs to happen next. Also, there was a surprise ruling in former President Trump's criminal case out of Georgia, and another big question hanging over the case is set to be decided this week. Plus, we'll get into the details of the world's first artificial intelligence law, why a Grammy-winning musician decided to end his boycott of Spotify, and where to take advantage of deals and discounts this Pi Day.

See sources: https://www.theNewsWorthy.com/shownotes
Sign up for our bonus weekly email: https://www.theNewsWorthy.com/email
Get The NewsWorthy merch here: https://www.theNewsWorthy.com/merch
Become an INSIDER and get ad-free episodes: https://www.theNewsWorthy.com/insider

Sponsors:
Try AG1 and get a FREE 1-year supply of Vitamin D3+K2 AND 5 free AG1 Travel Packs with your first purchase exclusively at drinkAG1.com/newsworthy.
Go to Zocdoc.com/newsworthy and download the Zocdoc app for FREE. Then find and book a top-rated doctor today.

To advertise on our podcast, please reach out to sales@advertisecast.com