Kevin Werbach interviews Kay Firth-Butterfield about how responsible AI has evolved from a niche concern to a global movement. As the world's first Chief AI Ethics Officer and former Head of AI at the World Economic Forum, Firth-Butterfield brings deep experience aligning AI with human values. She traces the field from its early days—when it was dominated by philosophical debates—to today, when regulation such as the European Union's AI Act is defining the rules of the road. Firth-Butterfield highlights the growing trust gap in AI, warning that rapid deployment without safeguards is eroding public confidence. Drawing on her work with Fortune 500 firms and her own cancer journey, she argues for human-centered AI, especially in high-stakes areas like healthcare and law. She also underscores the equity issues tied to biased training data and lack of access in the Global South, noting that AI is now generating data based on historical biases. Despite these challenges, she remains optimistic and calls for greater focus on sustainability, access, and AI literacy across sectors. Kay Firth-Butterfield is the founder and CEO of Good Tech Advisory LLC. She was the world's first C-suite appointee in AI ethics and was the inaugural Head of AI and Machine Learning at the World Economic Forum from 2017 to 2023. A former judge and barrister, she advises governments and Fortune 500 companies on AI governance and remains affiliated with Doughty Street Chambers in the UK. Transcript Kay Firth-Butterfield Is Shaping Responsible AI Governance (Time100 Impact Awards) Our Future with AI Hinges on Global Cooperation Building an Organizational Approach to Responsible AI Co-Existing with AI - Firth-Butterfield's Forthcoming Book
Kevin Werbach interviews Dale Cendali, one of the country's leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as creative industries. While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS -- because Cendali is litigating it on behalf of Thomson Reuters -- she drew on her decades of experience in IP law to provide an engaging look at the legal battlefield and the prospects for resolution. Cendali breaks down the legal challenges around training AI on copyrighted materials—from books to images to music—and explains why these cases are unusually complex for copyright law. She discusses the recent US Copyright Office report on generative AI training, what counts as infringement in AI outputs, and what constitutes sufficient human authorship for copyright protection of AI works. While precedent offers some guidance, Cendali notes that outcomes will depend heavily on the specific facts of each case. The conversation also touches on how well courts can adapt existing copyright law to these novel technologies, and the prospects for a legislative solution. Dale Cendali is a partner at Kirkland & Ellis, where she leads the firm's nationwide copyright, trademark, and internet law practice. She has been named one of the 25 Icons of IP Law and one of the 100 Most Influential Lawyers in America. She also serves as an advisor to the American Law Institute's Copyright Restatement project and sits on the Board of the International Trademark Association. Transcript Thomson Reuters Wins Key Fair Use Fight With AI Startup Dale Cendali - 2024 Law360 MVP Copyright Office Report on Generative AI Training
Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field. Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum. Transcript AI Audits: Who, When, How...Or Even If? Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda
Kevin Werbach interviews Shameek Kundu, Executive Director of AI Verify Foundation, to explore how organizations can ensure AI systems work reliably in real-world contexts. AI Verify, a government-backed nonprofit in Singapore, aims to build scalable, practical testing frameworks to support trustworthy AI adoption. Kundu emphasizes that testing should go beyond models to include entire applications, accounting for their specific environments, risks, and data quality. He draws on lessons from AI Verify's Global AI Assurance pilot, which matched real-world AI deployers—such as hospitals and banks—with specialized testing firms to develop context-aware testing practices. Kundu explains that the rise of generative AI and widespread model use has expanded risk and complexity, making traditional testing insufficient. Instead, companies must assess whether an AI system performs well in context, using tools like simulation, red teaming, and synthetic data generation, while still relying heavily on human oversight. As AI governance evolves from principles to implementation, Kundu makes a compelling case for technical testing as a backbone of trustworthy AI. Shameek Kundu is Executive Director of the AI Verify Foundation. He previously held senior roles at Standard Chartered Bank, including Group Chief Data Officer and Chief Innovation Officer, and co-founded a startup focused on testing AI systems. Kundu has served on the Bank of England's AI Forum, Singapore's FEAT Committee, the Advisory Council on Data and AI Ethics, and the Global Partnership on AI. Transcript AI Verify Foundation Findings from the Global AI Assurance Pilot Starter Kit for Safety Testing of LLM-Based Applications
Kevin Werbach interviews journalist and author Karen Hao about her new book Empire of AI, which chronicles the rise of OpenAI and the broader implications of generative artificial intelligence. Hao reflects on how the ethical challenges of AI have evolved, noting the shift from concerns like data privacy and algorithmic bias to more complex issues such as intellectual property violations, environmental impact, misleading user experiences, and concentration of power. She emphasizes that while some technical solutions exist, they are rarely implemented by developers, and foundational harms often occur before tools reach end users. Hao argues that OpenAI's trajectory was not inevitable but instead the result of specific ideological beliefs, aggressive scaling decisions, and CEO Sam Altman's singular fundraising prowess. She critiques the “pseudo-religious” ideologies underpinning Silicon Valley's AI push, where utopian and doomer narratives coexist to justify rapid development. Hao outlines a more democratic alternative focused on smaller, task-specific models and stronger regulation to redirect AI's future trajectory. Karen Hao has written about AI for publications such as The Atlantic, The Wall Street Journal, and MIT Technology Review. She was the first journalist to ever profile OpenAI, and leads The AI Spotlight Series, a program with the Pulitzer Center that trains thousands of journalists around the world on how to cover AI. She has also been a fellow with the Harvard Technology and Public Purpose program, the MIT Knight Science Journalism program, and the Pulitzer Center's AI Accountability Network. She won an American Humanist Media Award in 2024, and an American National Magazine Award in 2022. Transcript Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI Inside the Chaos at OpenAI (The Atlantic, 2023) Cleaning Up ChatGPT Takes Heavy Toll on Human Workers (Wall St. Journal, 2023) The New AI Panic (The Atlantic, 2023) The Messy, Secretive Reality Behind OpenAI's Bid to Save the World (MIT Technology Review, 2020)
AI companion applications, which create interactive personas for one-on-one conversations, are incredibly popular. However, they raise a number of challenging ethical, legal, and psychological questions. In this episode, Kevin Werbach speaks with researcher Jaime Banks about how users view their conversations with AI companions, and the implications for governance. Banks shares insights from her research on mind-perception, and how AI companion users engage in a willing suspension of disbelief similar to watching a movie. She highlights both potential benefits and dangers, as well as novel issues such as the real feelings of loss users may experience when a companion app shuts down. Banks advocates for data-driven policy approaches rather than moral panic, suggesting responses such as an "AI user's Bill of Rights" for these services. Jaime Banks is Katchmar-Wilhelm Endowed Professor at the School of Information Studies at Syracuse University. Her research examines human-technological interaction, including social AI, social robots, and videogame avatars. She focuses on relational construals of mind and morality, communication processes, and how media shape our understanding of complex technologies. Her current funded work focuses on social cognition in human-AI companionship and on the effects of humanizing language on moral judgments about AI. Transcript ‘She Helps Cheer Me Up': The People Forming Relationships With AI Chatbots (The Guardian, April 2025) Can AI Be Blamed for a Teen's Suicide? (NY Times, October 2024) Beyond ChatGPT: AI Companions and the Human Side of AI (Syracuse iSchool video)
In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday's legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the company. The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure. Dr. Kelly Trindel directs Workday's AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI—one of the government's first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly's influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact. 
Transcript Responsible AI: Empowering Innovation with Integrity Putting Responsible AI into Action (video masterclass)
Kevin Werbach interviews Lauren Wagner, a builder and advocate for market-driven approaches to AI governance. Lauren shares insights from her experiences at Google and Meta, emphasizing the critical intersection of technology, policy, and trust-building. She describes the private AI governance model, including private-sector incentives and transparency measures, such as enhanced model cards, that can guide responsible AI development without heavy-handed regulation. Lauren also explores ongoing challenges around liability, insurance, and government involvement, highlighting the potential of public procurement policies to set influential standards. Reflecting on California's SB 1047 AI bill, she discusses its drawbacks and praises the inclusive debate it sparked. Lauren concludes by promoting productive collaborations between private enterprises and governments, stressing the importance of transparent, accountable, and pragmatic AI governance approaches. Lauren Wagner is a researcher, operator and investor creating new markets for trustworthy technology. She is currently a Term Member at the Council on Foreign Relations, a Technical & AI Policy Advisor to the Data & Trust Alliance, and an angel investor in startups with a trust & safety edge, particularly AI-driven solutions for regulated markets. She has been a Senior Advisor to Responsible Innovation Labs, an early-stage investor at Link Ventures, and held senior product and marketing roles at Meta and Google. Transcript AI Governance Through Markets (February 2025) How Tech Created the Online Fact-Checking Industry (March 2025) Responsible Innovation Labs Data & Trust Alliance
Kevin Werbach speaks with Medha Bankhwal and Michael Chui from QuantumBlack, the AI division of the global consulting firm McKinsey. They discuss how McKinsey's AI work has evolved from strategy consulting to hands-on implementation, with AI trust now embedded throughout their client engagements. Chui highlights what makes the current AI moment transformative, while Bankhwal shares insights from McKinsey's recent AI survey of over 760 organizations across 38 countries. As they explain, trust remains a major barrier to AI adoption, although there are geographic differences in AI governance maturity. Medha Bankhwal, a graduate of Wharton's MBA program, is an Associate Partner, as well as Co-founder of McKinsey's AI Trust / Responsible AI practice. Prior to McKinsey, Medha was at Google and subsequently co-founded a digital learning not-for-profit startup. She co-leads forums for AI safety discussions for policy + tech practitioners, titled “Trustworthy AI Futures” as well as a community of ex-Googlers dedicated to the topic of AI Safety. Michael Chui is a senior fellow at QuantumBlack, AI by McKinsey. He leads research on the impact of disruptive technologies and innovation on business, the economy, and society. Michael has led McKinsey research in such areas as artificial intelligence, robotics and automation, the future of work, data & analytics, collaboration technologies, the Internet of Things, and biological technologies. Episode Transcript The State of AI: How Organizations are Rewiring to Capture Value (March 12, 2025) Superagency in the workplace: Empowering people to unlock AI's full potential (January 28, 2025) Building AI Trust: The Key Role of Explainability (November 26, 2024) McKinsey Responsible AI Principles
Kevin Werbach speaks with Eric Bradlow, Vice Dean of AI & Analytics at Wharton. Bradlow highlights the transformative impacts of AI from his perspective as an applied statistician and quantitative marketing expert. He describes the distinctive approach of Wharton's analytics program, and its recent evolution with the rise of AI. The conversation highlights the significance of legal and ethical responsibility within the AI field, and the genesis of the new Wharton Accountable AI Lab. Werbach and Bradlow then examine the role of academic institutions in shaping the future of AI, and how institutions like Wharton can lead the way in promoting accountability, learning and responsible AI deployment. Eric Bradlow is the Vice Dean of AI & Analytics at Wharton, Chair of the Marketing Department, and also a professor of Economics, Education, Statistics, and Data Science. His research interests include Bayesian modeling, statistical computing, and developing new methodology for unique data structures with application to business problems. In addition to publishing in a variety of top journals, he has won numerous teaching awards at Wharton, including the MBA Core Curriculum teaching award, the Miller-Sherrerd MBA Core Teaching Award and the Excellence in Teaching Award. Episode Transcript Wharton AI & Analytics Initiative Eric Bradlow - Knowledge at Wharton Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
Wharton's new foray into studying artificial intelligence is the Accountable AI Lab. Malcolm sits down with Kevin Werbach, a Wharton professor (and fellow podcaster) who leads the lab, to ask: what exactly is accountable AI?
This week, Kevin Werbach is joined by Wendy Gonzalez of Sama, to discuss the intersection of human judgment and artificial intelligence. Sama provides data annotation, testing, model fine-tuning, and related services for computer vision and generative AI. Kevin and Wendy review Sama's history and evolution, and then consider the challenges of maintaining reliability in AI models through validation and human-centric feedback. Wendy addresses concerns about the ethics of employing workers from the developing world for these tasks. She then shares insights on Sama's commitment to transparency in wages, ethical sourcing, and providing opportunities for those facing the greatest employment barriers. Wendy Gonzalez is the CEO of Sama. Since taking over in 2020, she has led a variety of successes at the company, including launching Machine Learning Assisted Annotation, which has improved annotation efficiency by over 300%. Wendy has over two decades of managerial and technology leadership experience for companies including EY, Capgemini Consulting and Cycle30 (acquired by Arrow Electronics), and is an active Board Member of the Leila Janah Foundation. https://www.sama.com/ Forbes Business Council - Wendy Gonzalez
The UK is in a unique position in the global AI landscape. It is home to important AI development labs and corporate AI adopters, but its regulatory regime is distinct from both the US and the European Union. In this episode, Kevin Werbach sits down with Jessica Lennard, the Chief Strategy and External Affairs Officer at the UK's Competition and Markets Authority (CMA). Jessica discusses the CMA's role in shaping AI policy against the backdrop of a shifting political and economic landscape, and how it balances promoting innovation with competition and consumer protection. She highlights the guiding principles that the CMA has established to ensure a fair and competitive AI ecosystem, and how they are designed to establish trust and fair practices across the industry. Jessica Lennard took up the role of Chief Strategy & External Affairs Officer at the CMA in August 2023. Jessica is a member of the Senior Executive Team, an advisor to the Board, and has overall responsibility for Strategy, Communications and External Engagement at the CMA. Previously, she was a Senior Director for Global Data and AI Initiatives at VISA. She also served as an Advisory Board Member for the UK Government Centre for Data Ethics and Innovation. Competition and Markets Authority CMA AI Strategic Update (April 2024)
Kevin Werbach speaks with Krishna Gade, founder and CEO of Fiddler AI, on the state of explainability for AI models. One of the big challenges of contemporary AI is understanding just why a system generated a certain output. Fiddler is one of the startups offering tools that help developers and deployers of AI understand what exactly is going on. In the conversation, Kevin and Krishna explore the importance of explainability in building trust with consumers, companies, and developers, and then dive into the mechanics of Fiddler's approach to the problem. The conversation covers current and potential regulations that mandate or incentivize explainability, and the prospects for AI explainability standards as AI models grow in complexity. Krishna distinguishes explainability from the broader process of observability, including the necessity of maintaining model accuracy through different times and contexts. Finally, Kevin and Krishna discuss the need for proactive AI model monitoring to mitigate business risks and engage stakeholders. Krishna Gade is the founder and CEO of Fiddler AI, an AI Observability startup, which focuses on monitoring, explainability, fairness, and governance for predictive and generative models. An entrepreneur and engineering leader with strong technical experience in creating scalable platforms and delightful products, Krishna previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft. At Facebook, Krishna led the News Feed Ranking Platform that created the infrastructure for ranking content in News Feed and powered use-cases like Facebook Stories and user recommendations. Fiddler.ai How Explainable AI Keeps Decision-Making Algorithms Understandable, Efficient, and Trustworthy - Krishna Gade x Intelligent Automation Radio
In this episode, Kevin Werbach is joined by Reggie Townsend, VP of Data Ethics at SAS, a business analytics software platform. Together they discuss SAS's nearly 50-year history of supporting business technology and its recent implementation of responsible AI initiatives. Reggie introduces model cards and the importance of variety in AI systems across diverse stakeholders and sectors. Reggie and Kevin explore the increase in both consumer trust and purchases when customers feel a brand is ethical in its use of AI, and the importance of trustworthy AI in employee retention and recruitment. Their discussion approaches the idea of bias in an unconventional way, highlighting the positive humanistic nature of bias and learning to manage the negative implications. Finally, Reggie shares his insights on fostering ethical AI practices through literacy and open dialogue, stressing the importance of authentic commitment and collaboration among developers, deployers, and regulators. SAS adds to its trustworthy AI offerings with model cards and AI governance services Article by Reggie Townsend: Talking AI in Washington, DC Reggie Townsend oversees the Data Ethics Practice (DEP) at SAS Institute. He leads the global effort for consistency and coordination of strategies that empower employees and customers to deploy data driven systems that promote human well-being, agency and equity. He has over 20 years of experience in strategic planning, management, and consulting focusing on topics such as advanced analytics, cloud computing and artificial intelligence. With visibility across multiple industries and sectors where the use of AI is growing, he combines this extensive business and technology expertise with a passion for equity and human empowerment. Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. 
It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
Kevin Werbach speaks with Diya Wynn, the responsible AI lead at Amazon Web Services (AWS). Diya shares how she pioneered a formal practice for ethical AI at AWS, and explains AWS's “Well-Architected” framework to assist customers in responsibly deploying AI. Kevin and Diya also discuss the significance of diversity and human bias in AI systems, revealing the necessity of incorporating diverse perspectives to create more equitable AI outcomes. Diya Wynn leads a team at AWS that helps customers implement responsible AI practices. She has over 25 years of experience as a technologist scaling products for acquisition; driving inclusion, diversity & equity initiatives; and leading operational transformation. She serves on the AWS Health Equity Initiative Review Committee; mentors at Tulane University, Spelman College, and GMI; was a mayoral appointee in Environment Affairs for six years; and guest lectures regularly on responsible and inclusive technology. Responsible AI for the greater good: insights from AWS's Diya Wynn Ethics In AI: A Conversation With Diya Wynn, AWS Responsible AI Lead Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
Kevin Werbach is joined by Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, to discuss the pioneering efforts of her team in building a culture of ethical technology use. Paula shares insights on aligning risk assessments and technical mitigations with business goals to bring stakeholders on board. She explains how AI governance functions in a large business with enterprise customers, who have distinctive needs and approaches. Finally, she highlights the shift from "human in the loop" to "human at the helm" as AI technology advances, stressing that today's investments in trustworthy AI are essential for managing tomorrow's more advanced systems. Paula Goldman leads Salesforce in creating a framework to build and deploy ethical technology that optimizes social benefit. Prior to Salesforce, she served as Global Lead of the Tech and Society Solutions Lab at Omidyar Network, and has extensive entrepreneurial experience managing frontier market businesses. Creating safeguards for the ethical use of technology Trusted AI Needs a Human at the Helm Responsible Use of Technology: The Salesforce Case Study Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
Kevin Werbach speaks with Navrina Singh of Credo AI, which automates AI oversight and regulatory compliance. Singh addresses the increasing importance of trust and governance in the AI space. She discusses the need to standardize and scale oversight mechanisms by helping companies align and translate their systems to include all stakeholders and comply with emerging global standards. Kevin and Navrina also explore the importance of sociotechnical approaches to AI governance, the necessity of mandated AI disclosures, the democratization of generative AI, adaptive policymaking, and the need for enhanced AI literacy within organizations to keep pace with evolving technologies and regulatory landscapes. Navrina Singh is the Founder and CEO of Credo AI, a Governance SaaS platform empowering enterprises to deliver responsible AI. Navrina previously held multiple product and business leadership roles at Microsoft and Qualcomm. She is a member of the U.S. Department of Commerce National Artificial Intelligence Advisory Committee (NAIAC), an executive board member of Mozilla Foundation, and a Young Global Leader of the World Economic Forum. Credo.ai ISO/IEC 42001 standard for AI governance Navrina Singh Founded Credo AI To Align AI With Human Values Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
Kevin Werbach speaks with Scott Zoldi of FICO, which pioneered consumer credit scoring in the 1950s and now offers a suite of analytics and fraud detection tools. Zoldi explains the importance of transparency and interpretability in AI models, emphasizing a “simpler is better” approach to creating clear and understandable algorithms. He discusses FICO's approach to responsible AI, which includes establishing model governance standards, and enforcing these standards through the use of blockchain technology. Zoldi explains how blockchain provides an immutable record of the model development process, enhancing accountability and trust. He also highlights the challenges organizations face in implementing responsible AI practices, particularly in light of upcoming AI regulations, and stresses the need for organizations to catch up in defining governance standards to ensure trustworthy and accountable AI models. Dr. Scott Zoldi is Chief Analytics Officer of FICO, responsible for analytics and AI innovation across FICO's portfolio. He has authored more than 130 patents, and is a long-time advocate and inventor in the space of responsible AI. He was nominated for American Banker's 2024 Innovator Award and received Corinium's Future Thinking Award in 2022. Zoldi is a member of the Board of Advisors for FinReg Lab, and serves on the Boards of Directors of Software San Diego and San Diego Cyber Center of Excellence. He received his Ph.D. in theoretical and computational physics from Duke University. Navigating the Wild AI with Dr. Scott Zoldi How to Use Blockchain to Build Responsible AI The State of Responsible AI in Financial Services
Welcome to the Road to Accountable AI. Explore the crucial intersection of technology, law, and business ethics with Wharton professor Kevin Werbach, as he and his guests examine efforts to implement responsible, safe and trustworthy artificial intelligence. In this initial episode, Professor Werbach describes the concept he calls Accountable AI. He talks about his background in emerging technology over the past three decades, starting with his experience leading internet policy at the U.S. Federal Communications Commission during the early years of the commercial internet. He explains why AI has such revolutionary potential today, while at the same time raising serious legal, ethical, and public policy concerns. He provides five reasons why companies should take Accountable AI seriously. Look for upcoming episodes featuring top AI experts such as Azeem Azhar (Exponential View), Reid Blackman (Author of Ethical Machines), Elham Tabassi (NIST), Dragos Tudorache (European Parliament), Dominique Shelton Leipzig (Mayer Brown), Scott Zoldi (FICO), Navrina Singh (Credo AI), and Paula Goldman (Salesforce). Accountable AI Website Professor Werbach's Substack Professor Werbach's personal page Pew Research Center Survey on Americans' Views of AI DataRobot State of AI Bias Report KPMG US AI Risk Survey Report The Blockchain and the New Architecture of Trust
Kevin Werbach, professor of Legal Studies and Business Ethics at the Wharton School, University of Pennsylvania, and formerly Counsel for New Technology Policy at the U.S. Federal Communications Commission, is a well-known expert on the business, legal, and social implications of emerging technologies. In this interview, we explore blockchain technology and its impact on traditional notions of trust. He delves into the different architectures of trust, including peer-to-peer trust, Leviathan trust, and intermediary trust, highlighting the limitations and risks associated with these traditional forms of trust, which lead to the decentralized architecture offered by blockchain technology. The interview focuses on the application of blockchain in enhancing trust in specific contexts, using the example of Walmart implementing a blockchain-based solution to improve food safety within its global supply chain. Werbach emphasizes how blockchain can overcome trust barriers and inefficiencies, leading to enhanced trust and improved outcomes. The conversation also delves into the potential of blockchain technology to bring about freedom from corporate and government power, while acknowledging the risk of empowering criminals. Kevin highlights the importance of blockchain as a part of rebuilding trust in society, by providing transparent and decentralized systems for verifying information and maintaining integrity. He sets out the value of cryptocurrencies, including Bitcoin, with an emphasis on how blockchain technology provides trust through the integrity and transparency of the ledger. The interview concludes with a discussion on the viability of blockchain technology, the collapse of centralized platforms like FTX, and the comparison between the telecom industry and blockchain.
Decentralized systems are evolving at a dizzying pace. How are these systems solving real-world problems today? What is a DAO and how may this new form of business organization disrupt the status quo? How can we ensure new Web3 regulatory frameworks protect citizens without stifling innovation? On this latest episode of EY Better Innovation, Jeff Saviano explores these questions with Kevin Werbach, professor of Legal Studies and Business Ethics at the University of Pennsylvania's Wharton School. As former Counsel for New Technology Policy at the U.S. Federal Communications Commission, and a frequent author on emerging technologies, Kevin is deeply engaged in matters of Web3 policies and governance. He is a pioneer in emerging fields such as gamification (applying digital game design principles to business), algorithmic accountability, and blockchain. Kevin has published four books, including The Blockchain and the New Architecture of Trust, For the Win: The Power of Gamification and Game Thinking in Business, Education, Government, and Social Impact, and After the Digital Tornado: Networks, Algorithms, Humanity. His work and scholarship are shaping the future of technology systems, as they're poised to better address important societal and business problems.
Decentralized autonomous organizations -- DAOs -- hold much promise, but practitioners and governments must be aware of risks, says Wharton's Kevin Werbach, co-author of a DAO Toolkit that was released at this year's World Economic Forum. Hosted on Acast. See acast.com/privacy for more information.
Wharton's Kevin Werbach speaks with Wharton Business Daily on SiriusXM about the fall of FTX and the need for better cryptocurrency regulation.
In conversation with Kevin Werbach Acclaimed for their intersectional explorations of cyberculture, religion, currency, and politics, Douglas Rushkoff's 20 bestselling books include Throwing Rocks at the Google Bus, Program or Be Programmed, Present Shock, and Media Virus. He also is the host of the Team Human podcast, writes a column for Medium, and created the PBS Frontline documentaries Generation Like, The Persuaders, and Merchants of Cool. A professor of media theory and digital economics at City University of New York, Queens College, he was selected as one of the world's 10 most influential intellectuals by MIT, was the first winner of the Neil Postman Award for Career Achievement in Public Intellectual Activity, is a recipient of the Marshall McLuhan Award, and has received many other accolades. In Survival of the Richest, Rushkoff reveals the flawed mindset that has led out-of-touch tech titans to prepare for a societal catastrophe they could simply avert through practical measures. Chair of the Department of Legal Studies and Business Ethics at the University of Pennsylvania's Wharton School, Kevin Werbach is the author of For the Win: How Game Thinking Can Revolutionize Your Business and The Blockchain and the New Architecture of Trust. He served on the Obama administration's presidential transition team and helped develop the Federal Communications Commission's approach to internet policy. (recorded 9/20/2022)
Kevin Werbach, Wharton Professor of Legal Studies & Business Ethics, talks about Biden's recent executive order on cryptocurrency regulations and what it means for the federal agencies who are tasked with developing policies to protect consumers, investors and businesses. See acast.com/privacy for privacy and opt-out information.
Wharton's Kevin Werbach speaks with Wharton Business Daily on SiriusXM about the Biden administration's executive order to develop a national policy on cryptocurrency.
Wharton's Kevin Werbach explains why the Biden administration's executive order to develop a national policy on cryptocurrency is an important step forward.
According to Microsoft research, a new type of domain name is ripe for fraudsters to abuse. Microsoft's new Digital Defence Report features a rogue's gallery of cyberthreats such as phishing, ransomware, and supply-chain intrusions. However, it introduces a new foe to the mix: blockchain domains. In Microsoft's latest annual security report, domain names inscribed into a distributed ledger maintained across a constellation of machines, rather than housed in a traditional, centralised registry, are referred to as "the next major threat." When domain names are stored on a blockchain, they can be difficult to shut down or to trace to their owners. It also renders them unavailable without specialised software or configuration. "In recent years, we have observed blockchain domains incorporated into cybercriminal infrastructure and activities," the paper states, referring to Microsoft's experience dismantling a botnet known as Necurs last spring. That botnet employed a domain-generating algorithm to generate new hosts in bulk, including under the .bit blockchain top-level domain, meaning they could not be policed the way a .com or other standards-compliant domain would be. Because of the possibility of abuse, a group called OpenNIC, which advocates alternatives to the existing domain-name system, voted in 2019 to prohibit the .bit domain, fearing that the organisation would be "directly responsible for the birth of a whole new kind of malware." "This trend of dangers employing blockchain domains as infrastructure with the means to establish an undeniable criminal network should be taken carefully," adds Microsoft's report. CAN'T GET THEM TO STOP Meanwhile, among supporters of a decentralised internet, there is a popular answer to the criticism that blockchain names cannot be removed: That's exactly right. 
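To make the domain-generating-algorithm idea concrete: the point of a DGA is that infected machines and their operator each compute the same list of rendezvous domains from a shared seed, so there is no fixed domain to block. The sketch below is purely illustrative (the function name and the seed format are invented here, and real DGAs such as the one attributed to Necurs use their own proprietary schemes):

```python
import hashlib

def generate_domains(seed: str, count: int = 5, tld: str = ".bit") -> list[str]:
    """Derive a deterministic list of pseudo-random domain names from a seed.

    Illustrative sketch only: hashing a shared seed (often a date) lets both
    sides of a botnet independently compute the same candidate domains.
    """
    domains = []
    for i in range(count):
        # Hash the seed plus a counter; use the first 12 hex chars as a label.
        digest = hashlib.sha256(f"{seed}:{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

# Any party knowing the seed recomputes the identical list.
todays_domains = generate_domains("2020-03-10")
```

Because the output is fully determined by the seed, defenders who reverse-engineer the algorithm can precompute and sinkhole the domains, which is exactly why registries that refuse takedowns (like .bit) were attractive to attackers.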
According to the sales pitch on the webpage of one blockchain-domain registrar, Unstoppable Domains, "Unlike traditional domains, Unstoppable Domains are totally owned and controlled by the user with zero renewal costs ever (you buy it once, you own it for life!)" It lists one-time registration rates ranging from $20 to $100 for blockchain top-level domains such as .crypto, .wallet, .coin, .888, and .x, but costs can skyrocket for shorter, more memorable domains. Potomacriver.x, for example, would cost $100, whereas potomac.x would cost $7,500. Unstoppable Domains CEO Matthew Gould responded via email, dismissing the notion that his San Francisco-based company is an irresponsible actor. He mentioned the company's trademark-compliance regulations (it wouldn't let me start registering fastcompany.x because it said it was "protected") and applicant-screening procedures. "We have also prevented the registration of domains associated with known pirating software or other types of IP theft and fraud," he wrote, adding that Unstoppable can even take back a domain if registrants park it with its custody service rather than transferring it to their own cryptocurrency wallet—the former being an easier route that roughly 75% of registrants take today. Gould also argued that blockchain domains would improve trust in cryptocurrency transactions rather than decrease it. "Anonymous people like to generate new addresses every time since it is great practise," he wrote. "Domains establish a single memorable non-changing endpoint, which reduces the anonymity of cryptocurrency payments." Microsoft declined to comment further on the report's conclusions. REQUIRES A SPECIAL BROWSER While blockchain domains have been exploited for malware, Sean Gallagher, senior security researcher at Sophos, stated in an email that their need for bespoke routing rendered them an ineffective option for such assaults, because malware can't spread via standard web browsers that don't support the domains. 
He also pointed out that blockchain domains provide less privacy than Tor, the cloaked routing method used to avoid many censorship regimes: "They don't provide anonymity for the destination." The simplest method to navigate to a blockchain domain, such as brad.crypto—Unstoppable Domains cofounder Bradley Kam's online space—is to utilise one of the few browsers that already support that namespace, such as the Chrome-based, privacy-optimised Brave. Enter brad.crypto into Brave's URL bar, click to accept the blockchain routing, and you should view Kam's gallery of non-fungible token (NFT) artwork. Kevin Werbach, a professor at the University of Pennsylvania's Wharton School, said he doubted browser support for blockchain domains would spread anytime soon, despite the fact that he'd recently registered kwerb.eth (that suffix references another blockchain domain system, the Ethereum Name Service). "Google, Apple, and Microsoft aren't going to provide native support unless they're confident that those concerns will be addressed," he wrote. As a result, adoption will be contingent on people's willingness to switch browsers, install browser extensions, or custom-configure DNS settings—the latter two practices being the types of fiddling that malware occasionally exploits. "DNS has security flaws that are partly related to its centralised structure," Werbach explained, "but putting domain names on a blockchain introduces a new set of security issues. I don't believe we know enough about the size of the relative dangers to make categorical claims." The current frothiness of cryptocurrency and blockchain mania is also cause for concern. Mike Masnick, founder of the Techdirt tech-policy blog and a proponent of a more decentralised social internet, praised the potential for blockchain domains to "create both a different kind of incentive structure and one in which users may retain more control over their own information." 
However, he went on to say that the blockchain space today is "almost entirely populated by mercenary folks looking for profit, which has some useful elements—in terms of bringing in funding and incentivising certain behaviours—but also has the real potential for prioritising pure profit over societal benefit." Masnick didn't draw any comparisons between his work and today's commercial social media. However, why should he? Support us!
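For readers curious how a name like kwerb.eth maps to a record on-chain: the Ethereum Name Service specifies a recursive "namehash" scheme (EIP-137), in which each label of the name is hashed into its parent's hash, so every subdomain gets a unique fixed-size node identifier. The sketch below mirrors only that recursive structure; it substitutes SHA-256 for the Keccak-256 hash ENS actually uses (Keccak is not in Python's standard library), so its digests will not match real ENS nodes:

```python
import hashlib

def toy_namehash(name: str) -> bytes:
    """Sketch of ENS-style hierarchical name hashing (after EIP-137).

    NOTE: real ENS uses Keccak-256; SHA-256 stands in here purely to show
    the recursion, so outputs will NOT match actual ENS node hashes.
    """
    node = b"\x00" * 32  # the root (empty name) is 32 zero bytes
    if name:
        # Fold labels in from the TLD inward: eth first, then kwerb, etc.
        for label in reversed(name.split(".")):
            label_hash = hashlib.sha256(label.encode()).digest()
            node = hashlib.sha256(node + label_hash).digest()
    return node

node = toy_namehash("kwerb.eth")  # 32-byte identifier for the full name
```

The recursion is what makes the namespace hierarchical: the node for kwerb.eth is derived from the node for eth, just as DNS delegates from the TLD down, but the mapping from node to records lives in a smart contract rather than a registry's servers.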
In episode 111 my guest was Ercan Altuğ Yılmaz, founder of Gamfed Türkiye and Oyun Akademisi. He is the author of "Herkes için Oyunlaştırma" (Gamification for Everyone), the first Turkish-language book on gamification, published by Abaküs in 2015. In 2016 he became the first and only Turkish speaker to present at the Gamification World Congress, the world's only conference dedicated to gamification, held annually in Madrid, Spain. (00:00) - Opening: who is Ercan Altuğ Yılmaz? (02:27) - When did gamification first emerge? Foursquare (05:24) - Even though gamification has been part of our lives for a long time, many people still struggle to explain it. Why might that be? Do we belittle games? Do we not take them seriously? Sinan Canan (12:27) - How does gamification work? Kevin Werbach - http://werbach.com/ Atomic Habits (Atomik Alışkanlıklar) - https://www.goodreads.com/book/show/53711349-atomik-al-kanl-klar (17:00) - Research on gamification Tampere University - https://webpages.tuni.fi/gamification/ Amy Jo Kim - https://amyjokim.com/ (22:40) - Does gamification actually work? (27:45) - If I'm not mistaken, we are one of the three countries that play the most games in the world. Why is that? Does it have any advantages? Emrehan Halıcı - https://twitter.com/emrehanhalici?lang=en Platon Günlükleri (34:00) - Children and games. Tiny Habits (Küçük Alışkanlıklar) - https://www.goodreads.com/book/show/59335993-k-k-al-kanl-klar?from_search=true&from_srp=true&qid=4JsYOMcyhi&rank=2 Beni Ödülle Cezalandırma (Don't Punish Me with Rewards) - (37:15) - How are companies using gamification today? 
Tencent - https://en.wikipedia.org/wiki/Tencent China's e-government (41:50) - Book recommendation Homo Ludens - https://www.kitapyurdu.com/kitap/homo-ludens/23319.html Books by Ercan Altuğ Yılmaz - https://www.idefix.com/yazar/ercan-altug-yilmaz/s=310836 Ercan Altuğ Yılmaz - https://www.linkedin.com/in/ercanaltug/ Our social media accounts: Twitter - https://twitter.com/dunyatrendleri Instagram - https://www.instagram.com/dunya.trendleri/ Linkedin - https://www.linkedin.com/company/dunyatrendleri/ Youtube - https://www.youtube.com/c/aykutbalcitv Goodreads - https://www.goodreads.com/user/show/28342227-aykut-balc aykut@dunyatrendleri.com If you would like to support us, our Patreon account - https://www.patreon.com/dunyatrendleri
The big question employers everywhere, from small mom-and-pop shops to global enterprises, are struggling with is "how do we attract and keep talented workers?" There are a lot of different and important factors, such as wages, safe COVID policies, remote work, and culture, but one thing very few people are talking about, even in such a serious time, is making the effort to make a workplace fun. Kevin Werbach is an Associate Professor and Chairperson of Legal Studies and Business Ethics at the Wharton School of the University of Pennsylvania. His recent book, "For the Win: The Power of Gamification and Game Thinking in Business, Education, Government, and Social Impact," illustrates the ways in which concepts from video games can be utilized to find and retain the best talent for your business. Kevin honed the practices in his book during his tenure as the FCC Review Co-Lead under the Obama administration and as a consultant for the CIA and World Bank, where he developed training programs based on his gamification practices. Even in these very "serious" industries, Kevin used games, and more importantly the idea of fun, to engage, motivate, and build trust and community within his teams, something that employers everywhere are struggling to do during The Great Resignation. So with that... let's bring it in!
Despite its decentralized governance and vulnerability, why does blockchain excite even the most powerful companies and governments? Kevin Werbach, Associate Professor of Legal Studies & Business Ethics at Wharton, author, and founder of the Supernova Group consulting firm, dives into the details of his book, The Blockchain and the New Architecture of Trust. Listen as he discusses with host Greg LaBlanc how we can put ourselves in the position of confident vulnerability, and how the different trust frameworks substitute for and complement each other. Kevin explains why blockchain emerges in places where there's a solid institution of trust. Finally, make sure to take notes as they comprehensively talk about issues arising from the decentralized governance of this system, learning from The DAO, integrating smart contracts into legal systems, and the concept of the information fiduciary.

Episode Quotes:

Systems of trust: substitutes or complements?
What's good about blockchain-type systems is that it's a different kind of trust. You're trusting in the technology, the code, the math, the cryptography, and that's an alternative. He really sees them as substitutes, and I pushed back a lot on that view as well. You know, I'm sure we'll get to one of the other big points of the book, which is regulation, governance, and law. These are not opposed to the kind of cryptographic decentralized trust that blockchain enables. In fact, having good systems of law, regulation, and governance is necessary to realize the full potential of the technology.

Why do blockchains emerge in places where there's a solid institution of trust?
The first point is that while it's certainly true that many people, at the beginning and even now, who are influential and associated with cryptocurrencies and blockchain have this radical distrust of governments and authority, Coinbase is now the most valuable crypto company in the world. 
If you use Coinbase, you give them your keys, you give them control of your digital assets. 80% of all cryptocurrencies are held in these centralized exchanges, where people literally give someone else control. Why? Because it's much easier and more effective that way. You, as an individual, don't have the full burden of securing your digital assets and having no recourse if you lose the private key. The second one, I really agree with your statement that we don't see the adoption of blockchain in areas that have a significant breakdown in trust. Those are societies where there are huge problems. And so we're typically not going to see massive investment and large companies being built, just because of all the different societal problems in general. The issue is ultimately not one that the digital systems, the blockchain systems, solve.

In the context of what happened with The DAO, how did this prompt the industry to make the system better and open discussions on governance?
If you look at what the more sophisticated blockchain-based systems are doing, and this has only increased in the time since the book came out, they are building new governance mechanisms that try to recreate some of the best features of court systems in a decentralized way. Now, again, that's still not going to be perfect. I still think we need law as a backstop ultimately. They are recognizing that there's a value that those governance mechanisms promote. The issue is not how to abandon them, but how to get as much of the benefit of them as possible in a decentralized way.

An overview of the information fiduciary in the context of big data companies:
If you're a fiduciary, you have special obligations. You have to take into account the interests of the one you are responsible for ahead of your own interests. Jack Balkin at Yale and Jonathan Zittrain at Harvard have made the claim that big digital platforms like Facebook and Google are now so central to the information ecosystem. 
We have so much dependency on them for our data, without really any choice to say no. We can't just say, "I'm going to live my life and never touch them" in the world today. Because of that, they should be treated as fiduciaries. They should have obligations. Under the current model, if I can make money on your data, I just need to have a contract where you say you agree to that use. As fiduciaries: no, no, no. They have to show that their use of your data is in your interest. They can use your data to give you a better service, but not simply to make more money for themselves.

Show Links:
Guest Profile
Kevin Werbach's Profile and Official Website
Kevin on Twitter
Kevin on LinkedIn
Google Scholar Articles
Order Books
For the Win: The Power of Gamification and Game Thinking in Business, Education, Government, and Social Impact
After the Digital Tornado: Networks, Algorithms, Humanity
The Blockchain and the New Architecture of Trust (Information Policy)
The Gamification Toolkit: Dynamics, Mechanics, and Components for the Win
Kevin Werbach, Wharton Professor of Legal Studies and Business Ethics, talks to Wharton Business Daily's Dan Loney about decentralized finance: the intersection of blockchain, digital assets, and financial services. He outlines why it has exploded over the last year and the DeFi toolkit he's creating for the World Economic Forum.
Decentralized finance -- or DeFi -- has experienced explosive growth in the past year. But in order for DeFi to fulfill its promise, “now is the time to evaluate its benefits and dangers,” write Kevin Werbach and David Gogel.
Measures that protect investors and weed out bad actors will boost confidence in cryptocurrencies and help the industry to grow, according to Wharton's Brian Feinstein and Kevin Werbach.
Blockchain is one of the biggest buzzwords in technology today, but confusion exists about what exactly it is. Wharton's Kevin Werbach provides clarity in his new book.
God and technology are often portrayed as being at odds. Maybe because both are impossible to understand without dedicating your life to them. They can also be either helpful or harmful when applied in the right or wrong place. See omnystudio.com/listener for privacy information.
Wharton professor Kevin Werbach explains why the blockchain is poised to upend the way many industries do business.
Our topic today is: Game Thinking and the MVP of Instructional Design with special guest Zsolt Olah. Zsolt participated in Kevin Werbach's G.A.M.E. (Gameful Approaches of Motivation and Engagement) along with researchers and practitioners of game design and gamification such as Karl Kapp, Amy Jo Kim, and Sebastian Deterding. While researchers and practitioners might have disagreed on many levels about gamification, there was one big take away for L&D we all agreed on. Is Instructional Design Dead? Zsolt believes that Instructional Design can be the MVP of the game, but only if we redefine our job and move our focus from the traditional content-driven design to user-centered action design. And that's where Game Thinking comes in. Zsolt likes to refer to the systematic approach overall as Game Thinking, which may result in gamification, game-based learning, the combination of the two or no training at all. It all depends on the business goals. Our discussion today explores: The meaning of Game Thinking first. A resistance by instructional designers to designing gamified learning experiences The most frequently asked questions by instructional designers about gamification Moving away from the content-driven approach to a user-centered, action-focused approach The best way to learn more about gamification of learning About Zsolt Olah: Zsolt is a Director, Innovation and Learning Solutions at PDG (Performance Development Group), where he's responsible for a team to deliver innovative learning and performance solutions that drive business results. Previously, Zsolt worked as a Sr. Program Manager at Comcast University, where he was the thought leader in the creative learning solutions space, spearheading the research and application strategy of gamification/game-based learning, game-thinking within learning and development. 
Connect with Zsolt http://rabbitoreg.com and on Twitter at @rabbitoreg Connect with Monica at www.monicacornetti.com or on Twitter at @monicacornetti Visit the Sententia website to learn more about our upcoming Level 1 and Level 2 certifications for learning and talent management professionals – we have both live and online certifications you can choose from – www.sententiagames.com
An “open Internet,” free of entry bans, throttling, or paid prioritization, won the day in a big FCC ruling. Wharton professor Kevin Werbach explains the business implications.
The FCC's proposed new rules regarding net neutrality are most likely benign, says Wharton's Kevin Werbach. Instead, a much bigger threat to the Internet in the U.S. is a lack of competition in broadband.
Agreeing on a system that will allow the most efficiency and innovation lies at the heart of the current debate over wireless spectrum allocation, says Wharton's Kevin Werbach.
From Wikipedia: Kevin Werbach is a leading expert on the business, policy, and social implications of emerging Internet and communications technologies. Werbach is an Associate Professor of Legal Studies and Business Ethics at The Wharton School, University of Pennsylvania (since 2004). His latest book, “For the Win“, talks about how to reap the benefits of Gamification in the workplace. It's an awesome interview, and reminds us of our friends over at 8BIT. Enjoy!
Can work be fun? Can the insights of successful game designers be used to engage customers in a variety of industries? Wharton legal studies and business ethics professor Kevin Werbach and New York Law School professor Dan Hunter, authors of For the Win: How Game Thinking Can Revolutionize Your Business, say yes. Knowledge at Wharton spoke with Werbach and Hunter about what gamification really is, how companies are using it, and what pitfalls to avoid when gamifying. (Video with transcript)
Gamification may be a new term to most people, but for many members of the business community it means a new way to create value for their companies' customers and employees, among others. What exactly is gamification, what is it not, and how will it change the way we do business in the next few years? Knowledge at Wharton discussed these issues with professor Kevin Werbach; Rajat Paharia, founder of Bunchball, a tech company that enables businesses to implement gamification; and Daniel Debow, co-founder of Rypple, a social performance management company. Werbach and colleague Dan Hunter recently organized a two-day conference on gamification titled ”For the Win: Serious Gamification.”
Rumors that Facebook or Cisco would buy Skype were proven wrong on Tuesday when Microsoft struck an $8.5 billion deal to acquire the online voice and video chat service. Most analysts welcomed the takeover as a shrewd move, on the grounds that it puts Microsoft in a commanding position in the emerging markets of video content and online telephony. Still, considering that Skype's previous acquisition by eBay ended in a $1.4 billion write-down, questions remain. Will there be a good cultural fit between Microsoft and Skype? Did Microsoft overpay for a company that continues to lose money? Knowledge at Wharton discussed these questions and more with Wharton professors Eric Clemons and Kevin Werbach.