Shifting Privacy Left features lively discussions on the need for organizations to embed privacy by design into the UX/UI, architecture, engineering / DevOps and the overall product development processes BEFORE code or products are ever shipped. Each Tuesday, we publish a new episode that features interviews with privacy engineers, technologists, researchers, ethicists, innovators, market makers, and industry thought leaders. We dive deeply into this subject and unpack the exciting elements of emerging technologies and tech stacks that are driving privacy innovation; strategies and tactics that win trust; privacy pitfalls to avoid; privacy tech issues ripped from the headlines; and other juicy topics of interest.
Debra J. Farber (Shifting Privacy Left)
In this episode, I'm joined by Amalia Barthel, founder of Designing Privacy, a consultancy that helps businesses integrate privacy into business operations; and Eric Lybeck, a seasoned independent privacy engineering consultant with over two decades of experience in cybersecurity and privacy. Eric recently served as Director of Privacy Engineering at Privacy Code. Today, we discuss: the importance of more training for privacy engineers on AI system enablement; why it's not enough for privacy professionals to solely focus on AI governance; and how their new hands-on course, "Privacy Engineering in AI Systems Certificate program," can fill this need.

Throughout our conversation, we explore the differences between AI system enablement and AI governance and why Amalia and Eric were inspired to develop this certification program. They share examples of what is covered in the course and outline the key takeaways and practical toolkits that enrollees will get, including case studies, frameworks, and weekly live sessions throughout.

Topics Covered:
- How AI system enablement differs from AI governance, and why we should focus on AI as part of privacy engineering
- Why Eric and Amalia designed an AI systems certificate course that bridges the gaps between privacy engineers and privacy attorneys
- The unique ideas and practices presented in this course and what attendees will take away
- Frameworks, cases, and mental models that Eric and Amalia will cover in their course
- How Eric & Amalia structured the Privacy Engineering in AI Systems Certificate program's coursework
- The importance of upskilling for privacy engineers and attorneys

Resources Mentioned:
- Enroll in the 'Privacy Engineering in AI Systems Certificate program' (save $300 with promo code PODCAST300 - enter it into the Inquiry Form instead of purchasing the course directly)
- Read: 'The Privacy Engineer's Manifesto'
- Take the European Commission's free course, 'Understanding Law as Code'

Guest Info:
- Connect with Amalia on LinkedIn
- Connect with Eric on LinkedIn
- Learn about Designing Privacy
Today, I chat with Gianclaudio Malgieri, an expert in privacy, data protection, AI regulation, EU law, and human rights. Gianclaudio is an Associate Professor of Law at Leiden University, Co-director of the Brussels Privacy Hub, Associate Editor of the Computer Law & Security Review, and co-author of the paper "The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-offs Between PETs and Fairness." In our conversation, we explore this paper and why privacy-enhancing technologies (PETs) are essential but not enough on their own to address digital policy challenges.

Gianclaudio explains why PETs alone are insufficient solutions for data protection and discusses the obstacles to achieving fairness in data processing, including bias, discrimination, social injustice, and market power imbalances. We discuss data alteration techniques such as anonymization, pseudonymization, synthetic data, and differential privacy in relation to GDPR compliance. Plus, Gianclaudio highlights the issues of representation for minorities in differential privacy and stresses the importance of involving these groups in identifying bias and assessing AI technologies. We also touch on the need for ongoing research on PETs to address these challenges and share our perspectives on the future of this research.

Topics Covered:
- What inspired Gianclaudio to research fairness and PETs
- How PETs are about power and control
- The legal / GDPR and computer science perspectives on 'fairness'
- How fairness relates to discrimination, social injustices, and market power imbalances
- How data obfuscation techniques relate to AI / ML
- How well the use of anonymization, pseudonymization, and synthetic data techniques address data protection challenges under the GDPR
- How the use of differential privacy techniques may lead to unfairness (a toy illustration follows these notes)
- Whether the use of encrypted data processing tools and federated and distributed analytics achieve fairness
- 3 main PET shortcomings and how to overcome them: 1) bias discovery; 2) harms to people belonging to protected groups and individuals' autonomy; and 3) market imbalances
- Areas that warrant more research and investigation

Resources Mentioned:
- Read: "The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-offs Between PETs and Fairness"

Guest Info:
- Connect with Gianclaudio on LinkedIn
- Learn more about the Brussels Privacy Hub
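To make the minority-representation concern concrete, here is a minimal sketch (my own illustration, not from the paper; the group sizes and epsilon are hypothetical) of how the Laplace mechanism commonly used for differential privacy adds noise of the same absolute scale to every subgroup count, so small minority groups suffer much larger relative distortion:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def laplace_count(true_count: int, epsilon: float) -> float:
    # A counting query has sensitivity 1, so the Laplace mechanism adds
    # noise with scale 1/epsilon regardless of how large the group is.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

epsilon = 0.1  # hypothetical privacy budget
groups = {"majority": 100_000, "minority": 120}  # hypothetical subgroup counts

for name, count in groups.items():
    noisy = laplace_count(count, epsilon)
    rel_error = abs(noisy - count) / count
    print(f"{name}: true={count}, noisy={noisy:.1f}, relative error={rel_error:.2%}")
```

With epsilon = 0.1 the noise scale is 10: negligible against a count of 100,000, but often a double-digit percentage error against a count of 120 - one way the fairness trade-off Gianclaudio describes can show up in practice.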
In this episode, I had the pleasure of talking with Avi Bar-Zeev, a true tech pioneer and the Founder and President of The XR Guild. With over three decades of experience, Avi has an impressive resume, including launching Disney's Aladdin VR ride, developing Second Life's 3D worlds, co-founding Keyhole (which became Google Earth), co-inventing Microsoft's HoloLens, and contributing to the Amazon Echo Frames. The XR Guild is a nonprofit organization that promotes ethics in extended reality (XR) through mentorship, networking, and educational resources.

Throughout our conversation, we dive into privacy concerns in augmented reality (AR), virtual reality (VR), and the metaverse, highlighting increased data misuse and manipulation risks as technology progresses. Avi shares his insights on how product and development teams can continue to be innovative while still upholding responsible, ethical standards with clear principles and guidelines to protect users' personal data. Plus, he explains the role of eye-tracking technology and why he advocates classifying its data as health data. We also discuss the challenges of anonymizing biometric data, informed consent, and the need for ethics training across the tech industry.

Topics Covered:
- The top privacy and misinformation issues that Avi has noticed when it comes to AR, VR, and metaverse data
- Why Avi advocates for classifying eye-tracking data as health data
- The dangers of unchecked AI manipulation and why we need to be more aware of, and in control of, our online presence
- The ethical considerations for experimentation in highly regulated industries
- Whether it is possible to anonymize VR and AR data
- Ways these product and development teams can be innovative while maintaining ethics and avoiding harm
- AR risks vs. VR risks
- Advice and privacy principles to keep in mind for technologists who are building AR and VR systems
- Understanding The XR Guild

Resources Mentioned:
- Read: 'The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology'
- Read: 'Our Next Reality'

Guest Info:
- Connect with Avi on LinkedIn
- Check out the XR Guild
- Learn about Avi's consulting services
Today, I'm joined by Matt Gershoff, Co-founder and CEO of Conductrics, a software company specializing in A/B testing, multi-armed bandit techniques, and customer research and survey software. With a strong background in resource economics and artificial intelligence, Matt brings a unique perspective to the conversation, emphasizing simplicity and intentionality in decision-making and data collection.

In this episode, Matt dives into Conductrics' background; the role of A/B testing and experimentation in privacy; data collection at a specific and granular level; and the details of Conductrics' processes. He emphasizes the importance of intentionally collecting data with a clear purpose to avoid unnecessary data accumulation, and touches on the value of experimentation in conjunction with data minimization strategies (a sketch of one such approach follows these notes). Matt also discusses his upcoming talk at the PEPR Conference and shares his hopes for what privacy engineers will learn from the event.

Topics Covered:
- Matt's background and how he started A/B testing and experimentation at Conductrics
- The major challenges that arise when companies run experiments, and how Conductrics works to solve them
- Breaking down A/B testing
- How being intentional about A/B testing and experimentation supports high-level privacy
- The process of data collection, testing, and experimentation
- Collecting the data while minimizing privacy risks
- The value of attending the USENIX Conference on Privacy Engineering Practice & Respect (PEPR24) and what to expect from Matt's talk

Guest Info:
- Connect with Matt on LinkedIn
- Learn more about Conductrics
- Read about George Box's quote, "All models are wrong"
- Learn about the PEPR Conference
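As a hedged illustration of pairing experimentation with data minimization (my own sketch, not Conductrics' implementation; the experiment and user names are hypothetical): variant assignment can be derived deterministically by hashing an experiment name together with a user identifier, so the system never needs to store a per-user assignment table or build a profile just to keep assignments stable:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    # Hash (experiment, user) so assignment is stable across sessions
    # without persisting any per-user assignment record.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user-123", "checkout-flow-v2", ["control", "treatment"]))
```

Note that hashing alone does not anonymize anything; the point here is only that intentional design can shrink what must be collected and stored in the first place.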
In this episode, Marie Potel-Saville joins me to shed light on the widespread issue of dark patterns in design. With her background in law, Marie founded the 'FairPatterns' project with her award-winning privacy and innovation studio, Amurabi, to detect and fix large-scale dark patterns. Throughout our conversation, we discuss the different types of dark patterns, why it is crucial for businesses to prevent them from being coded into their websites and apps, and how designers can ensure that they are designing fair patterns in their projects.

Dark patterns are interfaces that deceive or manipulate users into unintended actions by exploiting cognitive biases inherent in decision-making processes. Marie explains how dark patterns are harmful to our economic and democratic models, their negative impact on individual agency, and the ways that FairPatterns provides countermeasures and safeguards against the exploitation of people's cognitive biases. She also shares tips for designers and developers for designing and architecting fair patterns.

Topics Covered:
- Why Marie shifted her career path from practicing law to deploying and lecturing on Legal UX design & combatting Dark Patterns at Amurabi
- The definition of 'Dark Patterns' and the difference between them and 'deceptive patterns'
- What motivated Marie to found FairPatterns.com and her science-based methodology to combat dark patterns
- The importance of decision-making governance
- Why execs should care about preventing dark patterns from being coded into their websites, apps, & interfaces
- How dark patterns exploit our cognitive biases to our detriment
- What global laws say about dark patterns
- How dark patterns create structural risks for our economies & democratic models
- How 'Fair Patterns' serve as countermeasures to Dark Patterns
- The 7 categories of Dark Patterns in UX design & associated countermeasures
- Advice for designers & developers to ensure that they design & architect Fair Patterns when building products & features
- How companies can boost sales & gain trust with Fair Patterns
- Resources to learn more about Dark Patterns & countermeasures

Guest Info:
- Connect with Marie on LinkedIn
- Learn more about Amurabi
- Check out FairPatterns.com

Resources Mentioned:
- Learn about the 7 Stages of Action Model
- Take FairPatterns' course: Dark Patterns 101
- Read 'Deceptive Design Patterns'
- Listen to FairPatterns' Fighting Dark Patterns Podcast
In this episode, I sat down with Aaron Weller, the Leader of HP's Privacy Engineering Center of Excellence (CoE), focused on providing technical solutions for privacy engineering across HP's global operations. Throughout our conversation, we discuss: what motivated HP's leadership to stand up a CoE for Privacy Engineering; Aaron's approach to staffing the CoE; how a CoE can shift privacy left in a large, matrixed organization like HP's; and how to leverage the CoE to proactively manage privacy risk.

Aaron emphasizes the importance of understanding an organization's strategy when creating a CoE and shares his methods for gathering data to inform the center's roadmap and team building. He also highlights the great impact that a Center of Excellence can offer and gives advice for implementing one in your organization. We touch on the main challenges in privacy engineering today and the value of designing user-friendly privacy experiences. In addition, Aaron provides his perspective on selecting the right combination of Privacy Enhancing Technologies (PETs) for anonymity, how to go about implementing PETs, and the role that AI governance plays in his work.

Topics Covered:
- Aaron's deep privacy and consulting background and how he ended up leading HP's Privacy Engineering Center of Excellence
- The definition of a 'Center of Excellence' (CoE) and how a Privacy Engineering CoE can drive value for an organization and shift privacy left
- What motivates a company like HP to launch a CoE for Privacy Engineering, and what its reporting line should be
- Aaron's approach to creating a Privacy Engineering CoE roadmap; his strategy for staffing this CoE; and the skills & abilities that he sought
- How HP's Privacy Engineering CoE works with the business to advise on, and select, the right PETs for each business use case
- Why it's essential to know the privacy guarantees that your organization wants to assert before selecting the right PETs to get you there
- Lessons learned from setting up a Privacy Engineering CoE and how to get executive sponsorship
- The amount of time that privacy teams have had to spend on AI issues over the past year, and advice on preventing burnout
- Aaron's hypothesis about the value of getting an early handle on governance over the adoption of innovative technologies
- The importance of being open to continuous learning in the field of privacy engineering

Guest Info:
- Connect with Aaron on LinkedIn
- Learn about HP's Privacy Engineering Center of Excellence

Resources Mentioned:
- Review the OWASP Machine Learning Security Top 10
- Review the OWASP Top 10 for LLM Applications
Today, I'm joined by Amaka Ibeji, Privacy Engineer at Cruise, where she designs and implements robust privacy programs and controls. In this episode, we discuss Amaka's passion for creating a culture of privacy and compliance within organizations and engineering teams. Amaka also hosts the PALS Parlor Podcast, where she speaks to business leaders and peers about privacy, AI governance, leadership, and security, and explains technical concepts in a digestible way. The podcast aims to enable business leaders to do more with their data and provides a way for the community to share knowledge with one another.

In our conversation, we touch on her career trajectory from security engineer to privacy engineer and the intersection of cybersecurity, privacy engineering, and AI governance. We highlight the importance of early engagement with various technical teams to enable innovation while still achieving privacy compliance. Amaka also shares the privacy-enhancing technologies (PETs) that she is most excited about, and she recommends resources for those who want to learn more about strategic privacy engineering. Amaka emphasizes that privacy is a systemic, 'wicked problem' and offers her tips for understanding and approaching it.

Topics Covered:
- How Amaka's compliance-focused experience at Microsoft helped prepare her for her Privacy Engineering role at Cruise
- Where privacy overlaps with the development of AI
- Advice for shifting privacy left to make privacy stretch beyond a compliance exercise
- What works well, and what doesn't, when building a 'Culture of Privacy'
- Privacy by Design approaches that make privacy & innovation a win-win rather than a zero-sum game
- Privacy Engineering trends that Amaka sees, and the PETs about which she's most excited
- Amaka's Privacy Engineering resource recommendations, including Hoepman's "Privacy Design Strategies" book, the LINDDUN Privacy Threat Modeling Framework, and the PLOT4ai Framework
- "The PALS Parlor Podcast," focused on Privacy Engineering, AI Governance, Leadership, & Security: why Amaka launched the podcast, her intended audience, and topics that she plans to cover this year
- The importance of collaboration, building a community of passionate privacy engineers, and addressing the systemic issue of privacy

Guest Info & Resources:
- Follow Amaka on LinkedIn
- Listen to The PALS Parlor Podcast
- Read Jaap-Henk Hoepman's "Privacy Design Strategies (The Little Blue Book)"
- Read Jason Cronk's "Strategic Privacy by Design, 2nd Edition"
- Check out The LINDDUN Privacy Threat Modeling Framework
- Check out The Privacy Library of Threats for Artificial Intelligence (PLOT4ai) Framework
In this week's episode, I am joined by Heidi Saas, a privacy lawyer with a reputation for advocating for products and services built with privacy by design, and against the abuse of personal data. In our conversation, she dives into recent FTC enforcement actions, analyzing five FTC actions and some enforcement sweeps by Colorado & Connecticut.

Heidi shares her insights on the effect of the FTC enforcement actions and what privacy engineers need to know, emphasizing the need for data management practices to be transparent, accountable, and based on affirmative consent. We cover the role of privacy engineers in ensuring compliance with data privacy laws; why 'browsing data' is 'sensitive data'; the challenges companies face regarding data deletion; and the need for clear consent mechanisms, especially with the collection and use of location data. We also discuss the need to audit the privacy posture of products and services - which includes a requirement to document who made certain decisions - and how to prioritize risk analysis to proactively address privacy risks.

Topics Covered:
- Heidi's journey into privacy law and advocacy for privacy by design and default
- How the FTC brings enforcement actions, the effect of their settlements, and why privacy engineers should pay closer attention
- Case 1: FTC v. InMarket Media - Heidi explains the implication of the decision: data that are linked to a mobile advertising identifier (MAID) or an individual's home are not considered de-identified
- Case 2: FTC v. X-Mode Social / OutLogic - Heidi explains the implications of the decision, focused on: affirmative express consent for location data collection; the definition of a 'data product assessment' and audit programs; and data retention & deletion requirements
- Case 3: FTC v. Avast - Heidi explains the implication of the decision: 'browsing data' is considered 'sensitive data'
- Case 4: The People (CA) v. DoorDash - Heidi explains the implications of the decision, based on CalOPPA: companies that share personal data with one another as part of a 'marketing cooperative' are, in fact, selling data
- Heidi discusses recent state enforcement sweeps for privacy, specifically in Colorado and Connecticut, and clarity around breach reporting timelines
- The need to prioritize independent third-party audits for privacy
- Case 5: FTC v. Kroger - Heidi explains why the FTC's blocking of Kroger's merger with Albertsons was based on antitrust and privacy harms, given the sheer amount of personal data that they process
- Tools and resources for keeping up with FTC cases and connecting with your privacy community

Guest Info:
- Follow Heidi on LinkedIn
- Read (book): 'Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State'
In this week's episode, I chat with Chris Zeunstrom, the Founder and CEO of Ruca and Yorba. Ruca is a global design cooperative and founder support network, while Yorba is a reverse CRM that aims to reduce your digital footprint and keep your personal information safe. Through his businesses, Chris focuses on solving common problems and creating innovative products. In our conversation, we talk about building a privacy-first company, the digital minimalist movement, and the future of decentralized identity and storage.

Chris shares his journey as a privacy-focused entrepreneur and his mission to prioritize privacy and decentralization in managing personal data. He also explains the digital minimalist movement and why its teachings reach beyond the industry. Chris touches on Yorba's collaboration with Consumer Reports to implement Permission Slip and create a Data Rights Protocol ecosystem that automates data deletion for consumers. Chris also emphasizes the benefits of decentralized identity and storage solutions in improving personal privacy and security. Finally, he gives you a sneak peek at what's next in store for Yorba.

Topics Covered:
- How Yorba was designed as a privacy-first consumer CRM platform; the problems that Yorba solves; and key product functionality & privacy features
- Why Chris decided to bring a consumer product to market for privacy rather than a B2B product
- Why Chris incorporated Yorba as a 'Public Benefit Corporation' (PBC) and sought B Corp status
- Exploring 'Digital Minimalism'
- How Yorba is working with Consumer Reports to advance the CR Data Rights Protocol, leveraging 'Permission Slip' - an authorized agent for consumers to submit data deletion requests
- The architectural design decisions behind Yorba's personal CRM system
- The privacy benefits of using Matomo Analytics or Fathom Analytics rather than Google Analytics
- The privacy benefits of deploying 'Decentralized Identity' & 'Decentralized Storage' architectures
- Chris' vision for the next stage of the Internet, and the future of Yorba

Guest Info:
- Follow/Connect with Chris on LinkedIn
- Check out Yorba's website

Resources Mentioned:
- Read: TechCrunch's review of Yorba
- Read: 'Digital Minimalism: Choosing a Focused Life in a Noisy World' by Cal Newport
- Subscribe to the Bullet Journal (AKA Bujo) on Digital Minimalism by Ryder Carroll
- Learn about Consumer Reports' Permission Slip Protocol
- Check out Matomo Analytics and Fathom, privacy-first analytics platforms
In this week's episode, I sat down with Jake Ottenwaelder, Principal Privacy Engineer at Integrative Privacy LLC. Throughout our conversation, we discuss Jake's holistic approach to privacy implementation that considers business, engineering, and personal objectives, as well as the role of anonymization, consent management, and DSAR processes for greater privacy.

Jake believes privacy implementation must account for the interconnectedness of privacy technologies and human interactions. He highlights what a successful implementation looks like and the negative consequences when done poorly. We also dive into the challenges of implementing privacy in fast-paced, engineering-driven organizations. We talk about the complexities of anonymizing data (a very high bar) and he offers valuable suggestions and strategies for achieving anonymity while making the necessary resources more accessible. Plus, Jake shares his advice for organizational leaders to see themselves as servant-leaders, leaving a positive legacy in the field of privacy.

Topics Covered:
- What inspired Jake's initial shift from security engineering to privacy engineering, with a focus on privacy implementation
- How Jake's previous role at Axon helped him shift his mindset to privacy
- Jake's holistic approach to implementing privacy
- The qualities of a successful implementation and the consequences of an unsuccessful implementation
- The challenges of implementing privacy in large organizations
- Common blockers to the deployment of anonymization
- Jake's perspective on using differential privacy techniques to achieve anonymity
- Common blockers to implementing consent management capabilities
- The importance of understanding data flow & lineage, and auditing data deletion
- Holistic approaches to implementing a streamlined and compliant DSAR process with minimal business disruption
- Why Jake believes it's important to maintain a servant-leader mindset in privacy

Guest Info:
- Connect with Jake on LinkedIn
- Integrative Privacy LLC
In this week's episode, I am joined by Steve Tout, Practice Lead at Integrated Solutions Group (ISG) and host of The Nonconformist Innovation Podcast, to discuss the intersection of privacy and identity. Steve has 18+ years of experience in global Identity & Access Management (IAM) and is currently completing his MBA at Santa Clara University. Throughout our conversation, Steve shares his journey as a reformed technologist and advocate for 'Nonconformist Innovation' & 'Tipping Point Leadership.'

Steve's approach to identity involves breaking it down into 4 components - 1) philosophy, 2) politics, 3) economics & 4) technology - highlighting their interconnectedness. We also discuss his work with Washington State and its efforts to modernize Consumer Identity & Access Management (IAM). We address concerns around AI, biometrics & mobile driver's licenses. Plus, Steve offers his perspective on tipping point leadership and the challenges organizations face in achieving privacy change at scale.

Topics Covered:
- Steve's origin story: his accidental entry into identity & access management (IAM)
- Steve's perspective as a 'Nonconformist Innovator' and why he launched The Nonconformist Innovation Podcast
- The intersection of privacy & identity
- How to address organizational resistance to change, especially with lean resources
- Benefits gained from 'Tipping Point Leadership'
- 4 common hurdles to tipping point leadership
- How to be a successful tipping point leader within a very bottom-up focused organization
- 'Consumer IAM' & the driving need for modernizing identity in Washington State
- How Steve has approached the challenges related to privacy, ethics & equity
- Differences between the mobile driver's license (mDL) & verifiable credentials (VC) standards & technology
- How states are approaching the implementation of mDL in different ways, and the privacy benefits of 'selective disclosure'
- Steve's advice for privacy technologists to best position themselves and their orgs at the forefront of privacy and security innovation
- Steve's recommended books for learning more about tipping point leadership

Guest Info:
- Connect with Steve on LinkedIn
- Listen to The Nonconformist Innovation Podcast

Resources Mentioned:
- Steve's interview with Tom Kemp
- Tipping point leadership books: 'On Change Management'; 'Organizational Behavior'; 'Ethics in the Age of Disruptive Technologies: An Operational Roadmap'
This week, I chat with Jake Ward, the Co-Founder and CEO of Data Protocol, to discuss how the Data Protocol platform supports developers' accountability for privacy by giving developers the relevant information in the way that they want it. Throughout the episode, we cover the Privacy Engineering course offerings and certification program; how to improve communication with developers; and trends that Jake sees across his customers after 2 years of offering these courses to engineers.

In our conversation, we dive into the topics covered in the Privacy Engineering Certification Program course offering, led by instructor Nishant Bhajaria, and the impact that engineers can make in their organization after completing it. Jake shares why he's so passionate about empowering developers, enabling them to build safer products. We talk about the effects of privacy engineering on large tech companies and how to bridge the gap between developers and the support they need through collaboration and accountability. Plus, Jake reflects on his own career path as the Press Secretary for a U.S. Senator and the experiences that shaped his perspectives and brought him to where he is now.

Topics Covered:
- Jake's career journey and why he landed on supporting software developers
- How Jake built Data Protocol and its community
- What 'shifting privacy left' means to Jake
- Data Protocol's Privacy Engineering Courses, Labs, & Certification Program and what developers will take away
- The difference between Data Protocol's free Privacy Courses and paid Certification
- Feedback from customers & trends observed
- Whether tech companies have seen improvement in engineers' ability to embed privacy into the development of products & services after completing the Privacy Engineering courses and labs
- Other privacy-related courses available on Data Protocol, and privacy courses on the roadmap
- Ways to leverage communications to surmount current challenges
- How organizations can make their developers accountable for privacy, and the importance of aligning responsibility, accountability & business processes
- How Debra would operationalize this accountability into an organization
- How you can use the PrivacyCode.ai privacy tech platform to enable the operationalization of privacy accountability for developers

Resources Mentioned:
- Check out Data Protocol's courses, organized by topic
- Enroll in The Privacy Engineering Certification Program (courses are free)
- Check out S3E2: 'My Top 20 Privacy Engineering Resources for 2024'

Guest Info:
- Connect with Jake on LinkedIn
My guest this week is Jay Averitt, Senior Privacy Product Manager and Privacy Engineer at Microsoft, where he transitioned his career from Technology Attorney to Privacy Counsel, and most recently to Privacy Engineer.

In this episode, we hear from Jay about: his professional path from a degree in Management Information Systems to Privacy Engineer; how Twitter and Microsoft each navigated their privacy setups, and how to determine privacy program maturity; several of his Privacy Engineering community projects; and tips on how to spread privacy awareness and stay active within the industry.

Topics Covered:
- Jay's unique professional journey from Attorney to Privacy Engineer
- Jay's big mindset shift from serving as Privacy Counsel to Privacy Engineer, from a day-to-day and internal perspective
- Why constant learning is essential in the field of privacy engineering, requiring us to keep up with ever-changing laws, standards, and technologies
- Jay's comparison of what it's like to work for Twitter vs. Microsoft when it comes to how each company focuses on privacy and data protection
- Two ways to determine Privacy Program Maturity, according to Jay
- How engineering-focused organizations can unify around a corporate privacy strategy, and how privacy pros can connect with people beyond their siloed teams
- Why building and maintaining relationships is the key for privacy engineers to be seen as enablers instead of blockers
- A detailed look at the 'Technical Privacy Review' process
- A peek into Privacy Quest's gamified privacy engineering platform and the events that Jay & Debra are leading as part of its DPD'24 Festival Village month-long puzzles and events
- Debra's & Jay's experiences at USENIX PEPR'23; why it provided so much value for them both; and why you should consider attending PEPR'24
- Ways to utilize online Slack communities, LinkedIn, and other tools to stay active in the privacy engineering world

Resources Mentioned:
- Review talks from the University of Illinois 'Privacy Everywhere Conference 2024'
- Join the Privacy Quest Village's 'Data Privacy Day '24 Festival' (through Feb 18th)
- Submit a proposal / register for the USENIX PEPR '24 Conference

Guest Info:
- Connect with Jay on LinkedIn
In honor of Data Privacy Week 2024, we're publishing a special episode. Instead of interviewing a guest, Debra shares her 'Top 20 Privacy Engineering Resources' and why. Check out her favorite free privacy engineering courses, books, podcasts, creative learning platforms, privacy threat modeling frameworks, conferences, government resources, and more.

DEBRA'S TOP 20 PRIVACY ENGINEERING RESOURCES (in no particular order):
- Privado's free course: 'Technical Privacy Masterclass'
- OpenMined's free course: 'Our Privacy Opportunity'
- Data Protocol's Privacy Engineering Certification Program
- The Privacy Quest Platform & Games; bonus: 'The Hitchhiker's Guide to Privacy Engineering'
- 'Data Privacy: A Runbook for Engineers' by Nishant Bhajaria
- 'Privacy Engineering: A Data Flow and Ontological Approach' by Ian Oliver
- 'Practical Data Privacy: Enhancing Privacy and Security in Data' by Katharine Jarmul
- 'Strategic Privacy by Design, 2nd Edition' by R. Jason Cronk
- 'The Privacy Engineer's Manifesto: Getting from Policy to Code to QA to Value' by Michelle Finneran Dennedy, Jonathan Fox, and Thomas R. Finneran
- USENIX Conference on Privacy Engineering Practice and Respect (PEPR)
- IEEE's International Workshop on Privacy Engineering (IWPE)
- Institute of Operational Privacy Design (IOPD)
- 'The Shifting Privacy Left Podcast,' produced and hosted by Debra J. Farber and sponsored by Privado
- Monitaur's 'The AI Fundamentalists Podcast,' hosted by Andrew Clark & Sid Mangalik
- Skyflow's 'Partially Redacted Podcast' with Sean Falconer
- The LINDDUN Privacy Threat Model Framework & LINDDUN GO Card Game
- The Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) Framework & PLOT4ai Card Game
- The IAPP Privacy Engineering Section
- The NIST Privacy Engineering Program Collaboration Space
- The EDPS Internet Privacy Engineering Network (IPEN)

Read 'Top 20 Privacy Engineering Resources' on Privado's Blog.
My guest this week is Patricia Thaine, Co-founder and CEO of Private AI, where she leads a team of experts in developing cutting-edge solutions using AI to identify, reduce, and remove Personally Identifiable Information (PII) in 52 languages across text, audio, images, and documents.

In this episode, we hear from Patricia about: her transition from starting a Ph.D. to co-founding an AI company; how Private AI set out to solve fundamental privacy problems to provide control and understanding of data collection; misunderstandings about how best to leverage AI for privacy-preserving machine learning; Private AI's intentions when designing their software, plus newly deployed features; and whether global AI regulations can help with current risks around privacy, rogue AI, and copyright.

Topics Covered:
- Patricia's professional journey from starting a Ph.D. in Acoustic Forensics to co-founding an AI company
- Why Private AI's mission is to solve privacy problems and create a platform that developers can modularly and flexibly integrate anywhere in their software pipelines, including model ingress & egress
- Patricia's advice for companies on avoiding the mishandling of personal information when leveraging AI / machine learning
- Why ever-changing data collection practices and regulations make it hard to find personal information
- Private AI's privacy-enabling architectural approach to finding personal data to prevent it from being used by, or stored in, an AI model
- The approach that Private AI took to design their software
- Private AI's extremely high matching rate, and how they aim for 99%+ accuracy
- Private AI's roadmap & R&D efforts
- Debra & Patricia discuss AI regulation and Patricia's insights from her article 'Thoughts on AI Regulation'
- A foreshadowing of AI's copyright risk problem, and whether regulations or licenses can help
- ChatGPT's popularity, copyright, and the need for embedding privacy, security, and safety by design from the beginning (in the MVP)
- How to reach out to Patricia to connect, collaborate, or access a demo
- How thinking about the fundamentals gets you a good way toward ensuring privacy & security

Resources Mentioned:
- Read Yoshua Bengio's blog post: 'How Rogue AIs May Arise'
- Read Microsoft's Digital Defense Report 2023
- Read Patricia's article, 'Thoughts on AI Regulation'

Guest Info:
- Connect with Patricia on LinkedIn
- Check out Private AI
- Demo PrivateGPT
My guest this week is Kevin Killens, CEO of AHvos, a technology service that provides AI solutions for data-heavy businesses using a proprietary technology called Contextually Responsive Intelligence (CRI), which can act upon a business's private data and produce results without storing that data.

In this episode, we delve into this technology and learn more from Kevin about: his transition from serving in the Navy to founding an AI-focused company; AHvos' architectural approach in support of data minimization and reduced attack surface; AHvos' CRI technology and its ability to provide accurate answers based on private data sets; and how AHvos' Data Crucible product helps AI teams to identify and correct inaccurate dataset labels.

Topics Covered:
- Kevin's origin story, from serving in the Navy to founding AHvos
- How Kevin thinks about privacy and the architectural approach he took when building AHvos
- The challenges of processing personal data, 'security for privacy,' and the applicability of the GDPR when using AHvos
- Kevin explains the benefits of Contextually Responsive Intelligence (CRI), which abstracts out raw data to protect privacy; finds & creates relevant data in response to a query; and identifies & corrects inaccurate dataset labels
- How human-created algorithms and oversight influence AI parameters and model bias, and why transparency is so important
- How customer data is ingested into models via AHvos
- Why it is important to remove bias from Testing Data, not only Training Data, and how AHvos ensures accuracy
- How AHvos' Data Crucible identifies & corrects inaccurate dataset labels
- Kevin's advice for privacy engineers as they tackle AI challenges in their own organizations
- The impact of technical debt on companies, and the importance of building slowly & correctly rather than racing to market with insecure and biased AI models
- The importance of baking security and privacy into your minimum viable product (MVP), even for products that are still in 'beta'

Guest Info:
- Connect with Kevin on LinkedIn
- Check out AHvos
- Check out Trinsic Technologies
My guest this week is Nabanita De, Software Engineer, Serial Entrepreneur, and Founder & CEO at Privacy License, where she's on a mission to transform the AI landscape. In this episode, we discuss Nabanita's transition from Engineering Manager at Remitly to startup founder; what she's learned from her experience at Antler's accelerator program; her first product to market, PrivacyGPT; and her work to educate Privacy Champions.

Topics Covered:
- Nabanita's origin story, from conducting AI research at Microsoft as an intern all the way to founding Privacy License
- How Privacy License supports enterprises entering the global market while protecting privacy as a human right
- A comparison of Nabanita's corporate role as Privacy Engineering Manager at Remitly with her entrepreneurial role as Founder-in-Residence at Antler
- How PrivacyGPT, a Chrome browser plugin, empowers people to use ChatGPT with added privacy protections, without compromising data privacy standards, by redacting sensitive and personal data before sending it to ChatGPT
- NLP techniques that Nabanita leveraged to build PrivacyGPT, including 'regular expressions,' 'part-of-speech tagging,' & 'named entity recognition' (a toy redaction sketch follows these notes)
- How PrivacyGPT can be used to protect privacy across nearly all languages, even where a user has no Internet connection
- How to use Product Hunt to gain visibility for a newly launched product, and whether it's easier to raise a financial round in the AI space right now
- Nabanita's advice for software engineers who might found a privacy or AI startup in the near future
- Why Nabanita created a Privacy Champions Program, and how it provides privacy and non-privacy folks alike with recommendations to prioritize privacy within their organizations
- How to sign up for PrivacyGPT's paid pilot app, connect with Nabanita to collaborate, or subscribe to 'Nabanita's Moonshots Newsletter' on LinkedIn

Resources Mentioned:
- Check out Privacy License
- Learn more about PrivacyGPT
- Install the PrivacyGPT Chrome Extension
- Learn about Data Privacy Week 2024

Guest Info:
- Connect with Nabanita on LinkedIn
- Subscribe to Nabanita's Moonshots Newsletter
- Learn more about The Nabanita De Foundation
- Learn more about Covid Help for India
- Learn more about Project FiB
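To illustrate the general redaction idea, here is a minimal sketch of the regular-expression technique only (the patterns are my own hypothetical examples, not PrivacyGPT's actual rules; the plugin additionally layers part-of-speech tagging and named entity recognition on top of pattern matching):

```python
import re

# Hypothetical patterns for structured identifiers; real products use
# far more robust rule sets plus NER for names, places, etc.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?:\+?\d{1,3}[ .-]?)?(?:\(\d{3}\)|\d{3})[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace each match with a placeholder tag before the text ever
    # leaves the user's machine for a third-party LLM.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com or call (555) 123-4567 about case 78."
print(redact(prompt))  # -> "Email [EMAIL] or call [PHONE] about case 78."
```

Regexes catch structured identifiers like emails and phone numbers; named entity recognition is what lets a tool also flag unstructured PII such as personal names, which simple patterns cannot reliably match.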
My guests this week are Yusra Ahmad, CEO of Acuity Data, and Luke Beckley, Data Protection Officer and Privacy Governance Manager at Correla, who work with The RED (Real Estate Data) Foundation, a sector-wide alliance that enables the real estate sector to benefit from an increased use of data, while avoiding some of the risks that this presents, and to better serve society.

We discuss the current drivers for change within the real estate industry and the complexities of an industry that utilizes incredible amounts of data. You'll learn the types of data protection, privacy, and ethical challenges The RED Foundation seeks to solve, especially now with the advent of new technologies. Yusra and Luke discuss some of the ethical questions facing the real estate sector as it considers leveraging new technology. They come to the conversation from knowledgeable perspectives as The RED Foundation's Chair of the Data Ethics Steering Group and Chair of the Engagement and Awareness Group, respectively.

Topics Covered:
- Introducing Luke Beckley (DPO, Privacy & Governance Manager at Correla) and Yusra Ahmad (CEO of Acuity Data), who are here to talk about their data ethics work at The RED Foundation
- How the scope, sophistication, & connectivity of data is increasing exponentially in the real estate industry
- Why ESG, workplace experience, & smart city development are drivers of data collection, and the need for data ethics reform within the real estate industry
- The types of personal data that real estate companies collect & use across stakeholders: owners, operators, occupiers, employees, residents, etc.
- Current approaches that retailers take to protect location data, when collected; and why it's important to simplify language, increase transparency, & make consumers aware of tracking in in-store WiFi privacy notices
- An overview of The RED Foundation and its mission: to ensure the real estate sector benefits from an increased use of data, avoids some of the risks that this presents, and is better placed to serve society
- Some ethical questions on which the real estate sector still needs to align, along with examples
- Why there's a need to educate the real estate industry on privacy-enhancing tech
- The need for privacy engineers and PETs in real estate, and why this will build trust with the different stakeholders
- Guidance for privacy engineers who want to work in the real estate sector
- Ways to collaborate with The RED Foundation to standardize data ethics practices across the real estate industry
- Why there's great opportunity to embed privacy into real estate, and why its current challenges are really obstacles rather than blockers

Resources Mentioned:
- Check out The RED Foundation

Guest Info:
- Follow Yusra on LinkedIn
- Follow Luke on LinkedIn
This week, I welcome Jared Coseglia, Co-founder and CEO at TRU Staffing Partners, a contract staffing & executive placement search firm that represents talent across 3 core industry verticals: data privacy, eDiscovery, & cybersecurity. We discuss the current and future state of the contracting market for privacy engineering roles and the market drivers that affect hiring. You'll learn about hiring trends and the allure of 'part-time impact,' 'part-time perpetual,' and 'secondee' contract work. Jared illustrates the challenges that hiring managers face with a 'do-it-yourself' staffing process, and he shares his predictions about the job market for privacy engineers over the next 2 years. Jared comes to the conversation with a lot of data to support his predictions and sage advice for privacy engineering hiring managers and job seekers.

Topics Covered:
- How the privacy contracting market compares and contrasts with the full-time hiring market, and why we currently see a steep rise in privacy contracting
- Why full-time hiring for privacy engineers won't likely rebound until Q4 2024, and how hiring for privacy typically follows a 2-year cycle
- Why companies & employees benefit from fractional contracts, and the differences between contracting types: 'Part-Time - Impact,' 'Part-Time - Perpetual,' and 'Secondee'
- How hiring managers typically find privacy engineering candidates
- Why it's far more difficult to hire privacy engineers for contracts, and how a staffing partner like TRU can supercharge your hiring efforts and help you avoid the pitfalls of a 'do-it-yourself' approach
- How contract work benefits privacy engineers financially, while also providing them with project diversity
- How salaries are calculated for privacy engineers, and the driving forces behind pay discrepancies across privacy roles
- Jared's advice to 2024 job seekers, based on his market predictions, and why privacy contracting increases 'speed to hire' compared to hiring FTEs
- Why privacy engineers can earn more money by changing jobs in 2024 than they could by seeking raises in their current companies, and discussion of 2024 salary ranges across industry segments
- Jared's advice on how privacy engineers can best position themselves to contract hiring managers in 2024
- Recommended resources for privacy engineering employers and job seekers

Resources Mentioned:
- Read: 'State of the Privacy Job Market Q3 2023'
- Subscribe to TRU Insights

Guest Info:
- Connect with Jared on LinkedIn
- Learn more about TRU Staffing Partners
- Engineering managers: check out TRU Staffing's data privacy staffing solutions
- PE candidates: apply to open privacy positions
This week's guests are Mathew Mytka and Alja Isaković, Co-Founders of Tethix, a company that builds products that embed ethics into the fabric of your organization. We discuss Mat and Alja's core mission to bring ethical tech to the world, and Tethix's services that work with your Agile development processes. You'll learn about Tethix's solution to address 'The Intent to Action Gap,' and what Elemental Ethics can provide organizations beyond other ethics frameworks. We discuss ways to become a proactive Responsible Firekeeper, rather than remaining a reactive Firefighter, and how ETHOS, Tethix's suite of apps, can help organizations embody and embed ethics into everyday practice.

Topics Covered:
- What inspired Mat & Alja to co-found Tethix, and the company's core mission
- What the 'Intent to Action Gap' is and how Tethix addresses it
- An overview of Tethix's Elemental Ethics framework, and how it empowers product development teams to close the 'Intent to Action Gap' and move orgs from a state of 'Agile Firefighting' to 'Responsible Firekeeping'
- Why Agile is an insufficient process for embedding ethics into software and product development, and how you can turn to Elemental Ethics and Responsible Firekeeping to embed 'Ethics-by-Design' into your Agile workflows
- The definition of 'Responsible Firekeeping' and its benefits, and how Responsible Firekeeping transitions Agile teams from a reactive posture to a proactive one
- Why you should choose Elemental Ethics over conventional ethics frameworks
- Tethix's suite of apps called ETHOS (the Ethical Tension and Health Operating System), which helps teams embed ethics into their collaboration tech stack (e.g., Jira, Slack, Figma, Zoom, etc.)
- How you can become a Responsible Firekeeper
- The level of effort required to implement Elemental Ethics & Responsible Firekeeping into product development, based on org size and level of maturity
- Alja's contribution to ResponsibleTech.Work, an open-source Responsible Product Development Framework; core elements of the Framework; and why we need it
- Where to learn more about Responsible Firekeeping

Resources Mentioned:
- Read: 'Day in the Life of a Responsible Firekeeper'
- Review the ResponsibleTech.Work Framework
- Subscribe to the Pathfinders Newmoonsletter

Guest Info:
- Connect with Mat on LinkedIn
- Connect with Alja on LinkedIn
- Check out Tethix's website
This week's guest is Isabel Barberá, Co-founder, AI Advisor, and Privacy Engineer at Rhite, a consulting firm specializing in responsible and trustworthy AI and privacy engineering, and creator of The Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) Framework and card game. In our conversation, we discuss: Isabel's work with privacy-by-design, privacy engineering, privacy threat modeling, and building trustworthy AI; and Rhite's forthcoming open-source self-assessment framework for AI maturity, SARAI®. As we wrap up the episode, Isabel shares details about PLOT4ai, her AI threat modeling framework and card game created based on a library of threats for artificial intelligence.

Topics Covered:
- How Isabel became interested in privacy engineering, data protection, privacy by design, threat modeling, and trustworthy AI
- How companies are thinking (or not) about incorporating privacy-by-design strategies & tactics and privacy engineering approaches within their orgs today
- What steps can be taken so companies start investing in privacy engineering approaches, and whether AI has become a driver for such approaches
- Background on Isabel's company, Rhite, and its mission to build responsible solutions for society and its individuals using a technical mindset
- What 'Responsible & Trustworthy AI' means to Isabel
- The 5 core values that make up the acronym R-H-I-T-E, and why they're important for designing and building products & services
- Isabel's advice for organizations as they approach AI risk assessments, analysis, & remediation
- The steps orgs can take in order to build responsible AI products & services
- What Isabel hopes to accomplish through Rhite's new framework, SARAI® (for AI maturity): an open-source AI self-assessment tool and framework, and an extension of the Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) Framework (i.e., a library of AI risks)
- What motivated Isabel to focus on threat modeling for privacy
- How PLOT4ai builds on LINDDUN (which focuses on software development) and extends threat modeling to the AI lifecycle stages: Design, Input, Modeling, & Output
- How Isabel's experience with the LINDDUN Go card game inspired her to develop a PLOT4ai card game to make the framework more accessible to teams
- Isabel's call for collaborators to contribute to the PLOT4ai open-source database of AI threats as the community grows

Resources Mentioned:
- Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai)
- PLOT4ai's GitHub threat repository
- 'Threat Modeling Generative AI Systems with PLOT4ai'
- Self-Assessment for Responsible AI (SARAI®)
- LINDDUN Privacy Threat Model Framework
- 'S2E19: Privacy Threat Modeling - Mitigating Privacy Threats in Software with Kim Wuyts (KU Leuven)'
- 'Data Privacy: A Runbook for Engineers'

Guest Info:
- Isabel's LinkedIn profile
- Rhite's website
This week, I sat down with Vaibhav Antil ('Vee'), Co-founder & CEO at Privado, a privacy tech platform that leverages privacy code scanning & data mapping to bridge the privacy engineering gap. Vee shares his personal journey into privacy, where he started out in Product Management and saw the need for privacy automation in DevOps. We discuss obstacles created by the rapid pace of engineering teams and the lack of a shared vocabulary with Legal / GRC. You'll learn how code scanning enables privacy teams to move swiftly and avoid blocking engineering. We then discuss the future of privacy engineering, its growth trends, and the need for cross-team collaboration. We highlight the importance of making privacy-by-design programmatic and discuss ways to scale up privacy reviews without stifling product innovation.

Topics Covered:
How Vee moved from Product Manager to Co-Founding Privado, and why he focused on bringing Privacy Code Scanning to market
What it means to "Bridge the Privacy Engineering Gap" and 3 reasons why Vee believes the gap exists
How engineers can provide visibility into personal data collected and used by applications via Privacy Code Scans (a toy sketch follows below)
Why engineering teams should 'shift privacy left' into DevOps
How a Privacy Code Scanner differs from traditional static code analysis tools in security
How Privado's Privacy Code Scanning & Data Mapping capabilities (for the SDLC) differ from personal data discovery, correlation, & data mapping tools (for the data lifecycle)
How Privacy Code Scanning helps engineering teams comply with new laws like Washington State's 'My Health My Data Act'
A breakdown of Privado's free "Technical Privacy Masterclass"
Exciting features on Privado's roadmap, which support its vision to be the platform for collaboration between privacy operations & engineering teams
Privacy engineering trends and Vee's predictions for the next two years

Privado Resources Mentioned:
Free Course: "Technical Privacy Masterclass" (led by Nishant Bhajaria)
Guide: Introduction to Privacy Code Scanning
Guide: Code Scanning Approach to Data Mapping
Slack: Privado's Privacy Engineering Community
Open Source Tool: Play Store Data Safety Report Builder

Guest Info:
Connect with Vee on LinkedIn
Check out Privado's website
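For readers who want a concrete picture of the idea, here is a minimal, hypothetical sketch of pattern-based detection of personal data elements in source code. Real privacy code scanners (Privado's included) parse the code and follow data flows rather than matching identifier names; every rule and function name below is invented for illustration.

```python
import re

# Hypothetical detection rules: map identifier patterns in source code
# to the personal-data element they suggest.
RULES = {
    "email address": re.compile(r"\b(email|e_mail)\b", re.I),
    "phone number": re.compile(r"\b(phone|msisdn)\b", re.I),
    "precise location": re.compile(r"\b(latitude|longitude|geo)\b", re.I),
}

def scan(source: str):
    """Return the personal-data elements a snippet appears to handle."""
    return sorted({label for label, rx in RULES.items() if rx.search(source)})

print(scan("def register(email, phone): save(email, phone)"))
# ['email address', 'phone number']
```

Even this crude version shows why scanning the code (rather than the stored data) surfaces personal data use before anything ships, which is the 'shift left' point of the episode.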
This week's guest is Rebecca Balebako, Founder and Principal Consultant at Balebako Privacy Engineer, where she enables data-driven organizations to build the privacy features that their customers love. In our conversation, we discuss all things privacy red teaming, including: how to disambiguate adversarial privacy tests from other software development tests; the importance of privacy-by-infrastructure; why privacy maturity influences the benefits received from investing in privacy red teaming; and why any database that identifies vulnerable populations should consider adversarial privacy as a form of protection. We also discuss the 23andMe security incident that took place in October 2023 and affected over 1 million Ashkenazi Jews (a genealogical ethnic group). Rebecca brings to light how privacy red teaming and privacy threat modeling may have prevented this incident. As we wrap up the episode, Rebecca gives her advice to Engineering Managers looking to set up a Privacy Red Team and shares key resources.

Topics Covered:
How Rebecca switched from software development to a focus on privacy & adversarial privacy testing
What motivated Debra to shift left from her legal training to privacy engineering
What 'adversarial privacy tests' are; why they're important; and how they differ from other software development tests
Defining 'Privacy Red Teams' (a type of adversarial privacy test) & what differentiates them from 'Security Red Teams'
Why Privacy Red Teams are best for orgs with mature privacy programs
The 3 steps for conducting a Privacy Red Team attack
How a Red Team differs from other privacy tests, like conducting a vulnerability analysis or managing a bug bounty program
How 23andMe's recent data leak, affecting 1 million Ashkenazi Jews, may have been avoided via Privacy Red Team testing
How BigTech companies are staffing up their Privacy Red Teams
Frugal ways for small and mid-sized organizations to approach adversarial privacy testing
The future of Privacy Red Teaming and whether we should upskill security engineers or train privacy engineers on adversarial testing
Advice for Engineering Managers who seek to set up a Privacy Red Team for the first time
Rebecca's Red Teaming resources for the audience

Resources Mentioned:
Listen to: "S1E7: Privacy Engineers: The Next Generation" with Lorrie Cranor (CMU)
Review Rebecca's Red Teaming Resources

Guest Info:
Connect with Rebecca on LinkedIn
Visit Balebako Privacy Engineer's website
This week's guest is Steve Hickman, the founder of Epistimis, a privacy-first process design tooling startup that evaluates rules and enables fixing privacy issues before they ever take effect. In our conversation, we discuss: why the biggest impediment to protecting and respecting privacy within organizations is the lack of a common language; why we need a common Privacy Ontology in addition to a Privacy Taxonomy; Epistimis' ontological approach and how it leverages semantic modeling for privacy rules checking; and examples of how Epistimis' Privacy Design Process tooling complements privacy tech solutions on the market rather than competing with them.

Topics Covered:
How Steve's deep engineering background in aerospace, retail, telecom, and then a short stint at Meta, led him to found Epistimis
Why it's been hard for companies to get privacy right at scale
How Epistimis leverages 'semantic modeling' for rule checking, and how this helps to scale privacy as part of an ontological approach (a toy rule check follows below)
The definition of a Privacy Ontology and Steve's belief that all should use one for common understanding at all levels of the business
Advice for designers, architects, and developers when it comes to creating and implementing privacy ontologies, taxonomies & semantic models
How to make a Privacy Ontology usable
How Epistimis' process design tooling works with discovery and mapping platforms like BigID & Secuvy.ai
How Epistimis' process design tooling works along with a platform like Privado.ai, which scans a company's product code, surfaces privacy risks in the code, and detects processing activities for creating dynamic data maps
How Epistimis' process design tooling works with PrivacyCode, which has a library of privacy objects, agile privacy implementations (e.g., success criteria & sample code), and delivers metrics on how the privacy engineering process is going
Steve's call for collaborators who are interested in POCs and/or who can provide feedback on Epistimis' PbD process tooling
What's next on the Epistimis roadmap, including wargaming

Resources Mentioned:
Read Dan Solove's article, "Data is What Data Does: Regulating Based on Harm and Risk Instead of Sensitive Data"

Guest Info:
Connect with Steve on LinkedIn
Reach out to Steve via Email
Learn more about Epistimis
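To ground the idea of semantic rule checking, here is a toy ontology-based check: data items inherit categories through is-a relationships, and a processing rule passes only if the item or one of its ancestor categories is allowed for the stated purpose. The terms and rules below are invented; Epistimis' actual modeling is far richer.

```python
# A toy privacy ontology: is-a links from specific data items up to
# broader categories, plus purpose rules stated at the category level.
IS_A = {
    "email": "contact_data",
    "phone": "contact_data",
    "gps_trace": "location_data",
    "contact_data": "personal_data",
    "location_data": "personal_data",
}
ALLOWED = {("contact_data", "account_management"), ("location_data", "navigation")}

def ancestors(term):
    """Walk up the is-a hierarchy from a term."""
    while term in IS_A:
        term = IS_A[term]
        yield term

def check(data_item, purpose):
    """Semantic rule check: the item or any ancestor category must be
    allowed for this purpose."""
    cats = {data_item, *ancestors(data_item)}
    return any((c, purpose) in ALLOWED for c in cats)

print(check("email", "account_management"))  # True
print(check("gps_trace", "advertising"))     # False
```

The point of the ontology is exactly this inheritance: a rule written once about 'contact_data' automatically governs email, phone, and anything else modeled beneath it, which is what lets privacy rules scale.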
This week's guest is Shashank Tiwari, a seasoned engineer and product leader who started with algorithmic systems on Wall Street before becoming Co-founder & CEO of Uno.ai, a pathbreaking autonomous security company. His Silicon Valley startup experience includes previous stints at Nutanix, Elementum, Medallia, & StackRox. In this conversation, we discuss ML/AI, large language models (LLMs), temporal knowledge graphs, causal discovery inference models, and the Generative AI design & architectural choices that affect privacy.

Topics Covered:
Shashank's origin story: how he became interested in security, privacy, & AI while working on Wall Street, and what motivated him to found Uno
The benefits of using 'temporal knowledge graphs,' and how knowledge graphs are used with LLMs to create a 'causal discovery inference model' to prevent privacy problems
The explosive growth of Generative AI, its impact on the privacy and confidentiality of sensitive and personal data, & why a rushed approach could result in mistakes and societal harm
Architectural privacy and security considerations for: 1) leveraging Generative AI, and which mechanisms to avoid at all costs; 2) verifying, assuring, & testing against 'trustful data' rather than 'derived data;' and 3) thwarting common Generative AI attack vectors
Shashank's predictions for Enterprise adoption of Generative AI over the next several years
Shashank's thoughts on how proposed and future AI-related legislation may affect the Generative AI market overall and Enterprise adoption more specifically
Shashank's thoughts on the development of AI standards across tech stacks

Resources Mentioned:
Check out episode S2E29: Synthetic Data in AI: Challenges, Techniques & Use Cases with Andrew Clark and Sid Mangalik (Monitaur.ai)

Guest Info:
Connect with Shashank on LinkedIn
Learn more about Uno.ai
This week I welcome Dr. Andrew Clark, Co-founder & CTO of Monitaur and a trusted domain expert on machine learning, auditing, and assurance; and Sid Mangalik, Research Scientist at Monitaur and PhD student at Stony Brook University. I discovered Andrew and Sid's new podcast show, The AI Fundamentalists Podcast. I very much enjoyed their lively episode on Synthetic Data & AI, and am delighted to introduce them to my audience of privacy engineers. In our conversation, we explore why data scientists must stress test their model validations, especially for consequential systems that affect human safety and reliability. In fact, we have much to learn from the aerospace engineering field, which has been using ML/AI since the 1960s. We discuss the best and worst use cases for synthetic data; problems with LLM-generated synthetic data; what can go wrong when your AI models lack diversity; how to build fair, performant systems; & synthetic data techniques for use with AI.

Topics Covered:
What inspired Andrew to found Monitaur and focus on AI governance
Sid's career path and his current PhD focus on NLP
What motivated Andrew & Sid to launch their podcast, The AI Fundamentalists
Defining 'synthetic data' & why academia takes a more rigorous approach to synthetic data than industry
Whether the output of LLMs is synthetic data & the problem with training LLM base models on this data
The best and worst 'synthetic data' use cases for ML/AI
Why the 'quality' of input data is so important when training AI models
Thoughts on OpenAI's announcement that it will use LLM-generated synthetic data; and critique of OpenAI's approach, the AI hype machine, and the problems with 'growth hacking' corner-cutting
The importance of diversity when training AI models; using 'multi-objective modeling' for building fair & performant systems
Andrew unpacks the 'fairness through unawareness fallacy'
How 'randomized data' differs from 'synthetic data'
4 techniques for using synthetic data with ML/AI: 1) the Monte Carlo method; 2) Latin hypercube sampling; 3) Gaussian copulas; & 4) random walking (one of these is sketched below)
What excites Andrew & Sid about synthetic data and how it will be used with AI in the future

Resources Mentioned:
Check out Podchaser
Listen to The AI Fundamentalists Podcast
Check out Monitaur

Guest Info:
Follow Andrew on LinkedIn
Follow Sid on LinkedIn
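As a flavor of one of the four techniques mentioned above, here is a minimal Latin hypercube sampler (assuming NumPy). It cuts each dimension into equal bins and draws exactly one point per bin, which is why LHS covers a design space more evenly than plain random sampling; the demo values are arbitrary.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Latin hypercube sample on the unit cube: each dimension is cut
    into n_samples equal bins and exactly one point lands in each bin."""
    rng = np.random.default_rng(seed)
    # One uniformly placed point inside each of the n_samples bins, per dimension
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    # Shuffle bin order independently per dimension to decorrelate dimensions
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])
    return u

pts = latin_hypercube(10, 2, seed=0)
print(sorted((pts[:, 0] * 10).astype(int)))  # [0..9]: exactly one point per decile
```

The even, stratified coverage is what makes LHS attractive for the kind of rigorous model stress testing discussed in the episode.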
This week, I welcome Jutta Williams, Head of Privacy & Assurance at Reddit, Co-founder of Humane Intelligence and BiasBounty.ai, Privacy & Responsible AI Evangelist, and Startup Board Advisor. With a long history of accomplishments in privacy engineering, Jutta has a unique perspective on the growing field. In our conversation, we discuss her transition from security engineering to privacy engineering; how privacy cultures differ across the social media companies where she's worked: Google, Facebook, Twitter, and now Reddit; the overlap of privacy engineering & responsible AI; how her non-profit, Humane Intelligence, supports AI model owners; her experience launching the largest Generative AI red teaming challenge ever at DEF CON; and how a curious, knowledge-enhancing approach to privacy will create engagement and allow for fun.

Topics Covered:
How Jutta's unique transition from security engineering landed her in the privacy engineering space
A comparison of privacy cultures across Google, Facebook, Twitter (now 'X'), and Reddit, based on her privacy engineering experiences there
Two open Privacy Engineering roles at Reddit, and Jutta's advice for those wanting to transition from security engineering to privacy engineering
Whether Privacy Pros will be responsible for owning new regulatory obligations under the EU's Digital Services Act (DSA) & the Digital Markets Act (DMA); and the role of the Privacy Engineer when overlapping with Responsible AI issues
Humane Intelligence, Jutta's 'side quest,' which she co-leads with Dr. Rumman Chowdhury, and which supports AI model owners seeking 'Product Readiness Reviews' at scale
When, during the product development life cycle, companies should perform 'AI Readiness Reviews'
How to de-bias at scale, or whether attempting to do so is 'chasing windmills'
Who should be hunting for biases in an AI Bias Bounty challenge
DEF CON 31's AI Village 'Generative AI Red Teaming Challenge,' a bias bounty that she co-designed; lessons learned; and what Jutta & team have planned for DEF CON 32 next year
Why it's so important for people to 'love their side quests'

Resources Mentioned:
DEF CON Generative Red Team Challenge
Humane Intelligence
Bias Buccaneers Challenge

Guest Info:
Connect with Jutta on LinkedIn
Today, I welcome Victor Morel, PhD, and Simone Fischer-Hübner, PhD, to discuss their recent paper, "Automating Privacy Decisions – where to draw the line?", and their proposed classification scheme. We dive into the complexity of automating privacy decisions and emphasize the importance of maintaining both compliance and usability (e.g., via user control and informed consent). Simone is a Professor of Computer Science at Karlstad University with over 30 years of privacy & security research experience. Victor is a post-doc researcher at Chalmers University's Security & Privacy Lab, focusing on privacy, data protection, and technology ethics. Together, they share their privacy decision-making classification scheme and research across two dimensions: (1) the type of privacy decision: privacy permissions, privacy preference settings, consent to processing, or rejection of processing; and (2) the level of decision automation: manual, semi-automated, or fully-automated. Each type of privacy decision plays a critical role in users' ability to control the disclosure and processing of their personal data. They emphasize the significance of tailored recommendations to help users make informed decisions and discuss the potential of on-the-fly privacy decisions. We wrap up with organizations' approaches to achieving usable and transparent privacy across various technologies, including web, mobile, and IoT.

Topics Covered:
Why Simone & Victor focused their research on automating privacy decisions
How GDPR & ePrivacy have shaped requirements for privacy automation tools
The 'types' of privacy decisions & associated 'levels of automation': privacy permissions, privacy preference settings, consent to processing, & rejection of processing
The 'levels of automation' for each privacy decision type: manual, semi-automated & fully-automated; and the pros / cons of automating each privacy decision type (sketched in code below)
Preferences & concerns regarding IoT Trigger Action Platforms
Why the only privacy decisions that you should 'fully automate' are rejections of processing: i.e., revoking consent or opting out
Best practices for achieving informed control
Automation challenges across web, mobile, & IoT
Mozilla's automated cookie banner management & why it's problematic (i.e., unlawful)

Resources Mentioned:
"Automating Privacy Decisions – where to draw the line?"
CyberSecIT at Chalmers University of Technology
"Tapping into Privacy: A Study of User Preferences and Concerns on Trigger-Action Platforms"
Consent O Matic browser extension
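The paper's two-dimensional scheme lends itself to a tiny data model. The sketch below encodes the four decision types and three automation levels from the episode, plus the takeaway that only rejections of processing are good candidates for full automation; the class and method names are mine, not the authors'.

```python
from dataclasses import dataclass
from enum import Enum

class DecisionType(Enum):
    PERMISSION = "privacy permission"
    PREFERENCE = "privacy preference setting"
    CONSENT = "consent to processing"
    REJECTION = "rejection of processing"

class Automation(Enum):
    MANUAL = "manual"
    SEMI = "semi-automated"
    FULL = "fully-automated"

@dataclass
class PrivacyDecision:
    kind: DecisionType
    level: Automation

    def is_advisable(self) -> bool:
        # Episode takeaway: full automation is only advisable for
        # rejections of processing (revoking consent / opting out);
        # everything else should keep the user in the loop.
        return self.level is not Automation.FULL or self.kind is DecisionType.REJECTION

print(PrivacyDecision(DecisionType.CONSENT, Automation.FULL).is_advisable())    # False
print(PrivacyDecision(DecisionType.REJECTION, Automation.FULL).is_advisable())  # True
```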
This week, I welcome philosopher, author, & AI ethics expert Reid Blackman, Ph.D., to discuss Ethical AI. Reid authored the book "Ethical Machines" and is the CEO & Founder of Virtue Consultants, a digital ethical risk consultancy. His extensive background in philosophy & ethics, coupled with his engagement with orgs like AWS, U.S. Bank, the FBI, & NASA, offers a unique perspective on the challenges & misconceptions surrounding AI ethics. In our conversation, we discuss 'passive privacy' & 'active privacy' and the need for individuals to exercise control over their data. Reid explains how the quest for data to train ML/AI can lead to privacy violations, particularly at BigTech companies. We touch on many concepts in the AI space, including: automated decision making vs. keeping 'humans in the loop;' combating AI ethics fatigue; and advice for technical staff involved in AI product development. Reid stresses the importance of protecting privacy, educating users, & deciding whether to utilize external APIs or on-prem servers. We end by highlighting his HBR article, "Generative AI-xiety," and discuss the 4 primary areas of ethical concern for LLMs: the hallucination problem; the deliberation problem; the sleazy salesperson problem; & the problem of shared responsibility.

Topics Covered:
What motivated Reid to write his book, "Ethical Machines"
The key differences between 'active privacy' & 'passive privacy'
Why engineering incentives to collect more data to train AI models, especially in big tech, pose challenges to data minimization
The importance of aligning privacy agendas with business priorities
Why what companies infer about people can be a privacy violation; what engineers should know about 'input privacy' when training AI models; and how that affects the output of inferred data
Automated decision making: when it's necessary to have a 'human in the loop'
Approaches for mitigating 'AI ethics fatigue'
The need to back up a company's stated 'values' with actions; and why there should always be 3 to 7 guardrails put in place for each stated value
The differences between 'Responsible AI' & 'Ethical AI,' and why companies seem reluctant to talk about ethics
Reid's article, "Generative AI-xiety," & the 4 main risks related to generative AI
Reid's advice for technical staff building products & services that leverage LLMs

Resources Mentioned:
Read the book, "Ethical Machines"
Reid's podcast, Ethical Machines

Guest Info:
Follow Reid on LinkedIn
This week, we're chatting with Engin Bozdag, Senior Staff Privacy Architect at Uber, and Stefano Bennati, Privacy Engineer at HERE Technologies. Today, we explore their recent IWPE'23 talk, "Can Location Data Truly be Anonymized: a risk-based approach to location data anonymization," and discuss the technical & business challenges of obtaining anonymization. We also discuss the role of Privacy Engineers, how to choose a career path, and the importance of embedding privacy into product development & DevPrivOps; collaborating with cross-functional teams; & staying up-to-date with emerging trends.

Topics Covered:
Common roadblocks privacy engineers face with anonymization techniques & how to overcome them (a toy coarsening example follows below)
How to get budgets for anonymization tools; challenges with scaling & regulatory requirements & how to overcome them
What it means to be a 'Privacy Engineer' today; good career paths; and necessary skill sets
How third-party data deletion tools can be integrated into a company's distributed architecture
What Privacy Engineers should understand about vendor privacy requirements for LLMs before bringing them into their orgs
The need to monitor code changes in data or source code via code scanning; how HERE Technologies uses Privado to monitor the compliance of its products & data lineage; and how Privado detects new assets added to your inventory & any new API endpoints
Advice on how to deal with conflicts between engineering, legal & operations teams, and on how to get privacy issues fixed within an org
Strategies for addressing privacy issues within orgs, including collaboration, transparency, and continuous refinement

Resources Mentioned:
IAPP Defining Privacy Engineering Infographic
EU AI Act
Ethics Guidelines for Trustworthy AI
Privacy Engineering Superheroes
FTC Investigates OpenAI over Data Leak and ChatGPT's Inaccuracy

Guest Info:
Follow Engin
Follow Stefano
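To make the anonymization tension concrete, here is a toy sketch of one common (and often insufficient) technique: coarsening coordinates into grid cells and keeping only cells shared by at least k users. Rounding to two decimal places gives roughly 1 km cells; real risk-based approaches like the one in the talk reason about trajectories and re-identification risk, not just cell counts, and the coordinates below are arbitrary.

```python
from collections import Counter

def coarsen(lat, lon, places=2):
    """Round coordinates into ~1 km grid cells (2 decimal places)."""
    return (round(lat, places), round(lon, places))

def k_anonymous_cells(points, k, places=2):
    """Keep only cells that at least k of the input points fall into."""
    cells = Counter(coarsen(lat, lon, places) for lat, lon in points)
    return {cell for cell, count in cells.items() if count >= k}

pings = [(52.5200, 13.4049), (52.5201, 13.4048), (52.5203, 13.4047),
         (48.8566, 2.3522)]  # three users near one spot, one user elsewhere
print(k_anonymous_cells(pings, k=3))  # only the shared cell survives
```

A single coarse cell may still identify someone once you chain cells into a trajectory over time, which is exactly why the guests argue for risk-based analysis over one-shot transformations.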
This week's guest is Elias Grünewald, Privacy Engineering Research Associate at Technical University of Berlin, where he focuses on cloud-native privacy engineering, transparency, accountability, distributed systems, & privacy regulation. In this conversation, we discuss the challenge of designing privacy into modern cloud architectures; how shifting left into DevPrivOps can embed privacy within agile development methods; how to blend privacy engineering & cloud engineering; the Hawk DevOps Framework; and what the Shared Responsibilities Model for cloud lacks.

Topics Covered:
Elias' courses at TU Berlin: "Programming Practical Privacy: Web-based Application Engineering & Data Management" & "Advanced Distributed Systems Prototyping: Cloud-native Privacy Engineering"
Elias' 2022 paper, "Cloud Native Privacy Engineering through DevPrivOps": his approach, findings, and framework
The Shared Responsibilities Model for cloud and how to improve it to account for privacy goals
Defining DevPrivOps & how it works with agile development
How DevPrivOps can enable formal privacy-by-design (PbD) & default strategies
Elias' June 2023 paper, "Hawk: DevOps-Driven Transparency & Accountability in Cloud Native Systems," which helps data controllers align cloud-native DevOps with regulatory requirements for transparency & accountability (an illustrative transparency record follows below)
Engineering challenges when trying to determine the details of personal data processing when responding to access & deletion requests
A deep dive into the Hawk 3-phase approach for implementing privacy into each DevOps phase: Hawk Release, Hawk Operate, & Hawk Monitor
How the open-source project TOUCAN is documenting conceptual best practices for corresponding phases in the SDLC, and a call for collaboration
How privacy engineers can convince their management to adopt a DevPrivOps approach

Read Elias' papers, talks, & projects:
Cloud Native Privacy Engineering through DevPrivOps
Hawk: DevOps-driven Transparency and Accountability in Cloud Native Systems
CPDP Talk: Privacy Engineering for Transparency & Accountability
TILT: A GDPR-Aligned Transparency Information Language & Toolkit for Practical Privacy Engineering
TOUCAN

Guest Info:
Connect with Elias on LinkedIn
Contact Elias at TU Berlin
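As a rough illustration of the transparency artifacts this line of work automates, here is a hypothetical machine-readable record for a single processing activity, loosely inspired by what TILT-style toolkits capture. The actual TILT schema is richer and standardized, so treat every field below as an invented placeholder.

```python
import json

# Hypothetical transparency record for one processing activity; emitting
# records like this from each DevOps phase (release/operate/monitor) is
# the kind of automation Hawk-style approaches aim to support.
record = {
    "service": "example-checkout",
    "data_category": "contact_data",
    "purpose": "order fulfillment",
    "legal_basis": "GDPR Art. 6(1)(b)",
    "retention": "P2Y",                   # ISO 8601 duration: two years
    "recipients": ["payment-processor"],
}
print(json.dumps(record, indent=2))
```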
This week, my guest is George Ratcliffe, Head of the Privacy GRC & Cryptography Executive Search Practice at recruitment firm Stott & May. In this conversation, we discuss the current market climate & hiring trends for technical privacy roles; the need for higher technical capabilities across the industry; pay ranges within different technical privacy roles; and George's tips and tools for applicants interested in, entering, and/or transitioning into the privacy industry.

Topics Covered:
Whether hiring trends are picking back up for technical privacy roles
The three 'Privacy Engineering' roles that companies seek to hire for, and their core competencies: Privacy Engineer, Privacy Software Engineer, & Privacy Research Engineer
The demand for 'Privacy Architects'
IAPP's new Privacy Engineering infographic & whether it maps to how companies approach hiring
Overall hiring trends for privacy engineers & technical privacy roles
Advice for technologists who want to grow into Privacy Engineer, Researcher, or Architect roles
Capabilities that companies need or want in candidates that they can't seem to find; & whether there are roles that are harder to fill because of a lack of candidates & skill sets
Whether a PhD is necessary to become a 'Privacy Research Engineer'
Typical pay ranges across technical privacy roles: Privacy Engineer, Privacy Software Engineer, Privacy Researcher, Privacy Architect
Differences in pay for a Privacy Engineering Manager vs. an Individual Contributor (IC), and the web apps for crowd-sourced info about roles & salary ranges
Whether companies seek to fill entry-level positions for technical privacy roles
How privacy technologists can stay up-to-date on hiring trends

Resources Mentioned:
Check out episode S2E11: Lessons Learned as a Privacy Engineering Manager with Menotti Minutillo (ex-Twitter & Uber)
IAPP Defining Privacy Engineering Infographic
Check out Blind and Levels for compensation benchmarking

Guest Info:
Connect with George on LinkedIn
Reach out to Stott & May for your privacy recruiting needs
Get ready for an eye-opening conversation with Sanjay Saini, the founder and CEO of Privaini, a groundbreaking privacy tech company. Sanjay's journey is impressive not only for his role in creating high-performance teams that have built entirely new product categories, but also for the invaluable lessons he learned from his grandfather about the pillars of successful companies: trust and human connections. In our discussion, Sanjay shares how Privaini is raising the privacy bar by constructing the world's largest repository of company privacy policies and practices. It's a fascinating dive into the future of privacy risk management. Imagine being able to gain full coverage of your external privacy risks with continuous monitoring. Wouldn't that revolutionize your approach to risk management? That's exactly what Privaini is doing. Sanjay explains how Privaini utilizes AI to analyze, standardize, and derive meaningful "privacy views" and insights from vast volumes of publicly-available data. Listen in to understand how Privaini's innovative approach is helping companies gain visibility into their entire business network to make quicker, more informed decisions.

Topics Covered:
What motivated Sanjay to found companies that bring trusted systems to market, and why he founded Privaini to focus on continuous privacy risk monitoring
How to quantitatively analyze & monitor privacy risk throughout an entire 'business network', and what Sanjay means by 'business network'
Which stakeholders benefit from using the Privaini platform
The benefits of calculating a 'quantified privacy risk score' for each company in your business network to effectively monitor privacy risk
How Privaini leverages AI to discover external data about companies' privacy posture, and why it must be used in a responsible and deliberate way
Why effective privacy risk monitoring of a company's business network requires an 'outside-in' approach
The importance of continuous monitoring & the benefits of using an 'outside-in' approach
What it takes to set up an enterprise's network with Privaini for full coverage of external privacy risks
The recent Criteo fines and how Privaini could have helped Criteo surface privacy risks about its vendors
Why Sanjay believes learning about the "right side" of the equation is necessary in order to 'shift privacy left'

Guest Info:
Connect with Sanjay on LinkedIn
Learn more about Privaini
This week's guest is Tom Kemp: author; entrepreneur; former Co-Founder & CEO of Centrify (now called Delinea), a leading cybersecurity cloud provider; and a Silicon Valley-based seed investor and policy advisor. Tom led campaign marketing efforts in 2020 to pass California Proposition 24, the California Privacy Rights Act (CPRA), and is currently co-authoring the California Delete Act bill. In this conversation, we discuss chapters within Tom's new book, Containing Big Tech: How to Protect Our CIVIL RIGHTS, ECONOMY, and DEMOCRACY; how big tech is using AI to feed into the attention economy; what should go into a U.S. federal privacy law and how it should be enforced; and a comprehensive look at some of Tom's privacy tech investments.

Topics Covered:
Tom's new book, Containing Big Tech: How to Protect Our Civil Rights, Economy and Democracy
How and why Tom's book is centered around data collection, artificial intelligence, and competition
U.S. state privacy legislation that Tom helped get passed & what he's working on now, including: CPRA, the California Delete Act, & the Texas Data Broker Registry
Whether there will ever be a U.S. federal, omnibus privacy law; what should be included in it; and how it should be enforced
Tom's work as a privacy tech and security tech seed investor with Kemp Au Ventures, and what inspires him to invest in a startup or not
What inspired Tom to invest in PrivacyCode, Secuvy & Privaini
Why team and market size are things Tom looks for when investing
The importance of designing for privacy from a 'user-interface perspective' so that it's consumer friendly
How consumers looking to trust companies are driving a shift left movement
Tom's advice for how companies can better shift left in their orgs & within their business networks

Resources Mentioned:
The California Consumer Privacy Act (amended by the CPRA)
The California Delete Act

Guest Info:
Follow Tom on LinkedIn
Kemp Au Ventures
Pre-order Containing Big Tech: How to Protect Our CIVIL RIGHTS, ECONOMY, and DEMOCRACY
This week's guest is Jeff Jockisch, Partner at Avantis Privacy and co-host of the weekly LinkedIn Live event Your Bytes = Your Rights, a town hall-style discussion around ownership, digital rights, and privacy. Jeff is currently a data privacy researcher at PrivacyPlan, where he focuses specifically on privacy data sets. In this conversation, we delve into current risks to location privacy; how precise location data really is; how humans can have more control over their data; and what organizations can do to protect humans' data privacy. For access to a dataset of data resources and privacy podcasts, check out Jeff's robust database — the Shifting Privacy Left podcast was recently added.

Topics Covered:
Jeff's approach to creating privacy data sets and what "gaining insight into the privacy landscape" means
How law enforcement can be a threat actor to someone's privacy, using the example of Texas' abortion law
Whether data brokers are getting exact location information or are inferring someone's location
Why geolocation brokers had not considered themselves data brokers
Why anonymization is insufficient for location privacy
How 'consent theater' coupled with location leakage is an existential threat to our privacy
How people can protect themselves from having data collected and sold by data and location brokers
Why app permissions should be more specific when notifying users about personal data collection and use
How Apple and Android devices treat Mobile Ad ID (MAID) differently, and how that affects your historical location data
How companies can protect data by using broader geolocation information instead of precise geolocation information
More information about Jeff's LinkedIn Live show, Your Bytes = Your Rights

Resources Mentioned:
Avantis Privacy
PrivacyPlan
Threat modeling episode with Kim Wuyts
"Your Bytes = Your Rights" LinkedIn Live
The California Delete Act
Privacy Podcast Database
Containing Big Tech

Guest Info:
Follow Jeff on LinkedIn
This week's guest is Kim Wuyts, Senior Postdoctoral Researcher at the DistriNet Research Group in the Department of Computer Science at KU Leuven. Kim is one of the leading minds behind the development and extension of LINDDUN, a privacy threat modeling framework that mitigates privacy threats in software systems. In this conversation, we discuss threat modeling based on the Threat Modeling Manifesto Kim co-authored; the benefits of using the LINDDUN privacy threat model framework; and how to bridge the gap between privacy-enhancing technologies (PETs) in academia and the commercial world.

Topics Covered:
Kim's career journey & why she moved into threat modeling
The definition of 'threat modeling,' who should threat model, and what's included in her "Threat Modeling Manifesto"
The connection between threat modeling & a 'shift left' mindset / strategy
Design patterns that benefit threat modeling & anti-patterns that inhibit it
Benefits of using the LINDDUN Privacy Threat Modeling framework for mitigating privacy threats in software, including the 7 'privacy threat types,' associated 'privacy threat trees,' and examples (the seven types are sketched in code below)
How 'privacy threat trees' refine each threat type into concrete threat characteristics, examples, criteria & impact info
Benefits & differences between LINDDUN GO and LINDDUN PRO
How orgs can combine threat modeling approaches with PETs to address privacy risk
Kim's work as Program Chair for the International Workshop on Privacy Engineering (IWPE), highlighting some anticipated talks
The overlap of privacy & AI threats, and Kim's recommendation of The Privacy Library of Threats 4 AI ("PLOT4AI") Threat Modeling Card Deck
Recommended resources for privacy threat modeling, privacy engineering & PETs
How the LINDDUN model & methodologies have been adopted by global orgs
How to bridge the gap between the academic & commercial worlds to advance & deploy PETs

Resources Mentioned:
The Threat Modeling Manifesto
LINDDUN Privacy Threat Model
STRIDE threat model
Threat Modeling Connect Community
Elevation of Privilege card game
Plot4AI (privacy & AI threat modeling) card deck
International Workshop on Privacy Engineering (IWPE)

Guest Info:
Follow Kim on LinkedIn
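For a feel of how LINDDUN elicitation works mechanically, here is a toy sketch: walking data-flow-diagram elements against the framework's seven threat categories (listed here under their classic names) to produce a question checklist. Real LINDDUN prunes this matrix with per-element applicability mappings and refines each hit through threat trees; the DFD elements below are invented.

```python
from itertools import product

# The seven LINDDUN privacy threat categories (classic naming)
CATEGORIES = [
    "Linkability", "Identifiability", "Non-repudiation", "Detectability",
    "Disclosure of information", "Unawareness", "Non-compliance",
]

# Toy data-flow-diagram elements for a hypothetical service
DFD_ELEMENTS = ["user->app data flow", "profile data store", "analytics process"]

def elicitation_checklist(elements):
    """Cross each DFD element with each threat category to seed
    systematic elicitation questions."""
    return [f"Does '{e}' expose a {c} threat?"
            for e, c in product(elements, CATEGORIES)]

for question in elicitation_checklist(DFD_ELEMENTS)[:3]:
    print(question)
```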
I am delighted to welcome my next guest, Brad Dominy. Brad is a macOS and iOS developer and the Founder & Inventor of Neucards, a privacy-preserving app that enables secure, shareable, and updatable digital contacts. In this conversation, we delve into why personally managing our digital contacts has been so difficult, and Brad's novel approach to securely managing our contacts, architected with privacy by design and default. Contacts have always been the "junk drawer" of digital data: information that people want to keep up-to-date, but rarely can with current technology. The vCard standard is outdated, but it is the only standard that works across iOS, Android, and Microsoft platforms. It is still the most commonly used contact format, yet it lacks any capacity for updating contacts (illustrated in the sketch below). Once someone exchanges their contact information with you, it falls on you to keep it up-to-date. This is why Brad created Neucards: to gain the benefits of sharing information easily and privately (with E2EE) and receiving updates across all platforms.

Topics Covered:
Why it is difficult to keep our digital contacts up-to-date across devices and platforms
Brad's career journey that inspired him to invent Neucards; the problems Neucards solves for; and why this became his passion project for over a decade
Why companies haven't innovated more in the digital contacts space
The 3 main features that make Neucards different from other contact apps
How Neucards enables you to share digital contact data easily & securely
Neucards' privacy-by-design-and-default approach to sharing and updating digital contacts
How you can use NFC tap tags with Neucards to make the process of sharing digital contacts much easier
Whether Neucards can solve the "New phone, who dis?" problem
Whether we will see an update to the vCard standard or new standards for digital contacts
Neucards' roadmap, including a 'mask communications' feature
The importance of language; the difference between 'privacy-preserving' vs. 'privacy-enabling' architectural approaches

Resources Mentioned:
Learn about Neucards
Download the Neucards iOS app

Guest Info:
Follow Brad on LinkedIn
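The vCard limitation discussed above is easy to see in the format itself. The sketch below builds a minimal vCard 3.0 record: the standard's REV property records when a card was last revised, but nothing in the format tells a recipient where to fetch the next revision, which is the gap an updatable-contacts approach like Neucards fills. The contact details are made up.

```python
# A minimal vCard 3.0 record: a static snapshot with no update channel.
card = "\r\n".join([
    "BEGIN:VCARD",
    "VERSION:3.0",
    "FN:Ada Example",
    "TEL;TYPE=CELL:+1-555-0100",
    "EMAIL:ada@example.com",
    "REV:2024-01-15T00:00:00Z",  # when the card last changed; no way to subscribe
    "END:VCARD",
])
print(card)
```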
In this week's episode, I speak with Damien Desfontaines, also known by the pseudonym "Ted," Staff Scientist at Tumult Labs, a startup leading the way on differential privacy. Earlier in his career, Damien led an anonymization consulting team at Google; he specializes in making it easy to safely anonymize data. Damien earned his PhD at ETH Zurich and his Master's degree in Mathematical Logic and Theoretical Computer Science. Tumult Labs' platform makes differential privacy useful by making it easy to create innovative, privacy-safe data products that can be shared and used widely. In this conversation, we focus our discussion on differential privacy techniques, including what's next in their evolution, common vulnerabilities, and how to implement differential privacy into your platform. When it comes to protecting personal data, Tumult Labs takes a three-stage approach: Assess, Design, and Deploy. Damien takes us on a deep dive into each, with use cases provided.

Topics Covered:
Why there's such a gap between academia and the corporate world
How differential privacy's strong privacy guarantees are a result of strong assumptions; and why the biggest blockers to DP deployments have been education & usability
When to use 'local' vs. 'central' differential privacy techniques (a minimal central-model sketch follows below)
Advancements in technology that enable the private collection of data
Tumult Labs' Assess stage for deploying differential privacy, where a customer defines its 'data publication' problem or question
How the Tumult Analytics platform can help you build differentially private algorithms that satisfy 'fitness for use' requirements
Why using gold-standard techniques like differential privacy to safely release, publish, or share data has value far beyond compliance
How data scientists can make analysis & design more robust to better preserve privacy; and the tradeoff between utility on very specific tasks & the number of tasks that you can possibly answer
Damien's work assisting the IRS & DOE in deploying differential privacy to safely publish and share data publicly via the College Scorecard project
How to address security vulnerabilities (i.e., potential attacks) in differentially private datasets
Where you can learn more about differential privacy
How Damien sees this space evolving over the next several years

Resources Mentioned:
Join the Tumult Labs Slack
Learn about Tumult Labs

Guest Info:
Connect with Damien on LinkedIn
Learn more on Damien's website
Follow 'Ted' on Twitter
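As a taste of the mechanics, here is a minimal central-model sketch (assuming NumPy): a trusted curator releases a count with Laplace noise scaled to the query's sensitivity. In the local model, by contrast, each user randomizes their own contribution before it is ever collected, which requires substantially more noise for the same privacy budget. This is a textbook illustration, not Tumult's implementation.

```python
import numpy as np

def laplace_count(true_count, epsilon, seed=None):
    """Central-model DP release of a counting query: one person can
    change a count by at most 1 (sensitivity 1), so Laplace noise with
    scale 1/epsilon yields epsilon-differential privacy."""
    rng = np.random.default_rng(seed)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

print(laplace_count(1234, epsilon=0.5, seed=42))  # ~1234, give or take a few
```

Smaller epsilon means more noise and stronger privacy; choosing epsilon so the result is still fit for use is exactly the Assess/Design work discussed in the episode.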
I'm delighted to welcome Melanie Ensign, Founder and CEO of Discernible, where she helps organizations adopt effective communication strategies to improve risk-related outcomes. She's managed security & privacy communications for some of the world's most notable brands, including Facebook, Uber & AT&T. Melanie counsels executives and technical teams to cut through internal politics, dysfunctional inertia & meaningless metrics. For the past 10 years, she's also led the press department & communication strategy for DEF CON. Melanie is also an accomplished scuba diver, and she brings lessons learned preventing, preparing for & navigating unexpected high-risk underwater incidents to her work in security & privacy. Today's discussion focuses on the importance of communication strategies and tactics for privacy engineering teams.

Topics Covered:
Melanie's career journey, and how she leveraged her experience in shark science to help executives get over their initial fears of the unknown in security & privacy
How Melanie guides and supports technical teams at Discernible on effective communications
How to prevent 'privacy outrage'
The value of preventing privacy snafus rather than focusing only on crisis comms
How companies can use technical communication strategies & tactics to earn trust with the public
The problem with incentives: why most social media metrics have been bullshit for far too long
Why Melanie decided to leave big tech to start Discernible
Insight into the 7 Arthur W. Page Society Principles, a 'code of ethics' for communications professionals
What makes for a good PR story that the media would want to cover
Why press releases are mostly ineffective, except for announcing funding raises
The importance of educating the community for which you're building
Melanie's advice to Elon Musk, who does not invest in a comms team
What OpenAI could have done differently, and whether its go-to-market strategy was effective
The importance of elevating Compliance teams to Business Advisors in the eyes of stakeholders

Resources Mentioned:
Subscribe to the Discernible newsletter
Discover GitHub's ReadME newsletter
Learn about the Arthur W. Page Principles

Guest Info:
Follow Melanie on LinkedIn
Follow Melanie on Mastodon
This week's guest is Umar Iqbal, PhD, a Postdoctoral Scholar at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, working in the Security and Privacy Research Lab. Umar focuses his research on two themes: 1) bringing transparency into data collection and usage practices, and 2) enabling individuals to have control over their own data by identifying & restricting privacy-invasive data collection & usage practices of online services. His long-term research vision is to create an environment where users can reap the benefits of technology without losing their privacy, by enabling preemptive privacy protections and establishing 'checks & balances' on the Internet. In this discussion, we cover his previous and current research, with a goal of empowering people to protect their privacy on the Internet.

Topics Covered:
Why Umar focused his research on transparency
Umar's research relating to transparency and data collection & use, with a focus on Amazon's smart speaker & metadata privacy and potential EU regulatory enforcement
His transparency-related work on browsers & APIs, and the growing problem of using fingerprinting techniques to track people without consent
How Umar plans to bring control to individuals by restricting online privacy-invasive data collection
How he used an ML technique to detect browser fingerprinting scripts based on their functionality (a toy version is sketched below)
Umar's research to determine the prevalence of online tracking & measure how effective currently-available tracker detection tools are
His research on early detection of emerging privacy threats (e.g., 'browser fingerprinting' & 'navigational tracking') and his investigation of privacy issues related to IoT (e.g., smart speakers & health and fitness bands that analyze people's voices)
How we can ensure strong privacy guarantees and make a more accountable Internet
Why regulations need technological support to be effective for enforcement
Umar's advice to developers / hackers looking for 'privacy bugs' via dynamic code analysis, and a discussion of the future of 'privacy bug bounties'

Resources Mentioned:
Read Umar's papers: Google Scholar Citations

Guest Info:
Learn about Umar on his website
Connect with Umar on LinkedIn
Follow Umar on Twitter
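A toy version of the classification idea, with invented features and a hand-labeled micro-corpus (assuming scikit-learn): count calls to browser APIs that fingerprinting scripts disproportionately use, then train a classifier on those counts. Umar's actual work extracts far richer syntactic and semantic features from real script corpora; this only shows the shape of the approach.

```python
from sklearn.tree import DecisionTreeClassifier

# APIs that fingerprinters lean on (canvas reads, plugin enumeration,
# audio rendering); counts of these form our feature vector.
APIS = ["toDataURL", "measureText", "navigator.plugins", "AudioContext"]

def features(script_src: str):
    return [script_src.count(api) for api in APIS]

# Tiny hand-labeled corpus: 1 = fingerprinting, 0 = benign (illustrative only)
scripts = [
    ("canvas.toDataURL(); ctx.measureText('mmm'); navigator.plugins;", 1),
    ("new AudioContext(); canvas.toDataURL(); navigator.plugins;", 1),
    ("document.querySelector('#cart').addEventListener('click', add);", 0),
    ("fetch('/api/items').then(r => r.json());", 0),
]
X = [features(src) for src, _ in scripts]
y = [label for _, label in scripts]
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

print(clf.predict([features("ctx.measureText(x); canvas.toDataURL()")]))  # [1]
```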
This week, we welcome Suchakra Sharma, Chief Scientist at Privado.ai, where he builds code analysis tools for data privacy & security. Previously, he earned his PhD in Computer Engineering from Polytechnique Montreal, where he worked on eBPF technology and hardware-assisted tracing techniques for OS analysis. In this conversation, we delve into Suchakra's background in shifting left for security and how he applies traditional, tested static analysis techniques — such as 'taint tracking' and 'data flow analysis' — on large code bases at scale to help fix privacy leaks right at the source.

Thank you to our sponsor, Privado, the developer-friendly privacy platform.

Suchakra aligns himself with the philosophical aspects of privacy and wishes to work on anything that helps limit the erosion of privacy in modern society, since privacy is fundamental to all of us. These kinds of needs have always been here, and as societies have advanced, we now require stronger guarantees of privacy. After all, it is humans who are behind systems, and it is humans who will be affected by the machines that we build. Check out this fascinating discussion on how to shift privacy left in your organization.

Topics Covered:
Why Suchakra became interested in privacy after focusing on static code analysis for security
What 'shift left' means, and lessons learned from the 'shift security left' movement that can be applied to 'shift privacy left' efforts
Sociological perspectives on how humans developed a need for keeping things 'private' from others
How to provide engineering-focused guarantees around privacy today, & what the role of engineers should be within this 'shift privacy left' paradigm
Suchakra's USENIX Enigma talk & discussion of 'taint tracking' & 'data flow analysis' techniques (a toy taint-propagation sketch follows below)
Which companies should build in-house tooling for static analysis, and which should outsource to experienced vendors like Privado
How to address 'privacy bugs' in code; why it's important to have an 'auditor's mindset;' & why we'll see 'Privacy Bug Bounty Programs' soon
Suchakra's advice to engineering managers to move the needle on privacy in their orgs

Resources Mentioned:
Join Privado's Slack Community
Review Privado's Open Source Code Scanning Tools

Guest Info:
Connect with Suchakra on LinkedIn
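Here is a deliberately tiny sketch of taint propagation, the technique named above: values derived from a personal-data source stay 'tainted' through assignments, and any tainted value reaching a sink is flagged. All source and sink names are hypothetical; production tools do this over real parse trees and interprocedural data flows.

```python
SOURCES = {"get_user_email"}              # hypothetical producers of personal data
SINKS = {"log", "send_to_third_party"}    # hypothetical places data may leak

def track(program):
    """Propagate taint through a toy list of (target, op, args) steps
    and report any tainted value that reaches a sink."""
    tainted, findings = set(), []
    for target, op, args in program:
        if op in SOURCES:
            tainted.add(target)                     # fresh personal data
        elif op in SINKS:
            findings += [f"{op}({a}) leaks tainted data"
                         for a in args if a in tainted]
        elif any(a in tainted for a in args):
            tainted.add(target)                     # taint flows through derivation

demo = [
    ("email", "get_user_email", []),
    ("masked", "hash", ["email"]),   # derived value inherits the taint
    (None, "log", ["masked"]),       # flagged: reaches a sink
]
```

One subtlety the toy version already captures: hashing the email does not clear the taint, which is why data flow analysis beats name-based matching for finding real privacy leaks. (Calling `track(demo)` returns the single `log(masked)` finding.)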
I am delighted to welcome this week's guest, Kurt Rohloff. Kurt is the CTO and Co-Founder of Duality Technologies, a privacy tech company that enables organizations to leverage data across their ecosystem and generate joint insights for better business while preserving privacy. Kurt was also a Co-Founder of the OpenFHE homomorphic encryption software library, which enables practical and usable privacy and collaborative data analytics. He has successfully led teams developing, transitioning, and applying first-in-the-world technology capabilities for both the Department of Defense and commercial use. Kurt specializes in generating, developing, and commercializing innovative secure computing technologies with a focus on privacy and AI/ML at scale. In this episode, we discuss use cases for leveraging Fully Homomorphic Encryption (FHE) and other PETs. In a previous episode, we spoke about federated learning; in this episode, we learn how to achieve secure federated learning using FHE techniques. Kurt has focused on and supported homomorphic encryption since it was first discovered, including running an implementation team on PROCEED, one of the seminal DARPA-funded projects. FHE, as opposed to other kinds of privacy technologies, is more general and malleable. As each organization has different needs when it comes to data collaboration, Duality Technologies offers three separate models for collaboration, which enable organizations to secure sensitive data while still allowing different types of sharing.

Topics Covered:
How companies can gain utility from a dataset while protecting the privacy of individuals or entities
How FHE helps with fraud prevention, secure investigations, real-world evidence & genome-wide association studies
Use cases for the three collaboration models Duality offers: Single Data Set, Horizontal Data Analysis, and Vertical Data Analysis (a toy encrypted-compute sketch follows below)
Comparison & trade-offs between federated learning and homomorphic encryption
The proliferation of FHE standards
OpenFHE.org, the leading open-source library for implementations of fully homomorphic encryption protocols

Resources Mentioned:
Review the OpenFHE encryption software library
Learn about Duality

Guest Info:
Connect with Kurt on LinkedIn
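To see what 'computing on encrypted data' means mechanically, here is a deliberately tiny sketch using textbook RSA, which is homomorphic for multiplication only (a partially homomorphic scheme). FHE schemes like those implemented in OpenFHE support both addition and multiplication on ciphertexts, which is what makes them general enough for analytics; the parameters below are toy-sized and insecure, for illustration only.

```python
# Toy textbook-RSA parameters; real schemes use enormous keys and
# randomized encryption -- raw RSA like this is insecure in practice.
p, q, e = 61, 53, 17
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # 3120
d = pow(e, -1, phi)            # private exponent via modular inverse (Python 3.8+)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 7, 6
# Multiply the two ciphertexts, decrypt once: we computed a*b without
# ever decrypting the inputs -- the homomorphic property.
assert dec(enc(a) * enc(b) % n) == (a * b) % n
print(dec(enc(a) * enc(b) % n))  # 42
```

In a Duality-style collaboration, this property is what lets one party run analytics over another party's encrypted records without ever seeing the underlying data.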
This week, I'm joined by Katharine Jarmul, Principal Data Scientist at Thoughtworks & author of the forthcoming book, "Practical Data Privacy: Enhancing Privacy and Security in Data." Katharine began asking questions similar to those of today's ethical machine learning community as a university student working on her undergrad thesis during the war in Iraq. She focused that research on natural language processing and investigated the statistical differences between embedded & non-embedded reporters. In our conversation, we discuss ethical & secure machine learning approaches, threat modeling against adversarial attacks, the importance of distributed data setups, and what Katharine wants data scientists to know about privacy and ethical ML. Katharine believes that we should never fall victim to a 'techno-solutionist' mindset, believing we can solve a deep societal problem with tech alone. However, by solving issues around privacy & consent with data collection, we can more easily address the challenges of ethical ML. In fact, ML research is finally beginning to broaden and include the intersections of law, privacy, and ethics. Katharine anticipates that data scientists will embrace PETs that facilitate data sharing in a privacy-preserving way; and she evangelizes the un-normalization of sending ML data from one company to another.

Topics Covered:
Katharine's motivation for writing a book on privacy for a data scientist audience, and what she hopes readers will learn from it
What areas must be addressed for ML to be considered ethical
Overlapping AI/ML & privacy goals
Challenges with sharing data for analytics
The need for data scientists to embrace PETs
How PETs will likely mature across orgs over the next 2 years
Katharine's & Debra's favorite PETs
The importance of threat modeling ML models: discussing 'adversarial attacks' like 'model inversion' & 'membership inference' attacks (a minimal membership inference demo follows below)
Why companies that train LLMs must be accountable for the safety of their models
New ethical approaches to data sharing
Why scraping data off the Internet to train models is the harder, lazier, unethical way to train ML models

Resources Mentioned:
Pre-order the forthcoming book: "Practical Data Privacy"
Subscribe to Katharine's newsletter: Probably Private

Guest Info:
Follow Katharine on LinkedIn
Follow Katharine on Twitter
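For intuition on membership inference, the sketch below (assuming NumPy and scikit-learn) mounts the classic threshold attack: an overfit model assigns lower loss to records it was trained on, so 'low loss' becomes a membership signal. The data and model here are synthetic and arbitrary; the point is the gap between member and non-member losses, not the specific numbers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
noise = (rng.random(600) < 0.15).astype(int)            # 15% label noise
y = (X[:, 0] + X[:, 1] > 0).astype(int) ^ noise
X_mem, y_mem, X_non, y_non = X[:300], y[:300], X[300:], y[300:]

# Overfit model trained only on the "member" half of the data
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_mem, y_mem)

def loss(m, X, y):
    """Per-record negative log-likelihood of the true label."""
    p = m.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-9, 1.0))

# Threshold attack: guess "member" when the record's loss is below the median
tau = np.median(np.r_[loss(model, X_mem, y_mem), loss(model, X_non, y_non)])
acc = ((loss(model, X_mem, y_mem) < tau).mean() +
       (loss(model, X_non, y_non) >= tau).mean()) / 2
print(f"membership inference accuracy: {acc:.2f} (0.50 = no leakage)")
```

The more a model memorizes its training data, the wider this loss gap grows, which is why threat modeling overfit models matters for privacy.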
This week, we gain insights into the profession of privacy engineering with guest Menotti Minutillo, a Sr. Privacy Engineering Manager with 15+ years of experience leading critical programs and product delivery at companies like Uber, Thrive Global & Twitter. He started his career in 2007 on Wall Street as a DevOps & Infrastructure Engineer; now, Menotti is a sought-after technical privacy expert and Privacy Tech Advisor. In this conversation, we discuss privacy engineering approaches that have worked, the skillsets required for privacy engineering, and the current climate for landing privacy engineering roles.

Menotti sees privacy engineering as the practice of building or improving info systems to advance a set of privacy goals. It's like a 'layer cake': you have different protections and risk reductions based on threat modeling, as well as different specialization capabilities for larger orgs. It makes sense that his roles have woven across functions from company to company. His journey into privacy engineering began as 'adjacent work,' and today he shares lessons learned from taking a PET like differential privacy from the lab, to systematizing it within an organization, to deploying it in the real world (a textbook sketch of the underlying mechanism follows these notes). In this episode, we delve into tools, technical processes, technical standards, the maturing landscape for privacy engineers, and how the success of privacy is coupled with the success of each product shipped.

Topics Covered:
How Menotti found his way to managing privacy engineering teams
Menotti's definition of 'privacy engineer' & the skillsets required
What it was like to work at Uber & Twitter, which have multiple privacy engineering teams
Best practices for setting up teams & deploying solutions
Privacy outcomes that privacy engineers should keep top of mind
Best practices for privacy architecture
Menotti's positive experience at Uber working with Privacy Researchers from UC Berkeley to take differential privacy from the lab to a real-world deployment
Lessons learned from times of transition, including while at Twitter during Musk's takeover
Whether privacy was a 'zero interest rate bet,' and what that means for privacy engineering roles given current economic realities

Resources Mentioned:
Check out the PEPR conference
Read 'Was Privacy a Zero Interest Rate Bet?'

Guest Info:
Follow Menotti on LinkedIn
Connect with Menotti on Mastodon

Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Buzzsprout - Launch your podcast
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
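For readers new to the PET discussed above, here is a minimal, textbook sketch of the Laplace mechanism, the building block most differential privacy deployments start from. It is a generic illustration with assumed parameters, not a description of Uber's system.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a noisy count via the Laplace mechanism.

    For a counting query, adding or removing one person changes the result
    by at most 1 (the sensitivity), so Laplace noise with scale
    sensitivity/epsilon yields epsilon-differential privacy for this query.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Smaller epsilon = more noise = stronger privacy, weaker accuracy.
for eps in (0.1, 1.0, 10.0):
    print(eps, round(dp_count(10_452, epsilon=eps), 1))
```

The lab-to-production work Menotti describes is everything around this one-liner: tracking privacy budgets across repeated queries, calibrating sensitivity for real metrics, and operationalizing it all in production pipelines.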
This week, we welcome Lipika Ramaswamy, Senior Applied Scientist at Gretel AI, a privacy tech company that makes it simple to generate anonymized, safe synthetic data via APIs. Previously, Lipika worked as a Data Scientist at LeapYear Technologies and as a Machine Learning Researcher at Harvard University's Privacy Tools Project. Lipika's interest in both machine learning and privacy comes from her love of math and things that can be defined with equations. Her interest was piqued in grad school when she accidentally walked into a classroom holding a lecture on applying differential privacy for data science. The intersection of data and the privacy guarantees available today has kept her hooked ever since.

---------Thank you to our sponsor, Privado, the developer-friendly privacy platform---------

There's a lot to unpack when it comes to synthetic data & privacy guarantees, and Lipika takes listeners on a deep dive into these compelling topics (a simplified sketch of the basic generate-from-a-model recipe follows these notes). She finds it elegant that privacy assurances like differential privacy revolve around math and statistics at their core. Essentially, she loves building things with 'usable privacy' & security that people can easily use. We also delve into the metrics tracked in the Gretel Synthetic Data Report, which assesses both the 'statistical integrity' & 'privacy levels' of a customer's training data.

Topics Covered:
The definition of 'synthetic data' & good use cases
The process of creating synthetic data
How to ensure that synthetic data is 'privacy-preserving'
Privacy problems that may arise from overtraining ML models
When to use synthetic data rather than other techniques like tokenization, anonymization, aggregation & others
Examples of good use cases vs. poor use cases for synthetic data
Common misperceptions around synthetic data
Gretel.ai's approach to 'privacy assurance,' including a focus on 'privacy filters,' which prevent some privacy harms in model outputs
How to plug into the 'synthetic data' community
Who bears the responsibility for educating the public about new technology like LLMs and potential harms
Highlights from Gretel.ai's Synthesize 2023 conference

Resources Mentioned:
Join Gretel's Synthetic Data Community on Discord
Watch Talks on Synthetic Data on YouTube

Guest Info:
Connect with Lipika on LinkedIn

Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Buzzsprout - Launch your podcast
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
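The basic recipe behind synthetic data is: fit a generative model to real records, then sample new records from the model. Below is a deliberately simple sketch using a Gaussian mixture; the data and parameters are invented, and, unlike production tools such as Gretel's, this recipe carries no formal privacy guarantee on its own (no differentially private training, no privacy filters).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Invented "real" records: (age, income) pairs drawn for illustration.
rng = np.random.default_rng(42)
real = rng.multivariate_normal(
    mean=[35, 62_000],
    cov=[[64, 15_000], [15_000, 9e7]],
    size=5_000,
)

# Fit a generative model to the real data, then sample synthetic records.
model = GaussianMixture(n_components=5, random_state=0).fit(real)
synthetic, _ = model.sample(5_000)

# A crude "statistical integrity" check: do the marginals roughly match?
print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```

Production platforms layer protections on top of this recipe, for example differentially private training and post-hoc filters that reject samples too close to any training record, which is the kind of 'privacy assurance' discussed in the episode.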
In this episode, I'm delighted to welcome Apu Kapadia, Professor of Computer Science and Informatics at the School of Informatics and Computing, Indiana University. His research focuses on the privacy implications of ubiquitous cameras and online photo sharing. More recently, he has examined the cybersecurity and privacy challenges posed by AI-based smart voice assistants that can listen and converse with us.

Prof. Kapadia has been excited by anonymity networks since childhood. He has memories of watching movies where a telephone call was routed around the world so that it became impossible to trace. What really fascinates him now is how much there is to understand mathematically and technically in order to measure that amount of privacy. In more recent years, he has been interested in privacy in the context of digital photography and audio shared online and on social media. His current research focuses on understanding privacy issues around photo sharing in a world with cameras everywhere.

In this conversation, we delve into how users are affected once privacy violations have already occurred, the privacy implications for children when parents share photos of them online, the fascinating future of trusted hardware that will help ensure "digital forgetting," and how all of this is a people problem as much as a technical one. (A toy sketch of the noise-masking idea behind one of Apu's papers follows these notes.)

Topics Covered:
Can we trick 'automated speech recognition' (ASR)?
Apu's co-authored paper: 'Defending Against Microphone-based Attacks with Personalized Noise'
What Apu means by 'tangible privacy' & what design approaches he recommends
Apu's view on 'bystander privacy' & the approach he took in his research
How to leverage 'temporal redactions' via 'trusted hardware' for 'digital forgetting'
Apu's surprising finding in his research on "interpersonal privacy" in the context of social media and photos
Guidance for developers building privacy-respectful social media apps
Apu's research on cybersecurity & privacy for marginalized & vulnerable populations
How we can make privacy & security more 'usable'

Resources Mentioned:
Read Defending Against Microphone-Based Attacks with Personalized Noise
Read Decaying Photos for Enhanced Privacy: User Perceptions Towards Temporal Redactions and 'Trusted' Platforms

Guest Info:
Follow Prof. Kapadia on LinkedIn

Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Buzzsprout - Launch your podcast
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
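For intuition only, here is a toy numpy sketch of the masking idea: play noise alongside speech so that a nearby microphone captures a low signal-to-noise mix. The personalized-noise defense in Apu's paper shapes noise to the speaker's own voice; the plain Gaussian noise and sine-wave "speech" below are crude stand-ins.

```python
import numpy as np

SAMPLE_RATE = 16_000
t = np.linspace(0.0, 1.0, SAMPLE_RATE, endpoint=False)
speech = 0.6 * np.sin(2 * np.pi * 220 * t)  # stand-in for a voice signal

def mix_with_masking_noise(signal: np.ndarray, snr_db: float) -> np.ndarray:
    """Add Gaussian noise so the mix has the requested signal-to-noise ratio."""
    signal_power = np.mean(signal**2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(scale=np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# At 0 dB the noise is as loud as the speech; ASR accuracy typically degrades
# sharply as SNR drops, which is the effect such defenses aim for.
masked = mix_with_masking_noise(speech, snr_db=0.0)
print("mix power vs. speech power:", round(np.mean(masked**2) / np.mean(speech**2), 2))
```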
Victor Platt is a Senior AI Security and Privacy Strategist who previously served as Head of Security and Privacy for the privacy tech company Integrate.ai. Victor was formerly a founding member of the Risk AI Team at Omnia AI, Deloitte's artificial intelligence practice in Canada. He joins today to discuss privacy enhancing technologies (PETs) that are shaping industries around the world, with a focus on federated learning.

---------Thank you to our sponsor, Privado, the developer-friendly privacy platform---------

Victor views PETs as functional requirements and says they shouldn't be buried in your design document as nonfunctional obligations. In his work, he has found key gaps where organizations were only doing "security for security's sake." Instead, he believes organizations should think about privacy at the forefront; not only that, we should all be getting excited about it, because we all have a stake in privacy.

With federated learning, you have the tools to train ML models on large data sets with precision, at scale, without risking user privacy. In this conversation, Victor demystifies what federated learning is, describes the 2 different types - at the edge and across data silos - and explains how it works and how it compares to traditional machine learning (a minimal sketch of the core aggregation loop follows these notes). We dive deep into how an organization knows when to use federated learning, with specific advice for developers and data scientists as they implement it in their organizations.

Topics Covered:
What 'federated learning' is and how it compares to traditional machine learning
When an organization should use vertical federated learning vs. horizontal federated learning, or instead a hybrid version
A key challenge in 'transfer learning': knowing whether two data sets are related to each other, and techniques to overcome this, like 'private set intersection'
How the future of technology will be underpinned by a 'constellation of PETs'
The distinction between 'input privacy' vs. 'output privacy'
Different kinds of federated learning with use case examples
Where the responsibility for adding PETs lies within an organization
The key barriers to adopting federated learning and other PETs within different industries and use cases
How to move the needle on data privacy when it comes to legislation and regulation

Resources Mentioned:
Take this outstanding, free class from OpenMined: Our Privacy Opportunity

Guest Info:
Follow Victor on LinkedIn

Follow the SPL Show:
Follow us on Twitter
Follow us on LinkedIn
Check out our website

Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Buzzsprout - Launch your podcast
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
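Here is a minimal sketch of horizontal federated averaging (FedAvg-style), assuming three clients that share a schema but hold different rows. Only model weights travel to the server; raw records never leave a client. Everything here (data, learning rate, round count) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground truth the clients jointly learn

def client_update(w: np.ndarray, X: np.ndarray, y: np.ndarray,
                  lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """Run a few local gradient steps on the client's private data."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w = w - lr * grad
    return w

# Horizontal partition: each client holds its own rows of the same schema.
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=200)))

w_global = np.zeros(2)
for _ in range(20):  # each round: broadcast, local training, aggregate
    local_weights = [client_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)  # server sees weights only

print(w_global.round(3))  # approaches [2, -1] without pooling raw data
```

In practice, as Victor notes, federated learning is usually combined with other PETs such as secure aggregation and differential privacy, because model updates themselves can leak information - the 'constellation of PETs' idea.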
In this conversation with Markus Lampinen, Co-founder and CEO at Prifina, a personal data platform, we discuss meaty topics like: Prifina's approach to building privacy-respecting apps for consumer wearable sensors; large language models (LLMs) like ChatGPT; and why we should consider training our own personal AIs.

Markus shares his entrepreneurial journey in the privacy world and how he is "the biggest data nerd you'll find." It started with tracking his own data, like his eating habits, activity, sleep, and stress, and then he built his company around that interest. His curiosity about what you can glean from your own data made him wonder how you could also improve your life, or the lives of your customers, with that data.

---------Thank you to our sponsor, Privado, the developer-friendly privacy platform---------

We discuss how to approach building a privacy-first platform to unlock the value and use of IoT / sensor data. It began with the concept of individual ownership: who should actually benefit from the data that we generate? Markus says it should be individuals themselves. Prifina boasts a strong community of 30,000 developers who align around common interests - liberty, equality & data - and who build and test prototypes that gather and use data on behalf of individuals, as opposed to corporate entities. The aim is to empower individuals, companies & developers to build apps that re-purpose individuals' own sensor data to gain privacy-enabled insights.

---------Listen to the episode on Apple Podcasts, Spotify, iHeartRadio, or on your favorite podcast platform.---------

Topics Covered:
Enabling true, consumer-grade 'data portability' with personal data clouds (a 'bring your own data' approach)
Use cases to illustrate the problems Prifina is solving with sensors
What large language models (LLMs) and the chatbots trained on them are, and why they are so hot right now
The dangers of using LLMs, with emphasis on privacy harms
How to benefit from our own data with personal AIs
Advice to data scientists, researchers and developers regarding how to architect for ethical uses of LLMs
Who's responsible for educating the public about LLMs, chatbots, and their potential harms & limitations

Resources Mentioned:
Learn more about Prifina
Join Prifina's Slack Community: Liberty.Equality.Data

Guest Info:
Follow Markus on LinkedIn
Follow Markus on

Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Buzzsprout - Launch your podcast
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
Today, I welcome Gary LaFever, co-CEO & GC at Anonos, WEF Global Innovator, and a solutions-oriented futurist with a computer science and legal background. Gary has over 35 years of technical, legal, and policy experience that enables him to approach issues from multiple perspectives. I last saw Gary when we shared the stage at a RegTech conference in London six years ago, and it was a pleasure to speak with him again about how the Schrems II decision, coupled with the increasing prevalence of data breaches and ransomware attacks, has shifted privacy left from optional to mandatory, necessitating a "privacy left trust" approach.

---------Thank you to our sponsor, Privado, the developer-friendly privacy platform---------

Gary describes the 7 Universal Data Use Cases with relatable examples and explains how they apply across orgs and industries, regardless of jurisdiction. We then dive into what Gary is seeing in the market with regard to these use cases. He then reveals the 3 Main Data Use Obstacles to accomplishing them and how to overcome those obstacles with "statutory pseudonymization" and "synthetic data" (a toy sketch of one pseudonymization building block follows these notes).

In this conversation evaluating how we can do business in a de-risked environment, we discuss why you can't approach privacy with just words (contracts, policies, and treaties); why it's essential to protect data in use; and how you can embed technical controls that move with the data, providing protection that meets regulatory thresholds while "in use" and unlocking additional data use cases. In short, these effective controls equate to competitive advantage.

Topics Covered:
Why trust must be updated to be technologically enforced - "privacy left trust"
The increasing prevalence of data breaches and ransomware attacks, and how they have shifted privacy left from optional to mandatory
7 Data Use Cases, 3 Data Use Obstacles, and deployable technologies to unlock new data use cases
How the market is adopting technology for the 7 use cases, and trends that Gary is seeing
What it means to "de-risk" data
Beneficial uses of "variant twins" technology
Building privacy in by design so that it increases revenue generation
"Statutory pseudonymization" and how it will help you reduce data privacy risks while increasing utility and value

Resources Mentioned:
Learn about Anonos
Read: "Technical Controls that Protect Data When in Use and Prevent Misuse"

Guest Info:
Follow Gary on LinkedIn
Follow Gary on Twitter

Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Buzzsprout - Launch your podcast
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
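As a concrete anchor for the pseudonymization discussion: one common building block is keyed tokenization, replacing a direct identifier with a token that can only be linked back using a separately held secret. This sketch is illustrative only; "statutory pseudonymization" in the GDPR sense requires considerably more (e.g., dynamism, controls that travel with the data), and the key and record below are invented.

```python
import hashlib
import hmac

# Hypothetical key; in practice it lives in a KMS, held separately from the data.
SECRET_KEY = b"example-key-held-in-separate-kms"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed token (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane@example.com", "purchase_total": 49.99}
safe_record = {**record, "email": pseudonymize(record["email"])}

print(safe_record)  # analytics can join on the stable token, not the identifier
```

Because the same input yields the same token under one key, datasets remain joinable for analytics; key management and separation are where much of the real protection, and the regulatory analysis, lives.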
R. Jason Cronk is the Founder of the Institute of Operational Privacy Design (IOPD) and CEO of Enterprivacy Consulting Group, as well as the author of Strategic Privacy by Design. I recently caught up with Jason at the annual Privacy Law Salon event, where we had a conversation about the socio-technical challenges of privacy, the different privacy-by-design frameworks he's worked on, and his thoughts on some hot topics in the web privacy space.

---------Thank you to our sponsor, Privado, the developer-friendly privacy platform---------

We start off discussing updates to Strategic Privacy by Design, now in its 2nd edition. We chat about the brand new ISO 31700 Privacy by Design for Consumer Goods and Services standard and its consensus process, and compare it to the NIST Privacy Framework, the IEEE 7002 Standard for Data Privacy, and Jason's work with the Institute of Operational Privacy Design (IOPD) and its newly published Design Process Standard v1.

Jason and I also explore risk tolerance through the lens of privacy using FAIR (a toy simulation in that spirit follows these notes). There's a lot of room for subjective interpretation, particularly of non-monetary harm, and Jason provides many thought-provoking examples of how this plays out in our society. We round out our conversation by talking about the challenges of Global Privacy Control (GPC) and what deceptive design strategies to look out for.

Topics Covered:
Why we should think of privacy beyond "digital privacy"
What readers can expect from Jason's book, Strategic Privacy by Design, and what's included in the 2nd edition
IOPD's B2B third-party privacy audit
Why you should leverage the FAIR quantitative risk analysis model to define and run effective privacy risk management programs
The NIST Privacy Framework and developments from its Privacy Workforce Working Group
Dark patterns & why just asking the wrong question can be a privacy harm (interrogation)
How there are 15 privacy harms & only 1 of them is about security

Resources Mentioned:
Learn about the ISO 31700 Privacy by Design Standard
Review the IOPD Design Process Standard v1

Guest Info:
Follow Jason on LinkedIn
Follow Enterprivacy Consulting Group on Twitter

Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Buzzsprout - Launch your podcast
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
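FAIR quantifies risk as a distribution of loss rather than a single score: simulate how often loss events occur and how big each one is, then read tolerances off the resulting distribution. The sketch below is in that spirit only; every parameter (event frequency, loss magnitude) is invented, and real FAIR analyses calibrate these inputs from evidence.

```python
import numpy as np

rng = np.random.default_rng(1)
TRIALS = 20_000  # simulated years

# Assumed inputs (invented): loss events follow a Poisson process, and each
# event's magnitude is lognormal -- a common shape for loss data.
events_per_year = rng.poisson(lam=2.0, size=TRIALS)
annual_loss = np.array([
    rng.lognormal(mean=10.0, sigma=1.0, size=n).sum()
    for n in events_per_year
])

# Risk-tolerance questions become percentile questions.
print(f"median annualized loss: ${np.median(annual_loss):,.0f}")
print(f"95th percentile loss:   ${np.percentile(annual_loss, 95):,.0f}")
```

Non-monetary harms, which Jason highlights, are exactly what resists this kind of calibration - hence the room for subjective interpretation he describes.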
Nishant Bhajaria is the Director of Privacy Engineering, Architecture, & Analytics at Uber and author of "Data Privacy: A Runbook for Engineers." He's also an Advisor to Data Protocol, Privado & Piiano. In our conversation, we discuss privacy engineering trends, educational materials that Nishant has developed, and his advice for privacy technologists, engineers, and hiring managers.

---------Thank you to our sponsor, Privado, the developer-friendly privacy platform---------

Nishant is a great example of a cross-functional, influential agent who has adapted to the ever-growing privacy discipline. He describes himself as an engineer for the attorneys and an attorney for the engineers, which has helped him secure positions at WebMD, Nike, Netflix, and now Uber. Nishant shares his advice for career development, both through the lens of how to break into the privacy space and how to grow within your role. He explains how he has achieved board-level understanding of the importance of privacy as a product, not an afterthought. He also highlights takeaways from his book and online courses.

Topics Covered:
How privacy engineers can secure their jobs during the current widespread tech industry layoffs
Privacy tech as the glue between different teams and in-house services
How to make privacy more visible to the business as something that benefits the bottom line
Common mistakes that Nishant sees engineers make when it comes to privacy
What's covered in Nishant's 'Privacy by Design' courses

Resources Mentioned:
Buy Data Privacy: A Runbook for Engineers
Check out the Privacy Engineering Certification Course

Guest Info:
Follow Nishant on LinkedIn

Follow the SPL Show:
Follow us on Twitter
Follow us on LinkedIn
Check out our website

Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Buzzsprout - Launch your podcast
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.