Podcasts about Federated learning

  • 144 PODCASTS
  • 180 EPISODES
  • 46m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Jun 4, 2025 LATEST
Federated learning

POPULARITY (trend chart, 2017-2024)


Best podcasts about Federated learning

Latest podcast episodes about Federated learning

Practical AI
Federated learning in production (part 2)

Practical AI

Jun 4, 2025 · 45:25 · Transcription Available


Chong Shen from Flower Labs joins us to discuss what it really takes to build production-ready federated learning systems that work across data silos. We talk about the Flower framework and its architecture (supernodes, superlinks, etc.), and what makes it both "friendly" and ready for real enterprise environments. We also explore how the generative AI boom is reshaping Flower's roadmap. Featuring: Chong Shen Ng – LinkedIn; Chris Benson – Website, GitHub, LinkedIn, X; Daniel Whitenack – Website, GitHub, X. Episode links: The future of AI training is federated, DeepLearning.ai short course on Federated Learning with Flower, Flower Monthly, Federated Learning in Automotive, Federated AI in Finance, Federated Learning in Healthcare, Federated AI on IoT Systems, FlowerTune LLM Leaderboard, Flower Intelligence, GitHub, Slack, Flower Discuss. Check out upcoming webinars! Sponsors: NordLayer is toggle-ready network security built for modern businesses—combining VPN, access control, and threat protection in one platform that deploys in under 10 minutes with no hardware required. It's built on Zero Trust architecture with granular access controls, so only the right people access the right resources, and it scales effortlessly as your team grows. Get up to 32% off yearly plans with code practically-10 at nordlayer.com/practicalai - 14-day money-back guarantee included.
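The supernode/superlink architecture described above wraps the same core loop found in most federated learning systems: each data silo trains locally, and a coordinating server merges the resulting weight updates. As a rough, hypothetical sketch of that loop in plain NumPy (this is not Flower's actual API; all function and variable names are invented for illustration):

```python
import numpy as np

def local_update(weights, features, labels, lr=0.1, epochs=5):
    """Hypothetical client step: a few epochs of gradient descent on a
    linear model, using only the data held by this silo."""
    w = weights.copy()
    for _ in range(epochs):
        preds = features @ w
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w, len(labels)  # updated weights plus local sample count

def federated_average(client_results):
    """Server step (FedAvg): average client weights, weighted by sample
    count. Only weights and counts cross the network, never raw records."""
    total = sum(n for _, n in client_results)
    return sum(w * (n / total) for w, n in client_results)

# Toy run with three simulated data silos that never pool their data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
silos = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    silos.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # federated rounds
    results = [local_update(global_w, X, y) for X, y in silos]
    global_w = federated_average(results)

print(global_w)  # should land close to [2.0, -1.0]
```

Production frameworks such as Flower add the pieces this toy version ignores (client discovery, secure transport, failure handling, scheduling), which is where components like supernodes and superlinks come in.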

Practical AI
Federated learning in production (part 1)

Practical AI

May 30, 2025 · 44:38 · Transcription Available


In this first of a two-part series on federated learning, we dive into the evolving world of federated learning and distributed AI frameworks with Patrick Foley from Intel. We explore how frameworks like OpenFL and Flower are enabling secure, collaborative model training across silos, especially in sensitive fields like healthcare. The conversation touches on real-world use cases, the challenges of distributed ML/AI experiments, and why privacy-preserving techniques may become essential for deploying AI to production. Featuring: Patrick Foley – LinkedIn; Chris Benson – Website, GitHub, LinkedIn, X; Daniel Whitenack – Website, GitHub, X. Links: Intel, OpenFL. Sponsors: NordLayer is a toggle-ready network security platform built for modern businesses. It combines VPN, access control, and threat protection in one easy-to-use platform. No hardware. No complex setup. Just secure connection and full control—in less than 10 minutes. Up to 22% off NordLayer yearly plans plus 10% on top with the coupon code practically-10.

healthsystemCIO.com
Clean Data Combined with Federated Learning Keys to Maximizing AI Efforts

healthsystemCIO.com

Feb 25, 2025 · 26:24


The promise of AI in healthcare hinges on a fundamental requirement: high-quality data. Without it, even the most advanced algorithms will fail to deliver meaningful results, according to Sonya Makhni, MD, Medical Director of Applied Informatics at Mayo Clinic Platform. “Data quality isn’t always the most exciting topic at conferences, but it’s the foundation of […] Source: Clean Data Combined with Federated Learning Keys to Maximizing AI Efforts on healthsystemcio.com - healthsystemCIO.com is the sole online-only publication dedicated to exclusively and comprehensively serving the information needs of healthcare CIOs.

Mobile Dev Memo Podcast
Season 5, Episode 5: The measurement myth

Mobile Dev Memo Podcast

Jan 29, 2025 · 32:31


"Half the money I spend on advertising is wasted; the trouble is, I don't know which half." Knowing the context of his work, my view of the infamous quote attributed to John Wanamaker is that advertising measurement is fundamentally and necessarily uncertain, even in success. This surfaces another, in my view, invalid interpretation of the quote: that advertising is only effective when it can be measured perfectly, absolutely, and with total precision. To my mind, this has been the prevailing view within digital advertising sector: that advertising measurement is inherently defined by total, deterministic precision. This is the measurement myth. In this podcast, I'll unpack the measurement myth and why I believe the digital advertising ecosystem is abandoning it in favor of more holistic, statistically sophisticated, and scalable approaches to advertising attribution and measurement. I'll discuss some of the methodologies at the frontier of advertising attribution that are alleviating the need for deterministic identity in advertising measurement and how their use allows advertisers to materially expand the reach of their messaging, and what the implications of that are for the digital economy. Resources referenced / cited in this podcast: CapitalOne Mobile e-Commerce Statistics Sensor Tower 5 Year Market Forecast IAB 2025 Outlook Study Meta's Renaissance Everything is an ad network Netflix and Disney+ advertising, two years in Flying blind Last-click attribution, deterministic measurement, and Wittgenstein's ruler A Comprehensive Guide to Bayesian Marketing Mix Modeling Podcast: Understanding Interoperable Private Attribution (with Ben Savage) What is Federated Learning in digital advertising? Thanks to the sponsors of this week's episode of the Mobile Dev Memo podcast: Vibe. Vibe is the leading Streaming TV ad platform for small and medium-sized businesses looking for actionable advertising campaign performance. INCRMNTAL⁠⁠. True attribution measures incrementality, always on. Interested in sponsoring the Mobile Dev Memo podcast? Contact ⁠Marketecture⁠. The Mobile Dev Memo podcast is available on: Apple Podcasts Spotify Google Podcasts

Digital Pathology Podcast
117: Tertiary Lymphoid Structures in Colorectal Cancer Prognosis | Dr. Aleks + AI

Digital Pathology Podcast

Dec 11, 2024 · 22:15 · Transcription Available


Leveraging AI for Deep Insights into Tertiary Lymphoid Structures in Colorectal Cancer. In this episode of the Digital Pathology Podcast, I introduce 'Aleks + AI,' a new experimental series leveraging Google's Notebook LM to delve deeper into scientific literature. Today's focus is on tertiary lymphoid structures (TLS) and their potential to predict colorectal cancer prognosis. We discuss a study published in the October 2024 issue of Precision Clinical Medicine, exploring different methods of quantifying TLS using digital pathology and AI. The paper title is: "Comparative analysis of tertiary lymphoid structures for predicting survival of colorectal cancer: a whole-slide images-based study". The findings highlight TLS density as a reliable predictor of survival and its correlation with immune responses and microsatellite instability. We also touch upon the potential for AI to streamline TLS analysis in clinical settings and the broader implications for personalized medicine. Join us as we dive into the intersection of digital pathology and computer science, featuring insights and commentary from my AI co-hosts, Hema and Toxy. 00:00 Welcome and Introduction 00:45 Introducing the New AI Tool: Notebook LM by Google 01:11 Experimental Series: "Aleks + AI" 02:06 Deep Dive into Tertiary Lymphoid Structures (TLS) 03:18 Understanding TLS and Their Role in Colorectal Cancer 04:20 Quantification Methods and Key Findings 05:02 Implications for Personalized Medicine 09:02 AI in TLS Analysis and Future Prospects 11:00 CMS Classification and TLS Density 12:08 Study Limitations and Future Directions 15:40 Final Thoughts and Wrap-Up 16:28 Feedback and Future Plans. THIS EPISODE'S RESOURCES: 116: DigiPath Digest #18 | Federated Learning in Pathology. Developing AI Models While Preserving Privacy. PUBLICATION DISCUSSED TODAY

The Tech Blog Writer Podcast
3111: Unlocking the Power of Federated Learning in Business

The Tech Blog Writer Podcast

Dec 7, 2024 · 22:38


What if your organization could unlock the full potential of AI without ever compromising on privacy or sharing sensitive data? In this episode of Tech Talks Daily, I am joined by Alexander Alten, Co-Founder and CEO of Scalytics, to explore how he is building the next-generation infrastructure layer for AI agents. Alexander brings a wealth of expertise, having led data and product teams at industry giants like Cloudera, Allianz, and Healthgrades. With a background in startups such as X-Warp and Infinite Devices, he has a proven track record of developing customer-centric, data-driven solutions that not only disrupt conventional norms but also fuel measurable growth. During our conversation at the IT Press Tour in Malta,  Alexander introduces Scalytics Connect, a modern AI data platform designed to accelerate insights while preserving privacy. He unpacks the challenges of breaking down data silos and explains why centralizing data may not always be the optimal solution. We also demystify federated learning, shedding light on its potential to empower businesses, particularly in regulated industries, to collaborate on AI models without exposing their data. The discussion extends to the value of open-source technologies and why they often emerge as long-term winners, citing examples like MySQL, Postgres, and WordPress. Alexander shares how Scalytics leverages open-source principles to provide scalable and transparent machine learning solutions for businesses looking to outperform in an increasingly data-driven world. As AI continues to redefine the way we work and innovate, Alexander's insights provide a roadmap for navigating the complexities of decentralized machine learning, privacy-first AI, and scalable technology. Could his approach to AI and data collaboration be the key to unlocking your organization's potential? Tune in to find out, and don't forget to share your thoughts on the future of AI-powered innovation.

Digital Pathology Podcast
116: DigiPath Digest #18 | Federated Learning in Pathology. Developing AI Models While Preserving Privacy

Digital Pathology Podcast

Dec 6, 2024 · 26:33 · Transcription Available


In today's DigiPath Digest, we delve into federated learning, a decentralized approach to AI training that preserves data privacy. I discuss recent papers from PubMed and share my experiences experimenting with AI tools like Perplexity and Gemini for research efficiency. You will also get updates on upcoming plans, including leveraging AI to share more podcasts with you. Did I mention that this is the last livestream of the year as I head to Poland for Christmas? No more DigiPath Digests. We got to number 18 (I overestimated it a bit in the podcast), and you have been instrumental in continuing this series! Big THANK YOU to all the digital pathology #TRLBLZRS showing up every Friday morning for this! Join me as we tackle the nuances of federated learning and its impact on healthcare and pathology. 00:00 Introduction and Greetings 00:18 Today's Topic: Federated Learning 00:57 AI Tools and Updates 04:39 Federated Learning in Detail 08:03 Challenges and Benefits of Federated Learning 11:21 Exploring More Papers and Future Plans 22:53 Wrapping Up and Final Thoughts. Links and Resources: Subscribe to Digital Pathology Podcast on YouTube; Free E-book "Pathology 101"; YouTube (unedited) version of this episode; Try Perplexity with my referral link; My new page built with Perplexity. Publications Discussed Today:

Cyber Security Inside
217. Federated Learning: A New Era of Collaboration for Pharma

Cyber Security Inside

Dec 2, 2024 · 34:52


In this InTechnology episode, Camille Morhardt discusses the application of artificial intelligence (AI) in the biopharmaceutical and life sciences sectors with Prashant Shah, Intel's CTO for federated artificial intelligence products, and Abhishek Pandey, a global lead and principal research scientist at AbbVie. The conversation centers on the potential of AI, particularly federated learning, to revolutionize drug discovery and development. They explore the challenges of data privacy and IP protection in this context, emphasizing the importance of collaboration and the role of initiatives like OpenFL and MLCommons in setting standards for AI in the industry.   The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.  

Eye On A.I.
#213 Mark Surman: How Mozilla Is Shaping the Future of Open-Source AI

Eye On A.I.

Oct 13, 2024 · 47:14


This episode is sponsored by Oracle. AI is revolutionizing industries, but needs power without breaking the bank. Enter Oracle Cloud Infrastructure (OCI): the one-stop platform for all your AI needs, with 4-8x the bandwidth of other clouds. Train AI models faster and at half the cost. Be ahead like Uber and Cohere.   If you want to do more and spend less like Uber, 8x8, and Databricks Mosaic - take a free test drive of OCI at https://oracle.com/eyeonai     In this episode of the Eye on AI podcast, we sit down with Mark Surman, President of Mozilla, to explore the future of open-source AI and how Mozilla is leading the charge for privacy, transparency, and ethical technology.   Mark shares Mozilla's vision for AI, detailing the company's innovative approach to building trustworthy AI and the launch of Mozilla AI. He explains how Mozilla is working to make AI open, accessible, and secure for everyone—just as it did for the web with Firefox. We also dive into the growing importance of federated learning and AI governance, and how Mozilla Ventures is supporting groundbreaking companies like Flower AI.   Throughout the conversation, Mark discusses the critical need for open-source AI alternatives to proprietary models like OpenAI and Meta's LLaMA. He outlines the challenges with closed systems and highlights Mozilla's work in giving users the freedom to choose AI models directly in Firefox.   Mark provides a fascinating look into the future of AI and how open-source technologies can create trillions in economic value while maintaining privacy and inclusivity. He also sheds light on the global race for AI innovation, touching on developments from China and the impact of public AI funding.   Don't forget to like, subscribe, and hit the notification bell to stay up to date with the latest trends in AI, open-source tech, and machine learning!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI   (00:00) Introduction to Mark Surman and Mozilla's Mission (02:01) The Evolution of Mozilla: From Firefox to AI (04:40) Open-Source Movement and Mozilla's Legacy (06:58) The Role of Open-Source in AI (11:06) Advancing Federated Learning and AI Governance (14:10) Integrating AI Models into Firefox (16:28) Open vs Closed Models (22:09) Partnering with Non-Profit AI Labs for Open-Source AI (25:08) How Meta's Strategy Compares to OpenAI and Others (27:58) Global Competition in AI Innovation (31:17) The Cost of Training AI Models (33:36) Public AI Funding and the Role of Government (37:40) The Geopolitics of AI and Open Source (41:35) Mozilla's Vision for the Future of AI and Responsible Tech

TechBurst Asia Podcast
056: DEMYSTIFYING AI: Simplifying Complex Terms for Everyday Use

TechBurst Asia Podcast

Sep 24, 2024 · 58:53


In this episode of TechBurst Talks, we're taking on the challenge of demystifying AI and breaking down the overwhelming flood of buzzwords. You've heard them before—AI, ML, DL, LLM, SLM, RAG, ChatGPT, Custom GPT, NLP—the list goes on. Our goal is to make these terms accessible and easy to understand, no matter your background. I'm thrilled to be joined by Bernard Leong, who is the Host of the Analyse Asia podcast, the founder of Dorje.ai, and an expert in the field of Artificial Intelligence. What makes this episode special is that Bernard and I approach learning differently. Bernard is a theoretical learner, able to grasp complex concepts with ease. On the other hand, I'm more of a kinesthetic learner—I need to see how these terms apply in the real world, rather than just understanding the theory. Together, we'll cover the essential AI buzzwords and trends, with something for everyone. Whether you're on the cutting edge of AI or just trying to get a handle on the jargon, this episode will bring clarity to the chaos. 01:00 Welcome and Introduction to AI 01:26 Bernard Leong's Background and Expertise 04:30 Understanding Artificial Intelligence 05:10 Big Data vs. Artificial Intelligence 06:29 Machine Learning and Deep Learning Explained 10:23 Generative AI: The Game Changer 17:30 Real-World Applications of AI 21:56 AI in Everyday Life and Business 31:14 The Evolution of Wind Turbine Maintenance 32:50 ChatGPT Revolutionizes Data Cleaning 34:21 Ethical Considerations in AI 37:09 Federated Learning and Privacy 38:47 AI's Impact on Jobs and Productivity 44:49 Future Trends in AI 49:48 Challenges in AI Adoption 51:57 Introducing Dorje.ai 55:24 Final Thoughts and Advice on AI

Environment Variables
Making Testbeds for Carbon Aware Computing

Environment Variables

Sep 12, 2024 · 48:18


Host Chris Adams is joined by special guest Philipp Wiesner, a research associate and PhD student at TU Berlin, to discuss how computing systems can better align energy consumption with clean energy availability. Contributing to Project Vessim, Philipp explains how researchers are now able to model different energy consumption scenarios, from solar and wind power integration to the complexities of modern grids despite the scarcity of available testing environments. They discuss federated learning and its role in carbon-aware designs, along with challenges in tracking real energy savings. Tune in to learn about the future of carbon-aware computing and the tools being developed to help software become more sustainable.

AI in Action Podcast
E527 Daniel Feller, AI Program Lead at Rhino Health

AI in Action Podcast

Aug 12, 2024 · 17:16


Today's guest is Daniel Feller, AI Program Lead at Rhino Health. Founded in 2021, Rhino Health are activating the World's Health Data with Federated Computing, and streamlining end-to-end healthcare research and AI development. They transform healthcare AI by integrating Edge Computing and Federated Learning into a cohesive Federated Computing strategy. This innovative approach provides AI developers with swift and secure access to healthcare data, dramatically reducing setup times from months to days and ensuring data privacy across global networks. Federated Computing allows working with data across multiple sites while keeping that data at rest behind each site's firewall. This technique enables multiple entities to contribute to AI model training without disclosing raw data, safeguarding the privacy of each dataset. Rhino's Federated Computing strategy ensures data integrity and compliance while minimizing latency and maximizing efficiency, making it an essential tool for advancing global healthcare solutions. In this episode, Daniel talks about: His background and journey to Rhino Health, How Rhino Health uses federated learning to preserve data privacy, Use cases of supporting research for breast cancer and CT scans, Deploying models with Docker to ensure data control and collaboration, How Federated Computing aids fraud detection & drug discovery by ensuring data control, Prioritizing engineers for an agile, market-responsive federated learning infrastructure, How Rhino enhances federated computing for data privacy & governance

Digital Podcast
Daten teilen? Aber sicher!

Digital Podcast

May 17, 2024 · 72:19


Data would be valuable for research, but collecting data violates privacy. An unsolvable dilemma? No! There are tricks for using data while still protecting privacy: PETs - Privacy Enhancing Technologies. The podcast at a glance: (00:00:51) The dilemma: sharing or privacy (00:09:47) PET 1 - Anonymization (00:15:21) PET 2 - Differential Privacy (00:23:20) PET 3 - Synthetic Data (00:16:42) PET 4 - Trusted Execution Environment (00:25:08) PET 5 - Zero Knowledge Proof (00:26:59) PET 6 - Homomorphic Encryption (00:32:10) PET 7 - Multiparty Computation (00:35:59) PET 8 - Distributed Analytics (00:38:47) PET 9 - Federated Learning (00:45:02) Obstacles (00:49:48) Biomedicine with Catherine Jutzeler (01:00:29) Start-up with Jean-Pierre Hubaux. Links: Peter on Federated Learning: https://www.srf.ch/audio/digital-podcast/chindsgi-kuenstliche-intelligenz-und-dorfromantik?id=11969009 Peter on Homomorphic Encryption: https://www.srf.ch/audio/digital-podcast/dreckige-waesche-sichere-daten?id=11972567 Zero Knowledge Proof (video): https://www.youtube.com/watch?v=5qzNe1hk0oY Zero Knowledge Proof (article): https://www.spektrum.de/kolumne/zero-knowledge-proof-wie-man-etwas-geheimes-beweist/2140194 OECD report: https://www.oecd-ilibrary.org/docserver/bf121be4-en.pdf?expires=1714738067&id=id&accname=guest&checksum=23355B1680302D7AC1E70819326D7103 Royal Society report: https://royalsociety.org/news-resources/projects/privacy-enhancing-technologies/ SRF Geek Sofa on Discord: https://discord.gg/geeksofa
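Of the PETs walked through in this episode, differential privacy lends itself to a compact worked example: the Laplace mechanism adds noise scaled to a query's sensitivity and a privacy budget epsilon before an aggregate is released. The sketch below is illustrative only; the records and parameter values are invented:

```python
import numpy as np

def laplace_count(values, predicate, epsilon):
    """Release a differentially private count. A counting query has
    sensitivity 1 (one person changes the result by at most 1), so the
    Laplace noise scale is 1 / epsilon."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 29, 58, 41, 62, 47, 55, 38]  # invented records
noisy = laplace_count(ages, lambda a: a >= 50, epsilon=0.5)
print(f"Noisy count of people aged 50+: {noisy:.1f}")
```

Smaller epsilon values mean more noise and stronger privacy at the cost of accuracy; the other PETs covered in the episode (homomorphic encryption, multiparty computation, federated learning) attack the same sharing-versus-privacy dilemma by different means.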

Anablock Podcast
Collaborative Learning Quantum AI-Driven Framework for Healthcare

Anablock Podcast

May 17, 2024 · 3:54


This podcast discusses the application of Quantum Tensor Networks in Federated Learning for healthcare, providing a collaborative and privacy-preserving framework for analyzing medical data and improving diagnostic tools. Key Points Federated Learning (FL) offers a solution for healthcare institutions to collaborate on analyzing sensitive data while maintaining privacy and reducing data transfer costs. The integration of Quantum Tensor Networks (QTNs) in this framework shows promise in successfully training models on heterogeneous medical data across multiple healthcare institutions. The experiments conducted on different medical datasets demonstrate the superior performance of the Quantum Federated Global Model, particularly with models like TTN and MERA, showcasing higher accuracy and improved generalization compared to locally trained models.

Research Insights, a Society of Actuaries Podcast
Federated Learning for Insurance Companies

Research Insights, a Society of Actuaries Podcast

Apr 9, 2024 · 25:31


Hello Listeners! We have a great episode today. Our guest is Tian Wang, Professor at Colorado State University, and we are talking about the recently published SOA Research Institute report “Federated Learning for Insurance Companies.” Listen and learn about unlocking the potential of Privacy-Preserving Data Sharing! Landing page: https://www.soa.org/resources/research-reports/2024/federated-learning-insurance-companies/ Send us your feedback at ResearchInsights@soa.org

Infinite Machine Learning
LLM Data Frontiers

Infinite Machine Learning

Jan 22, 2024 · 33:45


Curtis Northcutt is the cofounder and CEO of Cleanlab, a data curation platform for LLMs. They have raised $30M in funding from Bain Capital Ventures, Menlo, Databricks, and TQ. He was previously the cofounder and CTO of ChipBrain. He has a PhD in Computer Science from MIT.(00:07) Data Curation in the Context of LLMs(01:14) Connection between Language Models and Computer Science(03:14) Importance of Data Curation for LLMs(04:06) Challenges in Data Curation for LLMs(06:09) Confident Learning and its Concept(09:42) CleanLab and its Role(12:42) Role of Open Source Datasets and Tooling(15:08) Balancing Data and Privacy in Regulated Industries(17:25) Feasibility of Federated Learning(20:35) Decentralized Compute and Aggregating Compute Clusters(25:19) Determining Model Size for Data Representation(27:09) Advice for ML Engineers in Handling Data Curation(30:20) Rapid Fire RoundCurtis's favorite book: The Bible (in the context of marketing)--------Where to find Prateek Joshi: Newsletter: https://prateekjoshi.substack.com Website: https://prateekj.com LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 Twitter: https://twitter.com/prateekvjoshi 

Simply Trade
Supply Chain Superpowers through Federated Learning with Alan Bersin

Simply Trade

Jan 5, 2024 · 41:02


In this episode of Simply Trade, Alan Bersin continues his discussion from Part 1 on the evolution of supply chain security and trade compliance. Major topics covered include moving from federated search to federated learning, building supply chain visibility, engaging the C-suite, and strengthening partnerships between government and the private sector. Key Discussion Points: - Alan Bersin discussed the evolution from the federated search model used by CBP to the new paradigm of federated learning being developed by Altana. Federated learning allows risk signals to be shared without commingling private data. - An example was provided of how federated search could be used with a company like Nike to validate supply chain information on a shipment of branded goods. - The importance of supply chain visibility beyond the first tier of suppliers was highlighted, especially with regulations like the Uyghur Forced Labor Prevention Act. - Actions the private sector can take to engage C-suite executives on compliance transformation included assessing supply chain visibility and available technologies. - Companies should monitor indicators like increases in CBP inquiries (CF28s) and enforcement actions (CF29s) that may point to gaps requiring attention. - Geopolitical variables like US-China tensions and potential conflict over Taiwan could introduce major disruptions, emphasizing the need for agile, data-driven supply chain risk management. - Partnership between government and the private sector was stressed as critical to effectively address compliance challenges through a federated learning approach. Enjoy the show! Host: Andy Shiles: https://www.linkedin.com/in/andyshiles/ Host/Producer: Lalo Solorzano: https://www.linkedin.com/in/lalosolorzano/ Co-Producer: Mara Marquez: https://www.linkedin.com/in/mara-marquez-a00a111a8/ Show references: Global Training Center - www.GlobalTrainingCenter.com Simply Trade Podcast - twitter.com/SimplyTradePod Alan Bersin - https://www.linkedin.com/in/alan-bersin-315523177/ Altana AI - https://altana.ai/ Contact SimplyTrade@GlobalTrainingCenter.com or message @SimplyTradePod for: Advertising and sponsoring on Simply Trade; Requests to be on the show as a guest; Suggestions for any topics you would like to hear about. Simply Trade is not a law firm or an advisor. The topics and discussions conducted by Simply Trade hosts and guests should not be considered, and are not intended to substitute for, legal advice. You should seek appropriate counsel for your own situation. These conversations and information are directed towards listeners in the United States for informational, educational, and entertainment purposes only and should not be a substitute for legal advice. No listener or viewer of this podcast should act or refrain from acting on the basis of information on this podcast without first seeking legal advice from counsel. Information on this podcast may not be up to date depending on the time of publishing and the time of viewership. The content of this posting is provided as is; no representations are made that the content is error-free. The views expressed in or through this podcast are those of the individual speakers, not those of their respective employers or Global Training Center as a whole. All liability with respect to actions taken or not taken based on the contents of this podcast is hereby expressly disclaimed.

The Brave Marketer
Privacy Protecting AI and Building Consumer Trust

The Brave Marketer

Nov 22, 2023 · 23:09


Kleomenis Katevas, Machine Learning Researcher at Brave Software, discusses how we can build trust in AI with the general public by making data as safe and secure as possible. He also unpacks some of the myths the general public holds about AI, how to debunk these myths, and tangible steps companies can take to reduce privacy concerns with AI. Key Takeaways: How Twitter (now known as X) can be a great resource for learning more about AI, along with specific accounts and thought leaders he's following in the space; Exciting ways that healthcare will be vastly improved through artificial intelligence via customized treatment plans and timely diagnosis; Why Brave is taking a privacy-first approach to its AI product suite. Guest Bio: Kleomenis Katevas is a Machine Learning Researcher at Brave Software, where he's focused on designing and building privacy-preserving, ML-based systems. His research interests lie in the areas of Privacy-Preserving Machine Learning, Federated Learning, Mobile Systems, and Human-Computer Interaction. ---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software - makers of the privacy-respecting Brave browser and Search engine, now powering AI with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte

The Machine Learning Podcast
Building Better AI While Preserving User Privacy With TripleBlind

The Machine Learning Podcast

Nov 22, 2023 · 46:54


Summary Machine learning and generative AI systems have produced truly impressive capabilities. Unfortunately, many of these applications are not designed with the privacy of end-users in mind. TripleBlind is a platform focused on embedding privacy preserving techniques in the machine learning process to produce more user-friendly AI products. In this episode Gharib Gharibi explains how the current generation of applications can be susceptible to leaking user data and how to counteract those trends. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Gharib Gharibi about the challenges of bias and data privacy in generative AI models Interview Introduction How did you get involved in machine learning? Generative AI has been gaining a lot of attention and speculation about its impact. What are some of the risks that these capabilities pose? What are the main contributing factors to their existing shortcomings? What are some of the subtle ways that bias in the source data can manifest? In addition to inaccurate results, there is also a question of how user interactions might be re-purposed and potential impacts on data and personal privacy. What are the main sources of risk? With the massive attention that generative AI has created and the perspectives that are being shaped by it, how do you see that impacting the general perception of other implementations of AI/ML? How can ML practitioners improve and convey the trustworthiness of their models to end users? What are the risks for the industry if generative models fall out of favor with the public? How does your work at Tripleblind help to encourage a conscientious approach to AI? What are the most interesting, innovative, or unexpected ways that you have seen data privacy addressed in AI applications? What are the most interesting, unexpected, or challenging lessons that you have learned while working on privacy in AI? When is TripleBlind the wrong choice? What do you have planned for the future of TripleBlind? Contact Info LinkedIn (https://www.linkedin.com/in/ggharibi/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers. 
Links TripleBlind (https://tripleblind.ai/) ImageNet (https://scholar.google.com/citations?view_op=view_citation&hl=en&user=JicYPdAAAAAJ&citation_for_view=JicYPdAAAAAJ:VN7nJs4JPk0C) Geoffrey Hinton Paper BERT (https://en.wikipedia.org/wiki/BERT_(language_model)) language model Generative AI (https://en.wikipedia.org/wiki/Generative_artificial_intelligence) GPT == Generative Pre-trained Transformer (https://en.wikipedia.org/wiki/Generative_pre-trained_transformer) HIPAA Safe Harbor Rules (https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html) Federated Learning (https://en.wikipedia.org/wiki/Federated_learning) Differential Privacy (https://en.wikipedia.org/wiki/Differential_privacy) Homomorphic Encryption (https://en.wikipedia.org/wiki/Homomorphic_encryption) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)

Techshaw
Machine Learning for Mobile Engineers

Techshaw

Oct 14, 2023 · 34:26


MLKit solutions => https://developers.google.com/ml-kit MediaPipe => https://developers.google.com/mediapipe/solutions MediaPipe (No Code) Studio => https://developers.google.com/mediapipe/solutions/studio LLM on device => https://youtu.be/pNWNMPi0Mvk On device training => https://www.tensorflow.org/lite/examples/on_device_training/overview Federated Learning => https://blog.research.google/2017/04/federated-learning-collaborative.html Image/Video moderation SaaS => https://sightengine.com/ --- Send in a voice message: https://podcasters.spotify.com/pod/show/techshaw/message

The Machine Learning Podcast
Applying Federated Machine Learning To Sensitive Healthcare Data At Rhino Health

The Machine Learning Podcast

Sep 11, 2023 · 49:54


Summary A core challenge of machine learning systems is getting access to quality data. This often means centralizing information in a single system, but that is impractical in highly regulated industries, such as healthcare. To address this hurdle, Rhino Health is building a platform for federated learning on health data, so that everyone can maintain data privacy while benefiting from AI capabilities. In this episode Ittai Dayan explains the barriers to ML in healthcare and how they have designed the Rhino platform to overcome them. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Ittai Dayan about using federated learning at Rhino Health to bring AI capabilities to the tightly regulated healthcare industry Interview Introduction How did you get involved in machine learning? Can you describe what Rhino Health is and the story behind it? What is federated learning and what are the trade-offs that it introduces? What are the benefits to healthcare and pharmalogical organizations from using federated learning? What are some of the challenges that you face in validating that patient data is properly de-identified in the federated models? Can you describe what the Rhino Health platform offers and how it is implemented? How have the design and goals of the system changed since you started working on it? What are the technological capabilities that are needed for an organization to be able to start using Rhino Health to gain insights into their patient and clinical data? How have you approached the design of your product to reduce the effort to onboard new customers and solutions? What are some examples of the types of automation that you are able to provide to your customers? (e.g. medical diagnosis, radiology review, health outcome predictions, etc.) What are the ethical and regulatory challenges that you have had to address in the development of your platform? What are the most interesting, innovative, or unexpected ways that you have seen Rhino Health used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Rhino Health? When is Rhino Health the wrong choice? What do you have planned for the future of Rhino Health? Contact Info LinkedIn (https://www.linkedin.com/in/ittai-dayan/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. 
To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers Links Rhino Health (https://www.rhinohealth.com/) Federated Learning (https://en.wikipedia.org/wiki/Federated_learning) Nvidia Clara (https://www.nvidia.com/en-us/clara/) Nvidia DGX (https://www.nvidia.com/en-us/data-center/dgx-platform/) Melloddy (https://www.melloddy.eu/) Flair NLP (https://github.com/flairNLP/flair) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)

Talking HealthTech
372 - Federated learning and the future of healthcare. Greg Miner, Guy Tsafnat - Evidentli

Talking HealthTech

Aug 31, 2023 · 26:55


Infinite Machine Learning
Prateek talks about Federated Learning

Infinite Machine Learning

Jul 20, 2023 · 18:40


In this episode, the host Prateek Joshi talks about why we need Federated Learning, how it works, and where it's used in the real world. --------Where to find Prateek Joshi: Newsletter: https://prateekjoshi.substack.com Website: https://prateekj.com LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 Twitter: https://twitter.com/prateekvjoshi 

The Machine Learning Podcast
The Role Of Model Development In Machine Learning Systems

The Machine Learning Podcast

May 29, 2023 · 46:41


Summary The focus of machine learning projects has long been the model that is built in the process. As AI powered applications grow in popularity and power, the model is just the beginning. In this episode Josh Tobin shares his experience from his time as a machine learning researcher up to his current work as a founder at Gantry, and the shift in focus from model development to machine learning systems. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Josh Tobin about the state of industry best practices for designing and building ML models Interview Introduction How did you get involved in machine learning? Can you start by describing what a "traditional" process for building a model looks like? What are the forces that shaped those "best practices"? What are some of the practices that are still necessary/useful and what is becoming outdated? What are the changes in the ecosystem (tooling, research, communal knowledge, etc.) that are forcing teams to reconsider how they think about modeling? What are the most critical practices/capabilities for teams who are building services powered by ML/AI? What systems do they need to support them in those efforts? Can you describe what you are building at Gantry and how it aids in the process of developing/deploying/maintaining models with "modern" workflows? What are the most challenging aspects of building a platform that supports ML teams in their workflows? What are the most interesting, innovative, or unexpected ways that you have seen teams approach model development/validation? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Gantry? When is Gantry the wrong choice? What are some of the resources that you find most helpful to stay apprised of how modeling and ML practices are evolving? Contact Info LinkedIn (https://www.linkedin.com/in/josh-tobin-4b3b10a9/) Website (http://josh-tobin.com/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. 
To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers Links Gantry (https://gantry.io/) Full Stack Deep Learning (https://fullstackdeeplearning.com/) OpenAI (https://openai.com/) Kaggle (https://www.kaggle.com/) NeurIPS == Neural Information Processing Systems Conference (https://nips.cc/) Caffe (https://caffe.berkeleyvision.org/) Theano (https://github.com/Theano/Theano) Deep Learning (https://en.wikipedia.org/wiki/Deep_learning) Regression Model (https://www.analyticsvidhya.com/blog/2022/01/different-types-of-regression-models/) scikit-learn (https://scikit-learn.org/) Large Language Model (https://en.wikipedia.org/wiki/Large_language_model) Foundation Models (https://en.wikipedia.org/wiki/Foundation_models) Cohere (https://cohere.com/) Federated Learning (https://en.wikipedia.org/wiki/Federated_learning) Feature Store (https://www.featurestore.org/) dbt (https://www.getdbt.com/) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
AI Today Podcast: AI Glossary Series – CPU, GPU, TPU, and Federated Learning

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

May 5, 2023 · 11:24


In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the terms CPU, GPU, TPU, and Federated Learning, explain how these terms relate to AI and why it's important to know about them. Show Notes: FREE Intro to CPMAI mini course CPMAI Training and Certification AI Glossary Glossary Series: Artificial Intelligence AI Glossary Series – Machine Learning, Algorithm, Model Glossary Series: (Artificial) Neural Networks, Node (Neuron), Layer Glossary Series: Natural Language Processing (NLP), NLU, NLG, Speech-to-Text, TTS, Speech Recognition Continue reading AI Today Podcast: AI Glossary Series – CPU, GPU, TPU, and Federated Learning at AI & Data Today.

The Inside View
Breandan Considine on Neuro Symbolic AI, Coding AIs and AI Timelines

The Inside View

May 4, 2023 · 105:04


Breandan Considine is a PhD student at the School of Computer Science at McGill University, under the supervision of Jin Guo and Xujie Si. There, he is building tools to help developers locate and reason about software artifacts, by learning to read and write code. I met Breandan while doing my "scale is all you need" series of interviews at Mila, where he surprised me by sitting down for two hours to discuss AGI timelines, augmenting developers with AI, and neuro symbolic AI. A fun fact that many noticed while watching the "Scale Is All You Need change my mind" video is that he kept his biking hat on most of the time during the interview, since he was close to leaving when we talked. All of the conversation below is real, but note that since I was not prepared to talk for so long, my camera ran out of battery and some of the video footage on Youtube is actually AI generated (Breandan consented to this). Disclaimer: when talking to people in this podcast I try to sometimes invite guests who share different inside views about existential risk from AI so that everyone in the AI community can talk to each other more and coordinate more effectively. Breandan is overall much more optimistic about the potential risks from AI than a lot of people working in AI Alignment research, but I think he is quite articulate in his position, even though I disagree with many of his assumptions. I believe his point of view is important to understand what software engineers and symbolic reasoning researchers think of deep learning progress. Transcript: https://theinsideview.ai/breandan Youtube: https://youtu.be/Bo6jO7MIsIU Host: https://twitter.com/MichaelTrazzi Breandan: https://twitter.com/breandan OUTLINE (00:00) Introduction (01:16) Do We Need Symbolic Reasoning to Get To AGI? (05:41) Merging Symbolic Reasoning & Deep Learning for Powerful AI Systems (10:57) Blending Symbolic Reasoning & Machine Learning Elegantly (15:15) Enhancing Abstractions & Safety in Machine Learning (21:28) AlphaTensor's Applicability May Be Overstated (24:31) AI Safety, Alignment & Encoding Human Values in Code (29:56) Code Research: Moral, Information & Software Aspects (34:17) Automating Programming & Self-Improving AI (36:25) Debunking AI "Monsters" & World Domination Complexities (43:22) Neural Networks: Limits, Scaling Laws & Computation Challenges (59:54) Real-world Software Development vs. Competitive Programming (1:02:59) Measuring Programmer Productivity & Evaluating AI-generated Code (1:06:09) Unintended Consequences, Reward Misspecification & AI-Human Symbiosis (1:16:59) AI's Superior Intelligence: Impact, Self-Improvement & Turing Test Predictions (1:23:52) AI Scaling, Optimization Trade-offs & Economic Viability (1:29:02) Metrics, Misspecifications & AI's Rich Task Diversity (1:30:48) Federated Learning & AI Agent Speed Comparisons (1:32:56) AI Timelines, Regulation & Self-Regulating Systems

The Shifting Privacy Left Podcast
S2E8: Leveraging Federated Learning for Input Privacy with Victor Platt

The Shifting Privacy Left Podcast

Feb 28, 2023 · 41:20 · Transcription Available


Victor Platt is a Senior AI Security and Privacy Strategist who previously served as Head of Security and Privacy for the privacy tech company Integrate.ai. Victor was formerly a founding member of the Risk AI Team with Omnia AI, Deloitte's artificial intelligence practice in Canada. He joins today to discuss privacy enhancing technologies (PETs) that are shaping industries around the world, with a focus on federated learning.---------Thank you to our sponsor, Privado, the developer-friendly privacy platform---------Victor views PETs as functional requirements and says they shouldn't be buried in your design document as nonfunctional obligations. In his work, he has found key gaps where organizations were only doing "security for security's sake." Rather, he believes organizations should be thinking about it at the forefront. Not only that, we should all be getting excited about it because we all have a stake in privacy. With federated learning, you have the tools available to train ML models on large data sets with precision at scale without risking user privacy. In this conversation, Victor demystifies what federated learning is, describes the two different types: at the edge and across data silos, and explains how it works and how it compares to traditional machine learning. We deep dive into how an organization knows when to use federated learning, with specific advice for developers and data scientists as they implement it into their organizations. Topics Covered: What 'federated learning' is and how it compares to traditional machine learning; when an organization should use vertical federated learning vs. horizontal federated learning, or instead a hybrid version; a key challenge in 'transfer learning': knowing whether two data sets are related to each other, and techniques to overcome this, like 'private set intersection'; how the future of technology will be underpinned by a 'constellation of PETs'; the distinction between 'input privacy' vs. 'output privacy'; different kinds of federated learning with use case examples; where the responsibility for adding PETs lies within an organization; the key barriers to adopting federated learning and other PETs within different industries and use cases; and how to move the needle on data privacy when it comes to legislation and regulation. Resources Mentioned: Take this outstanding, free class from OpenMined: Our Privacy Opportunity. Guest Info: Follow Victor on LinkedIn. Follow the SPL Show: Follow us on Twitter; Follow us on LinkedIn; Check out our website. Privado.ai: Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans. Shifting Privacy Left Media: Where privacy engineers gather, share, & learn. Buzzsprout - Launch your podcast. Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you. Copyright © 2022 - 2024 Principled LLC. All rights reserved.
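To make the horizontal-versus-vertical distinction from this episode concrete: in vertical federated learning, two organizations hold different features about overlapping individuals, so before any training they must discover which records they share, which is the job of private set intersection. The toy sketch below only illustrates the record-alignment idea using salted hashes; it is not a secure PSI protocol (real systems use cryptographic constructions), and all party names and data are hypothetical:

```python
import hashlib

def blind(user_id, salt):
    """Pseudonymize an identifier with a shared salt. Didactic only: a
    hash of a low-entropy ID can be brute-forced, so real deployments
    rely on cryptographic PSI protocols instead."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()

SHARED_SALT = "jointly-agreed-secret"  # hypothetical value

# Party A: a bank holding transaction features for its customers.
bank = {"alice": [1200.0, 3], "bob": [90.0, 1], "carol": [560.0, 7]}
# Party B: a retailer holding purchase features for its customers.
retailer = {"bob": [4, 0.2], "carol": [11, 0.9], "dave": [2, 0.1]}

blinded_bank = {blind(u, SHARED_SALT): f for u, f in bank.items()}
blinded_retailer = {blind(u, SHARED_SALT): f for u, f in retailer.items()}

# The intersection of blinded IDs tells the parties which rows line up
# for vertical training, without exchanging raw customer lists.
overlap = blinded_bank.keys() & blinded_retailer.keys()
aligned_rows = [(blinded_bank[h], blinded_retailer[h]) for h in overlap]
print(f"{len(aligned_rows)} overlapping customers available for vertical FL")
```

Horizontal federated learning, by contrast, skips this alignment step entirely: every silo already holds the same features for different individuals, and only model updates need to be exchanged.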

Infinite Machine Learning
Federated Learning, Healthcare AI | Ittai Dayan, cofounder of Rhino Health

Infinite Machine Learning

Feb 20, 2023 · 32:22


Ittai Dayan is the cofounder and CEO of Rhino Health. It's a platform powered by federated learning and edge compute technology that allows the scaling of data between different institutions without sharing data or compromising privacy. Prior to that, he has held roles at Harvard Medical School, Mass General Brigham, BCG, and Israeli Defense Forces. In this episode, we cover a range of topics including: - What is federated learning - How federated learning is used in healthcare - How is AI being used in medicine and healthcare - Lifecycle of healthcare AI - Tackling algorithmic bias  Ittai's favorite book: The Story of San Michele by Axel Munthe--------Where to find Prateek Joshi: Newsletter: https://prateekjoshi.substack.com Website: http://prateekj.com LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 Twitter: https://twitter.com/prateekvjoshi 

Masters of Privacy
Sunny Kang: Machine Learning meets Privacy Enhancing Technologies

Masters of Privacy

Feb 17, 2023 · 22:02


Sunny Seon Kang is Global Privacy Counsel at VISA, specializing in AI Governance and Privacy Enhancing Technologies. She is well versed in comparative privacy law across the US, the EU and the UK. She has studied at Stanford and Berkeley in the US, as well as UCL in London, and is a member of the New York Bar. With Sunny we are discussing a highly complex but very exciting topic: Privacy-Preserving Machine Learning, as well as a more generic understanding of Privacy Enhancing Technologies.  References: Sunny Seon Kang on LinkedIn US Algorithmic Accountability Act (Proposal) EU AI Regulation (Proposal)

Masters of Privacy (ES)
Joaquín Muñoz: la protección de datos ante el aprendizaje federado y la computación cuántica

Masters of Privacy (ES)

Dec 2, 2022 · 29:51


Joaquín Muñoz is a partner at Bird & Bird and head of the department that houses the data protection practice in the Madrid office of this international firm. Joaquín has extensive experience advising highly technology-driven companies on everything related to data protection, information security, e-commerce, and intellectual property. We will cover, in this order: federated learning ("Federated Learning") as an increasingly popular approach for training and applying machine learning algorithms while minimizing processing risks (within the framework of Privacy Enhancing Technologies); and the challenges quantum computing poses for information security and for data protection compliance programs. References: Privacy Enhancing Technologies and Federated Learning (Wikipedia); Worse than Y2K? Quantum computing and the end of privacy (Forbes); New methods can protect data privacy against quantum computing (NJIT); Joaquín Muñoz at Bird & Bird; Joaquín Muñoz on LinkedIn

10X Success Hacks for Startups, Innovations and Ventures (consulting and training tips)
Disrupting Healthcare In The US Market! | Akshay Sharma

10X Success Hacks for Startups, Innovations and Ventures (consulting and training tips)

Oct 19, 2022 · 19:50


Healthcare is up for disruption, and decentralized technologies are […]. In today's episode of Pitch Cafe, I have one of my old friends, Akshay Sharma, CEO of hotg.ai, with me!

MaML - Medicine & Machine Learning Podcast
Ittai Dayan, MD - Data Privacy & Federated Learning with Rhino Health

MaML - Medicine & Machine Learning Podcast

Play Episode Listen Later Oct 15, 2022 49:36


Ittai Dayan is the co-founder and CEO of Rhino Health, a distributed computing platform leveraging privacy-preserving federated learning. The platform allows medical researchers and healthcare AI developers to seamlessly access diverse and disparate datasets and use them to create better AI algorithms. Host: David Wu / Twitter: @davidjhwu Producer: Aaron Schumacher / Twitter: @a_schu95 Artwork & Video: Saurin Kantesaria Music: Caligula - Windows96. Used with Artist Permission.
00:56 How did you come to the intersection of medicine and artificial intelligence?
06:15 What type of medicine did you start out studying?
11:35 Could you tell us the story behind Rhino Health?
14:30 What is federated learning?
21:00 Common use cases for Rhino Health?
26:45 Relationship between generalizability and accuracy when using federated learning?
28:15 What were your biggest challenges in creating Rhino Health?
32:40 An example of using Rhino Health?
37:40 How does Rhino Health integrate with EHRs?
38:15 What are your next steps for Rhino Health?
43:10 What do you think the future of AI in healthcare will look like?
48:08 What gives your life meaning and what are your greatest fears?

Lambda3 Podcast
Lambda3 Podcast 310 – VPN e Privacidade – Parte 2

Lambda3 Podcast

Play Episode Listen Later Jul 29, 2022 93:17


Este episódio do Podcast traz a segunda parte da conversa necessária sobre privacidade online com o lambda Giovanni Bassi e os convidados André Valenti, Guilherme Siquinelli e William Grasel, desta vez trazendo o assunto para o ambiente web, usuários, dados e mais.  Entre no nosso grupo do Telegram e compartilhe seus comentários com a gente: https://lb3.io/telegram Feed do podcast: www.lambda3.com.br/feed/podcast Feed do podcast somente com episódios técnicos: www.lambda3.com.br/feed/podcast-tecnico Feed do podcast somente com episódios não técnicos: www.lambda3.com.br/feed/podcast-nao-tecnico Lambda3 · #310 - VPN e Privacidade - Parte 2 Pauta: Técnicas de tracking de usuários na Web Google quer matar cookies Primeira tentativa: Federated Learning of Cohorts (FLoC). Segunda tentativa: Topics API Rastreamento sem consentimento do usuário: Fingerprinting CDN e privacidade idle detection, api do chrome que gerou conversa por permitir identificar quando o usuário está na frente do computador ou não Como proteger os dados dos seus usuários no navegador Evite dados sensíveis de usuário no front APIs de Criptografia na Web Passwordless Web Auth API login e/ou 2FA autenticação biométrica yubikeys futuro: passkeys Novos recursos de privacidade nos navegadores (Brave, Firefox, Chrome, Edge) Participantes: André Willik Valenti - @awvalenti Giovanni Bassi - @giovannibassi Guilherme Siquinelli - @guiseek William Grasel - @willgmbr Links: Lambda3 Podcast 298 - VPN e Privacidade - Parte 1  Lambda3 Podcast 97 - Privacidade Edição: Compasso Coolab Créditos das músicas usadas neste programa: Music by Kevin MacLeod (incompetech.com) licensed under Creative Commons: By Attribution 3.0 - creativecommons.org/licenses/by/3.0

Town Hall Seattle Science Series
186. Blaise Aguera y Arcas and Melanie Mitchell with Lili Cheng: How Close Are We to AI?

Town Hall Seattle Science Series

Play Episode Listen Later Jul 29, 2022 84:57


Thu 7/14, 2022, 7:30pm: Blaise Agüera y Arcas and Melanie Mitchell with Lili Cheng, "How Close Are We to AI?" Books: Ubi Sunt, by Blaise Agüera y Arcas; Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell. Artificial Intelligence (AI), a term first coined at a Dartmouth workshop in 1956, has seen several boom and bust cycles over the last 66 years. Is the current boom different? The most exciting advance in the field since 2017 has been the development of "Large Language Models," giant neural networks trained on massive databases of text on the web. Still highly experimental, Large Language Models haven't yet been deployed at scale in any consumer product — smart/voice assistants like Alexa, Siri, Cortana, or the Google Assistant are still based on earlier, more scripted approaches. Large Language Models do far better at routine tasks involving language processing than their predecessors. Although not always reliable, they can give a strong impression of really understanding us and holding up their end of an open-ended dialog. Unlike previous forms of AI, which could only perform specific jobs involving rote perception, classification, or judgment, Large Language Models seem to be capable of a lot more — including possibly passing the Turing Test, named after computing pioneer Alan Turing's thought experiment that posits that when an AI in a chat can't be distinguished reliably from a human, it will have achieved general intelligence. But can Large Language Models really understand anything, or are they just mimicking the superficial "form" of language? What can we say about our progress toward creating real intelligence in a machine? What do "intelligence" and "understanding" even mean? Blaise Agüera y Arcas, a Fellow at Google Research, and Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute, take on these thorny questions in a wide-ranging presentation and discussion. The discussion will be moderated by Lili Cheng, Corporate Vice President of the Microsoft AI and Research division. Blaise Agüera y Arcas is a VP and Fellow at Google Research, where he leads an organization working on basic research and new products in Artificial Intelligence. His team focuses on the intersection of machine learning and devices, developing AI that augments humanity while preserving privacy. One of the team's technical contributions is Federated Learning, an approach to training neural networks in a distributed setting that avoids sending user data off-device. Blaise also founded Google's Artists and Machine Intelligence program and has been an active participant in cross-disciplinary dialogs about AI and ethics, fairness and bias, policy, and risk. He has given TED talks on Seadragon and Photosynth (2007, 2012), Bing Maps (2010), and machine creativity (2016). In 2008, he was awarded MIT's TR35 prize. Melanie Mitchell is the Davis Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems.
Her book Complexity: A Guided Tour won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans. Lili Cheng is a Corporate Vice President of the Microsoft AI and Research division, responsible for the AI developer platform, which includes Cognitive Services and Bot Framework. Prior to Microsoft, Lili worked in Apple Computer's Advanced Technology Group on the user interface research team, where she focused on QuickTime Conferencing and QuickTime VR. Lili is also a registered architect, having worked in Tokyo and Los Angeles for Nihon Sekkei and Skidmore Owings and Merrill on commercial urban design and large-scale building projects. She has also taught at New York University and Harvard University. Presented by Town Hall Seattle.

Tech Hive: The Tech Leaders Podcast
#51 Arshad Farhad, CTO in Healthcare and Lifesciences in EMEA for Dell

Tech Hive: The Tech Leaders Podcast

Play Episode Listen Later Jul 28, 2022 41:38


From virtual appointments, to artificial intelligence, to data management and fighting cyber-attacks, Dell's Arshad Farhad helps healthcare providers see how technology can support their work. Arshad is Dell Technologies' Chief Technology Officer in Healthcare and Life Sciences for Europe, the Middle East and Africa. He's also working towards his doctorate in Federated Learning for AI (using ML and data science) to improve patient care using wearable and Internet of Medical Things sensors. In this episode, Arshad tells Gareth why technology is the future of healthcare and how medicine can be more effective if it is data-driven.
Why Arshad is passionate about healthcare and technology (2.05)
Arshad's PhD journey and challenges (3.17)
Healthcare in the next 5 to 10 years (4.35)
Ethical dilemmas (8.35)
Robotic surgery (9.53)
Wearable technology in healthcare (11.30)
How UK healthcare technology compares to other countries (15.41)
The last 12 months in healthcare technology (19.17)
Smart hospitals (21.00)
What problems the healthcare sector is facing (22.28)
How Dell is helping to address these challenges (25.09)
Edge-based healthcare (25.40)
AI healthcare (26.50)
Data management (27.15)
Data security (29.52)
Cloud technology (31.30)
Arshad's role as Chief Technology Officer (33.27)
What Arshad believes makes a good leader (36.27)
Leadership in the pandemic (37.34)
Who inspires Arshad (38.27)

The CyberWire
Cyber phases of Russia's hybrid war seem mostly espionage. Belgium accuses China of spying. LockBit ransomware spreads. And Micodus GPS tracker vulnerabilities are real and unpatched.

The CyberWire

Play Episode Listen Later Jul 20, 2022 31:47


What's Russia up to in cyberspace, nowadays? Belgium accuses China of cyberespionage. LockBit ransomware spreading through compromised servers. Malek Ben Salem from Accenture explains the Privacy Enhancing Technologies of Federated Learning with Differential Privacy guarantees. Rick Howard speaks with Rob Gurzeev from Cycognito on Data Exploitation. And Micodus GPS tracker vulnerabilities should motivate the user to turn the thing off. For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/11/136 Selected reading. Continued cyber activity in Eastern Europe observed by TAG (Google) Declaration by the High Representative on behalf of the European Union on malicious cyber activities conducted by hackers and hacker groups in the context of Russia's aggression against Ukraine (European Council) China: Declaration by the Minister for Foreign Affairs on behalf of the Belgian Government urging Chinese authorities to take action against malicious cyber activities undertaken by Chinese actors (Federal Public Service Foreign Affairs)  Déclaration du porte-parole de l'Ambassade de Chine en Belgique au sujet de la déclaration du gouvernement belge sur les cyberattaques (Embassy of the People's Republic of China in the Kingdom of Belgium) LockBit: Ransomware Puts Servers in the Crosshairs (Broadcom Software Blogs | Threat Intelligence) Critical Vulnerabilities Discovered in Popular Automotive GPS Tracking Device (MiCODUS MV720) (BitSight) CISA released Security Advisory on MiCODUS MV720 Global Positioning System (GPS) Tracker (CISA)
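The "federated learning with differential privacy guarantees" pairing mentioned in this briefing is usually implemented by bounding each client's contribution and adding calibrated noise before averaging. A minimal, illustrative Python sketch of that clip-and-noise step follows; the clip norm and noise multiplier are arbitrary placeholder values, not anything described in the segment.

```python
# Illustrative sketch: differentially private aggregation of client model updates.
# Clip each update to a fixed L2 norm, add Gaussian noise, then average.
# Clip norm and noise multiplier are arbitrary example values.
import numpy as np

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=0.8, rng=None):
    rng = rng or np.random.default_rng()
    clipped = []
    for u in updates:
        scale = min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
        clipped.append(u * scale)                      # bound each client's influence
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(updates)              # noisy mean of the updates

# Example: three hypothetical client updates for a 4-parameter model.
rng = np.random.default_rng(1)
updates = [rng.normal(size=4) for _ in range(3)]
print(dp_aggregate(updates, rng=rng))
```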

Ardan Labs Podcast
Ethical AI, Endangered Languages, & NLP with Daniel Whitenack

Ardan Labs Podcast

Play Episode Listen Later Jul 6, 2022 92:16


Daniel Whitenack is a co-host of the Practical AI podcast and a data scientist with SIL International. In one of our more technical episodes, we hear about Daniel's journey from computational physics in college to using artificial intelligence for language processing. Tune in for a conversation on ethical AI, endangered languages, real-time translation, and more!
Connect with Daniel:
Twitter: https://twitter.com/dwhitena
Website: https://datadan.io/
Email: dan_whitenack@sil.org
Practical AI podcast: https://changelog.com/practicalai
Gopher Slack Channel: https://invite.slack.golangbridge.org/
Mentioned in today's episode:
SIL International: https://sil.org
Multilingual AI: https://ai.sil.org
Federated Learning: https://en.m.wikipedia.org/wiki/Federated_learning
Microsoft Flight Simulator: https://en.wikipedia.org/wiki/Microsoft_Flight_Simulator
Babel Fish: https://en.wikipedia.org/wiki/Babel_Fish_(website)
Data Science with Go (GopherCon 2016): https://youtu.be/D5tDubyXLrQ
Pachyderm: https://www.pachyderm.com/
Nvidia Grace Hopper: https://www.nvidia.com/en-us/data-center/grace-cpu/
Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs 

Your AI Injection
Federated AI: Lessons From the War Theater with Dr. David Bauer

Your AI Injection

Play Episode Play 30 sec Highlight Listen Later Jun 23, 2022 49:50 Transcription Available


What do data centers loaded on tractor trailers with self destruct buttons in war zones have to do with ML models built from networks of hospital data? Join us in this fascinating episode as Deep speaks with Dr. David Bauer, Co-founder and CTO of BOSS AI. Dr. Bauer shares colorful stories and discusses how his experience leading big data and distributed learning initiatives for the U.S. Intelligence community inspired him to take hard lessons learned on the battlefield into the commercial sector and start BOSS AI. Deep and Dr. Bauer dive into Federated AI, a type of AI that allows machine learning models to be built from data across multiple disparate systems while leaving the data in each system encrypted and decentralized. Federated AI encourages businesses to leverage their data to extract insights, alleviating the need for expensive data lakes, while also avoiding the risk,  hassle and increased latency involved in centralizing data. 

Root Causes: A PKI and Security Podcast
Root Causes 228: Getting the FLoC out of Here

Root Causes: A PKI and Security Podcast

Play Episode Listen Later May 31, 2022 14:21


In a follow-up to our recent episode on cookies and browser tracking, we discuss Google's Federated Learning of Cohorts (FLoC) initiative, why it failed as a response, and other directions the industry is looking in.

This Week in Health IT
American College of Radiology & Rhino Health – Scaling AI Federated Learning via NVIDIA AI Enterprise

This Week in Health IT

Play Episode Listen Later Apr 6, 2022 23:12 Transcription Available


April 6, 2022: https://www.linkedin.com/in/ittai-dayan-md-89447167/ (Ittai Dayan, MD), Cofounder and CEO of https://www.rhinohealth.com/ (Rhino Health), and https://www.linkedin.com/in/mtilkin/ (Mike Tilkin), CIO at https://www.acr.org/ (American College of Radiology), join http://linkedin.com/in/IntegratorBrad (Brad Genereaux), Medical Imaging & Smart Hospitals Alliance Manager for https://www.nvidia.com/en-us/ (NVIDIA), to discuss federated AI learning models. They deep dive into the NVIDIA AI Enterprise on VMware vSphere with https://tanzu.vmware.com/ (VMware Tanzu) solution, combined with ACR Connect powered by the Rhino Platform. What kind of work is the American College of Radiology doing in this area? With data connections to the various member organizations, how does this allow the community to work together on AI problems? Why is it important to move the compute towards the edge? What does it take to stand up a model like this and unlock the power of AI in the enterprise? Where would a CTO or CIO start this process?
Key Points:
00:00:00 - Intro
00:03:45 - The problem space right now is providing education for tools that are going to help healthcare folks validate algorithms
00:05:55 - What we've done with NVIDIA certified systems and AI Enterprise with VMware as our virtualization stack is create an ecosystem where we can build all of our applications on one environment
https://www.rhinohealth.com/ (Rhino Health)
https://www.acrdsi.org/ (American College of Radiology - Data Science Institute)
https://www.nvidia.com/en-us/ (NVIDIA)
https://tanzu.vmware.com/ (VMware Tanzu)

The AI with Maribel Lopez (AI with ML)
#6. Dr. Mona G. Flores Of NVIDIA Defines Federated Learning and Shares How It's Being Using In Healthcare Today

The AI with Maribel Lopez (AI with ML)

Play Episode Listen Later Apr 5, 2022 22:34


In this episode, Dr. Flores shares the opportunities for AI and federated learning. She discusses examples in healthcare, including GatorTron, the largest clinical language model.
About Dr. Flores:
Mona G. Flores, M.D. - Global Head of Medical AI at NVIDIA
Dr. Mona G. Flores is the global head of medical AI at NVIDIA, where she oversees AI initiatives in medicine and healthcare to bridge the chasm between those industries and technology. Dr. Flores first joined NVIDIA in 2018 with a focus on healthcare ecosystem development. Before joining NVIDIA, she served as the chief medical officer of digital health company Human-Resolution Technologies, following over 25 years working in medicine and cardiothoracic surgery. Dr. Flores received her medical degree from Oregon Health and Science University. She completed a general surgery residency at the University of California, San Diego, a postdoctoral fellowship at Stanford, and a cardiothoracic surgery residency and fellowship at Columbia University. Dr. Flores also has a master's degree in biology from San Jose State University, and holds an MBA from the University at Albany School of Business. She initially worked in investment banking for a few years before pursuing her passion for medicine and technology.
Where to follow us: Maribel Lopez on Twitter at @MaribelLopez and LinkedIn: https://www.linkedin.com/in/maribellopez/
You can find Mona on Twitter @Monagflores and @NVIDIA
You can find her on LinkedIn at https://www.linkedin.com/in/monagflores/
You can find more information on GatorTron here: https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s32030/

This Week in Health IT
Prepare for HIMSS & ViVE: NVIDIA Democratizing Access to AI with VMware, iCAD, and Rhino Health

This Week in Health IT

Play Episode Listen Later Mar 3, 2022 10:01 Transcription Available


A special ViVE and HIMSS conference sneak peek episode. What can we expect to see from VMware and NVIDIA in collaboration with iCAD and Rhino Health this year? https://www.linkedin.com/in/IntegratorBrad/ (Brad Genereaux), Medical Imaging Alliance Manager for https://www.nvidia.com/en-us/ (NVIDIA) joins Bill to talk about democratizing access to artificial intelligence across healthcare. What are the exciting advances that we can do with AI in the enterprise? How is it helping to drive efficiencies across every single department in the hospital? How do we empower IT departments with the virtualization stack from VMware to really demonstrate what AI enterprise means for hospitals? There will be an increased number of AI models running in hospitals five years from now, what does that look like? https://www.nvidia.com/en-us/ (NVIDIA: The way it's meant to be played) https://www.vmware.com/ (VMware - Delivering a Digital Foundation For Businesses) https://www.rhinohealth.com/ (Rhino Health - Healthcare AI with Federated Learning) https://www.icadmed.com/ (iCAD - Global medical technology leader providing innovative cancer detection and therapy solutions)

Sounds Profitable: Adtech Applied
HBO Gets Serious About Podcasting + 4 more stories for Jan 28, 2022

Sounds Profitable: Adtech Applied

Play Episode Listen Later Jan 28, 2022 8:19


Today on The Download: is the IAB on borrowed time?, podcast ads see big gains again, Google makes misinformation less profitable, podcasts grow globally, SXM enters the identity game, and Spotify stands behind Rogan even as earnings fall.
HBO Max is Hiring For Their Podcast Marketing Team
Subscription streaming video service HBO Max continues to prove it's serious about podcasting. Their first venture into the space dropped last summer with Batman: The Audio Adventures, an exclusive podcast that could only be listened to in the HBO Max app itself. While HBO maintains podcast channels on Spotify and Apple for related content, just like their peers at Netflix, this is the first podcast to be exclusively hosted in a subscription video streaming app. Now, they're looking to hire a new role, specifically for podcasting under the HBO Max brand. The role is for a Sr. Analyst, Direct-to-Consumer, Podcasting Strategy & Operations, and will “provide strategic and analytic support on various projects covering direct-to-consumer and HBO Max Podcasting initiatives.”
Interactive Voices Lack Diversity
If you missed CES 2022 because... well, reasons; you probably saw plenty of breathless reporting of new prototypes and maybe-coming-in-the-future tech. Steve Keller, Sonic Strategy Director for Studio Resonate, SXM's audio-first creative consultancy, has an in-depth piece on things he noticed at the event of interest to audio people like us. Like a lot of tech around the explosion of interactive voice systems. But he also noticed something else. Quoting from the piece: “But something was missing. Innovation aside, the lack of sonic diversity in the voice sector was disturbing. [P]ractically all the AI voices we heard at CES 2022 were female—and white. In fact, the only voice assistant of color heard was during a series of sessions focused on voice technology, curated and moderated by attn.live CEO, Ian Utile. Multiple panelists affirmed that there's an underlying problem with the overwhelmingly white, male demographics of the AI world who are unconsciously programming biases, sonic color lines, and digital discrimination into voice systems. As a result, the default voice of automotive assistants, connected homes, and a plethora of other devices is white. The issue is compounded by the fact that these assistants, designed to serve us, are also predominantly female. It's a systemic problem, and developers and brands need to work harder to sonically diversify their voice systems, as well as the designers, engineers, and developers who create them.” With DEI so high on the priority list for most companies, it's surprising this problem exists. No, wait. It's not surprising at all, is it?
Google's Federated Learning of Cohorts Replaced by Topics
While third-party cookies aren't part of the information we receive from listeners in podcasting, they are a big part of the device graphs we use to augment what we do receive and improve how we run attribution. So while Safari and Firefox kicked them completely to the curb in 2020, Google has pushed out their deadline for twilighting third-party cookies until 2023. And their original pitch, Federated Learning of Cohorts, or FLoC for short, has now been scrapped for what they're calling Topics. FLoC grouped audiences based on their browsing activity at a very granular level, where Topics focuses on applying a list of topics, starting at around 300 but expected to be in the thousands, directly to the individual. Only the top three most prevalent Topics will be available for targeting and identification, but what's really interesting is that they expire every three weeks, keeping them increasingly fresh and relevant. (A short illustrative sketch of that top-topics selection appears after this roundup.) Topics fit in nicely with the contextual offerings that podcasting is primed to offer advertisers if we continue to prioritize transcription and contextual targeting.
Podcasting Only Looks Hit-Resistant
If you somehow avoided the kerfuffle over the Bloomberg article where Lucas Shaw reported on podcasting's inability to generate a current hit... well, I'm not sure how you did. There have been a lot of hot takes on the article, but one worthy of your attention was penned by Tom Webster in his weekly newsletter, I Hear Things. It's a fascinating read, with Tom pointing out that other mediums, like movies, television programs, and music, all have the same “problem”. They just present differently. Examining the top movies from last year, Tom notes: “Even if you go further down the list from the top 10, it's sequels, movies based on existing properties, and remakes. Is it fair to say that the movie industry hasn't produced a new hit in years? No--all the above movies are new movies, but they are familiar at the same time.” He goes on to make a similar case for popular television programs - The Bachelor season 26, anyone? - and even music, going so far as to craft metaphors around melody and harmony to predict a hit. Working that back to podcasting, Tom says: “Podcasting is, by its very definition, a medium that largely lacks harmony. When you can listen to a podcast anytime, there is little compunction to listen to them at any given time. They are always there--convenient, but rarely urgent. In other words, asynchronous. And they also currently (though not by definition) lack melody. The whole medium is new to so many people, and even for veteran listeners, there isn't exactly the equivalent of NCIS: New Orleans or Thursday Night Football or The Traveling Wilburys--that thread of familiarity that telegraphs immediately: if you like this, you will like that. Even some of the biggest hits of podcasting aren't easily explainable to a friend. That's part of why there is such a spate of celebrity podcasts right now. What is easier to describe to people: It's the Michelle Obama podcast, or 'it's the podcast that reveals the stories behind the world's most recognizable and interesting sounds'?” Check out the entire post for insights on why Tom thinks the article that made such waves was a little unfair, a little wrong, but ultimately right-ish. Links in the episode details, as always.
Amazon Expands Ad Sales Efforts
Amazon Advertising was responsible for generating $23bn in revenue for the first three quarters of 2021, nearly double the $13.5bn generated in that same period for 2020. How'd they do it? By shifting their focus to pursuing major brands, agencies, and holding companies looking to focus on awareness with their large customer sales team. Joshua Kreitzer, founder and CEO of Channel Bakers, an Amazon-focused ad agency, tells Digiday: “With this change, the Amazon large customer sales team is no longer focused on shopper marketing dollars — they're now responsible for breaking through to the $70 billion TV market.” While selling advertising to Amazon's clients actively selling products on Amazon.com is still part of their focus, they're now providing a bigger brand play by being able to offer inventory across Twitch, Fire TV, IMDb TV, and their podcast companies Art19 and Wondery. Amazon has an immense amount of first-party data, from all their apps and services that require a login, so coupled with the technology they integrated from acquiring attribution company Sizmek and their AWS infrastructure, they have the potential to provide insights competitive to Google and Meta.
The Download is presented by Sounds Profitable and is hosted by Bryan Barletta and Evo Terra. Audio editing by Ian Powell. See omnystudio.com/listener for privacy information.
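To make the FLoC-to-Topics shift concrete: instead of hashing raw browsing behavior into fine-grained cohorts, the browser maps visited sites onto a small public taxonomy and exposes only the user's top few topics per multi-week epoch. Below is a toy Python sketch of that selection logic; the taxonomy, epoch length, and site-to-topic mapping are invented for illustration and are not Chrome's actual implementation.

```python
# Toy illustration of Topics-style cohorting: per epoch, keep only the user's
# top 3 topics from a small fixed taxonomy. The taxonomy and the site-to-topic
# mapping below are invented for this example.
from collections import Counter
from datetime import date, timedelta

SITE_TOPICS = {               # hypothetical mapping from hostname to taxonomy topic
    "podnews.net": "Podcasts",
    "espn.com": "Sports",
    "allrecipes.com": "Cooking",
    "stackoverflow.com": "Programming",
}
EPOCH = timedelta(weeks=3)    # Topics rotates on roughly three-week epochs

def top_topics(history, today, k=3):
    """history: list of (date, hostname). Return the k most common topics this epoch."""
    counts = Counter(
        SITE_TOPICS[host]
        for when, host in history
        if today - when < EPOCH and host in SITE_TOPICS
    )
    return [topic for topic, _ in counts.most_common(k)]

history = [
    (date(2022, 1, 10), "podnews.net"),
    (date(2022, 1, 12), "podnews.net"),
    (date(2022, 1, 15), "espn.com"),
    (date(2022, 1, 20), "stackoverflow.com"),
    (date(2022, 1, 25), "allrecipes.com"),
]
print(top_topics(history, today=date(2022, 1, 28)))   # e.g. ['Podcasts', 'Sports', ...]
```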

Carnegie Council Audio Podcast
AI, Movable Type, & Federated Learning, with Blaise Aguera y Arcas

Carnegie Council Audio Podcast

Play Episode Listen Later Jan 19, 2022 65:34


Are we reaching for the wrong metaphors and narratives in our eagerness to govern AI? In this Artificial Intelligence & Equality podcast, Carnegie Council Senior Fellow Anja Kaspersen is joined by Google Research's Blaise Agüera y Arcas. In a talk that spans from Gutenberg to federated learning models to what we can learn from nuclear research, they discuss what we need to be mindful of when discussing and engaging with future applications of machine intelligence. For more on this podcast, please go to carnegiecouncil.org. For more on the Artificial Intelligence & Equality Initiative (AIEI), please go to carnegieaie.org.

mAcademia - Science, More than Just Academia.
#41 - Business development in health care - with Yulie Klerman

mAcademia - Science, More than Just Academia.

Play Episode Listen Later Dec 10, 2021 75:22


Yulie Klerman solves puzzles of the complex healthcare ecosystem, helping health tech companies crystallize product/market fit and develop a go-to-market strategy for their innovative solutions. Yulie is a VP of Business Development at Rhino Health, an end-to-end federated learning and privacy preservation platform facilitating privacy-centric data collaboration. In her previous role, at LiveRamp, she launched a Health Data Connectivity Platform for the safe and effective incorporation of health, consumer, and digital data. Yulie is obsessed with data democratization, interoperability, and privacy. We talked about how to develop a career starting with a science degree and become proficient in business, how to start in one country and move to a different one, and how to start in one area and move to a different one. We talked about curiosity and passion for what you do. Want to continue the conversation? Join our mAcademia Group on Facebook.
Music credits: Funkorama by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 3.0 License http://creativecommons.org/licenses/by/3.0/
--- Send in a voice message: https://anchor.fm/macademia/message

The Encrypted Economy
Privacy Tech At Work In The Kingdom. Thomas Attema, Cryptography Scientist at TNO - E36

The Encrypted Economy

Play Episode Listen Later Jul 6, 2021 60:34 Transcription Available


This week on The Encrypted Economy, my guest is Research Scientist Thomas Attema, a cryptographer from the Netherlands-based applied research institute TNO. TNO works to bridge the gap between academia and industry to implement sustainable competitive innovation that enhances the well-being of society. Thomas sheds light on utilizing developments in tech to tackle some of the most pressing issues of our time. Be sure to subscribe to The Encrypted Economy for more insight on privacy-enhancing technologies and their benefit to the future of our world. (A toy sketch of the secret-sharing idea behind multi-party computation follows these show notes.)
Topics Covered:
· Thomas's Background
· Bridging the Gap Between Academia and Industry
· Strategies for Making Theoretical Technology Practical
· Legal, Ethical, and Governance Considerations
· Is the Market Ready for Multi-Party Computation?
· Who Will Be the First Implementers of Innovative Tech?
· Example of Privacy Enhancing Technology Implementation
· Techniques for Utilizing Proper Security Protocols
· Mitigating the Risks in Tech Post-Quantum Computing
· Learning Experiences at TNO
· Thoughts on Deploying Privacy Measures in Different Jurisdictions
Resource Links:
· Thomas' LinkedIn
· TNO Website
· CWI Cryptology Group
· Thomas Attema's Publications
· Sugar Beets Options and Multiparty Computation
· Multiparty Computation Introduction
· Use Cases for Multiparty Computation Paper
· GDPR
· Privacy-Friendly HIV Treatments
· Homomorphic Encryption
· Federated Learning
· TNO Spin-off Companies
· The Quantum Threat
· Future of Data Privacy
· Linksight
Follow The Encrypted Economy on your favorite platforms!
· Twitter
· LinkedIn
· Instagram
· Facebook
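A quick intuition for the multi-party computation discussed in the episode: with additive secret sharing, each party splits its private number into random shares, the parties combine shares locally, and only the aggregate total is ever reconstructed. The toy Python sketch below shows the idea over a prime field; it is a teaching example, not TNO's production protocol, and the modulus and inputs are arbitrary.

```python
# Toy additive secret sharing over a prime field: three parties learn the sum
# of their private inputs without revealing the inputs themselves.
import secrets

P = 2**61 - 1   # a Mersenne prime used as the field modulus

def share(value, n_parties=3):
    """Split value into n random shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def secure_sum(private_inputs):
    n = len(private_inputs)
    all_shares = [share(v, n) for v in private_inputs]
    # Party j adds up the j-th share it received from every party...
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
    # ...and only these partial sums are published and combined.
    return sum(partial_sums) % P

inputs = [42, 17, 99]          # each party's private value
print(secure_sum(inputs))      # 158, with no party seeing the others' inputs
```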

Let's Know Things
App Tracking Transparency

Let's Know Things

Play Episode Listen Later Feb 23, 2021 25:10


This week we talk about cookies, ATT, and IDFAs. We also discuss Federated Learning of Cohorts, targeted ads, and Facebook. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe