Podcasts about Algorithmic

  • 788 podcasts
  • 1,278 episodes
  • 38m average duration
  • 5 new episodes weekly
  • Latest episode: Mar 21, 2025

POPULARITY (2017-2024)

Best podcasts about Algorithmic

Latest podcast episodes about Algorithmic

@BEERISAC: CPS/ICS Security Podcast Playlist
Episode 442 - Maritime Domain Awareness Series - Securing our seas: Innovations and challenges

@BEERISAC: CPS/ICS Security Podcast Playlist

Play Episode Listen Later Mar 21, 2025 62:01


Podcast: Cyber Security Weekly Podcast. Episode: Episode 442 - Maritime Domain Awareness Series - Securing our seas: Innovations and challenges. Pub date: 2025-03-19. The full episode description appears under the Cyber Security Weekly Podcast entry below.

Movement Memos
We Must Burst Our Algorithmic Bubbles and Build Together Across Difference

Movement Memos

Play Episode Listen Later Mar 20, 2025 73:18


“We need each other, and interdependence is key to survival for human beings,” says organizer Mariame Kaba. In this episode, Mariame and Kelly talk about what their book Let This Radicalize You brings to this moment. They also discuss the fight for reproductive justice, the problem with schadenfreude, and the need to build collective courage.

Music: Son Monarcas and Pulsed

You can find a transcript and show notes (including links to resources) here: truthout.org/series/movement-memos/
If you would like to support the show, you can donate here: bit.ly/TODonate
If you would like to receive Truthout's newsletter, please sign up: bit.ly/TOnewsletter

Cyber Security Weekly Podcast
Episode 442 - Maritime Domain Awareness Series - Securing our seas: Innovations and challenges

Cyber Security Weekly Podcast

Play Episode Listen Later Mar 19, 2025 62:01


This session focused on gaining insights into the latest developments and capabilities for establishing and maintaining situational awareness across the maritime domain, with a focus on security, sustainability and space-based Earth observation. For reference to the maritime domain and related activities, see the following links:
https://www.iala.int/technical/mass/
https://smartsatcrc.com/smartsat-crc-and-nz-government-announce-four-new-joint-research-projects-under-the-australia-new-zealand-collaborative-space-program/
https://unseenlabs.space/our-product/

DISCUSSION KEY POINTS
- Future of Maritime Autonomous Surface Ships (MASS)
- Imagery utilization and availability (TPED) / configuration
- On-board processing for tip/cue scenarios
- Algorithmic considerations for efficient ship detection (optical and SAR)
- Synthetic aperture radar (SAR) missions – Australia and NZ

Thomas Southall, Committee Manager, INTERNATIONAL ORGANIZATION FOR MARINE AIDS TO NAVIGATION (IALA)
Thomas is Committee Manager for the International Organization for Marine Aids to Navigation (IALA), directing the technical output and aligning deliverables with the organization's Strategic Vision and Committee Work Programme. He is also a Trustee and Fellow of the Royal Institute of Navigation, awarded in recognition of his contribution to improved Vessel Traffic Services practice, training and development of policy at national and international levels. He has recently been admitted into the Fraternity of the United Kingdom's Trinity House as a Younger Brother in recognition of his experience and achievements. He was the representative of the International Harbour Masters Association to IALA, where he served as participant and Chair of the VTS Operations Working Group. In this role, and as IALA Technical Officer, he made a significant contribution to the adoption of the new IMO Resolution on VTS. Before joining IALA, Tom worked for the Australian Maritime Safety Authority as a maritime advisor. Previously, he oversaw the Port of London Authority's VTS and led a commercial training organization. Tom served as a Navigational Officer in the Merchant Navy.

Dr Carl Seubert, Chief Research Officer, SMARTSAT CRC
Dr Carl Seubert joined SmartSat in May 2021, after nine years at the NASA Jet Propulsion Laboratory (JPL) as a Senior Aerospace Engineer. After graduating with First Class Honours in Aerospace Engineering from the University of Sydney, Dr Seubert completed a Master of Science degree in Aerospace Engineering at the Missouri University of Science and Technology (USA) and a PhD in Aerospace Engineering at the University of Colorado Boulder (USA). As NASA JPL's Manager of the Formation Control Testbed and a Guidance and Control Engineer, Dr Seubert led research and technology development for spacecraft formation flight, future Earth observation missions and precise planetary landing. This includes designing the spacecraft pointing control algorithms and software for the upcoming Europa Clipper mission and the next Mars lander mission.

Kevin Jones, CTO & VP Product, CATALYST (PCI GEOMATICS)
Kevin has a background in remote sensing applications and began his career working on the RADARSAT-1 mission in Canada. Throughout his career, he has developed and delivered Earth observation based solutions to clients globally, spanning many application areas. With the advent of AIS data, Kevin managed the implementation of a near-real-time ship detection service that fused and correlated detections with known ship positions. At CATALYST, the team is working to make its deep and rich algorithm stack available for efficient processing of Earth observation imagery, enabling innovative data-as-a-service solutions for several application areas.

Rachid Nedjar, Chief Strategy & Marketing Officer, UNSEENLABS
Rachid Nedjar is the Head of Marketing at Unseenlabs. In this role, he focuses on developing tailored content and solutions for Unseenlabs customers involved in maritime security. Prior to joining Unseenlabs, Rachid worked for Le Poool, providing support and consulting to early-stage and growth-stage technology companies.

#australiainspacetv #ipsec #mass #maritime #maritimedomain #autonomoussystems #autonomousshipping #unseenlabs #iala #maritimesecurity #sar #spacetechnology #smartsatcrc

Recsperts - Recommender Systems Experts
#27: Recommender Systems at the BBC with Alessandro Piscopo and Duncan Walker

Recsperts - Recommender Systems Experts

Play Episode Listen Later Mar 19, 2025 87:44


In episode 27 of Recsperts, we meet Alessandro Piscopo, Lead Data Scientist in Personalization and Search, and Duncan Walker, Principal Data Scientist in the iPlayer Recommendations Team, both from the BBC. We discuss how the BBC personalizes recommendations across different offerings like news or video and audio content recommendations. We learn about the core values of the oldest public service media organization and the collaboration with editors in that process.

The BBC once started with short-video recommendations for BBC+ and nowadays has to consider recommendations across multiple domains: news, the iPlayer, BBC Sounds, BBC Bitesize, and more. With a reach of 500M+ users who access its services every week, there is huge potential. My guests discuss the challenges of aligning recommendations with public service values and the role of editors, with constant exchange, alignment, and learning between the algorithmic and editorial lines of recommender systems.

We also discuss the potential of cross-domain recommendations to leverage content across different products, as well as the organizational setup of teams working on recommender systems at the BBC. We learn about skews in the data due to the nature of an online service that also has a linear offering with TV and radio services. Towards the end, we also touch on QUARE @ RecSys, the Workshop on Measuring the Quality of Explanations in Recommender Systems.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) Introduction
(03:10) About Alessandro Piscopo and Duncan Walker
(14:53) RecSys Applications at the BBC
(20:22) Journey of Building Public Service Recommendations
(28:02) Role and Implementation of Public Service Values
(36:52) Algorithmic and Editorial Recommendation
(01:01:54) Further RecSys Challenges at the BBC
(01:15:53) QUARE Workshop
(01:23:27) Closing Remarks

Links from the episode:
- Alessandro Piscopo on LinkedIn
- Duncan Walker on LinkedIn
- BBC
- QUARE @ RecSys 2023 (2nd Workshop on Measuring the Quality of Explanations in Recommender Systems)

Papers:
- Clarke et al. (2023): Personalised Recommendations for the BBC iPlayer: Initial approach and current challenges
- Boididou et al. (2021): Building Public Service Recommenders: Logbook of a Journey
- Piscopo et al. (2019): Data-Driven Recommendations in a Public Service Organisation

General links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website

SAGE Sociology
Socius - Weaponizing the Workplace: How Algorithmic Management Shaped Amazon's Antiunion Campaign in Bessemer, Alabama

SAGE Sociology

Play Episode Listen Later Mar 14, 2025 23:58


Author Teke Wiggin discusses the article, "Weaponizing the Workplace: How Algorithmic Management Shaped Amazon's Antiunion Campaign in Bessemer, Alabama" published in Socius: Sociological Research for a Dynamic World.

Humanism Now
28. Susie Alegre on the Algorithmic Assault on Human Rights: How AI Threatens Our Core Freedoms

Humanism Now

Play Episode Listen Later Mar 9, 2025 39:58 Transcription Available


AI technologies pose significant threats to fundamental human rights, reinforcing historical biases and power imbalances. This week, we are joined by Susie Alegre, international human rights lawyer and author, to explore the impact of generative AI on gender and racial equality, labour markets, and information ecosystems. Susie has worked for international NGOs like Amnesty International and organisations including the UN, the EU and the Council of Europe. Susie has published two books covering the critical issue of technology's impact on human rights: “Freedom to Think” (2022), a Financial Times Technology Book of the Year 2022 and shortlisted for the Royal Society of Literature Christopher Bland Prize 2023, and “Human Rights, Robot Wrongs: Being Human in the Age of AI”, published in 2024.

The episode covers:
• How AI systems, like ChatGPT, perpetuate gender and racial biases
• The "Pygmalion" pattern in AI design
• Potential long-term effects on skills, education and social interactions
• The rise of "ultra-processed information" and its consequences for the internet
• Legal risks and the role of effective regulation
• Enforcement in addressing AI's human rights risks
• When AI applications may be valuable—and when they are not

Talking about Platforms
Crypto Economics and Open Source Platforms with Mariia Petryk

Talking about Platforms

Play Episode Listen Later Mar 5, 2025 48:41


Decentralization, Tokenization, and the Evolution of Digital Incentives

Guest: Mariia Petryk

Bio: Mariia Petryk is an Assistant Professor of Information Systems and Operations Management at George Mason University. Her research interests include information systems, finance management, organization science, and the economics of blockchain. She focuses particularly on decentralized platforms and open-source software.

Summary: In this episode of Talking About Platforms, Mariia discusses the platform business model as one that creates infrastructural opportunities for other agents, companies, individuals, and users to create new value. The platform operator provides the infrastructure and foundational tools for other economic agents to create derivative value and the economy around the platform. Key discussion points include:
• Decentralized platforms and blockchain: Mariia shares her journey into researching blockchain technology around 2017, initially learning about it from the Bitcoin perspective and then finding a community on campus discussing this technology. She notes the ethos behind it as a movement against centralization, particularly in financial transactional systems, aligning with open-source software concepts.
• Research gaps and the evolution of blockchain applications: Early research focused on understanding what blockchain is and what changes it brings to existing business models. The evolution of applications, from Bitcoin to various cryptocurrencies, has been crucial in understanding blockchain's impact.
• Traditional firms and blockchain: Some companies use blockchain technology to make processes more efficient, such as stablecoin companies utilizing blockchain for cheaper and more efficient payment rails. Others, like Starbucks and Nike, experiment with Web3 artifacts for community engagement and loyalty enhancement.
• Open-source community and value capture: Mariia discusses capturing the value of open source in the cryptocurrency space, given that a majority of cryptocurrencies have open-source code on GitHub.
• Centralization in decentralized sectors: The discussion touches on the tendency toward centralization in the blockchain space, with larger entities dominating through grant programs.
• Tokenization and incentivization: Blockchain introduces the concept of token organizations, digitizing transactions and exchanges, and using tokens as payment for contributions, potentially shifting the balance between intrinsic and extrinsic motivation.
• Algorithmic governance and immutability: The immutability of code in blockchain systems can be a dilemma when the system scales and new market mechanisms require changes. Flexibility is needed, and sometimes centralized entities are required to make decisions.

Publications & Projects Mentioned:
• von Hippel, E. (2002). Open source software projects as user innovation networks. MIT Sloan School of Management.
• Petryk, M., Qiu, L., & Pathak, P. (2023). The Impact of Open-Source Community on Cryptocurrency Market Price: An Empirical Investigation. Journal of Management Information Systems, 40(4), 1237-1270.
• Nimalendran, M., Pathak, P., Petryk, M., & Qiu, L. (2024). Informational efficiency of cryptocurrency markets. Journal of Financial and Quantitative Analysis, 1-30.

Links:
• Mariia's website: https://www.mariiapetryk.com/home

Hosts: Daniel Trabucchi, Tommaso Buganza and Philip Meier

Speaking Out of Place
A Conversation with Laila Lalami on The Dream Hotel: dreaming beyond the algorithmic state

Speaking Out of Place

Play Episode Listen Later Mar 4, 2025 41:35


Today on Speaking Out of Place I talk with award-winning novelist Laila Lalami about her new novel, The Dream Hotel. What happens when the state, with the pretext of protecting public safety, can detain indefinitely certain individuals whose dreams seem to indicate they may be capable of committing a crime?  Set in a precarious world where sleep-enhancing devices and algorithms provide the tools and formulae for making one's unconscious a witness to one's possible waking life, this novel touches on a myriad of political, philosophical, and moral concerns as they particularly connect to issues of gender, race, ethnicity, privacy, and  the security state.Laila Lalami is the author of five books, including The Moor's Account, which won the American Book Award, the Arab-American Book Award, and the Hurston / Wright Legacy Award. It was on the longlist for the Booker Prize and was a finalist for the Pulitzer Prize in Fiction. Her most recent novel, The Other Americans, was a national bestseller, won the Joyce Carol Oates Prize, and was a finalist for the National Book Award in Fiction. Her books have been translated into twenty languages. Her essays have appeared in the Los Angeles Times, the Washington Post, The Nation, Harper's, the Guardian, and the New York Times.  She has been awarded fellowships from the British Council, the Fulbright Program, the Guggenheim Foundation, and the Radcliffe Institute at Harvard University.  She lives in Los Angeles.  

Work in Progress with Sim & Ko
Algorithmic Lives, Digital Tribes: Tulsi Mehrotra Menon on Work, Identity, and the Human-Machine Future

Work in Progress with Sim & Ko

Play Episode Listen Later Feb 27, 2025 73:01


In this thought-provoking episode, we sit down with Tulsi Mehrotra Menon, a Cultural and Digital Anthropologist, to unravel the intricate relationships between work, identity, and technology in today's ever-evolving world. Born and raised in Bangalore, Tulsi brings a unique anthropological lens to the corporate sphere, helping brands move beyond surface-level strategies to truly understand the human motivations and cultural contexts that shape consumer behavior.

We dive into:
• The Anthropology of Business: How Tulsi applies cultural insights to brand and business strategies, pushing for a shift from brand-centric to human-centric approaches.
• The Multitudes of Identity: Exploring how selfhood is constructed in 2025—and what it might look like in 2050—as work, technology, and personal expression intersect.
• Human vs. Machine: What does the future of work hold as AI and algorithms increasingly influence our choices, relationships, and identities?
• The Authenticity Paradox: In an era of curated personas and performative authenticity, how do we distinguish what's real—and what does this mean for brands trying to connect with their audiences?
• Cultural Glasses and the Three C's Model: Understanding people's behaviors through the lens of context, conditions, and choices—and how this model helps resolve everyday conflicts and inform better business decisions.
• The Evolution of Connection: From fandoms to online communities, how are tribes forming in India versus the rest of the world—and what lessons can brands learn?
• Women and Work: Balancing unpaid caregiving with personal ambition—how can workplaces create unconventional spaces for women to explore their multidimensional identities?
• Leading for the Future: What leadership looks like in a world shifting from linear career paths and CV-based hiring to skill-driven, imagination-fueled environments.

Whether you're a brand strategist, business leader, or someone simply curious about how human behavior evolves, this episode offers fresh insights and a bold reimagining of the future. Listen now and step into the fascinating world of digital tribes and algorithmic lives with Tulsi Mehrotra Menon.

52 Weeks of Cloud
European Digital Sovereignty: Breaking Tech Dependency

52 Weeks of Cloud

Play Episode Listen Later Feb 24, 2025 10:38


European Digital Sovereignty: Breaking Tech Dependency

Episode Notes

Heterodox Economic Foundations (00:00-02:46)
- Current economic context: income inequality at historic levels (worse than pre-French Revolution)
- Problems with GDP as primary metric: masks inequality when wealth is concentrated; fails to measure human wellbeing; American example of a majority living paycheck-to-paycheck despite GDP growth
- Alternative metrics: human dignity quantification, planetary health indicators, commons-based resource management, care work valuation (teaching, healthcare, social work), multi-dimensional inequality measurement
- Practical examples: life expectancy as a key metric (EU/Japan vs US differences), education quality and accessibility, democratic participation, income distribution

Digital Infrastructure Autonomy (02:46-03:18)
- European cloud infrastructure development (GAIA-X)
- Open-source technology adoption in public institutions
- Local semiconductor production capacity
- Network infrastructure without US-controlled chokepoints

Income Redistribution via Tech Regulation (03:18-03:53)
- Digital services taxation models
- Graduated taxation based on market concentration
- Labor share requirements through tax incentives
- SME ecosystem development through regulatory frameworks

Health Data Sovereignty (03:53-04:29)
- Patient data localization requirements
- Indigenous medical technology development
- European-controlled health datasets for AI training
- Contrasting social healthcare vs. capitalistic healthcare models

Agricultural Technology Independence (04:29-04:53)
- European research-driven precision farming
- Farm management systems with European values (cooperative models)
- Rural connectivity self-sufficiency for smart farming

Information Ecosystem Control (04:53-05:33)
- European content moderation standards
- Concerns about American platforms' rule changes
- Public funding for quality news content
- Taxation mechanisms on disinformation spread

Democratic Technology Governance (05:33-06:17)
- Algorithmic impact assessment frameworks
- Evaluating offline harm potential
- Digital rights enforcement mechanisms
- Countering extremist content proliferation

Mobility Data Sovereignty (06:17-06:33)
- Public transportation data ownership by European cities
- Vehicle data localization requirements
- European component requirements for autonomous vehicles

Taxation Technology Independence (06:33-06:48)
- Tax incentives for European tech adoption
- Penalties for dependence on US vendors
- Strategic technology sector preferences

Climate Technology Self-Sufficiency (06:48-07:03)
- Renewable energy management software
- Carbon accounting tools
- Prioritizing climate technology in economic planning

Conclusion: Competing Through Rights-Based Innovation (07:03-10:36)
- Critique of American outcomes despite GDP growth: declining life expectancy, healthcare bankruptcy, gun violence
- European competitive advantage through human rights prioritization, environmental protection, and deterministic technology development
- Constructive vs. extractive economic models
- Potential to attract global talent seeking better quality of life
- Reframing "overregulation" criticisms as human rights defense
- Building rather than extracting as the European model

Talking about Platforms
Algorithmic Management with Lindsey Cameron

Talking about Platforms

Play Episode Listen Later Feb 19, 2025 39:22


Platform Power and the Future of Work

Guest: Lindsey D. Cameron

Bio: Lindsey D. Cameron is an Assistant Professor of Management at the University of Pennsylvania's Wharton School of Business and a faculty affiliate at the Berkman Klein Center for Internet and Society at Harvard University. Her research focuses on algorithmic management, artificial intelligence, and platform work. Lindsey became interested in platforms after observing her mother's experiences in the gig economy after a job loss, which sparked her interest in social mobility and labor platforms.

Summary: In this episode of Talking About Platforms, Lindsey D. Cameron discusses her research on labor platforms, platform power, and the evolving nature of platform economies. She emphasizes that platforms are not neutral entities but are embedded with values that can lead to exploitation. She explores how this exploitation is visible on labor platforms through declining wages and safety violations but is often subtler on digital platforms. Lindsey also addresses the co-option of the sharing economy narrative by early labor platforms and the role of algorithmic management in controlling workers. She distinguishes between open and closed labor markets, highlighting the varying degrees of control exerted by platforms. The discussion further covers the need for platform accountability, the importance of transparent rules, and the potential for on-demand platform work in formalizing economies, especially in the Global South. We touch on the increasing entanglement of platform work within society, shaping decisions and capital flows.

Publications & Projects Mentioned:
• Rahman, H. A., Karunakaran, A., & Cameron, L. D. (2024). Taming platform power: Taking accountability into account in the management of platforms. Academy of Management Annals, 18(1), 251-294.
• Cameron, L. D. (2024). The Making of the "Good Bad" Job: How Algorithmic Management Manufactures Consent Through Constant and Confined Choices. Administrative Science Quarterly, 69(2), 458-514.
• Cameron, L. D. (2022). "Making out" while driving: Relational and efficiency games in the gig economy. Organization Science, 33(1), 231-252.

Links:
• Lindsey's website: lindseydcameron.com
• X (Twitter): @LindseyDCameron6

Hosts: Daniel Trabucchi, Tommaso Buganza and Philip Meier

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

AI Art: Ethical Considerations and Artistic Impact

AI art's emergence prompts significant discussion, balancing excitement and apprehension regarding its implications. The author explores the ethical quagmire surrounding copyright, the potential impact on artists' livelihoods, and the inherent biases within AI training data. It questions the very definition of art, pondering whether AI-generated content possesses genuine soul and meaning. However, it acknowledges the democratisation of creativity, empowering individuals to visualise concepts regardless of artistic skill. AI is presented as a powerful tool for artistic collaboration and innovation, offering new avenues for expression and efficiency across various creative disciplines. The piece concludes by advocating ongoing dialogue and exploration of AI's evolving role in the artistic landscape.

The New Quantum Era
Informationally complete measurement and dual-rail qubits with Guillermo García-Pérez and Sean Weinberg

The New Quantum Era

Play Episode Listen Later Feb 18, 2025 34:15 Transcription Available


Welcome to another episode of The New Quantum Era, where we delve into the cutting-edge developments in quantum computing, with your host, Sebastian Hassinger. Today, we have a unique episode featuring representatives from two companies collaborating on groundbreaking quantum algorithms and hardware. Joining us are Sean Weinberg, Director of Quantum Applications at Quantum Circuits Incorporated, and Guillermo Garcia Perez, Chief Science Officer and co-founder at Algorithmiq. Together, they discuss their partnership and the innovative work they are doing to advance quantum computing applications, particularly in the field of chemistry and pharmaceuticals.

Key Highlights:
• Introduction of New Podcast Format: Sebastian explains the new format of the podcast and introduces the guests, Sean Weinberg from Quantum Circuits Inc. and Guillermo Garcia Perez from Algorithmiq.
• Collaboration Overview: Guillermo discusses the partnership between Quantum Circuits Inc. and Algorithmiq, focusing on how Quantum Circuits Inc.'s dual-rail qubits with built-in error detection enhance Algorithmiq's quantum algorithms.
• Innovative Algorithms: Guillermo elaborates on their novel approach to ground state simulations using tensor network methods and informationally complete measurements, which improve the accuracy and efficiency of quantum computations.
• Hardware Insights: Sean provides insights into Quantum Circuits Inc.'s Seeker device, an eight-qubit system that flags 90% of errors, and discusses the future scalability and potential for error correction.
• Future Directions: Both guests talk about the potential for larger-scale devices and the importance of collaboration between hardware and software companies to advance the field of quantum computing.

Mentioned in this Episode:
• Quantum Circuits Inc.
• Algorithmiq
• QCI's forthcoming quantum computing device, Aqumen Seeker
• Tensor Network Error Mitigation: a method used by Algorithmiq to improve the accuracy of quantum computations

Tune in to hear about the exciting advancements in quantum computing and how these two companies are pushing the boundaries of what's possible in this new quantum era. If you like what you hear, check out www.newquantumera.com, where you'll find our full archive of episodes and a preview of the book I'm writing for O'Reilly Media, The New Quantum Era.

NATO Review
NATO Review: Algorithmic invasions: How information warfare threatens NATO's Eastern Flank

NATO Review

Play Episode Listen Later Feb 7, 2025 17:46


On 6 December 2024, in an unprecedented move, Romania's Constitutional Court annulled the results of the first round of its 24 November presidential election, citing evidence provided by intelligence agencies that the electoral process had been “compromised throughout its duration and across all stages”. This dramatic decision, unparalleled in Romania's history since the 1989 revolution against the communist regime, underscores the evolving nature of hybrid warfare, one aspect of which includes algorithmic manipulation and cyber-enabled disinformation campaigns that target and destabilise democracies. The prospect of a NATO Ally on the Alliance's south-eastern flank being undermined—not through military invasion but through algorithm-driven social media manipulation—serves as a stark reminder of national security vulnerabilities in the digital age. The implications extend far beyond Romania, highlighting the urgent need to integrate robust information security measures into NATO's strategic framework.

Show Me The Money Club
Uber RESPONDS To Our Algorithmic Wage Discrimination Tests

Show Me The Money Club

Play Episode Listen Later Feb 5, 2025 123:46


Welcome to the Show Me The Money Club live show with Sergio and Chris, Tuesdays at 6pm EST / 3pm PST.

Education · The Creative Process
PAY ATTENTION: A Call to Regulate the Attention Market & Prevent Algorithmic Emotional Governance

Education · The Creative Process

Play Episode Listen Later Feb 3, 2025 60:38


AI competes for our attention because our attention has been commodified. As our entire lives revolve more and more around the attention economy, what can we do to restore our autonomy, reclaim our privacy, and reconnect with the real world?

Computer scientist Fabien Gandon and research engineer Franck Michel are experts in AI, the Web, and knowledge systems. Fabien is a senior researcher at Inria (Institut national de recherche en sciences et technologies du numérique), specializing in the Semantic Web, while Franck focuses on integrating and sharing data through Linked Open Data technologies. Together, they've written Pay Attention: A Call to Regulate the Attention Market and Prevent Algorithmic Emotional Governance. Their research unpacks how digital platforms are monetizing our attention at an unprecedented scale—fueling misinformation and division and even threatening democracy and affecting our emotions and well-being.

“The fact that technologies are being used and combined to capture our attention is concerning. This is currently being done with no limitations and no regulations. That's the main problem. Attention is a very private resource. No one should be allowed to extract it from us by exploiting what we know about the human mind and how it functions, including its weaknesses. We wrote this paper as a call to regulate the attention market and prevent algorithmic emotional governance.”

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Education · The Creative Process
On Regulating the Attention Market & Prevent Algorithmic Emotional Governance w/ FABIEN GANDON & FRANCK MICHEL

Education · The Creative Process

Play Episode Listen Later Feb 3, 2025 11:34


“The fact that technologies are being used and combined to capture our attention is concerning. This is currently being done with no limitations and no regulations. That's the main problem. Attention is a very private resource. No one should be allowed to extract it from us by exploiting what we know about the human mind and how it functions, including its weaknesses. We wrote this paper as a call to regulate the attention market and prevent algorithmic emotional governance.”

Computer scientist Fabien Gandon and research engineer Franck Michel are experts in AI, the Web, and knowledge systems. Fabien is a senior researcher at Inria (Institut national de recherche en sciences et technologies du numérique), specializing in the Semantic Web, while Franck focuses on integrating and sharing data through Linked Open Data technologies. Together, they've written Pay Attention: A Call to Regulate the Attention Market and Prevent Algorithmic Emotional Governance. Their research unpacks how digital platforms are monetizing our attention at an unprecedented scale—fueling misinformation and division and even threatening democracy and affecting our emotions and well-being.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

The Creative Process in 10 minutes or less · Arts, Culture & Society
PAY ATTENTION: A Call to Regulate the Attention Market & Prevent Algorithmic Emotional Governance

The Creative Process in 10 minutes or less · Arts, Culture & Society

Play Episode Listen Later Feb 3, 2025 11:34


“The fact that technologies are being used and combined to capture our attention is concerning. This is currently being done with no limitations and no regulations. That's the main problem. Attention is a very private resource. No one should be allowed to extract it from us by exploiting what we know about the human mind and how it functions, including its weaknesses. We wrote this paper as a call to regulate the attention market and prevent algorithmic emotional governance.”

Computer scientist Fabien Gandon and research engineer Franck Michel are experts in AI, the Web, and knowledge systems. Fabien is a senior researcher at Inria (Institut national de recherche en sciences et technologies du numérique), specializing in the Semantic Web, while Franck focuses on integrating and sharing data through Linked Open Data technologies. Together, they've written Pay Attention: A Call to Regulate the Attention Market and Prevent Algorithmic Emotional Governance. Their research unpacks how digital platforms are monetizing our attention at an unprecedented scale—fueling misinformation and division and even threatening democracy and affecting our emotions and well-being.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Tech, Innovation & Society - The Creative Process
PAY ATTENTION: A Call to Regulate the Attention Market & Prevent Algorithmic Emotional Governance

Tech, Innovation & Society - The Creative Process

Play Episode Listen Later Feb 3, 2025 60:38


AI competes for our attention because our attention has been commodified. As our entire lives revolve more and more around the attention economy, what can we do to restore our autonomy, reclaim our privacy, and reconnect with the real world?

Computer scientist Fabien Gandon and research engineer Franck Michel are experts in AI, the Web, and knowledge systems. Fabien is a senior researcher at Inria (Institut national de recherche en sciences et technologies du numérique), specializing in the Semantic Web, while Franck focuses on integrating and sharing data through Linked Open Data technologies. Together, they've written Pay Attention: A Call to Regulate the Attention Market and Prevent Algorithmic Emotional Governance. Their research unpacks how digital platforms are monetizing our attention at an unprecedented scale—fueling misinformation and division and even threatening democracy and affecting our emotions and well-being.

“The fact that technologies are being used and combined to capture our attention is concerning. This is currently being done with no limitations and no regulations. That's the main problem. Attention is a very private resource. No one should be allowed to extract it from us by exploiting what we know about the human mind and how it functions, including its weaknesses. We wrote this paper as a call to regulate the attention market and prevent algorithmic emotional governance.”

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Tech, Innovation & Society - The Creative Process
On Regulating the Attention Market & Prevent Algorithmic Emotional Governance w/ FABIEN GANDON & FRANCK MICHEL

Tech, Innovation & Society - The Creative Process

Play Episode Listen Later Feb 3, 2025 11:34


“The fact that technologies are being used and combined to capture our attention is concerning. This is currently being done with no limitations and no regulations. That's the main problem. Attention is a very private resource. No one should be allowed to extract it from us by exploiting what we know about the human mind and how it functions, including its weaknesses. We wrote this paper as a call to regulate the attention market and prevent algorithmic emotional governance.”

Computer scientist Fabien Gandon and research engineer Franck Michel are experts in AI, the Web, and knowledge systems. Fabien is a senior researcher at Inria (Institut national de recherche en sciences et technologies du numérique), specializing in the Semantic Web, while Franck focuses on integrating and sharing data through Linked Open Data technologies. Together, they've written Pay Attention: A Call to Regulate the Attention Market and Prevent Algorithmic Emotional Governance. Their research unpacks how digital platforms are monetizing our attention at an unprecedented scale—fueling misinformation and division and even threatening democracy and affecting our emotions and well-being.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Communism Exposed:East and West
The Demoralizing Downward Spiral of Algorithmic Culture

Communism Exposed:East and West

Play Episode Listen Later Jan 27, 2025 7:52


Voice-Over-Text: Pandemic Quotables
The Demoralizing Downward Spiral of Algorithmic Culture

Voice-Over-Text: Pandemic Quotables

Play Episode Listen Later Jan 27, 2025 7:52


Communism Exposed:East & West(PDF)
The Demoralizing Downward Spiral of Algorithmic Culture

Communism Exposed:East & West(PDF)

Play Episode Listen Later Jan 27, 2025 7:52


Pandemic Quotables
The Demoralizing Downward Spiral of Algorithmic Culture

Pandemic Quotables

Play Episode Listen Later Jan 27, 2025 7:52


Show Me The Money Club
More Algorithmic Discriminations Being Investigated On Uber

Show Me The Money Club

Play Episode Listen Later Jan 22, 2025 93:44


Welcome to the Show Me The Money Club live show with Sergio and Chris, Tuesdays at 6pm EST / 3pm PST.

The Burn Bag Podcast
BEST OF: Is TikTok a Threat? Data Sovereignty, Algorithmic Influence, and the China Factor with Lindsay Gorman, Senior Fellow at GMF Tech

The Burn Bag Podcast

Play Episode Listen Later Jan 19, 2025 54:04


RE-RELEASE: This episode was originally released in April 2024. The TikTok ban took effect late Saturday night, but may be revoked by President-elect Trump.

This week, A'ndre is joined by Lindsay Gorman, the Managing Director & Senior Fellow at the German Marshall Fund's GMF Tech, to delve into the controversies surrounding TikTok and its implications for national security. Lindsay sheds light on ByteDance, the company behind TikTok, and discusses the concerns surrounding its data storage practices. A'ndre and Lindsay explore the concept of data sovereignty and discuss whether China can access ByteDance's data at will, and why it's different from how the U.S. Government engages with U.S.-based social media companies. Lindsay outlines the types of user data TikTok gathers, and touches upon how China can exploit this collected data. The conversation extends to China's history of leveraging social media platforms for targeting dissenters and the workings of TikTok's algorithms in content recommendation -- particularly with regards to misinformation and polarization. Lindsay offers insights into the likelihood of a TikTok divestiture (and why it's not a ban), legal challenges it might face, and the possibility of a U.S.-based firm acquiring TikTok. The discussion concludes with an examination of China's reaction to the scrutiny, and what Lindsay sees as the biggest myths surrounding TikTok.

CORRECTION: A'ndre referenced a dispute between the FBI and Apple, incorrectly attributing it to the Boston Bombing investigation, when in actuality it was the 2015 San Bernardino Terror Attack.

CDT Tech Talks
Tech Talk: Talking Tech with Umang Bhatt on Algorithmic Resignation

CDT Tech Talks

Play Episode Listen Later Jan 13, 2025 30:43


In today's episode, we tackle a fascinating question: What happens when an AI system deployed by a company decides to "resign"—stopping its recommendations or restricting access to its outputs? Can such actions help mitigate reputational or legal risks for organizations? To help us explore this, we're joined by Dr. Umang Bhatt, Assistant Professor and Faculty Fellow at the Center for Data Science at New York University, CDT Non-Resident Fellow, and co-author of the paper When Should Algorithms Resign?: A Proposal for AI Governance, which delves into this thought-provoking concept.

Algorithms + Data Structures = Programs
Episode 216: Programming Paradigms and Algorithmic Thinking

Algorithms + Data Structures = Programs

Play Episode Listen Later Jan 10, 2025 29:57


In this episode, Conor and Ben chat about programming paradigms, algorithms, and much more!

Link to Episode 216 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)

Socials:
• ADSP: The Podcast: Twitter
• Conor Hoekstra: Twitter | BlueSky | Mastodon
• Ben Deane: Twitter | BlueSky

Show Notes (Date Generated: 2024-12-16; Date Released: 2025-01-10):
• SICP - Structure and Interpretation of Computer Programs
• C++Now 2019 - Algorithm Intuition
• C++98 std::adjacent_difference
• C++23 std::views::adjacent_transform
• Haskell mapAdjacent
• C++98 std::partial_sum
• BQN ⌾ (under)
• Design Patterns (Gang of Four)
• "'tag_invoke' - An Actually Good Way to Do Customization Points" - Gašper Ažman
• Dyalog APL Tatin
• Dyalog APL Link

Intro Song Info:
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
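As a minimal illustrative sketch (not drawn from the episode itself), the two C++98 algorithms linked in the show notes are inverses of one another: std::adjacent_difference recovers the step-by-step increments of a running total, and std::partial_sum rebuilds the running total from those increments.

    // Illustrative only: shows the relationship between two standard algorithms
    // referenced in the show notes (std::adjacent_difference and std::partial_sum).
    #include <numeric>
    #include <vector>
    #include <iostream>

    int main() {
        std::vector<int> prefix_sums{1, 3, 6, 10};   // running totals
        std::vector<int> deltas(prefix_sums.size());

        // adjacent_difference recovers the original increments: {1, 2, 3, 4}
        std::adjacent_difference(prefix_sums.begin(), prefix_sums.end(), deltas.begin());

        // partial_sum inverts that, rebuilding the running totals: {1, 3, 6, 10}
        std::vector<int> rebuilt(deltas.size());
        std::partial_sum(deltas.begin(), deltas.end(), rebuilt.begin());

        for (int x : rebuilt) std::cout << x << ' ';
        std::cout << '\n';
    }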

Making Contact
Decoding Algorithmic Racism with Dr. Safiya Umoja Noble

Making Contact

Play Episode Listen Later Dec 25, 2024 29:30


On this week's episode, we dive into the hidden biases of the digital age with Dr. Safiya Umoja Noble, author of the groundbreaking book Algorithms of Oppression. Dr. Noble unpacks how search engines, often seen as neutral tools, can reinforce harmful stereotypes and limit access to critical knowledge. Join us as we explore the forces shaping our digital experiences and discuss the urgent need for accountability in technology.

Featuring: Dr. Safiya U. Noble is the David O. Sears Presidential Endowed Chair of Social Sciences and Professor of Gender Studies, African American Studies, and Information Studies at the University of California, Los Angeles (UCLA). She is the Director of the Center on Race & Digital Justice and Co-Director of the Minderoo Initiative on Tech & Power at the UCLA Center for Critical Internet Inquiry (C2i2). She currently serves as Interim Director of the UCLA DataX Initiative, leading work in critical data studies for the campus.

Making Contact Team:
Episode Host: Lucy Kang
Producers: Anita Johnson, Salima Hamirani, Amy Gastelum, and Lucy Kang
Executive Director: Jina Chung
Editor: Adwoa Gyimah-Brempong
Engineer: Jeff Emtman
Digital Media Marketing: Lissa Deonarain

Music credits: Xylo-Ziko - Phase 2; Audiobinger - The Garden State

Learn More: Dr. Safiya Noble | Algorithms of Oppression | Watch Dr. Noble discuss the themes of her book in this lecture.

Making Contact is an award-winning, nationally syndicated radio show and podcast featuring narrative storytelling and thought-provoking interviews. We cover the most urgent issues of our time and the people on the ground building a more just world.

The Cognitive Crucible
#212 Libby Lange on Algorithmic Cognitive Warfare

The Cognitive Crucible

Play Episode Listen Later Dec 17, 2024 40:32


The Cognitive Crucible is a forum that presents different perspectives and emerging thought leadership related to the information environment. The opinions expressed by guests are their own, and do not necessarily reflect the views of or endorsement by the Information Professionals Association. During this episode, Libby Lange discusses her recent article: Algorithmic Cognitive Warfare: The Next Frontier in China's Quest for Global Influence.

Recording Date: 9 Dec 2024

Research Question: Libby Lange suggests an interested student or researcher: take the concept of Algorithmic Cognitive Warfare from the hypothetical and theoretical into the real world and find evidence of Chinese researchers who are actually retrieving data from data brokers; investigate how Russia is pursuing adjacent or complementary Algorithmic Cognitive Warfare capabilities.

Resources:
• Cognitive Crucible podcast episodes mentioned: #210 Paul Groestad on Cognitive Warfare; #187 Randy Rosin on Reflexive Control
• Algorithmic Cognitive Warfare: The Next Frontier in China's Quest for Global Influence by Libby Lange
• Decoding China's AI-Powered ‘Algorithmic Cognitive Warfare' by Libby Lange
• Special Competitive Studies Project (SCSP)
• Active Measures by Thomas Rid
• Link to full show notes and resources

Guest Bio: Libby Lange is a Director of Intelligence at the Special Competitive Studies Project. Prior to SCSP, Libby worked as an Intel Analyst at Graphika, where she focused on Chinese state-linked influence operations and public health misinformation. Prior to Graphika, she served as a speechwriter and communications manager for Taiwanese President Tsai Ing-wen, accompanying the President on multiple state visits. Libby holds an M.A. in Global Affairs from Yale University and a B.A. in Political Science from National Taiwan University.

About: The Information Professionals Association (IPA) is a non-profit organization dedicated to exploring the role of information activities, such as influence and cognitive security, within the national security sector and helping to bridge the divide between operations and research. Its goal is to increase interdisciplinary collaboration between scholars and practitioners and policymakers with an interest in this domain. For more information, please contact us at communications@information-professionals.org. Or, connect directly with The Cognitive Crucible podcast host, John Bicknell, on LinkedIn.

Disclosure: As an Amazon Associate, 1) IPA earns from qualifying purchases, 2) IPA gets commissions for purchases made through links in this post.

ProjectME with Tiffany Carter – Entrepreneurship & Millionaire Mindset
The Cash Machine Formula: Fine-Tuning Your Content for Profit EP665

ProjectME with Tiffany Carter – Entrepreneurship & Millionaire Mindset

Play Episode Listen Later Dec 4, 2024 57:08 Transcription Available


**LAST CHANCE** to access my Once a Year Black Friday Sale! Over $2800 in free luxe gifts when you join my famous ProjectME Posse Business & Money Coaching Membership (Absolute Last Chance) >> Get my new In The Abundance Zone 90-day guided journal + planner, my Successful Launch Formula Masterclass, my Ideal Client Magnifier System, my Abundance Shower & Bath Guided Meditations, plus so much more when you join today!!!!!

FREE GIFT: Your Season of Abundance Guided Walking Meditation Series

Subscribe to Tiffany's FREE Weekly Digest, The Secret Posse

Tired of spending hours creating content that helps people, but doesn't pay the bills? In this high-impact episode, I'm breaking down the exact strategy to transform your content into a profit-generating powerhouse. This is designed to get you results whether you are a beginning content creator, have a long-standing business you want to monetize online, or you're making some money… but want to learn how to make big money online.

Key Insights:
• Turn followers into high-ticket clients
• Content monetization strategies
• Magnetic messaging techniques
• Algorithmic content design
• Profit-driven content engineering

What You'll Learn:
• Crafting content that sells
• Positioning your expertise
• Converting followers to customers
• Scalable content systems
• Algorithmic content design

CONNECT WITH TIFF:
Tiffany on Instagram @projectme_with_tiffany
Tiffany on TikTok @projectme_with_tiffany
ProjectME the Podcast on YouTube: ProjectME TV

Emphasis Added
RealPage Litigation and Algorithmic Decisionmaking

Emphasis Added

Play Episode Listen Later Nov 27, 2024 56:59


In the third episode of Season 6, we sit down with University of Houston Law Center Professor Nikolas Guggenberger and current UHLC 3L, Jake Evinger. As is customary, Emphasis Added hosts Graysen Mechler and Geoffrey Okolo begin the episode by exploring Professor Guggenberger's journey to becoming a lawyer as well as his unique journey from Germany to the United States. The episode then takes a slight detour – exploring what common law signifies for countries like Germany, which are traditionally considered civil law systems.

Returning to the episode's core topic, algorithms, the guests provide a primer on algorithms and their development over time. They discuss their role in decision-making, the influence of artificial intelligence and machine learning, and the factors that make certain markets particularly suited to algorithmic use. The discussion then pivots to the Department of Justice's lawsuit against RealPage, examining the case's background, allegations of price collusion, and the implications of algorithmic decision-making in rental markets. Professor Guggenberger and Jake provide insights into the origins of RealPage's data, the challenges of regulating algorithms, and the potential remedies available to curb price collaboration.

Tune in for great conversation, and to learn a bit more about algorithmic decision making!

Subscribe to the Houston Law Review at the link below: https://uhlc.wufoo.com/forms/mkzu7j60z0ytjk/
To get a mailing or electronic subscription to the Houston Law Review, click here. For more Emphasis Added content, follow us on Instagram and check out our video content on YouTube!

FedSoc Events
Practice Groups: Data, Algorithmic Integrity and AI

FedSoc Events

Play Episode Listen Later Nov 26, 2024 92:10


Much has been made of the promise and concerns around AI technical advances, and guardrails that might be considered to reduce the downside of opaque quasi-algorithmic outcomes associated with current large language model approaches. This panel will examine the current AI regulatory debate and explore how current and proposed corporate and governmental AI is being shaped and normed to provide outputs that reinforce “mainstream” economic, ideological and operational norms, with the risk of vested interests defining such norms. From national security applications, autonomous vehicle safety decisions, economic predictions, pareto-optimal and social benefit determinations, and health care deployment, to how you are entertained and educated, can we control what most of us can't understand?

Featuring:
• Mr. Stewart A. Baker, Of Counsel, Steptoe & Johnson LLP
• Mr. Christopher Ekren, Global Technology Counsel, Sony Corporation of America
• Ms. Victoria Luxardo Jeffries, Director, United States Public Policy, Meta
• Prof. John C. Yoo, Emanuel S. Heller Professor of Law, University of California at Berkeley; Nonresident Senior Fellow, American Enterprise Institute; Visiting Fellow, Hoover Institution
• Moderator: Hon. Stephen Alexander Vaden, Judge, United States Court of International Trade

DarshanTalks

Subscriber-only episode.

In this episode, we're diving into the intersection of technology and healthcare, specifically the role of Artificial Intelligence (AI) in clinical trials. As a Food and Drug lawyer, Darshan has seen firsthand how AI is revolutionizing drug development and testing. The FDA is closely monitoring this shift, recognizing the potential of AI to enhance patient outcomes, improve trial efficiency, and reduce costs.

However, it's not all smooth sailing. AI can help identify the right patients for specific treatments, but it's crucial to address potential biases in AI algorithms, which could affect diversity in clinical trials. AI can also streamline trial processes, but the “black box” nature of how decisions are made raises concerns about transparency and fairness. Cost reduction is often touted, yet we're still waiting to see if AI will truly lower expenses in the long run.

Data privacy and security are also big considerations. With AI relying on massive data sets, how can we ensure patient privacy is protected? And who truly owns the data? Algorithmic bias is another serious concern—especially when it comes to underrepresented patient populations. The FDA is working on issuing guidance for AI in clinical trials, but we're still in the early stages. They are encouraging collaboration between industry, academia, and other stakeholders to develop best practices. Plus, the FDA is investing in research to better understand both the benefits and risks of AI in healthcare.

In the end, AI's potential is enormous, but we need to be careful about how it's implemented. What do you think are the biggest challenges when using AI in clinical trials? Drop your thoughts in the comments!

And if you're a drug or medical device company looking to leverage AI, reach out to us at Kulkarni Law Firm for legal guidance through the complex regulatory landscape. Visit our website for more info.

KPFA - Project Censored
Algorithmic Literacy for Journalists / A New Movement Media Alliance

KPFA - Project Censored

Play Episode Listen Later Nov 8, 2024 42:22


Mickey's first guest this week is Project Censored's Associate Director, Andy Lee Roth. Roth is a 2024-25 Reynolds Journalism Institute Fellow, where he is developing an “algorithmic literacy” toolkit for journalists. He explains why today's journalists need a basic understanding of the algorithms used by internet and social media tech giants to better serve the public. Issues around horse-race poll coverage, shadow banning, and algorithmic gatekeeping are discussed. In the second half of the show, Maya Schenwar of Truthout and Lara Witt of Prism introduce the organization they co-founded, the Movement Media Alliance. They explain why social-justice-oriented media outlets should work together, both to enhance their impact and to improve working conditions for journalists in independent media. GUESTS: Andy Lee Roth is Associate Director of Project Censored, co-editor of its state-of-the-free-press yearbooks, co-author of The Media and Me, and coordinator of its Campus Affiliates Program. His work on algorithmic literacy for journalists is supported by a fellowship from the Reynolds Journalism Institute at the University of Missouri. Maya Schenwar is Editor-At-Large for Truthout, and writes extensively on prison and policing issues. Lara Witt is Editor-In-Chief at Prism Reports. The post Algorithmic Literacy for Journalists / A New Movement Media Alliance appeared first on KPFA.

2 Girls 1 Podcast
25 How Patreon Fosters Creative Diversity in an Algorithmic World | Hayley Rosenblum

2 Girls 1 Podcast

Play Episode Listen Later Oct 30, 2024 65:54


Before becoming Patreon's Head of Online Community, Hayley Rosenblum was no stranger to fan funding. She had worked closely with musicians in their pivot away from record labels, and toward the Internet - where fandom reigns supreme. These days, she helps creators large and small by listening to their needs and communicating pain points back to the Patreon mothership. Many artist conversations have changed the platform, often in subtle and unexpected ways. But even when her work seems "invisible," she takes great pride in empowering creators to do what they do best: make more amazing stuff for the people who love it. This week, Hayley and Matt chat about her sage advice for starting a Patreon, the surprising ways educators use the platform, the "death of the follower," why she sometimes feels like an Internet "piñata," and that time Neil Young convinced her dad that she's pretty cool. If you're a Patreon creator, join their official Discord community! https://discord.com/invite/patreon This show is made possible by listener support: https://www.patreon.com/influencepod Listen & subscribe wherever you get podcasts:

Thriving on Overload
Jason Burton on LLMs and collective intelligence, algorithmic amplification, AI in deliberative processes, and decentralized networks (AC Ep68)

Thriving on Overload

Play Episode Listen Later Oct 30, 2024 36:12


The post Jason Burton on LLMs and collective intelligence, algorithmic amplification, AI in deliberative processes, and decentralized networks (AC Ep68) appeared first on amplifyingcognition.

Bitcoin.Review
BR084: Nostr Rising 07 - Long Form Content ft. DK & Miljan

Bitcoin.Review

Play Episode Listen Later Oct 28, 2024 53:22 Transcription Available


I'm joined by guests DK & Miljan to discuss long form content.

Chapters:
- (00:00:00) Introduction to long form content on Nostr
- (00:03:15) Challenges of content distribution on Nostr
- (00:06:42) Nostr as a decentralized platform for creators
- (00:11:20) The role of relays in content storage and competition
- (00:15:47) Building user-centric tools for long-form content
- (00:19:33) Balancing short-form and long-form content on Nostr
- (00:22:58) Algorithmic feeds and user-controlled algorithms
- (00:28:40) Primal's focus on algorithm marketplace development
- (00:34:15) Human-curated algorithms and user-selected feeds
- (00:39:48) Ethical advertising and user choice in ads
- (00:43:22) Importance of competition in relays, clients, and algorithms
- (00:46:58) Creating customizable algorithms for users
- (00:48:09) Potential for ethical, non-intrusive advertising
- (00:50:26) Nostr as a global town square for nuanced discourse
- (00:51:34) Learning from torrents: the importance of better UX
- (00:52:24) Closing thoughts on Nostr's future and appeal to creators

Links & Contacts:
Website: https://bitcoin.review/
Podcast Substack: https://substack.bitcoin.review/
Twitter: https://twitter.com/bitcoinreviewhq
NVK Twitter: https://twitter.com/nvk
Telegram: https://t.me/BitcoinReviewPod
Email: producer@coinkite.com
Nostr & LN: ⚡nvk@nvk.org (not an email!)
Full show notes: https://bitcoin.review/podcast/episode-84

Macro n Cheese
Ep 300 - Algorithmic Warfare with Andy Lee Roth

Macro n Cheese

Play Episode Listen Later Oct 26, 2024 59:02 Transcription Available


**Milestone 300!** We dedicate this, the 300th weekly episode, to our loyal listeners, and we wish to recognize the valiant work of our underpaid podcast crew – correction: our unpaid podcast crew – who have put in thousands of hours editing audio, correcting transcripts, writing show notes, creating artwork, and posting promos on social media. To have the next 300 episodes delivered to your inbox as soon as they're released, subscribe at realprogressives.substack.com

Project Censored has been a valuable resource for Macro N Cheese. This week, sociologist Andy Lee Roth talks with Steve about information gatekeeping by big tech through their use of AI algorithms to stifle diverse voices. The discussion highlights historical and current instances of media censorship and looks at the monopolization of news distribution by corporate giants like Google, Facebook, and Twitter. In an economic system that is fully privatized, trustworthy journalism is another casualty. News, which should be treated as a public good, is anything but.

Andy Lee Roth is associate director of Project Censored, a nonprofit that promotes independent journalism and critical media literacy education. He is the coauthor of The Media and Me (2022), the Project's guide to critical media literacy for young people, and “Beyond Fact-Checking” (2024), a teaching guide about news frames and their power to shape our understanding of the world. Roth holds a PhD in sociology from the University of California, Los Angeles, and a BA in sociology and anthropology from Haverford College. His research and writing have been published in a variety of outlets, including Index on Censorship, In These Times, YES! Magazine, The Progressive, Truthout, Media Culture & Society, and the International Journal of Press/Politics. During 2024-2025, his work on Algorithmic Literacy for Journalists is supported by a fellowship from the Reynolds Journalism Institute.

projectcensored.org
@ProjectCensored on Twitter

Voices of VR Podcast – Designing for Virtual Reality
#1476: UploadVR Editor Ian Hamilton’s Deep Reflections: AR vs VR, Ethical Dilemmas, & Future of Meta’s Algorithmic Realities

Voices of VR Podcast – Designing for Virtual Reality

Play Episode Listen Later Oct 6, 2024 107:32


I interviewed Ian Hamilton, Editor at UploadVR.com, at Meta Connect 2024. Here's the article about the twin sisters who play Walkabout Mini Golf VR together that Hamilton references. See more context in the rough transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

AXRP - the AI X-risk Research Podcast
37 - Jaime Sevilla on AI Forecasting

AXRP - the AI X-risk Research Podcast

Play Episode Listen Later Oct 4, 2024 104:25


Epoch AI is the premier organization that tracks the trajectory of AI - how much compute is used, the role of algorithmic improvements, the growth in data used, and when the above trends might hit an end. In this episode, I speak with the director of Epoch AI, Jaime Sevilla, about how compute, data, and algorithmic improvements are impacting AI, and whether continuing to scale can get us AGI.

Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
The transcript: https://axrp.net/episode/2024/10/04/episode-37-jaime-sevilla-forecasting-ai.html

Topics we discuss, and timestamps:
0:00:38 - The pace of AI progress
0:07:49 - How Epoch AI tracks AI compute
0:11:44 - Why does AI compute grow so smoothly?
0:21:46 - When will we run out of computers?
0:38:56 - Algorithmic improvement
0:44:21 - Algorithmic improvement and scaling laws
0:56:56 - Training data
1:04:56 - Can scaling produce AGI?
1:16:55 - When will AGI arrive?
1:21:20 - Epoch AI
1:27:06 - Open questions in AI forecasting
1:35:21 - Epoch AI and x-risk
1:41:34 - Following Epoch AI's research

Links for Jaime and Epoch AI:
Epoch AI: https://epochai.org/
Machine Learning Trends dashboard: https://epochai.org/trends
Epoch AI on X / Twitter: https://x.com/EpochAIResearch
Jaime on X / Twitter: https://x.com/Jsevillamol

Research we discuss:
Training Compute of Frontier AI Models Grows by 4-5x per Year: https://epochai.org/blog/training-compute-of-frontier-ai-models-grows-by-4-5x-per-year
Optimally Allocating Compute Between Inference and Training: https://epochai.org/blog/optimally-allocating-compute-between-inference-and-training
Algorithmic Progress in Language Models [blog post]: https://epochai.org/blog/algorithmic-progress-in-language-models
Algorithmic progress in language models [paper]: https://arxiv.org/abs/2403.05812
Training Compute-Optimal Large Language Models [aka the Chinchilla scaling law paper]: https://arxiv.org/abs/2203.15556
Will We Run Out of Data? Limits of LLM Scaling Based on Human-Generated Data [blog post]: https://epochai.org/blog/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data
Will we run out of data? Limits of LLM scaling based on human-generated data [paper]: https://arxiv.org/abs/2211.04325
The Direct Approach: https://epochai.org/blog/the-direct-approach

Episode art by Hamish Doodles: hamishdoodles.com
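To make the scaling discussion concrete, here is a minimal, illustrative Python sketch (not Epoch AI's code or methodology): it extrapolates a frontier training-compute budget under the roughly 4-5x-per-year growth trend reported in the Epoch post linked above, then splits each budget into a rough compute-optimal parameter/token mix using the Chinchilla paper's rule of thumb of about 20 training tokens per parameter, with total compute approximated as C ≈ 6·N·D FLOPs. The starting budget and exact growth factor are assumptions chosen for illustration.

```python
# Minimal, illustrative sketch (not Epoch AI code): extrapolate frontier
# training compute at ~4.5x/year and split each budget into a roughly
# compute-optimal (parameters, tokens) pair via the Chinchilla heuristic
# of ~20 training tokens per parameter, with C ~= 6 * N * D FLOPs.

def projected_compute(c0_flops: float, growth_per_year: float, years: float) -> float:
    """Extrapolate a training-compute budget forward by `years`."""
    return c0_flops * growth_per_year ** years


def chinchilla_split(c_flops: float, tokens_per_param: float = 20.0) -> tuple[float, float]:
    """Return (params, tokens) that roughly exhaust budget C under C ~= 6*N*D.

    With D ~= tokens_per_param * N, C ~= 6 * tokens_per_param * N**2,
    so N ~= sqrt(C / (6 * tokens_per_param)).
    """
    n = (c_flops / (6.0 * tokens_per_param)) ** 0.5
    d = tokens_per_param * n
    return n, d


if __name__ == "__main__":
    c_start = 5e25  # assumed, illustrative frontier budget in FLOPs (not a measured value)
    for years_ahead in range(4):
        c = projected_compute(c_start, growth_per_year=4.5, years=years_ahead)
        n, d = chinchilla_split(c)
        print(f"+{years_ahead}y: C = {c:.2e} FLOPs, ~{n:.2e} params, ~{d:.2e} tokens")
```

Running the sketch simply prints how quickly a fixed growth multiplier compounds, and how the compute-optimal model size and dataset size scale roughly with the square root of the budget; whether those trends continue, and whether scaling alone gets to AGI, is exactly what the episode debates.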

New Books Network
Gerald Sim, "Screening Big Data: Films That Shape Our Algorithmic Literacy" (Routledge, 2024)

New Books Network

Play Episode Listen Later Oct 3, 2024 68:10


Screening Big Data: Films that Shape Our Algorithmic Literacy (Routledge, 2024) examines the influence of key films on public understanding of big data and the algorithmic systems that structure our digitally mediated lives. From star-powered blockbusters to civic-minded documentaries positioned to facilitate weighty debates about artificial intelligence, these texts frame our discourse and mediate our relationship to technology. Above all, they shape society's ability to regulate AI and to navigate big tech's political and economic manoeuvres to achieve market dominance and regulatory capture. Foregrounding data politics with close readings of key films like Moneyball, Minority Report, The Social Dilemma, and Coded Bias, Dr. Gerald Sim reveals compelling ways in which films and tech industry–adjacent media define apprehension of AI. With the mid-2010s techlash in danger of fizzling out, Screening Big Data explores the relationship between this resistance and cultural infrastructure while highlighting the urgent need to refocus attention onto how technocentric media occupy the public imagination. This interview was conducted by Dr. Miranda Melcher, whose new book focuses on post-conflict military integration, understanding treaty negotiation and implementation in civil war contexts, with qualitative analysis of the Angolan and Mozambican civil wars. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

New Books in Film
Gerald Sim, "Screening Big Data: Films That Shape Our Algorithmic Literacy" (Routledge, 2024)

New Books in Film

Play Episode Listen Later Oct 3, 2024 68:10


Screening Big Data: Films that Shape Our Algorithmic Literacy (Routledge, 2024) examines the influence of key films on public understanding of big data and the algorithmic systems that structure our digitally mediated lives. From star-powered blockbusters to civic-minded documentaries positioned to facilitate weighty debates about artificial intelligence, these texts frame our discourse and mediate our relationship to technology. Above all, they shape society's ability to regulate AI and to navigate big tech's political and economic manoeuvres to achieve market dominance and regulatory capture. Foregrounding data politics with close readings of key films like Moneyball, Minority Report, The Social Dilemma, and Coded Bias, Dr. Gerald Sim reveals compelling ways in which films and tech industry–adjacent media define apprehension of AI. With the mid-2010s techlash in danger of fizzling out, Screening Big Data explores the relationship between this resistance and cultural infrastructure while highlighting the urgent need to refocus attention onto how technocentric media occupy the public imagination. This interview was conducted by Dr. Miranda Melcher, whose new book focuses on post-conflict military integration, understanding treaty negotiation and implementation in civil war contexts, with qualitative analysis of the Angolan and Mozambican civil wars. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/film

Machine Learning Street Talk
Ben Goertzel on "Superintelligence"

Machine Learning Street Talk

Play Episode Listen Later Oct 1, 2024 97:18


Ben Goertzel discusses AGI development, transhumanism, and the potential societal impacts of superintelligent AI. He predicts human-level AGI by 2029 and argues that the transition to superintelligence could happen within a few years after. Goertzel explores the challenges of AI regulation, the limitations of current language models, and the need for neuro-symbolic approaches in AGI research. He also addresses concerns about resource allocation and cultural perspectives on transhumanism.

TOC:
[00:00:00] AGI Timeline Predictions and Development Speed
[00:00:45] Limitations of Language Models in AGI Development
[00:02:18] Current State and Trends in AI Research and Development
[00:09:02] Emergent Reasoning Capabilities and Limitations of LLMs
[00:18:15] Neuro-Symbolic Approaches and the Future of AI Systems
[00:20:00] Evolutionary Algorithms and LLMs in Creative Tasks
[00:21:25] Symbolic vs. Sub-Symbolic Approaches in AI
[00:28:05] Language as Internal Thought and External Communication
[00:30:20] AGI Development and Goal-Directed Behavior
[00:35:51] Consciousness and AI: Expanding States of Experience
[00:48:50] AI Regulation: Challenges and Approaches
[00:55:35] Challenges in AI Regulation
[00:59:20] AI Alignment and Ethical Considerations
[01:09:15] AGI Development Timeline Predictions
[01:12:40] OpenCog Hyperon and AGI Progress
[01:17:48] Transhumanism and Resource Allocation Debate
[01:20:12] Cultural Perspectives on Transhumanism
[01:23:54] AGI and Post-Scarcity Society
[01:31:35] Challenges and Implications of AGI Development

New! PDF Show notes: https://www.dropbox.com/scl/fi/fyetzwgoaf70gpovyfc4x/BenGoertzel.pdf?rlkey=pze5dt9vgf01tf2wip32p5hk5&st=svbcofm3&dl=0

Refs:
00:00:15 Ray Kurzweil's AGI timeline prediction, Ray Kurzweil, https://en.wikipedia.org/wiki/Technological_singularity
00:01:45 Ben Goertzel: SingularityNET founder, Ben Goertzel, https://singularitynet.io/
00:02:35 AGI Conference series, AGI Conference Organizers, https://agi-conf.org/2024/
00:03:55 Ben Goertzel's contributions to AGI, Wikipedia contributors, https://en.wikipedia.org/wiki/Ben_Goertzel
00:11:05 Chain-of-Thought prompting, Subbarao Kambhampati, https://arxiv.org/abs/2405.04776
00:11:35 Algorithmic information content, Pieter Adriaans, https://plato.stanford.edu/entries/information-entropy/
00:12:10 Turing completeness in neural networks, Various contributors, https://plato.stanford.edu/entries/turing-machine/
00:16:15 AlphaGeometry: AI for geometry problems, Trieu, Li, et al., https://www.nature.com/articles/s41586-023-06747-5
00:18:25 Shane Legg and Ben Goertzel's collaboration, Shane Legg, https://en.wikipedia.org/wiki/Shane_Legg
00:20:00 Evolutionary algorithms in music generation, Yanxu Chen, https://arxiv.org/html/2409.03715v1
00:22:00 Peirce's theory of semiotics, Charles Sanders Peirce, https://plato.stanford.edu/entries/peirce-semiotics/
00:28:10 Chomsky's view on language, Noam Chomsky, https://chomsky.info/1983____/
00:34:05 Greg Egan's 'Diaspora', Greg Egan, https://www.amazon.co.uk/Diaspora-post-apocalyptic-thriller-perfect-MIRROR/dp/0575082097
00:40:35 'The Consciousness Explosion', Ben Goertzel & Gabriel Axel Montes, https://www.amazon.com/Consciousness-Explosion-Technological-Experiential-Singularity/dp/B0D8C7QYZD
00:41:55 Ray Kurzweil's books on singularity, Ray Kurzweil, https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889
00:50:50 California AI regulation bills, California State Senate, https://sd18.senate.ca.gov/news/senate-unanimously-approves-senator-padillas-artificial-intelligence-package
00:56:40 Limitations of Compute Thresholds, Sara Hooker, https://arxiv.org/abs/2407.05694
00:56:55 'Taming Silicon Valley', Gary F. Marcus, https://www.penguinrandomhouse.com/books/768076/taming-silicon-valley-by-gary-f-marcus/
01:09:15 Kurzweil's AGI prediction update, Ray Kurzweil, https://www.theguardian.com/technology/article/2024/jun/29/ray-kurzweil-google-ai-the-singularity-is-nearer

Pathfinder
Algorithmic Debris Management, with Chiara Manfletti (CEO of Neuraspace)

Pathfinder

Play Episode Listen Later Oct 1, 2024 40:28


In this week's Pathfinder pod, Chiara Manfletti, CEO of Neuraspace and former President of the Portuguese National Space Agency, discusses the growing importance of space situational awareness (SSA) and space traffic management. Neuraspace is a Portuguese startup developing a software platform that provides satellite operators with risk assessments, maneuvering advice, and insights on space debris.

Chiara explains Neuraspace's mission to tackle the challenges of space debris and why it's essential to develop better tools for managing space traffic. She also shares the story behind Neuraspace's founding, the company's growth, and its innovative approach to automating satellite operations.

We also discuss:
- The threat of space debris and its long-term implications
- How Neuraspace integrates multiple data sources
- The role of space situational awareness in the future of autonomous spacecraft
- Differences in commercial and government customer needs
- The long-term vision for making space a safer, more sustainable place for satellites and other assets
- And much more...

• Chapters •
00:00 - Intro
00:58 - What is Neuraspace?
02:14 - The founding vision
04:07 - Is space debris an issue?
08:18 - Unnecessary maneuvers
09:40 - Neuraspace's ecosystem
11:17 - Neuraspace's ground-based hardware
12:36 - Challenges acquiring the right data amidst competition
14:17 - Value chain of space situational awareness
15:26 - Benefits of having a company focused on intelligence
17:40 - How Neuraspace predicts collision events
20:22 - Challenges integrating different sources of data
22:00 - Automation and level of control for satellite operators
24:37 - Scaling
26:39 - Catalysts for satellite threat detection
28:24 - Primary customers
28:53 - Expectations of governments vs. commercial clients
30:05 - State of orbital debris globally and how Chiara thinks it'll change
31:12 - Competitors today
32:25 - Revenue model
33:36 - Work Neuraspace does with regulatory bodies
34:20 - Funding
36:07 - Long-term vision for Neuraspace

• Show notes •
Neuraspace's website — https://www.neuraspace.com/
Neuraspace's socials — https://x.com/neuraspace
Chiara's socials — https://x.com/chiaramanfletti
Mo's socials — https://twitter.com/itsmoislam
Payload's socials — https://twitter.com/payloadspace / https://www.linkedin.com/company/payloadspace
Pathfinder archive — Watch: https://www.youtube.com/@payloadspace
Pathfinder archive — Listen: https://pod.payloadspace.com/episodes

• About us •
Pathfinder is brought to you by Payload, a modern space media brand built from the ground up for a new age of space exploration and commercialization. We deliver need-to-know news and insights daily to 19,000+ commercial, civil, and military space leaders. Payload is read by decision-makers at every leading new space company, along with c-suite leaders at all of the aerospace & defense primes. We're also read on Capitol Hill, in the Pentagon, and at space agencies around the world.

Payload began as a weekly email sent to a few friends and coworkers. Today, we're a team distributed across four time zones and two continents, publishing five media properties across multiple platforms:
1) Payload, our flagship daily newsletter, sends M-F @ 9am Eastern
2) Pathfinder publishes weekly on Tuesday mornings (pod.payloadspace.com)
3) Polaris, our weekly policy briefing, publishes weekly on Tuesdays
4) Payload Research, our weekly research and analysis piece, comes out on Wednesdays

You can sign up for all of our publications here: https://payloadspace.com/subscribe/

Business of Bees
AI Dilemma: Can US Legislators Take Action Before It's Too Late?

Business of Bees

Play Episode Listen Later Sep 18, 2024 30:02


Deepfakes. Disinformation. Algorithmic bias. Job displacement. These are just some of the harms legislators and regulators worry about when they think about how to tackle the risks posed by artificial intelligence. The first episodes of this season of UnCommon Law deal with generative AI in the copyright law context, since the technology uses massive amounts of copyright protected work. But while copyright law might be the beginning, there's so much more to the story of generative AI and the law. In this episode, we examine what the government might do to ensure that 21st century life doesn't turn into a dystopian future.

Guests:
Cary Coglianese, director of the Penn Program on Regulation at the University of Pennsylvania Carey Law School
Oma Seddiq, tech policy reporter for Bloomberg Government
Isabel Gottlieb, reporter for Bloomberg Law covering AI and issues impacting corporate legal departments

Learn more about your ad choices. Visit megaphone.fm/adchoices

Data Bytes
GenAI effects on the job market and hiring

Data Bytes

Play Episode Listen Later Sep 5, 2024 31:39


(01:22) Research on skills and technology
(01:47) Changes in job search methods
(02:29) Algorithmic hiring and firm adaptations
(03:25) New roles from technology
(04:54) Ripple effects of technological changes
(06:06) Skating to where the puck is
(07:07) Building future-proof skills
(08:02) AI tools in daily work
(09:00) AI's impact on jobs
(10:08) Mega trends: technology, climate, demographics
(11:17) Testing tools and adapting workflow
(12:44) AI and future of hiring
(13:45) Longer time to hire with tech
(15:34) AI reshaping the labor market
(17:03) Gaining skills for complex roles
(18:20) Turing Trap: AI vs human augmentation
(19:05) Challenges for early career seekers
(20:26) Mentorship and human capital development
(21:41) Updating skills before job transitions
(23:21) Impact of job loss on earnings
(24:52) Career conversations and landscape awareness
(26:31) Advice for young researchers
(27:24) Staying motivated through research

---
Support this podcast: https://podcasters.spotify.com/pod/show/women-in-data/support

CDT Tech Talks
Talking Tech on Algorithmic Disability Determinations

CDT Tech Talks

Play Episode Listen Later Sep 5, 2024 44:35


More and more people turn to quantified health, achievement, and ability measures, such as fitness apps and economic measures of well-being every single day. As part of this trend, medicalized approaches to human health often describe people in terms of statistics and data, sometimes failing to capture more important details. In particular, the quantified approach falls short in describing the needs and rights of disabled people, as seen in lawsuits and case studies involving algorithmic decision-making about disability benefits. Here to talk about algorithmic decision-making and quantification in disability benefits in the United States and India are Vandana Chaudhry, Associate Professor in the Department of Social Work and Disability Studies at the City University of New York who focuses on disability and digital justice in the Global South, and Lydia X.Z. Brown, activist for disability justice, Director of Public Policy at the National Disability Institute, and CDT's very own former policy counsel.

KPFA - UpFront
DOJ Sues RealPage for Algorithmic Price Fixing and Rent Inflation; Plus, Lead Contamination in Oakland Schools; And, the Future of “Recyclable”

KPFA - UpFront

Play Episode Listen Later Aug 28, 2024 59:58


0:08 — Heather Vogell is an investigative reporter with ProPublica.
0:33 — Lisa Song is a reporter on the environment, energy, and climate change for ProPublica.
0:45 — Ashley McBride is a reporter for The Oaklandside covering education equity.

The post DOJ Sues RealPage for Algorithmic Price Fixing and Rent Inflation; Plus, Lead Contamination in Oakland Schools; And, the Future of “Recyclable” appeared first on KPFA.

EXTRA GRAVY
Algorithmic Supporter

EXTRA GRAVY

Play Episode Listen Later Aug 7, 2024 90:54


(02:50) Bana Recap
(13:35) France Ghetto Olympics
(28:20) 100 gigs from Drake
(47:15) Marlon's worst nightmare
(52:55) Brittany Renner is habibi??
(59:30) Joe Rogan special isn't special
(1:02:35) Troy Lanez bottles from jail?
(1:09:30) Matt's IG Deep Dive
(1:22:30) Cardi B again?
(1:26:00) Kehlani's curse

Hosted on Acast. See acast.com/privacy for more information.