POPULARITY
In this episode of Nina’s What’s Trending, the team tackles two stories that spark conversation about identity, ethics, and the future of technology.

Is Negative Attention Better Than None? Would you rather have people talk behind your back or be completely ignored? A new poll reveals that one in seven people would prefer being trash-talked over being forgotten. The team debates the psychology behind attention, self-worth, and the human desire to be remembered. Source: YouGov – One in seven Americans say they'd rather be talked about negatively than not at all

UK Testing AI to Predict Future Crimes: A real-world version of Minority Report is in development in the UK, where authorities are piloting artificial intelligence tools that aim to predict violent crimes before they happen. The conversation explores the ethical implications, privacy concerns, and how close we are to preemptive policing becoming a reality. Source: The Guardian – Police to test AI that predicts crimes before they happen. Additional Reading: BBC – Predictive Policing in the UK

Nina's What's Trending is your daily dose of the hottest headlines, viral moments, and must-know stories from The Jubal Show! From celebrity gossip and pop culture buzz to breaking news and weird internet trends, Nina’s got you covered with everything trending right now. She delivers it with wit, energy, and a touch of humor. Stay in the know and never miss a beat, because if it’s trending, Nina’s talking about it! This is just a tiny piece of The Jubal Show.
You can find every podcast we have, including the full show every weekday, right here ➡︎ https://thejubalshow.com/podcasts The Jubal Show is everywhere, and also these places: Website ➡︎ https://thejubalshow.com Instagram ➡︎ https://instagram.com/thejubalshow X/Twitter ➡︎ https://twitter.com/thejubalshow TikTok ➡︎ https://www.tiktok.com/@the.jubal.show Facebook ➡︎ https://facebook.com/thejubalshow YouTube ➡︎ https://www.youtube.com/@JubalFresh Support the show: https://the-jubal-show.beehiiv.com/subscribe See omnystudio.com/listener for privacy information.
In May he takes over the top post at the St. Gallen city police: Fabian Kühner, previously with the Federal Criminal Police, assumes command, at a time when police work is fundamentally changing. Why the change at the top? How does such a selection process work, and why no internal candidate? What lies ahead for the police in the coming years? Which societal expectations and tensions are emerging? And what role will technologies such as predictive policing, algorithms, or facial recognition play in the police work of the future: more opportunity or more risk? An episode for everyone who wants to know where policing is headed, and what it means to take on responsibility in a highly sensitive field. As a criminal defense lawyer, one gains insight into the most incredible cases and works closely with very different and fascinating people. In the podcast [Auf dem Weg als Anwält:in](https://www.duribonin.ch/podcast), the lawyer [Duri Bonin](https://www.duribonin.ch) tries, together with his conversation partners (accused, convicted, prosecutors, defense lawyers, expert witnesses, victims, the innocent, the guilty ...), to explore what makes them tick, what drives them, and how they experience the legal system. The topics are deeply human. On closer inspection, one finds answers to one's own questions about life and society. The "Auf dem Weg als Anwält:in" podcasts can be heard at https://www.duribonin.ch/podcast/ or on all common platforms
In this episode of Impact Theory with Tom Bilyeu, we dive deep into a thought-provoking conversation with investigative journalist Whitney Webb. As Webb unpacks complex topics such as the intersection of national security, technology, and political influence, she offers a critical perspective on the dynamics shaping our world today. Tom and Whitney explore the roles of elites and technocrats, pondering the balance between liberty and security in a rapidly digitizing society. They discuss how globalism and economic strategies may be manipulating power structures, and delve into the implications of AI and surveillance on civil liberties. If you're curious about the underlying mechanics of political and technological developments and their impact on society, this episode is sure to provide eye-opening insights.

SHOWNOTES
00:00 AI-Induced Cognitive Decline
08:09 Tech-Security Fusion: A Call for Debate
14:07 Critique of US-China AI Strategy
16:47 US-China Commerce and Political Ties
25:00 Think Tanks Influence Global Policies
30:32 Epstein's Role in China Gate
37:33 BCCI: Bank and Criminal Enterprise
42:33 Currency Instability and Digital Dollarization
46:02 Bitcoin's Ideological Shift Debate
55:10 Tech Oligarchs and Market Control
57:24 AI Infrastructure Drive Unveiled
01:04:29 Predictive Policing and AI Concerns
01:09:54 Elite Risk Aversion Strategies
01:16:39 Empowerment Against Fear and Manipulation
01:18:48 Beware of Imposter Accounts

CHECK OUT OUR SPONSORS:
Range Rover: Explore the Range Rover Sport at https://landroverUSA.com
Audible: Sign up for a free 30 day trial at https://audible.com/IMPACTTHEORY
Vital Proteins: Get 20% off by going to https://www.vitalproteins.com and entering promo code IMPACT at checkout.
iTrust Capital: Use code IMPACT when you sign up and fund your account to get a $100 bonus at https://www.itrustcapital.com/tombilyeu NetSuite: Download the CFO's Guide to AI and Machine Learning at https://NetSuite.com/THEORY ********************************************************************** What's up, everybody? It's Tom Bilyeu here: If you want my help... STARTING a business: join me here at ZERO TO FOUNDER SCALING a business: see if you qualify here. Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here. ********************************************************************** If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook —a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you. ********************************************************************** Join me live on my Twitch stream. I'm live daily from 6:30 to 8:30 am PT at www.twitch.tv/tombilyeu ********************************************************************** LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory ********************************************************************** FOLLOW TOM: Instagram: https://www.instagram.com/tombilyeu/ Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en Twitter: https://twitter.com/tombilyeu YouTube: https://www.youtube.com/@TomBilyeu Learn more about your ad choices. Visit megaphone.fm/adchoices
Predictive policing is an AI use case in law enforcement wherein biometric technologies are used to “predict” crimes. Deployment of such technologies has increased considerably across countries in the past decade, and they are thought to be a way to reduce crime. However, while their effectiveness is debatable, increased surveillance and reduced privacy for all citizens are an immediate outcome, which brings additional risks with it. In this episode, Anwesha Sen speaks to Disha Verma about how these technologies are being deployed in India and their various pitfalls. All Things Policy is a daily podcast on public policy brought to you by the Takshashila Institution, Bengaluru. The Takshashila Institution has designed the 'Technopolitik: A Technology Geopolitics Survey' to understand and assess what people think about how India should navigate high-tech geopolitics. Please take this 5-minute survey at the following link: https://bit.ly/technopolitik_survey Find out more on our research and other work here: https://takshashila.org.in/... Check out our public policy courses here: https://school.takshashila.org.in
A commentary by Felix Feistel. Since the fourth version of the AI language assistant ChatGPT launched at the end of 2022, the debate around artificial intelligence has picked up speed again. Besides OpenAI, the company behind ChatGPT, a host of other tech corporations are active in the AI business, including Microsoft, Google, Nvidia, and Amazon. These corporations have long since begun equipping their systems and devices with artificial intelligence models. Microsoft, for instance, plans to implement its AI model "Recall," which is to take a screenshot every 15 seconds and use AI to process the data collected this way, with the promise of making it easier to restore the system in the event of a failure. The data, so the promise goes, will only be stored locally. The question of how that squares with restoring the system after a failure is hard to avoid. The more plausible suspicion is that the AI is to be used for blanket surveillance of users. The same goes for Google's systems. The corporation is already training an AI that is to monitor human sounds in order to detect illnesses. (1) In this way, the AI is supposed to help "manage" illnesses. And what that can mean was impressively demonstrated to us during the years of the pseudo-pandemic. In the future, AI can be used to impose disease-prevention measures not collectively, but at the individual level. To that end, it is already being trained for blanket surveillance reaching into the private sphere. And that naturally also awakens desires on the part of the state. The government of the oh-so-libertarian president Javier Milei has already founded a police unit that is supposed to be able to prevent crimes in advance through blanket AI-based surveillance, including of social networks.
(2) Predictive policing, also called pre-crime, is the name of this kind of AI-based crime prevention, and not for nothing does it recall the dystopian film "Minority Report." The EU, too, passed an AI act a few months ago, thereby, in its own words, "regulating" the development and use of AI. (3) However, it did not rule out the use of AI for blanket mass surveillance, for example via facial recognition software in surveillance cameras. (4) The use of such data is supposed to be possible only in serious cases, for example on suspicion of terrorism. But a case of terrorism is easily constructed, as the recurring political confusion of terms shows time and again. When state agencies have an interest, it is usually also a military one. And so the use of AI is already being tested for such purposes as well. The company ShieldAI, for example, is developing AI drone pilots that can fly drones autonomously... Continue reading here: https://apolut.net/im-kafig-der-kunstlichen-dummheit-von-felix-feistel/ Hosted on Acast. See acast.com/privacy for more information.
Another win for the Institute for Justice. https://ij.org/
This week's topics:
St. Pete Council votes to approve Rays Stadium Bonds
Many Floridians in Trump Administration Choices
Florida is Ground Zero in high cost of rents
TECO gets thumbs up for rate increase
Pasco Sheriff settles lawsuit over predictive policing
With guests:
Eric Deggans, TV & Media Analyst, NPR
Janelle Irwin Taylor, Publisher, Southeast Politics
Mitch Perry, Senior Reporter, Florida Phoenix
Ray Roa, Editor-in-chief, Creative Loafing Tampa
Listen to 88 Future Now Podcast Read 88 Future Now Transcript Wouldn’t you know, Starship 6 takes off during our show, so naturally it’s in our lineup today! And heads up if you are looking for an inexpensive DIY way of running your home on solar! Waymo is in the news, with LA now being served by their autonomous robot taxis! It’s nice to be driven about, but this week we also pay homage to those who enjoy the act of driving, which, surprisingly, goes beyond our species; it turns out rodents enjoy driving as well! And then there is the turtle on a skateboard.. And who says our upcoming humanoid robots need exactly the same senses as us, especially when they can have Pano Radar, enabling them to see clearly through smoke, fire, and fog? If one day my robot needs to rescue me from a wildfire, I’d want hir to see clearly what she is doing! The ethics of predictive policing are up again, this time in the context of analyzing data from thousands of police bodycams. The data may help us predict who and where crime may occur, but at what cost to our privacy? And do you have a fear of needles? Or how about some AI headphones that can put you in a sound bubble, enhancing what you want to hear around you and excluding what you don’t? There’s a new blood draw in town, more like a leech than a mosquito in its approach to collecting our precious bodily fluids! It works great, and kinda ‘sucks.’ Enjoy.. New leech-like device to suck blood for sampling instead of needling
In this episode Sheriff Dirkse explains how the Sheriff's Office uses technology when investigating crimes in Stanislaus County.
In the year 2060, "predictive policing" has long been part of people's everyday lives. Even the probability that a dangerous virus might be lurking somewhere is factored into the calculations. By Martin Heindel. With Rosalie Thomass, Aurel Manthei, Philipp Moog, Jochen Striebeck, and others. Composition: haarmann. Direction: Martin Heindel. Author production, 2020. Podcast tip: Re:Produktion - Millennial Briefroman in Sprachnachrichten https://1.ard.de/reproduktion
In this episode of AI, Government, and the Future, host Max Romanik is joined by Nidhi Sinha, a research fellow at the Center for AI and Digital Policy, to discuss the ethical challenges of AI in national security, such as predictive policing and cyber surveillance. They explore how to balance innovation with individual rights and the role of AI in shaping global equity. Nidhi shares insights from her extensive experience to illuminate how democratic societies can manage AI's impact responsibly.
In today's episode of the Tactical Living Podcast, we discuss how technology, and artificial intelligence (AI) in particular, is changing the way law enforcement officers do their jobs. The integration of technology into law enforcement has transformed policing practices, enhancing efficiency, accuracy, and safety. Here are key ways in which technology and AI have impacted the field:

Geo-Tracking for Enhanced Response: Geo-tracking technology enables law enforcement agencies to pinpoint the exact location of crimes as they happen, dramatically improving response times. Officers can be dispatched more efficiently, ensuring help arrives where it's needed most, swiftly and accurately.

Drones for Surveillance and Tactical Operations: Drones equipped with cameras and sensors provide a bird's-eye view of crime scenes, search operations, and large public gatherings. They can navigate hard-to-reach areas, offering live feeds to command centers, which is crucial in tactical situations or when monitoring for potential threats in crowded places.

Artificial Intelligence in Facial Recognition: AI-driven facial recognition systems are used to identify individuals in crowds, at airports, or in other public spaces. This technology has been pivotal in locating missing persons, fugitives, and suspects by analyzing video footage from surveillance cameras against databases.

Predictive Policing with AI: AI algorithms analyze historical crime data to predict future crime locations and times. This predictive policing strategy enables departments to allocate resources more effectively, potentially preventing crimes before they occur by identifying patterns that human analysts might miss.

AI in Forensic Analysis: AI enhances forensic capabilities by rapidly analyzing vast amounts of data.
For example, AI can match DNA samples from crime scenes with national databases much more quickly than traditional methods, speeding up investigations and helping to solve cases faster.

Social Media Monitoring through AI: Law enforcement agencies use AI tools to monitor social media platforms for potential threats, hate speech, or plans for criminal activity. These AI systems can sift through massive amounts of online content in real time, flagging relevant information for further investigation.

Automated License Plate Recognition (ALPR): ALPR systems use AI to read and process license plates on moving or parked vehicles, helping to locate stolen cars or vehicles associated with wanted individuals. This technology significantly enhances the ability of law enforcement to enforce the law and protect communities.

Chatbots for Public Assistance: AI-powered chatbots are being employed by police departments to offer 24/7 assistance to the public. These chatbots can provide information on local laws, assistance with filing reports, and guidance on non-emergency inquiries, freeing up human officers for more critical tasks.

Body-Worn Cameras with AI Analytics: Body-worn cameras equipped with AI can analyze video content for specific behaviors, objects, or events, aiding in evidence collection and review processes. This not only increases transparency and accountability but also helps in training officers by reviewing interactions and incidents.

Gunshot Detection Systems: AI-enabled gunshot detection systems use acoustic sensors to identify and locate gunshots in real time, alerting law enforcement immediately. These systems can accurately differentiate between gunshots and other sounds, ensuring a rapid and appropriate response to gun-related incidents.

The integration of technology and AI into law enforcement represents a paradigm shift, offering unprecedented capabilities in the prevention, investigation, and prosecution of crimes.
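The "hotspot" idea behind predictive policing can be sketched in a few lines. This is a toy illustration only, not any vendor's actual model: the incident log, grid cells, and half-life parameter below are all hypothetical.

```python
from collections import Counter

def hotspot_scores(incidents, half_life_days=30.0):
    """Score each grid cell by a recency-weighted count of past incidents."""
    scores = Counter()
    for cell, days_ago in incidents:
        # Older incidents count less: the weight halves every half_life_days.
        scores[cell] += 0.5 ** (days_ago / half_life_days)
    return scores

# Hypothetical incident log: (grid_cell, days_ago) pairs.
incidents = [("A1", 1), ("A1", 5), ("B2", 2), ("A1", 90), ("C3", 400)]
ranked = hotspot_scores(incidents).most_common()
print(ranked[0][0])  # prints: A1
```

Real systems layer far more elaborate statistics on top, but the core pattern is the same: historical records in, ranked locations out, which is also why any bias in the historical records flows straight into the predictions.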
However, it also raises important discussions around privacy, ethics, and the potential for bias, underscoring the need for careful implementation and oversight. All viewpoints discussed in this episode are for entertainment purposes only and are simply our opinions based on our own experience, background and education. #policepodcast #policeofficer #leowarriors #thinbluelineusa #firstresponder #lawenforcementpodcast #LawEnforcement #LEOWarriors #fireemblemwarriors #whatcopsmustknow #nyslawenforcement #lawenforcementpodcast #podcastenglish #thinblueline #leowife #leomarriage #policemarriage #officer #officers ⩥ PLEASE SUBSCRIBE TO OUR YOUTUBE CHANNEL ⩤ https://linktw.in/KDLEUl CLICK HERE for our favorite Tactical Gear: https://linktw.in/yUhFaw #ad Some product links are affiliate links which means if you buy something by clicking on one of our links, we'll receive a small commission. CLICK HERE to join our free Police, Fire, Military and Families Facebook Group: https://linktw.in/CmmzHn Check out our website and learn more about how you can work with LEO Warriors by going to: https://www.leowarriors.com/ Like what you hear? We are honored. Drop a review and subscribe to our show. The Tactical Living Podcast is owned by LEO Warriors, LLC. None of the content presented may be copied, repurposed or used without the owner's prior consent. For PR, speaking requests and other networking opportunities, contact LEO Warriors: EMAIL: ashliewalton555@gmail.com. ADDRESS: P.O. Box 400115 Hesperia, Ca. 92340 ASHLIE'S FACEBOOK: https://www.facebook.com/police.fire.lawenforcement ➤➤➤➤➤➤➤➤➤➤➤➤➤➤➤➤➤➤ This episode is NOT sponsored.
We dive deep into the Hidden Side of AI, the Military Shadow on Tech, Our Children vs the Algorithm, and Ethics in AI with 19Keys & X. Simplify the high level with AI. Join a free webinar at https://19keysai.com/ to learn more. Join 19Keys on 'High Level Conversations', the award-winning show elevating your mindset and value. In partnership with the Earn Your Leisure network, this is the show where thought leadership meets empowerment. Each episode features luminaries like Billy Carson and Wallstreet Trapper, exploring topics from financial literacy to future tech, emotional intelligence to wellness. Our mission is to challenge, inspire, and ignite change. Be a part of this journey to reshape narratives and elevate consciousness. Dive into our diverse and dynamic content – your platform for growth and cultural empowerment. Special Guest: https://www.instagram.com/techwithx/?hl=en X. Eyeé, a high school and college dropout turned military combat veteran, has made significant strides in the fields of AI and emerging technologies over 17 years. With experience as an engineering leader, product manager, and researcher at Microsoft, Google, and the Department of Defense, X has pioneered blockchain and AI innovations. At Microsoft, X developed Ambient Intelligence solutions integrating blockchain, AI, and IoT for global Fortune 500 companies. At Google, X ensured AI products and research were developed responsibly, leading teams like the Skin Tone team to set industry standards for AI recognition of diverse skin tones, impacting products like Pixel phones and Google Image Search. Currently, as CEO of Malo Santo, X advances AI consulting, offering services in education, governance, and development to clients including L'Oreal, Mozilla, and Hillman Grad. 19Keys is a visionary thinker and motivational speaker who empowers people to unlock their greatest potential.
His thought leadership provides a blueprint for living life to the fullest through developing mental, physical, emotional, and spiritual mastery. As a self-made polymath and visionary, he leverages his diverse expertise across metaphysics, mindfulness, business, and technology to empower millions worldwide. Visit www.19KEYS.com to support and learn more. *Special EYL Viewer Promotion* Text “HLC“ to 2012283670 Tap in on all platforms: Youtube: https://www.youtube.com/c/19keys Twitter: https://twitter.com/19keys_ Instagram: www.instagram.com/19_keys/ TikTok: https://www.tiktok.com/@19keys Send in a voice message: https://anchor.fm/19keys/message Follow his links below to learn more: https://linktr.ee/19_keys https://crownz19.com/ https://goldewater.com/ https://crownz19.com/products/paradigm-keys-solution-based-mind-reprogramming-e-book Support this podcast at: https://redcircle.com/19keys/exclusive-content Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy
In this episode, we focus on algorithmic action and fairness.
Algorithms are also used to provide suggestions and advice. It is helpful to differentiate between algorithms providing advice and suggestions to experts and those providing advice and suggestions to lay people and users of digital communication environments and services.
In recent years, police departments nationwide have increasingly embraced data and AI tools to enhance their crime prevention, investigation, and conviction efforts. These technologies range from image analysis on body cameras to license plate trackers predicting potential involvement in drug trafficking. However, a crucial question arises: Are these technologies both accurate and fair? Is law enforcement adequately trained to utilize them effectively? Is legislation adapting swiftly enough to keep pace with these transformative changes? On today's episode we engage in a conversation with a professor of law and a police chief who together provide us insights into the evolving landscape of policing technologies. Our guests: Andrew Guthrie Ferguson, Professor of Law, American University Washington College of Law Virgil Green, Chief of Police for Golden Valley, Minnesota, and co-host of “You And The Law” podcast
What if we told you that your favorite Beatles melody could be revived by artificial intelligence? That's right! In this stimulating discussion, we delve into the fascinating world of AI, its implications, and how it impacts various facets of life. From President Biden's executive order on AI safety to the use of AI in creating a brand-new Beatles song, we've got an intriguing blend of tech talk and music chatter that you wouldn't want to miss. We don't stop at the intersection of AI and music; we also explore its potential in redesigning the judicial system and the necessary safeguards against bias. With the advent of AI in forensic analysis, crime forecasting, and predictive policing, the conversation takes an interesting turn. We also discuss the commitments of 15 leading companies to promote safe AI development. We promise, you're in for an insightful discussion! Finally, we examine the repercussions of the executive order and its influence on the legal sector and workforce. Could AI be the affordable court-appointed attorney of the future? What measures do we need to protect the workforce in this AI-driven world? We also contemplate the international implications and the necessity for a global consensus on AI safety and security. Wrapping up, we steer the conversation back to music, discussing the Beatles' new song and the potential of AI in creating meaningful art. So, are you ready to join us on this exciting journey through the universe of AI? Support the show. Let's get into it! Follow us! Email us: TheCatchupCast@Gmail.com
In many countries, police use software that supposedly helps prevent crimes before they're committed. Proponents say this makes cities safer. Critics say it leads to increased discrimination. How does it work?
SoundThinking is purchasing parts of Geolitica, the company that created PredPol. Experts say the acquisition marks a new era of companies dictating how police operate. Read this story here. Learn more about your ad choices. Visit megaphone.fm/adchoices
NEW THIS SEASON! Leave us a 90-second voice message about this episode. We may feature it in a future segment!

“Minority Report” was just a movie, right? Well, not anymore. Beat cops from Los Angeles to London are using artificial intelligence to forecast tomorrow's murders. Is every crime predestined? Artificial intelligence is everywhere, including the police station. From forecasting burglary to heading off homicides, predictive policing is more than 10 years old, and your city is probably already using it. What is predictive policing, and what should Christians think about it? Chris and Adam unpack the ethics of prediction and beliefs about predestination. They look at some of the risks that come with crime statistics, and the solutions that don't merely reduce crime but actually increase justice. Plus, if you've ever used Google Analytics, stick around for the Bible story of David's census.

In this episode: What is predictive policing? Is crime predictable? Is it predestined? What personal data is being tracked? How can crime statistics be biased? What should Christians think?

Links
Predictive Policing Explained (Brennan Center for Justice)
2021: Predictive Policing and Crime Control in The United States of America and Europe: Trends in a Decade of Research and the Future of Predictive Policing (Social Sciences Journal)
2022: Algorithm predicts crime a week in advance, but reveals bias in police response (University of Chicago)
2022: The never-ending quest to predict crime using AI (Washington Post)
2020: Predictive policing algorithms are racist. They need to be dismantled. (MIT Tech Review)
2017: How strategic is Chicago's “Strategic Subjects List”? (Equal Future)
2020 Update: Chicago police end effort to predict gun offenders, victims. (Associated Press)

Talk Back: Leave us a 90-second voice message about this episode. We may feature it in a future segment! Follow Device & Virtue on Instagram and Twitter. Follow Chris and Adam on Twitter.
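The feedback loop behind biased crime statistics (more patrols lead to more recorded crime, which leads to more patrols) can be shown with a toy simulation. Everything here is hypothetical: the two neighborhoods, the rates, and the winner-take-all patrol rule are illustrative assumptions, not a model of any real department.

```python
def simulate(rounds=50, true_rate=10.0, report_rate=1.0):
    """Two neighborhoods with IDENTICAL true crime rates; patrols always
    go where past *recorded* crime is highest."""
    # A tiny initial skew in the records is all it takes.
    recorded = {"north": 1.1, "south": 1.0}
    for _ in range(rounds):
        # Winner-take-all: patrol the neighborhood with the most records.
        patrolled = max(recorded, key=recorded.get)
        for hood in recorded:
            # Patrols record the full true rate where they are present;
            # elsewhere only a trickle of citizen reports comes in.
            recorded[hood] += true_rate if hood == patrolled else report_rate
    return recorded

result = simulate()
print(result)  # "north" ends up with far more recorded crime than "south"
```

Despite identical underlying crime, the records diverge round after round, which is the core of the bias critique in the links above: the algorithm ends up predicting where police have looked, not where crime is.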
Support Device & Virtue. Learn how. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Welcome to AI Lawyer Talking Tech, your daily review of the latest legal technology news. Today, we'll be discussing the impact of artificial intelligence on the legal industry, including how it's changing the way professionals approach their work, the importance of enhancing digital understanding, and the potential legal implications of AI-generated content. We'll also be exploring how AI is being used to provide fast, cost-effective, and convenient legal advice to small businesses, and the ethical concerns surrounding its use in the courtroom. So sit back, relax, and join us as we delve into the world of AI in the legal industry.

The promise and peril of artificial intelligence in patent law | 05 Jun 2023 | Virginia Lawyers Weekly
Legal Services Available on the Internet: Fast, Cost-Effective and Convenient! | 05 Jun 2023 | AiThority.com
Enhancing digital understanding | 05 Jun 2023 | International Bar Association
A new era for law firm GCs | 05 Jun 2023 | International Bar Association
Stability AI faces another Getty Images legal action in UK | 05 Jun 2023 | Music Ally
Preston legal battle leads to legal AI breakthrough | 05 Jun 2023 | Prolific North
Does AI belong in the courtroom? A Texas judge doesn't think so. | 04 Jun 2023 | MSN United States
Sad that Twitter Deplatformed Trump? Don't Take It to Court – Rutenberg v. Twitter | 03 Jun 2023 | Technology & Marketing Law Blog
Artificial Intelligence In Employment: What New York City's Local Law 144 Means for Automated Employment Decision Tools | 03 Jun 2023 | Seyfarth Shaw
LIVE WEB - Document Management and Retention: Can I Throw It Away Yet? | 02 Jun 2023 | Illinois State Bar Association
Use of ChatGPT in Federal Litigation Holds Lessons for Lawyers and Non-Lawyers Everywhere | 02 Jun 2023 | LexBlog
States Should Welcome the World's First Actual Robot Lawyer | 02 Jun 2023 | Center for Data Innovation
Unlocking the potential of ChatGPT plugins in your law practice | 02 Jun 2023 | NY Daily Record
Legal Update: Use of ChatGPT in Federal Litigation Holds Lessons for Lawyers and Non-Lawyers Everywhere | 02 Jun 2023 | LexBlog
Section 230 Once Again Immunizes Google's Search Results – Metroka v. PA Law Enforcement | 04 Jun 2023 | Technology & Marketing Law Blog
Judges and ChatGPT | 04 Jun 2023 | The Time Blawg
Privacy and Data Protection in the Digital Age | 03 Jun 2023 | Legaltech on Medium
Litigants in Person and ChatGPT | 03 Jun 2023 | The Time Blawg
Digitalization and Predictive Policing in Conservation | 02 Jun 2023 | Legal Planet
Webcast: The Revised Colorado AI Insurance Regulations – What Got Fixed and What Still Needs Fixing | 02 Jun 2023 | Debevoise Data Blog
A development freeze for artificial intelligence (AI), as many prominent figures recently demanded? "Complete nonsense," say Lajla Fetic and Ralph Müller-Eiselt. At the Bertelsmann Stiftung, they are experts on digitalization and the common good. And in episode 28 of the Bertelsmann Stiftung podcast "Zukunft gestalten," with hosts Malva Sucker and Jochen Arntz, they make one thing clear: "We don't need a stop button for the development of artificial intelligence. We need a 'go' for a society that handles artificial intelligence responsibly." Because then AI can help free up more time for what matters: for an unhurried conversation with your doctor, or for time with students instead of time-consuming routine tasks. It can make access to education, and to daycare and university places, more efficient, more transparent, and fairer. Why do the Bertelsmann Stiftung's experts concern themselves with AI when so many others already do? Because AI is mostly deployed as a business model by companies oriented toward profit. Because the question of what benefit AI can have for the common good is asked far too rarely. And because, even though applications like ChatGPT are on everyone's lips, knowledge and resources are still lacking here.
Chapters:
00:00 Introduction
01:47 Introducing the guests
02:33 Is AI dangerous?
05:12 What is an algorithm?
06:11 Are algorithms dangerous?
08:19 Using AI the right way
11:36 Predictive policing and confirmation bias
14:03 Predictive policing in Germany
16:11 AI and demographic change
17:19 More time for what matters
18:38 AI is changing the world of work
19:50 Can municipalities handle AI?
22:50 Example: video surveillance in Mannheim
23:34 Example: daycare placement in Steinfurt
26:12 AI and the common good
28:03 Ethics guidelines and regulation
30:09 A look into the future
33:30 Closing
Further links:
Project: https://www.bertelsmann-stiftung.de/de/unsere-projekte/reframetech-algorithmen-fuers-gemeinwohl
Publications:
https://www.bertelsmann-stiftung.de/de/unsere-projekte/reframetech-algorithmen-fuers-gemeinwohl/publikationen
https://www.bertelsmann-stiftung.de/de/publikationen/publikation/did/predictive-policing-mit-algorithmen-vor-die-lage-kommen-1
https://www.bertelsmann-stiftung.de/de/unsere-projekte/ethik-der-algorithmen/projektthemen/wir-und-die-intelligenten-maschinen
https://www.bertelsmann-stiftung.de/de/publikationen/publikation/did/we-humans-and-the-intelligent-machines-all
Blog: https://www.reframetech.de/
Podcast "Die digitale Bildungsrevolution": www.bertelsmann-stiftung.de/podcast
Write to us at podcast@bertelsmann-stiftung.de or at https://www.instagram.com/bertelsmannstiftung/
Christopher Parsons is a Senior Technology and Policy Advisor at the IPC. Prior to joining the IPC in early 2023, he was a Senior Research Associate at the Citizen Lab, an interdisciplinary laboratory based at the University of Toronto's Munk School of Global Affairs and Public Policy.
Choosing to focus on research related to privacy, national security, and public policy [2:38]
The modernization of policing through technology [4:57]
Defining the term predictive policing [7:19]
Bail assessments as an example of predictive policing [8:33]
Potentially problematic aspects of predictive technologies [9:34]
Findings of the Citizen Lab's Surveil and Predict report [11:11]
Privacy and predictive policing [12:20]
Human rights issues associated with predictive policing [14:18]
Key recommendations of the Citizen Lab's Surveil and Predict report [18:07]
The need for openness and accountability when it comes to the use of predictive policing tools [21:09]
Future issues on the horizon related to law enforcement practices and privacy in Ontario [26:26]
Resources:
To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada (Citizen Lab, September 1, 2020)
'Algorithmic policing' in Canada needs more legal safeguards, Citizen Lab report says (Toronto Star)
Law Enforcement and Security Agency Surveillance in Canada: The Growth of Digitally-Enabled Surveillance and Atrophy of Accountability (Citizen Lab, February 26, 2018)
Law Enforcement and Surveillance Technologies (IPC Privacy Day webcast)
IPC Strategic Priorities 2021-2025
Next-Generation Law-Enforcement (IPC resources)
Info Matters is a podcast about people, privacy, and access to information hosted by Patricia Kosseim, Information and Privacy Commissioner of Ontario. We dive into conversations with people from all walks of life and hear stories about the access and privacy issues that matter most to them. If you enjoyed the podcast, leave us a rating or a review.
Have an access to information or privacy topic you want to learn more about? Interested in being a guest on the show? Send us a tweet @IPCinfoprivacy or email us at podcast@ipc.on.ca. The information, opinions, and recommendations presented in this podcast are for general information only. They should not be relied upon as a substitute for legal advice. Unless specifically stated otherwise, the IPC does not endorse, approve, recommend, or certify any information, product, process, service, or organization presented or mentioned in this podcast, and information from this podcast should not be used or reproduced in any way to imply such approval or endorsement. None of the information, opinions, and recommendations presented in this podcast bind the IPC's Tribunal, which may be called upon to independently investigate and decide upon an individual complaint or appeal based on the specific facts and unique circumstances of a given case.
This week we're discussing artificial intelligence and the positive and negative impacts it is having on trans individuals and the world. Cam kicks things off with a poem, "Batter My Heart, Transgender'd God" by Meg Day. Ana discusses why it's important to us that humor is part of our podcast.
Thank you to Eli Oberman (www.elijahoberman.com) for producing our new theme music, and to our editor Frey Cheqama (www.youtube.com/@FreyCheqama)!
Please send your questions and gender euphoria to questions@transgendapod.com, message us on social media, or hit the Chat with Us button on our website.
If you or a trans loved one are contemplating suicide, please call the Trans Lifeline at (877) 565-8860.
Other Resources:
https://www.thetrevorproject.org/
https://www.thetrevorproject.org/resources/guide/a-guide-to-being-an-ally-to-transgender-and-nonbinary-youth/
https://www.glaad.org/resourcelist
Want to support the podcast? For as little as $3 per month, you can help us continue producing this podcast and reaching a broader audience. https://www.patreon.com/transgendapod
Check out our merch and more at www.transgendapod.com. You can also support us on Patreon!
Start a podcast free on Buzzsprout - Let's get your podcast launched!
Find services from experts in a competitive marketplace with Fiverr
Write a review on Apple Podcasts
Passive Aggression - A Midwestern look at hot button, taboo and dated topics with Kyle and Jess Wassing
Listen on: Apple Podcasts | Spotify
An algorithm can predict future crimes with 90% accuracy, so we're told. How much of this tech is already being used by law enforcement? Is this even possible?
http://www.troubledminds.org
Support The Show!
https://rokfin.com/creator/troubledminds
https://troubledfans.com
https://patreon.com/troubledminds
#aliens #conspiracy #paranormal
Radio Schedule Mon-Tues-Wed-Thurs 7-9pst - https://fringe.fm/
iTunes - https://apple.co/2zZ4hx6
Spotify - https://spoti.fi/2UgyzqM
Stitcher - https://bit.ly/2UfAiMX
TuneIn - https://bit.ly/2FZOErS
Twitter - https://bit.ly/2CYB71U
Follow Algo Rhythm -- https://bit.ly/3uq7yRY
Follow Apoc -- https://bit.ly/3DRCUEj
Follow Ash -- https://bit.ly/3CUTe4Z
Follow Daryl -- https://bit.ly/3GHyIaN
Follow James -- https://bit.ly/3kSiTEY
Follow Jennifer -- https://bit.ly/3BVLyCM
Follow Joseph -- https://bit.ly/3pNjbzb
Matt's Book -- https://bit.ly/3x68r2d -- code for free book WY78Y
Follow Nightstocker -- https://bit.ly/3mFGGtx
Robert's Book -- https://amzn.to/3GEsFUK
Follow TamBam -- https://bit.ly/3LIQkFw
-------------------------------------------------
Professor Says He Foresees No Issues With His AI That Predicts Crimes Before They Happen
https://futurism.com/the-byte/professor-ai-predicts-crimes
An algorithm can predict future crimes with 90% accuracy. Here's why the creator thinks the tech won't be abused | BBC Science Focus Magazine
https://www.sciencefocus.com/news/algorithm-predict-future-crimes-90-accuracy-heres-why-creator-thinks-tech-wont-be-abused/
Event-level prediction of urban crime reveals a signature of enforcement bias in US cities | Nature Human Behaviour
https://www.nature.com/articles/s41562-022-01372-0?utm_medium=affiliate&utm_source=commission_junction&utm_campaign=CONR_PF018_ECOM_GL_PHSS_ALWYS_DEEPLINK&utm_content=textlink&utm_term=PID100041175&CJEVENT=4f7f08651d4211ed80cabf7c0a18050c
Police surveillance is more invasive and more mysterious than ever - Vox
https://www.vox.com/recode/2020/2/5/21120404/police-departments-artificial-intelligence-public-records
Pre-crime - Wikipedia
https://en.wikipedia.org/wiki/Pre-crime
Precrime Definition & Meaning | Dictionary.com
https://www.dictionary.com/browse/precrime
The problem with predictive policing and pre-crime algorithms
https://cybernews.com/editorial/the-problem-with-predictive-policing-and-pre-crime-algorithms/
The Future Of Policing Using Pre-Crime Technology
https://archive.ph/K2gNB
Precrime | The Minority Report and Other Stories Wikipedia | GradeSaver
https://www.gradesaver.com/the-minority-report/wikipedia/precrime
Total Tyranny: We'll All Be Targeted Under the Government's New Precrime Program – Nwo Report
https://nworeport.me/2021/05/22/total-tyranny-well-all-be-targeted-under-the-governments-new-precrime-program/
DHS Creates New Center for Prevention Programs and Partnerships and Additional Efforts to Comprehensively Combat Domestic Violent Extremism | Homeland Security
https://www.dhs.gov/news/2021/05/11/dhs-creates-new-center-prevention-programs-and-partnerships-and-additional-efforts
Precogs | Minority Report Wiki | Fandom
https://minorityreport.fandom.com/wiki/Precogs
Homeland Security moves forward with 'pre-crime' detection - CNET
https://www.cnet.com/news/privacy/homeland-security-moves-forward-with-pre-crime-detection/
Florida Police Program Resembles ‘Precrime' Enforcement from ‘Minority Report' | National Review
https://archive.ph/tzZc4
Virginia Public School To Use Precrime Thought-Police Program To Deter Off-Campus, Social Media Hate Speech By Students - The Washington Standard
https://thewashingtonstandard.com/virginia-public-school-to-use-precrime-thought-police-program-to-deter-off-campus-social-media-hate-speech-by-students/
China Tries Its Hand at Pre-Crime - Bloomberg
https://archive.ph/UPFiJ
Shocker: Florida Sheriff's 'Pre-Crime' Program Leads to Harassment - FindLaw
https://www.findlaw.com/legalblogs/legally-weird/shocker-florida-sheriffs-pre-crime-program-leads-to-harassment/
In this episode, we will talk about algorithms: what they are, how they work, and the concerns they create once applied broadly and at scale across society.
It often feels like machine learning experts are running around with a hammer, looking at everything as a potential nail - they have a system that does cool things and is fun to work on, and they go in search of things to use it for. But what if we flip that around and start by working with people in various fields - education, health, or economics, for example - to clearly define societal problems, and then design algorithms providing useful steps to solve them?Rediet Abebe, a researcher and professor of computer science at UC Berkeley, spends a lot of time thinking about how machine learning functions in the real world, and working to make the results of machine learning processes more actionable and more equitable.Abebe joins EFF's Cindy Cohn and Danny O'Brien to discuss how we redefine the machine learning pipeline - from creating a more diverse pool of computer scientists to rethinking how we apply this tech for the betterment of society's most marginalized and vulnerable - to make real, positive change in people's lives.In this episode you'll learn about:The historical problems with the official U.S. poverty measurement How machine learning can (and can't) lead to more just verdicts in our criminal courtsHow equitable data sharing practices could help nations and cultures around the worldReconsidering machine learning's variables to maximize for goals other than commercial profitThis podcast is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower. 
This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators:
Probably Shouldn't by J.Lang - http://dig.ccmixter.org/files/djlang59/59729
Klaus by Skill_Borrower - http://dig.ccmixter.org/files/Skill_Borrower/41751
commonGround by airtone - http://dig.ccmixter.org/files/airtone/58703
Smokey Eyes by Stefan Kartenberg - http://dig.ccmixter.org/files/JeffSpeed68/56377
Chrome Cactus by Martijn de Boer (NiGiD) - http://dig.ccmixter.org/files/NiGiD/62475
Too many young people – particularly young people of color – lack enough familiarity or experience with emerging technologies to recognize how artificial intelligence can impact their lives, in either a harmful or an empowering way. Educator Ora Tanner saw this and rededicated her career toward promoting tech literacy and changing how we understand data sharing and surveillance, as well as teaching how AI can be both a dangerous tool and a powerful one for innovation and activism.By now her curricula have touched more than 30,000 students, many of them in her home state of Florida. Tanner also went to bat against the Florida Schools Safety Portal, a project to amass enormous amounts of data about students in an effort to predict and avert school shootings – and a proposal rife with potential biases and abuses.Tanner speaks with EFF's Cindy Cohn and Jason Kelley on teaching young people about the algorithms that surround them, and how they can make themselves heard to build a fairer, brighter tech future.In this episode you'll learn about:Convincing policymakers that AI and other potentially invasive tech isn't always the answer to solving public safety problems.Bringing diverse new voices into the dialogue about how AI is designed and used.Creating a culture of searching for truth rather than just accepting whatever information is put on your plate.Empowering disadvantaged communities not only through tech literacy but by teaching informed activism as well.This podcast is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower. 
This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators:
Meet Me at Phountain by gaetanh (c) copyright 2022 - http://ccmixter.org/files/gaetanh/64711
Hoedown at the Roundabout by gaetanh (c) copyright 2022 - http://ccmixter.org/files/gaetanh/64711
JPEG of a Hotdog by gaetanh (c) copyright 2022 - http://ccmixter.org/files/gaetanh/64711
reCreation by airtone (c) copyright 2019 - http://dig.ccmixter.org/files/airtone/59721
Today: predictive policing, the First-A takeover, Spotify live audio, and the Flexy mini train.
One particularly important social institution is the police force, who are increasingly using technological tools to help efficiently and effectively deploy policing resources. I've covered criticisms of these tools in the past, but in this episode, my guest Daniel Susser has some novel perspectives to share on this topic, as well as some broader reflections on how humans can relate to machines in social decision-making. This one was a lot of fun and covered a lot of ground. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here). Relevant Links: Daniel's Homepage; Daniel on Twitter; 'Predictive Policing and the Ethics of Preemption' by Daniel; 'Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making' by Daniel (and Kiel Brennan-Marquez and Karen Levy)
He was one of LA's most-loved rappers, and a pillar of his community. But records disclosed after his death revealed that he was also the target of an extensive Los Angeles policing operation
For this week's episode, host Maryam Tanwir and panelist Nanna Sæten speak about predictive policing with Johannes Heiler, Adviser on Anti-Terrorism Issues at the OSCE Office for Democratic Institutions and Human Rights (ODIHR), and Miri Zilka, Research Associate in the Machine Learning Group at the University of Cambridge. Predictive policing leverages the techniques of statistics and machine learning for the purpose of predicting crime. The human rights perspective raises several interesting questions about the use of predictive policing: as the technology functions today, it seems to perpetuate already existing bias in police work, but could this be overcome? Using technology for police work also raises the questions of who is responsible for the protection of human rights, and of whose human rights to uphold in cases of conflict. What is clear to both of our guests is that there need to be clear channels of oversight if human rights are to be protected in digitized law enforcement.
He was one of LA's most-loved rappers, and a pillar of his community. But records disclosed after his death revealed that he was also the target of an extensive Los Angeles policing operation. Help support our independent journalism at theguardian.com/infocus
Klub Digital Velfærd attends a seminar at ITU with CUPP. CUPP is a three-year research project between ITU and a number of private actors, and it stands for Critical Understanding of Predictive Policing. In December, CUPP held a seminar; we went along, listened, and spoke with, among others, cybersecurity expert Christian Damsgaard of DTU, Bjarke Friborg and Ole Tange of PROSA, Vasilis Galis, associate professor at ITU, and Courtney Bowman, Global Director of Privacy and Civil Liberties at PALANTIR, the company that built the police's new IT system, POL-Intel. He is the one on the right in the picture; in the middle sits Jesper Lund, chair of the IT-Politisk Forening, and on the left is Christian Wiese Svanberg, formerly of the Danish National Police.
•Sci-fi crime drama• In the year 2060, "predictive policing" has long been part of everyday life. Even the probability that a dangerous virus might be lurking somewhere is factored into the calculations. // By Martin Heindel / Composition: haarmann / Directed by Martin Heindel / Author's production 2020 / www.wdr.de/k/hoerspiel-newsletter A 1LIVE podcast, © WDR 2022
Many law enforcement agencies use software that crunches crime statistics, 911 calls and other data to try to predict where crimes are likely to happen. The idea is, this can help them know where to deploy scarce resources. A recent investigation by Gizmodo and The Markup looked into one of the companies doing this, PredPol, and found that the software disproportionately targeted certain neighborhoods. Marketplace’s Kimberly Adams speaks with Aaron Sankin, a reporter with The Markup and one of the authors of the report. New Investors Week: Your first donation to Marketplace goes TWICE as far with a dollar-for-dollar match from the Investors Challenge Fund! Please give now.
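The basic mechanics described above can be sketched in a few lines. This is a deliberately naive illustration, not PredPol's actual model (the commercial products use more elaborate statistical machinery): it just counts past recorded incidents per grid cell and flags the busiest cells, which is enough to show why historically over-reported neighborhoods keep getting flagged. The cell labels and incident log are entirely hypothetical.

```python
from collections import Counter

def predict_hotspots(incidents, k=3):
    """Toy hotspot-style prediction: count past recorded incidents per
    grid cell and return the k cells with the most activity. Whatever is
    over-represented in the historical records dominates the forecast."""
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(k)]

# Hypothetical incident log: each entry is the grid cell where a past
# incident was recorded.
log = ["A1", "B2", "A1", "C3", "A1", "B2", "D4"]
print(predict_hotspots(log, k=2))  # the most-reported cells come out on top
```

Note that the sketch never sees crime that went unrecorded, which is exactly the skew the Gizmodo/Markup investigation probes.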
Simon Egbert and Matthias Leese's Criminal Futures: Predictive Policing and Everyday Police Work (Routledge, 2020) explores how predictive policing transforms police work. Police departments around the world have started to use data-driven applications to produce crime forecasts and intervene into the future through targeted prevention measures. Based on three years of field research in Germany and Switzerland, this book provides a theoretically sophisticated and empirically detailed account of how the police produce and act upon criminal futures as part of their everyday work practices. The authors argue that predictive policing must not be analyzed as an isolated technological artifact, but as part of a larger sociotechnical system that is embedded in organizational structures and occupational cultures. The book highlights how, for crime prediction software to come to matter and play a role in more efficient and targeted police work, several translation processes are needed to align human and nonhuman actors across different divisions of police work. Police work is a key function for the production and maintenance of public order, but it can also discriminate, exclude, and violate civil liberties and human rights. When criminal futures come into being in the form of algorithmically produced risk estimates, this can have wide-ranging consequences. Building on empirical findings, the book presents a number of practical recommendations for the prudent use of algorithmic analysis tools in police work that will speak to the protection of civil liberties and human rights as much as they will speak to the professional needs of police organizations. An accessible and compelling read, this book will appeal to students and scholars of criminology, sociology, and cultural studies as well as to police practitioners and civil liberties advocates, in addition to all those who are interested in how to implement reasonable forms of data-driven policing. 
Geert Slabbekoorn works as an analyst in the field of public security. In addition he has published on different aspects of dark web drug trade in Belgium. Find him on twitter, tweeting all things drug related @GeertJS. Learn more about your ad choices. Visit megaphone.fm/adchoices
Many of us know our personal data is being collected online and used against us – to get us to buy certain things or vote a certain way. But for marginalized communities, the collection of data and photos has much bigger implications. Vinita is joined by two researchers who are calling for new protections for the most vulnerable populations. Yuan Stevens is the Policy Lead in the Technology, Cybersecurity and Democracy Programme at the Ryerson Leadership Lab and Wendy Hui Kyong Chun is professor and Canada 150 Research Chair in new media at Simon Fraser University.Show notes:https://theconversation.com/being-watched-how-surveillance-amplifies-racist-policing-and-threatens-the-right-to-protest-dont-call-me-resilient-ep-10-167522Transcript:https://theconversation.com/being-watched-how-surveillance-amplifies-racist-policing-and-threatens-the-right-to-protest-dont-call-me-resilient-ep-10-transcript-167523Related article: Intense police surveillance for Indigenous land defenders contrasts with a laissez-faire stance for anti-vax protestershttps://theconversation.com/intense-police-surveillance-for-indigenous-land-defenders-contrasts-with-a-laissez-faire-stance-for-anti-vax-protesters-169589Join The Conversation about this podcast: Use hashtag #DontCallMeResilient and tag us:Twitter: https://twitter.com/ConversationCA Instagram: https://www.instagram.com/theconversationdotcomFacebook: https://www.facebook.com/TheConversationCanadaLinkedIn: https://www.linkedin.com/company/theconversationcanada/Sign up for our newsletter: https://theconversation.com/ca/newsletters/Contact us: theculturedesk@theconversation.comPromo at beginning of episode:Telling Our Twisted Histories, CBC Podcasts:https://www.cbc.ca/listen/cbc-podcasts/906-telling-our-twisted-histories
Jawn Jang interviews Cynthia Khoo (Tekhnos Law) about algorithmic policing...and how that leads the police to "predict" who could be breaking the law.
For our season 4 kickoff, we're taking a look at uses of AI that aren't so black and white. When it comes to deepfakes, filtering, and predictive policing - when do the risks outweigh the benefits? Are these use-cases inherently bad, or is there a way to combat underlying unfairness? We're also welcoming our new host, Christopher Peter Makris to the show in his inaugural episode!Learn more about the articles referenced in this episode: Why Deepfakes are a Net Positive For Humanity by Simon Chandler (Forbes) Inside LGTBQ Vloggers' Class-Action 'Censorship' Suit Against YouTube by EJ Dickson (Rolling Stone) LAPD changing controversial program that uses data to predict where crime will occur by Mark Puente, Cindy Chang (LA Times)Be sure to subscribe to our weekly newsletter to get this podcast & a host of new and exciting data-happenings in your inbox!
Detroit math teacher and Carmen's friend Susan Mattingly talks about what teachers and families are wondering about school reopenings in this COVID age, as well as the problems of predictive policing. Mission Network News' Ruth Kramer talks about how unrest in Mali could increase regional instability in west Africa.
In this episode, Lloyd discusses the topic of predictive policing, which has recently garnered particular attention as mathematicians circulated a petition urging colleagues to sever ties with law enforcement agencies. Episode Guide: 1:20 - Intro to Predictive Policing 3:59 - A Scientific Veneer for Racism 5:56 - Negative Feedback Loops 9:49 - PredPol & ICERM 14:30 - The Impact of Facial Recognition 18:22 - Playing Devil's Advocate 19:25 - Minority Report IRL 22:03 - Disparate Impact 26:05 - The PATRIOT Act & The Role of Abstraction More Info: Visit us at aiexperience.org Brought to you by ICED(AI) Host - Lloyd Danzig
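The feedback loop discussed in the episode can be made concrete with a toy simulation. This is an illustration under assumed numbers, not a model of any real system: two neighborhoods have identical true crime rates, but one starts with a slightly higher recorded count; patrols always go to the cell with the most recorded crime, and patrolled cells have more of their crime detected and logged, so the initial skew compounds.

```python
def simulate_feedback(true_rates, seed_counts, rounds=10):
    """Toy runaway-feedback simulation (illustrative numbers only).

    Each round, patrols go to the cell with the highest *recorded* crime;
    the patrolled cell's true crime is detected at a high rate (0.9),
    everywhere else at a low rate (0.2), so early skew in the records
    compounds regardless of the true underlying rates."""
    recorded = dict(seed_counts)
    for _ in range(rounds):
        patrolled = max(recorded, key=recorded.get)
        for cell, rate in true_rates.items():
            detection = 0.9 if cell == patrolled else 0.2
            recorded[cell] += rate * detection
    return recorded

# Hypothetical setup: identical true rates, but "North" starts with one
# extra recorded incident (e.g. an artifact of past enforcement).
history = simulate_feedback({"North": 5.0, "South": 5.0},
                            {"North": 2.0, "South": 1.0}, rounds=10)
# The initially over-recorded cell ends up dominating the data even
# though the underlying crime rates are equal.
```

The point of the sketch is the one Lloyd makes: the data the algorithm learns from is partly a record of where police looked, not only of where crime occurred.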
In this episode, VICE Podcast Producer Sophie Kazis talks to Motherboard reporter Caroline Haskins about the predictive policing company PredPol. Using public information requests, Haskins verified dozens of previously unconfirmed police department contracts with this new policing technology. See acast.com/privacy for privacy and opt-out information.
Episode 4...L A Ramon and Thomas As far back as the 1980s, scientists began testing whether “millimeter wave energy could create a repel effect that might serve as a non-lethal weapon.” This led to the creation of the Active Denial System (ADS). According to the US Department of Defense the ADS “generates a focused and very directional millimeter-wave radio frequency beam” that penetrates the skin's surface and causes an intense stinging or burning. It is used for purposes like crowd control because “within seconds, an individual feels an intense heating sensation that stops when the transmitter is shut off or when the individual moves out of the beam.” Mobile networks operate in a different way to ADS weapons. ADS beams are strong and directional, and cause serious injury if exposure is too high or too long. It prompts the question, though: what happens when we're exposed to many, constant, low-level beams long term?
The police can predict crime. More than half of the police's basic units already work with predictive policing. Next year, all police units are due to start working with it. The Criminaliteits Informatie Systeem (CAS) harnesses the power of big data. Using police records, data from Statistics Netherlands (CBS), crime report figures, and so on, it can predict, for example, in which neighborhood the risk of home burglaries will be greatest tomorrow. It gives the police the ability to act before a crime has even been committed. But what does CAS actually predict? Who checks whether the input data are correct? Can algorithms discriminate, too? Rutger Rienks is a data analyst, formerly with the police, and in 2015 wrote a book about predictive policing. And Bart van der Sloot is a researcher at Tilburg University working on big data and privacy. He also co-authored a report by the Netherlands Scientific Council for Government Policy (WRR) on government use of big data. Both are guests on Argos.