Austrian author and privacy activist
The Americans can pull the plug on digital Denmark. They own virtually everything and have full access to the cloud where all our photos, messages, and notes float around. Tech journalist Louise Olifent takes us on a journey through the digital infrastructure together with the Austrian Max Schrems, who has fought for years for European protection against American tech giants, and who has now gotten far more people to listen seriously to his words. Genstart looks today at what can happen if our digital infrastructure is controlled by people from a single nation. Host: Anna Ingrisch. Programme published on DR Lyd on 13 April 2025.
Breitband - Media and Digital Culture (full episode) - Deutschlandfunk Kultur
Because of US President Trump, the EU-US Data Privacy Framework, which governs the transfer of personal data, is on the brink. Also: why do we hallucinate phone vibrations, and the new Switch was unveiled in Paris. Jenny Genzmer, Max Schrems, Bene Wenck, Pia Behme www.deutschlandfunkkultur.de, Breitband
In the new podcast with Stefan Brink and Niko Härting, everything revolves around alliances in data protection. First (from minute 00:40), Querbeet takes up the question of whether the unpredictable US President Trump could bring down the EU-US Data Privacy Framework, which dates from 2023, when the alliance between the EU and the USA still held. In a piece for the "Handelsblatt", Stefan explained why it could now backfire that the EU Commission's adequacy decision was implemented and secured only by an executive order of President Biden rather than, as data protection advocates demanded, by an act of parliament: every US president now has it in his own hands to let the arrangement fail again. Then (from minute 13:21) the topic is the draft bill for the Data Act implementation law and the crumbling alliance between the federal and state data protection supervisory authorities: the EU Data Act, which aims at a fair distribution of the data value of connected products, gets a central supervisory authority in Germany, the Bundesnetzagentur, for the enforcement and oversight of Regulation (EU) 2023/2854. In addition, a special competence is created for the Federal Commissioner for Data Protection and Freedom of Information (BfDI), interestingly bypassing the supervisory authorities of the Länder. Relying on its legislative competence under Article 74(1)(11) of the Basic Law (law of the economy), the federal government justifies this special competence of the BfDI with Article 87(3) sentence 1 of the Basic Law. The BfDI, it argues, has the expertise needed to quickly identify and assess data protection questions and to prepare the facts, and can thus contribute significantly to a rapid assessment of data protection issues. Are old alliances between the German federal and state data protection supervisory authorities breaking apart?
Are we looking at a blueprint for centralising the supervision of private companies at the federal level (BfDI)? Finally (from minute 29:02), there is apparently a remarkable alliance between Axel Voss (MEP of the EPP group) and Max Schrems of the data protection organisation NOYB. Voss presents his plan for a revision of the GDPR: in a three-layer model, he wants to differentiate the obligations under the GDPR by company size (cf. the DSA on very large online platforms, VLOPs). Schrems agrees insofar as the "one size fits all" of the GDPR has always been crazy. However, an asymmetric approach geared to company size does not at all correlate with the risk-based approach of the GDPR: risks arise from the amount of data, the types of data (Article 9 data), and TOMs as risk-mitigating measures, not necessarily from company size. Former, crumbling, and remarkable alliances in data protection, then…
The famous privacy advocate Max Schrems took the Facebook owner Meta to the European Court of Justice and got a key ruling last month. This could have a major impact on online advertising and on how the big social media giants handle personal information. The ruling led Meta to change its terms and conditions and drew criticism from one of its directors, Nick Clegg. But what exactly do companies need to know? To discuss further, Joe spoke to Bryony Long, co-lead of Lewis Silkin's Data, Privacy and Cyber Group.
In this episode of the Data Malarkey podcast, master data storyteller Sam Knowles is joined by Alice Wallbank, a professional support lawyer at the law firm Shoosmiths, whose clients include Mercedes-Benz, Octopus Ventures, and Travelodge. The firm also specialises in working for businesses in both the property and banking sectors. The Financial Times has garlanded Shoosmiths as "one of Europe's most innovative law firms", and Alice's pioneering role at the firm, focused on privacy, data, and increasingly AI, is symptomatic of a business in the vanguard of a profession catching up with the broadest implications of technology. At the start of this year, Alice co-hosted an excellent 'data insights' conference, naturally hybrid, both in the room and online, which featured a keynote from Austrian activist and lawyer Max Schrems. Schrems is famous for his successful campaigns against Facebook (and Meta) for their violations of European data privacy laws. Before joining Shoosmiths, Alice spent six years as the principal legal counsel for the cyber and information security division of the leading technology business QinetiQ. Alice is a passionate advocate of the EU's General Data Protection Regulation (GDPR), describing it as "a good thing for data privacy – without a shadow of a doubt". Although first introduced in 2016 and in place since 2018, it has its origins in a 1995 directive designed to protect the rights and freedoms of individuals from Big Tech. Alice believes this showed "remarkable foresight". One of the very few people in the UK, Europe, and the world to have read all 90,000 words of the EU's AI Act, Alice finds that artificial intelligence gives her that fabled reaction to trench warfare: "a combination of boredom and terror". There are huge potential upsides – such as radiography diagnostics – and massive downsides from a system that is "at heart a self-limiting black box" dealing in "biases in, biases out".
And in a Data Malarkey exclusive, Alice is our first guest in more than 40 episodes … to sing. She dons her white stilettos, dances round her handbag, and turns the clock back to 1984 for a tuneful rendition of Rockwell's dancefloor smash, Somebody's Watching Me – for Alice, an insightful foreshadowing of data privacy issues 40 years into the future.
EXTERNAL LINKS
Shoosmiths home page – https://www.shoosmiths.com
Alice's profile on the Shoosmiths site – https://www.shoosmiths.com/people/cvdetails/alice-wallbank
Alice's article on Ashley Madison – https://www.grip.globalrelay.com/could-the-ashley-madison-data-breach-happen-today/
Another blog from Alice, this time on the environmental credentials of GDPR.
The EU AI Act – all 90,000 words of it – here
Rockwell's Somebody's Watching Me from 1984 – https://www.youtube.com/watch?v=7YvAYIJSSZY
To find out what kind of data storyteller you are, complete our data storytelling scorecard at https://data-storytelling.scoreapp.com. It takes just two minutes, and we'll send you your own personalised scorecard which tells you what kind of data storyteller you are.
The CJEU was busy in October, and there is a lot to report. And since apologies can now be used as damages, Laura and Cornelius campaign for the (not yet existing) Apology Regulation. Laura holds forth "briefly" on the never-ending story between Max Schrems and Meta. In the grand finale, Cornelius vents about health data and competition law. But who rants as beautifully as Cornelius? And who knows, maybe apologies will soon come trickling in for him.
In the new podcast, Stefan Brink and Niko Härting look in Querbeet (from minute 03:33) at the USA and note that the Federal Trade Commission (FTC) is giving the big platforms clear instructions on data protection (https://www.ftc.gov/reports/look-behind-scenes-examining-data-practices-social-media-video-streaming-services) and is calling for a federal data protection law. Then (from minute 06:10) they consider a CJEU decision on the supervisory authority's duty to sanction (CJEU judgment of 26 September 2024, C-768/21). To the relief of data protection supervision, the CJEU holds that the supervisory authority is not obliged to take a corrective measure in every case of an infringement, and in particular to impose a fine, for instance when the controller has already provided a remedy. But that is supposed to be only an exception … Then it is (once again) about Max Schrems vs. Meta (from minute 15:58): by way of a preliminary ruling, the CJEU (judgment of 4 October 2024, C-446/21) clarifies how Meta must handle special categories of personal data (Article 9 GDPR) in the advertising context. Schrems had mentioned his sexual orientation in a public panel discussion; Meta used this data point to offer him personalised advertising, even though he had not stated his sexual orientation in his Facebook profile. Schrems sued over this, among other things for Meta's violation of the principle of data minimisation, Article 5(1)(c) GDPR. Somewhat cryptically, the CJEU holds that unlimited storage of the data is inadmissible (without specifying a retention period) and states, regarding Article 9(2)(e) GDPR (data "manifestly made public by the data subject"), that such data can no longer claim protection under Article 9(1) GDPR; this, however, does not apply to Article 9 data "from other sources". Niko and Stefan are left somewhat perplexed …
Join us for our weekly roundup, where we analyse the unexpected rise in US job numbers with specialist guests from the US and New Zealand. Austrian activist Max Schrems discusses his pivotal win against Meta, reshaping the use of personal data in EU advertising. And finally, we examine France's reversal of its ban on the use of "steak" for vegetarian products, following pressure from vegan groups.
This Day in Legal History: Watergate Trial Begins

On October 4, 1974, the trial of key Watergate conspirators began, marking a significant moment in American legal and political history. The defendants included top Nixon administration officials H.R. Haldeman, John Ehrlichman, John Mitchell, Robert Mardian, and Kenneth Parkinson. These men were accused of participating in the cover-up of the break-in at the Democratic National Committee headquarters, a scandal that would ultimately lead to President Nixon's resignation.

The trial was presided over by Judge John Sirica, who had played a pivotal role in uncovering the truth behind Watergate. At the outset, the court reviewed transcripts of the now-infamous Watergate tapes, which had been secretly recorded by President Nixon in the Oval Office. These recordings captured critical conversations revealing the extent of the administration's involvement in the cover-up. The tapes provided key evidence, especially a June 23, 1972, conversation known as the "smoking gun" tape, in which Nixon and Haldeman discussed obstructing the FBI's investigation. The trial was part of the broader legal reckoning following Nixon's resignation two months earlier in August 1974.

Haldeman and Ehrlichman, two of Nixon's closest aides, were found guilty of conspiracy, obstruction of justice, and perjury. John Mitchell, Nixon's former Attorney General, was also convicted on conspiracy charges. This trial helped close one chapter of the Watergate scandal, demonstrating the judicial system's role in holding even the highest-ranking officials accountable for abuses of power.

A federal court has blocked President Biden's latest student debt relief plan, ruling it likely unconstitutional. The U.S. District Court for the Eastern District of Missouri granted a preliminary injunction requested by a coalition of seven Republican-led states. Judge Matthew T. Schelp, who issued the ruling, emphasized the public interest in ensuring the government follows the law. The plan, which would have canceled debt for 27 million borrowers, is now paused.

The plaintiffs argue the plan is an executive overreach. While Georgia was dismissed from the lawsuit for lacking standing, Missouri's Higher Education Loan Authority was found to have suffered an injury, giving Missouri standing to continue the case. This ruling temporarily halts the debt relief program as the court considers whether to permanently strike it down. The injunction is a blow to Biden's efforts to reduce student debt, following the U.S. Supreme Court's earlier decision in 2023 that struck down a broader relief plan. Another related plan, known as "SAVE," which aims to provide zero-dollar payments and expedite forgiveness for some borrowers, is also under review by the Eighth Circuit Court. This decision prevents the debt relief plan from taking effect while the court evaluates the case, reflecting the court's consideration of the public interest and potential harm.

Biden's Latest Student Debt Relief Plan Blocked by Federal Court

The U.S. government has supported Nvidia investors in a Supreme Court case where they allege the company misled the market about its reliance on cryptocurrency mining sales. The Department of Justice (DOJ) and the Securities and Exchange Commission (SEC) filed a brief urging the Court to allow the case to proceed, siding with the investors. The Ninth Circuit Court previously revived the class action, ruling that Nvidia's CEO Jensen Huang made materially false or misleading statements about the company's revenue dependence on crypto mining, particularly before a 2018 market downturn.

Nvidia disputes the case, claiming the investors' allegations were based on unreliable expert data. However, the DOJ and SEC argued that the claims were backed by multiple sources, including accounts from former employees, a Royal Bank of Canada report, Nvidia's own public statements, and SEC filings. These sources collectively suggest Nvidia earned $1.35 billion more from crypto sales during the boom than it disclosed. The government emphasized that the Private Securities Litigation Reform Act (PSLRA), which governs investor lawsuits, was not misapplied by the Ninth Circuit. It rejected Nvidia's argument that the case was based solely on unsubstantiated expert opinions, pointing out that the allegations were supported by sufficient evidence to infer Nvidia's intent to mislead investors.

Nvidia Investors Backed by US in Supreme Court Crypto Sales Case

The European Union's top court has ruled that Meta must limit the use of personal data collected from Facebook users for targeted advertising. This decision supports privacy advocate Max Schrems, who argued that Meta's personalized advertising violated privacy regulations by processing personal data without proper limitations. The Court of Justice of the European Union (CJEU) emphasized that under the EU's General Data Protection Regulation (GDPR), the principle of data minimization restricts how much personal data companies can use for targeted ads.

Meta responded by stating it has invested heavily in privacy features and does not use sensitive data for personalized ads. Schrems' lawyer welcomed the decision, saying it would significantly reduce the amount of data Meta and other companies can use for advertising, even when users consent. This ruling marks another victory for Schrems, who has repeatedly taken Meta to court over alleged GDPR violations.

Meta must limit data use for targeted advertising, top EU court rules | Reuters

Cupertino, California, has reached a settlement with the state's Department of Tax and Fee Administration, allowing the city to retain millions in sales tax revenue tied to Apple's online sales through August 2024. The settlement resolves a dispute that began in 2021 when the department audited Cupertino's 26-year tax-sharing agreement with Apple. Under this agreement, Apple treated all in-state online sales as originating from Cupertino, directing a portion of the state sales tax to the city, which in turn shared 35% of the revenue with Apple.

The tax department has scrutinized similar deals with other retailers like Best Buy and Williams-Sonoma, arguing that businesses need to show active participation in the transactions where the sales are reported. Last year, Cupertino set aside $56.5 million to potentially repay the state as the dispute escalated to $60.3 million by February 2024. The settlement avoids litigation, but the city did not disclose specific financial details or directly reference Apple in the announcement. It clarified that the agreement does not impact its tax-sharing arrangement with any taxpayer.

Cupertino Settles With State in Apple Sales Tax-Sharing Fight

This week's closing theme is by Pyotr Ilyich Tchaikovsky.

This week's closing theme is the Adagio lamentoso from Tchaikovsky's Symphony No. 6 in B minor, Op. 74, also known as the "Pathétique" Symphony. Tchaikovsky composed this masterpiece in 1893, just a few months before his sudden and mysterious death. The "Pathétique" Symphony is widely regarded as his most personal and emotionally charged work, filled with profound sorrow and introspection. The fourth movement, "Adagio lamentoso," is the symphony's haunting conclusion.

Unlike most symphonies, which end on a triumphant or uplifting note, Tchaikovsky chose to close his final symphony with this slow, lamenting movement. It reflects deep melancholy and resignation, expressing a sense of despair that resonates with listeners. The music ebbs and flows between quiet, intimate passages and moments of overwhelming intensity, capturing the fragility and tragedy of life. Tchaikovsky was known for his ability to express raw emotion through music, and the "Adagio lamentoso" exemplifies this talent. The theme's descending lines seem to mirror a downward spiral into sorrow, giving it an almost funereal quality. Some have speculated that the symphony reflects Tchaikovsky's own inner turmoil and struggles, adding a layer of poignancy to the already emotional score.

As you listen to the "Finale" this week, take note of its deep, somber beauty and how Tchaikovsky blends anguish with moments of quiet reflection. It's a fitting close to a symphony that grapples with life's most profound emotions. Without further ado, Pyotr Ilyich Tchaikovsky's Symphony No. 6 in B minor, Op. 74, the "Pathétique" Symphony. Enjoy.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.minimumcomp.com/subscribe
Success of global cooperation: the ozone hole is healing (PlanetZ, 2024-09-26) - The ozone hole is on the road to recovery. According to the latest forecasts, the ozone layer over Antarctica will return to its 1980s levels by 2066, a sign of the success of global emission-reduction efforts.
AI fools reCaptcha v2, built to identify humans (ITBusiness, 2024-09-26) - Swiss researchers have discovered that artificial intelligence can defeat one of Google's most widespread bot-detection systems, reCaptcha v2, which was designed to distinguish humans from robots. Using advanced machine-learning methods, researchers at ETH Zürich were able to solve reCaptcha v2 with a 100 percent success rate.
Microsoft treats students and teachers (Mínuszos, 2024-09-26) - The new school year brings many new features for Microsoft 365 users, especially in education. The Redmond company developed the new functions, meant to make teaching and learning more effective, for teachers and students. Clipchamp, a new dimension of creativity: video content creation has never been so simple.
Who watches you most diligently when you are online? (ICT Global, 2024-09-26) - Google is in the absolute lead, a statement that holds true in many respects. In Kaspersky's latest report, it referred to the monitoring of people's online behaviour.
The man who beat Facebook says many EU data protection authorities simply do nothing (Telex, 2024-09-26) - According to Max Schrems, there are member-state authorities whose head was appointed precisely so that they would not do their job: "It is as if the anti-drug agency were not interested in drugs."
A four-day international technology show comes to Budapest in November (Márkamonitor, 2024-09-26) - A brand-new international technology event debuts in Budapest this November, addressing professionals and the general public alike. The organisers' aim is for Techxpo to become, within a few years, the defining technology expo of the Central and Eastern European region, an event missing from the region but with a long tradition in Western European cities.
The Russian super-borehole peaked 35 years ago (First Class, 2024-09-26) - 35 years ago, Soviet scientists reached the deepest point ever in the super-drilling project probing the Earth's secrets, which also gave rise to a false legend.
The eVinci nuclear microreactor will soon be commercially available (Rakéta, 2024-09-26) - eVinci is already in the licensing process, so several such microreactors could be deployed by the end of the decade.
Not only pesticides, heatwaves also impair bumblebees' sense of smell (Helló Sajtó!, 2024-09-26) - Until now we believed that only pesticides harm bumblebees' sense of smell, but joint research by the Plant Protection Institute of the HUN-REN Centre for Agricultural Research and the University of Würzburg shows that high temperatures also reduce the insects' sense of smell. The researchers believe this could seriously affect the pollination of our crops.
Even bigger floods than today's await us (24.hu, 2024-09-26) - In the 2050s, such storms will arrive with at least 5 percent more rain and occur 50 percent more often than today.
Hungarian banks are already using AI (Fintech, 2024-09-26) - At an event of the European Women Payments Network and the Hungarian Fintech Association (MAFISZ), the latest topics in the Hungarian financial sector's use of artificial intelligence (AI) were on the agenda. The discussion also covered security, implementation difficulties, education, and change management, with leaders such as …
The Webb telescope found a "strange" galaxy with something brighter than its stars (PhoneBazis, 2024-09-26) - When astronomers pointed the James Webb Space Telescope at the early universe, they found an ancient galaxy with an unusual light signature. After careful study of the data, scientists determined that it is not the galaxy's stars that directly cause the extraordinary brightness, but the gas in space. This was reported in Monthly Notices of the Roya…
From soil to plate: what is 3D printing used for in agriculture? (ICT Global, 2024-09-26) - After drones, robots, autonomous vehicles, sensors and sensor networks, 3D printing has now also appeared in agriculture. Applications are still in their infancy, but adoption promises to be fast. From machine parts to seedling planting, from soil research to continuous crop monitoring, the technology can be put to many uses. Find our other episodes at podcast.hirstart.hu.
If you're in the privacy industry, you know Max Schrems: Renowned privacy rights advocate and the David who took on Facebook's Goliath to shine a light on the misuse of consumer data. But that's just one facet of what he does. In this episode, Max talks to Arlo Gilbert about Facebook, Amazon, and Google, whether or not you can file a class action in Europe, and how somebody became so passionate about privacy that they ended up changing the world. Part II of a two-part series.
If you're in the privacy industry, you know Max Schrems: Renowned privacy rights advocate and the David who took on Facebook's Goliath to shine a light on the misuse of consumer data. But that's just one facet of what he does. In this episode, Max talks to Arlo Gilbert about his organization, noyb, the often-unglamorous work of protecting privacy rights, and how the Florida educational system kicked off a lifelong passion for privacy. Part I of a two-part series.
Greece and Albania have activated the EU Civil Protection Mechanism because of the severe wildfires. How well is the EU prepared for the ever-worsening consequences of the climate crisis, and how can it help affected countries? European data protection association versus Elon Musk: "Illegal data collection at X for AI training", interview with Max Schrems, chair of NOYB; "Denk' ich an Europa" with Dirk Stermann, cabaret artist: "Look at what is going well in Europe!"; presenter: Judith Schulte-Loh. From WDR 5.
In this episode, David Carroll, an associate professor of media design in the MFA Design and Technology graduate program at the School of Art, Media and Technology at Parsons School of Design at The New School, speaks to Ravi Naik, legal director at AWO, a consultancy with offices in London, Brussels, and Paris that works on a range of data protection and tech policy issues. Their discussion delves into the evolution of data protection from the Cambridge Analytica scandal to current questions provoked by generative AI, with a focus on a GDPR complaint against OpenAI brought by Noyb, the non-profit founded by Austrian activist Max Schrems.
Henrik is unhappy that citizens are being asked to stockpile Jakabov and FM radios when the state has ignored numerous warnings to bring its own IT security up to an acceptable level. In the UK, train passengers are being monitored by cameras running Amazon software capable of emotion recognition, i.e. reading what mood you are in. And we have an interview with Max Schrems, who has been called Meta's nemesis. He litigates against Meta, owner of Facebook and Instagram, on behalf of citizens, and he has now helped push Meta to pause the training of its AI chatbot on Europeans' data.
Meta pauses the use of Instagram and Facebook content to train its AIs! After several rounds of discussion with the Irish regulator and the intervention of the non-profit noyb, led by Max Schrems, we are seeing a significant step back by Meta on the use of user data. This development raises important questions about the role of the European GDPR and how it protects citizens from invasive initiatives by big tech companies. To learn all the details of this story and understand what it really means for us as users, watch today's video. This is "Ciao Internet!", the leading Italian-language tech policy show, hosted by Matteo Flora. For business contacts: sales@matteoflora.com
Meta has announced that it will use personal data from Facebook and Instagram for its own AI technology. Abuse, says Max Schrems, who explains why his data protection organization noyb has filed complaints in eleven European countries. Max Schrems in conversation with Sebastian Wellendorf. www.deutschlandfunk.de, @mediasres
Host: Henrik Heide. Guests: Agnes Rønberg, Laurids Hovgaard, Jens Ramskov and ChatGPT-4o. Alongside the explosion of generative text AI, a new term has spread: hallucinations. This soft label for what should really be called fabrications is a built-in, unavoidable feature of generative models. One can argue it simply comes with the territory when you ask a generative AI to go beyond the training data it had to work with. But it becomes a problem when the AI starts inventing and spreading false personal information about you and me. According to the makers of ChatGPT, that information cannot be removed. If you live in the EU, however, you have a fundamental right to correct information about yourself and may demand that incorrect data be rectified, erased or updated. That is enshrined in both the GDPR and the EU Charter, which is why privacy activist Max Schrems has filed a complaint with the Austrian data protection authority against OpenAI, the company behind ChatGPT, a case that may end up before the EU's top data protection body, the EDPB. In Transformator we put this to the test and look at how to prevent AI from spreading all sorts of strange information about us. In the run-up to the past week's violent solar storms and the northern lights that followed, there was widespread fear that electronics everywhere would be knocked out. In the end it went no worse than tractors stalling in the middle of sowing. And then we have a space weather forecast for all of us who didn't manage to catch the northern lights this time around; there is more on the way. Links: See the space weather at SpaceWeatherLive · Why you can't force ChatGPT to tell the truth · ChatGPT cannot comply with the GDPR: "There are no good solutions" · Huge solar storm hit Danish farmers: machines ground to a halt
Do you know noyb? It is a privacy NGO led by activist Max Schrems, and one the tech giants fear. Why? Because the Austrian association helped secure a very heavy sanction against Meta: a 1.2-billion-euro fine imposed last year by the European authorities over the transfer of European users' data to the United States outside the legal framework. And in 2024 the NGO appears to be turning to OpenAI, with its famous chatbot ChatGPT in its sights. For noyb, the sometimes false information ChatGPT generates about real individuals constitutes a violation of the General Data Protection Regulation (GDPR). A public figure, whose name the association does not disclose, asked the chatbot for his date of birth. The chatbot repeatedly gave incorrect answers without indicating that it had no data enabling it to respond: a perfect example of the "hallucinations" to which this type of model is prone, and which researchers are still struggling to fix. To quote the association: "Making up false information is problematic in itself. But when it comes to false information about individuals, the consequences can be serious. It is clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals." According to noyb, the person in question tried to contact OpenAI to correct the false information, but the start-up refused the request. Likewise, OpenAI was unable to disclose information about the type of data processed, its sources or its recipients.
According to noyb, Sam Altman's company breaches the GDPR on two further points: the right to rectification when data about a person is inaccurate, and the right of access. To quote again: "Companies must be able to show what data they hold on individuals and what its sources are." noyb has therefore filed a complaint with the Austrian data protection authority, asking it to "investigate OpenAI's data processing and the measures taken to ensure the accuracy of personal data processed in the context of the company's large language models". noyb is also encouraging the authority to fine the company, as was done with Meta. And remember: in 2023 Italy temporarily suspended access to ChatGPT after a major data leak on the platform. Hosted by Acast. Visit acast.com/privacy for more information.
There's a whiff of Auld Lang Syne about episode 500 of the Cyberlaw Podcast, since after this it will be going on hiatus for some time and maybe forever. (Okay, there will be an interview with Dmitri Alperovich about his forthcoming book, but the news commentary is done for now.) Perhaps it's appropriate, then, for our two lead stories to revive a theme from the 90s – who's better, Microsoft or Linux? Sadly for both, the current debate is over who's worse, at least for cybersecurity. Microsoft's sins against cybersecurity are laid bare in a report of the Cyber Security Review Board, Paul Rosenzweig reports. The Board digs into the disastrous compromise of a Microsoft signing key that gave China access to US government email. The language of the report is sober, and all the more devastating because of its restraint. Microsoft seems to have entirely lost the security focus it so famously pivoted to twenty years ago. Getting it back will require a focus on security at a time when the company feels compelled to focus relentlessly on building AI into its offerings. The signs for improvement are not good. The only people who come out of the report looking good are the State Department security team, whose mad cyber skillz deserve to be celebrated – not least because they've been questioned by the rest of government for decades. With Microsoft down, you might think open source would be up. Think again, Nick Weaver tells us. The strategic vulnerability of open source, as well as its appeal, is that anyone can contribute code to a project they like. And in the case of the XZ backdoor, anybody did just that. A well-organized, well-financed, and knowledgeable group of hackers cajoled and bullied their way into a contributing role on an open source project that enabled various compression algorithms. Once in, they contributed a backdoored feature that used public key encryption to ensure access only to the authors of the feature. 
It was weeks from being in every Linux distro when a Microsoft employee discovered the implant. But the people who almost pulled this off seemed well-practiced and well-resourced. They've likely done this before, and will likely do it again. Leaving all open source projects facing their own strategic vulnerability. It wouldn't be the Cyberlaw Podcast without at least one Baker rant about political correctness. The much-touted bipartisan privacy bill threatening to sweep to enactment in this Congress turns out to be a disaster for anyone who opposes identity politics. To get liberals on board with a modest amount of privacy preemption, I charge, the bill would effectively overturn the Supreme Court's Harvard admissions decision and impose race, gender, and other quotas on a host of other activities that have avoided them so far. Adam Hickey and I debate the language of the bill. Why would the Republicans who control the House go along with this? I offer two reasons: first, business lobbyists want both preemption and a way to avoid charges of racial discrimination, even if it means relying on quotas; second, maybe Sen. Alan Simpson was right that the Republican Party really is the Stupid Party. Nick and I turn to a difficult AI story, about how Israel is using algorithms to identify and kill even low-level Hamas operatives in their homes. Far more than killer robots, this use of AI in war is far more likely to sweep the world. Nick is critical of Israel's approach; I am less so. But there's no doubt that the story forces a sober assessment of just how personal and how ugly war will soon be. Paul takes the next story, in which Microsoft serves up leftover “AI gonna steal yer election” tales that are not much different than all the others we've heard since 2016 (when straight social media was the villain). The bottom line: China is using AI in social media to advance its interests and probe US weaknesses, but it doesn't seem to be having much effect. 
Nick answers the question, “Will AI companies run out of training data?” with a clear viewpoint: “They already have.” He invokes the Hapsburgs to explain what's going wrong. We also touch on the likelihood that demand for training data will lead to copyright liability, or that hallucinations will lead to defamation liability. Color me skeptical. Paul comments on two US quasi-agreements, with the UK and the EU, on AI cooperation. And Adam breaks down the FCC's burst of initiatives celebrating the arrival of a Democratic majority on the Commission for the first time since President Biden's inauguration. The commission is now ready to move out on net neutrality, on regulating cars as oddly shaped phones with benefits, and on SS7 security. Faced with a security researcher who responded to a hacking attack by taking down North Korea's internet, Adam acknowledges that maybe my advocacy of hacking back wasn't quite as crazy as he thought when he was in government. In Cyberlaw Podcast alumni news, I note that Paul Rosenzweig has been appointed an advocate at the Data Protection Review Court, where he'll be expected to channel Max Schrems. And Paul offers a summary of what has made the last 500 episodes so much fun for me, for our guests, and for our audience. Thanks to you all for the gift of your time and your tolerance!
A paid subscription to Facebook and Instagram currently costs ten euros a month, but Meta is willing to roughly halve that. Still, criticism of this "solution" persists, as Joe van Burik explains in this Tech Update. Since last autumn you have been able to pay ten euros a month for Facebook or Instagram; Meta then may no longer use your data for personalized advertising, in order to comply with European laws on privacy and data processing. Critics such as privacy activist Max Schrems therefore call this "Pay or Okay". Meta has now indicated it is willing to drop to six euros a month for one account, and four euros for each linked account, so on average a halving of the price. A Meta lawyer said as much during a hearing with the European Commission, Reuters reports, adding: "That is as low as we can go, and you still get a genuinely high-quality service. We want to get rid of the uncertainty these rules create." The offer is now before the Irish privacy watchdog. Schrems has already responded and, entirely as expected, is not impressed. According to his privacy organization None of Your Business, Meta is still forcing people either to pay or to accept a privacy violation. In a statement he says: "Research shows that even at 2 euros a month or less, only a few people would want this. So 99 percent of people will keep consenting to the privacy violation. The GDPR stipulates that paying may not be the alternative to a free service. A price cut changes nothing about that."
Also in this Tech Update: The European Central Bank and several national banks, including Germany's, want to use AI to gain more insight into climate change and into how companies report their commitments and agreements on it. And fully electric Ferraris will also get "emotion" in the sound they produce, promises CEO Benedetto Vigna. See omnystudio.com/listener for privacy information.
Episode 36: On 21 March the State Commissioner for Data Protection hosts a "Theme Day on International Data Transfers" at his authority. Civil rights activist Max Schrems will be there, as will representatives of the European Commission, industry, a trade association and a law firm. LfDI head of department David Schweizer explains what will be discussed that day. LfDI Tobias Keber takes a close look at an interesting CJEU ruling, […]
Robert Bateman is a freelance writer who creates privacy and data protection content for blogs, emails, articles, websites, reports, and white papers. He's been an industry advocate since 2017 and has interviewed leading figures in the privacy field, including Max Schrems and Johnny Ryan. As a thought leader, Robert is a sought-after speaker and panelist for online and in-person privacy conferences, events, and webinars. Because of his thirst for knowledge and passion for privacy, Robert began providing training and consultancy work in 2023. In this episode… The United States and the United Kingdom have different approaches to privacy and data protection. The US has a patchwork of state privacy laws, while the UK has one unified national data protection law. So how can US companies comply with UK data protection laws when transferring data to the UK? Data privacy and protection thought leader Robert Bateman explains that one of the main challenges is understanding the different requirements of US state privacy laws and UK data protection laws. For example, some US states mandate that companies obtain consent from people before collecting their personal information. In contrast, the UK data protection law does not require consent for all types of data collection. To mitigate the risk of fines and other penalties, US companies should examine their data collection and processing procedures to comply with both US state privacy and UK data protection laws. Companies should also seek the counsel of an experienced data privacy attorney to assist them in understanding their obligations and developing a compliance plan. Join Justin and Jodi Daniels in this episode of the She Said Privacy/He Said Security Podcast as privacy and data protection content creator Robert Bateman joins the show. 
Robert explains the challenges UK data privacy professionals face, the difficulties US companies encounter in understanding UK data transfer rules, and why the ICO should enforce cookie compliance.
An exclusive, in-depth interview with Max Schrems on data transfers, the EU-US data transfer agreement and the future of data transfers (and Schrems III?). The full recording of the event with one of the most authoritative voices on the subject, at the first meeting sponsored by PrivacyForFutures! (Subtitles available in EN, IT, DE, FR, ES)
00:00:00 - Welcome
00:04:25 - Anna Cataleta on improvements to safeguards for individuals after Schrems I and II
00:15:59 - Giuseppe Vaciago on balancing the protection of rights with geopolitical issues
00:21:28 - Stefano Leucci on the data protection authorities' approach to data transfers
00:28:25 - Gabriele Faggioli on the stability of the US-EU data transfer agreement
00:36:43 - Alessandro Ortalda on data transfers to jurisdictions other than the United States
00:40:38 - Gabriele Rizzo on balancing protection and liberalization
00:51:13 - Matteo Flora on "Are you weaponizing privacy?"
For business contacts: sales@matteoflora.com
Your new host, Brent Gervais, had the opportunity to sit down together with Max Schrems - privacy lawyer and pioneer in the European data privacy movement - to explore some thoughts and ideas that stemmed from Max's keynote given at the Nextcloud Conference 2023. We explore the newly adopted Transatlantic Data Privacy Framework, its implications and why we all need to care and pay attention. We also touch on the 15-year debate on EU-US data transfers, surveillance law, anti-competitive behaviours in Big Tech and the need for regulation, despite the challenges presented by the process.
X, the social media platform we knew until recently as Twitter, wants access to things that fall under biometric data, such as your fingerprint, eyes or entire face. Its privacy policy has been amended to say so. The policy also states that you must give consent for the provision of biometric data, though it does not specify exactly which data is involved. It could be used for verification, either at login or simply to prove you are human, given that X owner Elon Musk has often said that bots, i.e. fake accounts, are his biggest annoyance. Also striking: X wants to know about your career, your education and your (previous) jobs. In fact, it also wants to know what job you would like to have, what you can do and more, so it can recommend jobs to you and tip you off to employers. That sounds as if X is about to take on LinkedIn, although the policy also says X wants to use this data to show you better ads. Also in this Tech Update: Google is alleged to process data from Fitbit smartwatches incorrectly, which is why Max Schrems's privacy organization noyb (None of Your Business) has filed a complaint with, among others, the Dutch Data Protection Authority. And Google will unveil its new Pixel 8 smartphone on 4 October, when it is expected to announce even more new hardware products.
In this episode, you will listen to Luiza Jarovsky and Max Schrems talking about GDPR enforcement challenges and other topics such as: the 1.2 billion euro fine against Meta, and what we can expect next; 5 years of GDPR and noyb's 800+ cases: challenges, wins, and how to improve enforcement; Max's journey as a privacy advocate; privacy, AI, and enforcement issues; the future of privacy; and EU-U.S. data transfers. Max is a trailblazer, and his work and advocacy in the privacy field are groundbreaking and have literally changed how we learn, practice, and understand privacy today. Luiza Jarovsky is a lawyer, CEO of Implement Privacy, and author of Luiza's Newsletter. Read more about Luiza's work at https://www.luizajarovsky.com. Subscribe to Luiza's Newsletter: https://www.luizasnewsletter.com. Check out the courses and training programs Luiza is leading at https://www.implementprivacy.com. Follow Luiza on social media: LinkedIn: https://www.linkedin.com/in/luizajarovsky | Twitter: https://www.twitter.com/luizaJarovsky | YouTube: https://youtube.com/@luizajarovsky
This week, we discuss the new EU adequacy decision for the US, based on the Data Privacy Framework (plus Max Schrems's inevitable reaction), and a proposed UK-US ‘data bridge'; fixes for three more vulnerabilities in Progress Software's MOVEit Transfer app; plus this month's Patch Tuesday and other security updates.
Max Schrems is the lawyer and activist behind two (and, probably soon, a third) legal challenge to the adequacy of U.S. law to protect European personal data. Thanks to the Federalist Society's Regulatory Transparency Project, Max and I were able to spend an hour debating the law and policy behind Europe's generation-long fight with the United States over transatlantic data flows. It's civil, pointed, occasionally raucous, and wide-ranging – a fun, detailed introduction to the issues that will almost certainly feature in the next round of litigation over the latest agreement between Europe and the U.S. Don't miss it! Download 465th Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Netflix reportedly wants to broadcast a major golf tournament live this autumn, showing live sport on the platform for the first time. The participants, however, would not only be professional golfers. Netflix has long been linked to live sport, but so far only hosts sports documentaries, including on golf and Formula 1. Those worlds are set to come together in a major golf tournament in Las Vegas this autumn, around the time the city also hosts its first Formula 1 Grand Prix in many years. Both golfers and F1 drivers would take part in the tournament, The Wall Street Journal reports. Also in this Tech Update: The chip race accelerates today, as one of the big players is due to announce news. Advanced Micro Devices (AMD) will unveil a new "superchip": the MI300 is one of the most advanced graphics processors, purpose-built for language models and AI applications. And Spotify has been hit with a hefty privacy fine: 5 million euros for violating the privacy of users in the EU. The fine was issued by Sweden's data watchdog, IMY. The case lay dormant for a long time but was revived by privacy activist Max Schrems.
This episode of the Cyberlaw Podcast kicks off with a spirited debate over AI regulation. Mark MacCarthy dismisses AI researchers' recent call for attention to the existential risks posed by AI; he thinks it's a sci-fi distraction from the real issues that need regulation—copyright, privacy, fraud, and competition. I'm utterly flummoxed by the determination on the left to insist that existential threats are not worth discussing, at least while other, more immediate regulatory proposals have not been addressed. Mark and I cross swords about whether anything on his list really needs new, AI-specific regulation when Big Content is already pursuing copyright claims in court, the FTC is already primed to look at AI-enabled fraud and monopolization, and privacy harms are still speculative. Paul Rosenzweig reminds us that we are apparently recapitulating a debate being held behind closed doors in the Biden administration. Paul also points to potentially promising research from OpenAI on reducing AI hallucination. Gus Hurwitz breaks down the week in FTC news. Amazon settled an FTC claim over children's privacy and another over security failings at Amazon's Ring doorbell operation. The bigger story is the FTC's effort to issue a commercial death sentence on Meta's children's business for what looks to Gus and me more like a misdemeanor. Meta thinks, with some justice, that the FTC is looking for an excuse to rewrite the 2019 consent decree, something Meta says only a court can do. Paul flags a batch of China stories: China's version of Bloomberg has begun quietly limiting the information about China's economy that is available to overseas users. TikTok is accused of storing influencers' sensitive financial information in China, contrary to its promises. Malaysia won't ban Huawei from its 5G network. The former Harvard chair convicted of lying about taking Chinese money has been sentenced to just two days in prison. 
And another professor, charged with and then exonerated of commercial espionage, has won the right to sue the FBI over his arrest. Gus tells us that Microsoft has effectively lost a data protection case in Ireland and will face a fine of more than $400 million. I seize the opportunity to plug my upcoming debate with Max Schrems over the Privacy Framework. Paul is surprised to find even the State Department rising to the defense of section 702 of the Foreign Intelligence Surveillance Act (“FISA”). Gus asks whether automated tip suggestions should be condemned as “dark patterns” and whether the FTC needs to investigate the New York Times's stubborn refusal to let him cancel his subscription. He also previews California's impending Journalism Preservation Act. Download 461st Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
This episode of the Cyberlaw Podcast features the second half of my interview with Paul Stephan, author of The World Crisis and International Law. But it begins the way many recent episodes have begun, with the latest AI news. And, since it's so squarely in scope for a cyberlaw podcast, we devote some time to the so-appalling-you-have-to-laugh-to-keep-from-crying story of the lawyer who relied on ChatGPT to write his brief. As Eugene Volokh noted in his post, the model returned exactly the case law the lawyer wanted—because it made up the cases, the citations, and even the quotes. The lawyer said he had no idea that AI would do such a thing. I cast a skeptical eye on that excuse, since when challenged by the court to produce the cases he relied on, the lawyer turned not to Lexis-Nexis or Westlaw but to ChatGPT, which this time made up eight cases on point. And when the lawyer asked, “Are the other cases you provided fake,” the model denied it. Well, all right then. Who among us has not asked Westlaw, “Are the cases you provided fake?” Somehow, I can't help suspecting that the lawyer's claim to be an innocent victim of ChatGPT is going to get a closer look before this story ends. So if you're wondering whether AI poses existential risk, the answer for at least one lawyer's license is almost certainly “yes.” But the bigger story of the week was the cries from Google and Microsoft leadership for government regulation. Jeffery Atik and Richard Stiennon weigh in. Microsoft's President Brad Smith has, as usual, written a thoughtful policy paper on what AI regulation might look like. And they point out that, as usual, Smith is advocating for a process that Microsoft could master pretty easily. Google's Sundar Pichai also joins the “regulate me” party, but a bit half-heartedly. I argue that the best way to judge Silicon Valley's confidence in the accuracy of AI is by asking when Google and Apple will be willing to use AI to identify photos of gorillas as gorillas. 
Because if there's anything close to an extinction event for those companies, it would be rolling out an AI that once again fails to differentiate between people and apes. Moving from policy to tech, Richard and I talk about Google's integration of AI into search; I see some glimmer of explainability and accuracy in Google's willingness to provide citations (real ones, I presume) for its answers. And on the same topic, the National Academy of Sciences has posted research suggesting that explainability might not be quite as impossible as researchers once thought. Jeffery takes us through the latest chapters in the U.S.-China decoupling story. China has retaliated, surprisingly weakly, for U.S. moves to cut off high-end chip sales to China. It has banned sales of U.S.-based Micron memory chips to critical infrastructure companies. In the long run, the chip wars may be the disaster that Nvidia's CEO foresees. Jeffery and I agree that Nvidia has much to fear from a Chinese effort to build a national champion to compete in AI chipmaking. Meanwhile, the Biden administration is building a new model for international agreements in an age of decoupling and industrial policy. Whether its effort to build a China-free IT supply chain will succeed is an open question, but we agree that it marks an end to the old free-trade agreements rejected by both former President Trump and President Biden. China, meanwhile, is overplaying its hand in Africa. Richard notes reports that Chinese hackers attacked the Kenyan government when Kenya looked like it wouldn't be able to repay China's infrastructure loans. As Richard points out, lending money to a friend rarely works out. You are likely to lose both the friend and the money. Finally, Richard and Jeffery both opine on Ireland's imposing, under protest, a $1.3 billion fine on Facebook for sending data to the United States despite the Court of Justice of the European Union's (CJEU) two Schrems decisions. 
We agree that the order simply sets a deadline for the U.S. and the EU to close their deal on a third effort to satisfy the CJEU that U.S. law is “adequate” to protect the rights of Europeans. Speaking of which, anyone who's enjoyed my rants about the EU will want to tune in for a June 15 Teleforum in which Max Schrems and I will debate the latest privacy framework. If we can, we'll release it as a bonus episode of this podcast, but listening live should be even more fun! Download 459th Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
We've been speaking to Max Schrems to get his reaction to the fine.
What happens when our most intimate data leaks to Facebook, one of the world's biggest tech companies? Ekot's investigative reporters Sven Carlsson and Sascha Granberg look for answers. Personal data from Swedish bank customers, patients, and around a million pharmacy visitors has leaked to Facebook via its Facebook Pixel tool. Konflikt traces how online tracking grew into a billion-dollar industry, looks for answers about what has happened to the leaked data, and asks what politicians in Brussels are doing now that health information has become a commodity. Featuring: Lou Montulli, entrepreneur and former Netscape employee; Göran Arvinius, former CEO of Doubleclick Scandinavia; Philip Eriksson, programmer and P1 listener; Arash Gilan, CEO of Viva Media; Max Schrems, activist and founder of NOYB; Abir Al-Sahlani, Member of the European Parliament for the Centre Party. Reporters: Sven Carlsson and Sascha Granberg. Presenter: Lotten Collin (lotten.collin@sr.se). Sound engineers: Christer Tjernell, Lisa Abrahamsson, Maria Stillberg.
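The leak described above hinges on how a tracking pixel works: the visitor's browser fetches a tiny resource from the tracker's server and appends the current page's URL as a query parameter, so anything sensitive in that URL (such as a pharmacy search term) travels along. Here is a minimal, library-free sketch of the mechanism; the endpoint, parameter names, and pharmacy URL are illustrative assumptions, not Meta's actual Pixel API:

```python
from urllib.parse import urlencode

# Hypothetical tracker endpoint, standing in for a real pixel's image URL.
TRACKER_ENDPOINT = "https://tracker.example/tr"

def pixel_request_url(pixel_id: str, page_url: str, event: str = "PageView") -> str:
    """Build the image URL a tracking pixel would make the visitor's browser fetch.

    The page URL is passed along as a parameter, so any sensitive terms
    embedded in it reach the tracker's server.
    """
    params = {"id": pixel_id, "ev": event, "dl": page_url}
    return f"{TRACKER_ENDPOINT}?{urlencode(params)}"

# A pharmacy search page reveals what the visitor was looking for:
url = pixel_request_url("123456", "https://pharmacy.example/search?q=antidepressant")
```

The resulting request URL contains the search term verbatim, which is exactly the class of leakage the episode investigates.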
Discover the secrets to building a credible personal brand from an award-winning privacy writer. Join us on today's episode as we sit down with Robert Bateman, Head of Content at GRC World Forums. Robert shares his unique career path, from a postgraduate degree in psychotherapeutic counselling to finding his niche as a privacy writer. He offers his insight on the future of the GDPR and some of his most memorable interview experiences with well-known privacy figures like Max Schrems. Tune in for actionable tips on how to establish a credible personal brand and find your voice in the privacy industry. Hi, my name is Jamal Ahmed and I'd like to invite you to listen to this special episode of the #1 ranked data privacy podcast. In this episode, discover:
* How to stay ahead of changing privacy laws
* What the future holds for the GDPR
* How to build a credible personal brand with social media
* The best practices for writing effective and engaging privacy content
Whether you're considering a career in privacy or looking to build your personal brand, don't miss this opportunity to learn from a respected privacy writer! Robert Bateman is a respected voice on data protection, privacy, and security law. He built his profile creating in-depth reports on legislation, compliance guidance documents for organizations, and news articles about the latest sectoral developments worldwide. Robert's passion for privacy and security began while studying for a postgraduate law degree. His dissertation, on the compatibility of the U.K.'s Data Protection Act with the European Convention on Human Rights, won the DMH Stallard Prize for best research project. Since then, Robert has worked with privacy- and security-focused tech firms and startups, creating detailed guidance and policy documents to help organizations comply with their data protection and security obligations. 
A passionate advocate for digital rights, Robert has also worked as a journalist reporting on the latest developments in digital privacy, tech policy, and state surveillance.
Follow Jamal on LinkedIn: https://www.linkedin.com/in/kmjahmed/
Follow Robert on LinkedIn: https://www.linkedin.com/in/protectionofdata/
Take your career to the next level with The Privacy Pros Ultimate CIPPE Certification Programme: http://bit.ly/3ZmiJZz
Get exclusive insights, secret expert tips & actionable resources for a thriving privacy career that we only share with email subscribers ► https://newsletter.privacypros.academy/sign-up
Subscribe to the Privacy Pros Academy YouTube channel ► https://www.youtube.com/c/PrivacyPros
Join the Privacy Pros Academy private Facebook group for:
* Free LIVE training
* Free easy-peasy data privacy guides
* Data protection updates and so much more
Apply to join here whilst it's still free: https://www.facebook.com/groups/privacypro
We are talking with Max Schrems. Max is an Austrian lawyer, data protection activist, and founder of NOYB. We will mainly discuss GDPR enforcement, but will also cover general issues of the GDPR, including data transfers (to the US). This interview is part of the class on "Digital Law" taught at the University of Vienna. Links: https://noyb.eu/en GDPR Hub: https://gdprhub.eu/index.php?title=Welcome_to_GDPRhub https://en.wikipedia.org/wiki/Max_Schrems Ars Boni 221 (03/12/2021) with Max Schrems: https://www.youtube.com/watch?v=IjzqMhUSdBU
The International Association of Privacy Professionals' Europe Data Protection Congress attracted large crowds in Brussels last week, with plenty of meaty policy and enforcement issues on the table. Irish Data Protection Commission head Helen Dixon suggested that the EU General Data Protection Regulation's system of cross-border enforcement wasn't “built for speed,” while a trio of Meta Platforms services — Facebook, WhatsApp and Instagram — are being scrutinized by the bloc's national data-protection authorities over privacy concerns. Privacy advocate Max Schrems delighted the audience with some props – a black box and rubber stamp – to criticize a new court set up by the latest EU-US data transfer agreement, which he vowed to challenge. MLex's Brussels-based data-privacy team was able to beat the crowds, follow the ins and outs of the debates, and buttonhole key players at the margins of the conference. Our reporters recorded this podcast as the conference wound down.
Lawyer and privacy activist Max Schrems is launching a new case against Google over advertising in Gmail. According to Schrems, Google should have to ask for consent before placing ads in your inbox. Also in the Tech Update:
* Musk: stop whining about the expensive test version of Tesla's driver-assistance system
* Bizarre Facebook outage; turkey sandwich goes viral
### Apero
* servAtem goes all in on 7! -> https://www.lemonde.fr/blog/binaire/2022/04/01/servatem-mise-tout-sur-le-7/
### Database
* Making Real-Time Data Real: Change Data Capture for Astra DB -> https://thenewstack.io/making-real-time-data-real-change-data-capture-for-astra-db/ and https://github.com/datastax/cdc-apache-cassandra
* Dgraph: unite your data and get a highly scalable, performant, native GraphQL graph database in the cloud that delivers blazingly fast query speeds -> https://dgraph.io/
### ML/AI
* Deep Learning Is Hitting a Wall -> https://nautil.us/deep-learning-is-hitting-a-wall-14467/
### Data Science
* Validate Your pandas DataFrame with Pandera -> https://towardsdatascience.com/validate-your-pandas-dataframe-with-pandera-2995910e564
* Pandera 0.9.0: FastAPI integration, support for GeoPandas DataFrames -> https://github.com/pandera-dev/pandera/releases/tag/v0.9.0
### IoT / Time Series
* Hexa Data -> https://hexa-ai.fr/annonce-du-produit-hexa-data/
* Announcing Grafana Mimir, the most scalable open source TSDB in the world -> https://grafana.com/blog/2022/03/30/announcing-grafana-mimir/
### GDPR & co
* “Privacy Shield 2.0”? - First Reaction by Max Schrems -> https://noyb.eu/en/privacy-shield-20-first-reaction-max-schrems
### Dataviz / Data Exploration
* Like Streamlit, but fast: enabling low-latency data apps -> https://medium.com/@ramiromedina/like-streamlit-but-fast-enabling-low-latency-data-apps-948b95b098a2
### Sponsors
This publication is sponsored by [Affini-Tech](https://affini-tech.com/) and [CerenIT](https://www.cerenit.fr/). [CerenIT](https://www.cerenit.fr/) helps you design, industrialize, and automate your platforms, and also make your time-series data speak. Write to us at [contact@cerenit.fr](mailto:contact@cerenit.fr) and find us on [Time Series France](https://www.timeseriesfr.org/). Affini-Tech supports you in all your Cloud and Data projects, to imagine, experiment with, and execute your services!
([Affini-Tech](http://affini-tech.com), the [Datatask](https://datatask.io/) platform) to accelerate your Data and AI services. See the [Affini-Tech blog](https://affini-tech.com/blog/) and the [Datatask blog](https://datatask.io/blog/) to learn more. We're hiring! Come crunch data with us! Write to us at [recrutement@affini-tech.com](mailto:recrutement@affini-tech.com). The theme music was composed and produced by Maxence Lecointe.
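The Pandera links in the digest above are about declaring per-column checks and validating tabular data against them. As a minimal, library-free sketch of that idea (the `validate` function, schema shape, and column names are my own illustrative assumptions, not pandera's actual API):

```python
# Library-free sketch of schema validation in the spirit of the pandera
# links above: declare one check per column, run them over row dicts,
# and collect the failures. Names here are illustrative, not pandera's.
from typing import Any, Callable

Schema = dict[str, Callable[[Any], bool]]

def validate(rows: list[dict], schema: Schema) -> list[str]:
    """Return 'row i: column' labels for every value that fails its check."""
    errors = []
    for i, row in enumerate(rows):
        for col, check in schema.items():
            if col not in row or not check(row[col]):
                errors.append(f"row {i}: {col}")
    return errors

# Each column maps to a predicate, much like pandera's Column + Check pairing.
schema = {"temperature": lambda v: isinstance(v, float) and -50 <= v <= 60,
          "sensor_id": lambda v: isinstance(v, str)}
rows = [{"temperature": 21.5, "sensor_id": "a1"},
        {"temperature": 999.0, "sensor_id": "a2"}]
failures = validate(rows, schema)  # only row 1's temperature is out of range
```

The real library adds coercion, detailed error reports, and DataFrame integration, but the core design choice is the same: the schema is data, separate from the rows it checks.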
Ireland is learning this the hard way. The government had announced it could take in 100,000 refugees, but with 25,000 arrivals to date, the country's capacity is already nearly saturated. Within a few days, thousands of Ukrainians could therefore find themselves without housing. "There is no question of limiting arrivals," the government maintains. Every solution is being studied to find places, from requisitioning tourist accommodation to financial aid for host families. Sometimes in a rather haphazard way. In the rush, it is the Ukrainian refugees who must cope with the system's failings. Laura Touachanov reports from Dublin. Promoting volunteering and social action in society: these are values that do not necessarily come naturally. In Lithuania, schools have therefore chosen to promote them, even to make them compulsory. From middle school onwards, Lithuanian pupils must complete social service hours. These hours count toward the year's school record. They also complement the civic education course, which is only taught from high school onwards. They can even earn extra points toward a state-funded university place. In upcoming education reforms, volunteering will also be taken into account in the secondary-school leaving exam. Marielle Vitureau reports from Vilnius. Franceline Beretti's European eye: Twitter without moderation. On 24 April 2022, the American billionaire Elon Musk bought Twitter! A thoroughgoing libertarian, he has promised to restore free speech on the social network... To moderate is to censor, he explains. The announcement of the takeover naturally caused quite a stir in Europe. Franceline Beretti observed it for her "European eye." 
► Also listen: The controversial acquisition of Twitter by Elon Musk: "He will have to deal with regulation; he has no choice." No to the transfer of personal data. For now, it is only an agreement in principle between Brussels and Washington on the transfer of personal data, but it comes at such a moment that many digital players have spoken of blackmail: the United States says it is ready to supply us with gas provided we give them access to another resource, our personal data. Our digital identity, our behavior online, our geolocation: information worth gold, but which Brussels and the European Court of Justice have for years refused to hand over across the Atlantic, where privacy protection is deemed insufficient. In Austria, the best-known independent defender of internet users' rights was not mistaken about the dangers ahead. Céline Béal reports from Vienna. Note that Max Schrems's actions have already had notable effects: the ban in Austria and France on the audience-measurement tool Google Analytics, and the obligation for all French public enterprises to host their data with companies governed by European law. ► Also read: The law on personal data protection enters into force in the EU
In Episode 2 of Series 7 of The Rights Track, Martin Scheinin, British Academy Global Professor at the University of Oxford and a member of the Scientific Committee of the EU Fundamental Rights Agency, joins Todd to discuss whether the grammar of human rights law can cope with the multiple challenges of the digital realm. Transcript 00:00 Todd Landman Welcome to The Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In series seven, we're discussing human rights in the digital world. I'm Todd Landman, and in our second episode of the series, I'm delighted to be joined by Martin Scheinin. Martin Scheinin is the British Academy Global Professor at the University of Oxford, and a member of the scientific committee of the EU's Fundamental Rights Agency. He is currently exploring whether the grammar of human rights law can cope with multiple challenges of the digital realm. So Martin, welcome to this episode of The Rights Track, it's great to have you here. Well, you know, in our first episode of the series, we had a discussion with Ben Lucas, who's a data scientist. And the one thing that he said to me that has really stuck in my mind is that the regulatory framework has not been able to keep pace with technological change. And I wanted to use that as an opening framing: when we consider the international human rights framework, the international human rights regime, as a regulatory framework of sorts, set against this rapid expansion of technology in the digital space, this gap between regulation and technology is something that's pretty enduring. But I wonder what your early thoughts are about how human rights address this question of technological change? 1:14 Martin Scheinin Well, I think that human rights law is very much needed now. 
There may be a widely held perception that human rights law would be unable to cope, for instance, because so much is done by private actors that traditionally are not seen as bound by human rights law, or because the digital realm knows no borders between countries, and therefore it escapes the jurisdiction of any single state, often relied upon as a necessary precondition for the application of human rights law. I do believe that human rights law can cope. And I can see, with some satisfaction, how both the United Nations and Council of Europe human rights bodies and mechanisms have understood the importance of the challenge and are trying to address it. That doesn't mean that they would already have found a way, but at least there is a good faith effort, I would say. 02:13 Todd Landman And you know, human rights is often criticised as being state-centric, where the primary duty bearer is the state, and therefore private actors are not part of that frame. But what has changed since this early perception of a state-centric human rights framework, in your mind, that might address some of the gaps you've already raised with us? 2:31 Martin Scheinin Well, I'm currently running a four-year research project as British Academy Global Professor at the Bonavero Institute of Human Rights at the University of Oxford. And I framed the project as dealing with the grammar of human rights law and how it can address the challenges of the digital realm. And this framing signals a need to go back to the foundational concepts and structures and to see how new phenomena, new challenges, can be spoken about in the language of human rights law. And just to take one example, one of my first journal articles in this project, still in the process of being finalised, is about new EU-level and national laws about taking down internet material that is seen as terrorist online content. It's a whole wave of legislation in European and other countries. 
And there is an EU regulation which is directly applicable EU law in all EU member states. It's a striking example of the challenges. We are speaking of something intangible, ultimately: information, images, video, text in digitalized form, which is produced by one private actor in one country, published by someone else, another private actor in another country, and perhaps using as a medium a server located in a third country and operated by a third private party. Under this EU regulation, which as I said is valid law, a national authority in a fourth country, in any EU member state, can ask the authority of the third country where the server is located to give an order to take down the material. And that national authority has one hour to act, which usually means an order for the private server operator to take down the material. And then that private actor has one hour again to implement the request or the order. What we see here is a whole chain of private actors. 04:41 Todd Landman Yes, it's an incredible reach. And also, what happens if they don't comply within the hour? That's an extraordinarily short time period. 4:49 Martin Scheinin Well, there are, of course, sanctions and enforcement mechanisms, penalties, etc. But we see here a whole chain of private actors in the production and publishing of this information. And the challenges to human rights law are, firstly, the civil jurisdiction: the private actors may be located in different countries. And the order comes from at least two different states, where the server is located and then the initiator of the actual request. And neither one necessarily has any jurisdiction in relation to the person who actually is behind the message, who uploaded the so-called terrorist online content, and is subject to a measure that constrains freedom of expression. 
And that relates to jurisdiction. And then we have the question of the mechanism of redress: there's a vague clause saying there must be access to judicial remedies, but in what country? In what language? By whom? That is the question. So we risk putting people in a legal limbo. And here we need human rights law to navigate through this mess and to provide some kind of guidance as to what is permissible and where to draw the limits, both as to the substantive issue of what is terrorist online content, and also as to the procedures: what kinds of remedies will be required? 06:15 Todd Landman Yeah, and you know, I'm going to pick up on this freedom of expression, and maybe add freedom of speech alongside it, with the rather famous cases of former President Trump and now Representative Marjorie Taylor Greene having been banned from certain social media platforms. One was about misinformation with respect to COVID-19. The other was about misinformation more generally, with a view to mobilising supporters. But what's your take on this ability of private organisations like a Facebook or a Twitter to ban people for life for something that they've posted on their own platforms? 6:52 Martin Scheinin Yeah, the traditional view, of course, is that a medium, a newspaper, has no obligation to publish every opinion; they exercise their freedom of expression by choosing what message they want to carry. And as such, that belongs to freedom of expression. But then when we have the emergence of, let's say, Facebook or Twitter as something that can be called public goods, or common goods, we have to ask the question whether access itself is a freedom of expression issue, and how the mediation of content can be done so that the freedom of expression of the users is reflected. 
I see a certain asymmetry, in the sense that those holding a public office, if they have a Twitter account, shouldn't be allowed to block other voices who may be critical of them, so that critics couldn't respond to their messages. But can Twitter then block them, by banning them from using the service? I think we are in quite a challenging situation here. I do believe that some kind of extension of human rights obligations to private actors is necessary. It may happen through their own regimes of redress, such as Facebook is trying to build. And I'm optimistic, not about the model itself, but about the possibility of perfecting the model, so that we can have genuine human rights mechanisms also within private actors. Ultimately, there could be something like a World Human Rights Court with jurisdiction over those private actors who have accepted it as an appeal court in respect of their own internal mechanisms. 08:51 Todd Landman That's fascinating, Martin. You know, way back in 1998, I was on my way to Venice to teach on the European master's degree in human rights and democratisation. And I think I was in the air when the British authorities arrested Augusto Pinochet of Chile and put him under house arrest, which I believe lasted about 18 months, while the British Parliament debated the legality of his arrest and detention. And there was an appeal made, and often this case is cited as one in which the principle of universal jurisdiction applied, and it really advanced the argument for universal jurisdiction. I wonder to what degree what you're exploring and talking about here today is the application of the principle of universal jurisdiction for digital technologies. 9:36 Martin Scheinin I think there's a need for a distinction, in the sense that the Pinochet case was about enforcement jurisdiction, the powers of the state to do something over an individual who is primarily subject to another country's laws.
Whereas here we hold a state to account for something that happened outside its borders, because of the causal link to human rights harm elsewhere. And states have been very careful in not accepting extraterritorial jurisdiction in respect of human rights violations that materialise elsewhere, when they were not there themselves. The European Court of Human Rights has been struggling: we know the bombing of Belgrade, the Bankovic case, which the European Court of Human Rights threw out because it happened outside the legal space of the Council of Europe. Subsequently, it has taken the view that if you take possession of a person through arrest, then you are there with human rights obligations, which is of course a bit paradoxical: dropping bombs is not jurisdiction, but handcuffing is. We are trying to impose upon states a broader notion of jurisdiction, which is simply based on their causal links with what happens in the digital realm, for instance in curtailing freedom of expression by actors outside their own territory. It is necessary that we do this, because the internet knows no borders, and there are causal links which create the human rights harm we are addressing. And as we see in the EU terrorist online content regulation, there are multiple countries involved: one country asks for the takedown, another country implements it, the server can be located in a third country, and the actor himself or herself in a fourth country. There's a whole chain of action, but somebody must be held accountable. And that requires the extension of the notion of jurisdiction. 11:44 Todd Landman Okay, that distinction between the two makes perfect sense to me. And you know, the complexity and complication of that is very salient. I wonder, beyond expression and freedom of speech, etc., what other human rights are at stake in this particular agenda?
11:58 Martin Scheinin Well, I don't think people realise how broadly their human rights are actually at issue when dealing with new developments in the digital realm. When we say expression, of course, what easily follows is freedom of assembly and association. Their exercise has largely shifted to happen online, especially in the times of the pandemic. But we can also say that elections, democracy and public accountability have become phenomena that take place online. And this issue of democracy is especially important because of the vulnerability of electoral systems to malicious operators in cyberspace. So democracy is facilitated by moving online, but also subject to new kinds of risk. Our intimate sphere happens, to a large extent, online, even if the most important manifestations, of course, are still interpersonal. That brings up a whole range of privacy issues. Data protection is, of course, the human right which is most often referred to, simply because of the passing of lots of sensitive personal data, but the mother right, the right to privacy, is equally important. Here we get to issues such as surveillance. And if I may now mention another article I'm working on within my British Academy Global Professor project, I've been looking into the privacy-related developments during the pandemic. And of course, there are very important and very different developments over these 22 months. We have totalitarian control in countries like China, which totally erodes the privacy of the individual, and utilises and exploits health information for social control. It is true that digitalized control tools are in a sense rational, because humans are vectors of the virus. The epidemic is not simply a question of a virus that keeps replicating; it is human society which transforms the virus into an epidemic. In democratic countries, we see innovations such as digital contact tracing apps and COVID passports.
Both are potentially privacy intrusive, but here we see a certain kind of paradox: in order to function, they must be human rights compatible, or at least must have human rights compatible features, because they will only work if they are widely accepted. So here the issue of legitimacy comes to the defence of human rights. Simplistic technological solutions will not work, because they will not be widespread enough, whereas where privacy by design is inbuilt in the solutions, they will have much better success. We get into new paradoxes, however, because, for instance, when the contours of the epidemic change with new variants, like the Omicron variant we are speaking of today, the scope of, for instance, a COVID passport can be changed rapidly, overnight. So previously, having a COVID passport did not reveal your actual health information. It only told that this person is, at this moment, carrying a valid COVID passport. But it didn't tell whether they were vaccinated, whether they had had COVID, or whether they had been tested in the last 24 or 72 hours. Now, when the requirements are being made narrower, the COVID passport suddenly starts to reveal health information. It was sold under a different label, but now it is transforming into something, let's say, worse for human rights, in the sense that it breaks the promise of not revealing health information. 16:09 Todd Landman Yeah, and it really does hit the question of liberty versus public health, and involves this question of proportionate response, right? And so the human rights framework often talks about proportionality, as well as reasonableness, as well as a certain time-bound duration. So it's possible to rescind particular rights commitments for a particular period of time, if that taking away of rights is proportionate to the threat that one faces.
And of course, there are massive debates here in the UK about this: there's a very strong lobby that's advocating against the passports, another lobby that's advocating for them, and it is down almost to the individual user to give consent to those passports and move about planet Earth. But those who do not give their consent and want to move around planet Earth without demonstrating whatever status they have may in themselves be putting others at risk. But the probability of that risk is different, you know, because I could have all the passports I like and still be a contagion, and somebody could have none of the passports and not be a contagion. So there are these huge tensions throughout this whole debate. 17:16 Martin Scheinin You mentioned proportionality, and I think there's an important issue that I want to address, in the sense that many a human rights law scholar is happy with proportionality: ultimately, human rights would be a question of balancing between the competing public interest and the intrusion that results into an individual's human rights. But I belong to the, let's say, more fundamentalist school of scholars who say there are also bright lines; there's something called the core, or the inviolable essence, of every human right. So proportionality just does not justify every intrusion. And that's an important task also in the context of COVID: we must first define the ultimate limit up to which proportionality is taken into account. Applications of this approach include the two Max Schrems cases decided by the European Court of Justice, the highest EU court, where it did not conduct a proportionality assessment, because it said this is mass surveillance, which is prohibited as a bright line.
I endorse that approach: human rights are not only about balancing of competing values, they are also about protecting the inviolability of certain foundational principles, and those belong to what I call the grammar. 18:39 Todd Landman I see, so this word grammar then becomes very important for you. And I suppose it almost invites you to deconstruct the grammar, and then reconstruct the grammar. So what can you tell us about the grammar of human rights? I'm very interested in this concept. 18:54 Martin Scheinin Well, my British Academy project lists ten antinomies, or challenges, which are related to human rights in the digital realm but at the same time go back to these foundational principles, concepts and structures of human rights law. And what I mentioned about the inviolability of the essence versus proportionality is one. There's the question of the private versus the public actor as agent, and also as the duty bearer. There's the question of territorial versus extraterritorial action by states. And there's also the distinction between derogation and limitation. Limitations apply in normal times, and they must be proven proportionate, whereas derogations are exceptions in times of crisis. And I think COVID has provided us an opportunity to look once again into the question: are there different limits, a different definition of the inviolable core, for instance when a country is in a state of emergency? These are just examples. 20:04 Todd Landman Yeah, they're great examples. We interviewed an anti-terror expert, Tom Parker, in the last series, and he made reference to a very similar set of things that you just said there. And, you know, this notion of limits is really important. But he's also worried that there's a kind of state bureaucracy, a state apparatus, that has been developed for this particular public emergency.
And he's worried that that will become permanent, that it won't fade away, it won't be brought back down again after a period of duration, and that we are, in a sense, living with a new kind of surveillance that will not go away. What do you say to that? 20:40 Martin Scheinin I have worked on surveillance in an earlier EU-funded research programme called SURVEILLE, which developed a multi-dimensional and multidisciplinary methodology for assessing the utility of surveillance technologies versus their human rights harm. And we could show that the most intrusive methods of surveillance were often only marginally effective in actually producing the legitimate aim, or the benefit towards the legitimate aim. It was semi-empirical, largely based on hypothetical or modelling situations. But nevertheless, we had multidisciplinary teams working on it and could show that this technological hype around surveillance is unfounded: traditional methods of policing, footwork and human intelligence, fare much better under a proportionality approach that assesses the human rights harm in relation to the actual benefit obtained towards national security. There are many reasons why surveillance technology and other digital innovations tend to perpetuate themselves, and we can speak of the surveillance-industrial complex. And I'm also sure that there are issues of mission creep and function creep, and many of the changes we see in the realm of the treatment of sensitive health data will remain after COVID-19 is over. So something is lost, or at least there's a risk that something is lost, every time a special situation justifies resorting to exceptional measures. 22:31 Todd Landman And just in closing, I want to ask you a final question, which is: you spend your time as Global Professor, you engage with academics at Oxford and the rest of the world in this area, and you come up with a new grammar for human rights - what next? What's the goal here?
Is it to then advocate to the United Nations system and the European system to change laws, regulations and practices? Do you think you could have that kind of leverage to make the changes required to address the very deep issues that you've raised with us today? 22:59 Martin Scheinin Well, I did mention the SURVEILLE project where I was involved. That gives a good example of what an eternal optimist who is a serious academic can achieve. So we developed this methodology for the multidisciplinary assessment of surveillance technologies. And we delivered our reports, and on 29 October 2015 the European Parliament adopted a resolution in which it commended the methodology developed in the SURVEILLE project and recommended it for use. Two weeks later came the Bataclan attack, one of the most dreadful terrorist attacks in Europe, and everything was forgotten. Nothing came out of it. And that's the pendulum, especially in issues of terrorism: there are all kinds of good efforts to develop constraints and safeguards and to make proposals about human rights compatibility, but when panic strikes, it all goes down the drain. I am an eternal optimist, and I think that human rights law has to engage, has to evolve, and that it will be able to deliver outcomes that both make a meaningful difference as to the facts on the ground and, at the same time, are able to correspond to the intuitions of ordinary people with common sense. There is a certain legitimacy requirement: what we deliver must be backed by the people as acceptable. And I think we can cope with that. But we cannot cope with irrational panic. That's the big problem in this work. 24:37 Todd Landman Amazing. Yeah, I share your optimism, I'm afraid.
And you know, with the incremental gains, you do face setbacks from these external threats, panics, as you call them, and the perception of the disruption that's coming. But at the same time, you hold true to human rights and the philosophies that sit behind human rights. And then there's also this point about legitimacy: I think, you know, if we go back to Max Weber and his legal-rational sources of authority, legitimacy comes from that acceptance, that people think this is reasonable, proportional and something we can live with. But as you say, if there's overreach, mission creep, panic and other elements of state action, and non-state action I might add, then the acceptability and legitimacy come into question. So it's just been unbelievable talking to you and hearing your insights about this, and the direction that you've taken our conversation today. So much to think about. You're in the middle of the project, and we look forward to the results that you get at the end of it, and to seeing that output and the conversations that will come from what you discover. But for now, I just want to thank you for appearing on this episode of The Rights Track. 25:53 Chris Garrington Thanks for listening to this episode of The Rights Track, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts, with funding from 3DI. You can find detailed show notes on the website at www.RightsTrack.org. And don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.