Podcasts about AI Now

  • Podcasts: 28
  • Episodes: 30
  • Average duration: 38m
  • Episode frequency: infrequent
  • Latest episode: Dec 4, 2024

POPULARITY

[Popularity chart: 2017-2024]


Best podcasts about AI Now

Latest podcast episodes about AI Now

DEEP TALKS [CZE]
#BONUS: Petr Ludwig – SUPERSCHOPNOSTI PRO BUDOUCNOST: Jak AI změní trh práce a jak se na to připravit?

DEEP TALKS [CZE]

Dec 4, 2024 · 27:56


What will be the key skills and competencies for success in the coming AI era? What strategies will help you stay relevant in the job market? And how will artificial intelligence affect how we perceive the meaning of work, and of life in general? Dear friends, from time to time I take one of my talks and make it freely available. I have decided to publish my opening talk from this year's AI-NOW conference in this way. Given how badly the Czech Republic is missing the AI train, and given that our country (unlike, say, my dear Singapore) is not prepared for the AI transformation, I see real value in doing this outreach... If the AI train leaves without us, I don't think we will ever have a chance to catch up... To put it another way: the AI transformation is absolutely crucial for the future competitiveness of the Czech Republic and of each of us as individuals. Among other things, the talk summarizes the most important developments in the AI world this year, what awaits us in the coming months, and the main risks associated with AI, but above all how we, as individuals and organizations, can prepare for the change. I will also be very glad if you not only watch the talk but help me spread its message. Thank you very much. ❤️ LINKS: - Recording of the full AI-NOW 2024 conference here (20% off with code PETR20): https://www.edumame.cz/p/ainow-2024

White House Chronicle
"AI Now, AI Next, AI in the Wild"

White House Chronicle

Oct 4, 2024 · 27:55


Adam Russell, head of AI research at USC's Information Sciences Institute, has an engrossing discussion with Adam Clayton Powell III, who guest hosts the episode, about the development of AI -- and what Russell terms "AI Now, AI Next, AI in the Wild." 

Clearer Thinking with Spencer Greenberg
AI apocalypticism vs. AI optimism (with Adam Russell)

Clearer Thinking with Spencer Greenberg

Aug 1, 2024 · 64:48


Read the full transcript here. What is "apocaloptimism"? Is there a middle ground between apocalypticism and optimism? What are the various camps in the AI safety and ethics debates? What's the difference between "working on AI safety" and "building safe AIs"? Can our social and technological coordination problems be solved only by AI? What is "qualintative" research? What are some social science concepts that can aid in the development of safe and ethical AI? What should we do with things that don't fall neatly into our categories? How might we benefit by shifting our focus from individual intelligence to collective intelligence? What is cognitive diversity? What are "AI Now", "AI Next", and "AI in the Wild"? Adam Russell is the Director of the AI Division at the University of Southern California's Information Sciences Institute (ISI). Prior to ISI, Adam was the Chief Scientist at the University of Maryland's Applied Research Laboratory for Intelligence and Security (ARLIS) and an adjunct professor in the University of Maryland's Department of Psychology. He was the Principal Investigator for standing up the INFER (Integrated Forecasting and Estimates of Risk) forecasting platform. Adam's almost 20-year career in applied research and national security has included serving as a Program Manager at the Intelligence Advanced Research Projects Activity (IARPA), then as a Program Manager at the Defense Advanced Research Projects Agency (DARPA), where he was known as the DARPAnthropologist; in May 2022 he was appointed Acting Deputy Director to help stand up the Advanced Research Projects Agency for Health (ARPA-H). Adam has a BA in cultural anthropology from Duke University and a D.Phil. in social anthropology from Oxford University, where he was a Rhodes Scholar. He has also represented the United States in rugby at the international level, having played for the US national men's rugby team (the Eagles).
Staff: Spencer Greenberg — Host / Director; Josh Castle — Producer; Ryan Kessler — Audio Engineer; Uri Bram — Factotum; WeAmplify — Transcriptionists.
Music: Broke for Free; Josh Woodward; Lee Rosevere; Quiet Music for Tiny Robots; wowamusic; zapsplat.com.
Affiliates: Clearer Thinking; GuidedTrack; Mind Ease; Positly; UpLift.

IT Masters Update
Update 217: El diagnóstico de UNESCO sobre AI en México

IT Masters Update

Jul 8, 2024 · 12:10


UNESCO calls on Mexico to develop an AI strategy | Now, Invex's neobank, launches an easy-access credit card | AIMX and INAI sign a collaboration agreement | Spanish prosecutors investigate Meta over the use of data to train its AI | So said Carlos Marmolejo, CEO of Finsus | Still up in the air whether X will be fined by the EU | Grupo Elektra is one of the innovation stories | Eric Moguel, Data, Digital & Information Technology Director at Novartis, gives us the IT Masters Insight

Big Tech
The Real World Cost of AI

Big Tech

Jun 18, 2024 · 47:17


It seems like the loudest voices in AI often fall into one of two groups. There are the boomers – the techno-optimists – who think that AI is going to bring us into an era of untold prosperity. And then there are the doomers, who think there's a good chance AI is going to lead to the end of humanity as we know it. While these two camps are, in many ways, completely at odds with one another, they do share one thing in common: they both buy into the hype of artificial intelligence. But when you dig deeper into these systems, it becomes apparent that both of these visions – the utopian one and the doomy one – are based on some pretty tenuous assumptions. Kate Crawford has been trying to understand how AI systems are built for more than a decade. She's the co-founder of the AI Now Institute, a leading AI researcher at Microsoft, and the author of Atlas of AI: Power, Politics and the Planetary Cost of AI. Crawford was studying AI long before this most recent hype cycle. So I wanted to have her on the show to explain how AI really works. Because even though it can seem like magic, AI actually requires huge amounts of data, cheap labour and energy in order to function. So even if AI doesn't lead to utopia, or take over the world, it is transforming the planet – by depleting its natural resources, exploiting workers, and sucking up our personal data. And that's something we need to be paying attention to.
Mentioned:
- "ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine" by Joseph Weizenbaum
- "Microsoft, OpenAI plan $100 billion data-center project, media report says," Reuters
- "Meta 'discussed buying publisher Simon & Schuster to train AI'" by Ella Creamer
- "Google pauses Gemini AI image generation of people after racial 'inaccuracies'" by Kelvin Chan and Matt O'Brien
- "OpenAI and Apple announce partnership," OpenAI
- Fairwork
- "New Oxford Report Sheds Light on Labour Malpractices in the Remote Work and AI Booms" by Fairwork
- "The Work of Copyright Law in the Age of Generative AI" by Kate Crawford and Jason Schultz
- "Generative AI's environmental costs are soaring – and mostly secret" by Kate Crawford
- "Artificial intelligence guzzles billions of liters of water" by Manuel G. Pascual
- "S.3732 – Artificial Intelligence Environmental Impacts Act of 2024"
- "Assessment of lithium criticality in the global energy transition and addressing policy gaps in transportation" by Peter Greim, A. A. Solomon, and Christian Breyer
- "Calculating Empires" by Kate Crawford and Vladan Joler
Further Reading:
- "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence" by Kate Crawford
- "Excavating AI" by Kate Crawford and Trevor Paglen
- "Understanding the work of dataset creators" from Knowing Machines
- "Should We Treat Data as Labor? Moving beyond 'Free'" by I. Arrieta-Ibarra et al.

The Book Hackers Show
Self-publishers: Are You Still Creating Characters the Old Way? Upgrade with AI Now!

The Book Hackers Show

May 23, 2024 · 5:59


S3E33 Self-publishers: Are You Still Creating Characters the Old Way? Upgrade with AI Now! Description: In today's episode, Cindy and Tammie dive into the transformative role of AI in character development for authors. They discuss how AI can enhance the creative process by offering diverse character perspectives, helping writers overcome writer's block, and refining character depth with psychological insights. Whether you're writing your first novel or looking to add depth to your characters in an ongoing series, this episode will equip you with practical tools and insights on integrating AI effectively into your character creation process. Links: ToolsADay: https://toolsaday.com/writing/character-generator WriterHand AI Character Generator: https://writerhand.com/tools/story-character-generator NovelAI: https://novelai.net/

AI Lawyer Talking Tech
Navigating the Evolving Landscape of AI and Law

AI Lawyer Talking Tech

Apr 2, 2024 · 21:24


Welcome to today's episode of AI Lawyer Talking Tech! In a world where artificial intelligence is rapidly transforming various industries, the legal sector finds itself at a critical juncture. From groundbreaking legislation and landmark lawsuits to the integration of AI in legal practice, the intersection of law and technology has never been more dynamic. Today, we'll explore the latest developments shaping this evolving landscape, including the EU's AI Act, Tennessee's ELVIS Act, and the growing importance of data privacy in the age of AI. We'll also delve into the challenges and opportunities faced by legal professionals as they navigate this uncharted territory, from conducting fundamental rights impact assessments to leveraging AI for enhanced efficiency and client service. So, join us as we unravel the complexities of AI and law, and discover how this powerful technology is redefining the future of the legal industry.
- How to Become an Immigration Lawyer (Tech Edvocate, 02 Apr 2024)
- Lawmatics' New Custom Dashboards Let Your Law Firm Track and Visualize The Data That Matters To You (LawSites, 02 Apr 2024)
- What Makes A Good In-House Lawyer Great? (Above The Law, 02 Apr 2024)
- Artificial Intelligence - Who (The Silicon Review, 02 Apr 2024)
- Alan Raul, Founder of Sidley Austin's Privacy and Cybersecurity Law Practice Elected FPF's New Board President (Future of Privacy Forum, 02 Apr 2024)
- Exploring the Intersection of Generative AI and Cybersecurity at ILTA EVOLVE – Ken Jones and Josh Smith (3 Geeks and a Law Blog, 02 Apr 2024)
- The Anti-Innovation Supreme Court: Major Questions, Delegation, Chevron and More by Jack Michael Beermann :: SSRN (OTHERWISE, 02 Apr 2024)
- AIPPI UK Event Report: Roundup of 2023's Patent Cases (The IPKat, 02 Apr 2024)
- Can Self-Represented Litigants Access Justice? NSRLP's New Intake Report (Slaw, 02 Apr 2024)
- How to bridge the gap between the IT and legal staffs to better combat insider risk (SC Magazine US, 01 Apr 2024)
- Understanding The Increased Complexity Of The Data Privacy Landscape (Forbes.com, 02 Apr 2024)
- Why DOJ's Antitrust Case Against Apple Falls Flat (American Enterprise Institute, 02 Apr 2024)
- The Law vs AI: Now the legal battles are starting to intensify (RedShark News, 02 Apr 2024)
- Google Agrees to Delete Users' 'Incognito' Browsing Data in Lawsuit Settlement (Time, 02 Apr 2024)
- Gregory Ziegler – Attorney Making a Powerful Impact on Engineering and Architecture Law (FinanceDigest.com, 01 Apr 2024)
- The Legal 500 EMEA 2024 Recognizes Cooley (Cooley, 01 Apr 2024)
- Alive and Kicking: Washington State's My Health My Data Act Goes into Effect Today (EPIC – Electronic Privacy Information Center, 01 Apr 2024)
- EU Data Act (part 8): smart contracts (Hogan Lovells, 02 Apr 2024)
- Divergent Paths on Regulating Artificial Intelligence (Littler, 01 Apr 2024)
- Yoshikawa Interviewed on Tennessee's New AI Law, ELVIS Act (Adams & Reese LLP, 02 Apr 2024)
- Colorado Close to First-in-the-Nation Neuro-Privacy Law Designed to Protect Biological and Neural Data (Benesch, 02 Apr 2024)
- Artificial intelligence in the insurance sector: fundamental right impact assessments (Hogan Lovells, 01 Apr 2024)
- First-of-its-Kind AI Law Addresses Deep Fakes and Voice Clones (Holland & Knight, 01 Apr 2024)
- REMINDER: Washington's "My Health My Data" Act Now In Effect (Benesch, 01 Apr 2024)
- NTIA issues report on recommended federal government actions to promote accountable AI (Hogan Lovells, 01 Apr 2024)

Interviews by Brainard Carey
Heather Dewey-Hagborg

Interviews by Brainard Carey

Nov 11, 2023 · 23:58


Heather Dewey-Hagborg is an American artist and bio-hacker best known for the project Stranger Visions. (Photo: Ana Brígida for The New York Times.) Dr. Heather Dewey-Hagborg is a transdisciplinary artist and educator who is interested in art as research and critical practice. Her controversial biopolitical art practice includes the project Stranger Visions, in which she created portrait sculptures from analyses of genetic material (such as hair, cigarette butts, or chewed-up gum) collected in public places. Heather has shown work internationally at events and venues including the World Economic Forum, the Daejeon Biennale, the Shenzhen Urbanism and Architecture Biennale, the Van Abbemuseum, Transmediale, and PS1 MOMA. Her work is held in the public collections of the Centre Pompidou, the Victoria and Albert Museum, the Wellcome Collection, and the New York Historical Society, among others, and has been widely discussed in the media, from the New York Times and the BBC to Art Forum and Wired. Heather has a PhD in Electronic Arts from Rensselaer Polytechnic Institute. She is a visiting assistant professor of Interactive Media at NYU Abu Dhabi, an artist fellow at AI Now, an Artist-in-Residence at the Exploratorium, and an affiliate of Data & Society. Hybrid (Trailer) from Heather Dewey-Hagborg on Vimeo. Installation view, Heather Dewey-Hagborg, Hybrid: an Interspecies Opera. Courtesy of the artist and Fridman Gallery. Still from Heather Dewey-Hagborg, Hybrid: an Interspecies Opera. Courtesy of the artist and Fridman Gallery.

Red Sky Fuel For Thought
Getting Ahead on Generative AI: Ep. 40 of Red Sky Fuel for Thought Podcast

Red Sky Fuel For Thought

Sep 27, 2023 · 62:19


Getting Ahead on Generative AI: Ep. 40 of Red Sky Fuel for Thought Podcast
What You'll Learn in This Episode:
· How marketers and PR professionals can use generative AI to make our lives easier
· Where we should not use generative AI from a legal or ethical perspective
· How to strike the balance between being better with AI and being better than AI
Now that the dust is settling on the AI maelstrom that's raged for the past few months, our September episode looks at what we've learned about generative AI in particular: the good, the bad and the uncertain. Host Lara Graulich examines how artificial intelligence, or AI, has become a buzzword that elicits many emotions: wonder, excitement, confusion and anxiety, among others. As she says, "One thing is certain: This technology is here to stay, and it's important for us to understand it as marketing and public relations professionals." To help you make out the full picture of generative AI today, we've divided this episode into two parts. First, Umbar Shakir, a partner and client director at Gate One, gives us a whip-smart introduction to generative AI, what it's capable of and what its limitations are. In part two, we dig into the specific implications that generative AI has in the PR and marketing space. For this roundtable, we're chatting with Rachael Sansom, CEO of Havas Red U.K., and Myrna Van Pelt, head of technology and business for Havas Red Australia. The episode begins with Umbar (pronounced "Amber"), who differentiates traditional AI from generative AI. Traditional AI, she says, is the ability of machines to mimic human intelligence to perform tasks and automate workflows. This is AI as we've known it; it's what's been around for decades, and it's something technology consultants have been implementing for clients for a long time. However, when large language models began arriving over the past five years or so, generative AI stole the spotlight. With generative AI, trillions of bits of crowdsourced data can be used to synthesize new data. Does this new capability represent a threat to human creativity or to job security? No, says Umbar: "As marketers, your whole value add to customers is differentiation and personalization. Even though generative AI can generate content for us, you need the human brain to give the differentiation. And then you need the human heart and emotion. In all the marketing campaigns I've been involved in, an emotive response is really important to memorability. That comes from heart, and a lot of our emotional intelligence comes from our values, beliefs and moral judgments. At the moment, you can't mathematically program that in. What we need to remember is that we've built this tool, and we can interact with it; it might be faster than us, and it might be able to process more data than we can at any point in time, but it doesn't replace our humanity." Instead, AI can create space for those of us in this industry to get back to our craft and to doing some of the things that drew us here in the first place — to creating human connection, for example — rather than the monotony of data analysis or transcription. Plus, with generative AI, we're going to get richer insights much more quickly than we would on our own. When it comes to humans' job security, Umbar says, "I've got a slightly provocative view on things. When people worry that generative AI will cause people to lose jobs, I say there are some jobs out there that humans should never have been doing.
We have taken really tedious work and turned it into careers for people. We've normalized tedium. How do we unshackle ourselves from some of that tedium? How do we then free up capacity to solve for bigger and better problems for society? How do you use this technology to replace what humans have been doing that fundamentally doesn't tap into our humanity or our values or our creativity?" Umbar's segment ends with her answering these questions, before Lara then welcomes Rachael and Myrna to the podcast. She first asks them what excites them most about generative AI and the capabilities it brings to our clients and which tools they've most enjoyed using. "Gen AI cannot create ideas, but what it can do is take great ideas, by humans, and push them faster and further and help iterate them more brilliantly," says Rachael. In marketing and communications, Myrna says AI also has a distinct role to play in helping us in the area of rapid decision making. "As humans, we have finite ability to scan volumes of information," she says. "However, AI does this at a fraction of the time. So, for example, when it comes to understanding audience preferences, or demographic nuances, AI can help sort through this massive volume of content, identifying patterns and trends, anticipating future scenarios, and then categorizing the data. We then have an absolute smorgasbord of useful pre-categorized content we can use to inform campaigns, particularly so in industries where a rapid pivot of a campaign might make the difference between success and failure — particularly so in political campaigns." Among Myrna's go-to AI tools, she highlights Brandwatch, which provides media monitoring and competitor tracking; TLDR, which summarizes high-tech articles; and DeepL Translate, which can accurately translate content in dozens of different languages. Next, they talk about the inherent risks of using AI, including where we should and shouldn't use it from an ethical and legal perspective — e.g., is a press release fair game? Thank you to each of our guests for weighing in on the transformative power of AI. We hope you'll give "Red Sky Fuel for Thought" a listen, and subscribe to the show on iTunes, Spotify or your favorite podcasting app. Don't forget to rate and review to help more people find us!
Also mentioned on this episode:
· ChatGPT
· Brandwatch
· TLDR
· DeepL Translate
Follow Red Havas for a daily dose of comms news:
· Twitter
· Facebook
· Instagram
· LinkedIn
Subscribe: Don't forget to subscribe to the show using your favorite podcasting app.
· iTunes
· Spotify
What did you love? What would you like to hear about next? Remember to rate and review today's show; we'd love to hear from you!

Background Briefing with Ian Masters
June 4, 2023 - James Galbraith | Lauren Kahn | Antony Loewenstein

Background Briefing with Ian Masters

Jun 4, 2023 · 63:52


A Phony Crisis Averted Now a Celebration of Compromise and Bipartisanship | The Next War of Drones and AI Now in Ukraine | Exporting the Technology of Repression and Australia's Trial of the Century backgroundbriefing.org/donate twitter.com/ianmastersmedia facebook.com/ianmastersmedia

TerraSpaces
AI Now: Agent Exploration with Community Developer CryptoAI

TerraSpaces

Jun 2, 2023 · 40:52


Today on the Ether we have Atari_buzzk1LL hosting Fetch.ai spaces AI Now with Fetch.ai community developer Crypto.AI. Recorded on June 2nd 2023. Make sure to check out the two newest tracks from Finn and the RAC FM gang over at ImaginetheSmell.org! The majority of the music at the end of these spaces can be found streaming over on Spotify, and the rest of the streaming platforms. Check out Project Survival, Virus Diaries, and Plan B wherever you get your music. Thank you to everyone in the community who supports TerraSpaces.

Values & Politics
The Folly of AI

Values & Politics

May 28, 2023 · 14:26


What additional rules/regulations do we need in place for AI NOW? --- Send in a voice message: https://podcasters.spotify.com/pod/show/ancienttexan/message Support this podcast: https://podcasters.spotify.com/pod/show/ancienttexan/support

The Institute of Black Imagination.
E39. Timnit Gebru: Asylum From A.I.

The Institute of Black Imagination.

Mar 13, 2022 · 93:00


Show Notes: Timnit Gebru is an artificial intelligence researcher. Timnit advocates for fair and just use of the technology we use every day. A former employee of Google, Timnit consistently calls in and calls out a Big Tech industry that leverages power, capital, and bias in favor of, well, themselves and their wallets. From language to surveillance, Timnit knows the potential harms of artificial intelligence know no bounds. In a time when we're at war, today's episode asks for whom we are fighting, whose wars are worthy of discussion, and what harms are so deeply ingrained within our consciousness that we ignore our own civilian casualties. As the world witnesses the 16th month of a war in Ethiopia, Timnit's journey reminds us of the refugee, the warrior, and the heroes we often dismiss and determine unworthy of home. This conversation was recorded on Jan 27, 2022.
Learn more about this topic:
- Ruha Benjamin: https://www.ruhabenjamin.com/
- Simone Browne, Dark Matters: On the Surveillance of Blackness: https://www.dukeupress.edu/dark-matters
- Coded Bias: https://www.netflix.com/title/81328723
- Tawana Petty: https://pacscenter.stanford.edu/person/tawana-petty/#:~:text=She%20is%20the%20National%20Organizing,and%20shared%20by%20government%20and
- Support regulations to safeguard: https://www.politico.com/news/2021/06/02/senate-democrats-google-racial-equity-491605
- Mar Hicks (tech historian) wrote an op-ed for Wired: https://www.wired.com/story/facebook-ford-fall-from-grace/
Who to follow?
- Algorithmic Justice League: https://www.ajl.org/
- Data & Society: https://datasociety.com/
- Data for Black Lives: https://d4bl.org/
- AI Now: https://ainowinstitute.org/
- DAIR: https://www.dair-institute.org/
Other things we mention:
- contentauthenticity.org: https://contentauthenticity.org/
- The Fairness Doctrine: https://www.britannica.com/topic/Fairness-Doctrine
- Fairness Doctrine Washington Post article: https://www.washingtonpost.com/outlook/2021/02/04/fairness-doctrine-wont-solve-our-problems-it-can-foster-needed-debate/
Host: Dario Calmese, https://www.instagram.com/dario.studio/

AnexiPod – Anexinet
Episode 203: Skeletor v. Dr. Doom

AnexiPod – Anexinet

Apr 7, 2021 · 46:21


Show Notes
Buffer Overflow: Skeletor v. Dr. Doom, Episode 203: Google v. Oracle, Office 365 Outages, and VMware Cloud Part Trois
Hosts:
- Ned Bellavance: https://www.linkedin.com/in/ned-bellavance-ba68a52 @Ned1313
- Chris Hayner, Delivery Manager: https://www.linkedin.com/in/chrismhayner
- Kimberly DeFilippi, Project Manager, Business Analyst: https://www.linkedin.com/in/kimberly-defilippi-77b3986/
- Brenda Heisler, ISG Operations: https://www.linkedin.com/in/brenda-heisler-b5431989/
Longer Topics:
- Google v. Oracle ends. No one wins. Supreme Court decision: https://www.supremecourt.gov/opinions/20pdf/18-956_d18f.pdf
- The EFF is crowing about the victory
Lightning Round:
- Facebook takes pole position in the 'race to lose the most user data'
- Office 365 has its latest outage in a month
- Website I don't understand in talks to acquire website I don't understand
- VMware launches VMware Cloud
- The age-old lesson to double check your sources comes to haunt AI
- Now we'll never know how babby is formed
Music Credits: Intro: Jason Shaw - Tech Talk; Outro: Jason Shaw - Feels Good 2 B

Buffer Overflow – Anexinet
Episode 203: Skeletor v. Dr. Doom

Buffer Overflow – Anexinet

Apr 7, 2021 · 46:21


Show Notes
Buffer Overflow: Skeletor v. Dr. Doom, Episode 203: Google v. Oracle, Office 365 Outages, and VMware Cloud Part Trois
Hosts:
- Ned Bellavance: https://www.linkedin.com/in/ned-bellavance-ba68a52 @Ned1313
- Chris Hayner, Delivery Manager: https://www.linkedin.com/in/chrismhayner
- Kimberly DeFilippi, Project Manager, Business Analyst: https://www.linkedin.com/in/kimberly-defilippi-77b3986/
- Brenda Heisler, ISG Operations: https://www.linkedin.com/in/brenda-heisler-b5431989/
Longer Topics:
- Google v. Oracle ends. No one wins. Supreme Court decision: https://www.supremecourt.gov/opinions/20pdf/18-956_d18f.pdf
- The EFF is crowing about the victory
Lightning Round:
- Facebook takes pole position in the 'race to lose the most user data'
- Office 365 has its latest outage in a month
- Website I don't understand in talks to acquire website I don't understand
- VMware launches VMware Cloud
- The age-old lesson to double check your sources comes to haunt AI
- Now we'll never know how babby is formed
Music Credits: Intro: Jason Shaw - Tech Talk; Outro: Jason Shaw - Feels Good 2 B

AI with AI
How Machines Judge Humans

AI with AI

Jan 29, 2021 · 39:51


Listener Survey In COVID-related AI news, Andy and Dave discuss research that uses NLP to predict mutations in a virus that would allow it to avoid detection by antibodies. In regular AI news, the US Food and Drug Administration publishes an Action Plan for AI and ML, with more to follow. The White House launches the National AI Initiative Office, which will work with the private sector and academia on AI initiatives. The AI Now institute has launched an effort for “A New AI Lexicon,” in which it invites contributors to provide perspectives and narratives for describing new vocabulary that adequately reflects demands and concerns related to AI technology. And the Federal Reserve is asking for comments about the use of AI/ML in banking, as it considers increasing oversight of the technologies. In research, Michal Kosinski at Stanford University publishes in Nature Reports how facial recognition technology can identify a person’s political orientation (to 72% accuracy); Andy and Dave spend some extra time discussing the challenges and implications behind such applications of facial recognition technology. Researchers at Columbia University demonstrate the ability of an AI observer to “visualize the future plans” of an actor, solely through visual information. The report of the week comes from CNAS on AI and International Stability: Risks and Confidence-Building Measures. The book of the week examines How Humans Judge Machines. And finally, a YouTube documentary from Noclip examines how machine learning plays out in Microsoft’s Flight Simulator. Click here to visit our website and explore the links mentioned in the episode. 

AI with AI
CONSORTing with the GPT

AI with AI

Sep 25, 2020 · 36:51


In COVID-related AI news, another concerning report, this time in Nature Medicine, found “serious concerns” with 20,000 studies on AI systems in clinical trials, with many reporting only the best-case scenarios; in response, an international consortium has developed CONSORT-AI, reporting guidelines for clinical trials involving AI. In Nature, an open dataset provides a collection and overview of governmental interventions in response to COVID-19. In regular AI news, the DoD wraps up its 2020 AI Symposium. And the White House nominates USMC Maj. Gen. Groen to lead the JAIC. The latest report from the NIST shows that facial recognition technology still struggles to identify people of color. Portland, Oregon passes the toughest ban on facial recognition technology in the US. And The Guardian uses GPT-3 to generate some hype. In research, OpenAI demonstrates the ability to apply transformer-based language models to the task of automated theorem proving. Research from Berkeley, Columbia, and Chicago proposes a new test to measure a text model’s multitask accuracy, with 16,000 multiple choice questions across 57 task areas. A report from AI Now takes a look at regulating biometrics, which includes tech such as facial recognition. And the 37th International Conference on Machine Learning makes its proceedings available online. Click here to visit our website and explore the links mentioned in the episode.   

American Conservative University
The Social Dilemma- the rise of social media and the damage it has caused to society

American Conservative University

Sep 24, 2020 · 92:14


The Social Dilemma is a 2020 American docudrama. The dilemma: Never before have a handful of tech designers had such control over the way billions of us think, act, and live our lives. Discover what's hiding on the other side of your screen: We tweet, we like, and we share— but what are the consequences of our growing dependence on social media? This documentary-drama hybrid reveals how social media is reprogramming civilization, with tech experts sounding the alarm on their own creations. The Social Dilemma features the voices of technologists, researchers and activists working to align technology with the interests of humanity. The film explores the rise of social media and the damage it has caused to society, focusing on its exploitation of its users for financial gain through surveillance capitalism and data mining, how its design is meant to nurture an addiction, its use in politics, its impact on mental health (including the mental health of adolescents and rising teen suicide rates), and its role in spreading conspiracy theories such as Pizzagate and aiding groups such as flat-earthers. The film features interviews with former Google design ethicist and Center for Humane Technology co-founder Tristan Harris, his fellow Center for Humane Technology co-founder Aza Raskin, Asana co-founder and Facebook's like button co-creator Justin Rosenstein, Harvard University professor Shoshana Zuboff, former Pinterest president Tim Kendall, AI Now director of policy research Rashida Richardson, Yonder director of research Renee DiResta, Stanford University Addiction Medicine Fellowship program director Anna Lembke, and virtual reality pioneer Jaron Lanier. The interviews are cut together with dramatizations starring actors Skyler Gisondo, Kara Hayward, and Vincent Kartheiser, which tell the story of a teenager's social media addiction.

Interdependence
Interdependence 9: Kate Crawford (AI Now)

Interdependence

Jul 14, 2020 · 47:06


In this episode we speak with Kate Crawford, co-founder of the AI Now Institute and a professor who has spent the last decade studying the political implications of data systems, machine learning and artificial intelligence. We discuss the anatomy of AI systems and the full ecosystem of human and material resources behind an Amazon Echo, the need to develop an understanding of the exponential accumulation of power under platform capitalism, the use of AI systems in predictive policing and other controversial areas, and Kate's parallel experience as an electronic musician. This episode ends rather abruptly as we got lost in conversation and Kate had to run, so forgive us for the atypical ending!
Relevant Kate links:
- AI Now Institute: https://ainowinstitute.org/
- Anatomy of AI: https://anatomyof.ai/
Links we raised:
- Stance Features of Youtube Celebrities by Katri Mustonen: https://jyx.jyu.fi/bitstream/handle/123456789/56988/1/URN%3ANBN%3Afi%3Ajyu-201802011411.pdf

Danny In The Valley
AI Now's Rashida Richardson: "Free-range facial recognition"

Danny In The Valley

Jun 14, 2020 · 36:13


The Sunday Times’ tech correspondent Danny Fortson brings on Rashida Richardson, head of policy research at AI Now, to talk about tech’s pang of conscience about facial recognition technology (3:40), predictive policing (5:20), the problem with the technology (8:15), how pervasive it is (11:30), the laws (13:40), the visceral effect of this technology (18:00), how AI is seeping into law enforcement (20:25), the data problem (25:20), whether this moment will lead to a crackdown (27:05), if a ban is realistic (29:25), and the race to the bottom (33:45). Support this show http://supporter.acast.com/dannyinthevalley. See acast.com/privacy for privacy and opt-out information.

The Georgian Impact Podcast | AI, ML & More
Episode 119: You Need a Data Strategy with Immuta's Dan Wu.

The Georgian Impact Podcast | AI, ML & More

May 8, 2020 · 27:09


A solid data strategy can prevent your company from running aground and turning a huge opportunity into a horrible mess. Dan Wu is our guest on this episode of the Georgian Impact Podcast. Dan is a superstar commentator in the privacy and data governance space. He's leveraging his Ph.D. in Sociology and Social Policy and his law degree to help protect people and their data. Dan believes that the best way to do that is through data strategies formed by cross-functional teams that include input from governance, analytics, marketing and product departments.
You'll hear about:
- What we can learn from the botched launch of the Apple Credit Card
- Why every company needs a data strategy
- How regulation, like the Algorithmic Transparency Act, could add protections for consumers and accountability for business
- Offensive vs. defensive data strategy – HBR article
- Where responsibility for inaction leading to data breaches should lie
- Data risks businesses face, including biased algorithms, sharing data with the wrong people, 3rd-party data breaches, insider incidents, and technical mistakes
- Why data ethics need to go beyond what's strictly legal in order to establish and maintain trust
- AI Now's 2019 report that touches on ethical inequality risk factors in AI
Who is Dan Wu? Dan Wu is the Privacy Counsel & Legal Engineer at Immuta, a leading automated data governance platform for analytics. He writes about purposeful data strategy on TechCrunch and LinkedIn. He holds a J.D. & Ph.D. from Harvard University.

Danny In The Valley
AI Now's Meredith Whitaker: "Exploitation by design"

Danny In The Valley

Apr 16, 2020 · 41:58


The Sunday Times’ tech correspondent Danny Fortson brings on Meredith Whittaker, co-founder of AI Now and organiser of the Google walk-out, to talk about how she arrived at the search giant 13 years ago (3:40), delving into tech’s effects on society (4:30), becoming a critic (6:15), and then a labour organiser (8:40), the debate on Silicon Valley working with the Pentagon (11:30), AI bias (14:50), sentencing algorithms (17:00), the Google walk-out (19:45), retaliation (22:30), the dangers of government co-opting Big Tech in the coronavirus response (25:25), how AI can reinforce societal divides (32:30), and the plight of “essential” workers (34:15). See acast.com/privacy for privacy and opt-out information.

Spanish Podcast
News in Slow Spanish - #562 - Easy Spanish Radio

Spanish Podcast

Dec 19, 2019 · 4:54


In the first part of the program, we will discuss international news. We will begin with Wednesday's vote by the U.S. House of Representatives to impeach President Donald Trump, only the third time this has happened in U.S. history. We will continue with the UK elections and the decisive victory of Boris Johnson and his party. We will discuss the research institute AI Now's call to regulate emotion-detection technology, and we will review the results of a study by British researchers on the possible benefits of labeling food with exercise equivalents. Today, in our Trending in Spain segment, we will talk about statistics. One of them is very positive: we will discuss how the holidays boost the employment rate in Spain. The second, by contrast, is just the opposite. I would even call the figures alarming: the birth rate in Spain in 2018 was the lowest in the past 20 years! - The U.S. House of Representatives votes to impeach Trump - Boris Johnson wins a landslide victory in the UK elections - AI Now calls for laws restricting emotion-detection technologies - Labeling food with exercise equivalents works, according to a group of British researchers - The long-awaited December long weekend - Spain recorded its lowest number of births in 20 years in 2018

German Podcast
News in Slow German - #180 - Study German While Listening to the News

German Podcast

Dec 19, 2019 · 4:54


In the first part of our program we cover current events. We begin with Wednesday's opening of impeachment proceedings against President Donald Trump by the U.S. House of Representatives, only the third impeachment in U.S. history. We continue with the elections in the United Kingdom and the clear victory of Boris Johnson and his Conservative Party. We then discuss the AI Now Institute's call for better regulation of emotion-recognition technology. Finally, we look at the results of a study by British researchers suggesting there could be benefits to a new kind of food labeling that states how much exercise it takes to burn off the calories a food contains. In our Trending in Germany segment we talk about Germany's federal prosecutor's office, which is about to formally accuse the Russian government of ordering the murder of a Georgian citizen in Berlin in August; this will certainly have serious diplomatic consequences. We also discuss the proposal that summer vacation should begin at the same time in all German states. That would have advantages for schools, but traffic experts warn of congestion and the tourism association fears economic losses. - U.S. House of Representatives votes to impeach President Trump - Landslide victory for Boris Johnson in the UK elections - AI Now calls for legal regulation of emotion-recognition technology - New food labels to show how much exercise is needed to burn off the calories - Suspected assassin in the Tiergarten murder may himself be in danger - Germany argues over summer vacation

עושים טכנולוגיה
[עושים טכנולוגיה] זיהוי רגשות

עושים טכנולוגיה

Sep 15, 2019 · 43:09


Many companies in the facial-recognition field claim they offer not only a tool for discovering a person's identity, but also a tool for discovering what the person on camera is feeling, by analyzing facial micro-expressions. Giants such as Amazon, IBM, and Microsoft, alongside firms specializing in the field, operate in an emotion-recognition market that turns over 20 billion dollars a year. But is such recognition even possible? In the second episode of the "On the Face" series devoted to facial-recognition technologies, we ask whether pedophiles, criminals, and even academic researchers can be identified by analyzing their facial features, and why sadness is easy to recognize but astonishment is hard. Happy listening, Yuval Dror.
Links:
- Where emotions come from: https://aeon.co/essays/human-culture-and-cognition-evolved-through-the-emotions
- Vaught's Practical Character Reader: https://publicdomainreview.org/collections/vaughts-practical-character-reader-1902/
- The AI Now research institute report: https://ainowinstitute.org/AI_Now_2018_Report.pdf
- The racist history of facial recognition: https://www.nytimes.com/2019/07/10/opinion/facial-recognition-race.html
The show's home page | Email mailing list | iTunes | Our Android app | RSS Link | Facebook | Twitter

City Arts & Lectures
Privacy and Technology

City Arts & Lectures

Aug 18, 2019 · 69:48


This week, a conversation about privacy, ethics, and organizing in the world of technology.Who benefits from the lack of diversity in the tech industry? Does artificial intelligence reflect the biases of those who create it? How can we push for regulation and transparency?  These are some of the questions discussed by our guests, Meredith Whittaker, co-founder of AI Now at NYU and the founder of Google’s Open Research Institute; and Kade Crockford, Director of the ACLU Massachusetts’ Technology and Liberty Program. They appeared at the Sydney Goldstein Theater in San Francisco on June 7, 2019.

Dobcast
Roboto News 16.10.18

Dobcast

Oct 16, 2018 · 3:10


Paul Allen, the co-founder of Microsoft, has died at 65; the AI Now 2018 Symposium gets under way; and the 20th edition of Futurecom is being held in Brazil. www.amenazaroboto.com

The Future, This Week
The Future, This Week 28 Jul 2017

The Future, This Week

Jul 28, 2017 · 33:56


This week: what happened while we were gone, real problems with AI, spying vacuums, and a suicidal robot. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week:
- Elon is worried about killer robots
- The real problems with AI
- Roomba, the home mapping vacuum cleaner
Other stories we bring up:
- AI and enormous data
- Why AI is not colour blind
- Google's collaboration with Carnegie Mellon University paper
- AI Now initiative
- Cathy O'Neil's book Weapons of Math Destruction
- Do algorithms make better decisions?
- Roomba data will be sold to the highest bidder
- How to Use iRobot Roomba 980 Robot Vacuum
Our robot of the week: a Knightscope security robot
You can subscribe to this podcast on iTunes, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au. Send us your news ideas to sbi@sydney.edu.au. For more episodes of The Future, This Week, see our playlists.

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
This Week in ML & AI - 7/8/16: A BS Meter for AI, Retrieval Models for Chatbots & Predatory Robots

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Jul 10, 2016 · 29:29


This Week in Machine Learning & AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the White House’s AI Now workshop, tuning your AI BS meter, research on predatory robots, an AI that writes Python code, plus acquisitions, financing, technology updates and a bunch more. Show notes for this episode can be found at https://twimlai.com/8.