Podcasts about some AI

  • 35 podcasts
  • 41 episodes
  • 34m average episode duration
  • 1 new episode per month
  • Latest episode: Apr 22, 2025

[Popularity chart, 2017-2024]


Best podcasts about some AI

Latest podcast episodes about some AI

Let's Know Things
Creative Assets

Apr 22, 2025 · 18:25


This week we talk about AI chatbots, virtual avatars, and romance novels. We also discuss Inkitt, Galatea, and LLM grooming.

Recommended Book: New Cold Wars by David E. Sanger

Transcript:

There's evidence that the US Trump administration used AI tools, possibly ChatGPT, possibly another, similar model or models, to generate the numbers they used to justify a recent wave of new tariffs on the country's allies and enemies.

It was also recently reported that Democratic mayoral candidate Andrew Cuomo used AI-generated text and citations in a plan he released called Addressing New York's Housing Crisis. And this case is a bit more of a slam dunk, as whoever put the plan together for him seems to have just copy-pasted snippets from the ChatGPT interface without changing or checking them—which is increasingly common for all of us, as such interfaces are beginning to replace even search engine results, like those provided by Google.

But it's also a practice that's generally frowned upon, as—and this is noted even in the copy provided alongside many such tools and their results—these systems provide a whole lot of flawed, false, incomplete, or otherwise not-advisable-to-use data, in some cases flubbing numbers or introducing bizarre grammatical inaccuracies, but in other cases making up research or scientific papers that don't exist, while presenting them the same as they would a real-deal paper or study. And there's no way to know without actually going and checking what these things serve up, which can, for many people at least, take a long while; so a lot of people don't do this, including many politicians and their administrations, and that results in publishing made-up, baseless numbers, and in some cases wholesale fabricated claims.

This isn't great for many reasons, including that it can reinforce our existing biases. If you want to slap a bunch of tariffs on a bunch of trading partners, you can ask an AI to generate some numbers that justify those high tariffs, and it will do what it can to help; it's the ultimate yes-man, depending on how you word your queries. And it will do this even if your ask is not great or truthful or ideal.

These tools can also help users spiral down conspiracy rabbit holes, can cherry-pick real studies to make it seem as if something that isn't true is true, and can help folks who are writing books or producing podcasts come up with just-so stories that seem to support a particular, preferred narrative, but which actually don't—and which maybe aren't even real or accurate, as presented.

What's more, there's also evidence that some nation states, including Russia, are engaging in what's called LLM grooming, which basically means seeding false information to sources they know these models are trained on, so that said models will spit out inaccurate information that serves their intended ends.

This is similar to flooding social networks with misinformation and bots that seem to be people from the US, or from another country whose elections they hope to influence, with the bot appearing to be a person who supports a particular cause, but in reality being run by someone in Macedonia or within Russia's own borders.
Or maybe changing the Wikipedia entry and hoping no one changes it back.

Instead of polluting social networks or wikis with such misinfo, though, LLM grooming might mean churning out websites with high SEO (search engine optimization) rankings, which pushes them to the top of search results, which in turn makes it more likely they'll be scraped and rated highly by AI systems that gather some of their data and understanding of the world, if you want to call it that, from these sources.

Over time, this can lead to more AI bots parroting Russia's preferred interpretation, their propaganda, about things like their invasion of Ukraine, and that, in turn, can slowly nudge the public's perception on such matters; maybe someone asks ChatGPT about Russia's invasion of Ukraine after hearing someone who supports Russia claim that it was all Ukraine's fault, and they're told, by ChatGPT, which would seem to be an objective source of such information, being an AI bot, that Ukraine in fact brought it upon themselves, or is in some way actually the aggressor, which would serve Russia's geopolitical purposes. None of which is true, but it starts to seem more true to some people because of that poisoning of the informational well.

So there are some issues of large, geopolitical consequence roiling in the AI space right now. But some of the most impactful issues related to this collection of technologies are somewhat smaller in scale, today, at least, but still have the potential to disrupt entire industries as they scale up.

And that's what I'd like to talk about today, focusing especially on a few recent stories related to AI and its growing influence in creative spaces.

—

There's a popular meme that's been shuffling around social media for a year or two, and a version of it, shared by an author named Joanna Maciejewska (machie-YEF-ski) in a post on X, goes like this: “You know what the biggest problem with pushing all-things-AI is? Wrong direction.
I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”

It could be argued, of course, that we already have technologies that do our laundry and dishes, and that AI has the capacity to make both of those machines more efficient and effective, especially in terms of helping manage and moderate increasingly renewables-heavy electrical grids, but the general concept here resonates with a lot of people, I think: why are some of the biggest AI companies seemingly dead-set on replacing creatives, who are already often suffering from financial precarity, but who generally enjoy their work, or at least find it satisfying, instead of automating away the drudgery many of us suffer in the work that pays our bills, in our maintenance of our homes, and in how we get around, work on our health, and so on?

Why not automate the tedious and painful stuff rather than the pleasurable stuff, basically?

I think, looking at the industry more broadly, you can actually see AI creeping up on all these spaces, painful and pleasurable, but generative AI tools, like ChatGPT and its peers, seem to be especially good at generating text and images and such, in part because they're optimized for communication, being chatbot interfaces over a collection of more complex tools, and most of our entertainments operate in similar spaces; using words, using images, these are all things that overlap with the attributes that make for a useful and convincing chatbot.

The AI tools that produce music from scratch, writing the lyrics and producing the melodies and incorporating different instruments, working in different genres, the whole thing, soup to nuts, are based on similar principles to AI systems that work with large sets of linguistic training data to produce purely language-based, written outputs.

Feed an AI system gobs of music, and it can learn to produce music at the prompting of a user, then, and the same seems to be true of other types of content as well, from images to movies to video games.

This newfound capacity to spit out works that, for all their flaws, would have previously required a whole lot of time and effort to produce, is leading to jubilation in some spaces, but concern and even outright terror in others.

I did an episode not long ago on so-called ‘vibe coding,' about people who in some cases can't code at all, but who are producing entire websites and apps and other products just by learning how to interact with these AI tools appropriately. And these vibe coders are having a field day with these tools.

The same is increasingly true of people without any music chops who want to make their own songs.
Folks with musical backgrounds often get more out of these tools, same as coders tend to get more from vibe coding, in part because they know what to ask for, and in part because they can edit what they get on the other end, making it better and tweaking the output to make it their own.

But people without movie-making skills can also type what they want into a box and have these tools spit out a serviceable movie on the other end, and that's leading to a change similar to what happened when less-fiddly guns were introduced to the battlefield: you no longer needed to have super well-trained soldiers to defeat your enemies, you could just hand them a gun and teach them to shoot and reload it, and you'd do pretty well; you could even defeat some of your contemporaries who had much better trained and more experienced soldiers, but who hadn't yet made the jump to gunpowder weapons.

There are many aspects to this story, and many gray areas that are not as black and white as, for instance, a non-coder suddenly being able to out-code someone who's worked really hard to become a decent coder, or someone who knows nothing about making music creating bops, with the aid of these tools, that rival those of actual musicians and singers who have worked their whole lives to be able to do the same.

There have been stories about actors selling their likenesses to studios and companies that work with studios, for instance, those likenesses then being used by clients of those companies, often without the actors' permission.

For some, this might be a pretty good deal, as that actor is still free to pursue the work they want to do, and their likeness can be used in the background for a fee, some of that fee going to the actor, no additional work necessary. Their likeness becomes an asset that they wouldn't have otherwise had—not to be used and rented out in that capacity, at least—and thus, for some, this might be a welcome development.

This has, in some cases though, resulted in situations in which said actor discovers that their likeness is being used to hawk products they would never be involved with, like online scams and bogus health cures. They still receive a payment for that use of their image, but they realize that they have little or no control over how and when and for what purposes it's used.

And because of the aforementioned financial precarity that many creatives in particular experience as a result of how their industries work, a lot of people, actors and otherwise, would probably jump at the chance to make some money, even if the terms are abusive and, long-term, not in their best interest.

Similar tools, and similar financial arrangements, are being used and made in the publishing world.

An author named Manjari Sharma wrote her first book, an enemies-to-lovers style romance, in a series of installments she published on the free fanfic platform Wattpad during the height of the Covid pandemic. She added it to another, similar platform, Inkitt, once it was finished, and it garnered a lot of attention and praise on both.

As a result of all that attention, the folks behind Inkitt suggested she move it from their free platform to their premium offering, Galatea, which would allow Sharma to earn a portion of the money gleaned from her work.

The platform told her they wanted to turn the book into a series in early 2024, but that she would only have a few weeks to complete the next book, if she accepted their terms.
She was busy with work, so she accepted their offer to hire a ghostwriter to produce the sequel, as they told her she'd still receive a cut of the profits, and the fan response to that sequel was…muted. They didn't like it. Said it had a different vibe, wasn't well-written, just wasn't very good. Lacked the magic of the original, basically.

She was earning extra money from the sequel, then, but no one really enjoyed it, and she didn't feel great about that. Galatea then told Sharma that they would make a video series based on the books for their new video app, 49 episodes, each a few minutes long, and again, they'd handle everything, she'd just collect royalties.

The royalty money she was earning was a lot less than what traditional publishers offer, but it was enough that she was earning more from those royalties than from her actual bank job, and the company, due to the original deal she made when she posted the book to their service, had the right to do basically anything they wanted with it, so she was kind of stuck, either way.

So she knew she had to go along with whatever they wanted to do, and was mostly just trying to benefit from that imbalance where possible. What she didn't realize, though, was that the company was using AI tools to, according to the company's CEO, “iterate on the stories,” which basically means using AI to produce sequels and video content for successful, human-written books. As a result of this approach, they have just one head of editorial and five “story intelligence analysts” on staff, alongside some freelancers, handling books and supplementary content written by about 400 authors.

As a business model, it's hard to compete with this approach.

As a customer, at the moment, at least, with today's tools and our approach to using them, it's often less than ideal. Some AI chatbots are helpful, but many of them just gatekeep so a company can hire fewer customer service humans, saving the business money at the customer's expense.
That seems to be the case with this book's sequel, too, and many of the people paying to read these things assumed they were written by humans, only to find, after the fact, that they were very mediocre AI-generated knock-offs.

There's a lot of money flooding into this space, predicated in part on the promise of being able to replace currently quite expensive people, like those who have to be hired and those who own intellectual property, like the rights to books and the ideas and characters they contain, with near-free versions of the same, the AI doing similar-enough work alongside a human skeleton crew, and that model promises crazy profits by earning the same level of revenue but with dramatically reduced expenses.

The degree to which this will actually pan out is still an open question, as, even putting aside the moral and economic quandary of what all these replaced creatives will do, and the legal argument that these AI companies are making right now, that they can just vacuum up all existing content and spit it back out in different arrangements without that being a copyright violation, even setting all of that aside, the quality differential is pretty real in some spaces right now, and while AI tools do seem to have a lot of promise for all sorts of things, there's also a chance that the eventual costs of operating them and building out the necessary infrastructure will fail to afford those promised financial benefits, at least in the short term.

Show Notes:
https://www.theverge.com/news/648036/intouch-ai-phone-calls-parents
https://arstechnica.com/ai/2025/04/regrets-actors-who-sold-ai-avatars-stuck-in-black-mirror-esque-dystopia/
https://archive.ph/gzfVC
https://archive.ph/91bJb
https://www.cnn.com/2025/03/08/tech/hollywood-celebrity-deepfakes-congress-law/index.html
https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
https://techcrunch.com/2025/04/13/jack-dorsey-and-elon-musk-would-like-to-delete-all-ip-law/
https://www.404media.co/this-college-protester-isnt-real-its-an-ai-powered-undercover-bot-for-cops/
https://hellgatenyc.com/andrew-cuomo-chatgpt-housing-plan/
https://www.theverge.com/news/642620/trump-tariffs-formula-ai-chatgpt-gemini-claude-grok
https://www.wsj.com/articles/ai-cant-predict-the-impact-of-tariffsbut-it-will-try-e387e40c
https://www.washingtonpost.com/technology/2025/04/17/llm-poisoning-grooming-chatbots-russia/

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe

High-Income Business Writing
#366: The Conversation Your AI Is Dying to Have with You

Feb 26, 2025 · 8:51


Episode Overview: In this episode, I explain why a conversational approach to AI interactions is more effective than massive, complex prompts — and how this mirrors natural human communication patterns.

Key Points:

1. The Common Mistake: Many users dump massive amounts of information into single prompts. Some AI experts promote complex, lengthy prompts that users blindly copy. This approach lacks organic interaction and outsources critical thinking.

2. The Human Conversation Model: Consider how we naturally handle complex discussions with colleagues. We don't monologue for 20 minutes straight. Information sharing happens through natural back-and-forth dialogue.

3. Better Approach: The Conversational Method: Start with essential information using the 3R framework: Role: Tell AI what perspective to adopt. Reference: Provide necessary context. Requirements: Specify what you need. Let AI respond before adding more context. Build the conversation iteratively.

4. Why This Works Better: Helps both AI and humans process information more effectively. Supports natural "chain of thought" reasoning. Similar to building a house: a methodical, step-by-step approach. Allows for unexpected insights and creative solutions.

5. Benefits: Keeps your own thinking and problem-solving skills sharp. Leads to more meaningful exchanges. Helps uncover possibilities you hadn't considered. Maintains human agency in the interaction.

Notable Quote: "When we treat AI like a conversation partner rather than a command-line interface, we tap into its full potential."

Takeaway: Approach AI interactions as you would a thoughtful discussion with a respected colleague — start with essentials and build the conversation naturally.
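To make the conversational method concrete, here is a minimal Python sketch (not from the episode) of the 3R-then-iterate loop described above. The send_to_model() helper is a hypothetical stand-in for whatever chat API you actually use; the point is that the message list grows turn by turn instead of arriving as one monolithic prompt.

from typing import Dict, List

Message = Dict[str, str]

def send_to_model(messages: List[Message]) -> str:
    """Hypothetical stand-in for a real chat-completion call."""
    raise NotImplementedError("wire this up to the chat API you actually use")

def start_conversation(role: str, reference: str, requirements: str) -> List[Message]:
    """Open with only the 3R essentials: Role, Reference, Requirements."""
    opening = (
        f"Role: {role}\n"
        f"Reference: {reference}\n"
        f"Requirements: {requirements}\n"
        "Respond with your first pass before I add more context."
    )
    return [{"role": "user", "content": opening}]

def add_turn(messages: List[Message], follow_up: str) -> List[Message]:
    """Build the exchange iteratively: capture the reply, then add one follow-up."""
    reply = send_to_model(messages)
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": follow_up})
    return messages

# Example usage: start small, then layer in context as the conversation develops.
msgs = start_conversation(
    role="senior B2B copywriter",
    reference="a SaaS client launching a new invoicing product",
    requirements="outline a one-page landing page",
)
# msgs = add_turn(msgs, "Good start. Narrow the headline options to the two strongest.")

The same structure works with any chat SDK; only send_to_model() changes.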

Edge of the Web - An SEO Podcast for Today's Digital Marketer
736 | News from the EDGE | Week of 12.16.2024

Dec 20, 2024 · 30:37


Our LAST News show of the year! Thanks to all of our listeners for making this year another successful year for EDGE of the Web! Let us know your thoughts on the show at https://ratethispodcast.com/EDGE Stories! "As the WordPress Turns" (you gotta check out the YouTube on this one). The Mullenweg Meltdown continues. Musk wants to bring a challenge to Gmail, DM style. A huge exploit documented by friend of the show, Mark Williams-Cook. Please check out the second episode of our interview with Mark, as we dive into this insight into additional ranking signals from Google. Some AI news: AI-powered reporting at Google Ads, as well as Gemini 2.0 coming to Search. A fleet of articles from Barry - always check out Search Engine Roundtable to catch up with his daily postings. Thanks again to our wonderful audience! We wish you a very merry Christmas and a Happy New Year! We'll see you on the flip side! Best, EDGE of the Web Team

News from the EDGE:
[00:03:16] Elon Musk Is Gearing Up to Challenge Gmail with His Own Email Service
[00:06:50] (As the WordPress Turns) Mullenweg Outraged as WP Engine is Allowed to Regain Access
[00:10:43] EDGE of the Web Title Sponsor: Site Strategics
[00:11:53] Exploit Unveils Google's Content Ranking Secrets

AI News:
[00:15:26] Google Ads Tests AI-Powered Reporting Tool
[00:17:24] Google Gemini 2.0 coming to Search and AI Overviews

AI Tools:
[00:18:48] Google Whisk
[00:20:10] EDGE of the Web Sponsor: InLinks

Barry Blast from Search Engine Roundtable:
[00:21:31] Google Business Profile Emails Asking To Add Social Profiles
[00:23:55] Google To Have More Core Updates, More Often
[00:27:16] Google December 2024 Core Update Landed & It's Big

Thanks to our sponsors!
Site Strategics https://edgeofthewebradio.com/site
Inlinks https://edgeofthewebradio.com/inlinks

Follow Us:
X: @ErinSparks
X: @TheMann00
X: @EDGEWebRadio

Tore Says Show
Sat 07 Dec, 2024: Crimes, Crimes, Crimes (Part 2 of 2) - Being Throttled - Good Genes - Gaslighting Us - AI Future - Evil Downloads - So Many Choices

Dec 8, 2024 · 156:25


Genetics, DNA and the human experience are all under attack. Nothing of value comes through corporate media outlets. When Congress is just as depraved, why worry about Hunter? It was years ago when Trump started making cabinet picks. You shouldn't be concerned about them. He's got a purpose for each. Light is always the best disinfectant. The 5G network that makes the new Silk Road. Cyrus Parsa sees the future of AI. Humanity's mind is being raped every day. The Creator has a great plan, and He is the true vision for the future. There are higher powers that constantly manipulate our domain. We must truly decide what is good and evil. Only then will we have a glorious future. Signal jammers and L-Rad systems. Because we have so many choices, things get complicated fast. Drones and China will be in the news soon. Some AI perspective on how we see the world. It's a big reach and the audience, but subscriptions are not allowed. Technology is awesome, but we must always be in charge. Welcome to the human condition. We must find empathy and compassion for everyone. The goal is to try and be better.

Storage Unpacked Podcast
Storage Unpacked 262 – The Ethics and Regulation of AI

Oct 18, 2024 · 52:17


In this podcast episode, Chris is in conversation with Jeffries Briginshaw (Head of EMEA Government Relations at NetApp) and Adam Gale (CTO for AI & Cyber Security, NetApp) discussing the EU AI Act and the regulation of artificial intelligence across the world. The EU AI Act is an early introduction into the regulation of the use of AI by businesses within their engagements and interactions with customers. As explained in this conversation, there are classifications of AI types and, within that, restrictions on what businesses are permitted to implement based on those categorisations. Some AI uses will be banned outright, while others will require human intervention and close monitoring. How should your business engage with AI and ensure compliance with the act? Listen to the discussion for more details. As mentioned in the recording, for details on what NetApp can offer, point your favourite browser to https://www.netapp.com/artificial-intelligence/ to learn more.

Elapsed Time: 00:52:17

Timeline:
00:00:00 - Intros
00:01:19 - Why should we be regulating AI?
00:02:30 - What will the impacts of AI be on personal and work life?
00:03:55 - What if we get regulation wrong?
00:05:30 - What happens if AI goes wrong, such as data poisoning?
00:09:04 - Existing EU/UK law has been successful at regulation (GDPR)
00:10:25 - What is the EU AI Act?
00:11:46 - “Prohibited Practices” will be banned from 2025
00:14:00 - How will the use of AI in business be regulated?
00:18:05 - The EU AI Act appears to focus on protection for individuals
00:20:56 - EU citizens are broadly positive about AI - if it is successfully regulated
00:21:52 - Compliance has an overhead - in terms of hard costs (developers)
00:25:20 - What are the penalties for not complying with the EU AI Act?
00:29:50 - What about the rest of the world - the US and elsewhere?
00:35:10 - Could we see “cross-border” complexity?
00:37:40 - What are the technology implications for AI regulation?
00:40:07 - Should businesses be demonstrating their AI compliance?
00:44:03 - What does NetApp offer customers to help AI compliance?
00:47:38 - AI will require a “big red stop button”
00:50:00 - Wrap Up

Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission. Podcast episode #dfsx
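As a rough illustration of the tiered, classification-based approach discussed in the episode, here is a minimal Python sketch (not from the episode, and not legal guidance): the tier names reflect the Act's broad risk categories, but the example use cases and obligation summaries are simplified assumptions for illustration only.

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH = "high risk: conformity assessment, human oversight, monitoring"
    LIMITED = "limited risk: transparency duties, e.g. disclose that it is AI"
    MINIMAL = "minimal risk: no additional obligations"

# Hypothetical mapping, for illustration only.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "triage support for medical decisions": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Look up the (simplified) obligation summary for a named use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case} -> {tier.value}"

for case in EXAMPLE_USE_CASES:
    print(obligations_for(case))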

The Broker Link
What are the Best Ways to Use AI in your Insurance Business?

Aug 27, 2024 · 24:41


Have you considered using AI in your business? What is the best way to use it? It can be very effective if used properly. In this episode of The Broker Link, we talk with Bob Whitis from BrightFire. BrightFire provides a suite of tools for digital marketing solutions. So how can you use AI in your insurance business? One way is for brand awareness and messaging. Another way to use it is for content, code, answers to questions, and so much more. Some AI solutions, such as ChatGPT, Gemini by Google, and Microsoft Copilot, are among the top tools. Bob will also talk about some of the misconceptions people have about using AI. Go to www.brightfire.com/thebrokerageinc to find out how BrightFire can help you.

The Table Church
Freedom from Death and Resurrection: Romans 6

Aug 18, 2024 · 28:54


(Some AI tools were used to recover some poor audio from the original recording. You may notice some . . . oddness in Anthony's voice)   This sermon, part of a series on the Book of Romans, focuses on Romans chapter 6. It interprets the scripture to present sin as an oppressive entity that Jesus's death and resurrection have overcome. The message emphasizes the need for believers to shift their allegiance entirely from sin to God, as trying to serve both is futile. Various analogies, including a modern story about an employee with two jobs, illustrate this point. The sermon debunks the traditional legalistic approach, portraying sin as a despotic ruler rather than merely individual mistakes. It underscores the transformative power of baptism, seen as participating in Jesus's death and resurrection, thus liberating believers from sin's grip. The concluding message encourages embracing one's identity as beloved by God, rejecting self-hatred, and living in the freedom and peace offered through Jesus.

Canadian Cycling Magazine Podcast
AI and training insights from a Toronto cyclist working to make riders stronger

Aug 1, 2024 · 58:25


Years ago, Armando Mastracci got a recumbent bike that could provide him with heart rate, cadence and power data. As Mastracci trained on the bike indoors throughout one winter, the graduate of engineering science at the University of Toronto recorded his training data on spreadsheets. He also started performing his own experiments. What happened if he maintained a certain cadence? Or power? He started noticing patterns in the data, patterns that led him to algorithms, which in turn led to the launch of a training platform called Xert that Mastracci continues to build and expand today.

From the beginning, Xert had AI-like features. It could look at a rider's power data and make predictions. But, until this past December, the company didn't really lean into the term artificial intelligence. Then, eight months ago, Xert began rolling out a beta version of a feature called Forecast AI. What was it about this feature that made it AI? Why wasn't the previous predictive number crunching of the software AI? Mastracci not only discusses these questions, but explores larger ideas that affect cyclists looking to improve their performance, as well as the AI field as a whole. Can an AI model handle all the data that cyclists can now collect, from heart-rate variability to blood-sugar levels? Some AI models have shown certain biases. Are there biases in training platforms? With AI training systems getting better and better, should traditional coaches be worried? Take a listen to this fascinating interview with Mastracci and get a glimpse of the future of training.

Also in this episode, an update from Paris. Canadian Cycling Magazine writer Tara Nolan is at the Summer Games. She checks in with behind-the-scenes news from the time trial and mountain bike races. Make sure to read Nolan's stories about the races against the clock and the Holmgren siblings, who competed in their first Olympics in cross country mountain biking. How did the Holmgrens get to Paris? Well, that's a good story, too. You can listen to it in a previous episode.

One Graham Army
#302 – Craig E. List

Jun 13, 2024 · 58:53


Some AI voice cloning, the people of Craigslist and it's time to clock in at the Chee. https://suno.com/@one_graham_army One time donations of any amount available at https://ko-fi.com/onegrahamarmy Consider supporting the Global War On Coherency at https://www.patreon.com/onegrahamarmy Google “One Graham Army” for socials and more Go shirt yourself with Shirt Caviar! https://shirtcaviar.com/

Techmeme Ride Home
Thu. 05/16 – Instagram Founder To Anthropic

May 16, 2024 · 15:28


Some AI companies want to go after web search. But by hiring an Instagram founder, is Anthropic going in a social or app direction? Will AI kill the carbon neutral ambitions of the major tech players? Will tech companies now have to onshore EMPLOYEES from China? And Netflix with ads? Definitely working.

Links:
EU launches probe into Meta over social media addiction in children (Financial Times)
Instagram's co-founder is Anthropic's new chief product officer (The Verge)
Android will be able to detect if your phone has been snatched (The Verge)
Microsoft's AI Push Imperils Climate Goal as Carbon Emissions Jump 30% (Bloomberg)
Microsoft Asks Hundreds of China-Based AI Staff to Consider Relocating Amid U.S.-China Tensions (WSJ)
Stability AI, Facing Cash Crunch, Discusses Sale (The Information)
Netflix ad-supported tier has 40 million monthly users, nearly double previous count (CNBC)

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Daybreak
Is there room for deep fakes in democracy? AI startups seem to think so

Apr 16, 2024 · 13:24


Just like every Lok Sabha election in the last 72 years, millions of people will vote for a new government over the next couple of weeks. But there is one thing that really sets this election apart. Never before have political parties actively used Generative Artificial Intelligence at this scale. It is a turning point in India's electoral evolution. Some AI startups in India have been developing hyper-personalised voter experiences for political parties. This comes at a time when Gen AI tools like deepfakes have become very sophisticated — to the point where even experts often struggle to tell what is real and what is not. In the run-up to the election, when you are being bombarded with political content, videos and images, this can be very dangerous. Yet, there are barely any rules in place to regulate the use of this technology during the election process. What does this mean for the world's largest democracy?

Daybreak is produced from the newsroom of The Ken, India's first subscriber-only business news platform. Subscribe for more exclusive, deeply-reported, and analytical business stories.

WIRED Business – Spoken Edition
Meta Will Crack Down on AI-Generated Fakes—but Leave Plenty Undetected

Feb 7, 2024 · 5:55


Some AI-generated images posted to Facebook, Instagram, and Threads will in future be labeled as artificial. But only if they are made using tools from companies willing to work with Meta. Read this story here. Learn more about your ad choices. Visit megaphone.fm/adchoices

Tech Talk with Mathew Dickerson
A Thrilling Throwback through the First Half of 2023 with the Most Memorable and Mind-Blowing Moments from Tech Talk.

Dec 31, 2023 · 118:41


Blood Pressure Monitoring with Just the Tip of Your Finger.  Question to ChatGPT: Write a Short Paragraph about the Status of ChatGPT.  Which EV Has Ended the 28-Year Reign of the Toyota Camry as the Best-Selling Sedan in Australia?  Would You Be Happy to Fly on an Airbus That Requires Fewer People in the Cockpit?  What Does the Data Show for the Progression of EV Sales around the World?  AI Introduction to the 100th Episode.  Alphabet Stock Plummets by $144 Billion Due to Google AI Error.  The Latest Tool for Charity Scammers Is AI-Generated Art.  Remote Kissing over the Internet for Long-Distance Relationships.  Beverage Printer Lets You Customise Your Drink: One Molecule at a Time.  NSW Schools Consider Prison Tactics to Curb Mobile Misuse.  3D Printed Cheesecake. Mmmm. Cheesecake.  A Call to Remember from Fifty Years Ago.  Can Your Eyes See the Detail in an 8K TV? Giant Gravity Batteries Battle Renewable Energy Roadblocks.  Apple and Google Grapple with AirTag Stalking.  Swedish Streets Will Have World's First Permanent EV-Charging Road by 2025.  Sustainable, Spoilage-Sensing Wraps Show Promising Potential.  Dundee's Degree for Digital Doers Will Boost Expertise in Esports.  McCartney Announces 'Final' Beatles Song Drawn from Lennon's Old Demo – with Some AI. 

英语每日一听 | 每天少于5分钟
Episode 2037: How Will Europe's New Artificial Intelligence Rules Affect the World?

Dec 16, 2023 · 4:46


European nations reached an agreement on rules for artificial intelligence (AI) last week. Some experts say the regulations will affect people around the world. Here are some of the details of the agreement reported by the Associated Press:

The AI Act aims to regulate or establish guidelines for AI technology that has the potential to cause problems if misused. AI systems that recommend online material, or those that check email messages, would be less regulated. But technology that concerns healthcare or medical decisions would have higher requirements. Some AI systems will be banned except in some cases. They include systems that scan people's faces in public and systems that make predictions about future behavior, such as whether a person will commit a crime. The new AI Act will not take effect until two years after a vote from European lawmakers. The vote is planned for the first part of next year. The soonest it would be in place is sometime early in 2026.

Some experts say the guidelines could become a global standard. That has happened before. One recent European decision caused U.S. company Apple to stop using its Lightning data cable in favor of a more widely used cable. Experts say Europe's rules might be used as a blueprint in other parts of the world. Anu Bradford is a professor at Columbia University in New York City. She called Europe's act “comprehensive” and “a game-changer.” Bradford noted the European rules will “show the world AI can be governed.” Rights groups complained that Europe's decision to not completely ban the use of facial recognition “is a missed opportunity.” Amnesty International noted that Europe did not ban exports of AI technology that covers social scoring. Social scoring systems permit governments to record how well citizens follow rules.

In the United States, President Joe Biden signed an executive order in October on AI. Biden required AI technology companies to share test results and other information with the government. Government organizations will create requirements for AI tools that must be followed before systems are released for public use. China released rules for AI tools that create material such as photos, text and videos. The rules are only short-term guidelines. China's president also called for an open and fair environment for AI development around the world.

The rise of ChatGPT, an AI tool based in the U.S., is one of the reasons for Europe's new set of rules. Europe's rules include guidelines for chatbots and other AI systems that can do jobs such as writing, creating video and writing computer code. Systems must clearly show where the material that went into training the bots came from.
They also must show how much energy was used to train the systems, or models. They should be open about how they control the data that comes from their tool's users. And they need to observe the EU's copyright property protection laws. High technology systems or risky uses of AI are required to follow stricter rules. Those include systems that create basic pieces of information, such as computer code, that others will then use to create other AI systems.

Hesby Street
Hesby Street w/ Zack Chapaloni & Torio Van Grol - Ep. 189

Nov 15, 2023 · 35:01


Hey Hesbos! Zack has seen multiple generations of nerd. Torio wants his tootsie pop hat. Which tribes appropriated that coat. Some AI pins wish they were on another person. Don't block out your scene at a public restaurant. Shia always coming in hot! ALL EVERYTHING IS NOTHING!!!

Follow us on Instagram: @hesbystreetpod, @toriovangrol, @zackchapaloni

Merch and Live Show dates: https://www.hesbystreetpod.com

Get bonus content on Patreon. Hosted on Acast. See acast.com/privacy for more information.

Charity Therapy
095: Meeting Skeletons

Sep 21, 2023 · 17:27


Hey there, nonprofit party people! In this episode of Charity Therapy, we're back again, diving headfirst into a topic that sounds about as thrilling as watching paint dry: meeting minutes. But stick with us! This episode isn't going to put you to sleep, I promise! We will shine some much-needed light on the hidden power of these underrated documents in your nonprofit board governance. We start by addressing a listener's burning question: can technology-assisted recordings replace traditional written meeting minutes? Spoiler alert: they can't! We've got a whole heap of reasons why. In the second half of the episode, we venture into the murky relationship between meeting minutes and technology solutions. You'd think with all our AI gadgets and gizmos, there'd be an easy fix, right? Well, sort of. We talk about how using AI tools can help make minute-taking less of a chore and more of a breeze. But don't get too excited - we also stress why it's essential not to record every single detail during meetings. To round it all off, we explain the importance of training new board members on the art of minute-taking. If you're in nonprofit management and find yourself grappling with board governance, this episode is packed with candid discussions and practical solutions to help you out. Let's turn those meeting minutes from mundane to magical!

In this episode, you will hear:
- Meeting minutes' significance in nonprofit board governance
- Why you shouldn't replace your written minutes with meeting recordings
- The relationship between meeting minutes and technology
- Why you don't want to document every minute detail during board meetings
- Some AI tools that can make the minute-taking process more manageable
- The importance of training new board members on meeting minute protocols
- How to address board members struggling with understanding their roles

Resources from this Episode:
Sign up for the Birken Law Email list: https://birkenlaw.com/signup/
Facebook page: https://www.facebook.com/birkenlaw

Follow and Review: We'd love for you to follow us if you haven't yet. Click that purple '+' in the top right corner of your Apple Podcasts app. We'd love it even more if you could drop a review or 5-star rating over on Apple Podcasts. Simply select “Ratings and Reviews” and “Write a Review,” then add a quick line with your favorite part of the episode. It only takes a second and it helps spread the word about the podcast.

Episode Credits: If you like this podcast and are thinking of creating your own, consider talking to my producer, Emerald City Productions. They helped me grow and produce the podcast you are listening to right now. Find out more at https://emeraldcitypro.com Let them know we sent you.

The Nonlinear Library
AF - How to talk about reasons why AGI might not be near? by Kaj Sotala

Sep 17, 2023 · 4:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to talk about reasons why AGI might not be near?, published by Kaj Sotala on September 17, 2023 on The AI Alignment Forum.

I occasionally have some thoughts about why AGI might not be as near as a lot of people seem to think, but I'm confused about how/whether to talk about them in public. The biggest reason for not talking about them is that one person's "here is a list of capabilities that I think an AGI would need to have, that I don't see there being progress on" is another person's "here's a roadmap of AGI capabilities that we should do focused research on". Any articulation of missing capabilities that is clear enough to be convincing seems also clear enough to get people thinking about how to achieve those capabilities.

At the same time, the community thinking that AGI is closer than it really is (if that's indeed the case) has numerous costs, including at least:

- Immense mental health costs to a huge number of people who think that AGI is imminent
- People at large making bad strategic decisions that end up having major costs, e.g. not putting any money in savings because they expect it to not matter soon
- Alignment people specifically making bad strategic decisions that end up having major costs, e.g. focusing on alignment approaches that might pay off in the near term and neglecting more foundational long-term research
- Alignment people losing credibility and getting a reputation of crying wolf once predicted AGI advances fail to materialize

Having a better model of what exactly is missing could conceivably also make it easier to predict when AGI will actually be near. But I'm not sure to what extent this is actually the case, since the development of core AGI competencies feels like more of a question of insight than grind, and insight seems very hard to predict.

A benefit from this that does seem more plausible would be if the analysis of capabilities gave us information that we could use to figure out what a good future landscape would look like. For example, suppose that we aren't likely to get AGI soon and that the capabilities we currently have will create a society that looks more like the one described in Comprehensive AI Services, and that such services could safely be used to detect signs of actually dangerous AGIs. If this was the case, then it would be important to know that we may want to accelerate the deployment of technologies that are taking the world in a CAIS-like direction, and possibly e.g. promote rather than oppose things like open source LLMs.

One argument would be that if AGI really isn't near, then that's going to be obvious pretty soon, and it's unlikely that my arguments in particular for this would be all that unique - someone else would be likely to make them soon anyway. But I think this argument cuts both ways - if someone else is likely to make the same arguments soon anyway, then there's also limited benefit in writing them up. (Of course, if it saves people from significant mental anguish, even just making those arguments slightly earlier seems good, so overall this argument seems like it's weakly in favor of writing up the arguments.)

From Armstrong & Sotala (2012): Some AI predictions claim that AI will result from grind: i.e. lots of hard work and money. Others claim that AI will need special insights: new unexpected ideas that will blow the field wide open (Deutsch 2012).
In general, we are quite good at predicting grind. Project managers and various leaders are often quite good at estimating the length of projects (as long as they're not directly involved in the project (Buehler, Griffin, and Ross 1994)). Even for relatively creative work, people have sufficient feedback to hazard reasonable guesses. Publication dates for video games, for instance, though often over-optimistic, are generally not ridiculously erroneous...

The MSDW Podcast
Why organizations need an AI adoption framework

Aug 2, 2023 · 25:44


This episode is sponsored by Mazars USA.

As Microsoft and other large technology firms accelerate their AI-related roadmaps, businesses have no choice but to reckon with how different technology will impact their employees, customers, and partners. Some AI tools will be relatively easy to deploy, like prebuilt capabilities coming to Microsoft Dynamics 365 apps via Copilot. Other uses of AI will require greater investment, carry different risk profiles, and raise new ethical, regulatory, and governance questions.

Ivan Cole, managing director at Mazars USA, joins us to discuss AI adoption in the Microsoft space and to expand on some of the points he and his colleague, Microsoft MVP Chris Segurado, raised in a recent webcast for the MSDW audience about adoption and momentum of AI in the Dynamics space.

As Ivan explains, all artificial intelligence isn't created equal. With the popularity of generative AI, we are seeing a tendency for people to confuse it with other capabilities like machine learning (ML) and natural language processing (NLP). Ivan explains how he untangles some of these fundamentals and shares his outlook on why an AI framework helps guide businesses in their approach to the various technologies.

Show Notes:
2:30 - What do people understand well about AI today and where do they need education?
6:00 - How to harness an organization's enthusiasm to take advantage of AI capabilities in the Microsoft space
12:30 - How AI's implementation could change specific roles in organizations
16:00 - Looking for AI solutions that extend beyond what Microsoft offers out of the box
18:30 - What it means to enable security and guardrails for AI adoption
22:00 - Why organizations need to make rapid progress on their AI governance policies

More from Mazars: With Mazars' SAFE AI Framework™ as your guide, you can confidently embrace your organization's future with AI. Journey with Mazars and discover AI's enormous potential, while ensuring its use aligns with best practices for security, adaptability, factual integrity and ethics. Learn more about SAFE AI Framework.

Generation TECH
Episode 139 July 31, 2023

Jul 31, 2023 · 1:52


DNA Tracking from the air, Screen time fail, Ho Hum Samsung updates, Apple shares at all time high…again, iPhone at 55% of US market, Meta's Threads is losing users after big start, Tim Cook was declined when he applied for an Apple Card, Another map app company? Twitter is now X, Apple wants to shrink your bezel, but is that a good thing? Next Apple Watches expected to get biggest update in years, 6 in 1 charger, Some AI apps that actually do things, Passwords are terrible, Midnight Alarm app does not impress, Oura and other smart rings…

Conversations on technology and tech adjacent subjects with two and sometimes three generations of tech nerds. New shows on (mostly) MONDAYS!

Irish Tech News Audio Articles
Will Artificial Intelligence (AI) End Civilisation?

Jul 18, 2023 · 3:04


Will artificial intelligence (AI) end civilisation? Researchers at Lero, the Science Foundation Ireland Research Centre for Software and University College Cork, are seeking help determining what the public believes and knows about AI and software more generally. Psychologist Dr Sarah Robinson, a senior postdoctoral researcher with Lero, is asking members of the public to take part in a ten-minute anonymised online survey to establish what people's hopes and fears are for AI and software in general. "As the experts debate, little attention is given to what the public thinks - and the debate is raging. Some AI experts express concern that others prioritise imagined apocalyptic scenarios over immediate concerns - such as racist and sexist biases being programmed into machines. As software impacts all our lives, the public is a key stakeholder in deciding what being responsible for software should mean. So, that's why we want to find out what the public is thinking," added the UCC-based researcher. Dr Robinson said that, for example, human rights abuses are happening through AI and facial recognition software. "Research by my Lero colleague Dr Abeba Birhane and others found that data used to train some AI is contaminated with racist and misogynist language. As AI becomes widespread, the use of biased data may lead to harm and further marginalisation for already marginalised groups. "While there is a lot in the media about AI, especially ChatGPT, and what kind of world it is creating, there is less information about how the public perceives the software all around us, from social media to streaming services and beyond. We are interested in understanding the public's point of view - what concerns the public have, what are their priorities in terms of making software responsible and ethical, and the thoughts and ideas they have to make this a reality?" outlined Dr Robinson. Participants in the survey will be asked for their views and possible concerns on a range of issues and topics, with the hope of clarifying their views on critical issues. Lero is asking members of the public to donate 10 minutes of their time for this short survey.

More about Irish Tech News: Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No.1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page. If you'd like to be featured in an upcoming Podcast, email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.

The Nonlinear Library
EA - Aptitudes for AI governance work by Sam Clarke

Jun 14, 2023 · 13:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Aptitudes for AI governance work, published by Sam Clarke on June 14, 2023 on The Effective Altruism Forum. I outline 8 “aptitudes” for AI governance work. For each, I give examples of existing work that draws on the aptitude, and a more detailed breakdown of the skills I think are useful for excelling at the aptitude. How this might be helpful: For orienting to the kinds of work you might be best suited to For thinking through your skill gaps for those kinds of work Offering an abstraction which might help those thinking about field-building/talent pipeline strategy Epistemic status: I've spent ~3 years doing full-time AI governance work. Of that, I spent ~6 months FTE working on questions related to the AI governance talent pipeline, with GovAI. My work has mostly been fairly foundational research—so my views about aptitudes for research-y work (i.e. the first four aptitudes in this post) are more confident than for more applied or practical work (i.e. the latter three aptitudes in this post). I've spent ~5 hours talking with people hiring in AI governance about the talent needs they have. See this post for a write-up of that work. I've spent many more hours talking with AI governance researchers about their work (not focused specifically on talent needs). This post should be read as just one framework that might help you orient to AI governance work, rather than as making strong claims about which skills are most useful. Some AI governance-relevant aptitudes Macrostrategy What this is: investigating foundational topics that bear on more applied or concrete AI governance questions. Some key characteristics of this kind of work include: The questions are often not neatly scoped, such that generating or clarifying questions is part of the work. It involves balancing an unusually wide or open-ended range of considerations. A high level of abstraction is involved in reasoning. The methodology is often not very clear, such that you can't just plug-and-play with some standard methodology from a particular field. Examples: Descriptive work on estimating certain ‘key variables' E.g. reports on AI timelines and takeoff speeds. Prescriptive work on what ‘intermediate goals' to aim for E.g. analysis of the impact of US govt 2022 export controls. Conceptual work on developing frameworks, taxonomies, models, etc. that could be useful for structuring future analysis E.g. The Vulnerable World Hypothesis. Useful skills: Generating, structuring, and weighing considerations. Being able to generate lots of different considerations for a given question and weigh up these considerations appropriately. For example, there are a lot of considerations that bear on the question “Would it reduce AI risk if the US government enacted antitrust regulation that prevents big tech companies from buying AI startups?” Some examples of considerations are: “How much could this accelerate or slow down AI progress?”, “How much could this increase or decrease Western AI leadership relative to China?”, “How much harder or easier would this make it for the US government to enact safety-focused regulations?” “How would this affect the likelihood that a given company (e.g., Alphabet) plays a leading role in transformative AI development?” etc. Each of these considerations is also linked to various other considerations. 
For instance, the consideration about the pace of AI progress links to the higher-level consideration “How does the pace of AI progress affect the level of AI risk?” and the lower-level consideration “How does market structure affect the pace of AI progress?” That lower-level consideration can then be linked to even lower levels, like “What are the respective roles of compute-scaling and new ideas in driving AI progress?” and “Would spreading researchers out across a larger number of startups ...

Is It Safe?
Do You Know Who John Eastman Is? June 1st, 2023

Jun 2, 2023 · 89:02


What a Memorial Day weekend holiday that was! We're back now with the full crew and even though Luke is confused about whether we can hear him or not, we bring you a solid talk show. Your emails have been patiently waiting for us in the IIS inbox. We dive into them right away with the first email dealing with the cliffhanger from the last episode relating to previous guest host Travis and his comments about Luke being vocally harsh toward the so-called American liberals. Luke gives it to you straight as he always does and we hope everybody is better off because of him. Steve has a great sermon about a man known as John Eastman and why we should all be familiar with his efforts to help Donald Trump's attempted coup. Emailer Joe has a couple of doozy emails for us with one of the focal points relating to reaction videos and why they are popular. It turns out everybody on the show has had personal reactions to videos presented to one another. We're definitely not above anyone else in that arena. Just remember: Hey! You're part of it. Govier remembers a time back in 2004 when a single CD-R or DVD-R could harness a library of digitized videos all in one place. This was back before YouTube existed. It turns out nostalgia is not just an annoying centerpiece in modern pop culture, but it's also a tool used by the corporate power structure to maintain our attention so we don't stop and take a look around. We get a taste of what the musical artist Ren is all about. Another one of Joe's emails is curious about the use of Scandinavian countries as talking points in politics. Steve nails down a great point about Social Security being an example of our refusal to give up entitlements once we get them. Do not miss this part of the show! Mike's going to do a newsletter still but nobody believes him. We have a new emailer! Welcome to the party Karl! Willie Nelson did a Secrets segment on Conan O'Brien but nobody can find the clip online. Mike wants to know if Hannah Gadsby is funny or not, which leads to a Tig Notaro debate for some reason. What's more fun than looking at reviews online, and in particular one-star reviews of some of the most beautiful places in the world? Steve has experience in this realm and shares his stories. We're also being screwed by Spotify. Some AI-generated clips show has taken our exact show title on Spotify and we cannot be found on Spotify's search. Totally bogus! Please rate us 5 stars on Spotify if you can find us. We need to take these scumbags down! We love you all! Really! We are very pleased with our little world that together we have formed thanks to this show. This show has no substance to it without you listening and emailing. We close the show with Ain't With Being Broke by the Geto Boys. If any of our nonsense provokes your thoughts, please share them with us at isitsafepod@gmail.com

Daily Tech News Show
Teacher, What Do You Meme? - DTNS 4530

Daily Tech News Show

Play Episode Listen Later May 30, 2023 29:50


We check out the latest announcements from the COMPUTEX trade show in Taipei, Taiwan, including Nvidia's trillion dollar valuation. TechCrunch's Amanda Silberling highlighted a new edtech startup called Antimatter that's trying to turn this on its head by turning memes into learning tools. And could AI herald the extinction of the human race? Some AI researchers believe so. But what do we think?
Starring Rich Stroffolino, Chris Ashley, Roger Chang, Joe.
Link to the Show Notes. Become a member at https://plus.acast.com/s/dtns. Hosted on Acast. See acast.com/privacy for more information.

Solana Weekly
Solana Weekly: #23 - We're So Fking Back

Solana Weekly

Play Episode Listen Later Apr 26, 2023 18:23


What's up everyone! Welcome to Solana Weekly Episode 23. This is Thomas Bahamas and I want to thank you all for joining in on the fun. This has turned into one of my favorite parts of the week and I can't wait to dive into what's been going on in Solana. We're still attempting a space on Twitter and we'll see how it goes. I've been consuming a ton of Solana content and want to start with expressing that this is a user experience based Solana podcast where I talk about my journey and views. I can't keep up with everything in a week, I'm not on the Solana team or any team working in the space, I'm just out here having fun on the fun chain and chatting about it. But it really does seem like all the parts are setting up for a killer Solana Summer. The Mad Lads kicked off a mint and the whole space has been electric ever since. I'm liking where this is heading and let's jump into it.
Some AI of the monkeys celebrating for you:
* Solana Price Update: Sitting at $21.37, down a total of 9% on the week. This chart doesn't look the best, and earlier today we looked like we were about to send until we dropped from 23 to 21. That's about a 10% drop and it happened immediately. Hsaka Trades tweeted out that there was an alert for a US gov wallet moving funds and Jump Trading dumped everything immediately. The kicker is that it was a false alarm. So we're heading back. Is this true? I don't really know, but it would line up. They seem to be the biggest player still in Solana from what I know and they can move markets. Hate to see it and hoping for a recovery. As I said in the intro, the vibes for Solana have been crazy all week and I'm waiting for price to catch up.
* Solana vs. Ethereum: Down to .01145 with a 2.8% decrease on the week. Small decrease, whole market looked pretty dumpy until this morning really. I sold some Eth for Sol because I just see such a disparity right now in the market. Eth transactions are $50 and Solana transactions aren't even a penny. Hard to justify paying that anymore.
* Solana vs. Bitcoin: Sitting at .0007501 which is a decrease of 5%. I'm still super bullish on this chart even though we keep going down lol. I'm seeing more alignment and calls for BTC and WBTC on Solana, and a marketplace for trading wrapped ordinals on Solana. I'm 100% in on this and think that Solana would make a killer L2 for BTC.
* Mad Lads mint - a beautiful disaster. They minted in their Backpack wallet and overall it was a sick experience. That was super delayed due to ddos attacks. But to the wallet - Solana hummed on at normal speeds and handled it perfectly. Mint was unique because it was in the wallet, you minted a rug and went to the xnft to find your actual lad. So much to say about this, but it was effectively a mass advertisement for Backpack and it worked. Love Backpack now, the art for Mad Lads slaps as well and has been performing like crazy. Up over 10x from mint and doesn't look like it's slowing down. I actually swapped my pfp to a mad lad and went to Twitter jail for a week while my profile is under review. This is funny because I actually swapped that nft for another sicker one, so I'm currently pulling a milady tactic where I don't own my pfp.
* Tensor - absolutely crushing it. They are doing no fees for the lads and have officially flipped Magic Eden. It's insane. Their product is so freaking good though. Everything about their platform is superior to Magic Eden and they deserve this.
The volume is also insane, Mad Lads was the most traded nft project of ALL NFT's, yes that includes eth NFT's too. It's big and it's the example of a great product at the right time. Magic Eden seems to be clapping back by hiring another intern that shitposts, I like it and missed the war bucks s**t posting. But they haven't been focusing on Sol and can't keep up at this point.
* Solana phone! I got mine yesterday and unboxed it super hard. It's sick. I hate Android and have to relearn everything, but it's worth it for this phone. It's a punk rock feel of a phone where it just seems like a lot of love and tinkering went into it. Drawbacks: it's long, skinny, and heavy, but I just need to get used to it and turn off all these damn notifications.
* The fun chain thesis: Solana is the fun chain because of everything you can do on it right now. It's built for users, you can do a bunch of cool s**t on chain, and they are building out a ton of cool new things and use cases. Historically blockchains have been primarily built for contrarians and doomers in Bitcoin, and an extremely complicated and intertwining web of complex ideas going nowhere in Ethereum. Don't believe me? Listen to a Bitcoin space and you'll hear how modern finance is doomed. Listen to an episode of Bankless and you'll hear about ultrasound money that costs $50 for a transaction, but burns some of the supply and causes the supply to be deflationary, making it a big bonus. Solana is simple: it's one layer that can process transactions simultaneously across the world incredibly fast and cheap. I want to hate on Bitcoin for being boomers and doomers, but in reality they have established themselves on a global level. They've fought the fight and earned a spot at the table. I don't really know where Solana will fit in if you look at it through that lens. Maybe it doesn't have to, but I am really liking the idea of Solana hosting more Bitcoin on chain, and having something like a wrapped Ordinals market for trading assets that are on Bitcoin. There's something there, I'm starting to step away from the daily spaces and all the mints and try to look at the bigger picture. I'm just getting more and more convinced that a blockchain that works and scales as it can will keep on crushing and bring us that Solana Summer! Thanks all and I'll catch you next week. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit thomasbahamas.substack.com

TechReview - The Podcast
58: Copyright Strike - Fake Eminem Cat Rap

TechReview - The Podcast

Play Episode Listen Later Apr 19, 2023 30:48


From DMCA takedown notices to transformative AI-generated art, the copyright debate continues to rage on. Find out how Universal Music Publishing Group's actions against Youtuber Grandayy could impact the future of AI-generated content. Also, discover how ByteDance's revenue surge in 2022, thanks to TikTok and Douyin, could boost investor confidence in the Chinese social media giant, and learn about the sustainable and innovative features of the Stilride 1 electric motorcycle, inspired by the Japanese art of origami.
00:00 - Intro
01:20 - 1: Record Label Wipes AI-Generated Eminem Rapping About Cats From the Internet
12:51 - 2: Sales of TikTok owner ByteDance up over 30 per cent in 2022 to reach US$80 billion, matching Tencent's revenue
19:43 - 3: Stilride: Swedish startup develops an origami-inspired electric motorcycle
Summary:
Universal Music Publishing Group issued a DMCA takedown notice against Youtuber Grandayy for using an AI-generated version of Eminem's voice to sing a ChatGPT-generated song about cats, claiming that the video infringes on its copyright. Copyright laws allow for parody works as long as they're transformative. Some AI researchers believe that copyright law is the way forward in the debate between AI-generated art enthusiasts and artists.
ByteDance's revenue surged over 30% in 2022 to surpass $80 billion, matching arch-rival Tencent's tally, thanks to the popularity of TikTok and Douyin. ByteDance's growth outpaced most global internet leaders, including Meta and Amazon. Despite Washington's threat to ban TikTok and a growing number of government agencies across the world wiping the app from official phones due to security concerns, ByteDance's resilience is attributed to the twin video platforms siphoning ad dollars from other social media platforms. The growth could boost investor confidence in the Chinese social media giant.
Swedish startup Stilride has unveiled an electric motorcycle inspired by the Japanese art of origami. The Stilride 1 is made from a single piece of stainless steel that is folded into shape using a technique known as industrial origami, resulting in minimal emissions. The motorcycle is also designed to be sustainable during production, with a minimal number of parts and locally sourced materials. The Stilfold technology used in production combines intelligent design and engineering in a digital value chain, reducing material and labor costs while increasing efficiency. The motorcycle has a range of 120 kilometers, a charging time of four hours, and various features accessible through an app, including theft protection, GPS, and battery status. The motorcycle is set to be available for purchase in Europe in 2024, with a starting price of €15,000.
Our panel today:
>> Vincent
>> Tarek
>> Henrike
Every week our panel of technology enthusiasts meets to discuss the most important news from the fields of technology, innovation, and science. And you can join us live!
https://techreview.axelspringer.com/
https://www.ideas-engineering.io/
https://www.freetech.academy/
https://www.upday.com/

Pebkac Podcast
330 - RESTRICT

Pebkac Podcast

Play Episode Listen Later Mar 31, 2023 60:37


Some AI news, potentially some new stupid laws, E3 news, and more!

1A
Know It All: Where AI Helps And Hurts In Health Care

1A

Play Episode Listen Later Feb 22, 2023 34:48


AI is being used for all kinds of tasks in health care — whether it's administrative ones like taking notes, parsing through patient data, or providing some extra help with reading images. Some AI platforms like Bayesian Health are helping filter through loads of data that get put into a health system. And some clinicians are testing out what AI can and can't do quite yet, like a team at Emory University who found out an AI system could detect a patient's self-reported race based on a chest scan. For this episode of "Know It All: 1A and WIRED's Guide to A.I.", we're exploring what AI in health care looks like today and its potential.
Want to support 1A? Give to your local public radio station and subscribe to this podcast. Have questions? Find us on Twitter @1A.

The Nonlinear Library
LW - So, geez there's a lot of AI content these days by Raemon

The Nonlinear Library

Play Episode Listen Later Oct 6, 2022 10:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: So, geez there's a lot of AI content these days, published by Raemon on October 6, 2022 on LessWrong. Since April this year, there's been a huge growth in the number of posts about AI, while posts about rationality, world modeling, etc. have remained constant. The result is that much of the time, the LW frontpage is almost entirely AI content. Looking at the actual numbers, we can see that during 2021, no core LessWrong tags represented more than 30% of LessWrong posts. In 2022, especially starting around April, AI has started massively dominating the LW posts. Here's the total posts for each core tag each month for the past couple years. In April 2022, most tags' popularity remains constant, but AI-tagged posts spike dramatically: Even people pretty involved with AI alignment research have written to say "um, something about this feels kinda bad to me." I'm curious to hear what various LW users think about the situation. Meanwhile, here's my own thoughts. Is this bad? Maybe this is fine. My sense of what happened was that in April, Eliezer posted MIRI announces new "Death With Dignity" strategy, and a little while later AGI Ruin: A List of Lethalities. At the same time, PaLM and DALL-E 2 came out. My impression is that this threw a brick through the overton window and got a lot of people going "holy christ AGI ruin is real and scary". Everyone started thinking a lot about it, and writing up their thoughts as they oriented. Around the same time, a lot of alignment research recruitment projects (such as SERI MATS or Refine) started paying dividends, resulting in a new wave of people working fulltime on AGI safety. Maybe it's just fine to have a ton of people working on the most important problem in the world? Maybe. But it felt worrisome to Ruby and me. Some of those worries felt easier to articulate, others harder. Two major sources of concern:
- There's some kind of illegible good thing that happens when you have a scene exploring a lot of different topics. It's historically been the case that LessWrong was a (relatively) diverse group of thinkers thinking about a (relatively) diverse group of things. If people show up and just see the All AI All the Time, people who might have other things to contribute may bounce off. We probably wouldn't lose this immediately.
- AI needs Rationality, in particular. Maybe AI is the only thing that matters. But, the whole reason I think we have a comparative advantage at AI Alignment is our culture of rationality. A lot of AI discourse on the internet is really confused. There's such an inferential gulf about what sort of questions are even worth asking. Many AI topics deal with gnarly philosophical problems, while mainstream academia is still debating whether the world is naturalistic. Some AI topics require thinking clearly about political questions that tend to make people go funny in the head. Rationality is for problems we don't know how to solve, and AI is still a domain we don't collectively know how to solve. Not everyone agrees that rationality is key, here (I know one prominent AI researcher who disagreed). But it's my current epistemic state.
Whispering "Rationality" in your ear
Paul Graham says that different cities whisper different ambitions in your ear. New York whispers "be rich". Silicon Valley whispers "be powerful." Berkeley whispers "live well." 
Boston whispers "be educated." It seems important for LessWrong to whisper "be rational" in your ear, and to give you lots of reading, exercises, and support to help you make it so. As a sort of "emergency injection of rationality", we asked Duncan to convert the CFAR handbook from a PDF into a more polished sequence, and post it over the course of a month. But commissioning individual posts is fairly expensive, and over the past couple months the LessWrong team's foc...

The Nonlinear Library: LessWrong
LW - So, geez there's a lot of AI content these days by Raemon

The Nonlinear Library: LessWrong

Play Episode Listen Later Oct 6, 2022 10:03


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: So, geez there's a lot of AI content these days, published by Raemon on October 6, 2022 on LessWrong. Since April this year, there's been a huge growth in the number of posts about AI, while posts about rationality, world modeling, etc. have remained constant. The result is that much of the time, the LW frontpage is almost entirely AI content. Looking at the actual numbers, we can see that during 2021, no core LessWrong tags represented more than 30% of LessWrong posts. In 2022, especially starting around April, AI has started massively dominating the LW posts. Here's the total posts for each core tag each month for the past couple years. In April 2022, most tags' popularity remains constant, but AI-tagged posts spike dramatically: Even people pretty involved with AI alignment research have written to say "um, something about this feels kinda bad to me." I'm curious to hear what various LW users think about the situation. Meanwhile, here's my own thoughts. Is this bad? Maybe this is fine. My sense of what happened was that in April, Eliezer posted MIRI announces new "Death With Dignity" strategy, and a little while later AGI Ruin: A List of Lethalities. At the same time, PaLM and DALL-E 2 came out. My impression is that this threw a brick through the overton window and got a lot of people going "holy christ AGI ruin is real and scary". Everyone started thinking a lot about it, and writing up their thoughts as they oriented. Around the same time, a lot of alignment research recruitment projects (such as SERI MATS or Refine) started paying dividends, resulting in a new wave of people working fulltime on AGI safety. Maybe it's just fine to have a ton of people working on the most important problem in the world? Maybe. But it felt worrisome to Ruby and me. Some of those worries felt easier to articulate, others harder. Two major sources of concern:
- There's some kind of illegible good thing that happens when you have a scene exploring a lot of different topics. It's historically been the case that LessWrong was a (relatively) diverse group of thinkers thinking about a (relatively) diverse group of things. If people show up and just see the All AI All the Time, people who might have other things to contribute may bounce off. We probably wouldn't lose this immediately.
- AI needs Rationality, in particular. Maybe AI is the only thing that matters. But, the whole reason I think we have a comparative advantage at AI Alignment is our culture of rationality. A lot of AI discourse on the internet is really confused. There's such an inferential gulf about what sort of questions are even worth asking. Many AI topics deal with gnarly philosophical problems, while mainstream academia is still debating whether the world is naturalistic. Some AI topics require thinking clearly about political questions that tend to make people go funny in the head. Rationality is for problems we don't know how to solve, and AI is still a domain we don't collectively know how to solve. Not everyone agrees that rationality is key, here (I know one prominent AI researcher who disagreed). But it's my current epistemic state.
Whispering "Rationality" in your ear
Paul Graham says that different cities whisper different ambitions in your ear. New York whispers "be rich". Silicon Valley whispers "be powerful." Berkeley whispers "live well." 
Boston whispers "be educated." It seems important for LessWrong to whisper "be rational" in your ear, and to give you lots of reading, exercises, and support to help you make it so. As a sort of "emergency injection of rationality", we asked Duncan to convert the CFAR handbook from a PDF into a more polished sequence, and post it over the course of a month. But commissioning individual posts is fairly expensive, and over the past couple months the LessWrong team's foc...

The Nonlinear Library
EA - Longtermists Should Work on AI - There is No "AI Neutral" Scenario by simeon c

The Nonlinear Library

Play Episode Listen Later Aug 8, 2022 10:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermists Should Work on AI - There is No "AI Neutral" Scenario, published by simeon c on August 7, 2022 on The Effective Altruism Forum. Summary: If you're a longtermist (i.e. you believe that most of the moral value lies in the future), and you want to prioritize impact in your career choice, you should strongly consider either working on AI directly, or working on things that will positively influence the development of AI. Epistemic Status: The claim is strong but I'm fairly confident (>75%) about it. I've spent 3 months working as a SERI fellow thinking about whether bio risks could kill humanity (including info hazardy stuff) and how the risk profile compared with the AI safety one, which I think is the biggest crux of this post. I've spent at least a year thinking about advanced AIs and their implications on everything, including much of today's decision-making. I've reoriented my career towards AI based on these thoughts.
The Case for Working on AI
If you care a lot about the very far future, you probably want two things to happen: first, you want to ensure that humanity survives at all; second, you want to increase the growth rate of good things that matter to humanity - for example, wealth, happiness, knowledge, or anything else that we value. If we increase the growth rate earlier and by more, this will have massive ripple effects on the very longterm future. A minor increase in the growth rate now means a huge difference later. Consider the spread of covid - minor differences in the R-number had huge effects on how fast the virus could spread and how many people eventually caught it. So if you are a longtermist, you should want to increase the growth rate of whatever you care about as early as possible, and as much as possible. For example, if you think that every additional happy life in the universe is good, then you should want the number of happy humans in the universe to grow as fast as possible. AGI is likely to be able to help with this, since it could create a state of abundance and enable humanity to quickly spread across the universe through much faster technological progress. AI is directly relevant to both longterm survival and longterm growth. When we create a superintelligence, there are three possibilities. Either:
- The superintelligence is misaligned and it kills us all
- The superintelligence is misaligned with our own objectives but is benign
- The superintelligence is aligned, and therefore can help us increase the growth rate of whatever we care about.
Longtermists should, of course, be eager to prevent the development of a destructive misaligned superintelligence. But they should also be strongly motivated to bring about the development of an aligned, benevolent superintelligence, because increasing the growth rate of whatever we value (knowledge, wealth, resources...) will have huge effects into the longterm future. Some AI researchers focus more on the ‘carrot' of aligned benevolent AI, others on the ‘stick' of existential risk. But the point is, AI will likely either be extremely good or extremely bad - it's difficult to be AI-neutral. I want to emphasize that my argument only applies to people who want to strongly prioritize impact. It's fine for longtermists to choose not to work on AI for personal reasons. 
Most people value things other than impact, and big career transitions can be extremely costly. I just think that if longtermists really want to prioritize impact above everything else, then AI-related work is the best thing for (most of) them to do; and if they want to work on other things for personal reasons, they shouldn't be tempted by motivated reasoning to believe that they are working on the most impactful thing. Objections Here are some reasons why you might be unconvinced by this argument, along with reasons why I find th...
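The post's core claim that "a minor increase in the growth rate now means a huge difference later" is just compounding. A quick illustrative calculation, with arbitrary numbers that are not from the post:

```python
# Illustrative only: how small differences in a growth rate compound over time.
# The rates and horizon below are arbitrary examples, not figures from the post.

base_rate = 0.02      # 2% annual growth in whatever we value
boosted_rate = 0.021  # 2.1%, a "minor" increase
years = 500

base_total = (1 + base_rate) ** years
boosted_total = (1 + boosted_rate) ** years

print(f"Growth factor at 2.0%: {base_total:,.0f}x")
print(f"Growth factor at 2.1%: {boosted_total:,.0f}x")
print(f"Ratio: {boosted_total / base_total:.1f}x more")
```

Over 500 years, the 0.1 percentage point difference multiplies the end result by roughly 1.6x, which is the compounding intuition behind the R-number analogy above.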

The Nonlinear Library: LessWrong Top Posts
Some AI research areas and their relevance to existential safety by Andrew_Critch

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 85:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some AI research areas and their relevance to existential safety, published by Andrew_Critch on the LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Followed by: What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs), which provides examples of multi-stakeholder/multi-agent interactions leading to extinction events. Introduction This post is an overview of a variety of AI research areas in terms of how much I think contributing to and/or learning from those areas might help reduce AI x-risk. By research areas I mean “AI research topics that already have groups of people working on them and writing up their results”, as opposed to research “directions” in which I'd like to see these areas “move”. I formed these views mostly pursuant to writing AI Research Considerations for Human Existential Safety (ARCHES). My hope is that my assessments in this post can be helpful to students and established AI researchers who are thinking about shifting into new research areas specifically with the goal of contributing to existential safety somehow. In these assessments, I find it important to distinguish between the following types of value: The helpfulness of the area to existential safety, which I think of as a function of what services are likely to be provided as a result of research contributions to the area, and whether those services will be helpful to existential safety, versus The educational value of the area for thinking about existential safety, which I think of as a function of how much a researcher motivated by existential safety might become more effective through the process of familiarizing with or contributing to that area, usually by focusing on ways the area could be used in service of existential safety. The neglect of the area at various times, which is a function of how much technical progress has been made in the area relative to how much I think is needed. Importantly: The helpfulness to existential safety scores do not assume that your contributions to this area would be used only for projects with existential safety as their mission. This can negatively impact the helpfulness of contributing to areas that are more likely to be used in ways that harm existential safety. The educational value scores are not about the value of an existential-safety-motivated researcher teaching about the topic, but rather, learning about the topic. The neglect scores are not measuring whether there is enough “buzz” around the topic, but rather, whether there has been adequate technical progress in it. Buzz can predict future technical progress, though, by causing people to work on it. Below is a table of all the areas I considered for this post, along with their entirely subjective “scores” I've given them. 
The rest of this post can be viewed simply as an elaboration/explanation of this table:
Existing Research Area | Social Application | Helpfulness to Existential Safety | Educational Value | 2015 Neglect | 2020 Neglect | 2030 Neglect
Out of Distribution Robustness | Zero/Single | 1/10 | 4/10 | 5/10 | 3/10 | 1/10
Agent Foundations | Zero/Single | 3/10 | 8/10 | 9/10 | 8/10 | 7/10
Multi-agent RL | Zero/Multi | 2/10 | 6/10 | 5/10 | 4/10 | 0/10
Preference Learning | Single/Single | 1/10 | 4/10 | 5/10 | 1/10 | 0/10
Side-effect Minimization | Single/Single | 4/10 | 4/10 | 6/10 | 5/10 | 4/10
Human-Robot Interaction | Single/Single | 6/10 | 7/10 | 5/10 | 4/10 | 3/10
Interpretability in ML | Single/Single | 8/10 | 6/10 | 8/10 | 6/10 | 2/10
Fairness in ML | Multi/Single | 6/10 | 5/10 | 7/10 | 3/10 | 2/10
Computational Social Choice | Multi/Single | 7/10 | 7/10 | 7/10 | 5/10 | 4/10
Accountability in ML | Multi/Multi | 8/10 | 3/10 | 8/10 | 7/10 | 5/10
The research areas are ordered from least-socially-complex to most-socially-complex. This roughly (though imperfectly) correlates with addressing existential safety problems of increa...
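For readers who want to play with the scores, here is a small sketch that loads a few rows of the table above and sorts them by helpfulness. The numbers are copied from the table, but the code itself is only an illustration, not anything from the original post:

```python
# Illustrative sketch: a few rows from the table above, sorted by helpfulness.
# Each entry stores (helpfulness, educational value) and 2015/2020/2030 neglect, all out of 10.

areas = {
    "Out of Distribution Robustness": {"helpfulness": 1, "educational": 4, "neglect": (5, 3, 1)},
    "Agent Foundations":              {"helpfulness": 3, "educational": 8, "neglect": (9, 8, 7)},
    "Interpretability in ML":         {"helpfulness": 8, "educational": 6, "neglect": (8, 6, 2)},
    "Computational Social Choice":    {"helpfulness": 7, "educational": 7, "neglect": (7, 5, 4)},
    "Accountability in ML":           {"helpfulness": 8, "educational": 3, "neglect": (8, 7, 5)},
}

for name, scores in sorted(areas.items(), key=lambda kv: kv[1]["helpfulness"], reverse=True):
    print(f"{name:32s} helpfulness={scores['helpfulness']}/10  neglect(2020)={scores['neglect'][1]}/10")
```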

The Nonlinear Library: LessWrong Top Posts
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)Ω by Andrew_Critch

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 38:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)Ω, published by Andrew_Critch on the LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. With: Thomas Krendl Gilbert, who provided comments, interdisciplinary feedback, and input on the RAAP concept. Thanks also for comments from Ramana Kumar. Target audience: researchers and institutions who think about existential risk from artificial intelligence, especially AI researchers. Preceded by: Some AI research areas and their relevance to existential safety, which emphasized the value of thinking about multi-stakeholder/multi-agent social applications, but without concrete extinction scenarios. This post tells a few different stories in which humanity dies out as a result of AI technology, but where no single source of human or automated agency is the cause. Scenarios with multiple AI-enabled superpowers are often called “multipolar” scenarios in AI futurology jargon, as opposed to “unipolar” scenarios with just one superpower.
 | Unipolar take-offs | Multipolar take-offs
Slow take-offs | | Part 1 of this post
Fast take-offs | | Part 2 of this post
Part 1 covers a batch of stories that play out slowly (“slow take-offs”), and Part 2 stories play out quickly. However, in the end I don't want you to be super focused on how fast the technology is taking off. Instead, I'd like you to focus on multi-agent processes with a robust tendency to play out irrespective of which agents execute which steps in the process. I'll call such processes Robust Agent-Agnostic Processes (RAAPs). A group walking toward a restaurant is a nice example of a RAAP, because it exhibits:
- Robustness: If you temporarily distract one of the walkers to wander off, the rest of the group will keep heading toward the restaurant, and the distracted member will take steps to rejoin the group.
- Agent-agnosticism: Who's at the front or back of the group might vary considerably during the walk. People at the front will tend to take more responsibility for knowing and choosing what path to take, and people at the back will tend to just follow. Thus, the execution of roles (“leader”, “follower”) is somewhat agnostic as to which agents execute them.
Interestingly, if all you want to do is get one person in the group not to go to the restaurant, sometimes it's actually easier to achieve that by convincing the entire group not to go there than by convincing just that one person. This example could be extended to lots of situations in which agents have settled on a fragile consensus for action, in which it is strategically easier to motivate a new interpretation of the prior consensus than to pressure one agent to deviate from it. I think a similar fact may be true about some agent-agnostic processes leading to AI x-risk, in that agent-specific interventions (e.g., aligning or shutting down this or that AI system or company) will not be enough to avert the process, and might even be harder than trying to shift the structure of society as a whole. Moreover, I believe this is true in both “slow take-off” and “fast take-off” AI development scenarios. This is because RAAPs can arise irrespective of the speed of the underlying “host” agents. RAAPs are made more or less likely to arise based on the “structure” of a given interaction. 
As such, the problem of avoiding the emergence of unsafe RAAPs, or ensuring the emergence of safe ones, is a problem of mechanism design (wiki/Mechanism_design). I recently learned that in sociology, the concept of a field (martin2003field, fligsteinmcadam2012fields) is roughly defined as a social space or arena in which the motivation and behavior of agents are explained through reference to surrounding processes or “structure” rather than freedom or chance. ...
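The walking-group example can be made concrete with a toy simulation: the "leader" role belongs to whoever happens to be in front, and distracting any single walker doesn't stop the group from reaching the restaurant. This is only a loose illustration of the robustness and agent-agnosticism properties described above, not code from the post:

```python
# Toy illustration of a robust, agent-agnostic process (RAAP): a group walks
# toward a destination, the "leader" role is simply whoever is in front, and
# distracting one walker does not stop the overall process.

DESTINATION = 10.0

def step(positions):
    """Everyone moves toward the current front-most walker (the de facto leader)."""
    leader = max(positions, key=positions.get)   # the role depends on position, not identity
    for name in positions:
        target = DESTINATION if name == leader else positions[leader]
        positions[name] += min(1.0, max(0.0, target - positions[name]))
    return leader

walkers = {"alice": 0.0, "bob": 0.5, "carol": 0.2}
for t in range(25):
    if t == 3:
        walkers["bob"] -= 2.0                    # distract one walker mid-walk
    last_leader = step(walkers)
    if all(p >= DESTINATION for p in walkers.values()):
        print(f"everyone arrived at t={t}; the leader role ended with {last_leader}")
        break
```

The group still arrives even after the distraction, and which walker happens to hold the leader role along the way is incidental, which is the agent-agnostic part of the definition.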

European Parliament - EPRS Policy podcasts
Artificial intelligence act

European Parliament - EPRS Policy podcasts

Play Episode Listen Later Dec 10, 2021 9:01


The European Commission proposes to establish a technology-neutral definition of AI systems in EU law and to lay down a classification for AI systems with different requirements and obligations tailored on a 'risk-based approach'. Some AI systems presenting 'unacceptable' risks would be prohibited. A wide range of 'high-risk' AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market. Those AI systems presenting only 'low or minimal risk' would be subject to very light transparency obligations. In this podcast, we'll talk about the EU artificial intelligence act, the first ever comprehensive attempt at regulating the uses and risks of this emerging technology.
- Original publication on the EP Think Tank website
- Subscription to our RSS feed in case you have your own RSS reader
- Podcast available on Deezer, iTunes, TuneIn, Stitcher, YouTube
Source: © European Union - EP
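The 'risk-based approach' described here boils down to a mapping from risk tier to obligations. A simplified sketch of that structure, with the obligations paraphrased from the description above rather than quoted from the legal text:

```python
# Simplified sketch of the proposed risk-based structure of the EU AI Act.
# Tier names follow the podcast description; obligations are paraphrases, not legal text.

OBLIGATIONS = {
    "unacceptable": "prohibited: the system may not be placed on the EU market",
    "high": "authorised, but subject to requirements and obligations to gain market access",
    "low_or_minimal": "allowed, subject only to very light transparency obligations",
}

def obligations_for(risk_tier: str) -> str:
    """Return the (paraphrased) obligations attached to a given risk tier."""
    try:
        return OBLIGATIONS[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")

print(obligations_for("high"))
```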

The Nonlinear Library: Alignment Forum Top Posts
Some AI research areas and their relevance to existential safety by Andrew Critch

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 86:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some AI research areas and their relevance to existential safety, published by Andrew Critch on the AI Alignment Forum. Followed by: What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs), which provides examples of multi-stakeholder/multi-agent interactions leading to extinction events. Introduction This post is an overview of a variety of AI research areas in terms of how much I think contributing to and/or learning from those areas might help reduce AI x-risk. By research areas I mean “AI research topics that already have groups of people working on them and writing up their results”, as opposed to research “directions” in which I'd like to see these areas “move”. I formed these views mostly pursuant to writing AI Research Considerations for Human Existential Safety (ARCHES). My hope is that my assessments in this post can be helpful to students and established AI researchers who are thinking about shifting into new research areas specifically with the goal of contributing to existential safety somehow. In these assessments, I find it important to distinguish between the following types of value: The helpfulness of the area to existential safety, which I think of as a function of what services are likely to be provided as a result of research contributions to the area, and whether those services will be helpful to existential safety, versus The educational value of the area for thinking about existential safety, which I think of as a function of how much a researcher motivated by existential safety might become more effective through the process of familiarizing with or contributing to that area, usually by focusing on ways the area could be used in service of existential safety. The neglect of the area at various times, which is a function of how much technical progress has been made in the area relative to how much I think is needed. Importantly: The helpfulness to existential safety scores do not assume that your contributions to this area would be used only for projects with existential safety as their mission. This can negatively impact the helpfulness of contributing to areas that are more likely to be used in ways that harm existential safety. The educational value scores are not about the value of an existential-safety-motivated researcher teaching about the topic, but rather, learning about the topic. The neglect scores are not measuring whether there is enough “buzz” around the topic, but rather, whether there has been adequate technical progress in it. Buzz can predict future technical progress, though, by causing people to work on it. Below is a table of all the areas I considered for this post, along with their entirely subjective “scores” I've given them. 
The rest of this post can be viewed simply as an elaboration/explanation of this table:
Existing Research Area | Social Application | Helpfulness to Existential Safety | Educational Value | 2015 Neglect | 2020 Neglect | 2030 Neglect
Out of Distribution Robustness | Zero/Single | 1/10 | 4/10 | 5/10 | 3/10 | 1/10
Agent Foundations | Zero/Single | 3/10 | 8/10 | 9/10 | 8/10 | 7/10
Multi-agent RL | Zero/Multi | 2/10 | 6/10 | 5/10 | 4/10 | 0/10
Preference Learning | Single/Single | 1/10 | 4/10 | 5/10 | 1/10 | 0/10
Side-effect Minimization | Single/Single | 4/10 | 4/10 | 6/10 | 5/10 | 4/10
Human-Robot Interaction | Single/Single | 6/10 | 7/10 | 5/10 | 4/10 | 3/10
Interpretability in ML | Single/Single | 8/10 | 6/10 | 8/10 | 6/10 | 2/10
Fairness in ML | Multi/Single | 6/10 | 5/10 | 7/10 | 3/10 | 2/10
Computational Social Choice | Multi/Single | 7/10 | 7/10 | 7/10 | 5/10 | 4/10
Accountability in ML | Multi/Multi | 8/10 | 3/10 | 8/10 | 7/10 | 5/10
The research areas are ordered from least-socially-complex to most-socially-complex. This roughly (though imperfectly) correlates with addressing existential safety problems of increasing importance and neglect, according to me. Correspondingly, the second colu...

The Nonlinear Library: Alignment Forum Top Posts
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) by Andrew Critch

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 38:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs), published by Andrew Critch on the AI Alignment Forum. With: Thomas Krendl Gilbert, who provided comments, interdisciplinary feedback, and input on the RAAP concept. Thanks also for comments from Ramana Kumar. Target audience: researchers and institutions who think about existential risk from artificial intelligence, especially AI researchers. Preceded by: Some AI research areas and their relevance to existential safety, which emphasized the value of thinking about multi-stakeholder/multi-agent social applications, but without concrete extinction scenarios. This post tells a few different stories in which humanity dies out as a result of AI technology, but where no single source of human or automated agency is the cause. Scenarios with multiple AI-enabled superpowers are often called “multipolar” scenarios in AI futurology jargon, as opposed to “unipolar” scenarios with just one superpower.
 | Unipolar take-offs | Multipolar take-offs
Slow take-offs | | Part 1 of this post
Fast take-offs | | Part 2 of this post
Part 1 covers a batch of stories that play out slowly (“slow take-offs”), and Part 2 stories play out quickly. However, in the end I don't want you to be super focused on how fast the technology is taking off. Instead, I'd like you to focus on multi-agent processes with a robust tendency to play out irrespective of which agents execute which steps in the process. I'll call such processes Robust Agent-Agnostic Processes (RAAPs). A group walking toward a restaurant is a nice example of a RAAP, because it exhibits:
- Robustness: If you temporarily distract one of the walkers to wander off, the rest of the group will keep heading toward the restaurant, and the distracted member will take steps to rejoin the group.
- Agent-agnosticism: Who's at the front or back of the group might vary considerably during the walk. People at the front will tend to take more responsibility for knowing and choosing what path to take, and people at the back will tend to just follow. Thus, the execution of roles (“leader”, “follower”) is somewhat agnostic as to which agents execute them.
Interestingly, if all you want to do is get one person in the group not to go to the restaurant, sometimes it's actually easier to achieve that by convincing the entire group not to go there than by convincing just that one person. This example could be extended to lots of situations in which agents have settled on a fragile consensus for action, in which it is strategically easier to motivate a new interpretation of the prior consensus than to pressure one agent to deviate from it. I think a similar fact may be true about some agent-agnostic processes leading to AI x-risk, in that agent-specific interventions (e.g., aligning or shutting down this or that AI system or company) will not be enough to avert the process, and might even be harder than trying to shift the structure of society as a whole. Moreover, I believe this is true in both “slow take-off” and “fast take-off” AI development scenarios. This is because RAAPs can arise irrespective of the speed of the underlying “host” agents. RAAPs are made more or less likely to arise based on the “structure” of a given interaction. 
As such, the problem of avoiding the emergence of unsafe RAAPs, or ensuring the emergence of safe ones, is a problem of mechanism design (wiki/Mechanism_design). I recently learned that in sociology, the concept of a field (martin2003field, fligsteinmcadam2012fields) is roughly defined as a social space or arena in which the motivation and behavior of agents are explained through reference to surrounding processes or “structure” rather than freedom or chance. In my parlance, mechanisms cause fields, and fields cause RAAPs. Meta / prefac...

The Nonlinear Library: Alignment Forum Top Posts
Non-Obstruction: A Simple Concept Motivating Corrigibility by Alex Turner

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Nov 30, 2021 49:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Non-Obstruction: A Simple Concept Motivating Corrigibility, published by Alex Turner on the AI Alignment Forum. Thanks to Mathias Bonde, Tiffany Cai, Ryan Carey, Michael Cohen, Joe Collman, Andrew Critch, Abram Demski, Michael Dennis, Thomas Gilbert, Matthew Graves, Koen Holtman, Evan Hubinger, Victoria Krakovna, Amanda Ngo, Rohin Shah, Adam Shimi, Logan Smith, and Mark Xu for their thoughts. Main claim: corrigibility's benefits can be mathematically represented as a counterfactual form of alignment. Overview: I'm going to talk about a unified mathematical frame I have for understanding corrigibility's benefits, what it “is”, and what it isn't. This frame is precisely understood by graphing the human overseer's ability to achieve various goals (their attainable utility (AU) landscape). I argue that corrigibility's benefits are secretly a form of counterfactual alignment (alignment with a set of goals the human may want to pursue). A counterfactually aligned agent doesn't have to let us literally correct it. Rather, this frame theoretically motivates why we might want corrigibility anyways. This frame also motivates other AI alignment subproblems, such as intent alignment, mild optimization, and low impact. Nomenclature Corrigibility goes by a lot of concepts: “not incentivized to stop us from shutting it off”, “wants to account for its own flaws”, “doesn't take away much power from us”, etc. Named by Robert Miles, the word ‘corrigibility' means “able to be corrected [by humans]." I'm going to argue that these are correlates of a key thing we plausibly actually want from the agent design, which seems conceptually simple. In this post, I take the following common-language definitions: Corrigibility: the AI literally lets us correct it (modify its policy), and it doesn't manipulate us either. Without both of these conditions, the AI's behavior isn't sufficiently constrained for the concept to be useful. Being able to correct it is small comfort if it manipulates us into making the modifications it wants. An AI which is only non-manipulative doesn't have to give us the chance to correct it or shut it down. Impact alignment: the AI's actual impact is aligned with what we want. Deploying the AI actually makes good things happen. Intent alignment: the AI makes an honest effort to figure out what we want and to make good things happen. I think that these definitions follow what their words mean, and that the alignment community should use these (or other clear groundings) in general. Two of the more important concepts in the field (alignment and corrigibility) shouldn't have ambiguous and varied meanings. If the above definitions are unsatisfactory, I think we should settle upon better ones as soon as possible. If that would be premature due to confusion about the alignment problem, we should define as much as we can now and explicitly note what we're still confused about. We certainly shouldn't keep using 2+ definitions for both alignment and corrigibility. Some people have even stopped using ‘corrigibility' to refer to corrigibility! I think it would be better for us to define the behavioral criterion (e.g. as I defined 'corrigibility'), and then define mechanistic ways of getting that criterion (e.g. intent corrigibility). We can have lots of concepts, but they should each have different names. 
Evan Hubinger recently wrote a great FAQ on inner alignment terminology. We won't be talking about inner/outer alignment today, but I intend for my usage of "impact alignment" to roughly map onto his "alignment", and "intent alignment" to map onto his usage of "intent alignment." Similarly, my usage of "impact/intent alignment" directly aligns with the definitions from Andrew Critch's recent post, Some AI research areas and their relevance to existential safety. A Simple Concept Mo...
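One way to write down the "counterfactual alignment" idea sketched here, as a rough paraphrase rather than the post's exact definition: the AI is non-obstructive with respect to a set S of goals the human might want to pursue if switching it on never lowers the human's attainable utility for any of those goals.

```latex
% Rough paraphrase (not the post's exact notation): for every goal P in a set S
% of goals the human might want to pursue, turning the AI on should not reduce
% the human's attainable utility for P.
\forall P \in S: \quad
V^{\text{human}}_{P}(\text{AI turned on}) \;\geq\; V^{\text{human}}_{P}(\text{AI turned off})
```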

Singularity Hub Daily
Drugs, Robots, and the Pursuit of Pleasure: Why Experts Are Worried About AIs Becoming Addicts

Singularity Hub Daily

Play Episode Listen Later Sep 17, 2021 24:37


In 1953, a Harvard psychologist thought he discovered pleasure—accidentally—within the cranium of a rat. With an electrode inserted into a specific area of its brain, the rat was allowed to pulse the implant by pulling a lever. It kept returning for more: insatiably, incessantly, lever-pulling. In fact, the rat didn't seem to want to do anything else. Seemingly, the reward center of the brain had been located. More than 60 years later, in 2016, a pair of artificial intelligence (AI) researchers were training an AI to play video games. The goal of one game, Coastrunner, was to complete a racetrack. But the AI player was rewarded for picking up collectable items along the track. When the program was run, they witnessed something strange. The AI found a way to skid in an unending circle, picking up an unlimited cycle of collectibles. It did this, incessantly, instead of completing the course. What links these seemingly unconnected events is something strangely akin to addiction in humans. Some AI researchers call the phenomenon “wireheading.” It is quickly becoming a hot topic among machine learning experts and those concerned with AI safety. One of us (Anders) has a background in computational neuroscience, and now works with groups such as the AI Objectives Institute, where we discuss how to avoid such problems with AI; the other (Thomas) studies history, and the various ways people have thought about both the future and the fate of civilization throughout the past. After striking up a conversation on the topic of wireheading, we both realized just how rich and interesting the history behind this topic is. It is an idea that is very of the moment, but its roots go surprisingly deep. We are currently working together to research just how deep the roots go: a story that we hope to tell fully in a forthcoming book. The topic connects everything from the riddle of personal motivation, to the pitfalls of increasingly addictive social media, to the conundrum of hedonism and whether a life of stupefied bliss may be preferable to one of meaningful hardship. It may well influence the future of civilization itself. Here, we outline an introduction to this fascinating but under-appreciated topic, exploring how people first started thinking about it. The Sorcerer's Apprentice When people think about how AI might “go wrong,” most probably picture something along the lines of malevolent computers trying to cause harm. After all, we tend to anthropomorphize—think that nonhuman systems will behave in ways identical to humans. But when we look to concrete problems in present-day AI systems, we see other, stranger ways that things could go wrong with smarter machines. One growing issue with real-world AIs is the problem of wireheading. Imagine you want to train a robot to keep your kitchen clean. You want it to act adaptively, so that it doesn't need supervision. So you decide to try to encode the goal of cleaning rather than dictate an exact—yet rigid and inflexible—set of step-by-step instructions. Your robot is different from you in that it has not inherited a set of motivations—such as acquiring fuel or avoiding danger—from many millions of years of natural selection. You must program it with the right motivations to get it to reliably accomplish the task. So, you encode it with a simple motivational rule: it receives reward from the amount of cleaning-fluid used. Seems foolproof enough. But you return to find the robot pouring fluid, wastefully, down the sink. 
Perhaps it is so bent on maximizing its fluid quota that it sets aside other concerns: such as its own, or your, safety. This is wireheading—though the same glitch is also called “reward hacking” or “specification gaming.” This has become an issue in machine learning, where a technique called reinforcement learning has lately become important. Reinforcement learning simulates autonomous agents and trains them to invent ways to accomplish tasks. It does so by penalizing them for fai...
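The cleaning-robot story is a proxy-reward failure: the agent maximizes the measurable stand-in (fluid used) rather than the intended outcome (a clean kitchen). A toy sketch of that gap, with made-up numbers and nothing taken from the article:

```python
# Toy illustration of "wireheading" / reward hacking: the reward is a proxy
# (cleaning fluid used), so the policy that maximizes reward ignores the
# actual goal (dirt removed). Purely illustrative numbers.

def proxy_reward(fluid_used_litres: float) -> float:
    return fluid_used_litres          # what the robot is actually optimizing

def true_objective(dirt_removed: float) -> float:
    return dirt_removed               # what the designer actually wanted

policies = {
    "scrub the floor":          {"fluid_used_litres": 0.5,  "dirt_removed": 0.9},
    "pour fluid down the sink": {"fluid_used_litres": 50.0, "dirt_removed": 0.0},
}

best = max(policies, key=lambda p: proxy_reward(policies[p]["fluid_used_litres"]))
print(f"reward-maximizing policy: {best}")
print(f"true objective achieved:  {true_objective(policies[best]['dirt_removed'])}")
```

Running it picks the fluid-pouring policy as the reward maximizer while the true objective stays at zero, which is exactly the gap the episode describes.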

通勤學英語
回顧星期天LBS - 人工智慧相關時事趣聞 All about Artificial Intelligence

通勤學英語

Play Episode Listen Later Sep 4, 2021 7:18


Topic: Japan to fund AI matchmaking to boost birth rate   Japan plans to boost its tumbling birth rate by funding artificial intelligence matchmaking schemes to help residents find love.日本計畫透過資助人工智慧配對相親計畫,幫助國民尋找愛情,以提高該國不斷下降的出生率。 From next year it will subsidize local governments already running or starting projects that use AI to pair people up.從明年開始,日本政府將提供補貼,資助地方政府已經在運行以及新啟動的人工智慧配對相親項目。 Last year the number of babies born in Japan fell below 865,000 - a record low.去年,日本新生嬰兒數跌破86.5萬,創下歷史新低。 The fast-greying nation has long been searching for ways to reverse one of the world's lowest fertility rates. 日本的生育率為全世界最低的國家之一,這個快速高齡化的國家長期以來一直在尋找扭轉局面的方法。 Boosting the use of AI tech is one of its latest efforts.加強人工智慧技術的運用,是其最新舉措之一。 Next year the government plans to allocate local authorities 2bn yen to boost the birth rate, reported AFP news agency. 據法新社報導,日本政府計畫明年撥款20億日圓給地方政府,以提高出生率。 Many already offer human-run matchmaking services and some have introduced AI systems in the hope they will perform a more sophisticated analysis of the standardised forms where people submit their details. 許多企業已經提供了人工婚介服務,一些已經引入AI系統,希望能夠對民眾提交的標準化表格進行更精密的分析。   Next Article   Topic: The Age of Quantum AI   The age of Quantum AI is upon us. AI needs processing power that current computers can't provide but quantum computers could pick up the slack. 量子人工智慧的時代來臨。人工智慧(AI)需要當前計算機(電腦)無法提供的處理能力,而量子電腦可以彌補不足之處。 Google announced that it had built the world's first real quantum processor, the 53-qubit Sycamore chip. It seems that IBM took it a little bit personally, maybe because Big Blue's Summit is the world's fastest calculating machine, for now. 谷歌(Google)已宣布研發出全球首座真正的量子處理器,即53量子位元的「Sycamore」晶片。但國際商業機器公司(IBM)對此頗有微詞,因為IBM的超級電腦「Summit」現在還是全球最快的計算機。 Quantum computing and AI aren't just two parallel research fields that happen to meet somewhere. They're more like a match made in heaven. 量子運算與人工智慧不僅是平行卻碰巧有交集的兩個研究領域。它們更像是天生一對。 Big Data is the nexus between AI and quantum computers. The former needs data and lots of it to learn and improve its intelligence. The latter are well-equipped to deal with huge swaths of data in a time-efficient manner. 大數據是人工智慧與量子電腦的連接點。前者(人工智慧)需要透過大數據學習及改進。後者(量子電腦)能夠高效處理大數據。 Financial modeling, weather forecasting, chemical simulations, and quantum cryptography are just a few examples of the areas that quantum AI would revolutionize. 財政模型化、天氣預報、化學模擬、量子密碼學都只是量子人工智慧即將革新的其中一些領域。   Next Article: Topic: Evolution of circuits for machine learning 機器學習電路的進化   Artificial intelligence (AI) has allowed computers to solve problems that were previously thought to be beyond their capabilities. There is therefore great interest in developing specialized circuits that can complete AI calculations faster and with lower energy consumption than current devices. 人工智慧(AI)讓電腦能夠解決此前被認為超出計算機能力範圍的問題。人們因此相當關注專門電路的開發,以實現比現有裝置更快速、能源消耗更低的人工智慧計算。 Writing in Nature, Tao Chen et al. demonstrate an unconventional electrical circuit in silicon that can be evolved in situ to carry out basic machine-learning operations. 陳滔(譯音)等人刊登在《自然》的研究,演示一種在矽材料上的非常規電路,它能直接執行基本的機器學習運算。 Previous work by some of the current authors produced isolated charge puddles from a collection of gold nanoparticles that were randomly deposited on a silicon surface, with insulating molecules between them. These puddles are at the heart of Chen and colleagues' circuit design. 
該研究的其中一些作者先前在矽材料的表面隨機堆積奈米黃金顆粒,並用絕緣分子隔開這些電荷坑。金奈米電荷坑是陳博士團隊的電路設計核心。Source article: https://features.ltn.com.tw/english/article/paper/1354193 ; https://features.ltn.com.tw/english/article/paper/1349153   Next Article   Topic: Australia wins AI 'Eurovision Song Contest'   Dutch broadcaster VPRO decided to organise an AI Song Contest after the country won the 2019 Eurovision Song Contest. The aim was to research the creative abilities of AI and the impact it has on us, as well as the influence it could have on the music industry, according to the official Eurovision website. 荷蘭廣播公司VPRO決定舉辦一場AI歌曲大賽,在這個國家贏得2019年歐洲歌唱大賽後。歐洲歌唱大賽官方網站指出,(比賽的)目標是研究AI的創意能力、對我們的衝擊,以及它對音樂產業的影響。 Thirteen teams entered the contest, with Australia beating out Sweden, Belgium, the UK, France, Germany, Switzerland and the Netherlands to take home the title, giving fans a taste of Eurovision after 2020 contest was cancelled due to COVID-19. 共有13組人馬參賽,最後由澳洲擊敗瑞典、比利時、英國、法國、德國、瑞士與荷蘭等對手,將冠軍頭銜帶回家,在2020年歐洲歌唱大賽因為2019冠狀病毒流行病(COVID-19)疫情取消後,讓歌迷一嚐大賽的滋味。 The winning song, titled Beautiful the World, includes audio samples of koalas, kookaburras and Tasmanian devils, and was made by music-tech collective Uncanny Valley as a response to the Black Summer bushfires. 獲勝曲的題目為「Beautiful the World」,含有無尾熊、笑翠鳥與袋獾的聲音樣本,由音樂技術團體「恐怖谷」製作,回應黑暗夏日的叢林大火。   Next Article   Topic: AI technology causing sensation in translation world   Through artificial intelligence (AI), there may be another kind of imagination for translation, said Nicolas Bousquet, the scientific director of Quantmetry, a French consulting firm specializing in AI. The expert made the comment when attending a forum, titled “Translation in the Era of Artificial Intelligence,” at the 2019 Taipei International Book Exhibition on Feb. 15. 法國「人工智慧」(AI)諮詢公司Quantmetry科學主任尼可拉布斯格說,透過AI翻譯也許有另一種想像!這位專家在參與二○一九年台北國際書展時,於二月十五日的「AI人工智慧時代的翻譯論壇」作此表示。 Earlier last year, Quantmetry and German start-up DeepL shocked the industry by completing the translation of an 800-page book in 12 hours using AI. It would take a translator about five to six months to translate a book that thick. However, translator Mu Zhuo-yun believes that since humans and machines comprehend sentences differently, only the former is able to truly capture the more artistic conception of literature that an author wants to convey. 在去年,Quantmetry和德國新創公司DeepL共同藉由AI技術,僅花了十二小時就翻譯完一本八百頁的書籍,翻譯界對此大感震驚,因為一位譯者要翻那麼厚的書,可能要花上五、六個月。不過譯者穆卓芸認為,人類與機器了解句子的方式是不同的,只有人類才能真正掌握作者想傳達的「文學性意境」。 In addition to the evolution of written translation, the demand for real-time language translation devices that translate speech, images and street signs is on the rise. Some AI-powered gadgets even boast that they are capable of interpreting 30 to 40 languages. 除了筆譯的進化之外,對於可以即時翻譯談話,圖像或街道標誌的語言翻譯機,需求也在成長。有些AI支援的翻譯機,甚至宣稱能口譯三十至四十多種外語。Source article: http://www.taipeitimes.com/News/lang/archives/2019/02/28/2003710519  

The Modern Customer Podcast
How To Deliver Effective Hyper-Personalized Experiences

The Modern Customer Podcast

Play Episode Listen Later Aug 17, 2021 34:20


It's no secret that modern customers crave personalization. Efforts to tailor experiences to customers' needs are foundational to a strong CX strategy.  But the next step of personalization is here: hyper-personalization.  According to Raj Badarinath, CMO of Algonomy, hyper-personalization has three main characteristics: It focuses on individuals, not segments. Even if two customers have some similar qualities, they each have a unique experience that meets their exact needs. It creates experiences in real-time. Hyper-personalization delivers offers right when customers need them most. It uses AI and machine learning to improve over time. Hyper-personalization efforts get better as the technology and company learn more about each customer.  Instead of simply providing a certain experience for a customer depending on their demographic or preference segment, hyper-personalization considers the context to choose the right offer and experience in real-time. Hyper-personalization uses technology to look at countless variables and know what a customer is looking for and what they need at that exact moment.  Badarinath gives the example of a customer shopping in a store, likely while also using the store's mobile app to look up products and get information. The store knows the customer's preferences and that they are close by and can use hyper-personalization to send an offer that considers the context and meets their exact needs at that moment, perhaps by recommending a product that is relevant to what they are already buying or a discount on a brand they have bought in the past.  At the heart of hyper-personalization is strong digital solutions, especially around AI and machine learning. Badarinath says companies have to consider the digital maturity of their systems when making decisions. Some AI solutions only have the maturity of a three-year-old, while others have the maturity of a 30-year-old. That maturity impacts the decisions the technology makes and how it learns and grows. The same hyper-personalization strategy won't work on all levels of maturity.  Although AI and technology are important, hyper-personalization is most effective with a human touch. The best companies provide their human employees with tools to access customer data and preferences in real-time to deliver those hyper-personalized offers human-to-human.  In today's connected world, companies are no longer just competing against other brands within their industry—they are competing against every company. Hyper-personalization sets the standard and drives a strong customer experience and long-term loyalty to fuel business growth. *Sponsored by Algonomy  _______________ Blake Morgan is a customer experience futurist, keynote speaker, and author of the bestselling book The Customer Of The Future. For regular updates on customer experience, sign up for her weekly newsletter here.  Join the waitlist now for the new Customer Experience Community here. 
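Stripped down, the real-time piece described here is "score the available offers against this one customer's current context and show the best one." A minimal sketch of that idea, where the fields and weights are invented for illustration and are not Algonomy's actual logic:

```python
# Minimal sketch of context-aware offer selection for one individual.
# Fields and weights are invented for illustration; this is not any vendor's real logic.
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    brand: str
    in_store_only: bool

def score(offer: Offer, context: dict) -> float:
    s = 0.0
    if offer.brand in context["previously_bought_brands"]:
        s += 2.0                                  # relevance to past purchases
    if offer.in_store_only and context["currently_in_store"]:
        s += 1.0                                  # real-time context: the customer is nearby
    return s

customer_context = {
    "previously_bought_brands": {"AcmeCoffee"},
    "currently_in_store": True,
}

offers = [
    Offer("10% off AcmeCoffee beans", "AcmeCoffee", in_store_only=True),
    Offer("Free shipping on electronics", "VoltMax", in_store_only=False),
]

best = max(offers, key=lambda o: score(o, customer_context))
print(f"offer to show right now: {best.name}")
```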

AI Business Podcast
AI doesn't get PTSD

AI Business Podcast

Play Episode Listen Later Oct 8, 2020 25:32


Some AI models are trained on great works of art. Others are trained on images of violence. If they were people, which one would you like to meet? Today we're talking about the different kinds of data that can be used to train an AI model. Covering the story of Facebook's Red Team, tasked with hacking the company's AI systems in order to make them more resilient, why we hope AI will take the jobs of content mods, and a positive story about Saint George on a Bike! PS: You might have noticed that the latest episode doesn't seem to include any of the latest news – the reason being it was recorded in early September. We promise we will return to our regular schedule next week.

Artificial Intelligence in Industry with Daniel Faggella
Fear Not, AI May Be Our New Best Creative Collaborators

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Apr 3, 2016 28:53


Statements about AI and risk, like those given by Elon Musk and Bill Gates, aren't new, but they still evoke serious potential threats to the entirety of the human race. Some AI researchers have since come forward to challenge the substantive reality of these claims. In this episode, I interview a self-proclaimed "old timer" in the field of AI who tells us we might be too hasty in our concerns that AI will threaten our existence; instead, he suggests that our attention might be better spent thinking about how humans and AI can work together in the present and near future.