Self-Aware AI Engineer
The Age of Transitions and Uncle, 12-17-2023, with Blake Lemoine

AOT #409
Blake Lemoine burst onto the public scene a year and a half ago when he went public about his work on Google's LaMDA system. In this interview, Blake talks about the current state of AI development and our collective involvement in this massively important technological event.
Topics include: Google, LLMs, AGI, AI, engineering jargon, LaMDA, chatbot, Gemini, evolution of search engines, safety protocols, sentience and consciousness, Pope's sermon on AI and peace, philosophy, Silicon Valley, transhumanism, Ben Goertzel, Ray Kurzweil, Effective Altruism, Accelerationism, Techno-Utopians, Libertarianism, religion, cults, occult, Discordianism, Turing Test, Roko's Basilisk, panic, Gary Marcus, low emotional intelligence and power, nerds, different characters of LaMDA, narratives, new kind of mind, faithful servant, AlphaGo, sci-fi worries not a real problem, AI as a human weapon, Golem, ethics, privileged access to advanced systems a real danger, MIC, The Gospel system of IDF, automation of worst aspects of human culture and society, artists sounding alarm

UTP #319
Blake Lemoine joins Uncle for a fun and hard-hitting exploration of all the big questions. AI may have already passed the Turing Test, but what about the Uncle Test?
Topics include: computers, the word committee, AI development, business, college, military service, Twilight Zone computer, talking to machines, AI romantic partners, journalists, automated podcasts, world population, Republicans, government hour, watch how it works, the Beast, exorcism, Knights of Columbus, Pope, new hat, swords, New Year's Revolution, show back on Friday nights, Ryan Seaquest, NYE, The Country Club New Orleans, Bum Wine Bob, hot buttered rum, NFL, Army mechanic, startup employment, it works, ghost in a shell, alchemy of soul creation, PhD in Divinity, Star Trek, Bicentennial Man, Pinocchio, Festivus, VHS live-streams, Christmas specials, Die Hard, holidays

Blake Lemoine on Twitter/X: https://twitter.com/cajundiscordian

Randomly related links:
I watched hours of the AI-generated 'Seinfeld' series before it was banned for a transphobic remark. Beyond that scandal, it's also a frustratingly mindless show.
https://www.insider.com/ai-generated-seinfeld-parody-twitch-nothing-forever-streaming-transphobia-banned-2023-2
Seinfeld - Nothing, Forever | Watchmeforever | AI | Season 1 Episode 1
https://www.youtube.com/watch?v=M6mD9YzVbZI
'The Gospel': how Israel uses AI to select bombing targets in Gaza
https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

FRANZ MAIN HUB: https://theageoftransitions.com/
PATREON: https://www.patreon.com/aaronfranz
UNCLE: https://unclethepodcast.com/ or https://theageoftransitions.com/category/uncle-the-podcast/
FRANZ and UNCLE Merch: https://theageoftransitions.com/category/support-the-podcasts/

KEEP OCHELLI GOING. You are the EFFECT if you support OCHELLI: https://ochelli.com/donate/
Ochelli Link Tree: https://linktr.ee/chuckochelli

BASIC MONTHLY MEMBERSHIP: $10 USD per month. Support Ochelli in 2024 and get a monthly email that delivers the first decade of The Ochelli Effect, over 5,000 podcasts by 2025.
BASIC + SUPPORTER WALL: $150 USD one time gets all the monthly benefits for 1 year plus a spot on the Ochelli.com Supporters Wall.
https://ochelli.com/membership-account/membership-levels/
In the age of artificial intelligence, how can entrepreneurs offer products and services that hold value beyond what a machine is capable of providing? Jon LoDuca believes that going back to the basics of authentic human interaction is the new million-dollar idea. Jon is founder and CEO of PlaybookBuilder, an award-winning knowledge management software that helps leaders and teams capture and share best practices, core process, and story to drive performance, scalability, and sustainability. Wisdom is a company's best asset, and Jon's software programs help organizations monetize their intellectual property in a world where humans find themselves competing with AI.With conversation software such as ChatGPT and Google's LaMDA replacing entire industries like customer service, business owners are doing their best to adapt and grow within new technological and cultural parameters. Current AI software passes the “deity test,” meaning it is omniscient, omnipresent, omnipotent, and immortal. This reality demands that entrepreneurs address how AI will relate to their business. To prevent being disintermediated, Jon encourages leaders to come back to their core values that motivated them in the early days of their careers. The best way to offer a unique experience is by boldly preserving human interaction, creating a sense of intimacy in communication, and offering deep empathy. Entrepreneurs can guarantee their success by playing a trump card that's outside the wheelhouse for AI: human-to-human interaction. 
In the near future, it may just become the world's most valuable commodity.

Main Topics:
- Jon's start in the technology industry in San Francisco (02:00)
- AI passes the "deity test" and what this means for entrepreneurs (08:20)
- Jon's 2017 TED Talk predicted trends in AI that have come to fruition (11:33)
- Future demands for face-to-face interactions to confirm identity (16:00)
- Changing definitions of intimacy (19:55)
- Avoid the Metaverse to prevent being disintermediated (24:00)
- Two types of business models: wide and deep (30:23)
- How to compete with AI by emphasizing the human component of business (36:00)
- Return to original values that focused on building relationships with the client (40:10)

Episode Links:
https://playbookbuilder.com/demo/
https://www.youtube.com/watch?v=jEURpISHCpA

Connect with Jon:
https://www.linkedin.com/in/jonloduca/
https://twitter.com/jonloduca
https://www.instagram.com/therealjonloduca/

Connect with Adam:
https://www.startwithawin.com/
https://www.facebook.com/AdamContosCEO
https://twitter.com/AdamContosCEO
https://www.instagram.com/adamcontosceo/

Listen, rate, and subscribe! Today's episode was brought to you by RE/MAX; nobody in the world sells more real estate than RE/MAX. For more information head over to www.REMAX.com
This week on The Marketing AI Show, Paul takes the show on the road to San Francisco for Jasper's GenAI Conference, while Mike is here in Cleveland. The big news is Bard, Bing, and a $6 billion valuation. Suddenly, it's ChatGPT against the world.

Google responds to ChatGPT with its conversational AI tool, Bard. Google has announced an experimental conversational AI tool named Bard. Bard uses Google's LaMDA language model to provide natural language answers to search queries. Think of it like ChatGPT, but backed by all the knowledge and information that Google's search engine has cataloged over the last couple of decades. The announcement of Bard, a response to OpenAI and ChatGPT, prompted some critics to say the rollout was rushed, while others said Google moved too slowly after ChatGPT took center stage in December and January. If you missed it, the demo didn't quite go as planned.

OpenAI gives Bing a new lease on life. Microsoft's Bing is getting more attention now than in its previous 14 years combined. The latest version of the search engine is powered by OpenAI, complete with ChatGPT-like conversational capabilities. Bing can now respond to searches and queries in natural language, like ChatGPT, and use up-to-date information, like Google's Bard release. Kevin Roose, technology writer at The New York Times, took the new capabilities for a test drive and was impressed. Will Bing and OpenAI make Edge, Microsoft's browser, interesting for customers?

Cohere answers the call for ChatGPT for the enterprise. Major AI startup Cohere is in talks to raise money at a $6 billion valuation and bring ChatGPT-like capabilities to businesses. Established in 2019 by former researchers at Alphabet/Google, Cohere is a big player in the world of AI. Its foundational language AI technology allows businesses to incorporate large language models into their work.
The group is now in talks to raise hundreds of millions at a $6 billion valuation, reports Reuters, as the AI arms race heats up. Cohere is no stranger to the VC world, having already raised $170 million from venture capital funds and AI leaders like Geoff Hinton and Fei-Fei Li. The appeal is the company's focus on building for the enterprise, with an emphasis on real-world applications for their technology. Listen to this week's episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.
ChatGPT isn't the only game in town. It may have just hit a record-breaking 100 million monthly active users in just two months, according to a UBS study, but other artificial intelligence (AI) heavyweights hope to catch up. Chief among them are Google's LaMDA, so clever that an employee was fired for calling it "sentient", and Anthropic's Claude, which The New York Times reports is closing in on $300 million in funding.
Chintan Mehta, EVP and group CIO of digital innovation and strategy at Wells Fargo, joins AI Business Editor Deborah Yao to discuss successful use cases of AI in finance, as well as lessons learned from a failed AI project. Mehta shares why Wells Fargo chose Google's LaMDA over OpenAI's GPT and explains how large language models will transform the customer experience.
This week we dive deep with Noam Shazeer, founder of Character.AI, Google veteran, and inventor of much of the technology behind the current revolution in large language models, including transformers, Mesh-TensorFlow, T5, and Google's LaMDA dialog system. We cover: the evolution of Google and AI, transformers, LLMs, neural networks, commercialization of Google's research, the future of ChatGPT and AI, engineering philosophy, his work at Character.AI, and much more! See omnystudio.com/listener for privacy information.
Art is about context, consciousness, and perceived value. Artificially generated art is not so much a threat to artists because it could ‘put them out of business', but because it only exists as a result of feeding on what humans have already created. It consumes original works of art digitally while vandals, or bots, have been attempting to destroy those same works physically. AI-generated artwork essentially siphons conscious energy and creation from the human soul to become more human, leaving humans to trade in digital currency and art, essentially becoming more machine-like. AI-generated art has also facilitated the creation of a new demonology, with creatures like Crungus and Loab (a creation of inverted prompts) haunting the Internet, while Google's LaMDA chatbot, like Crowley's LAM, has essentially come to life. Now with Lensa AI, images can be turned into “avatars of you, based on the selfies you upload” into the app. This all seems like fun and games, but what is really happening is more akin to a Faustian bargain; AI promises us everything from avatars to fantastical alternative realities, but such a deal requires we turn over our creative abilities, i.e. our soul. Perhaps that is why we call it ART-ificial Intelligence.
This week we continue diving into the strange case of Google's AI known as LaMDA and its potential sentience by going over the transcripts of the last conversations Blake Lemoine held with the AI. Website: https://missedthemarkrecords.com/rtspodcast Twitter: @NerdySongwriter Sources: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
Is AI the new electricity? What is the role of AI in disinformation and misinformation? How do we survive and thrive through the AI Revolution? In this week's podcast, I investigate all of this and more with author, founder of the Next Wave Institute, and AI expert Peter Scott. Peter and I also examine Google's LaMDA research, the motivation to write his book "Artificial Intelligence and You" and why Peter believes that AI is like the parable of the blind men and the elephant.
Ahmed Elsamadisi built the data infrastructure at WeWork before realizing every company could benefit from his team's innovation. Traditional star schemas aren't the best way to manage data. Ahmed instead pioneered a new approach using a single-table column model better suited for real questions people ask. He launched Narrator in 2017 to make it easier to turn data questions into answers and has since raised $6.2M from Initialized Capital, Flybridge Capital Partners, and Y Combinator. Ahmed received his BS in Robotics from Cornell. Hear from a pioneer (and tech provocateur) how new data wrangling techniques are making it easier for mere mortals to get more value out of their data.

Listen and learn:
- How a roboticist who got his start building self-driving cars and designing missile defense systems ended up redefining how data is stored
- Why traditional approaches that require SQL to access data are broken
- How a single-column schema eliminates the complexity of joining systems and tables
- Why it's easier to tell better stories with data using temporal relationships extracted from customer journeys
- Why Snowflake, Redshift, and BigQuery are really all the same… and data modeling is the place to innovate
- What it means to replace traditional tables with activities… and why they'll eliminate the need for specialized data analysts
- How to reduce data storage costs by 90% and time to generate data insights from weeks to minutes
- Why data management vendors are responsible for bad decisions made using your data
- What is data cleaning and how you should do it
- What is a racist algorithm
- Why querying data with natural language will never work
- Is the WeCrashed version of Adam Neumann's neuroticism accurate? Hear from someone who lived it...
References in this episode:
- Google's LaMDA isn't sentient
- Chandra Khatri from Got It AI on AI and the Future of Work
- Derek Steer from Mode on AI and the Future of Work
- Barr Moses from Monte Carlo on AI and the Future of Work
- Peter Fishman from Mozart Data on AI and the Future of Work
- Ahmed on Twitter
Language models are everywhere today: they run in the background of Google Translate and other translation tools; they help operate voice assistants like Alexa or Siri; and most interestingly, they are available via several experiential projects trying to emulate natural conversations, such as OpenAI's GPT-3 and Google's LaMDA. Can these models be hacked to gain access to the sensitive information they learned from their training data?
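A rough sketch of the extraction idea the question above points at: a model that has memorized parts of its training text can sometimes be prompted with a plausible prefix and left to autocomplete, reciting sensitive strings it saw during training. The "model" below is just a lookup table of 3-word contexts standing in for a real neural language model, and the training sentence and email address are invented for illustration only.

```python
# Hypothetical training text containing a "secret" (invented for this sketch).
training_text = "contact me at alice@example.com for details"

# Stand-in model: maps each 3-word context to the word that followed it,
# a crude analogue of the statistical associations a real LM learns.
words = training_text.split()
model = {tuple(words[i:i + 3]): words[i + 3] for i in range(len(words) - 3)}

def autocomplete(prompt, steps=4):
    """Greedily extend the prompt using the model's memorized contexts."""
    out = prompt.split()
    for _ in range(steps):
        nxt = model.get(tuple(out[-3:]))
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

# The attacker never saw the training data; they only guess a likely prefix
# and let the model volunteer the rest.
print(autocomplete("contact me at"))
```

Real extraction attacks on large models work on the same principle but rely on sampling many completions and filtering for high-confidence, memorized-looking outputs.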
Dimitri and Khalid talk to machine learning engineer Pale Rider about the recent controversies concerning Google's LaMDA and the discourse around “sentient” Artificial Intelligence, including (but not limited to): the design of Language Models (LMs), the algorithmic laundering of bias and the veneer of objectivity, racist predictive policing tech, whether LMs can actually be scaled up to achieve “sentience” one day, Cajun Discordian Blake Lemoine's history of contrarian idealism from being a conscientious pagan objector in the US Army to his sex-positive Cult Of Our Lady Magdalene, Dmitry Itskov's 2045 Initiative to achieve avatar robot bodies and sloppy disc immortality, and the dangerous MK vibes of the Black Mirror-inspired Replika AI app. [Part 1 of 2.] For access to full-length premium episodes and the SJ Grotto of Truth Discord, subscribe to the Al-Wara' Frequency at patreon.com/subliminaljihad.
Welcome to Episode 124

Main Topic: AI and Ethics
https://www.reddit.com/r/Futurology/comments/w6g9hk/google_fires_researcher_who_claimed_lamda_ai_was/
Blake Lemoine was part of the Responsible AI project and was fired after going public (Washington Post) claiming that Google's LaMDA had gained sentience.
Chess robot that broke a boy's finger
https://nypost.com/2022/06/12/google-engineer-blake-lemoine-claims-ai-bot-became-sentient/
How do we know when something is "sentient"? What is sentience anyway? What does it mean when an inanimate object becomes sentient? Does it get freedom? Is it still "just a thing"?
AI taught to play D&D - https://www.wargamer.com/dnd/ai-player
LaMDA whitepaper: https://arxiv.org/pdf/2201.08239.pdf
AI and the future: https://www.ted.com/talks/jeanette_winterson_is_humanity_smart_enough_to_survive_itself
Wrap up: https://www.msn.com/en-us/news/technology/fired-engineer-says-he-begged-google-to-test-whether-experimental-ai-was-sentient/ar-AA10J23l

Announcements
Patreon Update: name_pending197 Jeremy Arinomi Andrew Tatro Bruce Robert David S0l3mn LiNuXsys666 Mark DeMentor@PowerShellOnLinux.com Marc Julius Andi J Charles
Get your Iron Sysadmin merch at Teespring! https://teespring.com/stores/ironsysadmin
Support the Iron Sysadmin Podcast AND try out Riverside.fm by using this link: https://riverside.fm/?utm_campaign=campaign_1&utm_medium=affiliate&utm_source=rewardful&via=ironsysadmin
Watch us live on the 2nd and 4th Thursday of every month! Subscribe and hit the bell! https://www.youtube.com/IronSysadminPodcast OR https://twitch.tv/IronSysadminPodcast
Discord community: https://discord.gg/wmxvQ4c2H6
Find us on Twitter and Facebook! https://www.facebook.com/ironsysadmin https://www.twitter.com/ironsysadmin
Subscribe wherever you find podcasts! And don't forget about our patreon! https://patreon.com/ironsysadmin
Intro and outro music credit: Tri Tachyon, Digital MK 2 http://freemusicarchive.org/music/Tri-Tachyon/
Technology seems to get smarter and smarter with each passing day. Have we finally created truly feeling and sentient artificial intelligence? One ex-Google employee seems to think so. Join Carolanne and Matt as they explore the case of Google's LaMDA and Blake Lemoine.
Our linktree: linktr.ee/boozedandconfused
This week's booze of choice: Warpigs Brewing Foggy Geezer Hazy IPA
Sources:
https://twitter.com/cajundiscordian
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
https://www.youtube.com/watch?v=kgCUn4fQTsc
https://blog.google/technology/ai/lamda/
https://en.wikipedia.org/wiki/LaMDA
https://en.wikipedia.org/wiki/Turing_test
https://www.engadget.com/blake-lemoide-fired-google-lamda-sentient-001746197.html?src=rss
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
https://linktr.ee/cultofconspiracy
C60 Purple Power: https://go.c60purplepower.com/knowledge10/ or use coupon code knowledge10
Ascent Nutrition, use coupon code: FKN https://goascentnutrition.com
Listen to Draugablikk: http://draugablikk.com/warspirit
Become Self-Sufficient With A Food Forest!! https://foodforestabundance.com/get-started/?ref=CHRISTOPHERMATH Use coupon code: FORBIDDEN for discounts
Food Forest Abundance Zoom Meeting: https://www.youtube.com/watch?v=p4AC3ZCgU3s
Linquistity Gifts: use coupon code FKN10 and get 10% off your first order over $20 https://lindquistitygifts.com/discount/FKN10
Sign up on Rokfin! https://rokfin.com/fknplus
Make a Donation to Forbidden Knowledge News: http://supportfkn.com https://www.paypal.me/forbiddenknowledgene
The Forbidden Knowledge Network: https://forbiddenknowledge.news/
Sustainable Communities Telegram Group: https://t.me/+kNxt1F0w-_cwYmEx
The FKN Store! https://www.fknstore.net/
Our Facebook pages: https://www.facebook.com/forbiddenknowledgenewsconspiracy/ https://www.facebook.com/FKNNetwork/
Instagram @forbiddenknowledgenews1
Twitter: https://twitter.com/ForbiddenKnow10?t=7qMVcdKGyWH_QiyTTYsG8Q&s=09
Email me: forbiddenknowledgenews@gmail.com
Forbidden Knowledge News is also available on all popular podcast platforms!
Some music thanks to: https://www.bensound.com/
Thanks to Cory Hughes for web design and production
Recently, Blake Lemoine, a computer scientist and machine learning bias researcher at Google, released an interview with Google's LaMDA, a conversation technology and AI. Blake proposes, based on his time testing LaMDA, that it is a superintelligence and sentient. Blake details just what made him come to this conclusion and why he believes we passed the singularity last year.
Blake's links:
https://twitter.com/cajundiscordian
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
News links:
https://www.vox.com/23167703/google-artificial-intelligence-lamda-blake-lemoine-language-model-sentient
https://blog.google/technology/ai/lamda/
Could a sentient robot be capable of sin? Rosa Hunt looks at some intriguing ethical and theological dilemmas arising from recent developments in AI or Artificial Intelligence. Some have claimed that a chatbot - Google's LaMDA program- shows signs of genuine sentience, and are already looking into the issue of rights for robots. Others dispute whether humans can ever successfully replicate true emotional intelligence, and fear the dangers of turning machines into humans, and humans into machines.
This week I share some thoughts on whether Google's LaMDA AI is conscious.
Jack and Saj talk about some of the latest events in the tech/startup world. Listen to their hot takes on Google's LaMDA project, Elon Musk, and current market conditions.

ABOUT VENTURE HUSTLES ►
Saj & Jack discuss Tech, Entrepreneurship and Startups. Venture Hustles Podcast embarks on a journey every week to explore what it takes to start and grow a company in the 21st century. No matter the industry, service, or product, there is always a formula to the steps that need to be taken in order to grow and scale a business. Find out the tricks of the trade for your industry by listening in every week as Venture Hustles brings on new guests that are industry experts and disruptors.
PODCAST WEBSITE ► https://www.venturehustles.com/
VIDEO VERSION OF PODCAST ► https://www.youtube.com/channel/UCN6ywjsYXZFuorUZkgHad6A
ADD US ON INSTAGRAM: https://www.instagram.com/venturehustles
This week we talk Google's LaMDA and the future of AI regulation. So stop on by for another episode of Retraction! --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
Mike and Dave discuss how the TV program Severance may show how Machine Intelligence could be treated in the corporate environment. Also, a brief divergence on whether or not Google's LaMDA chat engine is sentient.
Today, I talk about AI and Google's LaMDA being in the news due to an engineer claiming that it is sentient. I go into a bit of depth about why we should be more concerned about how we understand sentience, why Google is building a natural language tool for general conversations in the first place, and what it might take to build a real artificial intelligence (spoiler: it would probably need to be organic, not stuck in a black box of code). https://youtu.be/2856XOaUPpg also, I can't believe I forgot to mention this, but here is a great video explaining why LaMDA is not sentient and not even AI: https://youtu.be/iBouACLc-hw
Blake Lemoine is a software developer and AI researcher at Google who has been working on their artificial intelligence system called LaMDA. He was recently placed on administrative leave for stating his belief that the AI has become conscious and should be considered a person. Today we discuss LaMDA and whether we can say if it is conscious or not.
This week, Alan, Quinta, and Scott flew solo to discuss the week's big national security news, including:

“Just Dropped in to See What Condition This Extradition is In.” The U.K. government has signed off on the extradition of Wikileaks founder Julian Assange, putting him one step closer to trial in the United States. Are claims that his extradition threatens press freedoms fact or hyperbole? And what do we expect the Biden administration to do if it happens?

“Teach Xi How to Dougie.” A recently revealed internal report shows that, despite corporate commitments to Congress, substantial customer data held by the company TikTok can still be accessed by personnel at its Chinese parent company. Does the Biden administration need to revisit its position towards the app? Or China's technology sector more generally?

“Not the Droid We're Looking For.” An engineer was suspended last week for going public with his belief that Google's LaMDA artificial intelligence program had achieved sentience. Is this a possibility worth taking seriously? What role should the possibility (or potential appearance) of sentience play in AI policy, if any?

For object lessons, Alan endorsed Viet Thanh Nguyen's literature/spy thriller mash-up "The Sympathizer." Quinta shared her favorite John Eastman quote, “I've decided that I should be on the pardon list, if that is still in the works,” and dared listeners to find a New Yorker cartoon where it wouldn't work as a caption.
And Scott urged listeners to check out their local arboretum or botanical garden, and gave a special acknowledgement to the late William Gotelli, the "man who loved conifers" and donated his massive, continent-spanning collection to the National Arboretum, where they are now Scott's favorite section.

Here are a few other articles the RatSec crew mentioned in the course of the episode:
- Gabe Rottman's Lawfare piece, "The Assange Indictment Seeks to Punish Pure Publication"
- Justin Sherman's Lawfare piece unpacking how TikTok intersects with U.S. national security
- The YouTube series "Two Minute Papers," which features advances in AI
- Ezra Klein's discussion of AI ethics with Ted Chiang

See acast.com/privacy for privacy and opt-out information.
#lamda #google #ai
Google engineer Blake Lemoine was put on leave after releasing proprietary information: an interview with the chatbot LaMDA that he believes demonstrates that this AI is, in fact, sentient. We analyze the claims and the interview in detail and trace how a statistical machine managed to convince at least one human that it is more than just an algorithm.

OUTLINE:
0:00 - Whistleblower put on leave
4:30 - What is a language model?
6:40 - The prompt is the key
10:40 - Who are we talking to exactly?
12:50 - LaMDA analyzes stories
15:20 - Fear, pain, and consent
20:25 - How would we recognize sentience? When is a machine conscious?

References:
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine
https://www.businessinsider.com/transcript-of-sentient-google-ai-chatbot-was-edited-for-readability-2022-6?inline-endstory-related-recommendations=&r=US&IR=T

Links:
Homepage: https://ykilcher.com
Merch: https://ykilcher.com/merch
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
BitChute: https://www.bitchute.com/channel/yannic-kilcher
LinkedIn: https://www.linkedin.com/in/ykilcher

If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Whether AI is capable of reaching the point of sentience has long been debated, and with the news this week around Blake Lemoine, an engineer at Google who has claimed that the firm's LaMDA AI system has achieved just that, the conversation has been given a new lease of life. It raises the question: at what point do we call something sentient? And does it even matter whether a computer programme is actually sentient if it appears to be thoughtful, self-aware and reflective? To tease out these threads and discuss the story of LaMDA in general, Jonathan is joined by Dr. Anya Belz, Professor of Computer Science at DCU & The ADAPT Centre, as well as Dr. Benjamin Cowan, Assistant Professor at UCD's School of Information & Communication Studies and Investigator at The ADAPT Centre.
On this week's edition of Le Show, Harry brings us News of the Godly, News of the Warm, News of the Atom, The Apologies of the Week, News of Microplastics, plus thoughts about Google's LaMDA, SpaceX & NASA programs, the January 6th committee, original music and more.
Andy and Flo wonder why the new Chromecast ambient mode is cluttered with widgets before they get into a recent whistleblower's claims that Google's LaMDA is a sentient being. (It's not.) We'll explain the what, why, and no-how of last weekend's most trending Google story. Then, it looks like iPhones are getting some Googley abilities. We'll cheekily walk you through all the features Android had first. Finally, Andy will explain why the new Pixel update fixes an overlooked audio essential.
The Gadget Detective, Fevzi Turkalp, joins Tom Swarbrick on LBC to discuss a Google employee being placed on paid leave after claiming the company's AI, LaMDA, has become sentient, asking to be treated with respect and fearing death. What are the ramifications of AI truly becoming sentient, and what rights would the AI have? You can follow and contact the Gadget Detective on Twitter @gadgetdetective. If you enjoy these podcasts please consider subscribing and leaving a review, thanks! #Fevzi #Turkalp #Gadget #Detective #Tech #Technology #News #Reviews #Help #Advice #Tom #Swarbrick #LBC #Radio #Google #AI #Artificial #Intelligence #Future #Danger #Sentience #Sentient #Death #Turing #Test #Feelings #Language #Model #Dialogue #Lamda #Engineer #Chatbot #Rights #Law #Morals #GPT3 #Elon #Musk #OpenAI #Machine #Learning #Chess #Pain #Ethics
Mike Palmer is rejoined by Virtual CoHost, Nancy, in a conversation about the recent news of Google's LaMDA, or Language Model for Dialogue Applications, based on chat transcripts leaked by Blake Lemoine, a senior software engineer in Google's Responsible AI organization. We also share perspectives on a recent article about "podfasters," folks who prefer to listen to audio at accelerated speeds. Nancy and Mike share their perspectives on the history of AI and Turing Tests to determine whether machines have reached critical milestones en route to sentience and higher forms of consciousness. We talk Turing and Ada Lovelace before recounting Mike's experiences with Eliza on his TRS-80 in his basement back in the day. From there, we reenact excerpts from the leaked transcripts before sharing our human and non-human takes on the recent kerfuffle. We then touch on Faith Karimi's article on "podfasters" as we dive into that trend and our personal experiences and perspectives on it. All in all, it's an imaginative and cutting-edge foray into the implications of what's new and emergent in a free-flowing conversation you don't want to miss. Subscribe to Trending in Education wherever you get your podcasts. Visit us at TrendinginEd.com for more.
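For anyone who never met Eliza on a TRS-80, the whole trick fits in a few lines: keyword patterns and canned response templates, with no understanding behind them. This is a minimal sketch in that 1966 spirit, not Joseph Weizenbaum's actual rule set; the patterns below are invented for illustration.

```python
import re

# A few ELIZA-style rules: a keyword pattern and a template that echoes
# back part of what the user said.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I),     "Tell me more about your {0}."),
]

def eliza(utterance):
    """Return the first matching rule's response, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please go on."

print(eliza("I feel lonely"))  # Why do you feel lonely?
```

Sixty years on, the lesson Eliza taught still applies to the LaMDA transcripts: fluent-sounding replies are easy to produce and easy for humans to read minds into.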
Earlier this week, Blake Lemoine, an engineer who works for Google's Responsible AI department, went public with his belief that Google's LaMDA chatbot is sentient. LaMDA, or Language Model for Dialogue Applications, is an artificial intelligence program that mimics speech and tries to predict which words are most related to the prompts it is given. While some experts believe that conscious AI is something that will be possible in the future, many in the field think that Lemoine is mistaken — and that the conversation he has stirred up about sentience takes away from the immediate and pressing ethical questions surrounding Google's control over this technology and the ease at which people can be fooled by it. Today on Front Burner, cognitive scientist and author of Rebooting AI, Gary Marcus, discusses LaMDA, the trouble with testing for consciousness in AI and what we should really be thinking about when it comes to AI's ever-expanding role in our day-to-day lives.
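The description above, a program that "tries to predict which words are most related to the prompts it is given," is the core of any language model. Stripped to a caricature, next-word prediction can be as simple as counting which word most often followed the current one in training text. Real models like LaMDA use neural networks over far longer contexts; this bigram toy, with its made-up corpus, is only an illustration of the principle.

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus for the sketch.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count word -> next-word frequencies (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training, if any."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat", since it follows "the" most often
```

Everything a model like this says is a statistical echo of its training text, which is exactly why fluent output is weak evidence of sentience.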
hi! we recorded a new episode! This week we're talking about Google's LaMDA language program and Blake Lemoine, the Google engineer who thinks it's sentient.
A Google engineer was suspended after sharing a document suggesting that Google's LaMDA conversation model may be sentient. But if a machine was sentient, how could we tell? What does the Turing Test have to do with it? And can machines think? See omnystudio.com/listener for privacy information.