Podcasts about design ethicist

  • 15 PODCASTS
  • 16 EPISODES
  • 51m AVG DURATION
  • INFREQUENT EPISODES
  • LATEST: May 21, 2023

POPULARITY

2017–2024


Best podcasts about design ethicist

Latest podcast episodes about design ethicist

Think for Yourself
Google's Former Design Ethicist, Tristan Harris, Has A Warning for Humanity

Think for Yourself

May 21, 2023 · 27:35


Social Media was just the beginning. It was our First Encounter with AI. The Second Encounter, about to invade every facet of our reality and distort it beyond recognition, is at our doorstep. Do we have the wisdom and commitment to ethics which will allow us to survive its tentacles? A warning from Tristan Harris.

The Nonlinear Library
LW - Transcript: NBC Nightly News: AI ‘race to recklessness' w/ Tristan Harris, Aza Raskin by WilliamKiely

The Nonlinear Library

Mar 23, 2023 · 5:41


Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transcript: NBC Nightly News: AI ‘race to recklessness' w/ Tristan Harris, Aza Raskin, published by WilliamKiely on March 23, 2023 on LessWrong.

Video Link: AI ‘race to recklessness' could have dire consequences, tech experts warn in new interview

Highlights

AI Impacts' Expert Survey on Progress in AI cited: "Raskin points to a recent survey of AI researchers, where nearly half said they believe there's at least a 10% chance AI could eventually result in an extremely bad outcome like human extinction."

Airplane crash analogy: Raskin: "Imagine you're about to get on an airplane and 50% of the engineers that built the airplane say there's a 10% chance that their plane might crash and kill everyone." Holt: "Leave me at the gate!"

Tristan Harris on the AI arms race: "The race to deploy becomes the race to recklessness. Because they can't deploy it that quickly and also get it right."

Holt: "So what would you tell a CEO of a Silicon Valley company right now? 'So yeah, you don't want to be last, but can you take a pause?' Is that realistic?"

Transcript

Lester Holt: Recent advances in artificial intelligence now available to the masses have both fascinated and enthralled many Americans. But amid all the "wows" over AI, there are some saying "Wait!", including a pair of former Silicon Valley insiders who are now warning tech companies there may be no returning the AI genie to the bottle. I sat down with them for our series A.I. Revolution.

Holt: It's hard to believe it's only been four months since ChatGPT launched, kicking the AI arms race into high gear.

Tristan Harris: That was like firing the starting gun. That now, all the other companies said, 'If we don't also deploy, we're going to lose the race to Microsoft.'

Holt: Tristan Harris is Google's former Design Ethicist. He co-founded the Center for Humane Technology with Aza Raskin. Both see welcome possibilities in AI.

Harris: What we want is AI that enriches our lives, that is helping us cure cancer, that is helping us find climate solutions.

Holt: But will the new AI arms race take us there? Or down a darker path?

Harris: The race to deploy becomes the race to recklessness. Because they can't deploy it that quickly and also get it right.

Holt: In the 2020 Netflix documentary The Social Dilemma, they sounded the alarm on the dangers of social media.

Harris: We built these things and we have the responsibility to change it.

Holt: But tonight they have an even more dire warning about ignoring the perils of artificial intelligence.

Harris: It would be the worst of all human mistakes to have ever been made. And we literally don't know how it works and we don't know all the things it will do. And we're putting it out there before we actually know whether it's safe.

Holt: Raskin points to a recent survey of AI researchers, where nearly half said they believe there's at least a 10% chance AI could eventually result in an extremely bad outcome like human extinction. Where do you come down on that?

Aza Raskin: I don't know!

Holt: That's scary to me, you don't know.

Raskin: Yeah, well, here's the point. Imagine you're about to get on an airplane and 50% of the engineers that built the airplane say there's a 10% chance that their plane might crash and kill everyone.

Holt: Leave me at the gate!

Raskin: Yeah, right, exactly!

Holt: AI tools can already mimic voices, ace exams, create art, and diagnose diseases. And they're getting smarter every day.

Raskin: In two years, by the time of the election, human beings will not be able to tell the difference between what is real and what is fake.

Holt: Who's building the guardrails here?

Harris: No one is building the guardrails, and this has moved so much faster than our government has been able to understand or appreciate.

It's important to note the CEOs of the major AI labs—they've ...

Yoga Wisdom with Acharya das
#195 Leave a Comment Below – the age of rage

Yoga Wisdom with Acharya das

Apr 2, 2022 · 58:12


The wide use of the social media mechanism of "leave a comment below" in all its different forms, including Twitter, has significantly contributed to the current breakdown of social etiquette and damaged communication between people. This poisoning of the well of social discourse has contributed to a serious loss of peace of mind and the erosion of happiness across much of the population. As Tristan Harris, the former Design Ethicist at Google, has stated, this is part of Big Tech's strategy and business model, helping it "to create a society that is addicted, outraged, polarized, performative and disinformed." From a spiritual perspective, this is incredibly damaging to people and disruptive to spiritual growth. We need to continuously evaluate whatever we are hearing, seeing, or reading: Does it heighten my emotional responses? Does it appeal to or stimulate my baser instincts? Does it create division (team spirit), or does it encourage empathy, compassion, and kindness? Is it based on spiritual understanding, or does it promote ignorance?

No Turning Back
Tristan Harris on technology's potential and power

No Turning Back

Mar 30, 2021 · 59:35


This week, Stan and Chris speak with Tristan Harris, Co-Founder and President of the Center for Humane Technology, the primary subject of the Netflix documentary "The Social Dilemma," and former Design Ethicist at Google. He brings years of experience in the technology sector to the conversation, focusing on social media's pervasive influence on how we think and interact. From the addictive nature of our devices to the worldwide influence of something as small as the "like" button, Tristan discusses how the power our technology holds is beyond measure. We release our McChrystal Group Leader's Journal every other Wednesday with new No Turning Back episodes and thought leadership pieces. Join us for valuable insights on leadership here: https://www.mcchrystalgroup.com/newsletter/

Studio 1.0
The Social Solution

Studio 1.0

Oct 23, 2020 · 27:32


Emily Chang sits down with Silicon Valley veterans profiled in Netflix's hit film "The Social Dilemma": Tristan Harris, former Design Ethicist at Google and now Co-Founder of the Center for Humane Technology, and Tim Kendall, former Head of Monetization at Facebook and former President of Pinterest, now CEO of Moment. They are joined by Safiya Noble, Associate Professor at UCLA and author of a best-selling book on bias in technology, to discuss the impact of social media on society.

Rebel Wisdom
Can Truth Survive Big Tech? Tristan Harris

Rebel Wisdom

Jul 22, 2020 · 62:48


How are social media and big tech pushing us towards conflict and polarisation?   Tristan Harris has been called the "conscience of Silicon Valley". He worked for Google as a Design Ethicist until 2016, when he became so concerned about the direction tech was taking us that he quit to set up the Center for Humane Technology.   In this conversation with Rebel Wisdom's David Fuller, they discuss how the incentive structure of the "attention economy" is turning us into different kinds of beings, and how, if we can't recover some human values, the future for humanity looks bleak.   Links:   Center for Humane Technology: https://humanetech.com/   Tristan's TED Talk, How a Handful of Tech Companies Control Billions of Minds Every Day: https://www.ted.com/talks/tristan_har...   Interview with Tim Ferriss: https://tim.blog/2019/09/19/tristan-h...   To get access to more exclusive content, become a Rebel Wisdom subscriber: https://www.rebelwisdom.co.uk/plans   We've also just launched the Rebel Wisdom store! Buy T-shirts and more at https://shop.rebelwisdom.co.uk

Singularity.FM
Ex-Google Design Ethicist Tristan Harris on Technology and Human Downgrading

Singularity.FM

Jun 16, 2019 · 70:02


Tristan Harris is one of my heroes. And I don’t know about you but I am much more demanding and harder on my heroes. I just expect them to hold themselves to a higher standard, to know more, to do more, to be more and, perhaps most of all, to live and breathe their own […]

Ihmisiä, siis eläimiä
#15: Henry Vistbacka. Communication technology, meaningfulness, stories

Ihmisiä, siis eläimiä

Feb 16, 2018 · 60:07


Support the making of the podcast on Patreon. Even a small contribution helps! https://www.patreon.com/vistbacka Video version: https://www.youtube.com/watch?v=GfPZfudWd2U RSS: http://feeds.soundcloud.com/users/soundcloud:users:358481639/sounds.rss In the podcast's 15th episode, host Henry Vistbacka holds a conversation with himself. The episode was recorded on November 27, 2017. Themes discussed in the episode: • Spontaneity • Podcasting • Species-typical postures • Digital communication technologies and their critique • User interfaces • Embodiment • The addictiveness of social media • The dehumanizing effect of faceless communication • Lynch mobs • Meaningfulness • Metamodernism • The grand narrative • Atomization • Contradiction • Cynicism • Filter bubbles • Narrativity • Human potential • Excessive individualism and excessive collectivism • Basic income and automation • Doing things together and participation • Ecological crises as a unifying narrative • Gamification • Collapse • Cynical optimism • Trust in life • Throwing yourself in • Failure. Links from the discussion: • Interview with Jaron Lanier: https://www.nytimes.com/2017/11/08/style/jaron-lanier-new-memoir.html • Jaron Lanier's book "Dawn of the New Everything": https://www.goodreads.com/book/show/23848323-dawn-of-the-new-everything • Imogen Heap's MIDI suit: https://www.youtube.com/watch?v=6btFObRRD9k • A post about revenge: https://www.facebook.com/erisgumma/posts/10155357740333772 • The article "The Making of an American Nazi": https://www.theatlantic.com/magazine/archive/2017/12/the-making-of-an-american-nazi/544119/ • Tristan Harris on Sam Harris's podcast: https://www.youtube.com/watch?v=jlPF9_1VIso • Tristan Harris's project Time Well Spent / Center for Humane Technology: http://humanetech.com/ • Tristan Harris's article "How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist": http://www.tristanharris.com/2016/05/how-technology-hijacks-peoples-minds%E2%80%8A-%E2%80%8Afrom-a-magician-and-googles-design-ethicist/ • The Meaningness hyperbook: https://meaningness.com/ • The Ihmisiä, siis eläimiä episode with Lilja Tamminen: https://www.youtube.com/watch?v=TpanwjVAilU • The Meaningness hyperbook's article on atomization: https://meaningness.com/atomized-mode • The story of the ten chimpanzees, describing an experiment that apparently was never actually conducted: https://skeptics.stackexchange.com/questions/6828/was-the-experiment-with-five-monkeys-a-ladder-a-banana-and-a-water-spray-condu • The Patreon page where you can support the making of this podcast: http://patreon.com/inspiraatiokanava • My Facebook page: https://facebook.com/erisgumma • My Twitter page: https://twitter.com/erisgumma ----- The Ihmisiä, siis eläimiä ("Humans, that is, animals") podcast loves perspectives that broaden understanding. Driven by a deep thirst for knowledge, the show's vision is to create slower media that digs to the heart of things. The podcast's central themes are science and art, the ordinary and the extraordinary, the individual and society, and humans and the rest of nature. Its host, the curious if incompletely enlightened Henry Vistbacka, is a jack of many trades, musician, and writer. The podcast's partner is Artlab, an artistic production company headquartered in Vallila, Helsinki, that uses science as its raw material. • Facebook: https://facebook.com/ihmisiis • Twitter: https://twitter.com/ihmisiis • Instagram: https://www.instagram.com/ihmisiis • Soundcloud: https://soundcloud.com/ihmisiis • Kieku: https://www.kieku.com/channel/Ihmisi%C3%A4%2C%20siis%20el%C3%A4imi%C3%A4 • The studio behind the podcast: https://artlab.fi

WiTcast
WiTcast ep 53.2 – The dark side, the bright side, and finding the right balance with technology / continuing the conversation with the iThai gang

WiTcast

Jun 12, 2017 · 103:46


  TIME STAMP 0:00 What is blockchain? 11:20 What are the deep web and the dark web? 17:55 The future and ethics of self-driving cars 26:47 The advertising techniques of Google and Facebook 36:08 The downsides of a social media world competing for our attention (the attention economy) / Will society adjust on its own and everything turn out fine, or are there corners worth worrying about? 52:11 Can long-form content like podcasts survive? / Facebook Live 58:10 The huge amount of behavioral data on people these days / digital marketing budget statistics in Thailand 1:00:35 The limitations of natural selection / leaving an open-ended question 1:15:32 Will AI take people's jobs? 1:31:02 Chatbots 1:34:14 WiTapp progress 1:40:10 WiT Game. SHOW NOTE: Continuing the conversation with this gang. Left to right: Tuk, Nat, Mac App, Tanthai, and Mac Tai, with Aban behind the camera. Blockchain. Deep Web, Dark Web -1,2,3,4,5,6. Self-driving cars, driverless cars: https://www.youtube.com/watch?v=tP7VdxVY6UQ https://www.youtube.com/watch?v=uHbMt6WDhQ8 Apple car? These are concept images people have had fun imagining. The segment debating the downsides of the social media world: will society adjust on its own and everything be fine, as it largely already is, or are there corners worth worrying about? My concerns come from Sam Harris's podcast conversation with Tristan Harris, who worked at Google as a Design Ethicist and worries that if the tech giants are left to keep competing for users' attention, it could lead to a world that is ever harder to live in, especially when it comes to planning a meaningful use of one's time: https://www.youtube.com/watch?v=jlPF9_1VIso On natural selection, I recommend reading the works of Richard Dawkins and Matt Ridley. Will AI take people's jobs? I recommend Sam Harris's TED talk and podcast: https://www.youtube.com/watch?v=8nt3edWLgIg https://www.youtube.com/watch?v=Ih_SPciek9k https://www.youtube.com/watch?v=msL1rbgJLlM Live video of the WiTapp launch ceremony: https://www.facebook.com/witcastthailand/videos/1521274014602098/ When you've finished listening, share your thoughts in this post: https://www.facebook.com/witcastthailand/photos/a.384378794958298.93979.380263635369814/1526046187458214/?type=3

WiTcast
WiTcast ep 53.1 – Talking with the Thai IT gang (iThai) about life-changing technology trends

WiTcast

May 27, 2017 · 74:37


SHOW NOTE Members from left to right: Tuk, Nat, Mac App, Tanthai, and Mac Tai, with Aban behind the camera. Links to things mentioned in the episode: Rabbit Digital Group, at their office The Burrow; the software house CUPCODE and its CUPCAST PODCAST; Tuk (left), Nat (right). Defining IT (Information Technology) both academically and loosely, as ordinary people understand it. A "check before you share" fact-checking site and an "end of the story" news page. Sam Harris's podcast conversation with Tristan Harris, who worked at Google as a Design Ethicist and worries that if the tech giants are left to keep competing for users' attention, it could lead to a world that is ever harder to live in, especially when it comes to planning a meaningful use of one's time: https://www.youtube.com/watch?v=jlPF9_1VIso Gardner's article on the Top 10 technology trends of 2017. First trend: intelligent AI and smart everything. Japan's Gatebox, an AI assistant that looks after us, with the voice and personality of a cartoon character: https://www.youtube.com/watch?v=nkcKaNqfykg Google Now: https://www.youtube.com/watch?v=pPqliPzHYyc Trend 2 - Digital: Augmented Reality (AR): https://www.youtube.com/watch?v=4EvNxWhskf8 Virtual Reality (VR): https://www.youtube.com/watch?v=oVLXtyEyPvc Mixed Reality (MR), for example ZapBox: https://www.youtube.com/watch?v=_SQycGTSaEU Microsoft's HoloLens: https://www.youtube.com/watch?v=4p0BDw4VHNo Recommended: Black Mirror Season 3, Episode 2, "Playtest". Facebook Spaces, for meeting in the virtual world: https://www.youtube.com/watch?v=PVf3m7e7OKU https://www.youtube.com/watch?v=QtiM_DaBtLE Brain-machine interfaces: Elon Musk and Neuralink: https://www.youtube.com/watch?v=TYJEoAckT6o https://www.youtube.com/watch?v=S6n26bBBr80 https://www.youtube.com/watch?v=ZrGPuUQsDjo https://www.youtube.com/watch?v=kuZKtY6Bc4Y Facebook brain typing: https://www.youtube.com/watch?v=uEuLBNg9xxk To expand a little on what I said in the show: I have always followed and cheered for brain-computer technology, especially for helping paralyzed patients control robotic arms or use computers, which has existed for quite a while. But when I said it probably won't happen within ten years, I meant ordinary people routinely undergoing surgery to implant chips in their brains; I can't picture that yet. What could arrive within ten years is rather brain-wave-reading technology that requires no surgery, which should be able to advance fairly far: enough, say, to replace the keyboard and mouse, control game characters, or operate certain machinery. What is still far, far away is inputting electrical signals to stimulate the brain so that we see images, hear sounds, smell, and feel heat, cold, and touch as programmed (as in The Matrix). That would require a brain interface and signaling system of absurdly fine resolution, at the cellular level or below, plus an extremely detailed understanding of how the brain works, which we do not have, not to mention accounting for the differences between individual brains. And you would still have to answer a question: take sound, for instance; why stimulate the auditory cortex directly when simply plugging in headphones already sounds perfectly good? That said, I still root for humanity to develop beyond the flesh into cyborgs, perhaps even to the point of uploading minds to a virtual world on a server; I just don't think it is happening any time soon. Google devising a new interlingua. The collection of personal data by Facebook, Google, and wearable devices such as smartwatches. The news story of a seizure-inducing image sent over Twitter. Health data: a watch that can tell by itself what we ate and how many calories: https://www.youtube.com/watch?v=beNq3iQVGPQ Telemedicine / Telehealth. Trend 3 - Mesh: Mesh App / Service Architecture; central API systems; PromptPay / eMoney; cashless society -1,2 https://www.facebook.com/witcastthailand/photos/a.384378794958298.93979.380263635369814/1511105498952283/?type=3

Making Sense with Sam Harris - Subscriber Content
#71 - What is Technology Doing to Us?

Making Sense with Sam Harris - Subscriber Content

Apr 14, 2017 · 105:55


Tristan Harris has been called the “closest thing Silicon Valley has to a conscience,” by The Atlantic magazine. He was the Design Ethicist at Google and left the company to lead Time Well Spent, where he focuses on how better incentives and design practices can create a world that helps us spend our time well. Harris’s work has been featured on 60 Minutes and the PBS NewsHour, and in many journals, websites, and conferences, including: The Atlantic, ReCode, TED, the Economist, Wired, the New York Times, Der Spiegel, and the New York Review of Books. He was rated #16 in Inc Magazine’s “Top 30 Entrepreneurs Under 30” in 2009 and holds several patents from his work at Apple, Wikia, Apture and Google. Harris graduated from Stanford University with a degree in Computer Science.

This Much I Know - The Seedcamp Podcast
Tristan Harris on product ethics & morality in design

This Much I Know - The Seedcamp Podcast

Mar 2, 2017 · 44:28


How do you organise a billion people’s life choices? For tech giants like Google, Facebook or Apple that isn’t a flippant question: incremental design choices can have monumental significance when introduced at the scale such companies command. ‘They’re urban planners,’ argues Tristan Harris, ‘designing this invisible city that a billion people live inside, and they don’t know it.’ Called the ‘closest thing Silicon Valley has to a conscience,’ by The Atlantic magazine, Tristan Harris was formerly a Design Ethicist at Google and is now a leader in Time Well Spent, a movement to align technology with our humanity. Time Well Spent aims to heighten consumer awareness about how technology shapes our minds, empower consumers with better ways to use technology and change business incentives and design practices to align with humanity’s best interest. Previously, Tristan was CEO of Apture, which Google acquired in 2011. Speaking to Seedcamp partner Carlos Espinal, Tristan calls for greater recognition by technology giants and software engineers of their ethical responsibilities. He argues that companies like Facebook, YouTube and Snapchat, which jostle aggressively for command over people’s attention in a zero-sum game, are in a ‘nuclear arms race’, or ‘race to the bottom of the brainstem’. Given the perverse incentives such companies face – such as deploying ‘drip-by-drip’, instant notifications to sustain people’s interest – Tristan encourages the creation of coordinating mechanisms, akin to the Geneva Convention, and the adoption of shared norms to clean up what he calls the ‘pollution in the attention economy’. Show notes: Carlos Medium: sdca.mp/2entVR3 Seedcamp: www.seedcamp.com Time Well Spent: www.timewellspent.io Tristan Harris: www.tristanharris.com Related bio links: Carlos: linkedin.com/in/carloseduardoespinal / twitter.com/cee Tristan: linkedin.com/in/tristanharris / twitter.com/tristanharris

Presentable
Presentable 19: Design Ethics and the Race to the Bottom of the Brain Stem

Presentable

Mar 1, 2017 · 45:13


This week's special guest is Tristan Harris, former Design Ethicist at Google and founder of the Time Well Spent movement. We talk about ethics in design, and how even our best intentions in serving users can lead us into manipulative patterns.

Tagesform - über die Musik, das Leben und den ganzen Rest

After reading "How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist," I forbade (almost) all of my apps from sending me notifications. Today I talk about the thinking behind that.