POPULARITY
At the end of the 18th century, a chess robot astonished the world: the Mechanical Turk – a puppet dressed in Ottoman robes – beat even seasoned chess players. Today AI causes similar excitement: artificial intelligence that writes texts or steers a car. What the two have in common … The whole podcast at a glance: (00:01:58) The historical Mechanical Turk (00:09:39) Today the Mechanical Turk is an Amazon platform (00:11:11) Modern Mechanical Turk, part 1: AI training (00:14:06) Stressful work, poorly paid (00:18:35) Modern Mechanical Turk, part 2: content moderation (00:21:20) Ghost work and absent Indians (00:23:47) Even more modern Mechanical Turks
RJ and Elle explore the life and inventions of Wolfgang von Kempelen: his speaking machine, the great-great-great-great-great-great-great-grandfather of Hatsune Miku, and his wondrous "Mechanical Turk," a chess-playing automaton that stunned the world by defeating chess legends and world leaders. RJ then reveals the Mechanical Turk's secret, and Elle discusses the implications. RJ gives a brief overview of fortune-telling arcade machines, such as Zoltar. RJ and Elle then venture to a nearby mall to consult a Zoltar machine for two readings. Support the show
When did automatons first emerge? Which science-fiction depiction of robots is the most accurate? And why did so many people fall for a hoax machine called the "Mechanical Turk"? In conversation with Spencer Mizen, author and academic Kerry McInerney answers some of the most pressing questions on the history of robots. Michael Wooldridge explores our fascination with – and fear of – AI: https://link.chtbl.com/vUZSSfjm. The HistoryExtra podcast is produced by the team behind BBC History Magazine. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Ben recounts his adventures in Australia at Skepticon, and then Celestia brings up bird flu-infected raw milk, a closing Bigfoot museum, and ghosts that pay rent. Our main guest is esteemed cybersecurity expert, deceptionologist, folklorist, magician, and author Perry Carpenter. He discusses topics from his new book FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation and AI-Generated Deceptions. We talk deepfakes versus cheapfakes and how AI is similar to cold reading. We learn the three H's programmers aim for (helpful, honest, harmless) and how they don't always hit the mark. And Perry describes "the Liar's Dividend" and "Emergence," and how they relate to our near future if not already our present.
Peter Doggers is an award-winning chess journalist who is currently the director of News and Events at Chess.com. His role has given him a ringside seat to modern chess history, as he has attended many of the world's top events, and interviewed the likes of GM Garry Kasparov, Hikaru Nakamura, and countless others. Peter's new book, The Chess Revolution: From the Ancient World to the Digital Age covers everything from the beginnings of chess, to its many representations in pop culture, to the explosion in popularity the game has experienced in the internet age. Peter's excellent book covers chess from all angles, so we zeroed in on a few of my favorite topics from the book including: the history of chess and computers, recent stories around chess cheating, and their implications for the future of chess, the rise of Chess.com, and the story behind the making of Peter's book. Every chess fan will learn something from Peter's book, and I greatly enjoyed discussing it with him. Timestamps of topics discussed are below. Thanks to our sponsors, Chessable.com. Check out their new offerings including GM Garry Kasparov's much anticipated Chessable debut! If you sign up for Chessable Pro, please use the following link to help support Perpetual Chess: https://www.chessable.com/pro/?utm_source=affiliate&utm_medium=benjohnson&utm_campaign=pro 0:03- We begin by discussing allegations of GM Kirill Shevchenko cheating at an OTB tournament 11:00- We discuss OTB and online cheating more generally 16:00- Peter tells a story from his book, of the famous Mechanical Turk, which was an 18th-century chess-playing machine with a human hidden behind it. https://www.google.com/search?client=firefox-b-1-d&q=the+turk+chess 21:00- We discuss the famous Kasparov-Deep Blue match as well as another watershed “man vs. machine” match featuring GM Michael Adams. Mentioned: The TV series Rematch- more info here: https://en.wikipedia.org/wiki/Rematch_(miniseries) 25:00- Patreon mailbag question: Where does Peter see chess heading in the next 10 years? Mentioned: IM Daniel Rensch's interview with Ilya Levitov https://www.youtube.com/watch?v=6JeMsV9aNec 32:00- Peter tells the story of what he believes to be the first online encounter between Magnus and Hikaru in 2004 on the Internet Chess Club. We also discuss a “Secret” Magnus-Hikaru blitz match. Mentioned: YouTube excerpt from Macauley Peterson https://www.youtube.com/watch?v=tGXvcQP6VPo 37:00- The history of Chess.com and LiChess Mentioned: Lichess founder Thibault Duplessis 39:00- We discuss the chess GOATS- Magnus, Fischer and Kasparov. As discussed in last week's episode with FM Charles Hertan, did he consider Morphy for the GOAT list? 44:00- What was the biggest challenge in writing the book and compiling the material? 52:00- How did Peter find a publisher, that ultimately wound up in a bidding war? 58:00- Did Peter write in English or Dutch? 1:00:00- Peter's favorite chapters? 1:02:00- Peter's plans for an OTB comeback. 1:04:00- Thanks to Peter for joining! Here is how to reach him: You email Peter at Peter at chess dot com peterdoggers.com Chess.com profile- https://www.chess.com/players/peter-doggers Get the book here- https://www.amazon.com/s?k=the+chess+revolution+peter+doggers&crid=2PYSZWX9BPES&sprefix=the+chess+rev%2Caps%2C164&ref=nb_sb_ss_ts-doa-p_1_13 Learn more about your ad choices. Visit podcastchoices.com/adchoices
Dr. Obsoleto : Amiga 500, Mechanical Turk, 80s, Detroit Night Shift
From CAPTCHAs to shadow banning, Scott and Wes break down the best strategies for keeping your app safe from spam and fraud. They cover tools like email verification, rate limiting, shadow banning, and more to protect your users and data. Show Notes 00:00 Welcome to Syntax! 00:15 The problem. 02:47 Brought to you by Sentry.io. 04:23 CAPTCHA. 06:24 Data collection. 07:11 Stripe Radar score calculation. 09:12 Rate limiting. 11:34 Shadow banning. 13:53 Email verification. 15:38 Tie to accounts. 16:23 Tied to real identity. 19:36 Manual approval. 21:19 Blocking ASN. 23:17 Honey pot field. 24:28 SMS verification. 25:05 Mechanical Turk. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
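Two of the techniques from that list are simple enough to sketch in code. Below is a minimal TypeScript sketch of a honey pot field check and a fixed-window rate limiter; the form shape, field names, and limits are illustrative assumptions, not from the episode.

```ts
// Honeypot field: hidden via CSS, so real users leave it empty while
// naive bots tend to fill it in. The field name is hypothetical.
interface SignupForm {
  email: string;
  companyWebsite?: string; // the hidden honeypot field
}

function isLikelyBot(form: SignupForm): boolean {
  return Boolean(form.companyWebsite?.trim());
}

// Fixed-window rate limiter keyed by IP. In-memory only; a shared
// store such as Redis would be needed across multiple instances.
const hits = new Map<string, { count: number; windowStart: number }>();

function allowRequest(ip: string, limit = 10, windowMs = 60_000): boolean {
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > windowMs) {
    hits.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= limit;
}
```

Both checks are cheap enough to run before the heavier options in the list, like manual approval, SMS verification, or a Mechanical Turk review.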
Wolfgang von Kempelen was unimpressed. Empress Maria Theresa had invited him to attend a magic show, hoping he'd tell her how the tricks worked. Instead, he told her – and everyone else in her court – that the tricks just plain sucked. He claimed he could do better. Stunned, Maria gave him six months off work to create something that would dazzle her court. So, six months later, Wolfgang von Kempelen showed up with what he claimed was an automaton chess-playing machine. It would soon become known simply as "The Turk." People were amazed. A machine that could play chess??? The Turk soon developed a life of its own. Remember, kids, history hoes always cite their sources! For this episode, Kristin pulled from: The book "The Turk: The Life and Times of the Famous Eighteenth-Century Chess-Playing Machine," by Tom Standage; "How a phony 18th-century chess robot fooled the world," by Evan Andrews for History.com; "The mechanical chess player that unsettled the world," by Ella Morton for Slate.com; "Debunking the Mechanical Turk helped set Edgar Allan Poe on the path to mystery writing," by Kat Eschner for Smithsonian Magazine; "The Mechanical Turk: AI Marvel or Parlor Trick," Britannica; and "Turkish Gambit," by Dick Teresi for The New York Times. Are you enjoying An Old Timey Podcast? Then please leave us a 5-star rating and review wherever you listen to podcasts! Are you *really* enjoying An Old Timey Podcast? Well, calm down, history ho! You can get more of us on Patreon at patreon.com/oldtimeypodcast. At the $5 level, you'll get a monthly bonus episode (with video!), access to our 90s-style chat room, plus the entire back catalog of bonus episodes from Kristin's previous podcast, Let's Go To Court.
The AI startup scene is bonkers. Investors are pouring so much money into AI startup companies that some of those businesses are making unsubstantiated AI claims. We explore stories of a few companies that weren't as AI-focused as they initially claimed to be. See omnystudio.com/listener for privacy information.
Rob Lawrence is the Founder of Good Engine.xyz, which is a first-of-its-kind generative offer comparison engine built around a
Johnny 5 - "." BMO - "." Conky 2000 - "." Wall-E - "." Bender - "." Gumby Robots - "." T-1000 - "." SAM The Wake & Bake Morning Show Computer K9 - "." Iron Giant - "." Transformers OLD ONES Mr. Roboto - "." Crow T. Robot ROOMBA Paulie's Robot - "." Robbie Junior - "." Mechanical Turk - "." Fuureal - "." Nanobot Alpha 5 - "." Furby - "." Rosie the Robot - "." Ottoman - "." Mr. Machine - "." Robot from Lost In Space - "." Pepper the Social Robot - "." Crying baby Dolls - "." Gigantor - "." Replicant??? - "." Tobor The Great - "." Dingaling Fireman - "." Master Cylinder - "." Thermostat - "." NEW TRANSFORMERS - "." https://www.wfmu.org/playlists/shows/140615
Check out this episode, mate! This week's gambit involves Romanian stand-up comedian and chess enthusiast Radu Isac joining us to discuss the world's worst chess grandmaster. We blitz our way through various aspects of the game's history, from its multiple weird adaptations to those players who cheated using everything from phones to sleight-of-hand, and there is even an appearance by our old friend The Mechanical Turk. Follow us on Twitter: @worstfoot @bazmcstay @VanderLaugh @raduisac Follow us on Instagram: @worstfoot Join us on our Discord server! https://discord.gg/9buWKthgfx Visit www.worstfootforwardpodcast.com for all previous episodes and you can donate to us on Patreon if you'd like to support the show during this whole pandemic thing, and especially as we work on our first book and plan some live shows! https://www.patreon.com/WorstFootForward Worst Foot Forward is part of Podnose: www.podnose.com
“We haven't invested this much money into an infrastructure like this really until you go back to the pyramids”—Kate CrawfordTranscript with links to audio and external links. Ground Truths podcasts are on Apple and Spotify. The video interviews are on YouTube Eric Topol (00:06):Well, hello, this is Eric Topol with Ground Truths, and I'm really delighted today to welcome Kate Crawford, who we're very lucky to have as an Australian here in the United States. And she's multidimensional, as I've learned, not just a scholar of AI, all the dimensions of AI, but also an artist, a musician. We're going to get into all this today, so welcome Kate.Kate Crawford (00:31):Thank you so much, Eric. It's a pleasure to be here.Eric Topol (00:34):Well, I knew of your work coming out of the University of Southern California (USC) as a professor there and at Microsoft Research, and I'm only now learning about all these other things that you've been up to including being recognized in TIME 2023 as one of 100 most influential people in AI and it's really fascinating to see all the things that you've been doing. But I guess I'd start off with one of your recent publications in Nature. It was a world view, and it was about generative AI is guzzling water and energy. And in that you wrote about how these large AI systems, which are getting larger seemingly every day are needing as much energy as entire nations and the water consumption is rampant. So maybe we can just start off with that. You wrote a really compelling piece expressing concerns, and obviously this is not just the beginning of all the different aspects you've been tackling with AI.Exponential Growth, Exponential Concerns Kate Crawford (01:39):Well, we're in a really interesting moment. What I've done as a researcher in this space for a very long time now is really introduce a material analysis of artificial intelligence. So we are often told that AI is a very immaterial technology. It's algorithms in the cloud, it's objective mathematics, but in actual fact, it comes with an enormous material infrastructure. And this is something that I took five years to research for my last book, Atlas of AI. It meant going to the mines where lithium and cobalt are being extracted. It meant going into the Amazon fulfillment warehouses to see how humans collaborate with robotic and AI systems. And it also meant looking at the large-scale labs where training data is being gathered and then labeled by crowd workers. And for me, this really changed my thinking. It meant that going from being a professor for 15 years focusing on AI from a very traditional perspective where we write papers, we're sitting in our offices behind desks, that I really had to go and do these journeys, these field trips, to understand that full extractive infrastructure that is needed to run AI at a planetary scale.(02:58):So I've been keeping a very close eye on what would change with generative AI and what we've seen particularly in the last two years has been an extraordinary expansion of the three core elements that I really write about in Atlas, so the extraction of data of non-renewable resources, and of course hidden labor. So what we've seen, particularly on the resources side, is a gigantic spike both in terms of energy and water and that's often the story that we don't hear. We're not aware that when we're told about the fact that there gigantic hundred billion computers that are now being developed for the next stage of generative AI that has an enormous energy and water footprint. 
So I've been researching that along with many others who are now increasingly concerned about how we might think about AI more holistically.Eric Topol (03:52):Well, let's go back to your book, which is an extraordinary book, the AI Atlas and how you dissected not just the well power of politics and planetary costs, but that has won awards and it was a few years back, and I wonder so much has changed since then. I mean ChatGPT in late 2022 caught everybody off guard who wasn't into this knowing that this has been incubating for a number of years, and as you said, these base models are just extraordinary in every parameter you can think about, particularly the computing resource and consumption. So your concerns were of course registered then, have they gone to exponential growth now?Kate Crawford (04:45):I love the way you put that. I think you're right. I think my concerns have grown exponentially with the models. But I was like everybody else, even though I've been doing this for a long time and I had something of a heads up in terms of where we were moving with transformer models, I was also quite taken aback at the extraordinary uptake of ChatGPT back in November 2022 in fact, gosh, it still feels like yesterday it's been such an extraordinary timescale. But looking at that shift to a hundred million users in two months and then the sort of rapid competition that was emerging from the major tech companies that I think really took me by surprise, the degree to which everybody was jumping on the bandwagon, applying some form of large language model to everything and anything suddenly the hammer was being applied to every single nail.(05:42):And in all of that sound and fury and excitement, I think there will be some really useful applications of these tools. But I also think there's a risk that we apply it in spaces where it's really not well suited that we are not looking at the societal and political risks that come along with these approaches, particularly next token prediction as a way of generating knowledge. And then finally this bigger set of questions around what is it really costing the planet to build these infrastructures that are really gargantuan? I mean, as a species, we haven't invested this much money into an infrastructure like this really until you go back to the pyramids, you really got to go very far back to say that type of just gargantuan spending in terms of capital, in terms of labor, in terms of all of the things are required to really build these kinds of systems. So for me, that's the moment that we're in right now and perhaps here together in 2024, we can take a breath from that extraordinary 18 month period and hopefully be a little more reflective on what we're building and why and where will it be best used.Propagation of BiasesEric Topol (06:57):Yeah. Well, there's so many aspects of this that I'd like to get into with you. I mean, one of course, you're as a keen observer and activist in this whole space, you've made I think a very clear point about how our culture is mirrored in our AI that is our biases, and people are of course very quick to blame AI per se, but it seems like it's a bigger problem than just that. Maybe you could comment about, obviously biases are a profound concern about propagation of them, and where do you see where the problem is and how it can be attacked?Kate Crawford (07:43):Well, it is an enormous problem, and it has been for many years. I was first really interested in this question in the era that was known as the big data era. 
So we can think about the mid-2000s, and I really started studying large scale uses of data in scientific applications, but also in what you call social scientific settings using things like social media to detect and predict opinion, movement, the way that people were assessing key issues. And time and time again, I saw the same problem, which is that we have this tendency to assume that with scale comes greater accuracy without looking at the skews from the data sources. Where is that data coming from? What are the potential skews there? Is there a population that's overrepresented compared to others? And so, I began very early on looking at those questions. And then when we had very large-scale data sets start to emerge, like ImageNet, which was really perhaps the most influential dataset behind computer vision that was released in 2009, it was used widely, it was freely available.(09:00):That version was available for over a decade and no one had really looked inside it. And so, working with Trevor Paglen and others, we analyzed how people were being represented in this data set. And it was really quite extraordinary because initially people are labeled with terms that might seem relatively unsurprising, like this is a picture of a nurse, or this is a picture of a doctor, or this is a picture of a CEO. But then you look to see who is the archetypical CEO, and it's all pictures of white men, or if it's a basketball player, it's all pictures of black men. And then the labeling became more and more extreme, and there are terms like, this is an alcoholic, this is a corrupt politician, this is a kleptomaniac, this is a bad person. And then a whole series of labels that are simply not repeatable on your podcast.(09:54):So in finding this, we were absolutely horrified. And again, to know that so many AI models had trained on this as a way of doing visual recognition was so concerning because of course, very few people had even traced who was using this model. So trying to do the reverse engineering of where these really problematic assumptions were being built in hardcoded into how AI models see and interpret the world, that was a giant unknown and remains to this day quite problematic. We did a recent study that just came out a couple of months ago looking at one of the biggest data sets behind generative AI systems that are doing text to image generation. It's called LAION-5B, which stands for 5 billion. It has 5 billion images and text captions drawn from the internet. And you might think, as you said, this will just mirror societal biases, but it's actually far more weird than you might imagine.(10:55):It's not a representative sample even of the internet because particularly for these data sets that are now trying to use the ALT tags that are used around images, who uses ALT tags the most on the internet? Well, it's e-commerce sites and it's often stock image sites. So what you'll see and what we discovered in our study was that the vast majority of images and labels are coming from sites like Shopify and Pinterest, these kind of shopping aspirational collection sites. And that is a very specific way of seeing the world, so it's by no means even a perfect mirror. It's a skewed mirror in multiple ways. 
And that's something that we need to think of particularly when we turn to more targeted models that might be working in say healthcare or in education or even in criminal justice, where we see all sorts of problems emerge.Exploiting Humans for RLHFEric Topol (11:51):Well, that's really interesting. I wonder to extend that a bit about the human labor side of this. Base models are tweaked, fine-tuned, and one of the ways to do that, of course is getting people to weigh in. And this has been written about quite a bit about how the people that are doing this can be exploited, getting wages that are ridiculously weak. And I wonder if you could comment about that because in the ethics of AI, this seems to be one of the many things that a lot of people don't realize about reinforcement learning.Kate Crawford (12:39):Oh, I completely agree. It's quite an extraordinary story. And of course now we have a new category of crowd labor that's called reinforcement learning with human feedback or RLHF. And what was discovered by multiple investigations was that these laborers are in many cases paid less than $2 an hour in very exploitative conditions, looking at results that in many cases are really quite horrifying. They could be accounts of murder, suicide, trauma, this can be visual material, it can be text-based material. And again, the workers in these working for these companies, and again, it's often contract labor, it's not directly within a tech company, it's contracted out. It's very hidden, it's very hard to research and find. But these laborers have been experiencing trauma and are really now in many cases bringing lawsuits, but also trying to unionize and say, these are not acceptable conditions for people to be working under.(13:44):So in the case of OpenAI, it was found that it was Kenyan workers who were doing this work for just poverty wages, but it's really across the board. It's so common now that humans are doing the hard work behind the scenes to make these systems appear autonomous. And that's the real trap that we're being told that this is the artificial intelligence. But in actual fact, what Jeff Bezos calls Mechanical Turk is that it's artificial, artificial intelligence otherwise known as human beings. So that is a very significant layer in terms of how these systems work that is often unacknowledged. And clearly these workers in many cases are muzzled from speaking, they're not allowed to talk about what they do, they can't even tell their families. They're certainly prevented from collective action, which is why we've seen this push towards unionization. And finally, of course, they're not sharing in any of the profits that are being generated by these extraordinary new systems that are making a very small number of people, very wealthy indeed.Eric Topol (14:51):And do you know if that's improving or is it still just as bad as it has been reported? It's really deeply concerning to see human exploitation, and we all know well about sweatshops and all that, but here's another version, and it's really quite distressing.Kate Crawford (15:09):It really is. And in fact, there have been several people now working to create really almost like fair work guidelines. So Oxford has the sort of fair work initiative looking specifically at crowd work. They also have a rating system where they rate all of the major technology companies for how well they're treating their crowd laborers. 
And I have to say the numbers aren't looking good in the last 12 months, so I would love to see much more improvement there. We are also starting to see legislation be tabled specifically on this topic. In fact, Germany was one of the most recent to start to explore how they would create a strong legislative backing to make sure that there's fair labor conditions. Also, Chile was actually one of the first to legislate in this space, but you can imagine it's very difficult to do because it's a system that is operating under the radar through sort of multiple contracted chains. And even some of the people within tech companies will tell me it's really hard to know if they're working with a company that's doing this in the right way and paying people well. But frankly, I'd like to see far greater scrutiny otherwise, as you say, we're building on this system, which looks like AI sweatshops.Eric Topol (16:24):Yeah, no, I think people just have this illusion that these machines are doing everything by themselves, and that couldn't be further from the truth, especially when you're trying to take it to the next level. And there's only so much human content you can scrape from the internet, and obviously it needs additional input to take it to that more refined performance. Now, besides your writing and being much of a conscience for AI, you're also a builder. I mean, I first got to know some of your efforts through when you started the AI Now Institute. Maybe you can tell us a bit about that. Now you're onto the Knowing Machines Project and I don't know how many other projects you're working on, so maybe you can tell us about what it's like not just to be a keen observer, but also one to actually get initiatives going.Kate Crawford (17:22):Well, I think it's incredibly important that we start to build interdisciplinary coalitions of researchers, but sometimes even beyond the academic field, which is where I really initially trained in this space, and really thinking about how do we involve journalists, how do we involve filmmakers, how do we involve people who will look at these issues in really different ways and tell these stories more widely? Because clearly this really powerful shift that we're making as a society towards using AI in all sorts of domains is also a public issue. It's a democratic issue and it's an issue where we should all be able to really see into how these systems are working and have a say in how they'll be impacting our lives. 
So one of the things that I've done is really create research groups that are interdisciplinary, starting at Microsoft Research as one of the co-founders of FATE, a group that stands for fairness, accountability, transparency and ethics, and then the AI Now Institute, which was originally at NYU, and now with Knowing Machines, which is an international group, which I've been really delighted to build, rather than just purely focusing on those in the US because of course these systems are inherently transnational, they will be affecting global populations.(18:42):So we really need to think about how do you bring people from very different perspectives with different training to ask this question around how are these systems being built, who is benefiting and who might be harmed, and how can we address those issues now in order to actually prevent some of those harms and prevent the greatest risks that I see that are possible with this enormous turn to artificial intelligence everywhere?Eric Topol (19:07):Yeah, and it's interesting how you over the years are a key advisor, whether it's the White House, the UN or the European Parliament. And I'm curious about your experience because I didn't know much about the Paris ENS. Can you tell us about you were Visiting Chair, this is AI and Justice at the École Normale Supérieure (ENS), I don't know if I pronounce that right. My French is horrible, but this sounds like something really interesting.Kate Crawford (19:42):Well, it was really fascinating because this was the first time that ENS, which is really one of the top research institutions in Europe, had turned to this focus of how do we contend with artificial intelligence, not just as a technical question, but as a sort of a profound question of justice of society of ethics. And so, I was invited to be the first visiting chair, but tragically this corresponded with the start of the pandemic in 2020. And so, it ended up being a two-year virtual professorship, which is really a tragedy when you're thinking about spending time in Paris to be spending it on Zoom. It's not quite the same thing, but I had the great fortune of using that time to assemble a group of scholars around the world who were looking at these questions from very different disciplines. Some were historians of science, others were sociologists, some were philosophers, some were machine learners.(20:39):And really essentially assembled this group to think through some of the leading challenges in terms the potential social impacts and current social impacts of these systems. And so, we just recently published that through the academies of Science and Engineering, and it's been almost like a template for thinking about here are core domains that need more research. And interestingly, we're at that moment, I think now where we can say we have to look in a much more granular fashion beyond the hype cycles, beyond the sense of potential, the enormous potential upside that we're always hearing about to look at, okay, how do these systems actually work now? What kinds of questions can we bring into the research space so that we're really connecting the ideas that come traditionally from the social sciences and the humanistic disciplines into the world of machine learning and AI design. 
That's where I see the enormous upside that we can no longer stay in these very rigorously patrolled silos and to really use that interdisciplinary awareness to build systems differently and hopefully more sustainably as well.Is Working At Microsoft A Conflict?Eric Topol (21:55):Yeah, no, that's what I especially like about your work is that you're not a doomsday person or force. You're always just trying to make it better, but now that's what gets me to this really interesting question because you are a senior principal researcher at Microsoft and Microsoft might not like some of these things that you're advocating, how does that potential conflict work out?Kate Crawford (22:23):It's interesting. I mean, people often ask me, am I a technology optimist or a technology pessimist? And I always say I'm a technology realist, and we're looking at these systems being used. I think we are not benefited by discourses of AI doomerism nor by AI boosterism. We have to assess the real politic and the political economies into which these systems flow. So obviously part of the way that I've got to know what I know about how systems are designed and how they work at scale is through being at Microsoft Research where I'm working alongside extraordinary colleagues and all of whom come from, in many cases, professorial backgrounds who are deep experts in their fields. And we have this opportunity to work together and to look at these questions very early on in the kinds of production cycles and enormous shifts in the way that we use technology.(23:20):But it is interesting of course that at the moment Microsoft is absolutely at the leading edge of this change, and I've always thought that it's incredibly important for researchers and academics who are in industrial spaces to be able to speak freely, to be able to share what they see and to use that as a way that the industry can, well hopefully keep itself honest, but also share between what it knows and what everybody else knows because there's a giant risk in having those spaces be heavily demarcated and having researchers really be muzzled. I think that's where we see real problems emerge. Of course, one of the great concerns a couple of years ago was when Timnit Gebru and others were fired from Google for speaking openly about the concerns they had about the first-generation large language models. And my hope is that there's been a lesson through that really unfortunate set of decisions made at Google that we need people speaking from the inside about these questions in order to actually make these systems better, as you say, over the medium and long term.Eric Topol (24:26):Yeah, no, that brings me to thought of Peter Lee, who I'm sure because he wrote a book about GPT-4 and healthcare and was very candid about its potential, real benefits and the liabilities, and he's a very humble kind of guy. He's not one that has any bravado that I know of, so it speaks well to at least another colleague of yours there at Microsoft and their ability to see all the different sides here, not just what we'll talk about in a minute the arms race both across companies and countries. 
But before I get to that, there's this other part of you and I wonder if there's really two or three of you that is as a composer of music and art, I looked at your Anatomy of an AI System, I guess, which is on exhibit at the Museum of Modern Art (MoMA) in New York, and that in itself is amazing, but how do you get into all these other parts, are these hobbies or is this part of a main part of your creative work or where does it fit in?Kate Crawford (25:40):Eric, didn't I mention the cloning program that I participated in early and that there are many Kate's and it's fantastic we all work together. Yeah, that explains it. Look, it's interesting. Way back as a teenager, I was fascinated with technology. Of course, it was the early stages of the web at that moment, and I could see clearly that this was, the internet was going to completely change everything from my generation in terms of what we would do in terms of the way that we would experience the world. And as I was also at that time an electronic musician in bands, I was like, this was a really fantastic combination of bringing together creative practice with a set of much larger concerns and interests around at a systems level, how technology and society are co-constituted, how they evolve together and shape each other. And that's really been the map of how I've always worked across my life.(26:48):And it's interesting, I've always collaborated with artists and Vladan Joler who I worked with on anatomy of an AI system. We actually met at a conference on voice enabled AI systems, and it was really looking at the ethics of could it be possible to build an open source, publicly accessible version of say Alexa rather than purely a private model owned by a corporation, and could that be done in a more public open source way? And we asked a different question, we looked at each other and we're like, oh, I haven't met you yet, but I can see that there are some problems here. One of them is it's not just about the data and it's not just about the technical pipelines, it's about where the components come from. It's about the mining structures that needed to make all of these systems. It's about the entire end of life what happens when we throw these devices out from generally between three to four years of use and how they go into these giant e-waste tips.(27:51):And we basically started looking at this as an enormous sort of life and death of a single AI system, which for us started out by drawing these things on large pieces of butcher's paper, which just expanded and expanded until we had this enormous systems level analysis of what it takes just to ask Alexa what the weather is today. And in doing that, it taught me a couple of things. One that people really want to understand all of the things that go into making an AI system work. This piece has had a very long life. It's been in over a hundred museums around the world. It's traveled further than I have, but it's also very much about that broader political economy that AI systems aren't neutral, they don't just exist to serve us. 
They are often sort of fed into corporate structures that are using them to generate profits, and that means that they're used in very particular ways and that there are these externalities in terms of how they produced that linger in our environments that have really quite detrimental impacts on systems of labor and how people are recompensed and a whole range of relationships to how data is seen and used as though it's a natural resource that doesn't actually come from people's lives, that doesn't come with risks attached to it.(29:13):So that project was really quite profound for me. So we've continued to do these kinds of, I would call them research art projects, and we just released a new one called Calculating Empires, which looks at a 500 year history of technology and power looking specifically at how empires over time have used new technologies to centralize their power and expand and grow, which of course is part of what we're seeing at the moment in the empires of AI.Eric Topol (29:43):And what about the music side?Kate Crawford (29:45):Well, I have to say I've been a little bit slack on the music side. Things have been busy in AI Eric, I have to say it's kept me away from the music studio, but I always intend to get back there. Fortunately, I have a kid who's very musical and he's always luring me away from my desk and my research saying, let's write some music. And so, he'll keep me honest.Geopolitics and the Arms RacesEric Topol (30:06):Well, I think it's striking just because you have this blend of the humanities and you're so deep into trying to understand and improve our approaches in technology. And it seems like a very unusual, I don't know, too many techies that have these different dimensions, so that's impressive. Now let's get back to the arms race. You just were talking about tracing history over hundreds of years and empires, but right now we have a little problem. We have the big tech titans that are going after each other on a daily basis, and of course you know the group very well. And then you have China and the US that are vying to be the dominant force and problems with China accessing NVIDIA chips and Taiwan sitting there in a potentially very dangerous position, not just for Taiwan, but also for the US. And I wonder if you could just give us your sense about the tensions here. They're US based as well of course, because that's some of the major forces in companies, but then they're also globally. So we have a lot of stuff in the background that people don't like to think about, but it's actually happening right now.Kate Crawford (31:35):I think it's one of the most important things that we can focus on, in fact. I mean and again, this is why I think a materialist analysis of artificial intelligence is so important because not only does it force you to look at the raw components, where does the energy come from? Where does the water come from? But it means you're looking at where the chipsets come from. And you can see that in many cases there are these infrastructural choke points where we are highly dependent on specific components that sit within geopolitical flashpoints. And Taiwan is really the exemplar of this sort of choke point at the moment. And again, several companies are trying to address this by spinning up new factories to build these components, but this takes a lot of time and an enormous amount of resources yet again. 
So what we're seeing is I think a very difficult moment in the geopolitics of artificial intelligence.(32:31):What we've had certainly for the last decade has been almost a geopolitical duopoly. We've had the US and China not only having enormous power and influence in this space, but also goading each other into producing the most extreme forms of both data extractive and surveillance technologies. And unfortunately, this is just as true in the United States that I commonly hear this in rooms in DC where you'll hear advisors say, well, having any type of guardrails or ethical considerations for our AI systems is a problem if it means that China's going to do it anyway. And that creates this race to the bottom dynamic of do as much of whatever you can do regardless of the ethical and in some cases legal problems that will create. And I think that's been the dynamic that we've seen for some time. And of course the last 18 months to two years, we've seen that really extraordinary AI war happening internally in the United States where again, this race dynamic I think does create unfortunately this tendency to just go as fast as possible without thinking about potential downsides.(33:53):And I think we're seeing the legacy of that right now. And of course, a lot of the conversations from people designing these systems are now starting to say, look, being first is great, but we don't want to be in a situation as we saw recently with Google's Gemini where you have to pull an entire model off the shelves and you have to say, this is not ready. We actually have to remove it and start again. So this is the result I think of that high pressure, high speed dynamic that we've been seeing both inside the US but between the US and China. And of course, what that does to the rest of the world is create this kind of client states where we've got the EU trying to say, alright, well we'll export a regulatory model if we're not going to be treated as an equivalent player here. And then of course, so many other countries who are just seen as spaces to extract low paid labor or the mineralogical layer. So that is the big problem that I see is that that dynamic has only intensified in recent years.A.I. and MedicineEric Topol (34:54):Yeah, I know it's really another level of concern and it seems like it could be pretty volatile if for example, if the US China relations takes another dive and the tensions there go to levels that haven't been seen so far. I guess the other thing, there's so much that is I think controversial, unsettled in this space and so much excitement. I mean, just yesterday for example, was the first AI randomized trial to show that you could save lives. When I wrote that up, it was about the four other studies that showed how it wasn't working. Different studies of course, but there's so much excitement at the same time, there's deep concerns. You've been a master at articulating these deep concerns. What have we missed in our discussion today, I mean we've covered a lot of ground, but what do you see are other things that should be mentioned?Kate Crawford (36:04):Well, one of the things that I've loved in terms of following your work, Eric, is that you very carefully walk that line between allowing the excitement when we see really wonderful studies come out that say, look, there's great potential here, but also articulating concerns where you see them. 
So I think I'd love to hear, I mean take this opportunity to ask you a question and say what's exciting you about the way that this particularly new generation AI is being used in the medical context and what are the biggest concerns you have there?Eric Topol (36:35):Yeah, and it's interesting because the biggest advance so far in research and medicine was the study yesterday using deep learning without any transformer large language model effort. And that's where that multiplicative of opportunity or potential is still very iffy, it's wobbly. I mean, it needs much more refinement than where we are right now. It's exciting because it is multimodal and it brings in the ability to bring all the layers of a human being to understand our uniqueness and then do much better in terms of, I got a piece coming out soon in Science about medical forecasting and how we could really get to prevention of conditions that people are at high risk. I mean like for example today the US preventive task force said that all women age 40 should have mammograms, 40.Kate Crawford (37:30):I saw that.Eric Topol (37:30):Yeah, and this is just crazy Looney Tunes because here we have the potential to know pretty precisely who are those 12%, only 12% of women who would ever get breast cancer in their lifetime, and why should we put the other 88% through all this no less the fact that there are some women even younger than age 40 that have significantly high risk that are not picked up. But I do think eventually when we get these large language models to actualize their potential, we'll do really great forecasting and we'll be able to not just prevent or forestall cancer, Alzheimer's and so many things. It's quite exciting, but it's the earliest, we're not even at first base yet, but I think I can see our way to get there eventually. And it's interesting because the discussion I had previously with Geoffrey Hinton, and I wonder if you think this as well, that he sees the health medical space as the only really safe space. He thinks most everything else has got more concerns about the downsides is the sweet spot as he called it. But I know that's not particularly an area that you are into, but I wonder if you share that the excitement about your health could be improved in the future with AI.Kate Crawford (38:52):Well, I think it's a space of enormous potential, but again, enormous risk for the same reasons that we discussed earlier, which is we have to look at the training data and where it's coming from. Do we have truly representative sources of data? And this of course has been a consistent problem certainly for the last hundred years and longer. When we look at who are the medical patients whose data is being collected, are we seeing skews? And that has created all sorts of problems, particularly in the last 50 years in terms of misdiagnosing women, people of color, missing and not taking seriously the health complaints of people who are already seen as marginalized populations, thus then further skewing the data that is then used to train AI models. So this is something that we have to take very seriously, and I had the great fortune of being invited by Francis Collins to work with the NIH on their AI advisory board.(39:50):They produced a board to look just at these questions around how can this moment in AI be harnessed in such a way that we can think about the data layer, think about the quality of data and how we train models. 
And it was a really fascinating sort of year long discussion because in the room we had people who were just technologists who just wanted as much data as possible and just give us all that data and then we'll do something, but we'll figure it out later. Then there were people who had been part of the Human Genome Project and had worked with Francis on questions around the legal and ethical and social questions, which he had really centered in that project very early on. And they said, no, we have to learn these lessons. We have to learn that data comes from somewhere. It's not divorced of context, and we have to think about who's being represented there and also who's not being represented there because that will then be intensified in any model that we train on that data.Humans and Automation Bias(40:48):And then also thinking about what would happen in terms of if those models are only held by a few companies who can profit from them and not more publicly and widely shared. These were the sorts of conversations that I think at the absolute forefront in terms of how we're going to navigate this moment. But if we get that right, if we center those questions, then I think we have far greater potential here than we might imagine. But I'm also really cognizant of the fact that even if you have a perfect AI model, you are always going to have imperfect people applying it. And I'm sure you saw that same study that came out in JAMA back in December last year, which was looking at how AI bias, even slightly biased models can worsen human medical diagnosis. I don't know if you saw this study, but I thought it was really extraordinary.(41:38):It was sort of 450 doctors and physician's assistants and they were really being shown a handful of cases of patients with acute respiratory failure and they really needed come up with some sort of diagnosis and they were getting suggestions from an AI model. One model was trained very carefully with highly accurate data, and the other was a fairly shoddy, shall we say, AI model with quite biased data. And what was interesting is that the clinicians when they were working with very well-trained AI model, we're actually producing a better diagnosis across the board in terms of the cases they were looking at. I think their accuracy went up by almost 4.5 percentage points, but when they were working with the less accurate model, their capacity actually dropped well below their usual diagnostic baseline, something like almost 12 percentage points below their usual diagnostic quality. And so, this really makes me think of the kind of core problem that's been really studied for 40 years by social scientists, which is called automation bias, which is when even an expert, a technical system which is giving a recommendation, our tendency is to believe it and to discard our own knowledge, our own predictions, our own sense.(42:58):And it's been tested with fighter pilots, it's been tested with doctors, it's been tested with judges, and it's the same phenomenon across the board. So one of the things that we're going to need to do collectively, but particularly in the space of medicine and healthcare, is retaining that skepticism, retaining that ability to ask questions of where did this recommendation come from with this AI system and should I trust it? What was it trained on? Where did the data come from? What might those gaps be? 
Because we're going to need that skepticism if we're going to get through particularly this, as you say, this sort of early stage one period where in many cases these models just haven't had a lot of testing yet and people are going to tend to believe them out of the box.The Large Language Model Copyright IssueEric Topol (43:45):No, it's so true. And one of the key points is that almost every study that's been published in large language models in medicine are contrived. They're using patient actors or they're using case studies, but they're not in the real world. And that's where you have to really learn, as you know, that's a much more complex and messy world than the in silico world of course. Now, before wrapping up, one of the things that's controversial we didn't yet hit is the fact that in order for these base models to get trained, they basically ingest all human content. So they've ingested everything you've ever written, your books, your articles, my books, my articles, and you have the likes of the New York Times suing OpenAI, and soon it's going to run out of human content and just use synthetic content, I guess. But what's your sense about this? Do you feel that that's trespassing or is this another example of exploiting content and people, or is this really what has to be done in order to really make all this work?Kate Crawford (44:59):Well, isn't it a fascinating moment to see this mass grabbing of data, everything that is possibly extractable. I actually just recently published an article in Grey Room with the legal scholar, Jason Schultz, looking at how this is producing a crisis in copyright law because in many ways, copyright law just cannot contend with generative AI in particular because all of the ways in which copyright law and intellectual property more broadly has been understood, has been premised around human ideas of providing an incentive and thus a limited time monopoly based on really inspiring people to create more things. Well, this doesn't apply to algorithms, they don't respond to incentives in this way. The fact that, again, it's a longstanding tradition in copyright that we do not give copyright to non-human authors. So you might remember that there was a very famous monkey selfie case where a monkey had actually stepped on a camera and it had triggered a photograph of the monkey, and could this actually be a copyright image that could be given to the monkey?(46:12):Absolutely not, is what the court's decided. And the same has now happened, of course, for all generative AI systems. So right now, everything that you produce be that in GPT or in Midjourney or in Stable Diffusion, you name it, that does not have copyright protections. So we're in the biggest experiment of production after copyright in world history, and I don't think it's going to last very long. To be clear, I think we're going to start to see some real shifts, I think really in the next 6 to 12 months. But it has been this moment of seeing this gigantic gap in what our legal structures can do that they just haven't been able to contend with this moment. The same thing is true, I think, of ingestion, of this capturing of human content without consent. 
Clearly, many artists, many writers, many publishing houses like the New York Times are very concerned about this, but the difficulty that they're presented with is this idea of fair use, that you can collect large amounts of data if you are doing something with that, which is sufficiently transformative.(47:17):I'm really interested in the question of whether or not this does constitute sufficiently transformative uses. Certainly if you looked at the way that large language models a year ago, you could really prompt them into sharing their training data, spitting out entire New York Times articles or entire book chapters. That is no longer the case. All of the major companies building these systems have really safeguarded against that now but nonetheless, you have this question of should we be moving towards a system that is based on licensing, where we're really asking people if we can use their data and paying them a license fee? You can see how that could absolutely work and would address a lot of these concerns, but ultimately it will rely on this question of fair use. And I think with the current legal structures that we have in the current case law, that is unlikely to be seen as something that's actionable.(48:10):But I expect what we'll look at is what really happened in the early 20th century around the player piano, which was that I'm sure you remember this extraordinary technology of the player piano. That was one of the first systems that automated the playing of music and you'd have a piano that had a wax cylinder that almost like code had imprinted on a song or a piece of music, and it could be played in the public square or in a bar or in a saloon without having to pay a single artist and artists were terrified. They were furious, they were public hearings, there were sort of congressional hearings and even a Supreme Court case that decided that this was not a copyright infringement. This was a sufficiently transformative use of a piece of music that it could stand. And in the end, it was actually Congress that acted.(49:01):And we from that got the 1908 Copyright Act and from that we got this idea of royalties. And that has become the basis of the music industry itself for a very long time. And now we're facing another moment where I think we have a legislative challenge. How would you actually create a different paradigm for AI that would recognize a new licensing system that would reward artists, writers, musicians, all of the people whose work has been ingested into training data for AI so that they are recognized and in some ways, recompensed by this massive at scale extraction?Eric Topol (49:48):Wow, this has been an exhilarating conversation, Kate. I've learned so much from you over the years, but especially even just our chance to talk today. You articulate these problems so well, and I know you're working on solutions to almost everything, and you're so young, you could probably make a difference in the decades ahead. This is great, so I want to thank you not just for the chance to visit today, but all the work that you've been doing, you and your colleagues to make AI better, make it fulfill the great promise that it has. It is so extraordinary, and hopefully it'll deliver on some of the things that we have big unmet needs, so thanks to you. This has really been fun.Kate Crawford (50:35):This has been wonderful. And likewise, Eric, your work has just been a fantastic influence and I've been delighted to get to know you over the years and let's see what happens. 
It's going to be a wild ride from now to who knows when.

Eric Topol (50:48):
No question, but you'll keep us straight, I know that. Thank you so much.

Kate Crawford (50:52):
Thanks so much, Eric.

*******************************

Your support of subscribing to Ground Truths, and sharing it with your network of friends and colleagues, is much appreciated. The Ground Truths newsletters and podcasts are all free, open-access, without ads. Voluntary paid subscriptions all go to support Scripps Research. Many thanks for that—they greatly helped fund our summer internship programs for 2023 and 2024. Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research. Note: you can select preferences to receive emails about newsletters, podcasts, or all; I don't want to bother you with an email for content that you're not interested in. Comments for this post are welcome from all subscribers. Get full access to Ground Truths at erictopol.substack.com/subscribe
Web scraping 101! Dive into the world of web scraping with Scott and Wes as they explore everything from tooling setup and navigating protected routes to effective data management. In this Tasty Treat episode, you'll gain invaluable insights and techniques to scrape (almost) any website with ease. Show Notes 00:00 Welcome to Syntax! 03:13 Brought to you by Sentry.io. 05:00 What is scraping? Socialblade. 08:01 Examples of past scrapers. Canadian Tire. 10:06 Cloud app downloader. 16:13 Other use cases. 16:58 Scraping 101. 17:28 Client Side. 19:08 Private API. Proxyman. 22:40 Server rendered. 23:27 Initial state. 24:57 What format is the data in? Google Puppeteer Extension. 27:08 Working with the DOM. 27:12 Linkedom npm package. 29:02 querySelector everything. 31:28 How to find the elements without classes. 34:08 Use XPath selectors for select by word. 34:53 Make them as flexible as you can. Classes change! 35:10 AI is good at this! 36:26 File downloading. 38:20 Working with protected routes. Proxyman. 40:41 Programmatically retrieve authentication keys because they are short-lived. Fetch Cookie. 43:20 Deal-breakers. Mechanical Turk. 44:58 What happened with Amazon? Uniqlo Self-Checkout. 46:42 Wes' portable refrigerator utopia. 47:25 Sick Picks & Shameless Plugs. Sick Picks Scott: KeyboardCleanTool. Wes: Yabai. Shameless Plugs Scott: Syntax on YouTube Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
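To make the episode's "server rendered" path concrete, here is a minimal sketch using fetch plus the linkedom package named in the show notes. The URL and selectors are hypothetical placeholders, not anything from the episode; real sites need their own selectors, and their terms of service deserve a look first.

```ts
// A minimal "server rendered" scrape: fetch the HTML, parse it with linkedom
// (no headless browser needed), then query it with the standard DOM API.
// The URL and selectors below are made-up placeholders.
import { parseHTML } from 'linkedom';

async function scrapeTitles(url: string): Promise<string[]> {
  const res = await fetch(url); // fetch is global in Node 18+
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const html = await res.text();
  const { document } = parseHTML(html); // a DOM you can run querySelector against
  // Keep selectors flexible: class names change more often than structure does.
  return [...document.querySelectorAll('article h2')]
    .map((el) => el.textContent?.trim() ?? '');
}

scrapeTitles('https://example.com/blog').then(console.log);
```

For client-rendered sites, the episode's other route applies: skip the DOM and hit the site's private JSON API directly, which a proxy tool like Proxyman helps you discover.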
Why updating iPhones in their sealed boxes might have some downsides, Amazon's "AI" turned out to just be people, LLMs hallucinating imaginary dependencies is potentially a security risk, Aruba backs up its government data to the Internet Archive, and disk queue schedulers in Linux. Plug Support us on patreon and get an ad-free RSS […]
I have data from two big Internet surveys, Less Wrong 2014 and Clearer Thinking 2023. Both asked questions about IQ: The average LessWronger reported their IQ as 138. The average ClearerThinking user reported their IQ as 130. These are implausibly high. Only 1/200 people has an IQ of 138 or higher. 1/50 people have IQ 130, but the ClearerThinking survey used crowdworkers (eg Mechanical Turk) who should be totally average. Okay, fine, so people lie about their IQ (or foolishly trust fake Internet IQ tests). Big deal, right? But these don't look like lies. Both surveys asked for SAT scores, which are known to correspond to IQ. The LessWrong average was 1446, corresponding to IQ 140. The ClearerThinking average was 1350, corresponding to IQ 134. People seem less likely to lie about their SATs, and least likely of all to optimize their lies for getting IQ/SAT correspondences right. And the Less Wrong survey asked people what test they based their estimates off of. Some people said fake Internet IQ tests. But other people named respected tests like the WAIS, WISC, and Stanford-Binet, or testing sessions by Mensa (yes, I know you all hate Mensa, but their IQ tests are considered pretty accurate). The subset of about 150 people who named unimpeachable tests had slightly higher IQ (average 140) than everyone else. Thanks to Spencer Greenberg of ClearerThinking, I think I'm finally starting to make progress in explaining what's going on. https://www.astralcodexten.com/p/the-mystery-of-internet-survey-iqs
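As a rough sanity check on those rarity figures, here is a small sketch that assumes IQ is normally distributed with mean 100 and SD 15 and converts a threshold into a 1-in-N rarity, using the Abramowitz and Stegun erf approximation. The normality assumption and the SD-15 convention are mine, not the post's.

```ts
// Rarity of an IQ threshold under IQ ~ N(100, 15).
// erf approximation: Abramowitz & Stegun 7.1.26 (max error ~1.5e-7).
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
    0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-x * x));
}

// P(IQ >= score), i.e. the upper tail of the normal distribution.
function upperTail(score: number, mean = 100, sd = 15): number {
  const z = (score - mean) / sd;
  return 0.5 * (1 - erf(z / Math.SQRT2));
}

console.log(`IQ >= 138: about 1 in ${Math.round(1 / upperTail(138))}`); // ~1 in 177
console.log(`IQ >= 130: about 1 in ${Math.round(1 / upperTail(130))}`); // ~1 in 44
```

Under that model the quoted figures check out as round numbers: roughly 1 in 177 for IQ 138 (the post's "1/200") and roughly 1 in 44 for IQ 130 (the post's "1/50").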
Sarah Tavel is a General Partner at Benchmark and sits on the boards of Chainalysis, Hipcamp, Rekki, Cambly, and Medely. She is a founding member of All Raise, the nonprofit organization working to accelerate the success of women in the venture-capital and VC-backed startup ecosystem. Before Benchmark, Sarah was a partner at Greylock Partners. She joined Pinterest in 2012 as their first PM and launched their first search and recommendations features. She also led three acquisitions as she helped the company scale through a period of hypergrowth.
In this episode, we discuss:
Sarah's Hierarchy of Engagement framework for growing a consumer startup
• The three levels of the Hierarchy of Engagement: core action, retention, and self-perpetuation
• The importance of measuring cohorts and maintaining focus on the core action
• Examples of core user actions from Pinterest and YouTube
Sarah's Hierarchy of Marketplaces framework for building a marketplace startup
• The three vectors of growth for dominating a marketplace
• Advice on "tipping the marketplace" and ultimately dominating the market
• The value of focusing on a constrained market
• How to avoid disruption
—
This entire episode is brought to you by Gelt—Redefine your approach to taxes.
—
Find the full transcript at: https://www.lennyspodcast.com/the-hierarchy-of-engagement-sarah-tavel-benchmark-greylock-pinterest/
—
Where to find Sarah Tavel:
• X: https://twitter.com/sarahtavel
• LinkedIn: https://www.linkedin.com/in/sarahtavel/
• Substack: https://www.sarahtavel.com/
—
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
—
In this episode, we cover:
(00:00) Sarah's background
(03:33) Framework 1: The Hierarchy of Engagement
(06:03) Level 1: Core action
(10:33) Level 2: Retention
(14:00) Level 3: Self-perpetuation
(19:32) The importance of focus
(23:54) The challenge of anonymity
(26:04) Advice for founders who want to increase retention
(29:34) What founders often get wrong
(31:43) Examples of core actions
(37:37) Finding your North Star Metric
(38:12) Who should use the Hierarchy of Engagement framework
(38:54) The Hierarchy of Marketplaces framework
(46:09) Level 1: Focus on a constrained opportunity
(50:19) Sarah's "happy GMV" and "minimum viable happiness" concepts
(54:47) Thumbtack: a counterexample to this approach
(56:36) Signs you're ready to move to level 2
(58:06) Level 2: Tipping the marketplace
(01:04:15) Tipping loops
(01:10:53) Not all markets are susceptible to tipping
(01:15:55) The challenge of homogeneity in B2B marketplaces
(01:20:29) Signs you're tipping successfully
(01:21:43) Level 3: Dominating the market
(01:28:29) The opportunity in underestimated markets
(01:30:11) The challenges of chasing GMV and losing focus
(01:36:36) Recognizing currents and momentum in the market
(01:39:20) You can never rest on your laurels
(01:41:03) How to apply these frameworks outside of marketplaces
(01:42:57) Three ways to find marketplace opportunity
(01:45:10) Lightning round
—
Referenced:
• Hierarchy of Engagement, Expanded: https://sarahtavel.medium.com/the-hierarchy-of-engagement-expanded-648329d60804
• Pinterest: https://www.pinterest.com/
• Evernote: https://evernote.com/
• Notion: https://www.notion.so/
• Houseparty app: https://en.wikipedia.org/wiki/Houseparty_(app)
• Clubhouse: https://www.clubhouse.com/
• How to price your product | Naomi Ionita (Menlo Ventures): https://www.lennyspodcast.com/how-to-price-your-product-naomi-ionita-menlo-ventures/
• TikTok: https://www.tiktok.com/
• Lessons on building a viral consumer app: The story of Saturn: https://www.lennysnewsletter.com/p/lessons-on-building-a-viral-consumer
• Saturn: https://www.joinsaturn.com/
• What happened to Secret?: https://www.failory.com/cemetery/secret
• How to determine your activation metric: https://www.lennysnewsletter.com/p/how-to-determine-your-activation
• Shishir Mehrotra on LinkedIn: https://www.linkedin.com/in/shishirmehrotra/
• The rituals of great teams | Shishir Mehrotra of Coda, YouTube, Microsoft: https://www.lennyspodcast.com/the-rituals-of-great-teams-shishir-mehrotra-coda-youtube-microsoft/
• Engagement Hierarchy: Core Actions: https://sarahtavel.medium.com/engagement-hierarchy-core-actions-dd4f72042100
• Choosing Your North Star Metric: https://www.lennysnewsletter.com/p/choosing-your-north-star-metric
• Hierarchy of Marketplaces: https://sarahtavel.medium.com/the-hierarchy-of-marketplaces-introduction-and-level-1-983995aa218e
• Mike Williams on LinkedIn: https://www.linkedin.com/in/yoroomie/
• Everything Marketplaces: https://www.everythingmarketplaces.com/
• Fabrice Grinda on LinkedIn: https://www.linkedin.com/in/fabricegrinda/
• OLX: https://www.olx.com/
• DoorDash Loves the 'Burbs as Much as You Do: https://www.wsj.com/articles/doordash-loves-the-burbs-as-much-as-you-do-11605618001
• Thumbtack: https://www.thumbtack.com/
• NPS: https://en.wikipedia.org/wiki/Net_promoter_score
• Sean Ellis on LinkedIn: https://www.linkedin.com/in/seanellis/
• Rekki: https://rekki.com/
• Ronen Givon on LinkedIn: https://www.linkedin.com/in/ronen-givon-535b2514
• Hipcamp: https://www.hipcamp.com/
• Demand driving supply: The little-understood growth loop behind a surprising number of iconic billion-dollar companies: https://www.lennysnewsletter.com/p/demand-driving-supply-marketplaces
• Inside the Revolution at Etsy: https://www.nytimes.com/2017/11/25/business/etsy-josh-silverman.html
• Faire: https://www.faire.com/
• Bill Gurley on LinkedIn: https://www.linkedin.com/in/billgurley/
• Mechanical Turk: https://www.mturk.com/
• Parker Conrad on LinkedIn: https://www.linkedin.com/in/parkerconrad/
• Rippling: https://www.rippling.com/
• "White Space" for Building a Marketplace: How to Find Your Competition's Vulnerabilities—and Capitalize: https://sarahtavel.medium.com/white-space-for-building-a-marketplace-how-to-find-your-competitions-vulnerabilities-and-79674aa4d399
• Pachinko: https://www.amazon.com/Pachinko-National-Book-Award-Finalist/dp/1455563935
• The Five Temptations of a CEO: https://www.amazon.com/Five-Temptations-CEO-Anniversary-Leadership/dp/0470267585
• The Five Dysfunctions of a Team: A Leadership Fable: https://www.amazon.com/Five-Dysfunctions-Team-Leadership-Fable/dp/0787960756
• Tesla: https://www.tesla.com/
• Reid Hoffman on LinkedIn: https://www.linkedin.com/in/reidhoffman/
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
—
Lenny may be an investor in the companies discussed.
Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Welcome to the final episode of our season of forgers, where we've been exploring the stories of some of the most amazing forgeries – and those behind them – throughout history. But it wasn't all about fakes and forgeries. There were plenty of cocktails and mocktails to go around, too. See omnystudio.com/listener for privacy information.
During the 18th century, early animatronics were hot. They were featured in circuses, carnivals, and other touring exhibitions, and were usually built and operated with various parts like axles, chains, cogs, gears, levers, pulleys, wheels, wind-up keys – you get the point. For Austrian Empress Maria Theresa, in 1769, Hungarian inventor Wolfgang von Kempelen created the Mechanical Turk, a chess-playing machine that could beat almost any person who played against it. The Turk appeared to be a fully functional artificial intelligence to those who saw and interacted with it. It left audiences delighted, but baffled as to how it worked – until a young poet named Edgar Allan Poe convinced many audiences it was not what it seemed. See omnystudio.com/listener for privacy information.
Anand Rajaraman is a Partner at Rocketship VC and co-owner of SF Unicorns cricket team. Previously, he co-founded Junglee (acquired by Amazon.com) and Kosmix (acquired by Walmart). Anand has helped create Amazon's Marketplace, co-invented Mechanical Turk, and created and co-headed WalmartLabs. He is an early investor in Facebook, Lyft, AppNexus, Aster Data, and Efficient Frontier among others. He is a Professor at Stanford University, and co-author of the popular textbook Mining of Massive Datasets. --- Support this podcast: https://podcasters.spotify.com/pod/show/theindustryshow/support
"There is a widespread cross-cultural stereotype suggesting that atheists are untrustworthy and lack a moral compass. Is there any truth to this notion?" opens a new paper published in PLOS One, investigating the relationship between moral values and religious belief or disbelief. The paper describes the results of four surveys of 4,622 atheists and theists across America and Sweden, examining the relationship between religious belief – or lack thereof – and moral values. "The most general take-home message from these studies is that people who do not believe in God do have a moral compass. In fact, they share many of the same moral concerns that religious believers have, such as concerns about fairness, and about protecting vulnerable individuals from harm," said study author Tomas Ståhl of the University of Illinois at Chicago in a statement. There still seem to be widespread negative attitudes towards atheists worldwide. In a 2019 Pew poll, an average of 45 percent of people across 34 countries said that belief in God is necessary to be moral and have good values. Only 60 percent of respondents to a 2020 Gallup poll would vote for an atheist presidential candidate, whereas 95 percent would vote for a Catholic candidate, 93 percent would vote for a Jewish candidate, and 80 percent would vote for an evangelical Christian. The studies in this paper investigated participants' endorsement of Liberty/oppression and amoral tendencies, as well as the five foundations of Moral Foundations Theory: Care/harm, Fairness/cheating, Loyalty/betrayal, Authority/subversion, and Sanctity/degradation. The first two studies described in the paper used Amazon's Mechanical Turk platform, a website to crowdsource responses, to survey hundreds of US residents. From this data, religiosity was found to be unrelated to amoral tendencies and endorsement of Liberty/oppression, as well as the individualizing moral foundations of care and fairness. However, atheists were found to have a weaker endorsement of binding moral foundations such as deference to authority, sanctity, and in-group loyalty. "It is possible that the negative stereotype of atheists as immoral may stem in part from the fact that they are less inclined than religious people to view respect for authority, ingroup loyalty, and sanctity as relevant for morality, and they are more likely to make moral judgments about harm on a consequentialist, case by case basis," said Ståhl. The second two studies surveyed thousands of individuals across America and Sweden – America is a more religious country, whereas Sweden is very secular. Data from these studies showed the same pattern of results as the previous two. These studies also took into account credibility-enhancing displays and perception of existential threat, examining whether they had an influence on holding religious beliefs. Credibility-enhancing displays involve important members of the community engaging in behaviors in service of religious beliefs that would be personally costly if the beliefs were false. The data showed that less exposure to credibility-enhancing displays, lower perception of existential threat, as well as a more analytical cognitive style, was associated with a lack of religious belief. As a whole, these results support the notion that the moral compass of atheists is just as good as the moral compass of theists.
However, atheists are more likely to assess the morality of actions based on their consequences, whereas religious people tend to endorse moral values that promote group cohesion. As Ståhl writes in the paper: "Atheism merely implies the absence of religious belief, and says nothing about what positive beliefs the disbeliever holds." -IFL Science By the way, believing is not knowing and knowing is not believing. Knowing means realness, and believing is a possibility of a lack of realness. I embrace complexities, nuances, complications, and gray areas. --- Send in a voice message: https://podcasters.spotify.com/pod/show/antonio-myers4/message Support this podcast: https://podcasters.spotify.com/pod/show/antonio-myers4/support
They're the essential workers of AI — yet mostly invisible and exploited. Does it have to be this way? Bridget Todd talks to data workers and entrepreneurs pushing for change.
Millions of people work on data used to train AI behind the scenes. Often, they are underpaid and even traumatized by what they see. In this episode: a company charting a different path; a litigator holding big tech accountable; and data workers organizing for better conditions.
Thank you to Foxglove and Superrr for sharing recordings from the Content Moderators Summit in Nairobi, Kenya, in May 2023.
Richard Mathenge helped establish a union for content moderators after surviving a traumatic experience as a contractor in Kenya training OpenAI's ChatGPT.
Mercy Mutemi is a litigator for digital rights in Kenya who has issued challenges to some of the biggest global tech companies on behalf of hundreds of data workers.
Krista Pawloski is a full-time data worker on Amazon's Mechanical Turk platform and is an organizer with the worker-led advocacy group Turkopticon.
Safiya Husain is the co-founder of Karya, a company in India with an alternative business model to compensate data workers at rates that reflect the high value of the data.
IRL: Online Life is Real Life is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd talks to AI builders that put people ahead of profit.
Why are there chess Grandmasters, but not Grandmasters of noughts and crosses (otherwise known as tic-tac-toe)? It is because chess is "harder" – but what do we really mean by that? Answering that question leads us to develop the idea of mathematical complexity, which is a measure of how 'big' a game is. We'll look at the complexity of popular games, and ask: what is the hardest game of all time?
A lecture by Sarah Hart recorded on 10 October 2023 at Barnard's Inn Hall, London.
The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/maths-games
Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/
Website: https://gresham.ac.uk
Twitter: https://twitter.com/greshamcollege
Facebook: https://facebook.com/greshamcollege
Instagram: https://instagram.com/greshamcollege
Support the show
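To make "how big a game is" concrete, here is a small sketch, not taken from the lecture itself, that brute-forces noughts and crosses. The naive state-space bound is 3^9 = 19,683 boards; enumerating only positions reachable by legal play (stopping when someone wins) gives 5,478, small enough to solve the game outright, which is why nobody earns a Grandmaster title at it.

```ts
// Count every noughts-and-crosses position reachable by legal play.
type Cell = '' | 'X' | 'O';

const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

function winner(b: Cell[]): Cell {
  for (const [i, j, k] of LINES) {
    if (b[i] !== '' && b[i] === b[j] && b[i] === b[k]) return b[i];
  }
  return '';
}

function explore(board: Cell[], turn: Cell, seen: Set<string>): void {
  const key = board.join('|');
  if (seen.has(key)) return; // already counted this position
  seen.add(key);
  if (winner(board) !== '') return; // game over: no further moves from here
  for (let i = 0; i < 9; i++) {
    if (board[i] === '') {
      board[i] = turn;
      explore(board, turn === 'X' ? 'O' : 'X', seen);
      board[i] = ''; // undo the move and try the next square
    }
  }
}

const seen = new Set<string>();
explore(Array(9).fill('') as Cell[], 'X', seen);
console.log(seen.size); // expect 5,478 reachable positions
```

Chess permits no such enumeration: its state-space complexity is commonly estimated at around 10^43 to 10^47, and that gap is one precise sense in which chess is "harder".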
Episode: 2765 The Mechanical Turk. Today, the chess-playing automaton.
This is a recap of the top 10 posts on Hacker News on July 7th, 2023. This podcast was generated by Wondercraft: https://www.wondercraft.ai/?utm_source=hackernews_recap Please ping us at team AT wondercraft.ai with feedback.
(00:43): Excellence is a habit, but so is failure. Original post: https://news.ycombinator.com/item?id=36628356&utm_source=wondercraft_ai
(02:37): 32″ E Ink screen that displays daily newspapers on your wall (2021). Original post: https://news.ycombinator.com/item?id=36629085&utm_source=wondercraft_ai
(04:42): OpenWorm – A computational model of C. elegans worm. Original post: https://news.ycombinator.com/item?id=36628258&utm_source=wondercraft_ai
(06:32): Why I Hate Frameworks (2005). Original post: https://news.ycombinator.com/item?id=36637655&utm_source=wondercraft_ai
(08:27): Meta releases Intermediate Graphics Library. Original post: https://news.ycombinator.com/item?id=36635526&utm_source=wondercraft_ai
(09:58): Homebrew's analytics have moved to the EU and away from Google. Original post: https://news.ycombinator.com/item?id=36628013&utm_source=wondercraft_ai
(11:32): TypeScript please give us reflection/runtime types. Original post: https://news.ycombinator.com/item?id=36637221&utm_source=wondercraft_ai
(13:39): Mechanical Turk workers are using AI to automate being human. Original post: https://news.ycombinator.com/item?id=36629777&utm_source=wondercraft_ai
(15:44): ProtonMail Rewrites Your Emails. Original post: https://news.ycombinator.com/item?id=36639530&utm_source=wondercraft_ai
(17:24): The underground economy of Glassdoor reviews. Original post: https://news.ycombinator.com/item?id=36634529&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
We're firing up our operating systems this week as we search for the worst computer in history. By our side we've got host of the Probably Science podcast, finalist of Last Comic Standing and award-winning writer for The Jim Jefferies Show: it's Matt Kirshen! We head back 2000 years to chat about the Antikythera mechanism, dip our toes into the art-prank world to meet the Persistence of Chaos and welcome back an old friend of the podcast, The Mechanical Turk. Follow us on Twitter: @worstfoot @bazmcstay @VanderLaugh @mattkirshen Follow us on Instagram: @worstfoot @VanderLaugh @Matt_Kirshen Join us on our Discord server! https://discord.gg/9buWKthgfx Visit www.worstfootforwardpodcast.com for all previous episodes and you can donate to us on Patreon if you'd like to support the show! https://www.patreon.com/WorstFootForward Worst Foot Forward is part of Podnose: www.podnose.com
https://thinkfuture.com | https://aidaily.us | In a fascinating turn of events, the humans driving Amazon's Mechanical Turk service are adopting AI behaviors to maintain relevance in the digital labor market. This episode explores this intriguing phenomenon and discusses the implications for our understanding of AI, human labor, and their increasingly blurry boundaries. Mechanical Turk, a crowdsourcing marketplace by Amazon, has been employing humans to perform tasks that are too complex for AI. Now, the tables have turned, with humans emulating AI behavior to meet the demands of certain tasks. This curious blend of human intelligence and machine-like performance raises profound questions about the evolving relationship between humans and AI, and the societal and ethical implications that come with it. Join us as we delve into this intriguing topic, shedding light on the changing dynamics of work in the AI era and what it means for the future of human labor. --- Send in a voice message: https://podcasters.spotify.com/pod/show/thinkfuture/message Support this podcast: https://podcasters.spotify.com/pod/show/thinkfuture/support
Author : Francis Bass Narrator : Valerie Valdes Host : Tina Connolly Audio Producer : Summer Brooks This story was previously published in Uncharted Magazine in May 2022. The Mechanical Turk Has a Panic Attack by Francis Bass Gab gripped her right wrist with her left hand at the small of her back. "Are we […]
A modern fairy tale about a man who set out to found a tech start-up. And a clever satire on the impositions of digital capitalism. Kevin Knecht dreams of being one of the truly big players, at least the next Steve Jobs. In reality, he runs a run-down astrology and sex hotline. The doors are rickety, the employees surly. His coach advises him to sell the company and go to Silicon Valley. No sooner said than done. But while Kevin digs through the dumpsters of the tech companies in search of new ideas, his girlfriend back home is looking for a new partner. The Mechanical Turk was a life-sized chess robot invented in 1770. The trick: a human was hidden inside the chess robot. Around the same time, textile manufacturers in northern England began replacing their workers with machines – the birth of capitalism. Today there is Amazon's "Mechanical Turk": a crowdworking website on which companies can hire remote workers to carry out, for little money, tasks that machines cannot yet do. With: Susanna Clasen (Türe), Oliver Wnuk (Kevin), Mona Petri (Ella), Barbara Falter (Emma), and Erich Räuker (Coach). Music: Karl Atteln - Sound engineering: Basil Kneubühler - Direction: Johannes Mayr - Production: SRF 2020 - Duration: 43
Sponsor
Vercel is the platform for frontend developers, providing the speed and reliability innovators need to create at the moment of inspiration. Founded by the creators of Next.js, Vercel has zero configuration support for 35+ frontend frameworks, including SvelteKit. We enable the world's largest brands like Under Armour, eBay, and Nintendo, to iterate faster and create quality software. Try out Vercel today to experience the easiest way to use Svelte.
NOTE: RECORDED April 13th.
Description
This week we sit down with Wasita and Eshin at the Cosanlab at Dartmouth to talk about how they use Svelte in their research as well as the intersection between research and web development and the possibilities that it opens up.
Music
Intro music by Braden Wiggins a.k.a. Fractal (braden@fractal-hq.com)
Discussion topics
Guests: Wasita: Twitter Website Eshin: Twitter Website Computational Social Affective Neuroscience Laboratory
https://github.com/wasita/svelte-vid-sync-chat-app -> The "Netflix party" experiment we mentioned, written with Svelte, socket.io, firebase
https://github.com/cosanlab/trust-game -> Interactive 2-player trust game experiment written in Svelte and client-synchronization powered by Firebase
https://github.com/ejolly/create-svelte-experiment -> A starter Svelte SPA template for experiments powered by Firebase
https://maxfarrens.github.io/continuous-rater-book/Intro.html -> An interface for collecting continuous time-series emotion ratings from people watching videos
https://github.com/ejolly/thought_tagger -> An interface for segmenting audio-files of people's memories. Built on top of the BBC's peaks.js.
https://eshinjolly.com/svelteturk/#/ -> The "missing GUI" for Amazon's Mechanical Turk labor market. An electron desktop app that helps manage data collection and administration.
Unpopular Opinions
Brittney: Click bait twitter
Antony: Helium
Picks
Kev: Cold Plunges
Antony: Same
Eshin: Raised by Wolves
Wasita: Beef
Brittney: Shadow and Bone (TV Show)
Ned tells Meg about the once-fabulous, then not fabulous, and now once-again-fabulous Mechanical Turk, a fraudy machine from the late 1700s that beat famous people at chess, sometimes. The best things about this machine are the weird number of insanely famous people who interacted with it. Meg applauds Edgar Allan Poe's well-placed skepticism of it. This machine was created for marketing purposes back in the day and was never actually functional as described, but it nonetheless made a big splash and went on tour and everything. As Rubin Farr once said, you can be famous and a fraud; a famous fraud. Thanks for listening, wash your hands, don't be a dick, maybe learn how to play chess for real!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons, published by Hazard on March 12, 2023 on LessWrong. "They are playing a game. They are playing at not playing a game. If I show them I see they are, I shall break the rules and they will punish me. I must play their game, of not seeing I see the game" - R. D. Laing "It's not lying if everyone knows it's lying." I see this sentiment in a lot of places. It pops up in corporate managerial contexts. It's been used as a legal defense and worked. It's a claim that communication that looks adversarial isn't, it's just high-context communication between people "in the know", there's no deception happening, no conflict, you just don't get how we do things here. I don't buy it. My claim in a nutshell: In situations where people insist "it's not lying because everyone knows it's lying" the people in the know aren't deceiving each other, but the reason this game is being played is to fool people not in the know, and insisting that it's just "high context communication" is part of an effort to obscure the fact that a conflict is going on. If that makes perfect sense to you, dope, you already get my main point. The rest of this post is adding nuance, actually arguing the case, and providing more language for talking about these sorts of dynamics. Case Study: "Are Founders Allowed to Lie?" This essay by Alex Danco talks about how "it's not lying because everybody knows it's lying" works in the Silicon Valley startup scene. It's short enough that it's worth reading now so you can decide for yourself if I'm misrepresenting him. If you don't feel like reading it, I still quote enough of it for my post to make sense. Some snippets. It's really hard to start a business without lying: If you are only allowed to tell the literal, complete truth, and you're compelled to tell that truth at all times, it is very difficult to create something out of nothing. You probably don't call it "lying", but founders have to will an unlikely future into existence. To build confidence in everyone around you – investors, customers, employees, partners – sometimes you have to paint a picture of how unstoppable you are, or how your duct tape and Mechanical Turk tech stack is scaling beautifully, or tell a few "pre-truths" about your progress. Hey, it will be true, we're almost there, let's just say it's done, it will be soon enough. It's not lying because everyone's in on it. You're not misleading investors; your investors get it: they're optimizing for authenticity over 'fact-fulness'. It's not fraud. It's just jump-starting a battery, that's all. Some abstracted examples of what this "pre-truth" looks like: You've all seen this. It doesn't look like much; the overly optimistic promises, the "our tech is scaling nicely" head fakes, the logo pages of enterprise customers (whose actual contract status might be somewhat questionable), maybe some slightly fudged licenses to sell insurance in the state of California. It's not so different from Gates and Allen starting Microsoft with a bit of misdirection. It comes true in time; by the next round, for sure.
Why it's important and also why you can't talk about it: Founders will present you with something pre-true, under the total insistence that it's really true; and in exchange, everyone around them will experience the genuine emotion necessary to make the project real. Neither party acknowledges the bargain, or else the magic is ruined. Before investigating whether Danco's story checks out, I'm going to introduce some frames for talking about communication to make it easier to clarify what's going on here.
Context & Language Games
All communication relies on context, and context has a nested structure which operates on multiple levels of communication. Some contexts operate...
This is what AI came up with as an episode description: Lisa talks to a gay millennial named Will who loves the idea of dating AI. Find it on Apple Podcasts or search for "Gayish" on your podcast app. (Human editor's Note: that is not actually what this episode is about. It's about ChatGPT, homophobia in AI, Alan Turing, the Mechanical Turk, and more.) In this episode: News- 6:40 || Main Topic (Artificial Intelligence)- 25:23 || Gayest & Straightest- 1:12:53 Patreon reminder! Gaytrons of all levels can join our Patreon Happy Hours (the next one is Feb. 1!), get ad-free episodes, and hear episodes 1 day early. Visit www.patreon.com/gayishpodcast for all the details.
Abstract from the book we discuss with the author during the episode: The rise of the platform economy into statelike dominance over the lives of entrepreneurs, users, and workers. The early Internet was a lawless place, populated by scam artists who made buying or selling anything online risky business. Then Amazon, eBay, Upwork, and Apple established secure digital platforms for selling physical goods, crowdsourcing labor, and downloading apps. These tech giants have gone on to rule the Internet like autocrats. How did this happen? How did users and workers become the hapless subjects of online economic empires? The Internet was supposed to liberate us from powerful institutions. In Cloud Empires, digital economy expert Vili Lehdonvirta explores the rise of the platform economy into statelike dominance over our lives and proposes a new way forward. Digital platforms create new marketplaces and prosperity on the Internet, Lehdonvirta explains, but they are ruled by Silicon Valley despots with little or no accountability. Neither workers nor users can “vote with their feet” and find another platform because in most cases there isn't one. And yet using antitrust law and decentralization to rein in the big tech companies has proven difficult. Lehdonvirta tells the stories of pioneers who helped create—or resist—the new social order established by digital platform companies. The protagonists include the usual suspects—Amazon founder Jeff Bezos, Travis Kalanick of Uber, and Bitcoin's inventor Satoshi Nakamoto—as well as Kristy Milland, labor organizer of Amazon's Mechanical Turk, and GoFundMe, a crowdfunding platform that has emerged as an ersatz stand-in for the welfare state. Only if we understand digital platforms for what they are—institutions as powerful as the state—can we begin the work of democratizing them.
Augmented reveals the stories behind the new era of industrial operations, where technology will restore the agility of frontline workers. In this episode of the podcast, the topic is "A Scandinavian Perspective on Industrial Operator Independence." Our guest is Johan Stahre (https://www.linkedin.com/in/jstahre/), Professor and Chair of Production Systems at Chalmers University in Sweden. In this conversation, we talk about how the field of human-centered automation has evolved, the contemporary notion of operator 4.0, Scandinavian worker independence, shop floor innovation at Volvo, factories of the future, modern production systems, robots, and cobots in manufacturing. If you like this show, subscribe at augmentedpodcast.co (https://www.augmentedpodcast.co/). If you like this episode, you might also like Episode 84 on The Evolution of Lean with Professor Torbjørn Netland from ETH Zürich (https://www.augmentedpodcast.co/84). Augmented is a podcast for industry leaders, process engineers, and shop floor operators, hosted by futurist Trond Arne Undheim (https://trondundheim.com/) and presented by Tulip (https://tulip.co/). Follow the podcast on Twitter (https://twitter.com/AugmentedPod) or LinkedIn (https://www.linkedin.com/company/75424477/). Trond's Takeaway: Human-centered automation is the only kind of automation that we should be thinking about, and this is becoming more and more clear. Operators are fiercely independent, and so should they be. This is the only way they can spot problems on the shop floor, by combining human skills with automation in new ways, augmenting workers. It seems the workforce does not so much need engagement as they need enablement. Fix that, and a lot can happen. Transcript: TROND: Welcome to another episode of the Augmented Podcast. Augmented brings industrial conversations that matter, serving up the most relevant conversations on industrial tech. Our vision is a world where technology will restore the agility of frontline workers. In this episode of the podcast, the topic is A Scandinavian Perspective on Industrial Operator Independence. Our guest is Johan Stahre, Professor and Chair of Production Systems at Chalmers University in Sweden. In this conversation, we talk about how the field of human-centered automation has evolved, the contemporary notion of operator 4.0, Scandinavian worker independence, shop floor innovation at Volvo, factories of the future, modern production systems, robots, and cobots in manufacturing. Augmented is a podcast for industrial leaders, process engineers, and shop floor operators hosted by futurist Trond Arne Undheim and presented by Tulip. Johan, welcome. How are you? JOHAN: I'm fine, thank you, Trond. It's really nice to see you. TROND: Yeah, likewise. JOHAN: Fellow Nordic person. TROND: Fellow Nordic person. And I apologize for this very American greeting, you know, how are you? As you know, I'm from the Nordic region. I actually mean it, [laughs] you know, it was a question. So I do wonder. [laughs] JOHAN: I'm actually fine. It's just ending the vacation, so I'm a little bit sad about that because everyone...but it's a very nice time now because the rest of the world seems to be on vacation, so you can get a lot of work done. TROND: I concur; that is a wonderful time. Johan, I wanted to just briefly talk about your exciting background. You are an engineer, a mechanical engineer from Sweden. And you had your initial degree from Linköping University. Then you went on to do your Ph.D.
a while back in manufacturing automation, and this was at Chalmers University in Sweden. And that's where you have done your career in manufacturing research. You are, I think, the first Scandinavian researcher, certainly the first stationed currently in Sweden, that we've had on the podcast. So I'm kind of curious, what is manufacturing like in Scandinavia? And what is it that fascinated you about this topic so that you have moved so deeply into it? JOHAN: Manufacturing in Sweden is the core; it's the backbone of our country in a sense. We have statistically too many large manufacturing companies in Sweden as compared to, I mean, we're only 10 million people, but we have like 10, 12 pretty large companies in the manufacturing area in automotive but also in electronics like Ericsson, you have Volvo, we have SKF. We have a lot of big companies. Sweden has an industrial structure where we have several small companies and a couple of large companies, not so many in the middle section there. This happened, actually, in the 1800s somewhere. There was a big growth of big companies, and there was a lot of effort from the government to support this, and that has been continued. So the Swedish government has supported the growth of industry in Sweden, and therefore we have a very strong industry and also quite good digital growth and maturity. TROND: So, on the Scandinavian background: when I was there, I remember that one of the things that at least Scandinavian researchers think is distinct about Scandinavia is worker independence. And it's something that I kind of wanted to just tease out a little bit in the beginning of this podcast. Am I wrong in this, or is there something distinct about the relationship between, I guess, workers and managers in Scandinavia, particularly? One speaks about the Scandinavian model. Can you outline a little bit what that means in manufacturing, if it still exists? It's an open question. JOHAN: From my perspective, Sweden usually ranks very high in innovation, also when it comes to international rankings. And I think some of that has to do with the openness and the freedom of thinking in a sense and not so hierarchical, more consensus-oriented, ability to test and check and experiment at work without getting repercussions from top management. And it is much easier. In fact, if you are at one department in a manufacturing company or in a university as such and you want to collaborate with another colleague across the aisle, if you have a too-hierarchical system, you need to go three levels up in order to be able to do that. But here, I think it's easier to just walk across the aisle to have this collaboration and establish a cooperative environment. I think that that's part of the reason. Also, we're not so many; I mean, I think historically, we needed to do a lot of things ourselves in Sweden. We were a country up north with not so many people, and we have harsh environments, and I think it's the same as Norway. I mean, you need to be self-sustainable in that sense, and that creates, I think, an environment of collaboration.
JOHAN: I think that, as with everything...digitalization has sort of erased a lot of the cultural differences across the world in that sense. Because when I was a student, there was not this expressed digital environment, of course. The information environment was less complex. But I think now all the young people, as well as my mother, do their banking...she's 90, but she does her banking on her iPad; I mean, it's very widespread. And I think that we are all moving towards a similar culture, and the technology is spreading so quickly. So you cannot really have cultural differences in that sense. But I think that's still the way that we're using this. And I think that the collaborative sense is still there. The reason why Sweden is comparatively innovative still is that we still maintain our culture and use the technology to augment that capability. TROND: So, Johan, we'll talk about a bunch of your experiences because you obviously are based in Sweden. And because of Sweden's industrial situation, you have some examples, you know, Volvo, a world-famous company obviously, and also famous for its management practices, and its factory practices, we'll get into that. But you've also worked, and you're advising entities such as the World Economic Forum, and you are active on the European stage with the European Institute of Technology. Your activity clearly goes way, way beyond these borders. But why don't we maybe start with some of these Scandinavian experiences and research projects that you've done maybe with Volvo? What is it with Volvo that captured people's attention early on? And what sort of experience and research have you done with Volvo? JOHAN: I think that Volvo is very innovative, and Volvo today is two types of companies; one is the car company that has now gone fully electric. It was introduced at the stock market, most recently owned by a Chinese company, and before that, it was owned by Ford, and before that, it was also public. But you also have the other part, which is the Volvo Group, which is looking at trucks, and boats, and things like that. And they both share a high level of innovation ambition and innovation power, I think, using the experiences already from the '60s, where you had a lot of freedom as an employee. And also very good collaboration with the union in investments and in all the changes in the company; I think that has been very beneficial. And it's made them...what is now Volvo Cars was very, very early, for example, with digital twins. They were experimenting with digital twins already in the 1990s. And we work together with Volvo but also with SKF, which is a roller-bearing company here, to look at how we can support frontline workers and augment their capabilities because they're very skilled and they're very experienced. But sometimes you need to have sensor input, and you need to have structures, and rules, and procedures, and instructions. So we worked quite early with them already, maybe in 2009, 2010, to see how we can transform their work situation, provide them with work instructions through wearable devices. It was very popular at that time. MIT was experimenting with cyborgs. And the people that were...I think it was Thad Starner; he was trying to put on a lot of computer equipment. Then he went through the security at the airport and had some problems there. But that's not the case for the operators. But it was a little bit too early, I think. We tried to experiment with some of the maintenance people at Volvo Cars.
And they were very interested in the technology, but the use for it was a little bit obscure. And this was at the time when mobile connectivity was 9,600 bits per second through a mobile phone or a modem, so Wi-Fi more or less did not exist. And the equipment: the batteries weighed two kilos, and the computer weighed one kilo. And then you had a headset that looked like you came from deployment in a war zone. So it was a little bit...it looked a little bit too spacy for them to be actually applicable. And then some 10 years later, we actually did a similar experiment with SKF, the roller-bearing company, where we deployed the first iPod touch, I think they were called. That was right before the iPhone. I think it was an experiment by Steve Jobs to see how can we create what then became the iPhone screen. And we put that on the arms of the operators and tried to see how we can give them an overview of the process situation. So they were constantly aware, and they were quite happy about this. And then, we wanted to finish the experiment. The operators actually said, "Well, we don't want to give the equipment back." And then we said, "Well, we need to have it back. Of course, you can use the software." So they brought their own phones, and they downloaded the software. And they're still using it, actually, not on their own phones anymore. But they use this kind of software that we developed at that time together with them. So that was quite interesting. TROND: That's fascinating. Extrapolating from some of these early experiences up until now, I wanted to just ask you this from a research perspective, but also, I guess, from a management perspective. So you work on production systems. What is really the goal here, or what has the objective been early on? You talked about these early MIT experiments. And I know control systems is a very old area of research. And from what I understand, in the early days, the use cases weren't just factories; they were also on spacecraft and things. But to your point, especially earlier, we were working with very, very different technology interfaces. But now, obviously, we are starting to roll out 5G, which gives a whole other type of richness. But does it really matter how rich the technology interface is? Or does it matter more what the objective is with these various types of augmentations that have been attempted really throughout the decades? Can you just give us a little sense of what researchers, and you yourself, were trying to augment, and how that depends or doesn't depend on the quality of technology? JOHAN: First, we need to realize that the manufacturing industry has always been a very, very early adopter. The first computers were used for war simulations and for making propellers for submarines, to see how you can program the milling machines. This was in the 1950s. And the industrial robots in the '60s and '70s were also very early applications of digitalization. Before anything else had computers, the manufacturing industry was using them, and that's still the case. That might surprise some people. When they walk out onto a shop floor, they see no computers around, because all the computers are built into the machines already. What is still missing is the link, perhaps, to the people. So they are still using the screens. And they are the ones...people are the key components of handling complex and unforeseeable situations.
So you need to provide them, I think...to be really productive, you need to provide the frontline staff with the equipment for them to avoid, to foresee, and to handle unforeseen situations, because that's what differs between a human and a machine. People are much more apt to solve a complex situation that was not programmed before. That's the augmentation part here; how can we augment the human capabilities? And people talk about augmented reality; I mean, I don't think it's the reality that needs to be augmented; it's the human handling the reality that needs to be augmented. TROND: Johan, this is so fascinating because, first of all, it's quite easy to dismiss manufacturing a little bit these days because, to the untrained eye, all the excitement is in the consumer space because that's where the new devices get released, and that's, obviously, where all the attention is these days unless you obviously are in manufacturing. But can you bring us back to those early days of computing when a lot of the use cases for computing were first explored with manufacturing? So you talked about MIT, and back at MIT and at Stanford, all the way back to the '60s, they were exploring this new and fascinating field of even artificial intelligence, but before that, just regular control systems, electronic interfaces. What fork in the road would you say happened there? Because clearly, the fascination has been with digitalizing everything, and software kind of won for 30 years, but in manufacturing, it's more complicated. You say people, so it's people, and then it's kind of these production systems that you research. That's not the same as the use case of an individual with their phone, and they're sort of talking to people. There are many, many more variables in play here. What is the real difference? JOHAN: Last year, actually, the European Commission put forth industry 5.0, which should be the follower after industry 4.0. And they based that on three main challenges. One is sustainability, one is resilience, the various kinds of resilience towards the shock of the war but also of climate, et cetera. And the third one is actually human-centeredness, to see how we can really fully deploy human capabilities in society and also in industry, of course. I think what you're referring to is the two guys at Stanford in the '60s; one was John McCarthy. He was the inventor of the artificial intelligence concept. His aim then was to replace human work. That was the ambition with artificial intelligence, because human work is not as productive as computing work, but it still has some drawbacks. But in the same place, not so far away, in another department at Stanford, was a guy called Douglas Engelbart. And he was actually the father of...he called it intelligence augmentation. So it was AI and IA at that time. But his ambition was to augment human work. And he was the one that invented hypertext and the mouse, and he put up the first hypermedia set in Silicon Valley. So this was a guy that inspired companies like Apple and Xerox PARC, those kinds of institutions that had a huge bearing. There was a book by a research colleague at Oxford. Comparing over time, from the early industrial days forward, he found that technology that replaces people always has more complications when introduced and scaled than technology that augments people.
If you look at the acceptance and the adoption of the iPhone, that took months, or weeks, or whatever, seconds for some people, for me, for example. If you look at what happened in the industrial revolutions in the 1800s and the 1700s, you had a lot of upheaval, and already in the 1960s...I'm starting to sound like a university professor. But in '66, in the U.S., there was a Senate hearing about whether automation was taking the jobs from people or not. And the conclusion was that it is not; it is actually creating companies that then employ more people because of the productivity gains and the innovation gains. And you allow people to use the automation as augmentation, not only cognitive augmentation. We think a lot about augmentation as something that you do with your eyes and your brain. But robots are also augmenting people. They lift heavy objects like cars or big containers, whatever. That's the kind of augmentation that maybe you don't consider when you look at it from an artificial or an augmented reality perspective. TROND: Well, so many things to pick up here. But the variety of meanings of augmentation is kind of astounding, isn't it? And you've written about this operator 4.0 several times. There's obviously cognitive augmentation, and then there's physical augmentation. Are there other types of augmentation that you can speak of? JOHAN: I really can't think of any. TROND: But those are the main ones. So it's either kind of your mentality or sort of your knowledge. So the work instruction parts go to the skills-based, I guess, augmentation, which perhaps is an additional one. Or I'm just thinking, if manufacturing wants to make progress in these things, it would perhaps make sense to really verify what workers at any moment actually themselves express that they need. And I guess that's what I was fishing for a little bit here in this history of all of this: whether the technology developers at all moments really have a clear idea of what it is that the workers are saying themselves they're missing, or that they obviously are missing. Because automation and augmentation, I mean, do you find them diametrically opposed, or are they merely complementary when it works well? JOHAN: I mean, automation traditionally has been the way to scale, and, I mean, in the beginning, you want to see what the machine is doing, right? And then you really don't want to see it. You just want it to work. So it's really helping you to scale up your work. And in that sense, automation, like collaborative robots, for example...people talk about robots as something that is replacing jobs, but if you look at it, it is a very small portion of the statistics. In Singapore, which is the highest user of robots installed, there were maybe 950 robots per 10,000 employees. And the average in the Americas is 100 robots per 10,000 employees, and that's not really a lot. And so there is plenty of space for robots to be the tools for people. So if you don't treat them as something that will replace you but something that will actually augment you, I think it would be much easier. What could happen, though, and I think that is maybe part of your question, is that, well, these tools are becoming so complex that you cannot use them unless you increase your skill. How do you do that? Because no company would like to end up in a situation where the tools that you have bought and invested a lot of money in are too complex for your employees to use. That's a lost investment.
It's like you're building a big factory out in a very remote place, and you don't have enough electric power to run it. You don't want to end up in that situation. Like you expressed, I think that maybe what's missing and what's trending right now is that the upskilling of the workforce is becoming extremely important. TROND: And how do you do that, Johan? Because there's obviously...there's now increased attention on upskilling. But that doesn't mean that everyone has the solution for it. And employers are always asking for other people to pay for it, for example, governments, or the initiative of the worker, perhaps. It seems like Europe has taken this challenge head-on. Germany, at least, is recognized as a leader in workforce training. The U.S. is a latecomer to the game from that perspective. But it typically shows up in a big way. So something is going to happen here in the U.S. when it comes to workforce training. What is the approach? I mean, there seem to be two approaches to me; one is to simplify the technology, so you need less training. And the other would be obviously an enormous reskilling effort that either is organized, perhaps ideally in the workplace itself, so it's not removed from the tasks. Or some enormous schooling effort that is highly efficient and perhaps online. What do you think are the winning approaches to re-skilling that entire manufacturing workforce continuously? Because it's not like you have to reskill them once; you have to reskill them every time. JOHAN: Well, I can only guess. I think that you need to do all of these, all of the above. One complicating factor is the demographics of, especially, Japan; of course, we've known that for a long time; they have an aging population. But Europe is now becoming the new Japan in that sense. We have a very big problem in terms of aging populations, especially countries like Italy and perhaps Germany but also in northern countries. And we don't have perhaps...there's a lot of discussion on immigration right now. But actually, the workforce would need a lot of immigration to be able to respond to the needs of our industry in the forthcoming situation. I think that China is maybe 4 or 5 years behind Europe, and the U.S. is maybe 10-12 years behind Europe as well. So that will happen...the only non-affected regions right now are India and Africa. And that means that the European, and Chinese, and U.S. industries will have to compete with a rather young population in Africa and India. And so that will come over time, but it is a long time, so that means that it's not always on the political agenda. Things that take a long time are usually not the things that you speak about when you have election time, as we have in Sweden right now. It's mostly what's on the table. So I think that how to do that is really complex. We had some collaboration within the World Economic Forum. It is a fantastic organization because it spans the whole globe. So that means that the information comes from different parts of the world, and you can see different aspects of this. And a country that has done a lot about this is Singapore, very good experiments, very nice projects, initiatives regarding upskilling. And Europe is now launching an innovation program where they want to go deeper into deep tech to try to...the commissioner for research and education in June launched a big initiative around innovation and how that can be supported by deep technology. So we'll see what comes out of that.
It'll be very, very interesting to see.
MID-ROLL AD: In the new book from Wiley, Augmented Lean: A Human-Centric Framework for Managing Frontline Operations, serial startup founder Dr. Natan Linder and futurist podcaster Dr. Trond Arne Undheim deliver an urgent and incisive exploration of when, how, and why to augment your workforce with technology, and how to do it in a way that scales, maintains innovation, and allows the organization to thrive. The key thing is to prioritize humans over machines. Here's what Klaus Schwab, Executive Chairman of the World Economic Forum, says about the book: "Augmented Lean is an important puzzle piece in the fourth industrial revolution." Find out more on www.augmentedlean.com, and pick up the book in a bookstore near you.
TROND: Speaking of the World Economic Forum for a minute, Johan, you have been part of its group project called the Augmented Workforce Initiative. You told me when we spoke earlier that, in your opinion, this initiative couldn't have existed even five years ago. Augmentation, the way you've been speaking about it now, is a perspective that was nascent even in the early days of computing and manufacturing control systems, yet it seems to have slipped off the top of the political and research agenda for a while. And here we are. Can you explain what you meant by that?
JOHAN: It is a very, very nice initiative by the World Economic Forum, run by the Forum together with Cambridge University, which has a very good group on this and some very nice people. I'm honored to be part of that group together with my colleague from Mexico, David Romero. You may know him as well. What they're looking at is this increased understanding. It was actually one of the themes of the Davos sessions this year: how to engage, support, and augment the workforce. That has never happened before at that level, so it's really high on the agenda. The Forum has also run previous projects on the future of work and on how the demographic and skills situations are affecting companies. They have suggested that more or less half the workforce needs to be upskilled within the next couple of years. That's a huge undertaking.
TROND: The novelty here is that the world's elite managers, I guess, who are represented at the World Economic Forum, are increasingly aware of the complexity of workforce issues generally, and specifically of upskilling, maybe even upskilling in this very specific sense of augmenting a worker, which, to my mind, is a bit different from speaking generally about robotic automation and hammering the efficiency points. But it's obviously a much more challenging debate, because it's one thing to find a budget for an automation effort and introduce a lot of computers or other technology, usually hardware; what we're talking about here is much harder, because you need to tailor it to the workers, and there are many workers in many different situations. It's a complicated phenomenon. How is that going? What would you say are some of the findings of the Augmented Workforce Initiative?
JOHAN: I think companies like Tulip, companies like Black & Decker, and others already have a lot of good use cases, which may not have been labeled augmentation before. They might have been labeled operator support, or decision-making support, or upskilling, things like that. So the finding is that there is a lot out there, but it hasn't been emphasized as something really important for a company's survival.
TROND: It wasn't so glorified before. A lot of decision-support systems were viewed as lower-level systems, kind of like HR systems, just tinkering with necessary stuff that people had to know. And you're saying it has been elevated now as having a much more essential impact on the quality of work.
JOHAN: It has a leveraging impact on the whole company, I would say, but that's also part of the Industry 4.0 approach. You have the hierarchical integration of the company, where the CEO should be aware of what's going on on the shop floor and vice versa, as well as the horizontal integration, where companies up and down the supply chain and value chain know early what's going on. That used to stop at the mid-management level, but now it needs to be distributed out to where the complexity is highest, and that's the frontline workers. Maybe, and now I'm guessing, the understanding that a company's investments in complex manufacturing equipment are at risk if it lacks the skills to use them is also now penetrating a lot of companies. In Europe, in 2019 or thereabouts, there were almost 30 million people employed in the manufacturing industry. If half of these, 15 million people, need to be upskilled somehow over a period of three years, and I actually made a mock calculation of this, the reskilling need comes to 50 million person-months, a bit over three months of training per person, just the time for people to participate in the trainings. That's a huge undertaking, and I think it scares companies as well as governments: just imagine taking 50 million person-months out of the production equation. But the alternative might be worse. Losing the capability to use your equipment might be even worse.
TROND: Wow, these are daunting things. I guess that brings me to the last section here and some thoughts from you on the future outlook. When it comes to technology and these tools for human augmentation, what are the timelines for either making the improvements or, as you said, not losing competitiveness because of this skills crisis? Is there an imminent challenge and opportunity, or will this play out over 25 years?
JOHAN: In 25 years, the demographic situation will have changed again, so I assume things will look different. But right now we have an aging population and a lot of people going into retirement, so a lot of knowledge will disappear unless we can store it somehow. And a lot of people will not go into industry. When I talk to colleagues, they say, "Well, we need to make the manufacturing industry sexier. It should be cleaner, it should be nicer, because young people don't go into industry."
But if I go to the healthcare sector, they say the same thing: "Oh, we need to make it much better, because people are not applying for these programs."
TROND: [laughs] Where are people applying, the tech companies?
JOHAN: No, that's the problem. They don't exist. They were never born.
TROND: [laughs] Right.
JOHAN: That's the demographic bomb: the young people are simply not there. You cannot rely on hiring them, because they don't exist in Europe, and soon won't in the U.S., to the extent they did before. So you need to focus on older people: reskilling and upskilling not only the middle-aged but people in their 50s and even their 60s. That adds to the complexity. In the next 5 to 10 years, there will be a lot of discussion about how to fill the missing places in industry to remain competitive. I also think augmentation is a fantastic tool here, together with upskilling, because combining new skills with augmenting tools, collaborative robots, cognitive support, whatever you can put in a phone or a watch, adds to people's capability to make decisions. That's the augmentation you will see. You will see a lot of digital twins trying to foresee problems, and a lot of transversal technologies moving from high-tech industries into manufacturing to support especially the frontline people and enable their innovation capabilities.
TROND: Johan, you said earlier that the complexity is higher at the level of frontline workers. Did you mean that the complexity of frontline work itself, at the individual level, is underestimated? Or simply that there are so many frontline workers, in such different situations, that it's an underappreciated management challenge? Or were you truly saying that frontline work in and of itself is complicated, or becoming more complex?
JOHAN: If a task was not automated, it is inherently complex; that's why you couldn't automate it, right?
TROND: Right.
JOHAN: Because if you can teach a robot to do a task, then it's not difficult, and you can foresee the results. There was a researcher called Lisanne Bainbridge who, in "Ironies of Automation," described the paradox that the more you automate, the more dependent you become on the few people still there to handle the situations so complex that you could not foresee them. Everything that is programmed is programmed by a programmer, who tries to foresee every foreseeable situation, and to that extent the robots and the automation work. But if a situation gets out of hand, if it's too complex and something happens, there is no robot that can fix it. Unfortunately, AI is not there yet.
TROND: Well, you said, "Unfortunately, AI is not there yet," but I would conjecture that, fortunately, AI is not there yet, because you're pointing to something missing, I think. And a lot of the AI debate is starting to come back now. It was there in the '60s, because people realized that, for lots of different reasons, human oversight over robotic processes is actually a good thing.
And you talked to me earlier about the experiments imagining a trip to Mars, where you had to plan and execute robotic actions on Mars from a control-system environment and foresee every action; it was always a supervised situation. So the supervisory-control concept has been there from the beginning of computing. If you were to think of a future where AI actually does get more advanced, and a lot of people feel that's imminent, maybe you and I don't, but in any case, let's imagine it does become more advanced and becomes a challenge, how do we maintain human control over those kinds of decisions? Researchers have imagined this: famously, in Superintelligence, Bostrom imagines a paperclip factory that runs amok, optimizes only for producing paperclips, and reallocates all resources to that enormously ridiculous task. It's a very memorable example, and a lot of people feel AI could, soon or at some point, reach that level. How do we build in a failsafe so that doesn't become an issue? Or do you see it as such a far-fetched topic in manufacturing that it's decades, if not centuries, away?
JOHAN: I think AI has been seasonal, if you allow the expression. There's talk about these AI winters; they tend to come every 10 or 15 years, which matches about two Ph.D. lifetimes. People tend to forget the problems, and then you get the Gartner curves; if you look at a Gartner curve, you have the inflated-expectations part. I'm not being arrogant toward AI research. I think AI is fantastic, but it should be seen, from my perspective, for what it is: an advanced form of automation that can be used as an augmentation tool. I think it was Kasparov who started to collaborate with a chess-computer developer, and they won every tournament, because the combination of the human and the chess computer was astounding. Now there are even competitions where chess experts compete together with chess computers. And in the 1800s, I think, there was a traveling exhibition of what was called the Mechanical Turk, a chess player that competed against people in the audience. Inside the box there was a small human making all the chess moves, and it beat all the chess champions. So there was a man inside it. I think there is still a man inside a lot of automation.
TROND: A man and a woman. I wanted to end on a more positive note, because you told me earlier that you are more optimistic now than you were ten years ago on behalf of the industry you've researched for so many years. Why is that?
JOHAN: I'm a techno-optimist, and we now have the full attention of the ICT industry on industrial processes. Before, the focus was very much on services. That is playing out now in the platform wars, and the different services are actually doing a lot of good in manufacturing and the tougher industries. There is a bigger focus now, for example, on producing CO2-free steel.
And there's exploration across industries: look at the electrification of vehicles, which cuts across several sectors, the automotive industry, the electronics industry. The problems in industry are becoming so complex that ICT attention is now on industry more than, perhaps, on consumers, and I think that's promising. I see companies like Ericsson promoting 5G, and companies like Amazon Web Services looking at services that are useful for industry. That is also augmenting people's capabilities, and that's why I'm so positive. I see the sensors coming, the computing power coming into the hands of frontline operators, and the upskilling technologies emerging. How do you do that? It's what they do in The Matrix, when the leading lady downloads the instructions for flying the helicopter, or the motorcycle, or whatever it is. But how do you do that in real life? How do you prepare for something that's coming in the next few minutes? That is something people are now approaching with augmenting technologies, digital twins, and the like, in a completely different way than five years ago.
TROND: Wow. So these are exciting moments for learning in manufacturing, with perhaps wide-ranging consequences if we succeed. Johan, thank you so much for these reflections. You've spent a career investigating production systems, manufacturing, and workers, and these are very rich debates. It seems they're not over, Johan, so hopefully we'll have you back to comment when something happens. Thank you very much.
JOHAN: Thank you, Trond. Thank you for a very interesting discussion. You always learn a lot by being asked a lot of questions, so thank you for this learning experience.
TROND: You're very gracious. Thank you, Johan. You have just listened to another episode of the Augmented Podcast with host Trond Arne Undheim. The topic was a Scandinavian perspective on industrial operator independence. Our guest was Johan Stahre, Professor and Chair of Production Systems at Chalmers University of Technology in Sweden. In this conversation, we talked about how the field of human-centered automation has evolved. My takeaway is that human-centered automation is the only kind of automation we should be thinking about, and this is becoming more and more clear. Operators are fiercely independent, and so they should be; that independence is the only way they can spot problems on the shop floor, combining human skills with automation in new ways that augment workers. It seems the workforce does not so much need engagement as enablement. Fix that, and a lot can happen. Thanks for listening. If you liked the show, subscribe at augmentedpodcast.co or in your preferred podcast player, and rate us with five stars. If you liked this episode, you might also like Episode 84, on The Evolution of Lean, with Professor Torbjørn Netland from ETH Zürich. Hopefully, you'll find something awesome in these or other episodes, and if so, do let us know by messaging us. We would love to share your thoughts with other listeners. The Augmented Podcast is created in association with Tulip, the frontline operations platform that connects people, machines, devices, and systems in a production or logistics process in a physical location.
Tulip is democratizing technology and empowering those closest to operations to solve problems. Tulip is also hiring; you can find Tulip at tulip.co. Please share this show with colleagues who care about where industry, and especially industrial tech, is heading. Finding us on social media is easy: we are Augmented Pod on LinkedIn and Twitter, and Augmented Podcast on Facebook and YouTube. Augmented: industrial conversations that matter. See you next time. Special Guest: Johan Stahre.
Robots have always fascinated and horrified humanity in equal measure. The prospect of a synthetic lifeform is at times exciting but can quickly turn south. Luckily, we've never gotten that far... or have we? In this episode we look at a selection of early robots, from the Mechanical Turk to Elektro. All have one thing in common: they run on smoke and mirrors. Selected sources: Robots of Westinghouse by Scott Schaut - the best source on Elektro and his friends; https://www.youtube.com/watch?v=T35A3g_GvSg - see Elektro in action; https://www.google.com/books/edition/Inanimate_Reason_Or_a_Circumstantial_Acc/mvVdAAAAcAAJ?hl=en&gbpv=1 - on the Mechanical Turk
Alex shares the mythical origins of the words narcissist and echo. Mike tells the tale of a long-forgotten automaton chess player nicknamed the Mechanical Turk. Pat reveals the strange history behind the rapid population growth in New France (Canada). Facebook/Instagram/Twitter: @factoffpod Email: thefactoffpodcast@gmail.com Learn more about your ad choices. Visit megaphone.fm/adchoices
On The Cloud Pod this week, the team weighs the merits of bitcoin mining versus hacking. Plus: AWS Trusted Advisor prioritizes Support customers, Google provides impenetrable protection from a major DDoS attack, and Oracle Linux 9 is truly unbreakable. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
Taking one historic hoax and applying it to a future reality is common. In this Boles.tv live-stream highlight, David Boles examines the urge to take one bad idea and then apply it to a completely different, worse notion, with the name of the meme being the only connectoid. The Peer Gynt onion, Google AI, Eliza Redux, and the Squid Game game show are all examples of this misapplication of memeing.
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Germans' Opinions on Translations of "longtermism": Survey Results, published by Konstantin Pilz on June 27, 2022, on The Effective Altruism Forum.
TL;DR: I conducted a survey with 32 respondents via Mechanical Turk about which word for "longtermism" Germans prefer. Most think the English word works fine and should not necessarily be translated. However, the word "Zukunftsschutz" (future protection) gets the highest ranking, even higher than "Longtermism" itself. As the word has a very positive connotation in German, it might work well for outreach purposes. However, since existing media articles already use "Longtermism," I suggest publishing the translated content with the same framing.
Introduction: I am currently working part-time on translating more key longtermist content into German. Since this may shape the debate on the topic for many years, I want to do it carefully and be open to adapting it to German culture and language. After all, the target group of the content is Germans who have not yet heard of longtermism. I ran a competition for the best German translation of "longtermism". Thank you to everyone who participated! I ended up with more useful words than I had hoped, so I picked the most promising ones and surveyed the German public's preferences. The words I used were:
- Longtermism
- Longtermismus (just a Germanification of the word)
- Langzeitismus (a rather literal German translation)
- Zukunftismus, "futurism" (though "futurism" translates to Futurismus, so it does not have that connotation)
- Zukunftsschutz, "future protection"
- Langzeitdenken/Langfristdenken, "long-term thinking" (sorry, I confused the two words in the survey, though I think they are close enough to still make the results interpretable)
- Ganzzeitdenken, "inclusive-time thinking," "whole-time thinking," "thinking about all time"
There was some discussion of the term Zukunftsethik ("future ethics"), though I decided not to include it, as it sounds more like a field than an ideology.
Methods (consider skipping this part): The survey was conducted on Mechanical Turk. Respondents were paid 2€ for spending roughly 5-10 minutes. The only requirement was having Germany as a location, so respondents were not necessarily native Germans, but I have no data on this. (I could not include Austria and Switzerland, though I would be surprised to see different results there.) Besides that, I collected no data on age, gender, education, etc. Before launching the final version, a small version was tested to check that the platform worked well and that the answers were reasonable. The final survey had 35 respondents. You can find the original survey here and the results here (raw) and here (edited). I presented the core idea of longtermism and then asked participants to summarize it. Afterward, the first set of questions followed, presenting each German word in random order in the context of a constant German sentence, e.g., "Zukunftismus" ist die Einstellung, dass der Schutz künftiger Generationen stärker priorisiert werden soll. ("Zukunftismus" is the attitude that the protection of future generations should be given higher priority.) Another set of questions put the same words, in another order, in the context of a member of the movement, e.g., Clara, überzeugte "Langzeitdenkerin", macht sich dafür stark, deutlich mehr Geld in die Pandemieprävention zu investieren. (Clara, a committed "Langzeitdenkerin," advocates investing significantly more money in pandemic prevention.)
The words used were: Longtermism, Longtermismus, Zukunftismus, Langzeitismus, Zukunftsschutz, Langfristdenken, Ganzzeitdenken. I realized I had mixed up the word Langfristdenken and used Langzeitdenkerin as the corresponding word. Participants were asked to rate each word on a scale from 1 to 6. At the end, participants were asked whether they thought not translating the word at all would be better, on a scale from 0 to 6. Finally, I asked if they knew about Effective Altruism, though this question was not stated well and I suspect some participants did ...
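The post does not include its analysis code, but as a rough illustration of the aggregation step it describes, here is a minimal Python sketch of how per-word 1-to-6 ratings could be averaged into the kind of ranking reported above. The ratings below are invented placeholders, not the survey's actual data:

```python
# Minimal sketch: average 1-6 ratings per candidate word and rank them.
# The rating lists are hypothetical placeholders, not the real responses.
from statistics import mean

ratings = {
    "Longtermism":    [4, 5, 3, 4],
    "Longtermismus":  [3, 4, 3, 3],
    "Langzeitismus":  [2, 3, 2, 3],
    "Zukunftsschutz": [5, 5, 4, 5],
}

# Sort candidate words by mean rating, highest first.
ranking = sorted(ratings, key=lambda word: mean(ratings[word]), reverse=True)

for word in ranking:
    print(f"{word}: {mean(ratings[word]):.2f}")
```

With the real response data in place of the placeholders, the same mean-and-sort step would reproduce a ranking like the one reported, in which "Zukunftsschutz" comes out on top.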
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflection Mechanisms as an Alignment Target: A Survey, published by Marius Hobbhahn on June 22, 2022, on The AI Alignment Forum.
This is a product of the 2022 AI Safety Camp. The project was done by Marius Hobbhahn and Eric Landgrebe under the supervision of Beth Barnes. We would like to thank Jacy Reese Anthis and Tyna Eloundou for detailed feedback. You can find the Google Doc for this post here; links to other sections of the text automatically point to the Google Doc. Feel free to add comments there.
Abstract: We surveyed ~1,000 US-based Mechanical Turk workers (selected and quality-tested by Positly) on their attitudes toward moral questions, the conditions under which they would change their moral beliefs, and their approval of different mechanisms for society to resolve moral disagreements. Unsurprisingly, our sample disagreed strongly on questions such as whether abortion is immoral. In addition, a substantial fraction of people reported that these beliefs wouldn't change even if they came to different beliefs about factors we view as morally relevant, such as whether the fetus is conscious in the case of abortion. However, people were generally favorable to the idea of society deciding policies by some means of reflection, such as democracy, a debate between well-intentioned experts, or thinking for a long time. In a hypothetical idealized setting for reflection (a future society where people were more educated, informed, well-intentioned, etc.), people were favorable to using the results of the top reflection mechanisms to decide policy. This held even when respondents were asked to assume that the results came to the opposite conclusion from their own on strongly held moral beliefs such as views on abortion. This suggests that ordinary Americans may be willing to defer to an idealized reflection mechanism even when they have strong object-level moral disagreements, and indicates that people would likely support aligning AIs to the results of some reflection mechanism rather than to people's current moral beliefs.
Introduction: Optimistically, a solution to the technical alignment problem will allow us to align an AI to "human values." This naturally raises the question of what we mean by this phrase. For many object-level moral questions (e.g., "Is abortion immoral?"), there is no consensus we could call a "human value." When lacking moral clarity, we, as humans, resort to a variety of procedures to resolve conflicts both with each other (democracy/voting, debate) and within ourselves (reading books on the topic, talking with our family or religious community). In this way, although we may not be able to reach agreement at the object level, we may be able to come to a consensus by agreeing at the meta level ("whatever democracy decides will determine the policy when there are disagreements"); this is the distinction between normative ethics and meta-ethics in philosophy. We see the meta-question of value choice as relevant to strategic decisions around AI safety for a few reasons. First, understanding people's preferred conflict-resolution mechanisms could be relevant for AI governance and governance in general. Specifically, we want to avoid arms-race conditions, so that responsible labs have adequate time to ensure the safety of human-level AI.
We feel that the ability to avoid race conditions could be strongly influenced by people's broad ability to agree on meta-ethics. In particular, people should be less likely to fight for control of AI if they trust that AI builders will produce a system that reflects their (potentially meta-level) values. Understanding people's expressed meta-ethics will therefore hopefully inform what types of governance structures are most likely to avoid race conditions. Additionally, understanding people's...
After a year-plus-long absence of curios, we once again rank a strange item! This time around, we travel to the 1700s to find an automaton chess player dressed as a Turkish man. The 1700s had robots!? Join us on this journey to discover the reality behind this robotic chess player, which played against people such as Benjamin Franklin and Napoleon. There are so many twists and turns in this story, you may throw your back out!
Today we learn about automatons from hundreds of years ago, the most famous of which may have been the Mechanical Turk, which played chess with actual people in the room and got pissy with you if you tried to cheat. Then it's a personal story of a deeply unsettling babysitting gig, the 1989 Olympic Festival in Norman, Oklahoma, and the bleak shitscape that our lives as Americans are becoming. It's fucking awful! Join us. BONUS: The OK 89 Olympic Festival in its entirety, cued up to Roger Miller's performance, which was unquestionably the best part.
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Org announcement: [AC]RC, published by Vivek Hebbar on April 17, 2022, on LessWrong. It seems that certain thermodynamic phenomena are highly relevant to alignment. From Paul Christiano's rebuttal to a recent seminal paper: In fact, it appears that thousands of words may have been written on this topic, from which many lesser researchers have bounced off. Seeing that Paul Christiano and John Wentworth have spent countless hours investigating this important topic, its importance is quite evident. In order to free up their time for other strategies like ELK, I am launching a new EA-funded initiative: [AC]RC. Our initial staff consists of myself and 10 Mechanical Turk workers, who shall be tasked with data gathering. For seed funding, we are requesting a modest sum of $50M. Given that Paul's time is worth $20 trillion per hour, this seems like a reasonable tradeoff. Our initial research agenda consists of investigating the space of designs with a complex number of hoses. Further, we have reason to believe that regularizing by the complexity of the stupidest argument for a given design leads to good inductive-bias properties. Our work may even have direct implications for other alignment agendas, as it shares structure with many important problems. Furthermore, building infrastructure for practical engineering projects is of great value, even if air conditioners aren't quite mechanistically identical to AGI. You can support our work by loudly advertising your loyalty to the 1-hose or 2-hose camp. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
We discuss the Mechanical Turk, a chess-playing automaton from the 1700s whose skill at the game became famous across Europe. The mysterious clockwork machine played and defeated many notable figures, including Napoleon and Benjamin Franklin.
Beep beep boop. This text has been procedurally generated by a mechanical creation that's also looking for LOVE. This week, Shaunn and TJ talk ROBOTS, including Maria, Johnny 5, Bender and Vision. Plus we play FMK and I See What You Did There. Hopefully you're sentient enough to listen!
Welcome back, Spooky Lovelies! Tonight Our Ms. Merrick has a dark & devious tale of the very first handmade beings! From The Golem to The Mechanical Turk, she'll tell you all about how they work… or seem to. Even Our Friend, Mr. Poe has a part in this peculiar tale! And be sure to see The Midnight Library's Own Glass Case Gypsy, Maria, whilst you visit, and have your...misfortune told. Special Thanks to Sounds Like an Earful Music Supply for the amazing music AND sound design during this episode.