 
My fellow pro-growth/progress/abundance Up Wingers,

For most of history, stagnation — not growth — was the rule. To explain why prosperity so often stalls, economist Carl Benedikt Frey offers a sweeping tour through a millennium of innovation and upheaval, showing how societies either harness — or are undone by — waves of technological change. His message is sobering: an AI revolution is no guarantee of a new age of progress.

Today on Faster, Please! — The Podcast, I talk with Frey about why societies misjudge their trajectory and what it takes to reignite lasting growth.

Frey is a professor of AI and Work at the Oxford Internet Institute and a fellow of Mansfield College, University of Oxford. He is the director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School. He is the author of several books, including the brand new one, How Progress Ends: Technology, Innovation, and the Fate of Nations.

In This Episode

* The end of progress? (1:28)
* A history of Chinese innovation (8:26)
* Global competitive intensity (11:41)
* Competitive problems in the US (15:50)
* Lagging European progress (22:19)
* AI & labor (25:46)

Below is a lightly edited transcript of our conversation.

The end of progress? (1:28)

. . . once you exploit a technology, the processes that aid that run into diminishing returns, you have a lot of incumbents, you have some vested interests around established technologies, and you need something new to revive growth.

Pethokoukis: Since 2020, we've seen the emergence of generative AI, mRNA vaccines, reusable rockets that have returned America to space, we're seeing this ongoing nuclear renaissance including advanced technologies, maybe even fusion, geothermal, the expansion of solar — there seems to be a lot cooking. Is worrying about the end of progress a bit too preemptive?

Frey: Well, in a way, it's always a bit too preemptive to worry about the future: You don't know what's going to come. But let me put it this way: If you had told me back in 1995 — and if I was a little bit older then — that computers and the internet would lead to a decade-long streak of productivity growth and then peter out, I would probably have thought you nuts, because it's hard to think of anything that is more consequential. Computers have essentially given people the world's store of knowledge in their pockets. The internet has enabled us to connect inventors and scientists around the world. There are few tools that have aided the research process more. There should hardly be any technology that has done more to boost scientific discovery, and yet we don't see it.

We don't see it in the aggregate productivity statistics, so that petered out after a decade. Research productivity is in decline. Measures of breakthrough innovation are in decline. So it's always good to be optimistic, I guess, and I agree with you that, when you say AI and when you read about many of the things that are happening now, it's very, very exciting, but I remain somewhat skeptical that we are actually going to see that leading to a huge revival of economic growth.

I would just be surprised if we don't see any upsurge at all, to be clear, but we do have global productivity stagnation right now. It's not just Europe, it's not just Britain. The US is not doing too well either over the past two decades or so.
China's productivity is probably in negative territory or, by more optimistic measures, stagnant, and so we're having a growth problem.

If tech progress were inevitable, why have predictions from the '90s, and certainly earlier decades like the '50s and '60s, about transformative breakthroughs and really fast economic growth by now, consistently failed to materialize? How does your thesis account for why those visions of rapid growth and progress have fallen short?

I'm not sure if my thesis explains why those expectations didn't materialize, but I'm hopeful that I do provide some framework for thinking about why we've often seen historically rapid growth spurts followed by stagnation and even decline. The story I'm telling is not rocket science, exactly. It's basically built on the simple intuitions that once you exploit a technology, the processes that aid that run into diminishing returns, you have a lot of incumbents, you have some vested interests around established technologies, and you need something new to revive growth.

So for example, the Soviet Union actually did reasonably well in terms of economic growth. A lot of it, or most of it, was centered on heavy industry, I should say. So people didn't necessarily see the benefits in their pockets, but the economy grew rapidly for about four decades or so, then growth petered out, and eventually it collapsed. So for exploiting mass-production technologies, the Soviet system worked reasonably well. Soviet bureaucrats could hold factory managers accountable by benchmarking performance across factories.

But that became much harder when something new was needed, because when something is new, what's the benchmark? How do you benchmark against that? And more broadly, when something is new, you need to explore, and you often need to explore different technological trajectories. So in the Soviet system, if you were an aircraft engineer and you wanted to develop your prototype, you could go to the Red Army and ask for funding. If they turned you down, you maybe had two or three other options. If those turned you down as well, your idea would die with you.

Conversely, in the US back in '99, Bessemer Venture Partners declined to invest in Google, which seemed like a bad idea with the benefit of hindsight, but it also illustrates that Google was no safe bet at the time. Yahoo and AltaVista were dominating search. You need somebody to invest in order to know if something is going to catch on, and in a more decentralized system, you can have more people taking different bets and you can explore more technological trajectories. That is one of the reasons why the US ended up leading the computer revolution, to which Soviet contributions were basically none.

Going back to your question, why didn't those dreams materialize? I think we've made it harder to explore. Part of the reason is protective regulation. Part of the reason is lobbying by incumbents. Part of the reason is, I think, a revolving door between institutions like the US patent office and incumbents, where we see in the data that examiners tend to grant large firms patents that are of low quality and then get lucrative jobs at those places. That's creating barriers to entry. That's not good for new startups and inventors entering the marketplace.
I think that is one of the reasons that we haven't seen some of those dreams materialize.

A history of Chinese innovation (8:26)

So while Chinese bureaucracy enabled scale, Chinese bureaucracy did not really permit much in terms of decentralized exploration, which European fragmentation aided . . .

I wonder, from your analysis of pre-industrial China, whether there are any lessons you can draw about modern China as far as the way in which bad governance can undermine innovation and progress?

Pre-industrial China has a long history. China was the technology leader during the Song and Tang dynasties. It had a meritocratic civil service. It was building infrastructure on scales that were unimaginable in Europe at the time, and yet it didn't have an industrial revolution. So while Chinese bureaucracy enabled scale, Chinese bureaucracy did not really permit much in terms of decentralized exploration, which European fragmentation aided, and because there was lots of social status attached to becoming a bureaucrat and passing the civil service examination, if Galileo had been born in China, he would probably have become a bureaucrat rather than a scientist, and I think that's part of the reason too.

But China mostly did well when the state was strong rather than weak. A strong state was underpinned by intensive political competition, and once China had unified and there were fewer peer competitors, you see that the center begins to fade. They struggle to tax local elites in order to keep the peace. People begin to erect monopolies in their local markets and collude with guilds to protect production and their crafts from competition.

So during the Qing dynasty, China begins to decline, whereas we see the opposite happening in Europe. European fragmentation aids exploration and innovation, but it doesn't necessarily aid scaling, and so that is something that Europe needs to come to terms with at a later stage when the industrial revolution starts to take off. And even before that, market integration played an important role in terms of undermining the guilds in Europe, and so part of the reason why the guilds persist longer in China is that the distances between cities are so much greater, and so the guilds are less exposed to competition. In the end, Europe ends up overtaking China, in large part because vested interests are undercut by governments, but also because of investments in things that spur market integration.

Global competitive intensity (11:41)

Back in the 2000s, people predicted that China would become more like the United States; now it looks like the United States is becoming more like China.

This is a great McKinsey kind of way of looking at the world: the notion that what drives innovation is sort of maximum competitive intensity. You were talking about the competitive intensity in both Europe and in China when it was not so centralized. You were talking about the competitive intensity of a fragmented Europe.

Do you think that the current level of competitive intensity between the United States and China — and I really wish I could add Europe in there. Plenty of white papers, I know, have been written about Europe's competitive state and its innovativeness, and I hope those white papers are helpful and someone reads them, but it seems to me that the real competition is between the United States and China.

Do you not think that that competitive intensity will sort of keep those countries progressing despite any of the barriers that might pop up and that you've already mentioned a little bit?
Isn't that a more powerful tailwind than any of the headwinds that you've mentioned?

It could be, I think, if people learn the right lessons from history; at least that's a key argument of the book. Right now, what I'm seeing is the United States moving more towards protectionism with protective tariffs. Right now, what I see is a move towards, we could even say, crony capitalism, with tariff exemptions that some larger firms that are better-connected to the president are able to navigate, but certainly not challengers. You're seeing the United States embracing things like golden shares in Intel, and perhaps even extending that to a range of companies. Back in the 2000s, people predicted that China would become more like the United States; now it looks like the United States is becoming more like China.

And China today is having similar problems and, I would argue, on an even greater scale. Growth used to be the key objective in China, and so for local governments and provincial governments competing on such targets, it was fairly easy to benchmark and measure and hold provincial governors accountable, and they would be promoted inside the Communist Party based on meeting growth targets. Now, China has prioritized common prosperity and more national security-oriented concerns.

And so in China, most progress has been driven by private firms and foreign-invested firms. State-owned enterprise has generally been a drag on innovation and productivity. What you're seeing, though, as China is shifting more towards political objectives, is that it's harder to mobilize private enterprise, where the yardsticks are market share and profitability, for political goals. That means that China is increasingly relying more again on state-owned enterprises, which, again, have been a drag on innovation.

So, in principle, I agree with you: historically you did see Prussian defeat by Napoleon leading to the Stein-Hardenberg Reforms, the abolishment of guild restrictions, and a more competitive marketplace for both goods and ideas. You saw that Russian losses in the Crimean War led to the abolition of serfdom, and so there are many times in history where defeat, in particular, led to striking reforms, but right now, the competition itself doesn't seem to lead to the kinds of reforms I would've hoped to see in response.

Competitive problems in the US (15:50)

I think what antitrust does is, at the very least, it provides a tool that means that businesses are thinking twice before engaging in anti-competitive behavior.

I certainly wrote enough pieces and talked to enough people over the past decade who have been worried about competition in the United States, and the story went something like this: that you had these big tech companies — Google, Meta/Facebook, and Microsoft — that these companies were what they would call "forever companies," that they had such dominance in their core businesses, and they were throwing off so much cash, that these were unbeatable companies, and this was going to be bad for America. People who made that argument just could not imagine how any other companies could threaten their dominance. And yet, at the time, I pointed out that it seemed to me that these companies were constantly in fear that they were one technological advance from being in trouble.

And then lo and behold, that's exactly what happened.
And while in AI, certainly, Google's super important, and Meta/Facebook is super important, so are OpenAI and Anthropic, and there are other companies.

So the point here, after my little soliloquy, is: can we overstate these problems, at least in the United States, when it seems like it is still possible to create a new technology that breaks the apparent stranglehold of these incumbents? Google search does not look quite as solid a business as it did in 2022.

Can we overstate the competitive problems of the United States, or is what you're saying more forward-looking: that perhaps we overstated the competitive problems in the past, but now, due to these tariffs, and executives having to travel to the White House and give the president gifts, that creates a stage for the kind of competitive problems that we should really worry about?

I'm very happy to support the notion that technological changes can lead to unpredictable outcomes that incumbents may struggle to predict and respond to. Even if they predict it, they struggle to act upon it because doing so often undermines the existing business model.

So if you take Google, where the transformer was actually conceived, the seven people behind it, I think, have since left the company. One of the reasons they didn't launch anything like ChatGPT was probably fear of cannibalizing search. So I think the most important mechanisms for dislodging incumbents are dramatic shifts in technology.

None of the legacy media companies ended up leading social media. None of the legacy retailers ended up leading e-commerce. None of the automobile leaders are leading in EVs. None of the bicycle companies, many of which went into automobiles, ended up leading there. So there is a pattern there.

At the same time, I think you do have to worry that there are anti-competitive practices going on that make it harder, and that are costly. The revolving door between the USPTO and companies is one example of that. We also have a reasonable amount of evidence on killer acquisitions, whereby firms buy up a competitor just to shut it down. Those things are happening. I think you need to have tools that allow you to combat that, and I think, more broadly, the United States has a long history of fairly vigorous antitrust policy. I think you'd be hard pressed to suggest that that has been a tremendous drag on American business or American dynamism. So if you don't think, for example, that American antitrust policy has contributed to innovation and dynamism, at the very least, you can't really say either that it's been a huge drag on it.

In Japan, for example, in its postwar history, antitrust was extremely lax. In the United States, it was very vigorous, and it was very vigorous throughout the computer revolution as well, which it wasn't at all in Japan. If you take the lawsuit against IBM, for example, you can debate this: to what extent did it force IBM to unbundle hardware and software, and would Microsoft have been the company it is today without that? With AT&T, it's both the breakup and deregulation as well, but I think by basically all accounts that was a good idea, particularly at the time when the National Science Foundation released ARPANET into the world.

I think what antitrust does is, at the very least, it provides a tool that means that businesses are thinking twice before engaging in anti-competitive behavior.
There's always a risk of antitrust being heavily politicized, and that's always been a bad idea, but at the same time, I think having tools on the books that allow you to check monopolies and steer their investments more towards innovation rather than anti-competitive practices is, broadly speaking, a good thing. I think in the European Union, you often hear that competition policy is a drag on productivity. I think it's the least of Europe's problems.

Lagging European progress (22:19)

If you take the postwar period, at least Europe catches up in most key industries, and actually leads in some of them . . . but doesn't do the same in digital. The question in my mind is: Why is that?

Let's talk about Europe as we sort of finish up. We don't have to write How Progress Ends; it seems like progress has ended, so maybe we want to think about how progress restarts. And is the problem in Europe institutions, or is it the revealed preference of Europeans, that they're getting what they want? That they don't value progress and dynamism, that it is a cultural preference that is manifested in institutions? And if that's the case — you can tell me if that's not the case, I kind of feel like it might be the case — how do you restart progress in Europe, since it seems to have already ended?

The most puzzling thing to me is not that Europe is less dynamic than the United States — that's not very puzzling at all — but that it hasn't even managed to catch up in digital. If you take the postwar period, at least Europe catches up in most key industries, and actually leads in some of them. So in a way, take automobiles, electrical machinery, chemicals, pharmaceuticals: nobody would say that Europe is behind in those industries, or at least not for long. Europe has very robust catch-up growth in the postwar period, but doesn't do the same in digital. The question in my mind is: Why is that?

I think part of the reason is that the returns to innovation, the returns to scaling, in Europe are relatively muted by a fragmented market in services, in particular. The IMF estimates that if you take all trade barriers on services inside the European Union and you add them up, it's something like 110 percent tariffs. Trump Liberation Day tariffs, essentially, imposed within the European Union. That means that European firms in digital and in services don't have a harmonized market to scale into, the way the United States and China have. I think that's by far the biggest reason.

On top of that, there are well-intentioned regulations like the GDPR that, by any account, have been a drag on innovation and have been particularly harmful for startups, whereas larger firms that find it easier to manage compliance costs have essentially managed to offset those costs by capturing a larger share of the market. I think the AI Act is going in the same direction, and so you have more hurdles, you have greater costs of innovating because of those regulatory barriers. And then the return to innovation is more capped by having a smaller, fragmented market.

I don't think that culture or a European lust for leisure rather than work is the key reason. I think there's some of that, but if you look at the most dynamic places in Europe, it tends to be the Scandinavian countries and, being from Sweden myself, I can tell you that most people you will encounter there are not workaholics.

AI & labor (25:46)

I think AI at the moment has a real resilience problem.
It's very good at things where there's a lot of precedent; it doesn't do very well where precedent is thin.

As I finish up, let me ask you: Like a lot of economists who think about technology, you've thought about how AI will affect jobs — given what we've seen in the past few years, would it be your guess that, if we were to look at the labor force participation rates of the United States and other rich countries 10 years from now, we will look at those employment numbers and think, "Wow, we can really see the impact of AI on those numbers"? Will it be extraordinarily evident, or would it be not as much?

Unless there's very significant progress in AI, I don't think so. I think AI at the moment has a real resilience problem. It's very good at things where there's a lot of precedent; it doesn't do very well where precedent is thin. So in most activities where the world is changing, and the world is changing every day, you can't really rely on AI to reliably do work for you.

An example of that: most people know of AlphaGo beating the world champion back in 2016. Few people will know that, back in 2023, human amateurs, using standard laptops and exposing the best Go programs to new positions that they would not have encountered in training, actually beat the best Go programs quite easily. So even in a domain where the problem is basically solved, where we already achieved super-human intelligence, you cannot really know how well these tools perform when circumstances change, and I think that's really a problem. So unless we solve that, I don't think it's going to have an impact that will mean that labor force participation is going to be significantly lower 10 years from now.

That said, I do think it's going to have a very significant impact on white-collar work, and on people's income and sense of status. I think of generative AI, in particular, as a tool that reduces barriers to entry in professional services. I often compare it to what happened with Uber and taxi services. With the arrival of GPS technology, knowing the name of every street in New York City was no longer a particularly valuable skill, and then, with a platform matching supply and demand, anybody with a driver's license could essentially get into their car and top up their income on the side. As a result of that, incumbent drivers faced more competition, and they took a pay cut of around 10 percent.

Obviously, a key difference with professional services is that they're traded. So I think it's very likely that, as generative AI reduces the productivity differential between people in, let's say, the US and the Philippines in financial modeling, in paralegal work, in accounting, in a host of professional services, more of those activities will shift abroad, and I think many knowledge workers who had envisioned prosperous careers may feel a sense of loss of status and income as a consequence, and I do think that's quite significant.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Professor Carl Benedikt Frey thinks that high-value technology jobs have a far higher multiplier effect than manufacturing jobs: each highly skilled professional creates or supports up to five other jobs, whereas each manufacturing job supports only around 1.6 other jobs. Professor Carl Benedikt Frey, the Dieter Schwarz Associate Professor of AI and Work at the Oxford Internet Institute, discussed all this with Joe this morning.
This and all episodes at: https://aiandyou.net/ . "The book seems to be more timely than originally anticipated." I'm talking with Carl Benedikt Frey about his new book, How Progress Ends: Technology, Innovation, and the Fate of Nations, and its exploration of the political and economic effects of policies like tariffs and university defunding comes at a very critical time. AI is projected to have enormous economic and social impacts that call for the biggest of big picture thinking, and Frey is the co-author of the 2013 study The Future of Employment: How Susceptible Are Jobs to Computerization, which has received over 12,000 citations. He is Associate Professor of AI and Work at the Oxford Internet Institute and Director and Founder of the Future of Work Programme at the Oxford Martin School, both at the University of Oxford. His 2019 book, The Technology Trap: Capital, Labor, and Power in the Age of Automation, was selected as a Financial Times Best Book of the Year and awarded Princeton University's Richard A. Lester Prize. In the conclusion, we talk about the links between innovation and industry productivity, why AI hasn't yet delivered broad gains, automation's uneven effects on workers, the role of antitrust in sustaining competition, and the need for institutions like Oxford to adapt. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Many people expect improvements in technology over the next few years, but fewer people are optimistic about improvements in the economy. Especially in Europe, there's a narrative that productivity has stalled, that the welfare state is over-stretched, and that the regions of the world where innovation will be rewarded are the US and China – although there are lots of disagreements about which of these two countries will gain the upper hand.

To discuss these topics, our guest in this episode is Carl Benedikt Frey, the Dieter Schwarz Associate Professor of AI & Work at the Oxford Internet Institute. Carl is also a Fellow at Mansfield College, University of Oxford, and is Director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School.

Carl's new book has the ominous title, "How Progress Ends". The subtitle is "Technology, Innovation, and the Fate of Nations". A central premise of the book is that our ability to think clearly about the possibilities for progress and stagnation today is enhanced by looking backward at the rise and fall of nations around the globe over the past thousand years. The book contains fascinating analyses of how countries at various times made significant progress, and at other times stagnated. The book also considers what we might deduce about the possible futures of different economies worldwide.

Selected follow-ups:
Professor Carl-Benedikt Frey - Oxford Martin School
How Progress Ends: Technology, Innovation, and the Fate of Nations - Princeton University Press
Stop Acting Like This Is Normal - Ezra Klein ("Stop Funding Trump's Takeover")
OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
A Human Amateur Beat a Top Go-Playing AI Using a Simple Trick - Vice
The future of employment: How susceptible are jobs to computerisation? - Carl Benedikt Frey and Michael A. Osborne
Europe's Choice: Policies for Growth and Resilience - Alfred Kammer, IMF
MIT Radiation Laboratory ("Rad Lab")

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
This and all episodes at: https://aiandyou.net/ . "The book seems to be more timely than originally anticipated." I'm talking with Carl Benedikt Frey about his new book, How Progress Ends: Technology, Innovation, and the Fate of Nations, and its exploration of the political and economic effects of policies like tariffs and university defunding comes at a very critical time. AI is projected to have enormous economic and social impacts that call for the biggest of big picture thinking, and Frey is the co-author of the 2013 study The Future of Employment: How Susceptible Are Jobs to Computerization, which has received over 12,000 citations. He is Associate Professor of AI and Work at the Oxford Internet Institute and Director and Founder of the Future of Work Programme at the Oxford Martin School, both at the University of Oxford. His 2019 book, The Technology Trap: Capital, Labor, and Power in the Age of Automation, was selected as a Financial Times Best Book of the Year and awarded Princeton University's Richard A. Lester Prize. We talk about whether progress is inevitable, how growth depends on the interplay of technology and institutions, the link between productivity and innovation, the importance of institutional flexibility and decentralized funding, the effects of tariffs, the risks of China's increasingly centralized model, and why the US and China are both triggering declining dynamism in each other. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
In this episode we continue exploring children's online safety and cross the Atlantic to unpack the UK's Online Safety Act (OSA). Vicki Nash, Director of the Oxford Internet Institute, joins host Yasmin Sheikh to discuss the rationale of OSA and how it aims to protect children from illegal and harmful content online. Vicki talks about the difficulty of producing rigorous empirical studies of online harms and benefits to inform policymaking, the debates around free speech, age verification, and privacy, and the differences in approaches to these issues between the UK, EU, and US. Finally, they discuss whether lawyers can do their part by improving their own literacy around digital issues, and the need to continually evolve our policy approach to online governance.
Meta's Submarine Cable Megaproject Meta's Project Waterworth isn't just another undersea cable - it's an 18,000-mile behemoth set to link continents, with India's data centres as a key hub. Doctoral researcher Anniki Mikelsaar from the Oxford Internet Institute breaks down its geopolitical stakes: Will it dodge regulatory battles? The project also raises questions about regulation and security. Who controls these underwater data highways? As demand for low-latency connections grows, Meta's investment could redefine how the world stays online - and who profits from it. AI vs. Diabetic Foot Ulcers Diabetes can silently attack nerves, leading to foot ulcers that risk amputation. Enter LookInside, an AI app that diagnoses ulcers in seconds using smartphone scans. Founder Dr. Hermilio Carvalho explains how it could slash global healthcare costs—and why it's part of a wave of AI tools fighting diabetes' invisible threats. Tinder's double dating revolution Swipe culture just got safer: Tinder's Double Date lets users team up with a friend to match with pairs. Gareth cheers it as a no-brainer for safety, while Chhavi wonders: Could this kill awkward first-date small talk? Either way, it's a bold step towards reimagining digital dating. The programme is presented by Gareth Mitchell and the studio expert is Chhavi Sachdev. More on this week's stories: Meta's submarine cable – the longest ever LookInside – Diagnosing diabetic foot ulcers with AI Tinder Double Dating For the PodExtra version of the show please subscribe via this link: https://somewhere-on-earth-the-global-tech-podcast-the-podextra-edition.pod.fan/ Follow us on all the socials: Join our Facebook group Instagram BlueSky YouTube If you like Somewhere on Earth, please rate and review it on Apple Podcasts or Spotify Contact us by email: hello@somewhereonearth.co Send us a voice note: via WhatsApp: +44 7486 329 484 Find a Story + Make it News = Change the World Learn more about your ad choices. Visit megaphone.fm/adchoices
This week we're looking at AI and public services. How far could AI tools help to tackle stagnant public sector productivity? What dangers are associated with AI adoption? And how can these dangers be addressed? Artificial intelligence is increasingly being touted as a game-changer across various sectors, including public services. But while AI presents significant opportunities for improving efficiency and effectiveness, concerns about fairness, equity, and past failures in public sector IT transformations loom large. And, of course, the idea of tech moguls like Elon Musk wielding immense influence over our daily lives is unsettling for many. So, what are the real opportunities AI offers for public services? What risks need to be managed? And how well are governments—particularly in the UK—rising to the challenge? In this episode, we dive into these questions with three expert guests who have recently published an article in The Political Quarterly on the subject: Helen Margetts – Professor of Society and the Internet at the Oxford Internet Institute, University of Oxford, and Director of the Public Policy Programme at The Alan Turing Institute. Previously, she was Director of the School of Public Policy at UCL. Cosmina Dorobantu – Co-director of the Public Policy Programme at The Alan Turing Institute. Jonathan Bright – Head of Public Services and AI Safety at The Alan Turing Institute. Mentioned in this episode: Margetts, H., Dorobantu, C. and Bright, J. (2024), How to Build Progressive Public Services with Data Science and Artificial Intelligence. The Political Quarterly. Transcription link: https://uncoveringpolitics.com/episodes/ai-and-public-services/transcript Date of episode recording: 2025-02-13T00:00:00Z Duration: 00:42:48 Language of episode: English (UK) TAGS: AI, government, politics, bureaucracy, political quarterly, efficiency Presenter: Alan Renwick Guests: Helen Margetts, Cosmina Dorobantu, Jonathan Bright Producer: Eleanor Kingwell-Banham
The Disturbing Reality of AI-Generated AbuseA new study reveals nearly 35,000 publicly downloadable AI models capable of generating deepfake pornography—often targeting women and celebrities. These "model variants" can be tweaked to create millions of non-consensual intimate images, fuelling a growing crisis in digital exploitation. Researchers identified 35,000 deepfake model variants in public repositories, with each model being used to generate countless AI-generated explicit images of real people. While 35,000 models may seem small compared to the vast internet, each one represents a potential weapon for harassment. Experts warn that without stricter regulation, deepfake abuse could spiral further out of control. The study was led by Will Hawkins from the Oxford Internet Institute. He joins Gareth and Ghislaine on the show. MySQL at 30: The Unsung Hero Powering Facebook, the Web & Your Data - The Database That Quietly Runs the InternetYou've probably never heard of it—but your data lives on MySQL. Celebrating its 30th anniversary this year, this open-source database is the invisible backbone of Facebook, countless websites, and nearly every major online service. It's an open-source success story, which relies on simplicity and speed and it's easy to set up. We spoke with Peter Zaitsev, who joined MySQL in 2002 and later co-founded Percona, a leading database consultancy. Next time you log into Facebook or book a flight, remember - there's a 30-year-old database working behind the scenes. Not bad for a tech "underdog." The programme is presented by Gareth Mitchell and the studio expert is Ghislaine Boddington. More on this week's stories: Dramatic rise in publicly downloadable deep fake image generators 1995-2025: MySQL at 30! Production Manager: Liz Tuohy Editor: Ania Lichtarowicz For the PodExtra version of the show please subscribe via this link: https://somewhere-on-earth-the-global-tech-podcast-the-podextra-edition.pod.fan/ Follow us on all the socials: Join our Facebook group Instagram BlueSky If you like Somewhere on Earth, please rate and review it on Apple Podcasts or Spotify Contact us by email: hello@somewhereonearth.co Send us a voice note via WhatsApp: +44 7486 329 484 Find a Story + Make it News = Change the World Learn more about your ad choices. Visit megaphone.fm/adchoices
Artificial intelligence (AI) is predicted to transform many aspects of our lives in the coming years. One of these is the functioning of the democratic system. AI could shift how election campaigns are conducted and how citizens receive and process information. It could also fundamentally alter patterns of power across society. So what potential effects of AI on democracy should we be particularly aware of? To what degree are there opportunities as well as dangers? And how should policy-makers respond? We discussed these questions with an expert panel.

Speakers:
Danielle Allen, James Bryant Conant University Professor at Harvard University, specialising in political philosophy, public policy, and ethics, and Founder and Chairperson of Partners In Democracy.
Helen Margetts, Professor of Society and the Internet at the Oxford Internet Institute at the University of Oxford, and Director of the Public Policy programme at the Alan Turing Institute.
Ciaran Martin, Professor of Practice in the Management of Public Organisations at the Blavatnik School of Government, University of Oxford, and formerly Chief Executive of the UK government's National Cyber Security Centre.
Chair: Professor Alan Renwick – Deputy Director of the Constitution Unit

Additional reading:
'The Real Dangers of Generative AI' by Danielle Allen and E. Glen Weyl

Links:
Website: www.ucl.ac.uk/constitution-unit
Mailing list: www.ucl.ac.uk/constitution-unit/get-involved/mailing-list
Blog: constitution-unit.com
These days, so much of our lives takes place online - but what about our afterlives? A recent study by the Oxford Internet Institute predicts that the number of deceased Facebook users could outnumber the living by 2070. As AI advances, a debate is growing over digital remains and what should be done with the vast amounts of data we leave behind. In this episode, Carl Öhman, author of The Afterlife of Data: What Happens to Your Information When You Die and Why You Should Care, explores the ethics, politics, and future of our digital identities. Named one of The Economist's Best Books of 2024, Öhman's work sheds light on who truly owns our data after death - and whether we should have a say in our digital legacy. Carl Öhman is an assistant professor of political science at Uppsala University, Sweden. His research spans several topics, including the politics and ethics of AI, deepfakes and digital remains. He is joined in conversation by Stephanie Hare, researcher, broadcaster, and author of Technology is Not Neutral: A Short Guide to Technology Ethics. If you'd like to become a Member and get access to all our full conversations, plus all of our Members-only content, just visit intelligencesquared.com/membership to find out more. For £4.99 per month you'll also receive: - Full-length and ad-free Intelligence Squared episodes, wherever you get your podcasts - Bonus Intelligence Squared podcasts, curated feeds and members exclusive series - 15% discount on livestreams and in-person tickets for all Intelligence Squared events ... Or Subscribe on Apple for £4.99: - Full-length and ad-free Intelligence Squared podcasts - Bonus Intelligence Squared podcasts, curated feeds and members exclusive series … Already a subscriber? Thank you for supporting our mission to foster honest debate and compelling conversations! Visit intelligencesquared.com to explore all your benefits including ad-free podcasts, exclusive bonus content and early access. … Subscribe to our newsletter here to hear about our latest events, discounts and much more. https://www.intelligencesquared.com/newsletter-signup/ Learn more about your ad choices. Visit podcastchoices.com/adchoices Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this episode on Speaking Out of Place podcast Professor David Palumbo-Liu talks with Sarah T. Roberts about the hidden humans behind Artificial Intelligence, which is reliant on executives and business managers to direct AI to promote their brand and low-level, out-sourced, and poorly paid content managers to slog through masses of images, words, and data before they get fed into the machine. They talk about the cultural, sociological, financial, and political aspects of AI. They end by taking on Elon Musk and the DOGE project, as an emblem of how Silicon Valley executives have embraced a brand of tech rapture that disdains and destroys democracy and attacks the idea that people can take care of each other, independent of sociopathic libertarianism.Sarah T. Roberts, Ph.D., is a full professor at UCLA (Gender Studies, Information Studies, Labor Studies), specializing in Internet and social media policy, infrastructure, politics and culture, and the intersection of media, technology, and society. She is the faculty director and co-founder of the UCLA Center for Critical Internet Inquiry (C2i2), co-director of the Minderoo Initiative on Technology & Power, and a research associate of the Oxford Internet Institute. Informed by feminist Science and Technology Studies perspectives, Roberts is keenly interested in the way power, geopolitics, and economics play out on and via the internet, reproducing, reifying, and exacerbating global inequities and social injustice.www.palumbo-liu.comhttps://speakingoutofplace.comBluesky @palumboliu.bsky.socialInstagram @speaking_out_of_placePhoto of Elon Musk: Debbie RoweCreative Commons Attribution-Share Alike 3.0 Unported
Today on Speaking Out of Place I talk with Sarah T Roberts about the hidden humans behind Artificial Intelligence, which is reliant on executives and business managers to direct AI to promote their brand and low-level, out-sourced, and poorly paid content managers to slog through masses of images, words, and data before they get fed into the machine. We talk about the cultural, sociological, financial, and political aspects of AI. We end by taking on Elon Musk and the DOGE project, as an emblem of how Silicon Valley executives have embraced a brand of tech rapture that disdains and destroys democracy and attacks the idea that people can take care of each other, independent of sociopathic libertarianism.Sarah T. Roberts, Ph.D. is a full professor at UCLA (Gender Studies, Information Studies, Labor Studies), specializing in Internet and social media policy, infrastructure, politics and culture, and the intersection of media, technology and society. She is the faculty director and co-founder of the UCLA Center for Critical Internet Inquiry (C2i2), co-director of the Minderoo Initiative on Technology & Power, and a research associate of the Oxford Internet Institute. Informed by feminist Science and Technology Studies perspectives, Roberts is keenly interested in the way power, geopolitics and economics play out on and via the internet, reproducing, reifying and exacerbating global inequities and social injustice.
Artificial intelligence is increasingly being touted as a game-changer across various sectors, including public services. But while AI presents significant opportunities for improving efficiency and effectiveness, concerns about fairness, equity, and past failures in public sector IT transformations loom large. And, of course, the idea of tech moguls like Elon Musk wielding immense influence over our daily lives is unsettling for many.

So, what are the real opportunities AI offers for public services? What risks need to be managed? And how well are governments—particularly in the UK—rising to the challenge?

In this episode, we dive into these questions with three expert guests who have recently published an article in The Political Quarterly on the subject:

Helen Margetts – Professor of Society and the Internet at the Oxford Internet Institute, University of Oxford, and Director of the Public Policy Programme at The Alan Turing Institute. Previously, she was Director of the School of Public Policy at UCL.
Cosmina Dorobantu – Co-director of the Public Policy Programme at The Alan Turing Institute.
Jonathan Bright – Head of Public Services and AI Safety at The Alan Turing Institute.

Mentioned in this episode:
Margetts, H., Dorobantu, C. and Bright, J. (2024), How to Build Progressive Public Services with Data Science and Artificial Intelligence. The Political Quarterly.

UCL's Department of Political Science and School of Public Policy offers a uniquely stimulating environment for the study of all fields of politics, including international relations, political theory, human rights, public policy-making and administration. The Department is recognised for its world-class research and policy impact, ranking among the top departments in the UK on both the 2021 Research Excellence Framework and the latest Guardian rankings.
Is AI going to replace your job? Is artificial intelligence a threat to your employment? If your profession involves machine learning or chatbot development, the outlook appears promising but for translators and some writing roles it's not. Those are the findings of a study recently published in the Journal of Economic Behaviour and Organisation. Researchers analysed over three million job postings from a global freelancing platform since 2019. They identified which skills are being replaced, complemented, or remain unaffected by AI. Two of the authors, Dr. Fabian Braesemann from the Oxford Internet Institute, and Dr. Ole Tuetloff from the Copenhagen Centre for Social Data Science join Gareth and Ania on the show. The World's First Inclusive Digital Map Jez Donaldson continues his search for the very best tech for good stories from the Consumer Electronics Show. This week he meets Brandon Biggs, CEO and Co-founder of XR Navigation (Audiom) inclusive map viewer. Using enhanced visual tech, audio and text it makes maps accessible to blind people, as well as those with other disabilities. The programme is presented by Gareth Mitchell and the studio expert is Ania Lichtarowicz. More on this week's stories: Winners and Losers of Generative AI XR Navigation Production Manager: Liz Tuohy Editor: Ania Lichtarowicz For the PodExtra version of the show please subscribe via this link: https://somewhere-on-earth-the-global-tech-podcast-the-podextra-edition.pod.fan/ Follow us on all the socials: Join our Facebook group Instagram BlueSky If you like Somewhere on Earth, please rate and review it on Apple Podcasts or Spotify Contact us by email: hello@somewhereonearth.co Send us a voice note: via WhatsApp: +44 7486 329 484 Find a Story + Make it News = Change the World Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of Human Rights Talks, we talk to Samuel Woolley, the Dietrich Endowed Chair in Disinformation Studies at the University of Pittsburgh. Samuel talks about the role of encrypted messaging apps in spreading mis- and disinformation, how it impacts diaspora communities in particular and democracy more generally, and how some organizations are fighting back. Samuel Woolley is a writer and researcher specializing in the study of automation/artificial intelligence, emergent technology, politics, persuasion and social media. He is currently the Dietrich Endowed Chair in Disinformation Studies at the University of Pittsburgh. Previously he founded the Propaganda Research Lab, Center for Media Engagement at The University of Texas at Austin. He also founded and directed the Digital Intelligence Lab at the Institute for the Future, a 50-year-old think tank based in the heart of Silicon Valley. He also cofounded and directed the research team at the Computational Propaganda Project at the Oxford Internet Institute, University of Oxford. He has written on political manipulation of technology for a variety of publications including Wired, The Atlantic, Motherboard VICE, TechCrunch, The Guardian, Quartz and Slate. His work has been presented to members of the North Atlantic Treaty Organization, the US Congress, the UK Parliament and to numerous private entities and civil society organizations.
Having arrived in Britain as a five-year-old refugee fleeing the Nazis on the Kindertransport, Dame Stephanie Shirley went on to build one of Britain's most successful software companies. She built a business valued at $3 billion and has dedicated the last three decades of her life to philanthropy, giving away most of her personal wealth. Here are Dame Stephanie's three crisis comforts. Full episode: https://www.crisiswhatcrisis.com/podcasts/dame-stephanie-shirley-on-91-years-of-resilience-and-brilliance/

Links
Instagram: https://www.instagram.com/damestephanie_/?hl=en
Website: https://www.steveshirley.com/

Book
Let It Go (2019): https://www.amazon.co.uk/Let-Go-Extraordinary-Entrepreneur-Philanthropist/dp/0241395496

Charities
Autistica: https://www.autistica.org.uk/
Autism at Kingwood: https://www.kingwood.org.uk/
Prior's Court: https://www.priorscourt.org.uk/
The Oxford Internet Institute: https://www.oii.ox.ac.uk/
The Worshipful Company of Information Technologists: https://wcit.org.uk/

Stream/buy 'Allies' by Some Velvet Morning: https://ampl.ink/qp6bm
Some Velvet Morning Website: www.somevelvetmorning.co.uk
Your Daily Practice: Sleep by Myndstream: https://open.spotify.com/track/5OX9XgJufFz9g63o2Dv2i5?si=b2f9397c92084682

This podcast is brought to you by Crisis What Crisis? Productions and Coulson Partners
Host – Andy Coulson
CWC team: Jane Sankey, Louise Difford, Mabel Pickering
With special thanks to Ioana Barbu and the brilliant people at Global
For all PR and guest approaches please contact – podcast@coulsonpartners.com
Having arrived in Britain as a five-year-old refugee fleeing the Nazis on the Kindertransport, Dame Stephanie Shirley went on to build one of Britain's most successful software companies. Using the pen name 'Steve' to be taken more seriously by clients, she built a business valued at $3 billion, turning 70 staff members into millionaires along the way. But her impact goes far beyond the business world. Dame Stephanie has dedicated the last three decades of her life to philanthropy, giving away most of her personal wealth. Once the 11th richest woman in the UK, she's the only person to drop off the Sunday Times Rich List due to generosity. Inspired by her late son, Giles, who was profoundly autistic, Dame Stephanie has helped transform care for those with autism along with supporting a range of other causes. In this episode she shares stories from her childhood, her career and her personal life – all of them packed with invaluable lessons. A former podcast guest of ours, Baroness Martha Lane Fox, wrote the foreword to Dame Stephanie's brilliant memoir, Let It Go, and described her as " ... a role model for how to build your life, not just your company." She wasn't wrong.

Links
Instagram: https://www.instagram.com/damestephanie_/?hl=en
Website: https://www.steveshirley.com/

Book
Let It Go (2019): https://www.amazon.co.uk/Let-Go-Extraordinary-Entrepreneur-Philanthropist/dp/0241395496

Charities
Autistica: https://www.autistica.org.uk/
Autism at Kingwood: https://www.kingwood.org.uk/
Prior's Court: https://www.priorscourt.org.uk/
The Oxford Internet Institute: https://www.oii.ox.ac.uk/
The Worshipful Company of Information Technologists: https://wcit.org.uk/

Stream/buy 'Allies' by Some Velvet Morning: https://ampl.ink/qp6bm
Some Velvet Morning Website: www.somevelvetmorning.co.uk
Your Daily Practice: Sleep by Myndstream: https://open.spotify.com/track/5OX9XgJufFz9g63o2Dv2i5?si=b2f9397c92084682

This podcast is brought to you by Crisis What Crisis? Productions and Coulson Partners
Host – Andy Coulson
CWC team: Jane Sankey, Louise Difford, Mabel Pickering, Linus Fielder
With special thanks to Ioana Barbu and the brilliant people at Global
For all PR and guest approaches please contact – podcast@coulsonpartners.com
On Sept. 4, 2024, Me, Myself, and AI host Sam Ransbotham moderated a panel discussion at a Georgetown University/World Bank event, Jobs in the Age of AI. Afterward, he interviewed keynote speaker Carl Benedikt Frey, Dieter Schwarz Associate Professor of AI and Work at the Oxford Internet Institute, and panelist Karin Kimbrough, LinkedIn's chief economist. In this bonus episode, recorded during that discussion, hear from Frey and Kimbrough about how artificial intelligence is impacting workers, labor trends, and the economy. Read the episode transcript here. For further information: Watch sessions from the AI in Action event on demand. Access on-demand recordings from all prior AI in Action events. Read event organizers Timothy DeStefano and Jonathan Timmis's paper, “Do Capital Incentives Distort Technology Diffusion? Evidence on Cloud, Big Data, and AI.” Me, Myself, and AI is a collaborative podcast from MIT Sloan Management Review and Boston Consulting Group and is hosted by Sam Ransbotham and Shervin Khodabandeh. Our engineer is David Lishansky, and the coordinating producers are Allison Ryder and Alanna Hooper. Stay in touch with us by joining our LinkedIn group, AI for Leaders, at mitsmr.com/AIforLeaders or by following Me, Myself, and AI on LinkedIn. We encourage you to rate and review our show. Your comments may be used in Me, Myself, and AI materials.
Recording of two keynotes given at the event "AI and Security - A multidisciplinary Approach", held in Vienna on November 18th, 2024.
Speakers:
00:02 Welcome speech by Prof. Dr. Ronald Maier, Vice-Rector for Digitalisation and Knowledge Transfer of the University of Vienna
Opening remarks by:
05:43 Christoph Campregher, Head of IT-Security, Vienna University Computer Center
08:15 Nikolaus Forgó, Head of the Department of Innovation and Digitalisation in Law of the University of Vienna
10:45 Barbara Prainsack, Department of Political Science of the University of Vienna
Keynote II chaired by Barbara Prainsack:
16:30 Jörg Heidrich, Lawyer for IT law and DPO at Heise
Keynote I chaired by Barbara Prainsack:
52:40 Brent Mittelstadt, Associate Professor and Director of Research at the Oxford Internet Institute, University of Oxford
Laurie Taylor lifts the lid on a sector of the economy associated with wealth, innovation & genius. Mark Graham, Professor of Internet Geography at the Oxford Internet Institute, uncovers the hidden human labour powering AI. His study, based on hundreds of interviews and thousands of hours of fieldwork, is the first to tell the stories of this army of underpaid and exploited workers. Beneath the promise of a frictionless technology that will bring riches to humanity, the interviews he has conducted reveal a grimmer reality involving a precarious global workforce of millions labouring under often appalling conditions. Also, Paula Bialski, Associate Professor for Digital Sociology at the University of St. Gallen in St. Gallen, Switzerland, discusses her research with software developers at a non-flashy, run-of-the-mill tech company. Beyond the awesome images of the gods of Silicon Valley, she finds that technology breaks due to human-related issues and staff are often engaged in patch-up and repair, rather than dreaming up the next killer app. Producer: Jayne Egerton
In this episode, Professor Rory O'Connor and Craig spoke to the Research Programme Lead at MQ Mental Health Research, Mariana Bolivar, and Research Associate and PhD student at the University of Oxford, Thomas Hakman. As the lines between the offline and online worlds blur, the impact of internet use on young people's mental health has become a critical concern. With an overwhelming amount of information available, it can be challenging to separate evidence-based knowledge from low-quality sources. In response, MQ Mental Health Research, in collaboration with Melbourne University and the Oxford Internet Institute, has conducted a comprehensive review to understand the true effects of the internet on youth mental health. In this conversation, they discussed the recent findings from the study, how to stay safe online, and whether the internet is truly harmful.
Is Kamala Harris the AI Tsar?
The upcoming debate between Kamala Harris and Donald Trump is scheduled for next week, where the two candidates for the U.S. presidency will engage in a direct confrontation on the ABC news channel in anticipation of the November election. Should the topic of artificial intelligence arise, it may favour Kamala Harris, given her significant involvement in AI policy as Vice President. This perspective is shared by Bhaskar Chakravorti, Dean of Global Business at The Fletcher School, Tufts University, who recently discussed technology, the election, and Kamala Harris with our editor Ania. While the Vice President enjoys substantial support from influential figures in Silicon Valley, it is important to note that prominent tech entrepreneurs such as Elon Musk and Peter Thiel are aligned with Team Trump.
Video games were good for mental health through COVID
Engaging in gaming for a few hours each day has been shown to enhance mental well-being, as indicated by a recent study published in the journal Nature Human Behaviour. One significant finding of this research is that merely owning a gaming console can have positive effects. The critical aspect of this study is that it establishes causation rather than mere correlation, which is a significant advancement in this field of research. For instance, if an individual purchases a gaming console and exhibits good mental health, one might question whether their happiness is a pre-existing condition or if the acquisition of the console has contributed to their improved well-being. This study was conducted during the pandemic, and the unique circumstances of that period enabled researchers to uncover valuable insights regarding the connection between gaming and mental health. Professor Andrew Przybylski from the Oxford Internet Institute explains the significance of the work.
The programme is presented by Gareth Mitchell and the studio expert is Peter Guest.
More on this week's stories:
If Kamala Harris Was the Czar of Anything, It Would Be AI
Professor Andrew Przybylski
Support the show
Editor: Ania Lichtarowicz
Production Manager: Liz Tuohy
Recording and audio editing: Lansons | Team Farner
For new episodes, subscribe wherever you get your podcasts or via this link: https://www.buzzsprout.com/2265960/supporters/new
Follow us on all the socials: Join our Facebook group, Instagram, Twitter/X
If you like Somewhere on Earth, please rate and review it on Apple Podcasts
Contact us by email: hello@somewhereonearth.co
Send us a voice note via WhatsApp: +44 7486 329 484
Find a Story + Make it News = Change the World
Dr Nayana Prakash, formerly of the Oxford Internet Institute, comes into the studio this week to give an insight into the shape and structure of the internet in India. She talks about the country's testy relationships with Facebook/Meta and Twitter/X, why so many Indians see the internet as essential for social mobility and escape, and the ways in which internet culture has been weaponised, and subsequently restricted, by the Indian government under Narendra Modi's BJP. Read and listen to Skeptechs here https://skeptechs.substack.com/ ------- PALESTINE AID LINKS As the humanitarian crisis continues to unfold in Gaza, we encourage anyone who can to donate to Medical Aid for Palestinians. You can donate using the links below. Please also donate to the gofundmes of people trying to escape Rafah, or purchase ESIMs. These links are here if you need a well-respected name attached to a fund to feel comfortable sending money. https://www.map.org.uk/donate/donate https://www.savethechildren.org.uk/how-you-can-help/emergencies/gaza-israel-conflict -------- PHOEBE ALERT Can't get enough Phoebe? Check out her Substack Here! -------- This show is supported by Patreon. Sign up for as little as $5 a month to gain access to a new bonus episode every week, and our entire backlog of bonus episodes! That's https://www.patreon.com/10kpostspodcast -------- Ten Thousand Posts is a show about how everything is posting. It's hosted by Hussein (@HKesvani), Phoebe (@PRHRoy) and produced by Devon (@Devon_onEarth).
Are dogs color blind? Many people believe so, but they are not. They do see color but not the way we do. This episode begins with a look at what colors they can and can't see and why it is important. https://www.psychologytoday.com/intl/blog/canine-corner/200810/can-dogs-see-colors How you will react in an emergency or disaster is hard to predict. Yet how people react can make the difference between life and death. It's not just physical preparation but also mental – to think about what you will do. Here to explain the process your brain goes through when an emergency or disaster strikes and offer some suggestions on how to better plan for them is Amanda Ripley. She is a writer whose work has appeared in the Washington Post, The New York Times, The Atlantic, The Wall Street Journal, and many other publications, and she is the author of the book The Unthinkable: Who Survives When Disaster Strikes--and Why (https://amzn.to/4fGJakN). Some people like their solitude more than others. They cherish their time alone. Yet there is a stigma about solitude. People often think that others who spend a lot of time alone must have something wrong with them or they must be lonely or have no friends. Not necessarily. In the right dose, solitude can be very powerful. Joining me to discuss this is Netta Weinstein, an internationally recognized psychologist and director of the European Research Council's 'Solitude: Alone but Resilient (SOAR)' project. She is also professor of psychology at the University of Reading and an associate researcher at the Oxford Internet Institute, University of Oxford, UK. Netta is the author of the book Solitude: The Science and Power of Being Alone (https://amzn.to/3X1XkWf). Your cellphone is crawling with germs – more than you realize. You touch it all the time, you take it everywhere and put it down on all kinds of surfaces. Listen as I reveal how all the junk on your phone can make you sick – and the simple solution to make sure that doesn't happen. https://www.nm.org/healthbeat/healthy-tips/quick-dose-is-your-cell-phone-making-you-sick Learn more about your ad choices. Visit megaphone.fm/adchoices
The fear of AI taking our jobs has been buzzing for years, but it's not a new conversation. Technology has been shaking up industries and displacing workers since the dawn of the Industrial Revolution.
In this episode, Greg sits down with Carl Benedikt Frey, the Dieter Schwarz Associate Professor of AI & Work at the Oxford Internet Institute, to dive deep into these shifts. As the Director of the Future of Work Programme and author of The Technology Trap, Frey sheds light on the historical and current impacts of automation, the Industrial Revolution, and the role of political power in technological progress. Together, they explore who wins and loses in the AI era and what history can teach us about the future.
*unSILOed Podcast is produced by University FM.*
Episode Quotes:
Will AI drive long-term productivity or just short-term automation? (46:21)
If all that AI is about is automation, then the future of productivity simply depends on the potential scope of automation, so to speak, and that will then eventually peter off. Whereas if it's about creating new tasks, new products, and new innovations, then it can be more sustained, right? And I think that's a key reason that the second industrial revolution lasted for a very long period of time: it created a host of entirely new types of economic activities. And so I think a key question going forward is: can we design our institutions to help make sure that AI is more being used to create new activities? I think it's likely to have a much more sustained impact on productivity growth going forward.
Starting from the past to predict the future (03:07)
If you want to say that the future is likely to be very different from the past, then at the very least, I think we should be able to state why. So I think history should always be our starting point.
On the race between technology and education (39:18)
The race between technology and education is a world in which everybody is better off, right? That has not been the case. So we need to somehow modify that model of the world, and what we've seen since the 1980s, in particular across advanced economies, but also in some emerging economies, is labor market polarization and the decline of middle-income jobs, right? And so the race between technology and education and the view of technological change does not explain that part of the story, right? That's sort of the task-based view, and things like replacing versus enabling technologies do have some explanatory power.
Should we be thinking of this new revolution as being more like the first than the second? (44:22)
I think it is more like the first industrial revolution. And I still think that I can't think of a single AI application that is not about automation or doing something that people are already doing a bit more productively, whether it's writing, coding, or image generation.
Show Links:
Recommended Resources:
Engels' pause
Luddites
Guest Profile:
Faculty Profile at Oxford Martin School
Carl Benedikt Frey
Carl Benedikt Frey (@carlbfrey) / X
His Work:
The Technology Trap: Capital, Labor, and Power in the Age of Automation
There are two core critiques of AI. The first is that it is an existential threat because it replaces humans with algorithms. The second is that AI is a mirror that only compounds preexisting injustices. James Muldoon, an associate professor of management at Essex Business School and co-author of Feeding the Machine, fits into the second category. Reminding us that “AI is people”, he travelled around the world in search of the hidden human labor that is powering the AI revolution. What he found was a huge precariat (estimated by the World Bank to be over 100 million people) who are doing the dirty human work that powers “artificial” intelligence. The AI revolution, then, for Muldoon, is only compounding the exploitative nature of labor in today's increasingly inegalitarian global economy. It is the core problem with, rather than the solution to, 21st-century networked capitalism. James Muldoon is a Reader (Associate Professor) in Management at the Essex Business School, a Research Associate at the Oxford Internet Institute and Head of Digital Research at the Autonomy think tank. His research examines how modern technologies such as artificial intelligence and digital platforms can create public value and serve the common good. It explores how notions of freedom, power and democracy need to be rethought in a digital age and what we can do to harness the positive potential of new technology. His recent work has focussed on the hidden human labour of artificial intelligence and the global production networks that power AI. He also analyses how digital labour is changing across multiple sectors including ride hail, food delivery, domestic work, childcare and microwork.
Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.
Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
So many of us fantasize about being rich. Wouldn't it be great to win the lottery or inherit a ton of money or be born into a wealthy family? Well, it isn't always as wonderful as you might imagine. Listen as I begin by explaining what being rich does to some people – especially young people. Source: Dr. Stephen Berglas, author of Reclaiming the Fire (https://amzn.to/3VjeRIS) Almost no one dreams about math. Almost everyone dreams about falling or being chased. These are just some of the fascinating things I discuss about dreams with Dr. Rahul Jandial. He is a neurosurgeon and neuroscientist who has been studying why people dream and what happens in the brain when dreams occur. If you have wondered about your dreams and how they affect you, you need to hear this conversation. Dr. Jandial is the author of several books; his latest is called This Is Why You Dream: What Your Sleeping Brain Reveals About Your Waking Life (https://amzn.to/3KmOE5N). Do you like solitude? We all like it somewhat – and some of us like a lot of solitude. Is that a problem? After all, humans are social creatures. We like to be with others. Still, there are many people who cherish “alone time.” To understand why solitude is so important, listen to my guest Netta Weinstein. She is a psychologist and director of the European Research Council's 'Solitude: Alone but Resilient (SOAR)' project. She is also professor of psychology at the University of Reading and an associate researcher at the Oxford Internet Institute at the University of Oxford. Netta is the author of the book Solitude: The Science and Power of Being Alone (https://amzn.to/3wVA7eb). Every once in a while, you will see something on a menu labeled “organic seafood.” What does that mean? How can seafood be organic? Listen and I will explain why it is probably not as organic as you would like it to be. https://www.foodrepublic.com/1413904/why-organic-seafood-myth/ PLEASE SUPPORT OUR SPONSORS! Indeed is offering SYSK listeners a $75 Sponsored Job Credit to get your jobs more visibility at https://Indeed.com/SOMETHING Luckily for those of us who live with the symptoms of allergies, we can Live Claritin Clear with Claritin-D! eBay Motors has 122 million parts for your #1 ride-or-die, to make sure it stays running smoothly. Keep your ride alive at https://eBayMotors.com We really like The Jordan Harbinger Show! Check out https://jordanharbinger.com/start OR search for it on Apple Podcasts, Spotify or wherever you listen! Learn more about your ad choices. Visit podcastchoices.com/adchoices
TikTok, the phone app that sends a stream of viral videos to every user, is in trouble in the United States. The company must either change its Chinese ownership or leave the American market. In response, TikTok is underlining its economic contribution in Australia and elsewhere. So how much difference does it make?
Guests:
Dr Joanne Gray, Lecturer in Digital Cultures, University of Sydney
Ariana Hendry, Co-founder, Beysis
Kristian Kolding, Head of Consulting, Oxford Economics Australia
Professor Vili Lehdonvirta, Oxford Internet Institute, Oxford University
OpenAI announces its latest LLM, GPT-4o. What does it mean for the LLM space and its competitors? Plus US and Chinese officials will meet in Switzerland Tuesday to discuss AI security concerns. And a new study by the Oxford Internet Institute suggests that internet access and use are strongly associated with positive wellbeing.
Starring Tom Merritt, Justin Robert Young, Roger Chang, Joe.
Link to the Show Notes.
Meta's Nick Clegg told an event in London that the social media giant isn't seeing artificial intelligence being used on a "systemic basis" to disrupt elections, arguing that the technology can be a 'sword and shield' against harmful content. We discuss the risks posed by AI and social media with our technology reporter Mark Bergen, and Keegan McBride from the Oxford Internet Institute. Plus, James Woolcock reviews Rishi Sunak's appearance on a phone-in radio show. Hosted by Stephen Carroll and Yuan Potts. See omnystudio.com/listener for privacy information.
SOEP meets Dame Steve Shirley – a coding female pioneer
Dame Stephanie (Steve) Shirley is a game changer in whatever she does, and we were delighted that she agreed to chat to Somewhere on Earth. She arrived in England as an unaccompanied child refugee on the Kindertransport in 1939. She convinced her all-girls' school to allow her to learn maths at the boys' school and in 1962 started a software business from her dining table which grew to have 8,500 employees and was worth US$3bn. The company initially only employed women working from home. Since retiring in 1993, Dame Stephanie's life has been dedicated to philanthropy in IT and autism, including setting up the Oxford Internet Institute.
Brazil's Supercomputer to predict natural disasters
Brazil's National Institute for Space Research (Inpe) has finally received funding for a new supercomputer that will significantly improve climate forecasting. The new supercomputer is 15 times more powerful than the institute's current one and scientists hope it can help the country deal with a massive rise in extreme weather events – last year they averaged more than three a day. Angelica Mari speaks to Clezio Nardin, Inpe's Director, about the project.
The programme is presented by Gareth Mitchell and the studio expert is Angelica Mari.
More on this week's stories:
Dame Steve Shirley
Brazil's New Supercomputer
Support the show
Editor: Ania Lichtarowicz
Production Manager: Liz Tuohy
Recording and audio editing: Lansons | Team Farner
For new episodes, subscribe wherever you get your podcasts or via this link: https://www.buzzsprout.com/2265960/supporters/new
Follow us on all the socials: Join our Facebook group, Instagram, Twitter/X
If you like Somewhere on Earth, please rate and review it on Apple Podcasts
Contact us by email: hello@somewhereonearth.co
Send us a voice note via WhatsApp: +44 7486 329 484
Find a Story + Make it News = Change the World
Max Pearson presents a collection of this week's Witness History episodes from the BBC World Service. We hear about Cyberia – the first commercial internet café, which opened in London in 1994. Director of the Oxford Internet Institute at the University of Oxford, Professor Vicki Nash, talks us through other notable landmarks in the internet's history. Plus how the Covid N95 mask was invented by a scientist from Taiwan in 1992. Also how Brazilian theologian Leonardo Boff was punished for his writing on liberation theology. Staying with Brazil, we hear how poor rural workers occupied land owned by the rich, resulting in violent clashes in 1980. And the world's first global seed vault, buried deep inside a mountain on an Arctic island.
Contributors:
Eva Pascoe – a founder of Cyberia internet café
Prof Vicki Nash – Director of the Oxford Internet Institute at the University of Oxford
Peter Tsai – inventor of the N95 mask
Leonardo Boff – Brazilian theologian
Maria Salete Campigotto – Landless Workers Movement protestor
Dr Cary Fowler – founder of Doomsday seed vault
(Photo: People using Cyberia in 1994. Credit: Mathieu Polak/Sygma/Sygma via Getty Images)
The Big 5 Model of Personality is perhaps the most consistently reliable model of personality used in research around the world. Focusing on the key elements of conscientiousness, agreeableness, neuroticism, openness to experience and extraversion, the Big 5 model has been applied to evaluate personality traits in many contexts, not least leadership. A recent paper – co-authored by our guest today on Brain for Business, Dr Fabian Braesemann – considered the Big 5 traits most commonly found in entrepreneurs and founders, assessing amongst other things which personality characteristics are typical of founders and how they contribute to start-up success.
About our guest…
Dr Fabian Braesemann is a Departmental Research Lecturer in AI & Work at the Oxford Internet Institute at the University of Oxford. Fabian's research focuses on the Science of Success. He uses data science methods to quantify the determinants of success in different fields:
1. The Science of Success in Business
2. Success and the Future of Work, and
3. Quantifying success online and offline
Before Fabian started to work as a Departmental Research Lecturer at the OII, he worked as a Research Fellow & Data Scientist in the Future of Real Estate Initiative at the Saïd Business School, University of Oxford, and as a Data Scientist at the OII on projects that applied data science to understand human development and labour markets.
The article co-authored by Fabian is available here: https://www.nature.com/articles/s41598-023-41980-y
The 2019 Nature article discussed in the podcast (“Quantifying the dynamics of failure across science, startups and security” by Yin et al.) is available here: https://arxiv.org/pdf/1903.07562.pdf
You can find out more about Fabian via his Linkedin page (https://www.linkedin.com/in/fabian-braesemann-210645138/) or else via the OII website (https://www.oii.ox.ac.uk/people/profiles/fabian-braesemann/)
Hosted on Acast. See acast.com/privacy for more information.
In this episode, we discuss the AI gold rush and its impact on businesses. According to a study by IDC, companies are reaping 3.5 times returns on their AI investments, with a return on investment within 14 months on average. The report also highlights how generative AI is driving increased interest and investment in the technology. Additionally, the Oxford Internet Institute conducted a study that found AI skills and knowledge can increase a worker's salary by up to 40%. The study examined over 1,000 skills in 25,000 workers, showing the positive impact of AI-related knowledge on potential salaries.
Three things to know today
00:00 The AI Gold Rush Pays: Companies Reap 3.5x Returns & Salaries Surge by 40%
04:50 Federal AI Blueprint Draws Industry Eyeballs as OMB Solicits Public Wisdom
07:58 Watermarking, the AI Apocalypse, & Adult Content Leveraging AI
Advertiser: https://movebot.io/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Support the show on Patreon: https://patreon.com/mspradio/
Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Welcome to the new series of the Big Questions podcast, where we ask Oxford scientists to shed light on everyday questions that you really want to know the answer to. Remember those photos or videos online that don't look quite right? Perhaps you've heard a celebrity's voice somewhere unexpected? In this episode, we chat to computational social scientist Dr Bernie Hogan from the Oxford Internet Institute about deepfakes: media synthetically generated by technology to capture someone's likeness. As AI and machine learning technology develop rapidly, how can we regulate the creation of deepfakes to know what is real? Tune in to find out!
On this episode, host Sima Vasa is joined by Dr. Loubna Bouarfa, CEO and Founder of OKRA.ai and Member of the Board of Advisors of the Oxford Internet Institute, who shares her unique perspective on the ethical questions around the increasing capabilities of AI technology. Sima and Dr. Bouarfa discuss the following: - How different business models can leverage new technologies to either improve or harm society. - Why transparency in AI development and implementation is essential to maintaining consumer protections. - How aligning the incentives of regulatory bodies and technology developers can create natural safeguards against harmful business models. - The neutral nature of technology and the potential it holds for both positive and negative implementation. Dr. Bouarfa's work puts her on the frontier of technological development and gives her a unique insight into the future of data acquisition and consumer privacy as new AI tools are rolled out. - OKRA.ai - Oxford Internet Institute - Sima Vasa - Infinity Squared - Infinity Squared - LinkedIn Thanks for listening to the Data Gurus podcast, brought to you by Infinity Squared. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show, and be sure to subscribe so you never miss another insightful conversation. #Analytics #MA #Data #Strategy #Innovation #Acquisitions
FULL EPISODE available at: patreon.com/Macrodose
Our guests today are Anjali Krishan and Kavita Dattani. They've joined us today on behalf of Fairwork, an action-research project coordinated by the Oxford Internet Institute, to discuss the new report they've co-authored, entitled ‘Beyond techno-solutionism: gender and platform work'. The report is based on research into working conditions on online platforms spanning four years, 38 countries, over 190 unique platforms, and interviews with more than 5,000 platform workers. You can access the full report here: https://fair.work/wp-content/uploads/sites/17/2023/07/Fairwork-Gender-Report-2023-FINAL-red.pdf
Hello! Do you ever glance at your phone to check an email, then look up to discover that two hours have gone by? In a world where information is abundant, our attention is hot property. What exactly do we know about how our attention is drawn and held by the environment and technology around us? We talk to Professor Polly Dalton, who researches the psychology of attentional capture, and to tech ethicist James Williams about why the issue of the attention economy cuts deep. Does it have the potential to change the course of our lives and restrict our freedom? We try to break free from the shackles of Silicon Valley with Tom Hodgkinson, editor of The Idler, who tells us about his strategy to resist the demands on our attention and time without even having to resort to a brick phone.
Plus: Ed's in denial about his National Portrait Gallery debut…
Guests
James Williams, technology ethicist at the Oxford Internet Institute and the author of Stand Out of Our Light: Freedom and Resistance in the Attention Economy (@WilliamsJames_)
Polly Dalton, professor of cognitive psychology, Royal Holloway, University of London (@PollyDalton)
Tom Hodgkinson, editor of the Idler magazine and author of How to Be Idle (@idler)
More information
Buy or read a copy of James' book (open access) here
Watch a video of James talking about the attention economy (Youtube)
Subscribe to the Idler or purchase Tom's book How to be Idle
Is modern life ruining our powers of concentration? (The Guardian, 2023)
Ed mentions The Shallows by Nicholas Carr
Hosted on Acast. See acast.com/privacy for more information.
Luciano Floridi is the Oxford Internet Institute's Professor of Philosophy and Ethics of Information at the University of Oxford, Distinguished Research Fellow of the Uehiro Centre for Practical Ethics of the Faculty of Philosophy, and Research Associate and Fellow in Information Policy of the Department of Computer Science. Beginning in the fall, he will be the Founding Director of the Digital Ethics Center and Professor of Cognitive Science at Yale University. For much of the past twenty-five years Luciano has been developing the philosophy of information as its own free-standing discipline within the philosophical world. In this episode he and Robinson delve into just one small corner of the subject. They talk about Luciano's view of artificial intelligence as a novel form of agency before turning to some future applications of AI and the novel ethical considerations its use raises in the modern world. Luciano's Website: https://www.philosophyofinformation.net Luciano's Twitter: https://twitter.com/Floridi Information: A Very Short Introduction: https://a.co/d/5Jgq1wS OUTLINE 00:00 In This Episode… 01:04 Introduction 04:58 Luciano's Tetralogy 09:27 Artificial Intelligence as a New Form of Agency 26:49 Future Applications of AI 32:50 Ethics and Levels of Explanation 46:09 The Ethics of AI Robinson's Website: http://robinsonerhardt.com Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University. Join him in conversations with philosophers, scientists, weightlifters, artists, and everyone in-between. --- Support this podcast: https://podcasters.spotify.com/pod/show/robinson-erhardt/support
On this Tuesday's (2nd) edition of Meio-Dia em Brasília, Kiss Vasconcelos interviews Jonas Valente about the PL das Fake News (Brazil's "Fake News" bill). Valente is a researcher at the Oxford Internet Institute and at UnB's Laboratório de Políticas de Comunicação, and he details the main points of resistance to the bill.
Subscribe and receive the newsletter: https://bit.ly/2Gl9AdL
Find more news on our websites: https://oantagonista.uol.com.br/ https://crusoe.uol.com.br/
Follow us on social media: https://www.fb.com/oantagonista https://www.twitter.com/o_antagonista https://www.instagram.com/o_antagonista https://www.tiktok.com/@oantagonista_oficial
On YouTube, leave a like and subscribe to the channel: https://www.youtube.com/c/OAntagonista
Today, we speak to Vili Lehdonvirta, Professor of Economic Sociology and Digital Social Research at the Oxford Internet Institute, University of Oxford. Professor Lehdonvirta is a social scientist whose research focuses on ways digital technologies are reshaping the organization of economies, including their associated social effects. He is also the author of two books, Cloud Empires and Virtual Economies, which provide readers with an in-depth look into the power that crypto platforms hold and a well-rounded characterization of digital markets. In this episode, we talk about the ideological underpinnings of crypto and the role of governance in making cryptocurrencies possible. We discuss the role of states in scaling markets, how states and platform companies differ, the impacts of smart contracts on governance issues, and how control and power are centralized within crypto markets, as well as the social implications of blockchain technology. Listeners will also learn about past controversies within the crypto space, why people are still needed within crypto, and the blockchain paradox, plus more!
Key Points From This Episode:
We start by learning about John Perry Barlow's vision for cyberspace. [0:05:06]
Find out about the role that states play in markets. [0:07:03]
How markets function at scale if the state is not involved. [0:07:55]
Professor Lehdonvirta's view on whether governance may precede markets. [0:08:59]
The role massive platform companies play in today's economy. [0:09:44]
Ways in which states and platform companies differ. [0:10:42]
Why he thinks public blockchain technology has garnered so much attention. [0:11:27]
An explanation of the influence John Perry Barlow's vision had on cryptocurrencies. [0:13:04]
Learn what a Kleroterion is and the role it played in Athenian democracy. [0:14:01]
Professor Lehdonvirta shares what it means to ‘trust in the code.' [0:17:05]
An outline of the new properties smart contracts created. [0:18:59]
Social and economic implications of unstoppable censorship-resistant contracts. [0:21:08]
A brief rundown of how impactful smart contracts have been. [0:22:27]
How the trustless and unstoppable claims of cryptocurrencies and DAOs were affected by the DAO story. [0:24:20]
Whether the Bitcoin block-size conflict affected the perception of crypto as a trustless system. [0:28:17]
We find out the current size of the Bitcoin development team. [0:31:05]
Other examples of human discretion affecting the direction of Bitcoin. [0:31:46]
Professor Lehdonvirta explains the strategies used to preserve trustlessness after the human interventions took place. [0:35:16]
Details about an important strategy: the appeal to technical expertise. [0:38:53]
Find out if the ability to fork blockchain networks restores trustless claims of crypto. [0:39:42]
Whether users of a blockchain network, who are not miners, can influence crypto markets. [0:45:02]
Professor Lehdonvirta's opinion on who has the most control over cryptocurrency networks. [0:49:35]
Hear what aspect of Athenian democracy Nakamoto failed to replicate. [0:54:26]
We learn what the blockchain paradox is (also known as the governance paradox). [0:56:50]
Find out if Professor Lehdonvirta thinks technology changes the fundamental aspects which shape how societies are organized. [01:00:11]
Find out if blockchain has eliminated the need for nation-states. [01:02:11]
What cryptocurrencies have accomplished since their inception. [01:03:40]
