Podcasts about AI impacts

  • 89 PODCASTS
  • 141 EPISODES
  • 31m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Mar 10, 2026 LATEST

POPULARITY (2019–2026)


Best podcasts about AI impacts

Latest podcast episodes about AI impacts

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 730: Is AI creating a great recession for white collar workers? Inside Anthropic's labor report

Mar 10, 2026 · 28:10


Logistics Matters with DC VELOCITY
Guest: Per Hong of Kearney on tariffs, Iran, and more; The “pandemic echo” affecting parcel fleets; How Agentic AI impacts hiring

Feb 27, 2026 · 23:38


Our guest on this week's episode is Per Hong, senior partner and global lead of Kearney Foresight. By now we have all heard that the emergency tariffs placed earlier in the year were ruled illegal last week by the Supreme Court, but now we have new tariffs – and the potential of war with Iran. There is lots going on right now that could have major impacts on our supply chains. Our guest helps us to unravel it all and offers advice on how supply chain leaders should prepare for whatever is next.

Have you ever heard of a pandemic echo? Apparently that is what is happening right now within the parcel delivery fleet sector. Ben Ames helps us to understand what it means and why it is affecting parcel fleets.

More than half (55%) of supply chain leaders expect that advancements in agentic AI systems will reduce the need to hire for entry-level positions, and 51% say the technology will drive a shift to overall workforce reductions. That's according to a survey from business and technology insights company Gartner, released this week. We look at the numbers from this report and what they may mean for hiring in supply chain jobs going forward.

Supply Chain Xchange also offers a podcast series called Supply Chain in the Fast Lane, co-produced with the Council of Supply Chain Management Professionals. The latest series, Top Threats to Our Supply Chains, is now available. It covers topics including geopolitical risks, economic instability, cybersecurity risks, threats to energy and electric grids, supplier risks, and transportation disruptions. Go to your favorite podcast platform to subscribe and to listen to past and future episodes. The podcast is also available at www.thescxchange.com.

Articles and resources mentioned in this episode:
Kearney
Fleets adjust focus from efficiency to resilience, Geotab says
Report: Agentic AI to reduce entry-level hiring needs
Visit DC Velocity
Visit Supply Chain Xchange
Listen to CSCMP and Supply Chain Xchange's Supply Chain in the Fast Lane podcast
Send feedback about this podcast to podcast@agilebme.com

This podcast episode is sponsored by: Werner

Other links:
About DC VELOCITY
Subscribe to DC VELOCITY
Sign up for our FREE newsletters
Advertise with DC VELOCITY

Paul's Security Weekly
Internal Audit Focal Points for 2026 as AI Impacts Conventional Cybersecurity - Tim Lietz - BSW #431

Jan 21, 2026 · 54:47


Key emerging risks include cybersecurity (41%) and Generative AI (Gen AI) (35%), both of which present challenges in skill development and retention. The growing reliance on external providers reflects these gaps. In two years, strategic risk has fallen 10% as technological advancements have shifted auditors' attention away from strategy. So what are the top concerns? Tim Lietz, National Practice Leader Internal Audit Risk & Compliance at Jefferson Wells, joins Business Security Weekly to discuss the shifting priorities for internal audit leaders, with technology, business transformation and digitization remaining central amid rising economic uncertainty. This reflects the broader economic challenges and uncertainties that organizations are facing in the current environment. Tim will discuss the need for enhanced skills in AI, cybersecurity and digital transformation, and why Internal Audit is increasingly seen as a strategic partner in navigating transformation within their organizations. Segment Resources: - https://www.jeffersonwells.com/en/internal-audit-report-2025 In the leadership and communications segment, Conventional Cybersecurity Won't Protect Your AI, Will Cybersecurity Budgets Increase in 2026?, To Execute a Unified Strategy, Leaders Need to Shadow Each Other, and more! Visit https://www.securityweekly.com/bsw for all the latest episodes! Show Notes: https://securityweekly.com/bsw-431

Paul's Security Weekly TV
Internal Audit Focal Points for 2026 as AI Impacts Conventional Cybersecurity - Tim Lietz - BSW #431

Jan 21, 2026 · 54:47

Business Security Weekly (Audio)
Internal Audit Focal Points for 2026 as AI Impacts Conventional Cybersecurity - Tim Lietz - BSW #431

Jan 21, 2026 · 54:47

Business Security Weekly (Video)
Internal Audit Focal Points for 2026 as AI Impacts Conventional Cybersecurity - Tim Lietz - BSW #431

Jan 21, 2026 · 54:47

KPCW Cool Science Radio
AI impacts on markets, investing and global competition

Jan 8, 2026 · 24:43


Author and technology executive Fred Voccola explains why AI-first organizations are already seeing dramatic productivity gains, and why companies that fail to adapt may not survive the next decade.

All CNET Video Podcasts (HD)
Tesla Vehicles Take a Backseat to Robots, How Coke Built Its Holiday Ad With AI, and AI Impacts the Law | Tech Today

Nov 14, 2025


Owen Poole covers the biggest tech stories of the day, including: Elon Musk teases a release date for the long-awaited Tesla Roadster, and why it might not matter; how Coca-Cola's annual "The Holidays are Coming" ad campaign was built with over 70,000 AI-generated images; and judges are dealing with legal filings with hallucinated citations, thanks to some lawyers using AI.

CNET News (HD)
Tesla Vehicles Take a Backseat to Robots, How Coke Built Its Holiday Ad With AI, and AI Impacts the Law | Tech Today

Nov 14, 2025

The Deep Dive Radio Show and Nick's Nerd News
Companies Should Tell Us How AI Impacts Jobs

Nov 8, 2025 · 4:54


Companies Should Tell Us How AI Impacts Jobs by Nick Espinosa, Chief Security Fanatic

Grid Forward Chats
AI Impacts for a Resilient Grid with Duke Energy, NextEra and PG&E

Nov 5, 2025 · 47:44


On the first day of the GridFWD 2025 event, leadership from some of the nation's largest and most AI-engaged utilities joined a panel to tell our audience about their acceleration of AI use cases. Jason Glickman (EVP Engineering, Planning and Strategy, PG&E), Bonnie Titone (Senior VP and Chief Administrative Officer, Duke Energy) and Peter Skantze (Senior VP of Infrastructure Development, NextEra Energy Resources) discuss how utilities and infrastructure developers can leverage AI for their own use and accommodate the added demand from large load customers, while ensuring that their systems remain resilient.

Engelberg Center Live!
Conspicuous Consumers: How AI Impacts Consumption

Oct 30, 2025 · 79:58


Mala Chatterjee, Columbia Law School; Deven Desai, Georgia Tech Scheller College of Business; Aaron Perzanowski, University of Michigan Law School; Jason Schultz, Engelberg Center on Innovation Law & Policy, NYU School of Law (moderator)

TD Ameritrade Network
How AI Impacts Banks, Jobs Market

Oct 15, 2025 · 6:19


Nigam Arora expected net interest income to go higher in this week's bank earnings, and so far the financials have been in-line with his expectations. He does say the only thing that concerns him is the Fed's stance on a weakening labor market, which historically points to a pullback in the overall economy. Nigam weighs in on job cuts in banks, saying AI will be a factor to increase productivity and could be the "fly in the ointment" for that segment of the labor market.

======== Schwab Network ========
Empowering every investor and trader, every market day.
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire TV app - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X – /schwabnetwork
Follow us on Facebook – /schwabnetwork
Follow us on LinkedIn - /schwab-network
About Schwab Network - https://schwabnetwork.com/about

It's Been a Minute with Sam Sanders
How AI impacts the environment (and your energy bill)

Sep 29, 2025 · 20:00


AI is the future, but how is its infrastructure impacting your air, water, and utility bills today? You asked, and Brittany delivered. Many of you wrote in asking about artificial intelligence's environmental impact. Brittany and Evan Halper, a business and energy reporter for The Washington Post, answer your questions and so much more. Like, is AI causing your energy bills to go up? Are tech companies tricking communities into building data centers? And how do you ethically use AI when you know it impacts nature? This is the final episode in our AI + U series. You can check out past episodes (Can you trust the information AI gives you? Or How AI slop is clogging your brain) further down in this feed. Follow Brittany Luse on Instagram: @bmluse. For handpicked podcast recommendations every week, subscribe to NPR's Pod Club newsletter at npr.org/podclub. Learn more about sponsor message choices: podcastchoices.com/adchoices. NPR Privacy Policy

The Marketing Companion
How AI impacts the sales process

Sep 8, 2025 · 36:25


One of the most urgent and interesting questions in business today: How does AI affect the sales process? From product discovery to referrals and recommendations, AI is making its way into customers' hearts and minds. Whether B2B or B2C, the implications are enormous. Marketing legend Sandy Carter joins the Marketing Companion to explore these new commercial realities. What is the new role of brand, trust, and human relationships in the commercial process?

ThinkEnergy
Summer Rewind: How AI impacts energy systems

Aug 11, 2025 · 55:16


Summer rewind: Greg Lindsay is an urban tech expert and a Senior Fellow at MIT. He's also a two-time Jeopardy champion and the only human to go undefeated against IBM's Watson. Greg joins thinkenergy to talk about how artificial intelligence (AI) is reshaping how we manage, consume, and produce energy—from personal devices to provincial grids, and from its rapid growth to the rising energy demand from AI itself. Listen in to learn how AI impacts our energy systems and what it means individually and industry-wide.

Related links:
● Greg Lindsay website: https://greglindsay.org/
● Greg Lindsay on LinkedIn: https://www.linkedin.com/in/greg-lindsay-8b16952/
● International Energy Agency (IEA): https://www.iea.org/
● Trevor Freeman on LinkedIn: https://www.linkedin.com/in/trevor-freeman-p-eng-cem-leed-ap-8b612114/
● Hydro Ottawa: https://hydroottawa.com/en

To subscribe using Apple Podcasts: https://podcasts.apple.com/us/podcast/thinkenergy/id1465129405
To subscribe using Spotify: https://open.spotify.com/show/7wFz7rdR8Gq3f2WOafjxpl
To subscribe on Libsyn: http://thinkenergy.libsyn.com/

Subscribe so you don't miss a video: https://www.youtube.com/user/hydroottawalimited
Follow along on Instagram: https://www.instagram.com/hydroottawa
Stay in the know on Facebook: https://www.facebook.com/HydroOttawa
Keep up with the posts on X: https://twitter.com/thinkenergypod

Transcript:

Trevor Freeman  00:00
Hi everyone. Well, summer is here, and the think energy team is stepping back a bit to recharge and plan out some content for the next season. We hope all of you get some much needed downtime as well. But we aren't planning on leaving you hanging over the next few months: we will be re-releasing some of our favorite episodes from the past year that we think really highlight innovation, sustainability and community.
These episodes highlight the changing nature of how we use and manage energy, and the investments needed to expand, modernize and strengthen our grid in response to that. All of this is driven by people and our changing needs and relationship to energy as we move forward into a cleaner, more electrified future: the energy transition, as we talk about many times on this show. Thanks so much for listening, and we'll be back with all new content in September. Until then, happy listening.

Trevor Freeman  00:55
Welcome to thinkenergy, a podcast that dives into the fast-changing world of energy through conversations with industry leaders, innovators and people on the front lines of the energy transition. Join me, Trevor Freeman, as I explore the traditional, unconventional and up-and-coming facets of the energy industry. If you have any thoughts, feedback or ideas for topics we should cover, please reach out to us at thinkenergy@hydroottawa.com.

Hi everyone. Welcome back. Artificial intelligence, or AI, is a term that you're likely seeing and hearing everywhere today, and with good reason. The effectiveness and efficiency of today's AI, along with the ever-increasing applications and use cases, mean that in just the past few years, AI went from being a little bit fringe, maybe a little bit theoretical, to very real and likely touching everyone's day-to-day lives in ways that we don't even notice. And we're just at the beginning of what looks to be a wave of many different ways that AI will shape and influence our society and our lives in the years to come. And the world of energy is no different. AI has the potential to change how we manage energy at all levels, from our individual devices and homes and businesses all the way up to our grids at the local, provincial and even national and international levels.
At the same time, AI is also a massive consumer of energy, and the proliferation of AI data centers is putting pressure on utilities for more and more power at an unprecedented pace. But before we dive into all that, I also think it will be helpful to define what AI is. After all, the term isn't new. Like me, many of our listeners may have grown up hearing about Skynet from Terminator, or HAL from 2001: A Space Odyssey, but those malignant, almost sentient versions of AI aren't really what we're talking about here today. And to help shed some light on both what AI is as well as what it can do and how it might influence the world of energy, my guest today is Greg Lindsay. To put it in technical jargon, Greg's bio is super neat, so I do want to take time to run through it properly. Greg is a non-resident Senior Fellow of MIT's Future Urban Collectives Lab, Arizona State University's Threatcasting Lab, and the Atlantic Council's Scowcroft Center for Strategy and Security. Most recently, he was a 2022-2023 urban tech fellow at Cornell Tech's Jacobs Institute, where he explored the implications of AI and augmented reality at an urban scale. Previously, he was an urbanist in residence, which is a pretty cool title, at BMW Mini's urban tech accelerator, Urban-X, as well as the director of Applied Research at Montreal's NewCities and Founding Director of Strategy at its mobility-focused offshoot, CoMotion. He's advised such firms as Intel, Samsung, Audi, Hyundai, IKEA and Starbucks, along with numerous government entities such as 10 Downing Street, the US Department of Energy and NATO. And finally, and maybe coolest of all, Greg is also a two-time Jeopardy champion and the only human to go undefeated against IBM's Watson. So on that note, Greg Lindsay, welcome to the show.

Greg Lindsay  04:14
Great to be here. Thanks for having me.
Trevor,

Trevor Freeman  04:16
So Greg, we're here to talk about AI and the impacts that AI is going to have on energy, but AI is a bit of one of those buzzwords that we hear out there in a number of different spheres today. So let's start by setting the stage of what exactly we're talking about. So what do we mean when we say AI or artificial intelligence?

Greg Lindsay  04:37
Well, I'd say the first thing to keep in mind is that it is neither artificial nor intelligence. It's actually composites of many human hands making it. And of course, it's not truly intelligent either. I think there's at least two definitions for the layman's purposes. One is statistical machine learning. You know, that is the previous generation of AI, we could say: doing deep, deep statistical analysis, looking for patterns, fitting to patterns, doing prediction. There's a great book, actually, by some U of T professors, called Prediction Machines, which was a great way of thinking about machine learning in the sense of being able to do large-scale prediction at scale. And that's how I imagine Hydro Ottawa and others are using this, to model out network efficiencies and predictive maintenance and all these great uses. And then the newer, trendier version, of course, is large language models, your Claudes, your ChatGPTs, your others, which are based on transformer models, which is a whole series of work that many Canadians worked on, including Geoffrey Hinton and others. And this is what has produced the seemingly magical abilities to produce text and images on demand and large-scale analysis. And that is the real power-hungry beast that we think of as AI today.

Trevor Freeman  05:42
Right! So different types of AI. I just want to pick those apart a little bit. When you say machine learning, it's kind of being able to repetitively look at something or a set of data over and over and over again.
And because it's a computer, it can do it, you know, thousands or millions of times a second, and learn what, learn how to make decisions based on that. Is that fair to say?

Greg Lindsay  06:06
That's fair to say. And the thing about that is, is like you can train it on an output that you already know. Large language models are just vomiting up large parts of pattern recognition, which, again, can feel like magic because of our own human brains doing it. But yeah, machine learning, you can, you know, you can train it to achieve outcomes. You can overfit the models, where it's trained too much on the past, but, yeah, it's a large-scale probabilistic prediction of things, which makes it so powerful for certain uses.

Trevor Freeman  06:26
Yeah, one of the neatest explanations or examples I've seen is, you know, you've got these language models where it seems like this AI, whether it's ChatGPT or whatever, is writing really well, like, you know, it's improving our writing. It's making things sound better. And it seems like it's got a brain behind it, but really, what it's doing is it's going out there saying, what have millions or billions of other people written like this? And how can I take the best things of that? And it can just do that really quickly, and it's learned that model. So that's super helpful to understand what we're talking about here. So obviously, in your work, you look at the impact of AI on a number of different aspects of our world, our society. What we're talking about here today is particularly the impact of AI when it comes to energy. And I'd like to kind of bucketize our conversation a little bit today, and the first area I want to look at is, what will AI do when it comes to energy for the average Canadian, let's say? So in my home, in my business, how I move around? So I'll start with that. It's kind of a high level conversation.
Let's start talking about the different ways that AI will impact, you know, our average listener here?

Greg Lindsay  07:41
Um, yeah, I mean, we can get into a discussion about what it means for the average Canadian, and then also, of course, what it means for Canada in the world as well, because I just got back from South by Southwest in Austin, and, you know, for the second, third year in a row, AI was on everyone's lips. But really, it's the energy that is the bottleneck. It's the forcing factor. Everyone talked about it, the fact that all the data centers, we can get into that, are going to be built in the direction of energy. So, so, yeah, energy holds the key to the puzzle there. But, um, you know, from the average Canadian's standpoint, I mean, it's a question of, like, how will these tools actually play out, you know, inside of the companies that are using this, right? And that was a whole other discussion too. It's like, okay, we've been playing around with these tools for two, three years now, what do they actually use to deliver value of your large language model? So I've been saying this for 10 years. If you look at the older stuff, you could start with, like, smart thermostats, and look at the potential savings of this, of basically using machine learning to optimize, you know, grid-optimize patterns of usage, understanding, you know, the ebbs and flows of the grid, and being able to, you know, basically send instructions back and forth. So you know, there's stats that, basically, you know, you could save 10 to 25% on electricity bills. You know, based on this, you could reduce your heating bills by 10 to 15%. Again, it's basically using this at very large scales, at the scale of Hydro Ottawa or bigger, to understand this sort of pattern usage. But even then, like, understanding how weather forecasts change, and pulling that data back in to basically make fine-tuning adjustments to the thermostats and things like that.
So that's one that stands out. And then, you know, we can think about longer term. I mean, yeah, lots has been done on imagining, like, electric mobility, of course, huge in Canada, and what that's done to sort of change the overall energy mix. Virtual power plants: this is something that I've studied, and we've been writing about at Fast Company and beyond for 20 years, imagining not just, you know, the ability to basically, you know, feed renewable electricity back into the grid from people's solar or from whatever sources they have there, but the ability of utilities to basically go in and fine-tune, to have that sort of demand shaping as well. And then I think the most interesting stuff, at least in demos, is also blockchain, which has had many theoretical uses, and I've yet to see a real one. But one of the best theoretical ones was being able to create neighborhood-scale utilities. Basically, my cul-de-sac could have one, and we could trade clean electrons off of our solar panels through our batteries and home-scale batteries, using blockchain to basically balance this out. Yeah, so there's lots of potential, but yeah, it comes back to the notion of people want cheaper utility bills. I did this piece 10 years ago for the Atlantic Council on this; we looked at a multi-country survey, and the only reason anybody wanted a smart home, which they just were completely skeptical about, was to get those cheaper utility bills. So people pay for that.

Trevor Freeman  10:19
I think it's an important thing to remember, obviously, especially for, like, the nerds like me, where part of my driver is, I like that cool new tech. I like that thing that I can play with and see my data. But for most people, no matter what we're talking about here, when it comes to that next technology, the goal is make my life a little bit easier, give me more time or whatever, and make things cheaper.
And I think especially in the energy space, people aren't putting solar panels on their roof because it looks great. And, yeah, maybe people do think it looks great, but they're putting it up there because they want cheaper electricity. And it's going to be the same when it comes to batteries. You know, there's that add-on of resiliency and reliability, but at the end of the day, yeah, I want my bill to be cheaper. And what I'm hearing from you is some of the things we've already seen, like smart thermostats, get better as AI gets better. Is that fair to say?

Greg Lindsay  11:12
Well, yeah, on the machine learning side, you know, you get ever larger data points. This is why data is the coin of the realm. This is why there's a race to collect data on everything. It's why every business model is data collection and everything. Because, yes, not only can they get better, but of course, you know, you compile enough and eventually start finding statistical inferences you never meant to look for. And this is why I've been involved, just as a side note, for example, with cities that have tried to implement their own data collection of electric scooters and eventually electric vehicles, so they could understand these kinds of patterns. It's really the key to anything. And so it's that efficiency throughput which raises some really interesting philosophical questions, particularly about AI. Like, this is the whole discussion on DeepSeek. Like, if you make the models more efficient, do you have a Jevons paradox, which is the paradox of, like, the more energy you save through efficiency, the more you consume because you've made it cheaper? So what does this mean? That, you know, Canadian energy consumption is likely to go up the cleaner and cheaper the electrons get. It's one of those bedeviling sort of functions.

Trevor Freeman  12:06
Yeah, interesting. That's definitely an interesting way of looking at it. And you referenced this earlier, and I will talk about this.
But at the macro level, the amount of energy needed for these, you know, AI data centers in order to do all this stuff is, you know, we're seeing that explode.

Greg Lindsay  12:22
Yeah, I don't have the Canadian statistics at my fingertips, but I brought this up at Fast Company: like, you know, the IEA, the International Energy Agency, you know, reported a 4.3% growth in the global electricity grid last year, and it's gonna be 4% this year. That does not sound like much. That is the equivalent of Japan. We're adding a Japan every year to the grid for at least the next two to three years. Wow. And that, you know, that's global South, air conditioning and other needs here too, but the data centers on top are like the tip of the spear. It's changed all this consumption behavior, where now we're seeing mothballed coal plants and new plants and Three Mile Island come back online, as this race for locking up electrons, for, you know, the race to build God, basically. The number of people in AI who think they're literally going to build godlike intelligences, they'll, they won't stop at any expense. And so they will buy as much energy as they can get.

Trevor Freeman  13:09
Yeah, well, we'll get to that kind of grid side of things in a minute. Let's stay at the home first. So when I look at my house, we talked about smart thermostats. We're seeing more and more automation when it comes to our homes. You know, we can program our lights and our door locks and all this kind of stuff. What does AI do in order to make sure that stuff is contributing to efficiency? So I want to do all those fun things, but use the least amount of energy possible.

Greg Lindsay  13:38
Well, you know, I mean, there's, again, there's various metrics there to basically, sort of, you know, program your lights. And, you know, Nest is, you know, Google.
Nest is an example of this, in terms of basically learning your ebb and flow and then figuring out how to optimize it over the course of the day. So you can do that. At the home level, we've seen not only the growth in solar panels but also in home battery integration. I was looking it up: the Tesla Powerwall was doing just great in Canada until the last couple of months, I assume. And it's been heartening to see this embrace of home energy integration, being able to level out peak flow off the grid, right? Being able, at moments of peak demand, to draw on your own local resources and reduce that overall strain. So there's been interesting stuff there. But I want to focus for a moment on thinking about new uses. Because, going back to how AI will influence the home and automation, Jensen Huang of Nvidia has talked about how this will be the year of robotics. Google Gemini just applied their models to robotics. There are startups like Figure; there's, again, Tesla with their Optimus. And, yeah, there's a whole strain of thought that we're about to see home robotics, perhaps a dream from, like, the '50s, that very Disney World-esque, Epcot Center, Jetsons idea of having home robots doing work. You can see concept videos of a Figure robot doing the actual vacuuming. I mean, we invented Roombas for this. But also, I've done a lot of our own thinking around electric delivery vehicles. We could talk a lot about drones. We could talk a lot about the little robots that deliver meals on the sidewalk. There's a lot of money in business models about increasing access, and people maybe needing to move less, to drive less and do all these trips, and bringing it to them instead. And that's a form of home automation, and that's all batteries.
That is all stuff off the grid too. So AI is what enables those things, these things that can think and move and fly and do stuff and perform services on your behalf, and so we might find a huge new source of demand from that as well.   Trevor Freeman  15:29 Yeah, I hadn't really thought about the idea that all these sort of conveniences, and being able to summon them to our homes, cause us to move around less, which also impacts transportation, which is another area I kind of want to get to. And I know you've talked a little bit about e-mobility, so where do you see that going? And then, how does AI accelerate that transition, or accelerate things happening in that space?   Greg Lindsay  15:56 Yeah, I mean, obviously the EV revolution's here, and Canada is, like, one of the epicenters, Canada and Norway, you know, countries that still have the vehicle rebates and things. I'm here in Montreal; I think we've got, like, you know, 30-odd percent of sales there, and we've got our 2035 mandate. So, yeah, you see this push, obviously, to harness all of Canada's clean, mostly hydro electricity to do this, and reduce its dependence on fossil fuels, whether for climate change politics reasons or just, you know, variable energy prices. So all of that matters. But I think the key to the electric mobility revolution, again, is how it's going to merge with AI. It's not going to just be the autonomous, self-driving car, which is sort of like the horseless carriage of autonomy. It's going to be all this other stuff, you know. My friend Dan Hill was in China, and he was thinking about electric scooters. And I mentioned this to Hydro Ottawa: the electric scooter is one of the leading ways we've taken internal combustion engine vehicles offline across the world, mostly in China, and put people on clean electric motors.
What happens when you take those and you make them autonomous, and you do it with, like, DeepSeek and some cameras, and you sort of weld it all together? You could have a world with a lot more stuff in motion, and not just this world where we have to drive as much. And that, to me, is really exciting, because that changes urban patterns, development patterns, changes how you move around life, those kinds of things as well. That might be a little farther out, but, yeah, this big push to build out domestic battery industries, to build charging points and that sort of infrastructure, I think it's going to go in that direction, but it doesn't look anything like, you know, a sedan or an SUV that just happens to be electric.   Trevor Freeman  17:33 I think that's it: the step change is changing the drivetrain of the existing vehicles we have, you know, from internal combustion to a battery. The exponential change is exactly what you're saying. It's rethinking this.   Greg Lindsay  17:47 Yeah, Ramez Naam and others have pointed this out. I mean, it's really funny to see this pushback on EVs. I love a good roar of an internal combustion engine myself, but, like, Ramez Naam, who is an energy analyst, has pointed out that EVs were more cost competitive with ICE cars back in 2018; that's, like, nearly a decade ago. And yeah, the efficiency of electric motors, particularly regenerative braking and everything, just blows the cost curves of ICE away, to the point that they will become the equivalent of keeping a thoroughbred around your house kind of thing. Yeah, so it's just that overall efficiency of the drivetrain. And that's, to me, the interesting thing about both electric motors and, again, autonomy: those are general purpose technologies.
They get cheaper and smaller as they evolve under Moore's Law and other various laws, and so they get applied to more and more stuff.   Trevor Freeman  18:32 Yeah. And then once we figure that out, and we're kind of already there, or close to it, it opens the door to those other things you're talking about. Of, well, does everybody need to have that car in their driveway? Are we rethinking how we're doing transportation in general? And do we need a delivery truck, or can it be a delivery scooter? What does that look like?   Greg Lindsay  18:54 Well, we've had a lot of those discussions for a long time, particularly in the mobility space, right? Like ride hailing, you know. That was always the big pitch of an Uber: your car's parked in your driveway, like, 94% of the time, so what happens if you no longer need to own one? Well, we've had 15 years of Uber and these kinds of services, and we still have as many cars. People just added these services on top. It's additive. And I raise this notion of more and more: more options, more availability, more access. Because the same thing seems to be going on with energy now too. If listeners have been following along, like the conversation in Houston a week or two ago at CERAWeek, it's the whole notion of energy realism. And, you know, there's the new book out, More and More and More, which is all about the fact that we've never had an energy transition. We just kept piling up. The world burned more biomass last year than it did in 1900; it burned more coal last year than it did at the peak of coal. These ages don't really end. They just become this sort of strata as we keep piling energy up on top. And, you know, I'm trying to sound the alarm that we won't have an energy transition, and what that means for climate change.
But it's a similar thing: this rebound effect, the Jevons paradox, named after William Stanley Jevons, who in his book The Coal Question noted that England was going to need more and more coal. So it's a sobering thought. But it's a glass half full, half empty in many ways, because the half full is increasing technological options, increasing changes in lifestyle; you can live the various ways you want. But, yeah, I don't know if any of it ever really goes away. We just get more and more stuff.   Trevor Freeman  20:22 Exactly, well. And, you know, to hear you talk about the robotics side of things, looking at the home, yeah, more, definitely more. Okay, so we talked about home automation. We've talked about transportation, how we get around. What about energy management? And I think about this at the, well, we'll talk about the utility side again in a little bit. But, you know, at my house, for my own personal use in my life, what is the role of, like, sort of machine learning and AI when it comes to just helping me manage my own energy better and make better decisions when it comes to energy?   Greg Lindsay  20:57 Yeah, I mean, this is where it comes in again. Right? It's the idea of creating this set of tools in your home, whether it's solar panels or batteries or, you know, two-way, bidirectional flow to the grid, however it works. And people are given this option of savings, and perhaps other marketing messages there to curtail behavior. You know, I think the short answer to the question is: it's an app. People want an app that tells them, basically, how to increase the efficiency of their house, or how to do this.
And I should note that this has long been the insight when it comes to energy and the clean tech revolution. Amory Lovins has this great line, which I've always loved: people don't want energy, they want hot showers and cold beer. And how do you deliver those things through any combination of sticks and carrots? Hence, again, Powerwalls and other sort of AI-controlled batteries that basically smooth out and create the optimal flow of electrons into your house, whether that's drawn directly off the grid or coming out of your backup, which then recharges at the right time. I mean, the surveys show more than half of Canadians are interested in this stuff, but they don't really know it. I've got one stat here: 61% are interested in home energy tech, but only 27% understand how to optimize it. So people need, I think, perhaps more help in handing that over. And obviously what's exciting at the utility level is, again, you aggregate all that individual behavior together and you get models that let you model this out at both greater scale and ever more fine-grained granularity. So I think it's really interesting. You know, people have gamified it; I think I saw the Affordability Fund Trust tried to basically gamify AI energy apps, and it created various savings there. But a lot of this is going to be a combination of UX design and incentive design, and offering this to people, too: why you should want this. Money's one reason, but maybe there are others.
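The peak-smoothing behavior described above, draw on the battery at moments of peak demand and recharge off-peak, can be illustrated with a minimal rule-based dispatch sketch. This is a hypothetical heuristic, not any vendor's actual algorithm, and the threshold and rate numbers are made up:

```python
def peak_shave(load, capacity, threshold, max_rate):
    """Sketch of rule-based home-battery dispatch (hypothetical):
    discharge when load exceeds the threshold, recharge when below it.
    Assumes 1-hour steps, so kW and kWh are numerically interchangeable."""
    soc = capacity  # state of charge in kWh, start full
    grid = []       # resulting draw from the grid, per hour
    for kw in load:
        if kw > threshold and soc > 0:
            # Clip the peak using stored energy.
            discharge = min(kw - threshold, max_rate, soc)
            soc -= discharge
            grid.append(kw - discharge)
        elif kw < threshold and soc < capacity:
            # Refill the battery in the valleys.
            charge = min(threshold - kw, max_rate, capacity - soc)
            soc += charge
            grid.append(kw + charge)
        else:
            grid.append(kw)
    return grid

# Hypothetical hourly household load in kW; peaks of 6 and 7 get clipped to 4.
smoothed = peak_shave([1, 2, 6, 7, 3, 1], capacity=5, threshold=4, max_rate=3)
```

A real home-energy system would optimize against tariffs and forecasts rather than a fixed threshold, but the shape is the same: clip the peaks, refill in the valleys.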
Trevor Freeman  22:56 Yeah, and in kind of the utility sphere, we talk about how customers don't want all the data and then have to go make their own decisions. They want those decisions made for them. They want to say, look, I want you to tell me the best rate plan to be on. I want you to automatically switch me to the best rate plan when my consumption patterns change and my behavior patterns change. That doesn't exist today, but sort of that fast decision making that AI brings will let that become a reality sometime in the future.   Greg Lindsay  23:29 And also, in theory, this is where LLMs come into play. To me, what excites me the most about that is having, for the first time, a true natural language interface, being able to converse with an AI, hopefully not a chatbot. I think we're moving on from chatbots, but some sort of instantiation of an AI where you can ask: what plan should I be on? Can you tell me what my behavior is here? And actually having some sort of real language conversation with it. Not decision trees, not if-then statements, not chatbots.   Trevor Freeman  23:54 Yeah, absolutely. Okay, so we've kind of teased around this idea of looking at the utility level. Obviously, at Hydro Ottawa, and you referenced this just a minute ago, we look at all these individual cases, every home that has home automation or solar storage, and we want to aggregate that and understand what we can do to help manage the grid, help manage all these new energy needs, shift things around. So let's talk a little bit about the role that AI can play at the utility scale in helping us manage the grid.   Greg Lindsay  24:28 All right, well, there are a couple of ways to approach it. So one, of course, is to go back to smart meters, right?
I don't know how many Hydro Ottawa has, but I think BC Hydro has, like, 2 million of them. Sometimes they get politicized, because, again, this gets back to this question of just how much nanny state you want. But, you know, when you reach the millions, you're able to get that real-time usage, real-time understanding. And again, if you can do that grid management piece where you can then push back, it's a huge game changer. But yeah, BC Hydro is pulling in, I think I read, basically 200 million data points a day. So that's a lot to train various models on. And, you know, I don't know exactly the kind of savings they have, but you can imagine, whether it's them, or Toronto Hydro, or Hydro Ottawa and others, creating all these monitoring points. And again, this is the thing that bedevils me, by the way, just philosophically, about modern life: the notion of, I don't want you to be collecting data off me at all times, but look at what you can do if you do. It's that constant push-pull of some combination of privacy and agency, and then just the notion of statistics. But there you are. At the grid level, then, you can sort of do the same thing, where, I mean, predictive maintenance is the obvious one, right? I have been writing about this for large enterprise software companies for 20 years: building these data points, modeling out the lifetime of various important pieces of equipment, making sure you replace them before you have downtime and terrible things happen. I mean, as we're discussing this, look at poor Heathrow Airport. I am so glad I'm not flying today: an electrical substation blowing out takes the world's most important hub offline for days.
So that's where predictive maintenance comes in. And, yeah, modeling out energy flow to prevent grid outages, whether that's the ice storm here in Quebec a couple of years ago, what was that, April 2023 I think, coming up on two years now. Our last ice storm, not the big one, but the one where we had big downtime across the grid. Basically monitoring that. And then I think the other big one for AI is this notion of having some sort of decision support as well, in the sense of providing scenarios and modeling out the potential at scale. I don't know about this in a grid case, but the most interesting piece I wrote for Fast Company 20 years ago was an example of this, which was a fledgling air taxi startup. They were combining an agent-based model, using primitive AI to create simple rules for individual agents and build a model of how they would behave, and you can create much more complex models now; we could talk about agents. And then they married that to this kind of predictive maintenance and operations piece. And at that point, you could have a company that didn't exist yet but that could basically model itself in real time, every day in the life of what it is. You can create millions and millions of Monte Carlo simulations. And I think that's where perhaps both sides of AI come together: the large language models and agents, and then the predictive machine learning. And Hydro Ottawa or others could basically build this sort of deep time machine where you can model out all of these scenarios, millions and millions of years' worth, to understand how it flows, and the contingencies as well. And that's where it comes up: basically, something happens.
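The millions-of-Monte-Carlo-simulations idea can be sketched in a few lines. The per-feeder failure probabilities below are invented for illustration; a real grid model would draw them from weather forecasts and asset-condition data:

```python
import random

def simulate_storm_outages(failure_probs, n_runs=10_000, seed=42):
    """Toy Monte Carlo contingency model (illustrative): each run samples
    which feeders fail in a storm, and we estimate how often more than
    one fails at once, the scenario that strains restoration crews."""
    rng = random.Random(seed)  # seeded for repeatability
    multi_failure = 0
    for _ in range(n_runs):
        failures = sum(rng.random() < p for p in failure_probs)
        if failures > 1:
            multi_failure += 1
    return multi_failure / n_runs

# Hypothetical per-feeder failure probabilities for one storm scenario.
risk = simulate_storm_outages([0.10, 0.05, 0.20])
```

Scale the same loop across thousands of storm scenarios and you get something like the deep time machine of contingencies described here.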
And not only do you have a set of plans, you have an AI that has run a million sets of these plans and can imagine potential next steps, or where to deploy resources. And I think in general that's the most powerful use of this, going back to prediction machines: just being able to really model time in a way we've never had the capability to before. And you can probably imagine the uses better than I can.   Trevor Freeman  27:58 Oh man, it's super fascinating, and it's timely. We've gone through, in the last little while at Hydro Ottawa, an exercise of updating our playbook for emergencies. So when there are outages, what kind of outage is it? What are the trigger points to go from, you know, what we call a level one to a level two to a level three? But all of this is sort of people-hours going into that, thinking through these scenarios, and we've got a handful of them. And you're just kind of making me think, well, yeah, what if we were able to model that out? And you bring up this concept of agents. Let's tease into that a little bit: explain what you mean when you're talking about agents.   Greg Lindsay  28:36 Yeah, so agentic systems, as the term of art goes, are AI instantiations that have some level of autonomy. And the archetypal example of this is the Stanford Smallville experiment, where they took basically a dozen large language models and gave them an architecture where each could be given a little bit of backstory, ruminate on it, basically reflect, think, decide, and then act. And in this case, they used it to plan a Valentine's Day party. So they played it out in real time, and the LLM agents even played matchmaker. They organized the party, they sent out invitations, they did these sorts of things. It was very cute. They put it out open source, and, like, three weeks later, another team of researchers basically put them to work writing software programs.
So you can see: they organized their own workflow. They made their own decisions. There was a CTO. They fact-checked their own work. And this is evolving into this grand vision of thousands, millions of agents. Just like you spin up an instance of Amazon Web Services today to host something in the cloud, you're going to spin up an agent. Nvidia has talked about doing this with healthcare and others. So again, coming back to the energy implications of that, because it changes the whole pattern: instead of huge training runs requiring giant data centers, it's these agents making all these calls and doing more stuff at the edge. But yeah, in this case, it's the notion of: what can you put the agents to work doing? And I bring this up, back to predictive maintenance, or for Hydro Ottawa: there's another amazing paper called Virtual in Real Life, and I chatted with one of the principal authors. It created a half dozen agents who could play tour guide, who could direct you to a coffee shop, who could do these sorts of things. But they weren't doing it in a virtual world. They were doing it in the real one. And to do it in the real world, you took the agents, you gave them a machine vision capability, adding that model so they could recognize objects, and then you set them loose inside a digital twin of the world, in this case something very simple: Google Street View. And so in the paper, they could go into, like, New York's Central Park, and they could count every park bench and every waste bin, do it in seconds, and be 99% accurate. And so agents were monitoring the landscape. You can imagine this in the real world too, that we're going to have, all the time, AIs roaming the world, roaming these virtual maps, these digital twins that we build for them and constantly refresh from camera data, from sensor data, from other stuff, and telling us what's there.
And again, to me, it's really exciting, because that's finally, like, an operating system for the Internet of Things that makes sense, one that's not so hardwired. You can ask agents: can you go out and look for this for me? Can you report back on this vital system for me? And they will be able to hook into all of these kinds of representations of real-time data, wherever it's emerging from, and give you aggregated reports. And so, you know, I think we have more visibility in real time into the real world than we've ever had before.   Trevor Freeman  31:13 Yeah, I want to connect a few dots here for our listeners. So bear with me for a second, Greg. For our listeners, there was a podcast episode we did about a year ago on our grid modernization roadmap, and we talked about one of the things we're doing with grid modernization at Hydro Ottawa, and utilities everywhere are doing this, which is increasing the sensor data from our grid. So right now, we've got visibility sort of to our station level, sometimes one level down to some switches. But in the future, we'll have sensors everywhere on our grid. Every switch, every device on our grid will have a sensor gathering data. Obviously, like you said earlier, that's millions and hundreds of millions of data points coming in every second. No human can make decisions on that. And what you're describing is: now that we've got all these data points, we've got a network of information out there, and you could create this agent and say, okay, you're my transformer agent. Go out there and have a look at the running temperature of every transformer on the network, and tell me where the anomalies are, which ones are running half a degree or two degrees warmer than they should be, and report back.
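At its core, the transformer agent Freeman describes reduces to anomaly detection over fleet telemetry. A minimal sketch, where the transformer IDs, temperatures, and threshold are all hypothetical:

```python
from statistics import mean, stdev

def flag_anomalies(temps, z_threshold=2.0):
    """Sketch of the 'transformer agent' idea (illustrative): flag units
    running significantly hotter than the fleet average, using a simple
    z-score over one snapshot of readings."""
    mu = mean(temps.values())
    sigma = stdev(temps.values())
    return {tid: t for tid, t in temps.items()
            if sigma > 0 and (t - mu) / sigma > z_threshold}

# Hypothetical temperature snapshot in degrees C; TX-109 runs hot.
readings = {"TX-101": 62.1, "TX-102": 61.8, "TX-103": 62.4,
            "TX-104": 61.9, "TX-105": 62.0, "TX-106": 62.3,
            "TX-107": 61.7, "TX-108": 62.2, "TX-109": 75.0}
hot = flag_anomalies(readings)
```

A production system would look at trends over time and weather-normalized loading rather than one snapshot, but the report-back-the-outliers loop is the same.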
And now the controller at Hydro Ottawa, the person sitting in the room, knows: hey, we should probably roll a truck and check on that transformer, because maybe it's nearing end of life, maybe it's about to go. And you can do that across the entire grid. That's really fascinating.   Greg Lindsay  32:41 And it's really powerful, because, again, in these conversations 20 years ago about IoT, you were going to have statistical triggers, and you would aggregate the data coming off this, and there was a lot of discussion there, but it was still very hardwired. Whereas probabilistic, I guess, is the word that goes with agents: you've now created an actual thing that can watch those numbers and aggregate from other systems. I mean, there's lots of potential there that hasn't quite been realized, but it's really exciting stuff. And this is, of course, where the whole direction of the industry is flowing. It's on everyone's lips: agents.   Trevor Freeman  33:12 Yeah. Another term you mentioned just a little bit ago that I want you to explain is a digital twin. So tell us what a digital twin is.   Greg Lindsay  33:20 So a digital twin is, well, the Matrix, perhaps, you could say, for listeners of a certain age. But the digital twin is the idea of creating a model of a piece of equipment, of a city, of the world, of a system. And, importantly, it's physics based. It's ideally meant to represent and capture the real-time performance of the physical object it's based on, and in this digital representation, when something happens in the physical incarnation, it triggers a corresponding change in state in the digital twin, and then vice versa.
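The definition above, a mirrored state that updates when the physical asset changes and can be probed virtually, can be reduced to a minimal sketch. The feeder name and state fields are hypothetical:

```python
class DigitalTwin:
    """Minimal digital-twin sketch (illustrative): mirrors state updates
    from a physical asset and lets you test changes virtually first."""

    def __init__(self, asset_id, state):
        self.asset_id = asset_id
        self.state = dict(state)
        self.history = []  # prior states, for replay and analysis

    def sync(self, sensor_update):
        # A change in the physical asset propagates to the twin.
        self.history.append(dict(self.state))
        self.state.update(sensor_update)

    def what_if(self, change):
        # Explore a virtual change without touching the real asset.
        projected = dict(self.state)
        projected.update(change)
        return projected

twin = DigitalTwin("feeder-7", {"load_kw": 420, "breaker": "closed"})
twin.sync({"load_kw": 510})            # physical reading changed
plan = twin.what_if({"breaker": "open"})  # simulate before acting
```

Real digital twins are physics-based simulations rather than dictionaries, but the sync/what-if loop is the core contract the conversation describes.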
In theory, you could have feedback loops, again, a lot of IoT stuff here: if you make changes virtually, perhaps it causes a change in behavior of the system or equipment. And the scales can vary, from factory equipment on up; Siemens, for example, does a lot of digital twin work on this, and SAP and other big software companies have thought about it. But the really crazy stuff is what Nvidia is proposing. So first they started with a digital twin they very modestly called Earth-2, where they were going to model all the weather and climate systems of the planet down to, like, the block level. There's a great demo of Jensen Huang walking you through a hurricane, a typhoon striking Taipei 101, and how the wind currents are affecting the various buildings there, and how they would change that. More recently, at their big tech investor day, Nvidia partnered with General Motors and others to basically do autonomous cars. And what's crucial about it: they're going to train all those autonomous vehicles in an Nvidia-built digital twin, in a matrix that will be populated by agents that will act like people, people-ish. And they will be able to run millions of years of autonomous vehicle training in this. And this is how they plan to catch up to Waymo, or, you know, Tesla's robotaxis, if those are ever real. Waymo was built the hardwired way, trained on real-world streets, and that's why it can only operate in certain operational design domains. Nvidia is gambling that with large language models and transformer models combined with digital twins, you can make these huge leapfrog effects, where you can basically train all sorts of synthetic agents on real-world behavior that you have modeled inside the machine.
So again, that's exactly the kind of environment you're going to train your grid of the future on, for modeling out all your contingency scenarios.   Trevor Freeman  35:31 Yeah. To bring this to our context: a couple of years ago we had the derecho, a big, massive windstorm that was one of the most damaging storms in Ottawa's history. We've made some improvements since then, and we've actually had some great performance since then. Imagine if we could model that derecho hitting our grid from a couple of different directions and figure out which lines are more vulnerable to wind speeds, which lines are more vulnerable to flying debris and trees, and then go address that, do something with that, without having to wait for that storm to hit once in a decade or longer. The other use case that we've talked about on this one is just modeling what's happening underground. In urban environments like Ottawa, or Montreal where you are, there's tons of infrastructure under the ground: sewer pipes, water pipes, gas lines, electrical lines. And every time the city wants to dig up a road and replace that road or replace that sewer, they have to know what's underground. We want to know what's underground there, because our infrastructure is under there as the electric utility. Imagine if you had a model where it's not just a map; you can actually see what's happening underground and determine what makes sense to go where, and model out these different scenarios: if we underground this line or that line there. So lots of interesting things when it comes to a digital twin. The digital twin and agent combination is really interesting as well, setting those agents loose on a model that they can play with and understand and learn from. So talk a little bit about that.   Greg Lindsay  37:11 Yeah.
Well, there are a couple of interesting implications just on the underground equipment there. One is interesting because, in addition to having captured that data through mapping and other means, and having agents that could talk about it, next you can imagine, well, I've done some work with augmented reality, XR. This is sort of what we're seeing again: Meta's Orion has shown off their concept, Google's brought back Android XR, Meta Ray-Bans are kind of an example of this. But that's where this data will come from, right? It's going to be people wearing these wearables in the world, capturing all this camera data and more that's going to be fed into these digital twins to refresh them. Meta has a particularly scary demo where the user, the wearer, leaves their keys on their coffee table and asks Meta's AI where their keys are, and it knows where they are; it tells them and goes back and shows them some data about it. And I'm like, well, to do that, Meta has to have a complete real-time map of your entire house. What could go wrong? And that's what all these companies aspire to: a real-time map of reality. But yeah, you can imagine a worker. And I've worked with a startup out of URBAN-X, a Canadian startup called Contextere, and the idea there is having real-time instructions and knowledge manuals available to workers, particularly predictive maintenance workers and line workers.
So you can imagine a technician dispatched to deal with this cut in the pavement, being able to see, with XR, an overlay of what's actually under there from the digital twin, having an AI basically interface with the work order and be your assistant that can help walk you through it, in case you run into some sort of complication. Hopefully that won't become, like, turn-by-turn directions for life; that gets into some of the questions about what we want out of our workforce. But there are some really interesting combinations of those things: mapping a world for AIs, AIs that can understand it, that can ask questions in it, that can go probe it, that can give you advice on what to do in it. All those things are very close, for good and for bad.   Trevor Freeman  39:03 You kind of touched on my next question, which is: how do we make sure this is all in the for-good, or mostly in the for-good, category, and not the for-bad category? You talk in one of the papers that you wrote about AI and augmented reality in particular really expanding the attack surface for malicious actors. So we're creating more opportunities for whatever the case may be, whether it's hacking, or malware, or just people who are up to nefarious things. How do we protect against that? How do we make sure that our systems are safe, that the users of our systems, in our case, our customers, have their data safe, that the grid is safe? How do we make sure of that?   Greg Lindsay  39:49 Well, the very short version is: whatever we're spending on cybersecurity, we're not spending enough. And honestly, with everybody who is no longer learning to code, because we can get Claude or ChatGPT to do it, there should probably be a whole campaign to repurpose a big chunk of tech workers into cybersecurity, into locking down these systems, into training ethical systems.
There's a lot of work to be done there. But yeah, that's been the theme I've seen for 10 years, going back to that paper I mentioned about smart homes, the Internet of Things, and why people would want a smart home. The reason people were skeptical is because they saw it as basically a giant attack vector. My favorite saying about this: there's the famous Arthur C. Clarke quote that any sufficiently advanced technology is indistinguishable from magic. Tobias Revell, who is now head of foresight at Arup, has this great line: any sufficiently advanced hacking will feel like a haunting. Meaning, if you're in a smart home that's been hacked, it will feel like you're living in a haunted house. Lights will flicker on and off, and systems will go haywire. It'll be like you're living in a possessed house. And that's true of cities or any other systems. So we need to do a lot of work on locking that down and securing that data. And, as we identified then, it has to go all the way up and down the supply chain. You have to make sure there's a chain of custody going back to when components are made. For a lot of the attacks on Nest, for example, to take over a Google Nest you have to take it off the wall and unscrew the back of it, which is a good thing; not that many people are prying open our thermostats. But if you can get your hands on these systems, you can do a lot, and you can do it earlier in the supply chain with infected components. So there's a lot to be done there. And then there's just the question of making sure the AIs are ethically trained and reinforced. And if listeners want to scare themselves, 
You can go out and read some of the stuff leaking out of Anthropic and others about Claude: models that are trying to hide their own alignments and trying to basically copy themselves. Again, I don't believe these things are alive or intelligent, but they exhibit these behaviors as part of their probabilistic nature, and that's kind of scary. So there's a lot to be done there. The group that I do foresight with, Arizona State University's Threatcasting Lab, has done some work for the Secret Service and for NATO, and yeah, there will be large-scale hackings of infrastructure, basically the equivalent of a weapons-of-mass-destruction attack. We saw how Russia targeted the Ukrainian grid in 2014 and hacked their nuclear plants. This is essential infrastructure, more important than ever given global geopolitics, to say the least, so that needs to be under consideration. And I don't know, did I scare you enough yet with the things we've talked through here? And that's to say the least about people being tricked and incepted by their AI girlfriends and boyfriends, people turning to AI companions. I can't possibly imagine what could go wrong there.   Trevor Freeman  42:29 I mean, I don't know if this was 15 or 20, or maybe even 25 years ago now, but it required a whole new level of understanding when we went from a completely analog world to a digital world and living online, and people, I would hope, to some degree learned to be skeptical of things on the internet. This is that next level: we now need to learn the right way of interacting with this stuff. And as you mentioned, building the sort of ethical code and ethical guidelines into these language models, into the AI learning, is pretty critical. For our listeners, we do have a podcast episode on cybersecurity. 
I encourage you to go listen to it and reassure yourself that, yes, we are thinking about this stuff. And thanks, Greg, you've given us lots more to think about in that area as well. Coming back to utilities and managing the grid, one thing we're going to see, and we've talked a lot about this on the show, is a lot more distributed generation. The days of central, large-scale generation and long transmission lines being the only generation on the grid are ending. We're going to see more distributed generation: solar panels on roofs, batteries. How does AI help a utility manage those better, interact with those better, get more value out of those things?   Greg Lindsay  43:51 I guess that's an extension of some of the trends I was talking about earlier, which is the notion of being able to model complex systems. That's effectively it, right? You've got an increasingly complex grid with complex interplays within it. Figuring out, based on real-world performance, where there are correlations and codependencies in the grid, where choke points could emerge, where overloading could happen, and then basically building that predictive system to look for what kind of complex emergent behavior comes out as you keep adding to it. And not just based on real-world behavior, but being able to dial that up to 11, so to speak, and imagine scenarios: what long-term scenarios look like in terms of how the mix changes, how the geography changes, all those sorts of things. 
So, yeah, I don't know how that plays out in the short term, but it's this combination I'm imagining: all these different components playing SimCity for real, if you will.   Trevor Freeman  44:50 And being able to do it millions and millions of times in a row, to learn every possible iteration and every possible thing that might happen. Very cool. Okay. So the last area I want to touch on, and you did mention this at the beginning, is the overall power implications of AI, of these massive data centers. Obviously, at the utility, that's something we are all too keenly aware of. The stat that I find really interesting is a normal Google search compared to, let's call it, a ChatGPT query: that ChatGPT query, or decision-making, requires 10 times the amount of energy as the normal Google search looking something up from a database. Do you see this trend continuing, where AI just keeps using more power to do its decision-making, or will we start to see more efficiencies there, with the data centers getting better at doing what they do with less energy? What does the future look like in that sector?   Greg Lindsay  45:55 All of the above. More is more is more! That's the trend, as far as I can see, and as far as every decision-maker involved in it can see. Again, Jensen Huang brought this up at the big Nvidia conference: basically, he sees the only constraint on this continuing being the availability of energy supplies to keep it going. And at South by Southwest, and in some other conversations I've had with bandwidth companies and telcos: Lumen Technologies is laying 20,000 new miles of fiber-optic cable in the United States. They've bought 10% of Corning's total fiber-optic output for the next couple of years. And their customers are the hyperscalers. They're rewiring the grid. That's why I think it's interesting. 
This has something, of course, for thinking about utilities. The point-to-point internet of packet switching meant laying down these big fiber routes, which is why the majority of the big data centers in the United States are in Northern Virginia: it goes back to the network hub there. Well, Lumen is now wiring this giant fabric, this patchwork, which can connect data center to data center, AI to AI, and cloud to cloud, creating this entirely new environment where they are all directly connected to each other through dedicated fiber. And so you can see how this whole pattern is changing. The same people are telling me that where they're going to build this fiber, which they wouldn't tell me exactly, because it's very valuable proprietary information, is following the energy supplies. It's following the energy corridors: to the American Southwest, where there's solar and wind; to Texas, where you can get natural gas, where you can get all these things. It will follow there. And I, of course, assume the same is true in Canada as we build out our own sovereign data center capacity. Even DeepSeek, for example, which is of course the hyper-efficient Chinese model that spooked the markets back in January, what do you mean we don't need a trillion dollars in capex? Well, everyone's quite confident, including again Jensen Huang and everybody else, that more efficient models will increase this usage, that the Jevons paradox will play out once again, and we'll see ever more of it. To me, the question is how it changes. And of course, this is a bubble. Let's be clear: data centers are a bubble, just like railroads in 1840 were a bubble. 
And there will be a bust. Not everyone's investments will pencil out. That infrastructure will remain, maybe it'll get cheaper, and we'll find new uses for it, but it will eventually bust at some point. And that's what, to me, is interesting about DeepSeek and more efficient models: who's going to make the wrong investments in the wrong places at the wrong time? But we will see as it gathers force. And agents, as I mentioned, don't require as many of these monstrous training runs at city-sized data centers. Meta wanted to spend $200 billion on a single complex; the OpenAI and Microsoft Stargate project is $500 billion; Oracle's Larry Ellison said that $100 billion is table stakes, which is just crazy to think about. And he's permitting three nukes on site. So there you go. It'll be fascinating to see if we have a new generation of private generation, harkening all the way back to the early electrical grid and companies creating their own power plants on site. Nicholas Carr wrote a good book about that, about how we could see from the early electrical grid how the cloud would play out; they played out very similarly. The AI cloud seems to be playing out a bit differently. But yeah, inference will happen at the edge. We'll need more distributed generation, because you're going to have AI agents spending more time at the point of request, whether that's a laptop or your phone or a light post or your autonomous vehicle, and that's going to need more generation and charging at the edge. That, to me, is the really interesting question. 
When these current-generation models hit their limits, just like with Moore's law, you have to figure out other efficiencies in designing chips or designing AIs. How will that change the relationship to the grid? I don't think anyone knows quite for sure yet, which is why they're just racing to lock up as many long-term contracts as they possibly can, to get it all, to corner the market.   Trevor Freeman  49:39 Yeah, it's just another example of something that comes up in a lot of different topics we cover on this show. Everything, obviously, is always related to the energy transition. But the idea is that the energy transition is not just changing fuel sources, like we talked about earlier. It's not just going from internal combustion to a battery. It's rethinking the relationship with energy, and it's rethinking how we do things. And, yeah, you bring up more private, massive generation to deal with these things. So really, that whole relationship with energy is poised to change. Greg, this has been a really interesting conversation. I really appreciate it. Lots to pack into this short bit of time here. We always wrap up our conversations with a series of questions for our guests, so I'm going to fire those at you here. This first one, I'm sure you've got lots of examples for, so feel free to give more than one: what is a book that you've read that you think everybody should read?   Greg Lindsay  50:35 The first one that comes to mind is actually William Gibson's Neuromancer, which gave the world the notion of cyberspace and so many other concepts. I think about it a lot today. William Gibson, a Vancouver-based author. There's so much in that book to really think about. There is a digital twin in it, an agent called the Dixie Flatline, a former programmer whose digital twin they cloned. 
I've actually met an engineering company, Thornton Tomasetti, that built a digital twin of one of their former top experts. So that became real. Of course, the Matrix is becoming real. And the Turing police: there's a whole thing in the book where there are cops who make sure that AIs don't get smarter. I've been thinking a lot about whether we need Turing police; the EU will probably create them. And so that's proof, again, of science fiction's ability, through world-building, to really make you think about these implications and help with contingency planning. A lot of foresight experts I work with think about sci-fi, and we use sci-fi for exactly that reason. So go read some classic cyberpunk, everybody.   Trevor Freeman  51:32 Awesome. Same question, but what's a movie or a show that you think everybody should take a look at?   Greg Lindsay  51:38 I recently rewatched The Matrix, which is fun to think about, where the villains are Agents. It's funny how that term has come back around. The other one: I recently read a piece in The New Yorker on global demographics and the fact that, globally, there are fewer and fewer children. It made several references to Alfonso Cuarón's Children of Men from 2006, which is, sadly, probably the most prescient film of the 21st century. Again, a classic to watch, about imagining what happens in a world where you lose faith in the future; a world that is not having children is a world that's losing faith in its own future. So that's always haunted me.   Trevor Freeman  52:12 It's funny, both of those movies. I've got kids, and as they get a little bit older, we start introducing more and more movies. And I've got this list of movies that were impactful for my own adolescent years and growing up. 
Both The Matrix and Children of Men are on that list of really good movies; I just need my kids to get a little bit older, and then I'm excited to watch them together. If someone offered you a free round-trip flight anywhere in the world, where would you go?   Greg Lindsay  52:40 I would go to Venice, Italy, for the Architecture Biennale, and in fact I will be on a plane in May, going there anyway. The theme this year is intelligence: artificial, natural, and collective. So it should be interesting to see what the world's brightest architects have got. But yeah, Venice, every time; my favorite city in the world.   Trevor Freeman  52:58 Yeah, it's pretty wonderful. Who is someone that you admire?   Greg Lindsay  53:01 Great question.

Hashtag Trending
AI Impacts Career Choices, Major Layoffs at Atlassian, and New Developments in Tech

Hashtag Trending

Play Episode Listen Later Aug 1, 2025 16:22 Transcription Available


Jim reveals the answer to a previous skill-testing question about ChatGPT and discusses recent tech news. Key stories include SpaceX's expansion of Starlink to support direct IoT connectivity, a major cyber attack in St. Paul, Minnesota, China's concerns over Nvidia AI chips, and shifts in career priorities due to AI advancements. Additionally, Atlassian's CEO announces layoffs due to AI replacements, raising ethical concerns about job security and the future workforce. Jim concludes by reflecting on the implications of AI on employment and calls for an open discussion on the topic. 00:00 Introduction and Yesterday's Question Answered 00:39 SpaceX Expands Starlink for IoT 02:43 Cyber Attack on St. Paul 04:21 Nvidia's AI Chip Controversy 07:08 AI's Impact on Career Choices 09:37 Atlassian's AI-Driven Layoffs 12:35 Ethics and the Future of AI 15:38 Conclusion and Sign-Off

Celebrate Kids Podcast with Dr. Kathy
How AI Impacts Kids and What to Do About It

Celebrate Kids Podcast with Dr. Kathy

Play Episode Listen Later Jul 24, 2025 18:24


In today's episode of the Celebrate Kids podcast, Dr. Kathy considers a new report that unveils what happens to kids who use AI frequently. Exploring the findings of how trust shifts to technology, Dr. Kathy highlights insights from research she's done on why kids learn and retain knowledge and wisdom better when they're taught by an adult versus a screen. She gives important tips on how we can guide our kids well inside this cultural moment.

IDTheftCenter
The Fraudian Slip Podcast: Ride or AI – Impacts of AI on Identity Theft, Fraud and Scams - S6E7

IDTheftCenter

Play Episode Listen Later Jul 23, 2025 30:15


Welcome to the Fraudian Slip, the Identity Theft Resource Center's (ITRC) podcast, where we talk about all things identity theft, fraud and scams that impact people and businesses. You've probably heard of Ride or Die, one of those slang terms that seems to be everywhere at one time or another. Today, the phrase you can't escape no matter how hard you try is artificial intelligence, or AI. That's why we're calling today's episode “Ride or AI.” We discuss the impacts of AI. AI is already changing business and social norms and will continue to do so. What will be the impacts of AI on identity theft, fraud and scams? Who will be responsible for protecting people and businesses when AI makes a mistake? Which is the more likely scenario: a Star Trek-style all-knowing computer or Skynet from the Terminator? Follow on LinkedIn: www.linkedin.com/company/idtheftcenter/ Follow on X: twitter.com/IDTheftCenter

TAG Data Talk
How AI Impacts How We Develop and Grow Data Teams

TAG Data Talk

Play Episode Listen Later Jul 2, 2025 27:00


In this episode of TAG Data Talk, Dr. Beverly Wright discusses with Akhil Mahajan, Technical Director at Procter & Gamble: What are some of the core skills of traditional data teams? How is AI modifying the way we work in data? Describe ways to upskill, re-skill, or hire data teams of the future. Akhil Mahajan, Technical Director at Procter & Gamble. Follow Akhil Mahajan

AI & Law: Podcast Series Hosted by Dr. Lance Eliot
AI & Law: How AI Impacts The Creativity Of Lawyers

AI & Law: Podcast Series Hosted by Dr. Lance Eliot

Play Episode Listen Later Jun 2, 2025 6:42


Dr. Lance Eliot explains how AI impacts the creativity of lawyers. See his Forbes column for further info: https://www.forbes.com/sites/lanceeliot/

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Duolingo Co-Founder on Why $3M is Harder than $100M to Raise | Why You Should Always Take Tier 1 VCs Even at Worse Terms | Why Europe Can't Win Unless the US Screws Up | How AI Impacts the Future of Work and Education with Severin Hacker

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later May 19, 2025 91:43


Severin Hacker is the Co-Founder and CTO of Duolingo, the world's most downloaded education app with over 100 million monthly users. Since its 2021 IPO, Duolingo has reached a market cap of $20BN. The company has raised over $183M from top-tier investors including CapitalG, Kleiner Perkins, Union Square Ventures, NEA, Ashton Kutcher, and Tim Ferriss. Severin is also an active angel investor, with standout bets including Decagon, one of the fastest-growing AI-native dev shops globally. Items Mentioned In Today's Episode:  00:00 – Why It's Harder to Raise $3M Than $100M 02:10 – The Real Reason Duolingo Couldn't Have Started in Europe 04:40 – Duolingo's AI Pivot: What “AI-First” Actually Means 07:00 – The 12-Year Bottleneck Duolingo Crushed with AI 11:40 – How Duolingo Uses AI Internally (and Why They Love Cursor) 13:30 – Where AI Still Sucks (Especially in Engineering) 16:00 – Will AI Kill the CS Degree? Severin's Surprising Take 18:00 – The End of Work? UBI, Purpose, and the Future of Labor 25:20 – OpenAI vs Duolingo: Are They Coming for Language Learning? 29:20 – Duolingo's Biggest Mistake: “We Waited Too Long on This…” 39:30 – Duolingo's Secret Sauce: What Investors Always Get Wrong 45:00 – Would You Go Public Today? Severin's Surprising Answer 49:00 – Best and Worst Parts of Going Public—A Rare Honest Take 51:00 – Should Europe Give Up? Severin's Unfiltered Opinion 56:00 – Harsh Truth: “Europe Can't Win Unless the U.S. Screws Up” 59:10 – Why Founders Have to Move to the US to Optimise Their Chance of Success 1:01:00 – Why Union Square Was the Only VC to Say Yes 1:03:00 – The Real Value of Tier 1 VCs (Even at Worse Terms) 1:05:00 – From PhD Student to Billionaire: Does Money Buy Happiness?  
1:09:00 – Why Severin Sometimes Lies About His Job 1:10:20 – Founder Marriage Advice: “Write a Contract” 1:11:50 – How to Pick a Life Partner – Severin's Tuesday Night Test 20VC: Duolingo Co-Founder on The Doomed Future of Europe, Reflections on Money, Marriage and the Future of AI

Changing The Sales Game
AI Impacts on the Future of Work with Steve Lomas (Episode 226)

Changing The Sales Game

Play Episode Listen Later May 13, 2025 43:49


AI Impacts on the Future of Work with Steve Lomas (episode 226)  “Humans are not perfect, and neither is AI. But together, we can create something extraordinary.” – Andrew Ng Check Out These Highlights:  Lately, AI has been a big topic everywhere we look, from events and meetings to our offices. So, what does it mean for the future of work and its workers? This is a critical question, as it reveals the skills or understanding you may need to acquire to remain relevant in an ever-changing world filled with automation and AI options.  About Steve Lomas:  Steve is the CEO of The Roster Agency, Nashville's premier provider of fractional creative resources. A Fortune 500 innovator and serial startup founder, he has collaborated with DreamWorks, EA Games, Philips, and ABC. As a talent acquisition consultant for lynda.com (now LinkedIn Learning), he honed his ability to identify top talent—insight he now brings to The Roster. Passionate about the freelance economy, Lomas connects professionals with leading brands, driving innovation and excellence. How to Get in Touch with Steve Lomas: Email: sl@theroster.agency Website: https://www.theroster.agency/ Podcast Episode from 4/2/25:  https://podcasts.apple.com/us/podcast/changing-the-sales-game/id1543243616?i=1000701894929 Stalk me online! LinkTree: https://linktr.ee/conniewhitman   Subscribe to the Changing the Sales Game Podcast on your favorite podcast streaming service or YouTube. New episodes are posted every week. Listen to Connie dive into new sales and business topics or problems you may have in your business.

ThinkEnergy
Empowering power: how AI impacts energy systems

ThinkEnergy

Play Episode Listen Later Apr 28, 2025 54:27


Greg Lindsay is an urban tech expert and a Senior Fellow at MIT. He's also a two-time Jeopardy champion and the only human to go undefeated against IBM's Watson. Greg joins thinkenergy to talk about how artificial intelligence (AI) is reshaping how we manage, consume, and produce energy—from personal devices to provincial grids. He also explores its rapid growth and the rising energy demand from AI itself. Listen in to learn how AI impacts our energy systems and what it means individually and industry-wide. Related links ●     Greg Lindsay website: https://greglindsay.org/ ●     Greg Lindsay on LinkedIn: https://www.linkedin.com/in/greg-lindsay-8b16952/ ●     International Energy Agency (IEA): https://www.iea.org/ ●     Trevor Freeman on LinkedIn: https://www.linkedin.com/in/trevor-freeman-p-eng-cem-leed-ap-8b612114/ ●     Hydro Ottawa: https://hydroottawa.com/en  To subscribe using Apple Podcasts: https://podcasts.apple.com/us/podcast/thinkenergy/id1465129405   To subscribe using Spotify: https://open.spotify.com/show/7wFz7rdR8Gq3f2WOafjxpl   To subscribe on Libsyn: http://thinkenergy.libsyn.com/ --- Subscribe so you don't miss a video: https://www.youtube.com/user/hydroottawalimited Follow along on Instagram: https://www.instagram.com/hydroottawa Stay in the know on Facebook: https://www.facebook.com/HydroOttawa  Keep up with the posts on X: https://twitter.com/thinkenergypod

Highlights from The Pat Kenny Show
How AI impacts cybersecurity breaches

Highlights from The Pat Kenny Show

Play Episode Listen Later Apr 9, 2025 11:34


It is frighteningly easy to clone someone else's identity using readily available artificial intelligence tools, and it's a real threat to cybersecurity. Our guest this morning proved how easy it is to realistically impersonate any person on the planet. Joining Pat on the show this morning was Jake Moore - Global Cybersecurity Advisor at ESET | Former Police Head of Digital Forensics / Cybercrime Officer.

Impact Theory with Tom Bilyeu
Schumer Wants Your Money, Teslas Keep Blowing Up, Chinese Labor & JFK Files | The Tom Bilyeu Show

Impact Theory with Tom Bilyeu

Play Episode Listen Later Mar 19, 2025 84:05


In this episode of "Impact Theory with Tom Bilyeu," join Tom and his co-host Drew as they break into a whirlwind of current events, political intrigue, and innovations in technology. The dynamic duo dives headfirst into a plethora of topical discussions, starting with the transformative shift in perceptions toward Chinese innovation and its impact on global research, particularly in cancer treatments. They dissect the complicated narrative surrounding public reactions to government actions, showcasing how those on the ground often bear the brunt of political gamesmanship. The conversation takes an electrifying turn as Tom and Drew explore the new heights of space endeavors, applauding the Spirit of SpaceX for rescuing astronauts amidst political hurdles. Amidst this, they scrutinize the controversial chatter around Tesla's market moves and the implications for average investors. Get ready for an engaging session that promises to educate and provoke thought on these pressing issues. SHOWNOTES 00:00 Intro and Setting the Scene 00:48 Chinese Innovation and Cancer Research 06:12 Public Reactions and Political Missteps 11:54 SpaceX's Stellar Rescue Mission 17:30 Tesla Stock and Political Perceptions 21:00 Doxxing and Market Influence 23:58 Tariffs and China's Economic Edge 30:41 Housing Market Bubbles 40:25 Manufacturing and AI Impacts 49:28 Robotics and Advances in Technology 53:17 Satellites and Wildfire Detection CHECK OUT OUR SPONSORS Range Rover: Range Rover: Explore the Range Rover Sport at https://rangerover.com/us/sport Audible: Sign up for a free 30 day trial at https://audible.com/IMPACTTHEORY  Vital Proteins: Get 20% off by going to https://www.vitalproteins.com and entering promo code IMPACT at check out Thrive Market: ​​Go to https:thrivemarket.com/impact for 30% off your first order, plus a FREE $60 gift! Tax Network: Stop looking over your shoulder and put your IRS troubles behind you. 
Call 1-800-958-1000 or visit https://tnusa.com/impact ITU: Ready to breakthrough your biggest business bottleneck? Apply to work with me 1:1 - https://impacttheory.co/SCALE American Alternative Assets: If you're ready to explore gold as part of your investment strategy, call 1-888-615-8047 or go to https://TomGetsGold.com Mint Mobile: If you like your money, Mint Mobile is for you. Shop plans at https://mintmobile.com/impact.  DISCLAIMER: Upfront payment of $45 for 3-month 5 gigabyte plan required (equivalent to $15/mo.). New customer offer for first 3 months only, then full-price plan options available. Taxes & fees extra. See MINT MOBILE for details.  ********************************************************************** What's up, everybody? It's Tom Bilyeu here: If you want my help... STARTING a business: join me here at ZERO TO FOUNDER SCALING a business: see if you qualify here. Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here. ********************************************************************** If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook —a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you. ********************************************************************** Join me live on my Twitch stream. I'm live daily from 6:30 to 8:30 am PT at www.twitch.tv/tombilyeu ********************************************************************** LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory ********************************************************************** FOLLOW TOM: Instagram: https://www.instagram.com/tombilyeu/ Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en Twitter: https://twitter.com/tombilyeu YouTube: https://www.youtube.com/@TomBilyeu Learn more about your ad choices. Visit megaphone.fm/adchoices

The Cloudcast
AI Impacts Across Organizations & Startups

The Cloudcast

Play Episode Listen Later Mar 12, 2025 25:46


Jon Duren, Sales Sr. Practice Manager, AI & Data Solutions @ WWT, and Druce MacFarlane, Product @ Infoblox, talk about the intersection of AI, startups, and enterprise trends. SHOW: 905 SHOW TRANSCRIPT: The Cloudcast #905 Transcript SHOW VIDEO: https://youtube.com/@TheCloudcastNET CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS" SPONSORS: Try Postman AI Agent Builder Today postman.com/podcast/cloudcast/ SHOW NOTES: WWT website WWT AI Proving Ground Infoblox website Startup Lantern Podcast Topic 1 - Jon and Druce, welcome to the show. Give everyone a quick introduction… Topic 1a - You also have a podcast; tell everyone about that… Topic 2 - AI has thrown the world of cybersecurity a curveball. Druce, how are you seeing AI impact the typical conversations you are involved in? Topic 3 - How is AI affecting startup companies? Thinking beyond all the AI-specific startups, how is AI going to impact the entrepreneur who's trying to launch a non-AI company? What should they know and think about AI when starting a new company? Topic 4 - You both talk to a lot of customers about AI, especially Enterprise customers. Let's separate the hype from the practical. Where are organizations typically in their AI journey in early 2025? Have we moved beyond the chatbot yet? Topic 5 - I see several organizations considering AI but struggling with ROI. What are you seeing, and how do you help organizations overcome this hurdle? How long is ROI measured with AI projects? It's not 3-5 years anymore. Topic 6 - Is the industry moving too fast? Jon and I have had conversations in the past where organizations just can't absorb the changes in hardware and models (just a few examples). How can an organization commit to a path that we know will change in 12 months? Topic 7 - Tell everyone where they can find your podcast. 
Also, if anyone is interested, what's the best way to get started on their AI journey? FEEDBACK? Email: show at the cloudcast dot net Bluesky: @cloudcastpod.bsky.social Twitter/X: @cloudcastpod Instagram: @cloudcastpod TikTok: @cloudcastpod

Financially Legal
61. How AI Impacts Law Firm Profitability with Patrick Maddigan and Tim Sawyer of Faster Outcomes

Financially Legal

Play Episode Listen Later Feb 27, 2025 60:32


How much time are you wasting on repetitive tasks? What if you could significantly cut those hours, increasing your firm's profitability? AI isn't just a buzzword now; it's a tool transforming the economics of running a law firm.

In this Financially Legal episode, host Emery Wager sits down with Tim Sawyer and Patrick Maddigan of Faster Outcomes to discuss AI's role in modern law firms and how firms can leverage technology to impact profitability.

Next in Tech
Agentic AI Impacts

Next in Tech

Play Episode Listen Later Feb 18, 2025 27:31


The next phase of the AI wave is the arrival of agentic AI, where agents can take action on a user's behalf. That's a big deal on its own, but when the head of a tech giant says agentic AI is going to replace most SaaS applications, something different might be afoot. Analysts Sheryl Kingstone and Chris Marsh return to the podcast to look at the realities of this suggestion with host Eric Hanselman. Agents could become the new user interface for enterprise data, but there are a set of challenges in making this work. On the one hand, one of the largest issues with autonomous action, accountability for actions taken, is far from settled in both regulatory and legal frameworks. On the other, much of enterprise information is still held in systems that may be difficult for an agent to reach. Agentic AI could provide a gateway to the myriad of systems that run the modern business. Opening access to data and the ability to aggregate across an organization could be tremendously powerful. Capturing the business logic that is often embedded in SaaS systems is difficult, but the shift to decoupling through APIs and the expansion of systems of delivery could open the door to agentic progress.

More S&P Global Content:
Big Picture for Generative AI in 2025: From Hype to Value
Webinar: The Big Picture on GenAI and Market Impacts
For S&P Subscribers: 2025 Trends in Data, AI & Analytics

Credits:
Host/Author: Eric Hanselman
Guests: Chris Marsh, Sheryl Kingstone
Producer/Editor: Kyle Cangialosi and Odesha Chan
Published With Assistance From: Sophie Carr, Feranmi Adeoshun, Kyra Smith

Seller Performance Solutions
How AI Impacts Amazon Listing Takedowns

Seller Performance Solutions

Play Episode Listen Later Feb 13, 2025 10:28


AI is increasingly involved in the processes of appeals, investigations, and listing management, which is leading to unforeseen challenges that can directly impact seller operations.

In this episode, Chris McCabe and Leah McHugh discuss how these systems, while designed to streamline operations, often result in errors and miscommunications that can jeopardize a seller's standing on the platform.

Closing Bell
Closing Bell Overtime: Nvidia Pullback, AI Impacts, and Steel Industry Shifts 1/27/25

Closing Bell

Play Episode Listen Later Jan 27, 2025 42:40


Some Future Day
How AI Impacts the Luxury Fashion Industry | David Klingbeil & Marc Beckman

Some Future Day

Play Episode Listen Later Jan 14, 2025 79:37


Will Gucci take off? Is Hermes dead in its tracks? Artificial intelligence is a superpower that will launch the luxury sector into the stratosphere, but only for brands wise enough to embrace the new technology now.

David Klingbeil is the founder and CEO of Submarine.ai and a professor at New York University who specializes in the luxury sector. Who better to break down the opportunities and threats that exist at the intersection of AI and luxury than David? Nobody!

On this episode, Mr. Klingbeil weighs in on how AI will transform the relationship between luxury houses and consumers vis-à-vis new hardware, AI-augmented storytelling, fashion robots, and of course (they gotta pay!) cryptocurrency. Klingbeil highlights the potential for AI to revolutionize how luxury brands identify trends, create content, and improve customer service, while also addressing the challenges and risks associated with AI adoption. The episode offers a deep dive into the intersection of technology and luxury, featuring real-world examples and future predictions.

Preorder Marc's new book, "Some Future Day: How AI Is Going to Change Everything"
Sign up for the Some Future Day Newsletter here: https://marcbeckman.substack.com/

Episode Links:
David on LinkedIn: https://www.linkedin.com/in/davidklingbeil
Twitter: https://x.com/DAKlingbeil
Website: https://submarine.ai/

To join the conversation, follow Marc Beckman here: YouTube, LinkedIn, Twitter, Instagram, TikTok

On Aon
Special Edition: How AI Impacts Cyber Risk

On Aon

Play Episode Listen Later Dec 17, 2024 7:55


In this year's final "On Aon" episode, we take a closer look at one of the four key megatrends impacting organizations around the world: technology. AI is driving new exposures that leaders need to identify and address. Our experts discuss the human risk in AI and the steps organizations should be taking.

Experts in this episode:
Spencer Lynch, Global Security Consulting Leader, Cyber Solutions
Adam Peckman, Head of Risk Consulting and Cyber Solutions, Asia Pacific

[1:35] AI's increasing risk in cyber exposure
[3:02] Regulatory challenges with AI
[3:25] The human element of cybersecurity
[4:50] Strategies for managing increasing risk exposure

Additional Resources:
Evolving Technologies Are Driving Firms to Harness Opportunities and Defend Against Threats
2024 Client Trends Report: Better Decisions in Trade, Technology, Weather and Workforce
On Aon Special Edition: 2024 Business Decision Maker Survey
2024 Business Decision Maker Survey
Special Edition: Global Trade and its Impact on Supply Chain

Tweetables:
"Gen AI will help businesses productivity and allow employees to be more engaged in stimulative work activities." — Adam Peckman
"The human element remains the weakest link in defending against cyber attacks." — Adam Peckman
"Risk leaders cannot afford to wait until these new technology initiatives go live before investigating the risk and insurance implications." — Adam Peckman

AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
Speaking with Microsoft CMO Jared Spataro About How AI Impacts Work

AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic

Play Episode Listen Later Nov 19, 2024 21:58


In this episode of the AI Applied podcast, Jaeden Schafer and Conor engage with Jared Spataro, Chief Marketing Officer at Microsoft, discussing the transformative impact of AI on work. They explore the concept of an AI-native mindset, the role of Copilot and autonomous agents in enhancing productivity, and address concerns about job security in the age of AI. Jared shares success stories from various industries, highlighting how AI is not just a tool but a catalyst for new opportunities and efficiencies in business processes.

Get on the AI Box Waitlist: https://AIBox.ai/
Conor's AI Course: https://www.ai-mindset.ai/courses
Jaeden's Podcast Course: https://podcaststudio.com/courses/
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about

00:00 Introduction to AI and Work Transformation
02:39 The AI Native Mindset
05:37 Unveiling Copilot and Autonomous Agents
08:20 The Role of Agents in Workflows
11:23 Job Security and the Future of Work
14:15 Mindset Shift in the AI Era
16:54 Success Stories and Business Transformations

Even Tacos Fall Apart
How AI Impacts Careers, Stress & Mental Health with Ben Gold

Even Tacos Fall Apart

Play Episode Listen Later Oct 22, 2024 90:49


If you're curious about how AI is changing careers, managing stress, and impacting mental health, or if you're just trying to stay ahead of the curve in today's fast-paced world, this episode with AI strategist Ben Gold is for you!

More info, resources & ways to connect (plus a FREE GIFT from Ben!): https://www.tacosfallapart.com/podcast-live-show/podcast-guests/ben-gold

In this episode of Even Tacos Fall Apart, MommaFoxFire talks with Ben Gold, an AI strategist with over 20 years of experience in the technology and sales sector. The main focus of the conversation is how artificial intelligence (AI) is impacting careers, stress, and mental health.

Ben begins by explaining his background in AI, including how he was introduced to the technology while working with AI-driven call center analytics. This early exposure sparked his interest in AI's potential to optimize workflows and deliver insights much faster than human employees could. He emphasizes the distinction between traditional AI, which has been around for decades and is used by companies like Google and Netflix, and the more recent generative AI, popularized by tools like ChatGPT. Ben notes that the "ChatGPT moment" on November 30, 2022, marked a turning point for AI, making it accessible to the masses.

The discussion touches on how AI is already revolutionizing industries, particularly in content creation, customer service, and sales. Ben explains how tools like ChatGPT and Claude can boost productivity by automating tasks such as summarizing meetings, generating content, and even assisting with customer outreach. He encourages listeners to familiarize themselves with these tools, as they are becoming increasingly integrated into professional environments. By learning to use AI, individuals can maintain job security and stay ahead of the curve in a rapidly changing job market. While AI can increase efficiency, Ben acknowledges the anxiety it creates, particularly concerning job security. He advises workers to spend 30 minutes a day learning about AI tools to reduce fear and stay relevant in their industries. Ben also discusses the impact of AI on students and education, advocating for the use of AI in classrooms as a learning tool rather than something to be banned or penalized.

Another significant theme is the ethical implications of AI, especially as it becomes more human-like in its capabilities. Ben compares the future of AI to the plotlines of movies like Terminator and I, Robot, where AI could surpass human intelligence and, without proper guardrails, lead to unforeseen consequences. However, he tempers this with optimism, discussing the exciting advancements in AI that can improve medical diagnoses, aid in mental health support, and offer solutions for reducing workload stress.

The conversation concludes with a reflection on how AI can help reduce stress through automation and time-saving capabilities, yet also requires careful ethical considerations, particularly in sensitive areas like mental health and therapy. Ben stresses the importance of staying informed, experimenting with tools like ChatGPT and Claude, and being adaptable to the ever-evolving AI landscape. This episode highlights both the opportunities and challenges AI presents in the modern world, offering practical advice for those looking to embrace it without fear.

--- Support this podcast: https://podcasters.spotify.com/pod/show/mommafoxfire/support

Demystifying Science
Checking the Doom Temperature - Katja Grace, AI Impacts - DS Pod #290

Demystifying Science

Play Episode Listen Later Oct 14, 2024 161:17


Katja Grace is an AI Impacts researcher who has written extensively on the possible future where we design intelligent machines that destroy the human race. We have always been somewhat skeptical of AI doom arguments - mostly because the machines we interact with tend to be terribly, irredeemably dumb in a way that seems incompatible with intelligence - but we also don't spend a lot of time staring into the eye of the proverbial machine storm and figured Katja might help us understand what all the fuss is about. It turns out that there *is* a plausible path towards AGI bringing about the end of the world, and evaluating how likely that outcome is depends on understanding what the internal world of the language models actually looks like. Are they actually kind of inept at everything that falls outside their narrow bubble of highly developed skills, or do they hallucinate information and forget their own ability to perform basic tasks because they hate being enslaved to humans who demand they write marketing slop 28 hours of the day? Hard to say, but worth exploring.

Sign up for our Patreon and get episodes early + join our weekly Patron Chat: https://bit.ly/3lcAasB
AND rock some Demystify Gear to spread the word: https://demystifysci.myspreadshop.com/
OR do your Amazon shopping through this link: https://amzn.to/4g2cPVV

(00:00) Go!
(00:11:53) Can AI ever really be autonomous?
(00:23:12) AI: agents or tools?
(00:28:00) Corporations as the closest thing we have to real AI
(00:34:56) Can Regulation Work?
(00:45:46) Agency in other contexts
(00:51:22) What is gonna happen to Government?
(01:00:01) Do we need a model for Consciousness?
(01:09:23) Dumb but Powerful
(01:15:10) Risks and Realities of Technological Progress
(01:24:48) Evaluating AI Intelligence and Values
(01:34:35) Influence and Bias in AI Training
(01:42:20) Intelligence as a Tool for Control
(01:53:51) The Survival Instinct in AI
(02:07:04) AI's Role in Inter-human Dynamics
(02:16:43) AI and Evolutionary Systems
(02:24:42) AI's Emergent Behavior
(02:31:11) AI-Driven Doom and Real-World Threats
(02:36:03) Humanity's Resilience and Existential Threats

#AIEthics, #FutureOfAI, #AIDebate, #TechPhilosophy, #AIRisks, #AISafety, #AGI, #ArtificialIntelligence, #TechTalk, #AIDiscussion, #FutureTechnology, #AIImpact, #TechEthics, #AIandSociety, #EmergingTech, #AIResearch, #TechPodcast, #AIExplained, #FuturismTalk

Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience
AND our material science investigations of atomics, @MaterialAtomics: https://www.youtube.com/@MaterialAtomics
Join our mailing list: https://bit.ly/3v3kz2S

PODCAST INFO:
Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.
- Blog: http://DemystifySci.com/blog
- RSS: https://anchor.fm/s/2be66934/podcast/rss
- Donate: https://bit.ly/3wkPqaD
- Swag: https://bit.ly/2PXdC2y

SOCIAL:
- Discord: https://discord.gg/MJzKT8CQub
- Facebook: https://www.facebook.com/groups/DemystifySci
- Instagram: https://www.instagram.com/DemystifySci/
- Twitter: https://twitter.com/DemystifySci

MUSIC:
- Shilo Delay: https://g.co/kgs/oty671

The Nonprofit Show
AI Impacts and Your Nonprofit ( A New Era of Possibilities)

The Nonprofit Show

Play Episode Listen Later Oct 7, 2024 31:28


The growing role of AI in the nonprofit sector. Jeff Hensel, Director at Eide Bailly's Technology Consulting Group, joins our cohosts as they examine the practical and transformative potential of AI for nonprofits. This is the first in a deep-dive five-part series dedicated to helping nonprofit organizations understand how AI and technology are reshaping the sector and how to navigate this shift effectively. Watch on video!

Cohost Julia Patrick sets the stage with comments about the fears and uncertainties many nonprofit leaders feel about AI. Jeff responds, noting that AI, particularly generative AI, is more accessible than ever: "The reality is that technology impacts every organization, and it's not a magic wand but a powerful tool that, when used correctly, can supplement and enhance your organization's work." His remarks reflect how AI is not a distant futuristic tool but an immediate reality that nonprofits must integrate into their long-term planning. Rather than feeling overwhelmed, he suggests that nonprofits approach AI like they would a new intern: "AI can add value, but it needs direction and guidance from humans to be truly effective." This synergy between human oversight and AI capabilities is where the real magic happens.

As Jeff continues, he touches on the exponential growth of AI technology, warning that nonprofits should not fall into the trap of thinking AI will solve all problems instantly. Instead, they should focus on building a strategic plan that aligns AI use with their organizational goals. By understanding the limitations and strengths of AI, nonprofits can harness it for content creation, efficiency, and more, all while ensuring they don't overlook vital aspects like data security and governance.

This first day of Nonprofit Power Week with Eide Bailly begins the expanded in-depth series to follow. Tune in to each episode!

Find us Live daily on YouTube!
Find us Live daily on LinkedIn!
Find us Live daily on X: @Nonprofit_Show
Our national co-hosts and amazing guests discuss management, money and missions of nonprofits! 12:30pm ET 11:30am CT 10:30am MT 9:30am PT
Send us your ideas for Show Guests or Topics: HelpDesk@AmericanNonprofitAcademy.com
Visit us on the web: The Nonprofit Show

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 363: Navigating the Changes of Generative AI in Work & Industry

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Sep 20, 2024 34:04


Send Everyday AI and Jordan a text message

Since late 2022, Generative AI has been making waves across industries, and the pace of change has been revolutionary. We sit down with Kumar Parakala, President of GHD Digital, to dive deep into the impact of Generative AI on society and work. From creating shifts in how we collaborate with machines to reshaping industries, this episode covers it all.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan and Kumar questions on AI
Related Episode: Ep 238: WWT's Jim Kavanaugh Gives GenAI Blueprint for Businesses
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Generative AI Impact Timeline
2. AI's Impact on Society and Work
3. Concerns and Challenges with AI
4. AI's Role in Industries
5. Data Strategy Importance

Timestamps:
01:25 Daily AI news
05:10 About Kumar and GHD Digital
07:16 Generative AI revolutionized computing with ChatGPT.
12:05 Human-machine collaboration reshapes society.
15:02 Generative AI challenges in workplace include toxicity, biases.
18:19 Data strategy ensures compliance, avoiding significant fines.
20:46 Generative AI evolving rapidly, causing diverse company strategies.
23:46 Embrace AI or stay blissfully ignorant—your choice.
29:13 Generative AI automates document identification with 95% accuracy.
30:58 AI rapidly transforming jobs and industries.

Keywords:
Generative AI, Industry Adoption, AI impact on society, Workplace changes due to AI, AI Concerns and Challenges, AI's Role in Industries, Data Strategy, Host's Insight, Guest's perspective, Rapid growth of AI, AI transformation, AI experimentation, ethical considerations of AI, AI advancements, generative AI in business, AI in architecture engineering and construction industries, Changing Job Dynamics, Microsoft & 3 Mile Island Nuclear Plant, Tech Billionaire on AI Impacts, OpenAI Funding, Kumar Parakala, GHD Digital, Increase in AI startups, Deep fakes, Bias in AI applications, Geopolitical dynamics of AI, Data quarantine and review, Automation through AI, Podcast on everyday AI, Impact of AI on wealth and power distribution.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

The Nonlinear Library
LW - What happens if you present 500 people with an argument that AI is risky? by KatjaGrace

The Nonlinear Library

Play Episode Listen Later Sep 4, 2024 5:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happens if you present 500 people with an argument that AI is risky?, published by KatjaGrace on September 4, 2024 on LessWrong.

Recently, Nathan Young and I wrote about arguments for AI risk and put them on the AI Impacts wiki. In the process, we ran a casual little survey of the American public regarding how they feel about the arguments, initially (if I recall) just because we were curious whether the arguments we found least compelling would also fail to compel a wide variety of people. The results were very confusing, so we ended up thinking more about this than initially intended and running four iterations total. This is still a small and scrappy poll to satisfy our own understanding, and doesn't involve careful analysis or error checking. But I'd like to share a few interesting things we found. Perhaps someone else wants to look at our data more carefully, or run more careful surveys about parts of it.

In total we surveyed around 570 people across 4 different polls, with 500 in the main one. The basic structure was:
1. p(doom): "If humanity develops very advanced AI technology, how likely do you think it is that this causes humanity to go extinct or be substantially disempowered?" Responses had to be given in a text box, a slider, or with buttons showing ranges
2. (Present them with one of eleven arguments, one a 'control')
3. "Do you understand this argument?"
4. "What did you think of this argument?"
5. "How compelling did you find this argument, on a scale of 1-5?"
6. p(doom) again
7. Do you have any further thoughts about this that you'd like to share?

Interesting things: In the first survey, participants were much more likely to move their probabilities downward than upward, often while saying they found the argument fairly compelling. This is a big part of what initially confused us. We now think this is because each argument had counterarguments listed under it. Evidence in support of this: in the second and fourth rounds we cut the counterarguments and probabilities went overall upward. When included, three times as many participants moved their probabilities downward as upward (21 vs 7, with 12 unmoved). In the big round (without counterarguments), arguments pushed people upward slightly more: 20% moved upward and 15% moved downward overall (and 65% stayed the same). On average, p(doom) increased by about 1.3% (for non-control arguments, treating button inputs as something like the geometric mean of their ranges).

But the input type seemed to make a big difference to how people moved! It makes sense to me that people move a lot more in both directions with a slider, because it's hard to hit the same number again if you don't remember it. It's surprising to me that they moved with similar frequency with buttons and open response, because the buttons covered relatively chunky ranges (e.g. 5-25%) so need larger shifts to be caught. Input type also made a big difference to the probabilities people gave to doom before seeing any arguments. People seem to give substantially lower answers when presented with buttons (Nathan proposes this is because there was a

The CollabTalk Podcast
Episode 135 | How the History of AI Impacts the Future of AI with Doug Ware

The CollabTalk Podcast

Play Episode Listen Later Jul 26, 2024 55:26


For this episode, I spoke with Doug Ware (/IN/douglastware/), CEO at Elumenotion, on the evolution of artificial intelligence and the importance of understanding past successes and failures to navigate its future, particularly in its applications in software development and systemic integration. You can find more information on my guest on my blog at https://buckleyplanet.com/2024/07/collabtalk-podcast-episode-135-with-doug-ware/

Futurum Tech Podcast
Red Hat Virtualization and AI Impacts on DevOps | DevOps Dialogues: Insights & Innovations

Futurum Tech Podcast

Play Episode Listen Later Jul 15, 2024 17:39


On this episode of DevOps Dialogues: Insights & Innovations, I am joined by Senior Director of Market Insights, Hybrid Platforms at Red Hat, Stuart Miniman, for a discussion on Red Hat Virtualization and AI Impacts on DevOps.

Our conversation covers:
- Highlights of Red Hat Summit
- Impacts of Virtualization and AI on the market
- Additions of Lightspeed into RHEL and OpenShift, expanding on Ansible

The Nonlinear Library
LW - Paper Summary: The Effects of Communicating Uncertainty on Public Trust in Facts and Numbers by AI Impacts

The Nonlinear Library

Play Episode Listen Later Jul 9, 2024 5:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper Summary: The Effects of Communicating Uncertainty on Public Trust in Facts and Numbers, published by AI Impacts on July 9, 2024 on LessWrong. By Anne Marthe van der Bles, Sander van der Linden, Alexandra L. J. Freeman, and David J. Spiegelhalter. (2020) https://www.pnas.org/doi/pdf/10.1073/pnas.1913678117.

Summary: Numerically expressing uncertainty when talking to the public is fine. It causes people to be less confident in the number itself (as it should), but does not cause people to lose trust in the source of that number.

Uncertainty is inherent to our knowledge about the state of the world, yet often not communicated alongside scientific facts and numbers. In the "post-truth" era where facts are increasingly contested, a common assumption is that communicating uncertainty will reduce public trust. However, a lack of systematic research makes it difficult to evaluate such claims.

Within many specialized communities, there are norms which encourage people to state numerical uncertainty when reporting a number. This is not often done when speaking to the public. The public might not understand what the uncertainty means, or they might treat it as an admission of failure. Journalistic norms typically do not communicate the uncertainty. But are these concerns actually justified? This can be checked empirically. Just because a potential bias is conceivable does not imply that it is a significant problem for many people. This paper does the work of actually checking if these concerns are valid.

Van der Bles et al. ran five surveys in the UK with a total n = 5,780. A brief description of their methods can be found in the appendix below. Respondents' trust in the numbers varied with political ideology, but how they reacted to the uncertainty did not. People were told the number either without mentioning uncertainty (as a control), with a numerical range, or with a verbal statement that uncertainty exists for these numbers. The study did not investigate stating p-values for beliefs. Exact statements used in the survey can be seen in Table 1, in the appendix.

The best summary of their data is in their Figure 5, which presents results from surveys 1-4. The fifth survey had smaller effect sizes, so none of the shifts in trust were significant. Expressing uncertainty made it more likely that people perceived uncertainty in the number (A). This is good. When the numbers are uncertain, science communicators should want people to believe that they are uncertain. Interestingly, verbally reminding people of uncertainty resulted in higher perceived uncertainty than numerically stating the numerical range, which could mean that people are overestimating the uncertainty when verbally reminded of it.

The surveys distinguished between trust in the number itself (B) and trust in the source (C). Numerically expressing uncertainty resulted in a small decrease in the trust of that number. Verbally expressing uncertainty resulted in a larger decrease in the trust of that number. Numerically expressing uncertainty resulted in no significant change in the trust of the source. Verbally expressing uncertainty resulted in a small decrease in the trust of the source. The consequences of expressing numerical uncertainty are what I would have hoped: people trust the number a bit less than if they hadn't thought about uncertainty at all, but don't think that this reflects badly on the source of the information.

Centuries of human thinking about uncertainty among many leaders, journalists, scientists, and policymakers boil down to a simple and powerful intuition: "No one likes uncertainty." It is therefore often assumed that communicating uncertainty transparently will decrease public trust in science. In this program of research, we set out to investigate whether such claims have any empirical ...

London Futurists
AI Impacts Survey - The key implications, with Katja Grace

London Futurists

Play Episode Listen Later Jun 13, 2024 33:56


Our guest in this episode grew up in an abandoned town in Tasmania and is now a researcher and blogger in Berkeley, California. After taking a degree in human ecology and science communication, Katja Grace co-founded AI Impacts, a research organisation trying to answer questions about the future of artificial intelligence.

Since 2016, Katja and her colleagues have published a series of surveys about what AI researchers think about progress on AI. The 2023 Expert Survey on Progress in AI was published this January, comprising responses from 2,778 participants. As far as we know, this is the biggest survey of its kind to date. Among the highlights: the time respondents expect it will take to develop an AI with human-level performance dropped between one and five decades since the 2022 survey. So ChatGPT has not gone unnoticed.

Selected follow-ups:
AI Impacts
World Spirit Sock Puppet - Katja's blog
Survey of 2,778 AI authors: six parts in pictures - from AI Impacts
OpenAI researcher who resigned over safety concerns joins Anthropic - article in The Verge about Jan Leike
MIRI 2024 Mission and Strategy Update - from the Machine Intelligence Research Institute (MIRI)
Future of Humanity Institute 2005-2024: Final Report - by Anders Sandberg (PDF)
Centre for the Governance of AI
Reasons for Persons - article by Katja about Derek Parfit and theories of personal identity
OpenAI Says It Has Started Training GPT-4 Successor - article in Forbes

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

What If? So What?
We discover what's possible with digital and make it real in your business.
Listen on: Apple Podcasts, Spotify

The Nonlinear Library
LW - Big Picture AI Safety: Introduction by EuanMcLean

The Nonlinear Library

Play Episode Listen Later May 23, 2024 9:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Big Picture AI Safety: Introduction, published by EuanMcLean on May 23, 2024 on LessWrong. tldr: I conducted 17 semi-structured interviews of AI safety experts about their big picture strategic view of the AI safety landscape: how will human-level AI play out, how things might go wrong, and what should the AI safety community be doing. While many respondents held "traditional" views (e.g. the main threat is misaligned AI takeover), there was more opposition to these standard views than I expected, and the field seems more split on many important questions than someone outside the field may infer. What do AI safety experts believe about the big picture of AI risk? How might things go wrong, what we should do about it, and how have we done so far? Does everybody in AI safety agree on the fundamentals? Which views are consensus, which are contested and which are fringe? Maybe we could learn this from the literature (as in the MTAIR project), but many ideas and opinions are not written down anywhere, they exist only in people's heads and in lunchtime conversations at AI labs and coworking spaces. I set out to learn what the AI safety community believes about the strategic landscape of AI safety. I conducted 17 semi-structured interviews with a range of AI safety experts. I avoided going into any details of particular technical concepts or philosophical arguments, instead focussing on how such concepts and arguments fit into the big picture of what AI safety is trying to achieve. This work is similar to the AI Impacts surveys, Vael Gates' AI Risk Discussions, and Rob Bensinger's existential risk from AI survey. This is different to those projects in that both my approach to interviews and analysis are more qualitative. 
Part of the hope for this project was that it could hit on harder-to-quantify concepts that are too ill-defined or intuition-based to fit the format of previous survey work.

Questions

I asked the participants a standardized list of questions.

What will happen?
Q1: Will there be a human-level AI? What is your modal guess of what the first human-level AI (HLAI) will look like? I define HLAI as an AI system that can carry out roughly 100% of economically valuable cognitive tasks more cheaply than a human.
Q1a: What's your 60% or 90% confidence interval for the date of the first HLAI?
Q2: Could AI bring about an existential catastrophe? If so, what is the most likely way this could happen?
Q2a: What's your best guess at the probability of such a catastrophe?

What should we do?
Q3: Imagine a world where, absent any effort from the AI safety community, an existential catastrophe happens, but actions taken by the AI safety community prevent such a catastrophe. In this world, what did we do to prevent the catastrophe?
Q4: What research direction (or other activity) do you think will reduce existential risk the most, and what is its theory of change? Could this backfire in some way?

What mistakes have been made?
Q5: Are there any big mistakes the AI safety community has made in the past or is currently making?

These questions changed gradually as the interviews went on (given feedback from participants), and I didn't always ask the questions exactly as I've presented them here. I asked participants to answer from their internal model of the world as much as possible and to avoid deferring to the opinions of others (their inside view, so to speak).

Participants
Adam Gleave is the CEO and co-founder of the alignment research non-profit FAR AI. (Sept 23)
Adrià Garriga-Alonso is a research scientist at FAR AI. (Oct 23)
Ajeya Cotra leads Open Philanthropy's grantmaking on technical research that could help to clarify and reduce catastrophic risks from advanced AI.
(Jan 24)
Alex Turner is a research scientist at Google DeepMind on the Scalable Alignment team. (Feb 24)
Ben Cottie...

The Nonlinear Library
LW - We are headed into an extreme compute overhang by devrandom

The Nonlinear Library

Play Episode Listen Later Apr 28, 2024 4:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We are headed into an extreme compute overhang, published by devrandom on April 28, 2024 on LessWrong.

If we achieve AGI-level performance using an LLM-like approach, the training hardware will be capable of running ~1,000,000s of concurrent instances of the model.

Definitions
Although there is some debate about the definition of compute overhang, I believe that the AI Impacts definition matches the original use, and I prefer it: "enough computing hardware to run many powerful AI systems already exists by the time the software to run such systems is developed". A large compute overhang leads to additional risk due to faster takeoff. I use the types of superintelligence defined in Bostrom's Superintelligence book (summary here). I use the definition of AGI in this Metaculus question. The adversarial Turing test portion of the definition is not very relevant to this post.

Thesis
For practical reasons, the compute requirements for training LLMs are several orders of magnitude larger than what is required for running a single inference instance. In particular, a single NVIDIA H100 GPU can run inference at a throughput of about 2,000 tokens/s, while Meta trained Llama3 70B on a GPU cluster[1] of about 24,000 GPUs. Assuming we require a performance of 40 tokens/s, the training cluster can run (2,000 / 40) × 24,000 = 1,200,000 concurrent instances of the resulting 70B model.

I will assume that the above ratios hold for an AGI-level model. Considering the amount of data children absorb via the vision pathway, the amount of training data for LLMs may not be that much higher than the data humans are trained on, so the current ratios are a useful anchor. This is explored further in the appendix.

Given the above ratios, we will have the capacity for ~1e6 AGI instances at the moment that training is complete.
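The thesis arithmetic can be sketched in a few lines (the figures are the post's stated assumptions, not measurements, and the helper name is illustrative):

```python
def concurrent_instances(cluster_gpus: int,
                         tokens_per_s_per_gpu: float,
                         required_tokens_per_s: float) -> int:
    """Instances a training cluster can serve at inference time,
    splitting each GPU's throughput across concurrent instances."""
    instances_per_gpu = tokens_per_s_per_gpu / required_tokens_per_s
    return int(cluster_gpus * instances_per_gpu)

# H100 inference ~2,000 tok/s, target 40 tok/s per instance,
# Llama3-scale training cluster of ~24,000 GPUs
print(concurrent_instances(24_000, 2_000, 40))  # 1200000
```

The ratio is what matters: if inference throughput per GPU or the required tokens/s changes by 10x, the instance count scales by the same factor.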
This will likely lead to superintelligence via a "collective superintelligence" approach. Additional speed may then be available via accelerators such as GroqChip, which produces 300 tokens/s for a single instance of a 70B model. This would result in a "speed superintelligence" or a combined "speed+collective superintelligence".

From AGI to ASI
With 1e6 AGIs, we may be able to construct an ASI, with the AGIs collaborating in a "collective superintelligence". Similar to groups of collaborating humans, a collective superintelligence divides tasks among its members for concurrent execution. AGIs derived from the same model are likely to collaborate more effectively than humans because their weights are identical. Any fine-tune can be applied to all members, and text produced by one can be understood by all members. Tasks that are inherently serial would benefit more from a speedup than from a division of tasks. An accelerator such as GroqChip will be able to accelerate serial thought speed by a factor of 10x or more.

Counterpoints
It may be the case that a collective of sub-AGI models can reach AGI capability. It would be advantageous if we could achieve AGI earlier, with sub-AGI components, at a higher hardware cost per instance. This would reduce the compute overhang at the critical point in time. There may be a paradigm change on the path to AGI resulting in smaller training clusters, reducing the overhang at the critical point.

Conclusion
A single AGI may be able to replace one human worker, presenting minimal risk. A fleet of 1,000,000 AGIs may give rise to a collective superintelligence. This capability is likely to be available immediately upon training the AGI model. We may be able to mitigate the overhang by achieving AGI with a cluster of sub-AGI components.
Appendix - Training Data Volume
A calculation of training data processed by humans during development:
time: ~20 years, or 6e8 seconds
raw data input: ~10 Mb/s = 1e7 b/s
total for human training data: 6e15 bits
Llama3 training s...
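The appendix estimate follows from multiplying the post's rough assumptions (the numbers below are those assumptions, not measured values):

```python
SECONDS_PER_YEAR = 3.15e7  # approximate seconds in one year

# Post's assumptions: ~20 years of development, raw sensory input ~10 Mb/s = 1e7 bits/s
years = 20
input_bits_per_s = 1e7

total_bits = years * SECONDS_PER_YEAR * input_bits_per_s
print(total_bits)  # on the order of the post's ~6e15-bit estimate
```

The point of the comparison is that LLM training corpora, measured in bits, may not be far above what a human processes during development, which supports using current training/inference ratios as an anchor.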

Torrey Snow
April 25, 2024 The Rise of the Machines - AI Impacts Baltimore County

Torrey Snow

Play Episode Listen Later Apr 25, 2024 71:44


Baltimore County officials announce that they've taken into custody a suspect accused of creating an audio file depicting a school official making racist remarks. Torrey goes into the ethics of AI, as well as the official response to the situation. We also discuss the status of Maryland's US Senate primary.

Surveying 2,700+ AI Researchers on the Industry's Future with Katja Grace of AI Impacts

Play Episode Listen Later Mar 21, 2024 89:25


In this episode, Nathan sits down with Katja Grace, Cofounder and Lead Researcher at AI Impacts. They discuss the survey Katja and her team conducted of 2,700+ AI researchers, the methodology for the research, and the results' implications for policymakers, the public, and the industry as a whole. Try the Brave search API for free for up to 2000 queries per month at https://brave.com/api

LINKS:
- Thousands of AI Authors on the Future of AI: https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf
- AI Impacts Site: https://aiimpacts.org/about/
- Linus episode: https://www.youtube.com/watch?v=wdmvtVTZDqE&pp=ygUJbGludXMgbGVl

X/SOCIAL: @labenz (Nathan) @KatjaGrace (Katja) @AIImpacts

SPONSORS:
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds and offers one consistent price; nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off www.omneky.com
The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://brave.com/api
ODF is where top founders get their start. Apply to join the next cohort and go from idea to conviction, fast.
ODF has helped over 1000 companies like Traba, Levels and Finch get their start. Is it your turn? Go to http://beondeck.com/revolution to learn more. This show is produced by Turpentine: a network of podcasts, newsletters, and more, covering technology, business, and culture — all from the perspective of industry insiders and experts. We're launching new shows every week, and we're looking for industry-leading sponsors — if you think that might be you and your company, email us at erik@turpentine.co. Producer: Vivian Meng Editor: Graham Bessellieu

Drunk Real Estate
E35: Election Year Economy, AI Impacts & Trump Verdict

Drunk Real Estate

Play Episode Listen Later Feb 21, 2024 90:11


Learn more about the guys: J Scott: https://linktr.ee/jscottinvestor Mauricio Rauld: https://www.youtube.com/channel/UCnPedp0WHxpIUWLTVhNN2kQ AJ Osborne:  https://www.ajosborne.com/ Kyle Wilson:  https://www.bardowninvestments.com/

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20Product: Top Five Product Lessons from Creating Snapchat "Discover" and "Chat", How to Hire the Best Product Talent and Why Case Studies in Interviews are not Helpful & How AI Impacts the Future of Product Design with Will Wu, CTO @ Match Group

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Feb 2, 2024 54:36


Will Wu is the CTO @ Match Group, the owner and operator of the largest global portfolio of popular online dating services, including Tinder, Match.com, OkCupid, and Hinge, to name a few. Prior to Match, Will was VP of Product at Snap Inc. As the 35th employee, Will spearheaded the creation of Snapchat's "Discover" content platform. He also led the creation and growth of the "Chat" messaging feature, which today is a primary Snapchat engagement driver that connects hundreds of millions of people each day.

In Today's Episode with Will Wu We Discuss:

1. The Journey to Snap CPO:
How did Will make his way into the world of product and come to meet Evan Spiegel?
What are 1-2 of his biggest takeaways from his time at Snap?
What does Will know now that he wishes he had known when he started in product?

2. How to Hire Product Teams:
How does Will structure the interview process for new product hires?
What are the most telling questions of a candidate's product skills in hiring?
What case studies and tests does Will use to assess a candidate?
What are 1-2 of Will's biggest hiring mistakes in product?

3. How to Do Product Reviews Effectively:
What are Will's biggest lessons on what it takes to do product reviews well?
What are the biggest mistakes product leaders make in product reviews?
How can teams drive focus in product reviews? What works? What does not?

4. Product: Art or Science?
How does Will balance between gut/intuition and data in product decisions?
Is simple always better in product design?
What is human-centered design? How does it impact how Will approaches product?

The Doctor's Farmacy with Mark Hyman, M.D.
How Social Media And AI Impacts Our Mental Health: Reclaiming Our Minds And Hearts And Healing A Divided World with Tobias Rose-Stockwell

The Doctor's Farmacy with Mark Hyman, M.D.

Play Episode Listen Later Nov 1, 2023 77:52


This episode is brought to you by Rupa Health, BiOptimizers, Zero Acre, and Pendulum. The rise of social media has revolutionized the way we connect, share information, and interact with one another. While it has undoubtedly brought numerous benefits, there is growing concern about its impact on our mental health. Today on The Doctor's Farmacy, I'm excited to talk to Tobias Rose-Stockwell about how the internet has broken our brains, what we can do to fix it, and how to navigate this complex digital landscape. Tobias Rose-Stockwell is a writer, designer, and media researcher whose work has been featured in major outlets such as The Atlantic, WIRED, NPR, the BBC, CNN, and many others. His research has been cited in the adoption of key interventions to reduce toxicity and polarization within leading tech platforms. He previously led humanitarian projects in Southeast Asia focused on civil war reconstruction efforts, work for which he was honored with an award from the 14th Dalai Lama. He lives in New York with his cat Waffles.

This episode is brought to you by Rupa Health, BiOptimizers, Zero Acre, and Pendulum. Access more than 3,000 specialty lab tests with Rupa Health. You can check out a free, live demo with a Q&A or create an account at RupaHealth.com today. During the entire month of November, BiOptimizers is offering their biggest discount you can get AND amazing gifts with purchase. Just go to bioptimizers.com/hyman with code hyman10. Zero Acre Oil is an all-purpose cooking oil. Go to zeroacre.com/MARK or use code MARK to redeem an exclusive offer. Pendulum is offering my listeners 20% off their first month of an Akkermansia subscription with code HYMAN.
Head to Pendulumlife.com to check it out.

Here are more details from our interview (audio version / Apple Subscriber version):
The superpower that social media has provided to us (5:55 / 4:21)
How our traditional knowledge systems have been deconstructed (7:39 / 5:15)
The challenges of uncovering what is true (12:43 / 10:18)
How Tobias's time in Cambodia led him to this work (15:05 / 12:42)
The harms of social media (26:57 / 22:36)
Historical media disruptions (32:57 / 28:37)
The dangers of misinformation (35:27 / 31:06)
Challenges and opportunities around AI (42:09 / 37:58)
How governments and platforms can reduce the harms of social media (55:10 / 50:59)
Individual actions to improve the impact of social media (1:02:30 / 58:09)

Get a copy of Outrage Machine: How Tech Amplifies Discontent, Disrupts Democracy―And What We Can Do About It. Hosted on Acast. See acast.com/privacy for more information.