Brian David Johnson is Futurist in Residence at Arizona State University's Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society, and the Director of the ASU Threatcasting Lab. He is the author of The Future You: How to Create the Life You Always Wanted; Science Fiction Prototyping: Designing the Future with Science Fiction; 21st Century Robot: The Dr. Simon Egerton Stories; Humanity in the Machine: What Comes After Greed?; and Screen Future: The Future of Entertainment, Computing, and the Devices We Love.
https://csi.asu.edu/people/brian-david-johnson/
www.creativeprocess.info
www.oneplanetpodcast.org
IG: www.instagram.com/creativeprocesspodcast
"I think the most important thing that I would like young people to know is that they can build their future. That they have the power and they have the agency to shape their future and they have the ability and the power when working with others to have an even broader impact.The thing that scares me the most about the future is when people give up that agency and they let other people design their futures for them. For me, I think it's incredibly powerful to go to young people and say you can do it. But also you need to tell me what you want. And I think empowering them to have a vision for the future, that's why I spend so much time in schools and talking to young people because it's those visions that I think are incredibly important."Brian David Johnson is Futurist in Residence at Arizona State University's Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society, and the Director of the ASU Threatcasting Lab. He is Author of The Future You: How to Create the Life You Always Wanted, Science Fiction Prototyping: Designing the Future with Science Fiction, 21st Century Robot: The Dr. Simon Egerton Stories, Humanity in the Machine: What Comes After Greed?, Screen Future: The Future of Entertainment, Computing, and the Devices We Love.https://csi.asu.edu/people/brian-david-johnson/www.creativeprocess.infowww.oneplanetpodcast.orgIG www.instagram.com/creativeprocesspodcast
"I think, oftentimes, what'll happen as a trap when we talk about technology. People say, 'Well, what do you think is the future of artificial intelligence? Or what is the future of neural interfaces? Or what is the future of this?' And I always pause them and say, 'Wait a minute. If you're just talking about the technology, you're having the wrong conversation because it's not about the technology.'So when people talk about what's the future of AI? I say, I don't know. What do we want the future of AI to be? And I think that's a shift that sounds quite subtle to some people, but it's really important because if you look at any piece of news or anything like that, they talk about AI as if it was a thing that was fully formed, that sprang out of the Earth and is now walking around doing things. And what will AI do in the future and how will it affect our jobs? It's not AI that's doing it. These are people. These are companies. These are organizations that are doing it. And that's where we need to keep our focus. What are those organizations doing. And also what do we want from it as humans?"Brian David Johnson is Futurist in Residence at Arizona State University's Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society, and the Director of the ASU Threatcasting Lab. He is Author of The Future You: How to Create the Life You Always Wanted, Science Fiction Prototyping: Designing the Future with Science Fiction, 21st Century Robot: The Dr. Simon Egerton Stories, Humanity in the Machine: What Comes After Greed?, Screen Future: The Future of Entertainment, Computing, and the Devices We Love.https://csi.asu.edu/people/brian-david-johnson/www.creativeprocess.infowww.oneplanetpodcast.orgIG www.instagram.com/creativeprocesspodcast
"I think the most important thing that I would like young people to know is that they can build their future. That they have the power and they have the agency to shape their future and they have the ability and the power when working with others to have an even broader impact.The thing that scares me the most about the future is when people give up that agency and they let other people design their futures for them. For me, I think it's incredibly powerful to go to young people and say you can do it. But also you need to tell me what you want. And I think empowering them to have a vision for the future, that's why I spend so much time in schools and talking to young people because it's those visions that I think are incredibly important."Brian David Johnson is Futurist in Residence at Arizona State University's Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society, and the Director of the ASU Threatcasting Lab. He is Author of The Future You: How to Create the Life You Always Wanted, Science Fiction Prototyping: Designing the Future with Science Fiction, 21st Century Robot: The Dr. Simon Egerton Stories, Humanity in the Machine: What Comes After Greed?, Screen Future: The Future of Entertainment, Computing, and the Devices We Love.https://csi.asu.edu/people/brian-david-johnson/www.creativeprocess.infowww.oneplanetpodcast.orgIG www.instagram.com/creativeprocesspodcast
"I think, oftentimes, what'll happen as a trap when we talk about technology. People say, 'Well, what do you think is the future of artificial intelligence? Or what is the future of neural interfaces? Or what is the future of this?' And I always pause them and say, 'Wait a minute. If you're just talking about the technology, you're having the wrong conversation because it's not about the technology.'So when people talk about what's the future of AI? I say, I don't know. What do we want the future of AI to be? And I think that's a shift that sounds quite subtle to some people, but it's really important because if you look at any piece of news or anything like that, they talk about AI as if it was a thing that was fully formed, that sprang out of the Earth and is now walking around doing things. And what will AI do in the future and how will it affect our jobs? It's not AI that's doing it. These are people. These are companies. These are organizations that are doing it. And that's where we need to keep our focus. What are those organizations doing. And also what do we want from it as humans?"Brian David Johnson is Futurist in Residence at Arizona State University's Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society, and the Director of the ASU Threatcasting Lab. He is Author of The Future You: How to Create the Life You Always Wanted, Science Fiction Prototyping: Designing the Future with Science Fiction, 21st Century Robot: The Dr. Simon Egerton Stories, Humanity in the Machine: What Comes After Greed?, Screen Future: The Future of Entertainment, Computing, and the Devices We Love.https://csi.asu.edu/people/brian-david-johnson/www.creativeprocess.infowww.oneplanetpodcast.orgIG www.instagram.com/creativeprocesspodcast
"I think, oftentimes, what'll happen as a trap when we talk about technology. People say, 'Well, what do you think is the future of artificial intelligence? Or what is the future of neural interfaces? Or what is the future of this?' And I always pause them and say, 'Wait a minute. If you're just talking about the technology, you're having the wrong conversation because it's not about the technology.'So when people talk about what's the future of AI? I say, I don't know. What do we want the future of AI to be? And I think that's a shift that sounds quite subtle to some people, but it's really important because if you look at any piece of news or anything like that, they talk about AI as if it was a thing that was fully formed, that sprang out of the Earth and is now walking around doing things. And what will AI do in the future and how will it affect our jobs? It's not AI that's doing it. These are people. These are companies. These are organizations that are doing it. And that's where we need to keep our focus. What are those organizations doing. And also what do we want from it as humans?"Brian David Johnson is Futurist in Residence at Arizona State University's Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society, and the Director of the ASU Threatcasting Lab. He is Author of The Future You: How to Create the Life You Always Wanted, Science Fiction Prototyping: Designing the Future with Science Fiction, 21st Century Robot: The Dr. Simon Egerton Stories, Humanity in the Machine: What Comes After Greed?, Screen Future: The Future of Entertainment, Computing, and the Devices We Love.https://csi.asu.edu/people/brian-david-johnson/www.creativeprocess.infowww.oneplanetpodcast.orgIG www.instagram.com/creativeprocesspodcast
"Being worried about the future is just that, it's worrying. Think about how much time and energy you spend worrying about stuff that hasn't happened, and maybe even never will. But what if you instead put all of your energy towards the creation of a positive and lasting future?I think the most important thing that I would like young people to know is that they can build their future. That they have the power and they have the agency to shape their future and they have the ability and the power when working with others to have an even broader impact.The thing that scares me the most about the future is when people give up that agency and they let other people design their futures for them. For me, I think it's incredibly powerful to go to young people and say you can do it. But also you need to tell me what you want. And I think empowering them to have a vision for the future, that's why I spend so much time in schools and talking to young people because it's those visions that I think are incredibly important."Brian David Johnson is Futurist in Residence at Arizona State University's Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society, and the Director of the ASU Threatcasting Lab. He is Author of The Future You: How to Create the Life You Always Wanted, Science Fiction Prototyping: Designing the Future with Science Fiction, 21st Century Robot: The Dr. Simon Egerton Stories, Humanity in the Machine: What Comes After Greed?, Screen Future: The Future of Entertainment, Computing, and the Devices We Love.https://csi.asu.edu/people/brian-david-johnson/www.creativeprocess.infowww.oneplanetpodcast.orgIG www.instagram.com/creativeprocesspodcast
The Cognitive Crucible is a forum that presents different perspectives and emerging thought leadership related to the information environment. The opinions expressed by guests are their own, and do not necessarily reflect the views of or endorsement by the Information Professionals Association. During this episode, Sam Woolley of the University of Texas School of Journalism discusses journalism, propaganda, and ethics. Our conversation unpacks the definition of propaganda and how today's technology fuels propaganda and influence.

Research Question: Encrypted messaging apps (like WhatsApp, Signal, Discord, etc.) are becoming more popular, and incubation of disinformation campaigns happens in those spaces. How do disinformation and propaganda spread in encrypted spaces? How will we study propaganda in transport-layer encrypted spaces?

Resources:
- Cognitive Crucible Podcast Episodes Mentioned:
  - #112 Jake Sotiriadis on the Value Proposition of Future Studies
  - #107 Vanessa Otero on News Ecosystem Health
  - #14 BDJ on Threatcasting
  - #116 Matt Jackson on Social Learning and Game Theory
- Sam Woolley's Bio
- Manufacturing Consent: The Political Economy of the Mass Media by Edward S. Herman and Noam Chomsky
- Yellow Journalism
- Bots by Nick Monaco and Samuel Woolley
- Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity by Sam Woolley
- Center for Media Engagement at the University of Texas
- Link to full show notes and resources: https://information-professionals.org/episode/cognitive-crucible-episode-117

Guest Bio: Samuel C. Woolley is an assistant professor in the School of Journalism and an assistant professor, by courtesy, in the School of Information, both at the University of Texas at Austin. He is also the project director for propaganda research at the Center for Media Engagement (CME) at UT. Woolley is currently a research associate at the Project for Democracy and the Internet at Stanford University.
He has held past research affiliations at the Oxford Internet Institute, University of Oxford and the Center for Information Technology Research in the Interest of Society (CITRIS) at the University of California at Berkeley. Woolley's research is focused on how emergent technologies are used in and around global political communication. His work on computational propaganda—the use of social media in attempts to manipulate public opinion—has revealed the ways in which a wide variety of political groups in the United States and abroad have leveraged tools such as bots and trending algorithms and tactics of disinformation and trolling in efforts to control information flows online. His research on digital politics, automation/AI, social media, and political polarization is currently supported by grants from the Omidyar Network (ON), the Miami Foundation, and the Knight Foundation. His past research has been funded by the Ford Foundation, the Hewlett Foundation, the Open Society Foundations, the New Venture Fund for Communications, and others. His latest book, The Reality Game: How the Next Wave of Technology Will Break the Truth, was released in January 2020 by PublicAffairs (US) and Octopus/Endeavour (UK). It explores the ways in which emergent technologies—from deep fakes to virtual reality—are already being leveraged to manipulate public opinion, and how they are likely to be used in the future. He proposes strategic responses to these threats with the ultimate goal of empowering activists and pushing technology builders to design for democracy and human rights. He is currently working on two other books. Manufacturing Consensus (Yale University Press) explores the ways in which social media, and automated tools such as bots, have become global mechanisms for creating illusions of political support or popularity.
He discusses the power of these tools for amplification and suppression of particular modes of digital communication, building on Herman and Chomsky's (1988) integral work on propaganda. His other book, co-authored with Nicholas Monaco, is titled Bots (Polity) and is a primer on the ways these automated tools have become integral to the flow of all manner of information online. Woolley is the co-editor, with Philip N. Howard (Oxford), of Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media, released in 2018 by the Oxford Studies in Digital Politics series at Oxford University Press. This volume of country-specific case studies explores the rise of social media—and tools like algorithms and automation—as mechanisms for political manipulation around the world. He has published several peer-reviewed articles, book chapters, and white papers on emergent technology, the Internet, and public life in publications such as the Journal of Information Technology and Politics, the International Journal of Communication, A Networked Self: Platforms, Stories, Connections, The Political Economy of Robots: Prospects for Prosperity and Peace in an Automated 21st Century, The Handbook of Media, Conflict and Security, and Can Public Diplomacy Survive the Internet? Bots, Echo Chambers and Disinformation. Woolley is the founding director of the Digital Intelligence Lab, a research and policy oriented project at the Institute for the Future—a 50-year-old think-tank located in Palo Alto, CA. Before this he served as the director of research at the National Science Foundation- and European Research Council-supported Computational Propaganda Project at the Oxford Internet Institute, University of Oxford. He is a former resident fellow at the German Marshall Fund's Digital Innovation Democracy Initiative and a former Belfer Fellow at the Anti-Defamation League's Center for Technology and Society.
He is a former research fellow at Jigsaw, Google's think-tank and technology incubator; at the Tech Policy Lab at the University of Washington's Schools of Law and Information; and at the Center for Media, Data and Society at Central European University. His public work on computational propaganda and social media bots has appeared in venues including Wired, the Guardian, TechCrunch, Motherboard, Slate, and The Atlantic. For his research, Woolley has been featured in publications such as the New York Times, the Washington Post, and the Guardian and on PBS' Frontline, BBC's News at Ten, and ABC's Today. His work on computational propaganda and bots has been presented to members of the U.S. Congress, the U.K. Parliament, NATO, and others. His Ph.D. is in Communication from the University of Washington. His website is samwoolley.org and he tweets from @samuelwoolley. About: The Information Professionals Association (IPA) is a non-profit organization dedicated to exploring the role of information activities, such as influence and cognitive security, within the national security sector and helping to bridge the divide between operations and research. Its goal is to increase interdisciplinary collaboration between scholars and practitioners and policymakers with an interest in this domain. For more information, please contact us at communications@information-professionals.org. Or, connect directly with The Cognitive Crucible podcast host, John Bicknell, on LinkedIn. Disclosure: As an Amazon Associate, 1) IPA earns from qualifying purchases, 2) IPA gets commissions for purchases made through links in this post.
During this episode, Josh Kerbel of the National Intelligence University discusses the need for anticipatory intelligence. He contrasts the relatively simple historical national security environment with today's complex world. Josh explains why the traditional mindset of containment, which the West deployed to counter the Soviet agenda during the Cold War, is inappropriate today. Traditional analysis tools and linear problem solving are likewise inadequate for understanding complex, emergent dynamics.

Resources:
- IPA Members Only Social and Live Podcast Recording
- Phoenix Challenge Conference (last week of April 2022)
- Cognitive Crucible Podcast Episodes Mentioned:
  - #47 Yaneer Bar-Yam on Complex Systems and the War on Ideals
  - #14 BDJ on Threatcasting
  - #32 Greg Treverton on Intelligence Global Trends and Technopolitics
- National Security Language Is Stuck in the Cold War by Josh Kerbel
- Our 'cold war' frame distorts more than just our view of China by Josh Kerbel
- National Intelligence University
- Dark sky tourism is on the rise across the U.S.
- Link to full show notes and resources: https://information-professionals.org/episode/cognitive-crucible-episode-85

Guest Bio: Josh Kerbel is a member of the research faculty at the National Intelligence University, where he explores the increasingly complex security environment and the associated intelligence challenges. Prior to joining NIU, he held senior analytical positions at DIA, ODNI (including the NIC), the Navy staff, CIA, and ONI.
His writings on the intersections of government (especially intelligence) and complexity have been published in Foreign Policy, The Washington Post, Studies in Intelligence, Slate, The National Interest, The Hill, War on the Rocks, Defense One, Parameters, and other outlets. Mr. Kerbel has degrees from the George Washington University and the London School of Economics, as well as professional certifications from the Naval War College and the Naval Postgraduate School. More recently he was a post-graduate fellow at the Massachusetts Institute of Technology. The views expressed here are his alone.
During this episode, we have a wide-ranging conversation with futurist Brian David Johnson (or BDJ). Threatcasting is an innovative, interdisciplinary technique used by a wide range of organizations and institutions to create actionable models to comprehend possible futures and to identify, track, disrupt, mitigate, and recover from threats. Threatcasting bridges gaps and prompts information exchange and learning across military, academic, industrial, and governmental communities. Brian David Johnson is a Professor of Practice & Director of the Threatcasting Lab at Arizona State University's School for the Future of Innovation in Society, and a Futurist and Fellow at Frost & Sullivan, an innovation company focused on growth. Brian works with governments, militaries, trade organizations, and startups to help them envision their future. He has over 40 patents and is the author of a number of books of fiction and nonfiction. He's also directed two feature films, and is an illustrator and commissioned painter.
Late last year, we designed Threatcast 2020: a brainstorming game for groups of people trying to predict the new, innovative, and worrying forms of misinformation and disinformation that might come into play in the upcoming election. We ran a few in-person sessions before the pandemic hit and ended our plans for more, then last month we moved it online with the help of the fun interactive event platform Remo. We've learned a lot and hit on some disturbingly real-feeling predictions throughout these events, so this week we're joined by our partner in designing the game — Randy Lubin of Leveraged Play — to discuss our experiences "threatcasting" the 2020 election, and our plans to keep doing it. We really want to run more of these online events for new groups, so if that's something you or your organization might be interested in, please get in touch!
We join producer and strategic insight expert Cyndi Coon for a peek into the world of Threatcasting, envisioning possible threats ten years into the future. She speaks about her efforts with the Threatcasting Lab at Arizona State University, a premier resource for strategic insight, teaching materials, and subject matter expertise on envisioning possible threats ten years in the future. The Lab strives to provide a wide range of organizations and institutions with actionable models not only to comprehend these possible futures but also to identify, track, disrupt, mitigate, and recover from them.
Why should a business utilize science fiction? What do you think your business plan is? That's the message of Brian David Johnson, a leading expert on science fiction prototyping and threatcasting. Threatcasting is a sub-genre of forecasting that details future threats, how the organization can track threat development, and when to respond. Brian David Johnson joins Continuous Foresight to walk us through why threatcasting is effective and how you can use it in your forecasting work. Gartner is an impartial, independent analyst of business and technology. This content should not be construed as a Gartner endorsement of any enterprise's products or services. All content provided by other speakers is expressly the views of those speakers and their organizations.

Meet Brian David Johnson
Brian David Johnson is a Professor of Practice at Arizona State University's School for the Future of Innovation in Society, and a Futurist and Fellow at Frost & Sullivan. He also works with governments, militaries, trade organizations, and startups to help them envision their future. He has over 30 patents and is the author of a number of books of fiction and nonfiction, including Science Fiction Prototyping; Screen Future: The Future of Entertainment, Computing and the Devices We Love; Humanity in the Machine: What Comes After Greed?; and Vintage Tomorrows: A Historian and a Futurist Journey through Steampunk into the Future of Technology. His writing has appeared in publications ranging from The Wall Street Journal and Slate to IEEE Computer and Successful Farming, and he appears regularly on Bloomberg TV, PBS, Fox News, and the Discovery Channel. He has directed two feature films and is an illustrator and commissioned painter.
Threatcasting has indicated that IoT and AI are going to be some of the biggest security concerns in the future. But what can we do now to protect these home devices? And why does it matter if an Internet-connected appliance, like a smart coffee maker, is compromised?
Introduction: (0:00-1:26)
Present Futures #1: CRISPR-Cas9 and AI: (1:27-24:30)
Present Futures #2: The Future of Energy and The Future of Society: (24:31-40:05)
Paleo Futures: Black Mirror: (40:13-51:20)
FizBiz: Doubling Down on Cryptocurrency: (41:37-1:08:09)
Futurist Interview: Brian David Johnson: Threatcasting and Working in Corporate Futures: (1:08:19-1:48:40)
I love having guests on the show who look at something that everyone takes for granted and ask, why do we do it that way? That's what today's guest, Jason Gross, has done with credit cards. Jason is the CEO of Petal. He and his co-founders have designed a card that has a different business model and aims at a different market. Credit cards have always sparked mixed feelings among consumer advocates and policymakers. On the upside, they provide incredible convenience and safety over cash. On the downside, critics worry that that convenience is a double-edged sword, making spending too easy and fueling over-consumption and under-saving. People also worry that cards are hard to understand. For decades, policymakers have tried repeatedly to solve that by regulating card disclosures and practices: requiring disclosure of the annual percentage rate and fees; making key information prominent in the so-called Schumer Box (named after New York Senator Charles Schumer); requiring a grace period; barring retroactive raising of interest rates; limiting marketing to college students; disclosing the long-term costs of paying only the minimum balance; and much more. Concern about credit cards prompted Senator Elizabeth Warren, back when she was a Harvard Law professor, to begin fighting what she called "tricks and traps" in financial products, and also business models that rely on penalty fees or interest for their profitability. That concern led to the CARD Act of 2009, and then to the creation of the CFPB itself. The Petal card is trying to solve these challenges. It's offering simpler, more transparent terms. It addresses the overspending problem by designing the card to encourage customers to pay the full balance each month, rather than revolve. And it's tackling a third problem: reaching the tens of millions of people who don't have a credit card.
For most people, these cards are the first rung on the ladder to building a credit record so that they can later get a car loan or mortgage. Millennials have tended to avoid them, partly due to coming of age in the financial crisis and becoming leery of incurring debt. Jason notes that some young people are "sponsored" into the system by parents who provide them with cards, but for those who aren't, it's hard to build credit. That's a catch-22: you can't get a card because you don't have a credit record, and you can't build a credit record because you don't have a card. Petal is solving that with one of the most important kinds of innovations underway in finance, namely, the use of alternative data to evaluate risk. They are looking at the person's own payment and income history, as reflected in their bank account, to determine ability to pay. Accessing that information has become controversial, as banks worry about allowing a third party to see this data even with the consumer's permission, in case something goes wrong. However, the ability of consumers to allow such access is the lifeblood of most of the innovation underway in consumer finance. The CFPB is evaluating the issues arising around this, including questions like who really "owns" consumers' bank account data, whether third parties' data uses should be regulated, and whether we need to clarify where liability should fall in the event data is misused or breached. A lot of people are working on this issue. Solving it is one of the most important steps we can take toward making finance more inclusive. Listeners have often heard me say that I've spent most of my career working with efforts to promote consumer financial health and inclusion by regulating the financial industry, and that I think the results have been mixed at best! A few years ago, I realized that technology could do most of what we've been trying to do through policy (if we get the policy right).
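To make the cash-flow underwriting idea concrete, here is a toy sketch of how a lender might estimate ability to pay from bank transaction history. This is purely illustrative, not Petal's actual model; every function name, number, and threshold below is a hypothetical assumption.

```python
# Toy "ability to pay" estimate from bank transaction history,
# in the spirit of alternative-data (cash-flow) underwriting.
# Illustrative only: names and numbers are hypothetical.

def monthly_cash_flow(transactions):
    """Net signed transaction amounts by (year, month)."""
    totals = {}
    for month, amount in transactions:  # month as (year, month); amount signed
        totals[month] = totals.get(month, 0.0) + amount
    return totals

def ability_to_pay(transactions):
    """A conservative monthly capacity: the worst observed month's
    net cash flow, floored at zero."""
    flows = monthly_cash_flow(transactions)
    if not flows:
        return 0.0
    return max(0.0, min(flows.values()))

history = [
    ((2024, 1), 3000.0), ((2024, 1), -2400.0),  # income and spending, Jan
    ((2024, 2), 3000.0), ((2024, 2), -2700.0),  # Feb
    ((2024, 3), 3100.0), ((2024, 3), -2500.0),  # Mar
]
print(ability_to_pay(history))  # worst month (Feb): 300.0
```

A real system would layer on far more, such as recurring-income detection, spending categorization, and trend analysis, but the core idea is the same: score the person's actual cash flow rather than a credit-file proxy.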
Petal is trying to do that: use new data and technology to offer a product that they think will be highly profitable, despite leaving some kinds of revenue on the table, because consumers will choose it. Watching them will be fascinating.

More information
Articles on Petal's September launch:
https://www.paymentssource.com/organization/simple
https://bankinnovation.net/2017/09/no-credit-score-say-hello-to-petal-card/
http://paybefore.com/finance-and-strategy/petal-uses-machine-learning-underwrite-credit-without-credit-score/
http://www.thisisgoingtobebig.com/blog/2017/9/8/introducing-petal-providing-access-to-credit-to-thin-file-consumers
https://www.nytimes.com/2017/09/08/your-money/new-credit-card-option-for-those-with-scant-credit-histories.html?mcubz=1&_r=0
Jason's article on Medium: https://medium.com/@jasonbgross_/petal-ba2bb74718f4
My podcast with Digit CEO Ethan Bloch (another example of innovators leveraging bank account data): http://www.jsbarefoot.com/podcasts/2016/2/25/effortless-saving-digit-ceo-ethan-bloch

More for our listeners
Please remember to review Barefoot Innovation on iTunes, and sign up to get emails that bring you the newest podcast, newsletter, and blog posts, at jsbarefoot.com. Be sure to follow me on Twitter and Facebook. And please send in your "buck a show" to keep Barefoot Innovation going.
Support our Podcast

I hope to see you at the events where I'll be speaking this fall:
- Finovate, September 13, New York City
- RegTech: Compliance Transformed, October 3-4, Brooklyn, NY
- BAI Beacon/Fintech Stage, October 4-5, Atlanta, GA
- CFSI Network Summit, Fireside Chat with Thomas Curry, October 5, Chicago, IL
- FISCA, October 5-8, Las Vegas, NV
- Money 20/20, October 25, Las Vegas, NV
- Monetary Authority of Singapore Fintech Festival, November 13-17, Singapore
- RegTech Enable, November 27-29, Washington, DC
- UN/ITU conference on financial inclusion in Bangalore (invitation only)
- Fintech Connect Live, December 6, London
- S&P's Fintech Intel, December 13, New York

I'll also be speaking in December to the Dutch Central Bank on financial innovation. I do many presentations for regulators and welcome those invitations. Regulators, in my view, have the hardest and most important role to play in financial innovation. We have wonderful shows coming up. I'll be talking with Andres Wolberg-Stok of Citi FinTech. And I'm going to do one on one of the most fascinating experiences I've had in years: I participated this month in the U.S. Army's Threatcasting exercise, sort of a war-gaming process where we "threatcast" technology risks ten years into the future, and then "backcast," thinking about what we could do, today, to prevent the problem. It was off the charts fascinating. We also have a show coming up with Miles Reidy, Partner at the venture firm QED. Miles and I had a fun and fascinating talk about two topics: the investment outlook for regtech, and how to find and work with a venture capital firm. Speaking of regtech, we're going to have a show with Merlon Intelligence, an AML regtech firm, and also a special show with my own co-founders at Hummingbird RegTech. I'm proud to say that Hummingbird has been selected to present at Money 20/20 in the startup pitch session. Be sure to come and watch! Meanwhile, keep innovating!
In today's podcast, we hear about observers who look around and think they may be seeing Cold War Two in cyberspace. (But this is no bipolar conflict.) Investigation into Vault 7 continues as people wonder where WikiLeaks gets its leaks. The quiz app Wishbone has been breached—take it as a teachable moment with the children. Fileless malware gets quieter as researchers get close to the cyber gang. A cloud-based keylogger is getting ready to take black market share. Palo Alto Networks' Rick Howard describes a capture-the-flag collaboration. Futurist Brian David Johnson explains Threatcasting. The proposed Active Cyber Defense Certainty Act. And what we're seeing at a policy competition.