Podcasts about FISMA

  • 60 PODCASTS
  • 101 EPISODES
  • 29m AVG DURATION
  • ? INFREQUENT EPISODES
  • Oct 28, 2024 LATEST

POPULARITY

[Popularity chart, 2017-2024]


Best podcasts about FISMA

Latest podcast episodes about FISMA

The Digital Supply Chain podcast
Data-Driven Cold Chains: Enhancing Efficiency, Reducing Waste, and Meeting Sustainability Goals

The Digital Supply Chain podcast

Play Episode Listen Later Oct 28, 2024 43:43 Transcription Available


Send me a message

In this episode of the *Sustainable Supply Chain* podcast, I sit down with Karl McDermott, Chief SaaS Officer at DeltaTrak, to dive deep into the complex world of cold chain logistics and explore how data is transforming this critical industry. With 35 years under its belt, DeltaTrak has moved from paper-based temperature monitoring to real-time data, unlocking insights that are reshaping how fresh and frozen goods travel across the globe. Karl unpacks the evolution of temperature-tracking tech, from USB loggers to advanced real-time monitoring. These innovations enable better control over temperature-sensitive products, reduce waste, and improve food quality. We also discuss the impact of climate change on the cold chain and how rising global temperatures place added pressure on logistics.

An intriguing point Karl brings up is the industry's shift from traditional temperature standards (like -18°C) to more energy-efficient options, as well as the integration of real-time data to predict shelf life, calculate carbon emissions, and enable new insurance and financing models. Plus, with regulations like FSMA 204 in the US and sustainability demands in the EU, there's no doubt compliance is driving change, but DeltaTrak is turning that compliance into competitive advantage.

If you're interested in the intersection of technology, sustainability, and supply chain management, this episode is packed with insights. Join us as we unpack the future of cold chain with a seasoned industry leader.

Elevate your brand with the 'Sustainable Supply Chain' podcast, the voice of supply chain sustainability. Last year, this podcast's episodes were downloaded over 113,000 times by senior supply chain executives around the world. Become a sponsor. Lead the conversation. Contact me for sponsorship opportunities and turn downloads into dialogues. Act today. Influence the future.

Rumi.ai: All-in-one meeting tool with real-time transcription & searchable Meeting Memory™

Support the show

Podcast supporters: I'd like to sincerely thank this podcast's generous supporters: Lorcan Sheehan, Olivier Brusle, Alicia Farag, and Kieran Ognev. And remember, you too can support the podcast; it is really easy and hugely important, as it will enable me to continue to create more excellent episodes like this one.

Podcast sponsorship opportunities: If you/your organisation is interested in sponsoring this podcast, I have several options available. Let's talk!

Finally: If you have any comments/suggestions or questions for the podcast, feel free to send me a direct message on LinkedIn, or send me a text message using this link. If you liked this show, please don't forget to rate and/or review it. It makes a big difference to help new people discover it. Thanks for listening.
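The episode's point about using real-time temperature data to predict remaining shelf life can be sketched with a simple Q10 spoilage model: spoilage roughly doubles for every 10 °C above a reference temperature. This is a minimal illustration, not DeltaTrak's actual methodology; the function name, the 4 °C reference, and the Q10 factor of 2 are all assumptions for the example.

```python
def remaining_shelf_life(readings, shelf_life_days_at_ref, t_ref=4.0, q10=2.0):
    """Estimate remaining shelf life (days) from data-logger intervals.

    readings: list of (duration_hours, temp_C) tuples.
    q10: factor by which the spoilage rate multiplies per +10 C (assumed).
    """
    budget_hours = shelf_life_days_at_ref * 24.0
    consumed = 0.0
    for hours, temp_c in readings:
        rate = q10 ** ((temp_c - t_ref) / 10.0)  # relative spoilage rate
        consumed += hours * rate  # hours of reference-equivalent life used
    return max(budget_hours - consumed, 0.0) / 24.0

# A 10-day product held at 4 C for 24 h, then a 6 h excursion to 14 C
# (where the assumed spoilage rate doubles):
left = remaining_shelf_life([(24, 4.0), (6, 14.0)], shelf_life_days_at_ref=10)
# left == 8.5 days: the excursion consumed 12 reference-hours in 6 real hours
```

The same accumulated-exposure idea is what lets real-time loggers trigger re-routing or dynamic pricing before a shipment is actually spoiled.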

Federal Publications Seminars Podcasts
FPS Podcast #62 CMMC Rule is Final! What does that mean now?

Federal Publications Seminars Podcasts

Play Episode Listen Later Oct 21, 2024 34:31


In October 2024, the final CMMC rule was published in the CFR and will take effect 60 days from the publication date. After the many comments received, this final rule has contractors asking a lot of questions. We already knew the three levels of CMMC and the NIST, FISMA, and FedRAMP compliance standards, but the question is how CMMC will roll out and what the cost of compliance will be. Listen to what the rule says and what the challenges can be for you and your business as a prime, as a sub, or simply as a supplier within the supply chain of the Defense Industrial Base (DIB).

Next Level Supply Chain with GS1 US
How EPCIS is Revolutionizing Supply Chains with Matt Andrews

Next Level Supply Chain with GS1 US

Play Episode Listen Later Oct 9, 2024 23:56


As supply chains become increasingly complex and stringent regulations like DSCSA and FSMA become more prevalent, understanding how to leverage EPCIS (Electronic Product Code Information Services) for granular visibility and efficient data management is more crucial than ever. In this episode, hosts Reid Jackson and Liz Sertl are joined by Matt Andrews, Global Standards Director at GS1 US. Matt unpacks the fundamentals and applications of EPCIS, from its role in modeling supply chain processes to its transformative impact across industries like healthcare, food, retail, and logistics. EPCIS can help your organization achieve unparalleled supply chain visibility, improve compliance, and drive competitive advantage.

In this episode, you'll learn:

The intricacies of EPCIS (Electronic Product Code Information Services) and its universal application across industries for enhanced supply chain visibility, compliance, and efficiency.

How EPCIS can revolutionize inventory management with real-time data accuracy, from monitoring cycle counts to tracking product movement from back of house to point of sale.

How industries such as healthcare and food service leverage EPCIS to comply with regulations like DSCSA and FSMA 204, ensuring traceability down to the unique item level.

Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(06:25) Benefits that organizations are seeing by leveraging EPCIS
(08:00) Full granular visibility, item-level tracking, inventory management
(13:54) How EPCIS can log events from manufacturing to sales
(17:03) Enhanced supply chain visibility through real-time EPCIS data
(18:28) Accessing claims compliance through advanced visibility

Connect with GS1 US:
Our website - www.gs1us.org
GS1 US on LinkedIn

Connect with the guests:
Matt Andrews on LinkedIn
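The EPCIS events discussed here have a well-defined shape: each event records what (item identifiers), when (event time), where (read point), and why (business step) for a supply chain step. Below is a simplified sketch of an EPCIS 2.0-style ObjectEvent built in Python; the field set is heavily reduced from the full standard, and the SGTIN/SGLN values are the example identifiers GS1 uses in its own documentation, not real products or locations.

```python
import json
from datetime import datetime, timezone

def object_event(epcs, biz_step, read_point):
    """Build a simplified EPCIS 2.0-style ObjectEvent covering the four
    traceability dimensions: what, when, where, and why."""
    return {
        "type": "ObjectEvent",
        "action": "OBSERVE",
        "eventTime": datetime.now(timezone.utc).isoformat(),  # when
        "epcList": epcs,                  # what: item-level identifiers
        "bizStep": biz_step,              # why: business process step
        "readPoint": {"id": read_point},  # where: capture location
    }

event = object_event(
    ["urn:epc:id:sgtin:0614141.107346.2018"],   # GS1 example SGTIN
    "urn:epcglobal:cbv:bizstep:shipping",
    "urn:epc:id:sgln:0614141.00777.0",          # GS1 example SGLN
)
print(json.dumps(event, indent=2))
```

Capturing a stream of such events at each handoff is what gives EPCIS its item-level, end-to-end visibility.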

Omni Talk
Ask An Expert | Mastering Item Data Accuracy: SPS Commerce Shares Insights for Retailers & Brands

Omni Talk

Play Episode Listen Later Mar 27, 2024 33:24


SPS Commerce's Nick Schwalbach and Brandon Pierre dive deep into the critical importance of item data accuracy for retailers and brands in today's fast-paced, omnichannel landscape. Discover how gaps in item information can lead to supply chain inefficiencies, missed sales opportunities, and poor customer experiences. Schwalbach and Pierre discuss the challenges retailers face when relying on manual processes and disparate systems like Excel and EDI for managing item data. They emphasize the need for effective vendor collaboration and the adoption of standardized data pools such as GDSN and GS1 to streamline data exchange and ensure consistency across channels. Learn how retailers can tackle specific business problems, such as optimizing freight and reducing dimensional weight charges, by focusing on critical item attributes. The duo also shares best practices for seamless new item setup and the importance of aligning e-commerce and in-store experiences through accurate and complete item data. As consumer demands evolve and government regulations like ESG reporting, FSMA, and traceability requirements come into play, having a solid foundation of item data becomes increasingly crucial. Schwalbach and Pierre offer actionable advice for retailers and brands looking to embark on their item data accuracy journey and position themselves for success in the ever-changing retail landscape.

#SPSCommerce #NickSchwalbach #BrandonPierre #ItemDataAccuracy #OmniTalkRetail #Retailers #Brands #ItemInformation #PIMs #EDI #SupplyChain #Excel #ManualProcesses #VendorCollaboration #DataPools #GDSN #ExtendedAttributes #GS1 #SupplierLanguage #RetailLanguage #DataGaps #BusinessProblems #DimensionalAttributes #FreightOptimization #NewItemSetup #Ecommerce #Omnichannel #InStoreDigitalSignage #DataJourney #GovernmentRegulations #ESG #FSMA #Traceability #ConsumerDemands #Sustainability #LinkedIn

ConvoCourses
Convocourses Podcast: Discount FISMA Book

ConvoCourses

Play Episode Listen Later Mar 21, 2024 108:07


For a limited time only, the FISMA Compliance book is being offered at a discount: https://a.co/d/06493yI

http://convocourses.net

FIA Speaks
European Commission's Ugo Bassi on EU capital markets, CCPs and regulation

FIA Speaks

Play Episode Listen Later Dec 11, 2023 35:17


Ugo Bassi, Director of Financial Markets at FISMA, European Commission, sat down with FIA President and CEO Walt Lukken at our Asia Derivatives Conference in November to discuss the progress of the EU's Capital Markets Union, including digital assets regulation, third-country CCP recognition and equivalence, and other key work that could have an impact on the Asia-Pacific region.

Federal Newscast
Month after defeat, Virginia lawmakers still fighting to land FBI HQ

Federal Newscast

Play Episode Listen Later Dec 5, 2023 6:36


(12/5/23) - On today's Federal Newscast: CENTCOM's got a new chief data officer. A month after the decision was announced, Virginia lawmakers are still fighting to be the site of the new FBI headquarters. And the Internet of Things looms large in OMB's 2024 FISMA guidance. Learn more about your ad choices. Visit podcastchoices.com/adchoices. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Cyber Queens Podcast
Episode 17 GRC and Privacy Roles with Christa Weik

The Cyber Queens Podcast

Play Episode Listen Later Oct 17, 2023 46:25 Transcription Available


**DISCLAIMER: All of our opinions are our own. They do not represent, nor are they affiliated with, the interests and beliefs of the companies we work for.**

In this episode, The Cyber Queens are joined by Christa Weik, a Certified Information Privacy Professional (CIPP) and GRC and Cybersecurity Policy and Program Manager. Christa's educational background is not in cyber, so she attended a bootcamp before entering the field. She can build, scale, and maintain the GRC and privacy considerations of any cybersecurity strategy for any sized enterprise and security organization. She also has knowledge of the following regulatory laws, audits, and frameworks: ISO, PCI-DSS, FISMA, HIPAA, GDPR, SOX, SOC 2.

Key Topics:
Christa Weik's story and how she became The Cyber Queens' first mentee
What is GRC?
GRC & privacy specialist overview
What does a career in GRC & privacy look like?
How can someone get into GRC & privacy?
Road to The Cyber Queens mentorship
What do you hope to get out of this?
What does the future look like?

Sources:
What is GRC? https://tinyurl.com/2p8vjktt
What is a GRC Analyst? https://tinyurl.com/meh9xu5p
What is a Data Privacy Analyst? https://tinyurl.com/ymajpu8x
What is GDPR? https://tinyurl.com/yme88xek
What is a CISO? https://tinyurl.com/2pewzrmw
What is Data Privacy? https://tinyurl.com/34rh4mvv
Forcepoint: https://www.forcepoint.com/
Google: https://www.google.com/
Trello: https://trello.com/
What is Jira? https://www.atlassian.com/software/jira
ServiceNow: https://www.servicenow.com/
SentinelOne: https://www.sentinelone.com/
SentinelOne SKO: https://tinyurl.com/bd9cdn5f
WICyS: https://www.wicys.org/
What is Security Engineering? https://en.wikipedia.org/wiki/Security_engineering
Audience 1st Podcast: https://www.audience1st.fm/podcast/episodes
WTF Did I Just Read Podcast: https://wtfdidijustread.com/
CISO Distillery: https://tinyurl.com/5xm8bf9d

Get in Touch:
Maril Vernon - @SheWhoHacks
Erika Eakins - @ErikaEakins
Amber DeVilbiss - @EngineerAmber
Queens Twitter - @TheCyberQueens
Queens LinkedIn

Calls to Action:
Subscribe to our newsletter for exclusive insight and new episodes! If you love us, share us!

InfosecTrain
What is a Zero-Trust Cybersecurity Model?

InfosecTrain

Play Episode Listen Later Aug 9, 2023 4:53


The growth of the modern workforce and the migration to remote work have resulted in a continuous rise in cybercrime, data breaches, data theft, and ransomware attacks. As a result, many experts today believe that a zero-trust cybersecurity model is the best strategy for preventing such threats. Implementing a zero-trust cybersecurity model gives enterprises visibility into their data, applications, and the activity around them, making it simple to notice suspicious activities. In contrast to typical network security approaches that concentrate on keeping hackers and cybersecurity risk outside the network, zero trust adheres to stringent identity verification standards for every person and device that tries to access an enterprise's resources on a network.

What is a Zero-Trust Cybersecurity Model?

A zero-trust cybersecurity model is a comprehensive approach to business network security that includes various techniques and principles to safeguard businesses from cutting-edge attacks and data breaches. This approach ensures that any user or device, within or outside an organization's network, must be authorized, authenticated, and continually validated before being granted access to its applications and data. Furthermore, this approach integrates analytics, filtering, and logging to confirm behavior and continuously look for signs of compromise. It also aids in compliance with important data privacy and security legislation, such as GDPR, HIPAA, FISMA, and CCPA.

View More: What is a Zero-Trust Cybersecurity Model?
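The per-request checks described above can be sketched as a tiny policy decision function: deny by default, and allow only when identity, device posture, and least-privilege scope all pass. This is an illustrative sketch of the zero-trust idea, not any vendor's API; all names, fields, and rules here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool  # identity verified (e.g., MFA passed)
    device_compliant: bool    # device posture check passed
    resource: str             # resource being requested
    user_clearance: set       # scopes granted to this user

def authorize(req: Request, required_scope: str) -> bool:
    """Deny by default; allow only when every check passes.
    Network location is deliberately absent: being 'inside' grants nothing."""
    if not req.user_authenticated:
        return False
    if not req.device_compliant:
        return False
    return required_scope in req.user_clearance  # least privilege

# Same user, same credentials; only the device posture differs:
ok = authorize(Request(True, True, "payroll-db", {"payroll:read"}), "payroll:read")
denied = authorize(Request(True, False, "payroll-db", {"payroll:read"}), "payroll:read")
```

A real deployment would evaluate this continuously per request (and log every decision), rather than once at login, which is the "continually validated" part of the model.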

ServiceNow Podcasts
ServiceNow Federal Forum 2023: Congressional Spotlight- Working for the Collective Cyber Defense

ServiceNow Podcasts

Play Episode Listen Later Jul 14, 2023 18:27


Cybersecurity is everyone's responsibility, which means that Capitol Hill is taking a hard look at the role legislation can play in our collective defense. This bipartisan congressional panel will discuss what congressional action is needed to ensure a resilient, secure Federal government – from FedRAMP to FISMA, to other critical cyber considerations. It will also review cyber measures passed in the recent National Defense Authorization Act, such as the reauthorization of the National Computer Forensics Institute.

Featured Speakers:
Nichole Francis Reynolds, Vice President and Global Head of Government Relations, ServiceNow
Rep. Mark Green, R-TN, 7th District, House of Representatives

See video recording here: link to YouTube

See omnystudio.com/listener for privacy information.

Federal Fridays with ServiceNow (Government)
ServiceNow Federal Forum 2023: Congressional Spotlight- Working for the Collective Cyber Defense

Federal Fridays with ServiceNow (Government)

Play Episode Listen Later Jul 14, 2023 18:27


Cybersecurity is everyone's responsibility, which means that Capitol Hill is taking a hard look at the role legislation can play in our collective defense. This bipartisan congressional panel will discuss what congressional action is needed to ensure a resilient, secure Federal government – from FedRAMP to FISMA, to other critical cyber considerations. It will also review cyber measures passed in the recent National Defense Authorization Act, such as the reauthorization of the National Computer Forensics Institute.

Featured Speakers:
Nichole Francis Reynolds, Vice President and Global Head of Government Relations, ServiceNow
Rep. Mark Green, R-TN, 7th District, House of Representatives

See video recording here: link to YouTube

See omnystudio.com/listener for privacy information.

CFR On the Record
Higher Education Webinar: Implications of Artificial Intelligence in Higher Education

CFR On the Record

Play Episode Listen Later Jun 27, 2023


Pablo Molina, associate vice president of information technology and chief information security officer at Drexel University and adjunct professor at Georgetown University, leads the conversation on the implications of artificial intelligence in higher education.   FASKIANOS: Welcome to CFR's Higher Education Webinar. I'm Irina Faskianos, vice president of the National Program and Outreach here at CFR. Thank you for joining us. Today's discussion is on the record, and the video and transcript will be available on our website, CFR.org/Academic, if you would like to share it with your colleagues. As always, CFR takes no institutional positions on matters of policy. We are delighted to have Pablo Molina with us to discuss implications of artificial intelligence in higher education. Dr. Molina is chief information security officer and associate vice president at Drexel University. He is also an adjunct professor at Georgetown University. Dr. Molina is the founder and executive director of the International Applied Ethics in Technology Association, which aims to raise awareness on ethical issues in technology. He regularly comments on stories about privacy, the ethics of tech companies, and laws related to technology and information management. And he's received numerous awards relating to technology and serves on the board of the Electronic Privacy Information Center and the Center for AI and Digital Policy. So Dr. P, welcome. Thank you very much for being with us today. Obviously, AI is on the top of everyone's mind, with ChatGPT coming out and being in the news, and so many other stories about what AI is going to—how it's going to change the world. So I thought you could focus in specifically on how artificial intelligence will change and is influencing higher education, and what you're seeing, the trends in your community. MOLINA: Irina, thank you very much for the opportunity, to the Council on Foreign Relations, to be here and express my views. 
Thank you, everybody, for taking time out of your busy schedules to listen to this. And hopefully, I'll have the opportunity to learn much from your questions and answer some of them to the best of my ability. Well, since I'm a professor too, I like to start by giving you homework. And the homework is this: I do not know how much people know about artificial intelligence. In my opinion, anybody who has ever used ChatGPT considers herself or himself an expert. To some extent, you are, because you have used one of the first publicly available artificial intelligence tools out there and you know more than those who haven't. So if you have used ChatGPT, or Google Bard, or other services, you already have a leg up to understand at least one aspect of artificial intelligence, known as generative artificial intelligence. Now, if you want to learn more about this, there's a big textbook about this big. I'm not endorsing it. All I'm saying, for those people who are very curious, there are two great academics, Russell and Norvig. They're in their fourth edition of a wonderful book that covers every aspect of—technical aspect of artificial intelligence, called Artificial Intelligence: A Modern Approach. And if you're really interested in how artificial intelligence can impact higher education, I recommend a report by the U.S. Department of Education that was released earlier this year in Washington, DC from the Office of Education Technology. It's called Artificial Intelligence and Future of Teaching and Learning: Insights and Recommendations. So if you do all these things and you read all these things, you will hopefully transition from being whatever expert you were before—to a pandemic and Ukrainian war expert—to an artificial intelligence expert. So how do I think that all these wonderful things are going to affect artificial intelligence? 
Well, as human beings, we tend to overestimate the impact of technology in the short run and really underestimate the impact of technology in the long run. And I believe this is also the case with artificial intelligence. We're in a moment where there's a lot of hype about artificial intelligence. It will solve every problem under the sky. But it will also create the most catastrophic future and dystopia that we can imagine. And possibly neither one of these two are true, particularly if we regulate and use these technologies and develop them following some standard guidelines that we have followed in the past, for better or worse. So how is artificial intelligence affecting higher education? Well, number one, there is a great lack of regulation and legislation. So if you know, for example around this, OpenAI released ChatGPT. People started trying it. And all of a sudden there were people like here, where I'm speaking to you from, in Italy. I'm in Rome on vacation right now. And Italian data protection agency said: Listen, we're concerned about the privacy of this tool for citizens of Italy. So the company agreed to establish some rules, some guidelines and guardrails on the tool. And then it reopened to the Italian public, after being closed for a while. The same thing happened with the Canadian data protection authorities. In the United States, well, not much has happened, except that one of the organizations on which board I serve, the Center for Artificial Intelligence and Digital Policy, earlier this year in March of 2023 filed a sixty-four-page complaint with the Federal Trade Commission. Which is basically we're asking the Federal Trade Commission: You do have the authority to investigate how these tools can affect the U.S. consumers. Please do so, because this is your purview, and this is your responsibility. And we're still waiting on the agency to declare what the next steps are going to be. 
If you look at other bodies of legislation or regulation on artificial intelligence that can help us guide artificial intelligence, well, you can certainly pay attention to the U.S. Congress. And what is the U.S. Congress doing? Yeah, pretty much that, not much, to be honest. They listen to Sam Altman, the founder of ChatGPT, who recently testified before Congress, urging Congress to regulate artificial intelligence. Which is quite clever on his part. So it was on May 17 that he testified that we could be facing catastrophic damage ahead if artificial intelligence technology is not regulated in time. He also sounded the alarm about counterfeit humans, meaning that these machines could replace what we think a person is, at least virtually. And also warned about the end of factual evidence, because with artificial intelligence anything can be fabricated. Not only that, but he pointed out that artificial intelligence could start wars and destroy democracy. Certainly very, very grim predictions. And before this, many of the companies were self-regulating for artificial intelligence. If you look at Google, Microsoft, Facebook now Meta. All of them have their own artificial intelligence self-guiding principles. Most of them were very aspirational. Those could help us in higher education because, at the very least, it can help us create our own policies and guidelines for our community members—faculty, staff, students, researchers, administrators, partners, vendors, alumni—anybody who happens to interact with our institutions of higher learning. Now, what else is happening out there? Well, we have tons, tons of laws that have to do with the technology and regulations. Things like the Gramm-Leach-Bliley Act, or the Securities and Exchange Commission, the Sarbanes-Oxley. 
Federal regulations like FISMA, and Cybersecurity Maturity Model Certification, Payment Card Industry, there is the Computer Fraud and Abuse Act, there is the Budapest Convention where cybersecurity insurance providers will tell us what to do and what not to do about technology. We have state laws and many privacy laws. But, to be honest, very few artificial intelligence laws. And it's groundbreaking in Europe that the European parliamentarians have agreed to discuss the Artificial Intelligence Act, which could be the first one really to be passed at this level in the world, after some efforts by China and other countries. And, if adopted, could be a landmark change in the adoption of artificial intelligence. In the United States, even though Congress is not doing much, the White House is trying to position itself in the realm of artificial intelligence. So there's an executive order in February of 2023—that many of us in higher education read because, once again, we're trying to find inspiration for our own rules and regulations—that tells federal agencies that they have to root out bias in the design and use of new technologies, including artificial intelligence, because they have to protect the public from algorithm discrimination. And we all believe this. In higher education, we believe in being fair and transparent and accountable. I would be surprised if any of us is not concerned about making sure that our technology use, our artificial technology use, does not follow these particular principles as proposed by the Organization for Economic Cooperation and Development, and many other bodies of ethics and expertise. Now, the White House also announced new centers—research and development centers with some new national artificial intelligence research institutes. Many of us will collaborate with those in our research projects. A call for public assessments of existing generative artificial intelligence systems, like ChatGPT. 
And also is trying to enact or is enacting policies to ensure that U.S. government—the U.S. government, the executive branch, is leading by example when mitigating artificial intelligence risks and harnessing artificial intelligence opportunities. Because, in spite of all the concerns about this, it's all about the opportunities that we hope to achieve with artificial intelligence. And when we look at how specifically can we benefit from artificial intelligence in higher education, well, certainly we can start with new and modified academic offerings. I would be surprised if most of us will not have degrees—certainly, we already have degrees—graduate degrees on artificial intelligence, and machine learning, and many others. But I would be surprised if we don't even add some bachelor's degrees in this field, or we don't modify significantly some of our existing academic offerings to incorporate artificial intelligence in various specialties, our courses, or components of the courses that we teach our students. We're looking at amazing research opportunities, things that we'll be able to do with artificial intelligence that we couldn't even think about before, that are going to expand our ability to generate new knowledge to contribute to society, with federal funding, with private funding. We're looking at improved knowledge management, something that librarians are always very concerned about, the preservation and distribution of knowledge. The idea would be that artificial intelligence will help us find better the things that we're looking for, the things that we need in order to conduct our academic work. We're certainly looking at new and modified pedagogical approaches, new ways of learning and teaching, including the promise of adaptive learning, something that really can tell students: Hey, you're not getting this particular concept. Why don't you go back and study it in a different way with a different virtual avatar, using simulations or virtual assistance? 
In almost every discipline and academic endeavor. We're looking very concerned, because we're concerned about offering, you know, a good value for the money when it comes to education. So we're hoping to achieve extreme efficiencies, better ways to run admissions, better ways to guide students through their academic careers, better ways to coach them into professional opportunities. And much of this will be possible thanks to artificial intelligence. And also, let's not forget this, but we still have many underserved students, and they're underserved because they either cannot afford education or maybe they have physical or cognitive disabilities. And artificial intelligence can really help us reach those students and offer them new opportunities to advance their education and fulfill their academic and professional goals. And I think this is a good introduction. And I'd love to talk about all the things that can go wrong. I'd love to talk about all the things that we should be doing so that things don't go as wrong as predicted. But I think this is a good way to set the stage for the discussion. FASKIANOS: Fantastic. Thank you so much. So we're going to go to all of you now for your questions and comments, share best practices. (Gives queuing instructions.) All right. So I'm going first to Gabriel Doncel, who has a written question, adjunct faculty at the University of Delaware: How do we incentivize students to approach generative AI tools like ChatGPT for text in ways that emphasize critical thinking and analysis? MOLINA: I always like to start with a difficult question, so I thank you very much, Gabriel Doncel, for that particular question. And, as you know, there are several approaches to adopting tools like ChatGPT on campus by students. One of them is to say: No, over my dead body. If you use ChatGPT, you're cheating. Even if you cite ChatGPT, we can consider you to be cheating. 
And not only that, but some institutions have invested in tools that can detect whether or not something was written with ChatGPT or similar tools. There are other faculty members and other academic institutions that are realizing these tools will be available when these students join the workforce. So our job is to help them do the best that they can by using these particular tools, to make sure they avoid some of the mishaps that have already happened. There are a number of lawyers who have used ChatGPT to file legal briefs. And when the judges received those briefs, and read through them, and looked at the citations, they realized that some of the citations were completely made up, were not real cases. Hence, the lawyers faced professional disciplinary action because they used the tool without the professional review that is required. So hopefully we're going to educate our students and we're going to set policy and guideline boundaries for them to use these, as well as sometimes the necessary technical controls for those students who may not be that ethically inclined to follow our guidelines and policies. But I think that to hide our heads in the sand and pretend that these tools are not out there for students to use would be—it's a disservice to our institutions, to our students, and the mission that we have of training the next generation of knowledge workers. FASKIANOS: Thank you. I'm going to go next to Meena Bose, who has a raised hand. Meena, if you can unmute yourself and identify yourself. Q: Thank you, Irina. Thank you for this very important talk. And my question is a little—(laughs)—it's formative, but really—I have been thinking about what you were saying about the role of AI in academic life. And I don't—particularly for undergraduates, for admissions, advisement, guidance on curriculum. 
And I don't want to have my head in the sand about this, as you just said—(laughs)—but it seems to me that any kind of meaningful interaction with students, particularly students who have not had any exposure to college before, depends upon kind of multiple feedback with faculty members, development of mentors, to excel in college and to consider opportunities after. So I'm struggling a little bit to see how AI can be instructive for that part of college life, beyond kind of providing information, I guess. But I guess the web does that already. So welcome your thoughts. Thank you. FASKIANOS: And Meena's at Hofstra University. MOLINA: Thank you. You know, it's a great question. And the idea that everybody is proposing right here is we are not—artificial intelligence companies, at least at first. We'll see in the future because, you know, it depends on how it's regulated. But they're not trying, or so they claim, to replace doctors, or architects, or professors, or mentors, or administrators. They're trying to help those—precisely those people in those professions, and the people they serve, gain access to more information. And you're right in a sense that that information is already on the web. But we've always had a problem finding that information regularly on the web. And you may remember that when Google came along, I mean, it swept through every other search engine out there: AltaVista, Yahoo, and many others, because, you know, it had a very good search algorithm. And now we're going to the next level. The next level is where you ask ChatGPT in human-natural language. You're not trying to combine the three words that say, OK, is the economics class required? No, no, you're telling ChatGPT, hey, listen, I'm in the master's in business administration at Drexel University and I'm trying to take more economic classes. What recommendations do you have for me? 
And this is where you can have a preliminary answer, and also a caveat there, as most of these search engines—generative AI engines—already have, that tells you: We're not here to replace the experts. Make sure you discuss your questions with the experts. We will not give you medical advice. We will not give you educational advice. We're just here, to some extent, for guiding purposes and, even now, for experimental and entertainment purposes. So I think you are absolutely right that we have to be very judicious about how we use these tools to support the students. Now, that said, I had the privilege of working for public universities in the state of Connecticut when I was the CIO. I also had the opportunity early in my career to attend a public university in Europe, in Spain, where we were hundreds of students in class. We couldn't get any attention from the faculty. There were no mentors, there were no counselors, or anybody else. Is it better to have nobody to help you, or is it better to have at least some technology guidance that can help you find the information that otherwise is spread throughout many different systems that are like ivory towers—admissions on one side, economics on the other, academic advising on another, and everything else? So thank you for a wonderful question and reflection.

FASKIANOS: I'm going to take the next question, written, from Dr. Russell Thomas, a senior lecturer in the Department of International Relations and Diplomatic Studies at Cavendish University in Uganda: What are the skills and competencies that higher education students and faculty need to develop to think in an AI-driven world?

MOLINA: So we could argue here that something very similar has happened already with many information technologies and communication technologies. It is the understanding that, at first, faculty members did not want to use email, or the web, or many other tools because they were too busy with their disciplines. And rightly so.
They were brilliant economists, or philosophers, or biologists. They didn't have enough time to learn all these new technologies to interact with the students. But eventually they did learn, because they realized that it was the only way to meet the students where they were and to communicate with them in efficient ways. Now, I have to be honest; when it comes to the use of technology—and we'll unpack the numbers—it was part of my doctoral dissertation, when I expanded the technology adoption models that tell you about early adopters, and mainstream adopters, and late adopters, and laggards. But I uncovered a new category at some of the institutions where I worked, called the over-my-dead-body adopters. And these were some of the faculty members who say: I will never switch word processors. I will never use this technology. It's only forty years until I retire, probably eighty more until I die. I don't have to do this. And, to be honest, we have a responsibility to understand that those artificial intelligence tools are out there, and to guide the students as to what is the acceptable use of those technologies within the disciplines and the courses that we teach them in. Because they will find those available in a very competitive labor market, because they can derive some benefit from them. But also, we don't want to shortchange their educational attainment just because they go behind our backs to copy and paste from ChatGPT, learning nothing. Going back to the question by Gabriel Doncel, not learning to exercise critical thinking, using citations and material that is unverified, that was borrowed from the internet without any authority, without any attention to the different points of view.
I mean, if you've used ChatGPT for a while—and I have personally, even to prepare some basic thank-you speeches, which are all very formal; even to contest a traffic ticket in Washington, DC, when I was speeding but didn't want to pay the ticket anyway; even for just research purposes—you will realize that most of the writing from ChatGPT has a very, very common style. Which is, oh, on the one hand people say this, on the other hand people say that. Well, critical thinking will tell you, sure, there are two different opinions, but this is what I think myself, and this is why I think it. And these are some of the skills, the critical thinking skills, that we must continue to teach the students, and not, you know, put blinders around their eyes and say, oh, continue focusing only on the textbook and the website. No, no. Look at the other tools, but use them judiciously.

FASKIANOS: Thank you. I'm going to go next to Clemente Abrokwaa. Raised hand, if you can identify yourself, please.

Q: Hi. Thanks so much for your talk. I'm from Penn State University. And this is a very important topic, I think. And some of the earlier speakers have already asked the questions I was going to ask. (Laughs.) But one thing that I would like to say is that, as you said, we cannot bury our heads in the sand. No matter what we think, the technology is already here. So we cannot avoid it. My question, though, is: What do you think about the use of artificial intelligence by, say, for example, graduate students to write dissertations? You did mention the lawyers that used it to write their briefs, and they were caught. But in dissertations and also in class—for example, you have about forty students. You give a written assignment. When you start grading, you have grading fatigue. And so at some point you lose interest in actually checking.
And so I'm kind of concerned about how that will affect the students' desire to actually go and research without resorting to the use of AI.

MOLINA: Well, Clemente, fellow colleague from the state of Pennsylvania, thank you for that—once again, both a question and a reflection here. Listen, many of us wrote our doctoral dissertations—mine at Georgetown. At one point in time, I was so tired of writing about the same topics, following the wonderful advice, but also the whims, of my dissertation committee, that I was this close to outsourcing my thesis to China. I didn't, but I thought about it. And now graduate students are thinking, OK, why am I going through the difficulties of writing this when ChatGPT can do it for me and the deadline is tomorrow? Well, this is what will distinguish the good students and the good professionals from the other ones. And the interesting part is, as you know, when we teach graduate students we're teaching them critical thinking skills, but also teaching them how to express themselves, you know, either orally or in writing. And writing effectively is fundamental in the professions, but also absolutely critical in academic settings. And anybody who's just copying and pasting from ChatGPT into these documents cannot do that level of writing. But you're absolutely right. Let's say that we have an adjunct faculty member who's teaching a hundred students. Will that person go through every single essay to find out whether students were cheating with ChatGPT? Probably not. And this is why there are also enterprising people who are using artificial intelligence to find out and tell you whether a paper was written using artificial intelligence. So it's a little bit of a fight between different sources, and business opportunities for all of them. And we've done this. We've used antiplagiarism tools in the past because we knew that students were copying and pasting using Google Scholar and many other sources.
And now oftentimes we run antiplagiarism tools. We didn't write them ourselves. Or we tell the students, you run it yourself and you give it to me. And make sure you are not accidentally failing to cite things that could end up jeopardizing your ability to get a graduate degree because your work was not up to snuff with the requirements of our stringent academic programs. So I would argue that these antiplagiarism tools that we're using will, more often than not, and sooner than expected, incorporate the detection of artificial intelligence writeups. And also the interesting part is to tell the students, well, if you do choose to use any of these tools, what are the rules of engagement? Can you ask it to write a paragraph and then you cite it, and you mention that ChatGPT wrote it? Not to mention, in addition to that, all the issues about artificial intelligence which the courts are deciding now regarding the intellectual property of those productions. If a song, a poem, a book is written by an artificial intelligence entity, who owns the intellectual property for those works produced by an artificial intelligence machine?

FASKIANOS: Good question. We have a lot of written questions. And I'm sure you don't want to just listen to my voice, so please do raise your hands. But we do have a question from one of your colleagues, Pablo—Pepe Barcega, who's the IT director at Drexel: Considering the potential biases and limitations of AI models, like ChatGPT, do you think relying on such technology in the educational domain can perpetuate existing inequalities and reinforce systemic biases, particularly in terms of access, representation, and fair evaluation of students? And Pepe's question got seven upvotes, so we advanced it to the top of the line.

MOLINA: All right, well, first I have to wonder whether he used ChatGPT to write the question. But I'm going to leave it at that. Thank you. (Laughter.) It's a wonderful question.
One of the greatest concerns we have had—those of us who have been working on artificial intelligence digital policy for years, not just this year when ChatGPT was released, but for years we've been thinking about this, and even before artificial intelligence, in general with algorithm transparency. And the idea is the following: Two things are happening here. One is that we're programming the algorithms using instructions, instructions created by programmers, with all their biases, and their misunderstandings, and their shortcomings, and their lack of context, and everything else. But with artificial intelligence we're doing something even more concerning than that, which is we have some basic algorithms but then we're feeding a lot of information, a corpus of information, to those algorithms. And the algorithms are fine-tuning the rules based on those. So it's very, very difficult for experts to explain how an artificial intelligence system actually makes decisions, because we know the engine and we know the data that we fed to the engine, but we don't really know how those decisions are being made through neural networks, through all of the different systems and methods that we have for artificial intelligence. Very, very few people understand how those work. And those are so busy they don't have time to explain how the algorithm works for others, including the regulators. Let's remember some of the failed cases. Amazon tried this early. And they tried this for selecting employees for Amazon. And they fed it all the resumes. And guess what? It turned out that most of the recommendations were to hire young white people who had gone to Ivy League schools. Why? Because they were feeding it the descriptions of their first employees, and those employees had done extremely well at Amazon. Hence, by feeding it the information of past successful employees, only those profiles were recommended.
And so that pushes away the diversity that we need for different academic institutions, large and small, public and private, from different countries, from different genders, from different ages, from different ethnicities. All those things went away because the algorithm was promoting one particular profile. Recently I had the opportunity to moderate a panel in Washington, DC, and we had representatives from the Equal Employment Opportunity Commission. And they told us how they investigated a hiring algorithm from a company that was disproportionately recommending that they hire people whose first name was Brian and who had played lacrosse in high school because, once again, a disproportionate number of people in that company had done that. And the algorithm realized, oh, these must be important characteristics to hire people for this company. Let's not forget, for example, what happened with facial recognition and artificial intelligence with Amazon Rekognition, you know, the facial recognition software, when the American Civil Liberties Union decided, OK, I'm going to submit the pictures of all the members of Congress to this particular facial recognition engine. And it turned out that it misidentified many of them, particularly African Americans, as people who had been arrested. So all these biases could have really, really bad consequences. Imagine that you're using this to decide whom you admit to your universities, and the algorithm is wrong. You know, you are making really biased decisions that will affect the livelihood of many people, but also will transform society, possibly for the worse, if we don't address this. So this is why the OECD, the European Union, even the White House—everybody is saying: We want this technology. We want to derive the benefits of this technology, while curtailing the abuses. And it's fundamental that we achieve transparency and make sure that these algorithms are not biased against the people who use them.

FASKIANOS: Thank you.
So I'm going to go next to Emily Edmonds-Poli, who is a professor at the University of San Diego: We hear a lot about providing clear guidelines for students, but for those of us who have not had a lot of experience using ChatGPT it is difficult to know what clear guidelines look like. Can you recommend some sources we might consult as a starting point, or where we might find some sample language?

MOLINA: Hmm. Well, certainly this is what we do in higher education. We compete for the best students and the best faculty members. And we sometimes compete a little bit to be first to publish groundbreaking research. But we tend to collaborate on everything else, particularly when it comes to policy, and guidance, and rules. So there are many institutions, like mine, that have already assembled—I'm sure that yours has done the same—assembled committees, because assembling committees and subcommittees is something we do very well in higher education, with faculty members, with administrators, even with student representation, to figure out, OK, what should we do about the use of artificial intelligence on our campus? I mentioned before that taking a look at the big aspirational declarations by Meta, and Google, and IBM, and Microsoft could be helpful for these communities to look at. But also, I'm a very active member of an organization known as EDUCAUSE. And EDUCAUSE is for educators—predominantly higher education educators: administrators, staff members, faculty members—to think about the adoption of information technology. And EDUCAUSE has done good work on this front and continues to do good work on this front. So once again, EDUCAUSE and some of the institutions have already published their guidelines on how to use artificial intelligence and incorporate it within their academic lives. And now, that said, we also know that even though all higher education institutions are the same, they're all different. We all have different values.
We all believe in different uses of technology. We trust the students more or less. Hence, it's very important that, whatever inspiration you take, you work internally on campus—as you have done with many other issues in the past—to make sure it really reflects the values of your institution.

FASKIANOS: So, Pablo, would you point to a specific college or university that has developed a code of ethics that addresses the use of AI for their academic community beyond your own, but that is publicly available?

MOLINA: Yeah, I'm going to be honest, I don't want to put anybody on the spot.

FASKIANOS: OK.

MOLINA: Because, once again, there are many reasons. But, once again, let me repeat a couple of resources. One of them is from the U.S. Department of Education, from the Office of Educational Technology. And the report is Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, published earlier this year. The other source really is educause.edu. And if you look at educause.edu on artificial intelligence, you'll find links to articles, you'll find links to universities. It would be presumptuous of me to evaluate whose policies are better than others, but I would argue that the general principles of nonbias, transparency, accountability, and also integration of these tools within the academic life of the institution in a morally responsible way—with concepts like privacy by design, security by design, and responsible computing—all of those are good words to have in there. Now, the other problem with policies and guidelines is that, let's be honest, many of those have no teeth in our institutions. You know, we promulgate them. They're very nice. They look beautiful. They are beautifully written. But oftentimes when people don't follow them, there's not a big penalty. And this is why, in addition to having the policies, educating the campus community is important.
But it's difficult to do, because we need to educate them about so many things. About cybersecurity threats, about sexual harassment, about nondiscrimination policies, about responsible behavior on campus regarding drugs and alcohol, about crime. So many things that they have to learn about. It's hard to add another topic for them to spend their time on, instead of researching the core subject matter that they chose to pursue for their lives.

FASKIANOS: Thank you. And we will be sending out a link to this video, the transcript, as well as the resources that you have mentioned. So if you didn't get them, we'll include them in the follow-up email. So I'm going to go to Dorian Brown Crosby, who has a raised hand.

Q: Yes. Thank you so much. I put one question in the chat, but I have another question that I would like to go ahead and ask now. So thank you so much for this presentation. You mentioned algorithm biases with individuals. And I appreciate you pointing that out, especially when we talk about facial recognition, also in terms of forced migration, which is my area of research. But I also wanted you to speak to, or could you talk about, the challenges that some institutions in higher education would have in terms of support for some of the things that you mentioned—potential curricula, or certificates, or other ways that AI would be woven into the new offerings of institutions of higher education. How would that look specifically for institutions that might be challenged to access those resources, such as Historically Black Colleges and Universities? Thank you.

MOLINA: Well, very interesting question, and a really fascinating point of view. Because we all tend to look at things from our own perspective and perhaps not consider the perspective of others—those who have much more money and resources than us, and those who have fewer resources and less funding available. So this is a very interesting line.
What is it that we do in higher education when we have these problems? Well, as I mentioned before, we build committees and subcommittees. Usually we also do campus surveys. I don't know why we love doing campus surveys and asking everybody what they think about this. Those are useful tools for discussion. And oftentimes the other thing that we do, that we've done for many other topics: well, we hire people and we create new offices—either academic or administrative offices. With all of those, you know, there are certain limitations to how useful and functional they can be. And they also continue to require resources—resources that, in the end, are paid for by students with, you know, federal financing. But this is the truth of the matter. So if you start creating offices of artificial intelligence on our campuses, however important the work may be on their guidance, and however much extra work can be assigned to them instead of distributed to all the faculty and staff members out there, the truth of the matter is that these are not perfect solutions. So what is it that we do? Oftentimes, we work with partners. And our partners love to take—(inaudible)—vendors. But the truth of the matter is that sometimes they have much more expertise on some of these topics. So for example, if you're thinking about incorporating artificial intelligence into some of the academic materials that you use in class, well, I'm going to take a guess that if you already work with McGraw Hill in economics, or accounting, or some of the other books and websites that they put out that you recommend to your students or make mandatory for your students, that you start discussing with them: hey, listen, are you going to use artificial intelligence? How? Are you going to tell me ahead of time? Because, as a faculty member, you may have a choice to decide: I want to work with this publisher and not that particular publisher because of the way they approach this.
And let's be honest, we've seen a number of these vendors with major information security problems. McGraw Hill recently left a repository of data misconfigured out there on the internet, and almost anybody could access it. But many others before them, like Chegg and others, were notorious for their information security breaches. Can we trust that these people are going to adopt artificial intelligence and do a good job of securing the information, the privacy, and the nonbiased approaches that we hold dear for students? I think they require a lot of supervision. But in the end, these publishers have the economies of scale for you to recommend those educational materials instead of developing your own for every course, for every class, and for every institution. So perhaps we're going to have to continue to work together, as we've done in higher education, in consortia, which could be local or regional. It could be based on institutions with the same interests, or on student population. And, you know, hopefully we'll get grants from the federal government that can be used to develop some of the materials and guidelines that are going to help us precisely embrace this—embrace it not only to operate better as institutions and fulfill our mission, but also to make sure that our students are better prepared to join society and compete globally, which is what we have to do.

FASKIANOS: So I'm going to combine questions. Dr. Lance Hunter, who is an associate professor at Augusta University: There's been a lot of debate regarding whether plagiarism detection software tools like Turnitin can accurately detect AI-generated text. What is your opinion regarding the accuracy of AI text generation detection plagiarism tools? And then Rama Lohani-Chase, at Union County College, wants recommendations on what plagiarism checkers—or, you know, plagiarism detection for AI—you would recommend.

MOLINA: Sure.
So, number one, I'm not going to endorse any particular company, because if I did I would have to ask them for money—or the other way around; I'm not sure how it works—and I could be seen as biased, particularly here. But there are many out there, and your institutions are using them. Sometimes they are integrated with your learning management system. And, as I mentioned, sometimes we ask the students to use them themselves and then either produce the plagiarism report for us or simply know the results themselves. I'm going to be honest; when I teach ethics and technology, I tell the students about the antiplagiarism tools at the universities. But I also tell them, listen, if you're cheating in an ethics and technology class, I have failed miserably. So please don't. Take extra time if you have to, but—you know, and if you want, use the antiplagiarism tool yourself. But the question stands and is critical, which is that right now those tools are trying to improve the recognition of text written by artificial intelligence, but they're not as good as they could be. So like every other technology and, what I'm going to call, antitechnology—used to control the damage of the first technology—there is an escalation where we start trying to identify this. And I think they will continue to do this, and they will be successful in doing this. There are people who have written ad hoc tools using ChatGPT to identify things written by ChatGPT. I tried them. They're remarkably good for the handful of papers that I tried myself, but I haven't conducted enough research to tell you if they're really effective tools for this. So I would argue that for the time being you must assume that those tools, as we assume all the time, will not catch all of the cases, only some of the most obvious ones.

FASKIANOS: So a question from John Dedie, who is an assistant professor at the Community College of Baltimore County: To combat AI issues, shouldn't we rethink assignments?
Instead of papers, have students do PowerPoints, ask students to offer their opinions and defend them? And then there was an interesting comment from Mark Habeeb at Georgetown University School of Foreign Service: Knowledge has been cheap for many years now because it is so readily available. With AI, we have a tool that can aggregate the knowledge and create written products. So, you know, what needs to be the focus now is critical thinking and assessing values. We need to teach our students how to assess and use that knowledge rather than how to find and aggregate that knowledge. So maybe you could react to those two—the question and the comment.

MOLINA: So let me start with the Georgetown one, not only because he's a colleague of mine—I also teach at Georgetown, and it's where I obtained my doctoral degree a number of years ago. I completely agree. I completely agree with the point that we have to teach new skills. And one of the programs in which I teach at Georgetown is our master's in analysis, which is basically for people who want to work in the intelligence community. And these people have to find the information and they have to draw inferences, and try to figure out whether it is a nation-state that is threatening the United States, or another actor, or a corporation, or something like that. And they do all of that critical thinking, and intuition, and use all the tools that we have developed in the intelligence community for many, many years. And with artificial intelligence, if they suspend their judgment and they only use artificial intelligence, they will miss very important information that is critical for national security.
And the same is true for something like our flagship school, the School of Foreign Service at Georgetown, one of the best in the world in that particular field, where you want to train the diplomats, and the heads of state, and the great strategic thinkers on policy and politics in the international arena to precisely think not in the mechanical way that a machine can think, but also to connect those dots. And, sure, they should be using those tools in order to, you know, get the most favorable starting position, but they should also use their critical thinking always, and their capabilities of analysis, in order to produce good outcomes and good conclusions. Regarding redoing the assignments, absolutely true. But that is hard. It is a lot of work. We're very busy faculty members. We have to grade. We have to be on committees. We have to do research. And now they ask us to redo our entire assessment strategy, with new assignments that we need to grade again and account for artificial intelligence. And I don't think that any provost out there is saying, you know what? You can take two semesters off to work on this and retool all your courses. That doesn't happen in the institutions that I know of. If you get time off because you're entitled to it, you want to devote that time to doing research, because that is really what you signed up for when you pursued an academic career, in many cases. I can tell you one thing: here in Europe, where oftentimes they look at these problems with fewer resources than we have in the United States, a lot of faculty members at the high school level, at the college level, are moving to oral examinations, because it's much harder to cheat with ChatGPT in an oral examination. Because they will ask you interactive, adaptive questions—like the ones we suffered when we were defending our doctoral dissertations. And the faculty members will realize whether or not you know the material and you understand the material.
Now, imagine oral examinations for a class of one hundred, two hundred, four hundred. Do you do one for the entire semester, with one topic chosen, and run them? Or do you do several throughout the semester? Do you end up using a ChatGPT virtual assistant to conduct your oral examinations? I think these are complex questions. But certainly redoing our assignments and redoing the way we teach and the way we evaluate our students is perhaps a necessary consequence of the advent of artificial intelligence.

FASKIANOS: So the next question is from Damian Odunze, who is an assistant professor at Delta State University in Cleveland, Mississippi: Who should safeguard against ethical concerns and the misuse of AI by criminals? Should the onus fall on the creators and companies like Apple, Google, and Microsoft to ensure security and not pass it on to the end users of the product? And I think you mentioned at the top, in your remarks, Pablo, how the founder of ChatGPT was urging the Congress to put into place some regulation. What is the onus on ChatGPT to protect against some of this as well?

MOLINA: Well, I'm going to recycle more of the material from my doctoral dissertation. In this case it was the Molina cycle of innovation and regulation. It goes like this: basically, there are engineers and scientists who create new information technologies. And then there are entrepreneurs and businesspeople and executives who figure out, OK, I know how to package this so that people are going to use it, buy it, subscribe to it, or look at it, so that I can sell the advertising to others. And, you know, this begins, and very, very soon the abuses start. And the abuses are that criminals are using these platforms for purposes that were not envisioned before.
Even the executives, as we've seen with Google, and Facebook, and others, decide to invade the privacy of the people, because they only have to pay a big fine, but they make much more money than the fines, or they expect not to be caught. And what happens in this cycle is that eventually there is so much noise in the media, and so many congressional hearings, that eventually regulators step in and they try to pass new laws, or the regulatory agencies try to investigate using the powers given to them. And then all of these new rules have to be tested in courts of law, which could take years by the time a case reaches, sometimes, all the way to the Supreme Court. Some of them are even knocked down on the way to the Supreme Court when the courts realize this is not constitutional, it's a conflict of laws, and things like that. Now, by the time we regulate these new technologies, not only have many years gone by, but the technologies have changed. The marketing products and services have changed, the abuses have changed, and the criminals have changed. So this is why we're always living in a loosely regulated space when it comes to information technology. And this is an issue of accountability. We're finding this, for example, with information security. If my phone is hacked, or my computer, or my email, is it the fault of Microsoft, and Apple, and Dell, and everybody else? Why am I the one paying the consequences and not any of these companies? Because it's unregulated. So, morally speaking, yes, these companies are accountable. Morally speaking, also, the users are accountable, because we're using these tools, because we're incorporating them professionally. Legally speaking, so far, nobody is accountable, except the lawyers who submitted briefs that were not correct in a court of law and were disciplined for that. But other than that, right now, it is a very gray space. So in my mind, it requires everybody. It takes a village to do the morally correct thing.
It starts with the companies and the inventors. It involves the regulators, who should do their job and make sure that there's no unnecessary harm created by these tools. But it also involves every company executive, every professional, every student, and professor who decides to use these tools. FASKIANOS: OK. I'm going to take—combine a couple questions from Dorothy Marinucci and Venky Venkatachalam about the effect of AI on jobs. Dorothy talks about—she's from Fordham University—something she read about Germany's best-selling newspaper Bild reportedly adopting artificial intelligence to replace certain editorial roles in an effort to cut costs. Does this mean that the field of journalism communication will change? And Venky's question is: AI—one of the impacts is in the area of automation, leading to elimination of certain types of jobs. Can you talk about both the elimination of jobs and what new types of jobs you think will be created as AI matures into the business world with more value-added applications? MOLINA: Well, what I like about predicting the future, and I've done this before in conferences and papers, is that, you know, when the future comes ten years from now people will either not remember what I said, or, you know, maybe I was lucky and my prediction was correct. In the specific field of journalism, and we've seen it, the journalism and communications field, decimated because the money that they used to make with advertising—and, you know, certainly a big part of that was in the form of corporate profits. But much of it went to hiring good journalists, and investigative journalism, and these people could be six months writing a story when right now they have six hours to write a story, because there are no resources. And all the advertisement money went instead to Facebook, and Google, and many others because they work very well for advertisements. 
But now the lifeblood of journalism organizations has been really, you know, undermined. And there's good journalism in other places, in newspapers, but sadly this is a great temptation to replace some of the journalists with more artificial intelligence, particularly the most—on the least important pieces. I would argue that editorial pieces are the most important in newspapers, the ones requiring ideology, and critical thinking, and many others. Whereas there are others that tell you about traffic changes that perhaps do not—or weather patterns, without offending any meteorologists, that maybe require a more mechanical approach. I would argue that a lot of professions are going to be transformed because, well, if ChatGPT can write real estate announcements that work very well, well, you may need fewer people doing this. And yet, I think that what we're going to find is the same thing we found when technology arrived. We all thought that the arrival of computers would mean that everybody would be without a job. Guess what? It meant something different. It meant that in order to do our jobs, we had to learn how to use computers. So I would argue that this is going to be the same case. To be a good doctor, to be a good lawyer, to be a good economist, to be a good knowledge worker you're going to have to learn also how to use whatever artificial intelligence tools are available out there, and use them professionally within the moral and the ontological concerns that apply to your particular profession. Those are the kind of jobs that I think are going to be very important. And, of course, all the technical jobs, as I mentioned. There are tons of people who consider themselves artificial intelligence experts. Only a few at the very top understand these systems. 
But there are many others in the pyramid that help with preparing these systems, with the support, the maintenance, the marketing, preparing the datasets to go into these particular models, working with regulators and legislators and compliance organizations to make sure that the algorithms and the tools are not running afoul of existing regulations. All of those, I think, are going to be interesting jobs that will be part of the arrival of artificial intelligence. FASKIANOS: Great. We have so many questions left and we just couldn't get to them all. I'm just going to ask you just to maybe reflect on how the use of artificial intelligence in higher education will affect U.S. foreign policy and international relations. I know you touched upon it a little bit in reacting to the comment from our Georgetown University colleague, but any additional thoughts you might want to add before we close? MOLINA: Well, let's be honest, one particular one that applies to education and to everything else, there is a race—a worldwide race for artificial intelligence progress. The big companies are fighting—you know, Google, and Meta, many others, are really putting—Amazon—putting resources into that, trying to be first in this particular race. But it's also a national race. For example, it's very clear that there are executive orders from the United States as well as regulations and declarations from China that basically are indicating these two big nations are trying to be first in dominating the use of artificial intelligence. And let's be honest, in order to do well in artificial intelligence you need not only the scientists who are going to create those models and refine them, but you also need the bodies of data that you need to feed these algorithms in order to have good algorithms. So the barriers to entry for other nations and the barriers to entry by all the technology companies are going to be very, very high. 
It's not going to be easy for any small company to say: Oh, now I'm a huge player in artificial intelligence. Because even if you may have created an interesting new algorithmic procedure, you don't have the datasets that the huge companies have been able to amass and work on for the longest time. Every time you submit a question to ChatGPT, the ChatGPT experts are using those questions to refine the tool. The same way that when we were using voice recognition with Apple or Android or other companies, they were using those voices and our accents and our mistakes in order to refine their voice recognition technologies. So this is the power. We'll see that the early bird gets the worm: those who are investing, those who are aggressively going for it, and those who are also judiciously regulating this can really do very well in the international arena when it comes to artificial intelligence. And so will their universities, because they will be able to really train those knowledge workers, they'll be able to get the money generated from artificial intelligence, and they will be able to, you know, feed back one into the other. The advances in the technology will result in more need for students, more students graduating will propel the industry. And there will also be—we'll always have a fight for talent where companies and countries will attract those people who really know about these wonderful things. Now, keep in mind that artificial intelligence was the core of this, but there are so many other emerging issues in information technology. And some of them are critical to higher education. So we're still, you know, lots of hype, but we think that virtual reality will have an amazing impact on the way we teach and we conduct research and we train for certain skills. We think that quantum computing has the ability to revolutionize the way we conduct research, allowing us to do computations that were not even thinkable today. We'll look at things like robotics. 
And if you ask me about what is going to take many jobs away, I would say that robotics can take a lot of jobs away. Now, we thought that there would be no factory workers left because of robots, but that hasn't happened. But keep adding robots with artificial intelligence to serve you a cappuccino, or your meal, or take care of your laundry, or many other things, or maybe clean your hotel room, and you realize, oh, there are lots of jobs out there that no longer will be there. Think about artificial intelligence for self-driving vehicles, boats, planes, cargo ships, commercial airplanes. Think about the thousands of taxi drivers and truck drivers who may end up being out of jobs because, listen, the machines drive safer, and they don't get tired, and they can be driving twenty-four by seven, and they don't require health benefits, or retirement. They don't get depressed. They never miss. Think about many of the technologies out there that have an impact on what we do. So, but artificial intelligence is a multiplier to technologies, a contributor to many other fields and many other technologies. And this is why we're so—spending so much time and so much energy thinking about these particular issues. FASKIANOS: Well, thank you, Pablo Molina. We really appreciate it. Again, my apologies that we couldn't get to all of the questions and comments in the chat, but we appreciate all of you for your questions and, of course, your insights were really terrific, Dr. P. So we will, again, be sending out the link to this video and transcript, as well as the resources that you mentioned during this discussion. I hope you all enjoy the Fourth of July. And I encourage you to follow @CFR_Academic on Twitter and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. Again, you send us comments, feedback, suggestions to CFRacademic@CFR.org. And, again, thank you all for joining us. 
We look forward to your continued participation in CFR Academic programming. Have a great day. MOLINA: Adios. (END)

That Tech Pod
Aviation Tech, Hacking and Crime Confessions with Boom Supersonic CISO Chris Roberts

That Tech Pod

Play Episode Listen Later Jun 20, 2023 28:42


Today Kevin and Laura talk with Chris Roberts, Boom Supersonic's CISO, about aviation technology, the Concorde, hacking all the things (including the Mars Rover!), building planes, epic beards, DefCon, Back to the Future, hover boards and flying cars!  Chris also casually confessed to breaking into prison, money laundering and robbing banks.   Chris is the CISO for Boom Supersonic and works as an advisor for several entities and organizations around the globe.  His most recent projects are focused within the aerospace, deception, identity, cryptography, Artificial Intelligence, and services sectors. Over the years, he's founded or worked with several folks specializing in OSINT/SIGINT/HUMINT research, intelligence gathering, cryptography, and deception technologies. These days he's working on spreading the risk, maturity, collaboration, and communication word across the industry. Since the late 90's Chris has been deeply involved with security R&D, consulting, and advisory services in his quest to protect and defend businesses and individuals against various types of attack. Prior to that he jumped out of planes for a living, visiting all sorts of interesting countries and cultures while doing his best to avoid getting shot at too often. He's considered one of the world's foremost experts on counter threat intelligence and vulnerability research within the Information Security industry. He's also gotten a name for himself in the transportation arena, basically anything with wings, wheels, tracks, tyres, fins, props or paddles has been the target for research for the last 15 years.Chris has led or been involved in information security assessments and engagements for the better part of 25 years and has a wealth of experience with regulations such as GLBA, GDPR, HIPAA, HITECH, FISMA, and NERC/FERC.  
He has also worked with government, state, and federal authorities on standards such as CMS, ISO, CMMC, and NIST. Chris has been credentialed in many of the top IT and information security disciplines, and as a cybersecurity advocate and passionate industry voice he is regularly featured in national newspapers, television news, industry publications, and several documentaries. And, in case a reminder is needed, Chris was the researcher who gained global attention in 2015 for demonstrating linkages between various aviation systems, both on the ground and in the air, that allowed attacks against flight control systems.

The Tech Trek
Tailoring Cybersecurity Strategies for Startups and Enterprise Companies

The Tech Trek

Play Episode Listen Later May 17, 2023 28:00


On this episode of Tech Trek, Lisa Hall, Chief Information Security Officer, talks about her experiences building a security program at a genetic testing company. The program covers infrastructure security, application security, product security, governance, risk, and compliance. Lisa discusses the challenges and strategies in building and maintaining a security program in a constantly evolving landscape.

Highlights:
[00:02:29] Building security strategies.
[00:03:42] Adapting to different company cultures.
[00:07:16] Engineering first organizations.
[00:11:16] Finding security champions.
[00:14:30] Celebrating quick wins.
[00:17:25] Finding the right leadership voice.
[00:20:49] Cybersecurity and Business Impact.
[00:23:55] Productivity and motivation.
[00:27:47] Call-to-action for engagement.

With over 16 years of experience in information security, Lisa Hall has built security programs from the ground up and optimized existing security and compliance initiatives at scale. She focuses on building holistic security strategies and comprehensive information security management programs, ensuring products and business systems are developed with security in mind. Lisa has experience building and growing teams, leading companies through IPOs, acquisitions and mergers, and leading Application/Product Security, Infrastructure Security, and Compliance programs (SOX, SOC 2, ISO 27001, FISMA, FedRAMP, and HITRUST). She believes security should make it easy to do the right thing. Lisa has previously held information security roles at PagerDuty, Twilio, and EY. Lisa is a Venture Advisor at YL Ventures and an Advisory Board Member for Day of Shecurity. She is also a co-author of "Reinventing Cybersecurity", a JupiterOne book authored by female and non-binary security practitioners. --- Thank you so much for checking out this episode of The Tech Trek, and we would appreciate it if you would take a minute to rate and review us on your favorite podcast player. 
Want to learn more about us? Head over at https://www.elevano.com Have questions or want to cover specific topics with our future guests? Please message me at  https://www.linkedin.com/in/amirbormand (Amir Bormand)

Ask the CIO
DeRusha says new 2023 cyber metrics reflect agility needed in today's environment

Ask the CIO

Play Episode Listen Later Dec 27, 2022 46:38


Chris DeRusha, the federal chief information security officer, said new FISMA metrics will ask agencies for more granular data on how they are meeting administration priorities.

@BEERISAC: CPS/ICS Security Podcast Playlist
42: How Skills Outside of the CyberSecurity Space Lay the Groundwork for a Great CyberSecurity Career with Art Conklin

@BEERISAC: CPS/ICS Security Podcast Playlist

Play Episode Listen Later Jun 15, 2022 48:56


Podcast: Control System Cyber Security Association International: (CS)²AI
Episode: 42: How Skills Outside of the CyberSecurity Space Lay the Groundwork for a Great CyberSecurity Career with Art Conklin
Pub date: 2022-06-14

Derek Harp is happy to have Art Conklin, another legendary ICS control systems cybersecurity figure, joining him on the show today! Art is an experienced Information Systems Security professional. He has a background in software development, systems science, and information security. He is qualified with CISSP, GICSP, GRID, GCIP, GCFA, GCIA, GCDA, CSSLP, CRISC, and Security+. His specialties include information systems security management, network and systems security, intrusion detection and intrusion detection monitoring, penetration testing, incident response, security policy and procedures, risk/threat assessments, security training/awareness, user interface design and evaluation, FISMA, secure code design/software engineering, cyber-physical systems security, and security metrics.

Art is a hacker at heart. Art was born in St. Louis, Missouri, in 1960. He has been a professor at the University of Houston for many years! He is also a well-known speaker, military veteran, technologist, author, sailor, rocket scientist, father, husband, and grandfather. In this episode of the (CS)²AI Podcast, he talks about his formative years, a life-changing Navy experience, taking advantage of learning situations outside of college, the application of knowledge, the benefits of getting an MBA, and the benefits of on-the-job training. If you want to get into the cybersecurity space, you will not want to miss this episode - even if you have qualifications in a different area.

Show highlights:
There is a different level of thinking that gets taught and applied today. (5:49)
After doing courses at different universities and then starting med school, Art realized it was not where he wanted to go because it was science, not tech, and it was very theory-driven. (8:10)
Art wanted a career where he could do stuff, so he was advised to get an MBA from Harvard or join the military to learn how to lead men, manage a budget, and learn the difference between those things. Harvard was out of reach, so he joined the Navy. (9:07)
Art talks about the unique military experience that changed his perspective and made him who he is today. (11:05)
The cyber-world can benefit from people with no college degree who have problem-solving abilities, communication skills, and the ability to lead. (15:08)
Learning is about more than just knowledge because knowledge needs to be applied. (18:38)
Art wanted to leave the Navy to join IBM, but the Admiral did not want him to leave and offered him the opportunity to go to Navy Post Graduate School with no payback. So Art spent three years studying space system engineering, got a Ph.D. equivalent, and flew on a spacecraft. (20:40)
In some respects, transitioning out of the military is not easy, from a job perspective. (24:01)
Art explains why he did another degree after getting his doctorate. (27:44)
Art talks about the qualities of his various mentors and the importance of having connections with people with aspects that will broaden you and make you smarter. (29:14)
What he has done and is currently doing at the University of Houston. (32:32)
If you want to work in cybersecurity and you have a breadth of knowledge and experience, you are likely to succeed in the space. (39:16)
If you want to learn more about OT, many resources are available. Use and apply them. You can also email Art for local resources at waconklin@uh.edu. Most people are willing to share their knowledge and become mentors, so reach out to those you look up to. (44:42)
How to invest in yourself. (46:20)

Links:
(CS)²AI
Art Conklin on LinkedIn
The University of Houston (search for cybersecurity)

The podcast and artwork embedded on this page are from Derek Harp, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.

Control System Cyber Security Association International: (CS)²AI
42: How Skills Outside of the CyberSecurity Space Lay the Groundwork for a Great CyberSecurity Career with Art Conklin

Control System Cyber Security Association International: (CS)²AI

Play Episode Listen Later Jun 14, 2022 50:08


Derek Harp is happy to have Art Conklin, another legendary ICS control systems cybersecurity figure, joining him on the show today! Art is an experienced Information Systems Security professional. He has a background in software development, systems science, and information security. He is qualified with CISSP, GICSP, GRID, GCIP, GCFA, GCIA, GCDA, CSSLP, CRISC, and Security+. His specialties include information systems security management, network and systems security, intrusion detection and intrusion detection monitoring, penetration testing, Incident Response, security policy and procedures, risk/threat assessments, Security training/awareness, user interface design and evaluation, FISMA, Secure code design/software engineering, cyber-physical systems security, and security metrics. Art is a hacker at heart. Art was born in St. Louis, Missouri, in 1960. He has been a professor at the University of Houston for many years! He is also a well-known speaker, military veteran, technologist, author, sailor, rocket scientist, father, husband, and grandfather. In this episode of the (CS)²AI Podcast, he talks about his formative years, a life-changing Navy experience, taking advantage of learning situations outside of college, the application of knowledge, the benefits of getting an MBA, and the benefits of on-the-job training. If you want to get into the cybersecurity space, you will not want to miss this episode - even if you have qualifications in a different area.  Show highlights: There is a different level of thinking that gets taught and applied today. (5:49) After doing courses at different universities and then starting med school, Art realized it was not where he wanted to go because it was science, not tech, and it was very theory-driven. (8:10) Art wanted a career where he could do stuff, so he was advised to get an MBA from Harvard or join the military to learn how to lead men, manage a budget, and learn the difference between those things. 
Harvard was out of reach, so he joined the Navy. (9:07) Art talks about the unique military experience that changed his perspective and made him who he is today. (11:05) The cyber-world can benefit from people with no college degree who have problem-solving abilities, communication skills, and the ability to lead. (15:08) Learning is about more than just knowledge because knowledge needs to be applied. (18:38) Art wanted to leave the Navy to join IBM, but the Admiral did not want him to leave and offered him the opportunity to go to Navy Post Graduate School with no payback. So Art spent three years studying space system engineering, got a Ph.D. equivalent, and flew on a spacecraft. (20:40) In some respects, transitioning out of the military is not easy, from a job perspective. (24:01) Art explains why he did another degree after getting his doctorate. (27:44) Art talks about the qualities of his various mentors and the importance of having connections with people with aspects that will broaden you and make you smarter. (29:14) What he has done and is currently doing at the University of Houston. (32:32) If you want to work in cybersecurity and you have a breadth of knowledge and experience, you are likely to succeed in the space. (39:16) If you want to learn more about OT, many resources are available. Use and apply them. You can also email Art for local resources at waconklin@uh.edu.  Most people are willing to share their knowledge and become mentors, so reach out to those you look up to. (44:42) How to invest in yourself. (46:20) Links: https://www.cs2ai.org/ ((CS)²AI) https://www.linkedin.com/in/waconklin/ (Art Conklin on LinkedIn) https://uh.edu/ (The University of Houston) (Search for cybersecurity) Mentioned in this episode: Our Sponsors: We'd like to thank our sponsors for their faithful support of this podcast. Without their support we would not be able to bring you this valuable content. We'd appreciate it if...

Nomad Futurist
ARE YOU REALLY LISTENING?

Nomad Futurist

Play Episode Listen Later May 9, 2022 29:08


What's it like to be a facilities manager running the IT infrastructure of a leading global research university? In this engaging Nomad Futurist podcast, Raymond Parpart, Director of Data Center Operations and Strategy at the University of Chicago, shares a journey that led from theater to technology and draws us into the fascinating world of critical infrastructure and supercomputers within a multi-faceted academic environment. Parpart, a theater major, began his career working on the road doing lights and sound and then opted for a different lifestyle. His wife-to-be suggested he apply for an available mailroom position at Aon. As Parpart already had programming experience, he was hired instead as a programmer to work on Y2K compliance. Programming led to networking, which led to data centers. “From a technology perspective, I've always managed to latch onto whatever the next thing was...I'm a hungry learner… I want to know!” Parpart then went into consulting. He subsequently joined General Motors, where he managed infrastructure and networking. Parpart's work at the University of Chicago involves managing many types of systems, ranging from administrative databases to facilities that are responsible for computing for high-end research projects. This requires that he be able to manage different types of facilities depending on the need. “If you want to see the world of cooling or racks or power…we're doing all kinds of crazy things with them. Come see me. I've got all kinds of crazy stuff!” Parpart talks about the pros and cons of working in the world of education versus the corporate environment. “In education the politics are particularly challenging and require patience! I also need to be a partner or a support person and make sure that I'm not seen as an impediment.” He talks about how he applies lessons learned in the world of business. “You never give anybody one option. You give them two because they'll pick one. 
And hopefully you can sell it so they pick the right one!” Parpart does enjoy the camaraderie of being able to share insights with peers at other big research universities, which would be difficult to achieve in the competitive business environment where trade secrets cannot be shared. For newcomers to the space, Parpart highlights the importance of being willing to learn, willing to listen, and being transparent about what you know and don't know. For those who are further along in their careers, he particularly stresses the importance of being a good listener.  “Are you really listening? Don't be the smartest person in the room, even if you think you are. If you are, take the time to mentor those around you and to draw them into the conversation, to draw them into the solution. Help them think, but let them think!”  Raymond Parpart serves as Director of Data Center Operations and Strategy at the University of Chicago, where he is responsible for mission critical data center facilities, delivering expertise from system/facility design to operational support, to government compliance for areas such as HIPAA, FISMA, and PCI. Parpart works closely with stakeholders to ensure 7x24 reliability and the constant improvement of system hosting, colocation services, and energy efficiency in complex computing environments. His purview extends to outsourcing and cloud integration strategy. Parpart has over 20 years of global experience with technology. Prior to joining the University in 2007, Parpart was a Global Architect responsible for global infrastructure, data center operations, desktop, and server standards for General Motors, where he developed cost-saving innovations in the areas of voice, video, and data networking. Earlier in his career, he served in both technical and management roles for a major regional bank and a global consulting company, delivering infrastructure design and operations solutions to resolve business...

The Daily Scoop Podcast
Air Force software factories; More awards from TMF; Government struggles with implementing FISMA

The Daily Scoop Podcast

Play Episode Listen Later Apr 19, 2022 31:23


On today's episode of The Daily Scoop Podcast, the Department of Veterans Affairs receives $10.5 million from the Technology Modernization Fund to support the agency's transition to Login.gov. The Air Force will look at restructuring the 16 software factories it has now. Lt. Gen. Bill Bender (USAF, ret.), senior vice president for strategic accounts and government relations at Leidos and former chief information officer at the Air Force, explains what the collaboration across the organization should look like to sort out what's next. Dave Wennergren, CEO at ACT-IAC and former chief information officer at the Navy, discusses what else could be on the way from the TMF Board as the Biden administration requests an additional $300 million for the fund in fiscal year 2023. Less than a third of CFO Act agencies have effective security programs as of FY20. Jennifer Franks, director of information technology and cybersecurity issues at the Government Accountability Office, breaks down agencies' struggles with implementing required security programs. The Daily Scoop Podcast is available every weekday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Google Podcasts, Spotify and Stitcher. And if you like what you hear, please let us know in the comments.

Brain Bytes
HIPPA and CMCC and FISMA, Compliance Oh My!

Brain Bytes

Play Episode Listen Later Apr 18, 2022 22:01 Transcription Available


This week will be a standout performance for your IT Bingo card! Blake and James are jumping into the world of compliance. Almost all industries now deal with some form of regulation and they all seemingly touch IT. Tune in and see how you can keep your lions, tigers, and bears in line and keep your company safe from the flying monkeys! 

Man Group: Perspectives Towards a Sustainable Future
Alain Deckers, European Commission DG-FISMA, on Regulating the Transition to a Sustainable Economy

Man Group: Perspectives Towards a Sustainable Future

Play Episode Listen Later Apr 13, 2022 43:20


How does sustainable finance regulation represent a sea change for investors? Listen to Jason Mitchell talk to Alain Deckers, European Commission Directorate-General for Financial Stability, Financial Services and Capital Markets Union (DG FISMA), about greenwashing, enforcement, materiality, regulatory harmonisation and how the European Commission's Sustainable Finance Strategy is bringing transparency to the ESG space. Alain Deckers is the newly-appointed Head of the Asset Management Unit within the European Commission's Directorate-General for Financial Stability, Financial Services and Capital Markets Union or DG-FISMA. He was the Vice-Chairman of the EFRAG European Lab Steering Group. With over 20 years of experience at the European Commission, Alain has been responsible for policy reviews and policy development in areas including trade in goods, environmental policy, public procurement and financial services regulation.   * The views set out in this podcast are those of Alain and not the official position of the European Commission, nor the views of individual Commissioners or other officials of the European Commission. Learn more about your ad choices. Visit megaphone.fm/adchoices

Twins Talk it Up Podcast
Twins Talk it Up Episode 90: Entrepreneurial Success in Government Contracting

Twins Talk it Up Podcast

Play Episode Listen Later Mar 30, 2022 51:06


Congratulations are in order for our next guest, Jasson Walker, Jr. The Founder and CEO of cFocus Software, a $10M-revenue company located in the Washington, DC region, was featured in Microsoft's Black Partner Growth Initiative Black History Month 2022 Campaign. Jasson joined Twins Talk it Up to share about his entrepreneurial journey within the Tech and Government Contracting space. We touch on how his ATO as a Service™ helps to automate FISMA, RMF, and FedRAMP compliance and reporting. We dive into some of the partnerships he has, including Microsoft (Gold-Certified), Black Channel Partner Alliance (BCPA) and AppMeetup. We ask Jasson to share his best tips for winning contracts with the Federal Government. Jasson mentions two keys for success:
- Cultivating relationships with Decision Makers
- The ability to write and articulate your value proposition in Proposals

He also adds that working with the government is all about 'risk management' and bringing in the best talent who align with your vision. Acknowledging strengths and placing leadership in the right seats on the bus lead to success. Jasson echoes what we've been hearing from other entrepreneurs we've had on the program, in that you must be willing to let go, have repeatable processes, and scale with the right leaders. He did not become an entrepreneur to gain freedom as much as he did to gain flexibility. To learn more about Jasson and cFocus Software, visit https://cfocussoftware.com/

Support and Follow us by Sponsoring, Subscribing & Downloading.

--- more ---

If you are looking to learn the art of audience engagement while listening for methods to conquer speaking anxiety, deliver persuasive presentations, and close more deals, then this is the podcast for you. Twins Talk it Up is a podcast where identical twin brothers Danny Suk Brown and David Suk Brown discuss leadership communication strategies to support professionals who believe in the power of their own authentic voice. 
Together, we will explore tips and tools to increase both your influence and value. Along the way, let's crush some goals, deliver winning sales pitches, and enjoy some laughs.Danny Suk Brown and David Suk Brown train on speaking and presentation skills. They also share from their keynote entitled, “Identically Opposite: the Pursuit of Identity”.Support and Follow us:YouTube: youtube.com/channel/UCL18KYXdzVdzEwMH8uwLf6gInstagram: @twinstalkitupInstagram: @dsbleadershipgroupTwitter: @dsbleadershipLinkedIn: linkedin.com/company/twins-talk-it-up/LinkedIn: linkedin.com/company/dsbleadershipgroup/Facebook: facebook.com/TwinsTalkitUpFacebook: facebook.com/dsbleadership/Website: dsbleadershipgroup.com/TwinsTalkitUp

Resilient Cyber
S2E19: Renee Wynn - Organizational Leadership, FISMA Reform and Soft Skills

Resilient Cyber

Play Episode Listen Later Mar 1, 2022 38:30


We know you've held several executive roles; we would love to hear your perspective on balancing business and organizational leadership with the technology side. You recently testified before Congress regarding FISMA reform. Why do you feel this reform is so needed, and what in particular do you feel would make the biggest impact? What advice would you have for technology professionals who want to advance to executive roles like the ones you've held? What do you think we as an industry can do to help encourage more women into STEM and tech fields? 

Access Control
Security Compliance & FedRAMP

Access Control

Play Episode Listen Later Feb 20, 2022 41:09


Interview with Hisham Alhakim about FedRAMP, FISMA, NIST, FIPS, SBOMs, zero trust, and collaboration with engineers.

Cloud Security Today
Fed Clouds

Cloud Security Today

Play Episode Play 33 sec Highlight Listen Later Feb 14, 2022 34:08 Transcription Available


In a world where cyber-attacks are ever-changing, cybersecurity has to adapt accordingly. Joining us today to delve into the world of cloud security for federal agencies is Sandeep Shilawat, Vice President of Cloud and Edge Computing at ManTech. Sandeep has extensive experience in both commercial and federal technology markets. We'll get to hear his predictions on where the cloud world is heading, as well as what the Federal Authority to Operate (ATO) process will look like in the future. We learn the benefits of cloud compliance standards, as well as how FedRAMP is leveling the playing field in federal cloud computing. We also touch on the role of 5G in cloud computing, and why its presence will be disruptive going forward. Join us as we pick Sandeep's brain for some insights into the present and future of federal cybersecurity. Tweetables: “Visibility has become [the] single biggest challenge and nobody's dealing with cloud management in a multi-cloud perspective from cradle to grave.” — @Shilawat [0:09:03] “I think that having a managed cloud service is probably the first approach that should be considered by an agency head. I do think that that's where the market is heading. Sooner or later, it will probably become a de facto way of doing cloud security.” — @Shilawat [0:19:43] Comprehensive, full-stack cloud security: secure infrastructure, apps, and data across hybrid and multi-cloud environments with Prisma Cloud.

The Daily Scoop Podcast
New Army Climate Strategy; Rethinking federal office spaces; Changing cyber incident reporting

The Daily Scoop Podcast

Play Episode Listen Later Feb 10, 2022 32:56


On today's episode of The Daily Scoop Podcast, former New York City Mayor Michael Bloomberg has been nominated to lead the Pentagon's Defense Innovation Board. As you heard yesterday, new cyber legislation in Congress combines aspects of FISMA, FedRAMP and cybersecurity reporting. John Zangardi, president and CEO of Redhorse Corporation and former chief information officer at the Department of Homeland Security, explains the impact this legislation would have on federal CIOs. The Army calls its new climate strategy “a roadmap of actions that will enhance unit and installation readiness and resilience in the face of climate-related threats.” John Conger, director emeritus of the Center for Climate and Security and senior advisor to the Council on Strategic Risk, discusses the significance of the new strategy. Federal agency back-to-office plans so far include a mix of in-office time and remote work time for lots of employees. Dan Mathews, head of federal sales at WeWork and former commissioner of the Public Buildings Service at the General Services Administration, discusses how the government will need to adjust office spaces to fit the workplace of the future. The Daily Scoop Podcast is available every weekday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Google Podcasts, Spotify and Stitcher. And if you like what you hear, please let us know in the comments.

Mixxstation Radio Live
GET INTO CYBERSECURITY COMMERCIAL

Mixxstation Radio Live

Play Episode Listen Later Feb 8, 2022 0:31


O-Line Security is a cybersecurity consultancy and a CompTIA Authorized Training Organization. We provide high-quality training and services. We have a great history of customer satisfaction as evidenced by our online reviews, and we pride ourselves on being able to provide first-class services and education at a fantastic value. Our consulting services help you create a robust security environment that combats current and emerging threats, ensures your most valuable assets are identified and protected, and effectively develops and matures your security policies to support your business goals. Our training academy equips you with the skills, confidence, and ability to pass industry certifications and excel in the workplace. O-Line Security is where professionals come to advance their careers with certifications and skills training. It's where employers and employees implement best security practices supported by FISMA and NIST publications. We are here to empower you with technical expertise and knowledge to combat your most concerning security issues.

Federal Drive with Tom Temin
Congress wants to overhaul FISMA. Agencies are already measuring security differently

Federal Drive with Tom Temin

Play Episode Listen Later Feb 2, 2022 16:20


Members of Congress are pushing to overhaul federal cybersecurity standards. But agencies are already starting to measure security a lot differently this year. That's because the White House made some big revisions to quarterly cybersecurity metrics. Federal News Network's Justin Doubleday reported on Federal Drive with Tom Temin.

The Daily Scoop Podcast
New Federal CISO Power Coming; Zero Trust Inside CISA

The Daily Scoop Podcast

Play Episode Listen Later Jan 28, 2022 22:16


On today's episode of The Daily Scoop Podcast, agencies should start asking employees about their booster status, according to the Safer Federal Workforce Task Force. The State Department says a “technical explanation” is behind an email problem it suffered Thursday. Dave Nyczepir explains what that means. The federal chief information security officer would get new budget authority under new FISMA legislation in the House. Former Federal CISO Grant Schneider testified about the bill and shares what he told Congress about the idea. The Cybersecurity and Infrastructure Security Agency will be one of the pivot points of this week's Zero Trust Strategy for the federal government. The agency is working on its own zero trust items too. Robert Costello is chief information officer of CISA. He talked about it with Scoop News Group's Wyatt Kash.

The Daily Scoop Podcast
Changes to make on the FITARA scorecard; Modernizing FISMA; Integrating automation to improve CX

The Daily Scoop Podcast

Play Episode Listen Later Jan 21, 2022 34:10


On today's episode of The Daily Scoop Podcast, a new IT modernization caucus in the House of Representatives. Dan Chenok, executive director at the IBM Center for The Business of Government and former branch chief for information policy and technology for the Office of Management and Budget, explains how integrating automation can help improve the government's delivery on the recent customer experience executive order. Gordon Bitko, senior vice president at Information Technology Industry Council and former FBI chief information officer, discusses his recommendations to Congress for modernizing FISMA. Dave Powner, executive director of the Center for Data-Driven Policy at MITRE and former director for IT Issues at GAO, talks with Francis about his takeaways from the new FITARA scorecard and what to look for in FITARA 14. The Daily Scoop Podcast is available every weekday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Google Podcasts, Spotify and Stitcher. And if you like what you hear, please let us know in the comments.

The Daily Scoop Podcast
Recommendations to improve FISMA implementation; New FITARA scorecard; improving cyber awareness

The Daily Scoop Podcast

Play Episode Listen Later Jan 20, 2022 41:20


On today's episode of The Daily Scoop Podcast, the new FITARA scorecard is out. Congress is moving to reform the Federal Information Security Management Act. Former NASA Chief Information Officer Renee Wynn explains her recommendations to the House Oversight and Reform Committee. FISMA reform in Congress could change several things about how agencies do the business of cybersecurity and how they show their work. Jennifer Franks, director of information technology and cybersecurity issues at the Government Accountability Office, gives an update on the implementation of FISMA requirements across government. Kristina Balaam, senior threat researcher for threat intelligence at Lookout, explains steps organizations can take to make sure their employees avoid cyberattacks. This interview is sponsored by Lookout. The Daily Scoop Podcast is available every weekday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Google Podcasts, Spotify and Stitcher. And if you like what you hear, please let us know in the comments.

The Daily Scoop Podcast
Modernizing FISMA; Legacy of the Cyberspace Solarium Commission

The Daily Scoop Podcast

Play Episode Listen Later Jan 13, 2022 18:00


On today's episode of The Daily Scoop Podcast, a new federal website for requesting COVID-19 rapid tests should be online this weekend. Ari Schwartz, managing director for cybersecurity at Venable and former special assistant to the President and White House senior director for cybersecurity, joins Francis to discuss legislation on Capitol Hill to modernize the Federal Information Security Management Act and improve federal responses to cyber breaches. The Cyberspace Solarium Commission transitioned to a non-profit organization at the start of the new year. Chris Cummiskey, CEO at Cummiskey Strategic Solutions and former acting under secretary for management at the Department of Homeland Security, explains the legacy of the commission and the continued push from the federal government for a unified cybersecurity infrastructure. The Daily Scoop Podcast is available every weekday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Google Podcasts, Spotify and Stitcher. And if you like what you hear, please let us know in the comments.

ATARC Federal IT Newscast
Cloud and Coffee with a Special Guest from SBA, Ryan Hillard!

ATARC Federal IT Newscast

Play Episode Listen Later Dec 15, 2021 61:51


During this Cloud and Coffee session, Ryan will share his experience being a part of SBA's first cloud project (Certify.SBA.gov), assisting SBA with its scale from zero cloud presence to four FISMA-classified systems in AWS, architecting the account structure and enterprise setup, and implementing serverless technology on SBA.gov and then spreading it out to the four other systems.

The Daily Scoop Podcast
The Daily Scoop Podcast: October 21, 2021

The Daily Scoop Podcast

Play Episode Listen Later Oct 21, 2021 34:00


On today's episode of The Daily Scoop Podcast, the Department of Homeland Security is overhauling how it hires cybersecurity professionals. Richard Spires, Principal, Richard A. Spires Consulting, former Chief Information Officer, DHS and IRS, discusses the coming update to the Federal Information Security Management Act as Congress considers a potential overhaul to FISMA. David Berteau, President and CEO, Professional Services Council, breaks down the logistical complications as the deadline approaches for federal contractors to get the COVID-19 vaccine. Alvin “Tony” Plater, Acting Chief Information Security Officer, Dept. of Navy and Rear Adm. Bob Day (USCG, ret.), former Chief Information Officer, U.S. Coast Guard and President, BlackBerry Government Solutions, join FedScoop Editor-in-Chief Billy Mitchell during SNG Live: Modernizing Federal Cybersecurity, to chat about securing the Navy's weapons systems. The Daily Scoop Podcast is available every weekday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Google Podcasts, Spotify and Stitcher. And if you like what you hear, please let us know in the comments.

Meanwhile in Security
Standing in the Rain Isn't Diving in the Sea

Meanwhile in Security

Play Episode Listen Later Sep 2, 2021 9:11


Links: Microsoft Azure Cloud Vulnerability Exposed Thousands of Databases: https://www.darkreading.com/cloud/microsoft-azure-cloud-vulnerability-exposed-thousands-of-databases Google, Amazon, Microsoft Share New Security Efforts After White House Summit: https://www.darkreading.com/operations/google-amazon-microsoft-share-new-security-efforts-post-white-house-summit New Data-Driven Study Reveals 40% of SaaS Data Access is Unmanaged, Creating Significant Insider and External Threats to Global Organizations: https://www.darkreading.com/cloud/new-data-driven-study-reveals-40-of-saas-data-access-is-unmanaged-creating-significant-insider-and-external-threats-to-global-organizations Researchers Share Common Tactics of ShinyHunters Threat Group: https://www.darkreading.com/attacks-breaches/researchers-share-common-tactics-of-shinyhunters-threat-group How to automate forensic disk collection in AWS: https://aws.amazon.com/blogs/security/ Confidential computing: an AWS perspective: https://aws.amazon.com/blogs/security/ New in October: AWS Security Awareness Training and AWS Multi-factor Authentication available at no cost: https://aws.amazon.com/blogs/security/amazon-security-awareness-training-and-aws-multi-factor-authentication-tokens-to-be-made-available-at-no-cost/ Use IAM Access Analyzer to generate IAM policies based on access activity found in your organization trail: https://aws.amazon.com/blogs/security/ TranscriptJesse: Welcome to Meanwhile in Security where I, your host Jesse Trucks, guides you to better security in the cloud.Corey: This episode is sponsored in part by Thinkst Canary. This might take a little bit to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter, and what it does is relatively simple and straightforward. 
It winds up embedding credentials, files, or anything else like that that you can generate in various parts of your environment, wherever you want them to live; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use them. It's an awesome approach to detecting breaches. I've used something similar for years myself before I found them. Check them out. But wait, there's more because they also have an enterprise option that you should be very much aware of: canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment and manage them centrally. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files that it presents on a fake file store, you get instant alerts. It's awesome. If you don't do something like this, instead you're likely to find out that you've gotten breached the very hard way. So, check it out. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I am so glad I found them. I love it.” Again, those URLs are canarytokens.org and canary.tools. And the first one is free because of course it is. The second one is enterprise-y. You'll know which one of those you fall into. Take a look. I'm a big fan. More to come from Thinkst Canary weeks ahead.Jesse: Disaster befell much of the middle south of the US when Ida slammed into the coast and plowed its way up north through the land. What does a hurricane have to do with security? Business continuity. Business continuity is the discipline of maintaining business operations, even in the face of disasters of any kind, such as a hurricane-driven storm surge running over the levees and flooding whole towns. 
If you have all your computing systems in the cloud in multiple regions, then such a disaster won't fully halt your business operations. However, you still might have connectivity issues and possibly either temporary or permanent loss of non-cloud systems. Be sure your non-cloud systems have appropriate backups off-site in another geographically disparate location. Better yet, push backups into your cloud infrastructure and consider ways to utilize that data with your cloud systems during a crisis. Hmm, perhaps you'll like it so much you will push everything else up to the cloud that isn't a laptop, tablet, or phone. Meanwhile in the news, Microsoft Azure Cloud Vulnerability Exposed Thousands of Databases. Security problems at cloud providers can potentially have catastrophic, large-scale repercussions. Keep an eye out for any problems that come up that might affect your operations and your data. Do keep in mind your platform has a direct impact on your own risk profile. Google, Amazon, Microsoft Share New Security Efforts After White House Summit. The National Institute of Standards and Technology—or NIST—is building a technology supply chain framework with the big tech companies, including Apple, Amazon, Google, IBM, and Microsoft, and this is a big deal. I'm sure the fighting amongst those companies will make this initiative die on the vine, but I hope I'm wrong. New Data-Driven Study Reveals 40% of SaaS Data Access is Unmanaged, Creating Significant Insider and External Threats to Global Organizations. Back to basics: secure your data; lock down those buckets; don't be stupid. Also, when we're talking cloud apps and services, don't assume an obfuscated link keeps anyone from accessing the application, and don't leave permissions too broad to effectively secure the data therein. Announcer: Have you implemented industry best practices for securely accessing SSH servers, databases, or Kubernetes? It takes time and expertise to set up. Teleport makes it easy. 
It is an identity-aware access proxy that brings automatically expiring credentials for everything you need, including role-based access controls, access requests, and the audit log. It helps prevent data exfiltration and helps implement PCI and FedRAMP compliance. And best of all, Teleport is open-source and a pleasure to use. Download Teleport at goteleport.com. That's goteleport.com. Researchers Share Common Tactics of ShinyHunters Threat Group. Put Indicators of Compromise—or IOC—data for the latest APT group or malware into your monitoring tool or tools. It's possible, depending on the vendor, that there are already detections you can add to your production monitoring. Save some time and look for those pre-made searches, configurations, and scripts before you make your own. How to automate forensic disk collection in AWS. Automating forensic data gathering is incredibly valuable. This not only has obvious value in security incident response, but it has value in teaching us how these parts of AWS work. This is worth a close read—several times if you need to—to understand how EBS, S3, automating EC2 actions, and CloudWatch logging—among other services—operate. There are other pieces of the glue here to learn, as well. Confidential computing: an AWS perspective. If you use EC2, you need to understand the AWS Nitro System. Their hardware-based approach to their hypervisor for virtualization combined with hardware-based security and encryption is quite well made. Everyone worried about security at all while using EC2—which I argue should be all of you—should know the concepts of how Nitro works. New in October: AWS Security Awareness Training and AWS Multi-factor Authentication available at no cost. Now, this has value. Free basic security training for average users on fundamental computer security, including things like phishing and social engineering, is an amazing gift. 
Also, how many times have I wanted to point someone to an easy-to-understand multi-factor authentication tutorial? Oh, not often; only every single day. Use IAM Access Analyzer to generate IAM policies based on access activity found in your organization trail. Creating solid IAM access policies is hard because you have to know all the things an account needs to touch to perform an operation or deliver a service. The IAM Access Analyzer is a total game-changer. You can review the activity to ensure you don't see anything nefarious happening, then apply the config generated. Now, you have a working app that has the bare minimum permissions required to function, while blocking all operations outside those things. This prevents much malware from sneakily doing other things. And now for the tip of the week. Know your compliance requirements; are you a school, preschool, K-12, college? FERPA; are you a medical facility? HIPAA; are you a US government entity? FISMA; are you conducting credit card transactions? PCI; are you storing data on an EU citizen? GDPR. The list goes on, and on, and on. You need to know every single one of the compliance requirements your systems and people touch. Most of these compliance rules and laws cover a fair amount of the same ground, so compliance with several of them isn't an order of magnitude more work than compliance with one or two of them. However, it is critical that you have clear documentation for each one on how you are compliant and what processes, data, or reports prove compliance. If you build these processes into your IT or security operations monitoring or reporting system, your life will be far better off than doing it by hand every single time someone asks—or demands—proof of compliance. And that is it for the week, folks. Securely yours, Jesse Trucks. Jesse: Thanks for listening. Please subscribe and rate us on Apple Podcasts, Google Podcasts, Spotify, or wherever you listen to podcasts. Announcer: This has been a HumblePod production. 
Stay humble.
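The tip of the week above boils down to a lookup: match an organization's attributes to the compliance regimes it touches. A minimal sketch of that idea follows; the attribute names and the mapping are hypothetical illustrations, not part of any real compliance tool.

```python
# Hypothetical sketch: map an organization's attributes to the compliance
# regimes named in the tip of the week. Attribute keys are invented here.

def applicable_regimes(org):
    """Return the set of compliance regimes an organization likely touches."""
    regimes = set()
    if org.get("handles_student_records"):   # schools, preschool, K-12, college
        regimes.add("FERPA")
    if org.get("handles_health_records"):    # medical facilities
        regimes.add("HIPAA")
    if org.get("is_us_federal_entity"):      # US government entities
        regimes.add("FISMA")
    if org.get("processes_card_payments"):   # credit card transactions
        regimes.add("PCI DSS")
    if org.get("stores_eu_personal_data"):   # data on EU residents
        regimes.add("GDPR")
    return regimes

clinic = {"handles_health_records": True, "processes_card_payments": True}
print(sorted(applicable_regimes(clinic)))  # ['HIPAA', 'PCI DSS']
```

As the episode notes, the regimes overlap heavily, so a real mapping would also track which controls satisfy several regimes at once rather than treating each in isolation.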

Federal Drive with Tom Temin
Senate report advocates FISMA reforms after finding slow progress on agency cybersecurity

Federal Drive with Tom Temin

Play Episode Listen Later Aug 4, 2021 17:45


A new Senate report lays the groundwork for potential reforms to the law governing federal cybersecurity standards. Lawmakers said many federal agencies are still struggling to comply with the law as it stands, leaving sensitive data at risk. The White House is also contemplating changes to how it oversees agency cybersecurity efforts. For the latest, Federal News Network's Justin Doubleday spoke to Federal Drive with Tom Temin.

Ask the CIO
Federal CISO DeRusha: FISMA report details a key part of cyber roadmap

Ask the CIO

Play Episode Listen Later Jun 18, 2021 43:25


Chris DeRusha, the federal chief information security officer, said the annual Federal Information Security Management Act (FISMA) report to Congress further highlights why the administration is focusing on some key areas to improve.

Federal Drive with Tom Temin
What lies ahead for federal cybersecurity

Federal Drive with Tom Temin

Play Episode Listen Later Jun 18, 2021 20:22


Agencies faced more than 30,000 cyber incidents in fiscal 2020, an 8% increase over the year before. Email phishing and website authentication continue to be among the biggest attack vectors hackers are using to get to agency networks and data. But despite this escalation, the annual Federal Information Security Management Act (FISMA) report to Congress highlights real progress. Chris DeRusha is the federal chief information security officer. He tells executive editor Jason Miller why the report, along with the recent executive order, lays out the cybersecurity path forward.

Government Matters
Agency digital customer experience recommendations, Reviewing FISMA – June 15, 2021

Government Matters

Play Episode Listen Later Jun 16, 2021 26:55


Recommendations for government digital customer experience: Nick Sinai, former deputy chief technology officer of the United States, talks about the importance of user experience on government websites and various ways organizations can improve their web presence. Reviewing the Federal Information Security Modernization Act: Karen Evans, former federal chief information officer, discusses some weaknesses of FISMA and strategies for shifting to a new model to mitigate and respond to security threats.

Government Matters
Collaboration on OPM reforms, Reform initiatives across decades, FISMA reform – April 12, 2021

Government Matters

Play Episode Listen Later Apr 13, 2021 28:33


Collaboration on Office of Personnel Management reforms: Janice Lachance, Executive Vice President for Strategic Leadership and Global Outreach at the American Geophysical Union, discusses partnerships necessary for the Office of Personnel Management to successfully move forward. Developing reform initiatives for a new administration: Dan Chenok, Executive Director of the IBM Center for the Business of Government, discusses the successful continuation of reform efforts across decades as it relates to the Biden administration. Reforming the Federal Information Security Management Act: Richard Spires, Principal at Richard A. Spires Consulting, evaluates the state of cybersecurity across government and areas of necessary reform.

CarahCast: Podcasts on Technology in the Public Sector

On behalf of F5 and Carahsoft, we would like to welcome you to today's podcast, focused around zero trust, where Scott Rose, computer scientist at NIST and a co-author of NIST's 800-207, Zero Trust Architecture publication; Gerald Caron, Director of Enterprise Network Management for the Department of State; Brandon Iske, Chief Engineer at DISA; and Jason Wilburn, zero trust engineer at F5, will discuss the pros and cons of different zero trust designs, how other federal initiatives tie into zero trust, and understanding what zero trust principles do for cybersecurity posture. Ryan Johnson: Thank you. Once again thanks, everyone, for joining. My name is Ryan Johnson. I'm a solutions engineering manager with F5 Government Solutions. Today, we have a group of exciting guests, mostly from the federal space, to discuss zero trust in theory and talk about the implementation of zero trust. First off, I have Scott Rose with NIST. Scott, would you like to talk a little bit about yourself? Scott Rose: Sure, thanks. I'm Scott Rose. I am currently at the Information Technology Lab at NIST. I am the coauthor of the NIST special publication 800-207, Zero Trust Architecture, and also attached as a subject matter expert for the upcoming NCCoE, or National Cybersecurity Center of Excellence, project on zero trust architecture. Ryan Johnson: Thank you, Scott. If anyone hasn't had a chance to read that 800-207, definitely take a look. It's well worth your time. Next off, we have Gerald Caron who's with HHS. Gerald, would you like to tell us a little about yourself? Gerald Caron: Well, I'm on detail to HHS, but technically I am the representative of the Department of State, an SES. I'm the director for Enterprise Network Management at the Department of State. Basically, I'm the infrastructure person: the network, Active Directory, a lot of the security implementation aspects of things. I am participating in and starting to co-chair the CIO council's innovation working group on zero trust. 
I am a Forrester-certified zero trust strategist as well. Ryan Johnson: Very good. Thank you, Gerald. Next up, we have Jason Wilburn with F5 Networks. He's an identity and access guru or [inaudible 00:02:20], if you will. Jason, would you like to tell us a little bit about yourself? Jason Wilburn: Sure. Thanks, Ryan. So, I'm a system engineer, covering the system integrator space for F5 Federal. But as Ryan mentioned, I am also the co-lead for [inaudible 00:02:35], which is anything related to access and authorization controls or the Access Policy Manager product. Ryan Johnson: Thank you, Jason. Next up, we have Brandon Iske with DISA. Brandon, would you like to tell us a little bit about yourself? Brandon Iske: Yes, thank you, Ryan. So, I'm Brandon Iske. I'm the Chief Engineer for our Security Enablers Portfolio. So, that includes ICAM, or Identity and Credential Access Management, zero trust reference architecture development, Public Key Infrastructure, PKI, and then Software Defined Enterprise. So, I'm part of the Defense Information Systems Agency. Again, it's a [inaudible 00:03:12] support agency to the Department of Defense. Thank you. Ryan Johnson: Well, thank you, Brandon. There are two topics we're going to talk about. The first is the theory behind zero trust: understanding federal zero trust straight from the source. The second topic is the reality: the implementation of zero trust. So, jumping into the first topic, the theory. This question is to you, Scott Rose. You're one of the authors of NIST 800-207, Zero Trust Architecture. Can you tell us briefly what problem zero trust is trying to solve, and what are the main goals? Scott Rose: Well, yeah, zero trust is the new paradigm of how you want to look at enterprise security. 
Basically it's taking a lot of the trends that we saw emerging over the last 10 years or so and pulling them together and layering them together to counter the common attack script that we see playing out there. It's where the initial breach happens. The attacker then moves laterally through the network, and then performs the actual attack: ransomware, data exfil, whatever. Then they're not discovered until the next audit, some six, eight months later. Zero trust tries to minimize that kind of attack scenario, where you segment away, you micro-segment away resources, you do endpoint security, you do strong authentication both inside the infrastructure, on-prem, as well as from outside coming in, to limit that lateral movement and make sure that every connection from a client to an enterprise resource is both authenticated and authorized. The idea is that you don't rely on your perimeter defenses anymore; you're doing it every step of the way. So, there's a little mini perimeter around, now, every resource and every user. So, you always have, at least, more knowledge, not total knowledge, of what's going on in your enterprise. Ryan Johnson: Thank you, Scott. This next question is for you, Gerald. What is the biggest misconception about zero trust? Gerald Caron: First of all, level setting on the definition is what I find most difficult, and getting people to really understand it. No offense to any of the vendors here, but depending on who you talk to, they spin the definition their own way. So getting that common understanding of what zero trust is, is really important. Some people think it's identity, but it's a little more than that. As Scott was saying, it's about protecting what's important and shifting that paradigm and that culture that we have. We're a very compliance-focused culture. 
FISMA makes us that way, with our scorecards, things like that. But I think zero trust gets us to a more effective cybersecurity posture. Commonly, we've done that peanut butter spread approach, where we try to protect everything equally, but as Frederick the Great said, "If you try to protect everything equally, you protect nothing." I'm paraphrasing that quote, but great IT innovator that he was. Really, that peanut butter spread approach is not sustainable. You can't cover everything; you can't be 100% patched when you have 109,000 workstations across the world. It's pretty unlikely.

So, what's important, as Scott was talking about? If you need to understand what zero trust is and you're grappling with that definition, I don't just suggest, I'd say do read 800-207. I believe, and Scott would agree with me, that that's going to morph as new technologies and capabilities and concepts come about; it is going to morph and mature as we go along on this journey as well.

Ryan Johnson: Yeah, I would agree with you on that. This next question's for Brandon. Looking ahead, what are the next or the biggest stumbling blocks for creating a zero trust environment?

Brandon Iske: Thank you for that question. So from my perspective, within DISA and DoD, again, we're a very large environment. So I think from our vantage point, just trying to set the standards is really where we're at. Again, we very much leverage the 800-207 as a framework for DoD and what we developed for the zero trust reference architecture. We've recently approved that, so that's available internal to the DoD right now. That's our way to get the common framework, and language, and taxonomy established across the department.

Other trends we see: again, a lot of the pillars of zero trust really do rely on existing capabilities and cybersecurity efforts that we have.
From my vantage point, I think there are a few gaps in those technologies, at least for what the department has adopted from an enterprise perspective. So, I'll talk on some of those. Again, it's making sure we're doing the existing capabilities, whether it's ICAM, whether it's endpoint, whether it's network segmentation. All those things really have to start coming together. Again, it's eliminating those stovepipes and enabling more API access to these capabilities, tighter integration, and really trying to drive towards conditional access beyond just what we do with PKI, CAC, or PIV today.

The one gap the department has been looking at pretty heavily across the board is how we access our IL5 cloud environments from the commercial internet. Really, with COVID and mass telework, a big challenge for us has been to enable secure collaboration and access to applications and data, but still with most of us being off the network. So, for [inaudible 00:09:07] that's a big challenge because, in those cases, a lot of our designs assume all the users are inside the perimeter. So, this concept really changes that, or turns the problem on its head. So again, that's secure access.

We're also looking at some of the SASE-type capabilities, or secure access service edge capabilities. But even in that space, the DoD is large. We're not going to be able to just use one vendor across the board. So, we're trying to drive interoperability of those capabilities, looking at what's best of breed, but also how can we... I don't want to have 10 agents on my computer just to be able to get to different applications across the department. So those are some of the big challenges I think still lie ahead of us, beyond just the obvious cultural challenges of getting everyone to understand the concept, build their maturity model towards that, and then adopt these concepts and integrations.

Ryan Johnson: Yeah. I would definitely agree with you. This is not a single vendor solution by any means.
This will be a grouping of different vendors, and maybe some homegrown stuff, to address these types of issues. Thank you, Brandon. The next question is for Jason Wilburn. Zero trust makes identity the new perimeter. Why does zero trust take this approach?

Jason Wilburn: So, I always laugh when I hear that it's the new perimeter, because I've heard that it's the new perimeter for 10 years. I think I even have it coined from F5 from eight years ago; they said identity is the new perimeter. So I guess my wife's car that's 10 years old is still new to her. The fact is, identity really is a linchpin in a zero trust infrastructure, because without identity, you can't really secure anything; we have to know who that person is or what is making that request. That becomes really important in a couple of things.

One is account creation. Are we creating accounts? Where do those accounts live, and how many instances of that identity actually exist across an organization, because the identity of John Smith can exist in multiple places? Really, what we're trying to do is to reduce the number of identities down to, holistically, one single identity for, say, John Smith. But the next piece of that is really getting down to how they authenticate, or how they assert themselves inside of the environment.
That really gets down to things like multifactor authentication, or, if we can really get to the holy grail, going fully passwordless. In the federal space we do a lot of passwordless authentication, doing things like smart cards, CAC, PIV, things like that.

That's really what we're trying to do: truly validate that that user is who they say they are, because to truly achieve zero trust, a lot of things revolve around knowing who that user is, and then, once that user starts doing things within the network, whether they should be able to do those things based off their permission levels, their user behavior, the device they're coming from, and where they're going to. But it all really revolves around that first step: truly identifying who that user is.

Ryan Johnson: Yeah. That ties into what everyone else has said as well, Jason. Appreciate that. The-

Gerald Caron: Ryan, can I add something to that question?

Ryan Johnson: Absolutely.

Gerald Caron: That identity-is-the-new-perimeter thing really scares me, because then people get super focused on identity and say [inaudible 00:12:57] zero trust. That's just, for lack of a better term, a pillar. Everything Jason said is absolutely important. But if Jason's account got compromised, for instance, what are the first two questions the cyber guy looking at the problem is probably going to ask? What did he have access to, and is there [inaudible 00:13:16]?

So it actually becomes about the data more than anything. It's about protecting that data at the end of the day. I think one of the things about identity is that we do it very linearly today, where it's one-time authentication, it's one-time access, and then, okay, have a nice day. It's got to be a constant, dynamic checking and rechecking of many other factors, as well as authentication and access. It's going to be continuous.

Jason Wilburn: Yeah. You're completely right, Gerald.
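Gerald's point about constant, dynamic rechecking, as opposed to one-time authentication and one-time access, can be sketched as a policy engine that weighs several signals on every single request. This is a minimal illustration only; the signal names and thresholds are hypothetical assumptions, not taken from NIST 800-207 or any vendor's product:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """One request from a subject to a resource, evaluated every time, not once at login."""
    user_authenticated: bool   # e.g. CAC/PIV or other MFA succeeded
    device_compliant: bool     # endpoint posture check (managed, patched, etc.)
    behavior_score: float      # 0.0 (normal) .. 1.0 (highly anomalous), from analytics
    resource_sensitivity: str  # "low" or "high"

def evaluate(request: AccessRequest) -> bool:
    """A policy engine sketch: every signal is weighed on every request.

    The thresholds below are illustrative; a real engine would pull policy
    from configuration and consume far richer telemetry.
    """
    if not request.user_authenticated or not request.device_compliant:
        return False
    # Sensitive resources tolerate less behavioral anomaly than routine ones.
    limit = 0.3 if request.resource_sensitivity == "high" else 0.7
    return request.behavior_score < limit

# Mid-session, the same user can lose access when telemetry changes.
before = AccessRequest(True, True, behavior_score=0.1, resource_sensitivity="high")
after = AccessRequest(True, True, behavior_score=0.5, resource_sensitivity="high")
print(evaluate(before), evaluate(after))  # True False
```

The point of the sketch is that the decision is recomputed per request, so a change in any one signal, not just the login, can revoke access.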
Identity really is just one more data point to determine access to something, right?

Gerald Caron: Yeah, I totally agree. I'd just like to clarify that that's just one piece of it. [crosstalk 00:14:01].

Ryan Johnson: Not the entire enchilada, if you will.

Gerald Caron: Correct, because I see a lot of people talk about it that way.

Jason Wilburn: No, no.

Ryan Johnson: Yeah, I would agree with you on that, because a lot of places aren't doing that currently, and they think this is the solution, but it's just, like you said, part of the solution.

Jason Wilburn: Right. The enforcement point, to take it back to Scott's document, the 207: the enforcement point will know about the identity, but it takes a lot more into consideration beyond just the user's identity. There's all that telemetry data that we're getting in. What machine are they coming from? What are they trying to access? There's a lot more information than just the user identity to determine access control.

Gerald Caron: Right. It's not always a human, right?

Jason Wilburn: That's right.

Gerald Caron: There's data flowing all the time, and then there's data at rest. So, you've got to protect that. There's not always a human involved.

Jason Wilburn: Completely right. So let's go down the road of, what do we do with the service account making an API call from one PC to another PC in the same data center? How do you validate that and secure that? Because really, when I think...
a lot of times when we talk about zero trust, we talk about remote users, or just users in general, talking to resources. What we've been trying to get away from [inaudible 00:15:24]: it doesn't really matter where the user resides, whether they live in a corporate environment, or whether they live at home, or they're in Starbucks. Where the user resides doesn't really matter, because at a network level, that's just an IP address.

We care about, one, how did they authenticate; and two, what device are they trying to access from. Not just... is he on the corporate... The corporate LAN might give us more information and more telemetry than just being on the WiFi at Starbucks, but it's more than identity, definitely.

Ryan Johnson: One thing that really hits home for me is the proliferation of modern applications, with APIs talking to everything. You've got APIs in the cloud, or even within the same agency, or interagency, or app, however. And Gerald's point about these non-human interactions, verifying those, especially when it's so spread out with different APIs. To me that really hits home. The next question is for Scott. There are multiple architectures listed in the 800-207. Why would an organization choose one architecture over another?

Scott Rose: Basically, they need to look at whatever they're trying to push a zero trust architecture on, what workflow, what mission they're doing. All that will help decide which model fits best for them. You've got to take into account what they may already own, what technology needs they have, and what they can use anyway, just configured in a different way.
Let's say they already went with vendor A and they have an installed base, but there are certain features that they're not using now; as they move towards a zero trust architecture, they just turn those on. Some things work better than others. Some solutions require agents installed, and you may not be able to put agents on things, especially if you're looking at [inaudible 00:17:28] an IoT kind of deployment. You can't push a lot of agents on those small form factor devices, so you have to go with a different model there.

But when it comes to the approaches that we described, enhanced identity governance, microsegmentation, and software-defined perimeters, I think the most mature zero trust enterprises and architectures out there will have elements of all three. With those three approaches, we're just calling out what the load-bearing technology in your architecture is, whereas the models are more about what kind of products you're using; that dictates the model. What technology are you putting the emphasis on: the identity management and governance part, the microsegmentation part, or a software-defined networking or software-defined perimeter model? All of that depends on what you're doing in that initial analysis, what the mission or workflow is that you're trying to make more secure. Then you develop the set of policies and controls around those, and those guide you as to which model you may be going towards.

Ryan Johnson: Thank you, Scott. Appreciate that. The next question is for Gerald. Looking into the future, what's next in zero trust? What technologies are going to impact zero trust security or require security in a different way than we see right now?

Gerald Caron: Technology moves so fast nowadays, you can't keep up. As I'm speaking right now, something new has just come out that I don't know about. But Brandon, I think, mentioned SASE and edge computing.
I think that's something people are very much looking at: services through the cloud. One of the things I advocate for, that I'm looking at, is that I hate being tethered to an on-premise network. We're in a new normal. Everybody's working mobile now. I have to boomerang back just to go back out to the cloud on the internet. So, how can I be untethered but have all the security and telemetry that I need to make the right decisions? That's something I'm looking at, and something I advocate for as well.

So, technology is moving so fast. I think some are a little more mature than others in this space. But I see it's going to be very competitive, because we're all looking this way now. As I said before, we're all trying to become more effective at our cybersecurity, not just check marks and becoming compliant. We really need to protect the data and the things that we need to protect. I equate it to protecting the crown jewels versus the bologna sandwich. You can have my bologna sandwich, but I'm going to put my concentration on those crown jewels.

So, understand what's important to you, and understand what the heck your risk posture is. A lot of people struggle with accepting and understanding what their risk is. There are a lot of non-technical aspects to zero trust that people need to understand: the methodologies, what your risk tolerance is, the processes, what the data is, where your data is, and what the categorization of that data is. Those are all non-technical things. There's a lot of work in those areas that people do struggle with, I find. So, there's a lot. But talking with a lot of vendors every day, I see a lot of maturity in the space, and I look forward to seeing some of the capabilities, because there are a lot of concepts in 800-207, like I talked about: ongoing authentication and ongoing access.

Right now, it's very linear still.
That's something that will be maturing, that people are looking at doing. So, I think there's a lot. I look forward to it, because a lot of people are putting their emphasis here, especially with what we just experienced with SolarWinds. There's a lot of focus in this area now, even more so than before.

Brandon Iske: Ryan, if I can add in there, I think Gerald is spot on. As we build towards more dynamic access and conditional access, and then having applications be aware of that context to govern what I can and can't do on that application, I think that's where... As all this comes together, those are the types of outcomes that we start to get at. If I'm on a personal device, in maybe a low-assurance model, maybe I can't download attachments, but I can view some content. Those additional granular controls, I think, start to become achievable once we have some of these capabilities, conditional access and aggregation of telemetry, together as well.

Jason Wilburn: If I can jump in too, Ryan. I think it's about being able to absorb the additional telemetry data, whether it's some sort of behavioral analytics coming out of a risk engine, or just coming out of various security tools. I think it was mentioned before: the breaking down of the silos between the teams. I think that's one of the biggest things about zero trust. Holistically, from a security model perspective, what we're saying is that, hey, it all needs to work together at a single point of control that is closest to the resource, as Gerald mentioned. There can be some context around it. No longer is it just the firewall blocking IPs and things like that, and DLP looking at data exfil, and antivirus looking at what's happening on the server from a virus perspective, or malware happening on the client.
It all needs to work together, and it all needs to come back, because that becomes part of the behavior of the workflow happening between the client and the resources they're accessing, so that we can truly understand: is this a permitted flow? Yeah, this is a permitted user coming from a permitted device to a resource they should be allowed to reach.

But it's based not just off what happened at the very beginning of the session, but what's happening throughout the life of the session. What's changed throughout the life of the session becomes critically important to really secure everything, because, back to Gerald's data exfil comment: cool, you've got access to the data right now. But should you be able to download some document or upload some document five minutes into the session, based off something that has changed? Maybe not.

Ryan Johnson: Yeah, I agree. That's what we're trying to get to. All right. That concludes the first topic, the theory. Now, we're going to jump into the second topic: the reality, adopting zero trust. The first question is once again for Scott Rose. What components are available to federal entities to assist in forming a zero trust architecture?

Scott Rose: Well, most of these are not really solid technologies; it's more frameworks and things that may help. There are existing government programs already out there. DHS has their CDM program. There's FICAM, things like that. These are already in place to actually build what Gerald called the pillars of zero trust. They've been in place for a while. We looked at how zero trust extends those, how it relies on those programs.

As well, from NIST, there's the Risk Management Framework. That isn't the end-all be-all, but you can think of that as a tool to help one level down.
Once you've developed that architecture, the RMF can help develop the set of controls and checks to actually ensure that what you're implementing correctly matches your stated goals. These things are in place and are basically technology neutral; whatever vendors you're using, you can always apply these frameworks and tools to help along the way.

In a way, the NIST Special Publication 800-207 is also a framework, [inaudible 00:25:53] both for the architects, but also a way for the architects to talk to the procurement people. They can, hopefully, understand what exactly you want. So when the procurement people and the architects talk to the vendors, they're all speaking that same set of terms, not just [inaudible 00:26:09] randomly zero trust or something like that. There's actually a set of rules and uses for these technologies that they can both use as a common set of terms.

Ryan Johnson: All right... Thanks again for that, Scott. The next question is for Gerald. What are the things that an enterprise needs to understand before migrating to ZTA, or zero trust architecture?

Gerald Caron: That's a really good question. Think of the difficulty that some folks are going to have. I mentioned the data: understanding the data, where it is, where it's going, and what classification it is. The where it's going: where does it normally go? What is the flow? What does normal look like? How do you baseline normal? That's going to be really difficult, because understanding what normal looks like will determine, when something happens, what actions I have to take. So, understand where that data flows, where that data resides, what it is, and who owns it, because you're going to have to work with data owners. It's going to take a village. It's not just the network guys, not just the IT guys.
It's going to take a village to do zero trust at an agency, in my estimation.

But, as Scott was saying, be on the same page with terminology and things like that. I think that's the difficult part. And I think that answers one of the questions: how do you know what abnormal is? Well, you've got to know what normal looks like to know what abnormal looks like. So I think that's really important. I like the inside-out method: start with the data, and then, all right, what's facilitating access to that data? Devices, apps. What do you do with those things? Then work back to the identity, giving the right access to the right people at the right time.

We've talked about this from the end user standpoint a lot, but I want to go back to this: the administrators are very powerful as well. So you have to address the administrators. I think that gets lost a lot of times when people talk about users accessing data. Well, your administrators need to be addressed in zero trust as well. So that's something that's difficult.

The one other thing I would say is difficult, Ryan, is that as different agencies, we all share data, and we all classify it differently. If I want to share a certain amount of data with Brandon, I do sensitive but unclassified, but he may classify it in a different way. Where do we meet when we want to share data with those different classifications, so that we can properly do that? Then when I give Brandon my data, it's my data; he's going to be a good steward of it. If he doesn't have the right things in place, now I've put my data out there. So, how can we all get on that same page? Interagency sharing is going to be a challenge as well, I think.

Ryan Johnson: Absolutely. It makes complete sense. That's a big, big challenge. The next question is for Brandon. Is it necessary to have a ZTA if the enterprise does not utilize cloud resources?

Brandon Iske: Thank you for that question. I would say absolutely.
Again, the threat is the same whether you're in the cloud or not. Whether you have disconnected resources, or closed networks, or connected networks, you still have very similar threats to some extent. So I think it absolutely applies. Whether you look across the pillars, whether it's identity or endpoint, we still have to do those same things, even with what we're doing in DoD to enhance our ICAM processes. Again, it's all about authentication and account lifecycle management. Those are the big pieces. We still have a long journey, from an enterprise perspective, to get those under control in a better fashion than what we do today.

We have CAC and PIV programs that are very strong, but again, those are a strong authenticator. It's the entire lifecycle of the additional pieces of identity that comes into play. All those same concepts apply regardless of where the data or applications exist. Another effort we've done in this arena, I would say, is our cloud-based internet isolation. This is a way that we move the end user's actual browsing out to a cloud environment. In this case, basically, my browsing session is going to be terminated in a cloud environment. From a data protection and exploit perspective, those drive-by downloads would happen in that cloud environment, not on my endpoint. It actually helps us in this mass telework environment as well.

So, I can split my traffic, going straight to the cloud for browsing, and not backhaul that all the way back through the VPN onto the internal network.
So, that's given us a few really big benefits, again, in a very hybrid model where, in some cases, we're using cloud; in other cases, we still have a huge set of legacy that's going to be on-prem for the foreseeable future, until they modernize on whatever schedule they have to modernize.

Jason Wilburn: Brandon, if I could ask a question about the browser isolation component. Is this going to be in effect when a user is accessing internal resources inside of the agencies, or is this going to be also a service that's internet-facing? So, when a user's sitting on-prem or anywhere, and he's now going to the internet, once they go to Google, is all internet traffic really going to be browser isolated? Is that the vision?

Brandon Iske: So, it is what we're doing. Basically, .com or any commercial internet browsing [inaudible 00:31:55] capability [inaudible 00:31:57] .mil is going to bypass that. So, whether I'm on a VPN or the .mil resources are already internet facing, those are the [inaudible 00:32:08]. So I mean, basically, you're not routing either way. It does allow us to basically not backhaul that traffic back onto the DODIN or [inaudible 00:32:16], for DoD terminology, our internal network.

Ryan Johnson: Thank you, Brandon. The next question is for Scott Rose. Looking to the future, what is next in zero trust? What technologies are going to impact it or require security in a different way than what we see right now? I love this question.

Scott Rose: Yeah. I don't know for sure, because everybody makes predictions and is constantly surprised at how they don't pan out. But at least in the near term, I see a lot of people focusing on IoT, like we are as well. How do you get those devices and manage them in an automated fashion, so you don't have to have human administrators going out and touching all those devices or doing something to them? They're getting to the point where you can quickly onboard them onto a network.
You know exactly what they're doing, because they say what they're doing in [inaudible 00:33:19]. The manufacturer vouches for them. You onboard them, you go through the entire lifecycle, and you offboard them if you need to, all in a more streamlined, automated fashion. That's going to be coming on as people look for IoT solutions.

The other one is, we're seeing more people looking at machine learning when it comes to developing user profiles as feedback to what we call the policy engine, or the trust algorithm. Building up, again: what does this user normally do, in order to see when something abnormal happens? You always [inaudible 00:33:57] this. You have a person, say, working in HR, and they connect to this database with all the user information. They do roughly, say, three to five gigs of traffic going back and forth from this database a day. Suddenly, you see that jump up to 800 gigs. That should cause a red flag to go up, because that's abnormal. But then again, maybe it's because it's the annual performance review, where they're downloading everything and going through everything.

Maybe that happens every year at a certain time. Then again, you're building up that profile, saying, "Okay, we know that does happen at a certain timeframe. So if it happens outside of that timeframe, then maybe something strange is going on." Those are the kinds of trends we're seeing, trying to improve the dynamic nature of zero trust. Those are the things that are just on the horizon and starting to appear.

Ryan Johnson: Thank you, Scott. The next question is for Gerald. What mistakes or what are the biggest misunderstandings with zero trust in the industry or within federal entities right now?

Gerald Caron: Definition. Understanding the totality of zero trust, understanding it as a full architecture, a full framework. People talk about it in bits and pieces.
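Scott's traffic-profile example above (an HR user who normally moves three to five gigs a day suddenly moving 800) can be sketched as a simple baseline check. This is a toy illustration with made-up numbers; a real policy engine would also model seasonal patterns such as the annual review, and the three-sigma cutoff here is an assumption:

```python
import statistics

def is_anomalous(history_gb, today_gb, sigmas=3.0):
    """Flag today's transfer volume if it sits far outside the user's own baseline.

    history_gb: past daily transfer volumes for this user/resource pair.
    sigmas: how many standard deviations from the mean counts as abnormal;
            an illustrative cutoff, not a recommended setting.
    """
    mean = statistics.mean(history_gb)
    # Guard against a perfectly flat history, where stdev would be zero.
    stdev = statistics.stdev(history_gb) or 1.0
    return abs(today_gb - mean) > sigmas * stdev

history = [3.2, 4.1, 3.8, 4.6, 3.5, 4.9, 3.9]  # a normal week, in gigabytes
print(is_anomalous(history, 4.4))  # False: within the usual range
print(is_anomalous(history, 800))  # True: worth a red flag
```

The per-user baseline is the key design choice: the same 800-gig day that is alarming for one account might be routine for another, which is why the profile feeds the policy engine rather than a fixed global threshold.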
Unfortunately, some vendors will talk about zero trust, but you've got to understand the whole landscape of it, because they may come in and do the authentication and access management piece, but not the data segmentation piece, or the app hardening piece, or the network mapping for understanding where your data's flowing. So, understand that it's not just a one-product thing. It is truly going to be an integration. It's going to take a whole effort, a whole village to do it.

So, really understanding and getting level set, and understanding the use cases and what your risk tolerance is, is very important. What are you willing to take risk on? What's important to you? Put your emphasis on what's important. The cafeteria schedule? Okay. But your medical records? I'm going to put a little more emphasis on those than on the cafeteria schedule. And understand: where does that reside? How do I protect it? So, really understand what it is you're trying to accomplish. Then, we all have our little special snowflakes in all of our different agencies. What is our little spin on things? Understanding what your use cases are, I think, is really important.

Ryan Johnson: Thank you, Gerald. The next question is for Jason. Let's go to another identity question, Jason. If identity is the new perimeter, what should federal agencies consider when looking at making identity their enforcement point? How is this achieved?

Jason Wilburn: So, it's not going to be the enforcement point. It's just going to be another piece of information, a data point that can be used by an enforcement point. To Gerald's point, it needs to be looked at holistically. Identity just needs to be one part of it. I think the biggest thing is understanding where all your identities really are within an organization. Are they all in Active Directory? Are they all in a SaaS-based [inaudible 00:37:22]? Does each application have its own directory structure?
So, while you think that John Smith's account only exists in, say, Active Directory, it might exist in multiple locations. So you need a good strategy to onboard identity, decommission identity, and then also validate identity. That ties back into needing some sort of MFA or a good authentication method.

Ryan Johnson: The next question is for Scott. What are the concerns a federal entity needs to understand before migrating to ZTA?

Scott Rose: Well, the concerns they need to worry about: basically, they need to know what they do, they need to know their mission, they need to know the risks inherent in doing their mission, and then they need to know what they have: the accounts, the network, the devices, the workflows. They need to have that knowledge first. They need to be able to detect and monitor things before they can actually start moving down this road to zero trust, because you can't really build a policy and a set of checks around things that you don't actually know about. Those are the main concerns.

The other concern is how it will impact the users. We need to educate them and make sure everybody else is on board, because if the other operating units in an organization or a federal agency are not on board, there's going to be a problem, because it may result in changes to the workflow [inaudible 00:39:02] times: the way they're accessing things, what permissions they have or don't have. There's always that learning curve when you're trying to refine these policies. If that becomes aggravating, they're going to start trying to find ways around it. That's the last thing you want, because then you have shadow IT springing up behind it, and suddenly you've got all this strange traffic on the network that people claim is very important for them to do their job. Those sorts of things.
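Jason's earlier point, that one person's identity can live in several directories at once, can be sketched as a reconciliation pass that correlates accounts on a shared attribute. The store names, record fields, and the choice of email as the join key are all illustrative assumptions, not a description of any real ICAM tool:

```python
from collections import defaultdict

# Hypothetical exports from three separate identity stores.
active_directory = [{"sam": "jsmith", "mail": "john.smith@agency.gov"}]
saas_app = [{"login": "john.smith", "mail": "john.smith@agency.gov"}]
legacy_app = [{"user": "smithj", "mail": "john.smith@agency.gov"},
              {"user": "adoe", "mail": "alice.doe@agency.gov"}]

def reconcile(*stores):
    """Group accounts across stores by email, so each person maps to one identity.

    A real ICAM effort would use stronger join keys and manual review;
    email is just an illustrative correlation attribute.
    """
    merged = defaultdict(list)
    for name, records in stores:
        for rec in records:
            merged[rec["mail"]].append((name, rec))
    return dict(merged)

identities = reconcile(("ad", active_directory),
                       ("saas", saas_app),
                       ("legacy", legacy_app))
print(len(identities))                           # 2 people
print(len(identities["john.smith@agency.gov"]))  # 3 accounts for John Smith
```

An inventory like this is what makes the onboard/decommission/validate strategy Jason describes possible: you cannot retire or re-verify an account you do not know exists.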
So you need to realize that going down the road of zero trust is a unified front. Everybody needs to take those steps together.

Ryan Johnson: Yeah. Thank you, Scott. Probably the last question here; this is directed to Gerald once again. How does zero trust relate to TIC 3.0 and CDM?

Gerald Caron: So, I think the great thing about CDM, for those that have been participating in it, is that it's such a good foundational thing that you can build on for zero trust. Brandon said it well earlier: you're probably already doing some things, so take a good inventory of the efforts you already have going on and how they fit into the zero trust architecture. There may be some tweaks. TIC, I think, is definitely a contributor to the solution, especially some of these efforts that allow for the telemetry and the services to do that untethering I was talking about, and get all that data and make decisions based off of it.

Definitely, I think the way CDM is doing the asset discovery, a lot of the understanding of the mapping, and eventually, in the subsequent phases, the network access control, so you can quarantine or trigger an action on a device, those are good things. I think they provide some good building blocks that will get you part of your zero trust solution. Not the totality, of course; we've already talked about that. But there are some good foundational pieces in place that contribute to the overall zero trust architecture.

Scott Rose: Yeah. To follow up on that, if you go through that part of NIST 800-207, we have a coauthor from DHS, and he's the head of the TIC program. We made sure that, at least in the text in those sections where we talk about CDM and TIC, we had a lot of input and review from DHS. So, we made sure that the wording and the tone match and don't contradict.
So yeah, we made sure that we were expressing the fact that these programs are interlaced.

Thanks for listening. If you would like more information on how Carahsoft or F5 can assist your federal agency, please visit www.carahsoft.com or email us at f5-sales@carahsoft.com. Thanks again for listening, and have a great day.

No President is above FISMA Law
Episode 66 US Capital Assault and FISMA Compliance

No President is above FISMA Law

Play Episode Listen Later Jan 6, 2021 6:38


In this episode the author discusses the assault on the United States Capitol and the security controls required to protect United States Capitol operations and physical boundaries.

Aprende SecTY podcast
Ep 2: Regulaciones que aplican a tu negocio

Aprende SecTY podcast

Play Episode Listen Later Dec 7, 2020 12:39


Learn SecTY! Check which industry regulations apply to your business. Each one applies to a company depending on the information it handles. It's important to know which ones apply to you so you can comply with them and avoid fines. SOX: https://www.ucipfg.com/Repositorio/MAES/MAES-04/BLOQUE-ACADEMICO/Unidad-3/lecturas/Caso_Enron_2.pdf https://www.soxlaw.com/ GLBA: https://www.ftc.gov/tips-advice/business-center/privacy-and-security/gramm-leach-bliley-act PCI-DSS: https://www.pcihispano.com/que-es-pci-dss/#:~:text=El%20est%C3%A1ndar%20PCI%20DSS%20se,Comerciantes%20(merchants)&text=Entidades%20emisoras%20(issuers) https://www.pcisecuritystandards.org/ HIPAA: https://www.hhs.gov/hipaa/for-professionals/security/index.html https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/combined-regulation-text/omnibus-hipaa-rulemaking/index.html GDPR: https://gdpr-info.eu/ FISMA: https://csrc.nist.gov/projects/risk-management We teach you how to improve information security in your business and in your life. Follow us on Facebook, Instagram, Twitter, and LinkedIn as @SecTYCS. Send me your questions or recommendations at: itsec@sectycs.com. Leave your review on iTunes/Apple Podcasts and share this episode with people who need to improve security in their business and their life. You can listen to us on iTunes/Apple Podcasts, Spotify, Stitcher, and Google Podcasts.

Craig Peterson's Tech Talk
Tech Talk with Craig Peterson Podcast: Bitcoin and Ransomware Connection, The Gig Economy, Prop 22 and More

Craig Peterson's Tech Talk

Play Episode Listen Later Nov 13, 2020 78:35


Welcome! This week I am spending a bit of time discussing Bitcoin and other cryptocurrencies and their tie to ransomware, plus a couple of things the Feds are doing, from the IRS to the DOJ. Then we get into the gig economy and the ramifications of CA Prop 22, and more, so listen in. For more tech tips, news, and updates, visit - CraigPeterson.com. --- Tech Articles Craig Thinks You Should Read: The feds just seized Silk Road’s $1 billion Stash of bitcoin Uber and Lyft in driving seat to remake US labor laws The One Critical Element to Hardening Your Employees' Mobile Security Ransom Payment No Guarantee Against Doxxing Connected cars must be open to third parties, say Massachusetts voters Tracking Down the Web Trackers Apple develops an alternative to Google search San Diego’s spying streetlights stuck switched “on,” despite a directive Paying ransomware demands could land you in hot water with the feds Windows 10 machines running on ARM will be able to emulate x64 apps soon 'It Won't Happen to Me': Employee Apathy Prevails Despite Greater Cybersecurity Awareness Rise in Remote MacOS Workers Driving Cybersecurity 'Rethink' A Guide to the NIST Cybersecurity Framework --- Automated Machine-Generated Transcript: Craig Peterson: [00:00:00] The Silk Road is back in the news as a billion dollars was just taken from their account. We're going to talk about mobile security, ransom payments, and doxxing. And of course, a whole lot more as you listen right now. Hi, everybody, of course, Craig Peterson here. Thanks for spending a little time with me today. We have a bunch to get to. I think we'll start with one of the most interesting articles this week, because this is a very big deal. We're talking about something called cryptocurrency, and I'm going to go into that a little bit. So for those of you who already know, maybe there's still something you'll learn from this little part of the discussion, and then we'll get into Bitcoin more specifically.
Then we'll get into what the Secret Service has been doing to track down some of these illegal operators, and also how this is really affecting ransomware. Those two, by the way, are tied tightly together: Bitcoin and ransomware. So I'll explain why that is as well. Cryptocurrency has been around for quite a while now. There's a concept behind cryptocurrency, and it's the most important concept of all, frankly: you have to use advanced mathematics in order to prove that you have found a Bitcoin. Time was, you'd go out and go gold mining. Heck, people are still doing it today, all over New England. It isn't just the Yukon or Alaska or Australia, et cetera. They're doing it right here. And they have proof that they found something that's very hard to find, because they have a little piece of gold, or maybe a nugget, or maybe something that's like a huge nugget. Man, I saw a picture of one out of Australia that was absolutely incredible; it takes a few people to carry the thing. That is proof, isn't it? You can take that to the bank, ultimately. You sell it to a gold dealer who gives you cash that you can then take to a bank. Then the bank account information is used to prove that you can buy something. You give someone a credit card, it runs a little check: hey, are we going to let this guy buy it? Or a debit card: hey, does he have enough money in the bank? So along that pathway, you have something that is real, that's hard, and that's the gold that was mined out of the ground. Then it very quickly becomes something that's, frankly, unreal. Time was, our currency was backed by gold, and then it was backed by silver. Now it's backed by the full faith and credit of the United States government. Not quite the same thing, is it? So we're dealing with money that isn't all that real. The United States agreed to not manipulate its currency, and we became what's called the petrodollar.
All petroleum products, particularly crude oil, are sold on international exchanges using the US dollar. China is trying to change that. Russia has tried to change that. They're actually both going to try to change it by using a cryptocurrency; at least that's their plan. The idea behind cryptocurrency is that your money isn't real either, right? Sure, you've got a piece of paper, but it's not backed by anything other than the acceptance of it by somebody else. If you walk into Starbucks and you drop down a quarter for your coffee... yeah, I know it's not a quarter; it used to be a dime. I remember when a cup of coffee was a dime, not at Starbucks, but you drop down your money. Okay, your $10 bill for a cup of coffee at Starbucks. They'll take it, because they know they can take that $10 and use it to pay an employee, and that employee will accept it, and then they can use it to buy whatever it is they need. That's how it works. With Bitcoin, they're saying: what's the difference? You have a Bitcoin. It's not real. It ultimately represents something that is real, but how is accepting a Bitcoin any different from accepting a $5 bill? What is the difference between those two, or that $10 bill you put down at Starbucks? In both cases, we're talking about something that represents the ability to trade. That's really what it boils down to. Our currencies represent the ability to trade. Remember way back when, before I was born, when a standard wage was considered a dollar a day? So people would be making money at a rate of a dollar a day. I remember that old country song: I sold my soul to the company store. And they made just enough money, basically, to pay the company for the room and board and everything else they had. Interesting times. Not fun, that's for sure, for many people caught up in it.
When you dig down behind Bitcoin, what you ultimately find at the root was a computer that spent a lot of time and money to solve a massive mathematical equation. That's the basics of how it works. That's what Bitcoin mining is. Right now, in most areas, between the electricity to run the hardware and the hardware you have to buy, it costs more to mine a Bitcoin than the coin is worth. There are computers that are purpose-made just to create these Bitcoins, just to find them, just to mine them. If you're sitting at home thinking, wow, I should get into a cryptocurrency and I'll just go ahead and mine it on my computer, that's a fun thing to think about. But in reality, you are not going to be able to justify it. You'd be better off going out and buying some gold or another precious metal. So that's how cryptocurrencies, how Bitcoin, how all of these really begin: with a computer trying to solve an incredibly complex math problem that can take weeks or months to solve. For those of you who want to dig a little bit more, it's basically using prime numbers. You might remember messing with those in school. I remember I wrote a program to determine prime numbers a long time ago, 45-plus years ago I guess it was, and it was fun because I learned a lot about prime numbers back then. But we're dealing with multi-thousand-digit numbers in some of these cases, just huge numbers, far too hard for you or me to deal with, and that's why it takes so incredibly long. Now we know how the value was started, and that was with somebody running a computer, finding that Bitcoin, and putting it on the market. Now, normally when you're looking at markets and market volatility, markets are supply and demand based, except for government interference. We certainly have a lot of that in the United States. We do not have a completely free market system, not even close. The free market says I had to dig this hole, and in order to dig that hole, I had to have a big backhoe.
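As an aside, the brute-force search Craig describes can be sketched in a few lines. One caveat: Bitcoin's actual proof of work is a SHA-256 hash puzzle rather than prime-number math, so this is a deliberately tiny sketch of that hash-puzzle idea under simplified assumptions, not real mining code.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(block_data + nonce) starts with
    `difficulty` leading zero hex digits. Each extra digit multiplies the
    expected work by 16, which is why real mining needs purpose-built hardware."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the winning nonce is the "proof" of the work done
        nonce += 1
```

The asymmetry is the whole point: checking a claimed solution takes one hash, while finding it takes enormous search, which is exactly what makes a found "coin" provable.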
Before that, I had to have a bucket or maybe some other heavy equipment to move all of the earth out of the way, the bulldozers, et cetera. Then I had to run it all through some sort of a wash plant, and all of these things cost me money. So basically it costs me, whatever it might be, a hundred bucks in order to find this piece of gold, and that hundred bucks it cost me is now the basis for the value of that piece of gold. Obviously, I'm not using real numbers, just simple numbers to give you an idea of how this works. So it's a hundred bucks for me to get that piece of gold out of the ground. Then that piece of gold goes to some form of a distributor. I'm going to sell it to somebody who's going to melt it down. They're going to assay it and say, yeah, this is a hundred percent pure gold, and then they'll sell it to someone, and then they'll sell it to someone else, and then they'll sell it to a jeweler who takes it and makes jewelry. Every step along the way, they're adding value onto it. But the basic value of gold is based on how hard it is to get and how many people want to get their hands on it: the law of supply and demand. You've seen that over the years; it's been true forever, really. That's how human trade works. Capitalism, in reality, is just the ability of strangers to trade with each other, which is an incredible concept. What we're talking about here with cryptocurrency is much the same thing. The value of cryptocurrency goes up and down a lot. Right now, one Bitcoin is worth about $15,000, almost $16,000. We'll talk about that. What is Bitcoin? How can I even buy a pizza with these silly things when one is worth 16 grand, right? It's like taking a bar of gold to buy a pizza. How do you do that? How do you deal with that? So we'll get into that, and then we'll get into the tie between cryptocurrencies, particularly Bitcoin, and the criminal underground.
That tie is extremely tight, and we'll get into what it means to you. It is tied directly into the value of Bitcoin. Right now, the basis is that it costs me 16 grand to mine a Bitcoin; therefore, that's what I'm going to sell it for. Of course, there's profit and everything else built into that $16,000 number. We've got a lot more to get to today. We're going to talk about this billion dollars, which is a real piece of money, that the feds just seized. Right now we're talking about Bitcoin: what's the value of it, how is it tied into criminal enterprises, and what's going on with the FBI seizure this week? Bitcoin's value has been going up and down. I just pulled up, during the break, a chart showing the value of Bitcoin over the last 12 months. It has been just crazy. Going back years, it was worth a dollar. I think the first Bitcoin purchase was for a pizza, which is really interesting when you get right down to it. The guy says, oh yeah, what the heck, take some Bitcoin for it. Okay, here we go: on May 22nd, 2010, Laszlo Hanyecz made the first real-world transaction by buying two pizzas in Jacksonville, Florida for 10,000 Bitcoin. 10,000 Bitcoin! So let me do a little bit of math here. Today's price is about $15,750. So he bought two pizzas for what is, at today's Bitcoin value, over $157 million. That's actually pretty simple math: $157 million. Okay, that was 10 years ago, the first Bitcoin purchase. So it has gone up pretty dramatically in price. I think the highest price for one Bitcoin was $17,900; it was almost $18,000, and then it dropped down. It has gone up and it has gone down quite a bit over the years. It seems to have had a few really hard drop-offs when it hit about 14,000. Right now it is above that. So I'm not giving investment advice here, right? That's not what I do. We're talking about the technology that's behind some of this stuff. But one Bitcoin, then, is too much for a pizza, right? So he paid 10,000 Bitcoin for his first pizza.
That's really cool, but in other words, Bitcoin was worth just a fraction of a cent each back then. Today you can't buy a pizza for one Bitcoin. So Bitcoin was designed to be chopped up, so that you can purchase and sell fractions of a Bitcoin. That's how these transactions are happening. Now, there's a lot of technology we won't get into behind all of this: how the transactions work, having a Bitcoin wallet, how the encryption works, and how all of these logs work, the journals that are kept like an accountant's ledger, and how a majority of the participants have to vote to agree that a particular transaction is valid. Every Bitcoin transaction is not only stored, but stored on thousands of computers worldwide. Okay, there's a whole lot to that, but let's get into the practical side. If you are a bad guy, if you are a thief, if you're into extortion, if you're doing any of those things, how do you do it without the government noticing? In reality, it's impossible when you get right down to it. Nothing is completely anonymous, and nothing ever will be, most likely. But they still do it anyway, because in reality, whoever's investigating, the FBI or the Secret Service, has to be interested enough in you and what you're doing to track you down. If they are interested enough, they will track you down. It really is that simple. Enter a convicted criminal by the name of Ross Ulbricht. Ross was running a website called the Silk Road on what's known as the dark web. If you've listened to the show long enough, you know the history of the dark web and that it was founded by the US government. In fact, the dark web is still maintained by the government; I'm pretty sure it's still the Navy that actually keeps the dark web online. The thinking was: we have the dark web.
It's difficult for people to track us here on the dark web, and if we use something like Bitcoin, one of these cryptocurrencies, for payment, then we are going to be a lot safer. Then they added one more thing to the mix, called a tumbler. The idea with the tumbler is this: if I'm buying something from you using Bitcoin, my wallet shows that I transferred the Bitcoin to you. All of the verification mechanisms in place around the world also know about our little transaction; everybody knows. The secrecy is based on the concept of a Swiss bank account. With a Swiss bank account, you have a number, and obviously you have a name, but it is kept rather anonymous. The same thing's true with your wallet. You have a number, a big hexadecimal number, and it is a number that you can use and trade with. But you've got a problem, because ultimately someone looking at these logs who knows who you are, or who I am, or wants to figure out who either one of us is, probably can. And once they know that, they can verify that you indeed are the person who made that purchase. So these tumblers take that transaction, and instead of me transferring Bitcoin directly to you, the Bitcoin gets transferred to another wallet, then from that wallet to another wallet, and from that wallet to another wallet, and from that wallet to a number of other wallets. Now it is much more difficult to trace, because I did not have a transaction directly with you. Who was in the middle? That's where things start getting really difficult. But as Ross Ulbricht found out, it is not untraceable. He is behind bars with two life sentences plus 40 years. What they were doing on the Silk Road was buying and selling pretty much anything you can think of. You could get any hard drug you wanted there, you could get fake IDs, anything really, even services that you might want to buy. There were thousands of dealers on the Silk Road.
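The tumbler idea described here can be sketched in a few lines. This toy version uses hypothetical wallet names and a plain Python list standing in for the public ledger; it only shows why the direct sender-to-receiver edge disappears, not how real mixers batch, split, and delay amounts.

```python
import secrets

def tumble(ledger: list, sender: str, receiver: str, amount: float, hops: int = 3):
    """Route a payment through `hops` fresh intermediate wallets.
    Every hop is still publicly recorded (as on Bitcoin's blockchain);
    the tumbler only removes the direct sender -> receiver transaction."""
    path = [sender] + [f"wallet-{secrets.token_hex(4)}" for _ in range(hops)] + [receiver]
    for src, dst in zip(path, path[1:]):
        ledger.append((src, dst, amount))  # each intermediate transfer is logged
    return path
```

As the Ulbricht case shows, this raises the cost of tracing rather than eliminating it: an investigator who links the intermediate wallets can still walk the chain end to end.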
There were over a hundred thousand buyers, according to the civil complaint that was filed on Thursday. Last week, actually. The document said that Silk Road generated revenue of over 9.5 million Bitcoin and collected commissions from those sales of more than 600,000 Bitcoin. Absolutely amazing. Now, you might wonder: okay, maybe I can buy a pizza with Bitcoin, or something illicit with Bitcoin, but how can I use it in the normal world? Well, there are places that will let you convert Bitcoin into real dollars and vice versa. In fact, many businesses have bought Bitcoin for one reason in particular. That reason is insurance. They have bought Bitcoin in case they get ransomware. They just want it sitting there, ready to use to pay ransoms. We'll talk more about that. We're turning into the Bitcoin hour today, I guess. We are talking a lot about it right now because it's one of the top questions I get asked. The IRS is saying that they may put a question on your tax return next year about cryptocurrency, specifically Bitcoin. So what's that all about? And by the way, the IRS had a hand in this conviction too. You're listening to Craig Peterson. We just mentioned a gentleman, I don't know if he's a gentleman, by the name of Ross Ulbricht, and he is behind bars for life. He was buying and selling on a website called the Silk Road. In fact, he was the guy running it, and according to his conviction, two life terms plus 40 years seems like a long time. In other words, he's not getting out. The Internal Revenue Service had gotten involved with this as well, because you are supposed to pay taxes on any money you earn. That is a very big deal when you're talking about potentially many millions of dollars. So let's figure this out. I'm going to say some 9.5 million Bitcoin, and let's say the average value of those Bitcoin over time was about $5,000 apiece. Okay, so let's see, 9.5 million times 5,000. Oh wow.
That's a big number. It comes back to $47.5 billion. There you go. Almost $50 billion. That's just rough, back-of-the-envelope math; we have no idea of the real average. But that's a lot of money to be running through a website. Then there's the commission that he made on all of those sales, said to have been more than 600,000 Bitcoin. So again, 600,000 times, let's say, an average price of $5,000 per Bitcoin: that says he probably made about $3 billion gross on those collected commissions. That is amazing. The IRS criminal investigation arm worked with the FBI to investigate what was happening here, as well as, by the way, the Secret Service. I got a briefing on this from the Secret Service, and these numbers are just staggering. But here's the problem. The guy was sentenced a few years ago; in 2015 he was prosecuted successfully. Where did all of his money go? His money was sitting there in Bitcoin, in an encrypted wallet, because part of the idea behind your Bitcoin wallet is that there are passcodes, and nobody can get at your wallet information unless they have the passcode. So they might know what your wallet number is, which they did; the Secret Service and the IRS knew his wallet number. But how could they get at that Bitcoin and the money it represents? They did. This is like something from one of those TV crime-investigation shows. I can't watch those because there's so much stuff they get wrong technically, and I just start screaming at the TV. It's one of those things. What they found is that the wallet hadn't been used in five years. Then, just last week, people who had been watching his Bitcoin wallet number found that about 70,000 Bitcoin were transferred out of the wallet. So people knew something was going on. Then we ended up getting confirmation: the feds admitted that it was them.
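The back-of-the-envelope figures from this segment work out as follows. Note that the $5,000 average price is Craig's stated assumption, not a documented figure; only the Bitcoin quantities come from the civil complaint.

```python
AVG_PRICE_USD = 5_000            # assumed average value per Bitcoin over the period
revenue_btc = 9_500_000          # Silk Road revenue, per the civil complaint
commission_btc = 600_000         # commissions collected, per the complaint

revenue_usd = revenue_btc * AVG_PRICE_USD        # roughly $47.5 billion
commission_usd = commission_btc * AVG_PRICE_USD  # roughly $3 billion gross
```

So "almost $50 billion" in gross sales and about $3 billion in commissions are the right orders of magnitude under that price assumption.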
They had gone ahead and they had a hacker get into it. So here's a quote straight from the feds that was in Ars Technica this week: "According to the investigation, individual X was able to hack into silk road and gain unauthorized and illegal access and thereby steal the illicit cryptocurrency from silk road and move it into wallets and individual X controlled. According to the investigation, Ulbricht became aware of individual X's online identity and threatened individual X for the return of the cryptocurrency to Ulbricht." So Ulbricht had his cryptocurrency stolen, which, if you are dealing with Bitcoin, is not uncommon. It's very common for the bad guys to try to hack into your Bitcoin wallet; that's part of the reason they install keyloggers, so they can see the password to your wallet. So apparently that unknown hacker did not return or spend the Bitcoin, but on Tuesday they signed a consent and agreement to forfeiture with the US Attorney's office in San Francisco and agreed to turn over the funds to the government. It's very complex. There are a lot of lengths the Silk Road founder went to, to really obfuscate the transfer of the funds. There's tons of forensic expertise that was involved, and they eventually unraveled the true origins of the Bitcoin. It is absolutely amazing. Earlier this year, they used a third-party Bitcoin attribution company to analyze the transactions that had gone through the Silk Road. They zeroed in on 54 transactions that transferred 70,000 Bitcoin to two specific wallets. I said earlier, by the way, that the wallet number was hex. It isn't hex; it's mixed upper- and lowercase characters as well as numbers, Base58. The Bitcoin was valued at about $354,000 at the time. I don't know about you, but I find this stuff absolutely fascinating. There are a lot of details on how it was all done, and they got the money back.
So with a cryptocurrency, you're not completely anonymous, as the founder of the Silk Road found out. You end up with criminal organizations trying to use it all the time. Just having and using Bitcoin can raise a red flag that you might be part of a criminal organization, so you've got to watch that, okay? In addition to that, the IRS is looking to find out what you have made with your Bitcoin transactions, because almost certainly those are taxable transactions if you've made money off of Bitcoin. Now, you'd have to talk to your accountant about writing off money that you lost when you sold Bitcoin after it had dropped. I do not own any Bitcoin. I played with this years ago: I created a wallet and started doing some mining, trying to get to know it, so I'm familiar with it. I've done it. I just haven't played with it for a long time. If you have made money on Bitcoin and you sold those Bitcoin, or even if you transferred Bitcoin and took the profits as Bitcoin, you owe money to the IRS. Now the feds have their hands on almost a billion dollars worth of Bitcoin, just from this one guy. That's it for Bitcoin for today. Next, we're going to talk about Uber and Lyft and how they're in the driver's seat right now to maybe remake labor laws in about two or three dozen states almost right away. Are you, or is somebody you know, driving for Uber or Lyft, or maybe you've been thinking about it? There are a lot of problems nationwide when it comes to employee status. We're going to talk about the gig economy right now. Hey, thanks for joining me, everybody. You are listening to Craig Peterson. Uber and Lyft are two companies that I'm sure you've heard of, part of a general category called the gig economy. The gig economy is where you have people doing small things for you or your business. That's a gig. During this election season, for instance, I turned somebody on to a site called Fiverr, F I V E R R dot com, which is a great site.
I've used it many times. I turned them on to it because they wanted a cartoon drawn, and there is no better place to go than Fiverr. Find somebody who has a style you like, and then hire them. It used to be five bucks apiece; nowadays, not so much. It could be 20, it could be a hundred, but it is inexpensive. When you hire somebody to do that as a contractor, there are rules and regulations that determine whether they are an employee versus an independent contractor. There are a lot of rules on all of this, including filing 1099s. But can you decide whether or not they are a contractor? Let's look at the rules. I'm on the IRS website right now, and they have some basic categories. Number one, behavioral control: a worker is an employee when the business has the right to direct and control the work performed by the worker, even if that right is not exercised. Then they give some examples of behavioral control, like the types of instructions given, when and where to work, the tools to use, and the degree of instruction. I think the big one is training the worker on how to do the job, because frankly, even if you're hiring somebody to do something that takes an hour, you have control over their behavior. But how about an Uber or Lyft driver? Are you telling them where to go? Duh, of course you are. Are you telling them, hey, don't take that road, because the West Side Highway is so busy this time of day? Of course you are. So under behavioral control, it looks like they might be employees. Next, financial control: does the business have a right to direct or control the financial and business aspects of the worker's job, such as a significant investment in the equipment they're using, or unreimbursed expenses? Independent contractors are more likely to incur unreimbursed expenses than employees. There you go. So no: for that Uber or Lyft driver, or that person drawing the cartoon, I don't have any financial control over their equipment. Finally, the relationship.
How do the worker and the business perceive their interaction with each other? Is the relationship described in written contracts? Even if the worker has a contract that says they are a contractor, that does not mean they actually are one. By the way, if you're not withholding taxes and paying them as an employee, and then they don't pay their taxes, when the IRS comes after somebody, they're coming after you as well for all of the taxes you did not pay. Then it goes on into the consequences of misclassifying an employee. So there are people where maybe they're an employee, maybe they're a contractor. But with Uber and Lyft, California decided to put it on the ballot, because both Uber and Lyft were saying: we're pulling out of California. California has a state income tax, and they want to collect that income tax. Plus, California was saying, oh, we care about the drivers. Maybe they do, maybe they don't; I'm a little jaded on that, I might say, because I had a couple of companies out in California way back in the day. So the California voters had it on the ballot just, what, a week ago? A little more than a week ago, maybe two, almost, now. They decided to let Uber and other gig economy companies continue to treat their workers as independent contractors. That is a very big deal, because now, with this overwhelming approval of Proposition 22, these companies are exempt from a new employment law that was passed last year in California. So what goes out the window? Well, the minimum rate of pay, healthcare provisions, et cetera. And by the way, drivers can still get the minimum pay and healthcare provisions; okay, they can still get them, they're still mandated. It's absolutely just phenomenal. Apparently, the law that was passed last year was started because these gig workers can really cut the cost of something, and other people just weren't liking it.
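The three-factor test just walked through could be caricatured as the sketch below. To be clear, this is purely illustrative: the real IRS determination weighs many facts case by case, there is no fixed scoring formula, and nothing here is legal guidance.

```python
def looks_like_employee(behavioral_control: bool,
                        financial_control: bool,
                        employee_type_relationship: bool) -> bool:
    """Crude reading of the IRS common-law categories (behavioral control,
    financial control, relationship of the parties): the more categories that
    point toward business control, the more the worker looks like an employee.
    Real determinations are far more nuanced than a majority vote."""
    factors = [behavioral_control, financial_control, employee_type_relationship]
    return sum(factors) >= 2  # toy rule: a majority of factors suggests employee

# The rideshare debate in this segment, roughly: behavioral control (routes,
# instructions) arguably yes; financial control over the driver's car, no.
```

Even this toy version shows why the rideshare question is contested: the factors genuinely point in different directions, which is exactly what Prop 22 resolved by statute rather than by the test.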
Frankly, the gig companies also outspent the opposition by a ratio of 10 to 1, which is amazing, trying to get this proposition to pass. So it's a very big deal, and what it means is that in California, these gig workers are independent contractors. But there are a couple of dozen states looking at this, including to our south, or maybe the state you're listening in, if you're listening down in Mass right now. In Massachusetts, the state attorney general has sued Uber and Lyft over worker classification, and that, of course, has nothing to do with what happened in California. There are other states looking into this right now, and you'll be totally surprised: they're all left-wing states, I'm sure. I hope you were sitting down: New York, Oregon, Washington State, New Jersey, and Illinois. Okay, so we'll see what happens here. The companies have tried to make good with the unions; the unions are pretty upset about this. There are good articles on it, so you might want to look it up online. Now, before this hour is up, I want to talk about ransom payments. I have mentioned before on the show that the Department of Justice now looks at people and businesses paying ransomware as supporting terrorist operations. Did you realize that? It's like sending money off to Osama bin Laden back in the day, because if you do pay a ransom, the odds are very good that it is going to a terrorist organization. Oh, okay, it could be Iran. Are they terrorists? No, but they do support terrorism, according to the State Department. Is Russia a terrorist state? No, but are they attacking us? Is an attack on the United States a terrorist attack? This is bringing up all kinds of really interesting points. One of them is based on arrests made about three weeks ago, where some hackers were arrested on charges of terrorism. It is affecting insurance as well.
I've mentioned before that we can pass on to our clients a million dollars worth of insurance underwritten by Lloyd's of London. Very big deal. But when you dig into all of these different types of insurance policies, we're finding that insurance companies are not paying out on cyber insurance claims. They'll go in and they'll say: you were supposed to do this, that, and the other thing; you didn't do it, so we're not paying. We've seen some massive lawsuits brought by very big, very powerful companies that did not go anywhere, because, again, they were not following best practices in the industry. So this is now another arrow in the quiver for the insurance companies, to say: wait a minute, you arrested hackers who were trying to put ransomware on machines, and did in many cases, and charged a ransom; you charged them with terrorism; therefore, the federal government has acknowledged that hacking is a form of terrorism. Isn't that kind of a big deal? So it's an act of terrorism, therefore we don't have to pay out. It's just like if your home gets bombed during a war: you don't get compensation from the insurance company. And ransomware victims who pay these bad guys to keep them from releasing data that was stolen are finding out that the data is being released anyway. So here's what's going on. You get ransomware on your machine. Time was, everything got encrypted, you got this nice big red warning label, and you paid your ransom. They gave you a key, and you had a 50% chance that you would actually get your data back. Nowadays, it has changed in a big way. They will gain control of your computer and poke around on it, often an actual person poking around on your computer. They will see if it looks interesting. If it does, they will spread laterally within your company.
We call that East-West spread. They'll find documents that are of interest and download them from your network, all without your knowledge, and once they have them, they'll decide what they're going to charge you as a ransom. So many of these bad guys, and yeah, they have companies, will ransom your machines by encrypting everything. Same as before: you pay the ransom, you get your documents back. Then they will come back to you, maybe under the guise of a different bad-guy hacker group, and say, if you don't pay this other ransom, we're going to release all your documents, and you're going to lose your business. Yeah, how's that for a change? So paying a ransom is no guarantee against them releasing your files. Hey, we've been talking about how computers are everywhere. What can we expect from our computerized cars? What can we expect from computers? Intel has had a monopoly with Microsoft called the Wintel monopoly. So if you missed part of today's show, make sure you double-check, and also make sure you are on my newsletter list. I'm surprised here how every week I get questions from people, and it's great. I love to help. When I was about 19, I was asked to read this little book and also to fill out a form that said what I wanted on my headstone. That's a heady question to ask somebody at 19 years of age, but what I said was pretty short and sweet: "he helped others." Just those three words, because that's what I always wanted to do. That's what I always enjoyed doing. You can probably tell that's why I'm doing what I'm doing right now: to help people stop the bad guys and to make their lives a little bit better in the process, right? That's the whole goal. That's the hope, anyway. If you need a little help, all you have to do is reach out. I'd be glad to help you out. Just email me, M E, at Craig Peterson dot com.
Or if you're on my email list, you'll get all of my weekly articles, everything I talk about here on the show, as well as the little emails I send out during the week with videos that I've been doing. I've been putting more together. I didn't get any out this week as I had planned to, but I probably will next week. I was able to make a couple this week, and we'll queue them up for the coming week. You'll get all of that. So just go to Craigpeterson.com/subscribe. You'll find everything there. As part of all of that, of course, you will also be getting information about the training that I do. I do all kinds of free trainings and webinars, and I've got all kinds of reports. One of the most popular ones lately has been my self-audit kit. It's a little toolkit that you can use to audit your business and see if you are compliant. It's just a PDF that you can take from the email that I send you. If you ask for it, all you have to do is ask for an audit kit: put that in the subject line, email me at Craig Peterson dot com, and we'll get you going. I've had a few people this week say, hey, can you help me out? What do I do? I help them out, and it turns out, when I'm helping them out, they're not even on my email list. So I'll start there. If you're wondering where to start, how to get up to speed a little bit, right? You don't have to know all of this stuff like the back of your hand, but you do have to have a basic understanding. Just go online and sign up: Craigpeterson.com/subscribe. I would love to have you there. Even when we get into ice-station-zebra weather here, coming up before too long, unfortunately, in the Northeast. When you're thinking about your computer and what to buy, there are a lot of choices. The big ones nowadays are a little different than they were just a couple of years ago, when you used to ask, am I going to get a Windows computer, or am I going to get a Mac?
I think there's a third choice that's really useful for most people, depending on what you're doing. If what you do is some web browsing, some email, and maybe a couple of things with video and pictures and organizing, you really should look at the third option, which is a tablet of some sort: the iPad, of course, being number one in the market. These things last a long time. They retain their value, so their higher introductory price isn't really a bad thing. They're also not that much more expensive when you get right down to it and consider their resale value. So have a look at the tablet; that's really one of the three major choices. Also, when you're deciding, you might not be aware of it, but you are also deciding what kind of processor you're going to be using. There is a lot of work going on with ARM processors. That's A R M. I started working with this class of processor, also known as RISC, which stands for reduced instruction set computing, many years ago, back in the nineties, I think it was, when I first started working with RISC machines. The big difference here is that these are not Intel chips in the iPads or our iPhones, and they aren't Intel or AMD processors in your Android phones or Android tablets. They're all using something called the ARM architecture. ARM used to stand for Advanced RISC Machine, originally Acorn RISC Machine. They've been around a while, but ARM is a different type of processor entirely from Intel. The basic Intel design tries to get as much done with one instruction as possible. So for instance, if you and I decided to meet up at Dunkin' Donuts, I might say, okay, we're going to go to the Dunkin' on Elm Street, the one that's south of Main Street, and I'll meet you there at about 11 o'clock. And then I'd give you some of the directions on how to get to the town, et cetera. And so we meet at dunks and have a good old time.
That would be a RISC architecture, which has reduced instructions: you tell it, okay, take a right turn here, take a left turn there. In the computing world, it would be, you add this and divide that, then add these and divide those, and subtract this. Now compare my little dunks story to what you end up doing with an Intel processor, what's called a CISC processor, a complex instruction set. We've already been to dunks before, that dunks in fact, so all I have to say is, I'll meet you at dunks, usual time. There's nothing else I have to say. Behind all of that is the process of getting into your car, driving down to the right town, the right street, the right dunks, and maybe even ordering. A CISC processor tries to do all of those things with one instruction. The idea is to make it simple for the programmer. If the programmer wants to multiply two double-precision floating-point numbers, and he's working at machine level, he only needs one instruction. Now, those instructions take up multiple cycles. We could get into all the details, but I think I've already got some people glazing over. These new ARM processors are designed to be blindingly fast at what matters. We can teach a processor how to add, and if we spend our time figuring out how to get that processor to add faster, we end up with an ultimately faster chip. That's the theory behind RISC, or reduced instruction set computers, and it has taken off like wildfire. So you have things like the iPad Pro now with an ARM chip in there, designed by Apple. They took the license for the basic ARM architecture and advanced it quite a bit. In fact, that iPad processor is now faster than most laptop processors made by Intel or AMD. That is an impressive feat.
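To make the dunks analogy a little more concrete, here is a purely illustrative Python sketch contrasting one complex CISC-style operation with the same work broken into simple RISC-style steps. The "instruction" names are invented for this example; real instruction sets differ.

```python
# Illustrative only: contrast one complex (CISC-style) operation with
# the equivalent sequence of simple (RISC-style) steps.
# Instruction names in the comments are invented for this sketch.

def cisc_multiply_accumulate(memory, a_addr, b_addr, dest_addr):
    """One 'instruction' that loads two operands, multiplies, and accumulates."""
    memory[dest_addr] += memory[a_addr] * memory[b_addr]

def risc_multiply_accumulate(memory, a_addr, b_addr, dest_addr):
    """The same work expressed as simple load/multiply/add/store steps."""
    r1 = memory[a_addr]        # LOAD  r1, a
    r2 = memory[b_addr]        # LOAD  r2, b
    r3 = r1 * r2               # MUL   r3, r1, r2
    r4 = memory[dest_addr]     # LOAD  r4, dest
    r4 = r4 + r3               # ADD   r4, r4, r3
    memory[dest_addr] = r4     # STORE dest, r4

mem_a = {"a": 3, "b": 4, "dest": 10}
mem_b = dict(mem_a)
cisc_multiply_accumulate(mem_a, "a", "b", "dest")
risc_multiply_accumulate(mem_b, "a", "b", "dest")
print(mem_a["dest"], mem_b["dest"])  # both end up as 22
```

Both versions produce the same result; the RISC point is that each simple step can be made very fast, which is why a chip built that way can come out ahead overall.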
So when we're looking a little bit forward, we're no longer looking at machines that just run an Intel instruction set. In other words, we're not just going to see the Intel and AMD Inside stickers on the outside of the computer. Windows 10 machines running on ARM processors are out already. Apple has announced ARM-based laptops that will be available very soon. In fact, there is a press conference scheduled, I think it's next week by Apple, the 15th, give or take; don't hold me to that one. They're probably going to announce the iPhone 12 and maybe some delivery dates for these new ARM-based laptops. These laptops are expected to last all day, really all day: 12 hours of working with them, using them. They're expected to be just as fast, or faster in some cases, as the Intel chips. So ARM is where things are going. We already have Microsoft's updated Surface Pro X, announced about two weeks ago, which is ARM-based. We've got Macs coming out that are ARM-based; in fact, I think they're going to have two of them before the end of the year. Both Apple and Microsoft are providing support for x86 apps. What that means is that the programs you have bought that were designed to run on an Intel architecture will run on these ARM chips. Now, as a rule, it's only the 64-bit programs that are going to work. If you haven't upgraded your 32-bit software to 64 bits yet, you're going to have to upgrade it before you can make the ARM migration. We're going to see less expensive computers. ARM chips are much cheaper as a whole than Intel; Intel chips are insanely high-priced. They are also going to be way more battery-efficient, so keep that in mind if you're looking for a new computer. Visual Studio Code has been updated and optimized for Windows 10 on ARM, and we're going to see more and more applications coming out.
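On that 32-bit versus 64-bit point: you can at least check what you're running with a short script. This is a hedged illustration; it only inspects the running Python interpreter and the machine architecture, not every application you have installed.

```python
import platform
import sys

def is_64bit_python() -> bool:
    """True when the running interpreter is a 64-bit build.

    sys.maxsize is 2**63 - 1 on a 64-bit build and 2**31 - 1 on 32-bit.
    """
    return sys.maxsize > 2**32

# Report the interpreter's word size and the machine's architecture
# (e.g. 'x86_64' on Intel/AMD, 'arm64' on Apple silicon).
print(is_64bit_python(), platform.machine())
```

Individual applications each have to be checked on their own, but the same idea applies: 32-bit-only binaries are the ones that block the migration.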
And it won't be long, a couple of years now, before you'll have a hard time finding some of the Intel-based software that's out there. "It won't happen to me." That's our next topic. We've got companies investing a lot of money to upgrade technology, develop security processes, and boost IT staff, yet studies are showing that they're overlooking the biggest piece of the puzzle. What is the problem? Employee apathy has been a problem for many businesses for a very long time, and nowadays it's causing problems on the cybersecurity front. As we've talked about so many times, cybersecurity is absolutely critical for any business. Businesses are being attacked sometimes hundreds of times a minute, even a second, believe it or not. Some of these websites come under attack, and if we're not paying close attention, we're in trouble. So a lot of companies have decided they need to boost their IT staff and spend on some of the hardware that's going to make life better. I am cheering them on. I think both of those are great ideas, but the bottom-line problem is there are a million-plus open cybersecurity IT jobs. So as a business, odds are excellent that you won't be able to find the type of person that you need. Isn't that a shame? But I've got some good news for you here. You can upgrade the technology; that's going to help. But if you do, make sure you're moving toward what's called a single pane of glass. You don't want a whole bunch of point solutions. You want something that monitors everything, pulls all of that knowledge together, uses some machine learning and some artificial intelligence, and from all of that automatically shuts down attacks, whether they're internal or external. That's what you're looking for. There are some vendors that have various things out there.
If you sell to the federal government, within three years you're going to have to meet these new requirements, the CMMC requirements, at level three, four, or level five, which are substantial. You cannot do it yourself; you have to bring in a cybersecurity expert who's going to work with your team and help you develop a plan. I think that's really great, really important, but here's where the good news comes in. You spent an astronomical amount of money to upgrade this technology and get all of these processes in place, you brought in this consultant who's going to help you out, and you boosted your IT staff. But studies are starting to indicate that a lot of these businesses are overlooking the biggest piece of the puzzle, which is their employees. Most of these successful attacks nowadays, better than 60%, though it depends on how you're scoring it, come in through your employees. That means that somebody clicked on a link: one of your employees clicked on a link. If you are a home user, it's exactly the same thing. The bad guys are getting you because you did something that you should not have done. Just go have a look online. If you haven't already, make sure you go to have I been pwned dot com. Pwned is spelled P W N E D. Have a look at it there online and see if your email address and the passwords that you've been using have already been compromised, already been stolen. I bet they have; almost everybody's have. Do you know what to do about that? This is part of the audit kit that I'll send to you if you ask for it. It goes through this and a whole lot of other stuff. Checking to see if your data has been stolen matters, because now they use that to trick people. They know that you go to a particular website, that you use a particular email address or password. They might've been able to get into one of these social networks and figure out who your friends are. They go and take that information, and a computer can do all of this.
They just mine it from a website like LinkedIn, find out who the managers in the company are, and then they send off some emails that look very convincing, and those convincing emails get them to click. That could be the end of it, because you are going somewhere you shouldn't go, and they're going to trick you into doing something. Knowledge really is the best weapon when it comes to cybersecurity. A lot of companies have started raising awareness among employees. I have some training that we can provide as well. It is very good; it's all video training, and it's all tracked. We buy these licenses in big bundles, so if you are a small company, contact me and I'll see if I can't just sneak you into one of these bundles. Just email me at Craig Peterson dot com, and in the subject line put something like training bundle. You need to find training for your employees, and the training programs need to explain the risk of phishing scams. Those are the big ones. Phishing scams are how most ransomware gets into businesses; that's how ransomware gets down to your computers. You also need to have simulations that clarify the steps to take when faced with a suspicious email. Again, if you want, I can point you to a free site that Google has with some phishing training, and it's really quite good. It walks you through, shows you what the emails might look like, and asks whether you'd click or not. There are a lot of different types of training programs. You've got to make sure that everybody inside your organization, or in your family, is educated about cybersecurity. What do you do when you get an email that you suspect might be a phishing email? They need to know that it needs to be forwarded to IT, or perhaps they just tell IT, hey, it's in my mailbox, and if IT has access to their mailbox, IT can look at it and verify it.
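The have-i-been-pwned check mentioned a moment ago can even be scripted. The Pwned Passwords part of that service uses a k-anonymity range API: you send only the first five hex characters of the SHA-1 of a password, and you scan the returned suffixes locally, so the password itself never leaves your machine. Here is a hedged sketch; the actual network call is shown only as a comment, so nothing is sent anywhere unless you choose to run it.

```python
import hashlib

def pwned_range_parts(password: str) -> tuple[str, str]:
    """Split the SHA-1 of a password into the 5-character prefix that gets
    sent to the Pwned Passwords range API and the suffix checked locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_parts("password")
print(prefix)  # 5BAA6

# To actually query the range API (requires network access):
#   import urllib.request
#   body = urllib.request.urlopen(
#       f"https://api.pwnedpasswords.com/range/{prefix}").read().decode()
#   compromised = any(line.split(":")[0] == suffix
#                     for line in body.splitlines())
```

Checking email addresses (rather than passwords) against the breach database goes through a separate, authenticated API, so for that the website itself is the easier route.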
You need to have really good email filters, not the type that comes by default with a Microsoft 365 subscription, but something that flags all of this, looks for phishing scams, and blocks them. There have been a ton of studies now showing that there is greater awareness of cybersecurity dangers, but the bottom-line problem is that employees are still showing a lax attitude when it comes to practicing even the most basic cybersecurity prevention methods. Trend Micro, a cybersecurity company, and we tend not to use their stuff because it's just not as good, is reporting that although 72% of employees claim to have gained better cybersecurity awareness during the pandemic, 56% still admitted to using a non-work application on a company device. That can be extremely dangerous. 66% admitted uploading corporate data to that application. This includes, by the way, things like using regular versions of Dropbox to share files between the office and home. Dropbox does have versions with all kinds of compliance considerations that do give you security, but by default, the home-user version does not give you the security you need. They're doing all of this even knowing that their behavior represents a security risk, and I think it boils right down to "it's not going to happen to me": apathy and denial. It's the same thing I've seen over and over, being a security guy for the last 30 years: apathy and denial. Don't let it happen to you. By the way, about 50% believe that they could be hacked no matter what protective measures are taken, while 43% took the polar opposite view: they didn't take the threat seriously at all and didn't believe they could be hacked. Next, we're going to talk about how macOS is driving a cybersecurity rethink. And to follow up on that last segment: millennials and Generation Z are terrible with security. They keep reusing passwords, and they accept connections with strangers most of the time.
If that's not believable, I don't know what is. They've grown up in this world of share everything with everyone. What does it matter? Don't worry about it. Yeah, I guess that's the way it goes, right? Kids these days. Which generation hasn't said that? We were just talking about millennials, Generation Z, and the whole "it won't happen to me" employee apathy, and we've got to stop that, even within ourselves, right? We're all employees in some way or another. What does that mean? It means we've got to pay attention. We've got to pay a lot of attention, and that isn't just true in the Windows world. Remember, we've got to pay attention to our network. You should be upgrading the firmware on your switches, and definitely upgrading the software and firmware in your firewalls, your routers, et cetera. Keep all of that up to date. Even as a home user, you've got a switch or more than one. You've got a router. You've got a firewall. In many cases that equipment is provided by your ISP, your internet service provider. If you've got a Comcast line or FairPoint, whatever it might be, coming into your home, they're providing you with some of that equipment, and you know what? Their top priority is not your security. I know, shocker. Their top priority is something else, I don't know what, but it sure isn't security. What I advise most people to do is basically remove that equipment, or have the ISP turn off what's called network address translation, turn off the firewall, and put your own firewall in place. I was on the phone with a lady who had been listening to me for years, and I was helping her out. In fact, we were doing a little security audit, because she ran a small business there in her home. I think she was an accountant, if I remember right. She had her computer hooked up directly to the internet. She kind of misunderstood what I was saying, so I want to make clear what I'm saying here: people should still have a firewall.
You still need a router, but you're almost always better off getting a semi-professional piece of hardware, the prosumer side, if you will, something like the Cisco Go hardware, and putting that in place instead of the equipment your ISP is giving you. We've got to keep all of this stuff up to date. Many of us think that Macs are invulnerable: Apple Macintoshes, or Apple iOS devices like our iPhones and iPads. In many ways they are. They have not been hit as hard as the Windows devices out there. One of the reasons is they're not as popular. That's what so many people who use Windows say: you don't get hit because you're just not as popular. There is some truth to that. However, the main reason is that they were designed from the beginning with security in mind, unlike Windows, where security was an absolute afterthought. And don't tell me that it's because of age. Okay, I can hear it right now; people say, well, the Mac is much, much newer than Microsoft Windows, and Microsoft didn't have to deal with all of this way back when. How I respond to that is: yeah, Microsoft didn't have to deal with it way back when, because the machines weren't connected to a network and your viruses were coming in via floppy disk. They really were; in fact, the first ones came from researchers. The operating system that Apple uses is much, much older than Windows and goes back to the late 1960s, early 1970s. So you can't give me that. It is just that they didn't care; they didn't consider security at all, which is still one of my soapbox subjects, if you will. Security matters. When we are talking about your Macs, you still have to consider security. It's a little different on a Mac; you're probably going to want to turn on some things. Windows comes with its firewall turned on; however, it has all of its services wide open. They're all available for anybody to attach to.
That's why we have our Windows hardening course that goes through what you turn off, how you turn it off, and what you should have in the Windows firewall. On the Mac side, all of these services are turned off by default, which is way more secure. If they're not there to attack, they're not going to be compromised, right? They can't even be attacked in the first place. So I like that strategy, but you might want to turn on the firewall on your Mac anyway; there are some really neat little features and functions in it. But the amount of malware attacking Apple Macintoshes nowadays is twice as much as it used to be. We've got these work-from-home people, and we've got IT professionals within companies just scrambling to make it so that these people can keep working from home. It's likely a permanent thing; it's going to be happening for a long time. But incidents of malware on the Mac are pretty limited in reality. The malware on a Mac is unlikely to be any sort of ransomware, or software that steals things like your Excel files or your Word docs. On a Mac, I should say, it is much more likely to be adware or some other unwanted program, and that's what's rising pretty fast on Macs. Mac-based companies are becoming concerned about cybersecurity issues and are paying more attention to them. Their Windows-based counterparts have had to deal with a lot of this stuff for a long time, because they were targets. So we've got to divide the Mac into two pieces, just like any other computer. You've got the operating system, with its control over things like the network, et cetera. Then you have the programs, or applications, running on that device. You want to keep both of them secure. As for the applications running on your device, Apple's done a much, much better job of sandboxing them, making them less dangerous.
The latest release, Catalina, in fact had a lot of security stuff built in, and Microsoft's Windows 10 added a lot more security, so that's all really good. Now, if you have to maintain a network of Macs, we like IBM's software; they have some great software for managing Macs. But if you want something inexpensive and very usable to configure Macs and control the software on them, have a look at Jamf, J A M F. They just had their users' conference this last weekend, where they were talking about how the landscape has changed over on the Mac side. All right, we've got one more segment left today, and I'm going to talk about these cybersecurity frameworks. What should you be using? If you are a business or a home user, what are those checkboxes that you absolutely have to check? You might've heard about cybersecurity frameworks. Well, the one that's most in use right now is the NIST cybersecurity framework, which helps guide you through the process of securing your business, or even securing your home. That's our topic. It's a great time to be out on the road and kind of checking in. We've got security threats that have been growing, quite literally, exponentially. The bad guys are really making a lot of money by extorting it from us, stealing it from us. It's nothing but frustration for us. It's never been more important to put together an effective cybersecurity risk management policy. That's true even if you're a home user and you've got yourself, your spouse, and a kid or two in the home. Have a policy; put it together. That's where NIST comes in handy. NIST is the National Institute of Standards and Technology. They've been around a long time, and they've been involved in cryptography. These are the guys and gals who give us accurate clocks. In fact, we run two clocks here for our clients, which are hyper-accurate. It's crazy, down to the millionth of a second. It's just amazing. That's who NIST is.
They've put these standards together for a very, very long time. Just before March this year, it was reported that about 46 percent of businesses had suffered cyber attacks in 2019. That was up 10% from the year before. Of course, we've all been worried about the Wuhan virus, people getting COVID-19; it is a problem. The biggest part of the problem is everybody's worried about it. Nobody wants to go to work. They don't want to go out to a restaurant. They don't want to do any of these things. You as a business owner are worried: how do you keep your business doors open? How do you provide services to the customers you have when your employees won't come in, or won't cooperate, or are paid more to stay at home than they would be to come back to work? I get it, right? I'm in the same boat. Because of that, we just have not been paying attention to some of the things we should be doing. One of the main ways that business people can measure their preparedness and their progress in managing cybersecurity-related risks is to use the cybersecurity framework developed by NIST. It is a great framework, and it provides you with different levels. The higher end of the framework is used by military contractors. Nowadays, we've been helping businesses conform to what's called NIST 800-171 and 800-53 High, which are both important cybersecurity standards. If you really, really, really need to be secure, those are the ones you're going to be going with. Right now, no matter how much security you need, I really would recommend checking it out. I can send you information on the NIST framework. I have a little flow chart I can send you to help figure out what part of the framework you should be complying with. It also helps you figure out whether, by law, you need to be complying with parts of the framework. It will really help you. It's well thought out.
It's going to make you way more efficient as you try to put together and execute your cyber risk management policy. Remember, cyber risk isn't just about the software or the systems you're running. It's the people, and it includes some physical security as well. Now, President Trump has been very concerned about this; I'm sure you've heard about it in the news as he's talked about problems with TikTok and with Huawei and some of these other manufacturers out there. Huawei is a huge problem, just absolutely huge. One of these days I can give you the backstory on that: how they completely destroyed one of the world leaders in telecommunications technology by stealing everything they had. Yeah, it's a very sad story about a company you may have heard of, founded over a hundred years ago. Back to NIST: they're non-regulatory, but they do publish guides that are used in regulations, so have a look at them, keep an eye on them. They also help federal agencies meet the requirements of something called the Federal Information Security Management Act, FISMA, which relates to the protection of government information and assets. So if you are a contractor to the federal government, pretty much any agency, you have specific requirements. Think about that: who do you sell things to? When you're dealing with the federal government, they look at everything you're doing and ask, are you making something special for us? If you are, there are more and higher standards that you have to meet as well. It just goes on and on, but this framework was created by NIST and ratified by Congress in 2014. It's used by over 30% of businesses in the US and will probably be used by 50% of businesses in the US this year. So if you're not using it, you might want to have a look. Big companies like JP Morgan Chase, Microsoft, Boeing, and Intel meet a much higher standard than most businesses need to meet.
For a lot of businesses, all you need to meet is what's called the CMMC level one standard. You'll find that at NIST as well. There are much higher levels than that, up to level five, which is just, wow: all of the stuff that you have to keep secured looks like military-level or better security, frankly. There are overseas companies using it too, by the way: in England, in Japan, in Canada, many of them. I'm looking at the framework right now. The basic framework is identify, protect, detect, respond, and recover. Those are the main parts of it; that's what you have to do as a business in order to stay in business in this day and age. They get into it in a lot more detail. They also have different tiers that you can get involved in, and then subcategories. I have all of this framework as part of the audit kit that I'll send out to any listener who asks for it. All you have to do is send an email to me, M E, at craigpeterson.com, and in the subject line just say audit kit, and I'll get back to you and email it off. It's a big PDF. You can also go to NIST in the online world and find what they have for you. Just go to NIST, N I S T dot gov, the National Institute of Standards and Technology, and you'll see the cybersecurity framework right there, with all of the stuff. You can learn more there if you want; if you're new to the framework, they've got online learning. They are really working hard to try to secure businesses and other organizations here in the US, and as I said, it's used worldwide. It's hyper, hyper important. It's the same framework that we rely on in order to protect our information and our customers' information. So NIST dot gov, check it out. If you missed anything today, you're going to want to check out the podcast. You can find it on any of your favorite podcasting platforms. It is such a different world, isn't it? We started out today talking about our cars.
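The five core functions just mentioned, identify, protect, detect, respond, and recover, lend themselves to a simple self-audit structure. Here is a hedged Python sketch; the example activities under each function are illustrative placeholders, not NIST's official categories or subcategories.

```python
# A minimal self-audit sketch built on the five NIST CSF core functions.
# The activities listed are illustrative, not NIST's official subcategories.
csf_core = {
    "Identify": ["inventory hardware and software", "classify data"],
    "Protect":  ["patch systems", "train employees", "configure firewalls"],
    "Detect":   ["monitor logs", "alert on anomalies"],
    "Respond":  ["follow the incident response plan", "notify stakeholders"],
    "Recover":  ["restore from tested backups", "review lessons learned"],
}

def audit_coverage(completed: set[str]) -> dict[str, float]:
    """Fraction of the example activities completed per core function."""
    return {
        fn: sum(act in completed for act in acts) / len(acts)
        for fn, acts in csf_core.items()
    }

done = {"patch systems", "monitor logs", "alert on anomalies"}
print(audit_coverage(done)["Detect"])  # 1.0
```

Even a rough score like this makes the gaps visible function by function, which is really the point of working through the framework.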
Our cars now are basically big mechanical devices, ever so complex, with computers controlling them. But the cars of tomorrow, being built by Tesla and other companies, are absolutely amazing as well, and they're frankly more computer than mechanical car. So what should we expect from these cars? I'm talking about longevity here. We expect a quarter-million miles from our cars today. Some of these electric vehicles may go half a million or even a million miles in the future. When they do, what can we expect? Our computers get operating system updates and upgrades for, what, five years, give or take? If you have an Android phone, you're lucky if you get two years' worth of updates. Don't use Android, people; it's just not secure. How about our cars? How long should we expect updates for the firmware in our cars? That's what we talked about first today. Ring has a new security camera that is absolutely cool. It's called the Always Home Cam. I talked about it earlier. It is a drone that flies around inside your house and ties into other Ring equipment. I think it's absolutely phenomenal. It's not quite out yet, but I'll let you know more about that. If you get ransomware and you pay the ransom, the feds are saying now that you are supporting terrorist organizations. You might want to be careful, because they are starting to knock on doors, and there's jail time behind some of these things. So watch it when it comes to ransomware, and a whole lot more as well. Make sure you visit me online. Go to Craigpeterson.com/subscribe. It's very important that you do that, and do it now, so you'll get my weekly newsletter. I've got some special gifts, including my security reboot stuff, that I'll send to you right away. Craigpeterson.com/subscribe. --- More stories and tech updates at: www.craigpeterson.com Don't miss an episode from Craig.
Subscribe and give us a rating: www.craigpeterson.com/itunes
Follow me on Twitter for the latest in tech at: www.twitter.com/craigpeterson
For questions, call or text: 855-385-5553

No President is above FISMA Law
Episode 54 US Congress is not above FISMA Law

No President is above FISMA Law

Play Episode Listen Later Nov 8, 2020 10:14


In this episode the author warns of insider threats and emphasizes vigilance in US information security through FISMA compliance.

The Podlets - A Cloud Native Podcast
The Dichotomy of Security (Ep 10)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Dec 30, 2019 44:20


Security is inherently dichotomous because it involves hardening an application to protect it from external threats, while at the same time ensuring agility and the ability to iterate as fast as possible. This in-built tension is the major focal point of today's show, where we talk about all things security. From our discussion, we discover that there are several reasons for this tension. The overarching problem with security is that the starting point is often rules and parameters, rather than understanding what the system is used for. This results in security being heavily constraining. For this to change, a culture shift is necessary, where security people and developers come around the same table and define what optimizing to each of them means. This, however, is much easier said than done as security is usually only brought in at the later stages of development. We also discuss why the problem of security needs to be reframed, the importance of defining what normal functionality is and issues around response and detection, along with many other security insights. The intersection of cloud native and security is an interesting one, so tune in today!

Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io
https://github.com/vmware-tanzu/thepodlets/issues

Hosts:
Carlisia Campos
Duffie Cooley
Bryan Liles
Nicholas Lane

Key Points From This Episode:

Often application and program security constrain optimum functionality.
Generally, when security is talked about, it relates to the symptoms, not the root problem.
Developers have not adapted internal interfaces to security.
Look at what a framework or tool might be used for and then make constraints from there.
The three frameworks people point to when talking about security: FISMA, NIST, and CIS.
Trying to abide by all of the parameters is impossible.
It is important to define what normal access is to understand what constraints look like.
Why it is useful to use auditing logs in pre-production.
There needs to be a discussion between developers and security people.
How security with Kubernetes and other cloud native programs works.
There has been some growth in securing secrets in Kubernetes over the past year.
Blast radius – why understanding the extent of a security malfunction's effect is important.
Chaos engineering is a useful framework for understanding vulnerability.
Reaching across the table – why open conversations are the best solution to the dichotomy.
Security and developers need to have the same goals and jargon from the outset.
The current model only brings security in at the end stages of development.
There needs to be a place to learn what normal functionality looks like outside of production.
How Google manages to run everything in production.
It is difficult to come up with security solutions for differing contexts.
Why people want service meshes.

Quotes:

"You're not able to actually make use of the platform as it was designed to be made use of, when those constraints are too tight." — @mauilion [0:02:21]

"The reason that people are scared of security is because security is opaque and security is opaque because a lot of people like to keep it opaque but it doesn't have to be that way." — @bryanl [0:04:15]

"Defining what that normal access looks like is critical to our ability to constrain it." — @mauilion [0:08:21]

"Understanding all the avenues that you could be impacted is a daunting task." — @apinick [0:18:44]

"There has to be a place where you can go play and learn what normal is and then you can move into a world in which you can actually enforce what that normal looks like with reasonable constraints." — @mauilion [0:33:04]

"You don't learn to ride a motorcycle on the street.
You'd learn to ride a motorcycle on the dirt." — @apinick [0:33:57]

Links Mentioned in Today's Episode:

AWS — https://aws.amazon.com/
Kubernetes — https://kubernetes.io/
IAM — https://aws.amazon.com/iam/
Securing a Cluster — https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/
TGI Kubernetes 065 — https://www.youtube.com/watch?v=0uy2V2kYl4U&list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa&index=33&t=0s
TGI Kubernetes 066 — https://www.youtube.com/watch?v=C-vRlW7VYio&list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa&index=32&t=0s
Bitnami — https://bitnami.com/
Target — https://www.target.com/
Netflix — https://www.netflix.com/
HashiCorp — https://www.hashicorp.com/
Aqua Sec — https://www.aquasec.com/
CyberArk — https://www.cyberark.com/
Jeff Bezos — https://www.forbes.com/profile/jeff-bezos/#4c3104291b23
Istio — https://istio.io/
Linkerd — https://linkerd.io/

Transcript: EPISODE 10 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores cloud native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel. If you're an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.2] NL: Hello and welcome back to The Podlets Podcast. My name is Nicholas Lane and this time, we're going to be talking about the dichotomy of security. And to talk about such an interesting topic, joining me are Duffie Cooley. [0:00:54.3] DC: Hey, everybody. [0:00:55.6] NL: Bryan Liles. [0:00:57.0] BM: Hello. [0:00:57.5] NL: And Carlisia Campos. [0:00:59.4] CC: Glad to be here. [0:01:00.8] NL: So, how's it going everybody? [0:01:01.8] DC: Great. [0:01:03.2] NL: Yeah, this I think is an interesting topic. Duffie, you introduced us to this topic.
And basically, what I understand of what you wanted to talk about, we're calling it the dichotomy of security because it's the relationship between security, like hardening your application to protect it from attack and influence from outside actors, and agility, being able to create something that's useful, the ability to iterate as fast as possible. [0:01:30.2] DC: Exactly. I mean, the idea for this came from putting together a talk for the security conference coming up here in a couple of weeks. And I was noticing that obviously, if you look at the job of somebody who is trying to provide some security for applications on their particular platform, whether that be AWS or GCE or OpenStack or Kubernetes or any of these things, it's frequently in their domain to kind of define constraints for all of the applications that would be deployed there, right? Such that you can provide rational defaults for things, right? Maybe you want to make sure that things can't do a particular action because you don't want to allow that for any application within your platform, or you want to provide some constraint around quota, or all of these things. And some of those constraints make total sense, and some of them I think actually do impact your ability to design the systems or to consume that platform directly, right? You're not able to actually make use of the platform as it was designed to be made use of, when those constraints are too tight. [0:02:27.1] DC: Yeah. I totally agree. There's kind of a joke that we have in certain tech fields which is that the primary responsibility of security is to halt productivity. It isn't actually true, right? But there are tradeoffs, right? If security is too tight, you can't move forward, right? An example of this that comes to mind is, if you're too tight on your firewall rules, you can't actually use anything of value. That's a quick example of security gone haywire. That's too controlling, I think. [0:02:58.2] BM: Actually.
This is an interesting topic just in general, but I think that before we fall prey to what everyone does when they talk about security, let's take a step back and understand why things are the way they are. Because all we're talking about are the symptoms of what's going on, and I'll give you one quick example of why I say this. Things are the way they are because we haven't made them any better. In developer land, whenever we consume external resources, what we're supposed to do, and what we should be doing but don't do, is create our internal interfaces, only program to those interfaces, and then let that interface adapt or talk to the external service. And in the security world, we should be doing the same thing, and we don't do this. My canonical example for this is IAM on AWS. It's hard to create a secure IAM configuration, and it's even harder to keep it over time, and it's even harder to do it whenever you have 150, 100, 5,000 people dealing with this. What companies do is they actually create interfaces where they can describe the part of IAM they want to use and then they translate that over. The reason I bring this up is because the reason that people are scared of security is because security is opaque, and security is opaque because a lot of people like to keep it opaque. But it doesn't have to be that way. [0:04:24.3] NL: That's a good point. That's a reasonable design, and wherever I see that adopted it actually is very helpful, right? Because you highlight a critical point in that these constraints have to be understood by the people who are constrained by them, right? Otherwise it will just continue to kind of drive that wedge between the people who are responsible for defining them and the people who are being affected by them, right? That transparency, I think, is definitely key. [0:04:48.0] BM: Right. This is our cloud native discussion; any idea of where we should start thinking about this in cloud native land?
[0:04:56.0] DC: For my part, I think it's important to understand, if you can, what the consumer of a particular framework or tool might need, right? And then just take it from there and figure out what rational constraints are. Rather than the opposite, which is frequently where people go and evaluate a set of rules as defined by some third-party company. You look at CIS benchmarks and you look at a lot of this other tooling, and I feel like a lot of people look at those as, these are the hard rules, we must comply with all of these things. Legally, in some cases, that's the case. But frequently, I think they're just kind of casting about for some semblance of a way to start defining constraints, and they go too far; they're no longer taking into account what the consumers of that particular platform might need, right? Kubernetes is a great example of this. If you look at the CIS spec for Kubernetes, or if you look at a lot of the talks that I've seen kind of around how to secure Kubernetes, we define best practices for security, and a lot of them are incredibly restrictive, right? I think the problem there is that restriction comes at a cost of agility. You're no longer able to use Kubernetes as a platform for developing microservices because you've provided so much constraint that it breaks the model, you know? [0:06:12.4] NL: Okay. Let's break this down again. I can think, off the top of my head, of three types of things people point to when I'm thinking about security. And spoiler alert, I am going to use some acronyms, but don't worry about what the acronyms are, just understand they are security things. The first one I'll bring up is FISMA, and then I'll think about NIST, and the next one is CIS, like you brought up.
Really, the reason they're so prevalent is because, depending on where you are, whether you're in a highly regulated place like a bank, or you're working for the government, or you have some kind of audit concern, say HIPAA or something like that, these are the words that the auditors will use with you. There is good in those. People don't like the CIS benchmarks because sometimes we don't understand why they're there. But for someone who is starting from nothing, those are actually great; there's at least a great set of suggestions. The problem is you have to understand that they're only suggestions, and they are trying to get you to a better place than you might need. But the other side of this is that we should never start with NIST or CIS or FISMA. What we really should do is have our CISO, our Chief Security Officer, or the person in charge of security, or even just the people who are in charge of making sure our stack is secure, take what they know, whether it's the standards, and build up this security posture, this security document, these rules that are built to protect whatever we're trying to do. And then the developers or whoever else can operate within that, rather than taking everything literally. [0:07:46.4] DC: Yeah, agreed. Another thing I've spent some time talking to people about, like when they start rationalizing how to implement these things, or even just think about the attack surface, or develop a threat model, or any of those things, right? One of the things that I think is important is the ability to define kind of what normal looks like, right? What normal access between applications or normal access of resources looks like. I think that your point earlier, that maybe you provide some abstraction in front of a secure resource such that you can actually just share that same abstraction across all the things that might try to consume that external resource, is a great example of the thing.
Defining what that normal access looks like is critical to our ability to constrain it, right? I think that frequently people don't start there; they start with the other side. They're saying, here are all the constraints, you need to tell me which ones are too tight. You need to tell me which ones to loosen up so that you can do your job. You need to tell me which application needs access to whichever application so that I can open the firewall for you. I'm like, we need to turn that on its head. We need the environments that are perhaps less secure so that we can actually define what normal looks like, and then take that definition and move it into a more secured state, perhaps by defining these across different environments, right? [0:08:58.1] BM: A good example of that would be in larger organizations. Not every part of the organization does this, but there are environments running your application where there are really no rules applied. What we do with that is we turn on auditing in those environments, so you have two applications, or a single application that talks to something, and you let that application run, and then after the application runs, you go take a look at the audit logs and you determine at that point what a good profile of this application is. Whenever it's in production, you set up the security parameters, whether it be identity access or network, based on what you saw in auditing in your preproduction environment. That's all you can run, because we tested it fully in our preproduction environment; it should not do any more than that. And that's actually something – I've seen tools that will do it for AWS IAM. I'm sure you can do it for anything else that creates audit logs. That's a good way to get started. [0:09:54.5] NL: It sounds like what we're coming to is that the breakdown of security, or the way that security has impacted agility, is when people don't take a rational look at their own use case.
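The audit-driven approach Bryan describes can be sketched in a few lines: run the application in a permissive pre-production environment with auditing turned on, then distill the observed access into an allow list. Here is a minimal illustration in Python; the JSON log format, the field names, and the `build_allow_list` helper are all invented for the sketch (real audit logs, such as Kubernetes audit events or AWS CloudTrail records, carry far more fields, but the shape of the problem is the same).

```python
import json
from collections import defaultdict

def build_allow_list(audit_lines):
    """Distill raw audit events into a least-privilege allow list.

    Each line is assumed to be a JSON object such as:
      {"principal": "orders-svc", "action": "read", "resource": "db/orders"}
    """
    allowed = defaultdict(set)
    for line in audit_lines:
        event = json.loads(line)
        # Record every (action, resource) pair this principal was observed using.
        allowed[event["principal"]].add((event["action"], event["resource"]))
    # One rule per pair actually observed; anything not listed here
    # stays denied by default when the profile is enforced in production.
    return {principal: sorted(pairs) for principal, pairs in allowed.items()}

if __name__ == "__main__":
    log = [
        '{"principal": "orders-svc", "action": "read", "resource": "db/orders"}',
        '{"principal": "orders-svc", "action": "read", "resource": "db/orders"}',
        '{"principal": "orders-svc", "action": "write", "resource": "queue/invoices"}',
    ]
    print(build_allow_list(log))
```

The point of the sketch is the direction of the workflow: the enforced policy is derived from observed behavior, rather than written up front and loosened on request.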
Instead, they rely too much on the guidance of other people, essentially. Instead of using things like the CIS benchmarks or NIST or FISMA (that's the one where I knew the other two and I'm like, I don't know this other one), if they follow them less as guidelines and more as hard-set rules, that's when we get impacts on agility. Instead of, "Hey, this is what my application needs, let's go from there," like you're saying. I'm kind of curious, let's flip that on its head a little bit: are there examples of times when agility impacts security? [0:10:39.7] BM: You want to move fast, and moving fast is counter to being secure? [0:10:44.5] NL: Yes. [0:10:46.0] DC: Yeah, literally every single time we run software. What it comes down to is developers are going to want to develop, and then security people are going to want to secure. And generally, I'm looking at it from a developer who has written security software that a lot of people have used, as you guys know. Really, there needs to be a conversation. It's the same thing as this DevOps conversation we had for a year, and then over the last couple of years, this whole DevSecOps conversation has been happening. We need to have this conversation because, from a security person's point of view, you know, no access is great access. No data: you can't get owned if you don't have any data going across the wire. You know what? Can't get into that server if there's no ports opened. But practically, that doesn't work, and what we find is that there is actually a failing on both sides to understand what the other person was optimizing for. [0:11:41.2] BM: That's actually where a lot of this comes from. I will offer up that the only default secure posture is no access to anything, and you should be working from that direction to where you want to be, rather than working from, what should we close down?
You should close down everything and then work with an allow list rather than a block list. [0:12:00.9] NL: Yeah, I agree with that model, but I think that there's an important step that has to happen before that, and that's, you know, the tooling or the wherewithal to define what the application looks like when it's in a normal state or the running state. If we can accomplish that, then I feel like we're in a better position to find what that allow list looks like. And I think that's one of the other challenges there, of course. Let's back up for a second. I have actually worked on a platform that supported many services, hundreds of services, right? Clearly, if I needed to define what normal looked like for a hundred services or a thousand services or 2,000 services, that's going to be difficult in the way that people approach the problem, right? How do you define it for each individual service? I need to have some declaration of intent. I need the developer to engage here and tell me what they're expecting, to set some assumptions about the application, like what it's going to connect to, what those dependencies are – that sort of stuff. And I also need tooling to verify that. I need to be able to kind of build up the whole thing so that I have some way of automatically, you know, maybe with oversight, defining what that security context looks like for this particular service on this particular platform. Trying to do it holistically is actually I think where we get into trouble, right? Obviously, we can't scale the number of people that it takes to actually understand all of these individual services. We need to actually scale this stuff as a software problem instead. [0:13:22.4] CC: With the cloud native architecture and infrastructure, I wonder if it makes it more restrictive, because let's say these are running on Kubernetes, everything is running on Kubernetes. Things are more connected because it's Kubernetes, right?
It's this one huge thing that you're running on, and Kubernetes makes it easier to have access to different nodes, and when you take the nodes apart, of course, you have to find this connection. Still, it's supposed to make it easy. I wonder if security, from the perspective of somebody needing to put a restriction in place, for example, makes it harder, or if it makes it easier to just delegate: you have this entire area here for you, and because your app is constrained to this space, or namespace, or this part, this node, then you can have as much access as you need. Is there any difference? Do you know what I mean? Does it make sense what I said? [0:14:23.9] BM: It's actually exactly the same thing as we had before. We need to make sure that applications have access to what they need and don't have access to what they don't need. Now, Kubernetes does make it easier, because you can have network policies and you can apply those, and they're easier to manage than whatever network management tooling you have. Kubernetes also has pod security policies, which again actually consolidates this knowledge around whether my pod should be able to do this, or should not be able to run as root; it shouldn't be able to do this and be able to do that. It's still the same practice, Carlisia, but the way that we can control it is now with a standard set of tools. We still have not cracked the whole nut, because the whole thing of turning auditing on to understand, and then having great tools that can read audit logs from Kubernetes, those just still aren't there. Just to add one more last thing: back before VMware, when we were Heptio, we had a coworker who wrote basically dynamic audit, and that was probably one of the first steps that we would need to be able to employ this at scale. We are early, early, super early in our journey in getting this right; we just don't have all the necessary tools yet. That's why it's hard and that's why people don't do it.
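For readers who want to see what the network policies Bryan mentions look like on the page, here is a minimal, hypothetical pair of NetworkPolicy manifests: a default deny for a namespace, then a rule allowing traffic only between pods inside that namespace. The namespace and object names are invented for illustration.

```yaml
# Default deny: selects every pod in the namespace and allows no ingress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: shop        # hypothetical namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
# Allow pods in the same namespace to talk to each other.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: shop
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
```

Note that a NetworkPolicy only takes effect if the cluster's network plugin actually enforces it; on a cluster without such a plugin, these objects are accepted but ignored.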
[0:15:39.6] NL: I do think it is nice to have those primitives available to people who are making use of that platform, though, right? Because again, it kind of opens up that conversation, right? Around transparency. The goal being, if you understood the tools that were defining that constraint, perhaps you'd have access to view what the constraints are and understand if they're actually rational or not for your applications. When you're trying to resolve, like, I have deployed my application in dev and it's the wild west, there's no constraints anywhere, I can do anything within dev, right? When I'm trying to actually promote my application to staging, it gives you some platform around which you can actually say, "If you want to get to staging, I do have to enforce these things, and I have a way, and again, all still part of that same API. I still have that same user experience that I had when just deploying or designing the application to getting them deployed." I could still look at it again and understand what constraints are being applied and make sure that they're reasonable for my application. Does my application run? Does it have access to the network resources that it needs? If not, can I see where the gaps are, you know? [0:16:38.6] DC: For anyone listening to this: Kubernetes doesn't have all the documentation we need, and no one has actually written this book yet. But on Kubernetes.io, there are a couple of documents about security, and if we have shownotes, I will make sure those get included in our shownotes. Because I think there are things that you should at least understand: what's in a pod security policy. You should at least understand what's in a network policy. You should at least understand how roles and role bindings work. You should understand what you're going to do for certificate management. How do you manage the certificate authority in Kubernetes? How do you actually work these things out?
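As a concrete reference for the roles and role bindings just mentioned, here is a minimal sketch granting one service account read-only access to pods in a single namespace. All names (the `shop` namespace, the `orders-svc` service account) are invented for the example.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: shop        # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]        # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: shop
  name: read-pods
subjects:
- kind: ServiceAccount
  name: orders-svc       # hypothetical service account
  namespace: shop
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The Role states what is allowed; the RoleBinding states who gets it. Anything not granted by some role stays denied, which matches the default-deny posture discussed above.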
This is where you should start before you do anything else really fancy. At least understand your landscape. [0:17:22.7] CC: Duffie did a TGIK talk on secrets. I think, was that a series? There were a couple of them, Duffie? [0:17:29.7] DC: Yeah, there were. I need to get back and do a little more, but yeah. [0:17:33.4] BM: We should then add those to our shownotes too. Hopefully they actually exist, or I'm willing to see to it that they do. [0:17:40.3] CC: We are going to have shownotes, yes. [0:17:44.0] NL: That is an interesting point, bringing up secrets and secret management and also, like, secure secret storage. There are some tools that exist that we can use now in a cloud native world, at least in the container world. Things like Vault exist; things like, well, now, with kubeadm you can roll certificates, which is really nice. We are getting to a place where we have more tooling available and I'm really happy about it. Because I remember using Kubernetes a year ago and everyone's like, "Well, how do you secure a secret in Kubernetes?" And I'm like, "Well, it's just base64 encoded. That's not at all secure." [0:18:15.5] BM: I would give credit to Bitnami; they have been doing sealed secrets, and that's been out for quite a while. But the problem is, how are you supposed to know about that, and how are you supposed to know if it's a good standard? And then also, how are you supposed to benchmark against that? How do you know if your secrets are okay? We haven't talked about the other side, which is response or detection of issues. We're just talking about starting out: what do you do? [0:18:42.3] DC: That's right. [0:18:42.6] NL: It is tricky. We're just saying, like, understanding all the avenues that you could be impacted is kind of a daunting task. Let's talk about the Target breach that occurred a few years ago.
If anybody doesn't remember this, basically, Target had a huge credit card breach from their database. Basically, what happened is that, if I recall properly, their OIDC token had not expired, but the audience for it was so broad that someone had hacked into one computer, essentially like a register or something, and they were able to get the OIDC token from the local machine. The authentication audience for that whole token was so broad that they were able to access the database that had all of the credit card information in it. This is one of those things that you don't think about when you're setting up security, when you're just maybe getting started or something like that. What are the avenues of attack, right? You'd say, like, "OIDC is just a pure authentication mechanism, why would we need to concern ourselves with this?" And then, not understanding, kind of what we were talking about last, the networking and the broadcasting: what is the blast radius of something like this? And so, I feel like this is a good example of how sometimes security can be really hard and getting started can be really daunting. [0:19:54.6] DC: Yeah, I agree. To Bryan's point, it's like, how do you test against this? How do you know that what you've defined is enough, right? We can define all of these constraints and we can even think that they're pretty reasonable or rational, and the application may come up and operate, but how do you know? How can you verify that what you've done is enough? And then also, remember, OIDC has its own foundations and flaws. You realize that it's a very strong door, but it's only a strong door; it can't do anything about someone walking around the wall that it's protecting or climbing over the wall that it's protecting. There's a bit of trust, and when you get into things like the Target breach, you really have to understand blast radius for anything that you're going to do.
A good example would be if you're using shared-key kind of things, or like public shared keys. You have certificate authorities and you're generating certificates. You should probably have multiple certificate authorities, and you can have, basically, a hierarchy of these, so you could have the root one controlled by just a few people in security, and then each department has their own certificate authority. And then you should also have things like revocation; you should be able to say, "Hey, all this is bad and it should all go away," and you probably should have a revocation list, which a lot of us don't have, believe it or not, internally. Where if I actually kill a certificate, a certificate that was generated, and I put it in my revocation list, it should not be served, and our clients that are accessing that service, if we're using client-side certificates, should see that and reject it instantly. Really, what we need to do is stop looking at security as this one big thing, and we need to figure out what our blast radius is. A firecracker blowing up in my hand is going to hurt me, but Nick, it's not going to hurt you, you know? Versus someone dropping a huge nuclear bomb on the United States, or the west coast of the United States (I'm talking to myself right now). You've got to think about it like that. What's the worst that can happen if this thing gets busted or gets shared, or someone finds out something that should not happen? For every piece of data that you have that you consider secure or sensitive, you should be able to figure out what that means, and that, to me, is how you should be defining a security posture. That is why you'll notice that a lot of companies, some of them, do run open within a contained zone. Within this contained zone you could talk to whomever you want. We don't actually have to be secure here, because if we lose one, we lost them all, so who cares?
So, we need to think about that, and how do we do that in Kubernetes? Well, we use things like namespaces first of all, and then we use things like network policies, and then we use things like pod security policies. We can lock some access down to just namespaces if need be: you can only talk to pods in your namespace. And I am not telling you how to do this, but you need to figure it out talking with your developers, talking to the security people. But if you are in security, you need to talk to your product management staff and your software engineering staff to figure out really how this needs to work. So, you realize that security is fun, and we have all sorts of neat tools depending on what side you're on. You know, if you are on red team, you're attacking things; if you're blue team, you are saving things. We need to figure out these conversations, and tooling comes from these conversations, but we need to have these conversations first. [0:23:11.0] DC: I feel like a little bit of a broken record on this one, but I am going to go back to chaos engineering again, because I feel like it is critical to stuff like this, because it enables a culture in which you can explore the behavior of the applications themselves, but why not also use this model to explore different ways of accessing that information? Or coming up with theories about the way the system might be vulnerable based on a particular attack or a type of attack, right? I think that this is actually one of the movements within our space that provides the most hope in this particular scenario, because a reasonable chaos engineering practice within an organization enables that ability to explore all of the things. You don't have to be red team or blue team. You can just be somebody who understands this application well, and the question for the day is, "How can we attack this application?" Let's come up with theories about the way that perhaps this application could be attacked.
Think about the problem differently: instead of thinking about it as an access problem, think about it as the way that you extend trust to the other components within your particular distributed system, like, do they have access that they don't need? Come up with a theory around being able to use some proxy component of another system to attack yet a third system. You know, start playing with those ideas and prove them out within your application. A culture that embraces that, I think, is going to be by far a more secure culture, because it lets developers and engineers explore these systems in ways that we don't generally explore them. [0:24:36.0] BM: Right. But also, if I could operate on myself, I would never need a doctor. And the reason I bring that up is because we use terms like chaos engineering (and this is no disrespect to you, Duffie), so don't take this as a panacea or an idea that will make things better on its own. That is fine, it will make us better, but the little secret behind chaos engineering is that it is hard. It is hard to build these experiments, first of all; it is hard to collect results from these experiments. And then it is hard to extrapolate what you got out of the experiments to apply to whatever you are working on, and to repeat it. What I would like to see is people in our space talking about how we can apply such techniques, whether it is giving us more words or giving us more software that we can employ. Because, I hate to say it, it is pretty chaotic in chaos engineering right now for Kubernetes. If you look at all the people out there who have done it well, you look at what Netflix has done pioneering this, and then you listen to what a company such as Gremlin is talking about, it is all fine and dandy.
You need to realize that it is another piece of complexity that you have to own, and just like with anything else in the security world, the bottom line is that you need to rationalize how much time you are going to spend on it. Because if I have a "Hello, World!" app, I don't really care about network access to that. Unless it is a "Hello, World!" app running on the same subnet as something handling PCI data, then, you know, it is a different conversation. [0:26:05.5] DC: Yeah. I agree, and I am certainly not trying to position it as a panacea, but what I am trying to describe is that having a culture that embraces that sort of thinking is going to put us in a better position to secure these applications, or to handle a breach, or to deal with very hard to understand or resolve problems at scale, you know? Whether that is a number of connections per second or whether that is a number of applications that we have horizontally scaled. You know, being able to embrace that sort of a culture where we ask why, where we say "well, what if…", where we actually embrace the idea of that curiosity that got you into this field, you know what I mean? So frequently our cultures are the opposite of that, right? It becomes a race to the finish, and in that race to the finish, lots of pieces fall off that we are not even aware of, you know? That is what I am highlighting here when I talk about it. [0:26:56.5] NL: And so, it seems maybe the best solution to the dichotomy between security and agility is really just open conversation, in a way. People actually reaching across the aisle to talk to each other. So, if you are embracing this culture, as you are saying, Duffie, the security team should be having constant communication with the application team, instead of the team doing something wrong and the security team coming down and smacking their hand.
And being like, "Oh, you can't do it this way because of our draconian rules," right? These people are working together and almost playing together a little bit inside of their own environment to create a better environment. And I am sorry, I didn't mean to cut you off there, Bryan. [0:27:34.9] BM: Oh man, I thought it was fleeting, like all my thoughts. But more about what you are saying: it is not just more conversations, because we can still have conversations where I am talking about CIDRs and subnets and attack vectors and buffer overflows and things like that, but my developer is talking about "Well, I just need to be able to serve this data so accounting can do this." And that is what happens a lot in security conversations. You have two groups of individuals who have wholly different goals, and part of that conversation needs to be aligning our jargon and then aligning on those goals. But what happens with pretty much everything in the development world is that we always bring our networking, our security and our operations people in right at the end, right when we are ready to ship: "Hey, make this thing work." And really, that is where a lot of our problems come from. Now, if security could be, or wanted to be, involved at the beginning of a software project, when we are actually talking about what we are trying to do (we are trying to open up this service to talk to this, share this kind of data), security can be in there early saying, "Oh no, you know, we are using this resource in our cloud provider, it doesn't really matter what cloud provider, and we need to protect this. This data is sitting here at rest." If we get those conversations earlier, it would be easier to engineer solutions that can hopefully be reused, so we don't have to have that conversation in the future. [0:29:02.5] CC: But then it goes back to the issue of agility, right?
Like Duffie was saying, well, you can have, I guess, a development cluster which has much less restrictive rules, and then you move to a production environment, or maybe a staging environment, let's say, where the proper restrictions are in place. And then you find out, "Oh whoops. There are a bunch of restrictions I didn't deal with. I moved a lot faster because I didn't have them, but now I have to deal with them." [0:29:29.5] DC: Yeah, do you think it is important to have a promotion model in which you are able to move toward a more secure deployment, right? Because I guess a parallel to this is, I have heard it said that you should develop your monolith first, and then, when you actually have a working prototype of what you are trying to create, carefully consider whether it is time to break this thing up into a set of distinct services, right? And consider carefully also what the value of that might be. And I think that the reason that is said is because it is easier. It is going to be a lower cognitive load with everything right there in the same codebase. You understand how all of these pieces interconnect and you can quickly develop or prototype what you are working on. Whereas if you are trying to develop these things as individual microservices first, it is harder to figure out where the lines are, like where to divide all of the business logic. I think this is also important when you are thinking about the security aspects of this, right? Being able to do a thing in which you are not constrained, to define all of these services in your application and the model for how they communicate without constraint, is important. And once you have that, once you actually understand what normal looks like from that set of applications, then enforce it, right?
If you are able to declare that intent, you are going to say: these are the ports that these things listen on, these are the things that they are going to access, this is the way that they are going to go about accessing them. You know, if you can declare that intent, then that is a reasonable body of knowledge for which the security people can come along and say, "Okay well, you have told us. You informed us. You have worked with us to tell us what your intent is. We are going to enforce that intent and see what falls out, and we can iterate there." [0:31:01.9] CC: Yeah, everything you said makes sense to me. Starting with, build the monolith first. I mean, when you start out, why would you abstract things that you don't really – I mean, you might think you know, but you only really know in practice what you are going to need to abstract. So, don't abstract things too early. I am a big fan of that idea. So yeah, start with the monolith and then you figure out how to break it down based on what you need. With security, I would imagine the same idea resonates with me. Don't secure things that you don't know just yet need securing, except the deal-breaker things. Like, there are some things we know, like certain types of production data we don't want being accessed, some things we know we need to secure from the beginning. [0:31:51.9] BM: Right. But I will still reiterate that it is always deny by default, just remember that. Security is actually the opposite way. We want to make sure that we allow the least amount, even if it is harder for us. You always want to start with "allow TCP communication on port 443," or UDP as well. That is what I would allow, rather than saying shut everything else off. I would rather have it the way where we only allow that, and that also goes with the declarative nature of cloud native things we like anyway.
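Bryan's deny-by-default stance can be sketched as a connection check in which nothing passes unless a rule explicitly allows it; the (protocol, port) rule format here is invented purely for illustration:

```python
# Deny by default: nothing is permitted unless it appears in the
# allow-list. The declared intent below is the 443 example from the
# conversation; the rule shape is illustrative, not a real API.

ALLOWED = {("tcp", 443), ("udp", 443)}  # declared intent; everything else drops

def permit(protocol: str, port: int) -> bool:
    """Default deny: only explicitly allow-listed pairs are permitted."""
    return (protocol.lower(), port) in ALLOWED

print(permit("TCP", 443))  # True: explicitly allowed
print(permit("tcp", 80))   # False: never declared, so denied
```

The shape matters more than the code: the check asks "is this allowed?" rather than "is this blocked?", so anything nobody thought to declare fails closed.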
We just say what we want and everything else doesn't exist. [0:32:27.6] DC: I do want to clarify, though, because I think you and I are the representatives of the dichotomy right at this moment, right? I feel like what you are saying is the constraint should be the normal: being able to drop all traffic, do not allow anything, is normal, and then you have to declare intent to open anything up. And what I am saying is, frequently developers don't know what normal looks like yet. They need to be able to explore what normal looks like by developing these patterns and then enforce them, right, which is turning the model on its head. And this is actually, I think, the kernel that I am trying to get to in this conversation: there has to be a place where you can go play and learn what normal is, and then you can move into a world in which you can actually enforce what that normal looks like with reasonable constraint. But until you know what that is, until you have that opportunity to learn it, all we are doing here is restricting your ability to learn. We are adding friction to the process. [0:33:25.1] BM: Right, well, I think what I am trying to say here, layered on top of this, is that yes, I agree, but then I understand what a breach can do and what bad security can do. So I will say, "Yeah, go learn. Go play all you want, but not on software that will ever make it to production. Go learn these practices, but you are going to have to do it outside." You are going to have a sandbox, and that sandbox is going to be unconnected from the world, I mean from our production. You are going to have to learn, but you are not going to practice here. This is not where you learn how to do this. [0:33:56.8] NL: Exactly right, yeah. You don't learn to ride a motorcycle on the street, you know? You learn to ride a motorcycle on the dirt, and then you can take those skills later, you know?
But yeah, I think we are in agreement that production is a place where we do have to enforce all of those things, and having some promotion model in which you can come from a place where you learned it, to a place where you are beginning to enforce it, to a place where it is enforced, I think is also important. And I frequently describe this as development, staging and production, right? Staging is where you are going to hit the edges, because this is where you are actually defining that constraint, and it has to be right before it can be promoted to production, right? And I feel like the middle ground is also important. [0:34:33.6] BM: And remember that production is any environment production can reach. Any environment that can reach production is production, and that includes when we take data backup dumps from production, clean them up, and use them as data in our staging environment. If production can directly reach staging, or vice versa, it is all production. That is your attack vector. That is also what is going to get in and steal your production data. [0:34:59.1] NL: That is absolutely right. Google actually makes an interesting, not caveat to that, but side point to that, where, if I understand the way that Google runs, they run everything in production, right? Like dev, staging and production are all the same environment. I am positing this more as a question, because I don't know if any of us have the answer, but I wonder how they secure their infrastructure, their environment, well enough to allow people to play and learn these things, and also to deploy production-level code, all in the same area? That seems really interesting to me, and if I understood that, I probably would be making a lot more money. [0:35:32.6] BM: Well, it is simple really. There are huge people processes at Google that act as gatekeepers for a lot of this stuff. So, I have never worked at Google.
I have no inside knowledge of Google, nor have I talked to anyone who has given me this insight; this is all speculation, disclaimer over. But you can actually run a big cluster where, if you can prove that you have network and memory and CPU isolation between containers, which you can in certain cases with certain things, you can use your people processes and your approvals to make sure that software gets to where it needs to be. So, you can still play on the same clusters, but we have great handles on network, so that you can't talk to these networks, or you can't use this much network bandwidth. We have great handles on CPU, so that this CPU is for PCI data and we will not allow anything on it unless it is PCI. Once you have that in place, you do have a lot more flexibility. But to do that, you will have to have some pretty complex approval structures, and then software to back that up, so the burden of it is not on the normal developer. And that is actually what Google has done. They have so many tools and they have so many processes where, if you use this tool, it actually does the process for you. You don't have to think about it. And that is where we want our developers to be. We want them to be able to use our networking libraries, or, whenever they are building their containers or their Kubernetes manifests, use our tools, and we will make sure, based on either inspection or just explicit settings, that we build something that is as secure as we can given the inputs. And what I am saying is hard, capital H hard, and I am actually just positing where we want to be and where a lot of us are not. You know, most people are not there. [0:37:21.9] NL: Yeah, it would be nice if we had, like we said earlier, more tooling around security and the processes and all of these things. One thing I think that people seem to balk on, or at least I feel they do, is developing it for their own use case, right?
It seems like people want an overarching tool to solve all the use cases in the world. And I think with the rise of cloud native applications and things like container orchestration, I would like to see people developing more for themselves, around their own processes, around Kubernetes and things like that. I want to see more perspective into how people are solving their security problems, instead of just relying on, let's say, HashiCorp or Aqua Sec to provide all the answers. I want to see more answers about what people are actually doing. [0:38:06.5] BM: Oh, it is because tools like Vault are hard to write and hard to maintain and hard to keep correct. Think about the other large competitors to Vault that are out there, tools like CyberArk. "I have a secret and I want to make sure only certain people can get it" is a very difficult tool to build. But the HashiCorp advantage here is that they have made tools that speak to people who write software, or people who understand ops, not just as a checkbox. If you are using Vault, it is not hard to get a secret out if you have the right credentials. With other tools, it is super hard to get the secret out even if you have the right credentials, because they have a weird API, or they just make it very hard for you, or they expect you to go click on some GUI somewhere. And that is what we need to do. We need to have better programming interfaces and better operator interfaces, which extends to better interfaces for security people, as the basis for using these tools. You know, I don't know how well this works in practice.
But the Jeff Bezos idea of how teams at AWS, or Amazon, work: teams communicate through APIs. And I am not saying that you shouldn't talk, but we should definitely make sure that the APIs between teams, between the team who owns the security stuff and the teams who are writing the developer stuff, let us talk with the same level of fidelity that we can have in an in-person conversation. We should be able to do that through our software as well, whether that be asking for ports, or asking for resources, or just talking about the problem that we have. That is my thought-leadering answer to this. This is "Bryan wants to be a VP of something one day," and that is the answer I am giving. I am going to be the CIO; that is my CIO answer. [0:39:43.8] DC: I like it. So cool. [0:39:45.5] BM: Is there anything else on this subject that we wanted to hit? [0:39:48.5] NL: No, I think we have actually touched on pretty much everything. We got a lot out of this, and I am always impressed with the direction that we go. I did not expect us to go down this route, and I was very pleased with the discussion we have had so far. [0:39:59.6] DC: Me too. If we were going to explore anything else that we talked about, it would be more of that state where we say we need more feedback loops. We need developers to talk to security people. We need security people to talk to developers. We need to have some way of actually tightening that feedback loop, much like some of the other cultural changes that we have seen in our industry are trying to allow for better feedback loops in other spaces.
And you have brought up DevSecOps, which is another move to try and open up that feedback loop. But the problem, I think, is still going to be that even if we improved that feedback loop, we are at an age where, especially if you end up in some of the larger organizations, there are too many applications to solve this problem for, and I don't know yet how to address this problem in that context, right? If you are in a state where you are a 20-person, 30-person security team, and your responsibility is to secure a platform that is running a number of Kubernetes clusters, a number of vSphere clusters, a number of cloud provider implementations, whether that be AWS or GCP, that is a set of problems that is very difficult. Improving the feedback loop helps, but I am not sure it really solves it. I definitely have empathy for those folks, for sure. [0:41:13.0] CC: Security is not my forte at all, because whenever I am developing, I have a narrow need. You know, I have to access a cluster, I have to access a machine, or I have to be able to access the database, and it is usually a no-brainer. But I get a lot of the issues that were brought up. And as a builder of software, I have empathy for people who use and consume software, mine and others', and how little visibility they have as far as security goes. For example, in the world of cloud native, let's say you are using Kubernetes, I sort of start thinking, "Well, shouldn't there be a scanner that just lets me declare?" I think I am starting an episode right now. Shouldn't there be a scanner that lets me declare, for example, this node can only access this set of nodes, like a graph? You just declare, and then you run it periodically, and you make sure. Of course, this goes down to, a part of an app can only access part of the database. It can get very granular, but maybe at a very high level, I mean, how hard can this be?
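The scanner Carlisia is imagining amounts to a graph diff: declare the edges that may exist, observe the edges that do, and flag the rest. A minimal sketch, with hypothetical component names standing in for pods, nodes, or services:

```python
# Declared access graph: which (source, destination) pairs are allowed
# to talk. The names are hypothetical, purely for illustration.
DECLARED = {
    ("app", "db"),
    ("app", "cache"),
    ("backup-job", "db"),
}

def violations(observed_edges):
    """Return observed (src, dst) pairs that were never declared."""
    return sorted(set(observed_edges) - DECLARED)

# Observed traffic, as such a scanner might collect it on each run.
observed = [("app", "db"), ("app", "cache"), ("web", "db")]

print(violations(observed))  # [('web', 'db')]
```

Run periodically, the same diff catches drift in either direction: new traffic that was never declared, or declarations that quietly widened.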
For example, this pod can only access those pods, but this pod cannot access this namespace, and you just keep checking: what if the namespaces change, or the permissions change? Or, for example, allow only these users to do a backup, because they are the same users who will have access to the restore, so they have access to all the data, you know what I mean? Just keep checking that that is in place, and that it only changes when you want it to. [0:42:48.9] BM: So, I mean, I know we are at the end of this call and I don't want to start a whole new conversation, but this actually is why there are applications out there like Istio and Linkerd. This is why people want service meshes: they can turn off all network access and then just use the service mesh to do the communication, and they can make sure that it is encrypted on both sides and authenticated on both sides. That is why this gets adopted. [0:43:15.1] CC: We are definitely going to have an episode, or multiple, on service mesh, but we are at the top of the hour. Nick, do your thing. [0:43:23.8] NL: All right, well, thank you so much for joining us on another interesting discussion at The Kubelets Podcast. I am Nicholas Lane. Duffie, any final thoughts? [0:43:32.9] DC: There is a whole lot to discuss. I really enjoyed our conversations today. Thank you, everybody. [0:43:36.5] NL: And Bryan? [0:43:37.4] BM: Oh, it was good being here. Now it is lunch time. [0:43:41.1] NL: And Carlisia. [0:43:42.9] CC: I love learning from you all, thank you. Glad to be here. [0:43:46.2] NL: Totally agree. Thank you again for joining us, and we will see you next time. Bye. [0:43:51.0] CC: Bye. [0:43:52.1] DC: Bye. [0:43:52.6] BM: Bye. [END OF EPISODE] [0:43:54.7] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing.
[END]

info@theworkforceshow.com
Nakul Munjal - The Story Behind Cybersecurity


Play Episode Listen Later Oct 21, 2019 23:57


Sponsored by: LookingGlassCyber, ScienceLogic, and Fairfax City. Nakul Munjal has worked in the cybersecurity industry since 2003. In 2006, he administered the first computer-based bar exam in the country. In 2009, as part of the founding team for IBM Security Systems in New York, he helped multiple Wall Street banks, insurance companies and rating agencies navigate new cybersecurity requirements resulting from the 2008 financial crisis. He is regarded as a subject matter expert in Identity and Access Management technologies, and was an executive covering North America and Latin America for Micro Focus. Nakul began Status Identity after observing that Chief Security Officers often struggle with inconvenient security controls that cause productivity losses in their organizations; with increasingly stringent cybersecurity policies through GDPR, NYDFS, FISMA and others, this problem was compounding. Nakul is passionate about reducing the security burden on individuals while preserving their privacy and digital rights. He believes that the distribution of personal data must be limited and can be effectively used to empower users. Nakul holds a Bachelor's in Biology and Economics from the University of Colorado, Boulder and an MBA with Honors from Babson College.

Federal Drive with Tom Temin
FISMA report tells how tools, capabilities, data protecting against cyber attacks


Play Episode Listen Later Aug 21, 2019 8:34


Don't break out the party hats quite yet, but the results in the latest Federal Information Security Modernization Act (FISMA) report to Congress do deserve some celebrating. The Office of Management and Budget said that, for the first time, agencies suffered no major cyber incidents in 2018. On top of that, agencies also saw fewer overall cyber attacks last year. Federal News Network's Executive Editor Jason Miller has the details on why agencies deserve a little pat on the back for their cyber efforts. Hear more on Federal Drive with Tom Temin.

ILTA
Incident and Event Log Monitoring: Pros and Cons of In-House, Outsource and Hybrid


Play Episode Listen Later Feb 26, 2016 31:01


Description: If you are in the market for a security information and event management (SIEM) tool, this podcast will be of value. Three security evangelists share their collective experience and insight regarding the need for such a tool, what to look for in a solution, deployment options, how to get started, and the key factors in driving value from an investment in SIEM. Speakers: James "Butch" Spencer is a Network Engineer at Jackson Kelly PLLC. He is an IT security expert with extensive practical experience in information management systems, security, networking, virtualization, optimization, e-business and programming. Butch is a member of numerous groups delving into and debating the security and control of the Internet of Things (IoT). Jon Hanny is a strategic information security leader with over 15 years of experience in information security, risk management, governance and compliance. Skilled at building information security and IT risk management programs from inception, he has successfully fostered paradigm shifts in higher education, financial services and legal verticals to instill an information security mindset across all levels of an organization, improving their security postures. Jon holds several security certifications, including the C|CISO, CISM and CISSP, and has a proven track record leveraging ISO 27001, NIST and FISMA. Prabhakar Chandrasekaran is a goal- and ethics-driven security and technology leader with over 20 years of experience managing critical infrastructure services in IT production environments. He has a solid understanding and application of security and privacy regulations, risk assessments and security audit processes. Prabhakar is also a specialist in achieving reliability and availability through structure and process improvement, with a proven ability to achieve business objectives by establishing partnerships across departments.