Podcasts about Chegg

An American education technology company

  • 284 podcasts
  • 418 episodes
  • 37m average episode duration
  • 5 new episodes weekly
  • Latest episode: Aug 25, 2023
[Chegg popularity chart, 2016–2023]


Latest podcast episodes about Chegg

The Edtech Podcast
#271 - Cutting Through the Noise on AI in Education

Aug 25, 2023 · 52:47

Rose plays host to Nina Huntemann, Chief Academic Officer of Chegg, and Lord Jim Knight in the EdTech Podcast Zoom studio this week, attempting to understand how best to cut through the white noise of AI hype, misinformation, exaggeration, and marketing, and to determine just how positive for education AI can be if done responsibly. In our previous episodes on AI, Rose has been in conversation with universities from the US and the UK, examining the role of emerging technologies in higher education and what capacity exists to implement AI effectively. The podcast has also seen contributions from Karine George, discussing whether or not the release and widespread use of ChatGPT has actually done education a favour. Has its proliferation sparked debate about human cognition and limited understandings of AI, or initiated conversations in schools around digital transformation and strategy?

In this episode, we extend these same thoughts on AI to pedagogic effectiveness in education and academia, and to how emerging technologies like AI can be incorporated into companies' plans for their commercial services.

Talking points in today's episode include:
- The development of ethical AI in commercial enterprises, and how they ensure their technologies are developed responsibly
- Tensions between the wealth of AI tools available and regulation of the market and of the educational use of such technologies
- Assessing AI tools' effectiveness
- Cutting through the huge amount of hype, headlines, and sensationalism at the heart of the communications and marketing around AI

Material discussed in today's episode includes:
- "Yes, AI could profoundly disrupt education, but maybe that's not a bad thing," an article in the Guardian by Professor Rose Luckin
- Chegg's Center for Digital Learning
- The Skinny on AI for Education, EVR's newest publication featuring insights, trends, and developments in the world of AI Ed

Alles auf Aktien
The Stock Market's Three Fearmongers, and Making Money with Short Attacks

Aug 14, 2023 · 43:54

In today's XXL episode of "Alles auf Aktien," tech investor Pip Klöckner and financial journalist Holger Zschäpitz talk about signs of life at Varta, double-digit correction potential in the markets, and the biggest short sellers. Also discussed: Palantir, C3.AI, Super Micro Computer, Chegg, Duolingo, Bechtle, Nagarro, New Work, Cisco, Hims & Hers, Nikola, Wirecard, Oatly, Figs, Nuvei Payments, PetIQ, Block, Icahn Enterprises, Tingo, Draftkings, Ebang, Clover, Adler, Grenke, Pro7Sat.1, Steinhoff, Enron, Hertz, Tesla, and Coinbase. We welcome feedback at aaa@welt.de. Disclaimer: The stocks and funds discussed in the podcast do not constitute specific buy or investment recommendations. The hosts and the publisher accept no liability for any losses arising from acting on the thoughts or ideas discussed. Listening tips: For anyone who wants to know even more, you can hear Holger Zschäpitz every week on the finance and business podcast "Deffner&Zschäpitz." Also from WELT: in the daily podcast "Kick-off Politik - Das bringt der Tag," we talk with WELT experts to give you the most important background on one top political topic of the day. More at welt.de/kickoff and wherever you get your podcasts. +++ Advertising +++ Want to learn more about our advertising partners? You'll find all the info and discounts here: https://linktr.ee/alles_auf_aktien Imprint: https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html

Earnings Season
Chegg, Inc., Q2 2023 Earnings Call, Aug 07, 2023

Aug 9, 2023 · 49:52

Chegg, Inc., Q2 2023 Earnings Call, Aug 07, 2023

Motley Fool Money
Follow the Cash: Celsius, Chegg, Nelnet

Aug 9, 2023 · 23:57

Earnings from Chegg, Celsius, and Nelnet show why it pays to watch cash flow and how businesses can shore themselves up when there's cash on hand.

(00:21) Jim Gillies and Dylan Lewis discuss:
- Celsius's incredible top- and bottom-line results, and why investors should pay attention to the energy drink maker's relationship with Pepsi and its accounts receivable.
- Whether Chegg can harness AI for its education offerings.
- Why Nelnet's slow and steady Berkshire approach continues to pay off.

Companies discussed: CHGG, NNI, CELH
Host: Dylan Lewis
Guest: Jim Gillies
Engineer: Dan Boyd

Alles auf Aktien
On the Trail of the Blockbuster Hunter, and the 4 DAX Superstars

Aug 8, 2023 · 23:29

In today's episode of "Alles auf Aktien," financial journalists Anja Ettel and Holger Zschäpitz talk about a new cryptocurrency at PayPal, a belly flop for Biontech, the departure of the Master of Coin at Tesla, and a strange share buyback at Palantir. Also discussed: Daimler Truck, GEA Group, Amgen, Horizon, Berkshire Hathaway, Pfizer, Moderna, Siemens Energy, Rheinmetall, General Dynamics, OHB, Beyond Meat, Hims & Hers Health, Chegg, Commerzbank, Deutsche Bank, Deutsche Telekom, E.on, Zalando, Continental, Vonovia, Covestro, Apple, Microsoft, Alphabet, Meta, Amazon, Nvidia, Johnson & Johnson, Wal-Mart, Siemens, SAP, Münchener Rück, Deutsche Börse, DWS ESG Investa (WKN: 847400), DWS Deutschland (WKN: 849096), DWS Aktien Strategie Deutschland (WKN: 976986), UniFonds (WKN: 849100), Fondak (WKN: 847101), Concentra (WKN: 847500), Bristol-Myers Squibb, Abbvie, Eli Lilly, and Evotec. We welcome feedback at aaa@welt.de. Disclaimer: The stocks and funds discussed in the podcast do not constitute specific buy or investment recommendations. The hosts and the publisher accept no liability for any losses arising from acting on the thoughts or ideas discussed. Listening tips: For anyone who wants to know even more, you can hear Holger Zschäpitz every week on the finance and business podcast "Deffner&Zschäpitz." Also from WELT: in the daily podcast "Kick-off Politik - Das bringt der Tag," we talk with WELT experts to give you the most important background on one top political topic of the day. More at welt.de/kickoff and wherever you get your podcasts. +++ Advertising +++ Want to learn more about our advertising partners? You'll find all the info and discounts here: https://linktr.ee/alles_auf_aktien Imprint: https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html

Closing Bell
Closing Bell: Overtime: Stocks rise to start the week, earnings from Palantir, Paramount, Chegg and more 8/7/23

Aug 7, 2023 · 44:37

Stocks started the week on strong footing, with the Dow finishing the day higher by more than 400 points. Adam Crisafulli from Vital Knowledge and Brian Levitt from Invesco break down the action and discuss earnings from Palantir, Chegg, Lucid, and more. Early Palantir employee turned venture capital partner Trae Stephens gives his thoughts on Palantir's results and other opportunities in defense and AI. The CEO of CoreWeave talks about what the artificial intelligence boom means for his company. Plus, a preview of what to expect from UPS's results.

What Got You There with Sean DeLaney
#354 Mike Maples Jr.- Unlocking the Secrets to Successful Startup Investing: A Conversation with the Co-founder of Floodgate

Aug 6, 2023 · 68:54

"My whole business isn't about how often I lose; it's about the magnitude of the rightness when I win." Mike Maples is a co-founding Partner at Floodgate. He has been on the Forbes Midas List eight times in the last decade, was named a "Rising Star" by FORTUNE, and was profiled by Harvard Business School for his lifetime contributions to entrepreneurship. Before becoming a full-time investor, Mike was involved as a founder and operating executive at back-to-back startup IPOs, including Tivoli Systems (IPO TIVS, acquired by IBM) and Motive (IPO MOTV, acquired by Alcatel-Lucent). Some of Mike's investments include Twitter, Twitch.tv, Clover Health, Okta, Outreach, ngmoco, Chegg, Bazaarvoice, and Demandforce. Mike is the host of the Starting Greatness podcast, which shares startup lessons from the super performers. On this episode, Mike shares the mindset that has had the greatest impact on his life, what is true about the greatest startup founders, and the key to unlocking greatness in investments. Interested in having Sean DeLaney be your executive coach? CLICK HERE. Caldera Lab – Get 20% off high-performance men's skincare! – Click HERE. Marketer Hire – Get $500 off your first hire! – Click HERE. https://youunleashedcourse.com/ You Unleashed is an online personal development course created by Sean DeLaney after spending years working with and interviewing high achievers. It's the online course that helps you 'Unleash your potential'! You Unleashed teaches you the MINDSETS, ROUTINES, and BEHAVIORS you need to unleash your potential and discover what you're capable of. You know you're capable of more and want to bring out that untapped potential inside of you. We teach you how. Enroll Today! – Click Here. Subscribe to my Momentum Monday Newsletter. Connect with us! Whatgotyouthere TikTok YouTube Twitter Instagram

The Bazz Show
49 - The Rise of the Fractional CMO? with Gil Rogers

Aug 2, 2023 · 27:02

Gil is a recognized leader in higher education enrollment management and marketing. After leading yet another record-breaking recruitment cycle in 2011, Gil dove headfirst into the education startup revolution. In May 2011, he joined the Zinch.com team to lead the company's marketing and thought leadership initiatives. The company was acquired by Chegg that September and continues to transform into the leading student-first connected learning platform, while still leading the textbook rental market. In July 2018, Gil moved to the National Research Center for College and University Admissions (NRCCUA) to support its new strategic partnership with Chegg's digital marketing team while helping introduce the company's new approach to data-informed enrollment management: Encoura. Gil uses his energy and enthusiasm to help colleges and universities understand how best to reach prospective students in support of their enrollment goals. At NRCCUA, Gil supports the Eduventures research agenda and leads numerous initiatives, including best practices in recruitment strategies for transfer students and graduate programs, social media outreach strategy, and much more. Gil is regularly selected to present at national conferences throughout the year on these topics, and undergraduate, graduate, and international admissions professionals throughout the industry seek his knowledge and expertise.

The Sure Shot Entrepreneur
VCs Help Startups Recruit Great Candidates and Catch Liars

Aug 1, 2023 · 28:51

Oren Zeev, Founding Partner at Zeev Ventures, shares his one-man VC experience and lessons from investing in many successful companies. Oren reveals the secrets behind his renowned Silicon Valley builder-investor approach. He also gives his perspectives on the dynamic role of a VC in supporting founders, as well as his thoughts on trends shaping the venture capital industry.

In this episode, you'll learn:
[4:20] The essence of being a VC lies not in having strong opinions, but in serving as the strongest support system for founders.
[6:30] Why great VCs refrain from saying: "Let me know how I can help you."
[12:06] Ultimately, a startup's success depends on the compelling nature of its product(s).
[16:30] Don't rely solely on a candidate's provided references when recruiting.
[20:25] For founders, unsolicited advice is unnecessary.
[25:57] Venture capital is not broken!

The non-profit organization that Oren is passionate about: ICON

About Oren Zeev: Oren Zeev is the founder of Zeev Ventures, a one-man VC firm that invests in growth-stage companies. He also serves as a board member at Houzz, HomeLight, Tipalti, TripActions, and Next Insurance, among others. Oren is a former board member of Audible (which went public and was later acquired by AMZN) and of Chegg (NYSE: CHGG), where he was the main investor and one of the two main early backers, respectively; a former board member of RedKix (acquired by Facebook); and a former chairman of Loop Commerce (acquired by Synchrony Bank). Prior to that, he was a general partner at Apax Partners from 1995 to 2007.

About Zeev Ventures: Zeev Ventures is a Silicon Valley-based early-stage venture fund that invests in the technology, financial, e-commerce, and consumer service sectors. Its portfolio companies include Trustmi, Sentra, Caramel, ShipIn, groundcover, GrowthSpace, and HomeLight, among others.

Subscribe to our podcast and stay tuned for our next episode. Follow us: Twitter | LinkedIn | Instagram | Facebook

Zaka Presents: My Journey
#95 Zaka Presents My Journey Dumebi Egbuna

Jul 21, 2023 · 39:48

Dumebi Egbuna started her career at IBM within the enterprise sales organization. Working in an industry dominated by white men, she found her passion for helping minorities succeed in spaces that had previously not been accessible to them. She knows what it's like to be the only Black woman at the table. After a conversation with her brother, Toby, they realized that their experience was shared by many marginalized employees working in corporate America. They were inspired to launch their own company, Chezie, to help people with similar backgrounds break down barriers. The Nigerian immigrants pay homage to their heritage through their company name: "Chezie," an Igbo word meaning "reflect." They encourage all companies to reflect on their DEI efforts, in order to ensure that their intent matches their impact. Chezie's software empowers companies to build, track, and manage impactful employee resource groups (ERGs) while also driving positive business outcomes such as increased retention and employee engagement. The company has raised $775,000 in pre-seed funding and has helped over 100 ERGs, including those at industry-leading companies such as Chegg, Instacart, and the NBA, take their initiatives from intent to impact.

Higher Ed Spotlight
29. Some Community Colleges Are Getting It Right

Jul 18, 2023 · 30:57

Rachel Lipson is the co-editor of the new book “America's Hidden Economic Engines: How Community Colleges Can Drive Shared Prosperity.” She shares what the five community colleges profiled in the book are getting right and how they're fulfilling the institution's promise of community economic growth and student career advancement. Higher Ed Spotlight is sponsored by Chegg's Center for Digital Learning and aims to explore the future of higher education. It is produced by Antica Productions.

FinanZe
Episode 18: The Corporate World with CFO of Dolby Robert Park

Jul 9, 2023 · 29:00

Robert is currently the Chief Financial Officer of Dolby, the #1 audio company in the world for entertainment technology. Previously, he served as VP of Finance for Chegg, PayPal, and BlueJeans Network. In 2021, Dolby was named to Fast Company's prestigious annual list of the World's Most Innovative Companies, and it is consistently among the top-ranked companies leading the music industry forward. Robert began his finance career with Ernst & Young, one of the Big Four accounting firms, serving numerous technology clients. He holds a B.S. in Business Administration with a concentration in accounting from Cal Poly San Luis Obispo. In this episode we talk about the value of an accounting background, how to break into the corporate world, and the steps to climbing the corporate ladder! To learn more about our podcast, follow us on Instagram @The_Finanze_podcast for updates on new episodes and our podcast's future, or subscribe to our YouTube channel, The FinanZe Podcast. To receive updates about our episodes, join our mailing list at the.finanZe.podcast@gmail.com. Enjoy the episode!

Higher Ed Spotlight
28. The Move for More College in Prison

Jul 4, 2023 · 25:18

After almost 30 years, incarcerated students are once again eligible to receive Pell Grants. It's a significant move with reverberations beyond prison. But providing college courses to incarcerated students isn't easy. Andrea Cantora, Director of the University of Baltimore's Second Chance College Program, sheds light on the importance of this move, the bipartisan shift it signals from punishment to rehabilitation, and what it's like to teach in prison.   Higher Ed Spotlight is sponsored by Chegg's Center for Digital Learning and aims to explore the future of higher education. It is produced by Antica Productions.

CFR On the Record
Higher Education Webinar: Implications of Artificial Intelligence in Higher Education

Jun 27, 2023

Pablo Molina, associate vice president of information technology and chief information security officer at Drexel University and adjunct professor at Georgetown University, leads the conversation on the implications of artificial intelligence in higher education.

FASKIANOS: Welcome to CFR's Higher Education Webinar. I'm Irina Faskianos, vice president of the National Program and Outreach here at CFR. Thank you for joining us. Today's discussion is on the record, and the video and transcript will be available on our website, CFR.org/Academic, if you would like to share it with your colleagues. As always, CFR takes no institutional positions on matters of policy. We are delighted to have Pablo Molina with us to discuss the implications of artificial intelligence in higher education. Dr. Molina is chief information security officer and associate vice president at Drexel University. He is also an adjunct professor at Georgetown University. Dr. Molina is the founder and executive director of the International Applied Ethics in Technology Association, which aims to raise awareness of ethical issues in technology. He regularly comments on stories about privacy, the ethics of tech companies, and laws related to technology and information management. And he's received numerous awards relating to technology and serves on the board of the Electronic Privacy Information Center and the Center for AI and Digital Policy. So, Dr. P, welcome. Thank you very much for being with us today. Obviously, AI is at the top of everyone's mind, with ChatGPT coming out and being in the news, and so many other stories about what AI is going to—how it's going to change the world. So I thought you could focus specifically on how artificial intelligence will change and is influencing higher education, and what you're seeing, the trends in your community.

MOLINA: Irina, thank you very much for the opportunity, and to the Council on Foreign Relations, to be here and express my views. Thank you, everybody, for taking time out of your busy schedules to listen to this. And hopefully I'll have the opportunity to learn much from your questions and answer some of them to the best of my ability. Well, since I'm a professor too, I like to start by giving you homework. And the homework is this: I do not know how much people know about artificial intelligence. In my opinion, anybody who has ever used ChatGPT considers herself or himself an expert. To some extent, you are, because you have used one of the first publicly available artificial intelligence tools out there and you know more than those who haven't. So if you have used ChatGPT, or Google Bard, or other services, you already have a leg up in understanding at least one aspect of artificial intelligence, known as generative artificial intelligence. Now, if you want to learn more about this, there's a big textbook, about this big. I'm not endorsing it. All I'm saying, for those people who are very curious, is that there are two great academics, Russell and Norvig. They're in the fourth edition of a wonderful book that covers every technical aspect of artificial intelligence, called Artificial Intelligence: A Modern Approach. And if you're really interested in how artificial intelligence can impact higher education, I recommend a report by the U.S. Department of Education that was released earlier this year in Washington, DC by the Office of Educational Technology. It's called Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.
So if you do all these things and you read all these things, you will hopefully transition from being whatever expert you were before—a pandemic and Ukrainian war expert—to an artificial intelligence expert. So how do I think that all these wonderful things are going to affect higher education? Well, as human beings, we tend to overestimate the impact of technology in the short run and really underestimate the impact of technology in the long run. And I believe this is also the case with artificial intelligence. We're in a moment where there's a lot of hype about artificial intelligence. It will solve every problem under the sky. But it will also create the most catastrophic future and dystopia that we can imagine. And possibly neither one of these two is true, particularly if we regulate and use these technologies and develop them following some standard guidelines that we have followed in the past, for better or worse. So how is artificial intelligence affecting higher education? Well, number one, there is a great lack of regulation and legislation. So, for example, OpenAI released ChatGPT. People started trying it. And all of a sudden there were people like here, where I'm speaking to you from—in Italy; I'm in Rome on vacation right now—and the Italian data protection agency said: Listen, we're concerned about the privacy of this tool for citizens of Italy. So the company agreed to establish some rules, some guidelines and guardrails, on the tool. And then it reopened to the Italian public, after being closed for a while. The same thing happened with the Canadian data protection authorities. In the United States, well, not much has happened, except that one of the organizations on whose board I serve, the Center for Artificial Intelligence and Digital Policy, earlier this year, in March of 2023, filed a sixty-four-page complaint with the Federal Trade Commission, in which we're basically telling the Federal Trade Commission: You do have the authority to investigate how these tools can affect U.S. consumers. Please do so, because this is your purview, and this is your responsibility. And we're still waiting on the agency to declare what the next steps are going to be. If you look at other bodies of legislation or regulation on artificial intelligence that can help us guide artificial intelligence, well, you can certainly pay attention to the U.S. Congress. And what is the U.S. Congress doing? Yeah, pretty much that—not much, to be honest. They did listen to Sam Altman, the head of OpenAI, the company behind ChatGPT, who recently testified before Congress, urging Congress to regulate artificial intelligence. Which is quite clever on his part. It was on May 17 that he testified that we could be facing catastrophic damage ahead if artificial intelligence technology is not regulated in time. He also sounded the alarm about counterfeit humans, meaning that these machines could replace what we think a person is, at least virtually. And he also warned about the end of factual evidence, because with artificial intelligence anything can be fabricated. Not only that, but he pointed out that artificial intelligence could start wars and destroy democracy. Certainly very, very grim predictions. And before this, many of the companies were self-regulating for artificial intelligence. If you look at Google, Microsoft, Facebook—now Meta—all of them have their own artificial intelligence self-guiding principles. Most of them are very aspirational.
Those could help us in higher education because, at the very least, they can help us create our own policies and guidelines for our community members—faculty, staff, students, researchers, administrators, partners, vendors, alumni—anybody who happens to interact with our institutions of higher learning. Now, what else is happening out there? Well, we have tons, tons of laws that have to do with technology and regulations. Things like the Gramm-Leach-Bliley Act, or the Securities and Exchange Commission, or Sarbanes-Oxley. Federal regulations like FISMA, the Cybersecurity Maturity Model Certification, and Payment Card Industry; there is the Computer Fraud and Abuse Act; there is the Budapest Convention; and cybersecurity insurance providers will tell us what to do and what not to do about technology. We have state laws and many privacy laws. But, to be honest, very few artificial intelligence laws. And it's groundbreaking in Europe that the European parliamentarians have agreed to discuss the Artificial Intelligence Act, which could be the first one really to be passed at this level in the world, after some efforts by China and other countries. And, if adopted, it could be a landmark change in the adoption of artificial intelligence. In the United States, even though Congress is not doing much, the White House is trying to position itself in the realm of artificial intelligence. So there's an executive order from February of 2023—which many of us in higher education read because, once again, we're trying to find inspiration for our own rules and regulations—that tells federal agencies that they have to root out bias in the design and use of new technologies, including artificial intelligence, because they have to protect the public from algorithmic discrimination. And we all believe this. In higher education, we believe in being fair and transparent and accountable. I would be surprised if any of us were not concerned about making sure that our technology use—our artificial intelligence use—follows these particular principles, as proposed by the Organization for Economic Cooperation and Development and many other bodies of ethics and expertise. Now, the White House also announced new research and development centers, with some new national artificial intelligence research institutes. Many of us will collaborate with those in our research projects. A call for public assessments of existing generative artificial intelligence systems, like ChatGPT. And it is also enacting policies to ensure that the U.S. government—the executive branch—is leading by example when mitigating artificial intelligence risks and harnessing artificial intelligence opportunities. Because, in spite of all the concerns about this, it's all about the opportunities that we hope to achieve with artificial intelligence. And when we look at how specifically we can benefit from artificial intelligence in higher education, well, certainly we can start with new and modified academic offerings. I would be surprised if most of us do not have degrees—certainly, we already have graduate degrees—on artificial intelligence, and machine learning, and many others. But I would be surprised if we don't even add some bachelor's degrees in this field, or if we don't modify significantly some of our existing academic offerings to incorporate artificial intelligence in various specialties, our courses, or components of the courses that we teach our students.
We're looking at amazing research opportunities, things that we'll be able to do with artificial intelligence that we couldn't even think about before, that are going to expand our ability to generate new knowledge to contribute to society, with federal funding, with private funding. We're looking at improved knowledge management, something that librarians are always very concerned about: the preservation and distribution of knowledge. The idea is that artificial intelligence will help us better find the things that we're looking for, the things that we need in order to conduct our academic work. We're certainly looking at new and modified pedagogical approaches, new ways of learning and teaching, including the promise of adaptive learning, something that really can tell students: Hey, you're not getting this particular concept. Why don't you go back and study it in a different way, with a different virtual avatar, using simulations or virtual assistants? In almost every discipline and academic endeavor. We're also looking at this very carefully, because we're concerned about offering, you know, good value for the money when it comes to education. So we're hoping to achieve extreme efficiencies: better ways to run admissions, better ways to guide students through their academic careers, better ways to coach them into professional opportunities. And much of this will be possible thanks to artificial intelligence. And also, let's not forget this, but we still have many underserved students, and they're underserved because they either cannot afford education or maybe they have physical or cognitive disabilities. And artificial intelligence can really help us reach those students and offer them new opportunities to advance their education and fulfill their academic and professional goals. And I think this is a good introduction. I'd love to talk about all the things that can go wrong. I'd love to talk about all the things that we should be doing so that things don't go as wrong as predicted. But I think this is a good way to set the stage for the discussion.

FASKIANOS: Fantastic. Thank you so much. So we're going to go to all of you now for your questions and comments, to share best practices. (Gives queuing instructions.) All right. So I'm going first to Gabriel Doncel, who has a written question, adjunct faculty at the University of Delaware: How do we incentivize students to approach generative AI tools like ChatGPT for text in ways that emphasize critical thinking and analysis?

MOLINA: I always like to start with a difficult question, so I thank you very much, Gabriel Doncel, for that particular question. And, as you know, there are several approaches to the adoption of tools like ChatGPT on campus by students. One of them is to say: No, over my dead body. If you use ChatGPT, you're cheating. Even if you cite ChatGPT, we can consider you to be cheating. And not only that, but some institutions have invested in tools that can detect whether or not something was written with ChatGPT or similar tools. There are other faculty members and other academic institutions that are realizing these tools will be available when these students join the workforce. So our job is to help them do the best that they can by using these particular tools, and to make sure they avoid some of the mishaps that have already happened. There are a number of lawyers who have used ChatGPT to file legal briefs.
And when the judges received those briefs, read through them, and looked at the citations, they realized that some of the citations were completely made up, were not real cases. Hence, the lawyers faced professional disciplinary action, because they used the tool without the professional review that is required. So hopefully we're going to educate our students, and we're going to set policy and guideline boundaries for them to use these, as well as, sometimes, the necessary technical controls for those students who may not be that ethically inclined to follow our guidelines and policies. But I think that to hide our heads in the sand and pretend that these tools are not out there for students to use would be a disservice to our institutions, to our students, and to the mission that we have of training the next generation of knowledge workers.

FASKIANOS: Thank you. I'm going to go next to Meena Bose, who has a raised hand. Meena, if you can unmute yourself and identify yourself.

Q: Thank you, Irina. Thank you for this very important talk. And my question is a little—(laughs)—it's formative, but really—I have been thinking about what you were saying about the role of AI in academic life. And I don't—particularly for undergraduates, for admissions, advisement, guidance on curriculum. And I don't want to have my head in the sand about this, as you just said—(laughs)—but it seems to me that any kind of meaningful interaction with students, particularly students who have not had any exposure to college before, depends upon kind of multiple feedback with faculty members, development of mentors, to excel in college and to consider opportunities after. So I'm struggling a little bit to see how AI can be instructive for that part of college life, beyond kind of providing information, I guess. But I guess the web does that already. So welcome your thoughts. Thank you.

FASKIANOS: And Meena's at Hofstra University.

MOLINA: Thank you. You know, it's a great question. And the idea that everybody is proposing right here is that artificial intelligence companies, at least at first—we'll see in the future because, you know, it depends on how it's regulated—are not trying, or so they claim, to replace doctors, or architects, or professors, or mentors, or administrators. They're trying to help precisely those people in those professions, and the people they serve, gain access to more information. And you're right in a sense that that information is already on the web. But we've always had a problem finding that information regularly on the web. And you may remember that when Google came along, I mean, it swept through every other search engine out there—AltaVista, Yahoo, and many others—because, you know, it had a very good search algorithm. And now we're going to the next level. The next level is where you ask ChatGPT in natural human language. You're not trying to combine the three words that say, OK, is the economics class required? No, no, you're telling ChatGPT: Hey, listen, I'm in the master's in business administration at Drexel University and I'm trying to take more economics classes. What recommendations do you have for me? And this is where you can have a preliminary answer, and also a caveat there, as most of these generative AI engines already have, that tells you: We're not here to replace the experts. Make sure you discuss your questions with the experts. We will not give you medical advice. We will not give you educational advice.
We're just here, to some extent, for guiding purposes and, even now, for experimental and entertainment purposes. So I think you are absolutely right that we have to be very judicious about how we use these tools to support the students. Now, that said, I had the privilege of working for public universities in the state of Connecticut when I was the CIO. I also had the opportunity early in my career to attend a public university in Europe, in Spain, where we were hundreds of students in class. We couldn't get any attention from the faculty. There were no mentors, there were no counselors, or anybody else. Is it better to have nobody to help you, or is it better to have at least some technology guidance that can help you find the information that otherwise is spread throughout many different systems that are like ivory towers—admissions on one side, economics on the other, academic advising on the other, and everything else? So thank you for a wonderful question and reflection.

FASKIANOS: I'm going to take the next question, written, from Dr. Russell Thomas, a senior lecturer in the Department of International Relations and Diplomatic Studies at Cavendish University in Uganda: What are the skills and competencies that higher education students and faculty need to develop to think in an AI-driven world?

MOLINA: So we could argue here that something very similar has happened already with many information and communication technologies. The understanding is that at first faculty members did not want to use email, or the web, or many other tools because they were too busy with their disciplines. And rightly so. They were brilliant economists, or philosophers, or biologists. They didn't have enough time to learn all these new technologies to interact with the students. But eventually they did learn, because they realized that it was the only way to meet the students where they were and to communicate with them in efficient ways. Now, I have to be honest; when it comes to the use of technology—and we'll unpack the numbers—as part of my doctoral dissertation, I extended the technology adoption models that tell you about early adopters, and mainstream adopters, and late adopters, and laggards. But I uncovered a new category at some of the institutions where I worked, called the over-my-dead-body adopters. And these were some of the faculty members who would say: I will never switch word processors. I will never use this technology. It's only forty years until I retire, probably eighty more until I die. I don't have to do this. And, to be honest, we have a responsibility to understand that those artificial intelligence tools are out there, and to guide the students as to what is the acceptable use of those technologies within the disciplines and the courses that we teach them in. Because they will find those available in a very competitive labor market, because they can derive some benefit from them. But also, we don't want to shortchange their educational attainment just because they go behind our backs to copy and paste from ChatGPT, learning nothing. Going back to the question by Gabriel Doncel: not learning to exercise critical thinking, using citations and material that is unverified, that was borrowed from the internet without any authority, without any attention to the different points of view.
I mean, if you've used ChatGPT for a while—and I have personally, even to prepare some basic thank-you speeches, which are all very formal; even to contest a traffic ticket in Washington, DC, when I was speeding but didn't want to pay the ticket anyway; even just for research purposes—you will realize that most of the writing from ChatGPT has a very, very common style. Which is: oh, on the one hand people say this, on the other hand people say that. Well, critical thinking will tell you: sure, there are two different opinions, but this is what I think myself, and this is why I think this. And these are some of the skills, the critical thinking skills, that we must continue to teach the students—and not, you know, put blinders on them and say, oh, continue focusing only on the textbook and the website. No, no. Look at the other tools, but use them judiciously.

FASKIANOS: Thank you. I'm going to go next to Clemente Abrokwaa. Raised hand, if you can identify yourself, please.

Q: Hi. Thanks so much for your talk. It's something that has been—I'm from Penn State University. And this is a very important topic, I think. And some of the earlier speakers have already asked the questions I was going to ask. (Laughs.) But one thing that I would like to say is that, as you said, we cannot bury our heads in the sand. No matter what we think, the technology is already here. So we cannot avoid it. My question, though, is what do you think about the use of artificial intelligence by, say, for example, graduate students to write dissertations? You did mention the lawyers who used it to write their briefs, and they were caught. But in dissertations and also in class—for example, you have about forty students. You give a written assignment. When you start grading, you have grading fatigue. And so at some point you lose interest in actually checking. And so I'm kind of concerned about how it will affect the students' desire to actually go and do research without resorting to the use of AI.

MOLINA: Well, Clemente, fellow colleague from the state of Pennsylvania, thank you for that, once again, both a question and a reflection here. Listen, many of us wrote our doctoral dissertations—mine at Georgetown. At one point in time, I was so tired of writing about the same topics, following the wonderful advice but also the whims of my dissertation committee, that I was this close to outsourcing my thesis to China. I didn't, but I thought about it. And now graduate students are thinking: OK, why am I going through the difficulties of writing this when ChatGPT can do it for me and the deadline is tomorrow? Well, this is what will distinguish the good students and the good professionals from the other ones. And the interesting part is, as you know, when we teach graduate students we're teaching them critical thinking skills, but we're also teaching them how to express themselves, you know, either orally or in writing. And writing effectively is fundamental in the professions, but also absolutely critical in academic settings. And anybody who's just copying and pasting from ChatGPT into these documents cannot do that level of writing. But you're absolutely right. Let's say that we have an adjunct faculty member who's teaching a hundred students. Will that person go through every single essay to find out whether students were cheating with ChatGPT? Probably not.
And this is why there are also enterprising people who are using artificial intelligence to find out and tell you whether a paper was written using artificial intelligence. So it's a little bit like this fight between different sources, and business opportunities for all of them. And we've done this before. We've used antiplagiarism tools in the past because we knew that students were copying and pasting from Google Scholar and many other sources. And now oftentimes we run antiplagiarism tools. We didn't write them ourselves. Or we tell the students: you run it yourself and you give it to me. And make sure you are not accidentally failing to cite things, which could end up jeopardizing your ability to get a graduate degree because your work was not up to snuff with the requirements of our stringent academic programs. So I would argue that the antiplagiarism tools that we're using will, more often than not, and sooner than expected, incorporate the detection of artificial intelligence writeups. And the other interesting part is to tell the students: well, if you do choose to use any of these tools, what are the rules of engagement? Can you ask it to write a paragraph and then cite it, and mention that ChatGPT wrote it? Not to mention, in addition to that, all the issues about artificial intelligence which the courts are deciding now regarding the intellectual property of those productions. If a song, a poem, a book is written by an artificial intelligence entity, who owns the intellectual property for those works produced by an artificial intelligence machine?

FASKIANOS: Good question. We have a lot of written questions. And I'm sure you don't want to just listen to my voice, so please do raise your hands. But we do have a question from one of your colleagues, Pablo, Pepe Barcega, who's the IT director at Drexel: Considering the potential biases and limitations of AI models, like ChatGPT, do you think relying on such technology in the educational domain can perpetuate existing inequalities and reinforce systemic biases, particularly in terms of access, representation, and fair evaluation of students? And Pepe's question got seven upvotes, so we advanced it to the top of the line.

MOLINA: All right, well, first I have to wonder whether he used ChatGPT to write the question. But I'm going to leave it at that. Thank you. (Laughter.) It's a wonderful question. This is one of the greatest concerns for those of us who have been working on artificial intelligence digital policy for years—not just this year when ChatGPT was released, but for years we've been thinking about this, and even before artificial intelligence, in general with algorithm transparency. And the idea is the following: two things are happening here. One is that we're programming the algorithms using instructions, instructions created by programmers, with all their biases, and their misunderstandings, and their shortcomings, and their lack of context, and everything else. But with artificial intelligence we're doing something even more concerning than that, which is that we have some basic algorithms but then we're feeding a lot of information, a corpus of information, to those algorithms. And the algorithms are fine-tuning the rules based on those.
So it's very, very difficult for experts to explain how an artificial intelligence system actually makes decisions, because we know the engine and we know the data that we fed to the engine, but we don't really know how those decisions are being made through neural networks, through all of the different systems and methods that we have for artificial intelligence. Very, very few people understand how those work. And those people are so busy they don't have time to explain how the algorithms work to others, including the regulators. Let's remember some of the failed cases. Amazon tried this early, for selecting employees for Amazon. And they fed in all the resumes. And guess what? It turned out that most of the recommendations were to hire young white people who had gone to Ivy League schools. Why? Because they were feeding in the descriptions of their first employees, who had done extremely well at Amazon. Hence, by feeding in only that information about past successful employees, only those candidates were recommended. And so that puts away the diversity that we need for different academic institutions, large and small, public and private, from different countries, from different genders, from different ages, from different ethnicities. All those things went away because the algorithm was promoting one particular profile. Recently I had the opportunity to moderate a panel in Washington, DC, and we had representatives from the Equal Employment Opportunity Commission. And they told us how they investigated a hiring algorithm from a company that was disproportionately recommending that they hire people whose first name was Brian and who had played lacrosse in high school, because, once again, a disproportionate number of people in that company had done that. And the algorithm decided, oh, these must be important characteristics for hiring people at this company. Let's not forget, for example, what happened with facial recognition and artificial intelligence with Amazon Rekognition, the facial recognition software: the American Civil Liberties Union decided, OK, I'm going to submit the pictures of all the congressmen to this particular facial recognition engine. And it turned out that it misidentified many of them, particularly African Americans, as felons who had been convicted. So all these biases could have really, really bad consequences. Imagine that you're using this to decide who you admit to your university, and the algorithm is wrong. You know, you are making really biased decisions that will affect the livelihoods of many people, but will also transform society, possibly for the worse, if we don't address this. So this is why the OECD, the European Union, even the White House—everybody is saying: We want this technology. We want to derive the benefits of this technology, while curtailing the abuses. And it's fundamental that we achieve transparency, and that we are sure these algorithms are not biased against the people who use them.

FASKIANOS: Thank you. So I'm going to go next to Emily Edmonds-Poli, who is a professor at the University of San Diego: We hear a lot about providing clear guidelines for students, but for those of us who have not had a lot of experience using ChatGPT it is difficult to know what clear guidelines look like. Can you recommend some sources we might consult as a starting point, or where we might find some sample language?

MOLINA: Hmm. Well, certainly this is what we do in higher education.
We compete for the best students and the best faculty members. And we sometimes compete a little bit to be first to win groundbreaking research. But we tend to collaborate on everything else, particularly when it comes to policy, and guidance, and rules. So there are many institutions, like mine, that have already assembled—I'm sure that yours has done the same—committees, because assembling committees and subcommittees is something we do very well in higher education, with faculty members, with administrators, even with student representation, to figure out: OK, what should we do about the use of artificial intelligence on our campus? I mentioned before that taking a look at the big aspirational declarations by Meta, and Google, and IBM, and Microsoft could be helpful for these communities. But also, I'm a very active member of an organization known as EDUCAUSE. And EDUCAUSE is for educators—predominantly higher education administrators, staff members, and faculty members—to think about the adoption of information technology. And EDUCAUSE has done good work on this front and continues to do good work on this front. So once again, EDUCAUSE and some of the institutions have already published their guidelines on how to use artificial intelligence and incorporate it within their academic lives. Now, that said, we also know that even though all higher education institutions are the same, they're all different. We all have different values. We all believe in different uses of technology. We trust the students more or less. Hence, it's very important that, whatever inspiration you take, you work internally on campus—as you have done with many other issues in the past—to make sure it really reflects the values of your institution.

FASKIANOS: So, Pablo, would you point to a specific college or university that has developed a code of ethics that addresses the use of AI for their academic community beyond your own, but that is publicly available?

MOLINA: Yeah, I'm going to be honest; I don't want to put anybody on the spot.

FASKIANOS: OK.

MOLINA: Because, once again, there are many reasons. But, once again, let me repeat a couple of resources. One of them is from the U.S. Department of Education, from the Office of Educational Technology. And the article is Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, published earlier this year. The other source really is educause.edu. And if you search educause.edu for artificial intelligence, you'll find links to articles, you'll find links to universities. It would be presumptuous of me to evaluate whose policies are better than others, but I would argue that the general principles of nonbias, transparency, accountability, and also the integration of these tools within the academic life of the institution in a morally responsible way—with concepts like privacy by design, security by design, and responsible computing—all of those are good words to have in there. Now, the other problem with policies and guidelines is that, let's be honest, many of those have no teeth in our institutions. You know, we promulgate them. They're very nice. They look beautiful. They are beautifully written. But oftentimes when people don't follow them, there's not a big penalty. And this is why, in addition to having the policies, educating the campus community is important. But it's difficult to do, because we need to educate them about so many things.
About cybersecurity threats, about sexual harassment, about nondiscriminatory policies, about responsible behavior on campus regarding drugs and alcohol, about crime. So many things that they have to learn about. It's hard to get at another topic for them to spend their time on, instead of researching the core subject matter that they chose to pursue for their lives.

FASKIANOS: Thank you. And we will be sending out a link to this video, the transcript, as well as the resources that you have mentioned. So if you didn't get them, we'll include them in the follow-up email. So I'm going to go to Dorian Brown Crosby, who has a raised hand.

Q: Yes. Thank you so much. I put one question in the chat but I have another question that I would like to go ahead and ask now. So thank you so much for this presentation. You mentioned algorithm biases with individuals. And I appreciate you pointing that out, especially when we talk about face recognition, also in terms of forced migration, which is my area of research. But I also wanted you to speak to, or could you talk about, the challenges that some institutions in higher education would have in terms of support for some of the things that you mentioned in terms of potential curricula, or certificates, or other ways that AI would be woven into the new offerings of institutions of higher education. How would that look specifically for institutions that might be challenged to access those resources, such as Historically Black Colleges and Universities? Thank you.

MOLINA: Well, very interesting question, and a really fascinating point of view. Because we all tend to look at things from our own perspective and perhaps not consider the perspective of others—those who have much more money and resources than us, and those who have fewer resources and less funding available. So this is a very interesting line. What is it that we do in higher education when we have these problems? Well, as I mentioned before, we build committees and subcommittees. Usually we also do campus surveys. I don't know why we love doing campus surveys and asking everybody what they think about this. Those are useful tools to discuss. And oftentimes the thing that we do also, that we've done for many other topics: well, we hire people and we create new offices—either academic or administrative offices. With all of those, you know, they have certain limitations to how useful and functional they can be. And they also continue to require resources. Resources that, in the end, are paid for by students with, you know, federal financing. But this is the truth of the matter. So if you start creating offices of artificial intelligence on our campuses, however important the work may be on their guidance and however much extra work can be assigned to them instead of distributed to every faculty and staff member out there, the truth of the matter is that these are not perfect solutions. So what is it that we do? Oftentimes, we work with partners. And our partners love to take—(inaudible)—vendors. But the truth of the matter is that sometimes they have much more expertise on some of these topics.
So, for example, if you're thinking about incorporating artificial intelligence into some of the academic materials that you use in class, well, I'm going to take a guess that if you already work with McGraw Hill in economics, or accounting, or some of the other books and websites that they put out, that you recommend to your students or make mandatory for your students, then you start discussing with them: hey, listen, are you going to use artificial intelligence? How? Are you going to tell me ahead of time? Because, as a faculty member, you may have a choice to decide: I want to work with this publisher and not that particular publisher because of the way they approach this. And let's be honest, we've seen a number of these vendors with major information security problems. McGraw Hill recently left a repository of data misconfigured out there on the internet, and almost anybody could access it. But many others before them, like Chegg and others, were notorious for their information security breaches. Can we imagine that these people are going to adopt artificial intelligence and not do such a good job of securing the information, the privacy, and the nonbiased approaches that we hold dear for students? I think they require a lot of supervision. But in the end, these publishers have the economies of scale for you to recommend those educational materials, instead of developing your own for every course, for every class, and for every institution. So perhaps we're going to have to continue to work together, as we've done in higher education, in consortia, which could be local or regional. It could be based on institutions with the same interests, or on student population, to try to do this. And, you know, hopefully we'll get grants—grants from the federal government—that can be used to develop some of the materials and guidelines that are going to help us precisely embrace this, not only to operate better as institutions and fulfill our mission, but also to make sure that our students are better prepared to join society and compete globally, which is what we have to do.

FASKIANOS: So I'm going to combine questions. Dr. Lance Hunter, who is an associate professor at Augusta University: There's been a lot of debate regarding whether plagiarism detection software tools like Turnitin can accurately detect AI-generated text. What is your opinion regarding the accuracy of AI text-generation detection plagiarism tools? And then Rama Lohani-Chase, at Union County College, wants recommendations on plagiarism checkers—or, which plagiarism detection tools for AI would you recommend?

MOLINA: Sure. So, number one, I'm not going to endorse any particular company, because if I do that I would ask them for money, or the other way around. I'm not sure how it works. I could be seen as biased, particularly here. But there are many out there, and your institutions are using them. Sometimes they are integrated with your learning management system. And, as I mentioned, sometimes we ask the students to use them themselves and then either produce the plagiarism report for us or simply know the result themselves. I'm going to be honest; when I teach ethics and technology, I tell the students about the antiplagiarism tools at the universities. But I also tell them: listen, if you're cheating in an ethics and technology class, I have failed miserably. So please don't. Take extra time if you have to, but—you know, and if you want, use the antiplagiarism tool yourself.
But the question stands and is critical, which is right now those tools are trying to improve the recognition of artificial intelligence written text, but they're not as good as they could be. So like every other technology and, what I'm going to call, antitechnology used to control the damage of the first technology, this is an escalation where we start trying to identify this. And I think they will continue to do this, and they will be successful in doing this. There are people who have written ad hoc tools using ChatGPT to identify things written by ChatGPT. I tried them. They're remarkably good for the handful of papers that I tried myself, but I haven't conducted enough research myself to tell you if they're really effective tools for this. So I would argue that for the time being you must assume that those tools, as we assume all the time, will not catch all of the cases, only some of the most obvious ones. FASKIANOS: So a question from John Dedie, who is an assistant professor at the Community College of Baltimore County: To combat AI issues, shouldn't we rethink assignments? Instead of papers, have students do PowerPoints, ask students to offer their opinions and defend them? And then there was an interesting comment from Mark Habeeb at Georgetown University School of Foreign Service. Knowledge has been cheap for many years now because it is so readily available. With AI, we have a tool that can aggregate the knowledge and create written products. So, you know, what needs to be the focus now is critical thinking and assessing values. We need to teach our students how to assess and use that knowledge rather than how to find the knowledge and aggregate that knowledge. So maybe you could react to those two—the question and comment. MOLINA: So let me start with the Georgetown one, not only because he's a colleague of mine. I also teach at Georgetown, which is where I obtained my doctoral degree a number of years ago. I completely agree. I completely agree with the issue that we have to teach new skills. And one of the programs in which I teach at Georgetown is our master's of analysis, which is basically for people who want to work in the intelligence community. And these people have to find the information and they have to draw inferences, and try to figure out whether it is a nation-state that is threatening the United States, or another, or a corporation, or something like that. And they use all of that critical thinking, and intuition, and all the tools that we have developed in the intelligence community for many, many years. And artificial intelligence, if they suspend their judgement and they only use artificial intelligence, they will miss very important information that is critical for national security. And the same is true for something like our flagship school, the School of Foreign Service at Georgetown, one of the best in the world in that particular field, where you want to train the diplomats, and the heads of state, and the great strategical thinkers on policy and politics in the international arena to precisely think not in the mechanical way that a machine can think, but also to connect those dots. And, sure, they should be using those tools in order to, you know, get the most favorable position and the starting position. But they should also use their critical thinking always, and their capabilities of analysis in order to produce good outcomes and good conclusions. Regarding redoing the assignments, absolutely true. But that is hard. It is a lot of work. 
We're very busy faculty members. We have to grade. We have to be on committees. We have to do research. And now they ask us to redo our entire assessment strategy, with new assignments that we need to grade again and account for artificial intelligence. And I don't think that any provost out there is saying, you know what? You can take two semesters off to work on this and retool all your courses. That doesn't happen in the institutions that I know of. If you get time off because you're entitled to it, you want to devote that time to do research because that is really what you signed up for when you pursued an academic career, in many cases. I can tell you one thing, that here in Europe where oftentimes they look at these problems with fewer resources than we do in the United States, a lot of faculty members at the high school level, at the college level, are moving to oral examinations because it's much harder to cheat with ChatGPT with an oral examination. Because they will ask you interactive, adaptive questions—like the ones we suffered when we were defending our doctoral dissertations. And they will realize, the faculty members, whether or not you know the material and you understand the material. Now, imagine oral examinations for a class of one hundred, two hundred, four hundred. Do you do one for the entire semester, with one topic chosen and run them? Or do you do several throughout the semester? Do you end up using a ChatGPT virtual assistant to conduct your oral examinations? I think these are complex questions. But certainly redoing our assignments and redoing the way we teach and the way we evaluate our students is perhaps a necessary consequence of the advent of artificial intelligence. FASKIANOS: So next question from Damian Odunze, who is an assistant professor at Delta State University in Cleveland, Mississippi: Who should safeguard ethical concerns and misuse of AI by criminals? Should the onus fall on the creators and companies like Apple, Google, and Microsoft to ensure security and not pass it on to the end users of the product? And I think you mentioned at the top in your remarks, Pablo, about how the founder of ChatGPT was urging the Congress to put into place some regulation. What is the onus on ChatGPT to protect against some of this as well? MOLINA: Well, I'm going to recycle more of the material from my doctoral dissertation. In this case it was the Molina cycle of innovation and regulation. It goes like this: basically there are—you know, there are engineers and scientists who create new information technologies. And then there are entrepreneurs and businesspeople and executives who figure out, OK, I know how to package this so that people are going to use it, buy it, subscribe to it, or look at it, so that I can sell the advertisement to others. And, you know, this begins and very, very soon the abuses start. And the abuses are that criminals are using these platforms for reasons that were not envisioned before. Even the executives, as we've seen with Google, and Facebook, and others, decide to invade the privacy of the people because they only have to pay a big fine, but they make much more money than the fines or they expect not to be caught. And what happens in this cycle is that eventually there is so much noise in the media, congressional hearings, that eventually regulators step in and they try to pass new laws to do this, or the regulatory agencies try to investigate using the powers given to them. 
And then all of these new rules have to be tested in courts of law, which could take years, sometimes by the time it reaches all the way to the Supreme Court. Some of them are even knocked down on the way to the Supreme Court when they realize this is not constitutional, it's a conflict of laws, and things like that. Now, by the time we regulate these new technologies, not only have many years gone by, but the technologies have changed. The marketing products and services have changed, the abuses have changed, and the criminals have changed. So this is why we're always living in a loosely regulated space when it comes to information technology. And this is an issue of accountability. We're finding this, for example, with information security. If my phone is hacked, or my computer, or my email, is it the fault of Microsoft, and Apple, and Dell, and everybody else? Why am I the one paying the consequences and not any of these companies? Because it's unregulated. So morally speaking, yes. These companies are accountable. Morally speaking, the users are also accountable, because we're using these tools and we're incorporating them professionally. Legally speaking, so far, nobody is accountable except the lawyers who submitted briefs that were not correct in a court of law and were disciplined for that. But other than that, right now, it is a very gray space. So in my mind, it requires everybody. It takes a village to do the morally correct thing. It starts with the companies and the inventors. It involves the regulators, who should do their job and make sure that there's no unnecessary harm created by these tools. But it also involves every company executive, every professional, every student, and professor who decides to use these tools. FASKIANOS: OK. I'm going to take—combine a couple questions from Dorothy Marinucci and Venky Venkatachalam about the effect of AI on jobs. Dorothy—she's from Fordham University—says she read something about Germany's best-selling newspaper Bild reportedly adopting artificial intelligence to replace certain editorial roles in an effort to cut costs. Does this mean that the field of journalism and communication will change? And Venky's question is: AI—one of the impacts is in the area of automation, leading to elimination of certain types of jobs. Can you talk about both the elimination of jobs and what new types of jobs you think will be created as AI matures into the business world with more value-added applications? MOLINA: Well, what I like about predicting the future, and I've done this before in conferences and papers, is that, you know, when the future comes ten years from now people will either not remember what I said, or, you know, maybe I was lucky and my prediction was correct. In the specific field of journalism, we've seen the journalism and communications field decimated because of the money that they used to make with advertising—and, you know, certainly a big part of that was in the form of corporate profits, but much of it went into hiring good journalists, and investigative journalism, and these people could spend six months writing a story when right now they have six hours to write a story, because there are no resources. And all the advertisement money went instead to Facebook, and Google, and many others because they work very well for advertisements. But now the lifeblood of journalism organizations has been really, you know, undermined. 
And there's good journalism in other places, in newspapers, but sadly this is a great temptation to replace some of the journalists with more artificial intelligence, particularly the most—on the least important pieces. I would argue that editorial pieces are the most important in newspapers, the ones requiring ideology, and critical thinking, and many others. Whereas there are others that tell you about traffic changes that perhaps do not—or weather patterns, without offending any meteorologists, that maybe require a more mechanical approach. I would argue that a lot of professions are going to be transformed because, well, if ChatGPT can write real estate announcements that work very well, well, you may need fewer people doing this. And yet, I think that what we're going to find is the same thing we found when technology arrived. We all thought that the arrival of computers would mean that everybody would be without a job. Guess what? It meant something different. It meant that in order to do our jobs, we had to learn how to use computers. So I would argue that this is going to be the same case. To be a good doctor, to be a good lawyer, to be a good economist, to be a good knowledge worker you're going to have to learn also how to use whatever artificial intelligence tools are available out there, and use them professionally within the moral and the ontological concerns that apply to your particular profession. Those are the kind of jobs that I think are going to be very important. And, of course, all the technical jobs, as I mentioned. There are tons of people who consider themselves artificial intelligence experts. Only a few at the very top understand these systems. But there are many others in the pyramid that help with preparing these systems, with the support, the maintenance, the marketing, preparing the datasets to go into these particular models, working with regulators and legislators and compliance organizations to make sure that the algorithms and the tools are not running afoul of existing regulations. All of those, I think, are going to be interesting jobs that will be part of the arrival of artificial intelligence. FASKIANOS: Great. We have so many questions left and we just couldn't get to them all. I'm just going to ask you just to maybe reflect on how the use of artificial intelligence in higher education will affect U.S. foreign policy and international relations. I know you touched upon it a little bit in reacting to the comment from our Georgetown University colleague, but any additional thoughts you might want to add before we close? MOLINA: Well, let's be honest, one particular one that applies to education and to everything else, there is a race—a worldwide race for artificial intelligence progress. The big companies are fighting—you know, Google, and Meta, many others, are really putting—Amazon—putting resources into that, trying to be first in this particular race. But it's also a national race. For example, it's very clear that there are executive orders from the United States as well as regulations and declarations from China that basically are indicating these two big nations are trying to be first in dominating the use of artificial intelligence. And let's be honest, in order to do well in artificial intelligence you need not only the scientists who are going to create those models and refine them, but you also need the bodies of data that you need to feed these algorithms in order to have good algorithms. 
So the barriers to entry for other nations and the barriers to entry by all the technology companies are going to be very, very high. It's not going to be easy for any small company to say: Oh, now I'm a huge player in artificial intelligence. Because even if you may have created an interesting new algorithmic procedure, you don't have the datasets that the huge companies have been able to amass and work on for the longest time. Every time you submit a question to ChatGPT, the ChatGPT experts are using those questions to refine the tool. The same way that when we were using voice recognition with Apple or Android or other companies, we were using those voices and our accents and our mistakes in order to refine their voice recognition technologies. So this is the power. We'll see that the early bird gets the worm: those who are investing, those who are aggressively going for it, and those who are also judiciously regulating this can really do very well in the international arena when it comes to artificial intelligence. And so will their universities, because they will be able to really train those knowledge workers, they'll be able to get the money generated from artificial intelligence, and they will be able to, you know, feed back one into the other. The advances in the technology will result in more need for students, more students graduating will propel the industry. And there will also be—we'll always have a fight for talent where companies and countries will attract those people who really know about these wonderful things. Now, keep in mind that artificial intelligence was the core of this, but there are so many other emerging issues in information technology. And some of them are critical to higher education. So there's still, you know, lots of hype, but we think that virtual reality will have an amazing impact on the way we teach and we conduct research and we train for certain skills. We think that quantum computing has the ability to revolutionize the way we conduct research, allowing us to do computations that were not even thinkable today. We'll look at things like robotics. And if you ask me about what is going to take many jobs away, I would say that robotics can take a lot of jobs away. Now, we thought that there would be no factory workers left because of robots, but that hasn't happened. But keep adding robots with artificial intelligence to serve you a cappuccino, or your meal, or take care of your laundry, or many other things, or maybe clean your hotel room, and you realize, oh, there are lots of jobs out there that no longer will be there. Think about artificial intelligence for self-driving vehicles, boats, planes, cargo ships, commercial airplanes. Think about the thousands of taxi drivers and truck drivers who may end up being out of jobs because, listen, the machines drive safer, and they don't get tired, and they can be driving twenty-four by seven, and they don't require health benefits, or retirement. They don't get depressed. They never miss. Think about many of the technologies out there that have an impact on what we do. So, but artificial intelligence is a multiplier to technologies, a contributor to many other fields and many other technologies. And this is why we're so—spending so much time and so much energy thinking about these particular issues. FASKIANOS: Well, thank you, Pablo Molina. We really appreciate it. 
Again, my apologies that we couldn't get to all of the questions and comments in the chat, but we appreciate all of you for your questions and, of course, your insights were really terrific, Dr. P. So we will, again, be sending out the link to this video and transcript, as well as the resources that you mentioned during this discussion. I hope you all enjoy the Fourth of July. And I encourage you to follow @CFR_Academic on Twitter and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. Again, you send us comments, feedback, suggestions to CFRacademic@CFR.org. And, again, thank you all for joining us. We look forward to your continued participation in CFR Academic programming. Have a great day. MOLINA: Adios. (END)

CXO Conversations
The Hallmark of a Great Leader

CXO Conversations

Play Episode Listen Later Jun 21, 2023 30:09


An accomplished investor, Rand Lewis is the managing partner and co-founder of Delta-v Capital. Rand has invested in many well-known companies, such as Zayo, Cloud Sherpa, LogRhythm, Iterable, and Chegg. For over 24 years, Rand has been investing in and hiring executives for portfolio companies, and he's learned and observed a lot about what makes an executive a great and successful leader.

In this episode, Rand discusses:
How he invests not only dollars, but real relationships with people
Advice to current and emerging CEOs
What makes a leader successful and how he finds it in candidates

After earning an MBA from Northwestern, Rand was a consultant at McKinsey before joining Centennial Ventures. After a successful 10-year run with Centennial, he co-founded Delta-v, which focuses on technology companies.

Enjoy the show? Review us on iTunes, thanks! Thank you Jalan Crossland for lending your award-winning banjo skills to CXO Conversations.

The Sure Shot Entrepreneur
The Insurance Industry Desperately Needs Startups

The Sure Shot Entrepreneur

Play Episode Listen Later Jun 20, 2023 30:45


Charles Moldow, a general partner at Foundation Capital, shares his remarkable journey from being a Wall Street analyst to becoming an entrepreneur and eventually transitioning into his current role as an investor over two decades ago. His captivating anecdotes leave you eager for more, whether he's recounting stories about his father's wisdom on the internet or recalling a memorable encounter with an exceptional entrepreneur. Charles also delves into the exciting market trends within insurtech and offers valuable insights into the areas to focus attention for fruitful opportunities.

In this episode, you'll learn:
[2:20] Charles Moldow's early entrepreneurial ventures during the dynamic evolution of the internet.
[7:58] The role of a VC in sometimes discouraging founders to protect them from their own pitfalls.
[13:01] The revealing nature of a founder's personal life story, showcasing their unique abilities.
[19:54] "Don't prepare to impress me. Just share your authentic truth." - Charles Moldow
[23:43] The importance for entrepreneurs to explore the vast array of promising opportunities for leveraging technology in the insurance industry.

The non-profit organization that Charles is passionate about: safespace

About Charles Moldow
Charles Moldow is a general partner at Foundation Capital. At Foundation, he identifies technology trends and new user experiences that will change the financial services landscape. His thesis investing has him focused on fintech, insurtech and proptech opportunities with a crypto overlay to everything he evaluates. Since he joined Foundation Capital in 2005, he's made seventeen successful investments, five of which have gone public and twelve have been acquired. Charles' public portfolio includes early-stage investments that have led to notable IPOs with DOMA (IPO 2021), Rover (IPO 2021), LendingClub (IPO 2014), OnDeck (IPO 2014) and Everyday Health (2014). Fun fact: Charles moonlights as AAA Little League coach and family vacation planner.

Learn more about Charles here.

About Foundation Capital
Foundation Capital is a Silicon Valley-based early-stage venture capital firm that's dedicated to the proposition that one entrepreneur's idea, with the right support, can become a business that changes the world. The firm is made up of former entrepreneurs who set out to create the firm they wanted as founders. Foundation Capital is currently invested in more than 60 high-growth ventures in the areas of consumer, information technology, software, digital energy, financial technology, and marketing technology. These investments include AdRoll, Beepi, Bolt Threads, DogVacay, Kik, ForgeRock, Lending Home, Localytics, and Visier. The firm's twenty-six IPOs include Lending Club, OnDeck, Chegg, Sunrun, MobileIron, Control4, TubeMogul, Envestnet, Financial Engines, Netflix, NetZero, Responsys and Silver Spring Networks.

Subscribe to our podcast and stay tuned for our next episode.

Follow Us: Twitter | Linkedin | Instagram | Facebook

Higher Ed Spotlight
27. A Futurist On Why Universities Can't Ignore Climate Change

Higher Ed Spotlight

Play Episode Listen Later Jun 20, 2023 26:14


“Universities on Fire: Higher Education in the Climate Crisis” is a provocative new book by author and futurist Bryan Alexander. He believes academia is uniquely poised to be a key player in the climate change debate and that the time has come for academics to step outside of the ivory tower and influence the public conversation.

Higher Ed Spotlight is sponsored by Chegg's Center for Digital Learning and aims to explore the future of higher education. It is produced by Antica Productions.

Venture Unlocked: The playbook for venture capital managers.
Oren Zeev on scaling to $2B+ in AUM as a solo-GP, contrarian investing, and high founder NPS

Venture Unlocked: The playbook for venture capital managers.

Play Episode Listen Later Jun 14, 2023 46:48


Follow me @samirkaji for my thoughts on the venture market, with a focus on the continued evolution of the VC landscape.

We're pleased to welcome Oren Zeev, Founding Partner at Zeev Ventures. Without a doubt, Oren is one of the titans in venture investing, with nearly 30 years of experience, and one of the most unique. Unlike traditional firms that have achieved scale, Oren remains a solo-GP and has an authentic and refreshing view on venture investing. Today, he manages over $2B in assets under management and has backed companies such as Houzz, Audible, Chegg, TripActions, and Tipalti, among many others.

About Oren Zeev:
Oren calls himself a "One Man Venture Capitalist," and TechCrunch says he is a hybrid between an angel investor and a traditional VC. Prior to founding Zeev Ventures, Oren was a part of the founding team of Apax Israel in 1995. In 2002 he moved to the US and co-headed, and later headed, the Technology Practice of Apax and the Silicon Valley office. He began his career at IBM and got his bachelor's from the Israel Institute of Technology and his MBA from INSEAD.

In this episode we discuss:
(02:21) The original thesis behind Zeev Ventures
(09:44) Why Oren has avoided growing beyond a solo-GP
(15:05) How Oren pushes himself to prevent biases and evolve his thinking over time
(18:50) Why fund vintage doesn't matter
(21:24) The reason why Oren can be aggressive with follow-ons
(23:59) What type of support founders can expect
(27:50) Being relevant to founders as a VC
(30:26) Working with other VCs on boards
(32:47) Advice to companies that had 2021 valuations that may need to raise soon
(37:03) Thoughts on this downturn
(41:47) Why venture is still a good long-term investment

I'd love to know what you took away from this conversation with Oren. Follow me @SamirKaji and give me your insights and questions with the hashtag #ventureunlocked. If you'd like to be considered as a guest or have someone you'd like to hear from (GP or LP), drop me a direct message on Twitter.

Podcast production support provided by Agent Bee.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit ventureunlocked.substack.com

Screaming in the Cloud
Centralizing Cloud Security Breach Information with Chris Farris

Screaming in the Cloud

Play Episode Listen Later Jun 8, 2023 35:06


Chris Farris, Cloud Security Nerd at PrimeHarbor Technologies, LLC, joins Corey on Screaming in the Cloud to discuss his new project, breaches.cloud, and why he feels having a centralized location for cloud security breach information is so important. Corey and Chris also discuss what it means to dive into entrepreneurship, including both the benefits of not having to work within a corporate structure and the challenges that come with running your own business. Chris also reveals what led him to start breaches.cloud, and what he's learned about some of the biggest cloud security breaches so far.

About Chris
Chris Farris is a highly experienced IT professional with a career spanning over 25 years. During this time, he has focused on various areas, including Linux, networking, and security. For the past eight years, he has been deeply involved in public-cloud and public-cloud security in media and entertainment, leveraging his expertise to build and evolve multiple cloud security programs.

Chris is passionate about enabling the broader security team's objectives of secure design, incident response, and vulnerability management. He has developed cloud security standards and baselines to provide risk-based guidance to development and operations teams. As a practitioner, he has architected and implemented numerous serverless and traditional cloud applications, focusing on deployment, security, operations, and financial modeling.

He is one of the organizers of the fwd:cloudsec conference and has presented at various AWS conferences and BSides events. Chris shares his insights on security and technology on social media platforms like Twitter, Mastodon and his website https://www.chrisfarris.com.

Links Referenced:
fwd:cloudsec: https://fwdcloudsec.org/
breaches.cloud: https://breaches.cloud
Twitter: https://twitter.com/jcfarris
Company Site: https://www.primeharbor.com

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. My returning guest today is Chris Farris, now at PrimeHarbor, which is his own consultancy. Chris, welcome back. Last time we spoke, you were at Turbot, and now you've decided to go independent because you don't like sleep anymore.
Chris: Yeah, I don't like sleep.
Corey: [laugh]. It's one of those things where when I went independent, at least in my case, everyone thought that it was, oh, I have this grand vision of what the world could be and how I could look at these things, and that's going to just be great and awesome and everyone's going to just be a better world for it. In my case, it was, no, just there was quite literally nothing else for me to do that didn't feel like an exact reframing of what I'd already been doing for years. I'm a terrible employee and setting out on my own was important. It was the only way I found that I could wind up getting to a place of not worrying about getting fired all the time because that was my particular skill set. And I look back at it now, almost seven years in, and it's one of those things where if I had known then what I know now, I never would have started.
Chris: Well, that was encouraging. Thank you [laugh].
Corey: Oh, of course. 
And in sincerity, it's not one of those things where there's any one thing that stops you, but it's the, a lot of people get into the independent consulting dance because they want to do a thing and they're very good at that thing and they love that thing. The problem is, when you're independent, and at least starting out, I was spending over 70% of my time on things that were not billable, which included things like go and find new clients, go and talk to existing clients, the freaking accounting. One of the first hires I made was a fractional CFO, which changed my life. Up until that, my business partner and I were more or less dead reckoning of looking at the bank account and how much money is in there to determine if we could afford things. That's a very unsophisticated way of navigating. It's like driving by braille.
Chris: Yeah, I think I went into it mostly as a way to define my professional identity outside of my W-2 employer. I had built cloud security programs for two major media companies and felt like that was my identity: I was the cloud security person for these companies. And so, I was like, ehh, why don't I just define myself as myself, rather than define myself as being part of a company that, in the media space, they are getting overwhelmed by change, and job security, job satisfaction, wasn't really something that I could count on.
Corey: One of the weird things that I found—it's counterintuitive—is that when you're independent, you have gotten to a point where you have hit a point of sustainability, where you're not doing the oh, I'm just going to go work for 40 billable hours a week for a client. It's just like being an employee without a bunch of protections and extra steps. That doesn't work super well. But now, at the point where I'm at where the largest client we have is a single-digit percentage of revenue, I can't get fired anymore, without having a whole bunch of people suddenly turn on me because I've done something monstrous, in which case, I probably deserve not to have business anymore, or there's something systemic in the macro environment, which given that I do the media side and I do the cost-cutting side, I work on the way up, I work on the way down, I'm questioning what that looks like in a scenario that doesn't involve me hunting for food. But it's counterintuitive to people who have been employees their whole life, like I was, where, oh, it's risky and dangerous to go out on your own.
Chris: It's risky and dangerous to be, you know, tied to a single, yeah, W-2 paycheck. So.
Corey: Yeah. The question I'd like to ask is, how many people need to be really pissed off before you have one of those conversations with HR that doesn't involve giving you a cup of coffee? That's the tell: when you don't get coffee, it's a bad conversation.
Chris: Actually, that you haven't seen [unintelligible 00:04:25] coffee these days. You don't want the cup of coffee, you know. That's—
Corey: Even when they don't give you the crappy percolator navy coffee, like, midnight hobo diner style, it's still going to be a bad meeting because [unintelligible 00:04:37] pretend the coffee's palatable.
Chris: Perhaps, yes. I like not having to deal with my own HR department. 
And I do agree that yeah, getting out of the W-2 space allows me to work on side projects that interest me or, you know, volunteer to do things like continuing the fwd:cloudsec, developing breaches.cloud, et cetera.
Corey: I'll never forget, one of my last jobs I had a boss who walked past and saw me looking at Reddit and asked me if that was really the best use of my time. At first—it was in, I think, the sysadmin forum at the time, so yes, it was very much the best use of my time for the problem I was focusing on, but also, even if it wasn't, I spent an inordinate amount of time on social media, just telling stories and building audiences, on some level. The weird thing is that what counts as work versus what doesn't count as work gets very squishy when you're doing your own marketing.
Chris: True. And even when I was a W-2 employee, I spent a lot of time on Twitter because Twitter was an intel source for us. It was like, "Hey, who's talking about the latest cloud security misconfigurations? Who's talking about the latest data breach? What is Mandiant tweeting about?" It was, you know—I consider it part of my job to be on Twitter and watching things.
Corey: Oh, people ask me that. "So, you're on Twitter an awful lot. Don't you have a newsletter to write?" Like, yeah, where do you think that content comes from, buddy?
Chris: Exactly. Twitter and Mastodon. And Reddit now.
Corey: There's a whole argument to be had about where to find various things. For me at least, because I'm only security adjacent, I was always trying to report the news that other people had, not make the news myself.
Chris: You don't want to be the one making the news in security.
Corey: Speaking of, I'd like to talk a bit about what you just alluded to: breaches.cloud. I don't think I've seen that come across my desk yet, which tells me that it has not been making a big splash just yet.
Chris: I haven't been really announcing it; it got published the other night, and so basically, yeah, this is sort of an inaugural marketing push for breaches.cloud. So, what we're looking to do is document all the public cloud security breaches, what happened, why, and more importantly, what the companies did or didn't do that led to the security incident or the security breach.
Corey: How are you slicing the difference between broad versus deep? And what I mean by that is, there are some companies where there are indictments and massive deep dives into everything that happens with timelines and blow-by-blows, and other times you wind up with the email that shows up one day of, "Security is very important to us. Now, listen to how we completely dropped the ball on it." And it just gives the broadest description of what happened that they can get away with. Occasionally, you find out oh, it was an open S3 bucket, or they'll allude to something that sounds like it. Does that count for inclusion? Does it not? How do you make those editorial decisions?
Chris: So, we haven't yet built a page around just all of the recipients of the Bucket Negligence Award. We're looking at the specific ones where there's been something that's happened that's usually involving IAM credentials—oftentimes involving IAM credentials found in GitHub—and what led to that. So, in a lot of cases, if there's a detailed company postmortem that they send their customers that said, "Hey, we goofed up, but complete transparency—" and then they hit all the bullet points of how they goofed up. 
Or in the case of certain others, like Uber, "Hey, we have court transcripts that we can go to," or, "We have federal indictments," or, "We have court transcripts, and federal indictments and FTC civil actions." And so, we go through those trying to suss out what the company did or did not do that led to the breach. And really, the goal here is to be able to articulate as security practitioners, hey, don't attach S3 full access to this role on EC2. That's what got Capital One in trouble.
Corey: I have a lot of sympathy for the Capital One breach and I wish they would talk about it more than they do, for obvious reasons, just because it was not, someone showed up and made a very obvious dumb decision, like, "Oh, that was what that giant red screaming thing in the S3 console means." It was a series of small misconfigurations that led to another one, to another one, to another one, and eventually gets to a point where a sophisticated attacker was able to chain them all together. And yes, it's bad, yes, they're a bank and the rest, but I look at that and it's—that's the sort of exploit that you look at and it's okay, I see it. I absolutely see it. Someone was very clever, and a bunch of small things that didn't rise to the obvious. But they got dragged and castigated as if they basically had a four-character password that they'd left on the back of the laptop on a Post-It note in an airport lounge when their CEO was traveling. Which is not the case.
Chris: Or all of the highlighting of the fact that Paige Thompson was a former Amazon employee, making it seem like it was her insider abilities that led to the incident, rather than she just knew that, hey, there's a metadata service and it gives me creds if I ask it.
Corey: Right. That drove me nuts. There was no malfeasance as an employee. And to be very direct, from what I understand of internal AWS controls, had there been, it would have been audited, flagged, caught, interdicted. I have talked to enough Amazonians that either a lot of them are lying to me very consistently despite not knowing each other, or they're being honest when they say that you can't get access to customer data using secret inside hacks.
Chris: Yeah. I have reasonably good faith in AWS and their ability to not touch customer data in most scenarios. And I've had cases that I'm not allowed to talk about where Amazon has gone and accessed customer data, and the amount of rigmarole and questions and drilling that I got as a customer to have them do that was pretty intense and somewhat, actually, annoying.
Corey: Oh, absolutely. And, on some level, it gets frustrating when it's a, look, this is a test account. I have nothing of sensitive value in here. I want the thing that isn't working to start working. Can I just give you a whole, like, admin-powered user account and we can move on past all of this? And their answer is always absolutely not.
Chris: Yes. Or, "Hey, can you put this in our bucket?" "No, we can't even write to a public bucket or a bucket that, you know, they can share too." So.
Corey: An Amazonian had to mail me a hard drive because they could not send anything out of S3 to me.
Chris: There you go.
Corey: So, then I wound up uploading it back to S3 with, you know, a Snowball Edge because there's no overkill like massive overkill.
Chris: No, the [snowmobile 00:11:29] would have been the massive overkill. But depending on where you live, you know, you might not have been able to get a permit to park the snowmobile there.
Corey: They apparently require a loading dock. 
Same as with the outposts. I can't fake having one of those on my front porch yet.
Chris: Ah. Well, there you go. I mean, you know it's the right height though, and you don't mind them ruining your lawn.
Corey: So, help me understand. It makes sense to me at least, on some level, why having a central repository of all the various cloud security breaches in one place that's easy to reference is valuable. But what caused you to decide, you know, rather than saying it'd be nice to have, I'm going to go build that thing?
Chris: Yeah, so it was actually right before the last time we spoke, Nicholas Sharp was indicted. And there was like, hey, this person was indicted for, you know, this cloud security case. And I'm like, that name rings a bell, but I don't remember who this person was. And so, I kind of realized that there's so many of these things happening now that I forget who is who. And so, when a new piece of news comes along, I'm like, where did this come from and how does this fit into what my knowledge of cloud security is and cloud security cases? So, I kind of realized that these are all running together in my mind. The Department of Justice only referenced 'Company One,' so it wasn't clear to me if this even was a new cloud incident or one I already knew about. And so basically, I decided, okay, let's build this. Breaches.cloud was available; I think I kind of got the idea from hackingthe.cloud. And I had been working with some college students through the Collegiate Cyber Defense Competition, and I was like, "Hey, anybody want a spring research project that I will pay you for?" And so yeah, PrimeHarbor funded two college students to do quite a bit of the background research for me, I mentored them through, "Hey, so here's what this means," and, "Hey, have we noticed that all of these seem to relate to credentials found in GitHub? You know, maybe there's a pattern here." So, if you're not yet scanning for secrets in GitHub, I recommend you start scanning for secrets in your GitHub, private and public repos.
Corey: Also, it makes sense to look at the history. Because, oh, I committed a secret. I'm going to go ahead and revert that commit and push that. That solves the problem, right?
Chris: No, no, it doesn't. Yes, apparently, you can force push and delete an entire commit, but you really want to use a tool that's going to go back through the commit history and dig through it because as we saw in the Uber incident, when—the second Uber incident, the one that led to the CSO's conviction—yeah, the two attackers, [unintelligible 00:14:09] stuffed an Uber employee's personal GitHub account that they were also using for Uber work, and yeah, then they dug through all the source code and dug through the commit histories until they found a set of keys, and that's what they used for the second Uber breach.
Corey: Awful when that hits. It's one of those things where it's just… [sigh], one thing leads to another leads to another. And on some level, I'm kind of amazed by the forensics that happen around all of these things. With the counterpoint, it is so… freakishly difficult, I think, for lack of a better term, just to be able to say what happened with any degree of certainty, so I can't help but wonder in those dark nights when the creeping dread starts sinking in, how many things like this happen that we just never hear about because they don't know?
Chris: Because they don't turn on CloudTrail. Probably a number of them. 
Once the data gets out and shows up on the dark web, then people start knocking on doors. You know, Troy Hunt's got a large collection of data breach stuff, and you know, when there's a data breach, people will send him, "Hey, I found these passwords on the dark web," and he loads them into Have I Been Pwned, and you know, [laugh] then the CSO finds out. So yeah, there's probably a lot of this that happens in the quiet of night, but once it hits the dark web, I think that data starts becoming available and the victimized company finds out.
Corey: I am profoundly cynical, in case that was unclear. So, I'm wondering, on some level, what is the likelihood or commonality, I suppose, of people who are fundamentally just viewing security breach response from a perspective of step one, make sure my resume is always up to date. Because we talk about these business continuity plans and these DR approaches, but very often it feels like step one, secure your own mask before assisting others, as they always say on the flight. Where does personal preservation come in? And how does that compare with company preservation?
Chris: I think down at the [IaC 00:16:17] level, I don't know of anybody who has not gotten a job because they had Equifax on their resume back in, what, 2017, 2018, right? Yes, the CSO, the CEO, the CIO probably all lost their jobs. And you know, now they're scraping by on book deals and speaking engagements.
Corey: And these things are always, to be clear, nuanced. It's rare that this is always one person's fault. If you're a one-person company, okay, yeah, it's kind of your fault, let's be clear here, but there are controls and cost controls and audit trails—presumably—for all of these things, so it feels like that's a relatively easy thing to talk around, that it was a process failure, not that one person sucked. "Well, didn't you design and implement the process?" "Yes. But it turned out there were some holes in it and my team reported that those weren't there and it turned out that they were and, well, live and learn." It feels like that's something that could be talked around.
Chris: It's an investment failure. And again, you know, if we go back to Harry Truman, "The buck stops here," you know, it's the CEO who decides that, hey, we're going to buy a corporate jet rather than buy a [SIIM 00:17:22]. And those are the choices that happen at the top level that define, do you have a capable security team, and more importantly, do you have a capable security culture such that your security team isn't the only ones who are actually thinking about security?
Corey: That's, I guess, a fair question. I saw a take on Twitter—which is always a weird thing—or maybe it was Blue-ski or somewhere else recently, that if you don't have a C-level executive responsible for security with security in their title, your company does not take security seriously. And I can see that past a certain point of scale, but as a one-person company, do you have a designated CSO?
Chris: As a one-person company and as a security company, I sort of do have a designated CSO. I also have, you know, the person who's like, oh, I'm going to not put MFA on the root of this one thing because, while it's an experiment and it's a sandbox and whatever else, but I also know that that's not where I'm going to be putting any customer data, so I can measure and evaluate the risk from both a security perspective and a business existential investment perspective. 
The larger the organization gets, the more detached the CEO gets from the risk and from what the company is building and doing, and that is where you get into trouble. And lots of companies have C-level somebody who's responsible for security. It's called the CSO, but oftentimes, they report four levels down, or even more, from the chief executive who is actually the one making the investment decisions.
Corey: On some level, the oh yeah, that's my responsibility, too, but it feels like it's a trap that one falls into. Like, well, the CTO is responsible for security at a publicly traded company. Like, well… that tends to not work anymore, past certain points of scale. Like when I started out independently, yes, I was the CSO. I was also the accountant. I was also the head of marketing. I was also the janitor. There's a bunch of different roles; we all wear different hats at different times. I'm also not a big fan of shaming that oh, yeah. This is a universal truth that applies to every company in existence. That's also where I think Twitter started to go wrong where you would get called out whenever making an observation or witticism or whatnot because there was some vertex case to which it did not necessarily apply and then people would 'well, actually,' you to death.
Chris: Yeah. Well, and I think there's a lot of us in the security community who are in the security one-percenters. We're, "Hey, yes, I'm a cloud security person on a 15-person cloud security team, and here's this awesome thing we're doing." And then you've got most of the other companies in this country that are probably below the security poverty line. They may or may not have a dedicated security person, they certainly don't have a SIIM, they certainly don't have anybody who's monitoring their endpoints for malware attacks or anything else, and those are the companies that are getting hit all the time with, you know, a lot of this ransomware stuff. Healthcare is particularly vulnerable to that.
Corey: When you take a look across the industry, what is it that you're doing now at PrimeHarbor that you feel has been an unmet need in the space? And let me be clear, as of this recording earlier today, we signed a contract with you for a project. There's more to come on that in the future. So, this is me asking you to tell a story, not challenging, like, what do you actually do? This is not a refund request, let's be very clear here. But what's the unmet need that you saw?
Chris: I think the unmet need that I see is we don't talk to our builder community. And when I say builder, I mean, developers, DevOps, sysadmins, whatever. AWS likes the term builder and I think it works. We don't talk to our builder community about risk in a way that makes sense to them. So, we can say, "Hey, well, you know, we have this security policy and section 24601 says that all data classifications must be signed off by the data custodian," and a developer is going to look at you with their head tilted, and be like, "Huh? What? I just need to get the sprint done." Whereas if we can articulate the risk—and one of the reasons I wanted to do breaches.cloud was to have that corpus of articulated risk around specific things—I can articulate the risk and say, "Hey, look, you know how easy it is for somebody to go in and enumerate an S3 bucket? 
And then once they've enumerated and guessed that S3 bucket exists, they list it, and oh, hey, look, now that they've listed it, they know all of the objects and all of the juicy PII that you just made public." If you demonstrate that to them, then they're going to be like, "Oh, I'm going to add the extra story point to this story to go figure out how to do CloudFront origin access identity." And now you've solved, you know, one more security thing. And you've done it in a way that's not just giving a man a fish or closing the bucket for them, but now they know, hey, I should always use origin access identity. This is why I need to do this particular thing.
Corey: One of the challenges that I've seen in a variety of different sites that have tried to start cataloging different breaches and other collections of things happening in public is the discoverability or the library management problem. The most obvious example of this is, of course, the AWS console itself, where it paginates things like, oh, there are 3000 things here, ten at a time, through various pages for it. Like, the marketplace is just a joke of discoverability. How do you wind up separating the stuff that is interesting and notable, rather than, well, this has about three sentences to it because that's all the company would say?
Chris: So, I think even the ones where there's three sentences, we may actually go ahead and add it to the repo, or we may just hold it as a draft, so that we know later on when, "Hey, look, here's a federal indictment for Company Three. Oh, hey, look. Company Three was actually this breach announcement that we heard about three months ago," or even three years ago. So like, you know, Chegg is a great example of, you know, one of those where, hey, you know, there was an incident, and they disclosed something, and then, years later, FTC comes along and starts banging them over the head. And in the FTC documentation, or in the FTC civil complaint, we got all sorts of useful data. Like, not only were they using root API keys, every contractor and employee there was sharing the root API keys, so when they had a contractor who left, it was too hard to change the keys and share it with everybody, so they just didn't do that. The contractor still had the keys, and that was one of the findings from the FTC against Chegg. Similar to that, Cisco didn't turn off contractors' access, and I think—this is pure speculation—I think the poor contractor one day logged into his Google Cloud Shell, cd'ed into a Terraform directory, ran 'terraform destroy', and rather than destroying what he thought he was destroying, it had the access keys back to Cisco WebEx and took down 400 EC2 instances that made up all of WebEx. 
These are the kinds of things that I think it's worth capturing because the stories are going to come out over time.
Corey: What have you seen in your, I guess, so far, a limited history of curating this that—I guess, first what is it you've learned that you've started seeing as far as patterns go, as far as what warrants inclusion, what doesn't, and of course, once you started launching and going a bit more public with it, I'm curious to hear what the response from companies is going to be.
Chris: So, I want to be very careful and clear that if I'm going to name somebody, that we're sourcing something from the criminal justice system, that we're not going to say, "Hey, everybody knows that it was Paige Thompson who was behind it." No, no, here's the indictment that said it was Paige Thompson that was, you know, indicted for this Capital One sort of thing. All the data that I'm using, it all comes from public sources, it's all cited, so it's not like, hey, some insider said, "Hey, this is what actually happened." You know? I very much learned from the Ubiquiti case that I don't want to be in the position of Brian Krebs, where it's the attacker themselves who's updating the site and telling us everything that went wrong, when in fact, it's not because they're in fact the perpetrator.
Corey: Yeah, there's a lot of lessons to be learned. And fortunately, for what it's s—at least it seems… mostly, that we've moved past the bad old days of security researchers getting sued on a whim by large companies for saying embarrassing things about them. Of course, watch me be tempting fate and by the time this publishes, I'll get sued by some company, probably Azure or whatnot, telling me that, "Okay, we've had enough of you saying bad things about our security." It's like, well, cool, but I also read the complaint before you file because your security is bad. Buh-dum-tss. I'm kidding. I'm kidding. Please don't sue me.
Chris: So, you know, whether it's slander or libel, depending on whether you're reading this or hearing it, you know, truth is an actual defense, so I think Microsoft doesn't have a case against you. I think for what we're doing in breaches, you know—and one of the reasons that I'm going to be very clear on anybody who contributes—and just for the record, anybody is welcome to contribute. The GitHub repo that runs breaches.cloud is public and anybody can submit me a pull request and I will take their write-ups of incidents. But whatever it is, it has to be sourced. One of the things that I'm looking to do shortly is start soliciting sponsorships for breaches so that we can afford to go pull down the PACER documents. Because apparently in this country, while we have a right to a speedy trial, we don't have a right to actually get the court transcripts for less than ten cents a page. And so, part of what we need to do next is download those—and once we've purchased them, we can make them public—download those, make them public, and let everybody see exactly what the transcript was from the Capital One incident, or the Joey Sullivan trial.
Corey: You're absolutely right. It drives me nuts that I have to wind up budgeting money for PACER to pull up court records. And at ten cents a page, it hasn't changed in decades, where it's oh, this is the cost of providing that data. It's, I'm not asking someone to walk to the back room and fax it to me. I want to be very clear here. It just feels like it's one of those areas where the technology and government are not caught up, and it's—part of the problem is, of course, having no competition.
Chris: There is that. And I think I read somewhere that the ent—if you wanted to download the entire PACER, it would be, like, $100 million. Not that you would do that, but you know, it is the moneymaker for the judicial system, and you know, they do need to keep the lights on. Although I guess that's what my taxes are for. But again, yes, they're a monopoly; they can do that.
Corey: Wildly frustrating, isn't it?
Chris: Yeah [sigh]… yeah, yeah, yeah. Yeah, I think there's a lot of value in the court transcripts. I've held off on publishing the Capital One case because one, well, already there's been a lot of ink spilled on it, and two, I think all the good detail is going to be in the trial transcripts from Paige Thompson's trial.
Corey: So, I am curious what your take is on… well, let's call it the 'FTX thing.' I don't even know how to describe it at this point. Is it a breach? Is it just malfeasance? Is it 15,000 other things? But I noticed that it's something that breaches.cloud does talk about a bit.
Chris: Yeah. So, that one was a fascinating one that came out because as I was starting this project, I heard you know, somebody who was tweeting was like, "Hey, they were storing all of the crypto private keys in AWS Secrets Manager." And I was like, "Errr?" And so, I went back and I read John J. Ray III's interim report to the creditors. Now, John Ray is the man who was behind the cleaning up of Enron, and his comment was "FTX is the"—"Never in my career have I seen such a complete failure of corporate controls and such a complete absence of trustworthy information as occurred here." And as part of his general, broad write-up, they went into, in depth, a lot of the FTX AWS practices. Like, we talk about, hey, you know, your company should be multi-account. FTX was worse. They had three or four different companies all operating in the same AWS account. They had their main company, FTX US, Alameda, all of them had crypto keys in Secrets Manager and there was no access control between any of those. And what ended up happening is, on the day that SBF left and Ray came in as CEO, $400 million worth of crypto somehow disappeared out of FTX's wallets.
Corey: I want to call this out because otherwise, I will get letters from the AWS PR spin doctors. Because on the surface of it, I don't know that there's necessarily a lot wrong with using Secrets Manager as the backing store for private keys. I do that with other things myself. The question is, what other controls are there? You can't just slap it into Secrets Manager and, "Well, my job is done. Let's go to lunch early today." There are challenges [laugh] around the access levels, there are—around who has access, who can audit these things, and what happens. Because for most of the secrets I have in Secrets Manager, it is not a viable strategy to take that thing and abscond to a country with a non-extradition treaty for the rest of my life, but with private keys and crypto, it kind of is.
Chris: That's it. It's like, you know, hey, okay, the RDS database password is one thing, but $400 million in crypto is potentially another thing. Putting it in Secrets Manager might have been the right answer, too. 
You get KMS customer-managed keys, you get full auditability with CloudTrail, everything else — but we didn't hear any of that coming out of Ray's report to the creditors. So again, the question is, did they even have CloudTrail turned on? He did explicitly say that FTX had not enabled GuardDuty.

Corey: On some level, even if GuardDuty doesn't do anything for you — which in my case, it doesn't, but I want to be clear, you should still enable it anyway — because you're going to get dragged when there's the inevitable breach, because there's always a breach somewhere, and then you get yelled at for not having turned on something that was called GuardDuty. You already sound negligent, just with that sentence alone. Same with Security Hub. Good name on AWS's part if you're trying to drive service adoption. Just by calling it the thing that responsible people would use, you will see adoption, even if people never configure or understand it.

Chris: Yeah, and then of course, hey, you had Security Hub turned on, but you ignored the 80,000 findings in it. Why did you ignore those 80,000 findings? I find Security Hub to probably be a little bit too much noise. And it's not Security Hub, it's 'Compliance Hub.' Everything — and I'm going to have a blog post coming out shortly on this — everything that Security Hub looks at, it looks at from a compliance perspective.

If you look at all of its scoring, it's not how many things are wrong; it's how many rules you are a hundred percent compliant with. It is not useful for anybody below that AWS security poverty line to really master or to really operationalize.

Corey: I really want to thank you for taking the time to catch up with me once again. Although now that I'm the client, I expect I can do this on demand, which is just going to be delightful. If people want to learn more, where can they find you?

Chris: So, they can find breaches.cloud at, well, https://breaches.cloud. If you're looking for me, I am either on Twitter, still, at @jcfarris, or you can find me and my consulting company, which is www.primeharbor.com.

Corey: And we will, of course, put links to all of that in the [show notes 00:33:57]. Thank you so much for taking the time to speak with me. As always, I appreciate it.

Chris: Oh, thank you for having me again.

Corey: Chris Farris, cloud security nerd at PrimeHarbor. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that you're also going to use as the storage back-end for your private keys.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
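The controls Chris alludes to above (a customer-managed KMS key, CloudTrail auditability, and tight access control on the secret itself) map onto a handful of AWS API calls. A minimal sketch in Python with boto3, where the secret name, key alias, and role ARN are illustrative placeholders rather than anything from the episode, might look like this:

import json

import boto3

secretsmanager = boto3.client("secretsmanager")

# Encrypt the secret with a customer-managed KMS key (CMK) instead of the
# default aws/secretsmanager key, so every decrypt shows up as a KMS event
# in CloudTrail against a key you control.
secret = secretsmanager.create_secret(
    Name="prod/wallet/signing-key",   # hypothetical secret name
    KmsKeyId="alias/wallet-signing",  # hypothetical CMK alias
    SecretString=json.dumps({"private_key": "<redacted>"}),
)

# Deny GetSecretValue to every principal except one designated role: the
# kind of per-entity access control that, per Ray's report, was missing
# when several companies shared a single account.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::111122223333:role/wallet-signer"
            }
        },
    }],
}
secretsmanager.put_resource_policy(
    SecretId=secret["ARN"],
    ResourcePolicy=json.dumps(resource_policy),
)

The design point is that the explicit deny, not the encryption, is what keeps a neighboring team (or, in FTX's case, a sibling company in the same account) from reading the key.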
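Chris's 'Compliance Hub' point is that Security Hub's score tells you how many rules you pass completely, not how much is actually wrong. One way to get the latter is to query findings directly; here is a minimal boto3 sketch that tallies active failed findings by severity (the severity roll-up is an assumed triage view, not something described in the episode):

import boto3
from collections import Counter

securityhub = boto3.client("securityhub")

# Count ACTIVE findings whose compliance check FAILED, grouped by severity,
# rather than reading the per-standard "percent compliant" score.
counts = Counter()
paginator = securityhub.get_paginator("get_findings")
for page in paginator.paginate(
    Filters={
        "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    }
):
    for finding in page["Findings"]:
        counts[finding["Severity"]["Label"]] += 1

for severity, total in counts.most_common():
    print(f"{severity}: {total}")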

Higher Ed Spotlight
26. Delivering on the Promise of Inclusive Excellence

Higher Ed Spotlight

Play Episode Listen Later Jun 6, 2023 34:02


A wide-ranging interview with the new President of the University of Maryland, Baltimore County (UMBC), Valerie Sheares Ashby. We touch on the legacy of inclusive excellence she has inherited, what it takes to diversify STEM, and the challenges of leading a major state university in the current political climate. Higher Ed Spotlight is sponsored by Chegg's Center for Digital Learning and aims to explore the future of higher education. It is produced by Antica Productions.

Wall Street Millennial
Chegg: ChatGPT's Biggest Victim

Wall Street Millennial

Play Episode Listen Later Jun 1, 2023 14:09


In this episode we examine one of ChatGPT's biggest victims: the online homework-help company Chegg.

Lochhead on Marketing
176 How AI Changes Startups, Entrepreneurship & Venture Capital with Mike Maples Jr. of Floodgate

Lochhead on Marketing

Play Episode Listen Later May 24, 2023 39:09


On this episode of Lochhead on Marketing, we have a dialogue with Mike Maples Jr. on how artificial intelligence is changing startups and venture capital. Mike Maples Jr. is the co-founder of Floodgate, one of the highest-profile early-stage venture capital firms. He also has a podcast called Starting Greatness, and it is one of my absolute favorites. By the end of it, we hope that you'll gain a new way to think about both technical risk and market risk for startups, and why in an AI world you must either be radically different or radically disintermediate something. Welcome to Lochhead on Marketing, the number one charting marketing podcast for marketers, category designers, and entrepreneurs with a different mind.

Mike Maples Jr. on AI

We begin the discussion with the challenge of making sense of the rapidly evolving field of AI. Mike also talks about the traditional funding model of startups, where the primary focus was taking out technical risk, and how the LAMP stack, which commoditized what was once expensive, made it easier to start a startup. Mike notes that the nature of the LAMP stack changed what startups were funded for.

"What I like to say is that the LAMP stack was deflationary in terms of the cost of starting a startup. And so what does that mean? It meant that what you were funding was different, because if Kevin Rose can start Digg for $1,500 over a weekend, there's no technical risk there. I mean, he hired a contractor to do it that he didn't even know at the time." – Mike Maples Jr.

Who gets Product-Market Fit first

The conversation then moves on to the changing dynamics of venture capital investment. The discussion continues with the notion that technical risk and market risk are inversely related. Solving a technically difficult problem that is valuable to society will create a market; if the problem is easy to solve technically, it will all come down to who achieves product-market fit first. To add value to the business, Floodgate and YC have taken the approach of funding the takedown of market risk. As technology becomes more commoditized and innovations become more accessible, the person who creates something people want the quickest wins. This is why YC was so successful: it offered young people $100,000 to either take market risks or leave. He also mentions that the traditional venture capital model may not be appropriate for all businesses, and that deflationary factors such as content, code, and data may change the way businesses are built.

Mike Maples Jr. on AI and the future of Venture Capital

Mike Maples Jr. then returns to the topic of artificial intelligence and its implications for the future of venture capital. Here, Mike emphasizes two ends of the risk spectrum: high technical risk and high market risk. On the one hand, some projects require large amounts of funding for mass computation in order to build massive models that have the potential to change humanity. On the other hand, AI is being used in a variety of fields, including content generation for marketing, customer service chatbots, and lead generation, resulting in a deflationary effect on content, code, and data. According to Mike, some businesses may not require traditional venture capital funding and should instead focus on achieving $50 million in revenue with a small team and minimal funding. There is also speculation that the current billion-dollar funds may be providing the wrong incentives to these companies.

To hear more from Mike Maples Jr. and how AI can affect the future of startups and venture capital, download and listen to this episode.

Bio

Mike Maples Jr. is an entrepreneur turned venture capitalist. He's co-founder of Silicon Valley-based, early-stage VC Floodgate and the host of the popular "Starting Greatness" podcast. Investments include Twitter, Lyft, Bazaarvoice, Sparefoot, Ayasdi, Xamarin, Doubledutch, Twitch.tv, Playdom, Chegg, Demandforce, Rappi, Smule, and Outreach.

Higher Ed Spotlight
25. Why Grades Hurt Learning

Higher Ed Spotlight

Play Episode Listen Later May 23, 2023 30:25


Susan D. Blum is one of a growing number of educators questioning the role of grades in learning. She's a professor and the editor of “Ungrading: Why Rating Students Undermines Learning (and What to Do Instead).” She advocates for a sweeping pedagogical shift based on the practice of ‘ungrading': not assigning a number or letter value to a student's work. Higher Ed Spotlight is sponsored by Chegg's Center for Digital Learning and aims to explore the future of higher education. It is produced by Antica Productions.

The WAN Show Podcast
I'm sure you have questions..... - WAN Show May 19, 2023

The WAN Show Podcast

Play Episode Listen Later May 22, 2023 226:00


Go to https://babbel.com/WAN for 55% off your subscription
Visit Newegg at https://lmg.gg/newegg
Enable your creative side! Check out Moment at https://lmg.gg/ShopMoment

Timestamps (Courtesy of NoKi1119) Note: Timing may be off due to sponsor change:
0:00 Chapters
0:48 Intro
1:14 Topic #1 - Linus steps down
2:55 Linus's past suggestions for LMG solutions
5:18 Luke on Teams notifs, Linus workshop on storyboarding
10:56 Discussing writing knowledge-based articles
11:48 Linus on disruptive shooting behavior, mentions SC
14:05 "Chief Vision Officer" Discussing LMG teams
16:53 Linus flabbergasted at MarkBench's progress
21:03 Luke asks about oversaturation
23:51 Ideas for LTT Labs, SC update, RF chamber's cost
27:28 Linus explains "moat building," quoting DMs about FP
29:06 Community response, Linus on difficult administrative
32:39 Topic #2 - Free TV! that spies on you
34:01 Charged $500 if opting out of tracking ft. awkward high five
35:02 Linus on privacy concerns
36:37 Ad-supported V.S. business revenue
40:24 Linus asks "What's next?" ft. Ads revenue, domains
48:30 Linus recalls getting charged for overdraft
51:28 LTTStore's presale LABS #FIRST shirt & hoodie
53:26 LTTStore notebooks back in stock
53:32 Merch Messages #1
54:48 Thoughts on slow drop of 4070 laptop GPUs?
56:49 Would sponsors change? ft. Discussing Terren Tong
1:14:52 LTTStore screwdriver Noctua edition
1:16:48 Showcasing the screwdrivers in person
1:23:47 Luke on how the CEO would handle leaks
1:24:03 Topic #3 - Schools struggle with ChatGPT
1:24:48 Chegg claims ChatGPT hurt their revenue
1:25:36 "Plagiarism," Texas professor fails half of students
1:27:25 Should schools be responding to AI? ft. Comment bots
1:30:26 Linus struggles with calling his contacts
1:33:14 Topic #4 - Imgur's NSFW & old content purge
1:36:57 How should we be retaining internet history?
1:40:07 "Convert the moon into a server! NUKE THE MOON!"
1:42:34 Sponsors
1:45:46 Merch Messages #2
1:46:40 Would it matter if I like or finish a video on FP?
1:49:19 Which was first - CVO Linus or CTO Luke? ft. Linus trolling Dan
1:51:19 Decision of overturning the CEO? ft. Water bottle "ideas"
1:54:38 Is it possible to run an OS on a GPU's VRAM?
1:56:29 Topic #5 - Google's controversial domain extensions
1:57:32 Linus on Google Search, Luke shows file link V.S. domain
1:59:56 Reasons for doing this, what is the point?
2:03:50 Topic #6 - Toyota exposes live location ft. Dad jokes
2:05:43 Topic #7 - Roblox doesn't protect kids from ads
2:06:56 CARU's report, FTC complaint & Roblox's response
2:08:42 Kids "budgeting," Luke on cosmetic costs
2:16:30 Topic #8 - Valve sued by Immersion due to the rumble
2:17:42 OCBASE/OCCT 20th "Stableversary" giveaway
2:18:52 Topic #9 - Overwatch 2 cancels PvE mode
2:29:37 Merch Messages #3
2:30:58 How did you get Terren? Compensation?
2:34:40 Any project Linus is excited to work on? ft. Nuke fab
2:37:48 Most costly things you misplaced or lost?
2:42:28 What worried Linus the most when stepping down?
2:43:19 Thoughts on Kyle ending Bitwit?
2:46:50 Most challenging part of being a CVO?
2:48:04 Is cosplay going to be allowed in LTX?
2:49:02 Favorite FP info that Linus leaked, and its downsides?
2:50:42 Defunct company you would revive to thrive?
2:54:48 Does Linus plan to have time for non-LMG stuff?
2:56:10 Skill sets you'd like to improve on your new position?
2:59:55 Has Linus debated how much of his life should be on the show?
3:04:40 Is putting a camera to monitor your computer too far?
3:12:36 Do you see WAN Show outliving you as hosts?
3:22:46 Has LTT Labs always been the end goal of LTT?
3:23:24 Best memory Linus made with his Taycan?
3:25:26 How would past Linus react to what LMG became today?
3:29:28 How did Linus prepare?
3:31:32 What did Terren teach young Linus that stuck?
3:37:00 Minimum age of lifeguarding has been lowered
3:44:11 Any leadership minds that inspired you?
3:46:10 What if the new CEO stopped you from being who you are?
3:46:28 Outro

Work In Progress
AI is causing companies and educators to rethink how they’re preparing workers for the workforce

Work In Progress

Play Episode Listen Later May 16, 2023 26:11


In this episode of the Work in Progress podcast, we're talking artificial intelligence and education with Rohit Sharma of ETS and John Fillmore of Chegg.

For the past few months, we've been hearing that artificial intelligence and machine learning could transform nearly every job on the planet. With the introduction and popularity of ChatGPT, it feels like that transformation might be coming very quickly – some think too quickly. Last month in San Diego, I attended the annual ASU+GSV Summit, a gathering of leaders sharing ideas on transforming society and business around learning and work. There I spoke with several leaders in education and learning solutions, including Rohit Sharma, senior vice president of Global Education and Workforce Solutions at ETS, and John Fillmore, president of Skills Learning at Chegg. I wanted to know how the introduction of AI into learning at all levels will transform education and the way people prepare for careers.

Chegg is an education tech company that is increasingly offering online training to help build skills needed and wanted in today's job market. Fillmore says ChatGPT, AI, and machine learning are moving fast, but he sees that as a good development. "I'm a massive optimist on how technology is going to impact the future of work. The reality is every technology change we've had over the past 200 years has ended up creating dispersion, but ultimately creating more need for talent, more need for great jobs."

For example, he believes companies are going to need hundreds of thousands more data analysts who can not just plug questions into the computer looking for answers, but who can look at those answers and determine the right strategy for a business. "So, can I tell the story of why this answer should lead us to make a different decision than we are making today? Those people who are getting displaced are going to want to go into this. This is actually why we work with Walmart, Target, and Macy's and lots of these companies that want to train their frontline workers for the jobs of tomorrow."

Fillmore adds, "Traditionally, higher education has not moved fast enough to customize all of those types of training needs that an employer is going to have, because as you mentioned earlier, those are going to change dramatically and faster than I think we've ever seen in human history."

ETS is the world's largest nonprofit educational testing and assessment organization. Sharma says, given the acceleration of artificial intelligence, ETS is looking at how it can leverage AI to better determine a student's skills and where they need more training, in school or on the job. "There are so many micro skills. We'll be able to tell you how good you are perhaps in certain aspects of a communication. Are you better in written communications versus oral communication? Or are you better in analytical reasoning that can be applied in certain jobs and contexts that may be more suited to your strengths?

"That's going to be important because there's so many things that are new. We can't just put people in a training program and expect a miracle to happen, and it needs to be complemented with on-the-job training," adds Sharma.

To get more details on how Sharma and Fillmore expect skills-based training and education to evolve even further – and faster – listen to the podcast here. Or download it wherever you get your podcasts.
Episode 272: Rohit Sharma, Senior Vice President of Global Education and Workforce Solutions at ETS, and John Fillmore, President of Skills Learning at Chegg
Host & Executive Producer: Ramona Schindelheim, Editor-in-Chief, WorkingNation
Producer: Larry Buhl
Executive Producers: Joan Lynch and Melissa Panzer
Theme Music: Composed by Lee Rosevere and licensed under CC by 4.0
Download the transcript for this podcast here.
You can check out all the other podcasts at this link: Work in Progress podcasts

Earnings Season
Chegg, Inc., Q1 2023 Earnings Call, May 01, 2023

Earnings Season

Play Episode Listen Later May 8, 2023 59:04


Chegg, Inc., Q1 2023 Earnings Call, May 01, 2023

Skippy and Doogles Talk Investing
No Pain, No Premium

Skippy and Doogles Talk Investing

Play Episode Listen Later May 8, 2023 52:49


Doogles marvels at how AI is changing... meaning Allen Iverson to Artificial Intelligence. Skippy takes the Warren Buffett challenge. Doogles walks through a recent AQR report on international diversification. Skippy can't believe that UC students are fighting to live in campsites amid the current lack of affordable housing. The episode wraps with people being more worried about bank safety now than they were in 2008, and Chegg's stock taking a hit on ChatGPT fears. Join the Skippy and Doogles fan club. You can also get more details about the show at skippydoogles.com, show notes on our Substack, and send comments or questions to skippydoogles@gmail.com.

The Prof G Show with Scott Galloway
Prof G Markets: The FDIC Limit, the Coinbase Lawsuit, and the Business of Formula 1

The Prof G Show with Scott Galloway

Play Episode Listen Later May 8, 2023 36:49


This week on Prof G Markets, Scott shares his thoughts on Vice's imminent bankruptcy, Hindenburg's latest short position, and Chegg's battered stock. He then discusses raising the FDIC limit and the potential consolidation of the banking industry, and weighs in on a lawsuit accusing Coinbase insiders of dumping shares soon after its direct listing. Finally, he discusses the growing success of Formula 1 in the U.S., and tells a story of his time at the Miami race last year. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Security Voices
The Hidden Dangers of Generative AI: Who is Responsible for Protecting our Data?

Security Voices

Play Episode Listen Later May 6, 2023 64:21


The breakaway success of ChatGPT is hiding an important fact and an even bigger problem. The next wave of generative AI will not be built by trawling the Internet but by mining hoards of proprietary data that have been piling up for years inside organizations. While Elon Musk and Reddit may breathe a sigh of relief, this ushers in a new set of concerns that go well beyond prompt injections and AI hallucinations. Who is responsible for making sure our private data doesn't get used as training data? And what happens if it does? Do they even know what's in the data to begin with?

We tagged in data engineering expert Josh Wills and security veteran Mike Sabbota of Amazon Prime Video to go past the headlines and into what it takes to safely harness the vast oceans of data they've been responsible for in the past and present. Foundational questions like "who is responsible for data hygiene?" and "what is data governance?" may not be nearly as sexy as tricking AI into saying it wants to destroy humanity, but they arguably will have a much greater impact on our safety in the long run. Mike, Josh and Dave go deep into the practical realities of working with data at scale and why the topic is more critical than ever.

For anyone wondering exactly how we arrived at this moment where generative AI dominates the headlines and we can't quite recall why we ever cared about blockchains and NFTs, we kick off the episode with Josh explaining the recent history of data science and how it led to this moment. We quickly (and painlessly) cover the breakthrough attention-based transformer model explained in 2017 and key events that have happened since that point.

TechCheck
TechCheck+ AI Roadkill 5/5/23

TechCheck

Play Episode Listen Later May 5, 2023 6:27


You've heard about the AI winners – Microsoft, Google, Nvidia and others leading the shift. But what about AI roadkill? The companies who could see their entire business models eviscerated by the coming wave of generative AI? Edtech company Chegg was the first public company to blame AI and ChatGPT for a weak outlook, and lost 50% of its value in one day. But it won't be the last. This week on TechCheck weekly, the companies most at risk from the AI revolution. 

Sway
Bluesky Has the Juice + A.I. Jobs Apocalypse + Hard Questions

Sway

Play Episode Listen Later May 5, 2023 56:14


The Twitter look-alike Bluesky, started by the former Twitter chief executive Jack Dorsey, is doing the impossible: making social media fun again.

Then, A.I. is coming for jobs but not in the way you think.

Plus: Kevin and Casey moonlight as advice columnists in a new Hard Fork segment called Hard Questions.

Additional reading:
Bluesky is vying to replace Twitter.
IBM announced a pause in hiring, anticipating that A.I. would replace thousands of jobs at the company in the coming years.
The chief executive of the education company Chegg said student interest in the chatbot ChatGPT was hurting its sales.

Edtech Insiders
Edtech Insiders Special LIVE Edition: The Chegg/GPT Fallout

Edtech Insiders

Play Episode Listen Later May 4, 2023 25:20


In this special "live" (or at least, unedited and recorded the same day it is published) episode, Ben and Alex discuss the growing narrative that edtech company Chegg is 'the first victim of ChatGPT', a narrative that has accompanied a roughly 40% drop in its stock price this week, and what it implies for the sector.

The Hustle Daily Show
Chegg learns a lesson, from AI

The Hustle Daily Show

Play Episode Listen Later May 4, 2023 10:16


Investors are worried that Chegg could be schooled by ChatGPT's homework help. Plus: The end of passwords, more AI news, J&J's really, really big spin-off, and more. Join our host Jacob Cohen as he takes you through our most interesting stories of the day. Follow us on social media: TikTok: https://www.tiktok.com/@thdspod  Instagram: https://www.instagram.com/thdspod/  Thank You For Listening to The Hustle Daily Show. Don't forget to hit Subscribe or Follow us on Apple Podcasts so you never miss an episode! If you want this news delivered to your inbox, join millions of others and sign up for The Hustle Daily newsletter, here: https://thehustle.co/email/  Plus! Your engagement matters to us. If you are a fan of the show, be sure to leave us a 5-Star Review on Apple Podcasts https://podcasts.apple.com/us/podcast/the-hustle-daily-show/id1606449047 (and share your favorite episodes with your friends, clients, and colleagues). "The Hustle Daily Show" is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Produced by Darren Clarke.

Your Money. Your Life. With Delano Saporu
Episode 170: Regional Banking Crisis

Your Money. Your Life. With Delano Saporu

Play Episode Listen Later May 4, 2023 11:23


On this episode: We have a large news slate, with the regional banking crisis worsening, the Fed raising interest rates again, and Chegg saying AI is hurting its business. QOFTW: Rapid fire. https://www.instagram.com/delano.saporu/?hl=en Connect with me here also: https://newstreetadvisorsgroup.com/social/ Want to support the show? Feel free to do so here! https://anchor.fm/delano-saporu4/support Thank you for listening. --- Support this podcast: https://podcasters.spotify.com/pod/show/delano-saporu4/support

Squawk Pod
Fed Day, Alzheimer's Treatment, & Rep. Ro Khanna (D-CA) 05/03/23

Squawk Pod

Play Episode Listen Later May 3, 2023 30:39


Eli Lilly's Alzheimer's treatment donanemab proved to significantly slow progression of the disease in its latest clinical trial. Eli Lilly CEO David Ricks discusses the company's Alzheimer's research, treatment, and plans to apply for FDA approval as soon as this quarter. The Federal Reserve is expected to increase its benchmark interest rate Wednesday afternoon, as policy-making officials wrap up a two-day meeting. But there are plenty of strong indications the Fed may pause hikes here, especially as the economy processes a banking crisis that has already brought down three large U.S. financial institutions. Rep. Ro Khanna (D-Calif.) discusses America's banking turmoil, DC's debt ceiling standoff, and whether more crises await the U.S. financial system. Plus, more than 11,000 film and television writers are on strike, and Chegg says ChatGPT is killing its business.

In this episode:
Rep. Ro Khanna, @RepRoKhanna
Joe Kernen, @JoeSquawk
Becky Quick, @BeckyQuick
Andrew Ross Sorkin, @andrewrsorkin
Katie Kramer, @Kramer_Katie

TD Ameritrade Network
Chegg (CHGG) Sells Off After Earnings & ChatGPT Concerns

TD Ameritrade Network

Play Episode Listen Later May 3, 2023 7:41


Chegg (CHGG) sells off after earnings and the CEO's warning about ChatGPT. Caroline Woods discusses this as CHGG's adjusted EPS came in at $0.27 versus an estimated $0.34 and its revenue came in at $187.60M versus an estimated $185.25M. She talks about how the CEO said that he believes student interest in ChatGPT is having an impact on the new customer growth rate. Caroline then goes over Arista Networks (ANET), whose earnings were released yesterday, May 1st. Adjusted EPS came in at $1.43 versus an estimated $1.35 and revenue came in at $1.35B versus an estimated $1.31B. Tune in to find out more about the stock market today.

Okay, Computer.
Tech is a Minefield with Dan Niles

Okay, Computer.

Play Episode Listen Later May 3, 2023 73:49


Dan and Deirdre Bosa discuss Uber earnings (1:00), Chegg crashing after its ChatGPT warning (14:00), and Amazon's AWS guidance spooking investors (20:00). Later, Dan interviews Dan Niles, Founder & Portfolio Manager of the Satori Fund, and talks about his background as an internet analyst during the dot-com crash (26:00), the psychology of investing (28:30), Meta (37:45), big tech valuations/Nvidia (45:30), Intel (52:30), Apple (58:00), and the broader market (1:06:00).  View our show notes here Email us at contact@riskreversal.com with any feedback, suggestions, or questions for us to answer on the pod and follow us @OkayComputerPod. We're on social: Follow Dan Nathan @RiskReversal on Twitter Follow @GuyAdami on Twitter Follow us on Instagram @RiskReversalMedia Subscribe to our YouTube page

On The Tape
Okay, Computer: Tech is a Minefield with Dan Niles

On The Tape

Play Episode Listen Later May 3, 2023 73:49


Dan and Deirdre Bosa discuss Uber earnings (1:00), Chegg crashing after its ChatGPT warning (14:00), and Amazon's AWS guidance spooking investors (20:00). Later, Dan interviews Dan Niles, Founder & Portfolio Manager of the Satori Fund, and talks about his background as an internet analyst during the dot-com crash (26:00), the psychology of investing (28:30), Meta (37:45), big tech valuations/Nvidia (45:30), Intel (52:30), Apple (58:00), and the broader market (1:06:00).  View our show notes here Email us at contact@riskreversal.com with any feedback, suggestions, or questions for us to answer on the pod and follow us @OkayComputerPod. We're on social: Follow Dan Nathan @RiskReversal on Twitter Follow @GuyAdami on Twitter Follow us on Instagram @RiskReversalMedia Subscribe to our YouTube page

TechCheck
Chegg Shares Sink on ChatGPT Threat, Plus AI's Larger Impact on Business and Society 5/2/23

TechCheck

Play Episode Listen Later May 2, 2023 10:20


Chegg shares were cut nearly in half after its CEO described how students have increasingly turned to ChatGPT in recent months, hurting new user growth. And that's not the only company with big news today on AI disruption. IBM said it plans to pause hiring for roles that could be replaced by AI, showing just how much the new technology is creeping into all corners of society. But is AI ultimately going to be a net positive or net negative? A CNBC panel weighs in.

Breaking Points with Krystal and Saagar
5/2/23: 'Godfather Of AI' Says SHUT IT DOWN, DeSantis Freaks Over Guantanamo Allegations, 2023 Bank Failures, Leaked Tucker Video, Covid Natural Origin, Commercial Debt Bomb, James Fox "Moment of Contact"

Breaking Points with Krystal and Saagar

Play Episode Listen Later May 2, 2023 123:40


Krystal and Saagar discuss Biden blinking on the debt standoff with McCarthy, the 'Godfather of AI' making public calls to shut down development, Republicans developing their first AI political attack ad, ChatGPT nuking Chegg's 'homework' business, DeSantis freaking out when questioned over allegations he took part in torture at Guantanamo, how the Disney lawsuit is a dangerous corporate power grab, revelations that the 2023 bank failures are bigger than 2008's, JP Morgan and Jamie Dimon becoming way too big to fail with the purchase of First Republic, polls showing that Americans overwhelmingly blame the media for the country's division, leaked video from Tucker showing him shredding Fox Nation live streaming, and Vice News being weeks from bankruptcy. Saagar looks into how the Covid natural-origin theories fall apart, Krystal looks into the commercial property debt bomb that could destroy the economy, and we're joined by filmmaker James Fox to discuss his documentary "Moment of Contact" and reveal new video evidence concerning a potential alien encounter in Brazil. To become a Breaking Points Premium Member and watch/listen to the show uncut and 1 hour early visit: https://breakingpoints.supercast.com/ To listen to Breaking Points as a podcast, check them out on Apple and Spotify Apple: https://podcasts.apple.com/us/podcast/breaking-points-with-krystal-and-saagar/id1570045623   Spotify: https://open.spotify.com/show/4Kbsy61zJSzPxNZZ3PKbXl   Merch: https://breaking-points.myshopify.com/ Learn more about your ad choices. Visit megaphone.fm/adchoices

Closing Bell
Closing Bell Overtime: MicroStrategy's Michael Saylor On Why The Banking Crisis Is Bullish For Bitcoin; Chegg CEO Joins After His Company's Stock Fell 50% 5/2/23

Closing Bell

Play Episode Listen Later May 2, 2023 46:32


Stocks fell sharply today, although off worst levels after paring some losses in the final hours of trading. Regional banks were among the hardest-hit stocks; Vital Knowledge's Adam Crisafulli on why this selling feels worse than in March. iCapital's Anastasia Amoroso gives her take on the market action and earnings so far. It was a busy day of earnings, including: Ford, Starbucks, AMD and Simon Property. An exclusive interview with MicroStrategy Executive Chairman Michael Saylor after the company posted strong earnings; he weighs in on bitcoin's rally, the company's dual strategies and why the banking crisis is bullish for crypto. Chegg stock fell nearly 50% after warning AI was hurting potential new business; CEO Dan Rosensweig joined in an exclusive interview to discuss why investor fears were “overblown.” Wedbush analyst Matt Bryson on AMD's quarter and the stock's weakness in Overtime. Box CEO Aaron Levie talks the upside of AI for human productivity… with the right guardrails.

Deffner & Zschäpitz: Wirtschaftspodcast von WELT
Despite All the Eco-Subsidies: Where Is the German Environmental Giant?

Deffner & Zschäpitz: Wirtschaftspodcast von WELT

Play Episode Listen Later May 2, 2023 97:13


The sale of the German Mittelstand firm Viessmann to the US climate-technology giant Carrier Global has fueled the debate over a misguided green economic policy. Business journalists Dietmar Deffner and Holger Zschäpitz discuss why Germany, despite billions for wind power, solar, and now heat pumps, still has no global eco-champion of its own on the stock market. Other topics:

Sell in May: why the old market adage could prove true again this year
Buying at the all-time low or the record high: a dispute over the right entry point
Thelen stock Alphawave: why Deffner has now bought into the British company
AI disruption: why shares of Chegg and other education companies have gone into a tailspin
AI boost: why you never have to listen to poor audio quality again
New low at Tui: what now speaks for the travel group's stock
Market research on the internet: why you should view every survey with skepticism

And here you can find the big audio difference thanks to artificial intelligence: https://drive.google.com/drive/folders/1HcCFOHbQGziRt5lxp-o6f5WiU5oteF2Q There are now hoodies for "Deffner&Zschäpitz" fans. Simply order here: www.welt.de/hoodie Legal notice (Impressum): https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html