With AI as an accelerant, marketing is evolving at a breakneck pace, and brands are being challenged to maintain authenticity while scaling globally. How do you build a cohesive, authentic brand identity across diverse markets, cultures, and digital platforms—all while leveraging AI-driven personalization? Joining me today is Emily Ward, VP of Global Marketing at Turnitin, a leading edtech brand focused on academic integrity and student success.

Emily Ward has spent more than 15 years focused on the education space, initially generating enrollments for a broad portfolio of global institutions under the Laureate Education network. She then moved to Blackboard, since acquired by Anthology, working with non-profit academic institutions to better understand how to leverage their marketing investment in order to positively impact enrollments and institutional growth. Over time, Emily's focus expanded to the larger concept of student success, helping academic leadership connect the dots of the full student experience, from decision making through matriculation and beyond. During the pandemic, Emily pivoted quickly to lead the launch of an official eCommerce platform, and was soon pulled in to lead North America and then global marketing efforts for the newly formed EdTech giant Anthology. Today, she oversees global marketing for Turnitin, an academic integrity company focused on supporting educators and empowering students around the world to do their best original work. Emily holds a B.S. from Towson University and an M.B.A. from Loyola University Maryland. She resides near Washington, DC with her daughter.

RESOURCES
Catch the future of e-commerce at eTail Boston, August 11-14, 2025. Register now at https://bit.ly/etailboston and use code PARTNER20 for 20% off for retailers and brands.
The Online Scrum Master Summit is happening June 17-19. This 3-day virtual event is open for registration: visit www.osms25.com and get 25% off Premium All-Access Passes with the code osms25agilebrand.
Don't miss MAICON 2025, October 14-16 in Cleveland, the event bringing together the brightest minds and leading voices in AI. Use code AGILE150 for $150 off registration. Register here: https://bit.ly/agile150
Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom
Don't miss a thing: get the latest episodes, sign up for our newsletter, and more at https://www.theagilebrand.show
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow
The Agile Brand is produced by Missing Link—a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company
AI Ethics, Overreliance & Honest Talk with Jen Manly
In this episode of My EdTech Life, Fonz sits down with returning guest Jen Manly, a computer science educator, TikTok powerhouse, and advocate for ethical tech use, to unpack the complex relationship between AI, teaching, and critical thinking. From data privacy concerns to AI detectors that fail our students, this conversation gets real about what's hype, what's helpful, and what needs more scrutiny. Whether you're cautiously curious or deep in the AI trenches, this episode offers clarity, nuance, and practical insight from a seasoned voice.
In this special episode recorded on site at SXSW EDU, Dustin chats with Annie Chechitelli, Chief Product Officer at Turnitin. Annie shares the exciting launch of Turnitin Clarity, a new tool designed to support both educators and students in navigating AI's role in academic writing. From AI detection to responsible AI use in the classroom, this conversation unpacks the evolving challenges and opportunities in maintaining academic integrity.

Guest Name: Annie Chechitelli, Chief Product Officer, Turnitin
Guest Social: LinkedIn
Guest Bio: Annie Chechitelli has spent the past two decades innovating with educators to expand access to education, meet the quickly changing needs of learners, and empower students to do their best, original work. As the Chief Product Officer at Turnitin, Annie oversees the Turnitin suite of applications, which includes academic integrity, grading and feedback, and assessment capabilities. Prior to joining Turnitin, Annie spent over five years at Amazon, where she led Kindle Content for School, Work, and Government and launched the AWS EdTech Growth Advisory team, advising education technology companies on how to grow their product and go-to-market strategies with AWS.

Connect With Our Host: Dustin Ramsdell, https://www.linkedin.com/in/dustinramsdell/

About The Enrollify Podcast Network: The Higher Ed Geek is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Enrollify is made possible by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com.

Attend the 2025 Engage Summit! The Engage Summit is the premier conference for forward-thinking leaders and practitioners dedicated to exploring the transformative power of AI in education. Explore the strategies and tools to step into the next generation of student engagement, supercharged by AI. You'll leave ready to deliver the most personalized digital engagement experience every step of the way. Register now to secure your spot in Charlotte, NC, on June 24-25, 2025! Early bird registration ends February 1st: https://engage.element451.com/register
Development Manager Satu Hakanurmi's guest is Specialist Totti Tuhkanen. Topics discussed include the Turnitin plagiarism detection tool, recommendations for using AI in teaching, and tips for getting started with prompting. The episode image was created by Totti Tuhkanen using AI; you can read more about the prompt and its background on the Opetuki blog: https://blogit.utu.fi/opetuki/2025/04/02/latua-podcastsarja-valottaa-yliopistopedagogiikan-ja-oppimismuotoilun-ajankohtaisia-aiheita/ Transcript: https://www.utu.fi/fi/ajankohtaista/podcast/latua-asiaa-yliopistopedagogiikasta-ja-oppimismuotoilusta
Join hosts Alex Sarlin and Ben Kornell as they break down a pivotal week of AI announcements, edtech disruption, and education policy shifts.

✨ Episode Highlights:
[00:00:00] China introduces AI education for 6-year-olds, sparking urgency for the U.S. to respond
[00:05:23] OpenAI teases a creative writing model as Google and Anthropic push into coding AI
[00:07:59] Gemini Canvas and Deep Research go free, redefining educational productivity tools
[00:11:09] Copyright clash: OpenAI wants to train on protected content, creatives push back
[00:14:02] Gemini's UI outpaces OpenAI with real classroom use cases
[00:17:54] Chain of Draft from Zoom cuts AI costs by 90% and mimics human note-taking
[00:18:34] Baidu and Alibaba launch emotion-reading and multi-modal AI models in China
[00:20:50] Manus, China's autonomous AI agent, sparks global interest in multi-agent systems
[00:23:36] U.S. vs. China: centralized AI strategy meets decentralized innovation culture
[00:27:57] U.S. Education Dept. shutters Office of Ed Tech, leaving a national guidance gap

Plus, special guests:
[00:30:11] Annie Chechitelli, CPO at Turnitin, on launching Clarity for ethical student AI use
[01:03:49] Sara Mauskopf, CEO & Co-founder of Winnie, on expanding into K-12 and the rise of school choice
Today is a special day—we're celebrating our 100th episode! For this milestone episode, we're diving into a theme at the heart of enablement: making the impossible possible. In today's business landscape, only 28% of sellers expect to hit their quota. So how can you enable your teams to overcome the challenges of the current market to achieve consistent go-to-market success?

Shawnna Sumaoang: Hi and welcome to the Win Win Podcast. I'm your host, Shawnna Sumaoang. Join us as we dive into changing trends in the workplace and how to navigate them successfully. In this episode, we'll hear from nine leaders who transformed challenges into business outcomes, delivering impact against their go-to-market initiatives through enablement. From earning leadership buy-in to aligning go-to-market teams and boosting productivity, these leaders enable the impossible for their businesses. We hope their stories will inspire you to push boundaries and redefine what's possible in your organization.

Driving consistent revenue growth can feel impossible when silos divide sales, marketing, enablement, and revenue operations. A unified enablement approach can break down these barriers and drive measurable impact. But how do you demonstrate the value of enablement to stakeholders and secure their long-term support? In this part, we'll hear Pam Dake, senior director of GTM enablement at Menlo Security, share her success story for gaining leadership buy-in.

Pam Dake: My name is Pam Dake and I work for Menlo Security, a cybersecurity company that has just surpassed a hundred million in ARR. One of the bigger challenges I've had recently has been aligning the executive teams to really, truly understand how to be impactful, leveraging the go-to-market motion in a way that not only lands the big deal but also allows us to have a very productive and valuable customer relationship long term. And so for me, it's been gaining the opportunity to have that meeting with all of these critical stakeholders and have them see value each and every time you meet with them, so that they feel like they're getting something out of that meeting and it's actually driving the business forward in ways they may not have seen initially. For me, that's meant setting up a recurring meeting with the most senior executives in the company to drive forward what sales needs, which is really driven by what our customers are looking for from us as a company. Be tenacious about how you're able to make a difference in aligning your internal stakeholders and really driving forward the programs that will make a difference, not only in the short term but in the long term. So as you consider the strategy you're building, ensure that you have your other internal stakeholders aligned, and do that in ways that create value for them so that they can see the impact. One of the things that we talked about earlier was data. Leverage the data that you have on hand. Leverage tools that provide you with really impactful data and insight into the leading indicators that will actually drive the business longer term, alongside the lagging ones. The bottom line is really taking an outside-in approach with what you're doing from an enablement lens: how does this impact my customers?
Therefore, how am I able to build the best programs I can that will enable my internal stakeholders, my internal teams, to be successful and provide value to our customers, not only in the short term with the wins they're able to achieve, but in how they're able to grow and develop those relationships over time?

SS: You need stakeholder buy-in to break down silos and align your go-to-market teams – but why is that alignment so critical? Without it, you can't coordinate, plan, and execute the initiatives needed to drive the business outcomes that matter most. And when 90% of organizations fail to execute their strategies successfully, it clearly takes more than guesswork to achieve those outcomes. So how can you define, execute, and optimize your go-to-market initiatives to deliver unprecedented impact? In this part, we'll hear stories from enablement leaders who brought key go-to-market initiatives to life through enablement. First, let's start with a common initiative that impacts teams across the go-to-market organization: product launch. Effectively bringing a new product to market can make or break your revenue targets. We'll hear from Chris Wronski, senior program manager at Keysight, on how he helped deliver a product launch that contributed to the first revenue growth in seven quarters despite a tough market.

Chris Wronski: My name is Chris Wronski. I'm a Senior Program Manager at Keysight Technologies, and I am the architect behind our Highspot implementation. The last couple of years have been very difficult across the entire industry, right? Many companies are talking about it, us included. If you go pull our quarterly info, you can see the last seven quarters have been very difficult for us. So, as I talked about earlier, the focus is on new product introductions. That's an opportunity for us to make some hay: we've got a brand new product, we've got a brand new reason to go talk to customers. Even if they have no opportunities, at least go explain to them what we've got, right? There might be something in there. We've done a lot of work around building sales plays in a way that the seller can consume, trying to crush it down. Aggressive simplicity is what I would call it. By building that in and giving them just a little bit of info to start the discussion, in a way that we knew could start that discussion with nearly any customer, that's enough to get the ball rolling and let them go do their sales job. We've done a ton of pushing training out to them, and I can see that in the numbers. I can see when we do our training; I can see the following week there's a huge spike in people going to those sales plays, looking at them, and using them. And so last quarter we turned the curve, right? We turned the knee of the curve and brought back at least a little bit of growth. We were positive for the first time in seven quarters.

SS: Next, let's dig into an initiative that is likely on the minds of many GTM leaders with the new year around the corner: sales kickoffs and events. Starting off a year on the right foot can provide a business with momentum that carries through the rest of the year. Brooke Cole, manager of global field readiness at Workato, shares with us how her team drove an impressive boost in NPS with their first in-person SKO events.

Brooke Cole: My name is Brooke Cole, and I've been at Workato for almost three years.
A business challenge that my team and I have overcome, and that we're really proud of, is probably our first in-person SKO events, which we executed earlier this year. Because of COVID and just the nature of the world, we had been unable to get together in person as a collective regional team. Really, ever. We had one scheduled, and then we had to cancel it, of course. So earlier this year, our team ended up doing it regionally. In North America, in APJ, and in EMEA, our team was tasked with putting on three different SKO events within three to four weeks, and we traveled to each one of them. The way that we overcame that really was just a sense of teamwork and camaraderie. We built trust with one another, we had really open dialogue and communication, and we really used our skill sets and our collaboration to put on an event that got an NPS score of 85 globally. We heard the phrase, "This is the best SKO we've ever had." And truly, to be fair, it's the only one we've ever had in person. But people left jazzed, they left inspired, and we leveraged Highspot as a part of that. Going into this next year, this is the second year where Highspot will be our landing page for our global event, so it's going to be the Know Before You Go. We did that as a trial period last year, and it worked out really well. The traffic was great, and when people had questions, we were able to direct them to Highspot. I think we were proud overall of the vision that we put together and how we executed with the tools and the apps that we already had at hand to bring everybody together in a centralized place, give them awareness, and create excitement around the events.

SS: Now, we're diving into an initiative that can have a profound impact on productivity: the sales process. Research shows that just 28% of a rep's time goes to selling, and an optimized sales process can help you streamline workflows and save time. Let's hear from Jay Livingston, head of enablement at Corporate Visions, on how his team is improving the sales process and delivering time savings as a result.

Jay Livingston: I'm Jay Livingston. I lead Global Sales Enablement at Corporate Visions. I remember when I got involved in enablement, one of the things that I learned is that sellers spend an inordinate amount of time each month preparing their own content. They have a lot of goals. We, in a headquarters environment, have time to sit around and think about how to improve some of these processes. Salespeople don't, right? They're running from call to call, always trying to be ready to meet that moment. And so one of the main challenges I've been focused on, quite honestly for more than just the last couple of years at CVI, is how do we make content and resources and tools and assets more purpose-driven, more readily available, more easily findable, and, from a usability perspective, easier for those sales folks to execute with in those moments. I remember when we first rolled out Highspot here a few months ago, one of the things that a member of my team, Eric, a VP on our team, would say is, "Man, I just don't have time to do all the things I need to do because I'm constantly getting emails or messages or Slacks about, hey, where's this and where's this and where's this." With Highspot, he literally no longer has to field any of those calls.
As a matter of fact, when we were here in August, sitting around the table, I had a chance to share a story about the amount of time that Eric had been able to get back in his day. I think we had only been maybe a month in at that point. Not having to field those requests, which can be very annoying, right? Because how many times do we tell our sellers where things are and how to use them, right? And you almost wonder sometimes, are they listening? Are we not communicating it effectively? All of a sudden now we're seeing literally no requests for "where is this?" or "how do I use it?", right? And so again, what I would say is it's not bulletproof, right? There are always going to be opportunities to improve. But one of the hallmarks of the way that I've tried to lead enablement organizations is to really have it boil down to two things. One, what is the seller's, or what is your colleague's, ability to execute in this moment? We can lean into the ability to help them get better. And two, what is their willingness? And willingness, oftentimes, is influenced by how easy something is to execute. So if we can remove the willingness barrier, then we can just focus squarely on the ability. And so as we continue to move forward, those are really the two things that we continue to evaluate ourselves by: are we making it simple? And how are we coaching folks to get better, to be more effective, and to utilize these amazing tools and opportunities that we all have?

SS: And now, let's talk about an initiative at the heart of enablement: training. When done right, sales training can drive the behavior change reps need to consistently hit their targets–but often, that can be easier said than done. Let's hear Anthony Doyle, director of sales enablement at Turnitin, explain how he revitalized training and ultimately improved seller engagement.

Anthony Doyle: My name is Anthony Doyle, and I'm the director of sales and development at Turnitin. In terms of overcoming really difficult problems, the biggest problem is engagement—engagement from the sales teams, leaning into the enablement programs, spending time, and investing their time in their own development. I think that's what we've seen a real uptick on and success with in the past maybe six to twelve months. We've seen a change in attitude. We're getting success now when launching new training programs. People are leaning into them, they're completing them, and they're giving us good feedback too, which is something that I probably never thought I would have said twelve months ago, because we started investing a lot of time and building a lot of training, but then that wasn't really getting consumed. It was very difficult to get managers to even back us up and roll it out with their teams. Whereas now, when I've just presented the strategy for the sales academy to the go-to-market team on a go-to-market all-hands, there was just a lot of love in the room. A lot of people saying, "This is fantastic. We can't wait to see it in action and get our hands on it." So we had a lot of good feedback from that session. And that's really pleasing for me because it means that the strategy was the right strategy. I think the message for enablement teams out there, if you are getting those challenges with engagement, is to keep at it, show value, and really drive those proof points.
Get those wins regionally with teams who will engage, then present it in a very easy-to-consume way, and in a way that the teams can feel confident about engaging with. You will see the results, and the tide will turn. So that's something I'm proud of.

SS: Next, let's explore an initiative that drives long-term impact—coaching. Effective coaching helps sellers apply newly acquired knowledge to maximize their performance. Let's hear from Andrea Holzwarth, VP of Sales Enablement and Customer Operations at Project Lead The Way, on how she supports ongoing coaching to help reps sharpen their skills.

Andrea Holzwarth: We see a lot of value in coaching and training. We have our senior directors and our sales managers really providing that one-to-one support for our reps out in the field, and we want them to be able to have those coaching conversations. Meeting intelligence helps with that: we can see the calls, what's going well, and maybe what they're struggling with. I say this a lot: it's easier to edit than it is to get started, so having that AI feedback automatically in there is helpful; it's a starting point. And then our senior directors and sales managers can go in and provide more of the personalized coaching that they may see, but it gives them a starting point. One of the other benefits that I see with Meeting Intelligence is, I just think about being a sales rep in the field, especially now that so much of what we do is virtual (we actually go to schools and districts too). I would have loved it when I was a sales rep. I think I know how I show up on camera or how I'm speaking to a customer, but it is so helpful to go back to a recording and just see, man, I said "um" a lot. So it helps with some of that coaching too.

SS: And to close out, we asked our guests for advice on how they enable the impossible in their organizations. Here are a few tips from Suzanne Heller of Flight Centre Travel Group, Jennifer Shelley of QuidelOrtho Corporation, and Susan Kinser of Net Health Systems to help you take your enablement efforts to the next level.

Suzanne Heller: Just go for it. Because we have the tools that make us successful. We have the tools to be able to measure what we're trying to achieve. And it is okay at the end of the day to go back to the drawing board if it doesn't work. But we won't know that if we don't try. If we look at enablement 5, 10, 15 years ago, it wasn't what it is today, but it got there because of the trial and error that has come up along the way. My advice to anyone in an enablement role is just to go with your gut and deliver. It is okay to go back and look at the data and pivot and optimize. You won't know what's successful unless you try it. I think my second piece of advice is buy-in to your business and your brand. You tell a story, you bring immense value, and it's really critical to create that brand awareness for yourself and for your team, to let them know the purpose, the deliverables, and the ROI that you bring to the business. So those would be my two.

Jennifer Shelley: Don't get discouraged. Sometimes we will initially bring to the table things that sound outside of the box, and Highspot tends to be on the cutting edge of technology. But technology can be frightening, and I think that you can get discouraged when people are not as excited as you are about what you're trying to accomplish with the technology that you have.
Just take your time, stay focused, stay consistent with your message, and understand that it takes time for people to really understand the vision you might have if they haven't been exposed to all the great information Highspot is providing them in terms of the cutting-edge capabilities in the platform.

Susan Kinser: Whether you have a seat at the table or not, try to get your voice heard so that you start having those conversations to understand the business outcomes your team is looking to change, right? I think that the moment you're able to align any of your programs or initiatives to the specific strategic initiatives your company is looking to achieve in a larger way, and you're getting that kind of information and feedback, and they have insight into the change you're making, it just makes you more of a strategic partner, and it gives you the space to continue making that kind of progress and success. I would also say get a unified platform. Use Highspot and use the resources. And so I think what's fun is, in this ever-growing enablement space, having your voice heard only makes the impossible more possible, right? As we start bringing things together and we start having different ideas or different needs, and we're able to do things in these different ways, I would say my advice is to get connected to those business strategies and those business insights, and then get that unified platform and keep scaling.

SS: As you heard from the enablement leaders we featured in this episode, nothing is impossible with the right team, tools, and processes in place. In looking to the year ahead, take stock of the challenges on the horizon and, rather than looking at them as obstacles, channel them into opportunities to push the boundaries of what you once thought was impossible. Thank you for joining us for this special 100th episode of the Win Win Podcast. We'd love to hear how you are enabling the impossible—be sure to connect with us in the Highspot Spark Community to share your advice, and tune in next time for more insights on how you can maximize enablement success with Highspot.
Chris Caren, CEO of Turnitin since 2009, has played a pivotal role in transforming the company from a plagiarism prevention tool to a comprehensive education technology platform that promotes academic integrity, streamlines grading, and enhances educational outcomes. Under his leadership, Turnitin was acquired by Advance Publications in 2019 for $1.75 billion. Prior to Turnitin, Chris was General Manager of Business Solutions at Microsoft. He holds an MBA with distinction from Northwestern's Kellogg School of Management and a BS in Engineering from Stanford, and he regularly contributes to CNBC and other media outlets.

In this conversation, we discuss:
How a childhood introduction to computers and a background in science set the stage for a career in software.
The journey from Microsoft to Turnitin and the decision to focus on meaningful industries like education.
Insights into leadership, including the importance of team cultural fit and the power of delegation.
Navigating the challenges of taking over from company founders while maintaining and respecting the company culture.
How AI is transforming the educational landscape and Turnitin's evolving role in addressing student misconduct with AI tools.
The complex relationship between higher education and AI, balancing the need for students to learn writing and critical thinking with the demand for AI literacy in the workforce.

Resources
Subscribe to the AI & The Future of Work Newsletter
Connect with Chris
AI fun fact article
On finding work you love
In this week's episode of All Things Marketing and Education, our host Elana Leoni sat down with Ian McCullough, a consultant specializing in marketing, go-to-market strategies, and business development for the EdTech industry. You might remember Ian from our last episode with him, Academic Integrity in the AI Era, which took us on a journey into the fascinating intersection of ChatGPT, AI writing, and academic integrity.

Here, we delve into the intricacies of the EdTech buying cycle, exploring the metaphors and nuances that help make sense of the process. Ian's insights are invaluable for anyone involved in EdTech sales, whether B2B or, as Ian suggests, B2G (business to government).

Before diving in, a bit about Ian: with over 20 years in education and creative technology, he has experience spanning consumer, corporate training, and institutional markets. He previously led the North American K-12 marketing team at Turnitin and now offers his expertise as a consultant. Enjoy this engaging episode that makes the complex K-12 buying cycle both understandable and exciting!
A Gartner study found that organizations prioritizing revenue enablement see a 41 percent increase in revenue attainment per seller. So how can you build an enablement strategy that drives results?

Shawnna Sumaoang: Hi, and welcome to the Win Win Podcast. I am your host, Shawnna Sumaoang. Join us as we dive into changing trends in the workplace and how to navigate them successfully. Here to discuss this topic is Anthony Doyle, the director of sales enablement at Turnitin. Thanks for joining us, Anthony. I'd love for you to tell us all about yourself, your background, and your role.

Anthony Doyle: Sure thing. Thanks for having me, Shawnna. My name is Anthony Doyle, and I'm the director of sales and development, as Shawnna said, at Turnitin. A little bit about myself: I started my career back around 1998 working in the education sector, building interactive multimedia learning materials. I started out on the tool side of the game, really building materials and learning programs for UK institutions. That was at the time when the VLE space, the virtual learning environment space, had just started being created, so I was working on some of the early prototypes and software development for those types of systems. That led me to a sales role for around nine years, and that's where I really learned my sales craft and my different selling methodologies. I think of Solution Selling, Miller Heiman, and those types of frameworks. Through that career, I built a massive knowledge of sales and marketing and went on to lead sales and marketing organizations at various ed tech companies. And what I found at those companies was that I was doing a lot of enablement. Before enablement was a thing, and before it was coined as a term or a category, I was naturally developing sales teams, sales processes, selling systems, and things like that. That led me to a bit of a consulting career, working with organizations to develop their sales and marketing practices. And then a couple of years ago, I decided I really wanted to get into a long-term role and join an organization where I could have a good tenure and be part of something from a longevity perspective, rather than going in, fixing and putting things in place, and then putting it in the hands of somebody else, so I could really see the long-term development. I've been familiar with Turnitin since probably the late nineties, when Turnitin was first founded, so I've seen Turnitin grow up as a company and mature. It was good to join them and get on the other side of that. And now I lead the sales and development practice here at Turnitin, which is part of the RevOps organization.

SS: Anthony, as you mentioned, you have extensive experience in enablement, and you also have a very clear vision for your enablement strategy at Turnitin. What are the core components of your strategy, and what are the key strategic initiatives you're focused on driving this year through enablement?

AD: It's first important to say that when I joined the organization just under three years ago, this strategy wasn't what I led with initially. I led by really trying to figure out where the organization was at, what the goals of the org were, and some of the key gaps we initially needed to fill in order to then be able to develop a strategy for the long term.
So we focused initially on some of the competency development and a competency framework around what we wanted to really be driving in terms of our sales process and the skills underneath that. And then at the beginning of this year, I got together with my team ahead of the sales kickoff to really develop the sales enablement strategy that would take us from 2024 through to 2026. Where we've landed with that is we've got three pillars in the strategy. The first of those is align and engage. That pillar is around really aligning with the different sales regions and the different sales leaders, factoring in their regional intricacies, figuring out where their teams are at, and engaging those teams in a dialogue for development, for increasing all the key metrics and KPIs that you would expect. Then we move into the next pillar, which is around educating and inspiring, so it's educate and inspire. That area is really around developing the training programs that will inspire sellers to engage and to develop their skill sets, with the aim, obviously, of developing the sales practice. The next pillar underneath that is elevate and impact. And that's really where the rubber meets the road, right? It's about impacting results and elevating the practice to a world-class sales level. When I was hired, the MO was to develop a world-class selling organization that other people in the ed tech industry would recognize, want to be part of, and want to come here for, because of the way that we sell and the level of practice that we have. So that last pillar is about getting us there. Now, there are many initiatives that come under that. Some of those range from, at the align and engage level, just having a regional management cadence. We restructured our sales enablement team to have a regional sales enablement manager in each one of our three key regions, and what we're really looking to do is have that regular cadence with the first-line managers to understand what's going on, get coverage on where the sellers are at from a competency perspective and a sales capabilities rating, and then see what we can drive programmatically from that. Then we feed that into the next pillar, which is obviously the education-focused pillar, educate and inspire, where we'll be looking to drive tailored training programs, and we'll be driving that through Highspot. And then, as we get to elevating the practice and driving impact, some of the things we're exploring there in terms of initiatives include looking at Meeting Intelligence to figure out where there's coaching opportunity and where we can really drive that elevation of practice.

SS: Amazing, and I love how you really centered them around those three core principles. How does your enablement platform, Highspot, help play a role in effectively executing your enablement strategy and supporting your strategic initiatives?

AD: It's a great question. First of all, you need a place to gravitate around and a place from which to drive content and programmatic training. We need somewhere to put that, and we need somewhere to drive that as well. So Highspot is pretty much our sales enablement hub.
It’s where all of our content to do with messaging [lives], it’s where we do all of our onboarding, so when somebody first joins us, and we’re developing as part of the strategy role-based onboarding pathways.At the moment, we’ve got quite a generic onboarding pathway. So, we’re developing more personalized onboarding routes, depending on the role that you take within the org, and all of that first engagement starts with Highspot. And then the ever-boarding, things like sales systems training product messaging.Plays and we’re going to be looking at development sales kits as well. And we have got a strong partnership with our product marketing team and they develop well-built sales plays for our product motions. So a lot of that, all of that has to be housed somewhere, and it should be in one place, and it should be somewhere where you can understand how that’s being leveraged, and what impact that’s making. If we think about elevation and driving impact, we want to be able to know what’s working, and what’s not working. And if these motions and the training that we’re delivering is being consumed, how is that impacting on results?We look at correlations between where people are exhibiting certain behaviors, pitching more regularly, involving certain pieces of content within the sales cycle in Highspot, and how that’s driving back-to-end results. SS: Now, you mentioned the importance of driving regional alignment. How does this defined enablement strategy help you drive that alignment to execute against your strategic initiatives?AD: I think this is the key component really. What we found over the first two years when I joined the org being more transparency we weren’t seeing the traction we wanted with the adoption of some of the programs we’re developing. We built out a competency framework with really high-quality training that existed under that, we built customized frameworks for lead qualification.We have a framework called Nitro, and we felt the need to do that because of a lot of qualifications LMX, put things like budget – if you think of band – budget is at the top. You think of Adam, authority at the top. Whereas in our sector when we’re selling, the need is the key thing, right? You've got to have need at the top of the cycle.So we developed resources like that. As much as we try to drive awareness and adoption of those things, we weren’t really seeing it at a macro level. What we quickly recognized is, it was missing that regional engagement piece.We had to align, we had to figure out what the challenges were in the regions and then eat our own dog food, really, in terms of, if we’re trying to push a problem-based selling approach. Really, we should take the same approach ourselves as enablement and figure out what’s going on and diagnose before we start prescribing things like nitro and, sprints, prospecting frameworks, and things like this. And certain training, we should try and figure out first, where are the sellers are, and what’s their biggest opportunity to improve. And even if they’re really high-performing sales teams, it’s like any sport, right? You can be the world’s heavyweight champion of the world, right? 
Of boxing, but you still know where you want to develop muscle, build strength, or refine certain techniques. And you can probably speak to that very quickly if you're engaged and say, "Hey, if you're going to bring back Muhammad Ali to coach me for two hours, this is what I want you to work with me on." So that's the approach we've now been taking, and we think it's crucial to get the alignment. Because then, when we're asking those questions of the first-line managers and they're saying, "This is where I want your help," when we offer that help, we're going to get the adoption. We're going to get the engagement, because they've asked for it.

SS: I love that notion of elevate and inspire. How do you think about that when structuring your coaching programs, especially across regions, and how can real-world coaching help drive consistent execution of your strategic initiatives around the world?

AD: One of the first things I did when I joined the org was redevelop the sales process. We merged about two or three separate organizations together, and they all had slightly different sales processes. What we said is that what we feel is important is breaking down the sales process and looking at the capabilities that sellers need to really craft and work on to be successful in any buying journey. So what we now have is ten core sales competencies, or sales capabilities, that are mapped under our sales process. We've developed material around them: job aids, a pre-discovery planning worksheet, a vision engineering worksheet, and things like that, plus frameworks for mitigating objections using approaches like LAER, along with labels and mirroring, psychological selling techniques, and negotiation techniques. We've developed assets around these things. So what we're effectively doing when we're aligning with the regions is talking to the regional manager about what they're seeing in results: the average pipeline velocities and the kinds of metrics around pipeline health. We've got that presented now in dashboards. We've got a fantastic BI team here, so they've looked in a lot of depth at the pipeline, and our regional sales enablement managers can have that dialogue with the sales manager in the region and say, "Hey, based on this, what capabilities do you think we can further develop in your teams?" And then what we're doing from there is building a programmatic approach to that. So instead of just doing a training and saying we're done, we're actually building a four-week or six-week program around it, layering in different training, and driving bespoke activities, workshop activities, and different fun ways of engaging the teams. We're rolling that out through Highspot on a learning path, and then we're meeting with the teams we're engaging on Zoom to get feedback: Where are you struggling? How have you applied this over the last four weeks? What are you finding? What's working, and what's not working? We're getting that kind of tribal knowledge culture moving across the teams. And that, we feel, is the right approach.

SS: Now, we've touched on this a bit as we've been talking, but you have helped to globalize your Highspot instance and you're seeing amazing impact; I think you guys are at 86% adoption. Can you tell us more about this effort and how it has helped to keep your teams aligned across regions?
AD: When we first deployed Highspot, we took quite a wide approach to it. Obviously, we've got many different regions. We've got teams in Asia, and we've got many different languages that are spoken. We've got teams whose primary language is Japanese, so we've got content that's translated into Japanese. We've got folks in the Netherlands, in Germany, in Spain, in Mexico, in the Philippines, in Australia, all over the world. Now, when you think about collateral and marketing material, and when you start translating that, what we did, and a mistake we made, to be honest, is we put it all centrally, product by product, in different Spots in Highspot. But what happened is that it quickly became overwhelming for people, because when they were searching or trying to surface content, they were finding lots of content that wasn't applicable to them. It was in Japanese, and their clients don't speak Japanese. And obviously, once some of the teams were leaning into that content and using it, it was bubbling to the top in some of the lists, on the smart pages, and on the Spot overviews. So what we did is we restructured Highspot to take more of an approach where our core, primary-language content, that's American English or British English, is in a central Spot, and then we created regional Spots. We used the group feature of Highspot to collect all these teams into groups so that they only had access to the materials and the regions that mattered to them. That helped a lot, because it meant that content was easily found and more applicable. They also had their own spaces where regional marketing teams could start driving certain motions and specific materials that are right and relevant to those regions. So that helped: just thinking more thoughtfully about the process of structuring Highspot in the way that's going to best serve the sellers. And then I think the key thing is the partnership with product marketing. In enablement, we don't own the messaging. We don't own how we message our products, or necessarily how we train the products into the market, but we're a key partner in building some of those programs. And I have a learning developer who's fantastic, her name is Ren Narciso, an absolutely amazing learning designer and developer. She develops a lot of our product training, but she's not an expert in each product, right? And I'm not an expert in the product. So that partnership with product marketing is absolutely key. We started working with them to leverage frameworks like PIC (problem, impact, root cause), different frameworks to really think about how we position our products. And they have done a fantastic job of developing materials and assets. Without that partnership, I think it's very difficult for enablement to drive that value. I think we work by proxy in some instances, and we work to support those teams to help them craft a very valuable experience in Highspot. I think that's probably why we're seeing some of the adoption we are: it's because people like product marketing are really leaning in and being very strong internal advocates for the use of Highspot. They even do things like building out how-tos in Highspot: here's how you use Digital Rooms, and good practices around it. So even though you might think, shouldn't an admin be doing that?
Actually, because those people are really building out these assets, they want to see them utilized effectively. So they're leaning in, and they've got the enthusiasm and the willingness to push even more tutorials and things out to sellers.

SS: Now, you touched on the importance of learning programs and the key role they play in really driving that consistent execution. What are your best practices for designing effective learning programs, and how do you leverage Highspot to help?

AD: I think you've got to go right down to the intended outcome, right? When you're looking at a learning program, you've got to think about what you're trying to drive in terms of learning outcomes. Our learning specialist really does look at that level when she's developing these modules. She thinks, okay, what are the intended learning outcomes? So there's a training docket for each one of the courses we build, and the key thing in mind there is what key learning outcomes we're looking to drive. Then we back into that, right? We make sure we've got coverage on the resources. We make sure we've got the situational knowledge and the subject matter experts feeding that in. We try to drive things like interactivity and curiosity too. We just try to make it fun and engaging, but we're very purposeful, and we don't put a fun exercise in there just for the sake of it. We make sure that it's driving towards a learning outcome.

SS: Now, in addition to enabling your internal teams, I believe Turnitin also leverages Highspot to enable your customers through programs like customer onboarding. How is your company helping to ensure customers have a great onboarding experience, and how is Highspot helping with this?

AD: In terms of the customers who we sell to or who we're onboarding: when I started in enablement, the enablement team was actually within the customer experience part of the organization. I reported to the chief customer officer, but we moved into sales under the chief revenue officer when that new member of the exec team was hired. We've still got quite strong connections with the CE org, and we have fantastic members of that team who do the onboarding. What we find is that the onboarding team utilizes Highspot, and I know a number of the consultants use it, to provide the glue for the onboarding experience. Now they're using the Digital Sales Rooms to put materials in there, send them to customers, and have them go through the onboarding experience, and they can update the resources at the right point in time: things like the help guides, different resources, links to our help center, and presentations that they've delivered in the virtual sessions, or in-person sessions if they're doing in-person onboarding. So a lot of the use we see with the onboarding team is at that level.

SS: I love that Turnitin is really on the cutting edge here, because you guys are creating a consistent experience for your customers by leveraging Highspot from the moment they're a prospective customer all the way through their customer experience with you. Do you have any wins from that team that you can share?

AD: I think what they've said to me, when they said, "Look, we need this," is that they get really good feedback on it and it's a valuable resource. It was something they were unwilling to give up; it was providing real, identifiable value.
I think as we scale and deploy new products into the market, there have to be scalable ways of onboarding. I know we've been leaning in really heavily on digital onboarding, so this provides another way to deliver not just the training, but the resources that then help nurture and bring customers to a high level of initial deployment and success. What I'm keen to understand is how that's going, and to look into how we can support that team even more and provide them with connectivity back into Highspot. Now, I know this is a really hot topic at the moment, because I see on the community side there's lots of discussion around it, right? People are curious: I wonder if this is something we can do. I've commented a bit in a couple of those chats, but I think it is a really important area as we think about Digital Sales Rooms. Not just Digital Sales Rooms, but digital engagement spaces where, post-sale, you can keep nurturing that customer. If we want to use the HubSpot terminology, to delight: we want to delight the customer, we want to bring them in, and some of that experience they've had throughout the sales process they can then continue to have into implementation.

SS: Shifting back to impact, you have defined success metrics for each of your key initiatives. What are the core business metrics you focus on impacting through enablement?

AD: Yeah, it's probably not too dissimilar to most people, right? We have time to revenue, what the average sales cycle looks like for net new or for an upsell or cross-sell initiative, which falls into sales cycle length. Of course, there's content usage and performance: how is the material we're putting in Highspot performing, and is it getting utilized? We're starting to really lean into that in a governance project that we're working on. It's a core docketed project in our PMO, our project management office, and we're looking at figuring out where the content is performing and where it's not. Then there are things like the closing ratio and sales process consistency too; that's an issue in every sales organization. That kind of ties in with DRINTS, and we've got training we're developing and deploying on that, so we want to see that improve, because we're driving initiatives in Highspot, using training programs in there, to try to improve forecastability and things like that. Then obviously you've got win-loss rate. I don't think that's a huge issue for us; what is more of an issue for us is when it probably wasn't an opportunity in the first place, the process wasn't adhered to that closely, and we've got to get more robust around that. So all the core metrics you would expect: size of the deal, velocity through the stages, those types of things. We have a lot of those already mapped out in our Tableau dashboard, and we are tracking them. And what we did, very roughly, last year when we deployed that dashboard is we looked at about an eight-month period and a simple metric: who has been through the training programs and completed them versus who hasn't, across a number of different product trainings and sales capability trainings, and how are those metrics aligning? Every single one of the KPIs was positively trending for the people who were completing the learning programs versus those who weren't. Which is probably not surprising, but it was good to actually prove out and see in the data.

SS: Fantastic.
Last question for you, Anthony. A big aspect of your enablement strategy is also that it serves as a roadmap for your future vision, which for Turnitin includes leveraging innovation like AI. How are you beginning to leverage AI in your strategy, and how do you plan to continue to evolve that?

AD: Yeah, this is a great question. We're currently piloting and trying out the Meeting Intelligence tool in Highspot. There are a couple of reasons we wanted to do that, really. One is to understand and try to figure out the behaviors: are the capabilities getting put into practice, and how consistently is that happening? But the other reason is around really trying to drive those coaching opportunities as well. What we found is that we had Gong in place a number of years ago, and we had about four and a half thousand recordings in that platform: four and a half thousand sales meetings. But when we looked at making a decision on whether we were going to continue with that tool or not, what we found is that nobody was reviewing them. Nobody was actually doing anything with them. There was no top-down push for people to do it, but there was also no real bottom-up drive, or even asks from teams, to get that commentary and that coaching and reinforcement. So in terms of coaching, it's a really big challenge. When Highspot was looking at developing this tool, I actually spoke with some of your product managers and tried to input into some of the early thinking around how you would implement a tool like this in Highspot. This is one of the things I raised in that conversation, and what I was delighted to see is the introduction of AI in terms of setting a rubric around what you expect in these types of meetings. Take a discovery meeting, for example: you can set a rubric around what a good discovery meeting looks like. What are the capabilities you expect? What are the outcomes you expect to see from that discovery meeting? How do you expect the rep to manage the meeting? You can capture that, and then if you ingest that meeting into Meeting Intelligence, you have an algorithm that can understand it and score it. So I was delighted to see that as part of the product when you initially launched it, and we're really keen to test it out, because we have this concept, as one of our initiatives around quality assurance, of being able to drop lessons into Highspot on a pathway on a quarterly basis, where sellers are asked to go and identify their top discovery meeting, or a sample of discovery meetings. We want those run through the algorithm, through that rubric, and then we want managers to be able to get some quick feedback immediately, and to be able to try it again if they want and put another discovery meeting in there; maybe two weeks later, have another discovery meeting, try it out, and then get more feedback. But then, on a more summative basis, maybe once a quarter, or once or twice a year, be able to drop that in across all of our capabilities: the key meetings for discovery and for vision, establishing a buying vision. We generally have other meetings to present and demo, so how are the reps demoing? We want that to go through the system and be stored. And then we want managers, hopefully, to go in there, review the AI feedback, give their own feedback, give a grade, give a result.
We want to build that in as a quality assurance piece of the practice. So that's how we're hoping to leverage some of that technology, but we haven't really got there yet. We've got the model in place, and we want to try it out and see where it gets to, because what we know is that it's very difficult to engage managers in that coaching dialogue. But we feel that if we can give them a bit of a crutch, or a bit of a lead-in with some suggestions and pointers on where to look, we can get there much more easily.

SS: Thank you, Anthony. I greatly appreciate your time and your insights.

AD: No problem. Happy to share.

SS: To our audience, thank you for listening to this episode of the Win Win Podcast. Be sure to tune in next time for more insights on how you can maximize enablement success with Highspot.
In this week's episode of the Hitech Podcast, we explore the effectiveness of @turnitin #plagiarism and #ai checkers. We question whether they truly deliver on their promises and if they are worth the investment. Then, we delve into Lucidspark and its role in enhancing online learning. For more on our conversation, check out the episode page here. Want to build your business like we have? Join us over at Notion by signing up with our affiliate link to start organizing EVERYTHING you do. Head over to our website at hitechpod.us for all of our episode pages, send some support at Buy Me a Coffee, our Twitter, our YouTube, our connection to Education Podcast Network, and to see our faces (maybe skip the last one). Need a journal that's secure and reflective? Sign-up for the Reflection App today! We promise that the free version is enough, but if you want the extra features, paying up is even better with our affiliate discount. --- Send in a voice message: https://podcasters.spotify.com/pod/show/hitechpod/message
The artificial intelligence genie is out of the bottle, and there's no way to put it back in. All we can do is learn to live with AI as productively as possible. No surprise, today's youth are leading the charge from middle school to college. Amy and Mike invited ed tech expert Annie Chechitelli to describe how students use generative AI. What are five things you will learn in this episode? How are students using generative AI? What are positive and productive ways to use generative AI? What are less-productive ways to use generative AI? Are current detection tools keeping up with AI-generated work? How can educators promote positive adoption and use of AI tools? MEET OUR GUEST Annie Chechitelli has spent the past two decades innovating with educators to expand access to education, meet the quickly changing needs of learners, and empower students to do their best, original work. As the Chief Product Officer at Turnitin, Annie oversees the Turnitin suite of applications which includes academic integrity, grading and feedback, and assessment capabilities. Prior to joining Turnitin, Annie spent over five years at Amazon where she led Kindle Content for School, Work, and Government and launched the AWS EdTech Growth Advisory team, advising education technology companies on how to grow their product and go-to-market strategies with AWS. Annie began her career in EdTech at Wimba where she launched a live collaboration platform for education which was ultimately acquired by Blackboard in 2010. At Blackboard she led platform management, focused on transitioning Blackboard Learn to the cloud. Annie holds a B.S. from Columbia University and an M.B.A. and M.S. from Claremont Graduate University. She resides near Indianapolis, Indiana with her youngest child and husband, with two children attending college on the East Coast. Annie can be reached at LinkedIn. LINKS What is ChatGPT, DALL-E, and generative AI? How To Promote Academic Integrity In Your Classroom RELATED EPISODES ARTIFICIAL INTELLIGENCE AND ACADEMIC INTEGRITY COLLEGE ESSAYS IN THE AGE OF ARTIFICIAL INTELLIGENCE HOW CAN TUTORS USE AI? ABOUT THIS PODCAST Tests and the Rest is THE college admissions industry podcast. Explore all of our episodes on the show page. ABOUT YOUR HOSTS Mike Bergin is the president of Chariot Learning and founder of TestBright. Amy Seeley is the president of Seeley Test Pros. If you're interested in working with Mike and/or Amy for test preparation, training, or consulting, feel free to get in touch through our contact page.
"Cuando una persona no quiere mostrar su tesis universitaria de maestría y doctorado es por una sola razón: que si la metes al TURNITIN, resulta que no es tuyo lo que está ahí, que tomaste las ideas de otros y no lo consignaste debidamente", dijo el conductor de Exitosa, Nicolás Lúcar, sobre la suspendida fiscal de la Nación, Patricia Benavides.
In the latest episode of the K12 Tech Talk Podcast, Chris and Mark discuss a variety of news stories, including the COPS SVPP grant focusing on security measures in schools such as notification systems and security cameras. They also address concerns around Primavera's investment in Tutor.com and the rise of AI-generated plagiarism in student papers. Listener emails reveal discussions on firewall ratings and more. - - - - - https://www.youtube.com/@k12techtalk Join the K12TechPro.com Community. Buy our merch!!! * NTP * Howard - Email Seth Shockley - sshockley@howard.com At Howard, we pride ourselves on being a one stop shop for all of your technology needs. From Computing, Audiovisual, Networking, Physical Security, Cybersecurity and beyond, we can find a solution that works for you and your environment * Extreme - Email dmayer@extremenetworks.com * Fortinet - Email fortinetpodcast.com@fortinet.com * ClassLink * A4L Summit Oh, and... Email us at k12techtalk@gmail.com Tweet us err X us @k12techtalkpod Visit our LinkedIn page HERE
Turnitin, a service that checks papers for plagiarism, says its detection tool found millions of papers that may have a significant amount of AI-generated content. Thanks for listening to WIRED. Talk to you next time for more stories from WIRED.com and read this story here. Learn more about your ad choices. Visit megaphone.fm/adchoices
NYU is flipping the script on many traditional educational models and mindsets, embracing a strategic shift to offer alternative pathways to top-tier degrees. This strategic evolution reflects a profound commitment to access and flexibility, directly addressing the needs of an expanded demographic of students. Dr. Harrison shines a light on the practical implementations and thought processes behind such forward-thinking initiatives, aiming to demonstrate the successful delivery of education to a larger, non-traditional population. In Part 2 of this two-part podcast, Drumm McNaughton and Doug Harrison continue the conversation where they left off in Part 1, discussing the four key components of New York University's Applied Undergraduate Studies program at its School of Professional Studies: 1) expanded transfer credit friendliness, 2) the delivery modality, 3) offering an associate degree, and 4) prior learning assessment.
Podcast Highlights
Enhancing Online Learning Modalities: NYU's approach to online learning, encompassing both synchronous and asynchronous modalities. Benefits of providing a flexible learning environment to accommodate the needs of diverse learners. The role of support services in enhancing the online learning experience, including professional advising and career services.
Prior Learning Assessment and Additional Credits: Importance of recognizing the diverse backgrounds and experiences of students through prior learning assessment. Examples of crediting students for external experiences, such as military service or professional certifications, to accelerate degree completion while containing cost.
Student Support Services and Data Analytics for Successful Outcomes: Utilizing data analytics to support successful outcomes. The shift from reactive to proactive strategies in identifying and supporting at-risk students. The comprehensive analysis of student data to allocate targeted resources and interventions effectively. NYU's holistic approach to student support, spanning from enrollment through graduation, accommodating skill gaps due to K-12 inequities. An explanation of various support services offered, including financial aid and career services. The importance of a coordinated approach to ensure students receive comprehensive support throughout their educational journey.
Public-Private Partnerships for Workforce Alignment and Opportunities: The significance of partnerships with public schools, industry leaders, and community organizations in aligning education with workforce needs. NYU's initiatives in creating pathways for students that lead to relevant and rewarding careers. Examples of collaborations aimed at expanding economic opportunities and fostering a diverse workforce.
Pricing and Accessibility Strategies to Broaden Higher Ed Pathways: Strategies to make education more accessible through pricing models and financial aid options. The impact of NYU's pricing policies on broadening access to higher education, including associate degrees at reduced prices. NYU's commitment to supporting students from families with limited income, ensuring an affordable path to degree completion.
How Leadership's Learning Mindset Impacts Student Success: The role of leadership in fostering a culture of innovation and continuous improvement within educational institutions. Examples of how unified vision among board members and executives can drive the adoption of innovative educational strategies.
The importance of learning from failure and the strategies for implementing changes based on outcomes and evaluations. Three Key Takeaways for University Presidents and Boards Strategic Focus: Prioritize your institution's core strengths and values, directing resources and efforts towards areas of excellence to navigate the disruptive pressures in higher education. Innovation and Learning: Foster a culture of innovation tailored to your institution's unique mission, encouraging experimentation and valuing the lessons learned from failure to build resilience. Humanity and Civility: Champion a culture of integrity, professionalism, and collaboration, modeling these values to navigate the sector's challenges and maintain a positive, supportive community. Read the transcript and detailed show summary: https://changinghighered.com/nyus-alternate-pathways-to-a-top-tier-degree-part-2 About Our Guest Douglas Harrison leads the Division of Applied Undergraduate Studies at NYU's School of Professional Studies. Prior to NYU, Harrison founded the School of Cybersecurity and Information Technology at the University of Maryland Global Campus. He has published and presented widely on access and inclusion in online learning, assessment security, and academic integrity. He is a past director on the board of the International Center for Academic Integrity and currently serves on Turnitin's Customer Advisory Board for AI in higher education and for the Sounding Spirit Collaborative at Emory University's Center for Digital Scholarship. His scholarship has been awarded the John Kluge Residential Fellowship at the Library of Congress and the NEA's Award for Excellence in the Academy. Social Link: Doug Harrison on LinkedIn → About the Host Dr. Drumm McNaughton, host of Changing Higher Ed podcast, is a consultant to higher education institutions in governance, accreditation, strategy and change, and mergers. To learn more about his services and other thought leadership pieces, visit his firm's website: https://changinghighered.com/. The Change Leader's Social Media Links LinkedIn: https://www.linkedin.com/in/drdrumm/ Twitter: @thechangeldr Email: podcast@changinghighered.com #NYU #HigherEdInnovation #InclusiveEducation #AlternativePathways
NYU is responding to the large U.S. population that needs and wants affordable and flexible higher education that meets them where they are by creating unconventional pathways to top-tier degrees. In this episode of the Changing Higher Ed® podcast, Dr. Drumm McNaughton is joined by Dr. Doug Harrison, the head of New York University's Applied Undergraduate Studies program at its School of Professional Studies, to discuss how NYU has built structures and processes that create alternative pathways for first-gen and low socioeconomic students that enable them to get an NYU degree.
Podcast Highlights
· Introduction to NYU's Innovative Educational Pathways
  o Overview of NYU's School of Professional Studies
  o Dr. Doug Harrison's role in expanding access to education
· Targeting the "Some College, No Degree" Demographic
  o The significance of this group in the U.S. education landscape
  o Strategies to support students with interrupted education
· Non-Traditional Pathways for Higher Education
  o Tailoring education for first-gen and low socioeconomic students
  o The importance of stackable degrees and flexible learning options
· Maximizing Transfer Credits
  o Addressing the challenge of diverse educational backgrounds
  o NYU's approach to curriculum design for broader credit acceptance
· Online Programs and Work-Life Balance
  o Expanding access through online degree programs
  o Catering to students with work or family commitments
· Associate Degrees at Elite Institutions
  o The role of associate degrees in NYU's educational offerings
  o Financial accessibility for Pell and TAP-eligible students
· Apprenticeship Degrees and Real-World Skills
  o Launching apprenticeship degrees to meet workforce demands
  o The benefits of integrating education with practical experience
· Strategies for Student Recruitment and Engagement
  o Digital marketing and SEO optimization for program visibility
  o Personalized outreach and understanding non-traditional student pathways
· Collaborative Efforts for Student Success
  o The creation of an equity and access inclusion network
  o Cross-school collaboration for seamless educational transitions
· Vision for the Future
  o NYU's commitment to education innovation and student inclusivity
  o Leadership's role in fostering a supportive learning environment
Read the transcript or detailed show summary: https://changinghighered.com/nyus-alternate-pathways-to-a-top-tier-degree-part-1
About Our Guest
Douglas Harrison leads the Division of Applied Undergraduate Studies at NYU's School of Professional Studies. Prior to NYU, Harrison founded the School of Cybersecurity and Information Technology at the University of Maryland Global Campus. He has published and presented widely on access and inclusion in online learning, assessment security, and academic integrity. He is a past director on the board of the International Center for Academic Integrity and currently serves on Turnitin's Customer Advisory Board for AI in higher education and for the Sounding Spirit Collaborative at Emory University's Center for Digital Scholarship. His scholarship has been awarded the John Kluge Residential Fellowship at the Library of Congress and the NEA's Award for Excellence in the Academy. Social Link: Doug Harrison on LinkedIn
About the Host
Dr. Drumm McNaughton, host of the Changing Higher Ed podcast, is a consultant to higher education institutions in governance, accreditation, strategy and change, and mergers.
To learn more about his services and other thought leadership pieces, visit his firm's website: https://changinghighered.com/. The Change Leader's Social Media Links LinkedIn: https://www.linkedin.com/in/drdrumm/ Twitter: @thechangeldr Email: podcast@changinghighered.com #changinghighered #thechangeleader #higheredpodcast
In this episode, I speak with Jason Gulya, who is one of my top voices on AI and education. Note that this was a conversation we had back in November 2023, but due to a computer crash and loss of files it has taken me time to restore and fix everything. Thankfully, I was able to save this file. Here is his bio so you can learn more about him:
I am currently Professor of English at Berkeley College. I have been working in higher education for about 10 years and--before coming to Berkeley--have held positions at Rutgers University, Raritan Valley Community College, and Brookdale Community College. As a professor I teach onsite and online courses on a variety of topics--including composition, literature, film, and the humanities more broadly. I am also a proud member of the Honors Program faculty at Berkeley. In 2020, I received my college's award for Excellence in Teaching. My research focuses primarily on literature, pedagogy, and grammar. I have published in a wide variety of journals, including (but not limited to) "Literary Imagination," "Pedagogy," "Dialogue," and "eCampus News." I have also written chapters for the books "Allegory Studies: Contemporary Perspectives" (forthcoming from Routledge), "Adapting the 18th Century" (University of Rochester Press, 2020) and "Reflections on Academic Lives" (Palgrave Macmillan, 2017). My research has been supported by fellowships and grants from Berkeley College, Rutgers University, Harvard University, and Cornell University.
WHERE CAN I LISTEN?
No matter how you listen to your podcasts, I should be there. Check the link here to follow and subscribe. I also recently started posting on my YouTube channel: https://www.youtube.com/@coffeechug
Direct Link: https://coffeechug.simplecast.com/episodes/190
Challenges & Goals
Jason's main challenge is adapting traditional teaching methods to incorporate emerging technologies like AI. His goal is fostering an environment conducive to experimentation and innovative learning practices. How do we face the challenge of reducing low-level managerial tasks without compromising foundational knowledge? Perhaps a goal is to use AI to automate certain tasks, freeing up time for more enriching activities.
Surprising Takeaways
Jason advocates for educators being open about trying new things, even if they are not fully formed ideas yet. He emphasizes the importance of restorative breaks where individuals engage in mindless yet mentally refreshing activities. Jason would never automate his social presence, because he learns from talking and writing. He favors giving students the ability to choose not just whether they're using AI, but how they're using it and how it's actually being worked into their process.
Emerging Patterns
Both of us grapple with shifting from traditional learning methods toward technology-enhanced ones. There's a common theme among educators that while automation can be beneficial, it should not replace all human elements in education or daily routines.
Key Moments
Quotes from Jason: I think one of the productive things that can come out of this level of disruption is a culture of experimentation. We ask our students to experiment all the time, especially in K to 12 certainly, and in higher ed too. We want our students to do this; we give them something new all the time. I don't even think about it. We say here's something new to read, something new to watch, something new to process, and we ask them to engage with it.
Just experiment, test it out, whether it's in traditional assessment or nontraditional assessment. And so in many ways, we need to just practice that. We need to do that, right? We have this new stuff. So experiment, play with it, iterate, see what works, and then you kind of go from there. And I think that we all have this desire, from the instructor side, for perfection. We think, and I think this is an error, that the best way to serve our students is to give them a fully realized, polished product, right? Which is the course. But I think the exact opposite is true. I think something changes when you tell students that I'm trying something new, I'm going to try a different form of assessment, and I want your feedback about how it worked. I think that fundamentally changes the feel of the virtual room, or the actual physical room. So I think that the culture of experimentation is something we need to really, really conserve and prize, because that's what we want our students to do. You want them to experiment and play.
...Sometimes the key to making an assignment AI-proof is to create a better assignment. So normally, with the essay form, I would spend the first hour looking, not even reading anything, not even doing what we all do and what we like, you know, giving feedback and engaging with ideas, but going through the Turnitin reports, scanning them, finding information. I would lose an hour to two hours, and then send emails to students that it came up at 70% on Turnitin. I'm not doing that anymore. I no longer have that task. So the amount of time that I put into creating the assignment, I save later on, at least a little bit.
It's not that you're saving time but repurposing time, and how much better is that time spent, for you and for the students, being engaged in reading their thinking and having conversations about their thinking? That's such a more enriching learning opportunity for everybody involved, the educator and the students, versus what we're doing now.
What assessment can I create today that I can continue to teach one year, five years, 10 years down the road? I think that we have to be willing, as a part of our job, to change how we're assessing things if something does need to change about them.
RESOURCES
Jason on LinkedIn, where he shares and I do my learning from him!
Reference to the assignment he made AI-proof (it includes the movie Inception!)
This episode is sponsored by Turnitin. I had such a great conversation with Annie Chechitelli, Chief Product Officer at Turnitin. Hope you will enjoy this episode! About Annie: Annie Chechitelli has spent the past two decades innovating with educators to expand access to education, meet the quickly changing needs of learners, and empower students to do their best, original work. As the Chief Product Officer at Turnitin, Annie oversees the Turnitin suite of applications which includes academic integrity, grading and feedback, and assessment capabilities. Prior to joining Turnitin, Annie spent over five years at Amazon where she led Kindle Content for School, Work, and Government and launched the AWS EdTech Growth Advisory team, advising education technology companies on how to grow their product and go-to-market strategies with AWS. Annie began her career in EdTech at Wimba where she launched a live collaboration platform for education which was ultimately acquired by Blackboard in 2010. At Blackboard she led platform management, focused on transitioning Blackboard Learn to the cloud. Annie holds a B.S. from Columbia University and an M.B.A. and M.S. from Claremont Graduate University. She resides near Indianapolis, Indiana with her youngest child and husband, with two children attending college on the East Coast. Learn more about Turnitin's great resources! 1. Turnitin's AI writing detection web page: https://www.turnitin.com/solutions/topics/ai-writing/ 2. Blog: One piece at a time: Navigating AI writing: https://www.turnitin.com/blog/one-piece-at-a-time-navigating-ai-writing 3. AI writing puzzle for educators, institutions and administrators: (Note - each puzzle piece directs users to a resource) https://go.turnitin.com/l/45292/2023-09-19/ckqbmy/45292/1695141908uEcVg1as/TII_GL_AI_WritingPuzzle_Infographic_Tabloid_US_0923.pdf 4. AI writing resource page: https://www.turnitin.com/instructional-resources/packs/academic-integrity-in-the-age-of-ai
In this week's episode of All Things Marketing and Education, our host Elana Leoni sat down with Ian McCullough, the Director of Marketing for Global Campaigns at Turnitin. Ian took us on a journey into the fascinating intersection of ChatGPT, AI writing, and academic integrity, offering insights ranging from the history of plagiarism and the need for plagiarism detection to academic integrity in the world of AI today. Join Elana and Ian as they share lighthearted anecdotes, discuss the collaborative nature of EdTech, and explore the transformative power of networking in the field. Show notes: leoniconsultinggroup.com/57
Eric Wang focuses on leveraging AI to improve learning experiences and promote academic integrity around the world. He leads the AI transformation of Turnitin as VP of AI. Turnitin is one of the world's largest EdTech companies, and Turnitin AI is a globe-spanning AI research org that develops and deploys cutting-edge, scalable AI to improve teaching, feedback, efficiency, and academic integrity at over 16,000 educational institutions, reaching 40+ million students. Eric has over 15 years of hands-on people and strategic leadership experience in AI across academia, government research, and the technology industry. He's an expert in the full lifecycle of enterprise AI and enterprise AI strategy. He's recently been featured by NBC Nightly News, NYT, Wired, Insider, and EdSurge. Links: Large-Scale Deep Learning on the YFCC100M Dataset; the paper "Attention Is All You Need," Vaswani, Shazeer, et al. Hosted on Acast. See acast.com/privacy for more information.
Welcome to Episode 567 of the Yeukai Business Show. In this episode, Yuliya Gorenko, a marketing communications expert, shares a strategy for developing a powerful influencer marketing campaign that enhances brand image and contributes to business growth. If you want to learn about the aspects of influencer marketing, tune in now! In this episode, you'll discover how to find the right influencers for your campaign, how to do influencer outreach, and how to measure the success of your influencer campaign. About Yuliya Gorenko: Yuliya Gorenko is a marketing communications expert with a proven track record of over 10 years in public relations and influencer marketing. Her hands-on marketing expertise extends to various industries, including consumer goods, mobile apps, B2C SaaS, and enterprise software. She's worked with over 30 brands globally, including L'Oreal, Garnier, Maybelline New York, Hotspot Shield VPN, Turnitin, and DeleteMe. In 2020, Yuliya started Mischka Agency, a marketing communications boutique that helps brands deliver their vision to the world. Originally from Ukraine, she now lives on the West Coast of the US. More Information: Learn more about the aspects of influencer marketing at https://mischka.agency/ LinkedIn: https://www.linkedin.com/in/yuliya-gorenko/ Thanks for Tuning In! Thanks so much for being with us this week. Have some feedback you'd like to share? Please leave a note in the comments section below! If you enjoyed this episode on how to expand your business, please share it with your friends using the social media buttons you see at the bottom of the post. Don't forget to subscribe to the show on iTunes to get automatic episode updates for the Yeukai Business Show! And, finally, please take a minute to leave us an honest review and rating on iTunes. Reviews really help with the ranking of the show, and I make it a point to read every single one we get. Please leave a review right now. Thanks for listening!
This week's episode was our new-format shortcast: a rapid rundown of some of the news about AI in education. And it was a hectic week! Here are the links to the topics discussed in the podcast.
Australian academics apologise for false AI-generated allegations against big four consultancy firms: https://www.theguardian.com/business/2023/nov/02/australian-academics-apologise-for-false-ai-generated-allegations-against-big-four-consultancy-firms?CMP=Share_iOSApp_Other
New UK DfE guidance on generative AI. The UK Department for Education's guidance on generative AI looks useful for teachers and schools. It has good advice about making sure that you are aware of students' use of AI, and also aware of the need to ensure that their data - and your data - is protected, including not letting it be used for training. The easiest way to do this is to use enterprise-grade AI - education or business services - rather than consumer services (the difference between using Teams and Facebook). You can read the DfE's guidelines here: https://lnkd.in/eqBU4fR5 You can check out the assessment guidelines here: https://lnkd.in/ehYYBktb
"Everyone Knows Claude Doesn't Show Up on AI Detectors." Not a paper, but an article from an academic: https://michellekassorla.substack.com/p/everyone-knows-claude-doesnt-show The article discusses an experiment conducted to test AI detectors' ability to identify content generated by AI writing tools. The author used different AI writers, including ChatGPT, Bard, Bing, and Claude, to write essays, which were then checked for plagiarism and AI content using Turnitin. The tests revealed that while the other AIs were detected, Claude's submissions consistently bypassed the AI detectors.
New AI isn't like old AI: you don't have to spend 80% of your project and budget up front gathering and cleaning data. Ethan Mollick on Twitter: "The biggest confusion I see about AI from smart people and organizations is conflation between the key to success in pre-2023 machine learning/data science AI (having the best data) & current LLM/generative AI (using it a lot to see what it knows and does, worry about data later)." See Ethan's tweet of 4th November and his blog post: https://www.oneusefulthing.org/p/on-holding-back-the-strange-ai-tide
OpenAI's Dev Day. We talked about the OpenAI announcements this week, including the new GPTs, which are a way to create and use assistants. The OpenAI blog post is here: https://openai.com/blog/new-models-and-developer-products-announced-at-devday The blog post on GPTs is here: https://openai.com/blog/introducing-gpts And the keynote video is here: OpenAI DevDay, Opening Keynote
Research Corner
Gender Bias. Quote: "Contrary to concerns, the results revealed no significant difference in gender bias between the writings of the AI-assisted groups and those without AI support. These findings are pivotal as they suggest that LLMs can be employed in educational settings to aid writing without necessarily transferring biases to student work."
Tutor Feedback Tool. Summary of the research: This paper presents two longitudinal studies assessing the impact of AI-generated feedback on English as a New Language (ENL) learners' writing. The first study compared the learning outcomes of students receiving feedback from ChatGPT with those receiving human tutor feedback, finding no significant difference in outcomes. The second study explored ENL students' preferences between AI and human feedback, revealing a nearly even split.
The research suggests that AI-generated feedback can be incorporated into ENL writing assessment without detriment to learning outcomes, recommending a blended approach to capitalize on the strengths of both AI and human feedback.
Personalised Feedback in Medical Learning. Summary of the research: The study examined the efficacy of ChatGPT in delivering formative feedback within a collaborative learning workshop for health professionals. The AI was integrated into a professional development course to assist in formulating digital health evaluation plans. Feedback from ChatGPT was considered valuable by 84% of participants, enhancing the learning experience and group interaction. Despite some participants preferring human feedback, the study underscores the potential of AI in educational settings, especially where personalized attention is limited.
High-Stakes Answers. Your Mum was right all along: ask nicely if you want things! And, in the case of ChatGPT, tell it your boss/Mum/sister is relying on you for the right answer. Summary of the research: This paper explores the potential of Large Language Models (LLMs) to comprehend and be augmented by emotional stimuli. Through a series of automatic and human-involved experiments across 45 tasks, the study assesses the performance of various LLMs, including Flan-T5-Large, Vicuna, Llama 2, BLOOM, ChatGPT, and GPT-4. The concept of "EmotionPrompt," which integrates emotional cues into standard prompts, is introduced and shown to significantly improve LLM performance. For instance, the inclusion of emotional stimuli led to an 8.00% relative performance improvement in Instruction Induction and a 115% increase in BIG-Bench tasks. The human study further confirmed a 10.9% average enhancement in generative tasks, validating the efficacy of emotional prompts in improving the quality of LLM outputs.
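The EmotionPrompt idea summarized above amounts to appending an emotional cue to an otherwise ordinary prompt before sending it to the model. A minimal sketch of that augmentation step follows; the stimulus phrasings are paraphrased examples rather than the exact prompts from the paper, and the resulting string would be passed to whichever LLM you are evaluating.

```python
# Toy illustration of the EmotionPrompt idea: append an emotional stimulus
# to a standard task prompt. Phrasings are paraphrased examples, not the
# exact stimuli from the paper.
EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "My manager is relying on me to get this right.",
]

def emotion_prompt(task: str, stimulus_index: int = 0) -> str:
    """Return the task prompt with an emotional cue appended."""
    return f"{task} {EMOTIONAL_STIMULI[stimulus_index]}"

base = "Summarise the key findings of this study in three bullet points."
augmented = emotion_prompt(base, stimulus_index=1)
print(augmented)
# The augmented string is what you would send to the model under evaluation.
```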
It's YOUR time to #EdUp In this episode, YOUR guest is Annie Chechitelli, Chief Product Officer at Turnitin YOUR host is Dr. Joe Sallustio YOUR sponsors are The Middle States Commission on Higher Education (MSCHE) & InsightsEDU Why & how should higher education redefine assessment in an AI age? How has Turnitin evolved from its inception to the fast changing technology environment we all find ourselves in today? What does Annie see as the future of higher education? Listen in to #EdUp! Thank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp! Connect with YOUR #EdUp Team - Elvin Freytes & Dr. Joe Sallustio ● Join YOUR #EdUp community at The EdUp Experience! We make education YOUR business! --- Send in a voice message: https://podcasters.spotify.com/pod/show/edup/message
Yuliya Gorenko: "Marketing communications expert with 10+ years of experience in PR, influencer marketing, and communications."
About Me
Hi! I'm a marketing communications expert with over 10 years of track record in public relations, influencer marketing, and communications. My hands-on marketing expertise extends to various industries, including consumer goods, mobile apps, B2C SaaS, and enterprise software. I've worked with over 30 brands globally, including L'Oreal, Garnier, Maybelline New York, Hotspot Shield VPN, Turnitin, and DeleteMe. In 2020, I started Mischka Agency, a marketing communications boutique that helps brands deliver their vision to the world. Since then, I've been featured in AdExchanger, SeoBuddy, and Business2Community. I am passionate about developing powerful communication strategies that elevate the brand image and contribute to business growth. In my free time, I enjoy social psychology literature and volunteer for humanitarian projects in my native country of Ukraine. I am also an Executive Board member of Kulbaba, a 501(c)(3) organization that raises funds for people affected by Russia's war against my home country of Ukraine. I can speak about the ins and outs of influencer marketing campaigns, as well as about applying my marketing communications skills to charitable work.
Past speaking: https://www.enago.com/see-the-future/conference-2021/speakers/yuliya-gorenko/
The Doctor offers a complimentary website analysis, or a custom software open door session with Amplifi Labs: mick.smith@amplifilabs.com
Mick, The Doctor of Digital, Smith: mick.smith@wsiworld.com
Burning America: In the Best Interest of the Children? https://burning-america.com
Amazon: https://www.amazon.com/G-Mick-Smith/e/B0B59X5R79 (also at Barnes & Noble, Walmart, and Target)
Leave a message for The Doctor of Digital: https://podinbox.com/thedoctorofdigitalpodcast
Instagram: burningamericacommunity
Patreon (burningamericacommunity): https://www.patreon.com/SmithConsultingWSITheDoctorofDigitalPodcast
Listen, subscribe, share, and positively review The Aftermath: https://podcasts.apple.com/us/podcast/the-aftermath-the-epidemic-of-divorce-custody-and-healing/id1647001828
Substack: https://micksmith.substack.com/
Commercials Voice Talent: https://www.spreaker.com/user/7768747/track-1-commercials
Narratives Voice Talent: https://www.spreaker.com/user/7768747/track-2-narratives
Do you want a free competitive analysis for your business? https://marketing.wsiworld.com/free-competitive-analysis?utm_campaign=Mick_Smith_Podcast&utm_source=Spreaker
Make an appointment: https://www.picktime.com/TheDoctorOfDigital
Be sure to subscribe, like, & review The Doctor of Digital™ Podcast. Sign up for the Doctor Up Your Life course.
Facebook || Instagram || Twitter || LinkedIn || YouTube: https://www.linkedin.com/in/gmicksmith/
When philosophy professor Darren Hick came across another case of cheating in his classroom at Furman University, he posted an update to his followers on social media: “Aaaaand, I've caught my second ChatGPT plagiarist.” Practically overnight, ChatGPT and other artificial intelligence chatbots have become the go-to source for cheating in college. Now, educators are rethinking how they'll teach courses this fall from Writing 101 to computer science. Educators say they want to embrace the technology's potential to teach and learn in new ways, but when it comes to assessing students, they see a need to “ChatGPT-proof” test questions and assignments. An explosion of AI-generated chatbots including ChatGPT, which launched in November, has raised new questions for academics dedicated to making sure that students not only can get the right answer, but also understand how to do the work. Educators say there is agreement at least on some of the most pressing challenges. — Are AI detectors reliable? Not yet, says Stephanie Laggini Fiore, associate vice provost at Temple University. Fiore was part of a team at Temple that tested the detector used by Turnitin, a popular plagiarism detection service, and found it to be “incredibly inaccurate.” — Will students get falsely accused of using artificial intelligence platforms to cheat? Absolutely. In one case last semester, a Texas A&M professor wrongly accused an entire class of using ChatGPT on final assignments. Most of the class was subsequently exonerated. — So, how can educators be certain if a student has used an AI-powered chatbot dishonestly? It's nearly impossible unless a student confesses, as both of Hick's students did. Unlike old-school plagiarism where text matches the source it is lifted from, AI-generated text is unique each time. In some cases, the cheating is obvious, says Timothy Main, a writing professor at Conestoga College in Canada, who has had students turn in assignments that were clearly cut-and-paste jobs. In his first-year required writing class last semester, Main logged 57 academic integrity issues, an explosion of academic dishonesty compared to about eight cases in each of the two prior semesters. AI cheating accounted for about half of them. This article was provided by The Associated Press.
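The contrast the article draws, matching lifted text against a known source versus detecting AI output that has no source, can be illustrated with a toy n-gram overlap check of the kind classic plagiarism detection relies on. This is a simplified sketch, not Turnitin's actual method, and the essays are invented examples.

```python
# Simplified sketch of "old-school" plagiarism checking: shared word n-grams
# against a known source text. AI-generated prose has no source document to
# match, which is why this style of detection does not carry over.
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission: str, source: str, n: int = 5) -> float:
    sub, src = ngrams(submission, n), ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

source_essay = "the industrial revolution transformed european economies by mechanising textile production"
copied = "the industrial revolution transformed european economies by mechanising textile production and trade"
rewritten = "factories and new machines reshaped how european cloth was made and sold in the 1800s"

print(overlap_ratio(copied, source_essay))     # high overlap: lifted from a known source
print(overlap_ratio(rewritten, source_essay))  # near zero: nothing to match, as with generated text
```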
Eric Schwartz, the head writer at Voicebot.ai, was joined by GAIN producer and Synthedia researcher Andrew Herndon. Generative AI News - Featured Stories of the Week: OpenAI introduces Custom Instructions to add personalization to ChatGPT. OpenAI removed its Text Classifier, which was supposed to identify whether content was AI-written, because it didn't work. Of course, Turnitin and others still say they can spot AI-written text, but provide no evidence to back up the claims. Synthedia was the first to break this story, BTW. The ChatGPT app for Android racked up over 1 million downloads in 24 hours, more than a week faster than the iOS app reached that milestone. Synthedia was also the first to break this story.
Join Sarah Knight, head of learning and teaching transformation at Jisc, who is joined by Marieke Guy, Head of Digital Assessment at University College London (UCL) and Mary McHarg, Activities & Engagement Officer at UCL Student Union to discuss the reimagining of assessment and feedback at the institution. Marieke provides insights into the university's broad scope, with 11 faculties and over 60 departments. UCL supports around 43,000 students and over 14,000 employees, offering a diverse range of undergraduate and postgraduate programmes. They discuss how the institution faces the challenge of maintaining consistency and utilising technology effectively due to its scale and diversity. Mary highlights the challenges students face in relation to assessment and feedback. With a vast institution like UCL, students experience different assessment methods, frequencies, and feedback quality across departments. The podcast explores the importance of consistency, quality feedback, and supporting student well-being. The episode emphasises the involvement of students in the assessment process. UCL actively engages students through panels, partnerships, and programmes such as ‘student changemakers'. Marieke discusses the wide range of assessment tools used at UCL, such as Moodle, Wiseflow, Mahara, WordPress, Crowdmark, and Turnitin. The conversation moves on to how UCL is addressing the need for assessment practice and curriculum redesign. Marieke mentions ongoing work with the academic practice centre and academic communication centre to support staff in rethinking assessments. The discussion delves into AI's role in assessment and the need to educate staff and students about its capabilities, limitations, and ethical considerations. UCL is incorporating AI into assessments and actively involving students in discussions about its use. The episode concludes with the importance of senior leaders supporting the institutional approach to rethinking assessment and feedback. It emphasises the need for clear communication, involving students as partners, providing resources and support for staff, and investing in experts. Show notes Read more about how UCL is redesigning assessment for the AI age Check out our framework guide for digital transformation in higher education, and explore a comprehensive perspective on how the digital environment can support positive work, research and learning experiences, and promote a sense of belonging and wellbeing Read the UCL Digital Assessment Team blog for valuable insights and updates on innovative digital assessment practices at UCL Subscribe to Headlines - our newsletter which has all the latest edtech news, guidance and events tailored to you Get in touch with us at podcast@jisc.ac.uk if you'd like to come on the show or know someone who might suit the series
Pablo Molina, associate vice president of information technology and chief information security officer at Drexel University and adjunct professor at Georgetown University, leads the conversation on the implications of artificial intelligence in higher education. FASKIANOS: Welcome to CFR's Higher Education Webinar. I'm Irina Faskianos, vice president of the National Program and Outreach here at CFR. Thank you for joining us. Today's discussion is on the record, and the video and transcript will be available on our website, CFR.org/Academic, if you would like to share it with your colleagues. As always, CFR takes no institutional positions on matters of policy. We are delighted to have Pablo Molina with us to discuss implications of artificial intelligence in higher education. Dr. Molina is chief information security officer and associate vice president at Drexel University. He is also an adjunct professor at Georgetown University. Dr. Molina is the founder and executive director of the International Applied Ethics in Technology Association, which aims to raise awareness on ethical issues in technology. He regularly comments on stories about privacy, the ethics of tech companies, and laws related to technology and information management. And he's received numerous awards relating to technology and serves on the board of the Electronic Privacy Information Center and the Center for AI and Digital Policy. So Dr. P, welcome. Thank you very much for being with us today. Obviously, AI is on the top of everyone's mind, with ChatGPT coming out and being in the news, and so many other stories about what AI is going to—how it's going to change the world. So I thought you could focus in specifically on how artificial intelligence will change and is influencing higher education, and what you're seeing, the trends in your community. MOLINA: Irina, thank you very much for the opportunity, to the Council on Foreign Relations, to be here and express my views. Thank you, everybody, for taking time out of your busy schedules to listen to this. And hopefully, I'll have the opportunity to learn much from your questions and answer some of them to the best of my ability. Well, since I'm a professor too, I like to start by giving you homework. And the homework is this: I do not know how much people know about artificial intelligence. In my opinion, anybody who has ever used ChatGPT considers herself or himself an expert. To some extent, you are, because you have used one of the first publicly available artificial intelligence tools out there and you know more than those who haven't. So if you have used ChatGPT, or Google Bard, or other services, you already have a leg up to understand at least one aspect of artificial intelligence, known as generative artificial intelligence. Now, if you want to learn more about this, there's a big textbook, about this big. I'm not endorsing it. All I'm saying, for those people who are very curious, there are two great academics, Russell and Norvig. They're in their fourth edition of a wonderful book that covers every aspect of—technical aspect of artificial intelligence, called Artificial Intelligence: A Modern Approach. And if you're really interested in how artificial intelligence can impact higher education, I recommend a report by the U.S. Department of Education that was released earlier this year in Washington, DC, from the Office of Educational Technology. It's called Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.
So if you do all these things and you read all these things, you will hopefully transition from being whatever expert you were before—to a pandemic and Ukrainian war expert—to an artificial intelligence expert. So how do I think that all these wonderful things are going to affect artificial intelligence? Well, as human beings, we tend to overestimate the impact of technology in the short run and really underestimate the impact of technology in the long run. And I believe this is also the case with artificial intelligence. We're in a moment where there's a lot of hype about artificial intelligence. It will solve every problem under the sky. But it will also create the most catastrophic future and dystopia that we can imagine. And possibly neither one of these two are true, particularly if we regulate and use these technologies and develop them following some standard guidelines that we have followed in the past, for better or worse. So how is artificial intelligence affecting higher education? Well, number one, there is a great lack of regulation and legislation. So if you know, for example around this, OpenAI released ChatGPT. People started trying it. And all of a sudden there were people like here, where I'm speaking to you from, in Italy. I'm in Rome on vacation right now. And Italian data protection agency said: Listen, we're concerned about the privacy of this tool for citizens of Italy. So the company agreed to establish some rules, some guidelines and guardrails on the tool. And then it reopened to the Italian public, after being closed for a while. The same thing happened with the Canadian data protection authorities. In the United States, well, not much has happened, except that one of the organizations on which board I serve, the Center for Artificial Intelligence and Digital Policy, earlier this year in March of 2023 filed a sixty-four-page complaint with the Federal Trade Commission. Which is basically we're asking the Federal Trade Commission: You do have the authority to investigate how these tools can affect the U.S. consumers. Please do so, because this is your purview, and this is your responsibility. And we're still waiting on the agency to declare what the next steps are going to be. If you look at other bodies of legislation or regulation on artificial intelligence that can help us guide artificial intelligence, well, you can certainly pay attention to the U.S. Congress. And what is the U.S. Congress doing? Yeah, pretty much that, not much, to be honest. They listen to Sam Altman, the founder of ChatGPT, who recently testified before Congress, urging Congress to regulate artificial intelligence. Which is quite clever on his part. So it was on May 17 that he testified that we could be facing catastrophic damage ahead if artificial intelligence technology is not regulated in time. He also sounded the alarm about counterfeit humans, meaning that these machines could replace what we think a person is, at least virtually. And also warned about the end of factual evidence, because with artificial intelligence anything can be fabricated. Not only that, but he pointed out that artificial intelligence could start wars and destroy democracy. Certainly very, very grim predictions. And before this, many of the companies were self-regulating for artificial intelligence. If you look at Google, Microsoft, Facebook now Meta. All of them have their own artificial intelligence self-guiding principles. Most of them were very aspirational. 
Those could help us in higher education because, at the very least, they can help us create our own policies and guidelines for our community members—faculty, staff, students, researchers, administrators, partners, vendors, alumni—anybody who happens to interact with our institutions of higher learning. Now, what else is happening out there? Well, we have tons, tons of laws that have to do with the technology and regulations. Things like the Gramm-Leach-Bliley Act, or the Securities and Exchange Commission, the Sarbanes-Oxley. Federal regulations like FISMA, and Cybersecurity Maturity Model Certification, Payment Card Industry, there is the Computer Fraud and Abuse Act, there is the Budapest Convention, where cybersecurity insurance providers will tell us what to do and what not to do about technology. We have state laws and many privacy laws. But, to be honest, very few artificial intelligence laws. And it's groundbreaking in Europe that the European parliamentarians have agreed to discuss the Artificial Intelligence Act, which could be the first one really to be passed at this level in the world, after some efforts by China and other countries. And, if adopted, it could be a landmark change in the adoption of artificial intelligence. In the United States, even though Congress is not doing much, the White House is trying to position itself in the realm of artificial intelligence. So there's an executive order in February of 2023—that many of us in higher education read because, once again, we're trying to find inspiration for our own rules and regulations—that tells federal agencies that they have to root out bias in the design and use of new technologies, including artificial intelligence, because they have to protect the public from algorithmic discrimination. And we all believe this. In higher education, we believe in being fair and transparent and accountable. I would be surprised if any of us is not concerned about making sure that our technology use, our artificial intelligence use, does not follow these particular principles as proposed by the Organization for Economic Cooperation and Development, and many other bodies of ethics and expertise. Now, the White House also announced new centers—research and development centers with some new national artificial intelligence research institutes. Many of us will collaborate with those in our research projects. A call for public assessments of existing generative artificial intelligence systems, like ChatGPT. And it also is trying to enact or is enacting policies to ensure that the U.S. government, the executive branch, is leading by example when mitigating artificial intelligence risks and harnessing artificial intelligence opportunities. Because, in spite of all the concerns about this, it's all about the opportunities that we hope to achieve with artificial intelligence. And when we look at how specifically we can benefit from artificial intelligence in higher education, well, certainly we can start with new and modified academic offerings. I would be surprised if most of us will not have degrees—certainly, we already have degrees—graduate degrees on artificial intelligence, and machine learning, and many others. But I would be surprised if we don't even add some bachelor's degrees in this field, or we don't modify significantly some of our existing academic offerings to incorporate artificial intelligence in various specialties, our courses, or components of the courses that we teach our students.
We're looking at amazing research opportunities, things that we'll be able to do with artificial intelligence that we couldn't even think about before, that are going to expand our ability to generate new knowledge to contribute to society, with federal funding, with private funding. We're looking at improved knowledge management, something that librarians are always very concerned about, the preservation and distribution of knowledge. The idea would be that artificial intelligence will help us better find the things that we're looking for, the things that we need in order to conduct our academic work. We're certainly looking at new and modified pedagogical approaches, new ways of learning and teaching, including the promise of adaptive learning, something that really can tell students: Hey, you're not getting this particular concept. Why don't you go back and study it in a different way with a different virtual avatar, using simulations or virtual assistance? In almost every discipline and academic endeavor. We're also very concerned about offering, you know, a good value for the money when it comes to education. So we're hoping to achieve extreme efficiencies, better ways to run admissions, better ways to guide students through their academic careers, better ways to coach them into professional opportunities. And much of this will be possible thanks to artificial intelligence. And also, let's not forget this, but we still have many underserved students, and they're underserved because they either cannot afford education or maybe they have physical or cognitive disabilities. And artificial intelligence can really help us reach those students and offer them new opportunities to advance their education and fulfill their academic and professional goals. And I think this is a good introduction. And I'd love to talk about all the things that can go wrong. I'd love to talk about all the things that we should be doing so that things don't go as wrong as predicted. But I think this is a good way to set the stage for the discussion. FASKIANOS: Fantastic. Thank you so much. So we're going to go to all of you now for your questions and comments, share best practices. (Gives queuing instructions.) All right. So I'm going first to Gabriel Doncel, who has a written question, adjunct faculty at the University of Delaware: How do we incentivize students to approach generative AI tools like ChatGPT for text in ways that emphasize critical thinking and analysis? MOLINA: I always like to start with a difficult question, so thank you very much, Gabriel Doncel, for that particular question. And, as you know, there are several approaches to students adopting tools like ChatGPT on campus. One of them is to say: No, over my dead body. If you use ChatGPT, you're cheating. Even if you cite ChatGPT, we can consider you to be cheating. And not only that, but some institutions have invested in tools that can detect whether or not something was written with ChatGPT or similar tools. There are other faculty members and other academic institutions that are realizing these tools will be available when these students join the workforce. So our job is to help them do the best that they can by using these particular tools, to make sure they avoid some of the mishaps that have already happened. There are a number of lawyers who have used ChatGPT to file legal briefs.
And when the judges received those briefs, and read through them, and looked at the citations, they realized that some of the citations were completely made up, were not real cases. Hence, the lawyers faced professional disciplinary action because they used the tool without the professional review that is required. So hopefully we're going to educate our students and we're going to set policy and guideline boundaries for them to use these, as well as sometimes the necessary technical controls for those students who may not be that ethically inclined to follow our guidelines and policies. But I think that to hide our heads in the sand and pretend that these tools are not out there for students to use would be a disservice to our institutions, to our students, and the mission that we have of training the next generation of knowledge workers. FASKIANOS: Thank you. I'm going to go next to Meena Bose, who has a raised hand. Meena, if you can unmute yourself and identify yourself. Q: Thank you, Irina. Thank you for this very important talk. And my question is a little—(laughs)—it's formative, but really—I have been thinking about what you were saying about the role of AI in academic life. And I don't—particularly for undergraduates, for admissions, advisement, guidance on curriculum. And I don't want to have my head in the sand about this, as you just said—(laughs)—but it seems to me that any kind of meaningful interaction with students, particularly students who have not had any exposure to college before, depends upon kind of multiple feedback with faculty members, development of mentors, to excel in college and to consider opportunities after. So I'm struggling a little bit to see how AI can be instructive for that part of college life, beyond kind of providing information, I guess. But I guess the web does that already. So welcome your thoughts. Thank you. FASKIANOS: And Meena's at Hofstra University. MOLINA: Thank you. You know, it's a great question. And the idea that everybody is proposing right here is we are not—artificial intelligence companies, at least at first. We'll see in the future because, you know, it depends on how it's regulated. But they're not trying, or so they claim, to replace doctors, or architects, or professors, or mentors, or administrators. They're trying to help those—precisely those people in those professions, and the people they serve gain access to more information. And you're right in a sense that that information is already on the web. But we've always had a problem finding that information reliably on the web. And you may remember that when Google came along, I mean, it swept through every other search engine out there, AltaVista, Yahoo, and many others, because, you know, it had a very good search algorithm. And now we're going to the next level. The next level is where you ask ChatGPT in human-natural language. You're not trying to combine the three words that say, OK, is the economics class required? No, no, you're telling ChatGPT, hey, listen, I'm in the master's in business administration at Drexel University and I'm trying to take more economics classes. What recommendations do you have for me? And this is where you can have a preliminary one, and also a caveat there, as most of these search engines—generative AI engines—already have, that tell you: We're not here to replace the experts. Make sure you discuss your questions with the experts. We will not give you medical advice. We will not give you educational advice.
We're just here, to some extent, for guiding purposes and, even now, for experimental and entertainment purposes. So I think you are absolutely right that we have to be very judicious about how we use these tools to support the students. Now, that said, I had the privilege of working for public universities in the state of Connecticut when I was the CIO. I also had the opportunity early in my career to attend a public university in Europe, in Spain, where there were hundreds of students in a class. We couldn't get any attention from the faculty. There were no mentors, there were no counselors, or anybody else. Is it better to have nobody to help you, or is it better to have at least some technology guidance that can help you find the information that otherwise is spread throughout many different systems that are like ivory towers—admissions on one side, economics on the other, academic advising on another, and everything else? So thank you for a wonderful question and reflection. FASKIANOS: I'm going to take the next question, written from Dr. Russell Thomas, a senior lecturer in the Department of International Relations and Diplomatic Studies at Cavendish University in Uganda: What are the skills and competencies that higher education students and faculty need to develop to thrive in an AI-driven world? MOLINA: So we could argue here that something very similar has happened already with many information technologies and communication technologies. At first, faculty members did not want to use email, or the web, or many other tools because they were too busy with their disciplines. And rightly so. They were brilliant economists, or philosophers, or biologists. They didn't have enough time to learn all these new technologies to interact with the students. But eventually they did learn, because they realized that it was the only way to meet the students where they were and to communicate with them in efficient ways. Now, I have to be honest; when it comes to the use of technology—and we'll unpack the numbers—it was part of my doctoral dissertation, when I expanded on the adoption-of-technology models, which tell you about early adopters, and mainstream adopters, and late adopters, and laggards. But I uncovered a new category for some of the institutions where I worked called the over-my-dead-body adopters. And these were some of the faculty members who say: I will never switch word processors. I will never use this technology. It's only forty years until I retire, probably eighty more until I die. I don't have to do this. And, to be honest, we have a responsibility to understand that those artificial intelligence tools are out there, and to guide the students as to what is the acceptable use of those technologies within the disciplines and the courses that we teach them in. Because they will find those tools available in a very competitive labor market, and they can derive some benefit from them. But also, we don't want to shortchange their educational attainment just because they go behind our backs to copy and paste from ChatGPT, learning nothing. Going back to the question by Gabriel Doncel, not learning to exercise critical thinking, using citations and material that are unverified, borrowed from the internet without any authority, without any attention to the different points of view.
I mean, if you've used ChatGPT for a while—and I have personally, even to prepare some basic thank-you speeches, which are all very formal, even to contest a traffic ticket in Washington, DC, when I was speeding but didn't want to pay the ticket anyway. Even for just research purposes, you could realize that most of the writing from ChatGPT has a very, very common style. Which is, oh, on the one hand people say this, on the other hand people say that. Well, the critical thinking will tell you, sure, there are two different opinions, but this is what I think myself, and this is why I think this. And these are some of the skills, the critical thinking skills, that we must continue to teach the students and not to, you know, put blinders on them to say, oh, continue focusing only on the textbook and the website. No, no. Look at the other tools but use them judiciously. FASKIANOS: Thank you. I'm going to go next to Clemente Abrokwaa. Raised hand, if you can identify yourself, please. Q: Hi. Thanks so much for your talk. It's something that has been—I'm from Penn State University. And this is a very important topic, I think. And some of the earlier speakers have already asked the questions I was going to ask. (Laughs.) But one thing that I would like to say is that, as you said, we cannot bury our heads in the sand. No matter what we think, the technology is already here. So we cannot avoid it. My question, though, is what do you think about artificial intelligence, the use of that in, say, for example, graduate students using it to write dissertations? You did mention the lawyers that used it to write their briefs, and they were caught. But in dissertations and also in class—for example, you have students—you have about forty students. You give a written assignment. You make—when you start grading, you have grading fatigue. And so at some point you lose interest in actually checking. And so I'm kind of concerned about how it will affect the students' desire to actually go and research without resorting to the use of AI. MOLINA: Well, Clemente, fellow colleague from the state of Pennsylvania, thank you for that, once again, both a question and a reflection here. Listen, many of us wrote our doctoral dissertations—mine at Georgetown. At one point in time, I was so tired of writing about the same topics, following the wonderful advice, but also the whims of my dissertation committee, that I was this close to outsourcing my thesis to China. I didn't, but I thought about it. And now graduate students are thinking, OK, why am I going through the difficulties of writing this when ChatGPT can do it for me and the deadline is tomorrow? Well, this is what will distinguish the good students and the good professionals from the other ones. And the interesting part is, as you know, when we teach graduate students we're teaching them critical thinking skills, but also teaching them how to express themselves, you know, either orally or in writing. And writing effectively is fundamental in the professions, but also absolutely critical in academic settings. And anybody who's just copying and pasting from ChatGPT into these documents cannot do that level of writing. But you're absolutely right. Let's say that we have an adjunct faculty member who's teaching a hundred students. Will that person go through every single essay to find out whether students were cheating with ChatGPT? Probably not.
And this is why there are also enterprising people who are using artificial intelligence to find out and tell you whether a paper was written using artificial intelligence. So it's a little bit like this fighting of different sources and business opportunities for all of them. And we've done this. We've used antiplagiarism tools in the past because we knew that students were copying and pasting using Google Scholar and many other sources. And now oftentimes we run the antiplagiarism tools ourselves. Or we tell the students, you run it yourself and you give it to me. And make sure you are not accidentally failing to cite things, which could end up jeopardizing your ability to get a graduate degree because your work was not up to snuff with the requirements of our stringent academic programs. So I would argue that these antiplagiarism tools that we're using will, more often than not and sooner than expected, incorporate the detection of artificial intelligence writeups. And also the interesting part is to tell the students, well, if you do choose to use any of these tools, what are the rules of engagement? Can you ask it to write a paragraph and then you cite it, and you mention that ChatGPT wrote it? Not to mention, in addition to that, all the issues about artificial intelligence, which the courts are deciding now, regarding the intellectual property of those productions. If a song, a poem, a book is written by an artificial intelligence entity, who owns the intellectual property for those works produced by an artificial intelligence machine? FASKIANOS: Good question. We have a lot of written questions. And I'm sure you don't want to just listen to my voice, so please do raise your hands. But we do have a question from one of your colleagues, Pablo, Pepe Barcega, who's the IT director at Drexel: Considering the potential biases and limitations of AI models, like ChatGPT, do you think relying on such technology in the educational domain can perpetuate existing inequalities and reinforce systemic biases, particularly in terms of access, representation, and fair evaluation of students? And Pepe's question got seven upvotes, so we advanced it to the top of the line. MOLINA: All right, well, first I have to wonder whether he used ChatGPT to write the question. But I'm going to leave it at that. Thank you. (Laughter.) It's a wonderful question. One of the greatest concerns we have had, those of us who have been working on artificial intelligence digital policy for years—not this year when ChatGPT was released, but for years we've been thinking about this. And even before artificial intelligence, in general with algorithm transparency. And the idea is the following: That two things are happening here. One is that we're programming the algorithms using instructions, instructions created by programmers, with all their biases, and their misunderstandings, and their shortcomings, and their lack of context, and everything else. But with artificial intelligence we're doing something even more concerning than that, which is we have some basic algorithms but then we're feeding a lot of information, a corpus of information, to those algorithms. And the algorithms are fine-tuning the rules based on that.
So it's very, very difficult for experts to explain how an artificial intelligence system actually makes decisions, because we know the engine and we know the data that we fed to the engine, but we don't really know how those decisions are being made through neural networks, through all of the different systems and methods that we have for artificial intelligence. Very, very few people understand how those work. And those are so busy they don't have time to explain how the algorithm works for others, including the regulators. Let's remember some of the failed cases. Amazon tried this early. And they tried this for selecting employees for Amazon. And they fed all the resumes. And guess what? It turned out that most of the recommendations were to hire young white people who had gone to Ivy League schools. Why? Because they were feeding it the descriptions of their first employees, who had done extremely well at Amazon. Hence, by feeding in that information about past successful employees, only those profiles were recommended. And that does away with the diversity that we need for different academic institutions, large and small, public and private, from different countries, from different genders, from different ages, from different ethnicities. All those things went away because the algorithm was promoting one particular profile. Recently I had the opportunity to moderate a panel in Washington, DC, and we had representatives from the Equal Employment Opportunity Commission. And they told us how they investigated a hiring algorithm from a company that was disproportionately recommending that they hired people whose first name was Brian and had played lacrosse in high school because, once again, a disproportionate number of people in that company had done that. And the algorithm realized, oh, these must be important characteristics for hiring people at this company. Let's not forget, for example, facial recognition and artificial intelligence by Amazon Rekognition, you know, the facial recognition software. The American Civil Liberties Union decided, OK, I'm going to submit the pictures of all the congressmen to this particular facial recognition engine. And it turned out that it misidentified many of them, particularly African Americans, as felons who had been convicted. So all these artificial—all these biases could have really, really bad consequences. Imagine that you're using this to decide who you admit to your universities, and the algorithm is wrong. You know, you are making really biased decisions that will affect the livelihood of many people, but also will transform society, possibly for the worse, if we don't address this. So this is why the OECD, the European Union, even the White House, everybody is saying: We want this technology. We want to derive the benefits of this technology, while curtailing the abuses. And it's fundamental that we achieve transparency and make sure that these algorithms are not biased against the people who use them. FASKIANOS: Thank you. So I'm going to go next to Emily Edmonds-Poli, who is a professor at the University of San Diego: We hear a lot about providing clear guidelines for students, but for those of us who have not had a lot of experience using ChatGPT it is difficult to know what clear guidelines look like. Can you recommend some sources we might consult as a starting point, or where we might find some sample language? MOLINA: Hmm. Well, certainly this is what we do in higher education.
We compete for the best students and the best faculty members. And we sometimes compete a little bit to be first with groundbreaking research. But we tend to collaborate on everything else, particularly when it comes to policy, and guidance, and rules. So there are many institutions, like mine, who have already assembled—I'm sure that yours has done the same—assembled committees, because assembling committees and subcommittees is something we do very well in higher education, with faculty members, with administrators, even with student representation, to figure out, OK, what should we do about the use of artificial intelligence on our campus? I mentioned before that taking a look at the big aspirational declarations by Meta, and Google, and IBM, and Microsoft could be helpful for these committees to look at. But also, I'm a very active member of an organization known as EDUCAUSE. And EDUCAUSE is for educators—predominantly higher education administrators, staff members, and faculty members—to think about the adoption of information technology. And EDUCAUSE has done good work on this front and continues to do good work on this front. So once again, EDUCAUSE and some of the institutions have already published their guidelines on how to use artificial intelligence and incorporate that within their academic lives. And now, that said, we also know that even though all higher education institutions are the same, they're all different. We all have different values. We all believe in different uses of technology. We trust the students more or less. Hence, it's very important that whatever inspiration you would take, you work internally on campus—as you have done with many other issues in the past—to make sure it really reflects the values of your institution. FASKIANOS: So, Pablo, would you point to a specific college or university that has developed a code of ethics that addresses the use of AI for their academic community beyond your own, but that is publicly available? MOLINA: Yeah, I'm going to be honest, I don't want to put anybody on the spot. FASKIANOS: OK. MOLINA: Because, once again, there are many reasons. But, once again, let me repeat a couple of resources. One of them is from the U.S. Department of Education, from the Office of Educational Technology. And the article is Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, published earlier this year. The other source really is educause.edu. And if you look at educause.edu on artificial intelligence, you'll find links to articles, you'll find links to universities. It would be presumptuous of me to evaluate whose policies are better than others, but I would argue that the general principles of nonbias, transparency, accountability, and also integration of these tools within the academic life of the institution in a morally responsible way—with concepts like privacy by design, security by design, and responsible computing—all of those are good words to have in there. Now, the other problem with policies and guidelines is that, let's be honest, many of those have no teeth in our institutions. You know, we promulgate them. They're very nice. They look beautiful. They are beautifully written. But oftentimes when people don't follow them, there's not a big penalty. And this is why, in addition to having the policies, educating the campus community is important. But it's difficult to do because we need to educate them about so many things.
About cybersecurity threats, about sexual harassment, about nondiscriminatory policies, about responsible behavior on campus regarding drugs and alcohol, about crime. So many things that they have to learn about. It's hard to add yet another topic for them to spend their time on, instead of researching the core subject matter that they chose to pursue for their lives. FASKIANOS: Thank you. And we will be sending out a link to this video, the transcript, as well as the resources that you have mentioned. So if you didn't get them, we'll include them in the follow-up email. So I'm going to go to Dorian Brown Crosby, who has a raised hand. Q: Yes. Thank you so much. I put one question in the chat but I have another question that I would like to go ahead and ask now. So thank you so much for this presentation. You mentioned algorithm biases with individuals. And I appreciate you pointing that out, especially when we talk about face recognition, also in terms of forced migration, which is my area of research. But I also wanted you to speak to, or could you talk about, the challenges that some institutions in higher education would have in terms of support for some of the things that you mentioned in terms of potential curricula, or certificates, or other ways that AI would be woven into the new offerings of institutions of higher education. How would that look specifically for institutions that might be challenged to access those resources, such as Historically Black Colleges and Universities? Thank you. MOLINA: Well, very interesting question, and a really fascinating point of view. Because we all tend to look at things from our own perspective and perhaps not consider the perspective of others. Those who have much more money and resources than us, and those who have fewer resources and less funding available. So this is a very interesting line of thought. What is it that we do in higher education when we have these problems? Well, as I mentioned before, we build committees and subcommittees. Usually we also do campus surveys. I don't know why we love doing campus surveys and asking everybody what they think about this. Those are useful tools for discussion. And oftentimes the thing that we do also, that we've done for many other topics, well, we hire people and we create new offices—either academic or administrative offices. With all of those, you know, they have certain limitations to how useful and functional they can be. And they also continue to require resources. Resources that, in the end, are paid for by students with, you know, federal financing. But this is the truth of the matter. So if you start creating offices of artificial intelligence on our campuses, however important their guidance may be and however much extra work can be assigned to them instead of distributed to every faculty and staff member out there, the truth of the matter is that these are not perfect solutions. So what is it that we do? Oftentimes, we work with partners. And our partners love to take—(inaudible)—vendors. But the truth of the matter is that sometimes they have much more—they have much more expertise on some of these topics.
So for example, if you're thinking about incorporating artificial intelligence into some of the academic materials that you use in class, well, I'm going to take a guess that if you already work with McGraw Hill in economics, or accounting, or some of the other books and websites that they put out that you recommend to your students or make mandatory for your students, that you start discussing with them, hey, listen, are you going to use artificial intelligence? How? Are you going to tell me ahead of time? Because, as a faculty member, you may have a choice to decide: I want to work with this publisher and not this particular publisher because of the way they approach this. And let's be honest, we've seen a number of these vendors with major information security problems. McGraw Hill recently left a repository of data misconfigured out there on the internet, and almost anybody could access that. But many others before them, like Chegg and others, were notorious for their information security breaches. Can we imagine that these people are going to adopt artificial intelligence and not do such a good job of securing the information, the privacy, and the nonbiased approaches that we hold dear for students? I think they require a lot of supervision. But in the end, these publishers have the economies of scale for you to recommend those educational materials instead of developing your own for every course, for every class, and for every institution. So perhaps we're going to have to continue to work together, as we've done in higher education, in consortia, which would be local, or regional. It could be based on institutions with the same interests, or on student population, to try to do this. And, you know, hopefully we'll get grants, grants from the federal government, that can be used in order to develop some of the materials and guidelines that are going to help us embrace this, not only to operate better as institutions and fulfill our mission, but also to make sure that our students are better prepared to join society and compete globally, which is what we have to do. FASKIANOS: So I'm going to combine questions. Dr. Lance Hunter, who is an associate professor at Augusta University: There's been a lot of debate regarding whether plagiarism detection software tools like Turnitin can accurately detect AI-generated text. What is your opinion regarding the accuracy of AI text generation detection plagiarism tools? And then Rama Lohani-Chase, at Union County College, wants recommendations on what plagiarism checker tools you would recommend—or, you know, what plagiarism detection for AI you would recommend? MOLINA: Sure. So, number one, I'm not going to endorse any particular company because if I do that I would ask them for money, or the other way around. I'm not sure how it works. I could be seen as biased, particularly here. But there are many out there and your institutions are using them. Sometimes they are integrated with your learning management system. And, as I mentioned, sometimes we ask the students to use them themselves and then either produce the plagiarism report for us or simply see the results themselves. I'm going to be honest; when I teach ethics and technology, I tell the students about the antiplagiarism tools at the universities. But I also tell them, listen, if you're cheating in an ethics and technology class, I failed miserably. So please don't. Take extra time if you have to take it, but—you know, and if you want, use the antiplagiarism tool yourself.
But the question stands and is critical, which is right now those tools are trying to improve the recognition of artificial-intelligence-written text, but they're not as good as they could be. So like every other technology and what I'm going to call antitechnology, used to control the damage of the first technology, it's an escalation where we start trying to identify this. And I think they will continue to do this, and they will be successful in doing this. There are people who have written ad hoc tools using ChatGPT to identify things written by ChatGPT. I tried them. They're remarkably good for the handful of papers that I tried myself, but I haven't conducted enough research myself to tell you if they're really effective tools for this. So I would argue that for the time being you must assume that those tools, as we assume all the time, will not catch all of the cases, only some of the most obvious ones. FASKIANOS: So a question from John Dedie, who is an assistant professor at the Community College of Baltimore County: To combat AI issues, shouldn't we rethink assignments? Instead of papers, have students do PowerPoints, ask students to offer their opinions and defend them? And then there was an interesting comment from Mark Habeeb at Georgetown University School of Foreign Service. Knowledge has been cheap for many years now because it is so readily available. With AI, we have a tool that can aggregate the knowledge and create written products. So, you know, what needs to be the focus now is critical thinking and assessing values. We need to teach our students how to assess and use that knowledge rather than how to find the knowledge and aggregate that knowledge. So maybe you could react to those two—the question and comment. MOLINA: So let me start with the Georgetown one, not only because he's a colleague of mine. I also teach at Georgetown, which is where I obtained my doctoral degree a number of years ago. I completely agree. I completely agree with the issue that we have to teach new skills. And one of the programs in which I teach at Georgetown is our master's in analysis, which is basically for people who want to work in the intelligence community. And these people have to find the information and they have to draw inferences, and try to figure out whether it is a nation-state that is threatening the United States, or another, or a corporation, or something like that. And they use all of that critical thinking, and intuition, and all the tools that we have developed in the intelligence community for many, many years. And artificial intelligence, if they suspend their judgment and they only use artificial intelligence, they will miss very important information that is critical for national security. And the same is true for something like our flagship school, the School of Foreign Service at Georgetown, one of the best in the world in that particular field, where you want to train the diplomats, and the heads of state, and the great strategic thinkers on policy and politics in the international arena to precisely think not in the mechanical way that a machine can think, but also to connect those dots. And, sure, they should be using those tools in order to, you know, get to the most favorable starting position, but they should also use their critical thinking always, and their capabilities of analysis, in order to produce good outcomes and good conclusions. Regarding redoing the assignments, absolutely true. But that is hard. It is a lot of work.
We're very busy faculty members. We have to grade. We have to be on committees. We have to do research. And now they ask us to redo our entire assessment strategy, with new assignments that we need to grade again and account for artificial intelligence. And I don't think that any provost out there is saying, you know what? You can take two semesters off to work on this and retool all your courses. That doesn't happen in the institutions that I know of. If you get time off because you're entitled to it, you want to devote that time to do research, because that is really what you signed up for when you pursued an academic career, in many cases. I can tell you one thing: here in Europe, where oftentimes they look at these problems with fewer resources than we do in the United States, a lot of faculty members at the high school level and at the college level are moving to oral examinations, because it's much harder to cheat with ChatGPT in an oral examination. Because they will ask you interactive, adaptive questions—like the ones we suffered through when we were defending our doctoral dissertations. And they will realize, the faculty members, whether or not you know the material and you understand the material. Now, imagine oral examinations for a class of one hundred, two hundred, four hundred. Do you do one for the entire semester, with one chosen topic, and run them all? Or do you do several throughout the semester? Do you end up using a ChatGPT virtual assistant to conduct your oral examinations? I think these are complex questions. But certainly redoing our assignments and redoing the way we teach and the way we evaluate our students is perhaps a necessary consequence of the advent of artificial intelligence. FASKIANOS: So next question from Damian Odunze, who is an assistant professor at Delta State University in Cleveland, Mississippi: Who should safeguard against ethical concerns and the misuse of AI by criminals? Should the onus fall on the creators and companies like Apple, Google, and Microsoft to ensure security and not pass it on to the end users of the product? And I think you mentioned at the top in your remarks, Pablo, about how the head of OpenAI, the maker of ChatGPT, was urging the Congress to put into place some regulation. What is the onus on ChatGPT to protect against some of this as well? MOLINA: Well, I'm going to recycle more of the material from my doctoral dissertation. In this case it was the Molina cycle of innovation and regulation. It goes like this: basically, there are—you know—engineers and scientists who create new information technologies. And then there are entrepreneurs and businesspeople and executives who figure out, OK, I know how to package this so that people are going to use it, buy it, subscribe to it, or look at it, so that I can sell the advertising to others. And, you know, this begins and very, very soon the abuses start. And the abuses are that criminals are using these platforms for reasons that were not envisioned before. Even the executives, as we've seen with Google, and Facebook, and others, decide to invade the privacy of the people because they only have to pay a big fine, but they make much more money than the fines or they expect not to be caught. And what happens in this cycle is that eventually there is so much noise in the media, congressional hearings, that eventually regulators step in and they try to pass new laws to address this, or the regulatory agencies try to investigate using the powers given to them.
And then all of these new rules have to be tested in courts of law, which could take years, sometimes going all the way to the Supreme Court. Some of them are even knocked down on the way to the Supreme Court when the courts realize this is not constitutional, it's a conflict of laws, and things like that. Now, by the time we regulate these new technologies, not only have many years gone by, but the technologies have changed. The marketing products and services have changed, the abuses have changed, and the criminals have changed. So this is why we're always living in a loosely regulated space when it comes to information technology. And this is an issue of accountability. We're finding this, for example, with information security. If my phone is hacked, or my computer, or my email, is it the fault of Microsoft, and Apple, and Dell, and everybody else? Why am I the one paying the consequences and not any of these companies? Because it's unregulated. So morally speaking, yes. These companies are accountable. Morally speaking, the users are also accountable, because we're using these tools and we're incorporating them professionally. Legally speaking, so far, nobody is accountable except the lawyers who submitted briefs that were not correct in a court of law and were disciplined for that. But other than that, right now, it is a very gray space. So in my mind, it requires everybody. It takes a village to do the morally correct thing. It starts with the companies and the inventors. It involves the regulators, who should do their job and make sure that there's no unnecessary harm created by these tools. But it also involves every company executive, every professional, every student, and every professor who decides to use these tools. FASKIANOS: OK. I'm going to take—combine a couple questions from Dorothy Marinucci and Venky Venkatachalam about the effect of AI on jobs. Dorothy—she's from Fordham University—says she read something about Germany's best-selling newspaper Bild reportedly adopting artificial intelligence to replace certain editorial roles in an effort to cut costs. Does this mean that the field of journalism and communication will change? And Venky's question is: AI—one of the impacts is in the area of automation, leading to the elimination of certain types of jobs. Can you talk about both the elimination of jobs and what new types of jobs you think will be created as AI matures into the business world with more value-added applications? MOLINA: Well, what I like about predicting the future, and I've done this before in conferences and papers, is that, you know, when the future comes ten years from now people will either not remember what I said, or, you know, maybe I was lucky and my prediction was correct. In the specific field of journalism, and we've seen it, the journalism and communications field has been decimated because the money that they used to make with advertising—and, you know, certainly a big part of that was in the form of corporate profits, but much of it also went into hiring good journalists, and investigative journalism, and these people could spend six months writing a story, when right now they have six hours to write a story, because there are no resources. And all the advertising money went instead to Facebook, and Google, and many others because they work very well for advertisements. But now the lifeblood of journalism organizations has been really, you know, undermined.
And there's good journalism in other places, in newspapers, but sadly there is a great temptation to replace some of the journalists with more artificial intelligence, particularly on the least important pieces. I would argue that editorial pieces are the most important in newspapers, the ones requiring ideology, and critical thinking, and many other things. Whereas there are other pieces, the ones that tell you about traffic changes or weather patterns—without offending any meteorologists—that maybe require a more mechanical approach. I would argue that a lot of professions are going to be transformed because, well, if ChatGPT can write real estate announcements that work very well, well, you may need fewer people doing this. And yet, I think that what we're going to find is the same thing we found when technology arrived. We all thought that the arrival of computers would mean that everybody would be without a job. Guess what? It meant something different. It meant that in order to do our jobs, we had to learn how to use computers. So I would argue that this is going to be the same case. To be a good doctor, to be a good lawyer, to be a good economist, to be a good knowledge worker, you're going to have to learn also how to use whatever artificial intelligence tools are available out there, and use them professionally within the moral and the deontological concerns that apply to your particular profession. Those are the kinds of jobs that I think are going to be very important. And, of course, all the technical jobs, as I mentioned. There are tons of people who consider themselves artificial intelligence experts. Only a few at the very top understand these systems. But there are many others in the pyramid that help with preparing these systems, with the support, the maintenance, the marketing, preparing the datasets to go into these particular models, working with regulators and legislators and compliance organizations to make sure that the algorithms and the tools are not running afoul of existing regulations. All of those, I think, are going to be interesting jobs that will be part of the arrival of artificial intelligence. FASKIANOS: Great. We have so many questions left and we just couldn't get to them all. I'm just going to ask you just to maybe reflect on how the use of artificial intelligence in higher education will affect U.S. foreign policy and international relations. I know you touched upon it a little bit in reacting to the comment from our Georgetown University colleague, but any additional thoughts you might want to add before we close? MOLINA: Well, let's be honest, one particular thing that applies to education and to everything else: there is a race—a worldwide race—for artificial intelligence progress. The big companies are fighting—you know, Google, and Meta, and many others—Amazon—are really putting resources into that, trying to be first in this particular race. But it's also a national race. For example, it's very clear that there are executive orders from the United States as well as regulations and declarations from China that basically are indicating these two big nations are trying to be first in dominating the use of artificial intelligence. And let's be honest, in order to do well in artificial intelligence you need not only the scientists who are going to create those models and refine them, but you also need the bodies of data that you need to feed these algorithms in order to have good algorithms.
So the barriers to entry for other nations and the barriers to entry by all the technology companies are going to be very, very high. It's not going to be easy for any small company to say: Oh, now I'm a huge player in artificial intelligence. Because even if you may have created an interesting new algorithmic procedure, you don't have the datasets that the huge companies have been able to amass and work on for the longest time. Every time you submit a question to ChatGPT, the ChatGPT experts are using those questions to refine the tool. The same way that, when we were using voice recognition with Apple or Android or other companies, they were using our voices and our accents and our mistakes in order to refine their voice recognition technologies. So this is the power. The early bird gets the worm: those who are investing, those who are aggressively going for it, and those who are also judiciously regulating this can really do very well in the international arena when it comes to artificial intelligence. And so will their universities, because they will be able to really train those knowledge workers, they'll be able to get the money generated from artificial intelligence, and they will be able to, you know, feed back one into the other. The advances in the technology will result in more need for students, and more students graduating will propel the industry. And there will also be—we'll always have a fight for talent where companies and countries will attract those people who really know about these wonderful things. Now, keep in mind that artificial intelligence was the core of this, but there are so many other emerging issues in information technology. And some of them are critical to higher education. So there's still, you know, lots of hype, but we think that virtual reality will have an amazing impact on the way we teach and conduct research and train for certain skills. We think that quantum computing has the ability to revolutionize the way we conduct research, allowing us to do computations that are not even thinkable today. We'll look at things like robotics. And if you ask me about what is going to take many jobs away, I would say that robotics can take a lot of jobs away. Now, we thought that there would be no factory workers left because of robots, but that hasn't happened. But keep adding robots with artificial intelligence to serve you a cappuccino, or your meal, or take care of your laundry, or many other things, or maybe clean your hotel room, and you realize, oh, there are lots of jobs out there that will no longer be there. Think about artificial intelligence for self-driving vehicles, boats, planes, cargo ships, commercial airplanes. Think about the thousands of taxi drivers and truck drivers who may end up being out of jobs because, listen, the machines drive more safely, and they don't get tired, and they can be driving twenty-four by seven, and they don't require health benefits, or retirement. They don't get depressed. They never miss. Think about many of the technologies out there that have an impact on what we do. But artificial intelligence is a multiplier of technologies, a contributor to many other fields and many other technologies. And this is why we're so—spending so much time and so much energy thinking about these particular issues. FASKIANOS: Well, thank you, Pablo Molina. We really appreciate it.
Again, my apologies that we couldn't get to all of the questions and comments in the chat, but we appreciate all of you for your questions and, of course, your insights were really terrific, Dr. P. So we will, again, be sending out the link to this video and transcript, as well as the resources that you mentioned during this discussion. I hope you all enjoy the Fourth of July. And I encourage you to follow @CFR_Academic on Twitter and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. Again, you can send us comments, feedback, and suggestions to CFRacademic@CFR.org. And, again, thank you all for joining us. We look forward to your continued participation in CFR Academic programming. Have a great day. MOLINA: Adios. (END)
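Molina's Amazon and EEOC examples above come down to one mechanism: a model fit to a skewed history of "successful" outcomes learns the skew as if it were signal. A minimal, purely synthetic Python sketch of that mechanism follows; the proxy attribute, the numbers, and the data are illustrative assumptions, not any real hiring system or dataset.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic history: "skill" is what should matter, but past hiring also favored
# candidates from one school, so the hired label is correlated with that flag.
skill = rng.normal(size=n)
same_school = rng.integers(0, 2, size=n)
hired = ((skill + 1.5 * same_school + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, same_school])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in the proxy attribute.
candidates = np.array([[0.5, 1], [0.5, 0]])
scores = model.predict_proba(candidates)[:, 1]
print(f"same-school candidate:  {scores[0]:.2f}")
print(f"other-school candidate: {scores[1]:.2f}")
# The score gap is inherited entirely from the biased historical labels,
# which is the pattern regulators look for when they audit these systems.

Run on this synthetic data, two otherwise identical candidates get visibly different scores. The model has no notion of fairness, only of what correlated with past hires, which is exactly the failure mode described in the transcript.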
Philippine institutions have expressed concerns and optimism regarding the use of artificial intelligence (AI), seeing it as both a potential challenge and an opportunity, according to plagiarism checker Turnitin. In this B-Side episode, Jack Brazel, regional director and spokesperson at Turnitin Southeast Asia, speaks with reporter Miguel Hanz L. Antivola about the current state of AI in Philippine academia. Recorded on May 11, 2023.
AI and K12: Turnitin experts speak on the subject. For more information: AI writing and Turnitin, Academic integrity in the age of AI, The launch of Turnitin's AI writing detector and the road ahead, To ban or not to ban AI writing in schools?, How failing safely can uphold academic integrity (and mitigate AI writing), and AI-generated text: The threat, the responsibility, and the promise. Plus two YouTube videos from David Adamson: https://youtu.be/g85aB8qaSGc and, on dealing with potential false positives, https://youtu.be/ogL4wKect6w
How to copy appropriately without being caught
Annie Chechitelli from TurnItIn on A.I. anti-cheating software // Chris Sullivan's Chokepoint -- impending speed reductions on I-5 southbound // Matt Markovich on police pursuit and reproductive healthcare legislation // Dose of Kindness -- hardware store worker goes above and beyond for child with cerebral palsy // Gee Scott on celebrating birthdays // House Minority Leader JT Wilcox live on police pursuit legislation // Micki Gamez on the state of the truck in the U.S.See omnystudio.com/listener for privacy information.
Universities are continuing their arms race against ChatGPT and AI. Their plagiarism software, Turnitin, now has the ability to spot artificially formulated material with 98 percent accuracy. Some New Zealand universities say they won't use the new tool, preferring to wait until the technology has been tested before deciding. Simon Mccullum, Victoria University senior lecturer in software engineering, says universities are fighting a battle against AI that they won't win. "It's an arms race, which the generative AI will win. Because as soon as it got someone to test against, it will just change and get better." LISTEN ABOVE. See omnystudio.com/listener for privacy information.
From Wednesday school and university students will find it harder to get away with using Artificial Intelligence systems such as ChatGPT to write their essays. The developers of one of the programs used to detect plagiarism, Turnitin, says the software can now spot AI-generated material with 98 percent accuracy and it has switched on that ability for its New Zealand customers. Academics say it will help, but not for long and not for students who know what they're doing. Here's our education correspondent, John Gerritsen.
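A recurring figure in these items is the claimed 98 percent detection accuracy. What matters to an individual student, though, is the base rate: how many flagged essays turn out to be false alarms. The short Python sketch below works through that arithmetic with assumed numbers; the false-positive rate and the share of AI-written submissions are illustrative assumptions, not published figures for any product.

def flagged_breakdown(base_rate, sensitivity, false_positive_rate, n_papers=1000):
    """Split flagged papers into true detections and false alarms."""
    ai_written = base_rate * n_papers
    human_written = n_papers - ai_written
    true_flags = sensitivity * ai_written               # AI-written papers correctly flagged
    false_flags = false_positive_rate * human_written   # honest papers wrongly flagged
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

# Assumptions: 10% of papers are AI-written, the detector catches 98% of those,
# and it wrongly flags 1% of human-written papers.
true_flags, false_flags, precision = flagged_breakdown(0.10, 0.98, 0.01)
print(f"true detections: {true_flags:.0f}, false alarms: {false_flags:.0f}, "
      f"precision: {precision:.1%}")

With those assumptions, roughly one flagged paper in twelve belongs to an honest student, which is why several of the experts quoted here treat a flag as the start of a conversation rather than a verdict.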
In our final show before the Easter break, OfS has published the first Equality of Opportunity Risk Register for English HE. But are there some big risks missing? Plus UCAS says we'll soon have a million applicants, Wales is working towards better mental health and in response to news from Turnitin, plenty of people seem to want to turn it off.With James Purnell, Vice Chancellor at University of the Arts London, Anne-Marie Canning, CEO at The Brilliant Club, David Kernohan, Deputy Editor at Wonkhe and presented by Jim Dickinson, Associate Editor at Wonkhe. Hosted on Acast. See acast.com/privacy for more information.
In this episode of THE Journal Insider podcast, host and THEJournal.com editor Kristal Kuykendall welcomes two former teachers who have been working on AI writing tools at Turnitin, a plagiarism-detection software used by thousands of K–12 schools and institutions of higher education. Turnitin is expected to launch a new AI writing detector and additional related features for educators in the next few weeks. David Adamson, principal machine learning scientist at Turnitin, and Patti West-Smith, senior director of customer engagement, have been working on Turnitin's AI writing detection feature and related new tools to help educators better understand ChatGPT — and to show teachers how to use AI to save themselves time and how to tweak assignments so that ChatGPT cannot earn a good grade on writing homework. Adamson, who taught computer science and math at Digital Harbor High in Baltimore, and West-Smith, who worked in public schools for 19 years as a teacher, curriculum supervisor, and principal, both believe that ChatGPT has presented a growth opportunity — or perhaps more like a growth demand — for writing instruction, which they explained at length in the newest episode of THE Journal Insider podcast. THE Journal Insider podcast explores current ed tech trends and issues impacting K–12 educators, IT professionals, instructional technologists, education leaders, and ed tech providers. Listen in as THE Journal Editor Kristal Kuykendall chats with ed tech experts, educators, and industry leaders about how they are 'meeting the moment' in the U.S. public education system. Find all podcast episodes as well as K–12 ed tech news updated daily at THEJournal.com. Resource links: Academic Integrity in the Age of AI Writing — Turnitin's guide for educators Texas Tech List of AI Writing and Research Tools Turnitin.com Video demo of Turnitin's AI writing detector at work Music by LemonMusicStudio from Pixabay Duration: 29 minutes
Today on RCN Digital: Is ChatGPT an opportunity or a threat for the education community? We talk with Catalina Londoño, Professional Services Manager at Turnitin. We also discuss the robot that assisted a medical team in an operation on a patient who needed an urgent knee replacement, and the U.S. members of Congress who are sounding the alarm over the threat posed by artificial intelligence.
This edWeb podcast is sponsored by Schoolytics. The webinar recording can be accessed here. Teachers' responsibilities will change as technology progresses at a faster rate. The question is: How can automation help teachers be more efficient and effective and prevent the accumulation of even more daily tasks for an already-overworked profession? In this edWeb podcast, ways in which technology-enabled automation can help teachers maximize instructional time and reduce the burden of administrative, “non-teaching” tasks are covered. Also discussed are the pros and cons of automation in the classroom, pitfalls to avoid, and practical tips for implementation. Finally, the panelists share their thoughts on what the future holds for automation in the education space. Panelists include edtech leaders Dr. Eric Wang, Vice President of Artificial Intelligence at Turnitin, and Dr. Courtney Monk, Co-founder and COO at Schoolytics, alongside district leaders Randy Kolset, Coordinator of Educational Technology at Orange Unified School District, and Sam Brooks, Personal Learning Supervisor at Putnam County School System. Together, they provide a comprehensive perspective on technology and automation's impact on teachers and their students. This recorded edWeb podcast is of interest to K-12 school leaders, district leaders, and education technology leaders. Learn more about viewing live edWeb presentations and on-demand recordings, earning CE certificates, and using accessibility features.
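As a concrete illustration of the kind of "non-teaching" task automation the panel discusses, here is a minimal Python sketch that summarizes a gradebook export and flags missing work. The CSV layout and column names are assumptions for the example, not the format of any particular product.

import csv
from collections import defaultdict

def summarize_gradebook(path: str) -> None:
    """Print each student's average score and any assignments left blank."""
    scores = defaultdict(list)    # student -> numeric scores
    missing = defaultdict(list)   # student -> assignments with no score recorded
    with open(path, newline="") as f:
        # Assumed columns: student, assignment, score (blank score = not turned in).
        for row in csv.DictReader(f):
            if row["score"].strip():
                scores[row["student"]].append(float(row["score"]))
            else:
                missing[row["student"]].append(row["assignment"])
    for student in sorted(set(scores) | set(missing)):
        avg = sum(scores[student]) / len(scores[student]) if scores[student] else 0.0
        gaps = ", ".join(missing[student]) or "none"
        print(f"{student}: average {avg:.1f}, missing: {gaps}")

if __name__ == "__main__":
    summarize_gradebook("gradebook_export.csv")  # hypothetical export file name

A script like this does nothing a spreadsheet cannot do; the point is simply that routine summarizing and flagging is the kind of work that can be handed off so instructional time is not.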
THE IMPORTANCE OF STRONG CTE IN ENSURING EQUITY AND WELL-ROUNDED ACADEMICS: From Staten Island Technical High School, one of the top 100 schools in the country, Physics teacher and CTE Coordinator Dr. Jared Jax. A huge thanks to Turnitin for arranging the return visit of this excellent school. SEE ALL WE DO AT ACE-ED.ORG
About JackJack is Uptycs' outspoken technology evangelist. Jack is a lifelong information security executive with over 25 years of professional experience. He started his career managing security and operations at the world's first Internet data privacy company. He has since led unified Security and DevOps organizations as Global CSO for large conglomerates. This role involved individually servicing dozens of industry-diverse, mid-market portfolio companies.Jack's breadth of experience has given him a unique insight into leadership and mentorship. Most importantly, it fostered professional creativity, which he believes is direly needed in the security industry. Jack focuses his extra time mentoring, advising, and investing. He is an active leader in the ISLF, a partner in the SVCI, and an outspoken privacy activist. Links Referenced: UptycsSecretMenu.com: https://www.uptycssecretmenu.com Jack's email: jroehrig@uptycs.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: If you asked me to rank which cloud provider has the best developer experience, I'd be hard-pressed to choose a platform that isn't Google Cloud. Their developer experience is unparalleled and, in the early stages of building something great, that translates directly into velocity. Try it yourself with the Google for Startups Cloud Program over at cloud.google.com/startup. It'll give you up to $100k a year for each of the first two years in Google Cloud credits for companies that range from bootstrapped all the way on up to Series A. Go build something, and then tell me about it. My thanks to Google Cloud for sponsoring this ridiculous podcast.Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted guest episode is brought to us by our friends at Uptycs. And they have sent me their Technology Evangelist, Jack Charles Roehrig. Jack, thanks for joining me.Jack: Absolutely. Happy to spread the good news.Corey: So, I have to start. When you call yourself a technology evangelist, I feel—just based upon my own position in this ecosystem—the need to ask, I guess, the obvious question of, do you actually work there, or have you done what I do with AWS and basically inflicted yourself upon a company. Like, well, “I speak for you now.” The running gag that becomes more true every year is that I'm AWS's chief marketing officer.Jack: So, that is a great question. I take it seriously. 
When I say technology evangelist, you're speaking to Jack Roehrig. I'm a weird guy. So, I quit my job as CISO. I left a CISO career. For, like, ten years, I was a CISO. Before that, 17 years doing stuff. Started my own thing, secondaries, investments, whatever.Elias Terman, he hits me up and he says, “Hey, do you want this job?” It was an executive job, and I said, “I'm not working for anybody.” And he says, “What about a technology evangelist?” And I was like, “That's weird.” “Check out the software.”So, I'm going to check out the software. I went online, I looked at it. I had been very passionate about the space, and I was like, “How does this company exist in doing this?” So, I called him right back up, and I said, “I think I am.” He said, “You think you are?” I said, “Yeah, I think I'm your evangelist. Like, I think I have to do this.” I mean, it really was like that.Corey: Yeah. It's like, “Well, we have an interview process and the rest.” You're like, “Yeah, I have a goldfish. Now that we're done talking about stuff that doesn't matter, I'll start Monday.” Yeah, I like the approach.Jack: Yeah. It was more like I had found my calling. It was bizarre. I negotiated a contract with him that said, “Look, I can't just work for Uptycs and be your evangelist. That doesn't make any sense.” So, I advise companies, I'm part of the SVCI, I do secondaries, investment, I mentor, I'm a steering committee member of the ISLF. We mentor security leaders.And I said, “I'm going to continue doing all of these things because you don't want an evangelist who's just an Uptycs evangelist.” I have to know the space. I have to have my ear to the ground. And I said, “And here's the other thing, Elias. I will only be your evangelist while I'm your evangelist. I can't be your evangelist when I lose passion. I don't think I'm going to.”Corey: The way I see it, authenticity matters in this space. You can sell out exactly once, so make it count because you're never going to be trusted again to do it a second time. It keeps people honest, at least the ones you actually want to be doing work with. So, you've been in the space a long time, 20 years give or take, and you've seen an awful lot. So, I'm curious, given that I tend to see about, you know, six or seven different companies in the RSA Sponsor Hall every year selling things because you know, sure hundreds of booths, bunch of different marketing logos and products, but it all distills down to the same five or six things.What did you see about Uptycs that made you say, “This is different?” Because to be very direct, looking at the website, it's, “Oh, what do you sell?” “Acronyms. A whole bunch of acronyms that, because I don't eat, sleep, and breathe security for a living, I don't know what most of them mean, but I'm sure they're very impressive and important.” What does it actually do, for those of us who are practitioners, but not swimming in the security vendor stream?Jack: So, I've been obsessed with this space and I've seen the acronyms change over and over and over again. I'm always the first one to say, “What does that mean?” As the senior guy in the room a lot of time. So, acronyms. What does Uptycs do? What drew me into them? They did HIDS, Host Intrusion Detection System. I don't know if you remember that. Turned into—Corey: Oh, yeah. OSSEC was the one I always wound up using, the open-source version. OSSEC [kids 00:04:10]. It's like, oh, instead of paying a vendor, you can contribute it yourself because your time is free, right? 
Free as in puppy, or these days free as in tier when it comes to cloud.

Jack: Oh, I like that. So, yeah, I became obsessed with this HIDS stuff. I think it was evident I was doing it, that it was threat [unintelligible 00:04:27]. And these companies, great companies. I started this new job in an education technology company and it needed a lot of work, so I started to play around with more sophisticated HIDS systems, and I fell in love with it. I absolutely fell in love with it.

But there were all these limitations. I couldn't find this company that would build it right. And Uptycs has this reputation as being not very sexy, you know? People telling me, "Uptycs? You're going to Uptycs?" Yeah—I'm like, "Yeah. They're doing really cool stuff."

So, Uptycs has, like, this brand name and I had referred Uptycs before without even knowing what it was. So, here I am, like, one of the biggest XDR, I hope to say, activists in the industry, and I didn't know about Uptycs. I felt humiliated. When I heard about what they were doing, I felt like I had wasted my career.

Corey: Well, that's a strong statement. Let's begin with XDR. To my understanding, that's some form of audio cable standard that I use to plug into my microphone. Some would say it, "X-L-R." I would say sounds like the same thing. What is XDR?

Jack: What is it, right? So, [audio break 00:05:27] implement it, but you install an agent, typically on a system, and that agent collects data on the system: what processes are running, right? Well, maybe it's system calls, maybe it's [unintelligible 00:05:37] as regular system calls. Some of them use the extended Berkeley Packet Filter daemon to get stuff, but one of the problems is that when we are obtaining low-level data on an operating system, it's got to be highly specific. So, you collect all this data: who's logging in, which passwords are changing, all the stuff that a hacker would do as you're typing on the computer. You're maybe monitoring vulnerabilities; it's a ton of data that you're monitoring.

Well, one of the problems that these companies face is they try to monitor too much. Then some came around and they tried to monitor too little, so they weren't as real-time.

Corey: Sounds like a little pig story here.

Jack: Yeah [laugh], exactly. Another company came along with a fantastic team, but you know, I think they came in a little late in the game, and it looks like they're folding now. They were a wonderful company, but one of the biggest problems I saw was the agent, the compatibility. You know, it was difficult to deploy. I ran DevOps and security, and my DevOps team uninstalled the agent because they thought there was a problem with it. We proved there wasn't, and four months later, they hadn't completely reinstalled it.

So, a CISO who manages the DevOps org couldn't get his own DevOps guy to install this agent. For good reason, right? So, this is kind of where I'm going with all of this XDR stuff. What is XDR? It's an agent on a machine that produces a ton of data.

It's like omniscience. Once I started to tune it in, I would ping developers, like, "Why did you just run sudo on that machine?" Right. I mean, I knew everything that was going on in the space, I had a good inventory of all the assets, the ones that technically run in the on-premise data center and the quote-unquote, "cloud." I like to just say the production estate. But it's omniscience. It's insights, you can create rules, it's one of the most powerful security tools that exists.
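(A minimal sketch of the kind of host telemetry Jack is describing, assuming the open-source osquery shell, osqueryi, is installed on the machine. The logged_in_users, listening_ports, and processes tables are standard osquery tables; the small Python wrapper and its function name are our own illustration, not anything Uptycs ships.)

```python
# Sketch: query a host's run state through osquery, the open-source engine
# Jack mentions later. Assumes the `osqueryi` shell is on PATH.
import json
import subprocess

def osquery(sql: str) -> list[dict]:
    """Run one osquery statement via osqueryi and return rows as dicts."""
    out = subprocess.run(
        ["osqueryi", "--json", sql],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

if __name__ == "__main__":
    # Who is logged in right now?
    for row in osquery("SELECT user, host, time FROM logged_in_users;"):
        print("session:", row)

    # Which processes are listening on the network? (joins two tables)
    listeners = osquery(
        "SELECT p.name, p.pid, lp.port "
        "FROM listening_ports lp JOIN processes p USING (pid) "
        "WHERE lp.port != 0;"
    )
    for row in listeners:
        print("listening:", row)
```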
Corey: I think there's a definite gap as far as—let's narrow this down to cloud for just a second before we expand this into the joy that is data centers—where you can instrument a whole bunch of different security services in any cloud provider—I'm going to pick on AWS because they're the 800-pound gorilla in the room, and frankly, they could use taking down a peg or two by and large—and you wind up configuring all the different security services that in some cases seem totally unaware of each other, but that's the AWS product portfolio for you. And you do the math out and realize that it theoretically would cost you—to enable all these things—about three times as much as the actual data breach you're ideally trying to prevent against. So, on some level, it feels like a "heads, I win; tails, you lose" style scenario.

And the answer is that people have started reaching out to third-party vendors to wind up tying all of this together into some form of cohesive narrative that a human being has a hope in hell of understanding. But everything I've tried to this point still feels like it is relatively siloed, focused on the whole fear, uncertainty, and doubt that is so inherent to so much of the security world's marketing. And it's almost like cost control, where you can spend an almost limitless amount of time, energy, money, et cetera, trying to fix these things, but it doesn't advance your company to the next milestone. It's like buying fire insurance on your building. You can spend all the money on fire insurance. Great, it doesn't get you to the next milestone that propels your company forward. It's all reactive instead of proactive. So, it feels like it is never the exciting, number-one priority for companies until right after it should have been higher on the list than it was.

Jack: So, when I worked at Turnitin, we had saturated the market. And we worked in the education technology space globally. Compliance everywhere. So, I just worked on the Australian Data Infrastructure Act of 2020. I'm very familiar with the 27 data privacy regulations that are [laugh] in scope for schools. I'm a FERPA expert, right? I know that there's only one P in HIPAA [laugh].

So, all of these compliance regulations drove schools and universities, consortiums, government agencies to say, "You need to be secure." So, security at Turnitin was the number one—number one—key performance indicator of the company for one-and-a-half years. And these cloud security initiatives didn't just make things more secure. They also allowed me to implement a reasonable control framework to get various compliance certifications. So, I'm directly driving sales by deploying these security tools.

And the reason why that worked out so great is, by getting the certifications and by building a sensible control framework layer, I was taking these compliance requirements and translating them into real mitigations of business risk. So, the customers are driving security as they should. I'm implementing sane security controls by acting as the chief security officer, the company becomes more secure, I save money by using the correct toolset, and we increased our business by, like, 40% in a year. This is a multibillion-dollar company.

Corey: That is definitely a story that resonates, especially with organizations that are—or they should be—compliance-forward and having to care about the nature of what it is that they're doing.
But I have a somewhat storied history in working in FinTech and large-scale financial services. One of the nice things about that job—which is sort of a weird thing to say if you don't want to get ejected from the room—has been, "Yeah well, it's only money," in the final analysis. Because yeah, no one dies if you wind up screwing that up. People's kids don't get exposed.

It's just, okay, people have to fill out a bunch of forms and you get sued into oblivion and you're not there anymore, because the first role of a CISO is to be ablative and get burned away whenever there's a problem. But it still doesn't feel like it does more for a number of clients than, on some level, checking a box that they feel needs to be checked. Not that it shouldn't be, necessarily, but I have a hard time finding people that get passionately excited about security capabilities. Where are they hiding?

Jack: So, one of the biggest problems that you're going to face is there are a lot of security people that have moved up in the ranks through technology and not through compliance and technology. These people will implement control frameworks based on audit requirements that are not bespoke to their company. They're doing it wrong. So, we're not ticking boxes; I'm creating boxes that need to be ticked to secure the infrastructure. And at Turnitin—Turnitin was a company that people were forced to use to submit their work in school.

So, imagine that you have to submit a sensitive essay, right? And that sensitive essay goes to this large database. We have the Taiwanese government submitting confidential data there. I had the chief scientist at NASA submitting pre-publication data there. We've got corporate trade secrets that are popped in there. We have all kinds of FDA pre-approval stuff. This is plagiarism detection software being used by large companies, governments, and 12-year-old girls, right, who don't want their data leaked.

So, if you look at it, like, this is an ethical thing that is required for us to do. Our customers drive that, but truly, I think it's ethics that drive it. So, when we implemented a control framework, I didn't do the minimum, I didn't run an [unintelligible 00:12:15] scan that nobody ran. I looked for tools that satisfied many boxes. And one of the things about the telemetry at scale, [unintelligible 00:12:22], XDR, whatever you want to call it, right—the agent-based systems that monitor for all of this run-state data—is they can cover a lot of your technical SOC controls.

Furthermore, you can use these tools to improve your processes, like incident response, right? You can use them to log things. You can eliminate your SIEM by using this for your DLP. The problem with companies in the past is they wouldn't deploy on the entire infrastructure. So, you'd get one company where it would just be on-prem, or one company that would just run on CentOS.

One of the reasons why I really liked this Uptycs company is because they built it on osquery. Now, if you mention osquery, a lot of people glaze over, myself included before I worked at Uptycs. But what it is, is this platform to collect a ton of data on the run state of a machine in real-time, pop it into a normalized SQL database, and it runs on a ton of stuff: macOS, Windows, like, tons of versions of Linux—because it's open-source, people are porting it to their infrastructure. And that was one of the unique differentiators. So, what is the cloud?
I mean, AWS is a place where you can rapidly prototype, there's tons of automation, you can go in and build something quickly and then it scales. But I view the cloud as just a simple abstraction to refer to all of my assets, be they POPs, on-premise data center machines, you know, the corporate environment, laptops, desktops, the stuff that we buy in the public clouds, right? These things are all part of the greater cloud. So, when I think cloud security, I want something that does it all. That's very difficult, because if you had one tool that runs on your cloud, one tool to run on your corporate environment, and one tool to run for your production environment, those tools are difficult to manage. And the data needs to be ETL'd, you know? It needs to be normalized. And that's very difficult to do.

Our company is doing [unintelligible 00:14:07] security right now as a company that's taking all these data signals and normalizing them, right, so that you can have one dashboard. That's a big trend in security right now, because we're buying too many tools. So, I guess the answer really is, I don't see the cloud as just AWS. I think AWS is not just data—they shouldn't call themselves the cloud; they're the cloud with everything. You can come in, you can rapidly prototype your software, and you know what? You want to run at the largest scale possible? You can do that too. It's just the governance problem that we run into.

Corey: Oh, yes. The AWS product strategy is pretty clearly, in a word, "Yes," written on a Post-it note somewhere. That's the easiest job in the world, running their strategy. The challenge, too, is that we don't live in a world where monocultures are a thing anymore because regardless—if you use AWS for the underlying infrastructure, great, that makes a lot of sense. Use it for a lot of the higher-up-the-stack, SaaS-y type things that you don't want to have to build yourself—if you're going to Home Depot and picking up components, you're doing something relatively foolish in most cases.

They're a plumbing company, not a porcelain company, in many respects. And regardless of what your intention is around multiple clouds, people wind up using different things. In most cases, you're going to be storing your source code in GitHub, not in AWS CodeCommit, because CodeCommit doesn't really have any customers, for reasons that become blindingly apparent the first time you try to use it for something. So, you always wind up with these cross-cloud, cross-infrastructure stories. For any company that had the temerity to be founded before 2010, they probably have an on-premises data center as well—or six or more—and you're starting to try to wind up having a whole bunch of different abstractions viewed through the same lenses in terms of either observability or control plane or governance, or—dare I say it—security. And it feels like there are multiple approaches, all of which have their drawbacks, which of course means, it's complicated. What's your take on it?

Jack: So, I think it was two years ago we started to see tools to do signal consumption. They would aggregate those signals and they would try and produce meaningful results that were actionable, rather than you having to go and look at all this granular data. And I think that's phenomenal. I think a lot of companies are going to start to do that more and more. One of the other trends is that people eliminated data and went to machine-learning and anomaly detection.
And that didn't work. It missed a lot of things, right, or generated a lot of false positives. I think one of the next big technologies—and I know it's been done for two years—but I think the next thing we're going to see is the consumption of events, their categorization into alerts based on synthetic data classification policies, and we're going to look at the severity classifications of those, they're going to be actionable in a priority queue, and we're going to eliminate the need for people that don't like their jobs to sit at a SOC all day and analyze a SIEM. I don't ever run a SIEM, but I think that this diversity can be a good thing. Sometimes it's turned out to be a bad thing, right? We want diversity; we don't want all the data to be homogeneous. We don't need data standards because that limits things. But we do want competition. But I would ask you this, Corey—why do you think AWS? We remember 2007, right?

Corey: I do. Oh, I've been around at least that long.

Jack: Yeah, you remember when S3 came out. Was that 2007?

Corey: I want to say 2004, 2005 in beta, and then relaunched as the first generally available service. The first beta service was SQS, so there's always some question about which one was first. I don't get in the middle of those fights because all I'm going to do is upset people.

Jack: But S3 was awesome. It still is awesome, right?

Corey: Oh yes.

Jack: And you know what I saw? I worked for a very old company with very strict governance. You know, with SOX compliance, which is a joke, but we also had SOC compliance. I did HIPAA compliance for them. Tons of compliance.

I'm not a compliance officer by trade. So, I started seeing [x cards 00:17:54], you know, these company personal cards, and people would go out and [unintelligible 00:17:57] platform, because if they worked with my teams internally, if they wanted to get a small app deployed, it was like a two, three-month process. That process was long because of CFO overhead, approvals, vendor data security vetting, racking machines. It wasn't a problem that was inherent to the technology. I actually built a self-service cloud in that company. The problem was governance. It was financial approvals, it was product justification.

So, I think AWS is really what made the internet inflect and scale and innovate amazingly. But I think that one of the things it sacrificed was governance. So, if you tie a lot of what we're saying back together, by using some sort of tool that you can pop into a cloud environment that can access a hundred percent of the infrastructure and look for risks, what you're doing is you're kind of X-ray visioning into all these nodes that were deployed rapidly and kept around because they were crown jewels, and you're determining the risks that lie on them. So, let's say that 10 or 15% of your estate is prototype things that grew to scale and can't be pulled back into your governance infrastructure. A lot of times people think that those types of machines are probably pretty locked down and probably low risk.

If you throw a company on a side scanner or something like that, you'll see they have 90% of the risk, 80% of the risk. They're unpatched and they're old. So, I remember at one point in my career, right, I'm thinking Amazon's great. I'm—[unintelligible 00:19:20] on Amazon because they've made the internet go, they influxed. I mean, they've scaled us up like crazy.

Corey: Oh, the capability story is phenomenal. No argument there.

Jack: Yeah.
The governance problem, though, you know—there are a lot of hacks because of people using AWS poorly.

Corey: And to be clear, that's everyone. We all are. I take a look at some of the horrible technical decisions I made even a couple of years ago, based upon what I know now, and it's difficult to back out and wind up doing things the proper way. I wrote an article a while back, "17 Ways to Run Containers on AWS," and listed all the services. And I think it was a little on the nose, but then I wrote "17 More Ways to Run Containers on AWS," but with different services. And I'm about three-quarters of the way through the third in the series. I just need a couple more releases and we're good to go.

Jack: The more and more complexity you add, the more security risk exists. And I've heard horror stories. Dictionary.com lost a lot of business once because a couple of former contractors deleted some instances in AWS. Before that, they had a secret machine they turned into a pixel [unintelligible 00:20:18] and had to take down their iPhone app.

I've seen some stuff. But one of the interesting things about deploying one of these tools in AWS—they can just, you know, X-ray vision into all your compute, all your storage and say, "You have PII stored here, you have personal data stored here, you have this vulnerability, that vulnerability, this machine has already been compromised"—is you can take that to your CEO as a CISO and say, "Look, we were wrong, there's a lot of risk here." And then what I've done in the past is I've used that to deploy HIDS—XDR, telemetry at scale, whatever you want to call it—these agent-based solutions; I've used that as justification for them. Now, the problem with the solutions that are agentless is almost all of them are just in the cloud. So, just a portion of your infrastructure.

So, if you're a hybrid environment and you have data centers, you're ignoring the data centers. It's interesting because I've seen these companies position themselves as competitors when really, they're in complementary spaces, but one of them justified the other for me. So, I mean, what do you think about that awkward competition? Why does this competition exist between these people if they do completely different things?

Corey: I'll take it a step further. I'm a big believer that security for the cloud providers should not be a revenue generator in any meaningful sense, because at that point, they wind up with an inherent conflict of interest, where when they start charging—especially trying to do value-based pricing as they move up the stack—what they're inherently saying is, great, you can get the version of our services that is less secure; what they're doing is making security on their platform an inherent investment decision. And I've never been a big believer in that approach.

Jack: The SSO tax.

Corey: Oh, yes. And many others.

Jack: Yeah. So, I was one of the first SSO tax contributors. That started it.

Corey: You want data plane audit logging? Great, that'll cost you. But they finally gave in a couple of years back and made the first management trail for CloudTrail audit logging free for everyone. And people still inadvertently built second ones and then wonder why they're paying through the nose. Like, "Oh, that's 40 grand a month. That should be zero." Great. Send that to your SIEM and then have that pass it out to where it needs to go. But so much of it is just these weird configuration taxes that people aren't fully aware exist.
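(The duplicate-trail tax Corey describes is easy to spot programmatically. Here's a hedged, minimal sketch using boto3—assuming installed credentials, and assuming your duplicates are the common case of multiple multi-region management trails; the function name is ours, and whether a trail actually records management events still depends on its event selectors.)

```python
# Sketch: flag accounts that appear to be paying for more than one
# management-event trail. The first copy of management events in
# CloudTrail is free; additional trails delivering them are billed.
import boto3

def redundant_management_trails() -> list[str]:
    ct = boto3.client("cloudtrail")
    trails = ct.describe_trails(includeShadowTrails=False)["trailList"]
    # Multi-region trails are the usual suspects for duplicate management events.
    multi_region = [t["Name"] for t in trails if t.get("IsMultiRegionTrail")]
    return multi_region if len(multi_region) > 1 else []

if __name__ == "__main__":
    dupes = redundant_management_trails()
    if dupes:
        print("Possible duplicate management trails:", ", ".join(dupes))
    else:
        print("No obviously redundant trails found.")
```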
Jack: It's the market, right? The market is—so look at Amazon's IAM. It is amazing, right? It's totally robust. Who is using it correctly? I know a lot of people are. I've been the CISO for over 100 companies, and IAM was one of those things that people don't know how to use, and I think the reason is because people aren't paying for it, so AWS can continue to innovate on it.

So, we find ourselves with this huge influx of IAM tools in the startup scene. We all know Uptycs does some CIEM and some identity management stuff. But that's a great example of what you're talking about, right? These cloud companies are not making the things inherently secure, but they are giving some optionality. The products don't grow because they're not being consumed.

And AWS doesn't tend to advertise them as much as the folks in the security industry do. It's been one complaint of mine, right? And I absolutely agree with you. Most of the breaches are coming out of AWS. That's not AWS's fault. AWS's infrastructure isn't getting breached. It's the way that the customers are configuring the infrastructure. That's going to change a lot soon. We're starting to see a lot of change. But the fundamental issue here is that security needs to be invested in for short-term initiatives, not just for long-term initiatives. Customers need to care about security, not compliance. Customers need to see proof of security. A customer should be demanding that they're using a secure company. If you've ever been on the vendor approval side, you'll see it's very hard to push back on an insecure company going through the vendor process.

Corey: This episode is sponsored in part by our friends at Uptycs, because they believe that many of you are looking to bolster your security posture with CNAPP and XDR solutions. They offer both cloud and endpoint security in a single UI and data model. Listeners can get Uptycs for up to 1,000 assets through the end of 2023 (that is next year) for $1. But this offer is only available for a limited time on UptycsSecretMenu.com. That's U-P-T-Y-C-S Secret Menu dot com.

Corey: Oh, yes. I've wound up giving probably about 100 companies now S3 Bucket Negligence Awards for being public about failing to secure their data and putting it out into the world. I had one physical bucket made, the S3 Bucket Responsibility Award, and presented it to their then-director of security over at the Pokémon Company, because there was a Wall Street Journal article talking about how their security review—given the fact that they are a gaming company that has children as their primary customer—they take it very seriously. And they cited that the reason they chose not to do business with one unnamed vendor was in part due to the lackadaisical approach around S3 bucket control. So, that was the one time I've seen in public a reference where, "Yeah, we were going to use a vendor and their security story was terrible, and we decided not to."

It's, why is that news? That should be a much more common story, but these days, it feels like procurement is rubber-stamping it and, like, "Okay, great. Fill out the form." And, "Okay, you gave some wrong answers on the form. Try it again and tell the story differently until it gets shoved through." It feels like it's a rubber stamp rather than a meaningful control.
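(The kind of check behind those awards is straightforward to script. A hedged sketch with boto3—assuming credentials allowed to call s3:ListAllMyBuckets and s3:GetBucketPublicAccessBlock; the helper name is ours, and account-level public access settings aren't considered here—flags buckets with a missing or incomplete Public Access Block configuration.)

```python
# Sketch: list buckets and flag any without a full S3 Public Access Block,
# the usual first step before a bucket ends up negligently public.
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block() -> list[str]:
    s3 = boto3.client("s3")
    risky = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"
            ]
            if not all(cfg.values()):  # any of the four settings disabled
                risky.append(name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                risky.append(name)  # nothing configured at all
            else:
                raise
    return risky

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print("Check this bucket:", name)
```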
Jack: It's not a rubber stamp for me when I worked in it. And I'm a big guy, so they come to me, you know—that's kind of how it's been my whole career, just being big and intimidating. Because that's—I mean, security kind of is that way. But, you know, I've got a story for you. This one's a little more bleak.

I don't know if there's a company called Ask.fm—and I'll mention them by name—right, because, well, I worked for a company that did, like, a hostile takeover of this company. And that's when I started working with [unintelligible 00:25:23]. [unintelligible 00:25:24]. I speak Russian and I learned it for work. I'm not Russian, but I learned the language so that I could do my job.

And I was working for a company with a similar name. And we were in board meetings and we were crying, literally shedding tears in the boardroom, because this other company was being mistaken for us. And the reason why we were shedding tears is because young women—you know, 11 to 13—were committing suicide because of online bullying. They had no health and safety department, no security department. We were furious.

So, the company was hosted in Latvia, and we went over there. I lived in Latvia for quite a bit, working as the CISO to install a security program, along with the health and safety person to install the moderation team. This is what we need to do in the industry, especially when it comes to children, right? Will regulation solve it? I don't know.

But what you're talking about with the Pokémon video game, I remember that, right? We can't have that kind of data being leaked. These are children. We need to protect them with information security. And in education technology, I'll tell you, it's just not a budget priority.

So, the parents need to demand the security, we need to demand these audit certifications, and we need to demand that our audit firms are audited better. Our audit firms need to be explaining to security leaders that the control frameworks are something that they're responsible for creating bespoke. I did a presentation with Al Kingsley recently about security compliance, comparing FERPA and COPPA to the GDPR. And it was very interesting, because FERPA has very little teeth, it's very long code, and GDPR is relatively brilliant. GDPR made some changes. FERPA was so ambiguous and vague—it made a lot of changes, but they were kind of in any direction ever, because nobody knows what FERPA is. So, I don't know, what's the answer to that? What do we do?

Corey: Yeah. The challenge is, you can see a lot of companies in specific areas doing the right thing when they're intentionally going out on day one to, for example, serve kids as a primary user base demographic. The challenge that you see with this is that that's great, but then you have things that are not starting off with that point of view. And they start running into population limits and realize, okay, we've got to start expanding our user base somewhere, and then they go bolting on those things almost as an afterthought, where, "Oh, well, we've been basically misusing people's data for our entire existence, but now—now—we're suddenly magically going to do the right thing where kids are concerned." I wish, but unfortunately that philosophy assumes a better take on humanity than is readily apparent.

Jack: I wonder why they do that though, right? Something's got to—you know, news happened or something, and that's why they're doing it. And that's not okay. But I have seen companies… one of the founders of Scantron—do you know what a Scantron is?

Corey: Oh, yes.
I'm much older than I look.

Jack: Yeah, I'm much older than I look, too. I like to think that. But for those that don't know, a Scantron—you used a number two pencil and you filled in these little dots. It was for taking tests. So, the guy who started Scantron created a small two-person company.

And AWS did something magnificent. They recognized that it was an education technology company, and they gave them, for free, security consultation services, security implementation services. And when we bought this company—I'm heavily involved in M&A, right—I'm sitting down with the two founders of the company, and my jaw is on the desk. They were more secure than a lot of the companies that I've worked with that had robust security departments. And I said, "How did you do this?"

They said, "AWS provided us with this free service because we're education technology." I teared up. My heart was—you know, that's amazing. So, there are companies that are doing this right, but then again, look at Grammarly. I hate to pick on Grammarly. LanguageTool is an open-source—I believe privacy-centric—Grammarly competitor, but Grammarly, invest in your security a little more, man. Y'all were breached. They store a lot of data, they [unintelligible 00:29:10] a lot of the data.

Corey: Oh, and it scared the living hell out of companies realizing that they had business users using Grammarly as an extension to work on internal documents and just sending proprietary data to some third-party service that they clicked through the terms on. I don't know that it was ever shown that Grammarly was misusing any of that, but the potential for that is massive.

Jack: Do you know what they were doing with it?

Corey: Well, using AI to learn these things. Yeah, but the supervision story always involves humans reading it.

Jack: They were building a—and I think nobody knows the rumor, but I've worked in the industry, right, pretty heavily—they're doing something great for the world. I believe they're building a database of works submitted to do various things with them. One of those things is plagiarism detection. So, in order to do that, they've got to store, like, all of the data that they're processing.

Well, if you have all the data that you've done for your company sitting in this Grammarly database and they get hacked—luckily, that's a lot of data. Maybe you'll be overlooked. But I've got a data breach database sitting here on my desk. Do you know how many rows it's got? [pause]. Yes, a breach database.

Corey: Oh, I wouldn't even begin to guess. I know the data volumes that Troy Hunt's Have I Been Pwned? site winds up dealing with, and it is… significant.

Jack: How many billions of rows do you think it is?

Corey: Ah, I'd say 20 as an argument?

Jack: 34.

Corey: Okay. Yeah, directionally right. Fermi estimation saves us yet again.

Jack: [laugh]. The reason I built this breach database is because I thought Covid would slow down and I wanted it to do executive protection. Companies in the education space also suffer from [active 00:30:42] shooters and that sort of thing. So, that's another thing about security, too—it transcends all these interesting areas, right? Like here, I'm doing executive risk protection by looking at open-source data.

Protect the executives, show the executives that security is a concern, and these executives will realize security's real. Then they pass that security down the list of priorities, and the next thing you know, the 50 million active students that are using Turnitin are getting better security.
Because an executive realized, "Hey, wait a minute, this is a real thing." So, there's a lot of ways around this, but I don't know—it's a big space, there's a lot of competition. There are a lot of companies that are coming in and flashing in the pan.

A lot of companies are coming in and building snake oil. How do people know how to determine the right things to use? How do people know what to implement? How do people understand that when they deploy a program that only applies to their cloud environment, it doesn't touch their on-prem, where a lot of data might be at risk? And how do we work together? How do we get teams like DevOps, IT, SecOps, to not fight each other over installing an agent for doing this?

Now, when I looked at Uptycs, I said, "Well, it does the EDR for corp stuff, it does the host intrusion detection—you know, the agent-based stuff—I think, well, because it uses a buzzword I don't like to use, osquery. It's got a bunch of cloud security configuration on it, which is pretty commoditized. It does agentless cloud scanning." And it—really, I spent a lot of my career just struggling to find these tools. I've written some myself.

And when I saw Uptycs, I felt stupid. I couldn't believe that I hadn't used this tool. I think maybe they've increased their capabilities substantially, but it was kind of amazing to me that I had spent so much of my time and energy and hadn't found them. Luckily, I decided to joi—actually I didn't decide to join; they kind of decided for me—and they started giving it away for free. But I found that Uptycs needs a, you know, they need a brand refresh. People need to come and take a look and say, "Hey, this isn't the old Uptycs. Take a look."

And maybe I'm wrong, but I'm here as a technology evangelist, and I'll tell you right now, the minute I'm no longer an evangelist for this technology, the minute I'm no longer passionate about it, I can't do my job. I'm going to go do something else. So, I'm the one guy who will get down to brass tacks with you. I want this thing to be the thing I've been passionate about for a long time. I want people to use it.

Contact me directly. Tell me what's wrong with it. Tell me I'm wrong. Tell me I'm right. I really just want to wrap my head around this from the industry perspective and say, "Hey, I think that these guys are willing to make the best thing ever." And I'm the craziest person in security. Now, Corey, who's the craziest person in security?

Corey: That is a difficult question with many wrong answers.

Jack: No, I'm not talking about McAfee, all right. I'm not that level of crazy. But I'm talking about—I was obsessed with this XDR, CDR, all the acronyms. You know, we call it HIDS; I was obsessed with it for years. I worked for all these companies.

I quit doing, you know, a lot of very good entrepreneurial work to come work at this company. So, I really do think that they can fix a lot of this stuff. I've got my fingers crossed, but I'm still staying involved in other things to make these technologies better. And the security software space is going all over the place. Sometimes it's going in a bad direction, sometimes it's going in a good direction. But I agree with you about Amazon producing tools. I think it's just all market-based. People aren't going to use the complex tools of Amazon when there's all this other flashy stuff being advertised.

Corey: It all comes down to marketing budget, and AWS has always struggled with telling a story. I really want to thank you for being so generous with your time.
If people want to learn more, where should they go?

Jack: Oh, gosh, everywhere. But if you want to learn more about Uptycs, why don't you just email me?

Corey: We will, of course, put your email address into the show notes.

Jack: Yeah, we'll do it.

Corey: Don't offer if you're not serious. There's also uptycssecretmenu.com, which is apparently not much of a secret, given the large banner all over Uptycs' website.

Jack: Have you seen this? Let me just tell you about this. This is not a catch. I was blown away by this; it's one of the reasons I joined. For a buck, if you have between 100 and 1,000 nodes, right, you get our agentless system and our agent-based system, right? I think it's only on AWS. But that's, like, what, a $150,000, $180,000 value? You get it for a full year. You don't have to sign a contract to renew or anything. Like, you just get it for a buck. Anybody who doesn't go onto the secret menu website and pay $1 and check out this agentless solution that deploys in two minutes—come on, man.

I challenge everybody: go on there, do that, and tell me what's wrong with it. Go on there, do that, and give me the feedback. And I promise you I'll do everything in my best efforts to make it the best. I saw the engineering team in this company; they care. Ganesh, the CEO—he is not your average CEO. This guy is a tinkerer. He's on there, hands on keyboard. He responds to me in the middle of the night. He's a geek just like me. But we need users to give us feedback. So, you get this dollar menu, you sign up before the 31st, right? You get the product for a buck. Deploy the thing in two minutes.

Then if you want to do the XDR, this agent-based system, you can deploy that at your leisure across whichever areas you want. Maybe you want your corporate network of laptops and desktops, your production infrastructure, your compute in the cloud—deploy it, take a look at it, tell me what's wrong with it, tell me what's right with it. Let's go in there and look at it together. This is my job. I want this company to work, not because they're Uptycs but because I think that they can do it.

And this is my personal passion. So, if people hit me up directly, let's chat. We can build a Slack, an Uptycs skunkworks. Let's get this stuff perfect. And we're also going to try and get some advisory boards together—like, maybe a CISO advisory board—just to get more feedback from folks, because I think the Uptycs brand has made a huge shift in a really positive direction.

And if you look at the great thing here, they're unifying this whole agentless and agent-based stuff. And a lot of companies are saying that they're competing with that, but those two things need to be run together, right? They need to be run together. So, I think the next step here: check out that dollar menu. It's unbelievable. I can't believe that they're doing it.

I think people think it's too good to be true. Y'all got nothing to lose. It's a buck. But if you sign up for it right now, before December 31st, you can just wait and act on it any month later. So, if you sign up for it, you're just locked into the pricing. And then you want to hit me up and talk about it? Is it three in the morning? You got me. Is it eight in the morning? You got me.

Corey: You're more generous than I am. It's why I work on AWS bills. It's strictly a business-hours problem.

Jack: This is not something that they pay me for. This is just part of my personal passion.
I have struggled to get this thing built correctly because I truly believe not only is it really cool—and I'm not talking about Uptycs, I mean all the companies that are out there—but I think that this could be the most powerful tool in security, one that makes the world more secure in a way that keeps up with the increasing security risks.

We just need to get customers, we need to get critics, and if you're somebody who wants to come in and prove me wrong, I need help. I need people to take a look at it for me. So, it's free. And if you're in the San Francisco Bay Area and you give me some good feedback and all that, I'll take you out to dinner, I'll introduce you to startup companies that I think, you know, you might want to advise. I'll help out your career.

Corey: So, it truly is a dollar menu then.

Jack: Well, I'm paying for the dinner out of my own pocket.

Corey: Exactly. Well, again, you're also paying for the infrastructure required to provide the service, so, you know, one way or another, it's all the best—it's just like cloud: there is no cloud, it's just someone else's cost center. I like that.

Jack: Well, yeah, we're paying for a ton of data hosting. This is a huge loss leader. Uptycs has a lot of money in the bank, I think, so they're able to do this. Uptycs just needs to get a little more bold in their marketing, because I think they've spent so much time building an awesome product, it's time that we get people to see it. That's why I did this.

My career was going phenomenally. I was traveling the world, traveling the country, promoting things, just getting deals left and right, and then Elias—my buddy over at Orca; Elias, one of the best marketing guys I've ever met—I've never done marketing before. I love this. It's not just marketing. It's like I get to take feedback from people and make the product better, and this is what I've been trying to do.

So, you're talking to a crazy person in security. I will go well above and beyond. Sign up for that dollar menu. I'm telling you, it is no commitment; maybe you'll get some spam email or something like that. Email me directly, I'll kill the spam email.

You can do it anytime before the end of 2023. But it's only for 2023. So, you got a full year of the services for free. For free, right? And one of them takes two minutes to deploy, so start with that one. Let me know what you think. These guys ideate and they pivot very quickly. I would love to work on this. This is why I came here.

So, I haven't had a lot of opportunity to work with the practitioners. I'm there for you. I'll create a Slack, we can all work together. I'll invite you to my Slack if you want to get involved in secondaries investing and startup advisory. I'm a mentor and a leader in this space, so for me to be able to stay active, this is like a quid pro quo with me working for this company.

Uptycs is the company that I've chosen now because I think that they're the ones that are doing this. But I'm doing this because I think I found the opportunity to get it done right, and I think it's going to be the one thing in security that, when it is perfected, has the biggest impact.

Corey: We'll see how it goes over the coming year, I'm sure. Thank you so much for being so generous with your time. I appreciate it.

Jack: I like you. I like you, Corey.

Corey: I like me too.

Jack: Yeah? All right. Okay. I'm telling [unintelligible 00:39:51] something. You and I are very weird.

Corey: It works out.

Jack: Yeah.

Corey: Jack Charles Roehrig, Technology Evangelist at Uptycs.
I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an insulting comment that we're going to be able to pull the exact details of where you left it from, because your podcast platform of choice clearly just treated security as a box check.

Jack: [laugh].

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
Rachna Nath tells us why we "Should believe that destiny will lead us," why we "Shouldn't be afraid of failure," and that "If you believe in something, make it happen," hosted by Diana White.

About Rachna Nath
Rachna Nath is a TIME-recognized innovative teacher and is also an internationally recognized innovator, entrepreneur, NASA Solar System Ambassador, National Geographic Educator, grant writer, and STEM enthusiast. She is also the coauthor of the SDG4 corporate handbook set forward by the United Nations. She has two master's degrees, the first in Entomology (Insect Science) and the second in Biology (Developmental Genetics) from Arizona State University, working on honey bee exocrine gland ontology. She has won Teacher of the Year from JSHS (sponsored by the US armed forces), the Governor's Celebration of Innovation Award, the Global Innovation Award from Turnitin, an Honorable Mention for the Presidential Innovation Award for Environmental Educators in the United States, and two Excite Awards from the Lemelson-MIT Foundation, to mention a few. She has also been invited to join the "Imaginary College" as an honorary member (Center for Science and the Imagination) at ASU, along with world-renowned figures like Margaret Atwood, Paolo Bacigalupi, and many more. She has been featured as one of the fifty 2021 Women in MILLION STEM, in Entrepreneur Magazine, and in "Chandler Lifestyle 2020 Women of Chandler," recognized at the "Women in Leadership Conference" by the Chandler Chamber of Commerce, Phoenix, Arizona. Other features include Thrive Global and Authority Magazine as an Inspirational Woman in STEM. She has also received grants from the Bill & Melinda Gates Foundation, Healthy Urban Environments, the Flinn Foundation, NSF, the Department of Defense, the Arizona Recycling Coalition, Society for Science and the Public, the Chandler Education Foundation, and the list goes on. Her entrepreneurship ventures, through her program DRIPBL (Dream Research Innovate Problem/Project Based Learning), have led her to open up many companies with her students. One of the most prominent is www.oxiblast.in, which is a three-generation women's entrepreneurship venture. Her 14-year-old daughter and 12-year-old son are both #1 international bestsellers and well-recognized musicians as well. She works with young entrepreneurs to make their dreams come true by working with community partners and helping patent their ideas. Rachna has a network of trusted IT professionals, lawyers, and community helpers who help bring dreams to reality for 9th- to 12th-grade students who are invested in critical thinking, problem solving, and giving back to the community by solving real-world problems. She has 3 patents pending from such students for various prototypes, from anti-VOC scent bags to heat-stress monitoring devices. Rachna also does a lot of volunteering work talking about honeybees at various festivals, contributed her time to mask making during the COVID-19 pandemic, and also runs a dance school, "Sangeeta Nritya Academy," in the US, which she has dedicated to her guru Sangita Hazarika in Assam, India. She is a force to be reckoned with and she is not stopping anytime soon.

Episode Notes
Lesson 1: Believe that destiny will lead you to where your impact is most needed. 02:44
Lesson 2: Empowering students with curiosity is empowering the future. 04:50
Lesson 3: Be your own advocate, value yourself, and pamper yourself. 07:03
Lesson 4: Never confuse standing up for yourself with being disrespectful. 11:28
Lesson 5: Patience is a virtue but can become a vice if you practice it too much. 15:21
Lesson 6: Looking for contentment and making your passion your job is worth more than money. 21:23
Lesson 7: Make a list of priorities in your life and allot time for each of them based on your priority. 27:30
Lesson 8: If you believe in something, make it happen. 33:47
Lesson 9: Don't be scared of the negativity or failure in life; make it a learning moment. 35:22
Lesson 10: Keep growing in your field and never be afraid of change but embrace it. 41:03
On this edition of Parallax Views, Project Censored's Nolan Higdon returns to the program alongside the Media Freedom Foundation's Allison Butler to discuss their recent USA Today article "Strangers are spying on your child. And schools are paying them to do it." Since the pandemic, big tech hardware and software have become even more ubiquitous in schools across the United States. Is there a downside to this alliance between the American education system and big tech companies? Nolan Higdon and Allison Butler argue that big tech's latest ventures in the classroom violate students' right to privacy and stifle their learning environments. In fact, they go so far as to invoke George Orwell's 1984 in addressing the issues of big tech in the classroom. Among the topics we'll be discussing are: companies and software such as Turnitin, ClassDojo, Illuminate Education, and G Suite for Education; the effects of big tech surveillance and the potential for student self-censorship in the classroom; data breaches in schools; the growth of big tech surveillance in the classroom and how it coincides with renewed issues around book banning; the difficulty of measuring the possible negative impacts of big tech's influence in the classroom going forward; and much, much more!
Some strong opinions and hot takes inbound! Josh and Will sink their teeth into the conversation on plagiarism, cheating, and dishonesty. How should we approach this issue? What's the goal for our students? Then we open up a conversation on Turnitin like no other app before it. Will you agree? For more on our conversation, check out the episode page here. For all of our episodes and resources for each app we discuss, head over to our website at hitechpod.us. --- Send in a voice message: https://podcasters.spotify.com/pod/show/hitechpod/message
WHAT TEACHERS SHOULD LOOK FOR IN AN ED TECH EXPERIENCE
A good topic with guys who know what they are talking about: Turnitin's Director of Customer Engagement Ian MacCullough and Senior Director of User Experience Bill Rattner.
Visit ace-ed.org, SELtoday.org, and teacher-retention.com to see all our work.
PLUS: We're excited to be working on the inaugural Excellence in Equity Awards, which will help us spotlight and celebrate high-impact work across K-12 education. Head to ace-ed.org/awards to find all the information and nominate before June 30! Email awards@ace-ed.org with questions.
Have you heard of Masterclass, Coursera, Turnitin, Coursehero, and Handshake? Our guest today is the Head of Talent at GSV Ventures, an early-stage investment fund that has invested in all of these companies and manages over $270M in assets. We are lucky to have Amanda Porter, who supports talent acquisition and performance management across GSV Ventures and its portfolio companies. --- Here are some questions we will be answering: - How do you know if VC is right for you? - How do you break into VC? - What are the personality traits of an ideal professional in VC? - How can sports translate to real-world job skills? - Is working for a startup beneficial prior to joining a VC? - What should you research about a VC firm prior to a coffee chat or interview? - What questions are asked in VC interviews? --- Connect with Amanda: https://www.linkedin.com/in/porteramanda/ Get 1-on-1 Career Coaching: https://www.careercoachingcompany.com/ Follow our Host, AJ Eckstein, on LinkedIn: https://www.linkedin.com/in/aaron-aj-eckstein/ Read the episode transcript: https://thefinalround.com/episode7/ Watch on YouTube: https://youtu.be/gdFrKRGQ_lM --- Disclaimer: The opinions and views expressed in this podcast are of the host and guest and not of their employers.
This episode may have the listener thinking, "these guys are just contrarians." After all, who in his right mind would question the value of catching a young person in the act of cheating? Is that not the supreme moment of authority? Well, Jon and Shaun will question that. Specifically, they even call out a much-relied-upon program used for catching students who plagiarize, but let's be sympathetic here, for they are not simple contrarians, and in regards to Turnitin.com, they know not what they do.

First, let's just get something out of the way here. Shaun is clearly unwell in this recording, but he powers through. It's painful to hear, but hear him out. If nothing else, such rhetoric is amusing.

What seems to have happened to these idealistic teachers is that they decided to believe in students somewhere along the way. Having spoken with some of their former colleagues, it seems that they were always a bit soft on students. Probably, they should have exited the educational system in those early years. Unfortunately, they persisted, and those seeds of softness grew into trees of tolerance, and now these two cannot see the students as doing anything wrong. Shaun is more afflicted by this affliction than Jon, and it took Jon nigh on half the episode to realize how radical a position Shaun was espousing.

In the midst of the kindness confusion, Jon outright confesses to dishonestly proceeding through his secondary education and challenges erstwhile administrators of Grapevine-Colleyville Independent School District to strip him of his high school diploma for a specific incident from junior high. Shaun suggests that he could be stripped of his diploma as well for academic malfeasances in the North East Independent School District of San Antonio, but he does not specify an instance, so that may be tricky for NEISD. But Shaun will have his own issues when Turnitin.com comes after him for defamation.

So take it easy on these guys: one is about to be a 40-something stripped of his high school diploma while the other will be contending with the perfectly legitimate and morally upright company, Turnitin.com.