In our third episode on AI in UK schools, Professor Rose Luckin explores AI integration further with two very special guests helping to lead the way with AI in their institutions.

Talking points and questions may include:
- What is the extent of AI penetration in your schools, including teacher usage, classes avoiding it, student use, and any strategies or evaluation plans in place regarding reactive or proactive AI adoption?
- No AI is risk-free, so concerns around impacts on learning, creativity, authorship, assessment, and whether students genuinely understand AI-generated content are critical issues
- Safeguarding measures must address the risks of AI providing misleading, biased, or explicit content without consent as these technologies proliferate in classrooms
- Comprehensive AI training is needed for educators at all levels to ensure smooth technology transitions while maintaining human-centric learning approaches as new tools and understanding are required

Guests:
- Harvey Trump, Educational Consultant, Global Educational Consultancy, Egypt
- Avani Higgins, Director of School Improvement, Leathersellers' Federation of Schools
In our second episode on AI in UK schools, Professor Rose Luckin explores AI integration further with three very special guests helping to lead the way with AI in their institutions.

Talking points and questions may include:
- What is the extent of AI penetration in your schools, including teacher usage, classes avoiding it, student use, and any strategies or evaluation plans in place regarding reactive or proactive AI adoption?
- No AI is risk-free, so concerns around impacts on learning, creativity, authorship, assessment, and whether students genuinely understand AI-generated content are critical issues
- Safeguarding measures must address the risks of AI providing misleading, biased, or explicit content without consent as these technologies proliferate in classrooms
- Comprehensive AI training is needed for educators at all levels to ensure smooth technology transitions while maintaining human-centric learning approaches as new tools and understanding are required

Guests:
- Adam Webster, Deputy Head (Innovation), Caterham School & CEO of Sphinx AI
- Scott Hayden, Head of Teaching, Learning, and Digital, Basingstoke College of Technology
- Chris Goodall, Head of Digital Education, Bourne Education Trust
In today's episode, we have the first part of a two-part miniseries on risk management, risk mitigation and risk assessment in AI learning tools. Professor Rose Luckin is away in Australia, speaking internationally, so Rowland Wells takes the reins to chat with Educate Ventures Research team members about their experience managing risk as teachers and developers. What does a risk assessment look like, and whose responsibility is it to take on board its insights? Rose joins our discussion group towards the end of the episode, and in the second instalment of the conversation, Rowland sits down with Dr Rajeshwari Iyer of sAInaptic to hear her perspective on risk and testing features of a tool as a developer and CEO herself.

View our Risk Assessments here: https://www.educateventures.com/risk-assessments

In the studio:
- Rowland Wells, Creative Producer, EVR
- Dave Turnbull, Deputy Head of Educator AI Training, EVR
- Ibrahim Bashir, Technical Projects Manager, EVR
- Rose Luckin, CEO & Founder, EVR

Talking points and questions include:
- Who are these for? What's the profile of the person we want to engage with these risk assessments? They're concise, easy to read, with no technical jargon. But it's still an analysis, for people with a research/evidence mindset. Many people ignore it: we know that even learning tool developers who put research on their tools ON THEIR WEBSITES do not actually have it read by the public. So how do we get this in front of people? Do we lead the conversation with budget concerns? Safeguarding concerns? Value for money?
- What's the end goal of this? Are you trying to raise the sophistication of conversation around evidence and risk? Many developers who you critique might just think you're trying to make a name pulling apart their tools. Surely the market will sort itself out?
- What's the process involved in making judgements about a risk assessment? If we're trying to demonstrate to the buyers of these tools, the digital leads in schools and colleges, what to look for, what's the first step? Can this be done quickly? Many who might benefit from AI tools might not have the time to exhaustively hunt out all the little details of a learning tool and interpret them themselves
- Schools aren't testbeds for intellectual property or tech interventions. Why is it practitioners' responsibility to make these kinds of evaluations, even with the aid of these assessments? Why is the tech and AI sector not capable of regulating its own practices?
- You've all worked with schools and learning and training institutions using AI tools. Although this episode is about using the tools wisely, effectively and safely, please tell us how you've seen teaching and learning enhanced with the safe and impactful use of AI
In the second episode of a two-part miniseries on risk management, risk mitigation and risk assessment in AI learning tools, Professor Rose Luckin is away in Australia, speaking internationally, so Rowland Wells takes the reins to chat with Dr Rajeshwari Iyer of sAInaptic to hear her perspective on risk as a developer and CEO.

View our Risk Assessments here: https://www.educateventures.com/risk-assessments

In the studio:
- Rowland Wells, Creative Producer, EVR
- Rajeshwari Iyer, CEO and Cofounder, sAInaptic

Talking points and questions include:
- Who are these for? What's the profile of the person we want to engage with these risk assessments? They're concise, easy to read, with no technical jargon. But it's still an analysis, for people with a research/evidence mindset. Many people ignore it: we know that even learning tool developers who put research on their tools ON THEIR WEBSITES do not actually have it read by the public. So how do we get this in front of people? Do we lead the conversation with budget concerns? Safeguarding concerns? Value for money?
- What's the end goal of this? Are you trying to raise the sophistication of conversation around evidence and risk? Many developers who you critique might just think you're trying to make a name pulling apart their tools. Surely the market will sort itself out?
- What's the process involved in making judgements about a risk assessment? If we're trying to demonstrate to the buyers of these tools, the digital leads in schools and colleges, what to look for, what's the first step? Can this be done quickly? Many who might benefit from AI tools might not have the time to exhaustively hunt out all the little details of a learning tool and interpret them themselves
- Schools aren't testbeds for intellectual property or tech interventions. Why is it practitioners' responsibility to make these kinds of evaluations, even with the aid of these assessments? Why is the tech and AI sector not capable of regulating its own practices?
- You've worked with schools and learning and training institutions using AI tools. Although this episode is about using the tools wisely, effectively and safely, please tell us how you've seen teaching and learning enhanced with the safe and impactful use of AI
This week we hear from Professor Rose Luckin, from University College London and Educate Ventures Research. In the podcast, Rose mentioned the 1% project in Finland from 2020 - here's some reporting on it. Through Educate Ventures Research there are a range of AI consultancy and training services for schools, including the AI Readiness Online Course for teachers. Rose also has a monthly newsletter, "The Skinny on AI for Education", which includes an extensive reading list in every edition, covering a number of AI topics, not just what's happening in education.
In today's rapidly evolving educational landscape, Artificial Intelligence is emerging as a transformative force, offering both opportunities and challenges. As AI technologies continue to advance, it's crucial to examine their impact on student expectations, learning experiences, and institutional strategies. One pressing question is: what do students truly want from AI in education? Are they reflecting on the value of their assessments and assignments when AI tools can potentially complete them? This raises the deeper question of what we mean by student success in higher education and the purpose of knowledge in an AI-driven economy. Professor Rose Luckin is joined by three wonderful guests in the studio to discuss what tools we need to support students and how we explore the potential and the limitations of AI for education.

Guests:
- Michael Larsen, CEO & Managing Director, Studiosity
- Sally Wheeler, Professor, Vice-Chancellor, Birkbeck, University of London
- Ant Bagshaw, Executive Director, Australian Technology Network of Universities

Talking points and questions include:
- Student expectations and perspectives on using AI for assessments/assignments, and the role of knowledge in an AI economy
- The potential of AI to enhance learning through features like instant feedback, error correction, personalized support, and learning analytics
- How AI could facilitate peer support systems and student community, and the research on the value of this
- The lack of robust digital/AI strategies at many institutions as a barrier to effective AI adoption
- The evidence base for AI in education - challenges with research being highly specific/contextual, debating the value of in-house research vs general studies
- Whether evidence on efficacy truly drives institutions' buying decisions for AI tools, or if other factors and institutional challenges are stronger influences
- How challenges facing the education sector can inhibit capacity for innovative deployments like AI
- The growing need for proven, supportive AI tools for students despite institutional constraints
Coming to the fifth and final episode of our miniseries on AI for education, host Professor Rose Luckin is joined by Timo Hannay, Founder of SchoolDash, and Lord David Puttnam, Independent Producer, Chair of Atticus Education, and former member of the UK parliament's House of Lords. This episode and our series have been generously sponsored by Nord Anglia Education. Today we're going to look ahead to the near and far future of AI in education, and ask what might be on the horizon that we can't even predict, and what we can do as humans to proof ourselves against disruptions and innovations that have, like the Covid pandemic and ChatGPT's meteoric rise, rocked our education systems and demanded we do things differently.

Guests:
- Lord David Puttnam, Independent Producer, Chair, Atticus Education
- Timo Hannay, Founder, SchoolDash

Talking points and questions include:
- Slow Reaction to AI: Despite generative AI's decade-long presence and EdTech's rise, the education sector's response to tools like ChatGPT has been surprisingly delayed. Why?
- Learning from Our AI Response: Can our current reaction to generative AI serve as a case study for adapting to future tech shifts? It's a test of our educational system's resilience
- AI's Double-Edged Sword: With ChatGPT's rapid rise, are EdTech companies risking harm by using AI without fully understanding it? Think Facebook's data misuse in the Rohingya massacre
- Equipping Teachers for AI: Who can educators trust for AI knowledge? We need frameworks to guide them, as AI literacy is now as crucial as internet literacy
- Digital Natives ≠ AI-Ready: Today's youth grew up online, but does that prepare them for sophisticated, accessible AI? Not necessarily
Continuing our miniseries on AI in education with the fourth episode, centred on AI's potential for equity of learning, host Professor Rose Luckin is joined by Richard Culatta of ISTE, Professor Sugata Mitra, and Emily Murphy of Nord Anglia Education. This episode and our series are generously sponsored by Nord Anglia Education. In our fourth instalment of this valuable series, we look at AI's potential to address various challenges and bridge the educational gaps that exist among different groups of students around the world. AI can analyse vast amounts of data, provide early interventions, and enhance accessibility, and as long as the deployment of the technology is appropriate to the unique context of the school, the learners, the location, and the access to devices, AI can transform education for those who need the most support.

Guests:
- Professor Sugata Mitra, Author/Professor of Educational Technology, Newcastle University
- Emily Murphy, Senior PD Lead, DNA Metacognition Project, Nord Anglia Education
- Richard Culatta, CEO, ISTE

Talking points and questions include:
- What do we mean by equity of learning, and how can we understand context?
- Is there a danger that AI will simply be used to reinforce or replace existing conventional methods of assessing learning, despite its great potential?
- What needs to fall into place for AI to be the promise for education we know it could be? What needs to happen for AI to be the magic bullet for equity of learning from a teacher and headteacher perspective? If the technology is there, and it has the potential it has, how can teachers build on that?
- How have different practices and innovations in the classroom been adopted and rejected… is AI going to succeed where other initiatives and technologies have either failed to be adopted, or plateaued and fallen by the wayside? How is AI different?
- How do we talk about getting school infrastructure in place to use AI? How do we convince educationalists, and the budget holders and local governance, that AI and other emerging technologies are worth their investment?
- There is some understandable fear about revolutionary technology disrupting existing practice in the classroom, but are we underestimating our students and teachers?
Continuing our miniseries on AI in education with the third episode, centred on a global perspective on AI, host Professor Rose Luckin is joined by Andreas Schleicher of the OECD, Dr Elise Ecoff of Nord Anglia Education, and Dan Worth of Tes. This episode and our series are generously sponsored by Nord Anglia Education. In our third instalment of this valuable series, we head out beyond the UK and the English-speaking world to get a global perspective on AI, and ask how educators and developers around the world build and engage with AI, and what users, teachers and learners want from the technology that might tell people back home a thing or two. We examine how international use of AI might change the way we engage with AI, and we also ask why they might be doing things differently.

Guests:
- Dr Andreas Schleicher, Director for the Directorate of Education & Skills, OECD
- Dr Elise Ecoff, Chief Education Officer, Nord Anglia Education
- Dan Worth, Senior Editor, Tes

Talking points and questions include:
- What are other countries' tech and education ecosystems doing to develop and implement AI?
- International considerations of ethics and regulation
- Is the first world imposing a way of looking at technology and its innovation on the third world? What assumptions are we making, and are we mindful of the context?
- Is the first world restricting innovation through specific regulation, changing what technology is being built and how, and who might it benefit?
- Skills and competencies development can be driven by the needs of business - what priorities for AI education exhibited by international models could the UK adopt or consider?
What's in this episode?

Continuing our new 5-episode miniseries on AI in education with the second episode, on AI's relationship to neuroscience and metacognition, host Professor Rose Luckin is joined by Dr Steve Fleming, Professor of Cognitive Neuroscience at UCL, UK, and Jessica Schultz, Academic & Curriculum Director at the San Roberto International School in Monterrey, Mexico. This episode and our series are generously sponsored by Nord Anglia Education. Metacognition, neuroscience and AI aren't just buzzwords but areas of intense research and innovation that will help learners in ways that until now have been unavailable to the vast majority of people. The technologies and approaches that study in these domains unlocks must not, however, be siloed or made inaccessible to public understanding. Real work must be done to bring these areas together, and we are tremendously excited that this podcast will present a great opportunity to showcase what inroads have been made, where, why, and how.

Guests:
- Dr Steve Fleming, Professor of Cognitive Neuroscience, UCL
- Jessica Schultz, Academic & Curriculum Director, San Roberto International School

Talking points and questions include:
- Neuroscience and AI are well-respected fields with a massive amount of research underpinning their investigation and practices, but they are also two very shiny buzzwords that the public likely only understands in the abstract (and the words may even be misapplied to things that aren't based in neuroscience or AI). Can you tell our listeners what they are, how they intersect with one another, and what benefits their crossover can provide in the realms of skills and knowledge?
- Can we use one field, AI or neuroscience, to talk about the other, to better 'sell' the idea of the other field of study, and in this way drastically raise the bar of what is possible to detect, uncover and assess in education using these domains?
- In practical terms, how do we use AI and neuroscience to measure what might be considered 'unmeasurable' in learning? What data is required, what expertise in the team, or in a partner organisation, can be leveraged, and who can be responsible for doing this in an educational or training institution? What data, competencies or human resources do they need access to?

Sponsorship

Thank you so much to this series' sponsor: Nord Anglia Education, the world's leading premium international schools organisation. They make every moment of your child's education count. Their strong academic foundations combine world-class teaching and curricula with cutting-edge technology and facilities, to create learning experiences like no other. Inside and outside of the classroom, Nord Anglia Education inspires their students to achieve more than they ever thought possible.

"Along with great academic results, a Nord Anglia education means having the confidence, resilience and creativity to succeed at whatever you choose to do or be in life." - Dr Elise Ecoff, Chief Education Officer, Nord Anglia Education
Professor Rose Luckin provides an engaging tutorial on the opportunities, risks, and challenges of AI in education and why AI raises the bar for human learning. Acknowledging AI's real and present risks, Rose is optimistic about the power of AI to transform education and meet the needs of diverse student populations. From adaptive learning platforms to assistive tools, Rose highlights opportunities for AI to make us smarter, supercharge learner-educator engagement and level the educational playing field. Along the way, she confronts overconfidence in AI, the temptation to offload challenging cognitive workloads and the risk of constraining a learner's choices prematurely. Rose also adroitly addresses conflicting visions of human quantification as the holy grail and the seeds of our demise. She asserts that AI ups the ante on education: how else can we deploy AI wisely? Rising to the challenge requires the hard work of tailoring strategies for specific learning communities and broad education about AI itself. Rose Luckin is a Professor of Learner Centered Design at the UCL Knowledge Lab and Founder of EDUCATE Ventures Research Ltd., a London hub for educational technology start-ups, researchers and educators involved in evidence-based educational technology and leveraging data and AI for educational benefit. Explore Rose's 2018 book Machine Learning and Human Intelligence (free after creating account) and the EDUCATE Ventures newsletter The Skinny. A transcript of this episode is here.
What's in this episode?

Delighted to launch this new 5-episode miniseries on AI in education, sponsored by Nord Anglia Education, host Professor Rose Luckin kicks things off for the Edtech Podcast by examining how we keep education as the centre of gravity for AI. AI has exploded in the public consciousness, with innovative large language models writing our correspondence and helping with our essays, and sophisticated images, music, impersonations and video generated on demand from prompts. Whilst big companies proclaim what this technology can achieve and how it will affect work, life, play and learning, the consumer and user on the ground and in our schools likely has little idea how it works or why, and it seems like a lot of loud voices are telling us only half the story. What's the truth behind AI's power? How do we know it works, and what are we using to measure its successes or failures? What are our young people getting out of the interaction with this sophisticated, scaled technology, and who can we trust to inject some integrity into the discourse?

We're thrilled to have three guests in the Zoom studio with Rose this week:
- Dr Paul LeBlanc, President, Southern New Hampshire University
- Dr Kate Erricker, Assistant Director of Curriculum, Nord Anglia Education
- Julie Henry, Freelance Education Correspondent

Talking points and questions include:
- We often ask of technology in the classroom 'does it work?' But when it comes to AI, preparing people to work, live, and play with it will be more than just whether or not it does what the developers want it to. We need to start educating those same people HOW it works, because that will not only protect us as consumers out in the world, as owners of our own data, but help build a more responsible and 'intelligent' society that is learning all of the time, and better able to support those who need it most. So if we want that 'intelligence infrastructure', how do we build it?
- What examples of AI in education have we got so far, what areas have been penetrated, and has anything radically changed for the better? Can assessment, grading, wellbeing, personalisation, and tutoring be improved with AI enhancements, and is there the structural will for this to happen in schools?
- The 'white noise' surrounding AI discourse: we know the conversation is being dominated by larger-than-life personalities and championed by global companies who have their own technologies and interests that they're trying to glamourise and market. What pushbacks, what reputable sources of information, layman's explanations, experts and opinions should we be listening to to get the real skinny on AI, especially for education?

Sponsorship

Thank you so much to this series' sponsor: Nord Anglia Education, the world's leading premium international schools organisation. They make every moment of your child's education count. Their strong academic foundations combine world-class teaching and curricula with cutting-edge technology and facilities, to create learning experiences like no other. Inside and outside of the classroom, Nord Anglia Education inspires their students to achieve more than they ever thought possible.

"Along with great academic results, a Nord Anglia education means having the confidence, resilience and creativity to succeed at whatever you choose to do or be in life." - Dr Elise Ecoff, Chief Education Officer, Nord Anglia Education
Welcome to the Inner Game of Change Podcast, where I explore the intricate layers of organizational change alongside insightful professionals. Today, I am thrilled to introduce Professor Rose Luckin from University College London, a leading expert in educational technology and artificial intelligence. She's recognized globally for her contributions to the field, named one of the most influential people in education and among the top influential women in technology. Professor Rose's work includes providing expert evidence to policymakers, and she's an accomplished author, with notable works like "Machine Learning and Human Intelligence" and "AI for School Teachers." Rose founded EDUCATE Ventures Research Ltd., fostering AI in education, and has held significant academic leadership roles. I am grateful to have Rose chatting with me today about AI and its powerful potential specifically in Higher Education.

Topics include:
- Brief history of AI
- Why the hype now?
- Higher Education as a use case
- The excitement and concern, and the diverse reaction from Higher Education
- The emerging AI challenge for Higher Education leaders
- The inevitability of AI's impact on our way of working
- The need for universities to adopt an AI strategy to achieve their vision
- Not engaging with AI technology will inevitably disadvantage students
- Where can a university start in building people capabilities in AI?
- With the powerful use of AI, learning standards need to be higher
- AI to help us re-imagine the education experience and uplift our human intelligence
- Rose's counsel for Higher Education leadership when it comes to AI

Contact Rose
- Rose's LinkedIn

Websites
- knowledgeillusion.wordpress.com/ (Blog)
- educateventures.com (Company)

Ali Juma
@The Inner Game of Change podcast
SCIENCE! Under discussion today are the ways in which students who were switched off the sciences at school manage to retain their curiosity about the subjects and can even reengage with them later in life. Professor Rose Luckin is very lucky to have in the online studio this week Dr Andrew Morris, Honorary Associate Professor at UCL, former president of the Education Section of the British Science Association, and author, whose book, Bugs, Drugs, and Three-Pin Plugs: Everyday Science, Simply Explained, is now available wherever books are sold. Dr Morris has an interest in serving learners and the public through scientific and evidence-based outreach. The discussion in the studio centred on science, technology, research and practice in education.

Talking points and questions:
- The ways in which people who were switched off the sciences at school retain their curiosity and can reengage with science at a later point in life
- Examples of topics and ways of approaching science that have been revealed by Dr Morris' science discussion groups
- Research-informed educational practice, and research-informed educational policy
- Ways in which research can be transformed and mediated for use

Material discussed in today's episode includes:
- Smartphones in schools? Only when they clearly support learning: the 2023 Global Education Monitoring Report has just released a call for technology only to be used in class when it supports learning outcomes, and this includes the use of smartphones
- The Skinny on AI for Education, EVR's newest publication featuring insights, trends and developments in the world of AI Ed
The fifth and final episode in the Evidence-Based EdTech miniseries produced by Professor Rose Luckin's EDUCATE Ventures Research, exploring education, research, AI and EdTech, and hosted on The Edtech Podcast. The Evidence-Based EdTech miniseries connects, combines, and highlights leading expertise and opinion from the worlds of EdTech, AI, Research, and Education, helping teachers, learners, and technology developers get to grips with ethical learning tools led by the evidence.

In our previous episode, Rose was in conversation with representatives from Make (Good) Trouble, Feminist Internet, and Soundwaves Foundation, an organisation pursuing technology to assist deaf or hearing-impaired students in the classroom. We asked a number of questions centred on what inclusive technology looks like to each of the guests in the room, given that they had and worked with unique perspectives, and what their thoughts were around user agency and why it is so vital that EdTech developers be mindful of this in the creation of their products. Our last question was on what we should demand of technology so that it caters to people from diverse backgrounds. Was it data, the context, or access that allowed tech to help those from diverse backgrounds?

In this episode, we'd like to extend these same thoughts on DEI and ethics outward, beyond the borders of the UK. We'll be asking:
- Are international education ecosystems implementing their diversity, equity and inclusion any differently from the UK? What could be learned from them that EdTech developers and educationalists can adopt and use in the UK?
- From an international perspective, is the technology developed in the first world, but exported to the third, sensitive to the context of its use or too prescriptive? And as an additional point, has the third world reshaped its attitudes towards diversity and ethics in technology in line with what it believes the first world will find desirable or employable?
- There's rumour of national and international standards for good evidence in EdTech coming out of some countries, with presumably varying emphasis placed on adherence to these standards by different governments and regulatory bodies. What is our guest's opinion on how robust regulation needs to be where EdTech evidence is concerned, and how strictly should such standards be enforced when developing and using EdTech?

Our guest this week is Jane Mann, Managing Director of the Cambridge Partnership for Education. With over two decades of experience in the education sector, Jane is now focused on working with ministries of education, government agencies, NGOs, donor agencies and educational organisations to advocate for, design and implement effective programmes of education transformation. The Cambridge Partnership for Education works across the globe in curriculum and assessment design and development, creation of teaching and learning resources, professional development, stakeholder engagement, and English language learning and skills.

Thank you to Cambridge Partnership for Education for sponsoring this episode, and for supporting the Evidence-Based EdTech series on the EdTech Podcast.
Welcome to the fourth episode in a series produced by Professor Rose Luckin's EDUCATE Ventures Research, exploring 'Evidence-Based EdTech', and hosted on The Edtech Podcast. For this episode we will examine topics such as how we use existing technology to assist with DEI and ethics, and what we know of technology that does not include this perspective. We ask why that might be, and we look at the art of data capture, and data irresponsibility: what are we capturing that we shouldn't, who is being affected by our biases, and whether this is a step in the development of technological interventions that organisations can afford to skip.

We'll be asking:
- How do we mitigate systemic bias and scaled harm?
- What are examples of inclusive technology that accommodate the learning styles, online behaviours, device access, and dis/abilities of learners?
- Can we place more pressure on leadership in schools and institutions to incorporate inclusive technologies?
- What do we know of user agency, and how does that affect the design and transparency of an EdTech solution?
Welcome to this episode in our series produced by Professor Rose Luckin's EDUCATE Ventures Research, and hosted on The Edtech Podcast. In this episode, Karine and Rose meet to discuss the Online Safety Bill, school absences, and ChatGPT, the latter of which has produced huge public debate, from teacher anxieties to developer felicitations, questions from parents, and columnist think pieces, all around the presence of AI in the classroom. With all of these concerns, however, is it possible that ChatGPT has done education a favour? OpenAI's ChatGPT is the third and latest version of their text-generating AI technology, and it's been trained on over 45 terabytes of data. If that seems like a lot, it is: the entirety of English-language Wikipedia accounts for just 1% of that volume in comparison. The talk of Twitter and intrigued educationalists in schools around the anglosphere, much of the discussion has been around its use as a replacement for human cognition: will students use it to cheat in essays and assessments? Does its information retrieval dumb down student opportunities for learning when material is simply parroted, rather than interrogated and the learning then applied in novel contexts? In this week's episode, Karine and Rose discuss practical uses for this incredibly powerful tool, and explain why human and machine intelligence can work together successfully to improve teaching and learning, and our understanding of AI.

Material discussed in this episode includes:
- Square Peg's new book by Fran Morgan and Ellie Costello, with Ian Gilbert: Square Pegs: Inclusivity, Compassion, and Fitting In - a Guide for Schools, available here
- EVR and Cambridge Partnership for Education's Covid-19 report: Shock to the System: Lessons from Covid-19, available here
Welcome to this episode in our series produced by Professor Rose Luckin's EDUCATE Ventures Research, exploring 'Evidence-Based EdTech', and hosted on The Edtech Podcast. This mini-series connects, combines, and highlights leading expertise and opinion from the worlds of EdTech, AI, Research, and Education, helping teachers, learners, and technology developers get to grips with ethical learning tools that are led by the evidence. For this episode, Rose and Karine play host to Lord Jim Knight in the EdTech Podcast Zoom studio, and try to understand the arguments surrounding the establishment of Oak National Academy as an 'Arm's Length Body'. They dig into whether Oak Academy - an organisation providing an online classroom and resource hub, set up in the UK during the pandemic - has shifted substantially from a well-intentioned response to Covid to something more challenging for the EdTech sector and potentially those it serves. And finally, shout out to Rose, Karine and Jim for also digging into the world of ChatGPT and how we should start thinking of it within our classrooms and for our young people. Thank you to Learnosity for sponsoring this episode, and for supporting the Evidence-Based EdTech series on the EdTech Podcast.
Welcome to the second episode in a series produced by Professor Rose Luckin's EDUCATE Ventures Research, exploring 'Evidence-Based EdTech', and hosted on The Edtech Podcast. This mini-series connects, combines, and highlights leading expertise and opinion from the worlds of EdTech, AI, Research, and Education, helping teachers, learners, and technology developers get to grips with ethical learning tools that are led by the evidence. For this episode, we examine the state of technology in work, training, and mentorship, and ask what role evidence plays when we are dealing with environments where (usually) productivity is the thing that's measured. Is productivity for its own sake good? How do we know that the technology the current and future workforce encounters benefits them? As many roles demand a more complex skill set, and fluency in technology, is there a risk we're leaving people behind? What do employability, recruitment, and skills look like in the age of the portfolio career? We'll be asking: Are the skills, the ways of working, ways of thinking, and ways of measuring success that schools teach young people appropriate for today's world of work? How do we balance human intelligence in the workplace with, broadly, 'machine intelligence'; that is, how do we work with and support the human learner or worker alongside the tech that many workplaces ask us to use? What do we mean by 'deep skills/reskilling/upskilling', and this idea that people aren't just sticking to one role, one organisation, or one type of work for 20, 30, 50 years? And most importantly, what evidence is there to help us understand what young people need, and what can be done to effectively prepare young people for their ever-changing futures? Thank you to Learnosity for sponsoring this episode, and for supporting the Evidence-Based EdTech series on the EdTech Podcast.
Welcome to this first episode in a series produced by Professor Rose Luckin's EDUCATE Ventures Research, exploring 'Evidence-Based EdTech', and hosted on The Edtech Podcast. This mini-series connects, combines, and highlights leading expertise and opinion from the worlds of EdTech, AI, Research, and Education, helping teachers, learners, and technology developers get to grips with ethical learning tools that are led by the evidence. For this episode we examine the presence of EdTech in schools, looking at how we judge whether the tech 'works' or not. We explore what makes for good evidence, why contextual use is significant, and how school CPD, infrastructure development, and staff capacity building are vital to making the most of the tools at our disposal. We are chatting to: Tom Hooper – Founder and CEO, Third Space Learning Neelam Parmar – Director of Digital Transformation and Education, AISL Harrow Schools Richard Culatta – Author, CEO, ISTE Katie Novak – Strategist, Writer, SMART Technologies Host: Rose Luckin – Professor of Learner Centred Design, UCL, Founder, EDUCATE Ventures Research Can our schools operate as testbeds for emerging technology, and is this an ethical or beneficial use of class time? Why is an evidence-led investment and regulatory ecosystem so important? What is a 'research mindset' for aspiring technology developers, and do users even care about the evidence? We'll be asking: How do we know EdTech works? What does good evidence look like, and what can stakeholders in the ecosystem do to ensure it is high-quality? What are the biggest barriers to generating good evidence and getting it into the hands of the people in companies responsible for technology development, and into the hands of those using that technology? Thank you to SMART Technologies for sponsoring this episode, and for supporting the Evidence-Based EdTech series on the EdTech Podcast.
John explores the role AI is bound to play in the future of education with special guest, Professor Rose Luckin.
Welcome to Ep 2! Our amazing guests are Professor Rose Luckin, Director of EDUCATE and Professor of Learner Centred Design at UCL, Carly Kind, Director of the Ada Lovelace Institute and Elena Sinel, Teens In AI Founder. We discuss the future of AI, advice for teens interested in tech and so much more! For more information, follow us on twitter @TeensInAIPod
Welcome to Ep 1!! Our amazing guests are Professor Rose Luckin, Director of EDUCATE and Professor of Learner Centred Design at UCL and Carly Kind, Director of the Ada Lovelace Institute. We discuss bias in the A-Level Algorithm and UK school system, the potential of AI in education and much more. For more information, follow us on twitter @TeensInAIPod
With constant advancements in technology impacting different sectors, how could technology help educate people better, and what effect could this have on evolving education? In this episode, Professor Rose Luckin, founder and Director of UCL EDUCATE, explains how technology is both the cause and solution for change in the education sector, and how implementing technology like AI can create a personal education experience to help learners build knowledge beyond just information.
What's in this episode? Hello listeners and welcome back to The Edtech Podcast, the show about improving the dialogue between “ed” and “tech” for better innovation and impact. This week we share recordings made live at the launch of the European Edtech Network during London Edtech Week. For those who have been following the news, the ambitions of the network are interesting in the wake of AltSchool's pivot after raising nearly $200 million; many attribute the perceived failure or change of direction at AltSchool to the lack of educators at the helm, relying only on the Silicon Valley faith in everything tech. We also throw in a few listener messages and some news from the world of edtech events. Enjoy and have a great week! People Sophie Bailey is the Founder and Presenter of The Edtech Podcast | Twitter: @podcastedtech Listener messages; Samuel Munyuwiny from the African Institute for Children’s Studies calling in from Nairobi, Kenya | Twitter: @SMunyuwiny Art Fridrich, A Higher Vision, USA | Twitter: @Ahighervision Event news from; Anni Mansikkaoja, Dare to Learn | Twitter: @Dare_ToLearn Ben Sowter, Senior VP at QS to talk about Reimagine Education | Twitter: @bensowter Guests; Avi Warshavsky, CEO, MindCET | Twitter: @aviwarshavsky Cyril Ghanem, Head of Business Development, AppScho | Twitter: @AppScho Leila Guerra, Assistant Dean of Programmes, Imperial College Business School | Twitter: @leila_guerra Professor Rose Luckin, Professor of Learner Centred Design at UCL and Director of EDUCATE | Twitter: @Knowldgillusion Angela McFarlane | Twitter: @AngelaMcFarlane Katy Fryatt, Founder & CEO, Learnit | Twitter: @katyfryatt Vic Vuchic, Chief Innovation Officer at Digital Promise Global and Executive Director of the Learner Variability Project | Twitter: @DigitalPromise Mary Curnock Cook, former CEO of UCAS & Chair of Emerge | Twitter: @MaryCurnockCook Alison Clark-Wilson, Principal Research Lead, UCL EDUCATE | Twitter: @AliClarkWilson Lucía Figar, IE Chief of Corporate Innovation &
Chairwoman IE Rockets, IE University | Twitter: @luciafigar Show Notes and References: Check out https://theedtechpodcast.com/edtechpodcast for the full show notes. Tell us your story: We'd love to hear your thoughts. Record a quick free voicemail via Speakpipe for inclusion in the next episode, or post your thoughts and follow-on links via Twitter @podcastedtech or via The Edtech Podcast Facebook page or Instagram.
Rose Luckin, Professor of Learner Centred Design at the UCL Institute of Education, is worried about the machines. Or more specifically, artificially intelligent machines seeping into education and the lack of response, thus far, from teachers as to what that might mean for their jobs. “The computer can do the academic knowledge delivery in a very individualised way for each learner,” she explains on this week's Tes Podagogy. “For academic knowledge, in well-designed subject areas, the evidence shows that these systems can be as effective – if they are well designed – as a human teacher, but not a human teacher acting one-to-one.” The implications of this have not been fully realised, she believes. “If what you prize in your education system is academic knowledge and we can build systems that can teach it and learn it faster than we can, you put yourself in a tricky situation,” she says. “You can understand why someone in charge of the purse strings might think: why do I need these humans? “That is an apocalyptic scenario, certainly dystopian, and I don't think that is what we should do. But I think we need to recognise that if we don't change our perceptions about what we should be valuing in our education system we do run the risk of handing over too much to artificially intelligent systems.” In a wide-ranging podcast interview, Professor Luckin explains how we build a system that guards against the slow creep of technology-led teaching into the classroom. She believes it is about better understanding intelligence and the difference between information and knowledge.
In this episode, Rachel and Jason speak with Professor Rose Luckin from the UCL Knowledge Lab at University College London about the impact of artificial intelligence in higher education. We discussed the report: Luckin, R., Holmes, W., Griffiths, M. & Forcier, L. B. (2016). Intelligence Unleashed: An argument for AI in Education. London: Pearson. Producers: Dr Rachel Searston & Dr Jason Lodge Episode recorded: 14th August 2017 Music: Bensound
Natalie Campbell explores the thoughts of Araceli Camargo – an entrepreneur and cognitive neuroscientist who believes artificial intelligence could spell a new chapter in human development. But are we heading towards utopia or something much more sinister? And how can we prepare for more imminent changes to the jobs market? We hear from robotics expert Nick Hawes, education specialist Professor Rose Luckin, and neo-Luddite author Kirkpatrick Sale.
Mondial Learning Podcast, with Professor Rose Luckin. Recently, I was lucky enough to interview Professor Rose Luckin from the UCL Knowledge Lab. We talked about her work on Artificial Intelligence in learning contexts, and the future of education.