This and all episodes at: https://aiandyou.net/ . "We are in the privileged - or unfortunate - situation of being the last generation of humans to be parenting AIs. All the future generations of AIs are going to be parented mainly by AIs. And so even more than with our human children and grandchildren, we have one shot at raising this next generation correctly." I am talking with De Kai, a pioneering professor of AI who built the web's first global online language translator that spawned Google Translate and Microsoft Bing Translator, and author of the new book Raising AI: An Essential Guide to Parenting Our Future. De Kai was honored by the Association for Computational Linguistics as one of its 17 Founding Fellows and holds joint appointments at HKUST's Department of Computer Science and Engineering and Division of Arts and Machine Creativity, and at Berkeley's International Computer Science Institute. He is Independent Director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google's AI ethics council. So he's helped create some of the most important mechanisms and institutions of the modern AI age. In the conclusion of our interview, we talk about how to parent AI and what that means, the responsibilities of the AI companies, a kind of parent-teacher association for AI and how to get involved, and our responsibilities to the next generation. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Everyone is talking about AI, but why do up to 80% of corporate AI initiatives fail to reach production? The gap between a cool demo and a reliable, valuable product is massive, and navigating it requires a solid strategy. In this episode, Allen sits down with Dr. Jenna Kova, a PhD in Computational Linguistics, founder of AI Strategy Partners, and author of the Manning book "Art of AI Product Development." Dr. Kova breaks down the common pitfalls teams face and provides a clear framework for success. This is a must-listen for any developer, product manager, or leader trying to move beyond the hype and build AI products that deliver real-world value.

IN THIS EPISODE
00:48 - Why 80% of AI Initiatives Fail
04:14 - Vibe Coding
20:40 - Finding the Right Business Problem for AI
26:42 - AI Governance: Managing Compliance, Privacy, and Data Access
34:21 - The AI Opportunity Tree & System Blueprint
46:47 - Why You Need to Start Building Your AI Know-How Today
This and all episodes at: https://aiandyou.net/ . As AI becomes more and more powerful, what is our responsibility, collectively? I am joined by De Kai, a pioneering professor of AI who built the web's first global online language translator that spawned Google Translate and Microsoft Bing Translator. And he has answered those questions with his new book, Raising AI: An Essential Guide to Parenting Our Future. De Kai was honored by the Association for Computational Linguistics as one of its 17 Founding Fellows and holds joint appointments at Hong Kong University of Science and Technology's Department of Computer Science and Engineering and Division of Arts and Machine Creativity, and at Berkeley's International Computer Science Institute. He is Independent Director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google's AI ethics council. So he's helped create some of the most important mechanisms and institutions of the modern AI age. We talk about why we should parent AI, the existential issues that drove him to write the book, seeing AI as neuro-atypical, and the architecture and features of AI that are important to consider in how we relate to it. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
In a world in which artificial intelligence will change everything, we need a leader to illuminate the impact of “the automation of thought” on our way of life. How is the widespread use of AI impacting our world, our minds, and our future—not just as a technical innovation but as a mode of culture? Should we be afraid? De Kai has been a trailblazer in the world of AI. He invented and built the world's first global-scale online language translator that spawned Google Translate, Yahoo Translate, and Microsoft Bing Translator. He brings decades of his paradigm-shifting work at the nexus of artificial intelligence and society to help people make sense of their interactions with AI at both personal and collective levels—ethically and responsibly. While Hollywood narratives of AI destroying humanity might be overblown, the age of AI is reshaping the future of civilization. What should each of us do as the responsible adults in the room? De Kai asks critical, overlooked questions requiring urgent attention. Dr. De Kai is professor of computer science and engineering at HKUST and distinguished research scholar at the International Computer Science Institute. He was honored by the Association for Computational Linguistics as one of only 17 Founding Fellows. De Kai is an independent director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google's AI ethics council. His book Raising AI provides an accessible framework to navigate the enormous impact of AI upon human culture, our values, and the flow of information. De Kai demonstrates that society can not only survive the AI revolution but also flourish in a new world where we all play our part in a more humane, compassionate, and understanding society—alongside our artificial children. 
Our moderator, Camille Crittenden, Ph.D., is the executive director of CITRIS and the Banatao Institute and co-founder of the CITRIS Policy Lab and EDGE (Expanding Diversity and Gender Equity) in Tech at UC. She served as chair of the California Blockchain Working Group in 2019–20 and co-chaired the Student Experience subcommittee of the University of California's Presidential Working Group on Artificial Intelligence. She continues to serve on the UC AI Council. A Technology & Society Member-led Forum program. Forums at the Club are organized and run by volunteer programmers who are members of The Commonwealth Club, and they cover a diverse range of topics. Learn more about our Forums. Organizer: Gerald Anthony Harris. Learn more about your ad choices. Visit megaphone.fm/adchoices
A team of researchers at Trinity College Dublin has received €500,000 in funding to develop an AI-enabled platform to help teachers create assessments and provide formative feedback to learners. The project is called Diotima and is supported by The Learnovate Centre, a global research and innovation centre in learning technology in Trinity College Dublin. Diotima began its partnership with Learnovate in February this year and is expected to spin out as a company in 2026. The €500,000 funding was granted under Enterprise Ireland's Commercialisation Fund, which supports third-level researchers to translate their research into innovative and commercially viable products, services and companies. Diotima supports teaching practice by using responsible AI to provide learners with feedback, leading to more and better assessments and improved learning outcomes for students, and a more manageable workload for teachers. The project was co-founded by Siobhan Ryan, a former secondary school teacher, biochemist and environmental scientist, and Jonathan Dempsey, an EdTech professional with both start-up and corporate experience. Associate Professor Ann Devitt, Head of the Trinity School of Education, and Carl Vogel, Professor of Computational Linguistics and Director of the Trinity Centre for Computing and Language Studies, are serving as co-principal investigators on the project. Diotima received the funding in February. Since then, the project leaders have established an education advisory group formed of representatives from post-primary and professional education organisations. The Enterprise Ireland funding has facilitated the hiring of two post-doctoral researchers. They are now leading AI research ahead of the launch of an initial version of the platform in September 2025. Diotima aims to conduct two major trials of the platform as they also seek investment. Co-founder Siobhan Ryan is Diotima's Learning Lead. 
After a 12-year career in the brewing industry with Diageo, Siobhan re-trained as a secondary school teacher before leaving the profession to develop the business case for a formative assessment and feedback platform. Her experience in the classroom made her realise that she could have a greater impact by leveraging AI to create a platform to support teachers in a safe, transparent, and empowering way. Her fellow co-founder Jonathan Dempsey is Commercial Lead at Diotima. He had been CEO of the Enterprise Ireland-backed EdTech firm Digitary, which is now part of multinational Instructure Inc. He held the role of Director of UK and Ireland for US education system provider Ellucian and Head of Education and Education Platforms for Europe with Indian multinational TCS. Jonathan has a wealth of experience at bringing education technologies to market. Learnovate Centre Director Nessa McEniff says: "We are delighted to have collaborated with the Diotima team to secure €500,000 investment from Enterprise Ireland's Commercialisation Fund. Diotima promises to develop into a revolutionary platform for learners in secondary schools and professional education organisations, delivering formative feedback and better outcomes overall. We look forward to supporting them further as they continue to develop the platform in the months ahead." Enterprise Ireland Head of Research, Innovation and Infrastructure Marina Donohoe says: "Enterprise Ireland is delighted to support Diotima under the Commercialisation Fund. We look forward to seeing them continue in their mission to transform teaching practice through AI enabled assessment and feedback. We believe that the combination of excellence in AI and in education from Trinity College, expertise in education technology from the Learnovate Centre and focus on compliance with the EU AI Act and other regulations will see the Diotima team make a global impact". 
Diotima Learning Lead and co-founder Siobhan Ryan says: "We're delighted to have received such a significant award from the Enterprise Ireland C...
Our guest today is Dr. Ken Forbus, the Walter P. Murphy Professor of Computer Science and a Professor of Education at Northwestern University. Joining Dr. Ken Ford to co-host today's interview is Dr. James Allen, who was IHMC's associate director until he retired a few years ago. James is a founding fellow of the American Association for Artificial Intelligence and a perfect fit for today's discussion with Dr. Forbus, who, like James, is an AI pioneer. Back in 2022, James was named a fellow by the Association for Computational Linguistics, an organization that studies computational language processing, another field he helped pioneer. Dr. Forbus also is a Fellow of the Association for the Advancement of Artificial Intelligence and was the inaugural winner of the Herbert A. Simon Prize for Advances in Cognitive Systems. He is well-known for his development of the Structure Mapping Engine. In artificial intelligence and cognitive science, the Structure Mapping Engine is a computer simulation of analogy and similarity comparisons that helped pave the way for computers to reason more like humans. Show Notes: [00:03:07] Ken opens the interview with Dr. Forbus by asking if it is true that he had an unusual hobby for a nerdy kid growing up. [00:04:18] James mentions that Dr. Forbus' family moved often when he was younger and asks how that affected him. [00:05:18] Ken mentions that when Dr. Forbus was in high school, he filled his free time reading about psychology and cognition before eventually coming across some articles on AI. Ken asks Dr. Forbus to talk about this experience and what happened next. [00:07:49] James asks Dr. Forbus if he remembers the first computer he owned. [00:09:17] Ken asks Dr. Forbus if there was anything, other than its reputation, that led him to attend MIT. [00:10:09] James mentions that for the past few decades, Dr. Forbus has been working on developing “human like” AI systems.
While much of AI research and development has been focused on meeting the standard of the Turing test, James asks Dr. Forbus why he is not a fan of the Turing test. [00:12:24] Ken mentions that Dr. Forbus received his Ph.D. from MIT in 1984, the same year that Apple released the first Macintosh, which was rolled out with a famous Super Bowl ad. This computer was the first successful mouse driven personal computer with a graphical interface. Ken asks Dr. Forbus what he remembers about that ad, and what his reaction to it was at the time. [00:13:22] James mentions that 1984 was also the year that Dr. Forbus made his first splash in the AI world with his paper on qualitative process theory. James goes on to explain that at the time, qualitative reasoning regarding quantities was a major problem for AI. In his paper, Dr. Forbus proposed qualitative process theory as a representational framework for common sense physical reasoning, arguing that understanding common sense physical reasoning first required understanding of processes and their effects and limits. James asks Dr. Forbus to give an overview of this paper and its significance. [00:18:10] Ken asks Dr. Forbus how it was that he ended up marrying one of his collaborators on the Structure Mapping Engine project, Dedre Gentner. [00:19:14] James explains that Dedre's Structure Mapping Theory explains how people understand and reason about relationships between different situations, which is central to human cognition. James asks Dr. Forbus how Dedre's theory was foundational for the Structure Mapping Engine (SME). [00:25:19] Ken mentions how SME has gone through a number of changes and improvements over the years, as documented in Dr. Forbus' 2016 paper “Extending SME to handle large scale cognitive modeling.” Ken asks, as a cognitive model, what evidence Dr. Forbus has used to argue for the psychological and cognitive plausibility of SME. [00:30:00] Ken explains that many AI systems rely on deep learning,
Is AI Becoming Conscious? The Hidden Evolution No One is Talking About | Prof. De Kai (Part 2)
Are We Raising AI… or Is AI Raising Us? | Professor De Kai on The Dov Baron Show
Michael Keller is a prominent pastor with a distinguished career intertwining academia and theology. Having spent his formative years in New York City and obtained degrees in History and Psychology from Vanderbilt University, Michael pursued theological studies at Gordon Conwell Theological Seminary, earning both M.Div and THM degrees. He has served in various pastoral roles across London and Boston, and most notably, Michael holds a Ph.D. in Computational Linguistics applied to the sermons of Jonathan Edwards from the Free University in Amsterdam. He currently pastors in Manhattan, where he engages with a diverse community, addressing contemporary Christian challenges. Rebecca and Michael Keller explore the complexities of faith in urban environments and the changing landscape of spiritual conversations in cities like New York and Boston, addressing questions around Christianity's relevance and goodness in modern society.

Sign up for weekly emails at RebeccaMcLaughlin.org/Subscribe

Follow Confronting Christianity: Instagram | X

Purchase Rebecca's Books:
Confronting Christianity: 12 Hard Questions for the World's Largest Religion
Does the Bible Affirm Same-Sex Relationships?: Examining 10 Claims about Scripture and Sexuality
10 Questions Every Teen Should Ask (and Answer) about Christianity
Jesus through the Eyes of Women: How the First Female Disciples Help Us Know and Love the Lord
No Greater Love: A Biblical Vision for Friendship
Confronting Jesus: 9 Encounters with the Hero of the Gospels

Amazon affiliate links are used where appropriate. As an Amazon Associate we earn from qualifying purchases, thank you for supporting! Produced by The Good Podcast Co.
Guest: Dr. Christian Hempelmann, Professor of Computational Linguistics in the Department of Literature and Languages at East Texas A&M University Learn more about your ad choices. Visit megaphone.fm/adchoices
-Why is mental health in BC getting worse? Guest: Jonny Morris, CEO of the Canadian Mental Health Association of BC -What's driving up BC's liveable wage? Guest: Iglika Ivanova, Senior Economist at the Canadian Center for Policy Alternatives' BC Branch and Lead Author of the Study -What caused the laughter epidemic of 1962? Guest: Dr. Christian Hempelmann, Professor of Computational Linguistics in the Department of Literature and Languages at East Texas A&M University -Why should we all become experts at something? Guest: Hannah Poikonen, Neuroscientist and Researcher at ETH Zurich (public university in Zurich) Learn more about your ad choices. Visit megaphone.fm/adchoices
The Cognitive Crucible is a forum that presents different perspectives and emerging thought leadership related to the information environment. The opinions expressed by guests are their own, and do not necessarily reflect the views of or endorsement by the Information Professionals Association. During this episode, Remi Whiteside discusses his Ph.D. research and dissertation which is entitled: Peering into US Army Media, Information, and Data Literacy Fundamentals against Malign Information in the Open Information Environment: A Qualitative Case Study. According to Remi Whiteside, the US Army currently has no institutionalized program-of-record for educational development, uniquely designed for its population of uniformed Information Professionals in training to detect, analyze, and scrutinize malign information in the Open Information Environment. Unlike its peer services, the US Army does not perceive malign information, a reimagined tool of ideological subversion, as a high-caliber threat so far as to invest the time, money, or resources into critical, foundational metaliteracy competencies needed for its Information Professionals for the Open Information Environment. Recording Date: 1 Aug 2024 Research Question: Remi Whiteside suggests an interested student ask–in relation to media and information—how do narratives derived from the Open Information Environment shape servicemembers' metanarratives and do these metanarratives conflict with military identity? 
Resources: Peering into US Army Media, Information, and Data Literacy Fundamentals against Malign Information in the Open Information Environment: A Qualitative Case Study Syntactic Structures by Noam Chomsky Rhet Ops: Rhetoric and Information Warfare (Composition, Literacy, and Culture) by Jim Ridolfo and William Hart-Davidson NOEMA Magazine Link to full show notes and resources Guest Bio: Remington Whiteside is an active-duty Chief Warrant Officer in the US Army, career educator, and academic researcher into MIDLE (media, information, and data literacy education) and M2DP (malinformation, misinformation, disinformation, propaganda). He started his career as an enlisted Cryptologic Linguist, with work in strategic, SOF, and training environments. He transitioned to Signals Intelligence Warrant Officer, specializing in tactical SIGINT, COMINT, OSINT, PAI, OPSEC, and Intelligence Support to Cyber and Electromagnetic Warfare (EW) training as an Observer-Coach-Trainer at Fort Johnson, Louisiana at the Joint Readiness Training Center. Remi holds an undergraduate degree in Middle Eastern studies, a graduate degree in Applied Linguistics with a focus on Computational Linguistics, and a Doctor of Philosophy degree in Education. He is lovingly supported by his wife Sara and his three children: Evolette, Torben, and Soren. About: The Information Professionals Association (IPA) is a non-profit organization dedicated to exploring the role of information activities, such as influence and cognitive security, within the national security sector and helping to bridge the divide between operations and research. Its goal is to increase interdisciplinary collaboration between scholars, practitioners, and policymakers with an interest in this domain. For more information, please contact us at communications@information-professionals.org. Or, connect directly with The Cognitive Crucible podcast host, John Bicknell, on LinkedIn.
Disclosure: As an Amazon Associate, 1) IPA earns from qualifying purchases, 2) IPA gets commissions for purchases made through links in this post.
Peoples & Things host, Lee Vinsel, talks to Emily Bender, Professor of Linguistics, Director of the Masters of Science in Computational Linguistics program, and Director of the Computational Linguistics Laboratory at University of Washington, about her work on artificial intelligence criticism. Bender is also an adjunct professor in the School of Computer Science and Engineering and the Information School at UW; she is a member of the Tech Policy Lab, the Value Sensitive Design Lab, the Distributed AI Research Institute, and RAISE, or Responsibilities in AI Systems and Experiences; *AND*, with Alex Hanna, she is co-host of the Mystery AI Hype Theater podcast, which you should check out. Vinsel and Bender talk about the current AI bubble, what is driving it, and the technological potentials and limitations of this technology. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
In this podcast episode, Diana and Nicole discuss the use of Generative AI in scholarly and genealogical writing, emphasizing the importance of transparency in disclosing AI assistance. They talk about editorial guidelines from scholarly journals and the Association for Computational Linguistics, which suggest clear declarations of AI's involvement in literature searches, drafting, and idea generation. Key points include recommendations for crediting AI-generated content not as authors but by detailing the AI's role. They also explore citation practices, such as including the AI model and user details, and stress the user's responsibility to verify and refine AI outputs in professional settings, advocating for explicit disclosures in various document sections. Nicole generated this summary with ChatGPT 4. Links Disclosing Use of AI for Writing Assistance in Genealogy - https://familylocket.com/disclosing-use-of-ai-for-writing-assistance-in-genealogy/ Jordan Boyd-Graber et al., “ACL 2023 Policy on AI Writing Assistance,” ACL 2023, January 10, 2023, https://2023.aclweb.org/blog/ACL-2023-policy/. Association for Computational Linguistics (ACL) Rolling Review, “Call for Papers,” ACL Rolling Review, 2024, https://aclrollingreview.org/cfp. Baldy Dyer research report from log – Claude AI April 2024 - https://familylocket.com/wp-content/uploads/2024/04/Baldy-Dyer-research-report-from-log-Claude-AI-April-2024.pdf Sponsor – Newspapers.com For listeners of this podcast, Newspapers.com is offering new subscribers 20% off a Publisher Extra subscription so you can start exploring today. Just use the code “FamilyLocket” at checkout.
Research Like a Pro Resources Airtable Universe - Nicole's Airtable Templates - https://www.airtable.com/universe/creator/usrsBSDhwHyLNnP4O/nicole-dyer Airtable Research Logs Quick Reference - by Nicole Dyer - https://familylocket.com/product/airtable-research-logs-for-genealogy-quick-reference/ Research Like a Pro: A Genealogist's Guide book by Diana Elder with Nicole Dyer on Amazon.com - https://amzn.to/2x0ku3d 14-Day Research Like a Pro Challenge Workbook - digital - https://familylocket.com/product/14-day-research-like-a-pro-challenge-workbook-digital-only/ and spiral bound - https://familylocket.com/product/14-day-research-like-a-pro-challenge-workbook-spiral-bound/ Research Like a Pro Webinar Series 2024 - monthly case study webinars including documentary evidence and many with DNA evidence - https://familylocket.com/product/research-like-a-pro-webinar-series-2024/ Research Like a Pro eCourse - independent study course - https://familylocket.com/product/research-like-a-pro-e-course/ RLP Study Group - upcoming group and email notification list - https://familylocket.com/services/research-like-a-pro-study-group/ Research Like a Pro with DNA Resources Research Like a Pro with DNA: A Genealogist's Guide to Finding and Confirming Ancestors with DNA Evidence book by Diana Elder, Nicole Dyer, and Robin Wirthlin - https://amzn.to/3gn0hKx Research Like a Pro with DNA eCourse - independent study course - https://familylocket.com/product/research-like-a-pro-with-dna-ecourse/ RLP with DNA Study Group - upcoming group and email notification list - https://familylocket.com/services/research-like-a-pro-with-dna-study-group/ Thank you Thanks for listening! We hope that you will share your thoughts about our podcast and help us out by doing the following: Write a review on iTunes or Apple Podcasts. If you leave a review, we will read it on the podcast and answer any questions that you bring up in your review. Thank you! 
Leave a comment or question in the comment section below. Share the episode on Twitter, Facebook, or Pinterest. Subscribe on iTunes, Stitcher, Google Podcasts, or your favorite podcast app. Sign up for our newsletter to receive notifications of new episodes - https://familylocket.com/sign-up/ Check out this list of genealogy podcasts from Feedspot: Top 20 Genealogy Podcasts - https://blog.feedspot.com/genealogy_podcasts/
Greg Baker knows more than most about AI and where it's headed. As an AI futurist and lecturer in Computational Linguistics at Macquarie University, Greg educates students on the intricacies of Artificial Intelligence for Text and Vision, and blends his theoretical insights with practical applications. He helps the C-suite understand the future of labour, how to maximise AI productivity gains and how to gain a deep technical understanding of what is possible. He is a director at the Institute for Open Systems Technologies, has worked at Google and CSIRO, and is a leading voice in how to understand the opportunities that AI offers us today and into the future. If you're looking to discover how AI is impacting copywriters and the writing industry, this is the episode you have been waiting for. Read the show notes. This podcast is brought to you by the Australian Writers' Centre. WritersCentre.com.au Join our community of copywriters at CopyClub.com.au. See omnystudio.com/listener for privacy information.
“I’m a jack of all trades – all linguists are” Falene McKenna is a linguist, tech enthusiast, conversation wizard, and a “true robot whisperer”. She studied Computational Linguistics at the University of Alberta. Since then, she’s led QA crusades, expanded beta programs across North American homes, and built customer support systems from the ground up. Falene has presented at various events, including the Conversation Design Festival 2022, emphasizing the importance of bot building standards, language expertise in AI experiences, and persona building. She is also an active member of Women in Voice, promoting gender diversity in the voice technology industry. Falene McKenna on LinkedIn Falene McKenna’s speaking profile on Sessionize Alberta Language Technology Lab (ALT Lab) Conversation Design Institute Danielle Boyer’s Skobots Topics include – language revitalization – computational linguistics – QA – robotics – conversational AI – conversation design – data science – academic precarity – job interviews

The post Episode #37: Falene McKenna first appeared on Linguistics Careercast.
Emily and Alex are joined by Stanford PhD student Haley Lepp to examine the increasing hype around LLMs in education spaces - whether they're pitched as ways to reduce teacher workloads, increase accessibility, or simply "democratize learning and knowing" in the Global South. Plus a double dose of devaluing educator expertise and fatalism about the 'inevitability' of LLMs in the classroom.

Haley Lepp is a Ph.D. student in the Stanford University Graduate School of Education. She draws on critical data studies, computational social science, and qualitative methods to understand the rise of language technologies and their use for educational purposes. Haley has worked in many roles in the education technology sector, including curriculum design and NLP engineering. She holds an M.S. in Computational Linguistics from the University of Washington and a B.S. in Science, Technology, and International Affairs from Georgetown University.

References:
University of Michigan debuts 'customized AI services'
Al Jazeera: An AI classroom revolution is coming
California Teachers Association: The Future of Education?
Politico: AI is not just for cheating
Extra credit: "Teaching Machines: The History of Personalized Learning" by Audrey Watters

Fresh AI Hell:
AI-generated travel article for Ottawa -- visit the food bank!
Microsoft Copilot is "usefully wrong"
* Response from Jeff Doctor
"Ethical" production of "AI girlfriends"
Withdrawn AI-written preprint on millipedes resurfaces, causing alarm among myriapodological community
New York Times: How to Tell if Your A.I. Is Conscious
* Response from VentureBeat: Today's AI is alchemy.
EU

You can check out future livestreams at https://twitch.tv/DAIR_Institute.
Follow us!
Emily - Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social
Alex - Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social
Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
Over the last year, AI large-language models (LLMs) like ChatGPT have demonstrated a remarkable ability to carry on human-like conversations in a variety of different contexts. But the way these LLMs "learn" is very different from how human beings learn, and the same can be said for how they "reason." It's reasonable to ask, do these AI programs really understand the world they are talking about? Do they possess a common-sense picture of reality, or can they just string together words in convincing ways without any underlying understanding? Computer scientist Yejin Choi is a leader in trying to understand the sense in which AIs are actually intelligent, and why in some ways they're still shockingly stupid.
Blog post with transcript: https://www.preposterousuniverse.com/podcast/2023/08/28/248-yejin-choi-on-ai-and-common-sense/
Support Mindscape on Patreon.
Yejin Choi received a Ph.D. in computer science from Cornell University. She is currently the Wissner-Slivka Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research director at AI2 overseeing the project Mosaic. Her honors include a MacArthur fellowship and being named a fellow of the Association for Computational Linguistics.
University of Washington web page
Google Scholar publications
Wikipedia
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
MLOps Coffee Sessions #171 with Thibaut Labarre, Using Large Language Models at AngelList, co-hosted by Ryan Russon. We are now accepting talk proposals for our next LLM in Production virtual conference on October 3rd. Apply to speak here: https://go.mlops.community/NSAX1O // Abstract Thibaut explains how he addressed previous system constraints to achieve scalability and cost efficiency. Leveraging AngelList's investing and natural language processing expertise, his team refined news article classification for investor dashboards. Central to this work is their platform, AngelList Relay, which automates document parsing and offers vital insights to investors. Thibaut reflects candidly on challenges such as the Azure OpenAI collaboration and rate-limit solutions, and highlights the strategic importance of prompt engineering and of empowering domain experts for ongoing advancement. // Bio Thibaut LaBarre is an engineering lead with a background in Natural Language Processing (NLP). Currently, Thibaut focuses on unlocking the potential of Large Language Model (LLM) technology at AngelList, enabling everyone within the organization to become prompt engineers on a quest to streamline and automate the infrastructure for Venture Capital. Prior to that, Thibaut began his journey at Amazon as an intern, where he built Heartbeat, a state-of-the-art NLP tool that consolidates millions of data points from various feedback sources, such as product reviews, customer contacts, and social media, to provide valuable insights to global product teams. Over the span of seven years, he expanded his internship project into an organization of 20 engineers. He received an M.S. in Computational Linguistics from the University of Washington.
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://www.angellist.com/venture/relay --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Ryan on LinkedIn: https://www.linkedin.com/in/ryanrusson/ Connect with Thibaut on LinkedIn: https://www.linkedin.com/in/thibautlabarre/
ANGELA'S SYMPOSIUM 📖 Academic Study on Witchcraft, Paganism, esotericism, magick and the Occult
#dungeonsanddragons #dnd #criticalrole What does science say about dungeons and dragons? is it dangerous for your mental health? Is D&D a religion or a cult? Academic review and research studies on the Psychology and Religious elements in dungeons and dragons. RECOMMENDED READINGS Dangerous Games by Laycock https://amzn.to/3YrACEU Invented Religions by Cusack https://amzn.to/3ROfjeV D&D Player's Handbook https://amzn.to/3l9kvh8 Player's Handbook Dungeons and Dragons 5th Edition with DND Dice and Complete Printable Kit https://amzn.to/3Y72Yob D&D Dungeon Master's Guide https://amzn.to/3X3zhmQ D&D Monster Manual https://amzn.to/3X9LaaG MY SET UP Canon 90D camera https://amzn.to/3ZtfT4W Canon EF-S 18-55mm f/3.5-5.6 https://amzn.to/3XwOP36 Teleprompter https://amzn.to/3ZE4KhK Shure SM7B Microphone https://amzn.to/3CMz3ZX Microphone stand https://amzn.to/3QJbgzY lights https://amzn.to/3w3VAxr REFERENCES Adams, A. 2013. Needs Met Through Role-Playing Games: A Fantasy Theme Analysis of Dungeons & Dragons. Kaleidoscope: A Graduate Journal of Qualitative Communication Research. 12(1). Blackmon, W.D. 1994. Dungeons and Dragons: The Use of a Fantasy Game in the Psychotherapeutic Treatment of a Young Adult. American Journal of Psychotherapy. 48(4), pp.624–632. DeRenard, L.A. and Kline, L.M. 1990. Alienation and the Game Dungeons and Dragons. Psychological Reports. 66(3_suppl), pp.1219–1222. Laycock, J.P. 2015. Dangerous Games: What the Moral Panic over Role-Playing Games Says about Play, Religion, and Imagined Worlds. University of California Press. Perlini-Pfister, F. 2012. Philosophers with Clubs: Negotiating Cosmology and Worldviews in Dungeons & Dragons In: P. Bornet and M. Burger, eds. Religions in Play: Games, Rituals, and Virtual Worlds. Zürich: Theologischer Verlag Zürich, pp.275–294. Rameshkumar, R. and Bailey, P. 2020. 
Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics [Online]. Online: Association for Computational Linguistics, pp.5121–5134. [Accessed 31 January 2023]. Available from: https://aclanthology.org/2020.acl-main.459. Simón, A. 1987. Emotional stability pertaining to the game of Dungeons & Dragons. Psychology in the Schools. 24(4), pp.329–332. 0:00 Introduction: Dungeons and Dragons 01:04 D&D Beginning and early reaction 01:47 Is D&D dangerous? 04:15 The religious side of D&D 05:33 The Gods 07:47 The Alignment ethical system 10:27 Other Worlds 12:20 Mircea Eliade - theory of the sacred 14:25 Support Angela's Symposium BECOME MY PATRON! https://www.patreon.com/angelapuca ONE-OFF DONATIONS https://paypal.me/angelasymposium JOIN MEMBERSHIPS https://www.youtube.com/channel/UCPSbip_LX2AxbGeAQfLp-Ig/join FOLLOW ME: Facebook (Angela's Symposium), Instagram (angela_symposium), Twitter (@angelapuca11), TikTok (Angela's Symposium). Music by Erose MusicBand. Check them out! https://www.youtube.com/watch?v=Ja2mMNC5ybc
It's the Season 10 finale of the Elixir Wizards podcast! José Valim, Guillaume Duboc, and Giuseppe Castagna join Wizards Owen Bickford and Dan Ivovich to dive into the prospect of types in the Elixir programming language! They break down their research on set-theoretical typing and highlight their goal of creating a type system that supports as many Elixir idioms as possible while balancing simplicity and pragmatism. José, Guillaume, and Giuseppe talk about what initially sparked this project, the challenges in bringing types to Elixir, and the benefits that the Elixir community can expect from this exciting work. Guillaume's formalization and Giuseppe's "cutting-edge research" balance José's pragmatism and "Guardian of Orthodoxy" role. Decades of theory meet the needs of a living language, with open challenges like multi-process typing ahead. They come together with a shared joy of problem-solving that will accelerate Elixir's continued growth. Key Topics Discussed in this Episode: Adding type safety to Elixir through set theoretical typing How the team chose a type system that supports as many Elixir idioms as possible Balancing simplicity and pragmatism in type system design Addressing challenges like typing maps, pattern matching, and guards The tradeoffs between Dialyzer and making types part of the core language Advantages of typing for catching bugs, documentation, and tooling The differences between typing in the Gleam programming language vs. Elixir The possibility of type inference in a set-theoretic type system The history and development of set-theoretic types over 20 years Gradual typing techniques for integrating typed and untyped code How José and Giuseppe initially connected through research papers Using types as a form of "mechanized documentation" The risks and tradeoffs of choosing syntax Cheers to another decade of Elixir! A big thanks to this season's guests and all the listeners! 
Links and Resources Mentioned in this Episode: Bringing Types to Elixir | Guillaume Duboc & Giuseppe Castagna | ElixirConf EU 2023 (https://youtu.be/gJJH7a2J9O8) Keynote: Celebrating the 10 Years of Elixir | José Valim | ElixirConf EU 2022 (https://youtu.be/Jf5Hsa1KOc8) OCaml industrial-strength functional programming https://ocaml.org/ ℂDuce: a language for transformation of XML documents http://www.cduce.org/ Ballerina coding language https://ballerina.io/ Luau coding language https://luau-lang.org/ Gleam type language https://gleam.run/ "The Design Principles of the Elixir Type System" (https://www.irif.fr/_media/users/gduboc/elixir-types.pdf) by G. Castagna, G. Duboc, and J. Valim "A Gradual Type System for Elixir" (https://dlnext.acm.org/doi/abs/10.1145/3427081.3427084) by M. Cassola, A. Talagorria, A. Pardo, and M. Viera "Programming with union, intersection, and negation types" (https://www.irif.fr/~gc/papers/set-theoretic-types-2022.pdf), by Giuseppe Castagna "Covariance and Contravariance: a fresh look at an old issue (a primer in advanced type systems for learning functional programmers)" (https://www.irif.fr/~gc/papers/covcon-again.pdf) by Giuseppe Castagna "A reckless introduction to Hindley-Milner type inference" (https://www.lesswrong.com/posts/vTS8K4NBSi9iyCrPo/a-reckless-introduction-to-hindley-milner-type-inference) Special Guests: Giuseppe Castagna, Guillaume Duboc, and José Valim.
In this episode Lauren Hawker Zafer is joined by Janna Lipenkova.
Who Can Benefit From This Conversation?
Tune in to this captivating podcast episode, delving into the realm of detecting and combating greenwashing. Engage with Lauren and Janna as they unravel the intricacies of LLMs and other NLP techniques, and how they can help uncover the truth behind environmental and sustainability claims. Learn and grow while you listen, arming yourself with invaluable knowledge to navigate the world of sustainability with clarity and confidence.
Who is Janna Lipenkova?
Janna Lipenkova holds a Master in Chinese Studies and Economics, a PhD in Computational Linguistics and speaks seven languages fluently. After several years of work in AI in academia and industry, she started her own analytics business and is now the CEO and Co-Founder of Equintel. Equintel uses AI and NLP methods to deliver timely and objective ESG and impact analytics. Janna is fascinated by the current developments in Generative AI and their impact on our society and actively contributes to public discourse in the AI space on Medium and her blog (jannalipenkova.com). REDEFINING AI is powered by The Squirro Academy - learn.squirro.com. Try our free courses on AI, ML, NLP and Cognitive Search at the Squirro Academy and find out more about Squirro here.
Spotlight Nine is a snippet from our upcoming episode: Janna Lipenkova - Detecting and Mitigating Greenwashing Risks with LLMs and Other NLP Approaches. Listen to the full episode as soon as it comes out by subscribing to Redefining AI.
Who is Janna Lipenkova?
Janna Lipenkova holds a Master in Chinese Studies and Economics, a PhD in Computational Linguistics and speaks seven languages fluently. After several years of work in AI in academia and industry, she started her own analytics business and is now the CEO and Co-Founder of Equintel. Equintel uses AI and NLP methods to deliver timely and objective ESG and impact analytics. Janna is fascinated by the current developments in Generative AI and their impact on our society and actively contributes to public discourse in the AI space. You can find out more about her contributions on her blog (jannalipenkova.com) and Medium (https://medium.com/@janna.lipenkova_52659).
Why this Episode?
Tune in to this captivating podcast episode, delving into the realm of detecting and combating greenwashing. Engage with Janna as she unravels the intricacies of LLMs and other NLP techniques and how they can help you uncover the truth behind environmental claims. Learn and grow while you listen, arming yourself with invaluable knowledge to navigate the world of sustainability with clarity and confidence.
Sign up for the next LLM in production conference here: https://go.mlops.community/LLMinprod Watch all the talks from the first conference: https://go.mlops.community/llmconfpart1 // Abstract In this panel discussion, the topic of the cost of running large language models (LLMs) is explored, along with potential solutions. The benefits of bringing LLMs in-house, such as latency optimization and greater control, are also discussed. The panelists explore methods such as structured pruning and knowledge distillation for optimizing LLMs. OctoML's platform is mentioned as a tool for the automatic deployment of custom models and for selecting the most appropriate hardware for them. Overall, the discussion provides insights into the challenges of managing LLMs and potential strategies for overcoming them. // Bio Lina Weichbrodt Lina is a pragmatic freelancer and machine learning consultant who likes to solve business problems end-to-end and make machine learning (or a simple, fast heuristic) work in the real world. In her spare time, Lina likes to exchange ideas with others on how to implement best practices in machine learning; talk to her at the Machine Learning Ops Slack: shorturl.at/swxIN. Luis Ceze Luis Ceze is Co-Founder and CEO of OctoML, which enables businesses to seamlessly deploy ML models to production, making the most out of the hardware. OctoML is backed by Tiger Global, Addition, Amplify Partners, and Madrona Venture Group. Ceze is the Lazowska Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, where he has taught for 15 years. Luis co-directs the Systems and Architectures for Machine Learning lab (sampl.ai), which co-authored Apache TVM, a leading open-source ML stack for performance and portability that is used in widely deployed AI applications.
Luis is also co-director of the Molecular Information Systems Lab (misl.bio), which led pioneering research at the intersection of computing and biology for IT applications such as DNA data storage. His research has been featured prominently in the media, including the New York Times, Popular Science, MIT Technology Review, and the Wall Street Journal. Ceze is a Venture Partner at Madrona Venture Group and leads their technical advisory board. Jared Zoneraich Co-Founder of PromptLayer, enabling data-driven prompt engineering. Compulsive builder. Jersey native, with a brief stint in California (UC Berkeley '20) and now residing in NYC. Daniel Campos Hailing from Mexico, Daniel started his NLP journey with his BS in CS from RPI. He then worked at Microsoft on ranking at Bing with LLMs (back when they had 2 commas) and helped build out popular datasets like MSMARCO and TREC Deep Learning. While at Microsoft he got his MS in Computational Linguistics from the University of Washington with a focus on curriculum learning for language models. Most recently, he has been pursuing his Ph.D. at the University of Illinois Urbana-Champaign, focusing on efficient inference for LLMs and robust dense retrieval. During his Ph.D., he worked for companies like Neural Magic, Walmart, Qualtrics, and Mendel.AI, and now works on bringing LLMs to search at Neeva. Mario Kostelac Currently building AI-powered products at Intercom in a small, highly effective team. I roam between practical research and engineering but lean more towards engineering and the challenges around running reliable, safe, and predictable ML systems. You can imagine how fun that is in the LLM era :). Generally interested in the intersection of product and tech, and in building differentiation by solving hard challenges (technical or non-technical). A software engineer turned machine learning engineer 5 years ago.
Our guest, Christopher Manning, is a computational linguist. He builds computer models that understand and generate language using math. Words are the key component of human intelligence, he says, which is why generative AI like ChatGPT has caused such a stir. We used to hope a model might produce one coherent sentence, and suddenly ChatGPT is composing five-paragraph stories and doing mathematical proofs in rhyming verse, Manning tells host Russ Altman in this episode of Stanford Engineering's The Future of Everything podcast.
Has ChatGPT made copywriters redundant? Far from it, according to Greg Baker, a lecturer in Artificial Intelligence and Computational Linguistics at Macquarie University. Greg, more than most, has his finger on the pulse of not just where ChatGPT is heading but how it actually works! He has spent his life researching natural language processing and has a deep knowledge of what these tools can do and, more importantly, how we copywriters and creative folk can harness them to our advantage. Read the show notes This podcast is brought to you by the Australian Writers' Centre. WritersCentre.com.au Join our community of copywriters at CopyClub.com.au. See omnystudio.com/listener for privacy information.
Paris Marx is joined by Emily M. Bender to discuss what it means to say that ChatGPT is a “stochastic parrot,” why Elon Musk is calling to pause AI development, and how the tech industry uses language to trick us into buying its narratives about technology. Emily M. Bender is a professor in the Department of Linguistics at the University of Washington and the Faculty Director of the Computational Linguistics Master's Program. She's also the director of the Computational Linguistics Laboratory. Follow Emily on Twitter at @emilymbender or on Mastodon at @emilymbender@dair-community.social. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. The podcast is produced by Eric Wickham and part of the Harbinger Media Network. Also mentioned in this episode:Emily was one of the co-authors on the “On the Dangers of Stochastic Parrots” paper and co-wrote the “Octopus Paper” with Alexander Koller. 
She was also recently profiled in New York Magazine and has written about why policymakers shouldn't fall for the AI hype.
The Future of Life Institute put out the “Pause Giant AI Experiments” letter, and the authors of the “Stochastic Parrots” paper responded through DAIR Institute.
Zachary Loeb has written about Joseph Weizenbaum and the ELIZA chatbot.
Leslie Kay Jones has researched how Black women use and experience social media.
As generative AI is rolled out, many tech companies are firing their AI ethics teams.
Emily points to Algorithmic Justice League and AI Incident Database.
Deborah Raji wrote about data and systemic racism for MIT Tech Review.
Books mentioned: Weapons of Math Destruction by Cathy O'Neil, Algorithms of Oppression by Safiya Noble, The Age of Surveillance Capitalism by Shoshana Zuboff, Race After Technology by Ruha Benjamin, Ghost Work by Mary L. Gray & Siddharth Suri, Artificial Unintelligence by Meredith Broussard, Design Justice by Sasha Costanza-Chock, Data Conscience: Algorithmic S1ege on our Hum4n1ty by Brandeis Marshall.
Support the show
In this Silicon Valley Tech & AI episode presented by GSD Venture Studios Gary Fowler interviews De Kai. Guest: De Kai is Professor of Computer Science and Engineering at HKUST, and Distinguished Research Scholar at Berkeley's International Computer Science Institute. He is among only 17 scientists worldwide named by the Association for Computational Linguistics as a Founding ACL Fellow, for his pioneering contributions to machine translation and machine learning foundations of systems like the Google/Yahoo/Microsoft translators. Recruited as founding faculty of HKUST directly from UC Berkeley, where his PhD thesis was one of the first to spur the paradigm shift toward machine learning based natural language processing technologies, he founded HKUST's internationally funded Human Language Technology Center which launched the world's first web translator over twenty years ago.
Think of this as the impossible lecture on AI. Recorded across time, but presented as one flow, a series of AI experts line up to contribute to the conversation. https://authoredby.ai DISCLAIMER: The sound quality (and only sound quality!) from calls in this episode has been enhanced using AI. The otherwise identical unenhanced version can be found here: https://www.authoredby.ai/03unenhanced/ Hosted by Stephen Follows and Eliel Camargo-Molina Guests (in order of appearance):
Mike Kanaan, Author of "T-Minus AI", Chief of Staff of the U.S. Air Force Fellow
Bob Fisher, Professor of Computer Vision at University of Edinburgh
Angelina McMillan-Major, PhD Candidate in Computational Linguistics at the University of Washington
Magnus Sahlgren, Head of research for natural language understanding at AI Sweden
Christoph Molnar, Researcher and author of "Interpretable Machine Learning"
Sameer Singh, Associate Professor of Computer Science at the University of California
Amandalynne Paullada, Researcher in Computational Linguistics at the University of Washington
Dan Rockmore, Professor of Math and Computer Science at Dartmouth College
Ken Stanley, AI Researcher, former Open-endedness Team Lead at OpenAI
GPT-3
Edited by Eliel Camargo-Molina and Jess Yung Music by Eliel Camargo-Molina and GPT-3 Mastering by Adanze "Lady Ze" Unaegbu
Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 YT: https://youtu.be/i9VPPmQn9HQ Edward Grefenstette is a Franco-American computer scientist who currently serves as Head of Machine Learning at Cohere and Honorary Professor at UCL. He has previously been a research scientist at Facebook AI Research and a staff research scientist at DeepMind, and was also the CTO of Dark Blue Labs. Prior to his move to industry, Edward was a Fulford Junior Research Fellow at Somerville College, University of Oxford, and lectured at Hertford College. He obtained his BSc in Physics and Philosophy from the University of Sheffield and did graduate work in the philosophy department at the University of St Andrews. His research draws on topics and methods from Machine Learning, Computational Linguistics and Quantum Information Theory, and he has done work implementing and evaluating compositional vector-based models of natural language semantics and empirical semantic knowledge discovery.
https://www.egrefen.com/ https://cohere.ai/ TOC: [00:00:00] Introduction [00:02:52] Differential Semantics [00:06:56] Concepts [00:10:20] Ontology [00:14:02] Pragmatics [00:16:55] Code helps with language [00:19:02] Montague [00:22:13] RLHF [00:31:54] Swiss cheese problem / retrieval augmented [00:37:06] Intelligence / Agency [00:43:33] Creativity [00:46:41] Common sense [00:53:46] Thinking vs knowing References: Large language models are not zero-shot communicators (Laura Ruis) https://arxiv.org/abs/2210.14986 Some remarks on Large Language Models (Yoav Goldberg) https://gist.github.com/yoavg/59d174608e92e845c8994ac2e234c8a9 Quantum Natural Language Processing (Bob Coecke) https://www.cs.ox.ac.uk/people/bob.coecke/QNLP-ACT.pdf Constitutional AI: Harmlessness from AI Feedback https://www.anthropic.com/constitutional.pdf Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Patrick Lewis) https://www.patricklewis.io/publication/rag/ Natural General Intelligence (Prof. Christopher Summerfield) https://global.oup.com/academic/product/natural-general-intelligence-9780192843883 ChatGPT with Rob Miles - Computerphile https://www.youtube.com/watch?v=viJt_DXTfwA
There's rarely an expected path in science. This week's episode, produced in partnership with The Carl R. Woese Institute for Genomic Biology, features two stories from scientists at this cutting-edge research institute at the University of Illinois Urbana-Champaign who took unexpected journeys to get where they are today.
Part 1: After a troubling personal experience with the health care system, Heng Ji decides to try to fix it.
Part 2: When Brendan Harley is diagnosed with leukaemia in high school, it changes everything.
Heng Ji is a professor in the Computer Science Department, and an affiliated faculty member in the Electrical and Computer Engineering Department, of the University of Illinois at Urbana-Champaign. She is also an Amazon Scholar. She received her B.A. and M.A. in Computational Linguistics from Tsinghua University, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing, especially Multimedia Multilingual Information Extraction, Knowledge Base Population and Knowledge-driven Generation. She was selected as a "Young Scientist" and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. She was named as part of Women Leaders of Conversational AI (Class of 2023) by Project Voice. The awards she has received include the "AI's 10 to Watch" Award from IEEE Intelligent Systems in 2013, the NSF CAREER award in 2009, the PACLIC 2012 Best Paper runner-up, "Best of ICDM 2013" and "Best of SDM 2013" paper awards, an ACL 2018 Best Demo Paper nomination, the ACL 2020 Best Demo Paper Award, the NAACL 2021 Best Demo Paper Award, Google Research Awards in 2009 and 2014, IBM Watson Faculty Awards in 2012 and 2014, and Bosch Research Awards in 2014-2018. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030. She is the lead of many multi-institution projects and tasks, including the U.S.
ARL projects on information fusion and knowledge networks construction, the DARPA DEFT Tinker Bell team and the DARPA KAIROS RESIN team. She has coordinated the NIST TAC Knowledge Base Population task since 2010. She was an associate editor for IEEE/ACM Transactions on Audio, Speech, and Language Processing, and served as Program Committee Co-Chair of many conferences including NAACL-HLT 2018 and AACL-IJCNLP 2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023. Her research has been widely supported by U.S. government agencies (DARPA, ARL, IARPA, NSF, AFRL, DHS) and industry (Amazon, Google, Facebook, Bosch, IBM, Disney). Heng Ji is supported by the NSF AI Institute on Molecule Synthesis, collaborating with Prof. Marty Burke of the Chemistry Department at UIUC and Prof. Kyunghyun Cho at New York University and Genentech on using AI for drug discovery. Dr. Brendan Harley is a Professor of Chemical and Biomolecular Engineering at the University of Illinois at Urbana-Champaign. His research group develops biomaterials that can be implanted in the body to regenerate musculoskeletal tissues or that can be used outside the body as tissue models to study biological events linked to the endometrium, brain cancer, and stem cell behavior. He's a distance runner who dreams of (eventually) running ultramarathons. Follow him @Prof_Harley and www.harleylab.org. Learn more about your ad choices. Visit megaphone.fm/adchoices. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
“Closed mouths don’t get fed.” Victoria Hamilton is a computational linguist with an interest in language learning technology. She earned her Master's in Computational Linguistics at Stony Brook University. Over the years she's had a number of jobs, including as a graduate career coach and graduate coordinator at Stony Brook. She is now working as an analytical linguist at Grammarly. Victoria Hamilton at LinkedIn Topics covered: – Music – Life priorities – Employment during grad school – Being Black in linguistics – Networking Download a transcript here (Word doc) or view it online here, courtesy of Luca Dinu. The post Episode #8: Victoria Hamilton first appeared on Linguistics Careercast.
#naturallanguageprocessing #nlp #artificialintelligence #machinelearning #toctw Author of Cognitively Inspired Natural Language Processing, Professor Pushpak Bhattacharyya is a computer scientist and a professor in the Computer Science and Engineering Department, IIT Bombay. He served as the director of the Indian Institute of Technology Patna from 2015 to 2021. He is a past president of the Association for Computational Linguistics and former Vijay and Sita Vashee Chair Professor. He currently heads the natural language processing research group at the Center for Indian Language Technology (CFILT) lab at IIT Bombay, which was established in 2000 in the Computer Science and Engineering Department. CFILT has been publishing in the areas of Natural Language Processing and Artificial Intelligence year after year; the principal investigator for the lab is Prof. Pushpak Bhattacharyya (CSE, IITB). Prof. Bhattacharyya has published more than 350 research papers in different areas of NLP and ML. He is the author of 'Machine Translation', 'Cognitively Inspired Natural Language Processing: An Investigation Based on Eye Tracking', and 'Low Resource Machine Translation and Transliteration'. 
https://in.linkedin.com/in/pushpakbh
https://www.cse.iitb.ac.in/~pb/
https://www.iitb.ac.in/en/employee/prof-pushpak-bhattacharya
Watch our highest viewed videos:
1. India's 1st Quantum Computer - DR R VIJAYARAGHAVAN, PROF & PRINCIPAL INVESTIGATOR AT TIFR - https://youtu.be/ldKFbHb8nvQ
2. Breakthrough in Age Reversal - DR HAROLD KATCHER, CTO NUGENICS RESEARCH - https://youtu.be/214jry8z3d4
3. Head of Artificial Intelligence, JIO - Shailesh Kumar - https://youtu.be/q2yR14rkmZQ
4. Startup from India Aiming for Level 5 Autonomy - SANJEEV SHARMA, CEO SWAAYATT ROBOTS - https://youtu.be/Wg7SqmIsSew
5. Transhumanism & the Future of Mankind - NATASHA VITA-MORE, HUMANITY PLUS - https://youtu.be/OUIJawwR4PY
6. Man Behind Google Quantum Supremacy - JOHN MARTINIS - https://youtu.be/Y6ZaeNlVRsE
7. 1000 km Range Electric Vehicles with Aluminum Air Fuel Batteries - AKSHAY SINGHAL - https://youtu.be/cUp68Zt6yTI
8. Garima Bharadwaj, Chief Strategist IoT & AI at Enlite Research - https://youtu.be/efu3zIhRxEY
9. Banking 4.0 - BRETT KING, FUTURIST, BESTSELLING AUTHOR & FOUNDER MOVEN - https://youtu.be/2bxHAai0UG0
10. E-VTOL & Hyperloop: The Future of India's Mobility - SATYANARAYANA CHAKRAVARTHY - https://youtu.be/ZiK0EAelFYY
11. Non-Invasive Brain Computer Interface - KRISHNAN THYAGARAJAN - https://youtu.be/fFsGkyW3xc4
12. Satellites: The New Multi-Billion Dollar Space Race - MAHESH MURTHY - https://youtu.be/UarOYOLUMGk
Connect & follow us at:
https://in.linkedin.com/in/eddieavil
https://in.linkedin.com/company/change-transform-india
https://www.facebook.com/changetransformindia/
https://twitter.com/intothechange
https://www.instagram.com/changetransformindia/
Listen to the audio podcast at:
https://anchor.fm/transform-impossible
https://podcasts.apple.com/us/podcast/change-i-m-possibleid1497201007?uo=4
https://open.spotify.com/show/56IZXdzH7M0OZUIZDb5mUZ
https://www.breaker.audio/change-i-m-possible
https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy8xMjg4YzRmMC9wb2RjYXN0L3Jzcw
Don't forget to subscribe
www.youtube.com/ctipodcast
This week we are joined by Sebastian Ruder. He is a research scientist at DeepMind, London. He has also worked at a variety of institutions such as AYLIEN, Microsoft, IBM's Extreme Blue, Google Summer of Code, and SAP. These experiences were completed in tandem with his studies, which included Computational Linguistics at the University of Heidelberg, Germany and at Trinity College, Dublin, before he undertook a PhD in Natural Language Processing and Deep Learning at the Insight Research Centre for Data Analytics. This week we discuss language independence and diversity in natural language processing, while also taking a look at attempts to identify material properties from images.
As discussed in the podcast, if you would like to donate to the current "CREATE DONATE EDUCATE" campaign, which supports Stop Hate UK, please find the link below: https://www.shorturl.at/glmsz
Please also find additional links to help support Black colleagues in the area of research:
Black in AI Twitter account: https://twitter.com/black_in_ai
Mentoring and proofreading sign-up to support our Black colleagues in research: https://twitter.com/le_roux_nicolas/status/1267896907621433344?s=20
Underrated ML Twitter: https://twitter.com/underrated_ml
Sebastian Ruder Twitter: https://twitter.com/seb_ruder
Please let us know who you thought presented the most underrated paper in the form below: https://forms.gle/97MgHvTkXgdB41TC8
Links to the papers:
"On Achieving and Evaluating Language-Independence in NLP" - https://journals.linguisticsociety.org/elanguage/lilt/article/view/2624.html
"The State and Fate of Linguistic Diversity and Inclusion in the NLP World" - https://arxiv.org/abs/2004.09095
"Recognizing Material Properties from Images" - https://arxiv.org/pdf/1801.03127.pdf
Additional links:
Student perspectives on applying to NLP PhD programs: https://blog.nelsonliu.me/2019/10/24/student-perspectives-on-applying-to-nlp-phd-programs/
Tim Dettmers' post on how to pick your grad school: 
https://timdettmers.com/2020/03/10/how-to-pick-your-grad-school/
Rachel Thomas' blog post on why you should blog: https://medium.com/@racheltho/why-you-yes-you-should-blog-7d2544ac1045
Emily Bender's The Gradient article: https://thegradient.pub/the-benderrule-on-naming-the-languages-we-study-and-why-it-matters/
Paper on order-sensitive vs order-free methods: https://www.aclweb.org/anthology/N19-1253.pdf
"Exploring the Origins and Prevalence of Texture Bias in Convolutional Neural Networks": https://arxiv.org/abs/1911.09071
Sebastian's website, where you can find all his blog posts: https://ruder.io/
This week we feature our interview with Eric Daimler, PhD. Eric and I discussed how AI can unlock the potential of humanity. Dr. Eric Daimler is an authority in Artificial Intelligence with over 20 years of experience in the field as an entrepreneur, executive, investor, technologist, and policy advisor. Daimler has co-founded six technology companies that have done pioneering work in fields ranging from software systems to statistical arbitrage. Daimler is the author of the forthcoming book "The Coming Composability: The roadmap for using technology to solve society's biggest problems." A frequent speaker, lecturer, and commentator, he works to empower communities and citizens to leverage AI for a more sustainable, secure, and prosperous future. As a Presidential Innovation Fellow during the Obama Administration, Daimler helped drive the agenda for U.S. leadership in research, commercialization, and public adoption of AI. He has also served as Assistant Dean and Assistant Professor of Software Engineering in Carnegie Mellon's School of Computer Science. His academic research focuses on the intersection of Machine Learning, Computational Linguistics, and Network Science (Graph Theory). He has a specialization in public policy and economics, helped launch Carnegie Mellon's Silicon Valley Campus, and founded its Entrepreneurial Management program. A frequent keynote speaker, he has presented at venues including the engineering schools of MIT, Stanford, and Harvard. Daimler studied at Stanford University, the University of Washington-Seattle, and Carnegie Mellon University, where he earned his PhD in its School of Computer Science. Contact Information Twitter: @ead LinkedIn: linkedin.com/in/ericdaimler Website: http://www.conexus.com/ Re-read Saturday News Multitasking is the first or second greatest LIE in the modern business world. The best description of multitasking would include thrash, waste, and hubris. 
The problem is that EVERYONE thinks they are special and can multitask their way to the effective delivery of value. Chapter 3 of Why Limit WIP: We Are Drowning In Work blasts away at multitasking (another take on the topic from 2015: Multitasking Yourself Away From Efficiency | Software Process and Measurement https://bit.ly/37XmrSY). Multitasking is bad; don't do it. Remember to buy a copy and read along. Amazon Affiliate Link: https://amzn.to/36Rq3p5 Previous Entries Week 1: Preface, Foreword, Introduction, and Logistics – https://bit.ly/3iDezbp Week 2: Processing and Memory – https://bit.ly/3qYR4yg Week 3: Completion – https://bit.ly/3usMiLm Week 4: Multitasking – https://bit.ly/37hUh5z Upcoming Events: Final Call! Free webinar: When Prioritization Goes Bad https://www.greatpro.org/Webinar-Live-Register?id=1954 April 19, 2022, 11 AM to 12:30 PM EDT Next SPaMCAST: Next week for SPaMCAST 700 we will feature our interview with Slater Victoroff. Slater presents an alternate definition for AI. Compare and contrast it with Dr. Daimler's definition.
Ever wondered if we could predict people's actions through their words? Renowned social psychologist and linguist Dr. Jamie Pennebaker shares how words can give away our secrets, feelings, and inner state of mind, from Putin's language, which predicted his invasion of Ukraine, to poets whose use of the word "I" can predict a higher risk of suicide. Dr. Pennebaker's groundbreaking research in computational linguistics, analyzing and counting the frequency of words, shows that our most forgettable words, such as the pronouns I, me, and my, can be the most revealing. He explains what the words Vladimir Putin, Xi Jinping, and Joe Biden use (and even the ones they don't use) reveal about their inner feelings, and the "tell" that predicted Vladimir Putin's invasion of Ukraine. He also talks about how American Presidents have become more likeable and less analytical, the differences in men's and women's words, and how writing about traumatic experiences can help people heal and improve their physical health. This podcast is available on all major podcast streaming platforms. Did you enjoy this episode? Consider leaving a review on Apple Podcasts. Receive updates on upcoming guests and more in our weekly e-mail newsletter. Subscribe today at www.3takeaways.com.
Eneko Agirre, a computer scientist in the IXA group at the University of the Basque Country (EHU), has been made a Fellow of the Association for Computational Linguistics. There are only 75 Fellows in the whole world, and just 15 in Europe. Also: Aitziber Agirre, director of the magazine "Elhuyar", just after the release of the March issue....
In this episode of “Technically Human,” I sit down with Dr. Eric Daimler. We talk about one of the biggest technology problems facing us today—data deluge—and how new computational models and theories can help solve it. Dr. Daimler also weighs in on the gaps, differences, and possibilities for collaboration between policy, industry, and academia. And we talk about what a vision of “AI for Good” might look like in a world of increasingly infinite data. Dr. Eric Daimler is a leading authority in robotics and artificial intelligence with over 20 years of experience as an entrepreneur, investor, technologist, and policymaker. He served under the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, as the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics. Dr. Daimler has incubated, built, and led several technology companies recognized as pioneers in their fields, ranging from software systems to statistical arbitrage. His newest venture, Conexus, is a groundbreaking solution for what is perhaps today's biggest information technology problem — data deluge. As founder and CEO of Conexus, Dr. Daimler is leading the development of CQL, a patent-pending platform founded upon category theory — a revolution in mathematics — to help companies manage the overwhelming and rapidly growing challenge of data integration and migration. His academic research has been at the intersection of AI, Computational Linguistics, and Network Science (Graph Theory). His work has expanded to include economics and public policy. He served as Assistant Professor and Assistant Dean at Carnegie Mellon's School of Computer Science, where he founded the university's Entrepreneurial Management program and helped to launch Carnegie Mellon's Silicon Valley Campus. 
He has studied at the University of Washington-Seattle, Stanford University, and Carnegie Mellon University, where he earned his Ph.D. in Computer Science. Dr. Daimler's extensive career spanning business, academia, and policy gives him a rare perspective on the next generation of AI. Dr. Daimler sees clearly how information technology can dramatically improve our world. However, it demands our engagement. Neither a utopia nor a dystopia is inevitable. What matters is how we shape, and react to, its development. This episode was produced by Matt Perry. Our head of research is Sakina Nuruddin. Art by Desi Aleman.
In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study. Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender --- Emily M. Bender is a Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at the University of Washington. Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP. --- Timestamps: 0:00 Sneak peek, intro 1:03 Stochastic Parrots 9:57 The societal impact of big language models 16:49 How language models can be harmful 26:00 The important difference between linguistic form and meaning 34:40 The octopus thought experiment 42:11 Language acquisition and the future of language models 49:47 Why benchmarks are limited 54:38 Ways of complementing benchmarks 1:01:20 The #BenderRule 1:03:50 Language diversity and linguistics 1:12:49 Outro
The Naked Dialogue Podcast EP#30: Shira Eisenberg | Computational Linguistics, Psychoanalysis & Cognitive Architectures around Human Behavior Shira Eisenberg: https://shira.dev/ | https://twitter.com/shiraeis Shira Eisenberg is an AI Language Systems Researcher @MIT Lincoln Laboratory and a technical consultant @CDC (most recently on "Changes in the Scientific Information During the COVID-19 Pandemic: The Importance of Scientific Situational Awareness in Responding to the Infodemic"). Sanjana Singh (The Host): https://itsa2amgrunge.com/ --- Support this podcast: https://anchor.fm/sanjanasinghx/support
Samuel Läubli, Partner and CTO at TextShuttle, joins the pod to talk about the ins and outs of a language technology provider, and the current state of machine translation. The CTO touches on his background in Computational Linguistics and his decision to return to academia in 2016. He gives his take on the current state of machine translation, particularly weaknesses around sentence-by-sentence structure and limited control. Samuel discusses his thesis, which tackles three key challenges in MT for professionals: quality, presentation, and adaptability. He debates whether machine translation can become truly creative without artificial general intelligence — or if it will always be considered imitation. He then walks listeners through TextShuttle's business model as well as the key problems the company solves for clients, ranging from producing MT systems to helping with configurations, workflows, and training translators. First up, Florian and Esther discuss the language industry news of the week, where RSI platform Interactio announced that it had raised USD 30m in series A funding, led by VCs Eight Roads Ventures and Storm Ventures. Esther delves into Straker's 100-page annual report, which showed the Australia-listed LSP's 13% revenue growth to USD 22.6m for the 12 months to March 31, 2021. The duo also discusses Akorbi, another fast-growing language service provider (LSP), which recently acquired the low-code process automation platform RunMyProcess from Fujitsu. Heading to Japan, Florian goes over Honyaku Center's 2020 financial results, which saw revenues decline 14% to USD 91m and operating income nearly halved to USD 3.8m. Florian closes the pod full circle with more machine translation news: a research paper presented by Bering Lab about IntelliCAT, an MT post-editing and interactive translation model; and, out of big tech, Microsoft Document Translation, a recent addition to their enterprise MT offerings.
Daphne Keller, platform regulation expert at Stanford University and former Associate General Counsel for Google, joins Ellysse and Ashley to explain Section 230's role in shaping how large companies approach content moderation on a massive scale, and how intermediary liability protections allow platforms of all sizes to thrive.
Mentioned:
Jennifer M. Urban, Joe Karaganis, and Brianna L. Shofield, Notice and Takedown in Everyday Practice (Berkeley Law, 2016).
Maarten Sap et al., "The Risk of Racial Bias in Hate Speech Detection," Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (2019): 1668-78.
Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber, "Racial Bias in Hate Speech and Abusive Language Detection Datasets," Proceedings of the Third Workshop on Abusive Language Online (2019): 25-35.
"H.R.1865 - Allow States and Victims to Fight Online Sex Trafficking Act of 2017," Congress.gov.
Woodhull Freedom Foundation v. United States, No. 18-5298 (D.C. Cir. 2020).
Daphne Keller, "SESTA and the Teachings of Intermediary Liability" (The Center for Internet and Society, November 2017).
Daphne Keller, "For platform regulation Congress should use a European cheat sheet," The Hill, January 15, 2021.
Renee Diresta, "Free Speech Is Not the Same As Free Reach," Wired, August 30, 2018.
Professor Kai von Fintel is a world-leading linguist (Section Head at MIT) who is well known for his contributions to Semantics, an academic field that sits at the intersection of many disciplines and is typically concerned with the meaning of linguistic expressions. He is the co-founder of the open access journal Semantics & Pragmatics. You can find more about his work on his website: https://www.kaivonfintel.org
Conversation Outline:
00:00 Introduction
00:18 What is special about language?
03:31 How did we (as a species) get linguistic abilities?
05:24 What do people who work in Semantics do?
09:19 How can babies pick up language?
15:07 What is the meaning of words? Aren't they just dictionary entries?
19:03 On idiolects
27:00 The meanings of sentences
33:43 What are possible worlds? Are they the same as the many-worlds of quantum theory?
39:52 Differences between 'school' grammar, syntax and formal logic
49:07 What is the meaning of 'if'?
01:04:54 Does the research of Semanticists impact the field of Computational Linguistics?
01:07:39 The relationship between thought and language
Twitter: https://twitter.com/tedynenu
Apple Podcasts: https://podcasts.apple.com/gb/podcast/philosophical-trials/id1513707135
Spotify: https://open.spotify.com/show/3Sz88leU8tmeKe3MAZ9i10
Google Podcasts: https://podcasts.google.com/?q=philosophical%20trials
Instagram: https://www.instagram.com/tedynenu/