Explore how the American College of Radiology (ACR) is tackling one of the most pressing questions in healthcare: does AI truly improve outcomes for radiologists and patients? In this interview, Dr. Bernardo Bizzo, Associate Chief Science Officer at the ACR's Data Science Institute, dives into the work his team is doing to rigorously evaluate AI tools for radiology. Hear about the ACR's hands-on approach to testing AI in real-world clinical settings, their findings so far, and how they're helping radiologists navigate the rapidly evolving world of AI. If you're curious about the practical impact of AI in radiology and what lies ahead, this is a conversation you won't want to miss.

Learn more about ACR at https://www.acr.org/
Learn more about ACR DSI at https://www.acrdsi.org/
Find more great health IT content at https://www.healthcareittoday.com/
Dr. Crane is a distinguished leader in cancer control and is passionate about her commitment to the science of better outcomes for patients and community. Her work sits at the intersection of lifestyle medicine and digital health, with the goal of developing targeted interventions that use digital biomarkers to deliver the right intervention, at the right time, to the right person. She received her Ph.D. from the University of Arizona and is an R25 Transdisciplinary Research on Energetics and Cancer (TREC) NCI Fellow.

Dr. Crane was recruited to Sylvester Comprehensive Cancer Center to co-lead the Cancer Control Research Program and direct the Lifestyle Medicine, Prevention and Digital Health initiative. She serves on the University of Miami IRB's consent-to-contact committee as the Sylvester representative for cancer-related studies and is an appointed cabinet member to the Executive Dean for Research of the Miller School of Medicine. As the population science representative, she serves on the Gynecologic Oncology and Lymphoma site disease groups and works with clinical and research faculty members. She was recently appointed Vice-Chair of the NRG Oncology NCORP Cancer Prevention and Control Committee.

Dr. Crane joined Sylvester from the College of Nursing, Biobehavioral Health Sciences Division, at the University of Arizona in Tucson, where she served as co-lead of the Behavioral Measurement and Interventions Shared Resource at the University of Arizona Cancer Center. She also held joint appointments in the Department of Nutrition Sciences, the Data Science Institute, and the Mel and Enid Zuckerman College of Public Health.
We explore the groundbreaking DeepSeek-R1 model with Dr Mark Kennedy, who leads Imperial College London's Data Science Institute. What is DeepSeek, and how has this small team from China revolutionised AI technology? We also discuss the implications of this model for the future of AI, the potential “wake-up call” for existing tech giants, and the significance of having more players in the field. Plus, could this all be a positive step for small business leaders? This episode includes a look back at Monday's stock market crash with The London Standard's Business Editor Jonathan Prynn. Hosted on Acast. See acast.com/privacy for more information.
Analysis of the US Supreme Court TikTok ban case over security concerns, with The London Standard's science and technology editor, Will Hosie. Plus, online star Adam McIntyre, who has over one million followers across his social media platforms, on the loss of income and memories if the video app is forced to close down. It follows reports that Tesla, X and SpaceX boss Elon Musk was in talks to buy TikTok's US operations, something the Chinese ByteDance-owned firm denies. Also, thanks to Dr Mark Kennedy, director of Imperial's Data Science Institute.
Sir Keir Starmer has set out the government's AI Opportunities Action Plan, which aims to boost the UK economy. Tech & Science Daily is joined by Dr Mark Kennedy, associate professor of strategy and organisational behaviour at Imperial College Business School. New technology for Parkinson's patients is approved for use in the UK and EU, with neurologist Dr Martijn Beudel from the Department of Neurology in Amsterdam University Medical Centre. Blue Origin postpones launching its New Glenn rocket from Florida due to ‘last-minute issues'.

Also in this episode:
LA wildfires: death toll rises to 24 as forecasters issue new wind warning.
The 21-year-old rower aiming to be the youngest person and first woman to cross from Europe to South America.
TikTok's Claudia Winkleman caricature cake video goes viral.

Dr Mark Kennedy is also director of Imperial's Data Science Institute.
We want to hear from you! Send us a text message.

In this episode, Dr. Purush Papatla, Co-Director of the Northwestern Mutual Data Science Institute (NMDSI), explores the cutting-edge world of data science and artificial intelligence. NMDSI, a collaboration between Northwestern Mutual, the University of Wisconsin-Milwaukee, and Marquette University, is pushing the boundaries of AI applications to solve real-world problems.

Dr. Papatla discusses the Institute's innovative programs and groundbreaking research projects. NMDSI is building a diverse talent pipeline, encouraging interdisciplinary collaboration, and making significant strides in fields ranging from healthcare to marketing. Whether you're a data science enthusiast, a business leader, or simply curious about the future of AI, this episode is packed with valuable takeaways.

In this episode, you'll learn:
- The origin story behind NMDSI's unique collaboration and its $40 million startup capital
- How data science is being democratized, allowing individuals from all backgrounds to become data scientists
- A groundbreaking AI project that could revolutionize personalized advertising as we know it
- Why Milwaukee is becoming a hotbed for tech investment and AI research
- The difference between data science, machine learning, and artificial intelligence, and how each is transforming industries

Quotable Moment:
"Our program allows anyone with any background to pursue a master's in data science. They could come in with a background in art or dance. We'll build up the required skills and then make sure that by the time they graduate, they leave as data scientists who can pursue data science in their chosen field."

3 Big Takeaways from this Episode:
- Understanding the distinctions between data science, machine learning, and artificial intelligence is crucial for comprehending their applications and impacts: Data science focuses on extracting insights from data using statistical methods, a practice with roots in agricultural research from 200 years ago. Machine learning involves teaching machines to learn from data and improve over time, while artificial intelligence enables machines to mimic human cognition and decision-making.
- Data science and AI are being democratized, making these fields accessible to individuals from a wide range of backgrounds: Programs at NMDSI allow people with diverse educational and professional backgrounds, including the arts and humanities, to gain the necessary skills to transition into data science roles. This inclusive approach is crucial for integrating AI and data science across various sectors, ensuring that these technologies benefit a broader range of industries and applications.
- The Midwest is becoming a significant hub for AI research and technology investment due to its unique advantages: Factors such as a strong work ethic, lower operational costs, and a thriving startup ecosystem are attracting major tech investments and fostering innovation in the region. Examples include Microsoft's $3.3 billion data center and NMDSI's efforts in advancing AI research, highlighting the Midwest's growing importance in the tech landscape.

Resources:
Learn more about NMDSI: https://innv.northwesternmutual.com/Vi
Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn
Yian Ma, an assistant professor in the Halıcıoğlu Data Science Institute at UC San Diego, talks about his research using scalable inference methods for credible machine learning. This involves designing Bayesian inference methods to quantify uncertainty in the predictions of complex models; understanding computational and statistical guarantees of inference algorithms; and leveraging these scalable algorithms to learn from time series data and perform sequential decision-making tasks. Series: "Science Like Me" [Science] [Show ID: 39710]
Dennis Yi Tenen is an associate professor of English at Columbia University, where he also serves as co-director of the Center for Comparative Media. Affiliated with Columbia's Data Science Institute, he is a former fellow at the Berkman Center for Internet and Society and worked as a Microsoft engineer in the Windows group, where he wrote code that runs on millions of personal computers around the world. His articles, which span topics ranging from literary theory to computational narratology, can be found in such journals as Amodern, New Literary History, and boundary2. In Literary Theory for Robots, Tenen takes readers on a centuries-spanning trip through automation to explore the relationship between writers and emerging technologies. Because you love Author Events, please make a donation to keep our podcasts free for everyone. THANK YOU! (recorded 4/11/2024)
Tristan Brugère, a Ph.D. student in the Halıcıoğlu Data Science Institute at UC San Diego, discusses his research at The Institute for Learning-enabled Optimization at Scale (TILOS), specifically his work on optimal transport and neural networks in graph generative models, with applications to chip design. Series: "Science Like Me" [Science] [Show ID: 39342]
From cell phone footage of police killing unarmed Black people to leaked racist messages and even comments from friends and family on social media, online communication exposes how racism operates in a world that pretends to be colorblind. In When the Hood Comes Off, Rob Eschmann blends rigorous research and engaging personal narrative to examine the effects of online racism on communities of color and society, and the unexpected ways that digital technologies enable innovative everyday tools of antiracist resistance. In this episode, Lissa talks with Dr. Rob Eschmann about When the Hood Comes Off: Racism and Resistance in the Digital Age (University of California Press, 2023), his book exploring racism in the digital age. Rob Eschmann is a writer, scholar, filmmaker, and educator from Chicago. He is Associate Professor of Social Work and a member of the Data Science Institute at Columbia University, as well as Faculty Associate at Harvard's Berkman Klein Center for Internet and Society.
POSTGRADUATE EDUCATION AT A TIME OF TECHNOLOGY AND TENSION. Costis Maglaras is the Dean of Columbia Business School, and the David and Lyn Silfen Professor of Business at Columbia University. He received his BS in Electrical Engineering from Imperial College, London, in 1990, and his MS and PhD in Electrical Engineering from Stanford University in 1991 and 1998, respectively. He joined Columbia Business School in 1998, and prior to becoming dean served as chair of the Decision, Risk & Operations division at the Business School, Director of the School's doctoral program, and was a member of the executive committee of the University's Data Science Institute. He is a Fellow of INFORMS, an Honorary Fellow of the Foreign Policy Association, and a Member of the Economic Club of New York.
For the first time ever, parents going through IVF can use whole genome sequencing to screen their embryos for hundreds of conditions. Harness the power of genetics to keep your family safe, with Orchid. Check them out at orchidhealth.com. On this episode of Unsupervised Learning, Razib talks to Michael Muthukrishna about his new book, A Theory of Everyone: The New Science of Who We Are, How We Got Here, and Where We're Going. Muthukrishna is Associate Professor of Economic Psychology at the London School of Economics, an affiliate of the Developmental Economics Group at STICERD and Data Science Institute, Azrieli Global Scholar at the Canadian Institute for Advanced Research (CIFAR), Technical Director of The Database of Religious History, a fellow at the Charter Cities Institute and board member of the One Pencil Project. Of Sri Lankan extraction, he trained as an engineer in Australia, but later became interested in anthropological and cultural questions. He studied for his Ph.D. under Joe Henrich in Canada. Like his mentor, Muthukrishna cross-applies toolkits from evolutionary biology and population genetics to questions of variation and change in human cultures. A Theory of Everyone is an ambitious book with arguably galactic ambitions. The chapters jump from topics like the Cambrian Explosion to the ever-increasing amount of energy needed to get at the fossil fuels that power our civilization. But to start off, Razib asks Muthukrishna about his background as a “third culture kid” and how that might have influenced his anthropological interests. Muthukrishna observed firsthand social and political chaos in Papua New Guinea, while his family's background in Sri Lanka illustrated for him the salience of ethnic tensions, even when differences might seem minimal to outsiders. Then Razib talks about A Theory of Everyone's fixation on energy and its role in powering organic life, about our technology-driven civilization and about our potential interplanetary future. 
Here, Muthukrishna thinks like an engineer, albeit with a broad historical and evolutionary perspective. He and Razib also discuss the problems of “degrowth economics” and why it is a dead-end for a dynamic civilization's flourishing. Razib also probes Muthukrishna for his views on IQ, its utility as a psychological measure, the variation between individuals and groups, and how those might relate to cultural evolutionary frameworks for considering cognitive aptitudes. The conversation concludes with a consideration of future possibilities as we hurtle past our current energy constraints as a civilization (Muthukrishna is bullish on nuclear), and the role of decentralized political experimentation in improving our social technology.
The concept of 'value based health care' - where patient outcomes are monitored and health care services are funded on the basis of the quality of care, rather than the quantity of procedures - has been around for a couple of decades, but has yet to become the norm. This podcast explores the potential benefits of a shift from 'volume' to 'value', to patients and to health care providers.

Guests:
Catherine MacLean, Chief Value Medical Officer at the Hospital for Special Surgery in New York.
Meni Styliadou, Founder and Co-lead of the Health Outcomes Observatory and VP Health Data Partnerships, Data Science Institute, Takeda (featured in thumbnail picture).

Links:
Global Coalition for Value in Healthcare

Related episodes:
Gene therapy - how can poorer countries benefit from the most expensive drugs in the world?
Antimicrobial resistance - how to stop a quiet pandemic
Below the Belt: the movie that lifts the taboo on endometriosis
Beyond AI: the top-10 tech of 2023 set to change our lives

Podcast links:
Check out all our podcasts on wef.ch/podcasts:
Radio Davos - subscribe
Meet the Leader - subscribe
World Economic Forum Book Club Podcast - subscribe
Agenda Dialogues - subscribe

Join the World Economic Forum Podcast Club
We explore what impact the U.S. President's executive order on artificial intelligence might have as governments across the globe move to regulate AI.

We Meet:
Khari Johnson, senior writer, Wired
Suresh Venkatasubramanian, director of the Center for Technological Responsibility, Reimagination, and Redesign with the Data Science Institute at Brown University

Credits:
SHIFT is produced by Jennifer Strong, Anthony Green and Emma Cillekens. It's mixed by Garret Lang, with original music from him and Jacob Gorski.
Welcome to another Tech in MKE episode: conversations with business leaders who have a connection to Milwaukee where we chat about the state of technology in their organizations, our community, and how both come together to attract talent and make Milwaukee more “techie”. This episode features Jonathan Stark, Executive Director of Northwestern Mutual's Data Science Institute. Jonathan brings an amazing background, perspective, and experience to the role, the company, and our community. Just what does Jonathan do at the NMDSI? What is the NMDSI and what does it seek to accomplish? How did Jonathan find his way here to Milwaukee from New York and what does he love about the city? All of this and more as Steve digs in with the leader of this important and influential organization. Tech in MKE is brought to you by Harley-Davidson, The Milwaukee Tech Hub Coalition, Expedient, the Milwaukee School of Engineering, and Brooksource. Thanks for listening! Love you, Milwaukee!
“I think the horizon, the trends that we are seeing today, are indicating that this technology is just going to explode and be integrated into everything we do in nursing or beyond. Many of the things with nursing are going to change significantly for us, which has already started,” Maxim Topaz, PhD, MA, RN, FAAN, Elizabeth Standish Gill Associate Professor of Nursing at the School of Nursing and The Data Science Institute at Columbia University in New York, NY, and senior scientist at VNS Health, told Jaime Weimer, MSN, RN, AGCNS-BS, AOCNS®, manager of oncology nursing practice at ONS, during a discussion about nursing's contributions and opportunities to shape AI in health care.

You can earn free NCPD contact hours after listening to this episode and completing the evaluation linked below.

Music Credit: “Fireflies and Stardust” by Kevin MacLeod, licensed under Creative Commons by Attribution 3.0

Earn 0.75 NCPD contact hours of nursing continuing professional development (NCPD), which may be applied to the nursing practice, oncology nursing practice, or professional practice/performance ILNA categories, by listening to the full recording and completing an evaluation at myoutcomes.ons.org by October 13, 2025. The planners and faculty for this episode have no relevant financial relationships with ineligible companies to disclose. ONS is accredited as a provider of NCPD by the American Nurses Credentialing Center's Commission on Accreditation.

Learning outcome: The learner will report an increase in knowledge related to the nurse's role with AI in health care.

Episode Notes
Complete this evaluation for free NCPD.
Oncology Nursing Podcast: Episode 131: NLM Is Changing Health Care Through the Power of Data
ONS Voice articles:
- New Technology Tools Help Oncology APRNs Improve Patient Outcomes
- AI Ultrasound Is Nearly 100% Accurate in Detecting Thyroid Cancers
- Nursing Informaticists Are the Backbone of Technology-Driven Care
- What ChatGPT Says About Belonging and Oncology Nursing
Clinical Journal of Oncology Nursing article: Technology and Humanity
Oncology Nursing Forum article: Artificial Intelligence for Oncology Nursing Authors: Potential Utility and Concerns About Large Language Model Chatbots
Topaz's project on natural language processing: NimbleMiner: An Open-Source Nursing-Sensitive Natural Language Processing System Based on Word Embedding
Harvard Business Review article: The Power of Natural Language Processing
Nursing and Artificial Intelligence Leadership (NAIL) Collaborative
ChatGPT
Platforms for courses on AI: Coursera, edX MOOC

To discuss the information in this episode with other oncology nurses, visit the ONS Communities. To find resources for creating an ONS Podcast Club in your chapter or nursing community, visit the ONS Podcast Library. To provide feedback or otherwise reach ONS about the podcast, email pubONSVoice@ons.org.

Highlights From Today's Episode

“So, today, I think this technology advances every week. There are updates for this technology, specifically ChatGPT technology, that are not incremental, I think, they're pretty evolutional, though, and are making me excited about this field. I was excited before, but I was very skeptical, actually, before the recent advancements in the last year or so about our ability to get to a place where we would interact with those large language models.” TS 10:31

“My goal right now is to try to see how we use this technology appropriately for nurses in general, including oncology nursing. The use cases that I can see are more multiple, and one thing is generating the summary of your care. 
If the interaction between you and the patient can be recorded, then some summary can be generated. Now in oncology, there are a lot of things that machine learning in general, including technologies like ChatGPT, can do.” TS 13:35 “When we build machine learning models using the secondary data that kind of captures those biases, then this propagates. So their ability, those machine learning models, they just learn patterns from data. So, they're going to be biased as well as the data that goes in, basically.” TS 18:25 “You need to think about your population. You need to think about your specific setting. You need to think about what are the historic factors that kind of influence what's going on in practice. And, what is your kind of moral compass. And then, you make decisions about how to fix the machine learning algorithm.” TS 22:04 “The field that, today, kind of the name for this field, I think, is changing to AI, right. They used to call this informatics. So, you know, that's kind of traditional name, right, that is what you would search for on those platforms, so like healthcare informatics, right. I have a course on eDEX. This platform that thousands of people from more than fifty countries have taken and learned about informatics.” TS 35:33 “[In terms of nurses] thinking about ethical aspects and thinking about some of the biases that can be embedded in the technology, we're really good at this. I would definitely encourage nurses that are seeing explosion of this technology to think about those trends and not just blindly apply this knowledge in practice, but think about what implications does it have for our patients? So having the patients in mind, having the person in mind, is kind of that central thing. And that's what we're doing every day.” TS 42:08
Associate Professor of English and Comparative Literature at Columbia University and author of the forthcoming Literary Theory For Robots Dennis Yi Tenen shares the hidden history of modern machine intelligence — which has more to do with medieval poetry and Russian folktales than the myths being told by those in Silicon Valley.
------------------Support the channel------------
Patreon: https://www.patreon.com/thedissenter
PayPal: paypal.me/thedissenter
PayPal Subscription 5 Dollars: https://tinyurl.com/ycmr9gpz
PayPal Subscription 10 Dollars: https://tinyurl.com/y9r3fc9m
PayPal Subscription 20 Dollars: https://tinyurl.com/y95uvkao

This show is sponsored by Enlites, Learning & Development done differently. Check the website here: http://enlites.com/

Dr. Michael Muthukrishna is an Associate Professor of Economic Psychology at the London School of Economics. His other affiliations include Affiliate of the Developmental Economics Group at STICERD, Affiliate of the Data Science Institute, Azrieli Global Scholar at the Canadian Institute for Advanced Research (CIFAR), Technical Director of The Database of Religious History, and Board member of the One Pencil Project. His research focuses on the psychological and evolutionary processes that underlie culture and how culture is transmitted, maintained, and modified. He is the author of A Theory of Everyone: The New Science of Who We Are, How We Got Here, and Where We're Going.

In this episode, we focus on A Theory of Everyone. We begin with the premise of the book, and what a “theory of everyone” is. We talk about four laws of life: energy, innovation, cooperation, and evolution. We discuss how cooperation expands in human societies, and what distinguishes us from other animals. We talk about intelligence and IQ, and the collective brain and the idea of “genius”. We discuss the importance of institutions and cultural norms, and the role of ideas. We talk about multiculturalism, and how to solve the “paradox of diversity”. We discuss our current energy ceiling, and the problems that derive from it and how to solve them. Finally, we talk about the challenges of studying human behavior cross-culturally. 
-- A HUGE THANK YOU TO MY PATRONS/SUPPORTERS: PER HELGE LARSEN, JERRY MULLER, HANS FREDRIK SUNDE, BERNARDO SEIXAS, OLAF ALEX, ADAM KESSEL, MATTHEW WHITINGBIRD, ARNAUD WOLFF, TIM HOLLOSY, HENRIK AHLENIUS, JOHN CONNORS, FILIP FORS CONNOLLY, DAN DEMETRIOU, ROBERT WINDHAGER, RUI INACIO, ZOOP, MARCO NEVES, COLIN HOLBROOK, SIMON COLUMBUS, PHIL KAVANAGH, MIKKEL STORMYR, SAMUEL ANDREEFF, FRANCIS FORDE, TIAGO NUNES, FERGAL CUSSEN, HAL HERZOG, NUNO MACHADO, JONATHAN LEIBRANT, JOÃO LINHARES, STANTON T, SAMUEL CORREA, ERIK HAINES, MARK SMITH, JOÃO EIRA, TOM HUMMEL, SARDUS FRANCE, DAVID SLOAN WILSON, YACILA DEZA-ARAUJO, ROMAIN ROCH, DIEGO LONDOÑO CORREA, YANICK PUNTER, ADANER USMANI, CHARLOTTE BLEASE, NICOLE BARBARO, ADAM HUNT, PAWEL OSTASZEWSKI, NELLEKE BAK, GUY MADISON, GARY G HELLMANN, SAIMA AFZAL, ADRIAN JAEGGI, PAULO TOLENTINO, JOÃO BARBOSA, JULIAN PRICE, EDWARD HALL, HEDIN BRØNNER, DOUGLAS FRY, FRANCA BORTOLOTTI, GABRIEL PONS CORTÈS, URSULA LITZCKE, SCOTT, ZACHARY FISH, TIM DUFFY, SUNNY SMITH, JON WISMAN, DANIEL FRIEDMAN, WILLIAM BUCKNER, PAUL-GEORGE ARNAUD, LUKE GLOWACKI, GEORGIOS THEOPHANOUS, CHRIS WILLIAMSON, PETER WOLOSZYN, DAVID WILLIAMS, DIOGO COSTA, ANTON ERIKSSON, CHARLES MOREY, ALEX CHAU, AMAURI MARTÍNEZ, CORALIE CHEVALLIER, BANGALORE ATHEISTS, LARRY D. LEE JR., OLD HERRINGBONE, STARRY, MICHAEL BAILEY, DAN SPERBER, ROBERT GRESSIS, IGOR N, JEFF MCMAHAN, JAKE ZUEHL, BARNABAS RADICS, MARK CAMPBELL, TOMAS DAUBNER, LUKE NISSEN, CHRIS STORY, KIMBERLY JOHNSON, BENJAMIN GELBART, JESSICA NOWICKI, LINDA BRANDIN, NIKLAS CARLSSON, ISMAËL BENSLIMANE, GEORGE CHORIATIS, VALENTIN STEINMANN, PER KRAULIS, KATE VON GOELER, ALEXANDER HUBBARD, LIAM DUNAWAY, BR, MASOUD ALIMOHAMMADI, PURPENDICULAR, JONAS HERTNER, URSULA GOODENOUGH, GREGORY HASTINGS, AND DAVID PINSOF! 
A SPECIAL THANKS TO MY PRODUCERS, YZAR WEHBE, JIM FRANK, ŁUKASZ STAFINIAK, TOM VANEGDOM, BERNARD HUGUENEY, CURTIS DIXON, BENEDIKT MUELLER, THOMAS TRUMBLE, KATHRINE AND PATRICK TOBIN, JONCARLO MONTENEGRO, AL NICK ORTIZ, AND NICK GOLDEN! AND TO MY EXECUTIVE PRODUCERS, MATTHEW LAVENDER, SERGIU CODREANU, BOGDAN KANIVETS, AND ROSEY!
In this episode, Dr. Hettie V. Williams discusses racism and resistance in the digital age with Dr. Rob Eschmann. Williams is Director of the Trotter Institute at the University of Massachusetts at Boston; Eschmann is a writer, scholar, filmmaker, and educator from Chicago, Associate Professor of Social Work and a member of the Data Science Institute at Columbia University, and Faculty Associate at Harvard's Berkman Klein Center for Internet and Society. Eschmann's book When the Hood Comes Off: Racism and Resistance in the Digital Age (University of California Press, 2023) discusses the changing nature of racism in interpersonal encounters and online. Drawing on a wealth of data using a mixed-methods approach, he demonstrates in his text how new media impacts race and racism in society. More information about his book and work can be found on his website.
In the last year, programs like ChatGPT, Dall-E and Bard have shown the world just how powerful artificial intelligence can be. AI programs can write hit pop songs, pass the bar exam and even appear to develop meaningful relationships with humans. This apparent revolution in AI tech has provoked widespread awe, amazement — and for some, terror. But as Brown Professor of Data Science and Computer Science Suresh Venkatasubramanian explains on this episode of Trending Globally, artificial intelligence has been with us for a while, and a serious, nuanced conversation about its role in our society is long overdue. Suresh Venkatasubramanian is the Deputy Director of Brown's Data Science Institute. This past year, he served in the Biden Administration's Office of Science and Technology Policy, where he helped craft the administration's blueprint for an “AI Bill of Rights.” In this episode of Trending Globally, Dan Richards talks with Suresh about what an AI Bill of Rights should look like and how to build a future where artificial intelligence isn't just safe and effective, but actively contributes to social justice.

Read the blueprint for the AI Bill of Rights
Learn more about Brown's Data Science Institute
Learn more about the Watson Institute's other podcasts
Transcript coming soon to our website
Artificial intelligence can generate human-like responses, making it possible for malicious actors to misuse it to spread false information. The use of AI and deepfakes on social media platforms raises concerns about the potential amplification of disinformation campaigns. Efforts are underway to address this issue, including developing strategies to detect and counteract misinformation. Halıcıoğlu Data Science Institute professor Stuart Geiger breaks down the consequences of AI and ChatGPT's use on social media. Series: "Data Science Channel" [Science] [Show ID: 38935]
Artificial intelligence can generate human-like responses, making it possible for malicious actors to misuse it to spread false information. The use of AI and deepfakes on social media platforms raises concerns about the potential amplification of disinformation campaigns. Efforts are underway to address this issue, including developing strategies to detect and counteract misinformation. Halıcıoğlu Data Science Institute professor Stuart Geiger breaks down the consequences of AI and ChatGPT's use on social media. Series: "Data Science Channel" [Science] [Show ID: 38935]
Artificial intelligence can generate human-like responses, making it possible for malicious actors to misuse it to spread false information. The use of AI and deepfakes on social media platforms raises concerns about the potential amplification of disinformation campaigns. Efforts are underway to address this issue, including developing strategies to detect and counteract misinformation. Halıcıoğlu Data Science Institute professor Stuart Geiger breaks down the consequences of AI and ChatGPT's use on social media. Series: "Data Science Channel" [Science] [Show ID: 38935]
Artificial intelligence can generate human-like responses, making it possible for malicious actors to misuse it to spread false information. The use of AI and deepfakes on social media platforms raises concerns about the potential amplification of disinformation campaigns. Efforts are underway to address this issue, including developing strategies to detect and counteract misinformation. Halıcıoğlu Data Science Institute professor Stuart Geiger breaks down the consequences of AI and ChatGPT's use on social media. Series: "Data Science Channel" [Science] [Show ID: 38935]
Generative AI machines can create artwork, music, and even realistic text that resembles human creations. However, these AI systems can also produce misleading or harmful content. The ethical implications involve issues like intellectual property, authenticity, and accountability. Who owns AI-generated art? How can we ensure transparency and trust in AI-generated information? Can AI infringe upon human creativity and job security? Halıcıoğlu Data Science Institute professor David Danks discusses how we need to strike the right balance between technological advancement and ethical considerations. This requires careful thought, regulation, and ongoing discussions to navigate the ethical landscape of generative AI responsibly. Series: "Data Science Channel" [Science] [Show ID: 38932]
Generative AI machines can create artwork, music, and even realistic text that resembles human creations. However, these AI systems can also produce misleading or harmful content. The ethical implications involve issues like intellectual property, authenticity, and accountability. Who owns AI-generated art? How can we ensure transparency and trust in AI-generated information? Can AI infringe upon human creativity and job security? Halıcıoğlu Data Science Institute professor David Danks discusses how we need to strike the right balance between technological advancement and ethical considerations. This requires careful thought, regulation, and ongoing discussions to navigate the ethical landscape of generative AI responsibly. Series: "Data Science Channel" [Science] [Show ID: 38932]
Generative AI machines can create artwork, music, and even realistic text that resembles human creations. However, these AI systems can also produce misleading or harmful content. The ethical implications involve issues like intellectual property, authenticity, and accountability. Who owns AI-generated art? How can we ensure transparency and trust in AI-generated information? Can AI infringe upon human creativity and job security? Halıcıoğlu Data Science Institute professor David Danks discusses how we need to strike the right balance between technological advancement and ethical considerations. This requires careful thought, regulation, and ongoing discussions to navigate the ethical landscape of generative AI responsibly. Series: "Data Science Channel" [Science] [Show ID: 38932]
Generative AI machines can create artwork, music, and even realistic text that resembles human creations. However, these AI systems can also produce misleading or harmful content. The ethical implications involve issues like intellectual property, authenticity, and accountability. Who owns AI-generated art? How can we ensure transparency and trust in AI-generated information? Can AI infringe upon human creativity and job security? Halıcıoğlu Data Science Institute professor David Danks discusses how we need to strike the right balance between technological advancement and ethical considerations. This requires careful thought, regulation, and ongoing discussions to navigate the ethical landscape of generative AI responsibly. Series: "Data Science Channel" [Science] [Show ID: 38932]
Responses are generated based on the patterns and information it has acquired during training. While ChatGPT lacks genuine understanding and operates based on statistical patterns rather than true comprehension, it has the ability to talk like a human. But, how does ChatGPT actually work? Halıcıoğlu Data Science Institute professor Jingbo Shang breaks down how the large language model and artificial intelligence actually works. Series: "Data Science Channel" [Science] [Show ID: 38931]
Responses are generated based on the patterns and information it has acquired during training. While ChatGPT lacks genuine understanding and operates based on statistical patterns rather than true comprehension, it has the ability to talk like a human. But, how does ChatGPT actually work? Halıcıoğlu Data Science Institute professor Jingbo Shang breaks down how the large language model and artificial intelligence actually works. Series: "Data Science Channel" [Science] [Show ID: 38931]
Responses are generated based on the patterns and information it has acquired during training. While ChatGPT lacks genuine understanding and operates based on statistical patterns rather than true comprehension, it has the ability to talk like a human. But, how does ChatGPT actually work? Halıcıoğlu Data Science Institute professor Jingbo Shang breaks down how the large language model and artificial intelligence actually works. Series: "Data Science Channel" [Science] [Show ID: 38931]
Responses are generated based on the patterns and information it has acquired during training. While ChatGPT lacks genuine understanding and operates based on statistical patterns rather than true comprehension, it has the ability to talk like a human. But, how does ChatGPT actually work? Halıcıoğlu Data Science Institute professor Jingbo Shang breaks down how the large language model and artificial intelligence actually works. Series: "Data Science Channel" [Science] [Show ID: 38931]
Ben and Ceora talk through some thorny issues around AI-generated music and art, explain why creators are suing AI companies for copyright infringement, and compare notes on the most amusing/alarming AI-generated content making the rounds (Pope coat, anyone?).Episode notes:Getty Images is suing the company behind AI art generator Stable Diffusion for copyright infringement, accusing the company of copying 12 million images without permission or compensation to train its AI model.Meanwhile, a group of artists is suing the companies behind Midjourney, DreamUp, and Stable Diffusion for “scraping and collaging” their work to train AI models. One of those artists, Sarah Anderson, wrote an op-ed in The New York Times about seeing her comics gobbled up by AI models and regurgitated as far-right memes.Speaking of copyright violations, did Vanilla Ice really steal that hook from David Bowie and Freddie Mercury? (Yes.)Check out the AI model trained on Kanye's voice that sounds almost indistinguishable from Ye himself.Read The Verge's deep dive into the intersection of AI-generated music and IP/copyright laws.Watch the AI-generated video of Will Smith eating spaghetti that's been called “the natural end point for AI development.”ICYMI: The Pope coat was real in our hearts.Columbia University's Data Science Institute recently wrote about how blockchain can give creators more control over their IP, now that AI-generated art is clearly here to stay.Congrats to today's Lifeboat badge winner, herohuyongtao, for answering How can I add a prebuilt static library in a project using CMake?.
EPISODE 1387: In this KEEN ON show, Andrew talks to the HOW DATA HAPPENED co-author Chris Wiggins on a history of data from the Age of Reason to today's Age of Algorithms Chris Wiggins is an associate professor of applied mathematics at Columbia University and the Chief Data Scientist at The New York Times. At Columbia he is a founding member of the executive committee of the Data Science Institute, and of the Department of Systems Biology, and is affiliated faculty in Statistics. He is a co-founder and co-organizer of hackNY (http://hackNY.org), a nonprofit which since 2010 has organized once a semester student hackathons and the hackNY Fellows Program, a structured summer internship at NYC startups. Prior to joining the faculty at Columbia he was a Courant Instructor at NYU (1998-2001) and earned his PhD at Princeton University (1993-1998) in theoretical physics. He is a Fellow of the American Physical Society and is a recipient of Columbia's Avanessians Diversity Award. Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children. Learn more about your ad choices. Visit megaphone.fm/adchoices
For 35 years, the Internet has been bedeviled by attackers. For about as long, defenders have tried deploying various defenses; these have often been of limited utility. We look back at what has happened, focusing on the explicit or (more often) implicit assumptions behind the defenses, and why these assumptions were or were not correct. About the speaker: Steven M. Bellovin is the Percy K. and Vida L. W. Hudson Professor of Computer Science at Columbia University, member of the Cybersecurity and Privacy Center of the university's Data Science Institute, and an affiliate faculty member at Columbia Law School. Bellovin does research on security and privacy and on related public policy issues. In his copious spare professional time, he does some work on the history of cryptography. He joined the faculty in 2005 after many years at Bell Labs and AT&T Labs Research, where he was an AT&T Fellow. He received a BA degree from Columbia University, and an MS and PhD in Computer Science from the University of North Carolina at Chapel Hill. While a graduate student, he helped create Netnews; for this, he and the other perpetrators were given the 1995 Usenix Lifetime Achievement Award (The Flame). He has also received the 2007 NIST/NSA National Computer Systems Security Award and has been elected to the Cybersecurity Hall of Fame. Bellovin has served as Chief Technologist of the Federal Trade Commission and as the Technology Scholar at the Privacy and Civil Liberties Oversight Board. He is a member of the National Academy of Engineering and has served on the Computer Science and Telecommunications Board of the National Academies of Sciences, Engineering, and Medicine. 
In the past, he has been a member of the Department of Homeland Security's Science and Technology Advisory Committee, and the Technical Guidelines Development Committee of the Election Assistance Commission.Bellovin is the author of Thinking Security and the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and holds a number of patents on cryptographic and network protocols. He has served on many National Research Council study committees, including those on information systems trustworthiness, the privacy implications of authentication technologies, and cybersecurity research needs; he was also a member of the information technology subcommittee of an NRC study group on science versus terrorism. He was a member of the Internet Architecture Board from 1996-2002; he was co-director of the Security Area of the IETF from 2002 through 2004.More details may be found at http://www.cs.columbia.edu/~smb/informal-bio.html.
When not teaching at computer science at the UC San Diego, Barna Saha serves as the director of The Institute for Emerging Core Methods in Data Science (EnCORE). EnCORE is a National Science Foundation (NSF) Transdisciplinary Research in Principles of Data Science Phase II (TRIPODS) program housed at UC San Diego's Halıcıoğlu Data Science Institute. She talks with Saura Naderi about her early interest in data science and some of the amazing work being done with EnCORE. She also talks about the importance of getting more young women interested in STEM fields like data science. Series: "Science Like Me" [Science] [Show ID: 38550]
When not teaching at computer science at the UC San Diego, Barna Saha serves as the director of The Institute for Emerging Core Methods in Data Science (EnCORE). EnCORE is a National Science Foundation (NSF) Transdisciplinary Research in Principles of Data Science Phase II (TRIPODS) program housed at UC San Diego's Halıcıoğlu Data Science Institute. She talks with Saura Naderi about her early interest in data science and some of the amazing work being done with EnCORE. She also talks about the importance of getting more young women interested in STEM fields like data science. Series: "Science Like Me" [Science] [Show ID: 38550]
When not teaching at the University of Pennsylvania, CJ Taylor is part of the Robotics team working on The Institute for Learning-Enabled Optimization at Scale (TILOS) at UC San Diego's Halıcıoğlu Data Science Institute. He talks with Saura Naderi about his upbringing, his early interest in data science and his current position. He also talks about his involvement in projects that benefit the community. Series: "Science Like Me" [Science] [Show ID: 38501]
When not teaching at the University of Pennsylvania, CJ Taylor is part of the Robotics team working on The Institute for Learning-Enabled Optimization at Scale (TILOS) at UC San Diego's Halıcıoğlu Data Science Institute. He talks with Saura Naderi about his upbringing, his early interest in data science and his current position. He also talks about his involvement in projects that benefit the community. Series: "Science Like Me" [Science] [Show ID: 38501]
In this episode, I talk with Dr. Margaret E. Roberts (scroll down for full bio) about China and the new age of censorship. Topics:Methods of modern censorshipWhy free speech is bad for regimes How much does the average Chinese Citizen know?How censorship creates factions in ChinaShould the West intervene?The global impact of censorshipDemocracies and censorshipCensorship and Covid-19 in ChinaBooks Mentioned: (I get a cut)Censored: Distraction and Diversion inside China's Great Firewall by Margaret E. Roberts - https://amzn.to/3QJODtEStories of Your Life and Others by Ted Chiang - https://amzn.to/3QKcnODMargaret Roberts is a Professor in the Department of Political Science and the Halıcıoğlu Data Science Institute at the University of California, San Diego. She co-directs the China Data Lab at the 21st Century China Center. Her research interests lie in the intersection of political methodology and the politics of information, with a specific focus on methods of automated content analysis and the politics of censorship and propaganda in China. She received a PhD from Harvard in Government (2014), MS from Stanford in Statistics (2009) and BA from Stanford in International Relations and Economics (2009). Her book, Censored: Distraction and Diversion Inside China's Great Firewall, published by Princeton University Press in 2018, was listed as one of the Foreign Affairs Best Books of 2018, was honored with the Goldsmith Book Award, and has been awarded the Best Book Award in the Human Rights Section and Information Technology and Politics Section of the American Political Science Association. She holds a Chancellor's Associates Endowed Chair at UCSD.Socials! -Lessons from Interesting People substack: https://taylorbledsoe.substack.com/Website: https://www.aimingforthemoon.com/Instagram: https://www.instagram.com/aiming4moon/Twitter: https://twitter.com/Aiming4MoonTaylor's Blog: https://www.taylorgbledsoe.com/YouTube: https://www.youtube.com/channel/UC6-TwYdfPcWV-V1JvjBXk
This week we're resharing an episode of the Light Reading Podcast featuring Nick Feamster, director of research for the Data Science Institute at the University of Chicago. He joined Light Reading's Phil Harvey and Kelsey Ziser last month to discuss his team's research based on the Internet Equity Initiative data portal, which uses Ookla Speedtest data to map out Internet inequities. See acast.com/privacy for privacy and opt-out information.
The five components of Internet equity include accessibility, infrastructure, affordability, adoption and performance/reliability, says the University of Chicago's Nick Feamster.In his latest return to the podcast, Nick Feamster, director of research for the Data Science Institute at the University of Chicago, shares findings from his team's research that supports the Internet Equity Initiative data portal, which uses Ookla Speedtest data and other data sets to map out Internet inequities."This important work shows the disparity of Internet access and performance, as well as a variety of different demographic measures, including race, wealth and education by Census tract," said Ookla in a recent newsletter.The Internet Equity Initiative deployed Internet measurement devices in over 100 households in Chicago to measure disparities in Internet performance and reliability between low-income and high-income neighborhoods, according to Ookla.Here are just a few things covered in this podcast:Background on the Internet Equity Initiative (01:40)The five components of Internet Equity (05:40)Comparing digital conferencing tools (11:26)Data and research behind the Internet Equity Initiative (15:53)Conclusions reached based on available data (21:57)Challenges with speed testing (34:21)Preview of upcoming research paper: "Benchmarks or Equity?" (35:21)Related stories and links:Internet Equity Initiative mapA Tale of Two GigsUnderstanding Chicago's Digital Divide: What Does Census Data Tell Us?What Can We Learn from Selected, Unmatched Data: Measuring Internet Inequity in Chicago See acast.com/privacy for privacy and opt-out information.