We're joined by the US Science Envoy for AI, Dr. Rumman Chowdhury, a leading expert in responsible AI, to uncover the ethical, technical, and societal implications of artificial intelligence. As AI rapidly eats the world, the question is what happens when it doesn't align with human values? How do we navigate the risks of bias, misinformation, and hallucination in AI systems?

Dr. Chowdhury has been at the forefront of AI governance, red teaming, and AI risk mitigation. She has worked with global institutions, governments, and tech companies to make AI more accountable, safe, and equitable. From her time on Twitter's (now X) Machine Learning Ethics, Transparency and Accountability (META) team to founding Humane Intelligence, she has actively shaped policies that determine how AI interacts with human society.

We dive deep into:
- AI bias, disinformation, and manipulation: How AI models inherit human biases and what we can do about it.
- Hallucinations in AI: Why generative AI models fabricate information and why it's not a bug but a feature.
- AI governance and regulation: Why unchecked AI development is dangerous, and the urgent need for independent audits.
- The risks of OpenAI, Meta, and big tech dominance: Who is really in control of AI, and how can we ensure fair oversight?
- How companies should approach AI ethics: Practical strategies businesses can use to prevent harm while innovating responsibly.

Key Takeaways from the Episode:
1. AI as a Tool, Not a Mind: Dr. Rumman Chowdhury debunks the myth that AI is alive or sentient. AI is a tool—just like a hammer—it can be used to build or destroy. The real issue isn't AI itself, but how humans choose to use it.
2. Why AI Hallucinations Are Unavoidable: Unlike traditional machine learning models, generative AI doesn't compute facts; it predicts what words statistically fit together. This means hallucinations—where AI completely fabricates information—are not a flaw, but an inherent feature of how these models work (see the toy sketch after these notes).
3. The Hidden Biases in AI Models: AI models are only as good as their training data, which often reflects human biases. Dr. Chowdhury discusses how AI systems unintentionally amplify biases in hiring, finance, and law enforcement, and what needs to be done to fix it.
4. The Illusion of AI Objectivity: Many assume AI models are neutral, but the truth is that all models are built with human input, which means they carry subjective biases. Dr. Chowdhury warns that the real danger is allowing a handful of tech elites to dictate how AI shapes global narratives.
5. The Need for AI Red Teaming & Auditing: Just like cybersecurity stress tests, AI models need independent stress tests to identify risks before they cause harm. Dr. Chowdhury shares her experience leading global AI red teaming exercises with scientists and governments to assess AI's real-world impact.
6. OpenAI and the Power Problem: Is OpenAI truly aligned with the public interest? Dr. Chowdhury critiques how AI giants hold more power than entire nations and explains why AI must be treated as a public utility rather than a corporate monopoly.
7. Why AI Needs More Public Oversight: Most AI governance is self-imposed by the companies that build these models. Dr. Chowdhury calls for third-party, independent AI audits, similar to financial auditing, to ensure transparency and accountability in AI decision-making.
8. The Role of Governments vs. Private AI Firms: With AI development largely controlled by private companies, what role should governments play? Dr. Chowdhury argues that governments must create AI Safety Institutes, set up national regulations, and empower independent researchers to hold AI accountable.

Timestamps:
(00:00) - Introduction to Dr. Rumman Chowdhury and AI ethics
(03:03) - Why AI is just a tool (and how it's being misused)
(04:58) - The difference between machine learning, deep learning, and generative AI
(07:43) - Why AI hallucinations will never fully go away
(11:46) - AI misinformation and the challenge of verifying truth
(13:26) - The ethical risks of OpenAI and Meta's control over AI
(18:20) - The role of red teaming in stress-testing AI models
(30:26) - Should AI be treated as a public utility?
(35:43) - Government vs. private AI oversight—who should regulate AI?
(37:22) - The case for third-party AI audits
(53:51) - The future of AI governance and accountability
(61:03) - Closing thoughts and how AI can be a force for good

Join us in this deep dive into the world of AI ethics, accountability, and governance with one of the field's top leaders. Follow our host (@iwaheedo) for more insights on technology, civilization, and the future of AI.
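To make the hallucination point concrete, here is a minimal, purely illustrative sketch (not from the episode; the tiny corpus, the bigram counting, and the function names are all made up for illustration). It continues a prompt with whichever word most often followed the previous word in its training text, so the statistically likeliest continuation wins even when it is factually wrong, which is the same failure mode, in miniature, described above for generative AI.

# Minimal illustrative sketch (hypothetical, not from the episode): a toy
# bigram "model" that continues a prompt with the most frequent next word.
# It has no notion of truth, only of co-occurrence counts.
from collections import Counter, defaultdict

# Tiny, made-up training text: "paris" follows "in" more often than "pisa".
corpus = (
    "the eiffel tower is in paris . "
    "the eiffel tower is in paris . "
    "the leaning tower is in pisa ."
).split()

# Map each word to the counts of the words observed immediately after it.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_prompt(prompt: str, steps: int = 1) -> str:
    """Greedily append the most frequent continuation, one word per step."""
    tokens = prompt.split()
    for _ in range(steps):
        candidates = next_words.get(tokens[-1])
        if not candidates:
            break
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

# The toy model confidently asserts the leaning tower is in Paris, because
# "paris" is simply the likeliest word after "in" in its training data.
print(continue_prompt("the leaning tower is in"))

Real language models are vastly more sophisticated, but the training objective, predicting a plausible next token rather than retrieving a verified fact, is the same, which is why the episode treats hallucination as inherent to the approach rather than a bug to be patched out.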
How do you identify risks in AI models? Red teaming is one of the options, says the guest of the AI at Scale podcast: Dr. Rumman Chowdhury, CEO of Humane Intelligence and US Science Envoy for Artificial Intelligence. Rumman guides us through her approach to detecting risks and ensuring transparency and accountability in AI systems. She emphasizes the importance of responsible AI practices and shares her perspective on the role of regulation in fostering innovation. Recognized as one of TIME's 100 Most Influential People in AI, she offers valuable insights on navigating ethical challenges in AI development.
Maha Taibah is the Founder of RUMMAN, a human-centric design firm focused on workplace wellbeing, and an independent board member of PwC MENA. With 20+ years in human capital development across government, private, and academic sectors, she is a strategic advisor, investor, speaker, and wellbeing advocate. A former government leader driving national initiatives for the Saudi labor market, Maha also serves on several global boards and was the only “sharkette” on Shark Tank Arabia. An alumna of DePaul, Harvard Business School, and Singularity University, she has been featured in outlets like The New York Times, Forbes, and Fortune.
Welcome to the Islamabad United Podcast, hosted and produced by Backward Point. This episode features Rumman Raees, who talks about his journey to the PSL, winning the Champions Trophy, and how he overcame the challenges he faced during his injuries. Hosted and produced by @backwardpointpod Design & BTS by Finoto Studio
For AI to achieve its full potential, non-experts need to contribute to its development, says Rumman Chowdhury, CEO and cofounder of Humane Intelligence. She shares how the right-to-repair movement of consumer electronics provides a promising model for a path forward, with ways for everyone to report issues, patch updates or even retrain AI technologies.
Internet of Humans, with Jillian York & Konstantinos Komaitis
In this episode, we talk with Rumman Chowdhury, a data scientist and social scientist and the CEO of the tech nonprofit Humane Intelligence, which builds a community of practice around evaluations of AI models. Rumman is the United States Science Envoy for Artificial Intelligence. We ask Rumman about AI and how it intersects with humanity, the role of human rights in AI, what responsible AI means, and how best to govern it given the current geopolitical tensions. Rumman also walks us through her thoughts on AI accountability and her term "moral outsourcing".
Dr. Rumman Chowdhury is a leader in applied algorithmic ethics, creating ethical AI solutions. She heads Parity Consulting and the Parity Responsible Innovation Fund and is a Responsible AI Fellow at Harvard. Previously, she led the META team at Twitter and founded the algorithmic audit platform Parity. This year, she was one of four scientists to serve as a new U.S. Science Envoy.
00:00 Intro
2:54 Are tech companies delivering on their promise?
4:19 Why is Rumman no longer in tech?
5:30 The culture at Google has changed
7:52 Corporate culture in tech companies
11:10 Being legal is not always ethical
15:45 Does AI reflect society?
20:45 AI errors
27:00 Should we believe what AI tells us?
40:00 Is AI more intelligent than us?
42:50 What is intelligence?
48:00 Will AI free up more time for us?
YouTube: @mogawdatofficial
Instagram: @mo_gawdat
Facebook: @mo.gawdat.official
LinkedIn: /in/mogawdat
Tiktok: @mogawdat
X: @mgawdat
Website: mogawdat.com
Don't forget to subscribe to Slo Mo for new episodes every Saturday. Only with your help can we reach One Billion Happy #onebillionhappy
For AI to achieve its full potential, non-experts need to be let into the development process, says Rumman Chowdhury, CEO and cofounder of Humane Intelligence. She tells the story of farmers fighting for the right to repair their own AI-powered tractors (which some manufacturers actually made illegal), proposing everyone should have the ability to report issues, patch updates or even retrain AI technologies for their specific uses.
In this episode:
- Rumman Chowdhury, PhD, founder and CEO of Humane Intelligence, joins the pod to talk w/ Katie and Tara about the reality of running a nonprofit in tech
- Rumman shares stories from working at Twitter before and after Elon Musk took over and what it taught her about ethics in tech and AI
- Why Rumman chose to build a tech nonprofit instead of a for-profit company
- How to track KPIs, attract and retain top talent with a low budget
You can read more about Rumman's work here: https://www.rummanchowdhury.com/
Artificial Intelligence (AI) is on every business leader's agenda. How do you ensure the AI systems you deploy are harmless and trustworthy? This month, Azeem picks some of his favorite conversations with leading AI safety experts to help you break through the noise. Today's pick is Azeem's conversation with Dr. Rumman Chowdhury, a pioneer in the field of applied algorithmic ethics. She runs Parity Consulting, the Parity Responsible Innovation Fund, and she's a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University.
Theme Song: “Not Your Fool” written and performed by Alexa Villa; courtesy of Sign From The Universe Entertainment, LLC
On this episode of There Has to Be a Better Way?, co-hosts Zach Coseglia and Hui Chen talk to Dr. Rumman Chowdhury, a pioneer in the field of responsible AI. Currently a Responsible AI Fellow at Harvard, with prior leadership roles at Twitter and Accenture, Rumman has first-hand insight into the real harms of AI, including algorithmic bias. She discusses how data scientists seek to understand these problems, and the importance of trustworthiness in the future of AI development. Having recently testified before Congress about AI governance, she shares her thoughts about building a governance ecosystem where human ingenuity can flourish.
On 27 September 2021, the ODI, in partnership with the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge and the Center for Responsible AI at NYU (NYU R/AI), convened an online roundtable to explore experimentation in data policy and practice: how structurally under-represented communities in North America and the EU can act as transnational emergent forces that renegotiate or reimagine the social contract under the Fourth Industrial Revolution.
You Can Do Amazing Things is a podcast that will bring you weekly inspiration, tips and knowledge to help you feel more confident and motivated to take action on making those dreams and goals happen! We'll talk about mindset, confidence, habits, discipline, health and entrepreneurship - you will learn helpful tips and strategies, and hear stories from others who took action, pushed through what was holding them back, and accomplished their next AMAZING thing. You're next - I'm so excited for your success - join me and let's DO this! Subscribe so you don't miss the first episode coming January 27, 2022!
Friday sermon (Khutbah Jum'at) by Ust. Abdurrahim Rumman Lc., hafizhahullah. Tags: khutbah, Friday, Friday sermon, yufid, rumaysho, al Iman TV, muslim.or.id, Sunnah dawah, spiritual nourishment, tombo ati, advice, Islamic motivation, zest for life, Islamic guidance, Muslim inspiration, Selected Friday Sermons of Dakwah Sunnah, Friday Sermon Podcast, short talk, brief lecture, Friday khutbah, tawhid dawah, tarbiyah sunnah, tunas ilmu, intan ilmu. Source: YouTube
In this episode of the Artificial Intelligence & Equality Initiative podcast, Senior Fellows Anja Kaspersen and Wendell Wallach are joined by Mona Sloane, senior research scientist and adjunct professor at New York University, and Rumman Chowdhury, Twitter's director of machine learning ethics, transparency and accountability, to discuss their recent online resource aiprocurement.org. The conversation addresses key tension points and narratives impacting how AI systems are procured and embedded in the public sector.
In Episode 122, Quinn has big questions about AI ethics and, like many other situations, is left wondering: was Dr. Ian Malcolm right all along? Our guest is Dr. Rumman Chowdhury. She is the director of the Machine Learning Ethics, Transparency & Accountability (META) team at Twitter, where she's helping to build a new ethical backbone into Twitter from the inside out. On every social media platform you interact with on a regular basis, there is some type of machine learning or algorithm determining what you see and how you interact with it. Obviously, that is quite a responsibility to bear. We've seen algorithms in the past be straight-up racist, suck people into alt-right conspiracy funnels, and cater to the worst of our human tendencies. These are all running on systems designed by humans – mostly white men – and that shows in the ways that they work. It's Dr. Rumman's job to clamp down on the worst of these and make algorithms that perform ethically – and she's certainly got her work cut out for her. Have feedback or questions? Tweet us at http://www.twitter.com/importantnotimp, or send a message to questions@importantnotimportant.com. Important, Not Important Book Club: "Against Purity" by Alexis Shotwell, https://bookshop.org/shop/importantnotimportant. Links: rummanchowdhury.com (http://www.rummanchowdhury.com/), Twitter: @ruchowdh (http://twitter.com/ruchowdh), parity-fund.com (https://www.parity-fund.com/), Launch of the Startups & Society Initiative (https://startupsandsociety.medium.com/launch-of-the-startups-society-initiative-d6cdfe11512f). Connect with us: Subscribe to our newsletter at ImportantNotImportant.com. Follow us on Twitter: twitter.com/ImportantNotImp. Follow Quinn: twitter.com/quinnemmett. Follow Brian: twitter.com/beansaight. Like and share us on Facebook: facebook.com/ImportantNotImportant. Intro/outro by Tim Blane: timblane.com. Important, Not Important is produced by Crate Media. Support this podcast
Timestamps 5:00 - What did being in academia teach you? 8:30 - How did you get rid of nerd #FOMO and focus on 1 thing? 14:00 - If you want to do good, can you still work in consulting or finance? 21:00 - Can you truly be ethical in a system that values profit above all else? 31:00 - Creating new metrics to drive ethical growth and improvement 36:00 - What Nazi Germany can teach us about unethical AI 43:30 - The story of Parity, an algorithmic audit company 53:00 - Haters as a Key Performance Indicator (KPI)
Advanced technologies provide a unique ability to process data in efficient and smart ways, allowing healthcare providers and administrators to anticipate patients' needs, design targeted preventative programs, innovate remedies, and remove burdensome tasks from these processes. In this episode, we look at the implications and opportunities for the use of AI in Healthcare. How can we streamline and verify quality data to ensure our AI decisions are instructive, ethical, and valuable? How can we demystify AI for increased transparency and utilization, both to improve performance and reduce doubt?
In this episode of the Data Exchange I speak with Dr. Rumman Chowdhury, founder of Parity, a startup building products and services to help companies build and deploy ethical and responsible AI. Prior to starting Parity, Rumman was Global Lead for Responsible AI at Accenture Applied Intelligence. Subscribe: Apple, Android, Spotify, Stitcher, Google, and RSS. Download the 2020 NLP Survey Report and learn how companies are using and implementing natural language technologies. Detailed show notes can be found on The Data Exchange web site. Subscribe to The Gradient Flow Newsletter.
What is the relationship between the government and artificial intelligence? To unpack this timely question we interview Mona Sloane and Rumman Chowdhury. Mona Sloane is a sociologist working on inequality in the context of AI design and policy. Mona is a Fellow with NYU's Institute for Public Knowledge (IPK), where she convenes the 'Co-Opting AI' series and co-curates 'The Shift' series. She is also an Adjunct Professor at NYU's Tandon School of Engineering. Rumman Chowdhury studies artificial intelligence and humanity. She is currently the Global Lead for Responsible AI at Accenture Applied Intelligence, where she works with C-suite clients to create cutting-edge technical solutions for ethical, explainable and transparent AI. Full show notes for this episode can be found at Radicalai.org. If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod
How can we inform and inspire the next generation of responsible technologists and changemakers? How do you get involved as someone new to the responsible AI field? In partnership with All Tech Is Human, we present this livestreamed conversation featuring Rumman Chowdhury (Responsible AI Lead at Accenture) and Yoav Schlesinger (Principal, Ethical AI Practice at Salesforce). The conversation is moderated by All Tech Is Human's David Ryan Polgar. The organizational partner for the event is TheBridge. The conversation does not stop here! For each episode in our series with All Tech Is Human, you can find a detailed "continue the conversation" page on our website radicalai.org, including all of the action items we debriefed, annotated resources mentioned by the guest speakers during the livestream, ways to get involved, relevant podcast episodes, books, and other publications.
About this episode's guest: Rumman Chowdhury's passion lies at the intersection of artificial intelligence and humanity. She holds degrees in quantitative social science and has been a practicing data scientist and AI developer since 2013. She is currently the Global Lead for Responsible AI at Accenture Applied Intelligence, where she works with C-suite clients to create cutting-edge technical solutions for ethical, explainable and transparent AI.
Summary: The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O'Neill.
About this episode's guest: Rumman Chowdhury's passion lies at the intersection of artificial intelligence and humanity. She holds degrees in quantitative social science and has been a practicing data scientist and AI developer since 2013. She is currently the Global Lead for Responsible AI at Accenture Applied Intelligence, where she works with C-suite clients to create cutting-edge technical solutions for ethical, explainable and transparent AI. She tweets as @ruchowdh. This episode streamed live on Thursday, July 23, 2020.
Episode highlights:
(Part 1)
3:17 how Rumman's background in political science shapes her thinking in AI
3:28 "quantitative social science is math with context"
3:58 "often when we talk about technologies like artificial intelligence… we've started to talk about the technology as if it supersedes the human"
4:11 Rumman mentions her article "The pitfalls of a 'retrofit human' in AI systems": https://venturebeat.com/2019/11/11/the-pitfalls-of-a-retrofit-human-in-ai-systems/
4:56 What is the core human concept that shapes your work?
5:25 "I recognize and want a world in which people make decisions that I disagree with, but they are making those decisions fully informed and fully capable."
5:49 A DOG ALMOST APPEARS!
7:18 transparency and explainability in Responsible AI
8:17 on the cake trend: "reality is already turned upside on its head — I want to be able to trust that the shoe is a shoe and not really a cake" :)
9:04 on the critiques of Responsible AI, "cancel culture," and anthropomorphizing machines
11:11 Responsible AI is not about having politically correct answers; her role leading Responsible AI is part of core business functions
12:00 Responsible AI is about serving the customers, the people; credit lending discrimination example
12:40 need for discussion that's bigger than profitability and efficiency; humanity and human flourishing
13:27 "human flourishing — creating something with positive impact — is not at odds with good business"
15:21 "I think sometimes people can get overly focused on value as revenue generation; value comes from many, many different things"
17:05 a political science view on human agency relative to machine outcomes
19:22 AI governance
20:34 "constructive dissent"
21:13 the "human in the loop" problem
25:14 algorithmic bias
29:20 "building products with the future in mind"
29:44 are there applications of AI that fill you with hope for the good they could potentially do?
(Part 2)
0:45 how can we promote humanity and human flourishing with AI and emerging technologies?
1:16 what can businesses do to enable Responsible AI
1:22 "I have a paper out… where we interview people who work in Responsible AI and Ethical AI… on what companies can do" (see: https://arxiv.org/abs/2006.12358)
6:22 what can the average human being do
8:40 where can people find you? on Twitter: https://twitter.com/ruchowdh and on the web: http://www.rummanchowdhury.com/
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible Artificial Intelligence at Accenture. In our conversation with Rumman, we explored questions like: Why is now such a critical inflection point in the application of responsible AI? How should engineers and practitioners think about AI ethics and responsible AI? Why is AI ethics inherently personal and how can you define your own personal approach? Is the implementation of AI governance necessarily authoritarian? How do we balance idealism and pragmatism in the application of AI ethics? We also cover practical topics like how and where you should implement responsible AI in your organization, and building the teams and processes capable of taking on critical ethics and governance questions. The complete show notes for this episode can be found at twimlai.com/talk/381.
In this episode, Dr. Rumman discusses his research on undisclosed payments by pharmaceutical and medical device manufacturers to authors of endoscopy guidelines in the USA. For the complete abstract, visit JCAG, the online Journal of the Canadian Association of Gastroenterology: 2019, Volume 2, Issue Supplement_2, 15 March 2019, Page 309. https://academic.oup.com/jcag/issue/2/Supplement_2 This episode was recorded during the 2019 GRIT Program in Banff, Alberta.
How can we use artificial intelligence ethically during a crisis? How do we balance privacy with security and public health? Rumman Chowdhury, global lead for responsible AI at Accenture, discusses surveillance, supply chains, pseudoscience, Netflix, and much more as the world adjusts to social distancing.
Rumman R Kalam is the founder of Rantages - Bangladesh's first crude humor website which has now built itself into a meme empire in the country. He is also the Acting In-charge of New Media at The Daily Star - the country's largest circulating English newspaper. While many don't consider memes to be anything serious, it is hard to deny the impact of this content form, especially in the current digital content landscape. Big brands have adopted it and some businesses have an entire digital strategy built around creating memes. In this episode, Tawsif Akkas sits down with Rumman to talk about why memes shouldn't be overlooked, the ethical side of "stealing memes", and how you can start earning a buck or two by creating relatable online content. Find out more about the episode here: www.tawsifakkas.com/003
What's up, podcast! Today Rumman and I discuss a potential new car, or future cars, for his collection. He is considering giving his 488 the Mansory treatment or getting another V12 (an 812 Superfast) to pair with his SVJ. Take a listen to find out what we decide!
As technology becomes more embedded in our lives, the fear of a big data takeover is becoming even more tangible. Recent headlines, including those reporting on the use of Artificial Intelligence (AI) in racially biased algorithms, “deepfakes” that are indistinguishable from reality and fatal accidents involving self-driving cars, have only contributed to these fears. Many of these stories, however, do not include ways non-tech people can gain agency over their data. As a practicing data scientist and AI developer since 2013, Rumman Chowdhury is no stranger to the problems with tech. However, her optimism about the good it can do—in identifying cancer cells, for example, or helping you clean your apartment—has led her to focus her career on bringing humanity to data and including everyone in the process. Instead of sitting on the sidelines as bystanders to the techpocalypse, Chowdhury encourages both companies and consumers to take an active role in recognizing the real-world problems that perpetuate bad algorithms, instilling a moral compass in our tech. Chowdhury has been recognized as one of Silicon Valley's 40 under 40, one of the BBC's 100 Women and is a fellow at the Royal Society of the Arts. She is currently the global lead for responsible AI at Accenture Applied Intelligence, where she works with c-suite clients to create cutting-edge technical solutions for ethical, explainable and transparent AI. Come calm your fears about our data-driven future with Rumman Chowdhury as she joins INFORUM to break down how we can all work to shape AI for the better. This conversation will be moderated by Moira Weigel, a postdoctoral researcher at the Harvard Society of Fellows and a founding editor of Logic magazine.
SPEAKERS
Rumman Chowdhury, Ph.D., Responsible AI Lead, Accenture; Data and Social Scientist; RSA Fellow
Moira Weigel, Junior Fellow, Harvard University; Founding Editor, Logic Magazine
This program was recorded in front of a live audience at The Commonwealth Club of California in San Francisco on January 29th, 2020.
Danny and Leo break down the beats of the Rumman Beef, recap the fake crossing guards shoot, and squash the WTHN podcast response. Danny's YouTube: https://www.youtube.com/dannymullenofficial Leo's YouTube: https://www.youtube.com/channel/UCTTcsKnU0T_g7sLNvwVzpDg Danny's IG: @DannyMullen Leo's IG: @Leofdot Danny's Twitter: @DannyMullen Leo's Twitter: @Leodottavio Support the channel! Patreon▶ https://www.patreon.com/DannyMullen Cameo ▶ https://Cameo.com/dannymullen
Leo and Danny talk about the Fresno and Sacramento film shoots, a failed reality show, and the Rumman beef. Danny's YouTube: https://www.youtube.com/dannymullenofficial Leo's YouTube: https://www.youtube.com/channel/UCTTcsKnU0T_g7sLNvwVzpDg Danny's IG: @DannyMullen Leo's IG: @Leofdot Danny's Twitter: @DannyMullen Leo's Twitter: @Leodottavio Support the channel! Patreon▶ https://www.patreon.com/DannyMullen Cameo ▶ https://Cameo.com/dannymullen
Episode 11: Rumman Ariff by Novelty Growth
Want to learn how to drive business ROI for your company with data science, machine learning, and artificial intelligence? Watch our AI For Growth series: https://www.topbots.com/ai-for-growth/ TOPBOTS executives Mariya Yao and Marlene Jia interview top technology leaders from global companies to learn how they’ve successfully applied modern automation techniques to improve sales, marketing, product, and customer experience. Learn winning strategies from executives who have adopted AI for their enterprises and bring them back to your company. Today we speak with Rumman Chowdhury, Global Lead for Responsible AI at Accenture. As a data scientist and social scientist, Rumman’s passion lies at the intersection of artificial intelligence and humanity. At Accenture, she drives the creation of responsible and ethical AI products for company clients. She has been honored as one of the BBC’s 100 Women, one of Silicon Valley’s 40 under 40, a TEDx speaker, and is a fellow at the Royal Society for the Arts. Rumman will share how business leaders should define and enforce ethical behavior and ensure safety and transparency in AI and automated systems. Read the transcript and summary of this interview on TOPBOTS: https://www.topbots.com/building-ethical-responsible-ai-technologies-interview-rumman-chowdhury-accenture/
In this episode, Rumman Chowdhury shares her insights on:
- Enterprise AI: what exactly it is and the right way to think about it
- Two key advantages Enterprise AI can provide to your organization
- The biggest barrier executive leaders face when it comes to Enterprise AI
- Humanity and AI: current AI models and flawed reality
- How to create AI with Active Inclusion
- Education as it stands today, and how it's about to be revolutionized
And tons of other valuable insights... I hope you enjoy listening to it as much as I enjoyed creating it!
References:
Rumman's Blog: http://www.rummanchowdhury.com/
Rumman's Twitter: @ruchowdh (https://twitter.com/ruchowdh)
AI Education Startup: https://www.accel.ai/
Studies Show, hosted by Rumman Chowdhury and Imran Siddiquee
1000 women in data science: https://twitter.com/BecomingDataSci/lists/women-in-data-science/members
About Rumman Chowdhury:
Rumman Chowdhury is an AI authority working on cutting-edge applications of Artificial Intelligence at Accenture. She also serves on the Board of Directors for several AI startups. Rumman holds two undergraduate degrees from MIT and a Master's in Quantitative Methods in the Social Sciences from Columbia University. Rumman also has a PhD in Political Science from the University of California, San Diego. Rumman has given several talks and tutorials, including at the Intel Analytics Conference, the Open Data Science Conference, the Machine Learning Conference, Women Catalysts, PyBay, and the Demystifying AI Conference. In mainstream media, Rumman has been interviewed for the PHDivas podcast, German public television, and the fashion line MM LaFleur. In 2017, Rumman has upcoming talks at the Global Artificial Intelligence Conference, Strata + Hadoop San Jose, the Southern Data Science Conference, and Strata + Hadoop London.
Blaming biased data and bad algorithms? Stop moral outsourcing to machines because data is a reflection of us! Liz and Xine interview Rumman Chowdhury, PhD candidate in political science and a full-time data scientist. We talk about how ethics, diversity, and social justice play an integral part in data, dubious robots, and the uneven development of data science through industry and academia. Beyond tin foil hats, we discuss everything from racist redlining in Chicago, philosopher Hannah Arendt's concept of the banality of evil, Netflix's Luke Cage, and menstruation apps. How can we approach data in a way that is not reductive? How can the quantitative not come at the cost of the qualitative? Rumman works at Metis, an organization dedicated to data science training that has scholarships for women, people of color, veterans, and LGBTQ+ people. (www.thisismetis.com) Check out more about her work and data science job postings at www.rummanchowdhury.com, and follow her on Twitter @ruchowdh