Podcast appearances and mentions of Christina Montgomery

  • Podcasts: 24
  • Episodes: 30
  • Average episode duration: 30m
  • Episode frequency: infrequent
  • Latest episode: Dec 10, 2024

Popularity: chart of episodes per year, 2017–2024 (not reproduced)



Latest podcast episodes about Christina Montgomery

TechStuff
Smart Talks with IBM: Responsible AI - Why Businesses Need Reliable AI Governance

Dec 10, 2024 · 28:39 · Transcription available


To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: ibm.com/smarttalks Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information.

IBM Analytics Insights Podcasts
Learn AI Ethics, Risks, and Regulation! Christina Montgomery, IBM's Chief Privacy and Trust Officer. Yes, Law can be fun AND very interesting.

Jul 31, 2024 · 44:35


Send us a Text Message. Learn AI Ethics, Risks, and Regulation with Christina Montgomery, IBM's Chief Privacy and Trust Officer. Yes, Law can be fun AND very interesting.

Chapters: 01:00 Christina Montgomery! · 04:36 My Daughter and the Bar · 08:36 Chief Privacy and Trust Officer · 11:37 Keeping IBM Out of Trouble · 13:34 Client Conversations · 16:23 Where to Be Bullish and Bearish · 20:52 The Risks of LLMs · 24:21 NIST and AI Alliance · 28:26 AI Regulation · 36:13 Synthetic Data · 38:00 Misconceptions · 40:07 Worries · 41:27 The Path to AI · 43:13 Aspiring Lawyers

LinkedIn: linkedin.com/in/christina-montgomery-8776b1a
Website: https://www.ibm.com/impact/ai-ethics

Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Making Data Simple
Learn AI Ethics, Risks, and Regulation! Christina Montgomery, IBM's Chief Privacy and Trust Officer. Yes, Law can be fun AND very interesting.

Jul 31, 2024 · 44:35


Send us a Text Message. Learn AI Ethics, Risks, and Regulation with Christina Montgomery, IBM's Chief Privacy and Trust Officer. Yes, Law can be fun AND very interesting.

Chapters: 01:00 Christina Montgomery! · 04:36 My Daughter and the Bar · 08:36 Chief Privacy and Trust Officer · 11:37 Keeping IBM Out of Trouble · 13:34 Client Conversations · 16:23 Where to Be Bullish and Bearish · 20:52 The Risks of LLMs · 24:21 NIST and AI Alliance · 28:26 AI Regulation · 36:13 Synthetic Data · 38:00 Misconceptions · 40:07 Worries · 41:27 The Path to AI · 43:13 Aspiring Lawyers

LinkedIn: linkedin.com/in/christina-montgomery-8776b1a
Website: https://www.ibm.com/impact/ai-ethics

Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

POLITICO Dispatch
Four U.S. and Canadian tech execs talk AI

Jun 13, 2024 · 19:04


POLITICO Tech went to Toronto for the U.S.-Canada Summit, hosted by BMO Financial Group and Eurasia Group. Host Steven Overly moderated a discussion on how the neighbors are competing and cooperating when it comes to artificial intelligence, with Cohere CEO Martin Kon, OpenAI vice president of government affairs Anna Makanju, IBM chief privacy and trust officer Christina Montgomery and Radical Ventures co-founder and managing partner Jordan Jacobs. On the show today, key takeaways from that conversation.

Data Protection Breakfast Club
“Didn't you guys invent the Barcode?” w/ Christina Montgomery - Chief Privacy & Trust Officer @ IBM

Mar 13, 2024 · 46:11


Christina Montgomery is IBM's Chief Privacy & Trust Officer and has spent over 29 years at the company! In that time she has done just about everything from a legal perspective. Prior to her current role, she was Secretary to IBM's Board of Directors and Managing Attorney, overseeing the IBM law department's strategic and transformational initiatives, hiring and recruiting, professional development, budget management, and other projects on a worldwide basis. Now Christina is leading the charge on IBM's AI Ethics Board, speaking publicly about the challenges and benefits of emerging AI technologies.

Aspen Ideas to Go
Forging a Path to Ethical AI

Feb 22, 2024 · 55:33


It doesn't look like we're going to be able to put the generative artificial intelligence genie back in the bottle. But we might still be able to prevent some potential damage. Tools like Bard and ChatGPT are already being used in the workplace, educational settings, health care, scientific research, and all over social media. What kind of guardrails do we need to prevent bad actors from causing the worst imaginable outcomes? And who can put those protections in place and enforce them? A panel of AI experts from the 2023 Aspen Ideas Festival shares hopes and fears for this kind of technology, and discusses what can realistically be done by private, public and civil society sectors to keep it in check. Lila Ibrahim, COO of the Google AI company DeepMind, joins social science professor Alondra Nelson and IBM's head of privacy and trust, Christina Montgomery, for a conversation about charting a path to ethical uses of AI. CNBC tech journalist Deirdre Bosa moderates the conversation and takes audience questions. aspenideas.org

The Six Five with Patrick Moorhead and Daniel Newman
The Six Five Insider with Christina Montgomery, VP and Chief Privacy & Trust Officer at IBM

Jan 23, 2024 · 21:24


On this episode of The Six Five – Insider, hosts Daniel Newman and Patrick Moorhead welcome Christina Montgomery, Vice President and Chief Privacy & Trust Officer at IBM, for a conversation on AI and the importance of her role as Chief Privacy and Trust Officer. Their discussion covers:
  • An introduction from Christina Montgomery about her role and what it entails at IBM
  • What role, as Chief Privacy Officer, she played in the development of IBM's AI platform, watsonx
  • What concerns clients have about AI and how IBM is addressing them
  • Her recommendations on where organizations can begin when looking to adopt AI
Learn more about IBM's AI platform, watsonx, on the company's website.

Stuff To Blow Your Mind
Smart Talks with IBM: Responsible AI: Why Businesses Need Reliable AI Governance

Oct 18, 2023 · 27:52 · Transcription available


To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: ibm.com/smarttalks Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information.

Revisionist History
Responsible AI: Why Businesses Need Reliable AI Governance

Oct 17, 2023 · 26:57 · Transcription available


To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: https://www.ibm.com/smarttalks/ Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information.

The Steve Harvey Morning Show
Responsible AI: Why Businesses Need Reliable AI Governance

Oct 17, 2023 · 26:57 · Transcription available


To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: https://www.ibm.com/smarttalks/ Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance This is a paid advertisement from IBM. Support the show: https://www.steveharveyfm.com/ See omnystudio.com/listener for privacy information.

TechStuff
Smart Talks with IBM - Responsible AI: Why Businesses Need Reliable AI Governance

Oct 17, 2023 · 28:00 · Transcription available


To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: ibm.com/smarttalks Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information.

Into the Zone
Responsible AI: Why Businesses Need Reliable AI Governance

Oct 17, 2023 · 26:57 · Transcription available


To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: https://www.ibm.com/smarttalks/ Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information.

Deep Background with Noah Feldman
Responsible AI: Why Businesses Need Reliable AI Governance

Oct 17, 2023 · 26:57 · Transcription available


To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: https://www.ibm.com/smarttalks/ Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information.

Story of the Week with Joel Stein
Responsible AI: Why Businesses Need Reliable AI Governance

Oct 17, 2023 · 26:57 · Transcription available


To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: https://www.ibm.com/smarttalks/ Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information.

Solvable
Responsible AI: Why Businesses Need Reliable AI Governance

Oct 17, 2023 · 26:57 · Transcription available


To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: https://www.ibm.com/smarttalks/ Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information.

Smart Talks with IBM
Responsible AI: Why Businesses Need Reliable AI Governance

Oct 17, 2023 · 26:57 · Transcription available


To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business. Visit us at: https://www.ibm.com/smarttalks/ Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information.

The Marketing AI Show
#48: Artificial Intelligence Goes to Washington, the Biggest AI Safety Risks Today, and How AI Could Be Regulated

May 23, 2023 · 54:18


AI came to Washington in a big way. OpenAI CEO Sam Altman appeared before Congress for his first-ever testimony, speaking at a hearing called by Senators Richard Blumenthal and Josh Hawley. The topic? How to oversee and establish safeguards for artificial intelligence. The hearing lasted nearly three hours and focused largely on Altman, though Christina Montgomery, an IBM executive, and Gary Marcus, a leading AI expert, academic, and entrepreneur, also testified.

During the hearing, Altman covered a wide range of topics, including the different risks posed by generative AI, what should be done to address those risks, and how companies should develop AI technology. Altman even suggested that AI companies be regulated, possibly through the creation of one or more federal agencies and/or some type of licensing requirement.

The hearing was divisive. Some experts applauded what they saw as much-needed urgency from the federal government to tackle important AI safety issues. Others criticized the hearing for being far too friendly, citing worries that companies like OpenAI are angling to have undue influence over the regulatory and legislative process. An important note: this hearing appeared to be informational in nature. It was not called because OpenAI is in trouble, and it appears to be the first of many such hearings and committee meetings on AI moving forward.

In this episode, Paul and Mike tackle the hearing from three different angles, and also discuss a series of lower-profile government meetings that occurred. First, they do a deep dive into what happened, what was discussed, and what it means for marketers and business leaders. Then they take a closer look at the biggest issues in AI safety that were discussed during the hearing and that the hearing was designed to address. At one point during the hearing, Altman said, “My worst fear is we cause significant harm to the world.” Lawmakers and the AI experts at the hearing cited several AI safety risks they're losing sleep over. Overarching concerns included election misinformation, job disruption, copyright and licensing, generally harmful or dangerous content, and the pace of change.

Finally, Paul and Mike talk through the regulatory measures proposed during the hearing and what dangers there are, if any, of OpenAI or other AI companies tilting the regulatory process in their favor. Some tough questions were raised in the process. Senate Judiciary Chair Senator Dick Durbin suggested the need for a new agency to oversee the development of AI, and possibly an international agency. Gary Marcus said there should be a safety review, similar to what the FDA uses for drugs, to vet AI systems before they are deployed widely, advocating for what he called a “nimble monitoring agency.” On the subject of agencies, Senator Blumenthal cautioned that the agency or agencies must be well-resourced, with both money and the appropriate experts. Without those, he said, AI companies would “run circles around us.” As expected, this discussion wasn't without controversy.

Tune in to this critically important episode of The Marketing AI Show. Find it on your favorite podcast player and be sure to explore the links below.
  • Listen to the full episode of the podcast
  • Want to receive our videos faster? Subscribe on YouTube
  • Visit our website
  • Receive our weekly newsletter
  • Register for a free webinar
  • Come to our next Marketing AI Conference
  • Enroll in AI Academy for Marketers
  • Join our community on Slack, LinkedIn, Twitter, Instagram, and Facebook

The Nonlinear Library
EA - Some quotes from Tuesday's Senate hearing on AI by Daniel Eth

May 17, 2023 · 6:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some quotes from Tuesday's Senate hearing on AI, published by Daniel Eth on May 17, 2023 on The Effective Altruism Forum. On Tuesday, the US Senate held a hearing on AI. The hearing involved 3 witnesses: Sam Altman, Gary Marcus, and Christina Montgomery. (If you want to watch the hearing, you can watch it here – it's around 3 hours.) I watched the hearing and wound up live-tweeting quotes that stood out to me, as well as some reactions. I'm copying over quotes to this post that I think might be of interest to others here. Note this was a very impromptu process and I wasn't originally planning on writing a forum post when I was jotting down quotes, so I've presumably missed a bunch of quotes that would be of interest to many here. Without further ado, here are the quotes (organized chronologically): Senator Blumenthal (D-CT): "I think you [Sam Altman] have said, in fact, and I'm gonna quote, 'Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.' You may have had in mind the effect on jobs, which is really my biggest nightmare in the long run..." Sam Altman: [doesn't correct the misunderstanding of the quote and instead proceeds to talk about possible effects of AI on employment] Sam Altman: "My worst fears are that... we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways; it's why we started the company... I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear eyed about what the downside case is and the work that we have to do to mitigate that." Sam Altman: "I think the US should lead [on AI regulation], but to be effective, we do need something global... There is precedent – I know it sounds naive to call for something like this... we've done it before with the IAEA... Given what it takes to make these models, the chip supply chain, the sort of limited number of competitive GPUs, the power the US has over these companies, I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of, that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world." Senator Coons (D-DE): "I understand one way to prevent generative AI models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There's another approach that's called 'constitutional AI' that gives the model a set of values or principles to guide its decision making. Would it be more effective to give models these kinds of rules instead of trying to require or compel training the model on all the different potentials for harmful content? ... I'm interested also, what international bodies are best positioned to convene multilateral discussions to promote responsible standards? We've talked about a model being CERN and nuclear energy. I'm concerned about proliferation and nonproliferation." Senator Kennedy (R-LA): "Permit me to share with you three hypotheses that I would like you to assume for the moment to be true... Hypothesis number 3... 
there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying... Please tell me in plain English two or three reforms, regulations, if any, that you would implement if you were queen or king for a day..." Gary Marcus: "Number 1: a safety-review like we use with the FDA prior to widespread deployment... Number 2: a nimble monitoring agency to follow what's going ...

The Nonlinear Library
LW - Brief notes on the Senate hearing on AI oversight by Diziet

May 17, 2023 · 3:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brief notes on the Senate hearing on AI oversight, published by Diziet on May 16, 2023 on LessWrong.

On May 16th, 2023, Sam Altman of OpenAI; Gary Marcus, professor at New York University; and Christina Montgomery, chief privacy and trust officer at IBM, spoke to Congress on topics related to AI regulation. A link to the hearing can be found here: Youtube: CNBC Senate hearing on AI oversight. From a lens of AI Alignment, the general substance of the conversation focused on near-term effects such as job loss, bias, harmful content, targeted advertising, privacy implications, election interference, IP and copyright issues, and other similar topics. Sam Altman has spoken about hard AI risks before, but he was not explicit about them in the hearing. Gary Marcus communicated that his estimate for AGI is 50 years out, so his position on timelines is far out. There was an interesting moment where Gary Marcus called on Sam to explicitly state his worst fears, but Sam did not explicitly say anything about x-risk and gave a broad, vague answer: Twitter link.

A proposed mechanism for safety was the concept of a "Nutrition Label" or a "Data Sheet" summarizing what a model has been trained on. This seems like a misguided exercise given the vast amount of data the LLMs are trained on. Summarizing that volume of data is a difficult task, and most orgs keep their data sets private due to competitive reasons and potential copyright risk going forward. I also find flawed the premise that knowing some summarization of the training set is predictive or informative of the capabilities, truth approximation, and biases of large text models.

Sam Altman, Gary Marcus, and Christina Montgomery all asked for more regulation, with Sam Altman and Prof. Marcus asking for a new regulatory agency. There were some allusions to previous private conversations between the speakers and members of Congress in the hearing, so it seems likely that some very substantive lobbying for regulations is happening in a closed-door setting. For example, Section 230 was brought up multiple times, from a copyright and privacy perspective. Another alternative of requiring licensing to work on this technology was brought up. Sen. John Kennedy explicitly called out an existential threat ("... a berserk wing of the artificial intelligence community that intentionally or unintentionally could use AI to kill all of us and hurt us the entire time we're dying") and asked the three testifying members to propose policies to prevent such risk. Prof. Marcus explicitly called out longer-term risk and more funding for AI Safety, noting the mixed use of the term. Sam Altman mentioned regulation, licensing, and tests for exfiltration and self-replication. Gary Marcus, like Sam Altman, seemed to be quite familiar with the general scope of existential threats, for example mentioning self-improvement capabilities. His timelines are very long, so he does not seem to have a short P(doom) timeline.

Generally, it seems that the trend in the hearing was toward regulating and preventing short-term risks, potentially licensing and regulating the development of models to address immediate short-term risks, with very little discussion about existential-style risks of AGI. I hope that more questions like Sen. Kennedy's rise up and that a broader discussion about existential risk enters the public discourse at a congressional level. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - Brief notes on the Senate hearing on AI oversight by Diziet

May 17, 2023 · 3:18


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brief notes on the Senate hearing on AI oversight, published by Diziet on May 16, 2023 on LessWrong.

On May 16th, 2023, Sam Altman of OpenAI; Gary Marcus, professor at New York University; and Christina Montgomery, chief privacy and trust officer at IBM, spoke to Congress on topics related to AI regulation. A link to the hearing can be found here: Youtube: CNBC Senate hearing on AI oversight. From a lens of AI Alignment, the general substance of the conversation focused on near-term effects such as job loss, bias, harmful content, targeted advertising, privacy implications, election interference, IP and copyright issues, and other similar topics. Sam Altman has spoken about hard AI risks before, but he was not explicit about them in the hearing. Gary Marcus communicated that his estimate for AGI is 50 years out, so his position on timelines is far out. There was an interesting moment where Gary Marcus called on Sam to explicitly state his worst fears, but Sam did not explicitly say anything about x-risk and gave a broad, vague answer: Twitter link.

A proposed mechanism for safety was the concept of a "Nutrition Label" or a "Data Sheet" summarizing what a model has been trained on. This seems like a misguided exercise given the vast amount of data the LLMs are trained on. Summarizing that volume of data is a difficult task, and most orgs keep their data sets private due to competitive reasons and potential copyright risk going forward. I also find flawed the premise that knowing some summarization of the training set is predictive or informative of the capabilities, truth approximation, and biases of large text models.

Sam Altman, Gary Marcus, and Christina Montgomery all asked for more regulation, with Sam Altman and Prof. Marcus asking for a new regulatory agency. There were some allusions to previous private conversations between the speakers and members of Congress in the hearing, so it seems likely that some very substantive lobbying for regulations is happening in a closed-door setting. For example, Section 230 was brought up multiple times, from a copyright and privacy perspective. Another alternative of requiring licensing to work on this technology was brought up. Sen. John Kennedy explicitly called out an existential threat ("... a berserk wing of the artificial intelligence community that intentionally or unintentionally could use AI to kill all of us and hurt us the entire time we're dying") and asked the three testifying members to propose policies to prevent such risk. Prof. Marcus explicitly called out longer-term risk and more funding for AI Safety, noting the mixed use of the term. Sam Altman mentioned regulation, licensing, and tests for exfiltration and self-replication. Gary Marcus, like Sam Altman, seemed to be quite familiar with the general scope of existential threats, for example mentioning self-improvement capabilities. His timelines are very long, so he does not seem to have a short P(doom) timeline.

Generally, it seems that the trend in the hearing was toward regulating and preventing short-term risks, potentially licensing and regulating the development of models to address immediate short-term risks, with very little discussion about existential-style risks of AGI. I hope that more questions like Sen. Kennedy's rise up and that a broader discussion about existential risk enters the public discourse at a congressional level. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Machine Learning Street Talk
AI Senate Hearing - Executive Summary (Sam Altman, Gary Marcus)

May 16, 2023 · 49:43


Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk

In a historic and candid Senate hearing, OpenAI CEO Sam Altman, Professor Gary Marcus, and IBM's Christina Montgomery discussed the regulatory landscape of AI in the US. The discussion was particularly interesting due to its timing, as it followed the recent release of the EU's proposed AI Act, which could potentially ban American companies like OpenAI and Google from providing API access to generative AI models and impose massive fines for non-compliance.

The speakers openly addressed potential risks of AI technology and emphasized the need for precision regulation. This was a unique approach, as historically US companies have tried their hardest to avoid regulation. The hearing not only showcased the willingness of industry leaders to engage in discussions on regulation but also demonstrated the need for a balanced approach to avoid stifling innovation.

The EU AI Act, scheduled to come into force in 2026, is still just a proposal, but it has already raised concerns about its impact on the American tech ecosystem and potential conflicts between US and EU laws. With extraterritorial jurisdiction and provisions targeting open-source developers and software distributors like GitHub, the Act could create more problems than it solves by encouraging unsafe AI practices and limiting access to advanced AI technologies. One core issue with the Act is the designation of foundation models in the highest risk category, primarily due to their open-ended nature. A significant risk theme revolves around users creating harmful content and determining who should be held accountable – the users or the platforms. The Senate hearing served as an essential platform to discuss these pressing concerns and work towards a regulatory framework that promotes both safety and innovation in AI.

Chapters: 00:00 Show · 01:35 Legals · 03:44 Intro · 10:33 Altman intro · 14:16 Christina Montgomery · 18:20 Gary Marcus · 23:15 Jobs · 26:01 Scorecards · 28:08 Harmful content · 29:47 Startups · 31:35 What meets the definition of harmful? · 32:08 Moratorium · 36:11 Social Media · 46:17 Gary's take on BingGPT and pivot into policy · 48:05 Democratisation

The CEO Sessions
IBM's Chief Privacy Officer on Data Privacy in the Age of AI - Christina Montgomery

Jan 23, 2023 · 35:26


You need to pay attention to data privacy in the Age of AI. Your data feeds AI, and if the wrong information goes "in," then the results can be catastrophic for you, your team, and your customers. Ultimately, your approach to data privacy can make or break your company.

I host IBM Chief Privacy Officer and Vice President Christina Montgomery, who shares a powerful leadership strategy for data privacy for your team. As Chief Privacy Officer, Christina oversees IBM's privacy program, compliance, and strategy on a global basis, and directs all aspects of IBM's privacy policies. She also chairs IBM's AI Ethics Board, a multi-disciplinary team responsible for the governance and decision-making process for AI ethics policies and practices. She was appointed to the U.S. Department of Commerce National AI Advisory Committee, which will advise the President and the National AI Initiative Office on a range of issues related to AI.

LinkedIn Profile: https://www.linkedin.com/in/christina-montgomery-8776b1a/
Company Link: https://www.ibm.com/

What You'll Discover in this Episode:
  • What a Chief Privacy Officer does and why your company needs one.
  • An industry that's on the cutting edge of privacy.
  • Why you need a strategy for AI and data privacy.
  • The most exciting use of AI today and the biggest risks.
  • Christina's interesting path from English major to Chief Privacy Officer.
  • Why you're never truly stuck in your career.
  • The one trait she'd like to instill in every employee.
  • A strategy to make your team more proactive.
  • Her biggest source of inspiration and a difficult time it got her through.
  • A twist in her career that accelerated her growth.
  • A powerful way to celebrate the contributions of your team.
  • Two strategies to deepen your connection with remote teams.

Connect with the host, #1 bestselling author Ben Fanning:
  • Speaking and training inquiries
  • Subscribe to my YouTube channel
  • LinkedIn
  • Instagram
  • Twitter

Caveat
Privacy Compliance and Culture.

Nov 3, 2022 · 52:48


Christina Montgomery, Vice President and Chief Privacy Officer at IBM, sits down with Ben to discuss "Privacy Compliance Is Cultural: The Need for Diverse & Inclusive Perspectives in Data Governance & Regulation." Ben's story discusses a possible new "right to repair" law in New York state. Dave's story is about Twitter and its new boss, and whether that new boss has made the right decision. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney.

Stories mentioned:
  • The Nation's First Right to Repair Law Is Waiting for Kathy Hochul's Signature
  • Welcome to hell, Elon

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.

Women in Law - On The Record
Episode No. 72: Christina Montgomery, Chief Privacy Officer of IBM

Jun 7, 2022 · 35:34


Today I'm talking with Christina Montgomery, Chief Privacy Officer of IBM in Armonk, New York. Christina attended Binghamton University where she studied English before going to Harvard Law School. She spent one year in private practice before taking her first in-house job at IBM over 27 years ago. Over nearly three decades, she has held several roles, become a subject matter expert in a variety of areas of the law, and consistently and loyally served her client. Now, as Chief Privacy Officer, Christina oversees IBM's privacy program, compliance and strategy on a global basis, and directs all aspects of IBM's privacy policies, including the IBM AI Ethics Board. Christina's career is admirable and her insight is beyond valuable. Please enjoy hearing all about it.

The Steve Harvey Morning Show
How a Human-Centered Approach is Building Trustworthy AI

Nov 18, 2021 · 30:18 · Transcription available


Creating trust and transparency in AI isn't just a business requirement, it's a social responsibility. In this episode of Smart Talks, Malcolm talks to Christina Montgomery, IBM's Chief Privacy Officer and AI Ethics Board Co-Chair, and Dr. Seth Dobrin, Global Chief AI Officer, about IBM's approach to AI and how it's helping businesses transform the way they work with AI systems that are fair and address bias so AI can benefit everyone, not just a few. This is a paid advertisement for IBM. Learn more about your ad-choices at https://www.iheartpodcastnetwork.com Support the show: https://www.steveharveyfm.com/ See omnystudio.com/listener for privacy information.

Elvis Duran and the Morning Show ON DEMAND
How a Human-Centered Approach is Building Trustworthy AI

Nov 18, 2021 · 30:18 · Transcription available


Creating trust and transparency in AI isn't just a business requirement, it's a social responsibility. In this episode of Smart Talks, Malcolm talks to Christina Montgomery, IBM's Chief Privacy Officer and AI Ethics Board Co-Chair, and Dr. Seth Dobrin, Global Chief AI Officer, about IBM's approach to AI and how it's helping businesses transform the way they work with AI systems that are fair and address bias so AI can benefit everyone, not just a few. This is a paid advertisement for IBM. Learn more about your ad-choices at https://www.iheartpodcastnetwork.com See omnystudio.com/listener for privacy information.

Smart Talks with IBM
How a Human-Centered Approach is Building Trustworthy AI

Nov 18, 2021 · 30:48


Creating trust and transparency in AI isn't just a business requirement, it's a social responsibility. In this episode of Smart Talks, Malcolm talks to Christina Montgomery, IBM's Chief Privacy Officer and AI Ethics Board Co-Chair, and Dr. Seth Dobrin, Global Chief AI Officer, about IBM's approach to AI and how it's helping businesses transform the way they work with AI systems that are fair and address bias so AI can benefit everyone, not just a few. This is a paid advertisement for IBM. Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

TechStuff
How a human-centered approach is building trustworthy AI.

Nov 18, 2021 · 31:48


Creating trust and transparency in AI isn't just a business requirement, it's a social responsibility. In this episode of Smart Talks, Malcolm talks to Christina Montgomery, IBM's Chief Privacy Officer and AI Ethics Board Co-Chair, and Dr. Seth Dobrin, Global Chief AI Officer, about IBM's approach to AI and how it's helping businesses transform the way they work with AI systems that are fair and address bias so AI can benefit everyone, not just a few. This is a paid advertisement from IBM. Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

Stuff To Blow Your Mind
Smart Talks with IBM and Malcolm Gladwell: How a Human-Centered Approach is Building Trustworthy AI

Nov 17, 2021 · 30:48


Creating trust and transparency in AI isn't just a business requirement, it's a social responsibility. In this episode of Smart Talks, Malcolm talks to Christina Montgomery, IBM's Chief Privacy Officer and AI Ethics Board Co-Chair, and Dr. Seth Dobrin, Global Chief AI Officer, about IBM's approach to AI and how it's helping businesses transform the way they work with AI systems that are fair and address bias so AI can benefit everyone, not just a few. This is a paid advertisement from IBM. Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

Explain to Shane
Data protection, privacy, and the ethics of artificial intelligence (with Christina Montgomery)

Feb 2, 2021 · 24:24


Enabling data to flow between enterprise divisions and industry partners enhances our economy, but it's important that user privacy is protected when the information being shared is about individuals. Data usage will thus continue to be a major topic in tech policy, especially with regard to newer products that use artificial intelligence (AI) to collect and transfer user data. If the US were to pass federal data protection legislation or a federal privacy law, what AI-related measures would need to be included? How can we ensure AI is regulated in a precise manner that protects innovation? On this episode, Shane (https://www.aei.org/profile/shane-tews/) is joined by IBM's Chief Privacy Officer and Ethics Board Co-Chair Christina Montgomery (https://newsroom.ibm.com/Christina-Montgomery). They discuss how IBM is working to ensure its newest technologies — including AI — handle consumer data in an ethical manner, and why the company supports "precision regulation" of AI under the next administration.