Machine learning is the scientific study of algorithms and statistical models that computer systems use to perform tasks without explicit instructions, relying instead on patterns and inference.
Mike Nolan is an angel investor specializing in machine learning. As an officer in the Air Force he trained fighter pilots on F-15s. Mike relates how graduate work at MIT opened opportunities for him both professionally and in investing. A fun chat with a buddy from Walnut and a listener to the podcast.
Highlights:
Sal Daher Introduces Mike Nolan
Piction Health and Machine Learning
"... we don't frequently see companies that are on the bleeding edge of the academic research or the bleeding edge of what's going on and the top presentations at conferences and so forth, but that's okay..."
The Capabilities of Machine Learning in the Health Field
Mike Nolan's Background
"... You got to picture this beautiful country, the White Sands, New Mexico, close by, and so forth, and here's this chimp running away from this 10-mile-long sled track..."
"... They're highly creative people in a highly regimented situation. They are people who think in very creative ways, and so forth, and they have big personalities ... I suspect that there is a lot of room for creativity with fighter pilots..."
"... Yes, things can go wrong, whether you're a founder or a pilot..."
Advice to the Audience
Topics: angel investing strategies, biotech, robotics/AI
Peggy Smedley and Gerald Kasulis, vice president of business operations, NordPass, talk about the challenges SMBs (small and medium-sized businesses) are struggling with when it comes to cyber criminals today. He says when it comes to data breaches and hacks historically, we are used to seeing large organizations get breached—but recently we have seen a huge increase in SMBs getting breached. They also discuss: The big mistake SMBs are making in terms of password management. Why it is so hard to do password security. The importance of education in protecting our data and our businesses. nordpass.com/business-password-manager (3/21/23 - 814) IoT, Internet of Things, Peggy Smedley, artificial intelligence, machine learning, big data, digital transformation, cybersecurity, blockchain, 5G, cloud, sustainability, future of work, podcast, Gerald Kasulis, NordPass This episode is available on all major streaming platforms. If you enjoyed this segment, please consider leaving a review on Apple Podcasts.
Peggy Smedley and Walt Szablowski, founder & executive chairman, Eracent, dig into the topic of cybersecurity, discussing the impact of all these breaches on businesses and their customers. He says the key here is these hackers are taking over your life and taking over a business—everything you do is monitored and controlled by hackers. They also discuss: Examples of companies that have been hacked and the amount spent on ransom. How and why hacking is a business—and how that operates. How to protect your business from hackers. eracent.com (3/21/23 - 814) IoT, Internet of Things, Peggy Smedley, artificial intelligence, machine learning, big data, digital transformation, cybersecurity, blockchain, 5G, cloud, sustainability, future of work, podcast, Walt Szablowski, Eracent This episode is available on all major streaming platforms. If you enjoyed this segment, please consider leaving a review on Apple Podcasts.
Peggy Smedley talks all about manufacturing and bringing jobs back to the States—and shares a big announcement out of her hometown. She also talks about how it is going to have a big impact on American manufacturing in the years ahead—and how we need to get the next generation as excited about this announcement as she is. She also discusses: The history behind the industry's outsourcing of jobs and production to foreign nations. Challenges that came as a result of outsourcing to foreign nations. What's ahead for manufacturing in the months to come. peggysmedleyshow.com (3/21/23 - 814) IoT, Internet of Things, Peggy Smedley, artificial intelligence, machine learning, big data, digital transformation, cybersecurity, blockchain, 5G, cloud, sustainability, future of work, podcast This episode is available on all major streaming platforms. If you enjoyed this segment, please consider leaving a review on Apple Podcasts.
Date: March 22, 2023
Series: Your Daily ML
The daily episode of everything new in Machine Learning (cs.LG).
List of papers:
1. https://arxiv.org/abs/2303.11373
2. https://arxiv.org/abs/2303.11453
3. https://arxiv.org/abs/2303.11562
4. https://arxiv.org/abs/2303.11911
5. https://arxiv.org/abs/2303.11698
6. https://arxiv.org/abs/2303.11952
7. https://arxiv.org/abs/2303.11459
8. https://arxiv.org/abs/2303.11848
9. https://arxiv.org/abs/2303.11673
10. https://arxiv.org/abs/2303.11754
11. https://arxiv.org/abs/2303.11835
12. https://arxiv.org/abs/2303.11553
13. https://arxiv.org/abs/2303.11624
On this bonus episode of Writers, Ink, JP catches up with previous guest Hugh Howey. Hugh Howey is the New York Times and USA Today bestselling author of WOOL, MACHINE LEARNING, SAND, BEACON 23, and many others. His works have been translated into over 40 languages, with millions of copies sold around the world. WOOL has been adapted into Silo, a TV show from Apple TV+, which premieres May 5th! Find Writers, Ink on YouTube! - https://www.youtube.com/@jdbarker_author/ Links:* J.D. Barker - http://jdbarker.com/ Christine Daigle - https://www.christinedaiglebooks.com/ JP Rindfleisch IX - https://www.jprindfleischix.com/ Kevin Tumlinson - https://www.kevintumlinson.com/ Patrick O'Donnell - https://www.copsandwriters.com/ Hugh Howey - https://hughhowey.com/ TODAY'S SPONSOR: Laterpress - http://laterpress.com/ Best of BookTok - https://bestofbooktok.com/ Music by Nicorus - https://cctrax.com/nicorus/dust-to-dust-ep Voice Over by Rick Ganley - http://www.nhpr.com and recorded at Mill Pond Studio - http://www.millpondstudio.com Show notes & audio production by Geoff Emberlyn - https://twitter.com/gmbrlyn Website Design by Word & Pixel - http://wordandpixel.com/ Contact - https://writersinkpodcast.com/contact/ *Full disclosure: Some of the links are affiliate links. --- Support this podcast: https://anchor.fm/writersink/support
Date: March 21, 2023
Series: Your Daily ML
The daily episode of everything new in Machine Learning (cs.LG).
List of papers:
1. https://arxiv.org/abs/2303.10528
2. https://arxiv.org/abs/2303.10608
3. https://arxiv.org/abs/2303.10361
4. https://arxiv.org/abs/2303.10373
5. https://arxiv.org/abs/2303.10182
6. https://arxiv.org/abs/2303.10580
7. https://arxiv.org/abs/2303.10462
8. https://arxiv.org/abs/2303.10218
9. https://arxiv.org/abs/2303.10464
Date: March 20, 2023
Series: Your Daily ML
The daily episode of everything new in Machine Learning (cs.LG).
List of papers:
1. https://arxiv.org/abs/2303.09778
2. https://arxiv.org/abs/2303.10144
3. https://arxiv.org/abs/2303.09989
4. https://arxiv.org/abs/2303.09716
5. https://arxiv.org/abs/2303.09628
6. https://arxiv.org/abs/2303.09746
7. https://arxiv.org/abs/2303.10165
8. https://arxiv.org/abs/2303.09909
9. https://arxiv.org/abs/2303.09981
Date: March 17, 2023
Series: Your Daily ML
The daily episode of everything new in Machine Learning (cs.LG).
List of papers:
1. https://arxiv.org/abs/2303.09483
2. https://arxiv.org/abs/2303.09447
3. https://arxiv.org/abs/2303.08944
4. https://arxiv.org/abs/2303.09273
5. https://arxiv.org/abs/2303.08884
6. https://arxiv.org/abs/2303.09381
7. https://arxiv.org/abs/2303.09489
8. https://arxiv.org/abs/2303.09289
9. https://arxiv.org/abs/2303.09540
10. https://arxiv.org/abs/2303.09331
11. https://arxiv.org/abs/2303.09531
12. https://arxiv.org/abs/2303.09390
Co-hosts Dr. Paul Yi and Dr. Ali Tenjani speak with Jacopo Teneggi and Dr. Jeremias Sulam about Jacopo's trainee-award-winning research, "Weakly-Supervised Learning Substantially Reduces the Number of Labels Required for Intracranial Hemorrhage Detection on Head CT."
Our guest, Getaround VP of Product Raphaël Korach, will be going over all of the challenges of designing, marketing, and building ML components, and how you can overcome them with a Product-First approach. Productboard is the customer-centric product management software that helps organizations get the right products to market, faster. Learn more at Productboard.com. Get the FREE Product Book and check out our curated list of free Product Management resources here.
Pablo Villalobos is a Staff Researcher at Epoch, and lead author of the recent paper "Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning". We discuss the key findings in this paper, as well as a related study Pablo conducted on scaling laws.
Subscribe to the Gradient Flow Newsletter: https://gradientflow.substack.com/
Subscribe: Apple • Spotify • Stitcher • Google • AntennaPod • Podcast Addict • Amazon • RSS.
Detailed show notes can be found on The Data Exchange web site.
Date: March 16, 2023
Series: Your Daily ML
The daily episode of everything new in Machine Learning (cs.LG).
List of papers:
1. https://arxiv.org/abs/2303.08678
2. https://arxiv.org/abs/2303.08572
3. https://arxiv.org/abs/2303.08403
4. https://arxiv.org/abs/2303.08500
5. https://arxiv.org/abs/2303.08230
6. https://arxiv.org/abs/2303.08433
7. https://arxiv.org/abs/2303.08816
8. https://arxiv.org/abs/2303.08290
9. https://arxiv.org/abs/2303.08226
10. https://arxiv.org/abs/2303.08625
11. https://arxiv.org/abs/2303.08581
In this second episode in our series on climate change and AI, Yossi Matias, Vice President, Engineering & Research at Google, and the founding Managing Director of Google Center in Israel, explains how Google uses AI and Machine Learning to tackle some of the biggest challenges for climate mitigation and adaptation. He describes how Google uses partnerships and collaboration to innovate in the climate space, and shares his advice for those interested in careers in business and climate. For transcripts and other resources, visit climaterising.org.
Climate Rising Host: Professor Mike Toffel, Faculty Chair, Business & Environment Initiative
Guest: Yossi Matias, Vice President, Engineering & Research at Google, and the founding Managing Director of Google Center in Israel
Date: March 15, 2023
Series: Your Daily ML
The daily episode of everything new in Machine Learning (cs.LG).
List of papers:
1. https://arxiv.org/abs/2303.07669
2. https://arxiv.org/abs/2303.07666
3. https://arxiv.org/abs/2303.07758
4. https://arxiv.org/abs/2303.08102
5. https://arxiv.org/abs/2303.07557
6. https://arxiv.org/abs/2303.07693
7. https://arxiv.org/abs/2303.07811
8. https://arxiv.org/abs/2303.08081
9. https://arxiv.org/abs/2303.07846
10. https://arxiv.org/abs/2303.07397
11. https://arxiv.org/abs/2303.07940
12. https://arxiv.org/abs/2303.07527
13. https://arxiv.org/abs/2303.08112
14. https://arxiv.org/abs/2303.07924
15. https://arxiv.org/abs/2303.08127
Peggy Smedley talks all about cybersecurity, narrowing in on how the U.S. government is taking action with its National Cybersecurity Strategy. She outlines the details of the strategy and gives her thoughts about what needs to happen next in businesses. She also discusses: A historical look at what this administration has already done with security. The complex threat environment we currently face and the state of technology. The objectives of the National Cybersecurity Strategy. peggysmedleyshow.com (3/14/23 - 813) IoT, Internet of Things, Peggy Smedley, artificial intelligence, machine learning, big data, digital transformation, cybersecurity, blockchain, 5G, cloud, sustainability, future of work, podcast This episode is available on all major streaming platforms. If you enjoyed this segment, please consider leaving a review on Apple Podcasts.
Peggy Smedley and Mike Rogers, former congressman and chairman of the U.S. House Intelligence Committee, talk about whether education is still a key component of security. He says education is constant because it is easy to get into a pattern where you think you are okay, and then get complacent. They also discuss: The purpose of defining a nation state. Examples such as a case a few years ago where Russians were inside an electric grid. How to be more vigilant in our businesses so the bad guys don't get in and how to prepare the next generation to secure our infrastructure. mikerogers.com (3/14/23 - 813) IoT, Internet of Things, Peggy Smedley, artificial intelligence, machine learning, big data, digital transformation, cybersecurity, blockchain, 5G, cloud, sustainability, future of work, podcast, Mike Rogers, U.S. House Intelligence Committee This episode is available on all major streaming platforms. If you enjoyed this segment, please consider leaving a review on Apple Podcasts.
It's time to say goodbye to Iadenveigh, to the Old Stag Daily, to some dear friends, and some money, before we're off to the woods to make new friends. Our Patreon and Ko-Fi have just gotten much better! The Story Machine--in the form of a patron-exclusive story called Foul and Fair--starts releasing on 3/26. All of the other details are spelled out on https://podagainstthemachine.com, in the intro to Machine Learning 15, and on, you know, the Patreon and Ko-Fi. AND Our Store is a thing, with all your t-shirts, tote bags, stickers and more! Background music and sound effects: Quiet Theme #4 (looped) and Quiet Theme #5 (looped) Andrew Sitkov Forest: Day (Ambiance Only), and Fishing Village (Ambiance Only) Tabletop Audio https://tabletopaudio.com Torus Zak Email us at PodAgainsttheMachine@gmail.com Remember to check out https://podagainstthemachine.com for show transcripts, player biographies, and more. Stop by our Discord server to talk about the show: https://discord.gg/TVv9xnqbeW Follow @podvsmachine on Twitter Find us on Reddit, Instagram, and Facebook as well.
Melissa Perri is joined by Christina Pawlikowski, a teaching fellow at Harvard and co-founder of Causal, to help demystify machine learning and AI on this episode of Product Thinking. Christina discusses language models, the different types of machine learning, how they can be used to solve problems, and the importance of good data and ethical considerations when using machine learning algorithms. Christina Pawlikowski is a teaching fellow at Harvard University and co-founder of Causal, a company that helps businesses make better decisions with causal inference.
You'll hear Melissa and Christina talk about:
How machine learning is essentially creating an algorithm or a model that can make good predictions based on data.
There are three types of machine learning: supervised learning, unsupervised learning, and reinforcement learning.
Good training data is crucial for machine learning algorithms to be effective.
When considering using machine learning, it's important to ask questions about things like how complex the decision that needs to be made is, whether the model has to produce a definitive answer, how high the stakes are, and how quickly the answer needs to come back.
Ethical considerations are important when feeding data into a machine-learning model, especially when making decisions with high stakes.
GPT-3 and ChatGPT are examples of language models that use neural nets to generate predictions about what word or sentence comes next based on probabilities.
The accuracy of a machine learning model is only as good as the quality of the data that is fed into it.
When incorporating ML into a product, it's important to plan for scenarios where the model is wrong and to consider ethical considerations such as false positives and false negatives.
Data scientists play a crucial role in assembling and cleaning training data, building and testing the model, and deploying it in production. The process may involve collaboration with machine learning engineers or other teams.
The cadence of working on machine learning is different from working on traditional UX-focused teams, with more downtime and exploratory time upfront.
Slack time is important for data scientists and machine learning engineers to keep up with new techniques, write papers, and attend conferences.
Artificial general intelligence is probably further off than we think, and AI alignment is an important field to prevent negative outcomes.
Resources: Christina Pawlikowski on LinkedIn | Twitter | Causal Labs
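To make the next-word-prediction point above concrete: a language model repeatedly picks the next word according to learned probabilities. Here is a minimal Python sketch; the bigram table and word choices are hypothetical, hand-built for illustration, whereas models like GPT-3 learn probabilities over a full vocabulary with neural nets.

import random

# Hypothetical bigram table: P(next word | current word).
# A trained language model learns these probabilities from data
# instead of having them written by hand.
bigram_probs = {
    "machine": {"learning": 0.9, "shop": 0.1},
    "learning": {"models": 0.5, "rate": 0.3, "curve": 0.2},
}

def next_word(word):
    # Sample the next word in proportion to its probability,
    # the same basic step a GPT-style model repeats at each position.
    candidates = bigram_probs[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

print(next_word("machine"))  # prints "learning" about 9 times out of 10

Chaining next_word calls generates text one word at a time, which is also why, as the episode stresses, a model's output is only as good as the data its probabilities were estimated from.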
Peggy and Ryan Throop, executive consultant - Americas Lead SAP Security Services, IBM Consulting, and Nathan Weaver, senior director, SAP Business Unit, Microsoft, talk about SAP security. Throop says SAP is a large software company and provides solutions that literally cover all aspects of a business—and there are so many security concepts that need to be covered. Weaver adds when we think about the relationship Microsoft has with SAP, it runs 30 years deep. They also discuss: The 13 layers of SAP security. The challenges that keep leaders up at night. How to secure businesses.
ibm.com/services/microsoft-azure-security
ibm.com/reports/threat-intelligence
SAP Cloud Platform on Azure | Microsoft Azure
What is XDR? | Microsoft Security
Microsoft Sentinel solution for SAP® applications – SAP Monitoring | Microsoft Azure
(3/14/23 - 813) IoT, Internet of Things, Peggy Smedley, artificial intelligence, machine learning, big data, digital transformation, cybersecurity, blockchain, 5G, cloud, sustainability, future of work, podcast, Ryan Throop, IBM Consulting, Nathan Weaver, Microsoft This episode is available on all major streaming platforms. If you enjoyed this segment, please consider leaving a review on Apple Podcasts.
Visit https://www.thermofisher.com/chemistry-podcast/ to access the extended video version of this episode and the guest content sheet, which contains links to recent publications and additional content recommendations for our guest. You can also access the extended video version of this episode via our YouTube channel to hear, and see, more of the conversation! Visit https://thermofisher.com/bctl to register for your free Bringing Chemistry to Life T-shirt. This started with a TV in the background showing Brazil playing Croatia in the World Cup quarter-finals, and ended with Brazil's surprising defeat, to the dismay of our guest, Brazil-born Gabe Gomes. In the middle, the most approachable conversation you'll ever hear about computational chemistry. Gabe tries to solve real world problems using computers and it's almost a paradox that such an extroverted, fun guy, in love with music and speaking so much about people, ends up investing his life in machine learning algorithms. Yet it takes courage, creativity, and daring to go in new directions and seek the next big problem at the interface of scientific disciplines. Chemistry is a complex multivariate problem and resolving this complexity is the key to the fundamental understanding we need to advance the discipline. Gabe is a wonderful chaperone in our journey to discover how automation and optimization can be used not to replace chemists, but to free them to apply their skills where it matters most. Gabe is the living demonstration that computers and humans can be part of the same discourse.
Lucy D'Agostino McGowan and Ellie Murray chat about confounding! ✍️ Lucy's new paper: Sensitivity Analyses for Unmeasured Confounders Follow along on Twitter: The American Journal of Epidemiology: @AmJEpi Ellie: @EpiEllie Lucy: @LucyStats
The convergence of Artificial Intelligence (AI) and biotechnology has emerged as one of the most exciting and transformative areas of science. Researchers are developing new tools and technology that could bring about breakthroughs to revolutionize the fields of medicine and health care. In this episode, we talk with scientists who are using AI to unlock new possibilities in the search for novel drugs, cures, and treatments.
The Spatial Monitoring and Reporting Tool (SMART) was created by a coalition of conservation organizations to take historical data and create geospatial mapping tools that enable more efficient deployment of park rangers to prevent poaching. SMART had demonstrated significant improvements in patrol coverage, with some observed reductions in poaching. Then a new analytic tool, the Protection Assistant for Wildlife Security (PAWS), was created to use artificial intelligence (AI) and machine learning (ML) to try to predict where poachers would be likely to strike. Jonathan Palmer, Executive Director of Conservation Technology for the Wildlife Conservation Society, already had a good data analytics tool to help park rangers manage their patrols. Would adding an AI- and ML-based tool improve outcomes or introduce new problems?
Diya Wynn, Senior Practice Manager in Responsible AI for AWS Machine Learning Solutions Lab, joins Corey on Screaming in the Cloud to discuss her team's efforts to study and implement responsible practices when developing AI technology. Corey and Diya explore the ethical challenges of AI, and why it's so important to be looking ahead for potential issues before they arise. Diya explains why socially responsible AI is still a journey, and describes how she and her team at AWS are seeking to forge that path to help their customers implement the technology in a safe and ethical way. Diya also describes her approach to reducing human-caused bias in AI models.
About Diya
Diya Wynn is the Senior Practice Manager in Responsible AI for AWS Machine Learning Solutions Lab. She leads the team that engages with customers globally to go from theory to practice - operationalizing standards for responsible Artificial Intelligence/Machine Learning and data. Diya leads discussions on taking intentional action to uncover potential unintended impacts, and mitigate risks related to the development, deployment and use of AI/ML systems. She leverages her more than 25 years of experience as a technologist scaling products for acquisition; driving inclusion, diversity & equity initiatives; leading operational transformation across industries and understanding of historical and systemic contexts to guide customers in establishing an AI/ML operating model that enables inclusive and responsible products. Additionally, she serves on non-profit boards including the AWS Health Equity Initiative Review Committee; mentors at Tulane University, Spelman College and GMI; was a mayoral appointee in Environment Affairs for 6 consecutive years and guest lectures regularly on responsible and inclusive technology. Diya studied Computer Science at Spelman College, the Management of Technology at New York University, and AI & Ethics at Harvard University Professional School and MIT Sloan School of Management.
Links Referenced:
Machine Learning is a Marvelously Executed Scam: https://www.lastweekinaws.com/blog/machine-learning-is-a-marvelously-executed-scam/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Tailscale SSH is a new, and arguably better way to SSH. Once you've enabled Tailscale SSH on your server and user devices, Tailscale takes care of the rest. So you don't need to manage, rotate, or distribute new SSH keys every time someone on your team leaves. Pretty cool, right? Tailscale gives each device in your network a node key to connect to your VPN, and uses that same key for SSH authorization and encryption. So basically you're SSHing the same way that you're already managing your network. So what's the benefit? Well, built-in key rotation, the ability to manage permissions as code, connectivity between any two devices, and reduced latency. You can even ask users to re-authenticate SSH connections for that extra bit of security to keep the compliance folks happy. Try Tailscale now - it's free forever for personal use.
Corey: Kentik provides Cloud and NetOps teams with complete visibility into hybrid and multi-cloud networks.
Ensure an amazing customer experience, reduce cloud and network costs, and optimize performance at scale — from internet to data center to container to cloud. Learn how you can get control of complex cloud networks at www.kentik.com, and see why companies like Zoom, Twitch, New Relic, Box, Ebay, Viasat, GoDaddy, booking.com, and many, many more choose Kentik as their network observability platform.
Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. In a refreshing change of pace, I have decided to emerge from my home office cave studio thing and go to re:Invent and interview people in person. This is something of a challenge for me because it is way easier in person to punch me in the face, so we'll see how it winds up playing out. My guest today is Diya Wynn, Senior Practice Manager at AWS. Diya, what is a practice manager at AWS? What do you do?
Diya: So, a practice manager, I guess you can think of it just like a manager of a team. I have a practice that's specifically focused on Responsible AI. And I mean, practices are just like you could have one in financial services or anything. It's a department, essentially. But more important than the practice in the title is actually what I get a chance to do, and that's working with our customers directly that are using and leveraging our AI/ML services to build products. And we have an opportunity to help them think about how are they using that technology in ways to have improvements or benefit individuals in society, but minimize the risk and the unintended impact or harm. And that's something that we get to do with customers over any industry as well as globally. And my team and I have been enjoying the opportunity to be able to help them along their Responsible AI journey.
Corey: So, the idea of Responsible AI is… I'm going to sound old and date myself when I say this, but it feels like it's such a strange concept for me, someone who came up doing systems administration work in physical data centers. The responsible use of a server back when I was hands-on hardware was, "Well, you don't want to hit your coworker with a server no matter how obnoxious they are." And it was fairly straightforward. It was clear: yes or no. And now it seems that whenever we talk about AI in society, in popular culture, from a technologist's point of view, the answer is always a deeply nuanced shade of gray. Help.
Diya: Nuanced shade of gray. That's interesting. It is a little bit more challenging. I think that it is, you know, in one sense because of the notion of all of the data that we get to leverage, and our machine-learning models are reliant on data that has variations coming from, you know, historical sort of elements, things that are here baked with bias, all of that has to be considered. And I think when we think about some of the challenges and even the ways in which AI is being used, it means that we have to be much more mindful of its context, right? And these systems are being used in ways that we probably didn't think about servers being used in the past, but also are in the midst of some high-stakes decisions, right? Whether or not I might be identified or misidentified and inappropriately arrested or if I get the appropriate service that I was thinking about or whether or not there are associations related to my gender or my sexual preference. All of that matters, and so it does become much more of a nuanced conversation. Also because depending on the jurisdiction you're in, the region, what makes sense and what matters might differ slightly. So, it's a multidisciplinary problem or challenge that we need to think about what is the legality of this? And we have to think about social science sometimes and there's an element of ethics. And all of that plays into what becomes responsible, what is the right way in which we use the technology, what are the implications of technology? And so yes, it is a little bit more gray, but there are things that I think we have at our disposal to help us be able to respond to and put in place so that we really are doing the right things with technology.
Corey: I've known Amazon across the board to be customer-obsessed, and they tell us that constantly—and I do believe it; I talk to an awful lot of Amazonians—and so much of what the company does comes directly from customer requests. I have to ask, what were customers asking that led to the creation of your group? Because it seems odd to me that you would have someone coming to you and saying, "Well, we built a 'Hot Dog/Not A Hot Dog' image recognition app," and, "Oopsie. It turns out our app is incredibly biased against Canadians. How do we fix this?" Like, that does not seem like a realistic conversation. What were the customer concerns? How are they articulated?
Diya: No, that's really good. And you're right. They weren't asking the question in that way, but over the last five years or so, I would say, there has been an increase in interest and as well as concern about how AI is being used and the potential risks or the areas of unintended impact. And with this sort of heightened sensitivity or concern, both with our executives as well as members of common society, right—they're starting to talk about that more—they started to ask questions. They're using surfaces we want to be responsible in building. Now, some customers were saying that. And so, they would ask, "What are other customers doing? What should we be aware of? How do we or are there tools that we can use to make sure that we're minimizing bias in our systems? Are there things that we can think about in the way of privacy?" And oftentimes privacy and security are one of those areas that might come up first. And those were the kinds of questions. We actually did a survey asking a number of our customer-facing resources to find out what were customers asking so that we could begin to respond with a product or service that would actually meet that need. And I think we've done a great job in being able to respond to that in providing them assistance. And I think the other thing that we paid attention to was not just the customer requests but also what we're seeing in the marketplace. Part of our job is not only to respond to the customer need but also sometimes to see the need that they're going to have ahead of them because of the way in which the industry is moving. And I think we did a pretty good job of being able to see that and then start to provide service and respond to assist them.
Corey: Yeah, it's almost like a rule that I believe it was Scott Hanselman that I stole it from where the third time that you're asked the same question, write a blog post, then that way you can do a full deep—
Diya: Did he really say write a post? [laugh].
Corey: Treatment of it. Yes, he did. And the idea is, write a blog post—because his blog is phenomenal—and that way, you have a really in-depth authoritative answer to that question and you don't have to ad-lib it off the cuff every time someone asks you in the future. And it feels like that's sort of an expression of what you did. You started off as a customer-facing team where they were asking you the same questions again and again and at some point it's, okay, we can either spend the rest of our lives scaling this team ad infinitum and winding up just answering the phone all day, or we can build a service that directly addresses and answers the question.
Diya: Absolutely, absolutely. I think that's the way in which we scale, right, and then we have some consistency and structure in order to be able to respond and meet a need. What we were able to do was—and I think this is sort of the beauty of being at AWS and Amazon; we have this opportunity to create narratives and to see a need, and be able to identify and respond to that. And that's something that everybody can do, not just resigned to a VP or someone that's an executive, we all can do that. And that was an opportunity that I had: seeing the need, getting information and data, and being able to respond and say, "We need to come up with something." And so, one of our first pieces of work was to actually define a framework. How would we engage? What would be that repeatable process or structure for us, framework that we can leverage with our customers every time to help them think through, look around corners, understand where there's risk, be better informed, and make better-informed decisions about how they were using the technology or what ways they could minimize bias? And so, that framework for us was important. And then we have now tools and services as well that were underway, you know, on our product side, if you will, that are complementing—or that, you know, complement the work. So, not only here's a process, here's a framework and structure, but also here are tools that in technology you can bring to bear to help you automate, to help you understand performance, or even you know, help you minimize the bias and risk.
Corey: What's interesting to me, in a very different part of the world than AI, I live in AWS costing because I decided, I don't know, I should just go and try and be miserable for the rest of my life and look at bills all day. But whenever I talk to clients, they asked the same question: what are other customers doing, as you alluded to a few minutes ago? And that feels like it's a universal question. I feel like every customer, no matter in what discipline or what area they're in, is firmly convinced that somewhere out there is this utopian, platonic ideal of the perfect company that has figured all of this stuff out and we're all constantly searching for them. Like, there's got to be someone who has solved this problem the right way. And in several cases, I've had to tell clients that you are actually one of the best in the world and furthest advanced at this particular thing. That customer, the closest we've got to them is you, so we should be asking you these questions. And for whatever it's worth, no one ever likes hearing that because, "Like, oh, we're doing something wild." It's like—
Diya: [crosstalk 00:10:15] pioneers.
Corey: —"Well, we got to solve this ourselves? That's terrible."
Diya: Well, it's interesting you say that because it is a common question. I think customers have an expectation that because we are AWS, we've seen a lot. And I think that's true. There are tens of thousands of customers that are using our services, we have conversations with companies all across the world, so we do have some perspective of what other customers are doing and that's certainly something that we can bring to the table. But the other part of this is that this is really a new area. This is a sort of new space, that we're focused on trustworthy and Responsible AI, and there aren't a ton of customers that are doing this—or companies at all—that have it entirely answered, that have—you know, we're all on a journey. So, these are, I would say, early stages. And we do have the benefit of being large, having a lot of customers, having some experience in building services as well as helping our customers build products, having a team that's focused on looking at standards and working with standards bodies globally, having teams that are working on our understanding what we're doing in regulation and public policy. And so, all of that we bring to bear when we start talking about, you know, this with our customers. But we don't have all the answers; we're on a journey like them. And I think that's something that we have to be comfortable with, to some degree, that this is an evolving area and we're learning. And we're investing even in research to help us continue to move forward. But there's a lot that we know, that there's a lot that we can bring to the table, and we can help our customers in that regard.
Corey: Now, this might very well be old news and well understood and my understanding is laughably naive when this gets released, but as of this recording, a few hours beforehand, you released something called Service Cards. And I have to say, my initial glance at this was honestly one of disappointment when I saw what it was because what I was hoping for, with—when you ever see 'service' and 'cards' together, is these are going to be printable cardboard, little cards that I can slip into the Monopoly board game I have at home and game night at home is going to be so traumatic for my kids afterwards. Like, "What's a Fargate?" Says the five-year-old, and there we go. "It means that daddy is not going to passing go, going directly to jail with you. Have fun," it's great. But I don't think that's what it is.
Diya: No, not at all. Not at all. So, it is very similar to the context that people might be familiar with around model cards, being able to give definition and understanding of a model that's being used. For us, we sort of took that concept at one step beyond that in that, you know, just providing a model card isn't sufficient necessarily, especially when there are multiple services or multiple models being used for any one of our services. But what our Service Cards allow us to do is to provide a better understanding of the intended use of the service, you know, and the model that's underpinning that, give context for the performance of that service, give guidelines for our customers to be able to understand how was it best used and how does it best perform. And that's a degree of transparency that we're providing under the hood, for our customers to really help them as well be much more responsible and how they're building on top of those. And it gives them clarity because there is a growing interest in the marketplace for our customers to hold their vendors—or companies to hold their vendors responsible, right, making sure that they're doing the right things and covering off, are we building well? Do we have, like, the customer or enough of demographic covered? What the performance looks like. And this is a really big opportunity for us to be transparent with our customers about how our services are being built and give them a little bit more of that guardrail that we were talking about—guidelines—how to best use it as they look to build upon those.
Corey: Not in any way, shape, or form to besmirch the importance of a lot of the areas that you're covering on this, but on some level, I'm just envious in that it would be so nice to have that for every AWS service, of this is how it is—
Diya: Uh-oh [laugh].
Corey: —actually intended to be used. Because to me, I look at it and all I see is database, database, really expensive database, probably a database, and, like, none of those are designed to be databases. Like, "You lack imagination," is my approach. And no, it just turns out I'm terrible at computers, but I'm also enthusiastic and those are terrible combinations. But I would love to see breakdowns around things like that as far as intended use, potential pitfalls, and increasingly as we start seeing more and more services get machine learning mixed in, for lack of a better term, increasingly we're going to start to see areas where the ethical implications absolutely are going to be creeping in. Which is a wild thing to say about, I don't know, a service that recommends how to right-size instances having ethical concerns. But it's not that unreasonable.
Diya: Well, I can't make any promises about us having those kinds of instructions or guidelines for some of our other services, but we are certainly committed to being able to provide this transparency across our AI/ML services. And again, that's something I will say that's a journey. We've released a few today; there are others that are going to come. We're going to continue to iterate and evolve so that we can get through our services. And there's a lot of work behind that, right? It's not just that we wrote up this document, but it is providing transparency. But it also means that our teams are doing a great bit in terms of the diligence to be able to provide that feedback, to be able to test their models, understand their datasets, you know, provide information about the datasets in public—you know, for the public datasets that are being tested against, and also have the structure for them to train their models appropriately. So, there's a lot going into the development of those that may not be immediately transparent, but really is core to our commitment to how we're building our services now.
Corey: It's a new area in many respects because, like, to be very direct. If I wind up misusing or being surprised by a bad implementation of something in most cases in AWS context, the disaster area looks a lot closer to I get a big bill. Which—and this [unintelligible 00:16:35] is going to sound bizarre, but here we are, it's only money. Money can be fixed. I can cry and sob to support and get that fixed. With things like machine learning and AI, the stakes are significantly higher because given some of the use cases and given some of the rapid emerging technology areas in which these things are being tested and deployed, it hurts people if it gets wrong. And an AWS bill is painful, but not in a damaging to populations level. Yet. I'm sure at some point, it becomes so large it becomes its own micro-economy, I guess the way those credits are now, but it's a very different way.
Diya: Right. Absolutely. So, I think that's why our work from a responsibility perspective is important. But I think it's also valuable for customers to understand, we're taking a step forward and being able to help them. Very much like what we do with well-architected, right? We have a framework, we have best practices and guidance that is being provided so that our customers who are using our cloud services really know what's the best. This is very much like those Service Cards, right? Here's the best conditions in order to be able to use and get the greatest value out of your cloud investment. The same thing is what we're doing with this approach in helping our customers in the Responsible AI way. Here's the best, sort of, best practices, guidance, guardrails, tools that are going to help you make the most out of your investment in AI and minimize where there's this unintended or potential areas of potential harm that you were describing. And you're right, there are high stakes use cases, right, that we want to make sure or want to be able to help and equip our customers to think more about intentionally and be prepared to be able to hopefully have a governance structure, people aligned, processes, technology to really be able to minimize that, right? We want to reduce the blast radius.
[midroll 00:18:37]
Corey: One thing I want to call out as well is that as much as we love in tech to pretend that we have invented all of these things ourselves—like, we see it all the time; like, "No one really knows how to hire, there's no real scientific study on this." "Yes, there are. There are multi-decade longitudinal studies at places like GM and whatnot." And, "No, no, no tech is different. There's no way to know this. La la la." And that's great. We have to invent these things ourselves. But bias has been a thing in business decisions, even ones that are not directly caused by humans, for a long time. An easy example is in many cases, credit ratings and decisions whether to grant credit or not. Like, they were not using machine learning in the 90s to do this, but strangely, depending upon a wide variety of factors that are not actually things that are under your control as a person, you are deemed to be a good credit risk versus a bad credit risk. And as a result, I think one of the best terms I heard in the early days when machine learning started getting big, was just referring to it as bias laundering. Well, we've had versions of that for a long time. Now, at least it seems like this shines a light on it if nothing else, and gives us an opportunity to address it.
Diya: Absolutely. Oh, I'd love that, right? The opportunity to address it. So, one of the things that I often share with folks is we all have bias, right? And so, like you said we've had bias in a number of cases. Now, you know, in some cases, bias is understandable. We all have it. It is the thing that often—we talk about the sort of like mental shortcuts, things that we do that help us to respond rapidly in the world in the vast array of information that we're taking in all the time. So—
Corey: You're an Amazonian. You yourself bias for action.
Diya: Exactly. Right? So, we have bias. Now, the intent is that we want to be able to disrupt that so that we don't make decisions, oftentimes, that could be harmful, right? So, we have proclivities, desires, interest, right, that kind of folds into our bias, but there are other things, our background, where we went to school, you know, experiences that we had, information that we've been taking that also helped to drive towards some of those biases. So, that's one element, right, understanding that. A human bias gets infiltrated into our systems. And there was a study in AI Now—I think it was 2019—that talked about that, right, that our systems are often biased by—or the bias is introduced, you know, sometimes by individuals. And part of the necessity for us to be able to eliminate that is understanding that we have bias, do things to interrupt it, and then also bringing in diversity, right? Because some of our biases are just that we don't have enough of the right perspectives in the room; we don't have enough of the right people involved, right? And so, being able to sort of widen the net, making sure that we're involving the outliers, I think are important to us being able to eliminate bias as well. And then there are tools that we can use. But then you also bring up something interesting here in terms of the data, right? And there's a part that education plays a good role in helping us understand the things like what you described our institutional biases baked into our data that also can come out in decisions that are now being made. And the more that we use AI in these ways, the more there is risk for that, right? So, that's why this effort in Responsible AI, understanding how we mitigate bias, understanding how we invite the right people in, the inclusion of the right perspectives, thinking about the outliers, thinking about whether or not this is the right problem for us to solve with AI is important, right, so that we can minimize those areas where bias is just another thing that we continue to propagate.
Corey: So, a year or two ago, I wrote a blog post titled Machine Learning is a Marvelously Executed Scam. And it was talking about selling digital pickaxes into a data gold rush.
Diya: I [crosstalk 00:22:30] remember this one [laugh].
Corey: And it was a lot of fun. In fact, the Head of Analyst Relations at AWS for Machine Learning responded by sending me a Minecraft pickaxe made out of foam, which is now in my home office hung behind my office and I get a comment at least three times a week on that. It was absolutely genius as far as rebuttal go. And I've got to find some way to wind up responding to her in kind one of these days. But it felt like it was a solution in search of a problem. And I no longer hold so closely to that particular opinion, in no small part due to the fact that, as you're discussing, this area is fraught, it's under an awful lot of scrutiny, large companies who use these things and then those tools get it wrong are going to basically wind up being castigated for it. And yet, they are clearly realizing enough value from machine learning that it is worth the risk. And these are companies whose entire business, start to finish, is managing and mitigating risk. There is something there or suddenly everyone has taken leave of their senses. I don't quite buy that second option, so I'm guessing it's the first.
Diya: So, the question is, is it worth the risk? And I would say, I think some people might or some companies might have started to step into that area thinking that it is, but it's not. And that's what we're saying and that's what we're hearing in the industry [unintelligible 00:23:51], that it's not worth the risk. And you're hearing from customers, outcries from others, government officials, right, all of them are saying, like, "It's not worth the risk and we have to pay attention to that." But I think that there's certainly value and we're seeing that, right? We're solving previously unattainable problems with AI. We want to be able to continue to do that, but give people the means to be able to sort of minimize where there is risk and recognize that this is not a risk that's worth us taking. So, the potential for reputational harm and the damage that will do is real, right? When a company is called out for the fact that they've discriminated and they're unfairly evaluating homes, for instance, for people of color in certain communities, right, that's not something that's going to be tolerated or accepted. And so, you have people really calling those things out so that we start to—organizations do the right things and not think that risk is worth the [unintelligible 00:24:52]. It is very well worth the risk to use AI, but we've got to do it responsibly. There's so much value in what we are able to accomplish. So, we're seeing, you know, even with Covid, being able to advance, like, the technology around vaccinations and how that was done and accelerated with machine learning, or being able to respond to some of the needs that small businesses and others had, you know, during Covid, being able to continuate their service because we didn't have people in businesses or in offices, a lot of that was advanced during that time as a result of AI. We want to be able to see advances like that and companies be able to continue to innovate, and so we want to be able to do that without the risk, without the sort of impact that we're talking about, the negative impact. And I think that's why the work is so important.
Corey: Do you believe that societally we're closer to striking the right balance?
Diya: We're on our way. I think this is certainly a journey. There is a lot of attention on this in the right ways. And my hope—and certainly, that's why I'm in a role like this—that we can actually invite the right voices into the room. One of the things—and one of my colleagues said this earlier today, and I think it was a really, really great point, right—as we are seeing—first of all, we never thought that we would have, like, ethicists roles and sort of Responsible AI folks, and chief ethics officers. That was not something that existed in the context of, sort of, machine learning, and that's something that it's evolved in the last, you know, few years. But the other thing that we're seeing is that the folks that are sitting in those roles are increasingly diverse and are helping to drive the focus on the inclusion that we need and the value of making sure that those voices are there so that we can build in inclusive and responsible ways. And that's one of the things that I think is helping us get there, right? We're not entirely there, but I think that we're on a path. And the more that we can have conversations like this, the more that companies are starting to pay attention and take intentional action, right, to build ethically and to have the trust in the technology and the products that they build, and to do that in responsible ways, we'll get there.
Corey: I really want to thank you for taking so much time to talk through what you're up to with me.
Diya: I am super excited and glad that you were able to have me on. I love talking about this, so it's great. And I think it's one of the ways that we get more people aware, and hopefully, it sparks the interest in companies to take their own Responsible AI journey.
Corey: Thank you so much for your time.
Diya: Thanks for having me.
Corey: I appreciate it. Diya Wynn, Senior Practice Manager at AWS. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry insulting comment, presumably because you're Canadian.
Diya: [laugh].
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Keegan Fonté is the volunteer host of the Artificial Intelligence and Machine Learning Brownbag meetup at Sage and is a Champion for the Continuous Innovation Community. The AI Brownbag is a once-a-month meetup of folks interested in Machine Learning and Artificial Intelligence. Recently, for example, they had a speaker come and talk about Vendor Matching and how we are using Machine Learning in General Ledger to match the correct vendors. In his work life, Keegan is a Marketing and Product Professional with 14 years of marketing experience in Technology, Car Rentals, Food/Retail, and now SaaS with Sage.
Outline:
Introduction to Keegan Fonté. 0:01
What's your favorite part about being the volunteer host of the AI Brownbag Meetup? 0:54
ChatGPT is the chatbot that's top of mind right now. 2:31
What are some of the things as it applies to ChatGPT? 4:25
What are the implications of chatbots for marketing? 6:12
Who is a hero of yours and why are they a hero? 7:33
The world of AI is moving fast, and here at Intercom, we're helping set that pace. Today, we're delighted to introduce Fin, our new chatbot powered by OpenAI's GPT-4 and Intercom's proprietary machine learning technology and fine-tuned for customer support. In today's episode of the podcast, Intercom Co-founder and CSO Des Traynor and our Director of Machine Learning, Fergal Reid, sit down to discuss our new AI chatbot, how we built it, what it does, and what the next steps look like for this remarkable breakthrough. Learn more about Fin, Intercom's new AI chatbot, here: https://www.intercom.com/blog/videos/meet-intercoms-new-ai-chatbot/ See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Date: March 14, 2023
Series: Your Daily ML
The daily episode of everything new in Machine Learning (cs.LG).
List of papers:
1. https://arxiv.org/abs/2303.06484
2. https://arxiv.org/abs/2303.06261
3. https://arxiv.org/abs/2303.06201
4. https://arxiv.org/abs/2303.06360
5. https://arxiv.org/abs/2303.06349
6. https://arxiv.org/abs/2303.06389
7. https://arxiv.org/abs/2303.06296
8. https://arxiv.org/abs/2303.06407
9. https://arxiv.org/abs/2303.06318
10. https://arxiv.org/abs/2303.06269
11. https://arxiv.org/abs/2303.06365
12. https://arxiv.org/abs/2303.06151
Renita Rhodes, Cybersecurity Specialist, discusses why compliance alone is not enough for effective security.
Hear about why OpenAI cites her work in RLHF and dialog models, approaches to rewards in RLHF, ChatGPT, Industry vs Academia, PsiPhi-Learning, AGI and more! Dr Natasha Jaques is a Senior Research Scientist at Google Brain.
Featured References:
Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog. Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, Rosalind Picard.
Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control. Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, Douglas Eck.
PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning. Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar.
Basis for Intentions: Efficient Inverse Reinforcement Learning using Past Experience. Marwa Abdulhai, Natasha Jaques, Sergey Levine.
Additional References:
Fine-Tuning Language Models from Human Preferences, Daniel M. Ziegler et al., 2019.
Learning to summarize from human feedback, Nisan Stiennon et al., 2020.
Training language models to follow instructions with human feedback, Long Ouyang et al., 2022.
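For context on the KL-control idea running through the first two featured papers, the usual form of the objective in this line of work is sketched below; this is the generic formulation from the RLHF literature, not necessarily the exact notation of these papers:

\[
\max_{\pi} \; \mathbb{E}_{x \sim \pi}\big[r(x)\big] \;-\; \beta \, D_{\mathrm{KL}}\big(\pi \,\|\, \pi_0\big)
\]

Here \(\pi\) is the model being fine-tuned (e.g., a dialog model), \(\pi_0\) is the pretrained prior it should stay close to, \(r\) is a reward encoding human preferences, and \(\beta\) trades off reward maximization against drifting away from the prior.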
WTP (What the Prompt) is ChatGPT? Is it the future of artificial intelligence? Where does one draw the line between creativity and ethics? QuyenZi, Jessie, and Nicki each have their own plans to use the Generative Pre-trained Transformer, but first, they want to know who, what, where, when, and how ChatGPT works. The Messy Sexy Me ladies collect detailed insights and answers from their trusted friend, talk show host, and tech maven Keith Koo, whose vast knowledge of the latest trends and innovations in tech and AI makes him a sought-after speaker, consultant, and advisor for businesses and organizations worldwide.
Chip Huyen, co-founder of Claypot AI and author of O'Reilly's best-selling "Designing Machine Learning Systems," is here to share her expertise on designing production-ready machine learning applications, the importance of iteration in real-world deployment, and the critical role of real-time machine learning in various applications. Technical listeners like data scientists and machine learning engineers will definitely enjoy this one! This episode is brought to you by Pathway, the reactive data processing framework (https://www.pathway.com/?from=superdatascience), and by epic LinkedIn Learning instructor Keith McCormick (linkedin.com/learning/instructors/keith-mccormick). Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information. In this episode you will learn: • Why Chip wrote "Designing Machine Learning Systems" [08:58] • How Chip ended up teaching at Stanford [13:18] • About Chip's book "Designing Machine Learning Systems" [21:12] • What makes ML feel like magic [30:53] • How to align business intent, context, and metrics with ML [37:55] • The lessons Chip learned about training data [42:03] • Chip's secrets to engineering good features [53:19] • How Chip optimizes her productivity [1:07:48] Additional materials: www.superdatascience.com/661
This week's podcast is about machine learning and prediction as a capability. You can listen to this podcast here, which has the slides and graphics mentioned. Also available at iTunes and Google Podcasts. Here is the podcast on the intangible asset framework. Here is the link to TechMoat Consulting. Here is the link to the China Tech Tour.
Related articles:
An Intro to Discount Rates and Cost of Capital for Digital Valuation (Tech Strategy – Daily Lesson / Update)
Why DCF Sucks for Digital Valuation (Tech Strategy – Podcast 101)
An Intro to Digital Valuation (Tech Strategy – Daily Lesson / Update)
From the Concept Library, concepts for this article are: Capabilities, Resources and Assets (CRAs); Machine learning / AI; SMILE Marathon: Machine Learning
From the Company Library, companies for this article are: n/a
I write, speak, and consult about how to win (and not lose) in digital strategy and transformation. I am the founder of TechMoat Consulting, a boutique consulting firm that helps retailers, brands, and technology companies exploit digital change to grow faster, innovate better, and build digital moats. Get in touch here. My book series Moats and Marathons is a one-of-a-kind framework for building and measuring competitive advantages in digital businesses.
This content (articles, podcasts, website info) is not investment, legal, or tax advice. The information and opinions from me and any guests may be incorrect. The numbers and information may be wrong. The views expressed may no longer be relevant or accurate. This is not investment advice. Investing is risky. Do your own research.
Support the show
Sean and Rachna talk about the "Cambrian explosion" of generative AI with Ganesh Padmanabhan, CEO & Founder of Autonomize, Inc., and the potential challenges and opportunities from this rapidly evolving technological change. To frame the conversation, they discuss the contemporary implications of some of the latest advancements in generative AI using Neil Postman's 1998 talk, "Five Things We Need to Know About Technological Change." In a long and winding way, the conversation hit on the following five topics:
1. Trade-offs (good or bad) are presented by technological change
2. Distribution is uneven
3. Empowering the "adjacent possible"
4. Ecological change from new technology disruptions and innovations
5. The dangers to a society or system when new and emergent technologies become mythic
About our Guests
Ganesh Padmanabhan
Ganesh is the CEO & founder of Autonomize, Inc., a software company focused on applying AI to power healthcare outcomes. He is an accomplished business executive, entrepreneur, and investor with deep expertise in data and artificial intelligence (AI) related businesses.
https://www.storiesinai.com
https://autonomize.ai
Rachna Mathur
Rachna is an Artist, an Engineer, a Teacher, a Tinkerer, a Researcher, a Dreamer... a Lifelong Learner. She loves discussing STEM and demonstrating Kathak dance to kids, adults, and other teachers! Rachna is the CEO/Founder of STEMology Club and is currently pursuing a doctorate in education with a focus on K-6 STEM.
https://www.stemology.club/
Additional Links
Neil Postman (1998), "Five Things We Need to Know About Technological Change"
Terminator Paradox, Teaching parents to raise kids for the AI future | Jair Ribeiro | Stories in AI
The Learning Futures Podcast is jointly produced by Enterprise Technology and the Mary Lou Fulton Teachers College at Arizona State University.
(2:33) - Using Machine Learning to Detect Rare Diseases. This episode was brought to you by Mouser, our favorite place to get electronics parts for any project, whether it be a hobby at home or a prototype for work. Follow the episode link to learn about a) how AI is being leveraged in healthcare and b) the tools available from vendors to empower development in this area.
MLOps Coffee Sessions #149 with Jason McCampbell, The Challenges of Deploying (many!) ML Models, co-hosted by Abi Aryan and sponsored by Wallaroo. // Abstract In order to scale the number of models a team can manage, we need to automate the most common 90% of deployments to allow ops folks to focus on the challenging 10% and automate the monitoring of running models to reduce the per-model effort for data scientists. The challenging 10% of deployments will often be "edge" cases, whether CDN-style cloud-edge, local servers, or running on connected devices. // Bio Jason McCampbell is the Director of Architecture at Wallaroo.ai and has over 20 years of experience designing and building high-performance and distributed systems. From semiconductor design to simulation, a common thread is that the tools have to be fast, use resources efficiently, and "just work" as critical business applications. At Wallaroo, Jason is focused on solving the challenges of deploying AI models at scale, both in the data center and at "the edge". He has a degree in computer engineering as well as an MBA and is an alum of multiple early-stage ventures. Living in Austin, Jason enjoys spending time with his wife and two kids and cycling through the Hill Country. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://wallaroo.ai MLOps at the Edge Slack channel: https://mlops-community.slack.com/archives/C02G1BHMJRL --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Abi on LinkedIn: https://www.linkedin.com/in/abiaryan/ Connect with Jason on LinkedIn: https://www.linkedin.com/in/jasonmccampbell/ Timestamps: [00:00] Jason's preferred coffee [01:22] Takeaways [06:06] MLOps at the Edge Slack channel [06:36] Shoutout to Wallaroo! [07:34] Jason's background [09:54] Combining Edge and ML [11:03] Defining Edge Computing [13:21] Data transport restrictions [15:02] Protecting IP in Edge Computing [17:48] Decentralized Teams and Knowledge Sharing [20:45] Real-time Data Analysis for Improved Store Security and Efficiency [24:49] How to Ensure Statistical Integrity in Federated Networks [26:50] Architecting ML at the Edge [30:44] Machine Learning models adversarial attacks [33:25] Pros and cons of baking models into containers [34:52] Jason's work at Wallaroo [38:22] Navigating the Market Edge [40:49] Last challenges to overcome [44:15] Data Science Use Cases in Logistics [46:27] Vector trade-offs [49:34] AI at the Edge challenges [50:40] Finding the Sweet Spot [53:46] Driving revenue [55:33] Wrap up
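The abstract's point about automating "the monitoring of running models" lends itself to a concrete illustration. Below is a minimal drift-check sketch in Python, assuming scipy is available; it is a toy example of the general technique, not Wallaroo's actual implementation:

# Toy model-monitoring sketch: flag features whose live distribution has
# drifted from the training baseline (two-sample Kolmogorov-Smirnov test).
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(baseline, live, alpha=0.01):
    """Return names of features whose live values differ significantly
    from the training baseline."""
    flagged = []
    for name, base_values in baseline.items():
        _, p_value = ks_2samp(base_values, live[name])
        if p_value < alpha:  # distributions differ beyond chance
            flagged.append(name)
    return flagged

# Example: "latency" drifts upward in production, "amount" stays stable.
rng = np.random.default_rng(0)
baseline = {"amount": rng.normal(100, 10, 5000), "latency": rng.normal(50, 5, 5000)}
live = {"amount": rng.normal(100, 10, 1000), "latency": rng.normal(65, 5, 1000)}
print(drifted_features(baseline, live))  # typically ['latency']

In a real deployment a check like this would run on a schedule for every model, which is exactly the per-model effort the episode argues should be automated away.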
In the second episode of this mini-series on the Future of Technology, we will hear from Vint Cerf, Vice President & Chief Internet Evangelist at GOOGLE, widely known as one of the "Fathers of the Internet," and Matt Hutson, Contributing Writer at THE NEW YORKER. They will walk us through the challenges and opportunities that Machine Learning presents, and what the future may hold for this technology. What is Machine Learning? How does it differ from AI? What are the limits of simulating human discourse? How can we detect machine-made mistakes and judge the confidence with which a computer reaches its conclusions?
Vinton G. Cerf, Vice President & Chief Internet Evangelist, GOOGLE
In this role, he is responsible for identifying new enabling technologies to support the development of advanced, Internet-based products and services from Google. He is also an active public face for Google in the Internet world. Widely known as one of the "Fathers of the Internet," Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. In December 1997, President Clinton presented the U.S. National Medal of Technology to Cerf and his colleague, Robert E. Kahn, for founding and developing the Internet. Kahn and Cerf were named the recipients of the ACM Alan M. Turing Award in 2004 for their work on the Internet protocols. In November 2005, President George Bush awarded Cerf and Kahn the Presidential Medal of Freedom for their work. The medal is the highest civilian award given by the United States to its citizens. In April 2008, Cerf and Kahn received the prestigious Japan Prize. Cerf is a recipient of numerous awards and commendations in connection with his work on the Internet. Cerf holds a Bachelor of Science degree in Mathematics from Stanford University and Master of Science and Ph.D. degrees in Computer Science from UCLA.
Matthew Hutson, Contributing Writer, THE NEW YORKER
Matthew Hutson is a freelance science writer in New York City and a Contributing Writer at The New Yorker. He also writes for Science, Scientific American, The Wall Street Journal, and other publications, and he's the author of "The 7 Laws of Magical Thinking." Thanks for listening! Please be sure to check us out at www.eaccny.com or email membership@eaccny.com to learn more!
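On the episode's question of judging "the confidence with which a computer reaches its conclusions": for classifiers, one simple (if imperfect) signal is the model's own predicted class probability. A minimal sketch with scikit-learn, offered as an illustration rather than anything discussed in the episode:

# Sketch: flagging low-confidence predictions from a classifier.
# Note: raw probabilities can be poorly calibrated; treat them as a rough signal.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

probs = model.predict_proba(X)       # per-class probabilities for each sample
confidence = probs.max(axis=1)       # probability of the predicted class
uncertain = np.where(confidence < 0.7)[0]
print(f"{len(uncertain)} of {len(X)} predictions fall below 70% confidence")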
Date: March 13, 2023. Series: Your Daily ML. The daily episode of everything new in Machine Learning (cs.LG). List of papers:
1. https://arxiv.org/abs/2303.05972
2. https://arxiv.org/abs/2303.05733
3. https://arxiv.org/abs/2303.06121
4. https://arxiv.org/abs/2303.06067
5. https://arxiv.org/abs/2303.06052
6. https://arxiv.org/abs/2303.05798
7. https://arxiv.org/abs/2303.05754
8. https://arxiv.org/abs/2303.05728
9. https://arxiv.org/abs/2303.05628
10. https://arxiv.org/abs/2303.05904
Today we're going to talk about the relationship between human empathy and conversational AI, and how the two can work together to help both customers and brands by improving the customer experience. To help me discuss this topic, I'd like to welcome Josh Feast, CEO and Co-Founder at Cogito Corporation. RESOURCES The Agile Brand podcast website: https://www.gregkihlstrom.com/theagilebrandpodcast Sign up for The Agile Brand newsletter here: https://www.gregkihlstrom.com Get the latest news and updates on LinkedIn here: https://www.linkedin.com/company/the-agile-brand/ For consulting on marketing technology, customer experience, and more visit GK5A: https://www.gk5a.com The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow The Agile Brand is produced by Missing Link, a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging, and informative content. https://www.missinglink.company If you are struggling with projects, sign up for Basecamp. Their pricing is simple, and they give you ALL their features in a single plan. No upsells. No upgrades. Go to basecamp.com/agile and try Basecamp for free. No credit card required, and you can cancel anytime. Thank you, Basecamp, for sponsoring this episode!
It's National Sleep Awareness Week and we have another Artificial Intelligence episode! AI! ChatGPT! Machine Learning! and... Dinosaurs? Meet Chris Fernandez, former CEO and current Executive Chairman and Chief Research Officer at EnsoData. Learn about his fantastic voyage from engineering into sleep technology and building a startup into a flourishing healthcare organization that impacts many, many lives today. Join the Techs at SleepTech Talk to learn more about Chris and be sure to stay until the end of the show to see where dinosaurs fit into all this technology. Credits: Intro Music: Pierce G Mann Audio/Video: Diego R Mann
Kirill Zubovsky, Prompt Engineer and CEO of SmartyNames.com. What is SmartyNames.com? Does GoDaddy front-run domain names by buying them as soon as you search for them? (GoDaddy is a $12 billion company.) What have you learned about Machine Learning from doing this? Did you have to learn a lot to develop on OpenAI's algorithm? How do you add uncertainty to your AI prompts? Why do people like to hack individual AI systems instead of hacking the big ones? Which companies will start to build in-house models? What is the difference between consumer and enterprise AI? What is the AI that has been used for airplane design for over 20 years now? Are we headed for utopia or dystopia? What is your take on Silicon Valley Bank? What is moral hazard, and how does it play into what is going on at SVB? What did Hayek say about all this? What will AI do to the decentralization of capitalism?
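On the "how do you add uncertainty to your AI prompts?" question: one common lever is the sampling temperature exposed by the model API. A minimal sketch using the pre-1.0 OpenAI Python client; the prompt and helper function are illustrative assumptions, not SmartyNames' actual code:

# Sketch: varying output randomness ("uncertainty") via sampling temperature.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def suggest_names(description, temperature=1.2):
    """Higher temperature flattens the token distribution, so repeated
    calls return more varied, less deterministic suggestions."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Suggest five domain names for: {description}"}],
        temperature=temperature,  # 0 = near-deterministic; ~2 = maximum variety
    )
    return response.choices[0].message.content

print(suggest_names("a coffee subscription service for remote teams"))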
Join us as we explore four exciting areas in the world of tech: coding, data science, cybersecurity, and machine learning. Discover how coding is the backbone of tech and learn about free resources to get started. Explore how data science extracts valuable insights from data and learn about the high demand for this field. Find out why cybersecurity is crucial for protecting our information online and explore different career paths. Finally, discover how machine learning is transforming the world we live in. Whether you're looking to change careers or simply expand your knowledge, this episode has something for you.
In this episode of Chasing the Insights, I talk to tech legend Alex Makarski. Alex talks to us about how Artificial Intelligence and Machine Learning have transformed digital advertising. Alex Makarski has been working with big data since before there was a name for it, consulting for brands like McDonald's, P&G, and Siemens. Alex has served as CTO or CMO (full-time or fractional) on the executive teams of several technology companies. Now - certified by AdSkills, Google, Microsoft, Amazon, The Trade Desk, and StackAdapt - Alex and his team have managed millions of dollars in monthly ad spend for organizations ranging from open source software to lead generation, and from digital products to ecom.
To dig a little deeper into the business use cases of GPT, we're joined by Ethan Kurzweil, Intercom board member and Partner at Bessemer Venture Partners; Fergal Reid, our own Director of Machine Learning; Krystal Hu, VC and Startups Reporter at Reuters; and Talia Goldberg, Partner at Bessemer Venture Partners. They ask: is this wave of innovation any different from past waves? Will it transform the way we work and the way businesses interact with customers and prospects? Can it spark a new generation of startups? Transcript: https://www.intercom.com/blog/videos/the-gpt-effect-a-new-era-of-customer-service/ Watch a video of this podcast: https://youtu.be/tB4yeblrSMA See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Are you looking to take control of your health and uncover personalized nutrition advice? Then this podcast episode featuring Dan Zavorotny, a health tech entrepreneur and Co-founder & COO at NutriSense, is a must-listen! In this episode, Dan discusses how to personalize nutrition using CGM (Continuous Glucose Monitoring) and Machine Learning and the importance of assessing factors such as time frame, connections, and personal commitments for success. He also highlights the risks of raising money and its potential impact on your lifestyle. Additionally, Dan explains how one in three people worldwide is pre-diabetic and how real-time data can help reduce heart disease. He also covers other health concerns such as food quality ingredients, exercise, stress, and sleep, as well as the potential for measuring things like insulin, cholesterol, testosterone, and estrogen in the future. Furthermore, he speaks about personalized dieting based on individual genetics and microbiome, different responses to coffee between men and women, and the need to own one's health data to make informed decisions. Don't miss this episode and tap into Dan's expertise and wisdom as he shares his story of revolutionizing healthcare! Listen now to get the most out of your healthcare journey.
CISA adds three known exploited vulnerabilities to its Catalog. A data breach at Acer exposes intellectual property. Sharp Panda deploys SoulSearcher malware in cyberespionage campaigns. US Cyber Command's head warns against underestimating Russia in cyberspace. Dave Bittner sits down with Simone Petrella of N2K Networks to discuss the recently-released Defense Cyber Workforce Framework. Betsy Carmelite from Booz Allen Hamilton speaks about CISA's year ahead. And are large language models what the lawyers call an "attractive nuisance"? For links to all of today's stories, check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/45
Selected reading:
CISA Adds Three Known Exploited Vulnerabilities to Catalog (Cybersecurity and Infrastructure Security Agency CISA)
March 7 CISA KEV Breakdown | Zoho, Teclib, Apache (Nucleus Security)
Acer Confirms Breach After Hacker Offers to Sell Stolen Data (SecurityWeek)
Acer confirms breach after 160GB of data for sale on hacking forum (BleepingComputer)
"Sharp Panda": Check Point Research puts a spotlight on Chinese origined espionage attacks against southeast asian government entities (Check Point Software)
Pandas with a Soul: Chinese Espionage Attacks Against Southeast Asian Government Entities (Check Point Research)
What can security teams learn from a year of cyber warfare? (Computer Weekly)
Russian cyberattacks could intensify during spring offensives in Ukraine, US Cyber Command general says (Stars and Stripes)
US Bracing for Bolder, More Brazen Russian Cyberattacks (VOA)
Russia remains a 'very capable' cyber adversary, Nakasone says (C4ISRNet)
Employees Are Feeding Sensitive Business Data to ChatGPT (Dark Reading)
It is amazing how far Artificial Intelligence has progressed just in the last year! No one could've predicted the speed of this innovation. A lot of artists are now worried about being replaced by machine learning. While AI can automate and speed things up, it is still up to us as artists to know how to use it as an extension of our ability. Before, we couldn't talk to computers; we could only program them via 1's and 0's. When GPTs came along, we started communicating with computers and giving AI commands. AI is also able to read the internet and interpret the data it consumes. It knows that it has to fail in order to succeed, and eventually it will perform the task correctly. That alone changes the playing field! But what does that mean for artists? We used to think AI would be assigned repetitive tasks, but now AI can create art in the style of another artist. Things like that disrupt the industry. It's a threatening time for a lot of artists. With the advent of Stable Diffusion, Midjourney, DALL-E, and ChatGPT, it's a really exciting time to be alive! All of these models came out in the past year. As a matter of fact, Stable Diffusion is only a couple of months old (www.allanmckay.com/390)! Innovation has always been threatening. As humans, we're naturally fearful of change. No one can predict what's going to happen. But historically, those who are early adopters of innovation win! Those who are fearful get left behind. That is why it's important for every artist to continue to evolve. So let's not be afraid of this new technology but instead embrace it and be at the forefront of it. In other words: ADAPT OR DIE. In this Podcast, Allan McKay discusses the most recent developments in Artificial Intelligence, the rapid speed of this innovation, how it's been affecting artists' jobs, and why it's important for artists to innovate in order to stay at the forefront of this new technology. For more show notes, visit www.allanmckay.com/393.
This episode script was generated by ChatGPT based on prompts by Simon J. E. Drew. Register for "ChatGPT, AI Models and The Walled Garden: An Introduction to The Senate": https://thewalledgarden.com/an-introduction-to-the-senate SCRIPT Hello, and welcome to The Walled Garden Podcast, the official podcast of The Walled Garden Philosophical Society. I'm here filling in for the host, Simon Drew. Today's episode is a little different because it was entirely created by Artificial Intelligence technologies. First up, we have some exciting news to share. On March 9th at 3 pm PST, The Walled Garden Philosophical Society is hosting an event called ChatGPT, AI Models, and The Walled Garden: An Introduction to The Senate. This event will be open to the public for the price of a donation of your choosing, and you can register via thewalledgarden.com/events. In this event, Rocco Jarman, an Australian poet, mystic, and philosopher, will join Simon Drew, a musician, poet, and co-founder of The Walled Garden Philosophical Society, to give an in-depth introduction to The Senate, which is a series of events that will be for members only after March 2023. In these monthly events, our community will gather to discuss and come to a consensus on the most complicated issues of our time using thoughtful philosophical dialogue underpinned by courage and vulnerability. Now, on with the article, entitled "Artificial Stupidity: The Life and Times of Machine Learning." READ THE FULL SCRIPT HERE: https://thewalledgarden.com/artificial-stupidity-the-life-and-times-of-machine-learning Learn more about your ad choices. Visit megaphone.fm/adchoices
Today we're talking about the launch of Einstein GPT, which is being announced today at TrailblazerDX 2023. We are going to learn how Einstein GPT will open the door to the AI future for our customers. Here to show us the way is Jayesh Govindarajan, SVP of AI and Machine Learning at Salesforce.
Can machine learning predict the outcome of basketball games? On this episode of StarTalk: Sports Edition, Neil deGrasse Tyson, Gary O'Reilly, and Chuck Nice talk machine learning with computer scientist and author of The Factor Man, Matt Ginsberg. NOTE: StarTalk+ Patrons can watch or listen to this entire episode commercial-free here: https://startalkmedia.com/show/rise-of-the-machines-with-matt-ginsberg-re-release/ Thanks to our Patrons Erdem Memisyazici, Priscilla & Kyle, Steven Severin, sumplkrum, Julia Zeikowitz, Cory Ricci, Brennon Russ, Tony Marulli, Ryan Bariteau, and MTB Trucker for supporting us this week. Photo Credit: Phil Roeder from Des Moines, IA, USA, CC BY 2.0, via Wikimedia Commons