Podcasts about Microsoft Research Lab

  • 14 podcasts
  • 19 episodes
  • 49m avg duration
  • Infrequent episodes
  • Latest: Aug 1, 2023

POPULARITY

(popularity chart, 2017–2024)


Best podcasts about Microsoft Research Lab

Latest podcast episodes about Microsoft Research Lab

Why Is This Happening? with Chris Hayes
AI: “An Exponential Disruption” with Kate Crawford (2023)

Why Is This Happening? with Chris Hayes

Aug 1, 2023 · 59:56


Since Chris is just getting back from vacation this week, we're re-sharing one of our favorite episodes. You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we're starting to see more examples of the dark side of the technology. How close are we to genuine external intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She's also author of “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” Crawford joins WITHpod to discuss the social and political implications of AI, exploited labor behind its growth, why she says it's “neither artificial nor intelligent,” climate change concerns, the need for regulation and more.

Why Is This Happening? with Chris Hayes
AI: “An Exponential Disruption” with Kate Crawford

Why Is This Happening? with Chris Hayes

Apr 18, 2023 · 59:56


You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest this week points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we're starting to see more examples of the dark side of the technology. How close are we to genuine external intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She's also author of “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” Crawford joins WITHpod to discuss the social and political implications of AI, exploited labor behind its growth, why she says it's “neither artificial nor intelligent,” climate change concerns, the need for regulation and more.

Pivot
The Discord Leaks, San Francisco Safety, and Kate Crawford on AI

Pivot

Apr 18, 2023 · 70:26


Scott's back on Twitter, and Elon says the platform could be cash-flow positive this quarter - coincidence? Kara and Scott discuss growing calls for Senator Feinstein to resign, a delay in the Dominion v. Fox News trial, and impressive JPMorgan Chase earnings. Also, a tech consultant has been arrested for the murder of Cash App founder Bob Lee. And U.S. National Security is in disarray over Discord after an Air National Guardsman allegedly leaked classified documents on the platform. Then, we're joined by Kate Crawford, Principal Researcher at Microsoft Research Lab and Professor at USC Annenberg, to talk everything AI. You can find Kate on Twitter at @katecrawford, and can buy “Atlas of AI” here. We're nominated for a Webby! Vote for us here. Send us your questions! Call 855-51-PIVOT or go to nymag.com/pivot. Learn more about your ad choices. Visit podcastchoices.com/adchoices

De Technoloog | BNR
Physics, too, is getting help from artificial intelligence

De Technoloog | BNR

Dec 13, 2022 · 64:38


We can now chat, drive autonomously and generate images quite capably with artificial intelligence, but now physics is also getting help from AI. Computers can nowadays simulate the laws of nature. How that works, and above all what it yields, is what De Technoloog asks Max Welling, professor of machine learning at the UvA and head of the Microsoft Research Lab. Guest: Max Welling. Links: more Max Welling! Video: YouTube. Hosts: Herbert Blankesteijn & Ben van der Burg. Editing: Daniël Mol. See omnystudio.com/listener for privacy information.

BNR's Big Five | BNR
Artificial Intelligence | Max Welling

BNR's Big Five | BNR

Dec 1, 2021 · 42:47


Machine learning often plays an important role in major AI breakthroughs. Which developments are the most promising? The guest is Max Welling, professor of machine learning at the UvA and head of the Microsoft Research Lab.

Voice of FinTech
The future of banking with Neon's co-founder Simon Youssef hosted by Silvan Krähenbühl

Voice of FinTech

Aug 17, 2021 · 38:27


In this episode hosted by Silvan Krähenbühl, a serial entrepreneur and host of the Swisspreneur podcast, Silvan spoke to the co-founder of Neon, a leading Swiss digital bank, Simon Youssef, about the future of banking.

Here are the highlights of what they talked about:
  • Leaving your well-paying corporate job
  • The business model behind a free product
  • Is Revolut a competitor?
  • Being on Höhle der Löwen
  • Doing a 5M tokenized crowd investment

Simon Youssef's story: Simon Youssef is the co-founder and CTO of neon, a transaction account for everyday finances. Before starting his own company, he was an associate at the management consulting company Strategy&. His educational background is in Physics, and he holds a PhD from the Microsoft Research Lab.

Simon and his co-founder Jörg Sandrock worked together at Strategy&, where they became well acquainted with the Swiss banking market. After a few unfulfilling projects, Simon went on a surfing sabbatical in Indonesia. During this time, Jörg proposed the neon business idea to him, and Simon decided to take the leap.

Whereas typical banking apps cater to the bank's sales interests, merely replicating the organizational structure of a traditional bank, neon offers a client-focused product free of charge, with the lowest fees in Switzerland. Neon manages this because, having a team of around 20 employees, they can live on much slimmer margins. Their revenue comes from card interchange, premium plans like neon Green, and finance fees (from when clients buy partner products on the app).

Revolut and Transferwise don't pose a very big threat, since they're more travel cards than anything else. neon, on the other hand, is a Swiss banking product, meaning they can offer you a Swiss IBAN, e-bills, and so on.

Having been the first of its kind in the small Swiss market, neon has already amassed 70,000 users, in part due to the large amount of press coverage they received. Their participation in Höhle der Löwen, while helpful cash-flow-wise, did not bring them as much of a boost as they initially imagined.

Recommended resources: How I Built This and The Happiness Lab.

Swisspreneur Show
EP #178 - Simon Youssef: The Future Of Banking

Swisspreneur Show

Aug 4, 2021 · 38:47


Timestamps:
  • 8:16 - Leaving your well-paying corporate job
  • 13:29 - The business model behind a free product
  • 22:08 - Is Revolut a competitor?
  • 27:29 - Being on Höhle der Löwen
  • 29:35 - Doing a 5M tokenized crowd investment

About Simon Youssef: Simon Youssef is the co-founder and CTO of neon, a transaction account for everyday finances. Before starting his own company, he was an associate at the management consulting company Strategy&. His educational background is in Physics, and he holds a PhD from the Microsoft Research Lab.

Simon and his co-founder Jörg Sandrock worked together at Strategy&, where they became well acquainted with the Swiss banking market. After a few unfulfilling projects, Simon went on a surfing sabbatical in Indonesia. During this time, Jörg proposed the neon business idea to him, and Simon decided to take the leap.

Whereas typical banking apps cater to the bank's sales interests, merely replicating the organizational structure of a traditional bank, neon offers a client-focused product free of charge, with the lowest fees in Switzerland. Neon manages this because, having a team of around 20 employees, they can live on much slimmer margins. Their revenue comes from card interchange, premium plans like neon Green, and finance fees (from when clients buy partner products on the app).

Revolut and Transferwise don't pose a very big threat, since they're more travel cards than anything else. neon, on the other hand, is a Swiss banking product, meaning they can offer you a Swiss IBAN, e-bills, and so on.

Having been the first of its kind in the small Swiss market, neon has already amassed 70,000 users, in part due to the large amount of press coverage they received. Their participation in Höhle der Löwen, while helpful cashflow-wise, did not bring them as much of a boost as they initially imagined.

Memorable Quotes: "Leaving our corporate careers was a huge risk, so we did not want to fuck this up. Not wanting to fuck up is a big motivator."

Resources Mentioned:
  • How I Built This
  • The Happiness Lab

If you'd like to know more about the neon story, check out our episode with Simon's co-founder, Jörg Sandrock. Don't forget to give us a follow on our Twitter, Instagram, Facebook and Linkedin accounts, so you can always stay up to date with our latest initiatives. That way, there's no excuse for missing out on live shows, weekly give-aways or founders dinners!

Microsoft Research India Podcast
Research @Microsoft Research India: interdisciplinary and impactful. With Dr. Sriram Rajamani

Microsoft Research India Podcast

Apr 19, 2021 · 27:56


Episode 008 | April 20, 2021

Microsoft Research India is constantly exploring how research can enable new technologies that positively impact the lives of people while also opening new frontiers in computer science and technology itself. In this podcast we speak to Dr. Sriram Rajamani, distinguished scientist and Managing Director of the Microsoft Research India Lab. We talk about some of the projects in the lab that are making fundamental changes to computing at Internet scale, computing at the edge, and the role he thinks technology should play in the future to ensure digital fairness and inclusion. Sriram also talks to us about a variety of things: his own journey as a researcher, how the lab has changed from the time he joined it years ago, and his vision for the lab.

Sriram's research interests are in designing, building and analyzing computer systems in a principled manner. Over the years he has worked on various topics including Hardware and Software Verification, Type Systems, Language Design, Distributed Systems, Security and Privacy. His current research interest is in combining Program Synthesis and Machine Learning. Together with Tom Ball, he was awarded the CAV 2011 Award for “contributions to software model checking, specifically the development of the SLAM/SDV software model checker that successfully demonstrated computer-aided verification techniques on real programs.” Sriram was elected ACM Fellow in 2015 for contributions to software analysis and defect detection, and Fellow of the Indian National Academy of Engineering in 2016. Sriram was general chair for POPL 2015 in India, and was program Co-Chair for CAV 2005. He co-founded the Mysore Park Series, and the ISEC conference series in India. He serves on the CACM editorial board as co-chair for special regional sections, to bring computing innovations from around the world to CACM.

Sriram has a PhD from UC Berkeley, an MS from the University of Virginia and a BEng from the College of Engineering, Guindy, all with specialization in Computer Science. In 2020, he was named a Distinguished Alumnus by the College of Engineering, Guindy. For more information about Microsoft Research India, click here.

Transcript

Sriram Rajamani: We are not like an ivory tower lab. You know we are not a lab that just writes papers. We are a lab that has our hands and feet dirty, we sort of get ourselves dirty, sort of get in there, you know, we test our assumptions, see whether it works, learn from them, and in that sense actually the problems that we work on are a lot more real than a purely academic environment.

[Music]

Sridhar Vedantham: Welcome to the Microsoft Research India podcast, where we explore cutting-edge research that’s impacting technology and society. I’m your host, Sridhar Vedantham.

[Music]

Sridhar Vedantham: Microsoft Research India is constantly exploring how research can enable new technologies that positively impact the lives of people while also opening new frontiers in computer science and technology itself. In this podcast we speak to Dr. Sriram Rajamani, distinguished scientist and Managing Director of the Microsoft Research India Lab. We talk about some of the projects in the lab that are making fundamental changes to computing at Internet scale, computing at the edge and the role he thinks technology should play in the future to ensure digital fairness and inclusion. Sriram also talks to us about a variety of things: his own journey as a researcher, how the lab has changed from the time he joined it many years ago and his vision for the lab.

Sridhar Vedantham: So today we have a very special guest on the podcast, and he is none other than Dr.
Sriram Rajamani, who is the Managing Director of the Microsoft Research Lab in India. So Sriram, welcome to the podcast.

Sriram Rajamani: Yeah, thank you. Thank you for having me here, Sridhar.

Sridhar Vedantham: OK, you've been around in Microsoft Research for quite a while, right? Can you give me a brief background as to how you joined and when you joined and what's your journey been in MSR so far?

Sriram Rajamani: Yeah, so I joined in 1999. And, oh man, it's now 22 years, I guess. I've been here for a while.

Sridhar Vedantham: That's a long time.

Sriram Rajamani: I joined Microsoft Research in Redmond right after I finished my PhD in Berkeley, and you know, my PhD was in formal verification. So, my initial work at Microsoft in Redmond was in the area of formal verification, and then at some point I moved to India, around 2006 or something like that. So I think I spent about six or seven years in Redmond and my remaining time- another 15 years- in India. So that's been my journey, yeah.

Sridhar Vedantham: OK, so this is interesting, right, because, you know, we constantly hear about India as being this great talent pool for software engineers, but we certainly don't hear as often that it is a great place for a computer science research lab. Why do you think a Microsoft Research lab in India works, and what drew you to the lab here?

Sriram Rajamani: I'm a scientist and I joined MSR because I wanted to do high quality science work that is also applicable in the real world, you know. That's why I joined MSR, and the reason why I moved to India was because at some point I just wanted to live here - I wanted to live here because I have family here and so on- and then Anandan started the lab, and so somehow things came together, and that's why I personally moved. But if you ask me why it makes sense for MSR to have a lab here, the reasons are quite clear. I think we are such a big country, we have enormous talent. I think talent is the number one reason we are here. Particularly unique to India is that we have really strong undergraduate talent, which is why we have programs like our Research Fellow program. But over the past many years, right, the PhD talent is also getting better and better. As you know, initially when we started, we recruited many PhDs from abroad, who had their PhD from abroad and then returned, just like me. But over the years we've also recruited many PhDs from Indian institutions as well. So, I think that talent is the number one reason. The second reason is, you know, the local tech ecosystem is very different. It started out as a service industry for the West- you know, essentially all of the software we were doing, we were servicing companies in the western hemisphere. But over time, India has also become a local consumer of technology, right? Now, if you sort of think about, you know, Ola or Flipkart, the country is now using technology for its own local purposes. And because of the size and scale of the country, and the amount the government and industry are pushing digitization, there's a huge opportunity there as well. And finally, I would say another reason to have a lab in a place like India is that it's a very unique testbed. You know, cost is a huge concern in a place like India- technology has to be really low cost for it to be adopted here. There are very severe resource constraints. Be it bandwidth… you know, if you think about NLP, many of our languages don't have data resources. Very unreliable infrastructure- things fail all the time. And so, you know, I've heard a saying that if you build something so that it works in India, it works anywhere. So it's a testbed to actually build something. If you can deploy it and make it work here, you can make it work anywhere. So in that sense actually it's also another reason.
Sridhar Vedantham: OK, so basically if it works here, it's a good certification that it'll work anywhere in the world.

Sriram Rajamani: Yeah, yeah.

Sridhar Vedantham: All right. OK Sriram, so here's something I'm very curious about. How does a research scientist end up becoming the managing director of a lab?

Sriram Rajamani: So the short answer is that it was rather unplanned, but maybe I can give a longer answer. You know, I started out, you know, being a researcher like anyone else who joins MSR. My initial projects were all in the area of, you know, formal verification- I built, together with Tom Ball, something called Static Driver Verifier that used formal methods to improve Windows reliability. Then I worked on verifiable design- how can you do better design so that you produce better systems? Then I worked on, you know, security, and now I work on machine learning and program synthesis. And you know, a common thread in my work has always been the use of programming languages and formal methods to sort of understand how to build various kinds of systems, be it drivers, be it secure systems, be it machine learning systems. That has been sort of the theme underlying my research. But to answer your question as to how I became lab director: you know, some years after I moved back to MSR India, Anandan, who was the lab director then, left. There was a leadership churn there, and at the time I was asked whether I would consider being the lab director. The first time I declined, because I had many other technical projects going on. But I got the opportunity the second time, you know, when Chandu and Jeanette really encouraged me when Chandu decided to move on. I had been in MSR maybe 15-16 years when that event happened. And one of the reasons why I decided to take this up was I felt very strongly for MSR, and I thought that MSR has given me a lot and I wanted to give back to MSR and MSR India. And MSR India is easily one of the best computer science industrial labs in this part of the world. And, you know, it made sense that I actually devote my time to support my colleagues and grow the lab in ambition and impact, and I sort of had a sense of purpose in that, and so I decided to take this on. So the real answer to your question is: I don't think anyone plans to be a lab director, and sometimes you know you get an opportunity to become one, and sometimes you say yes.

Sridhar Vedantham: Great. OK, and you know, given that you've been in the lab here in India for quite a while, how do you see the lab having evolved over the years? I mean, I'm sure there are lots of things that have changed quite a bit. So, what do you think are those things that have changed quite a bit, what's not changed, and what are the kind of things that you'd like to preserve going forward?

Sriram Rajamani: Yeah, I think the number one thing that has not changed is quality. MSR… I've now been here for 21 years, and I've been with MSR India, you know, from the very beginning- I came here 6-7 months after the lab started. We've always had great people and the quality of the work we do has always been exceptional. But I think what has changed over the years is that we think much more end to end. When I joined, you know, in '99, we were sort of more academic in nature. We always used to publish in high quality conferences, which we still do. But I think what we do more now is that we think much more end to end. We are no longer satisfied with solving a particular piece of a problem, but we sort of think about how that piece connects with many, many other pieces, some social, some technical, and how those things fit together broadly to solve problems end to end. We sort of think about that a lot more.
As a result, I think we more often than not deploy what we build, you know, either at scale- solutions that actually are adopted by product groups- or actually in our communities, to validate whether what we think of as something that creates a change does indeed create a change, and learn from that and use that to even reframe our problems and test our assumptions. And so, you know, we are not like an ivory tower lab. You know we are not a lab that just writes papers. We are a lab that has our hands and feet dirty- we sort of get ourselves dirty, sort of get in there, you know, we test our assumptions, see whether it works, learn from them, and in that sense actually the problems that we work on are a lot more real than in a purely academic environment. I think that's the way in which things have changed. And I think partly also, Sridhar, as you do that, we have become a lot more interdisciplinary…

Sridhar Vedantham: Right.

Sriram Rajamani: …you know, if you look at our projects today, right? Because if you want to get something to work end to end, it is not just one piece you build, you know. You have to make it interdisciplinary, and many of our projects are interdisciplinary. I think that's the other way in which we’ve changed.

Sridhar Vedantham: Yeah, in fact this particular term, right, interdisciplinary research, is something that I've heard quite often coming from you. Do you want to just bring up a couple of examples of what you mean by interdisciplinary research by using some projects as examples?

Sriram Rajamani: Yeah, I can give, like, you know, two or three. The first one that comes to mind is our EzPC, you know, our multiparty computation project. And if you look at actually how that project is working, the whole goal is to take computations, be it DNN training or DNN inference, and run them securely with multiple parties. And you know that's a pretty complex problem. There’s compiler people, there's programming languages people, and there's cryptographers. All of them work together to build a solution where the programmer can express their computation in some language, and there's a compiler that compiles it. And then there is, you know, a lot of cryptography smarts in there in order to make this multiparty computation work, and that's very unique. You can't do this without compiler people and the cryptographers getting together. Another example, you know, is our Akupara semantic search work- that's actually a combination of algorithms work, machine learning work and systems work, so that we can index trillions of vector indices and look them up in a reasonable amount of time with a reasonable number of machines. I mean, I can't imagine doing that without expertise in algorithms, machine learning and systems. And if you look at our more recent, you know, societal impact projects- we have this project called HAMS which we are using to improve road safety. I mean, that has quite a bit of tech, like computer vision. You have to make that work on a smartphone, so you need systems innovation to actually make that work on a smartphone. And it has quite a bit of HCI. I mean, it has to work in an environment where you go into a driver license testing RTO and it should just work there, right? You know, it has to work with people, the feedback that is given should be consumable. You know, if somebody fails a driving test, the feedback has to be given in such a way that it's a positive experience for them even if they fail the test, right? So it has all these interdisciplinary aspects. And so I hope that those give you a little bit of a flavor for what it takes to solve things end to end.

[Music]

Sridhar Vedantham: A lot of the listeners of this podcast are not going to be really familiar with all the stuff that we do here at MSR, right, at MSR India especially.
In your mind, you know, how do you kind of categorize or bucket the different research work that goes on in the lab?

Sriram Rajamani: We now think about our work as being classified into themes, and the themes are different from our expertise. If you look at our expertise, right, from the beginnings of the lab we have had four broad areas of expertise. We have, you know, people with expertise in algorithms; second, in machine learning; and third, in systems, very broadly interpreted, including programming languages, distributed systems, networking, security and so on. And then we have people who do human computer interaction and social sciences, right? Those are our four areas of expertise, but if you look at the way we organize our work now, it is in themes. We have five themes. One theme is around large-scale machine learning- you know, things like recommendation systems, search, large multilingual learning- which spans the entire gamut from algorithms to practical machine learning algorithms as well as systems, right, you know, in order to build them and scale them. Then we have two systems related themes. One is data driven systems and networking, where we are using telemetry and the enormous amount of data that we get from large scale cloud systems to do machine learning on them and improve those systems themselves. And the second systems area we have is called co-designed systems, where we think about interdisciplinary systems work that spans distributed systems, security, privacy, programming languages and verification. So we sort of think about systems much more holistically. Another theme we have is edge computing, where we sort of think about machine learning systems and usability on the edge, which is such an important topic from the perspective of India. And the last theme is socio-technical systems and inclusion, where we really think about technology as an ally and an enabler for inclusion and empowerment. And each of these five themes, right, draws on the expertise of people from these various disciplines.

Sridhar Vedantham: Great, so I've heard you many times talking about things like tech at scale. There are a couple of things that you said that kind of stick in my mind- so there is one, tech at scale, and then there is one, tech in minute form- I forget the exact terms you use- and socio-technical computing is also quite big at MSR India right now. Could you give me a flavor of what exactly is happening in, say, the tech at scale area and also the socio-technical computing area?

Sriram Rajamani: Yeah, so I think the tech at scale is quite important because digital systems are very pervasive now. The pandemic has only accelerated the adoption of digital systems. Most interactions these days are online, and even when we come back from the pandemic, it's going to be hybrid, right? The amount of information is just increasing and increasing and increasing, and as a result, right, for any useful user experience we need to be able to sort through this huge amount of information and make the right information available at the right time, right. And that I think is in some sense the primary goal of AI and machine learning and systems at scale, and I think most of our systems at scale work is about how to build systems that use AI and machine learning to process humongous amounts of information- billions and trillions of pages or documents or vectors- understand them, and make sure that the right information is available to you at the right time. And how do you do that reliably, how do you do that securely, how do you do that while preserving privacy? That I think is the crux of our tech at scale. I already mentioned Akupara, which is a trillion-scale index and serving system that we are building for semantic search.
Another at-scale project we are doing is called extreme classification where we are trying to build classifiers that can take an object and classify it into hundreds of millions of categories, right? And just like when we think about machine learning, we think about a picture and classifying it into a cat or a dog or a small number of categories. But in extreme classification we take an object like a web page or a document, and we classify it into, potentially, millions or hundreds of millions of, for example, topics. I mean, what are these topics that this document is talking about, or if this object that I'm talking about is an advertisement, what are the keyword bid phrases that are actually relevant to this advertisement, right? So those kinds of classifications are significantly more complex, and our lab really originated this field and is a thought leader in this field. Another at-scale work that we are doing is if you take the area of DNN training. Deep neural network training is an extremely resource intensive process. You know if you take, billions and billions of training points and train deep neural networks, that uses a huge number of GPU resources and other hardware resources. Can you do that more efficiently? And we have a project called Gandiva that improves the throughput of all of the infrastructure that we are using to train these kinds of DNNs. And we want to give you one more example. We have a project called Sankie and what Sankie does is actually, to use all of this telemetry from huge software engineering processes, including coding, testing, development to really improve the productivity of the engineering itself. So, I would say you know those are the kinds of examples of at-scale AI and ML and Systems project that we do. And I think every CS lab has to do that because that is the real world today. 
Sridhar Vedantham: And we've actually done a podcast earlier on Sankie, so I think I'll link to that when we publish the transcript of this podcast. Sriram Rajamani: Wonderful. Sridhar Vedantham: Right, and in socio-technical computing, do you want to talk a little more about that? This is something that personally I find quite fascinating, because this lab has always had, from the very beginning, a focus on the ICTD space itself. But the kind of projects that are happening now in the lab seem to be taking that to a different level altogether in terms of actually going out there and figuring out the impact and deploying at scale. Sriram Rajamani: You're right, Sridhar, that Technology for Emerging Markets has always been a really interesting area in the lab from its inception. But one thing that has changed, if you think about it, is that when the lab was started, the kind of technology that was available to everybody in rural India was very different from the technology that all of us use, right? They had maybe feature phones, while everybody else had smartphones and so on. But now connectivity, smartphone penetration and so on have increased significantly. So in some sense, through 4G and so on, the cloud and the mobile smartphone are much more accessible, much more prevalent these days. But still the problems are there- bandwidth is a problem, and things don't work in the local language; English works much better than the local languages. Those constraints are there, but the technology platform has been up-leveled throughout the country. As a result, if you take our own work on socio-technical computing, we are doing technologically more sophisticated things now than we did before, because more technologically sophisticated things are accessible to a much broader population of the country. 
That I think is the way things have changed, which is why we are actually now able to do projects like, you know, HAMS, which is used for driver license testing. Because even an RTO in a rural area, right, they have access to smartphones, and they are interested to see whether driver license testing can be streamlined. So I think the high tide has lifted the technology everywhere- that's one way in which things have changed. Another is this project called BlendNet, where we are using peer-to-peer connectivity to help people share media and other bulky resources better. And the reason we are doing this is that the desire to view movies and entertainment on smartphones is very widespread throughout the country, right? So that's another example. And just this morning I was looking at a piece of news about this company Respirer Living Sciences- we have a collaboration with them to measure air pollution, and they want to monitor pollution and democratize the data. I mean, this is now such an important problem, but if you look at what is needed to do that, we have to solve really hard technical problems. Like, how do you make sure that the sensors doing the sensing are reliable? How are the sensors calibrated, and if a sensor is erroneous, how do you re-calibrate it? These are hardcore technology problems that I think are important in solving a societal problem like air pollution. So another way things have changed is that perhaps previously all our societal-scale problems were sort of low tech- that's no longer true. That doesn't mean the tech works as it is, right? 
You know, we still work on projects like Karya, where we are trying to do data collection and crowdsourcing for low-resource Indian languages, and that actually requires us to build user interfaces that work for semi-literate and illiterate users and make sure that we are able to cater to the multilingual population in the country and so on, right? So user-centered design and the need to design for people on the other side of the digital divide is still important, right? But at the same time, the tech tidal wave has also lifted things up, so I think that's the dynamic here. Sridhar Vedantham: Right, and there's a bit of a conundrum here, right? Because at one point of time it was assumed that technology itself was going to help people's lives become better and so on. And we've obviously seen technology permeate to levels within society that it's never permeated before. Now this brings about questions of digital inclusion and fairness and equitable access to information and to the benefits of technology. So, a couple of questions here. How do we actually ensure things like digital inclusion and fairness? And given very specific, unprecedented situations like the one we find ourselves in now, in the midst of a pandemic, how does this actually impact people, or impact society at large? Sriram Rajamani: I think in spite of the fact that digital technology has permeated, it is very clear that technology is still very non-inclusive, right? That is also true at the same time. And so there is, I think, no silver bullet answer to the question you're asking. It's extremely important for us as scientists and technologists to think about underserved populations, underserved communities, and see whether the technologies that we build are inclusive, whether they are useful. 
I'll give an example of what Manohar Swaminathan is doing with his work on accessibility, where he has done quite a bit of study in schools for visually impaired children, thinking about even the curriculum that they have in STEM, computing and computational thinking for this population, and seeing whether the tools we have, the curriculum we have and the technologies we have are actually reaching this demographic. And the answer is no. Quite a bit of work needs to be done in order to make sure that children with vision impairment are getting educated in digital technologies and the technology is inclusive. There's a huge gap there, so his work is particularly inspiring in that sense. And I think problems like awareness and literacy are very hard problems to solve. You can make a smartphone cheaper, you can make 4G and 5G more available, but things like literacy, cognition and understanding of what's actually going on- those take many, many generations to resolve. So I think one has to think about people's context and people's preparedness when thinking about inclusion. Sridhar Vedantham: Great, so I'm going to be cognizant of your time. I know you've got a bunch of meetings every day all the time. So before we sign off, are there any final thoughts? Sriram Rajamani: Yeah, so I would say that the pandemic in some ways has really accelerated digital transformation, right? But at the same time the pandemic has also exacerbated the gap between the rich and the poor. That has also happened. So I would say that this is a very interesting time for scientists and technologists. 
On the one hand, science is, you know, an important hope for us to get out of the pandemic- be it vaccines, be it digital technology to help us communicate and collaborate even when we are at our homes- technology is such an important thing. And in order to actually serve the large number of people, we have to build technology at scale. At the same time, the virus doesn't discriminate between rich and poor. It doesn't discriminate based on race or gender, and so if we have to get out of the pandemic, we have to make sure that the solutions, be it vaccines, reach everyone. If anything, the pandemic has taught us that unless we serve everyone, problems like the pandemic- and the same is true of climate change- are not going to be solved. Those are universal problems, and by definition they are inclusive, right? So my closing comment would be for technologists to build technology in such a way that it brings people together. You know, have empathy for people in every shape, size and form and make sure that what we build serves the whole of the world. Sridhar Vedantham: OK Sriram, thank you so much for your time. This has been a fascinating conversation. Sriram Rajamani: Yeah, thank you Sridhar, and I wish the listeners health and happiness in the rest of the year as well. Thank you. [Music Ends]  

Microsoft Research India Podcast
Making cryptography accessible, efficient and scalable. With Dr. Divya Gupta and Dr. Rahul Sharma

Microsoft Research India Podcast

Play Episode Listen Later Sep 7, 2020 34:12


Episode 005 | September 08, 2020 Podcast: Making cryptography accessible, efficient and scalable. With Dr. Divya Gupta and Dr. Rahul Sharma Ensuring security and privacy of data, both personal and institutional, is of paramount importance in today’s world where data itself is a highly precious commodity. Cryptography is a complex and specialized subject that not many people are familiar with, and developing and implementing cryptographic and security protocols such as Secure Multi-party Computation can be difficult and can also add a lot of overhead to computational processes. But researchers at Microsoft Research have now been able to develop cryptographic protocols that are developer-friendly, efficient and that work at scale with acceptable impact on performance. Join us as we talk to Dr. Divya Gupta and Dr. Rahul Sharma about their work in making cryptography easy to use and deploy. Dr. Divya Gupta is a senior researcher at Microsoft Research Lab India. Her primary research interests are cryptography and security. Currently, she is working on secure machine learning using secure multi-party computation (MPC), and lightweight blockchains. Earlier, she received her B.Tech and M.Tech in Computer Science from IIT Delhi and a PhD in Computer Science from the University of California at Los Angeles, where she worked on secure computation, coding theory and program obfuscation. Dr. Rahul Sharma has been a senior researcher at Microsoft Research Lab India since 2016. His research lies at the intersection of Machine Learning (ML) and Programming Languages (PL), which can be classified into the two broad themes of “ML for PL” and “PL for ML”. In the former, he has used ML to improve the reliability and efficiency of software; in the latter, he has built compilers to run ML on exotic hardware like tiny IoT devices and on cryptographic protocols. Rahul holds a B.Tech in Computer Science from IIT Delhi and a PhD in Computer Science from Stanford University. 
Click here for more information on Microsoft Research’s work in Secure Multi-party Computation and here to go to the GitHub page for the project. Transcript Divya Gupta: We not only make existing crypto out there more programmable and developer friendly, but we have developed super-duper efficient cryptographic protocols which are tailored to ML, like the secure machine learning inference task, and work for large machine learning benchmarks. So before our work, the prior work had three shortcomings, I would say. They were slow. They only did small machine learning benchmarks, and the accuracy of the secure implementations was lower than the original models. And we solved all three challenges. So our new protocols are at least 10 times faster than what existed out there. [Music] Sridhar: Welcome to the Microsoft Research India podcast, where we explore cutting-edge research that’s impacting technology and society. I’m your host, Sridhar Vedantham. [Music] Ensuring security and privacy of data, both personal and institutional, is of paramount importance in today’s world where data itself is a highly precious commodity. Cryptography is a complex and specialized subject that not many people are familiar with, and developing and implementing cryptographic and security protocols such as Secure Multi-party Computation can be difficult and can also add a lot of overhead to computational processes. But researchers at Microsoft Research have now been able to develop cryptographic protocols that are developer-friendly, efficient and that work at scale with acceptable impact on performance. Join us as we talk to Dr. Divya Gupta and Dr. Rahul Sharma about their work in making cryptography easy to use and deploy. Sridhar Vedantham: Alright, so Divya and Rahul, welcome to the podcast. 
It's great to have you guys on the show, and thank you so much. I know this is really late in the night, so thank you so much for taking the time to do this. Divya Gupta: Thanks Sridhar for having us. Late is what works for everyone right now. So yeah, that's what it is. Rahul Sharma: Thanks Sridhar. Sridhar Vedantham: Alright, so this podcast, I think, is going to be interesting for a couple of reasons. One is that the topic is something I know next to nothing about, but it seems to me from everything I've heard that it's quite critical to computing today, and the second reason is that the two of you come from very different backgrounds in terms of your academics, in terms of your research interests and specialities, but you're working together on this particular project or on this particular field of research. So let me jump into this. We're going to be talking today about something called Secure Multi-party Computation or MPC. What exactly is that, and why is it important? Divya Gupta: Right, so Secure Multi-party Computation, popularly known as MPC, is a cryptographic primitive which at first seems completely magical. So let me just explain with an example. Let's say you, Sridhar, and Rahul are two millionaires and you want to know who has more money, or who's richer. And you want to do this without revealing your net worth to each other, because this is private information. So at first this seems almost impossible- how can you compute a function without revealing the inputs of the function? But MPC makes this possible. What MPC gives you is an interactive protocol in which you and Rahul will talk to each other back and forth, exchanging some random-looking messages. And at the end of this interaction you will learn the output, which is who is richer, and you will learn the output alone. 
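One basic primitive behind many MPC protocols, additive secret sharing, can be sketched in a few lines. This is a toy illustration only: it shows how an input is split into random-looking shares, each of which reveals nothing on its own, not how a full comparison protocol like the Millionaire's Problem is run.

```python
import random

P = 2**61 - 1  # all arithmetic is modulo a large prime, so shares look uniform

def share(secret, n=2):
    """Split `secret` into n additive shares; any n-1 of them look random."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Adding all shares modulo P recovers the secret."""
    return sum(shares) % P

net_worth = 1_000_000
my_shares = share(net_worth)
assert reconstruct(my_shares) == net_worth
```

In a real protocol each party holds one share and the computation proceeds on shares; only the final output (here, who is richer) is ever reconstructed.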
MPC comes with strong mathematical guarantees which say that at the end of this interaction only the output is revealed, along with anything which can be deduced from the output, but nothing else about the inputs. So in this example, Sridhar, you and Rahul both will learn who is richer. And let's say you turn out to be richer. Then of course from this output you would know that your net worth is more than Rahul’s, and that's it- nothing else will you learn about Rahul’s net worth. So this is what MPC is. This example is called the Millionaire’s Problem, where the function is very simple: you're just trying to compare two values, which are the net worths. But MPC is much more general. Just going into a bit of history, MPC can compute any function of your choice on secret inputs. This result was in fact shown as early as the 1980s, and this power of MPC, of being able to compute any function securely, got many people interested in this problem. So a lot of work happened, and people kept coming up with better and better protocols which were more efficient. When I say efficient, some of the parameters of interest are the amount of data sent in the messages back and forth, the number of messages exchanged, and also the end-to-end latency of the protocol- how much time it takes to compute the function itself. Finally, the first implementations came out in 2008, and since then people have evaluated a few real-world examples using MPC. One example I found particularly interesting is a social study which was done in a privacy-preserving manner using MPC in Estonia in 2015. The situation was as follows. Along with the boom in information and communication technology, it was observed that more and more students were dropping out of college without finishing their degree. 
And the hypothesis going around was that students, while studying at the University, get employed in IT jobs, start to value their salaries more than their University degree, and hence drop out. But a counter-hypothesis was that because IT courses are gaining popularity, more and more students are enrolling in them, find them hard, and drop out. So the question was: is working in IT jobs during studies correlated with a high dropout rate? And to answer this question, a study was proposed to understand the correlation between early employment of students in IT jobs while enrolled in University and a high dropout rate. Now, this study can be done by taking data from employment records in the tax department and enrollment records in the education department and just cross-referencing this data. But even though all of this data is there with the government, it could not be shared in the clear between the two departments because of legal regulations, and the way they solved this problem was by doing a Secure Multi-party Computation between the Ministry of Education and the tax board. So this, I feel, is an excellent example which shows that MPC can help solve real problems where data sharing is important but cannot be done in the clear. Sridhar Vedantham: OK. Rahul, was there something you wanted to add to that? Rahul Sharma: Yes, Sridhar. If you look at what is happening today, data is being digitized- financial documents, medical records, everything- so we are getting, you could say, a flood of data which is becoming available. And the other thing which has happened in computer science is that we now have very, very powerful machine learning algorithms and very powerful hardware which can crunch these machine learning algorithms on this huge amount of data. And so machine learning people have created, for example, machine learning models which can beat human accuracy on tasks in computer vision. 
Computer vision is basically: you have an image and you want to find some pattern in that image. For example, does the image belong to a cat or a dog? And now we have classifiers which will beat humans on such tasks. The way these machine learning classifiers work is through something called supervised machine learning, which has two phases: a training phase and an inference phase. In the training phase, machine learning researchers curate the data, collect the data and throw a lot of hardware at it to generate a powerful machine learning model. Then there is an inference phase, in which new data points come in and the model labels or makes predictions on these new input data points. Now, after you have gone through this expensive training phase, the companies or organizations who do this want to monetize the model which they have obtained. If you have to monetize this model, then you have two options. One is that you can just release the model to the clients, who can download the model and run it on their private data. If they do this, then first of all, the company is not able to monetize the model, because the model has just been given away, and second, all the privacy of the training data which was used to generate this model is lost, because now someone can look at the model and try to reverse engineer what the training data was. So this is not a good option. Another option is that the organization can host the model as a web service and the clients can send their data to the company for predictions. Now, this is also not a good option because, first of all, clients will have to reveal their sensitive data to the organization holding the model, and moreover the organization itself would not like to have this client data because it is just red hot, right? If they hold client data and there is a data breach, then there are legal liabilities. 
So here we have a situation where there is an organization with its own proprietary model, and we have clients who have their sensitive data, and these two parties don't want to reveal their inputs to each other. But still, the organization wants to provide a service in which the client can give the data, receive predictions, and in exchange for the predictions give some money to the organization. And MPC will help achieve this task. So what I think is that MPC will enable machine learning to reach its full potential, because machine learning has always been hampered by issues of data privacy, and with MPC combined with machine learning, those data privacy issues can be mitigated. Sridhar Vedantham: Interesting, that's really interesting. Now obviously this sounds like a great thing to be able to do in this day and age of the Internet and machine learning. But it sounds to me that, given that you have so many people from the research community working on it, there have got to be certain challenges that you need to first overcome to make this practical and usable, right? Why don't you walk me through the issues that currently exist with implementing MPC at scale? Rahul Sharma: What you said, Sridhar, is exactly correct. There are three issues which come up, summarized as efficiency, scalability, and programmability. So what is efficiency? The thing is that if you have a secure solution, it is going to be slower than an insecure solution, because the insecure solution is not doing anything about security. When implementing a secure solution, you are doing something more to ensure the privacy of data, and so there is going to be a performance overhead. That's the first issue: we want the MPC protocols to have a bearable overhead, which, as Divya said, is what people have been working on for decades- bringing that overhead down. 
The second is that machine learning models are becoming bigger and bigger and more and more complicated. So what we want to do is take these MPC protocols and scale them to the level of machine learning which exists today. And the third challenge, which I believe is the most pressing, is that of programmability. When we think of these MPC protocols, who is going to implement them at the end of the day? If it is a normal developer, then we have a problem, because normal developers don't understand security that much. There was a case of a web forum post in which a person said, “Oh, I need to ship a product. I'm going to miss the deadline. I'm getting all these security warnings. What should I do?”. And a Good Samaritan came in and said, “Oh, you are calling this function with the value one. Just call it with the value 0 and the error should go away.” And then the developer replied, “Great. Now I'm able to ship my product. All the warnings have gone away. You saved my life,” and so on. Now, in switching that one to zero, the developer switched off all security checks- all certificate checking, all encryption, everything got switched off. So MPC protocols can be sound in the math, but when given to normal developers, it's not clear whether they will be able to implement them correctly. Divya Gupta: Actually, I would like to chime in here. What Rahul said is a great story, and rather an extreme one. But I, as a cryptographer, can vouch for the fact that cryptography as a whole field is mathematically challenging and quite subtle. And many a time even we experts come up with protocols which on the face of it look secure, and it seems like there are no issues at all. But as soon as we start to dive deeper and try to work out the security proofs of the protocol and so on, we see that there are big security vulnerabilities which cannot be fixed. 
So I cannot stress enough that when it comes to crypto, it is very, very important to have rigorous proofs of correctness and security. Even small, tiny tweaks here and there which look completely harmless can completely break the whole system. So it is completely unreasonable to expect people or developers who have had no formal training in crypto or security to be able to implement these crypto protocols correctly and securely. And this, in fact, we feel is one of the biggest technical challenges in deploying MPC for real-world applications. Sridhar Vedantham: Interesting, so I've got a follow-up question to something that you both just spoke about. Obviously, the cryptographer brings in the whole thing about security and how to make secure protocols, and so on and so forth. What does the programming languages person or the ML person bring to the table in this scenario? Rahul Sharma: Yeah, so I think that's a question for me, since I work at the intersection of compilers and machine learning. If I put my developer hat on and someone tells me to implement the MPC protocols written in these papers, I will be scared to death. I'm pretty sure I will break some security thing here or there. So I think the only way to get secure systems is to not let programmers implement those secure systems. What we want to do is build compilers, which are automatic tools that translate programs from one language to another, so that programmers write their normal code without any security, like they are used to writing, and then the compiler does all the cryptography and generates the MPC protocols. So this is where a compiler person comes in: to make the system programmable by normal programmers. [Music] Sridhar Vedantham: OK, so let's be a little more specific about the work that both of you have actually been doing over the past few years. Could you talk a bit about that? 
Rahul Sharma: So, continuing on the compiler part of the story. First we built compilers in which developers can write C-like code, and we could automatically generate secure protocols from it. This gives a lot of flexibility, because C is a very expressive language and you can do all sorts of different computations. But then we realized that the machine learning people don't want to write in C. They want to write in their favorite machine learning frameworks like TensorFlow, PyTorch and ONNX. So what we did was build compilers which take machine learning models written in TensorFlow, PyTorch or ONNX and compile them directly to MPC protocols, and the compilers which we built have some good properties. First of all, they're accuracy-preserving, which means that if you run the insecure computation and get some accuracy, then the secure computation gives you the same accuracy. This was extremely important because machine learning people care for every ounce of accuracy. They can live with some computational overhead because of security, but if they lose accuracy, that means the user experience gets degraded and they lose revenue. That is just a no-go. So our compiler ensures that no accuracy is lost in doing the secure execution. Moreover, the compiler also has some formal guarantees, which means that even if the developer unintentionally or inadvertently does something wrong which could create a security leak, the compiler will just reject the program. Developers can now be confident that when they use our framework, if they have written something and it compiles, then it is secure. Divya Gupta: So as I think Sridhar already pointed out, this is a project which is a great collaboration between cryptographers and programming languages folks. We not only make advances on the programming languages front, but also on the cryptography side. 
So we make progress on all three challenges which Rahul mentioned before: efficiency, scalability and programmability. We not only make existing crypto out there more programmable and developer friendly, but we have developed super-duper efficient cryptographic protocols which are tailored to ML, like the secure machine learning inference task, and work for large machine learning benchmarks. Before our work, the prior work had three shortcomings, I would say. They were slow. They only did small machine learning benchmarks, and the accuracy of the secure implementations was lower than the original models. And we solved all three challenges. Our new protocols are at least 10 times faster than what existed out there. We run large ImageNet-scale benchmarks using our protocols. The ImageNet data set is a standard machine learning classification task where an image needs to be classified into one of a thousand classes, which is hard even for a human to do. And for this task we take state-of-the-art machine learning models and run them securely. These models are, again, at least 10 times larger than what the prior works handled securely. And finally, all our secure implementations match the accuracy of the original models, which is very important to ML folks. All of this could not have been possible without our framework, which is called CrypTFlow, which again would not have been possible without a deep collaboration between cryptographers and programming languages folks. So this, I think, summarizes well what we have achieved in the last few years with this collaboration. Sridhar Vedantham: That's fantastic. Rahul, you wanted to add to that? Rahul Sharma: I want to add a little bit about the collaboration aspect which Divya mentioned. This project was started by Nishanth Chandran, Divya, Aseem Rastogi and me at MSR India, and all of us come from very different backgrounds. 
Divya and Nishanth are cryptographers, I work at the intersection of machine learning and programming languages, and Aseem works at the intersection of programming languages and security. And since all of us came together, we could solve applications or scenarios with MPC much better because, given a scenario, we could figure out whether we should fix the compiler or fix the cryptography. Our meetings are generally sword fights- we would fight for hours over very, very simple design decisions- but the final design we came up with is something all of us are very happy with. And this wouldn't have been possible if we did not have our hard-working Research Fellows and the fantastic set of interns who worked on this project. Sridhar Vedantham: Fantastic, and I think that's a great testament to the power of interdisciplinary work. And I can totally buy what you said about sword fights during research meetings, because while I've not sat through research meetings myself, I have certainly attended research reviews, so I can completely identify with what you're saying from what I've seen myself. Alright, so there's one thing that I wanted to clarify for myself and, I think, for the benefit of a lot of people who will be listening. When you say things like the complexity decreases, we can run things faster, and the overheads are less, these concepts sound fairly abstract to people who are not familiar with this area of research. Could you put a more tangible face on it- when you say that you reduce overheads, is there a certain percentage, or can you give it in terms of time and so on? 
Divya Gupta: Right, so when we talk about the efficiency of our protocols, we measure things like end-to-end runtimes: how much time does it take for the whole function to run securely? And this depends on things like the amount of data being transferred in the messages exchanged between the different parties. Just to take an example from our latest paper, to appear at CCS this year: we built new protocols for the simple Millionaire’s Problem, which I described in the very beginning, and there we have almost 5X, five times, improvement in just the communication numbers. And this translates to runtimes as well. Now, this Millionaire’s protocol is a building block for our other protocols. In a machine learning task, let's say there is a neural network. Neural networks consist of linear layers, which look like matrix multiplications or convolutions, and also some nonlinear operators, which are, let's say, rectified linear units (or ReLU), MaxPool, etc. And in all of these nonlinear layers you have to do some kind of comparison on secret values, which essentially boils down to doing some kind of Millionaire’s Problem. So whatever improvements we got in the simplest setting of Millionaire’s translate to these more complicated functions as well. In fact, our improvements for the more complicated functions are much better than for Millionaire’s alone, and there we have almost 10 times improvement in the communication numbers. And when you are actually running these protocols over a network, communication is what matters the most, because compute is local: you can parallelize it, you can run it on heavy machines and so on, but communication is something you cannot essentially make go faster. So all our protocols have been handmade and tailored to the exact setting of the functions which occur in neural networks, and we improve the communication numbers, and hence the other parameters, the runtimes, as well.
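Divya's point above, that linear layers are cheap while the nonlinear comparison layers reduce to Millionaire's-style subprotocols, can be illustrated with a toy Python sketch of additive secret sharing, a standard MPC building block. This is a hedged illustration, not CryptFlow's actual protocol: the modulus and values are made up, and both "parties" run in a single process.

```python
import secrets

P = 2**61 - 1  # a public prime modulus; all arithmetic is done mod P

def share(x):
    """Split a secret x into two additive shares with x = s0 + s1 (mod P)."""
    s0 = secrets.randbelow(P)
    s1 = (x - s0) % P
    return s0, s1

def reconstruct(s0, s1):
    """Combine the two shares to recover the secret."""
    return (s0 + s1) % P

# Secret inputs (e.g., activations in a neural network), secret-shared
# between two parties; each share on its own is uniformly random.
x, y = 42, 57
x0, x1 = share(x)  # party 0 holds x0, party 1 holds x1
y0, y1 = share(y)

# Linear operations are local: each party combines only its own shares,
# with no communication, and the result is a valid sharing of 3*x + y.
z0 = (3 * x0 + y0) % P
z1 = (3 * x1 + y1) % P
assert reconstruct(z0, z1) == 3 * x + y

# Comparisons (ReLU, MaxPool, the Millionaire's Problem) are NOT local:
# deciding whether x < y from shares requires an interactive protocol,
# which is exactly the expensive step the protocols discussed here optimize.
```

The assertion holds because the random masks cancel out when the shares are recombined; no single share reveals anything about the underlying secret, which is why communication, not local compute, dominates the cost.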
Sridhar Vedantham: OK, thanks for that. It certainly makes things a little clearer to me, because a lot of this stuff just sounds very abstract unless I hear some actual numbers or some real instances where these things impact computation and the actual time taken to run certain computations.

Divya Gupta: Right, so just to give you another example: the task of ImageNet classification, which I talked about? We took state-of-the-art models there, and our end-to-end inference runtime was under a minute. So it doesn't run in seconds, but it definitely runs in under a minute, so it is still real, I would say.

Sridhar Vedantham: Right, Divya, thanks. That certainly puts a much more tangible spin on it, which I can identify with. Are there any real-life scenarios in which you see MPC bringing benefits to people or to industry, right now or in the near term?

Rahul Sharma: So Sridhar, I believe that MPC has the potential to change the way we think about healthcare. Think of, for example, a hospital which has trained a model that, given a patient image, can tell whether the patient has COVID or pneumonia, or whether the patient is alright. Now, the hospital can post this model as a web service, and what I can do is go to my favorite pathological lab, get a chest X-Ray done, and then do a multi-party computation with the hospital. My sensitive data, my chest X-Ray images, will not be revealed at all to the hospital, and I will get a prediction which can tell me how to go about the next steps. We have actually run this task with MPC protocols, and it runs in a matter of a minute or two, a latency which is quite acceptable in real life. Other applications we have looked at include detecting diabetic retinopathy from retina scans.
We have also run machine learning algorithms which give you state-of-the-art accuracies in detecting about 14 chest diseases from X-Ray images, and the most recent work we have done is in tumor segmentation. There, the doctor is given a 3D image and has to mark the boundary of the tumor in this 3D image, so it is like a volume the doctor is marking. Now, this is a very intensive process and takes a lot of time, and one can think of training a machine learning model which helps the doctor do this task: the model marks some boundary, and then the doctor can just fine-tune the boundary or make minor modifications to it and approve it. We already have machine learning algorithms which can do this, but then again, patients will be wary of giving their 3D scans to the model owners. So what MPC can do, again, is let them do this task securely, without revealing the 3D scan to the organization which owns the machine learning model, and this we can do in a couple of hours. To put things in perspective, doctors usually get to a scan in a matter of a couple of days, so again, this latency is acceptable.

Divya Gupta: Another domain of interest for MPC is potentially finance. We all know that banks are highly secretive entities, for the right reasons, and they cannot and do not share their data even with other banks. This makes many tasks quite challenging, such as detecting fraudulent transactions and detecting money laundering, as the only data available is the bank’s own data and nothing else. What MPC can enable is that the banks can pool their data and do fraud detection and detection of money laundering together on all the banks’ data, while at the same time no bank’s data would be revealed in the clear to any other bank. So all this can happen securely, and you can still reap the benefits of pooling the data of all the banks.
And in fact, many of these schemes, like money laundering, actually work by siphoning money through multiple banks, so you indeed need the data of all the banks. What I'm trying to get at is that the power of MPC is very general: as long as you and I have some secret data which we do not want to reveal to each other, but at the same time we want to pool this data together and compute some function jointly so that it benefits us both, MPC can be used.

Sridhar Vedantham: So this sounds fantastic, and it also sounds like there's a huge number of areas in which you can actually deploy and implement MPC. And I guess it's being made much easier now that you guys have come up with something that makes it usable, which it wasn't really earlier. So, is the research work that you have done available to people outside of Microsoft? Can the tech community leverage and use this work?

Divya Gupta: Yes, fortunately, all of our protocols and work have been published at top security conferences and are available online, and all the code is also available on GitHub. So if you have a secure inference scenario, you can actually go out there, try this code, and code up your application.

Sridhar Vedantham: Excellent, so what we'll also do is provide links to these resources in the transcript of this podcast itself. Now, where do you guys plan to go with this in the future, and what are your research directions for this particular area?

Rahul Sharma: So, going back to machine learning. As I said, there are two phases: a training phase and an inference phase. We have been talking mainly about the inference phase till now, because that is what we have focused on in our work, but the training phase is also very important.
Suppose there are multiple data holders, for example, multiple hospitals, and they want to pool their data together to train a joint model, but there are legal regulations which prohibit them from sharing data indiscriminately with each other. Then they can use MPC to train a model together. I've heard bizarre stories of nurses sitting down with permanent markers to redact documents, and of legal agreements that take years to get through; MPC provides a technological solution to this multi-party training.

Divya Gupta: So, we live in a world where security is a term which gets thrown around a lot without any solid backing, and to make MPC real, we feel that we have to educate people and businesses about the power of MPC and the security guarantees it can provide. As an example, take encryption. I think most people, businesses and even the law understand what encryption is and what guarantees it provides, and as a result, most real-world applications use end-to-end encryption. But suppose I ask a person the following: there are two parties who each have a secret input, and they want to compute some function by pooling their inputs; how do I do this? The most likely answer I would get is that the only solution out there is to share the data under some legal NDAs. Most people simply don't know that something like MPC exists. So I'm not saying that MPC will be as omnipresent as encryption, but with this education we can put MPC on the table, and people and businesses can think of MPC as a potential solution to security problems. In fact, as we talk to more and more people and educate them about MPC, new scenarios are discovered which MPC can enable.
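The general pattern Divya describes, several parties pooling secret inputs so that only a jointly computed function is learned, can be sketched in a few lines of Python as an n-party secure sum, the same shape as banks pooling transaction totals. This is a hedged, in-process simulation with made-up numbers; a real deployment would run each party on its own machine and exchange shares over secure channels.

```python
import secrets

P = 2**61 - 1  # public prime modulus agreed on by all parties

def share_among(value, n):
    """Split `value` into n additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Hypothetical per-bank figures that no bank is willing to reveal in the clear.
bank_totals = [1_200_000, 850_000, 2_400_000]
n = len(bank_totals)

# Each bank generates one sharing of its input and sends share j to bank j.
sharings = [share_among(v, n) for v in bank_totals]
received = [[sharings[i][j] for i in range(n)] for j in range(n)]

# Each bank publishes only the sum of the shares it received; every share
# on its own is uniformly random, so individual inputs stay hidden.
partials = [sum(shares) % P for shares in received]

# Anyone can now recover the joint total, and nothing more.
joint_total = sum(partials) % P
assert joint_total == sum(bank_totals) % P
```

The design point is that each published partial is, on its own, uniformly distributed; only the final combination carries information, and that information is exactly the joint total the parties agreed to compute.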
And moreover, with regulations like GDPR, which are aimed at preserving privacy, and bigger and bigger ML models, which need more and more data for better accuracy, we feel that MPC is a technology that can resolve this tension.

Sridhar Vedantham: Excellent, this has been a really eye-opening conversation for me, and I hope the people who listen to this podcast will learn as much as I have. Thank you so much, Divya and Rahul. Once again, I know it's really late, and I totally appreciate your time.

Divya Gupta: Thanks Sridhar, thanks a lot for having us here.

Rahul Sharma: Thanks Sridhar, this was fun.

[Music Ends]

Between the Data - NVivo Podcast Series
Episode 7: The Place of the Technology in Online Research: Interview with Dr. Nancy Baym - NVivo Virtual Conference Keynote Speaker

Between the Data - NVivo Podcast Series

Play Episode Listen Later Aug 17, 2020 32:16


In this podcast episode, Dr. Baym, Senior Principal Researcher at Microsoft Research Lab, New England, gives us a preview of her upcoming keynote address at the NVivo Virtual Conference on September 23, 2020. Register for the conference here: NVivo Virtual Conference

Sorgatron Media Master Feed
AwesomeCast 474: Redmond Playground

Sorgatron Media Master Feed

Play Episode Listen Later Dec 10, 2019 55:17


This week’s episode brought to you by Slice on Broadway and Sidekick Media Services. Chilla is back from Microsoft's Redmond Playground to catch up with the tech news! Chilla talks about his visit to the Microsoft Research Lab, using HoloLens 2, Microsoft's custom energy water, and Xbox in a museum. Sorg stayed at home and watched the finale of Silicon Valley on HBO. Configuring the new Mac Pro takes our breath away... Sorg is eyeing the lower-end Blackmagic ATEM Mini for mobile and studio video streaming implementations. Google is ending support for Google Glass Explorer Edition. Of which we have one still. Is this a result of the Google founders leaving? Turn your iMac into a workstation. Microsoft Office coming to Linux. Coca-Cola bottles are using OLEDs to light up Star Wars lightsabers. But not in your country, probably. Zelda is coming to Mario Maker. Chilla is doing some serious server setup for Minecraft. That may help him in his day job! Plex is launching free TV and movies. After the show remember to: Eat at Slice on Broadway (@Pgh_Slice) if you are in the Pittsburgh area! It is Awesome! (sliceonbroadway.com) Want to be part of our studio audience? Hit us up at awesomecast@sorgatronmedia.com and we’ll save you a seat! Join our AwesomeCast Facebook Group to see what we’re sharing and to join the discussion! Follow these awesome people on Twitter: John Chichilla (@Chilla) and Sorg (@Sorgatron) You can support the show at Patreon.com/awesomecast! Remember to check out our friends at The 405 Media (@The405Radio) and Postindustrial Audio (@post_industry), who replay the show on their stream throughout the week! Also, check out sorgatronmedia.com and awesomecast.com for more entertainment, and view us livestreaming Tuesdays around 7:00 PM EST

AwesomeCast: Tech and Gadget Talk
Redmond Playground | AwesomeCast 474

AwesomeCast: Tech and Gadget Talk

Play Episode Listen Later Dec 10, 2019 55:17


This week’s episode brought to you by Slice on Broadway and Sidekick Media Services. Chilla is back from Microsoft's Redmond Playground to catch up with the tech news! Chilla talks about his visit to the Microsoft Research Lab, using HoloLens 2, Microsoft's custom energy water, and Xbox in a museum. Sorg stayed at home and watched the finale of Silicon Valley on HBO. Configuring the new Mac Pro takes our breath away... Sorg is eyeing the lower-end Blackmagic ATEM Mini for mobile and studio video streaming implementations. Google is ending support for Google Glass Explorer Edition. Of which we have one still. Is this a result of the Google founders leaving? Turn your iMac into a workstation. Microsoft Office coming to Linux. Coca-Cola bottles are using OLEDs to light up Star Wars lightsabers. But not in your country, probably. Zelda is coming to Mario Maker. Chilla is doing some serious server setup for Minecraft. That may help him in his day job! Plex is launching free TV and movies. After the show remember to: Eat at Slice on Broadway (@Pgh_Slice) if you are in the Pittsburgh area! It is Awesome! (sliceonbroadway.com) Want to be part of our studio audience? Hit us up at awesomecast@sorgatronmedia.com and we’ll save you a seat! Join our AwesomeCast Facebook Group to see what we’re sharing and to join the discussion! Follow these awesome people on Twitter: John Chichilla (@Chilla) and Sorg (@Sorgatron) You can support the show at Patreon.com/awesomecast! Remember to check out our friends at The 405 Media (@The405Radio) and Postindustrial Audio (@post_industry), who replay the show on their stream throughout the week! Also, check out sorgatronmedia.com and awesomecast.com for more entertainment, and view us livestreaming Tuesdays around 7:00 PM EST

Microsoft Research Podcast
007r - Functional Programming Languages and the Pursuit of Laziness with Dr. Simon Peyton Jones

Microsoft Research Podcast

Play Episode Listen Later Jul 31, 2019


This episode first aired in January, 2018. When we look at a skyscraper or a suspension bridge, a simple search engine box on a screen looks tiny by comparison. But Dr. Simon Peyton Jones would like to remind us that computer programs, with hundreds of millions of lines of code, are actually among the largest structures human beings have ever built. A principal researcher at the Microsoft Research Lab in Cambridge, England, co-developer of the programming language Haskell, and a Fellow of Britain’s Royal Society, Simon Peyton Jones has dedicated his life to this very particular kind of construction work. Today, Dr. Peyton Jones shares his passion for functional programming research, reveals how a desire to help other researchers write and present better turned him into an unlikely YouTube star, and explains why, at least in the world of programming languages, purity is embarrassing, laziness is cool, and success should be avoided at all costs. https://www.microsoft.com/research

Microsoft Research Podcast
056 (rerun) - Functional Programming Languages and the Pursuit of Laziness with Dr. Simon Peyton Jones

Microsoft Research Podcast

Play Episode Listen Later Dec 26, 2018


This episode first aired in January, 2018. When we look at a skyscraper or a suspension bridge, a simple search engine box on a screen looks tiny by comparison. But Dr. Simon Peyton Jones would like to remind us that computer programs, with hundreds of millions of lines of code, are actually among the largest structures human beings have ever built. A principal researcher at the Microsoft Research Lab in Cambridge, England, co-developer of the programming language Haskell, and a Fellow of Britain’s Royal Society, Simon Peyton Jones has dedicated his life to this very particular kind of construction work. Today, Dr. Peyton Jones shares his passion for functional programming research, reveals how a desire to help other researchers write and present better turned him into an unlikely YouTube star, and explains why, at least in the world of programming languages, purity is embarrassing, laziness is cool, and success should be avoided at all costs.

Sci on the Fly
Harnessing the Data Revolution for Food, Energy and Water Systems

Sci on the Fly

Play Episode Listen Later Aug 15, 2018 14:27


Ryan Locicero, environmental engineer and AAAS Science & Technology Policy Fellow at the National Science Foundation, speaks with Ranveer Chandra at the Microsoft Research Lab. As a principal researcher, Chandra leads an Incubation on IoT Applications. His research has shipped as part of multiple Microsoft products, including VirtualWiFi in Windows 7 onwards, low power Wi-Fi in Windows 8, Energy Profiler in Visual Studio, Software Defined Batteries in Windows 10, and the Wireless Controller Protocol in XBOX One. He has published more than 80 papers, and has been granted more than 85 patents by the USPTO. His research has been cited by the media including The Economist, MIT Technology Review, BBC, Scientific American, New York Times, and the WSJ. He also leads the battery research project and the white space networking projects. Here he discusses Microsoft’s FarmBeats project, which is building several unique solutions to enable data-driven farming, including low-cost sensors, drones, machine vision, and machine learning algorithms.   This podcast does not necessarily reflect the views of AAAS, its Council, Board of Directors, officers, or members. AAAS is not responsible for the accuracy of this material. AAAS has made this material available as a public service, but this does not constitute endorsement by the association.

Microsoft Research Podcast
021 - AI, ML and the Reasoning Machine with Dr. Geoff Gordon

Microsoft Research Podcast

Play Episode Listen Later Apr 25, 2018


Teaching computers to read, think and communicate like humans is a daunting task, but it’s one that Dr. Geoff Gordon embraces with enthusiasm and optimism. Moving from an academic role at Carnegie Mellon University, to a new role as Research Director of the Microsoft Research Lab in Montreal, Dr. Gordon embodies the current trend toward the partnership between academia and industry as we enter what many believe will be a new era of progress in machine learning and artificial intelligence. Today, Dr. Gordon gives us a brief history of AI, including his assessment of why we might see a break in the weather-pattern of AI winters, talks about how collaboration is essential to innovation in machine learning, shares his vision of the mindset it takes to tackle the biggest questions in AI, and reveals his life-long quest to make computers less… well, less computer-like.

Microsoft Research Podcast
007 - Functional Programming Languages and the Pursuit of Laziness with Dr. Simon Peyton Jones

Microsoft Research Podcast

Play Episode Listen Later Jan 10, 2018


When we look at a skyscraper or a suspension bridge, a simple search engine box on a screen looks tiny by comparison. But Dr. Simon Peyton Jones would like to remind us that computer programs, with hundreds of millions of lines of code, are actually among the largest structures human beings have ever built. A principal researcher at the Microsoft Research Lab in Cambridge, England, co-developer of the programming language Haskell, and a Fellow of Britain’s Royal Society, Simon Peyton Jones has dedicated his life to this very particular kind of construction work. Today, Dr. Peyton Jones shares his passion for functional programming research, reveals how a desire to help other researchers write and present better turned him into an unlikely YouTube star, and explains why, at least in the world of programming languages, purity is embarrassing, laziness is cool, and success should be avoided at all costs.

MIT Comparative Media Studies/Writing
The Mediated Construction of Reality - From Berger and Luckmann to Norbert Elias

MIT Comparative Media Studies/Writing

Play Episode Listen Later Sep 28, 2017 88:53


Nick Couldry outlined the project of his recent book, The Mediated Construction of Reality (Polity October 2016, co-written with Andreas Hepp). The book offers a critical reevaluation and rearticulation of the social constructivist ambitions of Berger and Luckmann’s 1966 book The Social Construction of Reality while radically rethinking the implications of this for a world saturated not just with digital media, but with data processes. Couldry outlined how a materialist phenomenology can draw not just on traditional phenomenology, but on the social theory of Norbert Elias, particularly his concept of figurations, to address the challenges of social analysis in the face of datafication. Elias, Couldry argued, is a particularly important theorist on whom to draw in making social constructivism ready to face the deep embedding of the social world with digital technologies, and more than that, to outline the challenges for social order of such a world. More broadly, Couldry argued for a reengagement of media theory with the broader tradition of social theory in the era of Big Data, in the face of a radical expansion of what media are and how mediation is embedded in everyday social orders. Nick Couldry is a sociologist of media and culture. He is Professor of Media Communications and Social Theory at the London School of Economics and Political Science. He is currently a Visiting Researcher at Microsoft Research Lab, and during 2017-2018 a Faculty Associate at the Berkman Klein Center for Internet and Society, Harvard University. He is the author or editor of twelve books including most recently The Mediated Construction of Reality (with Andreas Hepp, Polity, 2016), Ethics of Media (Palgrave, 2013, co-edited with Mirca Madianou and Amit Pinchevski), Media, Society, World: Social Theory and Digital Media Practice (Polity, 2012) and Why Voice Matters: Culture and Politics After Neoliberalism (Sage, 2010).