Professor Anima Anandkumar is one of the world's leading scientists in the field of AI & ML, with more than 30k citations, an h-index of 80, and numerous landmark papers such as FourCastNet, which received worldwide coverage for demonstrating how AI can be used to speed up weather prediction. She is the Bren Professor at Caltech, leading a large team of PhD students and post-docs in her AI+Science lab, and has extensive experience in industry, having previously been the Senior Director of AI Research at Nvidia. In this episode I speak to her about her background in academia and industry, her journey into machine learning, and the importance of AI for science. We discuss the integration of AI and scientific research, the potential of AI in weather modeling, and the challenges of applying AI to other areas of science. Prof. Anandkumar shares examples of successful AI applications in science and explains the concept of AI + science. We also touch on the skepticism surrounding machine learning in physics and the need for data-driven approaches. The conversation explores the potential of AI in science and engineering, specifically in the context of physics-based simulations. Prof. Anandkumar discusses the concept of neural operators, highlighting their advantages, such as the ability to handle multiple domains and resolutions, and their potential to revolutionize traditional simulation methods. She also emphasizes the importance of integrating AI with scientific knowledge and with traditional numerical solvers, and the need for interdisciplinary collaboration between ML specialists and domain experts. Finally, she provides advice for PhD students and highlights the significance of attending smaller workshops and conferences to stay updated on emerging ideas in the field.

Links:
LinkedIn: https://www.linkedin.com/in/anima-anandkumar/
TED Video: https://www.youtube.com/watch?v=6bl5XZ8kOzI
FourCastNet: https://arxiv.org/abs/2202.11214
Google Scholar: https://scholar.google.com/citations?hl=en&user=bEcLezcAAAAJ
Lab page: http://tensorlab.cms.caltech.edu/users/anima/

Takeaways
- Anima's background includes both academia and industry, and she sees value in bridging the gap between the two.
- AI for science is the integration of AI and scientific research, with the goal of enhancing and accelerating scientific developments.
- AI has shown promise in weather modeling, with AI-based weather models outperforming traditional numerical models in terms of speed and accuracy.
- The skepticism surrounding machine learning in physics can be addressed by verifying the accuracy of AI models against known physics principles.
- Applying AI to other areas of science, such as aircraft design and fluid dynamics, presents challenges in terms of data availability and computational cost.
- Neural operators have the potential to revolutionize traditional simulation methods in science and engineering.
- Integrating AI with scientific knowledge is crucial for the development of effective AI models in the field of physics-based simulations.
- Interdisciplinary collaboration between ML specialists and domain experts is essential for advancing AI in science and engineering.
- The future of AI in science and engineering lies in the integration of various modalities, such as text, observational data, and physical understanding.

Chapters
00:00 Introduction and Overview
04:29 Professor Anima Anandkumar's Career Journey
09:14 Moving to the US for PhD and Transitioning to Industry
13:00 Academia vs Industry: Personal Choices and Opportunities
17:49 Defining AI for Science and Its Importance
22:05 AI's Promise in Enhancing Scientific Discovery
28:18 The Success of AI-Based Wea
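As an aside for readers curious about the neural operators mentioned above: the snippet below is a minimal, illustrative sketch of the spectral-convolution idea at the heart of Fourier neural operators, not the actual FourCastNet or neural-operator code discussed in the episode. The function name, the random stand-in weights, and the test signal are all invented for this example; the point is only that the same mode-wise weights can be applied to a function sampled at different resolutions.

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """One linear spectral-convolution pass (the core idea behind a Fourier-neural-operator layer):
    go to Fourier space, scale the lowest n_modes modes by the given weights, drop the rest,
    and transform back. norm='forward' keeps mode coefficients independent of the grid size."""
    u_hat = np.fft.rfft(u, norm="forward")
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]
    return np.fft.irfft(out_hat, n=len(u), norm="forward")

rng = np.random.default_rng(0)
n_modes = 8
weights = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)  # stand-in for learned weights

# Apply the same "learned" weights to the same function sampled at two resolutions:
# the operator acts on the underlying function, not on a fixed-size grid.
for n_points in (64, 256):
    x = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    u = np.sin(3.0 * x) + 0.5 * np.cos(5.0 * x)
    v = spectral_conv_1d(u, weights, n_modes)
    print(f"{n_points:>4} grid points -> output at x=0: {v[0]:+.6f}")
```

The two printed values agree up to floating-point error, which is the resolution-invariance property the episode highlights. A full neural-operator layer adds a pointwise linear path and a nonlinearity and is trained end to end; this sketch only shows why learned spectral weights can transfer across grid resolutions.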
Anima Anandkumar is a Bren Professor at Caltech. Her work developing novel AI algorithms enables and accelerates scientific applications of AI, including scientific simulations, weather forecasting, autonomous drone flights, and drug design. She has received best paper awards at venues such as NeurIPS and the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research. She holds degrees from IIT Madras and Cornell University and conducted postdoctoral research at MIT. She was previously principal scientist at Amazon Web Services and senior director at Nvidia.

Anima's favorite book: Hyperspace (Author: Michio Kaku)

(00:00) Introduction
(00:10) The Impact of AI on Science
(02:25) AI Disrupting Physics
(03:02) Challenges in Fluid Dynamics
(06:21) Achieving Orders of Magnitude Speedup
(10:43) AI Discovering New Laws of Physics
(11:45) Complexity of Fluid Dynamics
(15:54) Simulating Physical Phenomena with AI
(22:23) AI for Drones in Strong Winds
(25:16) Optimizing Experiments with AI
(28:19) AI in Quantum Chemistry
(32:38) Technological Breakthroughs in AI
(33:23) Rapid Fire Round

--------
Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi
Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research. Anima Anandkumar, Bren Professor at Caltech and senior director of AI research at NVIDIA, was recently invited to speak at the President's Council of Advisors on Science and Technology. At the talk, Anandkumar says that generative AI was described as “an inflection point in our lives,” with discussions swirling around how to “harness it to benefit society and humanity through scientific applications.” On the latest episode of NVIDIA's AI Podcast, host Noah Kravitz spoke with Anandkumar on generative AI's potential to make splashes in the scientific community. It can, for example, be fed DNA, RNA, viral and bacterial data to craft a model that understands the language of genomes. That model can help predict dangerous coronavirus variants to accelerate drug and vaccine research. Generative AI can also predict extreme weather events like hurricanes or heat waves. Even with an AI boost, trying to predict natural events is challenging because of the sheer number of variables and unknowns. However, Anandkumar explains that it's not just a matter of upsizing language models or adding compute power — it's also about fine-tuning and setting the right parameters. “Those are the aspects we're working on at NVIDIA and Caltech, in collaboration with many other organizations, to say, ‘How do we capture the multitude of scales present in the natural world?'” she said. “With the limited data we have, can we hope to extrapolate to finer scales? Can we hope to embed the right constraints and come up with physically valid predictions that make a big impact?” Anandkumar adds that to ensure AI models are responsibly and safely used, existing laws must be strengthened to prevent dangerous downstream applications. She also talks about the AI boom, which is transforming the role of humans across industries, and problems yet to be solved. “This is the research advice I give to everyone: the most important thing is the question, not the answer,” she said.
In this episode of ACM ByteCast, Rashmi Mohan hosts Anima Anandkumar, a Bren Professor of Computing at California Institute of Technology (the youngest named chair professor at Caltech) and the Senior Director of AI Research at NVIDIA, where she leads a group developing the next generation of AI algorithms. Her work has spanned healthcare, robotics, and climate change modeling. She is the recipient of a Guggenheim Fellowship and an NSF Career Award, and was most recently named an ACM Fellow, among many other prestigious honors and recognitions. Her work has been extensively covered on PBS, in Wired magazine, MIT Tech Review, YourStory, and Forbes, with a focus on using AI for good. Anima talks about her journey, growing up in a house where computer science was a way of life and family members who served as strong role models. She shares her path in education and research at the highly selective IIT-Madras, the importance of a strong background in math in her computing work, and some of the breakthrough moments in her career, including work on using tensor algorithms to process large datasets. Anima spends some time discussing topic modeling and reinforcement learning, what drives her interests, the possibilities of interdisciplinary collaboration, and the promise and challenges brought about by the age of generative AI.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences at Caltech and Sr Director of AI Research at NVIDIA. In our conversation, we take a broad look at the emerging field of AI for Science, focusing on both practical applications and longer-term research areas. We discuss the latest developments in the area of protein folding, and how much it has evolved since we first discussed it on the podcast in 2018, the impact of generative models and Stable Diffusion on the space, and the application of neural operators. We also explore the ways in which prediction models like weather models could be improved, how foundation models are helping to drive innovation, and finally, we dig into MineDojo, a new framework built on the popular Minecraft game for embodied agent research, which won a 2022 Outstanding Paper Award at NeurIPS. The complete show notes for this episode can be found at twimlai.com/go/614
When you think of the future of clean energy, wind and solar might be the first things that come to mind. But when the wind doesn't blow and the sun doesn't shine, the need for alternative sources of power becomes apparent. From advanced geothermal to nuclear fusion, up-and-coming advancements may deliver a future of abundant, clean energy. One of the most ambitious ideas is space-based solar: orbiting solar panels that can beam energy to the Earth from space. Is this a viable energy solution ... or a sci-fi pipe dream? To find out more, I'm joined by Ali Hajimiri. Ali is the Bren Professor of Electrical Engineering and Medical Engineering at the California Institute of Technology, as well as Co-Director of the Space-Based Solar Power Project at Caltech.
Ali Hajimiri is the Bren Professor of Electrical Engineering and Medical Engineering at the California Institute of Technology. He is also co-director of the Space Solar Power Project, which is developing technology capable of generating solar power in space and beaming it back to Earth. Hajimiri and colleagues are designing solar arrays composed of hundreds of small photovoltaic tiles that would be linked together to form larger modules, and then those modules — flying together in formation like a school of fish — would form a hexagonal power station in space. These flexible arrays would be rolled up when launched and unfurl at their orbital destination.

In this inaugural episode of Faster, Please! — The Podcast, Ali tells me about how space-based solar works, what problem it solves, and how long we'll have to wait before we see orbiting power stations in the sky. For more, check out my recent 5QQ chat with Ali. Below is a lightly edited transcript of our conversation.

Pethokoukis: Space-based solar — putting solar panels in space and beaming the energy to Earth — seems like a beautiful, elegant solution. Why is it a good idea? What problem is it solving?

Hajimiri: So the primary problem that it solves is being able to get around the days and nights, the cycles of the weather, the cloudy days, and all those things — and having dispatchable power where you need it, when you need it, and as much as you need.

An advantage over ground-based solar?

Correct. And the other benefit of it is that essentially you can have these systems in space for a long time, and you can route it the way you want. You can actually distribute the power; you can break it up into smaller pieces. You can say, “I want to send 20 percent to New York, 30 percent to LA, and 40 percent to, I don't know, Seattle.”

Wouldn't these panels sometimes be in the darkness, on the night side of the Earth? So how would they work?

It depends on which orbit you put it in. If you put them in geosynchronous orbit (or something near geosynchronous) you are basically in the sun for most of the time, except for 20 minutes on the equinoxes. Most of the time you're not eclipsed, because you're so far away that the shadow of Earth is so small. And because of the inclination of the Earth, because it's at an angle, you would get eclipsed for 20 minutes on each one of those.

And as it's transferring power down, it doesn't have to be directly over the collection station, right? It can be at an angle?

It doesn't. That's the beauty of it. Because it's a very large array, it redirects the energy. You can electronically steer it. It does not even need mechanical steering. So you can actually create a focal point of energy where you need, where your recovery of energy occurs. And you can move that very rapidly — on the scales of nanoseconds, extremely fast — from one place to another.

Does it require new technology to distribute that power? Or is that basically using current technology?

On the ground, we have what we call “rectennas,” which is basically rectifying antennas. These are another array of antennas that are very plain, very flat. I mean, if this were not radio, I would've had demonstrations of these things to show you how they look. But these are like thin sheets of material, like printed circuit boards that go in your computers and things of that sort, that sit on the ground. They collect the energy, they convert it to DC power, and then that's converted to AC.
And then at that point, you can plug it in to connect to your network — essentially to your distribution line, the same power distribution line that you use. You can even envision putting this next to photovoltaic solar [panels] that are out there, or any other kind of power plant. It could be any kind of power plant, and you just connect to it and add and augment the power that you generate with these.

So you can basically bolt this onto the existing power system?

Yes. I mean, once you are on the ground station, once you get past the rectenna and the conversion to AC, then that's basically compatible with all the other AC network.

Solar power is becoming cheaper, and the land area we would need to cover with solar panels to power the whole Earth is smaller than you'd think. But traditional solar relies on storage at night when the sun isn't shining. But what you're suggesting wouldn't be reliant on batteries. Is that right?

What we do allows you to send the power where you need at the time you need — and you can even break it up into different proportions. But the other thing that it does is that, since you have it 24/7, pretty much you don't need the storage, which is a big challenge. The other thing is that there are places that don't have the power infrastructure. A good analogy to this is cell phones versus landlines. Thirty years ago, there were places in Africa that didn't have landlines. In Sub-Saharan Africa today, there are these same places that still don't have landlines, but they have leapfrogged to cell phones. So this way, you can actually get to places that don't have power. You can think about the Arctic Circle — you can think about a lot of places, remote islands and things of that sort — that may not have power infrastructure. And this way you can enable it when you need to have the power over there.

This is not a new idea. It's an idea from about 80 years ago that you're attempting to turn into reality. I wonder if you could spend a minute or two talking about what you're doing.

It is an idea that I think the earliest rendition, that I know of, is in a short story by Asimov, as many ideas are. But you know, what's different is that the technology didn't exist for doing these kinds of things in space. I mean, it sounds like a good idea, but it's also a very challenging idea in many different ways. One is that, when you put things in space, things are expensive — you pay dollars per gram. That's extremely expensive for things that you put in orbit. So one of the key parts of making this happen is to make it lightweight. The other thing is that these array elements, making it with large arrays, were not very practical up until the point where we are in integrated circuits — the same chips that go into our computers and phones. The same technology is now what we are using to make these incredibly large arrays that are very lightweight, because these are very small and lightweight. And then now, on top of it, we are making them flexible, because the way to deploy something that's of that magnitude is to roll it, then deploy it, and then unroll it. You can think about this like a sheet. These are like sails that you open up in space. Now, the technology to enable that integrated circuit, the packaging and all those things, did not really exist until recently. And that's why we came up with a new architecture for doing it, and that allows us to do the original renditions of this idea. The thinking was that we have the solar panels, and we aggregate all of the power.
We have this giant antenna that points to Earth and then sends it. And in that case, you would be pointing to one direction, and you couldn't move it around because it was mechanically pointed. And if you wanted to reorient it, you have to mechanically reorient that antenna and point it in a different direction. We are doing it all electronically. So we have this very thin, very flat sheet that transmits the energy. Because of the coherent addition of all these billions and billions of sources — it's like an army of ants.

So a swarm? A solar swarm?

Exactly, exactly. So we've gone from the old mindset, which was what I describe as a big elephant, as opposed to an army of ants. I mean, each one of them is capable of doing different things, but because of the swarm nature, you can actually make it very lightweight and spread out.

How old is the project that you're working on at Caltech?

We've been working on this for close to eight years now — seven or eight years actively. We've been working on the power transfer part of it — the part that I'd been working on even before this project, which is what led to this project — for like 10 or 12 years. Wireless power transfer for both terrestrial, as well as space-based applications.

And the power being transferred is converted from sunlight into lasers? Microwaves? What?

It's microwaves. It's radio frequencies, essentially microwaves. Then you transmit it, and then you recover that on the ground.

Whenever I hear about any space project, I always think, “Well, was this possible before SpaceX? And is the reason we're talking about it because of that decline in launch costs?” Does your project depend on that, or is it just a fantastic enabler of it?

I would say it's one of the four or five enablers that converged to make this closer to something that can actually be done. Definitely, SpaceX is a catalyst in lowering the barrier for space enterprises — anything that you want to do, non-governmental stuff, smaller projects — SpaceX and the like. I mean, there are other places like Blue Origin, things like that. So people are trying to do that. They are trying to level the playing field so that more entrepreneurs can get into it. Now it can be in academia, industry, or anywhere else. And that plays a role. And again, there are all these other technologies and architectural changes that also enable us. So I would say that's definitely one of the four or five catalysts that had to come together to make this happen.

I've seen a video of you describing how there are small wafers that add up into bigger panels which are arranged into this giant array. Each one would be like a power plant in space. How big would each of those be?

Yeah, that's a good way to think about it. Each one of these power plants, you can think about them on the order of a kilometer by kilometer, or about a mile by a mile. So that is like a square mile or square kilometer. Something in that range. It depends on the orbit you choose and the size of the ground station. There's a little bit of a tradeoff. You can make it larger in space and smaller on the ground, or smaller in space and larger on the ground. So there's that tradeoff you can play with. But yeah, it's about a square kilometer or square mile in space, each one of them.

And how much power could that theoretically generate back on Earth?

So somewhere between like several hundred megawatts to a gigawatt, depending on the angles and things like that.
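(A quick, hedged back-of-envelope on the power figure above: the roughly 1 km by 1 km array size is taken from the conversation, the solar constant is a standard value, and the end-to-end efficiencies are purely illustrative assumptions rather than numbers Hajimiri gives.)

```python
# Back-of-envelope: power delivered to the ground by a ~1 km^2 orbital solar array.
# The array size comes from the interview; the efficiency range is an assumption for illustration.

SOLAR_CONSTANT_W_PER_M2 = 1361      # solar irradiance above the atmosphere, W per square meter
ARRAY_AREA_M2 = 1_000_000           # 1 km x 1 km, as discussed in the interview

incident_power_w = SOLAR_CONSTANT_W_PER_M2 * ARRAY_AREA_M2   # about 1.36 GW hitting the array

# Assumed end-to-end efficiencies (PV conversion x RF conversion x transmission x rectenna).
for efficiency in (0.10, 0.20, 0.30):
    delivered_mw = incident_power_w * efficiency / 1e6
    print(f"assumed end-to-end efficiency {efficiency:.0%}: ~{delivered_mw:,.0f} MW delivered")
```

Under those assumed efficiencies the delivered power lands in the few-hundred-megawatt range, consistent with the lower end of the quote above; the gigawatt end would require a larger array or a higher overall efficiency.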
It's a substantial amount of power.

How would that compare to a nuclear reactor?

It would be comparable. And it can be even higher than that in some cases, depending. The other interesting thing I should say about comparing to these other kinds of generators is that, since it's a modular system — this is actually a formation flying of satellites; each one of the modules is about 20 to 60 meters, depending on different designs for different orbits; they are formation flying in close proximity to each other — and this means that if one of them fails, you can actually replace it without having to replace the whole thing. So it's very modular. You can actually have robustness because of that.

I think when they had to repair the Hubble Space Telescope, it was a pretty big deal.

Yes.

And I'd hate to think it would be as involved with fixing each of these panels. All we'd be doing is space walks.

Exactly. That's an excellent point because the way we've designed them, one of the key elements is the cost structure of these modules. It has to be economical at the end of the day, because we are using the same silicon technology that's used for all these electronics — and all the other stuff we're making at low cost. So the idea here is: For that component, we just decommission it, let it burn in the atmosphere and just put a new one in there. We don't have to replace components. It's just like a new satellite that's put in the orbit, and the other one is just decommissioned. And the cost structure allows for that.

Would you envision this as just one arrow in the quiver? Or do you view this as something where we could get substantially all our power from space? What are sort of the potential and limitations?

I think, like any other technology, if it's successful, it'll be phased in. You can't really do it all at once. Now, as more and more of these stations are going to be put in space, then you can see how this will respond to the system. But my anticipation is that it would definitely be filling in the gaps in the baseline. So, for example, if you look at the load line that the power generation has today on the Earth, it has changed because of the photovoltaics, quite interestingly. They had this duck — they call it the duck curve — because in the middle of the day, there's lower demand. The way it changes in the early afternoon, it goes up, peaks, and then comes back down and kind of looks like a duck. But the interesting thing is, now photovoltaics have kind of brought up the middle of the duck. So they've brought up this middle gap that they had. And then now it's gotten to a point that, at some points, the bulk price of power is actually negative during the day. And what this does is it allows you to fill in the gaps where you need it. So for example, you could have most of your power being transmitted to New York in the afternoon, but three hours later, you can shift that power to LA, for example.

I think one thing people might say is, “We're already worried about too many Starlink satellites in orbit. These are much, much bigger! I mean, you would be able to see these from the Earth.” What do you make of that concern?

So, there are different aspects to this. Is it mostly a concern about, for example, space junk and getting crowded and all those things?

There's the space junk concern. There are also just these sort of astronomical concerns, that it would be hard to do astronomy. And more sort of aesthetic concerns.

The aesthetic aspect, I can't talk to. I guess the beauty is in the eyes of the beholder.
But the astronomy aspects: Again, there are obviously going to be windows, and there are going to be the times that this system passes overhead. But just to think about things, the area that is out there at 36,000 kilometers, which is the geosync, is actually 36 times larger than the area of the entire surface of the planet, including all of the water and all the oceans and everything. If you take that area, it's a much bigger sphere. So there's a lot more room, if anything, out there compared to other things that we make. So I'm not too concerned about that. There are also people who think about, “Is it going to cause interference?” and all those things. And those are the kinds of things that we've learned how to deal with in radio systems. We have many different radio systems working concurrently and seamlessly, and we don't seem to have problems with that — like Wi-Fi and 5G and this and that. And you have Bluetooth, and all of these things seem to be working together. And the main reason is that we've learned how to do it in that respect. There's also another set of concerns some people raise. “It's a health concern. Is it going to fry birds flying overhead?” And the answer to that is actually interesting, because the answer is that the energy density that anything, even in that beam spot, will get is comparable to what you get from standing out in the sun — except for the fact that it's what we call non-ionizing radiation as opposed to the sun, because it has UV and all those things that can actually change the molecules and the chemistry. So they can cause cancer (UV does), but radio frequencies don't. All they can do is generate heat. The benefit of this thing is that with that power level, you'd recover probably close to three times, three to three-and-a-half times, more than what you recover from photovoltaics. And you can have it during the day or night.

I was recently reading a big report from Citigroup about the space economy, and they went into some detail about space-based solar. That's the first time I remember reading Wall Street research about that technology. At this point, is it still so early that you're not getting much private sector interest?

First of all, I can tell you that there has been a tremendous amount of interest. I mean, especially recently, over the last couple of years, we've seen a lot more. And partly I think it's because of the fact that the technology … I mean, 10-20 years ago, it was not really realistic because of the cost structure, the complexity of the technologies, and all those things. But now people are starting to see the pathway. So we've had a lot of interest from various places. And it's kind of growing exponentially in a way, recently. So I'm anticipating seeing a lot more of that investment. In fact, we've been approached by several investors in this regard, too. But it'll take time. It's not a short-term project. It's not an app that we can start today and have a first prototype working in a few weeks or months. We've been working on this for quite a while, and it has to continue on. We, in fact, are going to have a launch sometime soon, to have a first demonstration of some of the key components of the technologies that we are launching.

The Chinese seem pretty interested in this technology.

They are. And it's interesting. A lot of this thing has happened in part because of these new technologies that have been developed at Caltech and at other places that made it possible. So people are taking another look at it.
There was this old kind of mindset about it, and this new mindset has renewed interest in it, because of these things. Yeah, the Chinese are interested. The United Kingdom is very interested in this. The Japanese are very interested in this. There are a lot of other efforts in other places — India is actually even interested in it. So we've actually seen a lot of interest all over the world, in this area.

Is there something you need government to do or to stop doing at this stage in the development of the technology?

A great question. One is, in terms of investment, definitely. These are the kind of things that, to get started, you need a big entity like government to put investment in it — in terms of research and development — because the barrier to entry is pretty large, regarding the amount of initial investment. Of course, the return eventually is going to be large, too. That's important also from a regulatory perspective. It's important for government in general — about the technologies related to wireless power transfer, both terrestrial and space — I think the government needs to be more proactive in terms of allowing it to flourish and not getting in the way. With everything new that comes in, there of course needs to be a thoughtful discourse about it. But if it gets to a point of becoming too much of an impediment to innovation and progress, then that would not be a good thing. So I think allowing these technologies to flourish — in terms of spectral allocations and other things of that sort — would be a good thing to continue to do.

Are there key, deal-breaking technological challenges that you still need to solve?

There are. I mean, it is fair to say that not all the technical challenges have been solved, but the pathway has become more clear over the last several years in terms of at least how we go about solving them. It's sometimes the unknown unknowns that get you at the end of the day. But we have more of the things that we know that we need to figure out. And I think we have a clear pathway. But in general, nobody has built a coherent structure of this magnitude anywhere — not even on Earth, let alone in space. So for example, that analogy that I used earlier: If you have an army of ants, you want the ants, that are like a mile apart, to be synchronized within a few picoseconds (and a picosecond is one-trillionth of a second). So the timing accuracy of that — that kind of thing … We have solutions; we are working on things. It's a combination of various advanced technologies that allows us to get this kind of timing synchronization. But those are the kind of challenges that we're trying to overcome and solve when you go to this scale. And it is something that has emerged because we've solved the other problems. Now we are at the point to say, “Okay, well, now we are scaling it up. How do we do these things?” And we need to solve these problems.

How long until space-based solar arrives? Are we talking the 2030s? The 2040s?

I'm more on the optimistic side, I guess. I think probably by the end of the 2020s, you will have some demonstration, some power transfer demo. We are going to have to show it soon. We are going to have some technology demonstrations. But if you want to have a substantial amount of power transferred, probably before the end of this decade. It would probably not provide a whole lot of our power at that point.
That takes another decade or two to get to that point — if this pathway turns out to be the right pathway to go down.

Ali, thanks for coming on the podcast.

No problem. It's my pleasure.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Today's interview is with Dr Magdalena Zernicka-Goetz, Professor at the University of Cambridge, UK and Bren Professor at the California Institute of Technology, USA, who is interviewed here by her PhD student Bailey Weatherbee. Tune in to learn all about Magda's early influences growing up in Poland and how she came to be a scientist with a love of developmental biology, the challenges she's faced along the way not only as a scientist but as a woman and an immigrant, and, of course, all about her lab's fascinating work trying to unravel the complexities of early embryo development.
Anima Anandkumar is a Bren Professor at the Caltech Computing + Mathematical Sciences (CMS) Department and a director of machine learning research at NVIDIA. Find out more about her on her homepage. Humans of AI: Stories, Not Stats is an interview series with AI researchers to get to know them better as people. We don't talk about AI or their work or the stats of their life like what college they went to. They share what they think about, what they are insecure about, what they get excited about. They share the stories of their day-to-day life. Videos of these interviews are available at humanstories.ai. The host is Devi Parikh, an Associate Professor in the School of Interactive Computing at Georgia Tech and a Research Scientist at Facebook AI Research (FAIR). Find out more about her at her homepage or follow her on Twitter. This interview was recorded on October 8, 2020. --- Send in a voice message: https://anchor.fm/humanstoriesai/message
In this episode I had the privilege of speaking with Anima Anandkumar about her background, her work at NVIDIA, and how she views the future of Machine Learning and AI. Anima is the Head of ML and AI research at NVIDIA. She is also a Bren Professor at Caltech. She has been honored by multiple organizations for her contributions to the field of AI research and has given multiple talks on topics ranging from the future of AI to ethics in the data science community.

Her TED Talk: https://www.youtube.com/watch?v=zkJ5lEuGQDw
Her YouTube Channel: https://www.youtube.com/user/animakumar82
Her Twitter: https://twitter.com/AnimaAnandkumar
It's great to be back! (The video of this episode is out on YouTube: https://youtu.be/tTRTbVxKuUM.) Today's episode is a very special one and brings down the curtains on our series 'From Insti and Beyond', which we ran in collaboration with the Alumni Relations Team of IIT Madras. Our guest today is Prof. Anima Anandkumar, a Bren Professor at Caltech and the Director of ML Research at NVIDIA. Prof. Anima has acquired near-legendary status in both corporate research and academia, and in this episode she talks about her life journey so far. We also talk to her about why she chose research as her path, about the AI revolution today and much more. Prof. Anima is also very vocal about furthering the cause of women in STEM and we talk to her about diversity (or lack thereof) in technical fields, the effect of patriarchy on Indian and American culture, and what we can do to create a better future. An episode peppered with fascinating anecdotes and a lot of sound life advice.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this episode of our AI Rewind series, we’re back with Anima Anandkumar, Bren Professor at Caltech and now Director of Machine Learning Research at NVIDIA. Anima joins us to discuss her take on trends in the broader Machine Learning field in 2018 and beyond. In our conversation, we cover not only technical breakthroughs in the field but also those around inclusivity and diversity. For this episode's complete show notes, visit twimlai.com/talk/215. For more information on the AI Rewind series, visit twimlai.com/rewind18.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this episode of our TrainAI series, I sit down with Anima Anandkumar, Bren Professor at Caltech and Principal Scientist with Amazon Web Services. Anima joined me to discuss the research coming out of her “Tensorlab” at Caltech. In our conversation, we review the application of tensor operations to machine learning and discuss how an example problem, document categorization, might be approached using 3-dimensional tensors to discover topics and relationships between topics. We touch on multidimensionality, expectation maximization, and the Amazon products SageMaker and Comprehend. Anima also goes into how to tensorize neural networks and apply our understanding of tensor algebra to perform better architecture searches. The notes for this show can be found at twimlai.com/talk/142. For series info, visit twimlai.com/trainai2018
Prof Judy Olson, Bren Professor of Information and Computer Sciences in the Informatics Department at UC Irvine, reflects on her career; on changes she has seen, particularly the increasing expectations of hiring committees; on common issues people deal with; on blooming where you are planted; on dealing with imposter syndrome; on the value of good colleagues and collaborators; and on her work plans after retirement later this year.
The PrimateCast interviews Dr. Ralph Adolphs about emotion, the social brain, and the approaches he and his lab are using to understand these phenomena. Dr. Ralph Adolphs is Bren Professor of Psychology and Neuroscience and Professor of Biology at the...
Dr. Ralph Adolphs is the Bren Professor of Psychology and Neuroscience and Professor of Biology at the California Institute of Technology. He received his PhD in Neurobiology from Caltech and completed his postdoctoral training at the University of Iowa. Ralph is here with us today to tell us about his journey through life and science.
Speakers:
David Baltimore, Nobel Laureate, Medicine, 1975; Robert Andrews Millikan Professor of Biology, California Institute of Technology
Joel Burdick, Professor of Mechanical Engineering and Professor of Bioengineering, California Institute of Technology
James Economou, Vice Chancellor for Research, University of California, Los Angeles
Michael Quick, Executive Vice Provost and Professor of Biological Sciences, University of Southern California
Owen Witte, Director, Broad Stem Cell Research Center, University of California, Los Angeles; Distinguished Professor of Microbiology, Immunology & Molecular Genetics, University of California, Los Angeles

Moderator: Stephen Mayo, William K. Bowes Jr. Foundation Chair, Division of Biology, and Bren Professor of Biology and Chemistry, California Institute of Technology.

Research in bioscience has been transformed in the last decade, becoming tightly linked to other fields ranging from chemistry, engineering and bioengineering to physics, nanoscience and applied sciences. The intersection of these core disciplines is leading to discoveries that have global impact. Universities are increasingly creating partnerships among these diverse areas to accelerate medical science and change how we think about treating complex health issues in the 21st century. Our panel of leaders in research will examine how these connections are being made and their salutary results.