Join The Full Nerd gang as they talk about the latest PC building news. In this special episode the gang is joined by Tom Petersen, Intel Fellow, to talk about the new Battlemage GPUs, new upscaling tech, and more. And of course we answer your questions live! Links: - https://www.pcworld.com/article/2543041/intels-249-arc-b580-is-the-gpu-weve-begged-for-since-the-pandemic.html Join the PC related discussions and ask us questions on Discord: https://discord.gg/SGPRSy7 Follow the crew on X: @GordonUng @BradChacos @MorphingBall @AdamPMurray ============= Follow PCWorld! Website: http://www.pcworld.com X: https://www.x.com/pcworld ============= This video is NOT sponsored. Some links may be affiliate links, which means if you buy something PCWorld may receive a small commission.
On this episode of Embedded Insiders, Jack Weast, Intel Fellow and Vice President and General Manager of Intel Automotive, breaks down the company's recently announced OLEA U310 SoC. Built on a hybrid, heterogeneous architecture, the solution is designed to improve the overall efficiency of electric vehicles while lowering design and manufacturing costs. Next, Rich and Vin are back with another Dev Talk discussing embedded developers' approach to testing, specifically the pros and cons of waiting to test prototypes until they're complete, and the testing tools every developer should have in their toolbox. But first, Rich, Ken, and I discuss the current state of electric vehicles and our curiosity about Intel's approach to the market. For more information, visit embeddedcomputing.com
Dr. Raj Yavatkar is Chief Technology Officer at Juniper Networks ( https://www.juniper.net/us/en/the-feed/topics/raj-yavatkar.html ), where he is responsible for charting the company's technology strategy and for leading and executing the company's critical innovations and products for intelligent self-driving networks, security, Mobile Edge Cloud, network virtualization, packet-optical integration, and hybrid cloud. A technology and products pioneer throughout his career, Dr. Yavatkar has envisioned how emerging technologies can be applied to creatively solve enterprise and business problems ahead of competitors, helping establish new product lines. Before joining Juniper, Dr. Yavatkar was at Google (GCP), where he led a large team of engineers delivering cloud networking infrastructure and products for Google Cloud customers. Prior to that, at VMware, he ideated a new product concept to address the private/hybrid cloud markets by defining VMware Cloud Foundation—an easy way to deploy and manage virtual clouds—and led a large product team to successfully deliver the product to market. He started his career at Intel, rising to the position of Intel Fellow, which he held for 10 years. In his various leadership roles there, Dr. Yavatkar drove new product and R&D initiatives across many areas of software. He brings a wealth of experience in emerging technologies. He has been awarded more than 45 patents, published over 60 research papers, authored 5 Internet standards, and co-authored a book on internet quality of service. He is an IEEE Fellow and holds a PhD in Computer Science from Purdue University.
Remember when the heavy battery inside your laptop weighed you down? Intel Automotive does – and they are using proven power management best practices from the PC industry to increase EV battery efficiency. For more than 50 years, Intel has worked to advance the design and manufacturing of semiconductors. As the automotive industry shifts to electrification and software-defined vehicle infrastructures, Intel Automotive offers a scalable product portfolio, regionally resilient manufacturing, and demonstrated industry transformation experience. In fact, Intel Automotive is working with SAE to develop the vehicle platform power management standard, SAE J3311. To learn more, we sat down with Jack Weast, Intel Fellow, Vice President and General Manager, Intel Automotive, and Rebeca Delgado, Chief Technology Officer, Intel Automotive, to discuss the importance of standardizing EV power management using learnings from the PC industry. If you are interested in joining SAE J3311, Vehicle Platform Power Management, please reach out to gary.a.martz.jr@intel.com. We'd love to hear from you. Share your comments, questions, and ideas for future topics and guests at podcast@sae.org. Don't forget to take a moment to follow SAE Tomorrow Today—a podcast where we discuss emerging technology and trends in mobility with the leaders, innovators and strategists making it all happen—and give us a review on your preferred podcasting platform. Follow SAE on LinkedIn, Instagram, Facebook, Twitter, and YouTube. Follow host Grayson Brulte on LinkedIn, Twitter, and Instagram.
In this episode of InTechnology, Camille looks back on some of the most exciting conversations on AI in 2023. Things kick off with Andres Rodriguez, Intel Fellow, and his conversation on deep learning, a subset of machine learning. Then, Selvakumar Panneer and Omesh Tickoo, Principal Engineers at Intel Labs, discuss synthetic data. Next comes a conversation on large language models (LLMs) with Sanjay Rajagopalan, Chief Design and Strategy Officer at Vianai Systems. Finally, the episode wraps up with independent AI policy and governance advisor Chloe Autio giving her insight on emerging AI regulations. Listen to the full episodes: What That Means with Camille: Deep Learning (142): https://cybersecurityinside.libsyn.com/142-what-that-means-with-camille-deep-learning What That Means with Camille: Synthetic Data (139): https://cybersecurityinside.libsyn.com/139-what-that-means-with-camille-synthetic-data Why and How Enterprises Are Adopting LLMs (174): https://cybersecurityinside.libsyn.com/174-why-and-how-enterprises-are-adopting-llms Emerging U.S. Policies, Legislation, and Executive Orders on AI (178): https://cybersecurityinside.libsyn.com/178-emerging-us-policies-legislation-and-executive-orders-on-ai Deep Dive: U.S. Executive Order on Artificial Intelligence (181): https://cybersecurityinside.libsyn.com/181-deep-dive-us-executive-order-on-artificial-intelligence The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.
In this episode of InTechnology, Camille delves into a roundup of our most popular listener topics on sustainability in 2023. The first topic is green software with Asim Hussain, Director of Green Software and Ecosystems at Intel. The second covers electricity mapping featuring Olivier Corradi, Founder and CEO of Electricity Maps. The final topic in this roundup is energy efficiency in the cloud with Lily Looi, Intel Fellow and Chief Power Architect of Intel's Xeon product line. Listen to the full episode (EP 137) - WTM: Green Software with Asim Hussain. Listen to the full episode (EP 147) with Olivier Corradi – How Green Is Your Electricity? Listen to the full episode (EP 148) - WTM: Energy Efficiency In The Cloud with Lily Looi. The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.
In this episode of What That Means, Camille gets into energy efficiency in the Cloud with Lily Looi, Intel Fellow. They talk about the main ways data centers consume energy and ways to make data centers more energy efficient. The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.
Join The Full Nerd gang as they talk about the latest PC hardware topics. In this episode the gang is joined by special guest Tom Petersen, Intel Fellow, to talk about Arc updates, the work Intel is doing on GPU drivers, and much more. And of course we answer your questions live! Watch our recent Intel Arc testing: https://youtu.be/00T15aL1pkA Buy The Full Nerd merch: https://crowdmade.com/collections/pcworld Check out the audio version of the podcast on iTunes, Spotify, Pocket Casts and more so you can listen on the go and be sure to subscribe so you don't miss the latest live episode! Join the PC related discussions and ask us questions on Discord: https://discord.gg/SGPRSy7 Follow the crew on Twitter: @GordonUng @BradChacos @MorphingBall @KeithPlaysPC @AdamPMurray Follow PCWorld for all things PC! ---------------------------------- SUBSCRIBE: http://www.youtube.com/subscription_center?add_user=PCWorldVideos TWITCH: https://www.twitch.tv/PCWorldUS TWITTER: https://www.twitter.com/pcworld
Intel's Responsible AI work recently revealed FakeCatcher, a deepfake detection technology that can detect fake videos with 96% accuracy. The platform is the world's first real-time deepfake detector that returns results in milliseconds. FakeCatcher uses Intel's hardware and software to assess "blood flow" signals in real videos to detect inauthenticity. The technology can run up to 72 different detection streams simultaneously on 3rd Gen Intel Xeon Scalable processors. Social media platforms can use this technology to prevent the spread of deepfakes, global news organizations can use it to avoid amplifying manipulated videos, and non-profit organizations can use it to democratize deepfake detection for everyone. Intel also recently announced its extended collaboration with Mila, an AI research institute in Montreal, to help advance AI techniques to tackle global challenges like climate change, identify drivers of diseases, and expedite drug discovery. Accelerating the research and development of advanced AI to solve some of the world's most critical and challenging issues requires a responsible approach to AI and the ability to scale computing technology. As leaders in computing and AI, Intel and Mila will work together to tackle some of the challenges the world faces today and drive tangible results. Lama Nachman, Intel Fellow and Director of the Intelligent Systems Research Lab at Intel Labs, joins me on Tech Talks Daily in a discussion about Responsible AI and the real-time deepfake detector.
On The Cloud Pod this week, Amazon EC2 Trn1 instances for high-performance model training are now available, 123 new things were announced at Google Cloud Next ‘22, several new Azure capabilities were announced at Microsoft Ignite, and many new announcements were made at Oracle CloudWorld. Thank you to our sponsor, Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you're having trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week. Episode Highlights ⏰ Amazon EC2 Trn1 instances for high-performance model training are now available. ⏰ 123 new things were announced at Google Cloud Next ‘22. ⏰ Several new Azure capabilities were announced at Microsoft Ignite. ⏰ Many new announcements from Oracle CloudWorld. Top Quote
Dr. Melvin Greer is a data scientist, and we'll talk about what that is, what artificial intelligence is, and how he sees AI developing as a tool for data analysis.
This week Intel and Article One, in association with the School of Law at Trinity College Dublin, hosted a symposium exploring responsible business conduct, innovation and Artificial Intelligence (AI). While rapid advancements in AI and other emerging technologies have the potential for significant positive human rights impacts, they also bring heightened risks of adverse effects. Best practices, principles, and tools to ensure responsible decision-making are vital elements in the evolution of AI technologies. The one-day symposium brought together thought leaders, policy makers, and academics to explore topics such as the responsible development of AI and applying responsible AI principles to manufacturing. The symposium was opened by Eamon Gilmore, EU Special Representative on Human Rights. Also speaking at the event was Lama Nachman, Intel Fellow and Director of Intelligent Systems Research Lab in Intel Labs. Lama's research is focused on creating contextually aware experiences that understand users through sensing and sense-making, anticipating their needs, and acting on their behalf. To coincide with the symposium, Lama shared her thoughts in an editorial on ‘Responsibly Harnessing the Power of AI'; “Artificial intelligence (AI) has become a key part of everyday life, transforming how we live, work, and solve new and complex challenges. From making voice banking possible for people with neurological conditions to helping autonomous vehicles make roads safer and helping researchers better understand rainfall patterns and human population trends, AI has allowed us to overcome barriers, make societies safer and develop solutions to build a better future. Despite AI's many real-life benefits, Hollywood loves to tell alarming stories of AI taking on a mind of its own and menacing people. These science fiction scenarios can distract us from focusing on the very real but more banal ways in which poorly designed AI systems can harm people. It is critical that we continuously strive to develop AI technologies responsibly, so that our efforts do not marginalise people, use data in unethical ways or discriminate against different populations — especially individuals in traditionally underrepresented groups. These are problems that we as developers of AI systems are aware of and working to prevent. At Intel, we believe in the potential of AI technology to create positive global change, empower people with the right tools, and improve the life of every person on the planet. We've long been recognised as one of the most ethical companies in the world, and we take that responsibility seriously. We've had Global Human Rights Principles in place since 2009 and are committed to high standards in product responsibility, including AI. We recognize the ethical risks associated with the development of AI technology and aspire to be a role model, especially as thousands of companies across all industries are making AI breakthroughs using systems enhanced with Intel® AI technology. We are committed to responsibly advancing AI technology throughout the product lifecycle. I am excited to share our updated Responsible AI web page, featuring the work we do in this space and highlighting the actions we are taking to operate responsibly, guard against the misuse of AI and keep ourselves accountable through internal oversight and governance processes”. Visit the Intel newsroom to read the full editorial.
Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
680: In this interview, Intel Fellow Andres Rodriguez discusses the work he and his team are doing to advance artificial intelligence and its applications. Andres speaks to the historical evolution of artificial intelligence, the work being done to democratize access to AI algorithms, and the benefits and risks that AI can pose to society. Andres also provides a unique look at the hardware side of artificial intelligence, the trade-offs in complexity with its software counterpart, and how companies of all sizes can train and distill large AI models into smaller, more applicable models. Finally, Andres explains how open-source communities are advancing the progress being made in AI and how citizen developers and people without engineering backgrounds can be easily trained to become familiar with the tools behind AI.
In this episode of Cyber Security Inside What That Means, Camille breaks down and defines confidential computing with Amy Santoni, Intel Fellow, Design Engineering Group, and Chief Xeon Security Architect, Datacenter Processor Architecture. The conversation covers: - What confidential computing is and why it is important. - Why confidential computing is a focus of cybersecurity development right now. - What a trusted execution environment is. - What to put in a trusted execution environment and what data is safe to leave out. ...and more. Don't miss it! The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation. Here are some key takeaways: - Xeon is Intel's line of processors for the server space; this is the area Amy works in. - Confidential computing is about protecting data as it is being processed in the CPU. Experts have largely figured out how to protect data where it is stored and while it is being transmitted, so the remaining gap is data in use. - We are getting to this now because it follows how attacks have evolved: attackers started with what is on your disks, then moved to intercepting data in transit, and data in use is where attacks are happening next. - A current example is COVID X-rays. AI models have been trained on large sets of X-rays to automate and improve the accuracy of diagnosis. The training data comes from hospitals while preserving the privacy of the patients, and securing that data and information while training and using the AI is part of confidential computing. - There are many layers to this protection, including protecting the environment from being tricked. For example, we don't want an environment to think it's running on a secure Intel server when it is not, and we need protections to keep that from happening. - So why not put everything in a trusted execution environment? First, it is not free. Second, a lot of software enabling is required to make it work, and it isn't the simplest process. There is a balance between resources and how much the data needs to be protected. - Different types of trusted execution environments work at different levels of your software and hardware, such as the app level or the OS level. You can choose which parts of your data, and even which parts of your software, go into these environments. - Many people hesitate to store their data in the cloud because it sits alongside many other people's data. Confidential computing matters here because it makes each person's lock, or the walls around their data, unique: even if my data and your data are next to each other, I can't access yours and you can't access mine. - COVID made the need for agility in computing obvious, but the push to protect data at every part of the process, including while it is being processed, predates the pandemic, going back at least to the Snowden era. - It is difficult to predict how much of our hardware and software will run in these environments, and growth projections vary widely, but every projection is a large increase.
- Another industry trend is standardizing communication across today's diverse servers so that all the components work together. Another is improving the safety of computer memory and how software and hardware work together on that. - Physical protection has also become increasingly important as technology spreads through our world, from Facebook and phones to the mall, the football stadium, and the game console. As data travels, it also needs protection from someone physically trying to access it. Some interesting quotes from today's episode: “Confidential computing is really focused on while [data] is being used. So the other ones we've solved, and then confidential computing, the new part, is: ‘hey, while I'm computing this data, let's make sure it's confidential and it's protected.'” - Amy Santoni “It followed the attack vectors. If you think about how malware started, it started corrupting things on your disk. And then people started putting sniffers or using things at the network side to intercept things between point A and point B. So this is where the attacks are going and where we need to start protecting.” - Amy Santoni “That's the trusted execution environment. It's a new environment and new hardware protections to protect the code and data within that trusted execution environment. Secure enclaves is a particular trusted execution environment.” - Amy Santoni “You can split your app into these trusted and untrusted parts, but again, the level of detail and the level of software enabling is greater in that second case. It reduces the attack surface to the smallest possible one, because you're just cutting out part of your app and saying, ‘this is the most critical part that I want to protect.' And all the rest of the app is untrusted. It can't get to that data.” - Amy Santoni “What confidential computing is bringing is extra confidence that I can take these things that maybe I wasn't comfortable taking to cloud before, and move them to cloud. And I have this extra hardware layer of protection to keep my data private from other people running on the same machine, but also from the cloud service provider, from that virtual machine monitor that happens to be running.” - Amy Santoni “I think that there was a push for this even before COVID, the move to protect the data while it's being computed. I think people have recognized that for a while. I don't know that I have a good example of a catalyst other than the one I'm familiar with, which is, like I said, we've called it the Snowden effect. When people realized that the government could get to some data that they didn't think they could get to.” - Amy Santoni “We've heard Microsoft say they think that the majority of their cloud, let's call it infrastructure, as a service will be running in some trusted execution environment this decade. So that's the projections - like the growth projections for the growth of confidential compute vary from 5x to 20x.” - Amy Santoni “What we're trying to do is make sure that all of those processing places along the path have, again, from a security centric point of view, have a trusted execution environment. They don't all have to be the same necessarily, but have some protection. So whether I'm processing here or processing there, I have some protection for my data.” - Amy Santoni
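A minimal sketch of the app-partitioning idea Amy describes, for readers who want something concrete. This is illustrative Python only, not the real Intel SGX API (production enclaves are built with the SGX SDK in C/C++, and every name below is hypothetical); it shows just the pattern of keeping a secret behind a narrow trusted interface while the rest of the app stays untrusted:

```python
# Conceptual sketch of trusted/untrusted app partitioning -- NOT the real
# Intel SGX API. It only illustrates the pattern: the secret lives inside
# a small trusted part, and the untrusted app can call one narrow method
# but can never read the key itself.
import hmac
import hashlib

class TrustedPart:
    """Stands in for code and data inside a trusted execution environment."""

    def __init__(self, secret_key: bytes):
        # In a real TEE the hardware would protect this key with encrypted,
        # access-controlled memory; here it is simply a private attribute.
        self._key = secret_key

    def sign(self, message: bytes) -> bytes:
        # The single operation exposed across the trust boundary: compute
        # a MAC over a message without ever revealing the key.
        return hmac.new(self._key, message, hashlib.sha256).digest()

def untrusted_app(enclave: TrustedPart) -> None:
    # Everything out here is the "untrusted" part of the app. Cutting the
    # protected logic down to TrustedPart shrinks the attack surface to
    # just that one small piece, as Amy describes.
    tag = enclave.sign(b"transfer $100 to account 42")
    print("MAC from the trusted part:", tag.hex())

if __name__ == "__main__":
    untrusted_app(TrustedPart(secret_key=b"provisioned-device-secret"))
```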
In this episode of Cyber Security Inside What That Means, Camille continues to dive into the idea of confidential computing and trusted execution environments with Ron Perez, Intel Fellow, Chief Security Architect, CTO Office. The conversation covers: - What confidential computing and trusted execution environments are. - Why we need them, and what data needs to be inside them. - How confidential computing works in something like the cloud. - Balancing security with usage and effectiveness using confidential computing. ...and more. Don't miss it! The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation. Here are some key takeaways: - Security is a very broad term. It can mean making sure there are no vulnerabilities in products, and also tapping into new capabilities and features for customers. The latter is where Ron focuses, because he wants technology to enable people to do things they wouldn't or couldn't do before, not limit them. - An example of this is cloud computing. It makes things more efficient and easier to share, but it also comes with security vulnerabilities, so new security needs to be developed to assure the safety of the work in these environments. - Perfect security is essentially encasing your computer in cement and not using it - but that's not very useful. People want to do more, share more, and connect more in industry today, which creates a huge challenge. This is why security architects and technologists exist, and why they have so much job security. - Although ransomware is incredibly important to address and focus on right now, we are also constantly connecting our networks and striving for computing on a global scale, and those connections can magnify any threat or vulnerability. - The idea of “break once, run everywhere” is going to become a real problem with how interconnected our systems are becoming. This is why security assurance is so important, and why we need to refocus on it as an industry. - We have moved past the point of being able to use paper (except maybe for something like voting) because of the speed and connectivity of everything. That's why we have things like zero trust, down to the smallest pieces of software and hardware. - Zero trust and confidential computing are complementary. Confidential computing is about protecting data in use by doing the computing in a hardware-based trusted execution environment. - Why is it difficult to protect data while it's in use? It's being accessed by several different things: memory, processors, another compute engine, the software you're using to do something to the data… There may be copies of it as the software optimizes, and there is a lot happening to it at once. - A trusted execution environment is focused on confidentiality. It is also about protecting the data and code and detecting whether either has been modified in any way. At a minimum, it must do these two things. - A technology like Intel Software Guard Extensions (SGX) can separate what code and data are inside the trusted environment and what are outside, creating a strong separation between the two. - SGX also protects and encrypts the memory that the code and data occupy during processing and use. Other software is also trying to support multiple environments at different levels of capability.
- In the past, computer security has been based on a hierarchy: you needed to trust your data, the software, the OS, the hardware, and more - everything underneath your data had to be secured as well. With confidential computing, you only need to trust your data and the environment. - Needing to trust only those two things is really powerful when you think on a global, interconnected scale. When your code is running across the globe, confidential computing helps you assure that it is protected. - Because cloud usage has grown so much, there is worry about who has the capability to access the data stored in it. We're relying a lot on the ethics of the people running the security; it is about the capability to access it more than anything. - Confidential computing now allows those providers to say that they absolutely cannot see customers' data. You are just paying for their resources and their bandwidth. It is not based on their own ethical code; they physically cannot see it. Some interesting quotes from today's episode: “Security is very broad. It applies to so many things. And in fact, just saying security is not enough, because everybody will have a different image in their head of what that means.” - Ron Perez “But as a security technologist, you realize that yeah, sharing is not necessarily a good thing. That's where bad things happen. So we need new technologies to provide assurances, security assurances - confidence, basically - that you still have the same safety in terms of the security of your workloads in that environment that you can't control.” - Ron Perez “We're really trying to do computing on a global scale. We have a number of cloud service providers and telco providers, etc. All these networks and all these systems are going to be linked together… That massive scale is the part I'm worried about, because now any little vulnerability can be magnified because, most likely, we're using these same technologies everywhere else. So the break once, run everywhere problem is going to be huge.” - Ron Perez “Voting, for example, is probably an area where we should look at still having paper. Other than that, the speed of everything we're doing today really won't allow us to go back to those days. Even those systems that we had back then. So what is a server, and can you put it in a silo?” - Ron Perez “The past 40, 50, 60 years now, we've been figuring out how to secure data when it's being stored, at rest, and when it's in transit, over network. That's been the whole purpose of computing security and the research and all the developments we've had. But we've missed this whole in-use part.” - Ron Perez “We're talking about when homomorphic encryption first reemerged as a real possibility on the scene in 2009, it was thousands of orders of magnitude worse in performance. We've got that down now to just a few orders of magnitude, but even that obviously is not practical for most workloads. So we still have this need for what we can do short of that until we get to that nirvana.” - Ron Perez “Confidential computing now allows us to say, okay, you can take the thing that you care about that you want to protect, and the hardware which implements these trusted execution environments, and that's all you have to trust. You don't have to trust any of the operating system, the hypervisor, the other applications, the other middleware on the platform, the other firmware on the platform. 
All you have to do is trust those two things.” - Ron Perez
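Ron's claim that "all you have to do is trust those two things" rests on attestation: the hardware measures the code it loaded and signs that measurement with a key only it holds, so a remote party can check what is running without trusting the OS, hypervisor, or anything else on the stack. Below is a toy measure-sign-verify sketch in Python under loudly stated assumptions: real attestation uses asymmetric keys that chain to the silicon vendor, and every name here is hypothetical, not any real TEE's API.

```python
# Toy attestation flow -- not a real protocol. It illustrates Ron's point:
# the verifier ends up trusting only (1) the hardware key and (2) the
# measured code, not the OS or hypervisor underneath.
import hmac
import hashlib

HARDWARE_KEY = b"key-known-only-to-the-cpu"   # stand-in for a hardware root of trust
EXPECTED = hashlib.sha256(b"enclave code v1.0").hexdigest()  # code the verifier expects

def produce_quote(enclave_code: bytes, nonce: bytes):
    """What the TEE hardware does: measure (hash) the loaded code and
    sign measurement+nonce with the key only the hardware holds."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    signature = hmac.new(HARDWARE_KEY, measurement.encode() + nonce, hashlib.sha256).digest()
    return measurement, signature

def verify_quote(measurement: str, signature: bytes, nonce: bytes) -> bool:
    """What the remote verifier does: check the signature is rooted in
    hardware it trusts and the measurement matches the expected code."""
    expected_sig = hmac.new(HARDWARE_KEY, measurement.encode() + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected_sig) and measurement == EXPECTED

nonce = b"fresh-random-challenge"             # prevents replay of an old quote
m, s = produce_quote(b"enclave code v1.0", nonce)
print("attestation verified:", verify_quote(m, s, nonce))  # True only for the expected code
```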
There are key questions that have emerged around the benefits of automated vehicles and how to bring them to market at scale, including efforts to increase public trust in AVs, how we measure and define “driving safely” in AVs, and the process of getting these vehicles on the road. Host John Bozzella discusses "driving safely in AVs" with Jack Weast, an Intel Fellow, the Chief Technology Officer for Intel's Corporate Strategy Office, and the Vice President of Automated Vehicle Standards at Intel's AV subsidiary, Mobileye. In this role, Jack leads a global team working on AV safety technology and the safety related standards needed for wide-scale AV adoption. This podcast is presented by Intel, a global technology leader. Together with subsidiary Mobileye, Intel is revolutionizing technology for the automotive industry, delivering best-in-class automated driving solutions to make roads safer for all. Learn more at mobileye.com. See acast.com/privacy for privacy and opt-out information.
Episode 130: Creating the Microprocessor and Beyond with Marcian "Ted" Hoff. BIOGRAPHY OF MARCIAN E. HOFF Dr. Marcian Edward "Ted" Hoff was born in Rochester, New York. His degrees include a Bachelor of Electrical Engineering from Rensselaer Polytechnic Institute, Troy, New York (1958), and an MS (1959) and a Ph.D. (1962), both in Electrical Engineering, from Stanford University, Stanford, California. In the 1959-1960 time frame he and his professor, Bernard Widrow, co-developed the LMS adaptive algorithm, which is used in many modern communication systems, e.g. adaptive equalizers and noise-cancelling systems. In 1968 he joined Intel Corporation as Manager of Applications Research and in 1969 proposed the architecture for the first monolithic microprocessor, or computer central processor on a single chip, the Intel 4004, which was announced in 1971. He contributed to several other microprocessor designs, and then in 1975 started a group at Intel to develop products for telecommunications. His group produced the first commercially available monolithic telephone CODEC, the first commercially available switched-capacitor filter, and one of the earliest digital signal processing chips, the Intel 2920. He became the first Intel Fellow when the position was created in 1980. In 1983 he joined Atari as Vice President of Corporate Research and Development. In 1984 he left Atari to become an independent consultant. In 1986 he joined Teklicon, a company specializing in assistance to attorneys dealing with intellectual property litigation, as Chief Technologist, where he remained until he retired in 2007. He has been recognized with numerous awards, primarily for his microprocessor contributions. Those awards include the Kyoto Prize, the Stuart Ballantine Medal and Certificate of Merit from the Franklin Institute, induction into the National Inventors Hall of Fame and the Silicon Valley Engineering Hall of Fame, the George R. Stibitz Computer Pioneer Award, the Semiconductor Industry 50th Anniversary Award, the Eduard Rhein Foundation Technology Award, the Ron Brown Innovation Award, the Davies Medal and induction into the Hall of Fame from Rensselaer Polytechnic Institute, and the National Medal of Technology and Innovation. He has been recognized with several IEEE awards including the Cledo Brunetti Award (1980), the Centennial Medal (1984), and the James Clerk Maxwell Award (2011). He was made a Fellow of the IEEE in 1982 "for the conception and development of the microprocessor" and is now a Life Fellow. He is a named inventor or co-inventor on 17 United States patents and author or co-author of more than 40 technical papers and articles. We talk about: How do you see the value of IP? What should investors be thinking when they are studying a company's IP? What technologies were developed long ago that we as a society are just now starting to adopt? What was it like being one of the inventors of the microprocessor? How did Intel grow after the invention of the 4004? How has "innovation" in Silicon Valley changed over the decades? And much more... Connect with Marcian "Ted" Hoff: best to connect through Mike Trainor, President of Intel Alumni, on LinkedIn.
Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
643: Lama Nachman, Intel Fellow and Director of the Intelligent Systems Research Lab at the company, discusses how her team is driving the future of artificial intelligence and its collaboration with people. Lama explains the mission of Intel's Intelligent Systems Research Lab and describes the types of skills necessary to research the groundbreaking collaboration between humans and AI. She also provides a look at what the next frontier of artificial intelligence in human augmentation is, including aiding in education, accessibility for adults with disabilities, and manufacturing. Finally, Lama gives her perspective on developing responsible AI and how further developments could help create a more sustainable future.
Natan Linder, our 100th guest, is CEO and co-founder of Tulip, a manufacturing software company that builds digital applications to bridge the human-automation gap for frontline operations. Tulip's platforms are based on over a decade's worth of breakthrough research by world-class experts on technologies such as the Internet of Things (IoT), human-computer interaction, augmented reality, and machine learning. Natan is also co-founder and chairman of Formlabs, which develops affordable high-resolution 3D printing for professionals in a diverse set of industries, and he serves as an advisory board member of RightHand Robotics. He is a former Intel Fellow in the Fluid Interfaces Group at the MIT Media Lab and co-founder of Samsung Telecom. He earned his doctorate in media arts and science at MIT and a BA in Computer Science and Business Administration at Reichman University.
In this episode of Cyber Security Inside What That Means, Camille explores intelligent systems and artificial intelligence with Lama Nachman, Intel Fellow and Director of Human & AI Systems Research Lab. The conversation covers: - How intelligent systems use a combination of observation, social science, artificial intelligence, and more to improve the human experience. - The difference between a fully virtual setting and one that balances virtual and analog. - What types of devices are used to observe and improve day-to-day activities, and how they work. - How privacy and ethics play a role in these systems as the digital and physical worlds become more blended. ...and more. Don't miss it! The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation. Here are some key takeaways: - Intelligent systems research is focused on using social science, design, AI, hardware engineering, and software engineering to augment and amplify human capabilities and experiences with AI. - This is more than just improving your experience when using your PC or device. It is about improving your day-to-day tasks using technology in the physical environment. - The idea is that humans are really good at some things, and AI is really good at others - and often these aren't the same things. For example, AI is really good at processing huge amounts of data, which humans can't do efficiently. Putting together what a human is good at and what an AI is good at can lead to better problem-solving and development. - Some environments are harder to observe than others but could still yield huge benefits. Early childhood classrooms involve little screen usage, yet a lot of learning takes place there. If AI can observe that learning, it can bring a conversation into the physical world that supports it. One project under development is a projection on the classroom wall that kids interact with by building a course for the projection to land on; it increases engagement, but it also requires turning the space into a smart space with cameras, servers, and so on. - The key thing here is the balance between virtual and analog. If everything were virtual, analyzing the data would be easy. Because we want a balance of both, it is a hard problem: observing and learning is much harder for the AI. The information comes from cameras, microphones, text, and signals like heart rate and skin temperature. There are also sensors that capture muscle movement and brain waves for people with disabilities. - Wireless sensing, as opposed to cameras, is one way to collect information about where and when people are moving. This eases the hesitation of people who don't want cameras watching them, while still collecting data. Movement is an important data point for AI: if someone is walking toward their computer to turn it on, the AI might start things in the background to make boot-up faster; if someone is moving around the kitchen, the AI can infer they are making dinner and make that process easier. - EEG sensors are also being used for people who can't communicate traditionally, to help learn and interpret what different brain signals mean so that communication can happen.
- Privacy and ethics are being taken into consideration as the developers look at how the data is collected, where it is sent, and where it is analyzed. They are also looking at equity and bias in terms of who is building and reviewing the algorithms. Some interesting quotes from today's episode: “In terms of learning in a physical environment or working in a fab or just helping people with disabilities, as an example, what can we utilize as signals in the physical world? Then, with a lot of algorithmic innovation, turn that into understanding so we can better facilitate experiences for people as they traverse their normal life.” - Lama Nachman “You're looking at augmenting human experience, so we're focused on humans here and using technology and using sensors to understand better what is the human experience, and then improve that experience.” - Camille Morhardt summing up intelligent systems “So if you understood what somebody is doing, and if you understood what is supposed to happen, and the AI system can actually converse well with the human, then you could see how you can start to think of these things as human-AI systems where we're bringing the best of the human and the best of your AI system.” - Lama Nachman “It's amazing, because we've done tons of automation in general, especially in chip manufacturing. But you walk into a fab and you still see tons of people. It's not that the people disappear, they just do different tasks in the fab.” - Lama Nachman “Technology needs to come into the physical world, observe, and then have a conversation that is actually situated in that physical world.” - Lama Nachman “Ultimately there are all sorts of experiences within that spectrum - from total virtual reality where everything is virtual to everything analog and everything in between within that spectrum, right? What's really interesting about this is what is the problem that you're trying to solve, and what are the concerns that you're trying to mitigate?” - Lama Nachman “Actually, to solve some of the privacy issues, one of the things that you could do is reduce the gap between what is being sensed and what is being inferred.” - Lama Nachman “How do you enable responsible development of AI? That means at the very early stages, you're asking questions about risk. You're looking at the project as a whole before you start developing.” - Lama Nachman
In this episode of What That Means, Camille gets the definition, meaning, and importance of socio-technical systems from Intel Fellow and Chief Architect of Socio-Technical Systems, Maria Bezaitis. The conversation covers important questions like: - Why does the overlap of social and tech matter? - How does it impact how we should be thinking about security and product design? ...and more! Don't miss it! Here are some key takeaways: - While socio-technical isn't a new term or concept, it is new to tech. - For most of the developed world, social and tech are inextricably linked. - The brevity of some interactions (like those in the IoT space) is not new. What the Internet and the tech evolution have done is increase the diversity and depth of those encounters. - We love cities because they allow for more chance encounters, more shared experiences. But now, many of those chance encounters, those shared experiences, are happening online. We're sort of co-creating worlds. - With the younger generations, there's less of a distinction between the online world and the ‘real' world. There's less consideration given to what gets shared online vs. what remains offline. They see the link between social and tech much more clearly and the two are virtually inseparable in their minds. - We want to believe security and privacy can be concepts with fixed rules and regulations. But humans show us it's not that simple. We're constantly making trade-offs. So, what we need are real-time, responsive solutions. - We can no longer only think about what's best for tech or driving tech forward. When making decisions, product design engineers need to think more and more about who they're designing the product for and how it will be used. Some interesting quotes from today's episode: “The phrase has been used for years in areas like organizational design and workplace research. I'm bringing it to the fore for tech in part because our lives are no longer strictly social, nor are they exclusively focused around technology. And yet, technology companies, I think, are still working towards the importance of that intersection.” “When we're talking and thinking about technical and technical requirements, we really need to understand the social as inextricably linked. And when we look at our own lives as social entities on the planet, it's really hard, at least if you're in a lot of the developed world, to really think about them as somehow without technology or outside of technology.” “Which is to say, there are layers to this problem. Individuals exist in contexts, which include places and environments and other people. And technologies do as well. In order to understand how these things evolve, we really need to be looking at the intersection and the coevolution of people together with technology.” “If you happen to have teenagers at home, which I do, you know that the people that they're interacting with aren't just people that they know.” “This is why we love cities, because cities have always been these incredible environments for chance encounters, and for very quickly moving us into places and into moments that somehow are not foreseen by the trajectory of our lives.” “That early moment of a potential for something new, and a potential for encountering something different, was absolutely present and important. 
And actually, I think that in some respects, we're likely to encounter that again, as more and more parts of our lives are sourced from what we're doing online.” “I grew up in the 70s and 80s, and we still operate with this notion that our lives are better without tech. There is a fundamental assumption that it's important to tell your child to park the device, put it away. That it's important to imagine leisure time or time off from technology. I don't think that that's mirrored at all in younger generations. And I'm not sure that's just because they're teens or preteens. Technology is occupying a very different kind of terrain for them.” “Their world is organized around communities and places and activities that are sourced from a digital world. And of course, COVID has deepened all of that for them.” “I think we still see people making all sorts of trade-offs against privacy all the time. What we know for sure, is that privacy has never been and will likely never be a concept or a practice that has fixed rules and protocols for people. We are always negotiating our privacy in the same way that we're always negotiating our security, which is what makes humans and communities of humans a really great place to look for thinking differently about both privacy and security. Technologists would like to think that those things lend themselves very easily to rules and guidelines and regulations. I think humans show us that it's not that simple.” “Once you remove yourself from the mindset that privacy or security is something that can be fixed – that can be defined and then implemented – and you move into this space where you can think about those concepts as much more dynamic and much more responsive, then I think you enter into a space where you're really thinking differently about the kinds of technologies that might make sense.” “You're not mapping technologies anymore to behaviors or workloads that are fixed or rigid, but you're able to maybe identify vulnerabilities and holes in a much more responsive, real-time manner. And that, I think, creates space for thinking about change quickly, and in real time.” “I'd like to see ethics move in the same vein that we're trying to move social research, which is that it's not something that ultimately lives outside. It doesn't necessarily require extra processes and tools and governing boards, but that it becomes much more integral. And I think anyone working in that space today would say that's exactly what we're trying to get done. But just like the general face of social science work in product development, and in tech specifically, that's going to take some time.” “I think our job as researchers who are working in the tech sector, is to make sure that those conversations have a landing zone, to bring them inside our companies, and then work with the right partners inside our companies to change how things get made.”
This week, Chris and Martin chat to Amber Huffman, Intel Fellow and President of NVM Express about the updates in NVMe 2.0. NVM Express (generally abbreviated to NVMe) is a storage protocol that was developed to overcome some of the shortcomings of traditional protocols when working with new high performance media such as NAND SSDs […] The post #206 – An Update on NVMe 2.0 with Amber Huffman appeared first on Storage Unpacked Podcast.
Will there ever be a world with no incidents on the road? From major crashes to fender benders, driving comes with risk, but what happens when the driving is left to the machine? In this episode we spoke with Jack Weast, Intel Fellow at Intel Corporation, to examine the safety of driving from all angles and how we can eradicate most driving incidents on the road with autonomous technology. Whether it’s how Intel Corporation is crowd-sourcing to accurately map cities or how it’s using Responsibility-Sensitive Safety (RSS) to bake in reasonable assumptions and decipher what is safe while driving, Jack Weast explains the steps taken to make every passenger’s ride safe and boring. Interested in learning more about the future of safe, self-driving technology? Sign up for SAE’s SmartBrief to get the latest articles, news and updates about RSS, autonomous driving and safety. Tweet us! @saeintl on Twitter Follow SAE on LinkedIn, Instagram, and YouTube Follow host Grayson Brulte on LinkedIn, Twitter and Instagram Subscribe to SAE Tomorrow Today or visit www.sae.org/podcasts to stay up-to-date on all the latest information from SAE. If you like what you’re hearing, please review and comment on your podcast app.
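For listeners who want to see what RSS's "reasonable assumptions" look like formally: the model reduces safe following to closed-form worst-case math. As a sketch (notation follows the published RSS paper by Shalev-Shwartz, Shammah, and Shashua; the parameter values themselves are assumptions each deployment must choose), the minimum safe longitudinal distance behind a lead vehicle is

$$d_{\min} = \left[\, v_r\,\rho + \tfrac{1}{2}\,a_{\mathrm{max,accel}}\,\rho^2 + \frac{\left(v_r + \rho\,a_{\mathrm{max,accel}}\right)^2}{2\,a_{\mathrm{min,brake}}} - \frac{v_f^2}{2\,a_{\mathrm{max,brake}}} \,\right]_{+}$$

where $v_r$ and $v_f$ are the rear and front vehicles' speeds, $\rho$ is the response time, $a_{\mathrm{max,accel}}$ is the worst-case acceleration of the rear car during that response, $a_{\mathrm{min,brake}}$ is its guaranteed braking, $a_{\mathrm{max,brake}}$ is the front car's hardest possible braking, and $[x]_+ = \max(x, 0)$. If the actual gap never falls below $d_{\min}$, the rear car can always brake to a stop in time even under these worst-case assumptions.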
In this episode of Intel on AI guests Lama Nachman, Intel Fellow and Director of Anticipatory Computing Lab, and Hanlin Tang, Sr. Director of the Intel AI Lab, talk with host Abigail Hing Wen about the intersection of humans and AI. The three discuss a wide range of topics, from keeping humans in the loop of AI systems to the ways that AI can augment human abilities. Lama talks about her work in building assistive computer systems for Prof. Stephen Hawking and British roboticist Dr. Peter Scott-Morgan. Hanlin reveals work on a DARPA program in collaboration with Brown University and Rhode Island Hospital that’s trying to restore the ability of patients with spinal cord injury to walk again. Follow Intel AI Research on Twitter: twitter.com/intelairesearch Follow Hanlin on Twitter: twitter.com/hanlintang Follow Abigail on Twitter at: twitter.com/abigailhingwen Learn more about Intel’s global research at: intel.com/labs
Vikas Bhatia (@vikascb, Head of Product, Azure Confidential Computing) and Ron Perez (@ronprz, Intel Fellow, Security Architecture) talk about the technologies and architecture behind Azure Confidential Computing. SHOW: 472. SHOW SPONSOR LINKS: CloudAcademy - Build hands-on technical skills. Get measurable results. Get 50% off the monthly price of CloudAcademy by using code CLOUDCAST. Datadog Security Monitoring Homepage - Modern Monitoring and Analytics. Try Datadog yourself by starting a free, 14-day trial today. Listeners of this podcast will also receive a free Datadog T-shirt. BMC Wants to Know if your business is on its A-Game. BMC Autonomous Digital Enterprise. CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw. PodCTL Podcast is Back (Enterprise Kubernetes) - http://podctl.com. SHOW NOTES: Azure Confidential Computing. Intel and Microsoft Azure partnership page. Intel® SGX: Moving Beyond Encrypted Data to Encrypted Computing. Confidential Computing Consortium (website). Topic 1 - Welcome to the show. Before we dig into today’s discussion, can you give us a little bit about your background? Topic 2 - Defense in Depth is a strategy that has long been in place in Enterprise computing. We’ve seen previous approaches that connected the OS or Application with the Hardware (e.g. Intel TXT). How has this space evolved over the last few years, and what are some of the reasons why we need another level of depth? Topic 3 - Let’s talk about the technology basics of Confidential Computing. What are the software elements (Application, OS, SDK) and what are the hardware elements? Topic 4 - What is the normal migration path for a company to move workloads into Confidential Computing environments? Is this primarily for new workloads, or does it apply to existing applications too? Topic 5 - Azure has the ability to deliver either Confidential VMs, or recently added Confidential containers along with AKS. When does it make sense to be confidential in one part of the stack vs. another? Topic 6 - What are some areas where you’re seeing the broader ecosystem (e.g. technology partners or end-user customers) beginning to expand out the functionality of Confidential Computing? FEEDBACK? Email: show at thecloudcast dot net. Twitter: @thecloudcastnet
Steve Grobman is Senior Vice President and Chief Technology Officer at McAfee. In this role, he sets the technical strategy and direction to create technologies that protect smart, connected computing devices and infrastructure worldwide. Grobman leads McAfee’s development of next generation cyber-defense and data science technologies, threat and vulnerability research and internal CISO and IT organizations. Prior to joining McAfee, he dedicated more than two decades to senior technical leadership positions related to cybersecurity at Intel Corporation where he was an Intel Fellow. He has written numerous technical papers and books and holds 27 U.S. patents. He earned his bachelor's degree in computer science from North Carolina State University. Jeanette Manfra, Assistant Director for Cybersecurity, Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency. Ms. Manfra leads the Department of Homeland Security (DHS) mission of protecting and strengthening the nation’s critical infrastructure from cyber threats. Previously, Ms. Manfra served as Assistant Secretary for the Office of Cybersecurity and Communications (CS&C) for the National Protection and Programs Directorate (NPPD) before the agency became CISA on November 16, 2018. Prior to this position, Ms. Manfra served as Acting Deputy Under Secretary for Cybersecurity and Director for Strategy, Policy, and Plans for NPPD. Ms. Manfra also served as Senior Counselor for Cybersecurity to the Secretary of Homeland Security and Director for Critical Infrastructure Cybersecurity on the National Security Council staff at the White House. At DHS, she held multiple positions in the Cybersecurity Division, including advisor for the Assistant Secretary for Cybersecurity and Communications and Deputy Director, Office of Emergency Communications, during which time she led the Department’s efforts in establishing the Nationwide Public Safety Broadband Network. Before joining DHS, Jeanette served in the U.S. Army as a communications specialist and a Military Intelligence Officer. Matt Turek, Program Manager, Information Innovation Office, Defense Advanced Research Projects Agency (DARPA). Dr. Matt Turek joined DARPA’s Information Innovation Office (I2O) as a program manager in July 2018. His research interests include computer vision, machine learning, artificial intelligence, and their application to problems with significant societal impact. Prior to his position at DARPA, Turek was at Kitware, Inc., where he led a team developing computer vision technologies. His research focused on multiple areas, including large scale behavior recognition and modeling; object detection and tracking; activity recognition; normalcy modeling and anomaly detection; and image indexing and retrieval. Turek has made significant contributions to multiple DARPA and Air Force Research Lab (AFRL) efforts and has transitioned large scale systems for operational use. Before joining Kitware, Turek worked for GE Global Research, conducting research in medical imaging and industrial inspection. Turek holds a Doctor of Philosophy in computer science from Rensselaer Polytechnic Institute, a Master of Science in electrical engineering from Marquette University, and a Bachelor of Science in electrical engineering from Clarkson University. His doctoral work focused on combinatorial optimization techniques for computer vision problems. Turek is a co-inventor on 14 patents and co-author of multiple publications, primarily in computer vision.
Moderated by James A. Lewis, SVP & Director, CSIS Technology Policy Program

1:45PM - Registration Opens
2:00PM - Speaker Introductions
2:05PM - Opening Remarks
2:20PM - Moderated Discussion Begins...
AI and HPC are highly complementary – flip sides of the same data- and compute-intensive coin. In this Chip Chat podcast, Dr. Pradeep Dubey, Intel Fellow and director of Intel's Parallel Computing Lab, explains why it makes sense for the two technology areas to come together and how Intel is supporting their convergence. AI developers tend to be data scientists, focused on deriving intelligence and insights from massive amounts of digital data, rather than typical HPC programmers with deep system programming skills. Because Intel® architecture serves as the foundation for both AI and HPC workloads, Intel is uniquely positioned to drive their convergence. Its technologies and products span processing, memory, and networking at ever-increasing levels of power and scalability. For more information on developing HPC and AI on Intel hardware and software, visit the Intel Developer Zone at software.intel.com. More about AI activities across Intel is online at ai.intel.com. For details, click on the Technology and Research tabs. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation
This is the second of a series of podcasts recorded at Flash Memory Summit 2018 in Santa Clara. In this episode, Chris talks to Amber Huffman, Intel Fellow and President and founder of NVM Express Inc. NVM Express is the standards body that governs the development of the NVMe base standard, NVMe-MI (Management Interface) and […] The post #61 – Introduction to NVM Express with Amber Huffman appeared first on Storage Unpacked Podcast.
Dr. Bill Magro, Intel Fellow and Chief Technologist for High Performance Computing (HPC) at Intel, joins Chip Chat to discuss Intel Select Solutions for Simulation & Modeling. Dr. Magro works on Intel's HPC strategy, helping customers overcome their challenges with HPC and driving new capabilities back into the product road map. In this interview, Dr. Magro discusses the evolution of HPC and how HPC's scope has grown to incorporate workloads like AI and advanced analytics. Dr. Magro then focuses on Intel Select Solutions for Simulation & Modeling and how they are lowering costs and enabling more customers to take advantage of HPC. To learn more about Intel Select Solutions for Simulation & Modeling, please visit https://intel.com/selectsolutions.
In this Rich Report podcast, Pradeep Dubey discusses AI & The Virtuous Cycle of Compute.

Traditionally, there has been a division of labor between computers and humans: all forms of number crunching and bit manipulation are left to computers, whereas intelligent decision-making is left to us humans. We are now at the cusp of a major transformation that can disrupt this balance. The disruption is triggered by an unprecedented convergence of massive compute with massive data, along with some recent algorithmic advances. This confluence has the potential to spur a virtuous cycle of compute.

Deep Learning was recently scaled to 15 petaflops of performance on the Cori supercomputer at NERSC; Cori Phase II features over 9,600 Knights Landing (KNL) processors. This scale can significantly change how we do computing and what computing can do for us. In this talk, I discuss some of the application-level opportunities and system-level challenges that lie at the heart of this intersection of traditional high performance computing with emerging data-intensive computing.

Dr. Pradeep Dubey is an Intel Fellow and Director of the Parallel Computing Lab (PCL), part of Intel Labs. His research focuses on computer architectures that efficiently handle new compute- and data-intensive application paradigms for the future computing environment. He previously worked at IBM's T.J. Watson Research Center and Broadcom Corporation. He has contributed to the design, architecture, and application performance of various microprocessors, including the IBM PowerPC, Intel i386, i486, Pentium, Xeon, and the Xeon Phi line of processors. He holds over 36 patents, has published more than 100 technical papers, won the Intel Achievement Award in 2012 for Breakthrough Parallel Computing Research, and was honored with Purdue University's 2014 Outstanding Electrical and Computer Engineer Award. Dr. Dubey received a PhD in electrical engineering from Purdue University. He is a Fellow of IEEE.
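For a sense of scale, a quick back-of-the-envelope division of the figures quoted in the talk abstract is instructive. This is a sketch only; it assumes the 15 PF figure is aggregate throughput across all of Cori Phase II's KNL nodes.

```python
# Rough per-node throughput implied by the quoted scaling run:
# 15 petaflops sustained across ~9,600 Knights Landing nodes.
# Both numbers come from the episode text; treat the result as
# an estimate, not a benchmark.

SUSTAINED_PFLOPS = 15      # whole-machine deep learning throughput
NODES = 9600               # Cori Phase II KNL node count

per_node_tflops = SUSTAINED_PFLOPS * 1000 / NODES
print(f"~{per_node_tflops:.2f} TF/s sustained per KNL node")
# -> ~1.56 TF/s per node once the run is spread machine-wide
```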
The O’Reilly Radar Podcast: AI on the hype curve, imagining nurturing technology, and gaps in the AI conversation.

This week, I sit down with anthropologist, futurist, Intel Fellow, and director of interaction and experience research at Intel, Genevieve Bell. We talk about what she’s learning from current AI research, why the resurgence of AI is different this time, and five things that are missing from the AI conversation. Here are some highlights:

AI’s place on the wow-ahh-hmm curve of human existence

I think in some ways, for me, the reason for wanting to put AI into a lineage is that many of the ways we respond to it as human beings are remarkably familiar. I'm sure you and many of your viewers and listeners know about the Gartner Hype Curve: the notion that at first you don’t talk about it very much, then comes the arc where it's everywhere, and then it goes to the valley of it not being so spectacular until it stabilizes. I think most humans respond to technology not dissimilarly. There's this moment where you go, 'Wow. That’s amazing,' promptly followed by the 'Uh-oh, is it going to kill us?' promptly followed by the, 'Huh, is that all it does?' It's sort of the wow-ahh-hmm curve of human existence. I think AI is in the middle of that. At the moment, if you read the tech press, the trade presses, and the broader news, AI is simultaneously the answer to everything. It's going to provide us with safer cars, safer roads, better weather predictions. It's going to be a way of managing complex data in simple manners. It's going to beat us at chess. On the one hand, it's all of that goodness. On the other hand, the traditional fears of technology are being raised: is it going to kill us? Will it be safe? What does it mean to have autonomous things? What are they going to do to us? Then there are the reasonable questions about what models we are using to build this technology out. When you look across the ways it's being talked about, there are those three different factors: one of excessive optimism, one of a deep dystopian fear, and another starting to run a critique of the decisions that are being made around it. I think that’s, in some ways, a very familiar set of positions about a new technology.

Looking beyond the app that finds your next cup of coffee

I sometimes worry that we imagine that each generation of new technology will somehow mysteriously and magically fix all of our problems. The reality is that 10, 20, 30 years from now, we will still be worrying about the safety of our families and our kids, worrying about the integrity of our communities, wanting a good story to keep us company, worrying about how we look and how we sound, and being concerned about the institutions in our existence. Those are human preoccupations that are thousands of years deep. I'm not sure they change this quickly. I do think there are harder questions about what that world will be like and what it means to have the possibility of machinery that is much more embedded in our lives and our world, and about what that feels like. In the fields that I come out of, we've talked a lot, since about the same time AI emerged, about human-computer interactions, and those conversations really sat inside one paradigm: what we might call a command-and-control infrastructure. You give a command to the technology, you get some sort of answer back; whether that’s old command prompt lines or Google search boxes, it is effectively the same thing.
We're starting to imagine a generation of technology that is a little more anticipatory and a little more proactive, that’s living with us—you can see the first generation of those, whether that's Amazon's Echo or some of the early voice personal assistants. There's a new class of intelligent agents coming, and I wonder sometimes if, as we move from a world of human-computer interactions to a world of human-computer relationships, we have to start thinking differently. What does it mean to imagine technology that is nurturing, or that cares, or that wants you to be happy, not just efficient, or that wants you to be exposed to transformative ideas? It would be very different than the app that finds you your next cup of coffee.

There’s a lot of room for good AI conversations

What's missing from the AI conversation are the usual things I think are missing from many conversations about technology. One is an awareness of history. I think, like I said, AI doesn’t come out of nowhere. It came out of a very particular set of preoccupations and concerns in the 1950s and a very particular set of conversations. We have, in some ways, erased that history such that we forget how it came to be. For me, a sense of history is missing. As a result of that, I think more attention to a robust interdisciplinarity is missing, too. If we're talking about a technology that is as potentially pervasive as this one and as potentially close to us as human beings, I want more philosophers and psychologists and poets and artists and politicians and anthropologists and social scientists and critics of art—I want them all in that conversation because I think they're all part of it. I worry that this just becomes a conversation of technologists with each other about speeds and feeds and their latest instantiation, as opposed to saying: if we really are imagining a form of an object that will be in dialogue with us, supplementing us, and replacing us in some places, I want more people in that conversation. That's the second thing I think is missing. The third is emerging, and I hear in people like Julia Ng and my colleagues Kate Crawford and Meredith Whittaker an emerging critique: How do you critique an algorithm? How do you start to unpack a black-boxed algorithm to ask what pieces of data it is weighing against what, and why? How do we have the kind of dialogue that says: sure, we can talk about the underlying machinery, but we also need to talk about what's going into those algorithms and what it means to train these objects. For me, there's then the fourth thing, which is: where is theory in all of this? Not game theory. Not theories about machine learning and sequencing and logical decision-making, but theories about human beings, theories about how certain kinds of subjectivities are made. I was really struck, in reading many of the histories of AI but also the contemporary work, by how much we make of normative examples in machine learning and in training, where you're trying to work out the repetition—what's the normal thing, so we should just keep doing it? I realized that sitting inside those are always judgments about what is normal and what isn't. You and I are both women. We know that routinely women are not normal inside those engines.
There's something about what it would mean to start asking a set of theoretical questions that come out of feminist theory, Marxist theory, queer theory, and critical race theory about what it means to imagine 'normal' here, and what is and what isn't. Machine learning people would recognize this as the question of how you deal with the outliers. My theory would be: what if we started with the outliers rather than the center, and where would that get you? I think the fifth thing that’s missing is: what are the other ways into this conversation that might change our thinking? As anthropologists, one of the things we're always really interested in is: can we give you that moment where we de-familiarize something? How do you take a thing you think you know and turn it on its head so you go, 'I don’t recognize that anymore'? For me, that’s often about how you give it a history. Increasingly, I realize in this space there's also a question to ask about what other things we have tried to machine learn on—what other things have we tried to use natural language processing, reasoning, and induction on, to make into supplemental humans or into things that do tasks for us? Of course, there's a whole category of animals we've trained that way—carrier pigeons, sheep dogs, bomb-sniffing dogs, Koko the gorilla who could sign. There's a whole category of those, and I wonder if there's a way of approaching that topic that gets us to think differently about learning, because that’s sitting underneath all of this, too. All of those things are missing. When you've got that many things missing, that’s actually good. It means there's a lot of room for good conversations.
Dr. Pradeep Dubey, Intel Fellow at Intel Labs, joins us live from the Intel Developer Forum in San Francisco. Dr. Dubey's Parallel Computing Lab focuses on compute-intensive applications such as machine learning. Dr. Dubey discusses the need to scale infrastructure to meet the growing demands of AI. Where software was once designed to make the right decisions, today it is increasingly designed to ask the right questions, and answering those questions with machine learning can require significant computational resources. Dr. Dubey highlights the ways Intel is accelerating machine learning via projects like the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), an open source library for deep learning, and Intel-optimized Caffe, a fork dedicated to improving the framework's performance when running on CPUs, especially Intel® Xeon® processors. For more information on Dr. Dubey's work, please visit http://pcl.intel-research.net/.
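To make the CPU-optimized stack mentioned above a little more concrete, here is a minimal sketch of inference through Caffe's standard Python bindings in CPU mode, which is the path a CPU-optimized fork accelerates with MKL-DNN-backed layer implementations underneath. The 'deploy.prototxt' and 'model.caffemodel' paths are hypothetical placeholders, and this is an illustrative sketch rather than anything specific to Intel's fork.

```python
# Minimal CPU inference sketch using Caffe's Python bindings.
# The file paths below are hypothetical placeholders.
import numpy as np
import caffe

caffe.set_mode_cpu()  # force the CPU execution path

net = caffe.Net('deploy.prototxt',    # network definition
                'model.caffemodel',   # trained weights
                caffe.TEST)           # inference mode

# Fill the input blob with one batch shaped to the network's input,
# e.g. (1, 3, 224, 224) for a typical image classifier.
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)

outputs = net.forward()               # dict: output blob name -> ndarray
print({name: blob.shape for name, blob in outputs.items()})
```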
In this episode, Amber Huffman, Intel Fellow and Director of Storage Interfaces in the Non-Volatile Memory Solutions Group at Intel, joins us again to discuss the hardware infrastructure innovations being driven at Intel. Amber chats about NVM Express* (NVMe*) PCI Express® SSDs, highlighting features that deliver improved efficiency and scalability, lower latency, and optimized storage. She shares the evolution to NVMe over Fabrics, which enables low-latency use of NVMe SSDs across fabrics like Ethernet and Omni-Path Architecture. Amber discusses how changes in the cloud are being motivated by a virtuous cycle of storage and networking, and explains that the faster we can allow the cloud to take advantage of NVMe, the more we can do with it. For more information visit www.nvmexpress.org or www.intel.com/ssd
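For a hands-on sense of how NVMe devices appear to software, here is a small sketch that enumerates NVMe controllers through the Linux sysfs tree. It assumes a Linux host with the nvme driver's standard /sys/class/nvme layout; on recent kernels the transport attribute also distinguishes local PCIe controllers from NVMe over Fabrics connections.

```python
# Enumerate NVMe controllers via Linux sysfs. Assumes a Linux host
# with the nvme driver loaded; nvme-cli's `nvme list` shows a similar
# view. Illustrative sketch only.
from pathlib import Path

SYSFS_NVME = Path("/sys/class/nvme")

def read_attr(ctrl: Path, name: str) -> str:
    """Read one sysfs attribute, tolerating missing files."""
    try:
        return (ctrl / name).read_text().strip()
    except OSError:
        return "n/a"

if SYSFS_NVME.is_dir():
    for ctrl in sorted(SYSFS_NVME.iterdir()):       # nvme0, nvme1, ...
        print(ctrl.name,
              read_attr(ctrl, "model"),
              read_attr(ctrl, "serial"),
              read_attr(ctrl, "transport"))          # e.g. pcie vs. fabrics
else:
    print("No NVMe controllers found (or not a Linux host).")
```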
Dr. Al Gara, Intel Fellow and Chief Architect for Exascale Systems in the Data Center Group at Intel, stops by to talk about how Intel's Scalable System Framework (SSF) is meeting the extreme challenges and opportunities that researchers and scientists face in high performance computing (HPC) today. He explains that SSF incorporates many different Intel technologies, including Intel® Xeon® and Intel® Xeon Phi™ processors, Intel® Omni-Path Fabric, silicon photonics, and innovative memory technologies, and efficiently integrates these elements into a broad spectrum of system solutions optimized for both compute- and data-intensive workloads. Al emphasizes that the framework can scale from very small HPC systems all the way up to exascale computing systems, meeting users' needs in a very scalable and flexible way. To learn more, visit http://www.intel.com/ and search for 'scalable system framework' or visit www.intel.com/ssf
In this archive of a livecast from the Intel Developer Forum, Mario Paniccia, an Intel Fellow and GM of Silicon Photonics at Intel, and celebrity guest Andy Bechtolsheim, the co-founder of Sun Microsystems and current Founder, Chief Development Officer and Chairman of Arista Networks, stop by to talk about the need for 100 Gbps links in large networks with massive aggregate throughputs. The biggest challenge to mainstream deployment is the cost of optics, which Intel Silicon Photonics addresses by marrying optics with the silicon manufacturing process. For more information, visit http://intel.ly/SiliconPhotonics.
Eric Dishman, Intel Fellow, speaks about healthcare and technology.
Genevieve Bell, an Intel Fellow who was recently named one of the top 50 most creative people in business by Fast Company, discusses market research and consumer ethnography designs. (November 9, 2009)
Al Fazio, an Intel Fellow and Director of Memory Technology Development in the Technology and Manufacturing Group, talks about non-volatile memories in the form of NAND memory.
How did they do it? How do you take a group of highly specialized computer wafer technicians and create one of the top computer chip manufacturers in the world? In this interview, Karl Kempf, an Intel Fellow and Director of Decision Engineering at Intel, explains how his expert group brought better decision making to Intel – and helped a growing company blossom.