Podcasts about Intel Labs

  • 68 podcasts
  • 117 episodes
  • 38m avg. duration
  • 1 new episode monthly
  • Latest: Aug 28, 2024



Best podcasts about Intel Labs

Latest podcast episodes about Intel Labs

Compromising Positions - A Cyber Security Podcast
EPISODE 43: Not Everybody Does It Like US…So Hire Anthropologists in Cybersecurity!

Compromising Positions - A Cyber Security Podcast

Play Episode Listen Later Aug 28, 2024 51:55


This episode we are joined by the wonderful Jamie Sherman, a cultural anthropologist and principal UX researcher at ESRI. We explore how words like 'security' and 'privacy' are slippery, and how that can lead to a lot of confusion and misunderstanding; how to ask better questions to really understand how to protect the people and customers in our organisations; and how to create the perfect 'sniff test' so people feel more able to trust their intuition rather than relying on crap passwords. And finally, we unpack the importance of hiring more anthropologists in cybersecurity, because not everybody does it like us.

Key Takeaways:

Anthropology in Cybersecurity: A Game-Changer. Anthropology isn't just the study of ancient cultures; it's about understanding how people make meaning in their lives today. Anthropologists, through methods like ethnography and participant observation, can uncover the human side of cybersecurity, ensuring that security measures align with real-world behaviours, not just technical protocols.

What does 'security' actually mean? Words are slippery. We don't always have a shared meaning for terms like "security" and "privacy." Misunderstandings can lead us down the wrong path, creating a sense of betrayal. It's essential to ask, "What is security to you?" and to make security visible without overwhelming users with cognitive load.

Sniff the milk: is it off? Traditional security measures, like passwords and phishing awareness, often overload users with cognitive demands. But what if we focused on building intuition instead? We can help users develop a gut sense of when something feels wrong, triggering the same response as sniffing milk that has gone off. That is an essential skill in a world where data is constantly on the move and always at risk.

Asking the Right Questions: Better questions lead to better security controls and a deeper understanding of user needs.

Links to everything we discussed in this episode can be found in the show notes, and if you liked the show, please do leave us a review. Follow us on all good podcasting platforms and via our YouTube channel, and don't forget to share on LinkedIn and in your teams. It really helps us spread the word and get high-quality guests on future episodes. We hope you enjoyed this episode. See you next time, keep secure, and don't forget to ask yourself, 'Am I the compromising position here?'

Show Notes:
Pumping Iron - Give it a watch!
Over Half of Cybersecurity Professionals Engage in Risky Behaviours at Work - KnowBe4
Purity and Danger: An Analysis of Concepts of Pollution and Taboo - Mary Douglas

About Jamie Sherman: Jamie Sherman is a cultural anthropologist and principal UX researcher at ESRI. She holds a PhD in anthropology (Princeton, 2011) and has been in the technology and UX space since joining Intel Labs in 2012, where she worked on a range of existing and emerging technologies, including wearable tech, virtual reality, and content creation. Her most recent focus is on the use of data and analytics to do things, from making movies to making maps.

Links for Jamie Sherman: Jamie's LinkedIn

Keywords: cybersecurity, anthropology, user behaviour, privacy, UX

Data Product Management in Action: The Practitioner's Podcast
The Minisodes, Live from Data Connect 2024: In Conversation with Vishakha Gupta-Cledat

Data Product Management in Action: The Practitioner's Podcast

Play Episode Listen Later Aug 14, 2024 13:16


The Data Product Management in Action podcast, brought to you by Soda and executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. We've released a special edition series of minisodes of our podcast. Recorded live at Data Connect 2024, our host Michael Toland engages in short, sweet, informative, and delightful conversations with five prevalent practitioners who are forging their way forward in data and technology.

Recorded on Day 2 of Data Connect 2024, Michael sits down with Vishakha Gupta-Cledat, CEO and co-founder of ApertureData, a spin-off from Intel. They explore ApertureData's mission to simplify the work of data scientists, data engineers, and machine learning teams.

About our host Michael Toland: Michael is a Product Management Coach and Consultant with Pathfinder Product, a Test Double operation. Since 2016, Michael has worked on large-scale system modernization and migration initiatives at Verizon. Outside his professional career, Michael serves as the Treasurer for the New Leaders Council, mentors with Venture for America, sings with the Columbus Symphony, and writes satire for his blog Dignified Product. He is excited to discuss data product management with the podcast audience.

About our guest Vishakha Gupta-Cledat: Vishakha is the Co-founder and CEO of ApertureData. Before launching ApertureData, she spent over seven years at Intel Labs, where she led the design and development of VDMS (the Visual Data Management System), which is now the foundation of ApertureData's flagship product, ApertureDB. Her expertise spans diverse areas, including scheduling in heterogeneous multi-core environments, graph-based storage, applications on non-volatile memory systems, and tackling visual data management challenges for analytics use cases. Connect with Vishakha on LinkedIn.

All views and opinions expressed are those of the individuals and do not necessarily reflect those of their employers or anyone else.

Join the conversation on LinkedIn. Apply to be a guest or nominate a practitioner. Do you love what you're listening to? Please rate and review the podcast, and share it with fellow practitioners you know. Your support helps us reach more listeners and continue providing valuable insights!

memoQ talks
The Present & Future of LLMs and AI in Localization with Mike Dillinger and Jay Marciano

memoQ talks

Play Episode Listen Later Jun 10, 2024 57:59


Mike Dillinger, Ph.D., and Jay Marciano are two leading experts in machine translation (MT), LLMs, and AI in the localization industry. Both have served as president of the Association for Machine Translation in the Americas. Mike has held leading technical positions with large enterprises including LinkedIn, eBay, and Intel Labs. Jay, on the other hand, has helped some of the world's largest LSPs, including SDL International and Lionbridge, develop their MT strategies. In this episode, Mike and Jay talk about what types of tasks AI can do better than humans, how LLMs are evolving, knowledge graphs, the difference in how LSPs and enterprises view the application of LLMs, how the day-to-day work of translators will change in the coming years, and more!

Cyber Security Inside
205. Parallel Computing for AI with Intel Sr. Fellow Pradeep Dubey

Cyber Security Inside

Play Episode Listen Later May 6, 2024 25:25


In this episode of InTechnology, Camille gets into parallel computing with Pradeep Dubey, Intel Senior Fellow at Intel Labs. They talk about how parallel computing works, why it's becoming more necessary, how it uses AI and machine learning to process large amounts of data, the challenges of designing systems and architecture for parallel computing, how machines can help humans make better decisions, and much more. The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Six Pixels of Separation Podcast - By Mitch Joel
SPOS #930 – Juliette Powell On The AI Dilemma

Six Pixels of Separation Podcast - By Mitch Joel

Play Episode Listen Later May 5, 2024 60:36


Welcome to episode #930 of Six Pixels of Separation - The ThinkersOne Podcast. I've known Juliette Powell since she was a famed VJ on MusiquePlus - MuchMusic and I was a music journalist back in the mid-nineties. While we lost touch over the years, we reconnected when she published her book about social media in 2008, 33 Million People in the Room - How to Create, Influence, and Run a Successful Business with Social Networking, and more recently with her latest, The AI Dilemma - 7 Principles for Responsible Technology (co-authored with Art Kleiner). The balance between innovation and ethics in artificial intelligence is becoming increasingly crucial. Juliette, a seasoned consultant at the intersection of technology and business (with her consultancy, KPI), addresses this challenge head-on in The AI Dilemma. The book is a roadmap for businesses and governments looking to harness AI's potential responsibly. Juliette delves into the pressing issues surrounding AI deployment and the imperative of upholding ethical standards. With her extensive background consulting for multinational companies and her research at Columbia University, Juliette brings a wealth of knowledge and a unique perspective to the AI discourse. She explores the dual nature of AI: its capacity to drive unprecedented progress and its potential to perpetuate harm. She articulates the seven principles outlined in her book, which serve as guidelines for developing AI systems that support human flourishing while minimizing risks. These principles focus on rigorous risk assessment, transparency, data protection, bias reduction, accountability, organizational flexibility, and fostering an environment of psychological safety and creative friction.

Juliette's insights are informed by real-world examples and her collaborations with institutions like Intel Labs and governmental bodies, which underscore the complexity of AI's impact across various sectors. Our discussion also touches on the broader social implications of AI, including the challenges posed by data ownership, the illusion of personalized experiences, and the global divide in data value. Juliette addresses the confusion surrounding the term 'AI' and the critical need for digital literacy to navigate its consequences effectively. We all know that AI presents significant challenges, but it also offers remarkable opportunities for those willing to engage with it thoughtfully and ethically. Enjoy the conversation... Running time: 1:00:36. Hello from beautiful Montreal. Subscribe over at Apple Podcasts. Please visit and leave comments on the blog - Six Pixels of Separation. Feel free to connect with me directly on Facebook: Mitch Joel on Facebook. Check out ThinkersOne. Or you can connect on LinkedIn... or on Twitter. Here is my conversation with Juliette Powell. The AI Dilemma - 7 Principles for Responsible Technology. 33 Million People in the Room - How to Create, Influence, and Run a Successful Business with Social Networking. KPI. Follow Juliette on LinkedIn. Follow Juliette on X. This week's music: David Usher 'St. Lawrence River'.

Takeaways:
- Understanding the impact of AI requires critical thinking and digital literacy.
- Data ownership and the responsible deployment of AI are crucial considerations.
- Government regulation and international cooperation are necessary to address the challenges of AI.
- The term 'AI' is often misused and misunderstood, leading to confusion in the marketplace.
- The development and deployment of AI should be driven by ethical considerations and a risk-benefit analysis.
- The Apex Benchmark.
- Alignment and Human Values: Ensuring that AI systems align with human values is a complex challenge, as different cultures and individuals have varying moral perspectives.
- Creative Friction and Diverse Perspectives: The best products and ideas are often the result of collaboration and diverse perspectives.
- AI as a Tool for Creativity: AI can enhance human creativity by providing new perspectives, prompting exploration of new ideas, and generating content.
- Ethics, AI, and the Future of Work: The ethical implications of AI are significant, particularly in relation to job displacement and income inequality.
- Unconditional Love and Connection: The power of unconditional love and connection can shape our perspectives and actions.

Chapters:
00:00 - Introduction and Background
02:28 - Early Recognition of AI's Impact
04:02 - Understanding Machine Learning and Data Ownership
06:36 - Lack of Transparency in AI Systems
07:42 - The Quandary of Personal Data and AI
09:33 - The Disconnect Between Public Awareness and Concern
11:08 - The Rise of AI and Data as New Oil
15:12 - The Need for Responsible AI Deployment
16:24 - Government Discourse and Regulation on AI
21:35 - Nationalism and Geopolitical Competition in AI
24:18 - Confusion and Misuse of the Term 'AI'
28:56 - The Importance of Digital Literacy
32:19 - The Pressure to Deploy AI and the Lack of Understanding
39:16 - The Excitement and Impact of ChatGPT
41:26 - The Apex Benchmark and the Race to Follow
45:23 - Alignment and the Challenge of Human Values
48:53 - Creative Friction and the Power of Diverse Perspectives
50:09 - The Medium is the Message: AI as a Tool for Creativity
54:24 - Ethics, AI, and the Future of Work
55:56 - Unconditional Love and the Power of Connection

Cyber Security Inside
Top Conversations on AI in 2023: from LLMs to Regulations

Cyber Security Inside

Play Episode Listen Later Dec 25, 2023 19:09


In this episode of InTechnology, Camille looks back on some of the most exciting conversations on AI in 2023. Things kick off with Andres Rodriguez, Intel Fellow, and his conversation on deep learning, a subset of machine learning. Then, Selvakumar Panneer and Omesh Tickoo, Principal Engineers at Intel Labs, discuss synthetic data. This is followed by a conversation on large language models, or LLMs, with Sanjay Rajagopalan, Chief Design and Strategy Officer at Vianai Systems. Finally, the episode wraps up with independent AI policy and governance advisor Chloe Autio giving her insight on emerging AI regulations.

Listen to the full episodes:
What That Means with Camille: Deep Learning (142): https://cybersecurityinside.libsyn.com/142-what-that-means-with-camille-deep-learning
What That Means with Camille: Synthetic Data (139): https://cybersecurityinside.libsyn.com/139-what-that-means-with-camille-synthetic-data
Why and How Enterprises Are Adopting LLMs (174): https://cybersecurityinside.libsyn.com/174-why-and-how-enterprises-are-adopting-llms
Emerging U.S. Policies, Legislation, and Executive Orders on AI (178): https://cybersecurityinside.libsyn.com/178-emerging-us-policies-legislation-and-executive-orders-on-ai
Deep Dive: U.S. Executive Order on Artificial Intelligence (181): https://cybersecurityinside.libsyn.com/181-deep-dive-us-executive-order-on-artificial-intelligence

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Cyber Security Inside
183. The Hottest Cybersecurity Topics of 2023

Cyber Security Inside

Play Episode Listen Later Dec 4, 2023 14:15


In this episode of InTechnology, Camille takes a look at our most popular episodes on cybersecurity in 2023. First up are conversation highlights on Root of Trust and firmware attacks with Jorge Myszne, Co-Founder of Kameleon. After that are highlights on confidential computing with Mark Russinovich, Technical Fellow and CTO of Microsoft Azure, and Anil Rao, a VP and GM at Intel. Finally, there are highlights on AI deepfakes with Ilke Demir, Senior Staff Research Scientist at Intel Labs and a creator of FakeCatcher. The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Leadership Today Podcast
Interview - Juliette Powell - The AI Dilemma

Leadership Today Podcast

Play Episode Listen Later Oct 17, 2023 43:58


In this interview we speak with Juliette Powell about her latest book, The AI Dilemma - 7 Principles for Responsible Technology. During the interview we discuss a range of issues and opportunities around Artificial Intelligence:
- What drew Juliette into AI in the first place
- The "dilemma" in The AI Dilemma
- Ways to think about AI in terms of triple-A systems: algorithmic, autonomous, and automated
- How to avoid the dehumanising impact of AI in organisations and society
- How to introduce AI in an organisation while avoiding magnification of existing bias
- Regulation of AI by technology firms and government
- The implications of AI for the employment market

Juliette Powell is an independent researcher, entrepreneur, and keynote speaker at the intersection of technology and business. Her consulting services focus on global strategy and scenarios related to AI and data, banking, mobile, retail, social gaming, and responsible technology. She has delivered live commentary on Bloomberg, BNN, NBC, CNN, ABC, and the BBC, and presentations at institutions like The Economist, Harvard, and MIT. She works with organizations such as Reuters, the United Nations, Warner Brothers, l'Union des Banques Suisses, Microsoft, The Red Cross, Cirque du Soleil, IBM, and the World Bank Group. Juliette's previous book is 33 Million People in the Room: How to Create, Influence, and Run a Successful Business with Social Networking (Financial Times Press, 2009). She was a cofounder, with Intel Labs, of the research network WeTheData. The AI Dilemma is based in part on her research conducted at Columbia University. Powell is a faculty member at New York University's Interactive Telecommunications Program and the founding partner of Kleiner Powell International (KPI), kleinerpowell.com. You can learn more about Juliette and her latest book here: https://www.juliettepowell.com/

Cyber Security Inside
175. What That Means with Camille: Encryption and Encrypted Computing

Cyber Security Inside

Play Episode Listen Later Oct 9, 2023 22:16


In this episode of What That Means, Camille gets into encryption and encrypted computing with Ro Cammarota, Principal Engineer and Chief Scientist of Privacy-Enhanced Computing Research in the Emerging Security Lab at Intel Labs. They talk about Intel's Encrypted Computing Software Development Kit, definitions of encrypted computing and homomorphic encryption, practical applications and standardization of homomorphic encryption, the potential security effects of quantum computing, hardware acceleration, and much more. Learn more from Ro about Intel Labs' work with the DARPA DPRIVE program to make fully homomorphic encryption viable here: https://community.intel.com/t5/Blogs/Tech-Innovation/Data-Center/Intel-Labs-Continues-Focused-Research-and-Standards-Efforts-to/post/1488532 The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.  

Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews
Juliette Powell & Art Kleiner, Authors of The AI Dilemma: 7 Principles for Responsible Technology

Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews

Play Episode Listen Later Sep 27, 2023 95:55


Today, Dinis Guarda interviews Juliette Powell and Art Kleiner, authors of The AI Dilemma: 7 Principles for Responsible Technology.

Juliette Powell is an author, a television creator with 9,000 live shows under her belt, a technologist, and a sociologist. Juliette's consultancy services are sought after by influential organisations including the United Nations, Microsoft, and Warner Brothers.

Art Kleiner is a versatile writer, editor, and entrepreneur deeply engaged in contemporary business and tech challenges. He is now the Principal and Editor-in-Chief at Kleiner Powell International. Prior to that, as the editor-in-chief of PwC Global and of strategy+business, PwC's award-winning management magazine with a circulation of 1.3 million, Art published some of his bestsellers.

Juliette Powell Biography

Juliette Powell is an author, a television creator with 9,000 live shows under her belt, a technologist, and a sociologist. Her recent book, "The AI Dilemma: 7 Principles for Responsible Technology" (August 2023), co-authored with Art Kleiner, brings together insights from engineering, business, government, and social justice perspectives.

Juliette is the Founder and Managing Partner at Kleiner Powell International (KPI), a New York City-based consultancy. She is also the founder, researcher, and curator at Turing AI and WeTheData.org, mapping the data economy with Intel Labs. Juliette's consultancy services are sought after by influential organisations including the United Nations, Microsoft, and Warner Brothers.

She promotes digital literacy, critical thinking, and collaboration, and contributes to discussions on the future of the internet and connected society. A business journalist, Juliette's research at Columbia University focuses on responsible AI deployment and ethical data exploration. A graduate in Economics from the Rotman School of Management, University of Toronto, and in Finance and International Business from McGill University, Juliette is also on the faculty at New York University. She was the Miss Canada titleholder in 1989, the contest's first Black Canadian winner.

Learn more about Juliette Powell at https://www.openbusinesscouncil.org/wiki/juliette-powell

Art Kleiner Biography

Art Kleiner is a versatile writer, editor, and entrepreneur deeply engaged in contemporary business and tech challenges. During his early career at the Whole Earth Catalog, he led the best-selling "Fifth Discipline Fieldbook" series. As the editor-in-chief of PwC Global and of strategy+business, PwC's award-winning management magazine with a circulation of 1.3 million, Art published some of his bestsellers. With a journalism master's from UC Berkeley, Kleiner is part of the Marshall Goldsmith 100 Coaches group. He's a faculty member at New York University's Interactive Telecommunications Program.

Learn more about Art Kleiner at https://www.openbusinesscouncil.org/wiki/art-kleiner

About Dinis Guarda's profile and channels:
https://www.openbusinesscouncil.org
https://www.intelligenthq.com
https://www.hedgethink.com/
https://www.citiesabc.com/
https://openbusinesscouncil.org/wiki/dinis-guarda

More interviews and research videos on Dinis Guarda's channels.

The Six Five with Patrick Moorhead and Daniel Newman
The Future of Tech with Intel's Rich Uhlig - Six Five On the Road

The Six Five with Patrick Moorhead and Daniel Newman

Play Episode Listen Later Sep 20, 2023 31:08


On this episode of The Six Five – On The Road, hosts Daniel Newman and Patrick Moorhead welcome Rich Uhlig, Intel Senior Fellow and Corporate VP, Director of Intel Labs, for a conversation on Intel's vision for the future of technology, including the latest developments and what Intel Labs has been working on in the AI field. Their discussion covers:
- Intel Labs' mission and strategic focus areas
- Cutting-edge advancements in neuromorphic computing and silicon photonics by Intel Labs
- Intel Labs' contributions to the AI landscape
- Intel Labs' robust approach to AI security concerns
- An in-depth look at Intel's quantum computing advancements

Windows Central Podcast
A 360-Degree Folding Display Surface Duo 3?

Windows Central Podcast

Play Episode Listen Later Sep 5, 2023 117:22


On this week's episode of the Windows Central Podcast, Dan and Zac discuss Microsoft's latest patent and whether it's for the next-gen Surface Duo, Paint and Photos getting AI capabilities, a preview of Microsoft's September Surface event, all the latest on Windows 11 Insider builds, Bing Chat AI coming to the Microsoft Launcher and Surface Duo, Zac's reviews of his new Google Pixel Fold phone and the Lenovo Legion Go, what Dan can reveal about his trip to Intel Labs in Malaysia, and more!

Links:
Windows 11 will soon let you uninstall even more in-box apps - Windows Central
Microsoft may bring AI capabilities to apps like Paint and Photos on Windows 11 - Windows Central
Microsoft announces 2023 Surface event - Windows Central
Microsoft patents 360-degree folding display — is this the Surface Duo 3? - Windows Central
Bing Chat AI is FINALLY coming to the Microsoft Launcher and Surface Duo - Windows Central
I tried Lenovo's new Steam Deck and ROG Ally killer - Windows Central

Windows Central Podcast Sponsors:
Indeed: Hire better with Indeed. Visit indeed.com/wcp to start hiring now.

Follow us on Twitter: @Daniel_Rubino @ZacBowden

Lancefield on the Line
Juliette Powell and Art Kleiner: Using AI responsibly

Lancefield on the Line

Play Episode Listen Later Aug 30, 2023 40:08


How should we be using AI positively and constructively?

AI is part of our lives, both personally and professionally, whether you like it or not. Any senior executive or entrepreneur should be learning more about how best to use AI in their organisation, but there are many implications of its use that need to be considered. In this episode Juliette Powell and Art Kleiner share their wisdom, expertise, and considerable research on this topic to give us a nuanced perspective on its use. We talk about fascinating applications of AI, not just generative AI, and how much control we have over it, or think we do, in particular over the data it uses. We also talk about how we can build trust in the algorithms and systems, how we can mitigate bias, and what companies and governments should be doing more of.

"I don't know where it's going to go, but I am glad to be alive at this moment." – Juliette Powell

You'll hear about:
●      What is Artificial Intelligence?
●      The implications of illusory control
●      What is the AI dilemma?
●      Data privacy solutions with AI
●      Behaviours developers of AI need to change
●      How can leaders create space for AI debate?
●      How should governments use AI?
●      The biggest myths about AI

About Juliette Powell and Art Kleiner:

Juliette Powell is an independent researcher, strategist, entrepreneur, and keynote speaker at the intersection of technology and business. She has delivered live commentary on Bloomberg, BNN, NBC, CNN, ABC, and the BBC, and presentations at institutions like The Economist, Harvard, and MIT. She was a cofounder, with Intel Labs, of the research network WeTheData.

Art Kleiner is a writer, editor, and entrepreneur who focuses on machine, organizational, and human behaviour, the neuroscience of leadership, content strategy, and the strategies and insights needed in business to manage complex dilemmas. His books include The Age of Heretics: A History of the Radical Thinkers Who Reinvented Corporate Management and The Wise Advocate: The Inner Voice of Strategic Leadership.

Their resources:
- The book 'The AI Dilemma': https://kleinerpowell.com/
- Profiles: Juliette - https://kleinerpowell.com/juliette-powell; Art - https://kleinerpowell.com/art-kleiner

My resources:
Sign up to my Strategic Leader newsletter (http://bit.ly/36WRpri) for stimuli, ideas, guidance, and tips on how to lead your team, organisation, or self more effectively, delivered straight to your inbox.
Subscribe to my YouTube channel (http://bit.ly/3cFGk1k), where you can watch the conversation.
Take the Extraordinary Essentials test (https://bit.ly/3EhSKY5) to identify your strengths and development areas as a strategic leader.

For more details about me:
●      Services (https://bit.ly/373jctk) to CEOs, entrepreneurs, and professionals.
●      About me (https://bit.ly/3LFsfiO) - my background, experience, and philosophy.
●      Examples of my writing (https://bit.ly/3O7jkc7).
●      Follow me and engage with me on LinkedIn (https://bit.ly/2Z2PexP).
●      Follow me and engage with me on Twitter (https://bit.ly/36XavNI).

Startupeable
132. Omar Florez, ex Intel Labs | Cómo las Startups de Silicon Valley Usan Inteligencia Artificial

Startupeable

Play Episode Listen Later Aug 1, 2023 52:32


Omar Florez is a machine learning research scientist with a PhD in Computer Science. Omar has worked for more than 10 years in Silicon Valley on Artificial Intelligence at companies including Intel, Twitter, and the American bank Capital One.

In recent months we have seen an explosion of AI. Lift any rock and you will find AI tools for writing copy, learning languages, serving customers, and so on. But are these startups truly innovative and differentiated, or are they just a thin application built on top of ChatGPT?

Enjoy this great conversation about artificial intelligence.

The easiest way to help us grow is by leaving a review on Spotify or Apple Podcasts: https://ratethispodcast.com/startupeable

This episode is presented by: Talently Hire connects you within 72 hours with software developers who have at least 3 years of experience. Learn more at https://talently.tech/hire/

Episode notes: https://startupeable.com/intel/

For more content, follow us on

The Catchup
Guardians of Reality: Unveiling Intel's Real-Time Deepfake Detection

The Catchup

Play Episode Listen Later Jul 24, 2023 43:06


In this riveting episode, we dive into the fascinating yet unnerving world of deepfakes and the innovative technologies used to combat their malicious use. We demystify the technology behind deepfakes, the potential threats they pose, and the groundbreaking efforts Intel Labs is making in the realm of real-time deepfake detection.

Intel Labs has developed one of the world's first real-time deepfake detection platforms. Unlike other systems, Intel's technology doesn't seek signs of fabrication but focuses on recognizing the authentic, like detecting the subtle color changes in our veins related to our heart rate. We discuss how this detection technique is already making a profound impact across various sectors, from social media platforms to broadcasters and startups.

Let's get into it!

Follow us! Email us: TheCatchupCast@Gmail.com

Data Driven
Steve Orrin on the Importance of Hardware in AI Development

Data Driven

Play Episode Listen Later Jun 27, 2023 62:40 Transcription Available


On this episode of Data Driven, the focus is on hardware, from AI-optimized chips to edge computing. Frank and Andy interview Steve Orrin, the CTO of Intel Federal. Intel has developed new CPU instructions to accelerate AI workloads, and FPGAs allow for faster development in custom applications with specific needs. Steve emphasizes the importance of data curation and wrangling before jumping into machine learning and AI.

Links:
Webinar: AI application benchmarking on Intel hardware through Red Hat OpenShift Data Science Platform. Register here: https://qrcodes.at/RHODSIntelBenchmarkingWebinar
Get a free audiobook on us! http://thedatadrivenbook.com/

Moments:
00:01:59 Hardware and software infrastructure for AI.
00:07:18 AI benchmarks show the importance of GPUs and CPUs.
00:14:08 Habana is a two-chip strategy offering AI accelerator chips designed for training flows and inferencing workloads. It is available in the Amazon cloud and data centers. The Habana chips are geared for large-scale training and inference tasks, and they scale with the architecture. One chip, Goya, is for inferencing, while the other, Gaudi, is for training. Intel also offers CPUs with added instructions for AI workloads, as well as GPUs for specialized tasks. Custom approaches like using FPGAs and ASICs are gaining popularity, especially for edge computing, where low power and performance are essential.
00:19:47 Intel's diverse team stays ahead of AI trends by collaborating with specialists and responding to industry needs. They have a large number of software engineers focused on optimizing software for Intel architecture, contributing to open source, and providing resources to help companies run their software efficiently. Intel's goal is to ensure that everyone's software runs smoothly and continues to raise the bar for the industry.
00:25:24 Moore's Law drives compute by reducing size. Cloud enables cost-effective edge use cases. Edge brings cloud capabilities to devices.
00:31:40 FPGA is programmable hardware allowing customization. It has applications in AI and neuromorphic processing. It is used in cellular and RF communications. It can be rapidly prototyped and deployed in the cloud.
00:41:09 Started in biology, became a hacker, joined Intel.
00:48:01 Coding as a viable and well-paying career.
00:55:50 Looking forward to image-to-code and augmented reality integration in daily life.
01:00:46 Tech show, similar to Halt and Catch Fire.

Topics covered:
- The role of infrastructure in AI
- Hardware optimization for training and inferencing
- Intel's range of hardware solutions
- Importance of software infrastructure and collaboration with the open source community
- Introduction to Habana AI accelerator chips
- The concept of collapsing data into a single integer level
- Challenges and considerations in data collection and storage
- Explanation and future of FPGAs
- Moore's Law and its impact on compute
- The rise of edge computing and its benefits
- Bringing cloud capabilities to devices
- Importance of inference and decision-making on the device
- Challenges in achieving high performance and energy efficiency in edge computing
- The role of diverse teams in staying ahead in the AI world
- Overview of Intel Labs and their research domains
- Intel's software engineering capabilities and dedication to open source
- Intel as collaborators in the industry
- Importance of benchmarking across different AI types and stages
- The role of CPUs and GPUs in AI workloads
- Optimizing workload through software to hardware
- Importance of memory...

Me, Myself, and AI
The Social Science of AI: Intel's Elizabeth Anne Watkins

Me, Myself, and AI

Play Episode Listen Later Jun 20, 2023 33:35


When Elizabeth Anne Watkins started her doctoral program, she landed a research role studying journalists' use of security and privacy technologies — but she found the security tools confusing and difficult to use. Today, as a research scientist in the social science of AI at Intel Labs, she advocates for other end users faced with understanding and working with new technologies. Elizabeth employs social science to understand the concerns of technicians performing complex chip manufacturing processes so that new AI systems will be developed to better serve those human experts. During this process, she also helps the technicians recognize AI's role as a supporting technology — even a coworker — rather than a human replacement. She joins this episode to discuss her role as a social scientist working in tech and some of the ways Intel is applying AI technologies like computer vision and natural language processing to improve semiconductor manufacturing processes. Read the episode transcript here. Me, Myself, and AI is a collaborative podcast from MIT Sloan Management Review and Boston Consulting Group and is hosted by Sam Ransbotham and Shervin Khodabandeh. Our engineer is David Lishansky, and the coordinating producers are Allison Ryder and Sophie Rüdinger. Stay in touch with us by joining our LinkedIn group, AI for Leaders at mitsmr.com/AIforLeaders or by following Me, Myself, and AI on LinkedIn. Guest bio: Elizabeth Anne Watkins is a research scientist in the Social Science of Artificial Intelligence at Intel Labs and a member of Intel's Responsible AI Advisory Council, where she applies social science methods to amplify human potential in human-AI collaboration. Her research on the design, deployment, and governance of AI tools has been published in leading academic journals and has been featured in Wired, MIT Technology Review, and Harvard Business Review. 
She was previously a postdoctoral fellow at Princeton and has a doctorate from Columbia University and a master's degree from MIT. We encourage you to rate and review our show. Your comments may be used in Me, Myself, and AI materials. We want to know how you feel about Me, Myself, and AI. Please take a short, two-question survey.

Open at Intel
Building Trust with Attestation

Open at Intel

Play Episode Listen Later May 31, 2023 33:49


In this episode, we dive deep into the concept of attestation as it relates to building trust in our software and systems.  Marcela Melara and Vinnie Scarlata take us on a technical tour of both software and remote attestation and how these relate to ideas we've covered previously with software supply chain security and confidential computing. We talk trust and integrity, standards and projects, and share some best practices.   Guests: Dr. Marcela Melara is a research scientist in the Security and Privacy Group at Intel Labs. Her current work focuses on developing solutions for high-integrity software supply chains and building trustworthy distributed systems. She has several publications and patents filed related to her research, and leads a number of internal, academic and open-source efforts on software supply chain security. Prior to joining Intel, she received her PhD in Computer Science from Princeton University and did her undergraduate studies at Hobart and William Smith Colleges. She is a Siebel Scholar, a member of Phi Beta Kappa, and her research on CONIKS was awarded the Caspar Bowden PET Award. Outside of work, Marcela is an avid gardener, bookworm, hiker, and gamer. Vinnie Scarlata is a Principal Engineer in the Security & Privacy Research lab in Intel Labs. He is one of the architects for Intel® Software Guard Extensions and Trust Domain Extensions, and has 20+ years of research experience in various areas of security, e.g. Trusted Computing, Trusted Execution Environments (TEE), Attestation, Recoverable Platforms, Runtime Integrity, and Key Management. He has been granted 50+ patents and co-authored several papers. Vinnie received a MS in Information Security from Georgia Tech and a BS in Computer Science from the University of Massachusetts, Amherst.

Open at Intel
Securing Applications with Gramine

Open at Intel

Play Episode Listen Later Apr 5, 2023 32:03


This episode continues our confidential computing conversation from our previous episode. Mona Vij, principal engineer at Intel Labs, leads Intel's efforts on the Gramine project, a library OS that allows for running unmodified applications and, among other things, solves the problem of running applications out of the box on Intel SGX-enabled hardware. We'll dive into Gramine, a Confidential Computing Consortium project, and discuss easing the path to running in a trusted execution environment. Guest: Mona Vij is a Principal Engineer and Cloud and Data Center Security Research Manager at Intel Labs, where she focuses on scalable confidential computing for end-to-end cloud-to-edge security. Mona received her master's degree in Computer Science from the University of Delhi, India. She leads research engagements on trusted execution with a number of universities. Her research has been featured in journals and conferences including USENIX OSDI, USENIX ATC, and ACM ASPLOS, among others. Mona's research interests primarily include trusted computing, virtualization, device drivers, and operating systems.

The Big Take
Why Employers Will Soon Be Fighting Each Other To Hire You

The Big Take

Play Episode Listen Later Mar 9, 2023 32:38 Transcription Available


The future of US competitiveness is taking shape in a field in Licking County, Ohio. It's the site of a new Intel semiconductor plant, part of the Biden administration's effort to bring manufacturing jobs back to the US, with a focus on high-tech products like semiconductors. But those ambitious plans have exposed a looming problem: the number of skilled workers needed to build factories and fill those jobs is shrinking. It's a demographic reality that will only become more acute in the decades to come. Bloomberg senior economics writer Shawn Donnan joins this episode to explain why America's workforce is shrinking, and what a labor shortage now means for the nation's manufacturing future. And Gabriela Cruz Thompson of Intel Labs talks about how a big company like hers plans to recruit workers in an increasingly competitive job market. Read the story: https://bloom.bg/3J0Z9KV Listen to The Big Take podcast every weekday and subscribe to our daily newsletter: https://bloom.bg/3F3EJAK Have questions or comments for Wes and the team? Reach us at bigtake@bloomberg.net. This podcast is produced by the Big Take Podcast team: Supervising Producer: Vicki Vergolina, Senior Producer: Kathryn Fink, Producers: Mo Barrow, Rebecca Chaisson, Michael Falero and Federica Romaniello, Associate Producers: Sam Gebauer and Zaynab Siddiqui. Sound Design/Engineers: Raphael Amsili and Gilda Garcia. See omnystudio.com/listener for privacy information.

Open at Intel
Software Supply Chains

Open at Intel

Play Episode Listen Later Mar 8, 2023 45:55


Marcela Melara, a research scientist in the Security and Privacy Research Group at Intel Labs, and Bruno Domingues, a chief technology officer in the financial services industry practice and an SLSA project contributor, share their deep knowledge about software supply chain security, a subject on everyone's minds today. Guests: Dr. Marcela Melara is a research scientist in the Security and Privacy Group at Intel Labs. Her current work focuses on developing solutions for high-integrity software supply chains and building trustworthy distributed systems. She has several publications and patents filed related to her research, and leads a number of internal, academic and open-source efforts on software supply chain security. Prior to joining Intel, she received her PhD in Computer Science from Princeton University and did her undergraduate studies at Hobart and William Smith Colleges. She is a Siebel Scholar, a member of Phi Beta Kappa, and her research on CONIKS was awarded the Caspar Bowden PET Award. Outside of work, Marcela is an avid gardener, bookworm, hiker, and gamer. Bruno Domingues is a Chief Technology Officer in the Financial Services Industry practice (SMG), where he is responsible for technical direction and pathfinding across Intel's product portfolio. He serves as the champion for Digital Transformation in the Financial Services domain. Before joining Intel in 2007, Bruno worked with Microsoft. He was a pioneer in the FSI vertical practice back in the '90s and developed a rich ecosystem of partners around the Microsoft platform to solve the industry's most challenging problems. With over 23 years of experience applying technologies, Bruno has developed a deep understanding of the financial industry: he has worked with regulators to help banks ramp up on Basel II and III, architected mission-critical trading-desk operations, and built nationwide inter-bank online payment systems in different markets and regions of the world.
For the last 15 years, Bruno has been focused on cloud adoption in the financial services industry, as it is a unique industry with unique requirements. Bruno has also served as IEEE Computer Society chairman (R9), Academic Liaison Director with CMG, and Board Advisor for Fintechs.

Cyber Security Inside
139. What That Means with Camille: Synthetic Data

Cyber Security Inside

Play Episode Listen Later Feb 13, 2023 23:20


In this episode of What That Means, Camille gets into synthetic data with Selvakumar Panneer and Omesh Tickoo, Principal Engineers at Intel Labs. They talk about how synthetic data is being used today for things like AI and how it's changing the world we experience. The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Code Together
Quantum Computing, How Intel is Driving Towards a Probabilistic Future

Code Together

Play Episode Listen Later Feb 7, 2023 35:56


Quantum computing technology is becoming more viable, which means programming models will have to be created to harness this potentially world-changing technology. To many developers, understanding the possibilities of quantum computing and moving from a binary world to a world of quantum states is challenging. Tony talks to Anne Matsuura, a physicist and Director of Quantum Applications and Architecture in Intel Labs, about what quantum computing can do for society and how Intel is working on building hardware and software solutions to make that future a reality. We also talk about what a quantum computer actually looks like today (hint: it's not a tiny flat chip).
Learn more:
Intel Quantum Computing Homepage: intel.com/content/www/us/en/research/quantum-computing.html
oneAPI: oneapi.io, intel.com/content/www/us/en/developer/tools/oneapi/overview.html
Guests: Anne Matsuura – Director of Quantum Applications and Architecture in Intel Labs

The Tech Blog Writer Podcast
2248: Intel Labs - Real-Time Deepfake Detector and Responsible AI

The Tech Blog Writer Podcast

Play Episode Listen Later Jan 25, 2023 37:05


Intel's Responsible AI work recently revealed FakeCatcher, a deepfake detection technology that can detect fake videos with 96% accuracy. The platform is the world's first real-time deepfake detector that returns results in milliseconds. FakeCatcher uses Intel's hardware and software to assess "blood flow" signals in real videos to detect inauthenticity. The technology can run up to 72 different detection streams simultaneously on 3rd Gen Intel Xeon Scalable processors. Social media platforms can use this technology to prevent the spread of deepfakes, global news organizations can use it to avoid amplifying manipulated videos, and non-profit organizations can use it to democratize the detection of deepfakes for everyone. Intel also recently announced its extended collaboration with Mila, an AI research institute in Montreal, to help advance AI techniques to tackle global challenges like climate change, identify drivers of diseases, and expedite drug discovery. Accelerating the research and development of advanced AI to solve some of the world's most critical and challenging issues requires a responsible approach to AI and the ability to scale computing technology. As leaders in computing and AI, Intel and Mila will work together to tackle some of the challenges the world faces today and drive tangible results. Lama Nachman, Intel Fellow and Director of Intelligent Systems Research Lab at Intel Labs, joins me on Tech Talks Daily in a discussion about Responsible AI and the real-time deepfake detector.
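The "blood flow" idea mentioned above refers to the subtle, periodic color changes that genuine faces exhibit with each heartbeat and that synthesized faces typically lack. As a loose toy illustration of that concept only (this is not Intel's FakeCatcher pipeline; the function names, the synthetic clips, and the heart-rate band are all invented for the sketch), one can check whether a clip's average green-channel brightness carries a dominant frequency in the human heart-rate range:

```python
import numpy as np

def mean_green_signal(frames):
    # frames: (n_frames, height, width, 3) array with values in [0, 1].
    # Average the green channel per frame (a stand-in for a tracked face region).
    return frames[..., 1].mean(axis=(1, 2))

def dominant_frequency(signal, fps=30.0):
    # Frequency (Hz) of the strongest non-DC component of the signal.
    centered = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[spectrum.argmax()]

def looks_real(frames, fps=30.0, band=(0.7, 4.0)):
    # Heuristic: a genuine face video should show a dominant periodic signal
    # in the human heart-rate band (roughly 42-240 beats per minute).
    f = dominant_frequency(mean_green_signal(frames), fps)
    return bool(band[0] <= f <= band[1])

rng = np.random.default_rng(0)
t = np.arange(300) / 30.0  # 10 seconds of "video" at 30 fps

# Synthetic "real" clip: a faint 1.2 Hz (72 bpm) pulse tints every pixel.
pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)[:, None, None, None]
real = 0.5 + pulse * np.ones((300, 8, 8, 3)) + rng.normal(0, 0.002, (300, 8, 8, 3))

# Synthetic "fake" clip: no pulse, only a 7 Hz flicker (e.g. a rendering artifact).
flicker = 0.02 * np.sin(2 * np.pi * 7.0 * t)[:, None, None, None]
fake = 0.5 + flicker * np.ones((300, 8, 8, 3)) + rng.normal(0, 0.002, (300, 8, 8, 3))

print(looks_real(real))  # True: dominant frequency ~1.2 Hz, inside the band
print(looks_real(fake))  # False: dominant frequency ~7 Hz, outside the band
```

A production detector works on far harder signals (compressed video, motion, lighting changes) and, per the show notes, combines many such streams on dedicated hardware; the sketch only shows why a pulse-like signature is a usable authenticity cue.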

Economist Podcasts
Babbage: How to detect a deepfake

Economist Podcasts

Play Episode Listen Later Jan 18, 2023 40:56


Digital fakery, from the latest generation of deepfakes to lower-tech trickery, threatens to erode trust in societies and can prevent justice from being served. But how can technology be used to both detect deepfakes and authenticate real images? Patrick Traynor, a professor at the University of Florida, explains a novel method to expose audio generated by artificial intelligence. Ilke Demir of Intel Labs demonstrates how to spot visual fakery by analysing colour changes in the face. Plus, The Economist's Benjamin Sutherland investigates the flipside of deepfakes: how to prove that footage is real. And Wendy Betts of eyeWitness to Atrocities explains how her technology is being used as evidence for war crimes. Alok Jha hosts. For full access to The Economist's print, digital and audio editions subscribe at economist.com/podcastoffer and sign up for our weekly science newsletter at economist.com/simplyscience. Hosted on Acast. See acast.com/privacy for more information.

Babbage from Economist Radio
Babbage: How to detect a deepfake

Babbage from Economist Radio

Play Episode Listen Later Jan 18, 2023 40:56


Digital fakery, from the latest generation of deepfakes to lower-tech trickery, threatens to erode trust in societies and can prevent justice from being served. But how can technology be used to both detect deepfakes and authenticate real images? Patrick Traynor, a professor at the University of Florida, explains a novel method to expose audio generated by artificial intelligence. Ilke Demir of Intel Labs demonstrates how to spot visual fakery by analysing colour changes in the face. Plus, The Economist's Benjamin Sutherland investigates the flipside of deepfakes: how to prove that footage is real. And Wendy Betts of eyeWitness to Atrocities explains how her technology is being used as evidence for war crimes. Alok Jha hosts. For full access to The Economist's print, digital and audio editions subscribe at economist.com/podcastoffer and sign up for our weekly science newsletter at economist.com/simplyscience. Hosted on Acast. See acast.com/privacy for more information.

Cyber Security Inside
135. What That Means with Camille: Deep Fake

Cyber Security Inside

Play Episode Listen Later Jan 16, 2023 24:54


In this episode of What That Means, Camille gets into deep fakes with Ilke Demir, Senior Staff Researcher at Intel Labs. They talk about deep fake detection, responsible deep fake generation, and media authentication. The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Choses à Savoir TECH
What is FakeCatcher, the AI for unmasking deepfakes?

Choses à Savoir TECH

Play Episode Listen Later Dec 1, 2022 2:51


The fight against deepfakes now has a major new player: Intel! In mid-November, the American semiconductor giant unveiled FakeCatcher, its artificial intelligence capable of detecting doctored videos in real time. Intel introduced FakeCatcher in a press release as a platform that aims to become, quote, "the world's first real-time deepfake detector that returns results in milliseconds," end quote. In detail, FakeCatcher was designed by Ilke Demir, a researcher at Intel Labs, and Umur Ciftci of the State University of New York. FakeCatcher's backbone is made up of several tools and software packages developed by Intel, such as OpenVino, Intel Integrated Performance Primitives, and OpenCV. This new Intel system, based on deep learning, stands out from other face-analysis technologies in its ability to analyze, quote, "vision blocks." By relying on the Open Visual Cloud and 3rd Gen Intel Xeon Scalable processors, the developers gained speed: FakeCatcher can process up to 72 detection streams simultaneously. That, roughly summarized, is the technical side of this new tool. That said, if you are not familiar with deepfakes, these videos are generally put together by ill-intentioned people, notably to put outrageous statements in their victims' mouths. Former US President Barack Obama notably served as an example a few years ago, as did Vladimir Putin's opponent Alexei Navalny, whose face was manipulated a few years ago to serve the Russian government's messaging. According to the research firm Gartner, corporate spending on cybersecurity will reach nearly 190 billion dollars in 2023, up more than 11% from this year.
Note that Intel is not alone in the market for deepfake-detection tools. Facebook and Alphabet have been trying their hand at it for several years, while Microsoft took the plunge in September with its Video Authenticator. That said, deepfakes should not be demonized entirely, since the technology can also be put to good use, notably in film and television to bring a deceased actor to the screen or de-age another, for example. Learn more about your ad choices. Visit megaphone.fm/adchoices

Choses à Savoir TECH
What is FakeCatcher, the AI for unmasking deepfakes?

Choses à Savoir TECH

Play Episode Listen Later Dec 1, 2022 2:21


The fight against deepfakes now has a major new player: Intel! In mid-November, the American semiconductor giant unveiled FakeCatcher, its artificial intelligence capable of detecting doctored videos in real time. Intel introduced FakeCatcher in a press release as a platform that aims to become, quote, "the world's first real-time deepfake detector that returns results in milliseconds," end quote. In detail, FakeCatcher was designed by Ilke Demir, a researcher at Intel Labs, and Umur Ciftci of the State University of New York. FakeCatcher's backbone is made up of several tools and software packages developed by Intel, such as OpenVino, Intel Integrated Performance Primitives, and OpenCV. This new Intel system, based on deep learning, stands out from other face-analysis technologies in its ability to analyze, quote, "vision blocks." By relying on the Open Visual Cloud and 3rd Gen Intel Xeon Scalable processors, the developers gained speed: FakeCatcher can process up to 72 detection streams simultaneously. That, roughly summarized, is the technical side of this new tool. That said, if you are not familiar with deepfakes, these videos are generally put together by ill-intentioned people, notably to put outrageous statements in their victims' mouths. Former US President Barack Obama notably served as an example a few years ago, as did Vladimir Putin's opponent Alexei Navalny, whose face was manipulated a few years ago to serve the Russian government's messaging. According to the research firm Gartner, corporate spending on cybersecurity will reach nearly 190 billion dollars in 2023, up more than 11% from this year.
Note that Intel is not alone in the market for deepfake-detection tools. Facebook and Alphabet have been trying their hand at it for several years, while Microsoft took the plunge in September with its Video Authenticator. That said, deepfakes should not be demonized entirely, since the technology can also be put to good use, notably in film and television to bring a deceased actor to the screen or de-age another, for example. Hosted on Acast. See acast.com/privacy for more information.

Cyber Security Inside
128. What That Means with Camille: Indigenous Data Sovereignty

Cyber Security Inside

Play Episode Listen Later Nov 28, 2022 23:48


In this episode of What That Means, Camille gets into indigenous data sovereignty with Dawn Nafus (Anthropologist and Senior Research Scientist at Intel Labs), Bobby Maher (member of the Maiam Nayri Wingara Indigenous Data Sovereignty Collective), and Karaitiana Taiuru (Māori Indigenous Data Specialist).    They talk about the definition of indigenous data sovereignty, cultural bias in data collection of indigenous peoples, indigenous peoples' access to data collected about them, and why input from indigenous peoples in the data collection process is imperative.   The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Clearer Thinking with Spencer Greenberg
Is the universe a computer? (with Joscha Bach)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Oct 12, 2022 83:25


Read the full transcript here. What is intelligence? What exactly does an IQ test measure? What are the similarities and differences between the structure of GPT-3 and the structure of the human brain (so far as we understand it)? Is suffering — as the Buddhists might say — just a consequence of the stories we tell about ourselves and the world? What's left (if anything) of the human mind if we strip away the "animal" parts of it? We've used our understanding of the human brain to inform the construction of AI models, but have AI models yielded new insights about the human brain? Is the universe a computer? Where does AI go from here? Joscha Bach was born in Eastern Germany, and he studied computer science and philosophy at Humboldt University in Berlin and computer science at Waikato University in New Zealand. He did his PhD at the Institute for Cognitive Science in Osnabrück by building a cognitive architecture called MicroPsi, which explored the interaction of motivation, emotion, and cognition. Joscha researched and lectured about the Future of AI at the MIT Media Lab and Harvard, and worked as VP for Research at a startup in San Francisco before joining Intel Labs as a principal researcher. Email him at joscha.bach@gmail.com, follow him on Twitter at @plinz, or subscribe to his YouTube channel. Further reading: The 7 Realms of Truth


Irish Tech News Audio Articles
Intel explores the topic of responsible AI at Dublin Symposium

Irish Tech News Audio Articles

Play Episode Listen Later Sep 14, 2022 3:43


This week Intel and Article One, in association with the School of Law at Trinity College Dublin, hosted a symposium exploring responsible business conduct, innovation and Artificial Intelligence (AI). While rapid advancements in AI and other emerging technologies have the potential for significant positive human rights impacts, they also bring heightened risks of adverse effects. Best practices, principles, and tools to ensure responsible decision-making are vital elements in the evolution of AI technologies. The one-day symposium brought together thought leaders, policy makers, and academics to explore topics such as the responsible development of AI and applying responsible AI principles to manufacturing. The symposium was opened by Eamon Gilmore, EU Special Representative on Human Rights. Also speaking at the event was Lama Nachman, Intel Fellow and Director of Intelligent Systems Research Lab in Intel Labs. Lama's research is focused on creating contextually aware experiences that understand users through sensing and sense-making, anticipating their needs, and acting on their behalf. To coincide with the symposium, Lama shared her thoughts in an editorial on ‘Responsibly Harnessing the Power of AI': “Artificial intelligence (AI) has become a key part of everyday life, transforming how we live, work, and solve new and complex challenges. From making voice banking possible for people with neurological conditions to helping autonomous vehicles make roads safer and helping researchers better understand rainfall patterns and human population trends, AI has allowed us to overcome barriers, make societies safer and develop solutions to build a better future. Despite AI's many real-life benefits, Hollywood loves to tell alarming stories of AI taking on a mind of its own and menacing people. These science fiction scenarios can distract us from focusing on the very real but more banal ways in which poorly designed AI systems can harm people. 
It is critical that we continuously strive to develop AI technologies responsibly, so that our efforts do not marginalise people, use data in unethical ways or discriminate against different populations — especially individuals in traditionally underrepresented groups. These are problems that we as developers of AI systems are aware of and working to prevent. At Intel, we believe in the potential of AI technology to create positive global change, empower people with the right tools, and improve the life of every person on the planet. We've long been recognised as one of the most ethical companies in the world, and we take that responsibility seriously. We've had Global Human Rights Principles in place since 2009 and are committed to high standards in product responsibility, including AI. We recognize the ethical risks associated with the development of AI technology and aspire to be a role model, especially as thousands of companies across all industries are making AI breakthroughs using systems enhanced with Intel® AI technology. We are committed to responsibly advancing AI technology throughout the product lifecycle. I am excited to share our updated Responsible AI web page, featuring the work we do in this space and highlighting the actions we are taking to operate responsibly, guard against the misuse of AI and keep ourselves accountable through internal oversight and governance processes”. Visit the Intel newsroom to read the full editorial. See more stories here.

Deep Dive: AI
Building creative restrictions to curb AI abuse

Deep Dive: AI

Play Episode Listen Later Sep 6, 2022


Along with all the positive, revolutionary aspects of AI comes a more sinister side. Joining us today to discuss ethics in AI from the developer's point of view is David Gray Widder. David is currently doing his Ph.D. at the School of Computer Science at Carnegie Mellon University and is investigating AI from an ethical perspective, homing in specifically on the ethics-related challenges faced by AI software engineers. His research has been conducted at Intel Labs, Microsoft, and NASA's Jet Propulsion Lab. In this episode, we discuss the harmful uses of deep fakes and the ethical ramifications thereof in proprietary versus open source contexts. Widder breaks down the notions of technological inevitability and technological neutrality, respectively, and explains the importance of challenging these ideas. Widder has identified a continuum between implementation-based harms and use-based harms and fills us in on how each is affected in the open source development space. Tune in to find out more about the importance of curbing AI abuse and the creativity required to do so, as well as the strengths and weaknesses of open source in terms of AI ethics. Full transcript.
Key points from this episode:
- Introducing David Gray Widder, a Ph.D. student researching AI ethics.
- Why he chose to focus his research on ethics in AI, and how he drives his research.
- Widder explains deep fakes and gives examples of their uses.
- Sinister uses of deep fakes and the danger thereof.
- The ethical ramifications of deep fake tech in proprietary versus open source contexts.
- The kinds of harms that can be prevented in open source versus proprietary contexts.
- The licensing issues that result in developers relinquishing control (and responsibility) over the uses of their tech.
- Why Widder is critical of the notions of both technological inevitability and neutrality.
- Why it's important to challenge the idea of technological neutrality.
- The potential to build restrictions, even within the dictates of open source.
- The continuum between implementation-based harms and use-based harms.
- How open source allows for increased scrutiny of implementation harms, but decreased accountability for use-based harms.
- The insight Widder gleaned from observing NASA's use of AI, pertaining to the deep fake case.
- Widder voices his legal concerns around Copilot.
- The difference between laws and norms.
- How we've been unsuspectingly providing data by uploading photos online.
- Why it's important to include open source and public sector organizations in the ethical AI conversation.
- Open source strengths and weaknesses in terms of the ethical use of AI.
Links mentioned in today's episode:
- David Gray Widder
- David Gray Widder on Twitter
- Limits and Possibilities of “Ethical AI” in Open Source: A Study of Deep Fakes
- What is Deepfake
- Copilot
Credits: Special thanks to volunteer producer, Nicole Martinelli. Music by Jason Shaw, Audionautix. This podcast is sponsored by GitHub, DataStax and Google. No sponsor had any right or opportunity to approve or disapprove the content of this podcast.

Wunderbar Together
Joscha Bach, how long do you give humanity?

Wunderbar Together

Play Episode Listen Later Jul 11, 2022 112:03


Talking with cognitive scientist Joscha Bach is a bit like listening to a symphony by his famous ancestor, Johann Sebastian Bach. You don't understand everything, but you enjoy, learn, and grow from what your ears take in. A few bars as a sample: "I have no interest in doing what the rest of the world does, because what the rest of the world does usually makes no sense." "Artificial intelligences are our children." "The people who relocate to Mars will bear little resemblance to our current form of life."

Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
Intel Labs VP Gadi Singer on Cognitive Artificial Intelligence and The Third Wave of AI

Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)

Play Episode Listen Later Apr 18, 2022 34:52


655: In this interview, Gadi Singer, VP of Intel Labs and Director of Emergent Artificial Intelligence Research Lab, joins Peter High in a discussion about the third wave of artificial intelligence and the development of cognitive AI. Gadi provides an overview of the Emergent Artificial Intelligence Research Lab at Intel and the multidisciplinary nature of his team. He speaks on the history of artificial intelligence's development, where the current limitations exist, and how we can continue to safeguard artificial intelligence from bad actors, among other topics.

Cyber Security Inside
80. What That Means with Camille: Intelligent Systems: Emotion Recognition Research

Cyber Security Inside

Play Episode Listen Later Feb 21, 2022 29:09


In this episode of Cyber Security Inside's What That Means, Camille talks with three experts in very different fields, all coming together to look at intelligent systems and emotion recognition research. Her guests are Saurav Sahay (Staff Researcher and Manager of the Multimodal Dialogue and Interactions Team at Intel Labs), Sinem Aslan (Research Scientist), and Dawn Nafus (Anthropologist). The conversation covers:
- What emotion recognition research is and why it is such a collaborative area of research across disciplines.
- Areas this research is being tested and used in, including driving, classrooms, predictive text, and medicine.
- What types of sensors are used to collect data for emotion recognition.
- The ethics and questions behind this research and what the researchers are doing to address them in advance.
...and more. Don't miss it!

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Here are some key takeaways:
- Emotion recognition research reaches a place where ethics and insight from a multitude of fields are incredibly important, so it is often collaborative research between ethicists, anthropologists, and computer scientists.
- Emotion recognition is when an AI can recognize different emotional states through facial expressions, body language, and tone.
- This research is being used to assess if a driver is drowsy, if a student is engaged or distracted, and more. Researchers look at facial expressions, but also at other sensors, because facial expressions can be so subtle. They use cameras, performance data, and more to get a combined data set and a better picture of what is happening.
- The "yikes" factor is that people can be monitored and may not be able to conceal things with so many sensors involved. That is why ethicists and anthropologists need to be involved in this type of research, especially when you start to make claims about someone, how they feel, and how they are engaging.
- In thinking about using this technology with students, the researchers discuss how it can benefit a blended-learning or online classroom. In a teacher's day-to-day job, they are essentially responding to students' emotional states, and they can read those states in person. But when a student is on a computer, or when the classroom is very full, that emotional data is gone or hard to obtain. This research tries to recover that data to help with efficiency and effectiveness.
- Where this research could go can sound scary. That is why these particular researchers focus on things like giving teachers information to start a conversation with a student, not evaluating a student or doing a summative evaluation of a whole class. They are not saying the teacher should rely entirely on the evaluation, but should use it as a starting point. Although this could be used for negative purposes, that is not the researchers' intent.
- Postpartum depression is another area where this research is being used, to try to detect whether someone is going through it.
- The researchers discussed that even though their intent matters, they have to consider what could happen with their technology when someone else purchases it. Intent is a great place to start, but it is not the whole story, and this is part of their research as well. They also encourage people to be tough customers and ask good questions of companies and organizations using technology like this.
- There is no universal legal standard for privacy, which also plays a role here. Opinions also differ significantly across cultures about how communication works and what it means to be watched or to be private.
- There are many types of sensors for this kind of work, including cameras, gesture recognition, wifi signal detection, and language processing. There are audio, vision, and text inputs, as well as signals like heart rate. If you suddenly start hitting the keyboard really hard, or slowing down, that is one signal that an emotional state has changed.
- Language sensors and brain-computer interfaces are developing enough to detect words from characters you're thinking about. You can see this in predictive text on your phone, in autocomplete, etc. Work is also being done connecting EEG signals with this word-prediction technology, which might be useful for patients who cannot speak well. The tech is limited right now, but it is getting better.

Some interesting quotes from today's episode:

"Facial expressions and physiological sensing to audio sensing, using acoustic context… Using say, for example, how you type on the keyboard, your typing speed, and things like that, and takes into account all of these sensors to compute emotional states." - Saurav Sahay

"In my vehicle I have this amazing attention-assist feature that tells me when I'm drowsy. So this system is also using some flavors of sensing to detect if I'm alert or not." - Saurav Sahay

"We know that learning is emotional as much as intellectual. So we are trying to understand whether a student is on task or off task during learning, but at the same time, the other level of engagement is emotional engagement, whether they are confused, bored, or satisfied at any time of learning." - Sinem Aslan

"If we ask who's in charge, who benefits, who doesn't, and who gets to make these technologies at all, then we can start to unpack where's the benefits and where's the real risk?" - Dawn Nafus

"Even myself would not be okay with sharing my emotional states with my manager, because that's not what she does on a day-to-day basis. That's extra information for her. But in a classroom scenario, it's already part of that context. And what we are doing is really making it more efficient." - Sinem Aslan

"As a society, we need to be much tougher customers. When schools are starting to purchase this stuff, we need to, as the responsible AI community, be supporting them and asking the really hard questions… And with that kind of more skeptical customer base, then we can start to make sure that things land where they want to land." - Dawn Nafus

"I remember a demo that happened more than 10 years ago, when I was at Georgia Tech. There was a person sitting on a machine, and he was thinking about getting a mug of coffee. And there was this robot that just by magic gave coffee to the person. So just like that, neural interfaces are getting mature enough with a lot of sensing that happens." - Saurav Sahay

"We can potentially control the bias in machine learning models by controlling the data set that we are training them with. But on the other hand, there is also the bias that humans have on a day-to-day basis. So, how do we balance these two?" - Sinem Aslan

Intel on AI
Designing Molecules with AI – Intel on AI Season 3, Episode 8

Intel on AI

Play Episode Listen Later Feb 16, 2022 56:04


In this episode of Intel on AI, hosts Amir Khosrowshahi and Santiago Miret talk with Alán Aspuru-Guzik about the chemistry of computing and the future of materials discovery. Alán is a professor of chemistry and computer science at the University of Toronto, a Canada 150 Research Chair in theoretical chemistry, a CIFAR AI Chair at the Vector Institute, and a CIFAR Lebovic Fellow in the biology-inspired Solar Energy Program. Alán also holds a Google Industrial Research Chair in quantum computing and is the co-founder of two startups, Zapata Computing and Kebotix. Santiago Miret is an AI researcher at Intel Labs who has an active research collaboration with Alán. Santiago studies the intersection of AI and the sciences, as well as the algorithmic development of AI for real-world problems.

In the first half of the episode, the three discuss accelerating molecular design and building next-generation functional materials. Alán talks about his academic background in high performance computing (HPC) that led him into the field of molecular design. He goes into detail about building a "self-driving lab" for scientific experimentation, which, coupled with advanced automation and robotics, he believes will help propel society beyond the era of plastics and into the era of materials by demand. Alán and Santiago talk about their research collaboration with Intel to build sophisticated model-based molecular design platforms that can scale to real-world challenges. Alán talks about the Acceleration Consortium and the need for standardization research to drive greater academic and industry collaboration on self-driving laboratories.

In the second half of the episode, the three talk about quantum computing, including developing algorithms for quantum dynamics, molecular electronic structure, molecular properties, and more. Alán talks about how a simple algorithm based on thinking of the quantum computer like a musical instrument is behind the concept of the variational quantum eigensolver, which could hold promising advancements alongside classical computers. Amir and Santiago close the episode by talking about the future of research, including projects at DARPA, oscillatory computing, quantum machine learning, quantum autoencoders, and how young technologists entering the field can advance a more equitable society.

Academic research discussed in the podcast episode:
- The Hot Topic: What We Can Do About Global Warming
- Energy, Transport, & the Environment
- Scalable Quantum Simulation of Molecular Energies
- The Harvard Clean Energy Project: Large-Scale Computational Screening and Design of Organic Photovoltaics on the World Community Grid
- Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning
- Neuroevolution-Enhanced Multi-Objective Optimization for Mixed-Precision Quantization
- Organic molecules with inverted gaps between first excited singlet and triplet states and appreciable fluorescence rates
- Simulated Quantum Computation of Molecular Energies
- Towards quantum chemistry on a quantum computer
- Gerald McLean and Marcum Jung and others with the concept of the variational quantum eigensolver
- Experimental investigation of performance differences between coherent Ising machines and a quantum annealer
- Quantum autoencoders for efficient compression of quantum data

All Things 5G
Time-Sensitive Networking

All Things 5G

Play Episode Listen Later Jan 1, 2022 31:49


Dave Cavalcanti of Intel Labs joins Keysight's Avik Bhattacharya and the All Things 5G crew to talk about the benefits and challenges of wireless TSN for 5G private networks.

Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
Intel's Mike Davies on the Future of Neuromorphic Computing

Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)

Play Episode Listen Later Dec 16, 2021 23:44


620: Mike Davies discusses the relevance and impact of neuromorphic computing on Intel as a company and the world more generally. Mike gives a bird's-eye view of what neuromorphic computing aims to achieve and the relationship it has to traditional forms of artificial intelligence and deep learning. He breaks down the near-, medium-, and long-term implications the technology could have on how we think about computing capabilities and the technology and cost challenges that his team is working to overcome. A key component of commercializing this technology is collaboration with partners, and Mike spends a moment discussing how he looks to curate an ecosystem of academics and corporations to help deliver this technology, as well as how he collaborates internally with other groups within Intel Labs.

Amelia's Weekly Fish Fry
Superhuman Code, Semantic Analyzers and Automated Debugging: How Machine Programming Will Change the Future of Electronic Engineering

Amelia's Weekly Fish Fry

Play Episode Listen Later Nov 12, 2021 35:46


What if we could improve engineering productivity by 1000% and decrease debugging by 50%? In this week's podcast, we investigate how machine programming will help us do all of this and more! Justin Gottschlich (Principal AI Scientist & Director/Founder of Machine Programming Research at Intel Labs) joins me for a deep dive into the world of machine programming. We take a closer look at the development of this pioneering research initiative, the details of Intel's open source machine programming research system (ControlFlag) and why Justin believes that automated debugging and performance extraction will unlock untold possibilities in the realm of software and hardware development.  

The UAV Digest
385 Hunting with Drones

The UAV Digest

Play Episode Listen Later Nov 5, 2021 32:10


Hunting with drones and the fair chase concept, autonomous agile flight, thermal mapping to reduce building heat loss, Trick or Treat, the delivery drone hold-up, UAS over wildfires, and a LAANC update.

UAV News

Outdoors in Maine: Drones in the woods, and the ethical debate over whether they belong
The law in Maine says hunters cannot use drones to locate game. According to Title 12, Section 11216, Hunting with aid of aircraft, "A person on the ground or airborne may not use an aircraft to aid or assist in hunting bear, deer or moose." Section 10001(1) defines an aircraft as "a machine or device designed for flight." In 2014, Colorado became the first state to outlaw the use of drones for scouting, hunting, and taking wildlife. The concept of "fair chase," a registered trademark of the Boone and Crockett Club, comes into play.

Watch these autonomous drones zip through the woods
Rapid autonomous flight in complex and changing environments is difficult, but researchers at the University of Zurich, in conjunction with Intel Labs, are tackling this problem. Using simulations, they are training drones to imitate expert human pilots for autonomous agile flight.
Video: Learning High-Speed Flight in the Wild (Science Robotics, 2021) https://youtu.be/m89bNn6RFoQ
Project webpage: Learning High-Speed Flight in the Wild

Warren, MN, uses drones, thermal sensors to map its heat-leaking homes
The Minnesota town of Warren is using a drone to help residents cut their energy costs. Warren is a member of the Climate Smart Municipalities Partnership, in which cities in Minnesota and Germany link up for sustainability and climate initiatives. They used a thermal sensor-equipped drone to produce a municipal map showing structural insulating inefficiencies and losses of heat. Pilots and drones came from Northland Community College.

Drones have fun at Halloween 2021
This is a compilation of drone Halloween videos and photos, including a drone light show in Dallas produced by Sky Elements Drone Shows, flying witch drones, trick-or-treating robots, and an Australian youngster dressed up as a Wing delivery drone.

Amazon Drone Delivery Was Supposed to Start By 2018. Here's What Happened Instead
In 2013, Amazon announced an experimental drone delivery service. Now it's 2021, almost 2022. Where are the Amazon delivery drones? Amazon Prime Air is still committed to delivering packages by drones, and they say, "We are pioneering new ground and it will continue to take time to create the right technology and infrastructure to safely deliver packages to customers." In the U.S., drone delivery is paced by the FAA as it develops the regulatory framework.

Hybrid rotor/fixed wing drone was used extensively over Schneider Springs Fire
In August and September 2021, the Schneider Springs Fire in Washington burned more than 107,000 acres. An L3 Latitude Engineering FVR-90 unmanned aircraft was used to monitor the fire. The eVTOL uses four electrically powered rotors to take off, then transitions to horizontal flight driven by a rear-mounted gasoline-powered propeller. It can fly for up to 12 hours at an altitude of 10,000 to 12,000 feet. Visual video cameras and heat-sensing infrared sensors monitor and map fires, with images transmitted to the ground in real time.
Video: UAS Type 1 Drone Used on the Schneider Springs Fire https://youtu.be/zdjZPDxb5OY

Unmanned aircraft on wildfires — what have we learned?
Incident Management Teams used other drones on the Schneider Springs Fire. A Type 3 UAS was used at night for Plastic Sphere Dispenser (PSD) burning operations. This drone could access areas of the fire that were inaccessible to other aircraft and provided better intel and risk management. A number of challenges remain.

LAANC Update: Refined Airspace Grids & Night Drone Authorizations
The latest generation of FAA's Low Altitude Authorization and Notification Capability (LAANC) introduces authoriza...

Metaverse
Making sense of the future in an era of ‘hyperchange' with Madeline Ashby

Metaverse

Play Episode Listen Later Oct 1, 2021 32:34


Madeline Ashby is a futurist, writer, speaker, and teacher. She is best known for her successful series of science-fiction novels, including the bestseller ‘Company Town,' the ‘Machine Dynasty' series, and a number of short stories. In 2020, Madeline co-authored her first non-fiction book, ‘How To Future: Leading and Sense-Making in an Age of Hyperchange,' and has worked in the futurism space for the World Health Organisation, the Ontario government, and Intel Labs. In this conversation, Madeline recalls what attracted her to the world of futurism, including what the term means to her; discusses the impact the Covid-19 pandemic will have on business and technology; and explores the different possible landscapes of the future, explaining why it was so important to make understanding the possibilities accessible to all.

Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
How Intel Labs Managing Director Rich Uhlig Delivers Breakthrough Innovations

Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)

Play Episode Listen Later Sep 23, 2021 23:52


596: In this interview, Rich Uhlig, Intel Senior Fellow and Managing Director of Intel Labs, focuses on the research and delivery of breakthrough innovations. Rich provides an overview of the work that Intel Labs does and explains his team's process for evaluating prototypes and ideas to determine which can go to scale. He describes a few of the areas of innovation the organization has explored, including the intersection of data and AI, cybersecurity and a trusted compute base, and homomorphic encryption, as well as how these areas can benefit from advancements in technology. Finally, Rich describes the progress toward quantum computing and how his team at Intel Labs leverages ecosystems of companies and consumers for insight into the technology's feasibility.

The Change Alchemist
Madeline Ashby on science fiction, AI, automation and prototyping futures

The Change Alchemist

Play Episode Listen Later May 19, 2021 73:49


Madeline Ashby is a science fiction writer, futurist, speaker, teacher, and immigrant living in Toronto. She has worked with Intel Labs, the World Health Organization, the Institute for the Future, SciFutures, Nesta, Data & Society, The Atlantic Council, Changeist, and others. She has spoken at SXSW, FutureEverything, MozFest, and other events. Her essays have appeared at BoingBoing, io9, WorldChanging, The Atlantic, MISC Magazine, and FutureNow. Her fiction has appeared in Slate, MIT Technology Review, Clarkesworld, and multiple anthologies. She is a member of the XPRIZE Science Fiction Advisory Council and the AI Policy Futures Group at the ASU Center for Science and the Imagination. She is the author of the Machine Dynasty novels. Her novel Company Town was a Canada Reads finalist. --- Support this podcast: https://anchor.fm/shobhana-viswanathan/support

IEEE Quantum
Special Edition: A Conversation with Anne Matsuura, Director of Quantum Applications and Architecture, Intel Labs

IEEE Quantum

Play Episode Listen Later Jan 12, 2021 10:47


People Behind the Science Podcast - Stories from Scientists about Science, Life, Research, and Science Careers
140: Addressing a Battery of Research Questions to Improve Energy Storage - Dr. Cary Pint

People Behind the Science Podcast - Stories from Scientists about Science, Life, Research, and Science Careers

Play Episode Listen Later Aug 24, 2014 43:11


Dr. Cary Pint is an Assistant Professor of Mechanical Engineering at Vanderbilt University. He received his Master's and his PhD in Applied Physics from Rice University. He completed his postdoctoral training in Electrical Engineering and Computer Science at the University of California, Berkeley. He worked at Intel Labs in the Extreme Technologies Research Group before joining the faculty at Vanderbilt. Cary has received many awards and honors during his career, including the American Institute of Chemical Engineers STS Section Best Applied Paper Award, the American Vacuum Society Dorothy M. and Earl S. Hoffman Award, Intel Corporation's “High Five” Patent Award, and the Ralph E. Powe Junior Faculty Enhancement Award. Cary is a Kavli Fellow of the National Academy of Sciences and was named one of Forbes Magazine's “30 under 30” disrupters in the field of science and innovation. Cary is here with us today to tell us about his journey through life and science.