POPULARITY
Everywhere you look, there's talk of artificial intelligence and machine learning. Daniela Rus and Gregory Mone have dedicated their lives to studying both. Daniela is a world expert in robotics. She's the first female director of MIT's Computer Science and Artificial Intelligence Lab, and a MacArthur “Genius” Fellow. Gregory has authored 19 books and is a former columnist for Popular Science magazine. He's collaborated with Neil deGrasse Tyson, Susan Cain, Bill Nye, and Skeletor. They join Google to talk about their book, “The Heart and the Chip: Our Bright Future with Robots.” The book surveys the interconnected fields of robotics, AI, and machine learning. It reframes the way we think about intelligent machines while also weighing the moral and ethical consequences of their role in society. Watch this episode at youtube.com/TalksAtGoogle.
Happy New Year 2025! To celebrate, here is an encore of what proved to be the most popular episode of 2024. This rerun combines episodes 30 and 31 into one epic journey towards the frontiers of human understanding. My guest is Donald Hoffman. Our topics are consciousness, cosmos, and the meaning of life. Enjoy!

Original show notes:

Laws of physics govern the world. They explain the movements of planets, oceans, and cells in our bodies. But can they ever explain the feelings and meanings of our mental lives? This problem, called the hard problem of consciousness, runs very deep. No satisfactory explanation exists. But many think that there must, in principle, be an explanation.

A minority of thinkers disagree. According to these thinkers, we will never be able to explain mind in terms of matter. We will, instead, explain matter in terms of mind. I explored this position in some detail in episode 17.

But hold on, you might say. Is this not contradicted by the success of the natural sciences? How could a mind-first philosophy ever explain the success of particle physics? Or more generally, wouldn't any scientist laugh at the idea that mind is more fundamental than matter?

No — not all of them laugh. Some take it very seriously. Donald Hoffman is one such scientist. Originally working on computer vision at MIT's famous Artificial Intelligence Lab, Hoffman started asking a simple question: What does it mean to "see" the world? His answer begins from a simple idea: perception simplifies the world – a lot. But what is the real world like? What is “there” before our perception simplifies it? Nothing familiar, Hoffman claims. No matter. No objects. Not even a three-dimensional space. And no time. There is just consciousness.

This is a wild idea. But it is a surprisingly precise one. It is so precise, in fact, that Hoffman's team can derive basic findings in particle physics from their theory. A fascinating conversation was guaranteed. I hope you enjoy it. If you do, consider becoming a supporter of On Humans on Patreon.com/OnHumans.

MENTIONS

Names: David Gross, Nima Arkani-Hamed, Edward Witten, Nathan Seiberg, Andrew Strominger, Edwin Abbott, Nick Bostrom, Giulio Tononi, Keith Frankish, Daniel Dennett, Steven Pinker, Roger Penrose, Sean Carroll, Swapan Chattopadhyay

Terms (Physics and Maths): quantum fields, string theory, gluon, scattering amplitude, amplituhedron, decorated permutations, bosons, leptons, quarks, Planck scale, twistor theory, M-theory, multiverse, recurrent communicating classes, Cantor's hierarchy (relating to different sizes of infinity... If this sounds weird, stay tuned for a full episode on infinity. It will come out in a month or two.)

Terms (Philosophy and Psychology): Kant's phenomena and noumena, integrated information theory, global workspace theory, orchestrated objective reduction theory, attention schema theory

Books: The Case Against Reality by Hoffman, Enlightenment Now by Steven Pinker

Articles etc.: For links to articles, courses, and more, see https://onhumans.substack.com/p/links-for-episode-30
We all know about keeping our dogs in the yard with visual fencing. But wait… what about cattle? Well…
WATCH ON YOUTUBE: https://youtu.be/iInUMPcOqsI

In this episode, I speak to Stephanie Antonian. Stephanie is the founder and CEO of Aestora, an AI research lab focused on love.

The Human Podcast explores the lives and ideas of inspiring individuals. Subscribe for new interviews every week.
In the podcast episode, Dave Sobel discusses the current state of AI adoption among managed services providers (MSPs). According to Barracuda's global MSP report, MSPs are feeling the pressure to enhance their knowledge and application of AI products and services to meet customer demands. The report also highlights the increasing focus on security-related services, with MSPs anticipating a rise in revenue from recurring managed services in 2024.

The episode delves into the impact of AI on the labor market, as revealed by new research from MIT's Computer Science and Artificial Intelligence Lab. Contrary to previous fears, the study suggests that job destruction from AI will occur gradually, giving policymakers time to prepare and mitigate its effects. Additionally, a Slack report emphasizes the importance of providing proper training to employees using AI to avoid an increase in busy work and ensure productivity.

Salesforce's introduction of Slack Lists, a feature aimed at transforming Slack into a collaborative project management tool, is highlighted in the episode. The integration of project and task management functionalities within Slack is seen as a strategic move to enhance collaboration, improve accountability, and increase productivity for teams. The discussion emphasizes the need for proper training to maximize the potential of Slack Lists and the importance of support from Salesforce during the transition.

The episode also touches on initiatives such as CISA's Secure by Design and Gartner's recommendations for CISOs to enhance their cybersecurity approach. The focus on secure design practices, legislative developments, and building a resilient cyber workforce underscores the evolving landscape of cybersecurity. Furthermore, insights into AI jailbreaking and innovative workplace communication tools like Imovid provide a glimpse into the future of technology and the importance of staying ahead of industry trends for MSPs and software developers.

Three things to know today
00:00 AI Hype vs. Reality: Why Training and Security Remain Key for MSPs
03:38 Salesforce Enhances Slack with Project Management Features: Introducing Slack Lists
04:50 Government Initiatives, Cybersecurity Strategies, and AI Jailbreaking: Key Takeaways for Tech Professionals

Supported by:
https://coreview.com/msp
https://huntress.com/mspradio/

All our Sponsors: https://businessof.tech/sponsors/

Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/

Support the show on Patreon: https://patreon.com/mspradio/

Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessoftech.bsky.social
The world is governed by objective laws of physics. They explain the movements of planets, oceans, and cells in our bodies. But can they ever explain the feelings and meanings of our mental lives? This problem, called the hard problem of consciousness, runs very deep. No satisfactory explanation exists. But many think that there must, in principle, be an explanation.

A minority of thinkers disagree. According to these thinkers, we will never be able to explain mind in terms of matter. We will, instead, explain matter in terms of mind. I explored this position in some detail in episode 17.

But hold on, you might say. Is this not contradicted by the success of the natural sciences? How could a mind-first philosophy ever explain the success of particle physics? Or more generally, wouldn't any scientist laugh at the idea that mind is more fundamental than matter?

No — not all of them laugh. Some take it very seriously. Donald Hoffman is one such scientist. Originally working on computer vision at MIT's famous Artificial Intelligence Lab, Hoffman started asking a simple question: What does it mean to "see" the world? His answer starts from a simple idea: perception simplifies the world – a lot. But what is the real world like? What is “there” before our perception simplifies it? Nothing familiar, Hoffman claims. No matter. No objects. Not even a three-dimensional space. And no time. There is just consciousness.

This is a wild idea. But it is a surprisingly precise one. It is so precise, in fact, that Hoffman's team can derive basic findings in particle physics from their theory. A fascinating conversation was guaranteed. I hope you enjoy it. If you do, consider becoming a supporter of On Humans on Patreon.com/OnHumans.

ESSAYS AND NEWSLETTER

You can now find breakdowns and analyses of new conversations at OnHumans.Substack.com. Subscribe to the newsletter to get every new piece fresh from the shelf.

MENTIONS

Names: David Gross, Nima Arkani-Hamed, Edward Witten, Nathan Seiberg, Andrew Strominger, Edwin Abbott, Nick Bostrom, Giulio Tononi, Keith Frankish, Daniel Dennett, Steven Pinker, Roger Penrose, Sean Carroll, Swapan Chattopadhyay

Terms (Physics and Maths): quantum fields, string theory, gluon, scattering amplitude, amplituhedron, decorated permutations, bosons, leptons, quarks, Planck scale, twistor theory, M-theory, multiverse, recurrent communicating classes, Cantor's hierarchy (relating to different sizes of infinity... If this sounds weird, stay tuned for a full episode on infinity. It will come out in a month or two.)

Terms (Philosophy and Psychology): Kant's phenomena and noumena, integrated information theory, global workspace theory, orchestrated objective reduction theory, attention schema theory

Books: The Case Against Reality by Hoffman, Enlightenment Now by Steven Pinker

Articles etc.: For links to articles, courses, and more, see https://onhumans.substack.com/p/links-for-episode-30
Professor Raymond Mooney is the Director of the Artificial Intelligence Lab at the University of Texas at Austin. Having written his first paper on AI as a 17-year-old and spent his entire academic and professional career as an AI researcher, Prof. Mooney has become one of the most respected and distinguished thought leaders in the world on artificial intelligence and natural language processing. First, we talk about what the blanket term “artificial intelligence” really encompasses and dig into some of the groundbreaking research that Prof. Mooney has conducted in the recent past. Next, we look at how AI has evolved over the past 70 years and point out some important historical landmarks for the technology. We then examine how AI differs in academia and enterprise, and end the episode with a discussion of some of the potential negative ethical implications that AI can bring about, as well as some of the tremendous upsides the technology can have in the near future.
Welcome back to the Tech Policy Grind Podcast by the Internet Law & Policy Foundry! In this episode, Class 4 Fellow Lama Mohammed chats with Jiahao Chen, Founder and CEO of Responsible Artificial Intelligence, LLC; Amber Ezzell, Policy Counsel at the Future of Privacy Forum; and Juhi Koré of the UNDP's Chief Digital Office, in a recent panel on bias in artificial intelligence (AI). In the fourth event in a series of AI-related webinars leading up to The Foundry's Annual Policy Hackathon, Lama, Jiahao, Amber, and Juhi define AI bias, explain its harmful effects, and provide insights into global AI policy developments.

The experts who joined the episode:

Jiahao Chen is the Founder and CEO of Responsible Artificial Intelligence, LLC. Before founding Responsible AI, Jiahao was a Research Scientist at MIT's Computer Science and Artificial Intelligence Lab, where he co-founded and led the Julia Lab. There, he focused on applications of the Julia programming language, scientific computing, and machine learning.

Amber Ezzell is a Policy Counsel at the Future of Privacy Forum. In particular, she focuses on artificial intelligence and machine learning, and employee and workplace privacy.

Juhi Koré works within the UNDP's Chief Digital Office, where she manages digital products and contributes to fundraising and partnership efforts.

For more, listen to the entire conversation on YouTube. Check out the Foundry on Instagram, Twitter, or LinkedIn and subscribe to our newsletter! If you'd like to support the show, donate to the Foundry here or reach out to us at foundrypodcasts@ilpfoundry.us.

DISCLAIMER: Lama, Jiahao, Amber, and Juhi engaged with the Internet Law & Policy Foundry voluntarily and in their personal capacities. The views and opinions expressed on this show don't reflect the organizations and institutions they are affiliated with.
The Spatial Web is Coming, Part 3 #futureofai #artificialintelligence by Denise Holt Okay, so we build the Smart Technologies, but how do they translate into the Spatial Web? Enter The Spatial Web Foundation and VERSES Technologies, a next-gen AI company that is literally laying the foundation for the Spatial Web Protocol by establishing and defining an entirely new computing technology stack comprised of three tiers: Interface, Logic & Data. The Interface Tier blends spatial computing experiences, merging AR & VR experiences with our physical world. VERSES has created the Hyperspace Transaction Protocol (HSTP), using Hyperspace Modeling Language (HSML), as the foundation for a common networked terminal, to bring all the interface tier components together in order to facilitate an indexed and searchable Spatial Web Browser of every person, place or thing, both real and digital. As Dan Mapes of VERSES points out, “HTML lets you program a web page – HSML lets you program a web space.” The Logic Tier enables the parsing of this huge amount of new spatial & UX data through cognitive computing methods, powered by VERSES' flagship contextual computing AI Operating System called, KOSM™. VERSES is Blockchain agnostic which means you can use multiple chains and even operate a hybrid data layer using both DLT technologies and the cloud. VERSES KOSM™ includes their new Biomimicry-inspired Artificial Intelligence based on the “Free Energy Principle”, which will be part of every VERSES spatial application and will live on the network, updating itself while learning in real-time. On September 21, 2022, VERSES introduced the creation of its new Artificial Intelligence Lab and Sensor Fusion Research Facility, showcasing their technology portfolio, including KOSM™, and providing an immersive space for data science and product development teams to cultivate advanced adaptive intelligence solutions required for translating diverse data into contextual awareness between humans, machines and AI in physical and digital spaces. This brings us to the Data Tier. In a world where it will become more and more challenging to decipher between real and digital experiences (think, deep fakes), trust and verification will become paramount to our security. Distributed Ledger Technologies like Blockchain offer a “trust-less” cryptographically secure architecture. In DLTs, there is no need for any third-party entity, such as a corporation or government, or even attorney or accountant, to act as a trusted central authority figure. DLTs provide us with a real opportunity to shift data sovereignty, control, and security into the hands of individuals. Smart Contracts, at the heart of DLTs, are a programmable set of rules, stored on Blockchains, and run when certain pre-determined conditions are met. These automated, self-executing & immutable strings of code are recorded onto Distributed Ledger Blockchains, securing transactions & agreements by replacing static documents and the need for third party mediation. Through intelligent automation, Smart Contracts secure the management of property rights, spatial rights, proof of origination, verifiable traceability, and auto-execution of payments and transfer of assets, providing security, protecting privacy, and allowing risk-free interoperability — all essential to a favorable and prosperous augmented and networked Web 3.0 experience. Special thanks to Dan Mapes , President and Co-Founder, VERSES AI and Director of The Spatial Web Foundation. 
If you'd like to know more about The Spatial Web, I highly recommend a helpful introductory book written by Dan and his VERSES Co-Founder, Gabriel Rene, titled, “The Spatial Web,” with a dedication “to all future generations.” Listen to Part 4 to find out how to stay ahead of the curve, and why it is critical for everyone to be included in this new network. #futureofai #artificialintelligence #spatialweb #intelligentagents #aitools
Jan Becker is President, CEO and Co-Founder of Apex.AI, Inc. He is also the Managing Director of Apex.AI GmbH.

Key topics in this conversation include:
Understanding software-defined vehicles
Apex.Grace, Apex.Ida, and Apex.OS
Building the operating system for autonomous vehicles that is designed to never fail
Apex.AI's role enabling autonomy outside of automotive
Functional safety considerations for modular software
How Apex.AI is making their mark on the industry

Links:
Show notes: http://brandonbartneck.com/futureofmobility/janbecker
https://www.apex.ai/
https://www.linkedin.com/in/janbecker23/

Jan's bio
Jan Becker is President, CEO and Co-Founder of Apex.AI, Inc. He is also the Managing Director of Apex.AI GmbH, our subsidiary in Germany. Prior to founding Apex.AI, he was Senior Director at Faraday Future, responsible for Autonomous Driving, and Director at Robert Bosch LLC, responsible for Automated Driving in North America. He also served as a Senior Manager and Principal Engineer at the Bosch Research and Technology Center in Palo Alto, CA, USA, and as a senior research engineer for Corporate Research at Robert Bosch GmbH, Germany. Since 2010, Jan has been a lecturer at Stanford University on autonomous vehicles and driver assistance. Previously, he was a visiting scholar at the University's Artificial Intelligence Lab and a member of the Stanford Racing Team for the 2007 DARPA Urban Challenge. In 2019, Jan was appointed to serve on the external Advisory Board of MARELLI to provide strategic advice to the MARELLI Board. In 2018, he co-founded the Autoware Foundation and was on the foundation's board of directors until 2020. Jan earned a Ph.D. in control engineering from the Technical University of Braunschweig, Germany, a master's degree in mechanical and aerospace engineering from the State University of New York at Buffalo, USA, and a master's degree in electrical engineering from the Technical University of Darmstadt, Germany.

About Apex.AI
Apex.AI is a Palo Alto, Munich, Berlin, Stuttgart and Gothenburg-based company that is developing breakthrough safe, certified, developer-friendly, and scalable software for mobility systems. Our software products are based on proven open-source software, such as ROS or Eclipse iceoryx, so that we don't spend time redeveloping what already works. Instead, we fork software that has been developed and proven in use by large developer communities. We then add what is missing: functional safety, flawless performance, and support for application in commercial and safety-critical products. In order to do so, we have developed a proprietary process to rework open-source software in record time such that it conforms to the highest requirements of the applicable functional safety standard. We launched our award-winning first product Apex.Grace, formerly known as Apex.OS, after three years of development in 2020 and took it through certification in record time for launch in 2021.

Future of Mobility:
The Future of Mobility podcast is focused on the development and implementation of safe, sustainable, effective, and accessible mobility solutions, with a spotlight on the people and technology advancing these fields.

Edison Manufacturing:
Edison Manufacturing is your low-volume contract manufacturing partner for the build and assembly of complex mobility and energy products that don't neatly fit within traditional high-volume production methods.
Nikolaos Kourentzes is Professor of Predictive Analytics in the Artificial Intelligence Lab at the University of Skövde (Sweden), as well as a member of the Centre for Marketing Analytics and Forecasting at Lancaster University in the United Kingdom. In this episode, Nikos talks about the role of AI and judgement in forecasting, and what we as forecasters need to learn from other fields such as algorithmic learning. He continues with a discussion of temporal and hierarchical forecasting problems. On a more personal note, he shares with us his background, career and philosophy on the "why" behind the problems. And, finally, he discusses his new book, Principles of Business Forecasting (2nd edition), as well as recommendations for forecasting books and papers.
Can we build a Dolittle Machine, a piece of technology that will let us converse with the animals of planet Earth? Science fiction writer Matthew De Abaitua investigates how the latest advances in AI mean that this is now more in the realms of the possible, rather than in the purely fantastical. Starting in his garden with two cats, he finds himself in a tropical forest with big-brained hook-wielding birds, surveying multidimensional neural networks, and meets a woman who found out about her pregnancy from a dolphin. There are also robotic fish and sound pictures painted at high speed by fruit bats. What is Matthew's machine going to look like, how will it operate, and what will we learn from it all? Featuring: Linda Erb, vice president of animal care and training, Dolphin Research Center, Florida. Martha Nussbaum, professor of law and philosophy, University of Chicago. Diana Reiss, professor, Department of Psychology, Hunter College. Daniela Rus, roboticist, professor and Director of the Computer Science and Artificial Intelligence Lab, MIT. Natalie Uomini, researcher into New Caledonian crows and animal intelligence. Yossi Yovel, professor, Department of Zoology, Tel Aviv University. Extracts from ‘Songs of the Humpback Whale' used with permission from Ocean Alliance. Sperm whale sounds courtesy of Project CETI. Presenter: Matthew De Abaitua Producer: Richard Ward Executive Producer: Jo Rowntree A Loftus Media production for Radio 4
Our guest today earned his bachelor's degree in engineering in Algeria. Then he went to Sorbonne University in Paris to pursue his master's and PhD. He then joined MIT as a postdoc in computer science, where he worked in the MIT Computer Science and Artificial Intelligence Lab studying compilers for high-performance computing. Now he serves as an Assistant Professor at New York University Abu Dhabi, meaning he has lived in four countries on four continents! He is enthusiastic about giving back to the Algerian community by sharing his experience of studying abroad. In this episode, he will share invaluable career advice, in addition to some insight into his travel adventures.
Artificial Intelligence can make the world better or be a tool for war – says Lester Earnest, founder of the Artificial Intelligence Lab at Stanford, on Reality Asserts Itself with Paul Jay. This is an episode of Reality Asserts Itself, produced January 4, 2019.
“Patents block progress” – says Lester Earnest, founder of the Artificial Intelligence Lab at Stanford, on Reality Asserts Itself with Paul Jay. This is an episode of Reality Asserts Itself, produced January 2, 2019.
Profit and deception drove Cold War militarization, says Lester Earnest, founder of the Artificial Intelligence Lab at Stanford; Earnest says the SAGE radar system, built to defend against nuclear bombers, never worked, yet carried on for 25 years – Lester Earnest on Reality Asserts Itself with Paul Jay. This is an episode of Reality Asserts Itself, produced December 24, 2018.
Peak Human - Unbiased Nutrition Info for Optimum Health, Fitness & Living
Hey friends, thanks for tuning into Peak Human. I'm Brian Sanders, the creator of the much-anticipated Food Lies film. Things are going well with the post production so thank you for your continued support on Indiegogo and your patience. My guest today is Dr. Stephanie Seneff who has been studying the effects of glyphosate for almost 10 years now. She has put together TONS of science about why it's bad for you and the environment, even though it's marketed as safe. Of course Monsanto has led you and regulators to believe this. The tricky part here is that it supposedly doesn't affect human cells, but it does affect all the billions of bacteria in your microbiome and the soil biome. She'll get into all the details in this episode so listen on, my friend! Stephanie Seneff, PhD is a senior research scientist at MIT's Computer Science and Artificial Intelligence Lab. Her new book, Toxic Legacy, explores the harmful effects of the herbicide glyphosate on our health and the environment. Her book is backed by shocking evidence that points the finger at glyphosate as being responsible for many debilitating chronic diseases. While at MIT, she received four degrees: a bachelor's degree in biophysics as well as a M.S., E.E, and a PhD in electrical engineering and computer science. She has also written dozens of peer-reviewed papers on the effects of drugs, nutritional deficiencies, toxic chemicals, and diet on our health. Get her book! https://amzn.to/3hlMYeZ Before we jump in I'll give you my disclosures. I take no outside funding or sponsorships for this podcast or any of my work. The only thing supporting it is my company http://NoseToTail.org that I started well after I understood the health benefits of including quality animal foods in our diet. In other words I have no outside interests influencing the information I present and I am not creating this content to sell my products. My foundational belief is that animal foods are healthy so I connected with the best producers around the country to help people purchase them. We raise them humanely and sustainably. We don't use additives or curing agents. Everything is all natural. We use the animals nose to tail. We have products such as primal ground beef that includes liver, heart, kidney, and spleen for all the amazing nutrients and none of the hassle or taste you may not be used to. That's at http://NoseToTail.org We ship boxes straight to you and have free shipping options. We also have biltong if you want grass finished meat on the go. This is a traditional South African meat snack that's air dried, soft, and delicious. We have body care products made from beef tallow and only a couple other natural ingredients. We also have seasonings to go along with these wonderful products and make cooking that much easier. Make a custom box today at http://NoseToTail.org and take advantage of the free shipping options. That's how these interviews are all possible along with all the other content we produce on youtube and social media. Thanks for supporting the ranchers, my other producers, and my team. The other way you can take advantage of this win-win-win situation is at http://Sapien.org where we have the Sapien Tribe. It's a private members community where you get the extended show notes for these podcasts, discounts on NoseToTail products, zoom calls with Dr. Gary and myself, and a lot more. There's also the Sapien Program if you'd like some help making a lifestyle change. Go to http://Sapien.org to find out more. Now onto the show! GET THE MEAT! 
http://NosetoTail.org GET THE FREE SAPIEN FOOD GUIDE! http://Sapien.org SHOW NOTES [5:30] Her introduction to glyphosate. [12:30] How is glyphosate still being used? [17:10] How many diseases start in the gut? [20:30] Processed foods drive micronutrient deficiencies. [22:30] Get out in the sun without sunscreen or sunglasses. [25:10] The importance of cholesterol and sulphate. [29:30] Getting a tan protects us from sunburn. [34:30] The shikimate pathway. [39:00] Animal foods limit our exposure to glyphosate. [40:30] Glyphosate is driving the celiac epidemic. [48:30] The connection between virus deaths and glyphosate. [52:30] Glyphosate impairs many enzyme processes. [56:30] How glyphosate mimics glycine in our body. [1:00:00] Epigenetic effects of glyphosate. [1:02:30] Glyphosate depletes our soil. [1:06:00] Glyphosate was made to clean metals off pipes. [1:11:40] The problem with aluminum. [1:15:30] Tips for avoiding glyphosate and heavy metals. [1:21:20] Why sulfur is so important. GET THE MEAT! http://NosetoTail.org GET THE FREE SAPIEN FOOD GUIDE! http://Sapien.org Follow along: http://twitter.com/FoodLiesOrg http://instagram.com/food.lies http://facebook.com/FoodLiesOrg
In today's episode, our guest Ramin Hasani is here to talk about data quality and to touch on the world of robotics. He is a machine learning scientist at the Computer Science and Artificial Intelligence Lab at MIT. He completed his PhD (with distinction) in computer science at TU Wien, Austria. His research focuses on developing interpretable deep learning and decision-making algorithms.
A lot has happened in the first few weeks of 2021. Elon Musk has thrown himself into the Bitcoin world. Let's not forget Robinhood, GameStop and Big Tech. And this was just in one week! With everything that has happened, Joe decided to ask a good friend and fellow investor to appear back on the show to discuss all that has happened in the 9 months since their last interview. Alberto Artasanchez was the first guest on The Joe Robert Show, and you can count on him to give you facts and expert opinions about what is currently going on. Alberto Artasanchez is a principal data scientist and the director of the Artificial Intelligence Lab at Knowledgent (part of Accenture), with experience spanning over 25 years in multiple industries but with a focus on the financial industry. He has an extensive background in artificial intelligence and advanced algorithms, holds 6 AWS certifications including the Big Data Specialty certification, and publishes frequently in a variety of Data Science blogs. Connect with Joe Robert: http://www.joerobert.com Find him on all social platforms at @JoeMRobert Enjoyed the podcast? Be sure to subscribe on Apple Podcasts and leave a review. We love to hear your feedback, and please share this with others who would benefit.
Conquering Digital Health Barriers and Misconceptions was delivered by:
- Tory Cenaj, Founder/Publisher of Partners in Digital Health
- Dr. Amar Gupta, Professor of Computer Science & Artificial Intelligence Lab at MIT

The 2019 Converge2Xcelerate Conference in Boston, MA, was filmed LIVE on the Traders Network Show, hosted by Matt Bird.

To inquire about being a guest on this show or others:
Matthew Bird, CommPro Worldwide
C: +1 (646) 401-4499
E: matt@commpro.com
W: www.commpro.com

Visit http://tradersnetworkshow.com for more details about the show.
Overcoming Barriers to Innovation Adoption Discussion was delivered by:
- Dr. Amar Gupta, Professor of Computer Science & Artificial Intelligence Lab at MIT
- Joe Kristol, Senior Advisor to the Chairman at United Hatzalah
- George Matthew, CMO at DXC Technology

The 2019 Converge2Xcelerate Conference in Boston, MA, was filmed LIVE on the Traders Network Show, hosted by Matt Bird.

To inquire about being a guest on this show or others:
Matthew Bird, CommPro Worldwide
C: +1 (646) 401-4499
E: matt@commpro.com
W: www.commpro.com

Visit http://tradersnetworkshow.com for more details about the show.
Dr. Amar Gupta, Professor of Computer Science & Artificial Intelligence Lab at MIT, was interviewed LIVE on the Traders Network Show, hosted by Matt Bird, at the Red Carpet Event at the 2019 Converge2Xcelerate Conference in Boston, MA.

To inquire about being a guest on this show or others:
Matthew Bird, CommPro Worldwide
C: +1 (646) 401-4499
E: matt@commpro.com
W: www.commpro.com

Visit http://tradersnetworkshow.com for more details about the show.
PwC is becoming a leader in artificial intelligence (AI) by creating an Artificial Intelligence Lab that makes it easier to access AI models our data scientists are creating. In this episode, PwC AI Lab Leader James Larmer and PwC Data Scientist Judy Zhu describe how we’re enabling AI within PwC and consulting with clients about structuring successful data scientist teams.
The world is changing fast, and now is the time to decide whether we will change with it, and how we might make that happen. Business owners, in particular, are looking for new ways to invest and get ahead financially. For anyone in that frame of mind, it helps to turn to an expert.

Alberto Artasanchez is a principal data scientist and the director of the Artificial Intelligence Lab at Knowledgent (part of Accenture), with experience spanning over 25 years in multiple industries but with a focus on the financial industry. He has an extensive background in artificial intelligence and advanced algorithms, holds 6 AWS certifications including the Big Data Specialty certification, and publishes frequently in a variety of Data Science blogs. Mr. Artasanchez is also frequently tapped as a speaker on topics ranging from Data Science to Big Data and Analytics, Fraud Detection and more. He has an extensive track record designing and building end-to-end machine learning platforms at scale.

On the first episode of the Joe Robert Show, Joe and Alberto get into the details of cultivating a positive mindset and healthy body in plentiful days and tough ones alike (and how to find help doing so), starting out in real estate investment while keeping in balance with your full-time job, and all the ways the automations that have emerged from the COVID-19 quarantine will change the working world in the future. Listen in for an informative and inspiring interview to lift you up and leave you feeling prepared.

What You'll Learn:
How diversification of your investments helps you manage your risk even in the worst of times
What you need to know about navigating the value of cryptocurrency in the present and future economy
Why the rapid advancement of AI and technology poses risks along with the benefits we enjoy
And much more!

Favorite Quote:
“It's a lot of balls to be juggling, but what's the choice? So make sure you spend that hour on your health, make sure you spend that hour on your relationship, make sure you spend that hour on your finances, and keep those balls in the air, moving constantly.” - Alberto Artasanchez

Connect with Alberto Artasanchez:
alrencapital.com
thedatascience.ninja
alberto@alrencapital.com
Facebook
Twitter

Connect with Joe Robert:
Joe@robertventures.com
Twitter
LinkedIn
Facebook

Enjoyed the podcast? Be sure to subscribe on Apple Podcasts and leave a review. We love to hear your feedback, and we'd appreciate it if you helped us spread the word!
In this third and final episode, Capgemini's Didier Bonnet and Carol Bitter are joined by Neil Thompson - Research Scientist and Innovation Scholar at MIT's Computer Science and Artificial Intelligence Lab and the Initiative on the Digital Economy - as they discuss how externally developed capabilities and skills can be brought into a large corporation in the medium or long term. To get your copy of the Capgemini Invent and MIT Report: Lifting the lid on corporate innovation in the digital age, visit https://www.capgemini.com/news/invent-report-corporate-innovation/.
In this first episode, Capgemini's Didier Bonnet and Carol Bitter are joined by Neil Thompson - Research Scientist and Innovation Scholar at MIT's Computer Science and Artificial Intelligence Lab and the Initiative on the Digital Economy - as they discuss and debate the topic of identifying the critical capabilities needed for developing successful corporate innovation systems. To get your copy of the Capgemini Invent and MIT Report: Lifting the lid on corporate innovation in the digital age, visit https://www.capgemini.com/news/invent-report-corporate-innovation/.
In this second episode, Capgemini's Didier Bonnet and Carol Bitter are joined by Neil Thompson - Research Scientist and Innovation Scholar at MIT's Computer Science and Artificial Intelligence Lab - as they examine the topic of finding the balance between internal and external innovation sources through a clear architecture and finding a way to incorporate critical resources in-house. To get your copy of the Capgemini Invent and MIT Report: Lifting the lid on corporate innovation in the digital age, visit https://www.capgemini.com/news/invent-report-corporate-innovation/.
Advances in technology and computational power opened the doors to dissecting human genomic datasets and understanding the complex genetic interactions underlying disease. In this episode, Dr. Manolis Kellis discusses his recent paper with Dr. Tsai analyzing the transcriptomes of 48 postmortem Alzheimer’s Disease brain samples. Dr. Kellis is a Professor of Computer Science at MIT, the head of the MIT Computational Biology Group, a principal investigator in CSAIL (the Computer Science and Artificial Intelligence Lab), and an Institute Member at the Broad Institute.
Dr. Paul Yi and Dr. Haris Sair are both radiologists who co-direct RAIL, the Radiology and Artificial Intelligence Lab, which is part of the JHU Malone Center for Engineering in Healthcare. The purpose of this lab is to use innovations in machine and deep learning to advance the field of radiology and help manage doctors' caseloads. Dr. Paul Yi describes the concept of “transfer learning” that they use in RAIL as “basically using algorithms that were used to train Google's algorithms for image recognition, and then we apply that to radiology images.” Listen to the full podcast to learn more about how artificial intelligence is connecting doctors and engineers!
A new season of the Entrepreneurial Thought Leaders starts on October 9th! Guests this season include Arlan Hamilton, founder and managing partner of Backstage Capital; Barbara Liskov, Institute Professor at MIT’s Computer Science & Artificial Intelligence Lab; Srin Madipalli, accessibility and product manager at Airbnb; Sarah Nahm, co-founder and CEO of Lever; and Aileen Lee, founder of Cowboy Ventures and All Raise. Be sure to subscribe to the podcast to get new episodes delivered straight to you every Wednesday!
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we’re joined by Ahmed Elgammal, a professor in the department of computer science at Rutgers, and director of The Art and Artificial Intelligence Lab. In my conversation with Ahmed, we discuss:
• His work on AICAN, a creative adversarial network that produces original portraits, trained with over 500 years of European canonical art.
• How complex the computational representations of the art actually are, and how he simplifies them.
• Specifics of the training process, including the various types of artwork used, and the constraints applied to the model.
The complete show notes for this episode can be found at twimlai.com/talk/265.
Virtual assistants have come a long way in recent years, but they're still limited in the ability to interpret the spoken word and match objects with their corresponding descriptions with speed and accuracy. On today's episode, Adria Recasens from the MIT Computer Science and Artificial Intelligence Lab discusses the research they're doing to create a system that can outperform Siri and Alexa in terms of correctly and quickly identifying objects based on spoken descriptions, allowing for our interactions with computers to be more like interactions with other humans who speak our language. Recasens discusses the use cases of this technology, which include higher and better-functioning robots or personal assistants, as well as image analysis in medicine. He also discusses the areas of research they plan to explore in the near future, which include teaching the system different languages and concepts. Press play to hear the full conversation, and visit https://www.csail.mit.edu/ to learn more.
In early April, 1999, a time capsule was delivered to the famed architect Frank Gehry with instructions to incorporate it into his designs for the building that would eventually host MIT's Computer Science and Artificial Intelligence Lab, or CSAIL. The time capsule was essentially a museum of early computer history, containing 50 items contributed by the likes of Bill Gates and Tim Berners-Lee.
Developments in artificial intelligence have made our lives easier, but they have also sparked fears of replacement. The secret? Coupling man and machine. Stephen Boyd, Chair of the Electrical Engineering department at Stanford and head of BlackRock’s Artificial Intelligence Lab, discusses why from space travel to investing, the centaur may be the way of the future.

This material is intended for U.S. distribution only. This material is for informational purposes and is prepared by BlackRock, is not intended to be relied upon as a forecast, research or investment advice, and is not a recommendation, offer or solicitation to buy or sell any securities or to adopt any investment strategy. The opinions expressed are as of date of publication and are subject to change. The information and opinions contained in this material are derived from proprietary and nonproprietary sources deemed by BlackRock to be reliable and are not guaranteed as to accuracy or completeness. This material may contain 'forward looking' information that is not purely historical in nature. There is no guarantee that any forecasts made will come to pass. Reliance upon information in this material is at the sole discretion of the reader. Past performance is not indicative of current or future results. This information provided is neither tax nor legal advice and investors should consult with their own advisors before making investment decisions. Investment involves risk including possible loss of principal.

©2019 BlackRock, Inc. All Rights Reserved. BLACKROCK is a registered trademark of BlackRock, Inc. All other trademarks are those of their respective owners. 673994
In today’s episode I welcome you to the Museum of Non-Human Art, a brand new gallery full of art made entirely by machines, computers, algorithms, robots and other non-human entities. I hope you enjoy your visit! To see pictures of any of the artworks we talked about on this show, head to the website!

Guests:
Elizabeth Stephens, Australian Research Council Future Fellow in the Institute for Advanced Studies in the Humanities at the University of Queensland
Michael Noll, computer artist, professor emeritus at the Annenberg School for Communication and Journalism at the University of Southern California
Ahmed Elgammal, director of the Art & Artificial Intelligence Lab at Rutgers University
Orit Gat, art critic & writer
Xiaoyu Weng, Robert H. N. Ho Family Foundation Associate Curator of Chinese Art at the Guggenheim

Further Reading:
Do Androids Dream of Electric Bananas?
Machines in the Garden
Automata by Jacquet-Droz
The Story of Jacquet-Droz
When the Machine Made Art
"Incredible Machine" (1968) — main-title animation sequence for award-winning movie by Owen Murphy Productions for the American Telephone & Telegraph Company
"Patterns by 7090," by Michael Noll
"Computer Generated Ballet" by Michael Noll
Human or Machine: A Subjective Comparison of Piet Mondrian's 'Composition with Lines' and a Computer-Generated Picture
CAN: Creative Adversarial Networks Generating "Art" by Learning About Styles and Deviating from Style Norms
Quantifying Creativity in Art Networks
Large-Scale Classification of Fine-Art Paintings: Learning the Right Metric on the Right Feature
A Computer Vision System for Artistic Influence Mining
Tales of Our Time Exhibit at the Guggenheim
Sun Yuan & Peng Yu: Tales of Our Time

Flash Forward is produced by me, Rose Eveleth. The intro music is by Asura and the outro music is by Hussalonia. The episode art is by Matt Lubchansky. Want to send a note or idea? Spotted a reference? Email me at info@flashforwardpod.com. Want to donate? Head this way. Can't give money? Leave a review on Apple Podcasts! Tell a friend about the show! Learn more about your ad choices. Visit megaphone.fm/adchoices
Globally, there are approximately 3,000 motor vehicle deaths per day, 90% of which are due to human error such as distracted driving or impaired driving (drugs, alcohol, sleep deprivation). With statistics like these, it's no wonder that auto manufacturers are in a race against time, and each other, to develop self-driving cars that will meet the challenges of all driving conditions. Teddy Ort, a researcher and graduate student at MIT's Computer Science & Artificial Intelligence Lab (CSAIL), working in the Distributed Robotics Laboratory, provides an interesting look at the future of self-driving vehicles. The MIT researcher discusses his research program's goal of developing the algorithms and artificial intelligence (AI) necessary to enable a car to avoid all motor vehicle accidents. We'll learn why self-driving cars are not available to the public quite yet. While AI may be perfectly successful within a well-established grid such as a city, it may not score as well in rural areas that are not as delineated through mapping technology. Mr. Ort provides an overview of some of the technical issues that must be overcome before self-driving vehicles rule the roads. Though it may seem sensible that these cars simply do the driving when the tech is able, then hand over control to a human when needed, data suggests this 'passing of the reins' is a sticky problem indeed. The MIT researcher gives an overview of the impediments to rural driving for these self-drivers, and how laser-scanning technology will provide the data necessary to read roads in much the same way a human would. And with unmarked roads comprising approximately 60% of all roads in the US, it's easy to see why camera and laser-scanning technology will have to rise to the challenges of rural driving. Further, AI-based technology will still have a learning curve when it comes to weather and reflective surfaces, for while a human can easily decipher that a car's image seen in a rainy road reflection is not real, AI must learn this skill. Though challenges certainly lie ahead, Teddy Ort informs us that these self-driving cars are on their way to our garages, but exactly when that day will come remains unknown.
Dr. Richard Matthew Stallman is a software developer and software freedom activist. He worked at the Artificial Intelligence Lab at MIT from 1971 to 1984, where he learned operating system development and, in 1976, wrote the first extensible Emacs text editor. He also developed the AI technique of dependency-directed backtracking, known as truth maintenance. In 1983 Stallman announced the project to develop the GNU operating system, a Unix-like operating system meant to be entirely free software, and has been the leader of the project ever since. With that announcement he also launched the Free Software Movement. Stallman pioneered the concept of copyleft, is the main author of the GNU General Public License, and gives speeches frequently about free software and related topics. For comprehensive show notes, a complete bio and links in the episode, take a look below or click the episode title.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
For today’s show, the final episode of our Black in AI Series, I’m joined by Zenna Tavares, a PhD student in both the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Lab at MIT. I spent some time with Zenna after his talk at the Strange Loop conference titled “Running Programs in Reverse for Deeper AI.” Zenna shares some great insight into his work on program inversion, an idea which lies at the intersection of Bayesian modeling, deep learning, and computational logic. We set the stage with a discussion of inverse graphics and the similarities between graphics inversion and vision inversion. We then discuss the application of these techniques to intelligent systems, including the idea of parametric inversion. Last but not least, Zenna details how these techniques might be implemented, and discusses his work on ReverseFlow, a library to execute TensorFlow programs backwards, and Sigma.jl, a probabilistic programming environment implemented in the dynamic programming language Julia. This talk packs a punch, and I’m glad to share it with you. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/114. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018
AI is being used to enhance and improve life in varied and often incredible ways. But what if we could use AI to improve the end of our lives, too? Our guest today is Anand Avati, a graduate student in the Artificial Intelligence Lab at Stanford University's Computer Science Department. Anand is co-author of a research paper entitled "Improving Palliative Care with Deep Learning" which details his team's use of a deep learning system to predict patient mortality with the aim of improving access to palliative care for critically-ill patients.
In this podcast, you will listen to an interview with Brody Foy and Logan Graham, directors of Givology's partner organization RAIL, a scholar-run consultancy that aims to apply machine learning techniques to interesting problems.
Click Here Or On Above Image To Reach Our ExpertsSecurity Expert Says, "We Can Now Spy On Human Emotions" Emotional surveillance has an undeniably dystopian vibe, like George Orwell's 1984, but it's not science fiction. Banks are already signing up for services that incorporate it into their analysis of behavior. A startup founded by MIT graduates called Humanyze has created a sensor-laden badge that transmits data on speech, activity, and stress patterns.One of these days, the walls may know when you're happy, sad, stressed or angry by using an experimental device unveiled Tuesday by researchers at the Massachusetts Institute of Technology that uses wireless signals to recognize emotions through subtle changes in breathing and heartbeat.Computer scientist Dina Katabi and her colleagues at the university's Computer Science and Artificial Intelligence Lab developed a radar system for vital signs that uses reflected radio signals to track movements, moods and behavior, with potential applications for smart homes, offices and hospitals.They posted their new research online Tuesday and plan to present their test results next month at a mobile-computing conference in New York.These wireless signals—a thousand times less powerful than conventional Wi-Fi—are designed to bounce off anyone within range, capturing variations in vital signs that can be analyzed quickly by a computer algorithm able to detect emotional states, the researchers said. To distinguish one mood from another, their system measures patterns of respiration, cardiac rhythms, and minute variations in the length of each individual heartbeat.“All of us share so much in how our emotions affect our vital signs,” said Dr. Katabi. “We get an accuracy that is so high that we can look at individual heartbeats at the order of milliseconds.”The system, which they call EQ-Radio, is 87% accurate at detecting whether a person is joyful, angry, sad or content, they said.By providing an accurate readout of moods, the system promises to loop people more directly into wireless sensor networks, the researchers said. While still experimental, the system could one day give buildings the capacity to respond automatically to changes in vital signs among the people living or working in them, without a need for explicit commands or a direct link to a body sensor, the researchers said.A hospital emergency room might automatically monitor patients awaiting treatment. An amusement park might modulate special effects by monitoring the involuntary reactions of people on an exhilarating ride. A house might one day react to a family's stress by playing pleasant music.PRO-DTECH II FREQUENCY DETECTOR(Buy/Rent/Layaway)“We have explored this idea of allowing a home to recognize someone's emotions and adapt to it,” said project researcher Fadel Adib. “The idea is to enable you to seamlessly interact with your home.”The team is already testing an earlier version of the system that tracks movements and behavior in about 15 homes in the Boston area, including that of Dr. Katabi. She uses it to monitor her sleep patterns and eating habits. It can track movements even if the person is in another room.“I would really like future homes to be more health aware,” she said.In the research made public Tuesday, Dr. 
Katabi and her colleagues tested the wireless system on 10 women and 20 men, between 19 and 77 years old, while in a standard office setting, which contained desks, chairs, couches and computers.CELLPHONE DETECTOR (PROFESSIONAL)(Buy/Rent/Layaway)During the tests, the volunteers sat from three to 10 feet away from the wireless sensors while attempting to evoke specific emotions by recalling emotion-rich memories. As a control, their vital signs during the experiment were also monitored using conventional electrocardiography and a video-based emotion recognition system that homes in on facial expressions.PRO-DTECH III FREQUENCY DETECTOR(Buy/Rent/Layaway)All told, the researchers collected measurements of 130,000 individual heartbeats. To classify the mood changes, the computer employed a machine learning algorithm to match the waveforms within each heartbeat.PRO-DTECH III FREQUENCY DETECTOR(Buy/Rent/Layaway)When they compared results, they found that the experimental system was almost as accurate in recognizing changes in emotion as the electrocardiograms. It was about twice as accurate as the facial cues recorded by the video system, they said.“We use the wireless signal to obtain the changes in the vital signs and then run a machine learning algorithm to get to emotions,” she said. “The algorithm can immediately recognize the emotions of someone new.”PRO-DTECH III FREQUENCY DETECTOR(Buy/Rent/Layaway)Wall Street Uses Technology To Spy On Traders Emotional StateThe trader was in deep trouble. A millennial who had only recently been allowed to set foot on a Wall Street floor, he made bad bets, and in a panic to recoup his losses, he'd blown through risk limits, losing $4.9 million in a single afternoon.WIRELESS/WIRED HIDDENCAMERA FINDER III(Buy/Rent/Layaway)It wasn't a career-ending day. The trader was taking part in a simulation run by Andrew Lo, an MIT finance professor. The goal: find out if top performers can be identified based on how they respond to market volatility. Lo had been invited into the New York-based global investment bank—he wouldn't say which one—after giving a talk to its executives. So in 2014, unknown to the outside world, he rigged a conference room with monitors to create a lab where 57 stock and bond traders lent their bodies to science.PRO-DTECH IV FREQUENCY DETECTOR(Buy/Rent/Layaway)Banks have already set up big-data teams to harvest insights from the terabytes of customer information they possess. Now they're looking inward to see whether they can improve operations and limit losses in their biggest cost center: employees. Companies including JPMorgan Chase and Bank of America have had discussions with tech companies about systems that monitor worker emotions to boost performance and compliance, according to executives at the banks who didn't want to be identified speaking about the matter.As machines encroach on humans' role in the markets, technology offers a way to even the fight. The devices Lo used—wristwatch sensors that measure pulse and perspiration—could warn traders to step away from their desks when their emotions run wild. They could also be used to screen hires to find those whose physiology is best suited to risk-taking—what interested the bank that allowed the MIT study.Wireless Camera Finder(Buy/Rent/Layaway)The most promising application, and the one with the most profound privacy issues, would be for keeping tabs on employees, Lo says. 
Wall Street Uses Technology To Spy On Traders' Emotional State

The trader was in deep trouble. A millennial who had only recently been allowed to set foot on a Wall Street floor, he made bad bets, and in a panic to recoup his losses, he'd blown through risk limits, losing $4.9 million in a single afternoon.

It wasn't a career-ending day. The trader was taking part in a simulation run by Andrew Lo, an MIT finance professor. The goal: find out if top performers can be identified based on how they respond to market volatility. Lo had been invited into the New York-based global investment bank—he wouldn't say which one—after giving a talk to its executives. So in 2014, unknown to the outside world, he rigged a conference room with monitors to create a lab where 57 stock and bond traders lent their bodies to science.

Banks have already set up big-data teams to harvest insights from the terabytes of customer information they possess. Now they're looking inward to see whether they can improve operations and limit losses in their biggest cost center: employees. Companies including JPMorgan Chase and Bank of America have had discussions with tech companies about systems that monitor worker emotions to boost performance and compliance, according to executives at the banks who didn't want to be identified speaking about the matter.

As machines encroach on humans' role in the markets, technology offers a way to even the fight. The devices Lo used—wristwatch sensors that measure pulse and perspiration—could warn traders to step away from their desks when their emotions run wild. They could also be used to screen hires to find those whose physiology is best suited to risk-taking, which is what interested the bank that allowed the MIT study.

The most promising application, and the one with the most profound privacy issues, would be keeping tabs on employees, Lo says. Risk managers could use it to spot problems brewing on a specific desk, such as unauthorized trading, before too much damage is done. "Imagine if all your traders were required to wear wristwatches that monitor their physiology, and you had a dashboard that tells you in real time who is freaking out," Lo says. "The technology exists, as does the motivation—one bad trade can cost $100 million—but you're talking about a significant privacy intrusion."

Emotional surveillance has an undeniably dystopian vibe, like a finance version of George Orwell's 1984, but it's not science fiction. Banks are already signing up for services that incorporate it into their analysis of behavior. A startup founded by MIT graduates called Humanyze has created a sensor-laden badge that transmits data on speech, activity, and stress patterns. Microphones and proximity sensors on the gadgets help employers understand what high-performing teams are doing differently from laggards. The Boston-based company is close to announcing a deal with a bank that's moving some employees to new offices, according to Chief Executive Officer Ben Waber. The bank wants to use Humanyze badges to determine seating locations for traders, asset managers, and support staff to improve productivity, he says.

Another startup, Behavox, uses machine-learning programs to scan employee communications and trading records. Emotional analysis of telephone conversations is part of a worker's overall behavioral picture, according to founder Erkin Adylov, a former Goldman Sachs research analyst. When a worker deviates from established patterns—shouting at someone he's trading with when previous conversations were calm—it could be a sign that further scrutiny is warranted. "Emotion recognition and mapping in phone calls is increasingly something that banks really want from us," says Adylov, whose company is based in London. "All the things you do as a human are driven by emotions."

Emotions are reflexes that developed to drive behavior, scientists say, improving our prospects of seizing opportunity and surviving risk. They're accompanied by measurable physiological changes such as increased blood pressure, sweating, and a pounding heart. Their role in investing has been recognized since at least the time of economist Benjamin Graham, the father of value investing. More recently, John Coates, a University of Cambridge neuroscientist and former derivatives trader, has studied how financial risk takers' decisions are influenced by biology. His experiments, chronicled in a 2012 book, The Hour Between Dog and Wolf, show that hormones such as testosterone and cortisol play a part in exacerbating booms and busts.

The volunteers in Lo's study were given a $3 million risk limit and told to make money in markets including oil, gold, stocks, currencies, and Treasuries. They came from across the bank's fixed-income and equity desks and ranged from junior employees to veterans with 15 years of experience. Top traders have a signature response to volatility, says Lo, who plans to publish his findings by next year. Rather than being devoid of feeling, they are emotional athletes. Their bodies swiftly respond to stressful situations and relax when calm returns, leaving them primed for the next challenge. The top performer made $1.1 million in a couple of hours of trading.

Those who fared less well, like the trader who lost almost $5 million, were hounded by their mistakes and remained emotionally charged, as measured by their heart rate and other markers such as cortisol levels, even after the volatility subsided. Lo's findings suggest there's a sweet spot for emotional engagement: too much, and you're overly aggressive or fearful; too little, and you aren't involved enough to care. Veteran traders had more controlled responses, suggesting that training and experience count.

There are other ways to infer emotional states. Researchers led by Kellogg School of Management professor Brian Uzzi pored over 1.2 million instant messages sent by day traders over a two-year period. They found that, as in Lo's study, having too much or too little emotion made for poor trades. Uzzi, whose study was published this year, says he's working with two hedge funds to design a product based on the research.

As younger traders accustomed to biometric devices like the Fitbit enter the industry, applications designed to boost performance and monitor employees will become commonplace, says Lo, who expects them to be widespread in less than 10 years. "The more data we have, the more we're able to characterize the emotional state of the individual," he says. "Everybody will have to have these kinds of analytics."
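Lo's description of a dashboard that "tells you in real time who is freaking out" amounts to streaming each trader's pulse and skin-conductance readings and flagging departures from that trader's own recent baseline. The sketch below illustrates that idea only; the column names, the ten-minute window, and the z-score threshold are invented for illustration and are not taken from Lo's study.

```python
# Illustrative sketch of the "dashboard" idea quoted above: flag timestamps
# where a trader's heart rate or skin conductance drifts far above that
# trader's own rolling baseline. Column names, window size, and threshold
# are assumptions for illustration only.
import pandas as pd

def flag_stress(readings: pd.DataFrame, window: str = "10min", z_thresh: float = 2.5) -> pd.Series:
    """Return a boolean series marking timestamps where either signal
    exceeds its rolling mean by more than z_thresh rolling standard deviations."""
    flags = pd.Series(False, index=readings.index)
    for col in ("heart_rate", "skin_conductance"):
        rolling = readings[col].rolling(window)
        z = (readings[col] - rolling.mean()) / rolling.std()
        flags |= z > z_thresh
    return flags

# Synthetic minute-level readings for one trader, with a simulated spike.
idx = pd.date_range("2024-01-02 09:30", periods=120, freq="min")
df = pd.DataFrame({"heart_rate": 70.0, "skin_conductance": 4.0}, index=idx)
df.loc[idx[90:], ["heart_rate", "skin_conductance"]] = [110.0, 9.0]
print(flag_stress(df).sum(), "flagged minutes")
```

Comparing each trader against their own rolling baseline, rather than a fixed cutoff, is what would keep someone with a naturally fast heart rate from being flagged all day.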
Detecting Emotions In Thin Air

One of the most writerly things a person can do is to characterize air as thick, or emotions as tangible. Sadness lingers in the air. The best dinner parties are powered by palpable tension. The practice suggests that you are keenly attuned to your surroundings. Beyond observant, you use your senses in ways others had not thought possible. That is why people want to have sex with writers.

But if you told me that the air is actually transmitting chemical signals that influence emotions between humans, I would add you to a list that I keep in my head. It's not a bad list, per se, but it is titled "Chumps."

One person who would not be on that list is Jonathan Williams. An atmospheric chemist, he describes himself as "one of those wandering scientific souls," but not in an annoying way. He maintains a jovial British lilt after moving to Colorado to work at the National Oceanic and Atmospheric Administration, and then to Germany for a job with the Max Planck Institute (which describes itself as "Germany's most successful research organization"). There Williams and his colleagues study air.

They focus on gases that come from vegetation in the tropics, as well as from the carbon industry. In doing so, the chemists use finely calibrated machines that sense the slightest changes in the contents of air. Taking measurements in the field, Williams and his colleagues always noticed that when they themselves got too close to the machines, everything went haywire.

That made sense, in that humans are bags of gas. As breathing people know, we tend to emit carbon dioxide. (Though each exhalation still contains about four times as much oxygen as carbon dioxide.) And there are many subtler ingredients in the concoctions we breathe out. So Williams began to wonder, are these gases "significant on a global scale"? Could they be, even, contributing to climate change? Especially as the number of humans on Earth rockets toward 8 billion?

The answer was no. Just a clear, simple no. By measuring gases in soccer stadiums, the Planck chemists found no consequence of human breath. There might be some effect at a global scale, but it's just nothing compared to the air-ravaging effects of transportation and agriculture.

But Williams didn't come away from the stadium empty-handed. As he sat and watched the fluctuating readings on the air sensors, he got an idea. In the manner of a typical European soccer crowd, the people went through fits of elation and anger, joy and sorrow. So Williams began to wonder, as he later put it to me, "Do people emit gases as a function of their emotions?"

If we do, it wouldn't be unprecedented. Tear some leaves off of a tree, for example, and it will emit chemical signals that may be part of a system of communication between trees. The behavior of bees and ants is clearly chemically dominated. "We're not like that—not like robots following chemicals," Williams explained. "But it could be possible that we are influenced by chemicals emitted by other humans."

The idea of airborne pheromones—chemicals that specifically influence mating behaviors—has been a source of much fascination, but the actual evidence is weak. Some small studies have suggested an effect when people put cotton balls under their armpits, and then other people smell the balls—but in minor, unreliable ways. "I don't know why so many previous researchers have been so into armpits," said Williams. "A much better way to communicate would be through your breath. Because you can direct your breath, and your breath is at roughly the same height as the person you're trying to communicate to, silently. In the dark, maybe, in your cave." And if these behavior-modifying volatile chemicals exist (volatile meaning anything that goes into the air), then why would they be limited to sex? Why shouldn't we be able to signal fear or anxiety? It is true that birds seem to know that I'm afraid of them.

Williams was so intrigued by the idea of gases and emotion that he designed another experiment—something more predictable than a German soccer game. This time he used a movie theater. Unlike the open-air stadium, the theater presented fewer variables. "You've got this box, the cinema, and you spool through air from outside at a continuous rate, and you have 250 people sitting there, not moving. And you show them all, simultaneously, something that should make them frightened or anxious or sad, or whatever."

The changes in any one person's breath might be minuscule, but a crowd of breathers could be enough to overcome the rest of the background signals. And more importantly, unlike a soccer match, the experiment could be done with the same film again and again. This could test the reproducibility of findings, which is critical to science.

Rigging a mass spectrometer into the outflow vent of the theater, the Kino Cinestar in Mainz, Williams had a sense that the experiment was something of a lark. "I thought, we're probably just going to get a big mixture of popcorn and perfume," he said. But, nonetheless, to measure relationships between scenes and gases, his team meticulously mapped out and labeled every scene in 16 films, from beginning to end. In 30-second increments, the team labeled each by its quality (kiss, pet, injury), as well as its emotional elements, using a finite set of descriptors.
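The cinema setup implies a simple alignment problem: bucket the trace from the mass spectrometer in the outflow vent into the same 30-second increments used for the scene labels, then compare gas levels across labels and across repeated screenings of the same film. The sketch below uses synthetic stand-ins; the CO2 example, the label vocabulary, and the data layout are assumptions for illustration, not the Max Planck team's actual pipeline.

```python
# Sketch of the alignment implied above: average the gas trace over 30-second
# bins matching the scene labels, then compare mean levels per label. The
# compound, labels, and data layout are illustrative assumptions only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic stand-ins: a second-by-second CO2 trace from the outflow vent,
# and one emotional label per 30-second scene slot (30 minutes of film).
seconds = pd.date_range("2016-01-01 20:00", periods=1800, freq="s")
gas = pd.DataFrame({"co2_ppm": 450 + rng.normal(0, 5, len(seconds))}, index=seconds)
labels = pd.Series(rng.choice(["neutral", "suspense", "comedy"], size=60), name="label")

# Average the trace over the same 30-second bins used for the scene labels.
binned = gas.resample("30s").mean().reset_index(drop=True)
binned["label"] = labels

# Mean gas concentration per emotional label.
print(binned.groupby("label")["co2_ppm"].mean())
```

Because the same film can be screened repeatedly, these per-label averages can be recomputed for each screening and compared, which is the reproducibility check described above.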
The dispute between Apple and the FBI may have been settled out of court, but that doesn't mean the fight is over. With Congress on the verge of considering new legislation to compel technology companies to decrypt data, the Going Dark debate is alive and well. Last week, on a panel at the IAPP Global Privacy Summit in Washington, D.C., Lawfare's Editor-in-Chief Ben Wittes and Daniel Weitzner discussed the fallout from the battle between Apple and the FBI and what is likely to come of the Going Dark debate. Weitzner is the Director of the MIT Internet Policy Research Initiative and Principal Research Scientist at the MIT Computer Science and Artificial Intelligence Lab; he was formerly the United States Deputy Chief Technology Officer for Internet Policy at the White House. He and Ben parse the contours of the recent dispute between the Bureau and the technology giant, explore the boundaries of commercial-use encryption, and debate the role of backdoors in law enforcement investigations. They conclude with thoughts on the policy implications of the latest reemergence of the cryptowars.
A researcher at MIT’s Computer Science and Artificial Intelligence Lab, Michael Stonebraker has founded and led nine different big-data spin-offs, including VoltDB, Tamr and Vertica - the latter of which was bought by Hewlett Packard for $340 million. Now he’s bringing his insights to a new online course being offered this month through edX and MIT Professional Education. Co-taught by long-time business partner Andy Palmer, “Startup Success: How to Launch a Technology Company in 6 Steps” covers topics ranging from generating ideas and recruiting top talent to pitching VCs and negotiating deals - all in the span of three weeks.
Maybe people and robots can co-exist in the workplace after all. Automation isn't always about replacing humans with machines. In fact, recent advances in industrial technology are allowing them to work side by side. In this episode of the podcast, Julie Shah describes the work her team is doing on "scheduling the choreography of robots" in factories and distribution centers. She heads up the Interactive Robotics Group at MIT's Computer Science and Artificial Intelligence Lab. In the past, Shah says, robots and people often have had to be segregated in industrial environments, for reasons of safety and efficiency. Now they can function in much closer quarters, thanks to new safety features and algorithms that manage changes in the production line with maximum efficiency. It's all about "understanding where people provide the most value," she says.
Andrew Y. Ng is the director of the Artificial Intelligence Lab and associate professor in computer science at Stanford University.
Audio File: Download MP3
Transcript: An Interview with Helen Greiner, Co-founder and Chairman of the Board, iRobot Corp.
Date: June 11, 2007

NCWIT Interview with Helen Greiner

BIO: In the early days of iRobot Corp. (Nasdaq: IRBT), co-founder and Chairman of the Board Helen Greiner envisioned robots as the basis for an entirely new class of products that would improve life by taking on dangerous and undesirable tasks. Greiner's vision has been brought to life by products such as the iRobot Roomba® Vacuuming Robot, which has sold more than 2 million units to consumers throughout the world, and the iRobot PackBot® Tactical Mobile Robot, which is helping to save soldiers' lives in Iraq and Afghanistan. Greiner's nearly 20 years in robot innovation and commercialization includes work at NASA's Jet Propulsion Lab and MIT's Artificial Intelligence Lab, where she met iRobot co-founders Colin Angle and Rodney Brooks. Before founding iRobot in 1990, Greiner founded California Cybernetics, a company focused on commercializing NASA Jet Propulsion Lab technology and performing government-sponsored research in robotics. Greiner holds a bachelor's degree in mechanical engineering and a master's degree in computer science, both from MIT.

In 2005, she led iRobot through its initial public offering. She also guided iRobot's early strategic corporate growth initiatives by securing $35 million in venture funding to finance iRobot's expansion in the consumer and military categories. In addition, Greiner created iRobot's Government & Industrial Robots division, starting with government research funding that led to the first deployment of robots in combat in Operation Enduring Freedom. Currently, the division is shipping iRobot PackBot robots for improvised explosive device (IED) disposal in Iraq. In part because of the success of these initiatives, Greiner has helped enhance public acceptance of robots as one of today's most important emerging technology categories.

Greiner was named by the Kennedy School at Harvard, in conjunction with US News and World Report, as one of America's Best Leaders and was recently honored with the Pioneer Award from the Association for Unmanned Vehicle Systems International (AUVSI) in appreciation for her work in military robotics. Greiner has been honored by the World Economic Forum as both a Global Leader for Tomorrow and a Young Global Leader. In 2005 Good Housekeeping Magazine named her "Entrepreneur of the Year," and Accenture honored her as "Small Business Icon" in its Government Women Leadership Awards. In 2003, Greiner was recognized by Fortune Magazine as one of its "Top 10 Innovators of 2003" and named the Ernst and Young New England "Entrepreneur of the Year" with cofounder Colin Angle. Greiner won the prestigious "DEMO God" award at the DEMO 2000 Conference. In 1999, she was named an "Innovator for the Next Century" by Technology Review Magazine.

Lucy Sanders: Hi, this is Lucy Sanders. I am the CEO of the National Center for Women and Information Technology, or NCWIT. This is part of a series of interviews that we are having with fabulous IT entrepreneurs, women who have started IT companies in a variety of different sectors, all of whom have absolutely fabulous stories to tell us about being entrepreneurs. With me doing these interviews is Larry Nelson from w3w3.com. Hi, Larry. How are you?
Larry Nelson: Well, hello. Boy, am I happy to be here.
Lucy: Why don't you tell us a little bit about w3w3, because these will be podcasts on w3w3 as well as on the NCWIT website.
Larry: Well, just briefly, we started in 1998, before anybody knew what radio on the Internet was all about. And finally we learned a number of interesting lessons. We started doing podcasting a little over a year ago, so that's a big leap since then. We have been very fortunate to have a number of interviews with top‑notch heavy hitters, but after I saw the list that Lucy put together I was just absolutely stunned.
Lucy: To really just get right to it, the person we are interviewing today is Helen Greiner. She is the co‑founder and chairwoman of iRobot. I have to admit up front that I am an iRobot stockholder, and Helen knows I am one of her best salespeople ‑‑ maybe not her best salesperson but certainly one of her salespeople.
Helen Greiner: I hope you are not just a stockholder, but I hope you are also a Roomba owner.
Lucy: I am a Roomba owner. It's getting double duty now because we're doing a kitchen renovation, and we set it loose in the house at night to pick up all the dust and stuff, so it's getting a workout, Helen.
Helen: You'll be needing the Dirt Dog model for workshops and construction areas.
Lucy: Absolutely.
Larry: We're going to have to have a link to all of these on the website.
Lucy: Absolutely. We are really happy to have you here, Helen. We are really looking forward to talking to you about entrepreneurship.
Larry: You know, I can't help but wonder: we have four daughters, and how did you, Helen, get really involved and interested in technology?
Helen: Well, I think this is a common story in technology, but I was inspired by science fiction. I went to see "Star Wars" when I was 11 on the big screen, and I was enthralled by R2‑D2 because he was a character. He had a personality and a gender, and he was more than a machine. I was inspired to start thinking about, can you build something like that? As I was hacking on my little TRS-80 personal computer, obviously I had no idea just how complex it would be.
Lucy: What are you thinking about those new mailboxes that are R2‑D2 mailboxes, Helen?
Helen: I think they're pretty damn cool.
Lucy: I think it's pretty cool. As a technologist you obviously look at a lot of different technologies. I am sure you have some on your radar screen that you think are particularly cool and compelling. Maybe you could share some of those with us.
Helen: Well, of course, the coolest is robots, because they are just on the cusp of adoption today. Other than the robots, the things that very well might feed into the robots are large-scale memories, multiple-core processors, and cameras on cell phones. Technologies, as they go to mass market, are getting cheaper and cheaper, which enables bringing them into other applications, like on the robots.
Larry: I just want to make sure that the listeners do understand that you are talking about robots everywhere from the kitchen to Iraq.
Helen: Yes. We have over two million Roombas out there in people's homes doing the floor sweeping and vacuuming. We have a floor washing robot, the Scooba, that you just leave on your floor and when you come back it's clean. We have a robot for the workshop called the Dirt Dog, and what most people don't realize is we also sell a line of robots for the military. Our PackBot model was used for the first time in cave clearing in Afghanistan and now is being used for bomb disposal over in Iraq. One of the neat new developments we have is we just put out a version of this with a bomb sniffing payload, so it can actually go out and find improvised explosive devices.
Lucy: Well, I've heard you speak about the robots over in Iraq, and it's very compelling to know that we can use technology like this to really go on these types of missions instead of our young men and our young women.
Helen: The robots allow a soldier to stay at a safe, standoff distance. He doesn't have to go into unnecessary danger.
Lucy: Right.
Helen: Our servicemen and women, you know, are exposed to a lot of danger when you send them to roadside bombs when a robot could do the job instead. We think that's really something that should be changed quickly, and it has changed very rapidly. Just two years ago they would suit up a soldier in a bomb suit and send them down range, and now you have to get permission to do that. The common operating procedure is to send a robot into the danger.
Larry: That sounds like iRobot is doing everything from saving backs in kitchens to saving lives in dangerous situations. Let me see if I can migrate to the entrepreneur part of you. What is it that made you become, or why are you, an entrepreneur?
Helen: I was deeply interested in making robots into an industry. People have been talking about robots. They have been in science fiction for decades and decades. Yet, when I started in this field I looked around and there were very few robots that people could actually purchase and could actually use. When I was at MIT, people worked on wonderful robot projects. It was really, really cool technology, but when the PhD got done or when the project ended, all of it would kind of stop, and then somebody would start a new project, potentially building on some of the results. But on the actual robot that was built, many times progress stopped. Just like in the computer industry, I believe it takes a company that can reinvest some of the profits back into the next generation and the next improvements on the products for the industry to really take off.
Lucy: Well, the definition that I carry in my head of true innovation is taking research and the types of projects you are talking about, Helen, and driving them out into the consumer space and into the mass market. That is what innovation is all about.
Larry: You bet. By the way, what is it about being an entrepreneur, what is it that makes you tick and turns you on as an entrepreneur?
Helen: Being an entrepreneur is creating something out of nothing. You know, when you start it, it's all consuming. It takes your whole focus. It is very compelling to me. I tend to be someone who, when they jump into something, they jump into it with absolutely full force, and it allowed me to learn so much along the way. Everything from how to hire people, how to apply for and win a military research contract, how to raise venture capital, how to set up a management structure and, very recently, how to take a company public.
Lucy: Helen, tell us, obviously, entrepreneurship makes you tick. You love to create things from nothing, and along the way as you chose this career path, who influenced you? What kind of mentors did you have?
Helen: I have had a lot of advisors who I could talk to about the different stages of the business, and that's been an incredible gift. That is one of the most valuable things you can give: the benefit of your own experience. Early on I was influenced by my dad having founded a company, so entrepreneurship was part of my culture growing up.
Larry: So, it's not genetic. It's part of the culture, right?
Helen: I believe that.
Larry: You, I'm sure, like all of us entrepreneurs ‑‑ you know, Pat and I, we have been in business together and entrepreneurs for over 30 years. There are a lot of bumps and things along the road. What would be some of the most challenging things that you have experienced?
Helen: Well, iRobot has been in business for 17 years, and it's a lot different company today than when we founded it. Early on, this was a bootstrap company, credit cards filled to the max.
Larry: So you made money right away?
Helen: Yeah.
Larry: You were profitable right away? Yeah.
Lucy: Like many of us.
Helen: No, we really had a bumpy beginning, because in part the technology wasn't ready yet. So we came up with a method to develop the technology and to develop business plans so when the opportunity was right we could capitalize on it.
Lucy: So, as we shift a little bit now toward the future entrepreneurs, if you were giving advice to people about entrepreneurship, young people, about the career path you have chosen being an entrepreneur, what would you tell them? What advice would you give them?
Helen: I would say, definitely do it, because it's probably one of the most rewarding career paths you can take. One of the most challenging, but one of the most rewarding. I would say very strongly, don't do it like we did it at iRobot. At iRobot, we didn't do it with a business plan. We didn't start with a real crisp idea of what these robots would be used for. We basically started with the future of the technology, and it happens to have worked for us, but it was a long haul in the early years. I think if I had it to do over again, it would be done a lot more efficiently.
Larry: When did you finally get the real management team put together?
Helen: In 1998 we decided to take venture capital for the first time. And that was a big decision, because that's what took it from being more of a lifestyle company, somewhat of a research lab where folks were building any kind of robot because they were passionate about it (some of them, quite frankly, cool), to a real business concern. You could almost consider the company a re‑start in 1998, when we took the first venture capital, which allowed us to invest in the management team and take it to the next level, and also to invest in our own product lines, rather than relying on government contracts coming in or strategic relationships with larger companies.
Larry: Well, you have been very passionate about iRobot and you've also been very humble in terms of what you have done, what you have been through. What are some of the characteristics that maybe have been a benefit to you in becoming a successful entrepreneur?
Helen: I'd say the biggest one is persistence. There will always be speed bumps along the way. And generally being able to say, OK, I might not have the solution to this problem right now, but I know that there's a way. Either by talking to people, getting advice, by brainstorming with people, by being creative, by thinking out of the box, there is always a way to get through any problem that presents itself. It takes persistence to do that, because you will get knocked down quite a few times along the road. Being able to pick yourself up, dust off and say, I learned from that experience, I won't do it again. We don't look at anything at iRobot as failed. This got us to the next step, and the next step was different, but they were all stepping‑stones to where we are today. And many of them were necessary.
Larry: I have heard that persistence is omnipotence.
Lucy: Sometimes we refer to it as relentlessness.
Larry: Oh, is that what that is?
Lucy: Yes. I also have to say something about Helen here, just as a sidebar: Helen gives one of the best talks on robotics I have ever seen. Helen, your talk at the Grace Hopper Conference was outrageously good.
Helen: Oh, well, I appreciate that. One of the things that I would like for folks listening to know is that it is important to be able to grab the microphone and get your message across. My personal background is: I was extremely shy, terribly afraid of public speaking. You know, there are reports that some people would rather do anything else than get up in front of a group of people and speak. I was one of those people. It doesn't come naturally to me. But I recognized that it was important in getting the message of the company across. I really worked on how to improve, and just by taking speaking opportunities I got better and better at it. Which doesn't mean I will ever be a natural who just really, really wants to jump out and do it. But if I can do it, anybody can learn to be a better public speaker, so they can take advantage of the opportunities it provides to get their message out.
Larry: It might not be natural, but you certainly are unique and passionate.
Lucy: The best talk I've heard, a mix of computer science and business and humor. It's wonderful.
Helen: That is very nice of you. It means a lot, because I did have to work harder than people who are naturals, the ones who say, "Yes, I want the mike!"
Lucy: One of the things that our listeners will be interested in: the entrepreneurial life is a tough life. It is a lot of work, and yet it is important to bring balance between our personal lives and our professional lives. So what kinds of hints do you have to pass along?
Helen: I don't think I'm a shining example of balance in my life, but I can say the philosophy I've always had is: work hard, play hard. So, when I do take off from iRobot, being able to go out snowboarding, being able to tight‑board, being able to go scuba diving. I'm just learning how to tight‑board. I have a goal to learn one new sport each year, because it's good to take up something new, and to me I like doing it in the athletic arena.
Lucy: Well, it sounds like fun to me.
Larry: Lucy likes to go out there and jog every day after...
Lucy: Well, you're right, I'm not that good at it either, but I still get out there.
Larry: I can't help but ask this. You know, you have had a very exciting and challenging ‑‑ and obviously with the persistence and the talent ‑‑ you really accomplished a great deal. I know you want to accomplish a great deal more with iRobot. What's next for you?
Helen: Well, the challenges that iRobot faces today are different than when we were a start-up company. Now we have over 350 people. In 2006 we did just about $189 million in revenues, and now it's about making the organization click, to function as a team, and making sure that things work like clockwork at the organization, while still keeping that innovative flair, so you can get the next generation of products into the pipeline.
Lucy: So, I have to ask, just because I love iRobot so much, what's the next great product? Can you spill the beans?
Helen: I can't tell you what the next consumer robot products are, but on the military side, we have a hugely exciting robot that can run over 12 miles an hour, that can carry a soldier's pack. It's got a manipulator on it that can pick up a Howitzer shell. That thing picked me up the other day.
Lucy: Oh.
Larry: Wow.
Helen: We're very excited to get that type of capability also into the hands of our soldiers.
Lucy: Wow, that's pretty exciting.
Larry: Nothing like getting picked up. Boy, that's for sure.
Lucy: I don't know what I would do if a robot picked me up, but I guess one of these days maybe we'll experience it ‑‑ we'll get you to bring that to one of our meetings, Helen. That would be very cool.
Larry: I'd love a picture of that for the website.
Lucy: Yeah, thank you. OK.
Larry: Helen, I want to thank you so much for joining us. We are so excited about this program. When we get to talk to people like you, with your background and your experience, it makes it just that much more exciting and motivating to a number of young people.
Helen: Well, I appreciate it.
Lucy: Well, and we want everybody to know where they can find these podcasts. They are accessible on the NCWIT website at www.NCWIT.org, and along with the podcasts is information about entrepreneurism and how people can be more involved as entrepreneurs, and also get resources on the web and from other organizations, should they be interested.
Larry: Yes, and thank you for all of the great hints and, probably more than that, some really golden nuggets in there. One that's sticking out in my mind right now is the mass‑market adoption. I guess that is what we all want to charge for.
Helen: It's not where we started out, but it is where we're fully focused.
Lucy: Well, thank you very much.
Helen: OK, thank you. Have a good one.

Series: Entrepreneurial Heroes
Interviewee: Helen Greiner
Interview Summary: Helen Greiner is co-founder and Chairman of the Board of iRobot Corp., maker of the Roomba® Vacuuming Robot (over 2M units sold) and the iRobot PackBot® Tactical Mobile Robot, which deactivates mines in Iraq and Afghanistan.
Release Date: June 11, 2007
Interview Subject: Helen Greiner
Interviewer(s): Lucy Sanders, Larry Nelson
Duration: 15:30
Artificial intelligence does not always need gigantic computers. The researchers from the Artificial Intelligence Lab at the University of Zurich showed off their robot zoo in the Lichthof.