AI&U - Sharad Gandhi and Christian Ehl


Podcast by AI&U - Sharad Gandhi and Christian Ehl



• Latest episode: Mar 17, 2019
• New episodes: infrequent
• Average duration: 19m
• Episodes: 17



    Latest episodes from AI&U - Sharad Gandhi and Christian Ehl

    AI&U Episode 17 AI and Sensors

Mar 17, 2019 · 15:40


AI and sensors are a perfect match today: one supplies the data, the other acts on it. Here is what you need to know...

    AI&U Episode 16 - Healthcare for Free

Feb 17, 2019 · 22:59


In the last 25 years, computers and the Internet have made worldwide communication and information access (virtually) free and instantaneous for everyone on the planet. Similarly, I predict that AI will make expert medical diagnosis, another critical need for everyone, free and instantaneous. The next step in healthcare is a personalized treatment plan. AI has shown great promise in formulating treatment plans based on a patient's medical data and millions of cases with comparable histories.

    AI&U Episode 15 - AI in B2B and B2C

Dec 2, 2018 · 15:52


AI is now used in both B2C and B2B applications. What works best, what are some examples and differences, and where should you focus?

    AI&U Episode 14 - The World's First Podcast With a Robot

Nov 25, 2018 · 19:35


    We spent an afternoon with David Jenkins from eXXcellent Solutions (https://www.exxcellent.de) talking to the robot Pepper. We interviewed him (or her) on our podcast on Artificial Intelligence. Meet Pepper!

    AI&U Episode 13 - AI and Health Care

Nov 18, 2018 · 21:58


AI in Healthcare. 100 years ago, medical knowledge doubled every 150 years; now it doubles every 73 days. 800,000 journal articles are published every year, and the human genome has 3 billion data points. We need AI just to keep up. http://fortune.com/2018/08/22/toby-cosgrove-cleveland-clinic-brainstorm-health/

Some big healthcare challenges with a promise of benefiting from AI and automation:
● Diagnosis and treatment plan
  ○ Fast, with expert-level accuracy
  ○ Economical and accessible for all
● Remote patient monitoring with emergency detection
● Elderly care
● Detection of new infections and their spread
● Precision medicine
● Clinical trial eligibility assessment
● Drug discovery
  ○ Faster and cheaper

Healthcare costs have been rising steeply for many years, coupled with a severe shortage of professionals such as doctors, nurses, and caretakers in most developed countries. Chronic illnesses, as opposed to infectious diseases, need long-term health management for a population with a growing number of elderly people. The most important need in healthcare today is quick, cheap, accurate, readily accessible, expert-level diagnosis of the most common illnesses. Using AI and robotic technology to automate diagnosis and medical procedures seems the only viable way to satisfy these healthcare needs.

Details on some new methods of diagnosis leveraging AI:

Skin cancer (a training sketch follows this episode summary):
● http://news.stanford.edu/2017/01/25/artificial-intelligence-used-identify-skin-cancer/
● https://www.theverge.com/2017/1/26/14396500/ai-skin-cancer-detection-stanford-university
● Google's image recognition AI was used as a starting point
● 130,000 validated samples were used for training the AI
● Covers over 2,000 skin diseases, including melanoma
● 2,000 gold-standard samples were used for testing the AI against 21 expert dermatologists; the AI achieved 91% accuracy
● With melanomas, for example, the human dermatologists correctly identified 95 percent of malignant lesions and 76 percent of benign moles. In the same tests, the AI was correct 96 percent of the time for the malignant samples and 90 percent of the time for harmless lesions.
● 5.4 million new cases of skin cancer are diagnosed in the US every year
● Chances of survival are 97% with early diagnosis and treatment but fall to 14% within 5 years.

AI to predict cardiovascular risks from a retina image:
• Google scientists trained AI with a medical dataset and eye scans of nearly 300,000 patients.
• The rear interior wall of the eye (the fundus) is chock-full of blood vessels that reflect the body's overall health.
• The AI can accurately deduce an individual's age, sex, blood pressure, BMI, diabetic status, and whether they smoke.
• The AI predicts the risk of suffering a major cardiac event.
• https://www.theverge.com/2018/2/19/17027902/google-verily-ai-algorithm-eye-scan-heart-disease-cardiovascular-risk
• https://www.nature.com/articles/s41551-018-0195-0#Fig2
• https://www.nature.com/articles/s41551-018-0195-0/figures/2

AI to diagnose Parkinson's in just 3 minutes:
• At King's College Hospital, London, with Tencent and Medopad
• Patients need to wear no sensors or devices; no hospital visit is needed
• Motor function assessment normally takes half an hour with a professional
• 10 million people worldwide live with Parkinson's
• https://www.bbc.com/news/technology-45760649/

AI to spot early signs of Alzheimer's before your family does:
• Wireless tracking of thousands of movements every day, looking for subtle changes in behavior and sleep patterns:
  ○ Sleeping or fallen
  ○ Gait speed
  ○ Sleep patterns
  ○ Location
  ○ Breathing pattern
• MIT's machine-learning AI predicts the symptoms of Alzheimer's, which are not obvious in the early stages.
• https://www.technologyreview.com/s/609236/ai-can-spot-signs-of-alzheimers-before-your-family-does
• Wireless tracking with AI: http://news.mit.edu/2018/artificial-intelligence-senses-people-through-walls-0612
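The skin cancer study above follows a common pattern: start from an image-recognition network pretrained on generic photos and fine-tune it on labeled lesion images. Below is a minimal sketch of that pattern in PyTorch, not the study's actual code; the dataset folder, the class setup, and the single training pass are illustrative placeholders.

```python
# Minimal transfer-learning sketch (PyTorch): fine-tune a pretrained
# image-recognition model on labeled skin-lesion images.
# "data/skin_lesions" and the folder layout are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: data/skin_lesions/<class_name>/<image>.jpg
train_set = datasets.ImageFolder("data/skin_lesions", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pretrained on generic images (the "starting point"
# idea), then replace the final layer for the lesion classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one pass shown for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The design choice is reuse: the pretrained layers already know general visual features, so only a modest amount of labeled medical data is needed to adapt the final layers to the new task.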

    AI&U Episode 12 - Impact of AI on People’s Daily Life

Oct 14, 2018 · 17:21


Positive:
• All products and services improve with automation
  o More abundant
  o Easier to use
  o More personalized
  o Faster
  o Cheaper
• "Experts at your service 24x7" via your smartphone and other devices
  o Finance
  o Medical
  o Law
  o Taxation
  o …

Negative:
AI can have a major psychological and sociological impact on individuals and society, leading to disorientation (for some) due to changes in:
• Privacy
• Security
• Job loss and insecurity
• The burden of learning new ways of doing things
• Inequality
• Loss of meaning and purpose

    AI&U Episode 11 - Important Questions for Business

Oct 1, 2018 · 19:59


What are the most pressing questions that businesses have about AI? We give a lot of presentations; here is what we get asked and what you should know...

    AI&U Episode 10 The AI Canvas

Sep 23, 2018 · 21:09


    AI&U Episode 10 The AI Canvas by AI&U - Sharad Gandhi and Christian Ehl

    AI&U Episode 9 Value of AI for Business

Sep 16, 2018 · 15:54


Can you think of an industry that does not benefit from intelligence? We believe that AI can enhance all business areas and will fundamentally impact most industries and businesses. The reason is simple: artificial intelligence allows you to make better decisions for both simple and very complex tasks. It does so by understanding and evaluating the parameters and factors that influence a decision. It can leverage complete data sets, better understand the influencing factors, and produce answers that are more reliable than humans alone can. We are seeing the evolution of new AI-based tools and services that can help organizations function more competitively, optimize core business processes, and create better products and services for customers. Continuous technological advancement in computing power, connectivity, and neural networks will fuel the development of more and better AI that can be leveraged in business. In this podcast we discuss how to approach the value of AI for business.

    AI&U Episode 8 AI and Human in the Loop - Guest Sara Wasif of Pactera

Sep 6, 2018 · 21:27


The quality of an AI is defined by the data that trains it. Any bias in the training data can affect the training of the ML algorithms and ultimately the accuracy of the AI. With a human in the loop for algorithm training, tuning, and testing, in addition to ensuring that the AI works as intended, the human also serves as a safeguard against bias, since human judges can make ethical, cultural, and emotional judgements that an AI should not be tasked with or trusted to make.

However, this introduces another challenge: implicit bias. The biases that people involved in the development of an AI may have could be inadvertently transferred to the decision making of the AI they helped develop or train. What is considered fair and acceptable in a society also changes over time. Predictive models learning from historical data pose a serious problem when they are used in high-stakes situations. AI-powered systems can amplify biases in society, not just in individuals. A sophisticated AI-powered system drawing from a historical database for its decision making may be blind to bias caused by patterns reflecting centuries-old discrimination. Data that seems neutral may have correlations embedded in it that could lead deep learning programs to make decisions that are biased against minorities or under-represented groups.

The sampling-bias problem can cause image recognition programs to "ignore" under-represented groups in the data. Popular data sets used to train image recognition AI have included gender bias, where a picture of a man cooking would be misidentified as a woman, and race bias, where lighter-skinned candidates were deemed more beautiful because they represented the majority. Based on the common characteristics of the majority of the highest-ranking corporate executives, an AI program tasked with picking out perfect candidates for high-ranking roles may consider white males a better fit. Oversampling can be used to mitigate some of this challenge by assigning heavier statistical weights to under-represented data (see the sketch at the end of this description). Often, the members of a dominant majority are oblivious to the experiences of the other group. To fight bias, we need more diversity. Diversity among the humans in the loop of an AI will ensure that no viewpoint is ignored. And as AI amplifies and brings to light more intrinsic biases in our society, we need to start expecting better from our society, just as we do from AI.

A philosophical question (and perhaps a slight digression from the topic at hand): do we expect AI to behave more ethically than most humans? If the task of an AI is to "do what humans can do", can we humans "oversample" in our minds when we are making decisions? I belong to a minority group. If I want an AI program to predict my professional success, would I not want it to take into account all the racial and gender biases that will affect my career in the real world? Maybe over time social values evolve to a point where human decisions and actions become gender neutral and bias-free, but for now, do I expect more from a machine than I do from the society I currently live in? Should an AI's decisions represent where society stands today or where it should stand in an ideal world?

Thank you, Sara, for being the guest on this show! If you have any questions for Sara, please contact her at sara.wasif@pactera.com.
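As a rough illustration of the weighting idea mentioned above, the sketch below computes balanced class weights so that under-represented examples count more heavily during training. The data, labels, and model choice are invented placeholders; real bias mitigation is considerably more involved.

```python
# Minimal sketch: give under-represented classes heavier statistical weight.
# Features and labels below are synthetic, purely for illustration.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from sklearn.linear_model import LogisticRegression

X = np.random.rand(1000, 5)                 # stand-in feature matrix
y = np.array([0] * 950 + [1] * 50)          # heavily imbalanced labels

# 'balanced' weights each class inversely to its frequency, so the 50
# minority examples carry as much total weight as the 950 majority ones.
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y), y=y)
print(dict(zip(np.unique(y), weights)))     # e.g. {0: ~0.53, 1: ~10.0}

# Most scikit-learn estimators accept the same idea directly:
clf = LogisticRegression(class_weight="balanced").fit(X, y)
```

Weighting only addresses under-representation in the sample; it does not fix labels that encode historically biased decisions, which is where diverse humans in the loop matter.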

    AI&U Episode 7 Human Level AI Conference Impressions 2018

Aug 30, 2018 · 22:08


Human Level AI 2018 in Prague has been the best AI conference for me in a while. Here is why (unsorted):
• It reminded me that the advantage of the first mover is huge, and so is the responsibility.
• We will need a new architecture to reach human-level skills; deep learning won't cut it.
• It pointed to the end of programming, because all programming will become obsolete once the machine reaches general AI and can design better algorithms than humans.
• It reiterated the limits of deep learning: it can identify a tree better, but it does not grasp the concept of one.
• Lots of basic training will be needed for machines to understand our world and values. We need an AI school!
• Nice advances in disentangled representations, as we decompose learnings and put them together for new tasks.
• Open-ended processes like evolution are the place to look for the magic of artificial general intelligence; divergence is an opportunity, and so is creativity. I love the tree of life, and I will love the AI tree of life.
• Some things we only find if we're not looking for them. Inspiration for new algorithms can come from the earth's open-ended creativity.
• AI machines need to be able to adapt their behavior to perform better (no surprise, but it feels scary).
• All components for artificial general intelligence are "almost" there; a few insights are missing, which may come from a massive effort or a lucky invention.
• Maybe "not" achieving AI is the danger for humanity (increasing complexity to manage, environmental changes, too much priority for short-term gains).
• Institutions (like the UN) are still behind. We must think ahead much more and much further, and accelerate the pace; once AGI is here, it will be too late…
Last but not least, a personal note: there is a great opportunity to build a real intelligent caring agent, and this is what I will DO. Prague was great too; I will be back next year for the entire conference… Thanks to the organizers, Marek Rosa (GoodAI), Brenden Lake (NYU Data Science), Tomas Mikolov (Facebook), Irina Higgins (DeepMind), Ryota Kanai (ARAYA), Ken Stanley (Uber AI Labs), Ben Goertzel (SingularityNET), Pavel Kordik (Czech Tech Uni), John Langford (Microsoft), and Irakli Beridze (UN), and all the others involved. I learned a great deal and enjoyed the conversations; hope to be in touch…

    AI&U Episode 6 AI and Algorithms

Aug 16, 2018 · 20:35


Algorithms, Machine Learning, and Deep Learning

Algorithm
Formal definition: An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions. A computer program can be viewed as an elaborate algorithm. Or simply: it is a set of steps to accomplish a task.
Our view: An algorithm is a model (mathematically, a target function) that best correlates (predicts) output behavior for all possible input combinations.
====
Machine Learning
Formal definition: Machine learning is the process of teaching a computer to carry out a task, rather than programming it how to carry that task out step by step. At the end of training, a machine-learning system will be able to make accurate predictions for new, previously unseen input data.
Our view: Machine learning is a method of arriving at the right algorithm for a machine, one that most accurately correlates outputs with all combinations of input data. It uses labeled data inputs to check, tune, and improve its accuracy.
====
Deep Learning (a form of ML using neural networks)
The AI develops an algorithm on its own from the data inputs during the training phase; the machine literally learns. The machine-developed algorithm is a predictive algorithm that "labels" what the input data means. It is a snapshot, not a logical sequence of steps.
Machine learning using algorithms:
• For a general learning task, we would like to make predictions in the future (Y) given new examples of input variables (X). We don't know what the function (f) looks like or what form it takes. If we did, we would use it directly and would not need to learn it from data using machine learning algorithms.
• The most common type of machine learning is to learn the mapping Y = f(X) to make predictions of Y for new X. This is called predictive modeling or predictive analytics, and the goal is to make the most accurate predictions possible (a minimal sketch follows this summary).
Artificial intelligence: intelligence exhibited by machines for tasks that would typically require human intelligence.
====
Machine learning is typically split into:
• Supervised learning, where the computer learns by example from labeled data.
• Unsupervised learning, where the computer groups similar data and pinpoints anomalies.
Neural networks are mathematical models whose structure is loosely inspired by that of the brain. All neural networks have an input layer, where the initial data is fed in, and an output layer, which generates the final prediction. In a deep neural network there are multiple "hidden layers" of neurons between the input and output layers, each feeding data into the next. Hence the "deep" in "deep learning" and "deep neural networks": it refers to the large number of hidden layers, typically more than three, at the heart of these networks.
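To make the Y = f(X) framing concrete, here is a minimal sketch of supervised learning with a small network containing several hidden layers (hence "deep"). The synthetic data and the particular hidden function are assumptions chosen purely for illustration.

```python
# Minimal sketch of learning the mapping Y = f(X) from labeled examples,
# using a small neural network with several hidden layers ("deep").
# The data here is synthetic and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))                # input variables
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)   # the unknown "f" to learn

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Three hidden layers between the input and output layers -> a "deep" network.
model = MLPClassifier(hidden_layer_sizes=(32, 32, 32),
                      max_iter=2000, random_state=0)
model.fit(X_train, y_train)                            # supervised learning from labels

print("accuracy on new X:", model.score(X_test, y_test))
```

The network is never told f; it approximates it from labeled (X, Y) pairs and is then scored on inputs it has not seen, which is what "making accurate predictions for new X" means in practice.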

    AI&U Episode 5 Chat Bots based on AI

Jul 29, 2018 · 15:17


AI enables significant new forms of interaction between people and machines. Chat bots, using text and voice, are a particularly visible new way of interacting. AI makes chat bots somewhat smart, enabling new use cases and significant cost savings in areas like sales and customer service. In this podcast we discuss some of the important elements you need to know.
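As a hypothetical illustration of what "somewhat smart" can mean in practice, the sketch below routes a free-form customer question to the closest known intent by text similarity rather than by exact keywords. The intents, phrasings, and routing function are invented for this example.

```python
# Hypothetical minimal chat bot sketch: match a question to the closest
# known intent using text similarity. All intents are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

intents = {
    "order_status":  "where is my order when will it arrive shipping status",
    "refund":        "want my money back return the product refund",
    "opening_hours": "when are you open opening hours store times",
}

vectorizer = TfidfVectorizer()
intent_vectors = vectorizer.fit_transform(list(intents.values()))

def answer(question: str) -> str:
    """Pick the intent whose example phrasing is most similar to the question."""
    q = vectorizer.transform([question])
    scores = cosine_similarity(q, intent_vectors)[0]
    best = list(intents)[scores.argmax()]
    return f"(routing to handler for intent '{best}')"

print(answer("When will my order arrive?"))   # -> order_status
```

Production bots layer much more on top (entity extraction, dialogue state, hand-off to humans), but the core idea of mapping loose phrasing to a known intent is the same.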

    AI&U Episode 4 Important Quotes

Jul 29, 2018 · 23:20


Jeff Bezos (Amazon CEO):
"It is a renaissance, it is a golden age. We are now solving problems with machine learning and artificial intelligence that were … in the realm of science fiction for the last several decades. And natural language understanding, machine vision problems, it really is an amazing renaissance."
"Artificial Intelligence and Machine Learning ... will empower and improve every business, every government organization, every philanthropy … there is not an institution in the world that cannot be improved with Machine Learning."
"[A] lot of the value that we're getting from machine learning is actually happening beneath the surface. It is things like improved search results. Improved product recommendations for customers. Improved forecasting for inventory management. Literally hundreds of other things beneath the surface."

Sundar Pichai (Google CEO):
"AI holds the potential for some of the biggest advances we are going to see."
"In an 'AI first' world we are rethinking all our products and applying Machine Learning and AI to solve user problems. We are doing that across every one of our products."
"AI is one of the most important things humanity is working on. It is more profound than, (I dunno,) electricity or fire."
"Well, it kills people, too," Pichai says of fire. "We have learned to harness fire for the benefits of humanity but we had to overcome its downsides too. So my point is, AI is really important, but we have to be concerned about it."

Elon Musk (Tesla, SpaceX CEO):
"AI is a fundamental risk to the existence of human civilization."
"AI will be the best or worst thing ever for humanity."
"If one company or small group of people manages to develop god-like superintelligence, they could take over the world."
"We are rapidly heading towards digital superintelligence that far exceeds any human, I think it's very obvious."
"We have five years. I think digital superintelligence will happen in my lifetime, 100%."
"AI doesn't have to be evil to destroy humanity — if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings. It's just like if we're building a road and an anthill happens to be in the way, we don't hate ants, we're just building a road, and so goodbye anthill."
"Governments don't need to follow normal laws. They will obtain AI developed by companies at gunpoint, if necessary."

Stephen Hawking (Physicist, Futurologist):
"The development of full artificial intelligence could spell the end of the human race."
"It would take off on its own, and re-design itself at an ever increasing rate."
"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
"Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it."
"Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."
"I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance."

    AI&U Episode 3 Bias in AI decisions

Jul 24, 2018 · 15:43


We expect decisions by AI to be neutral, without any bias, because a machine has no emotions, opinions, or a personal agenda. However, AI decisions do carry a bias. Why is that? Simple: AI inherits the bias from the examples used for its training. A Deep Learning Neural Network (DLNN) develops its decision-making algorithm during supervised training from the validated examples it is given. The opinions and biases of the real people whose decisions are represented in those examples are integrated into them, and the AI inherits that integrated bias.

The validated examples used for DLNN learning are like pages from history textbooks. If the history textbooks are biased, we acquire a biased view of the world, which makes our decisions biased. The same happens with AI. By itself, bias is neither good nor bad; it is just a natural consequence of learning. All humans have biases which develop during a lifetime of experiences; they represent the integration of our personal and cultural value systems and opinions.

The bias of AI systems can be reduced by using training examples that represent a wide diversity of opinions and biases. Self-learning AI systems learn purely from their own observations and not from examples of human decisions; such systems are currently only used for games, not for real-life situations.

Bias is defined as: inclination or prejudice for or against one person or group, especially in a way considered to be unfair.
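A tiny, deliberately artificial sketch of the inheritance mechanism described above: a model trained on biased historical decisions reproduces that bias for new, equally qualified cases. All data here is synthetic and the scenario is hypothetical.

```python
# Sketch: bias in training examples is inherited by the model.
# The "historical decisions" are synthetic and deliberately biased:
# group B was approved far less often at the same qualification level.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
qualification = rng.uniform(0, 1, n)          # the legitimate signal
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B

# Biased historical labels: group B needed a much higher bar to be approved.
threshold = np.where(group == 0, 0.5, 0.8)
approved = (qualification > threshold).astype(int)

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, approved)  # learns from biased history

# Two equally qualified applicants, differing only in group membership:
print(model.predict_proba([[0.7, 0], [0.7, 1]])[:, 1])  # group B scores much lower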

    AI&U Episode 2 Data, the fuel for Artificial Intelligence

Jul 16, 2018 · 22:41


Software is eating the world, and AI is eating software. And data is the new oil that fuels the success of Artificial Intelligence. The data is used to train the AI, but obtaining the data and preparing it correctly is difficult. We look at some examples, take you through the various steps of preparing the data, and talk about bias in data, which results in bias in the AI. Excellent data quality is as critical for the success of your AI solution as software quality is for your mission-critical programs. Acquiring data skills is a must-have on your AI journey, and those skills are needed to develop ethical AI solutions.
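For orientation, here is a hedged sketch of typical data-preparation steps (cleaning, encoding, splitting) before any training happens. The file name, column names, and label are placeholders, not a real dataset.

```python
# Illustrative data-preparation sketch: load, clean, encode, split.
# "customer_records.csv", "churned", "age", and "country" are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("customer_records.csv")           # hypothetical raw data

# Clean: drop duplicates and rows missing the label, fill missing numerics.
df = df.drop_duplicates()
df = df.dropna(subset=["churned"])                 # "churned" is the label column
df["age"] = df["age"].fillna(df["age"].median())

# Encode a categorical column so the model can use it.
df = pd.get_dummies(df, columns=["country"])

# Split into features/label and hold out a test set to measure generalization.
X = df.drop(columns=["churned"])
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
```

In practice this stage, together with checking where the data came from and whom it represents, usually takes far longer than training the model itself.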

    AI&U Episode 1 AI black box and transparency of AI decisions

Jun 24, 2018 · 17:57


A well-trained Deep Learning Neural Network (DLNN, one form of AI) produces fairly accurate and consistent decisions. However, it is not possible to extract the rationale or the causation behind any specific decision or prediction. That is what we call a black box. The decision-making algorithm is developed out of the training examples and contained within the DLNN as billions of parameters, which cannot retrospectively be expressed as a logical sequence.

DL decisions result from an integration of all the examples and cases used in training the DLNN; each example tunes the parameters (weights and biases) of the neurons in the network. This is similar to human intuition, which is the result of integrating all the perceptions we have experienced in a given field. Our intuition is also a black box: not rational, but experiential. When an expert gives us an explanation for his decision (transparency), he is essentially trying to correlate a rational explanation with an intuitive decision based on the integration of all his experience, trying to make it transparent. The rational interpretation is generally quite approximate, touching just a few of the data points involved in the intuitive decision process.

Our trust in systems results not just from transparency, but also from a system's consistency in delivering good decisions. Hence, over time, DL AI may win our trust, even without transparency.
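To illustrate the black-box point, the sketch below trains a small network on synthetic data and then inspects what it has "learned": thousands of numeric parameters with no human-readable rationale attached. The data and architecture are arbitrary assumptions for illustration.

```python
# Minimal illustration of the "black box": after training, the network's
# decision logic lives in large numeric weight arrays, not readable rules.
# The data is synthetic and only serves to produce a trained model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] * X[:, 3] > 0).astype(int)            # some hidden rule to learn

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

# The "algorithm" the network learned is nothing but these parameters:
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print("learned parameters:", n_params)             # thousands of plain numbers
print("first weights:", model.coefs_[0][0, :5])    # no rationale, just values
```

Full-scale networks push this from thousands of parameters into the millions or billions, which is why post-hoc explanation is an active research area rather than a solved problem.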
