Voice-first technology is becoming the operating system of healthcare, and it is poised to disrupt the way we experience everything in health and medicine. We are entering the era of ambient computing: smart speakers around us, ready to carry out our commands through the most nat…
Voice Technology for Physician Burnout See acast.com/privacy for privacy and opt-out information.
In this episode, Teri welcomes Eric Sauvé, the Chief of Product and User Experience at Speebly. Eric has been a serial entrepreneur for years, having started and built numerous startups, some of which were acquired by larger companies. He developed an interest in voice technology along the way and ended up co-founding Speebly, a voice assistant program that can be used across multiple platforms.

Key Points From Eric!
- How they are using voice technology (Siri) and the Apple Watch to help with handwashing during the current COVID-19 pandemic.

Focusing on Siri
- Their initial inspiration was Siri. The fact that Siri returns a list of web results when she can't answer a question gave him the idea of creating a seamless hand-off from Siri to different web properties.
- That means a user can continue searching using their voice.
- They have also worked with Google Assistant and Amazon Alexa, but they focused on Siri because it has far more users.

Inspiration Behind Speebly
- Their inspiration is the observation that when people are doing anything, they will either type or use their voice, and voice is often the better option, especially where there is a lot of text input.
- Siri is a closed ecosystem compared to Alexa and Google Assistant on the smart speaker side, but it has a huge user base and an app ecosystem of third-party developers. This is why they focus more on Siri.
- Their main aim is to let anyone with an app take advantage of voice search to drive traffic to that app, and to enable a seamless handoff where a user asks Siri a question and then keeps talking to the app on their phone.
- They released a software development kit (SDK) that app developers can add to their iPhone or Android projects to serve as the talking interface of their app.
- The toolkit is also available for Apple watchOS, so people can use it without their phones.

Helping With the Pandemic
- They have been aiming to help people understand that they can use voice on smartphones and the Apple Watch.
- They've been working on an in-house app called Handwash Circles to encourage people not only to wash their hands but to wash them long enough.
- It's a touchless, voice-first handwash timer. A user can say, "Hey Siri, start handwash," and the app starts a countdown timer for the recommended number of seconds one is supposed to wash their hands.
- They plan to add accelerometer and gyroscope features so the app can determine whether someone has done a good job washing their hands.
- The app's community feature enables circles of people, for example a person's workplace, to access data on their handwashing activity.
- Studies have shown that the quality of hand washing improves when people carry devices that monitor it.
- They protect people's data privacy in several ways.
- They are currently onboarding their first 10 organizations interested in implementing the app at their workplaces.
- People can also sign up to be beta testers.

Links and Resources in this Episode
- www.Speebly.com/circle
- The Comprehensive Flash Briefing Formula Course
- Voice Technology in Healthcare Book
- www.TheVoiceDen.com
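As an illustration only (this is not Speebly's actual implementation), the core of a voice-triggered handwash timer is a simple countdown; the CDC recommends washing for at least 20 seconds:

```python
import time

RECOMMENDED_SECONDS = 20  # CDC guidance: wash hands for at least 20 seconds


def handwash_timer(duration=RECOMMENDED_SECONDS, tick=print, sleep=time.sleep):
    """Count down the recommended handwash time, announcing each second.

    In a real voice app, `tick` would be a text-to-speech callback and the
    countdown would be triggered by a voice shortcut ("Hey Siri, start
    handwash"); both names here are hypothetical. `sleep` is injectable so
    the logic can be tested without real delays.
    """
    for remaining in range(duration, 0, -1):
        tick(f"{remaining} seconds left")
        sleep(1)
    tick("Done! Great job washing your hands.")
```

The accelerometer/gyroscope scoring mentioned above would sit on top of a loop like this, sampling motion data during each tick.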
In this episode, Teri welcomes Heather Utzig, the Co-Founder and CEO at Pragmatic Voice, a tech innovation company that combines big data, analytics, tech, and creativity to drive businesses. Heather has an extensive background in healthcare, having worked for companies like Johnson & Johnson and Eli Lilly & Co with her efforts focused on the rapid growth of their sales and sales teams. She was responsible for managing over 130 field sales representatives and sales managers in the area of Brain Health and Sleep, and her team won the most awards for sales success. She was awarded the Summit Award for outstanding leadership. She has also been a successful entrepreneur and has helped many people launch businesses in catering, construction, healthcare, and other fields. She has owned and sold successful businesses and has worked on several technology platforms and implementation projects with companies over the past 10 years.

Key Points From Heather!
- The voice applications they have been developing at Pragmatic Voice, specifically the ones geared towards helping surgeons keep track of instruments in the operating room, and much more.

Her Introduction Into Voice
- She was developing a technology with one of her companies and requested that voice be built into it because it was around medical instrumentation in general (they worked with medical instrument service providers and were looking at how to prevent infections spread by touching the instruments, so voice would make many of those processes hands-free and mobile).
- In the process of having that voice application developed, she met her co-founder and learned a lot about voice from him.

What They're Doing in the Healthcare Space
- With her healthcare background, she has always considered how voice can be applied to solve problems in healthcare.
- The amount of human connection in healthcare, especially in doctor-patient interactions, makes voice crucial to delivering healthcare more effectively.
- Pragmatic works with healthcare companies, facilities, and even physicians to help them bring their applications into voice, and advises them on how that relates to HIPAA (privacy) and other areas. They have developed several voice applications along those lines.
- One of those applications is Instrument Voice, which works inside a hospital, surgery center, or doctor's office where instrumentation needs to be repaired, maintained, sterilized, or logged.
- Anyone working with the instruments in a healthcare setting can look at an instrument's history, ask questions, pull up manuals, watch videos, and even request repairs through the voice app. Pragmatic is streamlining that whole process to make it easier for healthcare providers.
- They also have Instrument Wiki, an application that enables doctors, hospitals, and manufacturers to collaborate on information and help each other work with the instruments and assets in the hospital.
- The applications are built on Amazon Alexa and Google Assistant, with their own proprietary open-source database technology.
- Most hospitals have had trouble unifying their biomed and sterilization departments, and Pragmatic's applications, because of their ease of use, can help bring hospital departments together to work more easily.

Their Presence
- They have been working with some hospitals in New York, refining several things, with plans to go to full-scale market in the next month.

Links and Resources in this Episode
- Pragmatic Voice Website
- Instrument Voice
- The Comprehensive Flash Briefing Formula Course
- Voice Technology in Healthcare Book
- www.TheVoiceDen.com
In this episode, Teri welcomes Ilana Meir, a voice designer and mentor in the voice technology space. Ilana is a conversational interface designer and one of the leading experts in Voice User Interface (VUI) design, specifically at the intersection of voice technology and health. Using her knowledge and experience, she thinks critically about the future of voice design in ways only a few industry experts do, and she encourages her students to do the same. Ilana contributed a chapter to the recently released book, Voice Technology in Healthcare.

Key Points From Ilana!
- Her expertise in Voice User Interface (VUI) design and some of the tips she can share.
- How to design a great voice experience.

Getting Into Voice Design
- She came into the field of voice design from a prior background in anthropology, psychology, and marketing.
- While working in marketing she badly wanted to transition into product design, but she found herself falling into voice design, which blended perfectly with her background in strategic communications, her creative thinking, and her vocal ability as a singer.
- Ilana thinks the landscape of voice design is gradually shifting relative to how it has been historically.
- Voice design is now attracting a variety of people from different fields, such as interaction design.

Importance of Voice User Interface Design
- She considers design the last-mile logistics: it helps with organization and ensures everything is in the right condition for patients.

The Voice Design Framework
- In her view, the first step in voice design is doing the right research and building a solid understanding of both the stakeholder's side and the patient's side.
- On the stakeholder's side, the keys are thinking about their customer base, knowing how they are trying to forge relationships with their patients, the legal considerations, understanding the kind of technology available, and understanding the downstream effects that might come along the way.
- On the patient's side, you need to understand how they are receiving the interaction so you can package it well, and think about the patient's day-to-day interactions so you know who is affected.
- Voice design in healthcare should be treated as strategic communication. Every strategic initiative gets a lot of effort and meetings, and the same should apply when designing a voice experience.

Best Practices in Designing for the Patient
- She advocates for a culture of participatory medicine, achieved by creating a dynamic setup where patients and doctors are equal partners in healthcare delivery.
- How a computer system will communicate with patients is the second consideration, and the focus should be on questions and related elements.
- When it comes to building rapport with patients, one of the main goals should be to mitigate the presumptions a patient might have about the healthcare system.
- A conversational system should be designed to keep interactions with patients brief, precise, and informative.

Links and Resources in this Episode
- Ilana on Linkedin
- The Comprehensive Flash Briefing Formula Course
- Voice Technology in Healthcare Book
- www.TheVoiceDen.com
In this episode, Teri welcomes Dr. Bob Kolock, a retired physician, executive, and active Amazon Alexa skills developer. Dr. Kolock has over 35 years of experience in healthcare delivery. Since retiring, he has had the opportunity to spend more time on things that really interest him, which led him to start learning JavaScript and the Amazon Alexa development process. To date, he has 5 Alexa skills certified by Amazon. He is currently working on 2 more, one of which is closely aligned with his healthcare career: it focuses on improving the transition of care for patients who have undergone a medical procedure. He will be looking for partners to make this Alexa skill a routine way to deliver post-procedure care instructions in healthcare delivery systems.

Key Points from Dr. Kolock!
- Becoming a prolific Alexa skills developer after retiring from the healthcare space.
- The healthcare-oriented suite of skills that he's developing.

Getting Into Voice Technology
- He retired 6 years ago with an idea to build a smartphone tool to help manage foods in the pantry or refrigerator so someone could identify them before they spoiled. He started learning iOS, Android, JavaScript, and how to create an Alexa skill.
- His first skill was Food Manager. He initially thought a bar code scan would give the necessary expiration-date information, but it didn't, so he had to find another way to import the information.

His Skills So Far
- He has created 18 Alexa skills, 3 of which have been revised, and a number of them involve healthcare and behavior change. They also tie into the website he built to work with his database.
- One of his most popular skills is called Our Little Secret, in which a brother or sister gives the user secrets based on what they hear around the user's house. It plays on the privacy concern, but the secrets are fictitious and meant to be funny.
- Another is Wine Jester, where the idea is to hold a glass of wine up to the smart speaker and it will describe the taste, fragrance, and components of the wine.

Healthcare Skills
- One of his first healthcare skills was Blood Pressure Check, based on the American Heart Association guidelines. The user tells the skill their blood pressure reading and the skill gives feedback on which category that blood pressure falls into.
- Another is My Weigh Loss Coach, which helps users track their weight loss goals and gives them positive or negative feedback based on their results.
- On his website, one can schedule text messages to oneself to help with behavior change; his Healthy Text Scheduler skill sends users scheduled healthy-eating texts.
- Track My Dose helps people manage medication they are supposed to take on an as-needed basis.
- Kindness Counts was inspired by all the negativity in the world; it helps people focus on the good things happening around us.
- He also has two other skills geared towards helping physicians become more efficient so they can deliver care to more patients, and helping patients with follow-up care after a medical procedure.

Links and Resources in this Episode
- The Comprehensive Flash Briefing Formula Course
- Voice Technology in Healthcare Book
- www.TheVoiceDen.com
- Dr. Kolock's email - rakolock1@gmail.com
- Food Manager
- Our Little Secret
- Wine Jester
- Blood Pressure Check
- My Weigh Loss Coach
- Healthy Text Scheduler
- Track My Dose
- Kindness Counts
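The American Heart Association blood pressure categories that a skill like Blood Pressure Check relies on are public guidance. A minimal sketch of that classification logic (not Dr. Kolock's actual code) might look like:

```python
def bp_category(systolic: int, diastolic: int) -> str:
    """Classify a blood pressure reading per American Heart Association categories.

    Checks are ordered from most to least severe, so each reading falls into
    the highest category it qualifies for.
    """
    if systolic > 180 or diastolic > 120:
        return "Hypertensive crisis - seek medical attention"
    if systolic >= 140 or diastolic >= 90:
        return "High blood pressure (stage 2)"
    if systolic >= 130 or diastolic >= 80:
        return "High blood pressure (stage 1)"
    if systolic >= 120:  # diastolic is already known to be below 80 here
        return "Elevated"
    return "Normal"
```

In an Alexa skill, the spoken reading would be captured as slot values and the returned category rendered as the skill's speech response.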
In this episode, Teri welcomes Israel Krush, the CEO and Co-Founder at Hyro, a voice platform that allows enterprises to easily add voice capabilities to their websites and mobile apps. Israel is based in Israel and is a former elite intelligence officer in the Israel Defense Forces. He studied computer science and statistics, has a background in machine learning, and previously worked as a software engineer for Intel and various startups. Hyro allows customers to have two-way conversations that simplify their access to relevant information. Starting with healthcare, the software enables organizations to better engage their existing customers and reduce the cost of customer support.

Key Points from Israel!
- What they're doing in voice technology and conversational AI: taking data from various places and serving it up via their AI technology for people to incorporate into their websites, businesses, and other entities.
- The interesting work they're doing to help with the COVID-19 pandemic.

What Hyro Does
- It's a plug-and-play conversational AI platform for healthcare providers.
- The company is focused on both voice and text, as long as it's natural language.
- They target enterprises and organizations with massive amounts of data that is hard to navigate.
- The most important aspect of their solution is the plug-and-play approach. While researching the voice assistant and chatbot market, they learned that most existing solutions are creation platforms, where users define their intents and build their workflows or conversational flows. They found there was a lot of friction in deploying and maintaining those platforms, so they opted for a plug-and-play approach instead.
- One of the most valuable use cases of their solution for healthcare providers has been helping patients find a physician when they need one, matching physicians based on their various attributes.
- Another use case is helping patients find the services a healthcare provider offers.
- Patients and other users can interact through various modes and devices; most traffic comes from mobile devices through typing (texting).

Helping Battle COVID-19
- When the pandemic started, they gathered in conference rooms in all their locations to discuss how they could help, because they knew patients would have many questions about the Coronavirus.
- Using their technology, they scraped certified resources that answer questions about COVID-19, specifically the WHO and CDC websites, constructed a knowledge graph about the virus, and released a free chatbot that addresses issues around the virus.
- The chatbot answers frequently asked questions about the virus and gives people a risk assessment based on a short dialogue about their age, where they're based, whether they've interacted with a COVID-19 patient, and other factors.

Feedback From Users
- Hyro doesn't offer a one-size-fits-all solution. Every healthcare provider has its own unique needs, data sources, and ways of handling patients.
- With the COVID-19 solution, some healthcare providers have contributed additional resources about the virus, like their own FAQ webpages.
- Healthcare providers saw a need for a conversational solution to help patients get relevant information, so Hyro's solutions made sense to them.

The Rise of Telemedicine
- People are adopting telemedicine more and more; it has become clear that the old way of healthcare is gone and patients are more willing to use telemedicine.
- They see the same adoption in the conversational aspect of their solution. Patients are constantly asking how they can schedule a virtual appointment.

Links and Resources in this Episode
- The Comprehensive Flash Briefing Formula Course
- Voice Technology in Healthcare Book
- Hyro Website
- Hyro on Linkedin
- www.TheVoiceDen.com
In this episode, Teri welcomes Dave Kemp, a thought leader at the intersection of voice-first technology and hearables. Dave is part of Oak Tree Products, a company that provides medical supplies and devices to the hearing technology industry. He also has a blog, FuturEar.co, where he documents the rapid technological breakthroughs occurring in the hearables niche, including the biometric sensors and voice assistants being incorporated into hearable devices.

Key Points from Dave!
- How he became an expert in hearables and voice technology.
- The chapter he wrote for the book Voice Technology in Healthcare.
- Concrete examples of scenarios where voice technology can be used to make a difference in people's lives.

Voice Technology and Hearables: The Origin Story
- He was first introduced to voice technology at one of the first Alexa Conference events. He had gone there because he was researching what would happen with hearables, given that hearing aids were becoming Bluetooth-enabled.
- In 2015/2016, all the hearing aids coming to market were Bluetooth-enabled, so he started thinking about the app economy and what else could be done, technology-wise, in the hearing aid arena.
- The person who got him interested in voice technology was Brian Roemmele, whose content he came across on Twitter.
- Brian talked about voice technology as something that would simplify everything back to basics, such that a four-year-old could communicate with the technology just as a 95-year-old could. That gave Dave his aha moment, and he started to see the potential of smart speakers.
- He realized that if smart speakers continued to proliferate and people increasingly depended on them for more and more things, then people would probably want that type of functionality on their person. He saw Bluetooth-enabled hearing aids as a potential technology to fulfill that.

Voice Technology in Healthcare Book
- Dave wrote a chapter in the book about hearables and how they're becoming enabled. He started with the technical side and progressively wrote about how hearables will impact end users.
- Consumer-grade devices are being developed with the type of technology that legitimizes them as medical-grade wearables and hearables. An example is the Apple Watch; the Apple Watch Series 4 even has an ECG monitor.
- That technology will be applied in many applications and environments, but Dave is most focused on how the everyday person could build a longitudinal health data set (that is, how someone can collect data about their health a few times a year through a wearable).
- He talked a lot about how the devices work through PPG sensors, the optical sensors increasingly being placed in different wearable devices, for example on the underside of an Apple Watch.
- The sensors don't really capture anything new; the machine learning algorithms layered on top of them create new insights by detecting patterns. Dave covers all of that in the chapter from a data collection standpoint.
- He also wrote about how voice technology could be layered on top of that, and dived into how that would be impactful to end users, caregivers, and all the different types of stakeholders.

Links and Resources in this Episode
- The Comprehensive Flash Briefing Formula Course
- Voice Technology in Healthcare Book
- www.FutureEar.co
- Future Ear Radio Podcast
In this episode, Teri shares a recording of his recent webinar where Brian Roemmele spoke about some of his ideas and visions for what our world will look like using voice technology after the current Coronavirus pandemic. Brian is the man who actually coined the term "Voice First," and he's often referred to as the Oracle of Voice and the Modern Day Thomas Edison. He is a scientist, researcher, analyst, connector, thinker, and doer. Over the long, winding arc of his career, Brian has built and run payments and tech businesses, worked in media, including the promotion of top musicians, and explored a variety of other subjects along the way. He actively shares his findings and observations across fora like Forbes, Huffington Post, Newsweek, Slate, Business Insider, Daily Mail, Inc, Gizmodo, Medium, Quora (an exclusive Quora Top Writer for 2013 through 2017), Twitter (quoted and published), Around the Coin (the earliest cryptocurrency podcast), Breaking Banks Radio, and This Week In Voice on VoiceFirst.FM, surfacing everything from Bitcoin to voice commerce.

Key Points from Brian!
- Where he sees voice making the biggest impact in times like these, with the Coronavirus pandemic.
- How voice technology will change healthcare, and life in general as we know it, for the better.

Brian's Predictions on the Future Impact of Voice
- We are going to see a redesign of public interaction surfaces (such as over-the-air hand gestures) and more things interacting with voice.
- Our devices will become an interface, actuated by voice or touch, to open doors, choose locations and elevators, open car doors, and do a number of similar things, because people will be galvanized by the thought that another dangerous virus could emerge years after the Coronavirus.
- He recently studied a lot of information about the 1918 pandemic and dove into the mindset of what happened after that pandemic to determine what changed in society. He was able to identify similarities between that pandemic and the current one, and to reason about how society will change after the Coronavirus pandemic is over.
- One of the discoveries made after the 1918 pandemic was that copper surfaces had an immediate effect in devitalizing or deactivating viruses.
- Certain minerals and metals also devitalize viruses and bacteria through something called contact kill, which has been widely known for hundreds of years. People in Sumerian times actually used silver and copper utensils, which some saw as a sign of wealth, when in reality the utensils killed viruses and bacteria and made their food more presentable.
- Brian feels that hospital surfaces and beds should have a copper alloy coating to safeguard against viruses and other pathogens.
- He thinks there will need to be a way to diagnose people through voice. He sees that happening through different biosensors that will be placed on a person when they walk into a hospital, beginning the diagnosis before a medical attendant sees them.
- He insists that those biosensor devices must not be on the internet in any way, so that they can never be compromised. The devices will be tuned to a user's personality, outlook, goals, and motivations, and will notice changes in sleep patterns and other signals that serve as an early warning system.
- Brian has looked at several studies on Coronaviruses and found several early warning signs, such as sleep pattern disturbances, digestive pattern disturbances, change in temperature, change in heart rate variability, change in blink rate, and others.
- There are a number of signs of any virus within a human body, and one of them is a change in someone's temperature gradient.
- If one has a voice-first device on them, it can register the change in temperature and take the necessary action.

The Catalyst to Overhaul Medicine
- The Coronavirus pandemic will be the catalyst to overhaul medicine; Brian highlights that times of crisis are the only times in history that anything changes.
- He predicts the hospital room and points of contact will change because of the amount of attention we have put on the Coronavirus.
- He highlights the importance of self-sufficiency within countries, to ensure that people don't find themselves in the same kind of trouble they're in right now with the Coronavirus pandemic, and he feels voice-first technology will be a great start towards that.

Monitoring People's Vital Signs to Predict Pandemics
- Brian says that with proper human telemetry, a physician can figure out the health of a person.
- There are other signs people can use to determine whether someone has a virus or is ill.
- He actually has a voice-first AI with cameras that can determine that someone presents as if they're sick.
- He highlights that if people's health could be monitored electronically, we would have an early warning sign of an oncoming pandemic. People are not very good observers of their own health conditions, and even with current healthcare systems no one is ever really sure a diagnosis is exact, but with a system of telemetry, we can have accurate diagnoses.
- It all boils down to being able to collect tons of data that is voice-first. A great scenario would be someone asking their voice-first device how they are doing, and the device telling them exactly what their health is like.

The Roaring 20s That Will Come Out of the Current Pandemic
- Society will be reorganized, and there is going to be more telecommuting.
- Companies will not need many of their employees to go back to their workstations, because they will see that employees can work from home as long as they can do their work. Technology is going to inform that.
- With properly designed voice devices for the corporate environment, workmates will be able to easily communicate with each other from their different locations.
- Brian walked into one of his companies in the early 2000s, asked most of the employees to work from home, and productivity exploded as a result.
- Before the 1918/1919 pandemic, the average person was not interested in the telephone or the radio, but after the pandemic, they were very interested in both, because those technologies connected people through strong and meaningful communication.
- He predicts that the Coronavirus pandemic will release productivity, creativity, and socialization, and he feels voice technology will lead the way.

Links and Resources in this Episode
- The Comprehensive Flash Briefing Formula Course
- www.VoiceFirst.Expert
- Brian Roemmele on Twitter
- Brian Roemmele on Quora
- Brian on Linkedin
Can Voice First Technology Help with Pandemics?
In this episode, Teri welcomes Dr. Randall Williams, the Co-Founder of WellSaid, to talk about the work they are doing to help people age in place with a skill called "My Day." Dr. Williams is a serial digital health technology entrepreneur, physician, founder, and CEO. He supports C-suite executives, investors, and boards seeking to refine and implement scalable, engaging healthcare products and go-to-market strategies. He is also a trusted adviser who brings a unique breadth of healthcare market experience and understanding, spanning reimbursement policy, market strategy, technology development, and executive development. WellSaid connects seniors to Alexa for their healthy independence, and so that their loved ones can have peace of mind. Through the MyDay skill, WellSaid supports healthier behaviors, connects seniors to loved ones, and bridges the digital divide for a fuller and happier life.

Key Points from Dr. Williams!
- The aging-in-place Alexa skills he is producing to apply voice technology to the challenges of staying healthy, active, and independent as one gets older.
- The role patients should be taking as the users of the healthcare system, versus how much of the responsibility is on administrators and healthcare providers.

Background
- He trained as a cardiologist (heart failure and transplant specialist).
- Being in healthcare, he saw some of the failings of the healthcare system and how people with chronic diseases really fell through the cracks and lacked the systems and support they needed to stay healthy and out of the hospital.
- They built a program that provided resources in both hospital and home settings to help people and to stay on top of where patients were having challenges.
- They got into voice technology and created an interactive voice response app that allowed heart failure patients to report in every morning on how they were feeling and doing. That enabled them to prevent 50% to 60% of hospitalizations in that group of people, which led to an opportunity to look into other chronic diseases and see if the model could apply elsewhere.
- Between 2000 and 2004, they got the opportunity to commercialize their technology and form their first startup.
- At some point he had to give up his clinical practice to focus on voice technology.

How WellSaid Came About
- As they were growing their initial company (Pharos Innovations), they started to hear about and see voice assistant technologies emerging.
- They saw an opportunity to incorporate voice technology into their platform as another interface.
- They faced the challenge that smart speakers at the time were not set up for HIPAA compliance. They were also at a disadvantage in that they were often at arm's length from the users of their technology.
- They therefore decided to venture into the "aging in place" industry and spun another company out of Pharos. That's how WellSaid was created.
- Their goal was to use voice technology with the older adult population to help them stay healthy, active, and independent.
- They created a prototype and tested it with 50 seniors, which taught them a lot and helped them improve the technology.

All About MyDay
- It promotes healthy and independent aging.
- It links seniors with their family, friends, and caregivers through daily interactions.
- Seniors use a smart speaker to go through a daily program that assesses seven dimensions of well-being known to challenge their independence, such as cognitive decline, nutrition status, and mobility.
- Within the program, a senior can learn more each day about those areas, get coaching, or go through exercises and other interventions to strengthen different areas of their well-being.
- With all that information, seniors and their loved ones can understand where there may be risks and vulnerabilities, and where they may need additional support.
- The back end of the product is a companion app that allows family members to pair with a senior. The senior gives permission for their data to be shared with others so their loved ones can track how they're doing.

Better Every Day Flash Briefing
- It focuses on seniors, with the goal of informing, educating, equipping, and inspiring them to stay on top of active, healthy aging.

Administration of Healthcare vs. Proactivity in Personal Care
- While the healthcare system is great at taking care of patients inside healthcare facilities, it does a poor job of equipping people to stay healthy outside of them.
- Dr. Williams and his team aim to support the consumers who bear the problem of staying independent.
- Voice technologies in healthcare are already emerging as HIPAA-compliant, and many people are taking advantage of that.

The Uniqueness of WellSaid/MyDay
- Their technology leverages their expertise with the aging population from an ethnographic understanding built on interactions with hundreds of thousands of seniors. Over 10,000 seniors have so far interacted with WellSaid/MyDay.
- They are learning very rapidly by leveraging existing technology platforms and bringing together the competencies of implementation, content development, and aging expertise.
- Other companies in the same space have chosen to just create devices or their own unique versions of smart speaker software.
- They also have continuous learning and optimization advantages with their platform.

Links and Resources in this Episode
- The Comprehensive Flash Briefing Formula Course
- Better Every Day Flash Briefing
- MyDay Website
- Dr. Williams' Email
- Dr. Williams on Linkedin
In this episode, Teri will give us a preview of the Voice Technology in Healthcare book that he co-wrote with David Metcalf (PhD), Sandhya Pruthi (MD), and Harry Pappas. Teri assembled contributions from a number of different people for the book, which will be officially launched at HIMSS (Healthcare Information and Management Systems Society), a major medical information conference taking place in Orlando, Florida on March 10th, 2020. The book brings together the expertise of 32 thought leaders in different areas at the intersection of voice technology and healthcare.
Key Points!
- The Voice Technology in Healthcare book and how it can be useful to all of us.
- How the book covers a cross section of the voice technology industry: where the industry is at today, its history, what is available right now, and its future.
The Voice Technology in Healthcare Book
It's divided into four main sections. Section one is made up of four different chapters that serve as an introduction to voice technology, covering some of its key concepts in healthcare. The chapters include:
Chapter 1
- Written by Teri, it includes an overview of why voice is such an important concept when it comes to healthcare and technology. Teri shares why he feels voice will transform healthcare and become the next operating system (Voice Operating System).
Chapter 2
- Written by Ilana Meir, who has spoken at different voice events. She is the world's foremost expert on Voice User Interface (VUI) design and how it applies to healthcare. This will help in the design of voice applications, because VUI is critical.
Chapter 3
- Written by Audrey Arbeeny, the founder and CEO of Audiobrain.
- The chapter is titled, "The Science Behind Sonic Branding: How Audio Can Create Better Patient, Caregiver, and Healthcare Provider Outcomes." She discusses her 25 years of experience working in healthcare, how the brain processes music and sound, and why sound is the perfect tool for communicating, helping to heal, and promoting wellness. She also covers some of the projects her company has worked on, the history of the voice industry, and where it's headed in the future.
Chapter 4
- Written by Nathan Treloar from Orbita, it's titled "Secure Voice."
Section 2
This one has seven chapters that look at voice technology and the patient experience. The authors of these chapters are mostly people who have had experience creating voice applications and seeing how they impact patients. The chapters include:
Chapter 5
- It's titled, "Automated Virtual Caregiving Using Voice First Services: Proactive, Personalized, Holistic, 24/7, and Affordable." It was written by Stuart Patterson from LifePod.
Chapter 6
- This is about voice and wearables and was written by Dave Kemp.
Chapter 7
- Written by Rupal Patel, it's about synthetic voices for healthcare applications. Rupal has been doing amazing work on how people can create voices for brands, but also for the medical field, where a voice can be created for someone who is losing their own.
Chapters 8 & 9
- These are edited versions of podcast interviews that took place on the Voice First Health Podcast.
Teri wanted to incorporate the interviews in the book to bring a real personal aspect to the narratives that readers will encounter.
- Chapter 8 is titled, "Voice First Health Interview: A Diabetes Care Plan." This was with Anne Weiler, who actually won an award for her diabetes Alexa skill.
- Chapter 9 is titled, "Voice First Health Interview: Alexa Skills for Pediatrics." This interview was with Devin Nadar, speaking about some of her experiences creating skills specifically for kids.
Chapter 10
- Written by Robin Christopherson, it's called "The Rapid Rise of Voice Technology and its Awesome Power to Empower." It's all about accessibility, and he wrote about how the Echo, and voice first technology more broadly, represents a fantastic opportunity for people with disabilities.
Chapter 11
- It's titled, "An Overview of Voice Technology and Healthcare," and is by a team of authors from Macadamian Technologies.
Section 3
It's titled, "Voice Technology and the Provider Experience," and it's all about what the healthcare provider is experiencing with voice technology.
Chapter 12
- It's titled, "Mayo Clinic: Patient Centered, Innovation Driven," and is written by a team at the Mayo Clinic, including Dr. Sandhya Pruthi, one of the book's co-authors.
Chapter 13
- This is another Voice First Health interview, titled "Voice Technology for Behavioral Changes," where Teri talked to Dr. Matthew Cybulsky about how we can use voice technology to influence positive behavioral changes in the hope of positive health outcomes.
Chapter 14
- It's called "The Laws of Voice" and is written by two lawyers, Heather Deixler and Bianca Phillips.
Chapter 15
- This one is based on a Voice First Health Podcast interview and is called "Medical Documentation in the Voice First Era." It features Dr. Harjinder Sandhu from Saykara, who talks about medical documentation through voice.
Chapter 16
- Written by Yaa Kumah-Crystal and Dan Albert from Vanderbilt University, who are creating a voice-enabled EMR (V-EVA). They discuss the considerations for designing a voice user interface, like Siri or Alexa, to help doctors ask for information from the electronic health record and have it summarized back by the computer in words.
Chapter 17
- It's titled, "The Power of Voice in Western Medical Education," and was written by Dr. Neel Desai and Dr. Taylor Brana. They are leaders when it comes to using voice to educate the next generation of medical students, and they have an Alexa skill, MedFlashGo, that is doing just that.
Chapter 18
- Written by Michelle Wan, it's titled "Voice First Health Interview: Voice Technology for Educational Simulations."
Chapter 19
- Written by Ed Chung, it's called "Voice Control of Medical Hardware." Ed talks about how voice is a wonderful way to control medical hardware in a number of settings.
Section 4
It's titled, "Voice Technology and the Future of Healthcare," and has four chapters, namely:
Chapter 20
- It's titled, "Voice First Health Interview: Voice Applications with Dr. David Metcalf," and is about the fascinating things Dr. Metcalf and his team are doing that incorporate voice technology and healthcare.
Chapter 21
- This is titled, "Voice First Health Interview: Vocal Biomarkers and the Voice Genome Project with Jim Schwoebel." Jim talks about the fascinating area of vocal biomarkers and being able to diagnose diseases by listening to someone's voice.
Chapter 22
- Written by Suraj Kapa, it's called "Artificial Intelligence and Voice Analysis: Potential for Disease Identification and Monitoring." Suraj talks about how we can use voice to analyze and monitor different types of diseases.
Chapter 23
- This is a roundtable discussion among Dr. David Metcalf, Dr. Sandhya Pruthi, and Teri.
They talk about some of the themes, trends, and aha moments that they noticed while putting the book together.
Links and Resources in this Episode
- The Comprehensive Flash Briefing Formula Course
- The Voice Technology in Healthcare Book
- www.VoiceFirstHealth.com/Live
See acast.com/privacy for privacy and opt-out information.
In this episode, Teri welcomes Drs. Neel Desai and Taylor Brana, who wrote a chapter together in the Voice Technology in Healthcare book about medical education and how voice technology is completely changing the way we look at education for medical trainees.
Dr. Desai has dedicated many years to helping medical students understand that there is so much more to medicine than just clinical practice. Dr. Brana is the host of The Happy Doc Podcast, which amplifies the voices of physicians doing interesting, creative, and fulfilling things outside of their regular medical practice. He has a deep interest in improving the processes of medicine, healthcare, and education. Starting The Happy Doc Podcast has enabled them to learn what makes physicians fulfilled in their careers and lives.
Key Points from Drs. Desai and Brana!
- Voice technology and how it will improve fulfillment, happiness, and education, and offer the ability to learn in a different way.
Getting Interested in Voice
Taylor:
- It came naturally for Taylor.
- He got into podcasting because it was a way for someone to learn a lot of things while staying active doing other things.
- They learned a lot about voice technology and what it could do from many people, including Gary Vaynerchuk.
- He sees voice as "podcast plus," where one can imagine a future in which users interact with voice experiences in a way that puts them in control.
Neel:
- He got his first smart speaker in 2016, and one day he noticed his son doing a quiz through an Alexa skill, which he found fascinating.
- His own frustrations with EMRs made him think of voice as a solution, even for the training of medical students.
Voice and the Future of Education - Taylor
- Teaching has always happened through oral (voice) instruction, so using voice technology is a natural progression.
- He feels it's important to understand voice so we can use it to the best of our capacity.
- In the chapter they wrote in the Voice Technology in Healthcare book, they talked about where the technology is now and where it's heading.
- The Mayo Clinic, for example, has a skill that is like a general repository of information. People can ask the skill medical questions, and it will give them solid information about, for example, what to do if they get injured or have a muscle spasm.
- The benefits of applying voice technology in education are tremendous, and there is still so much more to do.
Their Educational Voice Applications
Neel:
- They created MedFlashGo, a medical question bank for students in medical school who are studying for different exams.
- The skill is for studying for board exams. In the US there are three steps to becoming a licensed physician, and each step has important exams that one has to pass to move on to the next level.
- The skill consists of hands-free medical flash cards on the go. The cards are test questions for the user, either multiple choice or fill-in-the-blank, about medical subjects for their exams.
- They currently have around 1,000 questions, with a goal of covering Step 1, Step 2, Step 3, and shelf concepts.
- Their focus is on teaching the next generation of medical students by creating new, healthier learning systems and training them on voice early on. That will ensure adopting voice speakers in healthcare facilities won't be a big barrier for them later.
- He believes this application will pioneer new ways of learning.
Taylor:
- They are also working on DentalFlashGo and MCATFlashGo.
- The skills are all focused on helping students with their board exams.
The Happy Doc Podcast
Taylor:
- Burnout, depression, and exhaustion are happening a lot in medicine, and one of the things they have been talking about is how learning, teaching, creating, and practicing medicine in the century we live in is going to become easier and more efficient.
Links and Resources in this Episode
- The Comprehensive Flash Briefing Formula Course
- Voice Technology in Healthcare
- MedFlashGo Skill
- MedFlashGo Website
- MedFlashGo on Facebook
- MedFlashGo on Twitter
- MedFlashGo on Instagram
- The Happy Doc Podcast Website
- Dr. Neel on Twitter
- Happy Doc Podcast on Facebook
- Happy Doc Podcast on Instagram
- Happy Doc Podcast on Twitter
See acast.com/privacy for privacy and opt-out information.
In this episode, Teri will share a recording of the presentation he gave at Project Voice 2020.
Project Voice 2020 was a fantastic event, and Teri had the opportunity to participate in a number of different talks, including a workshop that he gave with his colleagues Harry Pappas, David Box, and Ilana Meir.
Key Points from the Presentation!
- Teri's ideas about the current opportunity for voice technology, specifically in healthcare.
- Designing for voice in healthcare.
The Basics of Voice for Healthcare
- Teri started the presentation by playing a clip from the movie Elysium, in which a little girl with acute lymphoblastic leukemia is put on a futuristic home-based medical machine that completely heals her of the disease.
- With voice technology penetrating the market and giving us access to incredible technology, Teri sees today's devices as a primitive version of the technology in that movie scene.
Where We Are and What's Happening Next
- Teri shares a story about when he was ten and his parents got their first personal computer (a Tandy computer from RadioShack).
- They showed Teri how to use it, and he spent a long time typing on it.
- Technology evolved, and we went from that type of computer to the newer models we have now.
- Twelve years ago Steve Jobs introduced the iPhone, and now we have a computer in the palm of our hands.
- All those technologies have several things in common, including a keyboard and an interface.
- For the first time ever, we don't need an interface or a device in hand, because we can use our voice while the device sits in the background.
- We can communicate in the most natural way we know how: using our voices.
- Voice is extremely efficient; the speed at which we can communicate a message when speaking is much faster than typing.
- With voice we can multitask, and because of that, voice will be the next operating system.
- Research on the penetration rates of consumer technologies over the course of history shows that smart speakers are being adopted more rapidly than any consumer technology in our history, including mobile phones.
- Scenario: What if we could have a physician, dietician, physiotherapist, and nurse living in our homes 24/7 through the kind of technology portrayed in the movie Elysium?
- That kind of scenario is what gets Teri excited, and he believes we are starting to see the primitive kindlings of that kind of healthcare team in the home.
Opportunities for Healthcare in the Home
It's often been said that the right healthcare system is one where you're getting the right care, at the right time, in the right place.
The Right Care:
- If one doesn't have any medical training and is feeling sick, oftentimes they're not sure what the right care is.
- They can't tell whether to see a doctor, take some pills, or get some therapy. One way they can figure that out without accessing the healthcare system is through voice. For example, one can talk to their smart assistant, which can then tap into AI physicians, nurses, dieticians, etc., who can answer common health questions.
- Teri believes that will mean voice assistants becoming care providers in the home.
The Right Time:
- If one feels sick, they may not know when to see a doctor.
- Scenario: What if that person could ask their voice assistant for help, and the voice assistant could start to triage and assess the urgency?
- Once it does that effectively, it could start to direct the resources of the population, with voice assistants doing the triage at an individual level in each home.
The Right Place:
- One of the biggest issues for Canadians, and for people worldwide, is that there is so much demand on the healthcare system that patients will often go to the emergency department for something like a cold because they can't access their family doctor. It can take weeks to see a family doctor.
- That is a huge issue in the Canadian healthcare system, so directing people to the right place is critical.
- The question that lingers is, "How does a person know where to go when they don't have medical training?" There are lots of different options for where a person could get medical care, depending on the issue.
- Scenario: What if voice assistants acted like tour guides for the healthcare system, helping people navigate through it?
The Pillars of Creating a Healthcare Application
- Right now it's very easy for someone to create a healthcare application that educates patients, meaning it's one-way communication.
- A great example is Teri's health-oriented flash briefings.
- Another great example is how someone can talk to a smart assistant and ask diabetes-related questions, which the assistant will answer.
- The only problem is that there's very little of a clinical component to that, so it doesn't actually provide any diagnosis or treatment.
- We can't yet rely on smart devices to treat a disease, but Teri believes that in the future we will be relying on these devices to treat us.
Aging in Place
- This refers to, for example, being able to easily and seamlessly connect with family members through a voice assistant, or being able to reminisce about events by asking a voice assistant to remind us about something that happened in the past.
Physician Notes
- This is one of the biggest pain points for physicians, and if somebody can solve it effectively, it's the holy grail.
- Having a voice-controlled EMR with AI-enriched interactions would also be a great breakthrough.
Vocal Biomarkers
- It's an exciting area. Entities like the Mayo Clinic are studying vocal biomarkers so that a person could just be speaking and the smart speaker could pick up a health condition.
- They are looking at coronary artery disease and seeing if they can pick it up from a person's voice. The technology can also be leveraged for other diseases, like Parkinson's.
Patient-Centered Healthcare
- Healthcare systems are a maze that patients have a difficult time navigating. They're oftentimes not sure where to go, and if they do go somewhere, they don't know where to go next.
- Teri thinks we should start having smart speakers in the home so the patient can be in the driver's seat.
- Patients can access the voice assistants when they need to, get some advice, and, when necessary, have a supporting crew of physicians to tap into.
Links and Resources in this Episode
- The Comprehensive Flash Briefing Formula Course
- LifePod
- Voice Technology in Healthcare Book
See acast.com/privacy for privacy and opt-out information.
In this episode, Teri welcomes Sandeep Konam, the co-founder and CTO at Abridge, a healthcare and machine learning company.
Abridge creates AI-powered tools that empower patients to take control of their health stories. Sandeep is always excited to build products that positively impact people's lives, and he has great experience working at the intersection of machine learning and health. In the past, he worked on enhancing the perception capabilities of UAVs, on multi-robot coordination, and on several health-tech projects. Most recently, he worked on deep-learning-based autonomy for UAVs and developed techniques to make deep-learning algorithms interpretable. He is also the founder of the KONAM Foundation, which builds tech interventions to address challenges in agriculture and education.
Key Points from Sandeep!
- How Abridge enables patients to record their conversations with their healthcare providers, and then have a log of what the encounter was about, using Abridge's own speech, language, and machine learning tech.
Getting into Voice
- He has been working at the intersection of machine learning and health for about seven years.
- Around 2016, he got interested in creating an oncology clinical trial matching platform, where oncologists and patients could easily find a list of clinical trials for eligible patients.
- Doctors, clinicians, and patients loved the idea, but data liquidity and integrations with EHR systems turned out to be an uphill battle.
- The fact that patients have no access or right to their data challenged Sandeep, and that formed the basis for the creation of Abridge's technology.
- Abridge works towards helping patients capture, create, and curate their own healthcare data via audio recording.
How Their Technology Works
- Patients use the Abridge platform to securely record, review, and share their clinical data.
- They tap a button in the app and record audio, which is then encrypted and uploaded to the cloud for machine learning processing.
- Once the audio is processed, they can quickly review it using the clinical concepts highlighted by Abridge's machine learning pipeline. A patient can also keep their loved ones in the loop by sharing the recordings with them.
- Everything is powered by Abridge's in-house machine learning tech, built over time.
- They use different machine learning platforms, in addition to their own, to parse all the information being captured and synthesize it into something succinct and summarized that patients can benefit from.
Feedback from Patients and Clinicians
- They get very positive feedback from caregivers, doctors, clinicians, and patients, who have come forward to show their appreciation for the app and talk about how transformational it has been.
- Some clinicians appreciate that the patients handle the recording part of their conversations, so all the clinicians have to do is review the recordings.
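The record, encrypt, upload, and highlight flow described above can be sketched in a few lines. This is purely an illustration of the general pattern, not Abridge's actual implementation: the XOR "encryption," the `Encounter` class, and the `CLINICAL_CONCEPTS` word list are all made up for the example, and a real system would use proper cryptography and a cloud speech-to-text service.

```python
from dataclasses import dataclass, field

def toy_encrypt(data: bytes, key: int = 0x5A) -> bytes:
    """XOR 'encryption' -- a placeholder, NOT real security."""
    return bytes(b ^ key for b in data)

@dataclass
class Encounter:
    transcript: str
    highlights: list = field(default_factory=list)

# Hypothetical concept list; a real pipeline would use a clinical NLP model.
CLINICAL_CONCEPTS = {"metformin", "hypertension", "mri", "follow-up"}

def process(transcript: str) -> Encounter:
    """Highlight known clinical concepts in a transcript."""
    enc = Encounter(transcript)
    for word in transcript.lower().replace(",", "").split():
        if word in CLINICAL_CONCEPTS:
            enc.highlights.append(word)
    return enc

# Simulated flow: record -> encrypt -> upload -> decrypt -> process
audio = b"(raw audio bytes)"
uploaded = toy_encrypt(audio)            # what leaves the device
assert toy_encrypt(uploaded) == audio    # server-side decryption recovers it
transcript = "Continue metformin and schedule an MRI, follow-up in May"
result = process(transcript)
print(result.highlights)                 # ['metformin', 'mri', 'follow-up']
```

The key design point the episode emphasizes is that the patient holds the recording and the highlights; sharing with clinicians or family is an explicit, separate step.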
- Doctor-patient relationships have also improved because of this.
- He sees this form of recording healthcare data becoming the future trend.
- Physicians will sometimes request the recordings to put in their medical charts, but the decision lies solely with the patients.
Plans for a Clinician-focused Platform
- They believe they can provide value to clinicians, but in the short term, patients are their only focus.
Privacy and Security
- Privacy is a core principle they abide by with their platform. Users own and control their data: if they delete their data, it's deleted for good, and it's only shared if they choose to share it. Abridge doesn't sell any data.
- They use state-of-the-art encryption from the moment a patient records audio all the way to the point where they get a summary of their recording.
HIPAA Compliance
- They have not necessarily altered the app for HIPAA compliance, but they have checked off all the necessary boxes because they go beyond HIPAA compliance in their data privacy and security processes.
- The fact that their app is patient-focused also means that they're not necessarily required to be HIPAA compliant.
The Future for Abridge
- They are constantly working on improving privacy and security.
- They're also continuously enhancing their machine learning capabilities.
The Meaning of Voice First Health to Sandeep
- With the ongoing talk about voice as a biomarker that can detect health issues, and about doctor burnout, he believes the right approach is giving patients a voice-centered experience where they can capture the audio of their interactions with their doctors.
Links and Resources in this Episode
- The Comprehensive Flash Briefing Formula Course
- Abridge's Website
- Sandeep's Email – San@Abridge.com
See acast.com/privacy for privacy and opt-out information.
In this episode, Teri welcomes Rana Gujral, the CEO of Behavioral Signals, a company that allows developers to add speech emotion and behavioral recognition AI to their products.
Rana is also an entrepreneur, speaker, and investor. He has been awarded 'Entrepreneur of the Month' by CIO Magazine and the 'US China Pioneer' Award by IEIE, and was listed in Inc. Magazine as an "AI Entrepreneur to Watch." He has extensive experience in enterprise software, product development, strategy, business-building, and emerging markets. Rana previously served as an Executive Vice President at Cricut Inc, where he headed engineering, strategy, technology, and IT, and also held leadership positions at Logitech S.A., Kronos Inc., and Deutsche Bank AG, where he was responsible for the development of best-in-class products and contributed to several award-winning engineering innovations.
Key Points from Rana!
- The many reasons why it's important that machines can recognize and express human emotion, including improving human-computer interaction and boosting business KPIs.
- Looking at the intention behind the words somebody is speaking, and how that is being used in the financial and health sectors.
- Suicide risk, and the ethics of what happens when an AI is inferring the meaning behind what a person is saying.
Developing an Interest in Voice and Joining Behavioral Signals
- He had been following the voice space and was fascinated by AI.
- In the last decade, the voice technology industry has really progressed. Five to seven years ago we spoke of NLP or speech-to-text as cutting-edge technology; fast forward to now, it's no longer considered cutting edge, even though it's very accurate, with brilliant business models built on top of it.
- Voice technology has been directed towards creating a wide range of solutions in different domains. What Behavioral Signals focuses on is how words are said, and what that says about the state of mind of the speaker.
- Rana believes this is the aspect of voice interactions that's going to be a game changer.
- He also believes it is the aspect that has been holding back the whole interaction piece of voice technology.
How the Technology Tells the Difference
- At the core of Behavioral Signals' engine is the variety of outputs it produces when an interaction is being recorded. They determine who spoke when (diarization) and deduce basic emotions like anger, sadness, happiness, and frustration. They also go after specific aspects of tone change, i.e., the trend of positivity over the duration of a conversation.
- They are less hung up on what is being said and more focused on the emphasis behind the words. That tells them a lot.
- An example is a study done at Yale University, where researchers took a piece of content from YouTube and extracted just the audio. They deduced the emotional and behavioral signals from it and mapped them out. Then they turned the video on and analyzed both the audio and the video, looking at the facial expressions and body language of the people in the video, and added that to the emotional and behavioral map they were piecing together.
- The natural expectation was that adding those additional data points, on top of processing the audio alone, would make them more accurate, but what they found was that they actually became less accurate.
- In other words, mapping based on audio only scored higher than mapping that also used the video.
- What they realized is that we as humans are fairly adept at masking our emotions through our facial expressions, but we can't do that with our tone of voice.
- Scientists have shown that if one can accurately create an emotional and behavioral map of an interaction, decipher the cognitive state of mind of a participant, and understand the context of that interaction, they can predict what that person will do in the near future, and in some ways predict intent.
- For example, Behavioral Signals is working with collections agencies and banks to predict whether a debt holder is going to pay their debt, simply by listening to a voice conversation. They have been able to do that with a very high level of accuracy.
- They are also working with a company that is building a software platform for patients with depression. The company uses Behavioral Signals' technology to predict a propensity for suicidal behavior.
The Ethics
- The technology could be misused, so people must be careful when using it. He believes companies like Behavioral Signals have a responsibility to protect the data.
- We are going to rely more and more on inanimate systems and machines, and those systems and machines will become more intelligent than humans.
Links and Resources in this Episode
- The Comprehensive Flash Briefing Formula Course
- Rana on Twitter and Linkedin
- Rana's Website
- Behavioral Signals Website
See acast.com/privacy for privacy and opt-out information.
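The "tone change" idea mentioned above, a positivity trend over the course of a conversation, can be illustrated with a toy computation. This is only a sketch of the general concept (Behavioral Signals' actual models are proprietary and far more sophisticated): given per-segment positivity scores from some emotion classifier, fit a least-squares line and read its slope as the trend.

```python
def tone_trend(scores: list[float]) -> float:
    """Slope of the best-fit line through (segment index, positivity) points.

    scores: hypothetical per-segment positivity values, 0 = negative,
    1 = positive. A positive slope means the conversation's tone improved.
    """
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# A call that starts tense but ends on a positive note:
segments = [0.2, 0.3, 0.5, 0.7, 0.9]
print(f"trend: {tone_trend(segments):+.2f}")  # positive slope -> improving tone
```

In a real pipeline the per-segment scores would come from an acoustic emotion model applied after diarization, so each speaker could get their own trend.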
In this episode, Teri welcomes Rabbi Elijah Dordek, the founder and CEO of the ShanenLi Speech Recognition app.
Rabbi Dordek is a pioneer when it comes to voice technology and using voice applications to help people learn. ShanenLi is an interactive tutor that helps students, young and old, achieve literacy and master memorization using automatic speech recognition. It guides users in practicing reading, reciting, and memorizing.
Currently featuring Pirke Avot (a classic Hebrew text), the app lets the user listen to each paragraph and follow along as each line lights up when read. When proficient, the user reads aloud and is tested. The recitation is marked in color for errors to enhance self-correction. Stars are earned according to score, along with sound effects and words of encouragement, and the user can share their progress with others.
Key Points from Rabbi Elijah Dordek!
- Helping people learn religious texts through the ShanenLi app.
- How the technology can be applied in medical education for medical students who need to learn and memorize tons of information, and as a tool for becoming proficient in ways of communicating with patients.
Background
- He is based out of Jerusalem, Israel.
- He studied Judaism at a deeper level and decided to abandon his plans to go to MIT and instead become a teacher and educator.
- Through his studies (of the Oral Law of the Torah), he ended up developing systems for helping people learn it. He also looked at how he could use technology to create an interactive way for people to learn and master large or small bodies of text.
- He's also a trained psychotherapist.
Introduction to Voice Technology
- Being involved in the Oral Law of the Torah (which is meant to be known by heart), he wanted to find ways to help those who were struggling with their studies of the Oral Law.
- He printed books that laid out the texts of the Oral Law visually in such a way that people could learn much more easily.
- While thinking about how he could use technology to achieve the goal of helping people learn, he came up with the idea of using voice to have a conversation between man and machine, where the machine would present the text and, line by line, help the user hear it and repeat it until they had gained proficiency.
The Technology at the Root of the App
- The ShanenLi Speech Recognition app is in the Play Store. It's primarily in Hebrew, but they're trying to grow into other languages. The backend for the app is the Google speech recognition API.
- A great use case example would be someone who is scheduled to give a TED Talk and has to know their talk by heart. It might not be the easiest thing to do, but the technology can help this person achieve it more quickly, easily, and pleasurably.
The Evolution
- He believes we've come a long way towards getting machines to do what we want, but that we're not there yet, because the conversations don't flow.
- He believes machines will progress gradually and get there in the future.
How the Technology Can Impact Healthcare
- He knows of a company that created a voice application that analyzes the voice for health issues. It can help a doctor find out if someone has a health issue just from their voice.
- He believes, and has high hopes, that anything we can humanly define, we can then somehow ask a computer to search for.
Links and Resources in this Episode
- Rabbi Dordek on Linkedin
- Rabbi Dordek on Twitter
- Rabbi Dordek's Email Address - 360@shanen.li
See acast.com/privacy for privacy and opt-out information.
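The error-marking behavior described above, comparing what the user actually recited against the target text and flagging mismatches for self-correction, can be sketched with a word-level diff. This is an illustration only; ShanenLi's actual scoring logic is not public, and `mark_recitation` and its labels are invented for the example.

```python
import difflib

def mark_recitation(target: str, spoken: str) -> list[tuple[str, str]]:
    """Word-level diff of a recitation transcript against the target text.

    Returns (status, word) pairs: 'ok' for correctly recited words,
    'missed' for target words the speaker skipped or got wrong,
    'extra' for words the speaker added.
    """
    t_words, s_words = target.split(), spoken.split()
    sm = difflib.SequenceMatcher(a=t_words, b=s_words)
    marked = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            marked += [("ok", w) for w in t_words[i1:i2]]
        else:
            marked += [("missed", w) for w in t_words[i1:i2]]
            marked += [("extra", w) for w in s_words[j1:j2]]
    return marked

target = "who is wise he who learns from every person"
spoken = "who is wise he that learns from person"
for status, word in mark_recitation(target, spoken):
    print(f"{status:>6}: {word}")
```

In the app, the 'missed' and 'extra' words would be the ones rendered in a warning color, and the ratio of 'ok' words to target length would feed the star score.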
In this episode, Teri welcomes back Brian Roemmele, the "Oracle of Voice" and the "Modern Day Thomas Edison."

Brian is the consummate Renaissance man: a scientist, researcher, analyst, connector, thinker, and doer. He is credited with coining the term "Voice First". Over the long, winding arc of his career, Brian has built and run payments and tech businesses, worked in media (including the promotion of top musicians), and explored a variety of other subjects along the way. Brian actively shares his findings and observations across fora like Forbes, Huffington Post, Newsweek, Slate, Business Insider, Daily Mail, Inc, Gizmodo, Medium, Quora (an exclusive Quora Top Writer for 2013 through 2017), Twitter (quoted and published), Around the Coin (the earliest cryptocurrency podcast), Breaking Banks Radio, and This Week In Voice on VoiceFirst.FM, surfacing everything from Bitcoin to voice commerce.

Key points from Brian!

Medical transcription devices, and how the new Google Recorder is a huge leap forward for the future of medical transcription.

Google Recorder
It transcribes voice in real time on the new Pixel 4 phone. Everything is done locally on the device: it can be in airplane mode with no SIM card or WiFi and still do phenomenal speech-to-text. In effect, it is a Google search engine built inside the storage of the speech-to-text files one creates. It assembles a lot of technology that already existed in the market, but in a way that is unique and more useful. Voice transcription has been around since the 70s, but the ability to file transcriptions in a meaningful way has not. The Google Recorder is much simpler than using a desktop or laptop. The device is currently just a freeform database: it is not a fill-out-the-patient-form-with-your-voice solution, it is a notes solution. Google does not yet know what they have with the Google Recorder.

Tackling Voice Dictation and Medical Transcription
Paperwork takes up more than 60% of physicians' time. The problem with existing medical transcription devices is that they are designed by engineers to solve the problem, and human factors are not built into the system. Brian works from the solution backwards: he believes the best solution would be a device that doctors wear, which talks to them. It would probably sit in the ear with a microphone, and might include glasses for visual feedback.

Scenario: a doctor visits a patient and asks for permission to use the transcription device, which is covered by HIPAA laws and deletes the audio file after full transcription, while the text remains. After, say, 30 to 50 years of patient interactions, the database becomes the entire notation of the patient. It would be legally acceptable for a physician never to fill out a form again, because the database itself maintains the continuity of that patient's record. Physicians would fill out notes in real time as they talk to the patient; the intelligence would be built into the device (an intelligence amplifier), which would extract the notes necessary to fill out the forms without the physician directing it.

Brian says that if he were at Google, he would make it the size of a pack of gum, pair it with a really good set of earbuds (one in the ear and one charging at any given time), and give it a light to show when it is on, so patients would know when it is recording. He believes Google could build that standalone device today, but they do not yet have the AI to support it. The device must not use the cloud or a network to store patients' data; it must instantly transcribe to a hardened local system.

Links and Resources in this Episode
www.VoiceFirst.Expert
Brian Roemmele on Twitter
Brian Roemmele on Quora
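Brian's description of the Recorder as "a Google search engine built inside the storage of the speech-to-text files" boils down to an index over locally stored transcripts. Below is a minimal sketch of that idea; the class, method names, and sample notes are invented for illustration and are not Google Recorder's actual code.

```python
# A hedged sketch of an on-device inverted index over saved
# speech-to-text notes: every word maps to the set of notes
# containing it, so search never needs the network or the cloud.
from collections import defaultdict

class TranscriptIndex:
    def __init__(self):
        self.notes = {}                # note_id -> transcript text
        self.index = defaultdict(set)  # word -> set of note_ids

    def add(self, note_id: str, transcript: str) -> None:
        self.notes[note_id] = transcript
        for word in transcript.lower().split():
            self.index[word.strip(".,!?")].add(note_id)

    def search(self, query: str) -> set:
        """Return ids of notes containing every query word (AND search)."""
        words = query.lower().split()
        if not words:
            return set()
        hits = self.index[words[0]].copy()
        for w in words[1:]:
            hits &= self.index[w]
        return hits

idx = TranscriptIndex()
idx.add("visit-1", "Patient reports knee pain after running.")
idx.add("visit-2", "Follow-up on shoulder mobility.")
print(idx.search("knee pain"))  # {'visit-1'}
```

A production system would add tokenization, stemming, and ranking, but the core point stands: once transcripts live in local storage, full-text search over them needs no server at all.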
In this episode, Teri welcomes Scot and Susan Westwater, the co-founders of Pragmatic Digital, a digital consultancy experienced in designing voice experiences. Scot is the Lead Strategist at Pragmatic, while Susan serves as the CEO. They started the consultancy to help entrepreneurs and marketers understand the opportunity that voice represents and to help them take advantage of it.

Key points from Scot and Susan!

Designing voice applications, specifically for healthcare.

Getting into Voice
They first heard about voice from Gary Vaynerchuk and got interested. After some research, they saw voice as a technology with the potential to blow up just as web, mobile, and social media did. They saw it as an opportunity to do some good in the world and to help a lot of people through education. For Susan, the selling point for voice is how it makes information accessible.

Creating a Good Voice Experience for Healthcare
The first strategic move when creating a good voice experience for healthcare is to look at the target audience. In healthcare there are three distinct audiences: the payer, the provider, and the patient. A business or organization must clearly understand what it wants to accomplish for one or all three of these audiences, and what it wants the healthcare experience it creates to achieve.

VoiceFirst = PatientFirst
The idea is that you start with the patient. A lot of healthcare organizations talk about being patient-centric, but Scot has observed that most are not as patient-centric as they claim. Healthcare organizations should start with patients' needs by ensuring patients get all the information they seek about the conditions they suffer from, and should have empathy for those conditions by putting themselves in patients' shoes.

How HIPAA Requirements Play Into Things
Physicians can get disease information out there in a way that most health organizations cannot, because the organizations have many regulatory and legal concerns to consider.

The Barriers
Scot and Susan recently ran an informal online survey of the business and healthcare community to understand the challenges and barriers that keep organizations from creating healthcare voice experiences. Most take a wait-and-see approach because they do not understand the regulatory and legal issues they might face.

The Future of Voice
In Scot's mind, voice is going to become the default input for most computing devices. The huge inroads being made by companies like Samsung in TVs, refrigerators, and other appliances demonstrate how voice will become a ubiquitous technology that we interface with. Scot believes voice technology will follow us everywhere through different devices and experiences. Voice is already playing a major role in elder care, and its potential impact in healthcare is huge. Moving things into a hands-free, voice-enabled space can help with everything from physician-patient relationships and interactions to patient care, symptom tracking, and much more. The potential use cases for voice in healthcare are numerous.

Their Book
Their book, Voice Strategy: Creating Useful and Usable Voice Experiences, comes out in November. It draws on their combined 20+ years of experience and all the voice knowledge they have gained over the past couple of years, and is geared towards helping people create good voice experiences.

Links and Resources in this Episode
The Comprehensive Flash Briefing Formula Course
Voice Strategy by Scot and Susan Westwater
Pragmatic's Website (Scot & Susan's Blog)
Scot on Twitter
Susan on Twitter
Scot on LinkedIn
Susan on LinkedIn
In this episode, Teri welcomes Monica Chaudhari, the Founder and CEO of AdirA, a Health Action Platform that helps women be the Chief Wellness Officers (CWOs) of their families and implement good health and wellness decisions for themselves and their entire families.

Monica became a start-up entrepreneur after age 50, building AdirA for women, by women. Before starting AdirA, she worked all over the world for a reputed pharmaceutical company. Her vision for AdirA is to serve CWOs with the utmost independence and integrity while running a profitable business. Monica herself is CWO to her mother, her spouse, and two grown sons.

Key points from Monica!

Using personalized decision-support tools to help women make and act on health and wellness decisions, and incorporating voice technology into that.

The Concept Behind AdirA
AdirA means a strong, noble, and powerful woman. Monica chose the name because the company focuses on the family's Chief Wellness Officer, who about 80% of the time is a woman between 35 and 64 years old. These women are usually the key decision makers on health and wellness matters for about five to seven people in their family. Statistics from major health information websites indicate that more than 75% of all health-related information researched on the web is researched by women, for themselves and their families. There is a lot of good health information on the web, but its sheer volume is overwhelming; women end up delaying a decision or not making it at all, and arrive at the doctor's office unprepared for the necessary conversation. The whole research process is a dissatisfying, negative experience, and that is where AdirA comes in. AdirA helps women access the necessary information and concierges them to the products and services relevant to the decision or recommendation that is right for them and their families. AdirA also provides two talking points, based on their decision process and recommendation, on what to say when they see a doctor. AdirA does not own the products and services it concierges women to.

How AdirA Works and the Voice Technology Element
As they built out the platform, they decided to partner with Orbita rather than invest a lot of money and time in building their own IT infrastructure. They built their own proprietary interface around Orbita's chat and voice-activated technology, and they develop consumer experiences based on the needs of their clients and sponsors. All their communication is targeted at women, who are engaged by a chatbot on the AdirA website. Each experience runs under three minutes, because women are busy and want fast, efficient, personalized service. In that time, a woman is guided through a series of 12 to 14 questions; at the end she receives two to four recommendations and is concierged to three or four relevant products and services to implement her decision. AdirA never plugs sponsor ads or sponsor connections into the concierging; there is a sponsor corner on the website where women can choose to interact with sponsors of their own volition.

They currently have a contraception decision tree. When a woman uses it, she knows exactly which medical professional built it. She can ask her medical question related to contraception and get a recommendation on where to find information. She is connected to telemedicine and a doctor finder, and given two talking points, based on her answers in the decision tree, that are most relevant to her conversation with a doctor. She is also concierged to all the relevant products and services. The contraception decision tree is ratified by HealthyWomen, a non-profit for women's health education. The whole experience makes seeking contraception information much easier and more efficient for women.

Links and Resources in this Episode
The Comprehensive Flash Briefing Formula Course
AdirA Website
In this episode, Teri welcomes Natalia Suarez, a young UX and Voice User Experience (VUX) designer who is passionate about using voice technology to address dementia.

Natalia is originally from the Dominican Republic and now lives in London. She studied interactive media and is currently working on three different skills geared towards tackling dementia.

Key points from Natalia!

How to leverage voice technology to help those with dementia.

Natalia's Background
She started out in graphic design but later shifted into UX, which is what she has been doing for the last five years. She recently completed a Master's degree in Interactive Media Practice, which is how she got involved in voice technology.

Master's in Interactive Media Practice
This is a program at the University of Westminster covering topics such as social media, interaction design, user experience, mobile apps, wearables, and much more.

Getting Interested in Voice Technology
While looking for something major and unique for her final project, she came across someone interacting with an Echo device. That caught her attention, and as a UX designer she wanted to experience what it would be like to design voice user interfaces. She based her thesis project on that.

Focusing on Dementia
Her grandmother had dementia before she passed away, and Natalia always wondered how she could have communicated with her better to understand her needs. Her grandfather also has dementia. That formed her passion for tackling the condition.

Dementia and Voice Technology
For her final thesis project she has developed three Alexa skills, one of which is published. The published skill, Colour Mind, is a basic quiz for people with dementia. It helps people going through memory loss remember colors and other things, and is dedicated to providing a fun, engaging way for patients to communicate with family members, carers, and others. It helps them associate things from nature with the correct color names and lets them engage in informal conversation.

The other Alexa skills she is working on are "About Me" and "My Journey". About Me will help patients remember things about their lives: the names of their parents and other people, what they like, and so on. It will be integrated with a form that a carer completes for the patient, containing all the relevant information about them. My Journey will help patients remember where they are going. Natalia is most excited about this skill because it can be expanded to help so many people: a patient will be able to leave their house and, if they forget where they are going along the way, ask Alexa to remind them. The skill can be used in so many ways by both carers and patients. About Me and My Journey will be ready for publishing on the skill store in a month's time.

Feedback From Users About Colour Mind
She did a lot of user testing while working on the skill, both online and through card sorting with a tool called Optimal Workshop, where some of her friends tested the skill with their grandparents who had dementia. Alexa's responses are always positive, even when a patient answers something wrong. Natalia has been working on making the skill multimodal.

Links and Resources in this Episode
The Comprehensive Flash Briefing Formula Course
Colour Mind
Natalia on LinkedIn
Natalia on Twitter
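Colour Mind's always-positive feedback loop — praise for a correct answer, a gentle redirection rather than a "wrong" for an incorrect one — can be sketched independently of the Alexa SDK. The questions, phrasings, and function names below are invented for illustration and are not taken from the actual skill.

```python
# Hypothetical quiz data; the real skill's questions are not public here.
QUESTIONS = [
    {"prompt": "What colour is the sky on a clear day?", "answer": "blue"},
    {"prompt": "What colour is grass?", "answer": "green"},
]

def handle_answer(question: dict, spoken: str, score: int) -> tuple[str, int]:
    """Return (response text, updated score); the response is never negative."""
    if spoken.strip().lower() == question["answer"]:
        return ("Well done, that's right!", score + 1)
    # A wrong answer still gets a positive, encouraging reply.
    return (f"Good try! The colour we were looking for is "
            f"{question['answer']}. Let's try another one.", score)

reply, score = handle_answer(QUESTIONS[0], "Blue", 0)
print(reply, score)  # Well done, that's right! 1
```

In a real skill this logic would live inside an intent handler, with the recognized utterance arriving as a slot value, but the design principle — never scold a user with memory loss — is entirely in the response text.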
In this episode, Teri talks to doctors Matt Cybulsky and Reid Maclellan, who were both with him at the Voice of Healthcare Summit in Boston over the summer. The three interviewed each other in a fireside-chat style about the incredible things going on at the intersection of voice technology and healthcare, and in particular the great things they heard at the Summit.

Matt Cybulsky is the founder and chief consultant of IONIA Healthcare Consulting, which supports organizations with strategic and operational innovation projects. His expertise focuses on technology and healthcare delivery. Reid Maclellan is a physician and adjunct professor at Harvard Medical School and Boston Children's Hospital. He is also an instructor of surgery at Harvard Medical School, as well as the founder and CEO of Asclepius, an artificial intelligence company with the mission to restore the care in healthcare and improve quality of life for patients and physicians.

Key points from the discussion!

Conversations at the Voice of Healthcare Summit around building applicable tools, ethics and privacy, and expanding tech for voice-first.

Key Takeaways from the Summit
Compared to the 2018 summit, the number of people doing incredible things in the voice and healthcare space more than doubled, evidence of continuously increasing interest. The breadth of speakers and the range of organizations represented also grew, with many more topics in voice and healthcare. We are going to see a skyrocketing number of wonderful applications that assist physicians and patients in healthcare.

Tangible Builds for Voice
There has been excellent research on integrating voice technology into EMRs. The idea of vocal biomarkers is a fascinating area with great potential. Different voice applications are emerging, such as those from the Mayo Clinic, which has been growing its applications beyond purely informational skills towards integrating patient care with voice technology.

Beyond Pilot Projects
2019 has been called the year of the pilot: most revolutionary applications that could impact healthcare on a massive scale are still in the pilot stage. Organizations, hospitals, and clinicians with pilot projects they want to grow will need to hasten their efforts to produce tangible tools, because larger firms with more capital will start introducing voice tools and interactive features that may be more advanced. One of the biggest barriers to launching health-oriented voice applications at scale has been protecting users' privacy; with the recent HIPAA compliance of Alexa for certain institutions, we are going to see some of these healthcare applications come online. The Mayo Clinic has been giving patients a lot of information through voice applications.

AI in General
A user can converse with an AI-type bot in different ways. AI is not intelligent until its creator makes it that way. That is why a lot of early AI was built on image pattern recognition: it was easy to give the computer a large set of identical images, plus a few that were different, for it to learn from. With the advent of Natural Language Processing (NLP), AI can understand conversations and words. There will need to be a cultural change for physicians to really adopt conversational AI.

Notable Speakers at the Summit
Nathan Treloar of Orbita is a heavyweight in the voice-first world; Orbita has tangible tools being used in the market and has achieved great milestones. Stuart Patterson of LifePod opened people's eyes to what can be done in home health, keeping families informed about loved ones who are ill, caring for the elderly, and much more. Punit Soni of Suki talked about being able to use voice while you are hands-on with the patient, rather than being distracted by the keyboard and screen. Bianca Phillips of e-Health Consultants talked about the ethics and law of voice technology in health. Other interesting speakers included Michael Antaran of CARROT Pass, Henry O'Connell of Canary Speech, and Audrey Arbeeny of Audiobrain, among others.

Voice as a Potential Vital Sign
In a study by the Mayo Clinic and Beyond Verbal, patients going in for a coronary angiogram were asked to read different statements. Comparing the recordings with the angiogram outcomes, the researchers found a statistically significant correlation between the way someone speaks and the risk of coronary heart disease. They also did a study of people with congestive heart failure: following them over time and having them read or describe an experience, they found a statistically significant correlation between the way someone spoke and the risk of death.

Links and Resources in this Episode
The Comprehensive Flash Briefing Formula Course
Voice of Healthcare Podcast
In this episode, Teri welcomes Audrey Arbeeny, the Founder/CEO and Emmy Award-winning Executive Producer/Creative Director of Audiobrain, a globally recognized sonic branding boutique dedicated to the intentional development of music and sound.

Audrey is a pioneer of sonic branding and is highly skilled in many areas of its development and implementation. She oversees Audiobrain's projects from start to finish, coordinating logistics, strategy, experience design, resources, and talent. In addition, she oversees Audiobrain's ongoing research in psychoacoustics and biomusicology. Audiobrain has consistently remained a leader in this field through innovation, research, education, advanced technological skills, and forward-thinking initiatives for some of the world's largest brands.

Key points from Audrey!

Her depth and breadth of experience in sound and audio branding. How sonic branding can have a profound impact on people's health.

Getting into Sonic Branding
Audiobrain's mission is to promote and advocate the intentional use of music, sound, and voice to promote health, wellness, and well-being. That can be in healthcare or in a device: making experiences better, not creating noise, being respectful of the end user, and simply making everything sound better. Audiobrain enabled Audrey to realize her dream of combining her lifelong love of music and science with proven business skills. Music has long been an important part of Audrey's life: she began formal piano training at the age of four, is an accomplished flutist, and studied voice at Carnegie Hall under the late Silas Engum for many years. She also has extensive recording, editing, licensing, interactive, and sonic branding experience.

Brands and Projects
They have done many unified communications projects with IBM, and Audrey worked on the IBM ThinkPad sound. They worked with Microsoft on the Xbox 360, and they work with Holland America Cruise Lines, having just finished expanding the experience on five ships in four days in Alaska. Audrey was the music supervisor for the past seven Olympic broadcasts for NBC. She has been the creative director and head of production for the strategic development of brand sounds for Virgin Mobile USA, GlaxoSmithKline, Google, Logitech, Major League Soccer, KIA Motors Corp, the New York Giants, McDonald's, Merck, and HBO, to name a few.

Healthcare Applications
Audrey tells a story about identical twin girls who were in an accident and how music was used to communicate with them. She has also used music with one of her own family members who was on a ventilator: knowing the music her relative loved, Audrey loaded an iPod so it could be played for her as therapy. When the relative came off the ventilator, she said she was tired of Josh Groban songs, because they were all she heard the whole time. Veterans have testified that music makes them feel better; music can reach people where other things cannot. There are also many new devices. Audiobrain just completed work on a major surgical robot, providing the auditory signals a surgeon needs when their eyes are on a computer and something requiring an alert is happening behind them. Audrey's elderly father has an apnoea machine from Philips that reports directly back to his physician, along with a monitor for his pacemaker and other positively impactful devices.

The Thought Process Behind Creating Sonic Branding
They create full, holistic sonic branding. If someone approaches them with, say, a heart-health monitoring device for runners, Audiobrain first looks at the brand, its aspirations, its touch points, and the critical places where it uses sound, then creates a soundscape around that. They use a series of tools they have created to avoid overwhelming clients and to zero in on what resonates with them. From there they educate, pull things together, handcraft the core characteristics, and start to evolve the sound. They just did Logitech's Jaybird, which has product personification and overarching branding; they even recorded one of the voices within the product for the call center.

Links and Resources in this Episode
The Comprehensive Flash Briefing Formula Course
Audiobrain Website
Audiobrain on Instagram
Audiobrain on Twitter
Audiobrain Email - info@AudioBrain.com
In this episode, Teri welcomes Matt Hammersley, the Co-Founder and CEO of Novel Effect, Inc.

Novel Effect specializes in creating innovative, entertaining, and immersive story-based experiences that seamlessly blend real-life and imaginary worlds. Their patented technology brings together the power of voice recognition and the best creators in entertainment to add dimension and interactivity to traditional media people enjoy together, like books, television, film, and games. Novel Effect's free storytime app has won awards and achieved great milestones in the voice technology space. Matt started the company with his wife; he has a background in chemical engineering, worked as a cryptographer for the government, and was a patent attorney who has filed numerous patents, including for Novel Effect.

Key points from Matt!

All the great things Novel Effect is doing, and how that ties in with healthcare and kids' development. Using technology in a way that brings people together.

Novel Effect
They started off with, and are best known for, books. The app synchronizes a movie-style soundtrack as a parent and child read a print book aloud together. The app hears a person read and determines whether they are actually reading the book or talking to the person next to them. When it detects reading, it knows exactly where they are in the book and synchronizes the right music and sound effects for that point. A user can open a book in the middle and the app will pick right up from there.

The idea all started with Matt's daughter. He and his wife wanted to build a little library for her, and a friend of his wife brought them a book and read it aloud, captivating everyone by adding fun voices and sound effects to the story. That gave Matt the idea to create something that would let everyone enjoy storytelling like that. He filed patents for the idea, and within three months he had quit his job, they had sold their house, and they had moved across the country to start the company. Matt is not a developer, so he found a tech co-founder to help build the app. They started with The Little Engine That Could: they got the first three pages working and read it to their daughter, who stopped what she was doing and listened to the story. That is when they knew they had it.

How it All Works
They custom-compose the music and sound effects for each book at their state-of-the-art digital studio. A user needs a copy of the book they want to read; the app pairs with it whether it is an eBook or a print book. The user just taps play on their phone, puts it down, and reads.

Novel Effect and Pediatric Development
For their third co-founder, they brought in their sister-in-law, who was completing a PhD in teaching children with visual impairments. Novel Effect was a light-bulb moment for her, because the idea meant enabling kids with visual impairment to read and understand much more easily. They have been passionate about education and about making a positive impact on literacy and kids' motivation to read.

Current Statistics on Successes
They focus on three core educational outcomes: engagement (are kids paying attention or not?), comprehension (do they understand the material, and do the music and sound effects improve that understanding?), and enjoyment (are they enjoying it? If so, they are motivated to keep doing it). They have measured a 75% improvement in engagement, judged by how many kids stay focused on the teacher rather than getting distracted, and an 85% improvement in comprehension.

Novel Effect in Speech-Language Pathology
They have focused on helping second-language learners, because the system responds positively when a learner says something correctly, which encourages practice; the instantaneous feedback on how well they are doing helps in a big way. The biggest impact Matt has seen is with his CTO, who still had a thick Chinese accent after 20 years in the US; since he started working with Novel Effect, his diction, speed of speech, and pronunciation have all improved.

The Built-in Gamification
The app has a gamification aspect in the way the music and sound effects reward saying the right words the right way. They have thought about making the gamification more explicit, such as awarding badges and points.

Stuff on the Horizon
Spanish is coming, with 30 to 40 titles; users will be able to read a book in English and Spanish and switch back and forth. They are also getting into TV, where adults and kids will be able to role-play, read from a television screen, and talk to characters who talk back to them.

Links and Resources in this Episode
Novel Effect Website
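The position tracking Matt describes — the app hears a few words, decides whether they belong to the book, and locks onto the right spot even if the reader opened it mid-way — can be sketched as a best-matching-window search over the book's words. The matching rule and threshold below are invented for illustration and are not Novel Effect's patented method.

```python
# A hedged sketch of read-aloud position tracking: slide a window the
# size of the heard phrase across the book text, score word-for-word
# matches, and reject weak matches (the speaker is probably not reading).
import math

def locate(book_words: list[str], heard: list[str]) -> int:
    """Return the index in book_words where `heard` best fits, or -1."""
    n = len(heard)
    best_pos, best_score = -1, 0
    for i in range(len(book_words) - n + 1):
        window = book_words[i:i + n]
        score = sum(a == b for a, b in zip(window, heard))
        if score > best_score:
            best_pos, best_score = i, score
    # Require most of the window to match so ordinary chatter is ignored.
    return best_pos if best_score >= math.ceil(0.6 * n) else -1

book = "the little engine that could chugged up the hill".split()
print(locate(book, ["engine", "that", "could"]))  # 2
print(locate(book, ["pass", "the", "salt"]))      # -1
```

A real system would work on recognizer output with timing, tolerate misrecognized words, and bias the search towards the last known position, but the core mechanic of anchoring audio cues to a best-matching span of the text is the same.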
In this episode, Teri was invited onto the DataTalk Podcast by its host, Michael Delgado, to talk about all kinds of scenarios and implications at the intersection of voice technology and healthcare. Enjoy the show!

Teri is a Sport & Exercise Physician and Clinical Assistant Professor in the Faculty of Medicine at the University of British Columbia in Vancouver, Canada. He has a particular interest in e-health innovation and the intersection of voice technology and healthcare. He is the founder and host of Voice First Health, a website and podcast that highlights the rapidly expanding intersection of healthcare and voice-first technologies. Teri and Michael got a little futuristic and painted some pictures of where voice technology is headed.

Doing it All
Teri teaches courses to help train the next generation of physicians, runs two podcasts about voice technology, and much more. He is passionate about what he does, which makes things easier.

Getting Interested in Voice Technology
His biggest passions are education, technology, and healthcare. Before medical school, he completed an education degree and became a teacher. Around 3 years ago, he started hearing about voice technology, and being a techie at heart, he was intrigued; back then Amazon Alexa was not yet available in Canada. He saw voice technology as a great way to bring his three passions together, which is when he decided to start his Alexa in Canada blog and podcast. He later launched his Voice in Canada flash briefing and the Voice First Health Podcast.

From Teaching to Medical School
He holds four degrees: a Bachelor of Science in Anatomy, Cell Biology, and Biotechnology; a Master of Science in Experimental Medicine; an education degree; and a medical degree. That was followed by a family practice residency and a sports medicine fellowship. He has been able to bring all of them together in the voice-first space.

The Technology's Potential
He initially saw Alexa devices as a tool for education, but later realized that the available applications take the technology to a whole new level. With the recent HIPAA compliance of Alexa devices, they can now store personal health information, which means we can start to use the devices as surrogates for care providers. The devices will gradually take on more of a role as a care aid or provider that can look after people. There are many applications beyond voice that provide medical care, and they always have a physician, hospital, or company behind them; voice applications in healthcare will work the same way.

Vocal Biomarkers and the Ethical Issues Around Them
The Mayo Clinic has been studying vocal biomarkers, where a person could simply be speaking and their smart speaker could pick up a health condition. They are looking at coronary artery disease and whether they can detect it from a person's voice; the technology could also be leveraged for other diseases, such as Parkinson's. There are ongoing discussions about the ethics of voice technology, including concerns about who owns the data that voice assistants and devices store, and there are no clear answers right now.

Teri's Favorite Episodes on Voice First Health
Teri is a big fan of Brian Roemmele, known as the Oracle of Voice. Brian is a technologist and futurist who has been studying voice technology for decades. He assimilates knowledge from different disciplines to predict where voice technology will take society. Brian believes we will reach a point where voice assistants know us personally.

Links and Resources in this Episode
Brian Roemmele on Voice First Health
Voice First Health Podcast
DataTalk Podcast
In this episode, Teri welcomes Rupal Patel, the founder and CEO of VocaliD, a voice AI company that creates custom synthetic voice personalities so that brands and individuals can be heard as themselves. Rupal is also a professor at Northeastern University in Boston and has a background in speech science. VocaliD creates synthetic voices from voice recordings of real people. The voices are so lifelike that the technology is being used to give a voice to people who do not have one, or who are about to lose theirs.

Key points from Rupal!
Learning how people produce speech, and using that basic science to develop new technologies that help people with impaired speech learn to speak and cope with their speech disorders.

Transitioning into Voice Technology
At Northeastern University, she works in speech and hearing sciences. Her earlier work was in developing assistive technologies, but in the last few years she has also been working on learning technologies.

The Origin of VocaliD
It is a project she started in her lab in 2007, bringing together basic speech science and the design of assistive technology. Her lab at Northeastern University is called the Communication Analysis and Design Laboratory; she has been on leave from the university for the last few years. VocaliD started when they found that people whose speech was unclear due to severe neurological disorders could still control certain aspects of their voice, such as its prosody. Most people with speech impairments had to use assistive technologies to communicate because their speech wasn't clear enough for people unfamiliar with them, yet the voices available on those assistive devices were very few. Rupal saw an opportunity to develop customized synthetic voices to fit each individual.

VocaliD's Current Work
They moved out of the lab at Northeastern in 2015 and received government funding (from the National Science Foundation and the National Institutes of Health) to turn the laboratory-based science into commercial products. Between 2015 and 2017, they focused on getting the technology ready to be used and integrated with existing assistive technologies, concentrating on users of those technologies. They kept refining the technology and the custom voices they were creating, and by 2018 they started seeing interest from broader market applications: apps that talk but don't want to sound the same as Alexa or Siri. They now work with companies across a variety of verticals that want to create a custom voice identity for their product or brand.

The Process of Creating a Custom Voice
They start by recording a person's speech and then gluing together little bits of the speech sounds to create the synthetic voice. They have since moved to parametric speech synthesis: after recording the voice, instead of concatenating small pieces of speech, an algorithm learns the pattern of how the individual speaks and then emulates it.

The Human Voicebank
This was an initiative they started when the company was still in its early startup stage. The people they were making customized voices for could still vocalize, so the company could get some sound from them. The initial technique was to take whatever sound they could get from the individual, find a surrogate voice donor who could produce 5 to 7 hours of speech, and blend the client's sound sample with the donor's voice. They needed volunteers to donate their voices, and they built a massive dataset of people from around the world: 26,000 people from 110 countries, with ages ranging from 6 to 91 years, have contributed to the voicebank so far. That dataset is what enables them to create voices for those who can't speak. While they don't use any of the voicebank voices for enterprise clients, they created an easy-to-use online recording platform that anyone in the world can use to contribute their voice to the voicebank.

Use Case Stories
The most powerful use cases are people who bank their voice because they are only days away from losing it. Prior to this technology, people had two options: using an electrolarynx or having a tracheostomy speaking valve fitted. Their technology offers a better alternative. Some people use the technology during the first 3 or 4 months of recovery right after voice-related surgery, because they have no other way to communicate; as they get through voice therapy, it gives them an option for communicating with those around them.

Security Applications
In 2018, they were approached by a large national institution to test its voice authentication systems. With banks, for example, people can access their accounts using their voices: the voice of the speaker is compared to a pre-recorded, saved voice print. As speech synthesis gets better, it will become harder for machines and people to tell the difference between a synthetic voice and a real one, so VocaliD has been creating tools to recognize synthetic voices and ensure the technology is not misused in the future.

Links and Resources in this Episode
VocaliD Website
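The voice-print comparison described in the security discussion can be illustrated with a toy example. The sketch below is a minimal, hypothetical illustration, not VocaliD's or any bank's actual system: it assumes a voice sample has already been reduced to a fixed-length embedding vector, and accepts the speaker when the new sample is close enough (by cosine similarity) to the enrolled voice print. The embeddings and threshold are made up for illustration.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / mag

def authenticate(sample_embedding, enrolled_print, threshold=0.85):
    """Accept the speaker if the new sample is close enough to the
    enrolled voice print. Threshold is illustrative, not a real value."""
    return cosine_similarity(sample_embedding, enrolled_print) >= threshold

# Toy embeddings: a matching speaker and an impostor.
enrolled = [0.9, 0.1, 0.4]
same_speaker = [0.88, 0.12, 0.41]
impostor = [0.1, 0.9, 0.2]

print(authenticate(same_speaker, enrolled))  # True
print(authenticate(impostor, enrolled))      # False
```

Detecting a high-quality synthetic voice, the problem VocaliD mentions working on, is much harder than this, since a good synthesizer is designed to score well on exactly this kind of similarity check.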
In this episode, Teri welcomes Michael Antaran, the founder and CEO of CARROT, a mobile wellness app that rewards people for active, healthy living through walking. CARROT is a mobile wellness program that rewards people financially for walking and meeting personal activity goals. CARROT successfully gamified health and wellness, delivering three times the engagement at less than one-sixth the cost of traditional programs. Unlike anything else in the marketplace, CARROT's set-it-and-forget-it approach uses individualized goals and instant gratification to motivate even the most sedentary users, making it an attractive corporate wellness program for companies of all sizes.

Key points from Michael!
Providing people with instant gratification for achieving personalized activity goals, and how voice technology can help facilitate that.

Coming Up with CARROT
The startup is based in Detroit. Michael's life always revolved around health, hospitality, and marketing, based on what members of his family did for a living. He was in the automotive industry for 15 years and wanted to do something different that would let him raise his kids at the same time, so he started a mobile gaming company. He was one of the first 20,000 developers on the Apple iPhone SDK platform, and he released one of the first 500 apps on the Apple App Store. Due to his own concerns about the impact of his company's games, he shifted from a pay-to-play model to a walk-to-play model, which led to the development of CARROT.

How CARROT Works
CARROT taps into the pedometer features of a user's phone to learn how many steps they take on a daily basis. A built-in, on-device AI learns the characteristics of a user's activity habits and defines a personalized goal that makes sense for them. Based on how active a person is, the app creates a goal that makes them walk a little more every day, establishing a healthier new normal of daily walking. Once a user achieves their personal goal, they get a green carrot coin; they also get a yellow reward point for every step they take. With those two forms of currency, they can play games, participate in challenges, or instantly get a digital gift card on their phone through the app. Now that they are moving to voice, a user can get a gift card just by saying, "Buy that gift card." The way users can earn currency and spend it gets them very engaged with CARROT.

Going into Voice
Voice can engage individuals daily with nothing but their voice: a minute or less of engagement each day to keep users motivated to stay active and make healthier decisions. Voice also makes the app more intuitive.

Future Plans
Voice technology will enable CARROT to expand the range of healthy behaviors it rewards.

The Current Stats
CARROT launched in January 2016 as a corporate wellness program for organizations. Three years later, they released CARROT for individual users, which required a different game, so they came up with Auctions. They have over 165,000 accounts in 39 countries. They have other fun games on the way, like CARROT Verses, their take on fantasy football; they will release it for Alexa, letting users select their fantasy team in about a minute using their voice. Depending on how they perform on the leaderboard, users will be able to win $250 a week in prizes.

Links and Resources in this Episode
CARROT Pass
CARROT Wellness for Android
CARROT Wellness for iOS
CARROT Wellness Website
The Most Comprehensive Flash Briefing Course
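The "walk a little more every day" goal-setting described in the CARROT episode can be sketched in a few lines. This is a guess at the general shape of such logic, not CARROT's actual algorithm: it nudges tomorrow's goal a small percentage above the user's recent average, so the target stays achievable while gradually raising the baseline. The nudge percentage and floor are invented for illustration.

```python
def next_step_goal(recent_daily_steps, nudge=0.05, floor=2000):
    """Set tomorrow's goal slightly above the user's recent average.

    recent_daily_steps: list of daily step counts (e.g. last 7 days).
    nudge: fractional increase over the average (5% here, illustrative).
    floor: minimum goal so very sedentary users still get a target.
    """
    average = sum(recent_daily_steps) / len(recent_daily_steps)
    return max(floor, round(average * (1 + nudge)))

# A user averaging ~4,100 steps/day gets a goal just above that average.
print(next_step_goal([4000, 4500, 3800, 4200, 4100, 3900, 4300]))  # 4320
```

The key design idea from the episode is that the goal is personalized to observed behavior rather than fixed at a universal number like 10,000 steps, which is what makes it attainable for sedentary users.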
In this episode, Teri welcomes Dev Singh, the CEO and co-founder of Viki Health. Dev is a HealthTech entrepreneur from Chicago and the founder and CEO of two successful tech ventures: AiRo Digital Labs and ViKi Health AI. Dev is passionate about leveraging advanced technology to positively impact patient outcomes. His most recent venture, ViKi Health AI, is a revolutionary 24/7-365 voice-AI companion for seniors. It combines voice technology with AI, machine learning, and natural language processing to bring the best of these technologies to the senior living sector, along with remote patient monitoring. It offers features such as voice-based interaction with the back-end EMR for care management, voice-based dictation for nurses, voice-IoT integration with wearables for remote monitoring and real-time diagnostics, emergency assistance, lifestyle and entertainment profiles, concierge and home automation services, and many others.

Key points from Dev!
Producing a comprehensive ecosystem to help the aging population live more independently.

Forming Viki Health
The company is less than a year old and has development centers globally. Its 4 founders all have elderly parents, which is part of why they started it: to provide the elderly with some level of support through technology. They created Viki by combining five technologies (voice, AI, natural language processing, RPA, and IoT) into one use case.

How Viki Health Works
It has five modules:
Care: Helps seniors connect better with their care providers, and vice versa. This could mean a hotline with the doctor, or enabling a nurse to dictate notes while administering care to a patient, making them more efficient. This module also enables the user to transmit their vitals on an ongoing basis.
Alerts and Notifications: Lets the user set thresholds, and can alert caregivers and family members to any emergency issues.
Home Automation: Helps in controlling lights, electronic fixtures, and so on.
Concierge: Enables requests for support services like facility maintenance, medication refills, the food menu, and others.
Lifestyle and Engagement: This has huge potential. It is Alexa-based and is linked to the TV and audio devices; doctors and caregivers can pre-program a patient's entertainment choices.
The alpha version of Viki was released a month ago and is Alexa-based, requiring the Amazon Echo Show as the user's main device. They are hosted on Amazon's AWS, use Fitbit, and use HIPAA-compliant encryption. They have three ongoing pilots and hope to change their device combination within a year.

Interactions for Users
The Alexa skill is completely customized to a particular facility. Facilities can customize the name of the skill, and Viki itself, to fit their unique preferences.

The Pilot Programs
One pilot is at a 1,000-senior facility and another is at a much larger assisted-living facility. They have learned that people are excited about the technology, and all the stakeholders involved in the pilot programs have supported and believed in it. Some features were added to Viki during the programs; for example, the concierge feature was the brainchild of one of their users.

The Future of Viki
They aim for a future where a senior moving into a care facility will demand to have a Viki in their room.

Links and Resources in this Episode
Viki Health Website
In this episode, Teri welcomes Dr. Harjinder Sandhu, the founder and CEO of SayKara, a company tackling the problem of medical documentation. Dr. Sandhu has 20 years' experience in speech and machine learning in healthcare and is a former VP and chief technologist of Nuance Healthcare R&D. He started his career as a professor of computer science at York University, co-founded a startup doing speech recognition for dictation that was sold to Nuance, and also co-founded a startup focused on patient engagement. SayKara's goal is to free physicians from the mountain of paperwork that awaits them at the end of each day; the company's solution is Kara, a virtual assistant that documents an entire doctor-patient conversation without interruption.

Key points from Dr. Sandhu!
Why medical documentation is such an important problem for physicians, and why SayKara is focused on it.

The Origin of SayKara's Core Focus
Physicians have long done dictation, but they still have to type data directly into the EMR and make sure it lands in the right places. Studies show that physicians spend 2 hours of screen time for every hour of patient time. Physicians today stare at their screens while listening to their patients, which is a bad experience for both doctor and patient. This is the challenge Dr. Sandhu set out to solve with SayKara.

How SayKara Works
They wanted to change the equation of how speech recognition works. Their solution lets a physician face their patient without looking at a screen. It runs on an iPhone or iPad: the physician selects the patient, then sits and talks with them. The app listens in the background, captures the entire conversation, extracts the relevant data, and enters it into the EMR. The next time the physician wants those notes, they will be in narrative form and read as if the physician had written them personally.

Ensuring SayKara's AI Captures Conversations Accurately
Capturing conversations and translating them into notes is a difficult problem. Speech recognition is about capturing words without necessarily understanding meaning; it is about predicting the next word and understanding the sounds that were generated. SayKara, by contrast, focuses on listening to conversations and interpreting them, a problem that takes time and a lot of data to solve adequately. They didn't have the necessary data, so they set out to capture it by creating a service physicians could use. They built an augmented AI solution, similar to the AI built into autonomous cars, that keeps learning, adapting, and improving. There are people who make sure the AI interprets conversations appropriately, and the physician signs off on the notes after the application produces them.

The New Release
The previous version of SayKara was similar to the Alexa model, where the user had to say "Kara" every time they issued an instruction, which disrupted the conversation a lot. They understood that physicians needed long-form conversation, not short one-sentence interactions, so they spent a lot of time figuring out how to make the interactions more natural.

The Current Status
They have had a phenomenal response to SayKara and are in a number of large health systems. They have been live commercially for about a year and are gradually growing a user base that combines physicians in large health systems and small medical groups. They are present in the US only for now. Physicians describe SayKara as life-changing because of the time it saves for other important things. It also improves their productivity, and they make fewer mistakes because they speak out loud, so patients can correct them when they're wrong. Patients generally love that their physicians use SayKara.

Future Plans
The big innovation now is to build a true clinical assistant system. They are teaching the system not just to listen but to predict and understand what will happen in an encounter, making it smarter about clinical pathways so that it becomes a natural clinical assistant.

Links and Resources in this Episode
SayKara's Website
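The augmented-AI workflow described in the SayKara episode, where machine-generated notes pass through human review and physician sign-off before reaching the EMR, can be sketched in the abstract. Everything below (class names, statuses, the sample note) is hypothetical and illustrative, not SayKara's actual API or data model:

```python
from dataclasses import dataclass, field

@dataclass
class DraftNote:
    """A machine-generated clinical note awaiting review (illustrative)."""
    text: str
    status: str = "draft"          # draft -> reviewed -> signed
    history: list = field(default_factory=list)

def review(note: DraftNote, corrected_text: str) -> DraftNote:
    # A human reviewer fixes interpretation errors; in a learning
    # system, these corrections could be fed back as training data.
    note.history.append(note.text)
    note.text = corrected_text
    note.status = "reviewed"
    return note

def sign_off(note: DraftNote) -> DraftNote:
    # Only the physician's sign-off releases the note toward the EMR.
    if note.status != "reviewed":
        raise ValueError("note must be reviewed before sign-off")
    note.status = "signed"
    return note

note = DraftNote("Pt reports knee pain x2 wks, worse on stairs.")
note = review(note, "Patient reports 2 weeks of knee pain, worse on stairs.")
note = sign_off(note)
print(note.status)  # signed
```

The point of the structure is the one the episode emphasizes: the AI output is never authoritative on its own, and the correction loop is also how the system accumulates the data it needs to improve.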
In this episode, Teri welcomes David Box of Macadamian, a leading healthcare software and application development company specializing in user experience design and end-to-end software product development. David Box is an internationally experienced business development and account management executive. He has demonstrated success in the ideation, development, and commercialization of healthcare technologies, industrial solutions, and creative agency services in Europe, Asia, and the US. Macadamian does innovative work, using the latest technologies to help businesses get their messages out, solve problems, and ultimately improve the efficiency and lives of healthcare providers. They also focus on improving healthcare quality for patients.

Key points from David!
Helping businesses develop software solutions, including voice-based ones, to their biggest challenges. Architecting the future of healthcare at Macadamian. Working on a special project with Teri for the Voice of Healthcare Summit.

All About Macadamian
They work with clients to identify business challenges and solve them creatively with software. The business has three core areas: software engineering, UX research and design, and data science. They work with groups across multiple healthcare verticals to help them improve their business with software.

Macadamian's Interest in Voice
Very early in the days of Alexa, Macadamian recognized the importance of voice as an input methodology and decided to invest in learning the technology, developing projects as proofs of concept for different use cases. Their first skill, a voice-to-text skill called Scryb, was published in the Alexa library when there were only 12 skills available.

Macadamian's Approach to Developing a Skill
They work with clients to understand the underlying business challenge and then devise a technological solution for it. If voice is a logical input methodology for the problem, they recommend it and build a voice-based solution.

Early Successes
They developed the Siemens Healthineers ultrasound skill for Siemens, which approached Macadamian needing to engage both its sales force and potential customers with information about its ultrasound devices; the challenge was to break through the noise. They also developed the My Diabetes Coach skill for adolescents with Type 2 diabetes: a multi-modal solution that lets patients speak their health results, such as glucose levels, weight, diet, and sleep habits, and get information about their disease state and how to take care of themselves.

The Future of Voice in Healthcare
There is a huge problem with physician burnout, which has led to an increase in physician suicide. David believes voice-based solutions can reduce the amount of paperwork physicians must do and free them up to do more of what they love: treating patients. Plenty of workflows in physician offices can benefit from voice assistants, from recording data into EMRs to the business challenges of running a practice.

The Biggest Obstacles to Implementing Voice Technologies in Healthcare
Regulatory restrictions, some security factors, and certain functionality issues are among the obstacles. Voice-only is not there yet, but from a multi-modal perspective, voice can augment certain input requirements, making it easier for people to access and provide data when and where they are.

The Voice of Healthcare Summit
David, Teri, and Harry Pappas are organizing a 3-hour workshop at the end of the summit: a deeper dive into voice assistants in healthcare, looking at various use cases. They will review the current state of the industry, including statistics on adoption rates, and look at where the technology has been and where they see it going.

Links and Resources in this Episode
Voice of Healthcare Summit
Macadamian Website
David on LinkedIn
David's Email - dbox@macadamian.com
In this episode, Teri welcomes Bradley Metrock, the CEO of Score Publishing and the creator of VoiceFirst.FM, the leading podcast network covering everything about voice technology. Bradley is the guy behind some of the best voice technology events across many different sectors. His flagship event is Project Voice, and he also puts on voice events covering many verticals and industry sectors, including healthcare.

Key points from Bradley!
The Voice of Healthcare Summit 2018 and 2019, some of the things he has observed over the last year in the evolution of voice technology, particularly in healthcare, and Amazon Alexa's HIPAA compliance and how it will affect Amazon in both the short and long term.

Developing an Event Around Healthcare and Voice
The genesis of the Voice of Healthcare Summit was the Voice of Healthcare Podcast, which he co-created with Dr. Matt Cybulsky. They needed an in-person gathering of everyone interested in voice technology and AI and its intersection with modern healthcare, and the perfect venue was Harvard Medical School. They piloted the event in 2018 and it went very well, with highly qualified attendees and fantastic speakers.

The Current Happenings in Healthcare and Voice
We have reached something of a saturation point, especially among early adopters of voice technology, and there is much discussion about which technology will carry the industry forward next. Plenty of people have yet to interact with computers by voice, and more are still discovering what voice can do for them that other interfaces cannot. It feels like the industry is plateauing, but several companies are making major moves: Samsung came out with a number of developer tools and Bixby 2.0, Apple hired people to get more serious about improving Siri, and Microsoft's decision to partner with Amazon Alexa had a number of positive implications for its business. Some companies are diving into voice strictly within specific verticals. Bradley predicts there will be some sort of killer voice app in healthcare that opens people's eyes to the potential of what voice can do for everyone. Looking at the conference program for the Voice of Healthcare Summit 2019, one company, Canary Speech, is well funded and has top-notch computer scientists involved; they use voice to impact healthcare by analyzing the sound waves in someone's voice and measuring them against the person's baseline and other benchmarks. Other companies presenting their voice apps include CARROT Pass, the Mayo Clinic, Orbita, LifePod, Triad Health A.I. (which takes a very different approach, using its voice platform to help people diagnosed with Parkinson's), Transform9, and many more.

Innovation in Voice for Healthcare Versus Other Verticals
The amount of innovation in other verticals is huge, but the attention each sector pays to it differs. In hospitality, for example, based on the Voice of Hospitality Summit, there is a lot of innovation but very little attention from the hospitality market, and adoption of those technologies is very slow. From the Voice of the Car Summit, there is plenty of innovation in voice technology for cars, but market attention is slightly less than in healthcare or hospitality.

Amazon Alexa's HIPAA Compliance
It brings things to a whole new level, and the number of devices being sold can't be ignored. The fact that Amazon would go to the trouble of making Alexa HIPAA compliant signals that voice and AI can deeply and positively impact healthcare.

Links and Resources in this Episode
Voice of Healthcare Summit 2019
Voice Summit 2019
In this episode, Teri welcomes Joyce Even, the vice chair of content management and delivery in global business solutions for the Mayo Clinic. The Mayo Clinic is a pioneer in voice technology in healthcare and is paving the way for many organizations to do great things. Joyce has more than 30 years of in-depth understanding of healthcare, finance, and data and systems management. She began her career at Mayo Clinic in a financial analyst role, progressing into leadership positions and responsibilities. She is involved in extending Mayo Clinic's knowledge and capabilities by delivering integrated, market-leading health and healthy living solutions.

Key points from Joyce!
How Mayo Clinic got started in voice, why they think it is such an important technology, some of the skills they have produced, and some of their early successes.

The Voice Journey for Mayo
They went into voice back in 2017. Mayo Clinic has been delivering health information for many years through multiple channels, starting with print and progressively moving to other channels like the internet and mobile devices. As they continued to look at where the new channels would be, voice came up, and they started looking into how to deliver health information through voice technology.

Constructing Content for Voice
While investigating how to construct spoken content, they recognized they would have to leverage different tool sets to produce content in a way that let them hear it spoken back to them. Their editors began to experiment with some of those tools, starting with the Mayo Clinic First Aid skill for Alexa, which let them develop content that could be spoken. That is how their voice journey started. The skill receives a lot of activity, and they also created a Google Action around the same information. They keep learning how to make the information better, understanding more and more how people ask questions and realizing the importance of intent when developing content for voice. They also started working with Amazon to develop first-party content, which lets people ask Alexa questions without downloading any skill. They have so far worked with Amazon to deliver over 8,000 health concepts, so a user can ask Alexa anything health-related and Alexa will respond without the user having to say "Download that skill" or "Open Mayo Clinic First Aid."

The Impact of Alexa's HIPAA Compliance
She thinks it opens doors for health providers to engage their patients differently than before, especially in patient education, from when patients are first diagnosed with a disease to educating them on how to take care of themselves after medical procedures. They are looking into how voice technology can play a role and be more effective than traditional patient education.

Vocal Biomarkers
It's an exciting area for the Mayo Clinic. They are studying vocal biomarkers so that a person could simply be speaking and a smart speaker could pick up a health condition. They are looking at coronary artery disease and whether they can detect it from a person's voice; the technology could also be leveraged for other diseases, such as Parkinson's. Any finding would then be confirmed through other tests, to avoid relying solely on voice. They don't see voice as the only channel for diagnostics or health information delivery.

The Future Directions of Mayo's Voice-Related Work
They will continue to investigate biomarkers for multiple conditions beyond coronary artery disease. They will also pursue post-procedural patient education and have been developing interactive care plans for after a person is dismissed from the hospital; they will be piloting some activities in that arena.

Links and Resources in this Episode
Mayo Clinic Website
The Voice Summit
The Voice of Healthcare Summit
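The idea of measuring a voice sample against a personal baseline, central to the vocal-biomarker research discussed above, can be illustrated with a deliberately simplified sketch. Real biomarker models use far richer acoustic features and trained classifiers; the single feature (zero-crossing rate), the baseline comparison, and the tolerance here are purely illustrative.

```python
def zero_crossing_rate(samples):
    """A very crude acoustic feature: the fraction of adjacent sample
    pairs where the waveform crosses zero."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return crossings / (len(samples) - 1)

def flag_for_follow_up(sample, baseline_rate, tolerance=0.2):
    """Flag a recording whose feature drifts far from the person's
    own baseline. This is a screening signal only; as the episode
    stresses, any finding would be confirmed with other tests."""
    return abs(zero_crossing_rate(sample) - baseline_rate) > tolerance

# Baseline recording alternates sign every sample: rate = 1.0.
baseline = zero_crossing_rate([1, -1, 1, -1, 1, -1, 1, -1, 1])
print(flag_for_follow_up([1, 1, 1, -1, 1, 1, 1, 1, 1], baseline))  # True
```

Comparing each person to their own baseline, rather than to a population average, is also how the episode with Bradley Metrock describes Canary Speech's approach.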
In this episode, Teri welcomes Mauricio Meza, the Co-Founder of Komodo OpenLab.Mauricio is an experienced professional with MBA and biomedical engineering degrees. He worked as an assistive technology consultant for over a decade recommending assistive technology to individuals with disabilities to complete their home, school and work activities.Komodo OpenLab’s creates technologies that enable simple and easy access to smartphones and tablets for millions of people with mobility impairments, greatly enhancing their work, school and family lives with the rich and dynamic communication and productivity tools these mobile devices provide.Their flagship product, Tecla, is an assistive device that gives people with upper-body mobility impairments the ability to fully access smart devices and technology at a fraction of the cost of traditional assistive devices. Key points from Dr. Desai!What Komodo OpenLab, through Tecla, is doing for people who have mobility issues and how they are leveraging voice to make their lives much easier.Starting Komodo OpenLabHe formed the company with his classmate from Biomedical Engineering School.While working in the assistive technology field, he identified a big need for more accessible mobile technology. They started by making android devices accessible to people with critical disabilities.As technology progressed, they started getting into smart home technologies. They created Tecla to make mainstream technology accessible for those that have limited mobility.Komodo is now 8 years old and has three generations of products. Their latest generation of hardware is Tecla E, and it comes with companion apps for Android and iOS, which provides access to smart home devices. 
It is an integrated interface where people with mobility issues can bring together all their devices and control them by blinking, blowing, or through their wheelchair controllers.

Use Cases
Tecla connects wirelessly to a person's phone or tablet and then works with the device's accessibility service to let someone navigate the interface in a simpler way, again by blinking, blowing, or through their wheelchair controllers. Some users have their Tecla mounted on their wheelchair and connected to its electronics; others have it mounted on a bedside table so they can use it from bed.

Tecla's Integration with Voice
Voice technology has created an opportunity to make things more accessible. With the increased adoption of smart home devices, Tecla saw an opportunity to use the technology to make things more accessible for people with limited mobility. A lot of their users can still use their voices, for example, those with spinal cord injuries. They can use voice assistants to make phone calls, send text messages, control their lights, change their thermostat temperature, and so on. Tecla built the Alexa Voice Service into their app so people can use it even when they don't have a voice device with them. For users who have trouble with their voices, such as those with speech impairments, Tecla created buttons that users can assign functions to, so they don't have to speak. Users can use the buttons to give commands to their voice assistants.

Button Functions
When a user adds their Alexa account to the Tecla app and creates a button, a sound file is generated with a specific message for Alexa. Every time the user activates that button, Tecla sends that sound file to the Alexa Voice Service so that Alexa can respond. The user doesn't hear the sound being played to Alexa because it all happens in the back end.

Feedback From Users
Users like Tecla's simplicity compared to other systems.
Setup with Alexa is much easier: once a user's Alexa devices are on their Wi-Fi network and automatically added to their account, they just add their Alexa credentials in the Tecla app and type the message they want sent to Alexa. Recently, a rehab center transitioned to Tecla easily, using the same Alexa devices they already had.

Future Plans for Tecla
They are trying to gain more visibility so they can reach more users, because their solution is very niche. They work with rehab centers and recently partnered with Bell, which is subsidizing the Tecla hardware for users who would otherwise be unable to use a smartphone or tablet. They are seeking more partnerships that can help them reach more users. They are also looking into providing more services. Right now they have a remote monitoring feature which, for example, enables a family to monitor a family member's wheelchair and make sure the environment around them, such as the temperature, is safe.

Links and Resources in this Episode
Tecla Website
Mauricio's Email - Mauricio@kmo.do
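The "Button Functions" flow described in this episode, a pre-generated sound file silently replayed to the Alexa Voice Service whenever a button is activated, can be sketched roughly as follows. This is an illustrative Python sketch only, not Tecla's actual implementation; the names (`ButtonBoard`, `assign`, `press`) and the `send_to_voice_service` callback are all hypothetical stand-ins.

```python
# Illustrative sketch of a button-to-voice-assistant bridge: each button
# stores a pre-generated sound file containing a spoken command, and
# pressing the button replays that file to the assistant in the back end.
# All names here are hypothetical; this is not Tecla's real code.

class ButtonBoard:
    def __init__(self, send_to_voice_service):
        # send_to_voice_service stands in for whatever API call would
        # stream audio to the Alexa Voice Service behind the scenes.
        self._send = send_to_voice_service
        self._buttons = {}  # button id -> (clip path, command text)

    def assign(self, button_id, command_text):
        # When the user creates a button, a sound file is generated once
        # from the typed command text (text-to-speech happens up front).
        clip = f"{button_id}.wav"  # pretend TTS wrote this file
        self._buttons[button_id] = (clip, command_text)
        return clip

    def press(self, button_id):
        # Activating the button silently replays the stored clip to the
        # assistant; the user never hears the audio being sent.
        clip, command_text = self._buttons[button_id]
        return self._send(clip, command_text)


# Example: a "lights" button that asks Alexa to turn on the lights.
sent = []
board = ButtonBoard(lambda clip, text: sent.append((clip, text)) or "ok")
board.assign("lights", "turn on the living room lights")
result = board.press("lights")
```

The key design point from the episode is that speech synthesis happens once, at button creation, so activating a button requires no voice from the user at all.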
In this episode, Teri was interviewed on The Happy Doc Podcast by its host, Dr. Taylor Brana, about voice technology and what Teri is doing in the space.

The Happy Doc Podcast amplifies the voices of physicians who are doing really interesting, creative, and fulfilling things outside of their regular medical practice. Dr. Brana is a resident physician in psychiatry and has a deep interest in aiding the processes of medicine, healthcare, and education. Starting The Happy Doc Podcast has enabled him to learn what makes physicians fulfilled in their careers and lives.

Key points from Teri!
Voice technology, and his thoughts about where things are going in the voice technology space.

Working with Athletes
Teri works at the University of British Columbia in the student health clinic, looking after students and varsity athletes. He also works with the minor league affiliate of the Toronto Blue Jays as one of their primary care physicians.

How Voice will Help Deliver the Right Healthcare to People
Canada has a great publicly funded healthcare system, but a major issue is that people cannot access it easily.
Physicians are overworked and burnt out. A good, efficient healthcare system has three components: the right care, at the right time, in the right place. When we can have interactions with voice assistants in our homes, they can start to actually provide care in the home and at the right time, acting as triage nurses and guiding us to where we can get the right healthcare. That will be personalized, decentralized medicine. With the continuing adoption of smart devices with voice assistants throughout communities, more people will have their own personal healthcare provider and triage nurse for the healthcare system in their homes. That would enable efficient allocation of healthcare resources and ease the pressure on the healthcare system.

People's Concerns About Voice Technology
The biggest fear people have relates to privacy. People are not comfortable with the idea of voice assistants spying on them.

The Potential Impact of Voice Technology on Healthcare
Trying to get medical attention is always a big ordeal, but in the future, when people talk to their smart devices, the devices will use changes in voice patterns to determine whether someone is unwell. At that point, a device will become proactive and start asking its user questions about what could be going on.
It will then take a medical history and use its AI algorithms (which will hopefully be evidence-based) to come up with a probability of what illness the user may be suffering from. If the device deems it appropriate for the user to see a doctor, it will make a doctor's appointment; if not, it will make the necessary arrangements for treatment at home. The potential for highly efficient healthcare through voice technology is high.

Top Voice Technologies
Teri is fascinated by the idea of vocal biomarkers. Vocal biomarkers are like metadata for voice, similar to the way photographs carry metadata such as the camera model used, the shutter speed, and the location where the photograph was taken. Vocal biomarkers will enable voice assistants to tell when there is a difference in someone's voice and use that to assess the person's health. There is a company that did a study with the Mayo Clinic where they had people undergoing coronary angiograms read some statements. The findings showed a statistically significant correlation between certain changes in the way the participants said something and the risk of coronary artery disease. That suggested that a person's voice can be used to predict the risk of coronary disease, and possibly many other diseases. A lot of this technology is still in the research and development phase.

Education and Voice
There are different areas of education where people are developing skills to educate people about certain conditions. Teri loves what Dr. Brana and Dr. Desai are doing with MedFlashGo, the first voice-interactive medical question bank for medical students. It is a question bank/flashcard set geared toward medical students studying for boards using just their voice. Physicians and medical students are in a great position right now to come up with good voice applications that will serve patients.
They will need to partner with developers who are skilled in voice application development.

How to Learn About Voice
It depends on someone's learning style, but Teri recommends voice-oriented podcasts and flash briefings.

Links and Resources in this Episode
The Happy Doc Podcast
MedFlashGo
Voice First Health Podcast
Alexa in Canada Podcast
Voice In Canada Flash Briefing
Voice Summit
Voice of Healthcare Summit
Comprehensive Course on How to Start a Flash Briefing
Dr. Teri Fisher on Twitter
In this episode, Teri welcomes Dr. Neel Desai, a physician, true innovator, and co-founder of MedFlashGo.

Dr. Desai has dedicated many years to helping medical students understand that there is so much more to medicine than just practicing medicine. He comes onto the podcast to talk about his brand new Amazon Alexa skill, MedFlashGo, the first voice-interactive medical question bank for medical students. It is a question bank/flashcard set geared toward medical students studying for boards using just their voice.

Key points from Dr. Desai!
A brand new way of helping medical students learn the information they are required to know for their licensing exams. Teaching medical students through voice, and how that trains the students to use voice later in their careers.

Interest in Voice
He has been fascinated with digital technology and gadgets for the last 20 years. He was part of a podcast called The Happy Doc, which was focused on preventing burnout in doctors and medical students. While trying to figure out the pain points that frustrate physicians and medical students, they realized that poor technology was a huge factor. To expand their horizons and improve their podcast, they started listening to voice influencers like Gary Vaynerchuk and found that they could use Alexa skills to achieve their main goal. They eventually fell into the voice community, and it matched their purpose for starting The Happy Doc podcast. They realized that a lot of medical students are using online videos and live-streamed lectures to learn, which demonstrated that digital, individualized learning was the way to go. They immersed themselves in podcasts and conferences to learn more about how they could use voice.

MedFlashGo
He co-founded it with his partner, Dr. Taylor Brana. It's a medical question bank for students in medical school who are studying for different exams. They just recently released the skill. The skill is for studying for board exams.
In the US there are three steps to becoming a licensed physician, and each step has important exams that one has to pass to move on to the next level. The skill consists of hands-free medical flashcards on the go. The cards are test questions, either multiple choice or fill-in-the-blank, about the medical subjects on the user's exams. They currently have around 1,000 questions, with a goal to cover Step 1, Step 2, Step 3, and shelf concepts.

The Process and Challenges of Developing the Skill
They partnered with developers, and the first attempt was challenging. They found a developer company in Silicon Valley to work with; the company was just starting out in the voice business when they began working together. The iterations were not great at first, but they kept at it, and eventually the skill started working a lot better than expected. Alexa doesn't have a medical dictionary, but she actually pronounces diseases, medications, and enzymes very well. They beta tested the skill with medical students who wanted to be part of something great because it helped them study. They already had a community of medical students through The Happy Doc podcast, so they reached out to that community to try the skill out. Students can use the skill while cooking, lounging, resting, cleaning, driving, or even exercising. Once the skill is enabled, it can be used on a user's phone through the Alexa app or on an Alexa-enabled device.

Shaping the Way Medical Students will Practice Medicine
One of the things they have been talking about with The Happy Doc and MedFlashGo is learning, teaching, creating, and practicing medicine in the century we live in. Their focus is on teaching the next generation of medical students by creating new, healthier learning systems and training them on voice early on. That will ensure that adopting voice speakers in healthcare facilities will not be a big barrier for them.
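A voice question bank like the one described, with multiple-choice and fill-in-the-blank cards answered by speech, boils down to a simple data structure plus an answer checker. The sketch below is a hypothetical minimal model in Python, not MedFlashGo's actual skill code; in a real Alexa skill, logic like `check` would live inside an intent handler built with the Alexa Skills Kit, and the questions here are made up for illustration.

```python
# Hypothetical minimal model of a voice flashcard bank: each card has a
# prompt, an expected answer, and a type (multiple choice or fill-in).
# Answers arrive as transcribed speech, so matching is case-insensitive.

QUESTIONS = [
    {"type": "multiple_choice",
     "prompt": "Which enzyme is deficient in phenylketonuria? "
               "A: phenylalanine hydroxylase. B: tyrosinase.",
     "answer": "a"},
    {"type": "fill_in_the_blank",
     "prompt": "The first-line treatment for anaphylaxis is blank.",
     "answer": "epinephrine"},
]

def check(card, spoken_answer):
    """Return True if the transcribed spoken answer matches the card."""
    return spoken_answer.strip().lower() == card["answer"]

def next_card(index):
    """Cycle through the bank so a study session never runs out of cards."""
    return QUESTIONS[index % len(QUESTIONS)]

card = next_card(1)
print(card["prompt"])
print(check(card, "Epinephrine"))  # case-insensitive match -> True
```

Because speech recognition output is text, the skill never needs a touch screen: the same loop of prompt, spoken answer, and check works while a student is cooking, driving, or exercising.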
Links and Resources in this Episode
MedFlashGo
MedFlashGo Website
MedFlashGo on Facebook
MedFlashGo on Twitter
MedFlashGo on Instagram
In this episode, Teri welcomes Fahad Aziz, the Co-Founder and CTO at Caremerge and a Forbes contributor.

He has a lot of experience in technology, artificial intelligence, and voice, and as a contributor to Forbes he covers technologies in healthcare. Caremerge leverages artificial intelligence and voice to offer the most innovative wellness and engagement platform for senior living communities. They have been tackling some of the biggest pain points that users face when using voice technology in their homes and living facilities. Caremerge was named one of the 2018 Inc. 5000 fastest growing companies in the US and has won a number of awards.

Key points from Fahad Aziz!
Caremerge's incredible work in the senior living space and how they support seniors wherever they may be living. Caremerge's formal partnership with Amazon to address some of the pain points that seniors have when using voice technology.

Fahad's Background
He has been in technology for over 15 years. While working with different organizations he felt inspired to start something new, so he quit his job and moved to Chicago. He initially had the idea of a Facebook for healthcare, but realized two months in that it wasn't a good idea, so he pivoted into something else. He got an opportunity to spend time in a seniors' community in the suburbs of Chicago, where he witnessed a lot of challenges, and decided to do something to solve them. That was the birth of Caremerge, and since then they have been serving 400 communities with over 100,000 residents. In their interactions with the communities, they saw the need for voice technology to be implemented there. He did research that he published in Forbes.
Part of the research was on what voice technology could do for seniors, and the results showed that the technology could really connect hundreds of thousands of people to technology. They were the first to build a product like theirs, and to build it the right way.

The Caremerge Product
They started with basic features, allowing residents to ask simple questions like, "What's on the menu? What are the activities today? Has the mail arrived?" Once they deployed that, they saw a need for more. What was needed wasn't more features but more capability to answer questions correctly. They spent a lot of time improving their AI engines and creating their own models for understanding questions and answering them correctly. They then launched their Alexa technology and started seeing an increase in adoption. They got involved with Amazon to help build an enterprise fleet management system that can deploy hundreds of thousands of Alexa devices from one portal and ensure their effective management.

Name-Free Invocation of Skills
The biggest challenge with Alexa and voice technology was that seniors had difficulty remembering how to invoke the skills. They worked with Amazon to improve the system, deployed through the fleet management system, so that seniors didn't need to say, "Alexa, ask community..."; they could just say, "Alexa, what's on the menu?" Removing that invocation command got rid of a barrier to adoption.

Feedback from Users
They have received mixed feedback. There has been a lot of excitement, but they are now going into a bit of a disappointment phase. They are waiting for that aha moment when people realize that the product is exactly what they want. They are in the phase where people have bought the devices and enjoyed the first few weeks, but their uses for the devices have plateaued a bit. Other feedback is that people love the whole voice experience.
They like the fact that they don't have to pick up a device to get whatever information they need, and they use Alexa in the most mind-blowing ways.

New Features
They just rolled out a reminders feature, which enables people to set up reminders so that Alexa will wake up and remind a user of whatever they needed to be reminded of.

Google Assistant
They are planning to design their platform for Google Assistant too. The difference with Google Assistant is that it doesn't have a fleet management system, so enterprise deployments would be more challenging than they are with Alexa.

Alexa's HIPAA Compliance
They are very excited about this development. They have been part of the program and watching it very closely as an Amazon partner. For the first time, a care provider doesn't have to take on all the liability any longer. Before Alexa's HIPAA compliance, care providers were liable for any Alexa device in their buildings. Now, when a vendor deploys a device, the vendor signs a BAA (business associate agreement) and Amazon signs the same, which takes a good deal of liability away from the care provider or community. Caremerge can now comfortably deploy their devices thanks to the compliance.

The Future for Caremerge
They are doing more research on how to further improve their response model (how to better answer questions). They are also continuously working on their reminders and notifications features, and working with Amazon to address and improve data privacy and safety issues.

Links and Resources in this Episode
Caremerge Website
In this episode, Teri reached out to a number of key influencers in the voice-first health space so they could share what Alexa's recently acquired HIPAA compliance means to them.

Alexa healthcare skills created by industry-leading healthcare providers, payers, pharmacy benefit managers, and digital health coaching companies are now operating in a HIPAA-eligible environment. In the future, Amazon will enable developers to take full advantage of this capability.

Key points!
The recent announcement by Amazon that Alexa is now HIPAA compliant, and how this will impact the voice-first health space and health in general.

Current HIPAA-Compliant Alexa Healthcare Skills
Express Scripts - focuses on home delivery of prescriptions.
Cigna Health Today - allows eligible employees to manage their personalized wellness incentives.
My Children's Enhanced Recovery After Surgery (ERAS) - by Boston Children's Hospital; allows parents and caregivers of children to give their care teams updates on recovery after surgery.
Swedish Health Connect - allows customers to find an urgent care center near them and schedule appointments.
Atrium Health - allows North and South Carolina residents to find urgent care locations near them, schedule same-day appointments, and find out about opening hours and current waiting times.
Livongo Blood Sugar Lookup - allows members to look at some of their diabetes care plan parameters, such as querying their last blood sugar reading and blood sugar measurement trends, and to receive different insights.

Nathan Treloar of Orbita
He is the President and Co-Founder of Orbita. Alexa becoming HIPAA compliant is big news for his company and the healthcare industry. Orbita provides a secure platform for creating voice-enabled virtual assistants for the healthcare industry.
They offer HIPAA-compliant virtual assistants for web and mobile chat and custom devices, but had not been able to do the same for Alexa skills. Their clients and partners always ask about HIPAA for Alexa; they had been waiting for the Alexa HIPAA compliance announcement for three years. Now they can go beyond the general health information services and symptom checkers that are the standard of healthcare applications in the skill store, and build much more personalized health information applications that provide highly contextual guidance and support.

Bianca Phillips
She is a lawyer who looks at the implications of voice technology for digital health lawmaking, and the founder of the Electronic Health Consulting Group. She is excited to hear that Alexa has achieved HIPAA compliance, which means the new healthcare skills are going to benefit patients. She believes the next big step is for digital health companies to consider the present-day and future uses of their data and data rights. Data rights are tied to a person's sense of security, so determining how those rights should be distributed among the different parties is critical to developing trust. It's going to involve companies looking at past lawmaking and private decision-making in the area of data ownership, to understand the historical challenges of data ownership and develop a considered stance moving forward. That would signal real digital health leadership.

Dave Kemp
He is one of the foremost thought leaders in hearables and how voice technology is affecting, influencing, and being incorporated into hearables. He is part of Oak Tree Products, a company that provides medical supplies and devices to the hearing technology industry. He feels that the Alexa HIPAA compliance news is one of the most important developments in the whole voice technology space, not just in the healthcare setting.
The compliance needed to happen for voice technology to be implemented into the healthcare setting in a meaningful way. There will be numerous possibilities, like linking your wearable data to Alexa so that you can ask Alexa about the data being recorded. There are going to be some very interesting use cases. Patients, healthcare providers, and all of the different entities that work between patients and healthcare providers are going to benefit.

Heidi Culbertson
She is the founder and CEO of Ask Marvee, a company devoted to providing voice applications/skills that help the aging population maintain their independence at home. She feels voice is still in its infancy and HIPAA protection is very important. She sees Alexa's HIPAA compliance as a starting point for addressing privacy and personal health information. It's also a recognition and validation of the huge impact voice can have across the many healthcare touch points in hospitals and homes. It will extend the edges of healthcare, its reach, and its interaction models. It's a huge opportunity for innovation and partnership among health organizations and third-party development and design shops.

Dr. Neel Desai
He is the co-founder of MedFlashGo, the first voice-interactive medical question bank for medical students on Alexa. He feels Alexa's HIPAA compliance will be a great thing for patients and healthcare teams. It will reduce a lot of friction and allow for what is most important: communication between healthcare teams and patients. That will make it easier to reduce errors, save time, and help healthcare teams make sure patients are compliant with their medications. It will also help patients communicate when they are having problems with medications, and reduce the things that get lost in constant telephone calls back and forth.

Stuart Patterson
He is the CEO of LifePod Solutions.
LifePod is all about creating proactive voice (meaning the voice technology can initiate the interactions) to support the elderly living in various homes or trying to maintain independence in their own homes, using these devices as true assistants in their lives. He doesn't think Alexa's recent HIPAA compliance announcement is a breakthrough, because Nuance has offered HIPAA-compliant voice technology for years, particularly for in-hospital voice-first use cases like dictation, transcription, and data entry into EHRs. Those are the most obvious and cost-effective use cases for voice in the healthcare market, as long as it's reactive voice only. He feels having a HIPAA version of Alexa definitely benefits the voice-first market overall, because Amazon is a leader in that market. Alexa use cases in healthcare, however, are like all of the other virtual assistant services so far: they follow a self-service model and exclusively use a reactive voice mode, where the user speaks and the service tries to respond.

Timon LeDain
He is the Director of Emerging Technologies at Macadamian Technologies, which helps providers and healthcare organizations create some really innovative healthcare-related skills, for example, voice-enabled care plans like the Diabetes Care Plan. They were thrilled to learn that Amazon had released their HIPAA-compliant Alexa Skills Kit.
The compliance will unlock opportunities with organizations that demand HIPAA-compliant end-to-end solutions and had been waiting for this before exploring voice opportunities in earnest. They developed work-arounds in the past to achieve compliance with the healthcare-related Alexa skills they released previously, but they still couldn't get around Amazon's terms of service, which prevented any personal health information from being captured via an Alexa skill. Companies like Macadamian were working in the grey with their early adopters while they waited for Amazon to address the compliance. They now anticipate significant new business coming from pharma and medtech companies for their omni-channel digital health platform as a service. They look forward to seeing voice play a larger role in the digital transformation of the healthcare industry and in the new digital therapeutic solutions under development today.

Dave Isbitski
He is the Chief Evangelist for Amazon Alexa, a keynote speaker, podcaster, and voice designer. Alexa's HIPAA compliance is in private beta. It will enable patients and healthcare providers to get and track information easily. He is excited about the future, what the compliance means for the healthcare industry, and the skills that people will build.

Links and Resources in this Episode
Nathan Treloar Interview
Bianca Phillips Interview
Dave Kemp Interview
Heidi Culbertson Interview
Stuart Patterson Interview
Timon LeDain Interview
Dave Isbitski
In this episode, Teri welcomes Dr. David Metcalf, the Director of the Mixed Emerging Technology Integration Lab (METIL) at the University of Central Florida.

David Metcalf has more than 20 years' experience in the design and research of web-based and mobile technologies converging to enable learning and healthcare. Dr. Metcalf is Director of METIL at UCF's Institute for Simulation and Training. The team has built mHealth solutions, simulations, games, eLearning, mobile, and enterprise IT systems for Google, J&J, the VA, the U.S. military, and UCF's College of Medicine, among others. METIL develops and researches emerging technologies in healthcare, which includes everything from voice to blockchain. Dr. Metcalf comes on the show to talk about a whole range of METIL projects and his recent book on blockchain in healthcare.

Key points from Dr. Metcalf!
METIL's diverse range of emerging healthcare technology projects. How blockchain could be the back end to the front end of voice when it comes to making some real changes and providing the best possible healthcare for patients and society at large.

Creating METIL
Dr. Metcalf always wanted such a lab and previously had one at NASA, modeled after labs he admired like MIT's Media Lab. He spun off the lab to go into the corporate space, and later came back to academia to help young people do what he did early in life: create spin-offs of their own and understand how to take emerging technologies from the public sector to the private sector.

Compelling Nature of Voice
Voice is going to be a more natural interface than some of the things people have had to do in the past with keyboards, mice, smartphones, wearables, etc.

Current Projects
Their voice technology experience goes back to the interactive voice response days, using technologies such as Microsoft's Salesforce to build out their learning capabilities.
People would say whatever products they wanted over the phone, and the system would give them back the information they wanted in natural language. They have expanded on that as new technologies like Alexa, Siri, and Cortana have come on board, which enables them to build unique toolsets and apply them in new, unique ways. They have applied them in some of the most advanced intelligent homes for health in Florida, nationally, and worldwide. An example is Lake Nona Medical City, which has the WHIT (Wellness Home built on Innovation and Technology). METIL did all of the Alexa integration work for that home: someone can talk to the home and ask any question about the health features of the unique home, as well as control other features of the home by voice. METIL has worked on similar technology projects with a number of interesting companies, like Cisco, GE, Florida Blue/GuideWell, Blue Cross Blue Shield, Johnson & Johnson, Philips, and Florida Hospital. They have also worked on projects for communities like Connected City, north of Tampa, and have explored other intelligent homes like the iHome. They have also been involved in clinical setting projects. They worked with some really smart doctors in the cancer ward of the Orlando Health UF Health Cancer Center to put in place a social companion robot (Betty) that could converse with and answer questions for some of the people coming into the waiting room and exam rooms at the ward.
The main use case was capturing some of the social history with a cute, engaging little robot (both a physical one, 3D printed by one of their sister laboratories, and a virtual one, a hologram of the robot, that could be interacted with by voice).

Strategies in the Way the Robot (Betty) Asks Questions
To ensure that people are more forthcoming with Betty, a team of psychologists researched what works to engage people and put them at ease, and the findings were used in developing her. Betty has a very pleasant female voice that expresses a lot of emotion and empathy in its speech patterns, and she tells a nice joke to disarm people a little. What we choose to do about the patterns and use of voice and speech is really important to get right: a mechanized, computerized voice that feels cold and sterile may not have the same effect as something that feels a little warmer and more engaging.

Current Status with Betty
The robot is being expanded to hospitals and other health-oriented centers, and is currently being expanded to the Orlando Health Foundry.

METIL's Key Projects
They are very interested in the ability to tie together voice and games, which is a great way to engage people socially. They are looking at ways that can be used to engage people in their health using the same techniques. They are in the process of building games that use voice to engage people in social play that helps with their health. They are doing this at the Center for Health and Wellbeing in Winter Park, Florida.
They are looking at ways to enable both voice- and motion-based games in that environment.

Blockchain and How it can Affect Healthcare and Voice in Healthcare
Two exciting areas in healthcare are the front end, which is voice technology and user experience, and the back end, where blockchain technology can be used to verify records and trust those records between multiple organizations. The problem with some blockchain technology is that it can be hard to use: there are multiple steps, and people have to do certain things to use the technology effectively. If the best front-end technology in voice can be paired with the back-end power of blockchain, that is going to create some new use cases.

Dr. Metcalf's Book
HIMSS asked them to write a book called Blockchain in Healthcare. They wanted to make it very realistic, with real-world case studies. Over 50 authors contributed to the book through case studies, thought leadership, and chapters, and they curated a book of some of the best thinking on blockchain across a number of different areas. They have written other books for HIMSS.

Links and Resources in this Episode
METIL's Website
Dr. Metcalf's Books and Publications
Blockchain in Healthcare Book
In this episode, Teri welcomes Ryan Plasch, the VP of Growth and Strategy at Saykara, a company that is tackling the holy grail of medical documentation.

Ryan has been in healthcare for about 22 years, with a specialty in sales and business development executive leadership. He is passionate about the transformational use of modern technology to improve healthcare delivery.

Saykara is using their own speech recognition model and AI to tackle the problem of clinicians having to chart their notes on a computer while interacting with a patient, which is not an ideal situation. Their AI-powered healthcare virtual assistant that simplifies the documentation process is also called Saykara. It listens to the interaction between a physician/clinician and their patient, records the documentation, and transcribes it into the EHR.

Key Points from Ryan!
- Addressing one of the biggest pain points that physicians and clinicians have.

Saykara
- It was founded in 2015 by Harjinder Sandhu, who had been closely following trends in the voice industry around speech, specifically at Amazon, Apple and Google.
- It is a virtual assistant designed primarily to create a comprehensive clinical note during a patient encounter. The physician/clinician no longer has to look at a screen, click, or use a keyboard.
- It is currently deployed on smartphones.

How Saykara Works
- Saykara is integrated with all the major EHRs. It can also operate in a disconnected mode where everything is done manually, but for the most part the physician/clinician walks into the encounter with a patient as they normally would, and on their phone they have a worklist for the day as extracted by Saykara from the EHR.
- The physician/clinician simply selects the patient, sets the phone down close to them, then turns on Kara (Saykara's AI) and asks her to listen, or presses the on button to start recording.
- When the physician is done with the examination, they just press off.

Natural Language Understanding (NLU) and Natural Language Processing (NLP)
- Saykara listens primarily to the physician, because each day the physician is constantly placing orders, referrals and other recommendations for the treatment plan.
- They do record the patient's voice as well. For all the hype around AI, most AI is narrow, and they are very focused on areas where they think they can make a major contribution.
- They have their own speech recognition engine combined with their NLP platform and their own unique AI algorithms.
- As Kara listens, the phone app buffers some of the audio and then synchronizes it to the cloud. They partner with industry leaders like Amazon and Microsoft, which are both secure, HIPAA compliant and encrypted. All a physician needs is an internet signal (cell or WiFi).
- They process information through their augmented AI: the machine learning does the first pass, and then a team QAs the data for intent, formatting, structure and terminology to ensure 100% accuracy.
- The QA process also serves as a feedback loop to help the machine learn, advance and understand at a much faster rate.
- They convert the audio to text, map it through NLP, and then build patterns/concepts based on the conversation flow.
- Kara listens to the physician verbalizing out loud what they are thinking as they go through the exam. With the patterns and concepts they build, they generate what they believe the physician's intent is, with a QA person in the loop to ensure it is captured accurately.
- Their QA people also listen to the audio file to verify that everything is accurate.
- They then generate a complete, billable note that is uploaded into the EHR. A physician can review their notes from Kara as soon as they need them.
- Kara can also perform tasks like creating orders and referring a patient.
- Kara works with any EMR. They have an agnostic EHR interface engine and can integrate with all the EHRs in a number of ways. They prefer the API approach because it allows for some very advanced functionality.

The Phone Versus Voice Assistant Hardware
- Physicians move around a lot from room to room, and the one constant is that they always have their phone with them, available for texts, calls and emergencies.
- Kara is a very easy platform to run on consumer smartphones, which gives Saykara flexibility. It doesn't require a dedicated power supply, so it is not limited by location or the need for a lot of hardware.
- The sound sensitivity on smartphones is very good for capturing audio clear enough to generate a note using the AI platform.

Physician Burnout
- This is a very important area for Saykara to tackle. It's front and center in most of the health systems they work with, and it's a public health crisis.
- Statistics show more than 50% of physicians are already burned out or experiencing burnout, and two thirds won't recommend their profession.
- Saykara gives physicians their time back so they can improve their quality of life. The platform reduces their documentation time by 70%, which opens up doors for activities they couldn't otherwise take care of.

Links and Resources in this Episode
- Saykara's Website
- Ryan's Email
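The "augmented AI" workflow described above — a machine first pass, then a human QA pass whose corrections double as training feedback — can be sketched in a few lines. Everything here (the fake ASR output, the correction pairs, the function names) is illustrative only, not Saykara's actual engine or API.

```python
# Hypothetical sketch of an "augmented AI" documentation pipeline:
# machine first pass, human QA second pass, corrections fed back.

def machine_first_pass(audio_segments):
    """Stand-in for the ASR + NLP engine: audio segments -> draft transcript."""
    return " ".join(FAKE_ASR[s] for s in audio_segments)

def human_qa(draft, corrections):
    """A QA reviewer fixes terminology/formatting before EHR upload.
    The (wrong, right) pairs also serve as feedback to retrain the model."""
    for wrong, right in corrections:
        draft = draft.replace(wrong, right)
    return draft, corrections

# Invented ASR output, with typical recognition errors to be QA'd.
FAKE_ASR = {
    "seg1": "patient reports chest pane for two days",
    "seg2": "plan order e k g and refer to cardiology",
}

draft = machine_first_pass(["seg1", "seg2"])
note, feedback = human_qa(draft, [("chest pane", "chest pain"),
                                  ("e k g", "ECG")])
# `note` would then be uploaded to the EHR; `feedback` flows back to training.
```

The key design point this illustrates is that the human reviewer is not a dead end: every correction is structured data that makes the next machine first pass better.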
In this episode, Teri welcomes Bianca Phillips, a lawyer, leader and researcher in the area of digital health lawmaking, and the founder of the Electronic Health Consulting Group.

Bianca is advocating for a future where telemedicine allows access to healthcare no matter where you live, where the mainstream use of wearables allows us to predict the onset of disease before it happens, and where clinical outcomes are improved through precision and personalized medicine. She believes that the digital health future should balance the needs of both patients and healthcare providers by placing evidence-based approaches and civil rights considerations at the core of digital health lawmaking. She aims to be a prominent voice for a Digital Health School of Thought founded on these principles and will be one of the speakers at the upcoming Voice of Health (VOH) Summit.

Key Points from Bianca!
- The 8 pillars of digital health lawmaking that Bianca has developed, and how each is critical to crafting appropriate laws for healthcare without stifling innovation.
- Bianca's fascinating thought experiments on the voice and healthcare space.

Bianca's Research
- She has been researching medical law and digital health law, with support from several universities in Melbourne, Australia.
- She's interested in how we can achieve a future that is immersed in digital health.
- Lawmakers have the power to shape the future of digital health. The law decides how technologies can be created, developed and used clinically and commercially, and it imposes restrictions on the technology world in terms of what it can and cannot do.
- Her interest is in how lawmakers make their choices on questions of compliance, privacy, security, data ownership and civil rights.
- Her research included examining their decision-making processes to unpack those choices and provide an opinion on whether they have been accountable for the reasons behind their decisions.

The 8 Pillars of Digital Health Lawmaking
- She describes them as a principle-centered framework for digital health, the idea being that the principles and values we hold dear can be at the heart of digital health decision making.
- The pillars are: lawmakers should be accountable for the reasons for their decisions; human rights factors; clinical benefits; societal benefits; harm reduction; risk reduction; business case; and public consultation.
- Pillar One (accountability): when lawmakers make statements in parliament about why they are enacting a certain law, they don't have to provide sources, which makes it difficult to ascertain the reasons for their decisions.
- Pillar Two (human rights factors): this is about applying human rights principles to lawmaking. In Australia, when a law is enacted, a statement of compatibility with human rights must be provided.
- Pillars Three and Five (clinical benefits and harm reduction) can be combined: they are about applying clinical medicine principles (doing what benefits the patient), applying evidence-based medicine, and doing no harm to patients.
- Pillar Seven (business case): in Australia there is a process called the "Gateway Process" for determining whether high-risk/high-cost ventures should be pursued by government.

The Issue of Data and Best Practices in the Voice-First Space
- Compliance, especially in the voice of health (VOH) space, is tricky. Waiting for HIPAA compliance to be met is a barrier to VOH becoming a key contributor in healthcare, though she has no doubt that compliance will happen at some point.
- The key focus should be on ensuring that the rules that are created protect people without placing too much of a burden on innovators. This is very tricky and jurisdiction-specific.
- Privacy and security is the biggest issue, and it's the aspect of digital health that gets the most attention.

Ownership and Control of Health Data
- This is one of the most significant areas for assessment in the digital age.
- It's a complicated area, and there have been questions about who should own and control health data.

Thought Experiment on Ownership and How It Could Apply to VOH
- An Australian patient undergoes surgery to insert breast implants, complications arise, and the patient has to undergo surgery for a bilateral capsulotomy. A year later she develops a lump on the left side, is diagnosed with a leak from the silicone implant, and undergoes further surgery.
- The surgeon has recorded all notes using a virtual assistant (VA). The VA took information about the description of the patient's medical condition, the history of her referral, observations from examination of the patient, and correspondence between the surgeon and other doctors. Those notes are very clear and easy to understand, and an ordered log of the requests and questions put to the VA is stored in the activity section of the platform.
- Some time later, a class action is commenced against the manufacturer of the implants, an overseas company, with a requirement that Australian litigants like the patient will be excluded from the settlement unless they can file copies of their medical records in support of the claim.
- The doctor would most likely not want to provide the ordered log of his conversations with the VA: he may be of the opinion that those conversations are personal records of his thought processes, and he might argue that they are not medical records.
- So the question is: who owns the ordered log information? The creator of the technology, the doctor, or the patient?

Links and Resources in this Episode
- The Electronic Health Consulting Group
- Bianca on LinkedIn
In this episode, Teri welcomes Kristi Ebong, the Senior Vice President of Strategy and General Manager of Healthcare Providers at Orbita, a Voice-as-a-Service company.

Orbita is the only enterprise-grade conversational platform powering HIPAA-compliant voice and chatbot applications in healthcare. They were recently awarded the inaugural award for voice technology at HIMSS 2019 from the Intelligent Health Association, in partnership with Pillo.

Kristi was previously Head of Emerging Technology at Cedars-Sinai Health System, where she served as an advisor on new and emerging technology and led deal flow for the Cedars-Sinai Accelerator Powered by Techstars. Before that, she worked as an independent consultant for provider systems, technology startups, and cross-vertical health organizations (including the Robert Wood Johnson Foundation, Healthspottr, and the US Department of Health and Human Services Office of the National Coordinator for Health IT).

Orbita
- Orbita is a conversational AI platform for healthcare.
- They let people build voice, chat and bot interfaces and workflows much faster, in the same way that web content management platforms came along to help people build websites. Orbita lets both developers and business users create conversational interfaces more quickly and in more delightful ways. It's like SurveyMonkey for voice and bots.
- They provide their customers with both the platform itself and turnkey healthcare solutions, including digital marketing, symptom checking, consumer and content services, patient member services, patient bedside assistance, remote patient monitoring, and others.
- Orbita's largest vertical is healthcare providers, but they are live across provider, payer, pharma and Fortune 500 companies.
- They are even working with telecom companies looking to augment their existing presence with aging-in-place solutions.
- They both augment existing technologies, products and workflows and offer out-of-the-box solutions, all of it sold directly to the enterprise.

Orbita's Success Stories/Use Cases
- They are working with a prostate oncology group to offload some of their call center operations. Prostate cancer patients get results for their PSA labs very regularly, and Orbita is augmenting and automating that workflow.
- Orbita is the "powered by" behind the scenes for Deloitte's bedside assistant, DeloitteASSIST, which is live in Australia. DeloitteASSIST is an AI-enabled patient communication solution that lets patients request assistance without needing to press a button.
- Orbita is also working with large pharma companies on post-acute and in-home patient engagement for clinical trials and medication adherence.

Orbita/Pillo Partnership
- This is a great example of Orbita operating as an enabling technology.
- They augment and support Pillo's conversational interfaces behind the scenes, using their AI tools to get that up and running very quickly.

HIPAA Compliance
- Their platform is HIPAA secure, and their technology is powering HIPAA-secure devices in the marketplace today.
- They take privacy and security very seriously. The team spends a lot of time on the workflow around it and on the back end, and they brought on a vice president focused exclusively on that space, making sure it remains their number one priority as they grow.

Channels
- Orbita is omnichannel: they transcend and work across devices. They can augment smartphones and mobile apps, and they can work with chatbots, web browsers, all smart speakers, and even analogue phones.
- This is critical because, for example, in healthcare a lot of seniors are comfortable with analogue phones.
- In Orbita's software and platform, you can write for one of those modalities and deploy to the others ("Build Once and Deploy Everywhere").

The Future of Voice
- Looking at how long it took certain disruptive technologies to reach a quarter of the US market: the web took over 2 years, smartphones took 5 years, and voice has already achieved it in 4 years, so the scale of adoption is extraordinary.
- There are a lot of single-channel point solutions in the marketplace having a lot of success, and some of those are expected to keep doing very well.

Links and Resources in this Episode
- DeloitteASSIST
- Orbita's Website
- Kristi on LinkedIn
- The Best Articles on Voice in Healthcare
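The "Build Once and Deploy Everywhere" idea can be illustrated with a toy sketch: one channel-neutral response definition rendered into channel-specific payloads. The field names and channels below are hypothetical, chosen only to show the pattern, and are not Orbita's actual platform API.

```python
# One channel-neutral response definition (the "build once" part).
RESPONSE = {
    "text": "Your PSA result is ready. Would you like me to hear it read out?",
    "reprompt": "Say yes to hear your result.",
}

def render(response, channel):
    """Render the same response for each delivery channel ("deploy everywhere")."""
    if channel == "smart_speaker":
        # Speech-first device: spoken output plus a reprompt.
        return {"outputSpeech": response["text"], "reprompt": response["reprompt"]}
    if channel == "chatbot":
        # Visual channel: text message plus tappable quick replies.
        return {"message": response["text"], "quick_replies": ["Yes", "No"]}
    if channel == "analog_phone":
        # IVR fallback: speech only, collect a keypad response.
        return {"say": response["text"], "gather_digits": True}
    raise ValueError(f"unknown channel: {channel}")

for ch in ("smart_speaker", "chatbot", "analog_phone"):
    payload = render(RESPONSE, ch)  # same content, three channel formats
```

The design point is that the conversational content is authored once and only the final rendering step knows about device quirks, which is what makes adding a new channel (such as the analogue phones seniors prefer) cheap.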
In this episode, Teri welcomes Lorraine Chapman, the Senior Director of Healthcare Services at Macadamian Technologies.

Macadamian provides user experience research, design and software consulting to the healthcare market. Lorraine heads the healthcare services group, which has been specifically focused on healthcare for over 6 years. As part of her role, she provides strategic UX and business direction to digital and connected health companies and providers across the United States and Canada.

The result of Lorraine's commitment to solving customer problems is a unique ability to harmonize technology with the needs of the people who use it. Lorraine's goal is to make interactions with healthcare (services and technology) easier, more satisfying and more meaningful for all stakeholders involved, including patients.

Macadamian's Voice-First Projects
- More and more clients are going to them directly because of their voice experience. Most are healthcare companies with an existing product that they want to integrate voice into.
- Some clients, including provider networks and other healthcare vendors, approach Macadamian with their own voice-first solutions looking to disrupt the market, especially in remote health monitoring.
- Some solutions are based on medication adherence. For example, LifePod is a voice-first solution that helps seniors live independently by reminding them to take their medication. The unique part of LifePod is that it doesn't wait for the senior to wake it up: it instigates a conversation and tracks everything in the background to ensure that the senior actually takes their medication.
- They have also done clinical trial-based projects where they create voice applications that support clinical trial processes, so patients can participate in clinical trials from their homes.

User Experience Challenges and Barriers
- It's always a challenge to design applications that fully incorporate the nuances of our language and the way we interact with people, but their user experience team has a very specific process to overcome that challenge and deliver the highest-quality, easiest, most frictionless experience for the target user base.
- Their usage scenarios are very conversational.
- They must do usability testing, because it is crucial when developing a voice application.

My Diabetes Coach
- This is a digital therapeutics application/platform that Macadamian developed in collaboration with CHEO (Children's Hospital of Eastern Ontario). The application is for youth with Type 2 Diabetes.
- CHEO wanted to provide an omnichannel platform for the youth that included a mobile app and a voice-first application.
- The platform integrates a lot of health-related pieces, including integrating directly into a physician's EHR so they get the needed data. It's also integrated with content management information, weight scales, glucometers, Fitbits and others.
- Juvenile diabetics only see their clinician 1% of the time, while they have to deal with the condition daily, so 99% of the time they deal with it on their own. My Diabetes Coach supports them and their caregivers when they are on their own.
- The voice interaction component allows the youth to provide information through voice or to ask questions.
- Further development of the application/platform is ongoing. They have developed an underlying digital therapeutics platform that could be customized for other areas.
- They are still working with CHEO and are planning to go into a pilot with patients.
- The application is fully HIPAA compliant and secure.

Links and Resources in this Episode
- LifePod
- My Diabetes Coach
- Macadamian's Website
- Lorraine's Email
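The proactive pattern LifePod illustrates — the assistant speaks first on a schedule and tracks responses in the background, instead of waiting for a wake word — can be sketched roughly as follows. This is a hypothetical illustration, not LifePod's implementation; the dose schedule, function names and log format are invented.

```python
# Sketch of a *proactive* medication-reminder loop: the assistant initiates
# the check-in and logs adherence in the background.

from datetime import datetime, time

SCHEDULE = [time(8, 0), time(20, 0)]   # assumed twice-daily doses
log = []                               # background adherence tracking

def due_reminders(now, confirmed):
    """Return scheduled doses that are past due and not yet confirmed today."""
    return [t for t in SCHEDULE if now.time() >= t and t not in confirmed]

def check_in(dose_time, user_said_yes):
    """Assistant speaks first; the user's answer is tracked in the background."""
    log.append({"dose": dose_time.isoformat(), "taken": user_said_yes})
    return "Great, logged it." if user_said_yes else "I'll remind you again soon."

# 8:05 am: the morning dose is due and unconfirmed, so the assistant speaks up.
now = datetime(2020, 1, 15, 8, 5)
for t in due_reminders(now, confirmed=set()):
    reply = check_in(t, user_said_yes=True)
```

The inversion of control is the whole point: in a conventional skill the user must remember to ask, whereas here the schedule drives the conversation and the log is what caregivers would review.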
In this episode, Teri welcomes Dr. Sandhya Pruthi, the associate medical director for content management and delivery for global business solutions at Mayo Clinic, and the chief medical editor for MayoClinic.org.

Dr. Pruthi is also a clinician and attends to patients. The Mayo Clinic is a leader in health-oriented voice technology. They have been a trusted provider of accurate health information for over 150 years, in print, digital and mobile, and are now pursuing opportunities in the voice space. They have multiple skills and an especially unique relationship with Amazon as a provider of first-party content.

Key Points from Dr. Pruthi!
- Why the Mayo Clinic feels that voice technology is an important area to get into.
- The specific things that Dr. Pruthi does at the Mayo Clinic and in the voice space.

The Mayo Clinic and Voice
- They have been watching the trends in the use of search by voice.
- It's predicted that in the next 5 to 10 years, more than 50% of people will search for healthcare information using voice-activated devices like smartphones and voice assistants.
- They did early work in the voice space by developing a Mayo Clinic First Aid skill.

The Mayo Clinic First Aid Skill
- The skill gives people access to Mayo Clinic content spanning 50 different first aid topics like fever, burns, chest pains, and others.
- They wanted to make sure they gave trusted information that would help users not only with self-treatment, but also with knowing when to seek urgent or emergent care.
- The skill helped them understand how to condense the content available on the web today and recreate it in a conversational form.

Providing Medical Content for Alexa
- Their work on the Mayo Clinic First Aid skill led them to work with Amazon to provide, so far, 8,000 concepts around health conditions that help users get information from a first-party source rather than just a third party.
- Being able to deliver high-quality healthcare information made Amazon a good partner for their venture into the voice space. They are also looking at partnering with other platforms.

Converting Written Content into a Voice Format
- They have a great editorial team that took their written content and created conversational information designed to give a single response to a question. It took them a great deal of time.

Mayo Clinic's Future Plans in Voice
- They have 3 pillars. The first is providing voice content around the needs of the healthcare consumer.
- The second is looking into how voice can improve provider-patient efficiency in healthcare facilities.
- The third is looking into voice as a diagnostic tool or biomarker to detect certain diseases.

Vocal Biomarkers
- The cardiovascular team at the Mayo Clinic worked on detecting changes in voice signal and intensity among patients who were having a coronary angiogram. They were able to detect correlations between voice signal changes and a higher risk of having a heart attack or coronary event.
- They see that as a gateway towards developing a tool that could be used to take care of patients at a distance, or remotely.

Where Voice Technology Is Going over the Next 5 to 10 Years
- Moving towards a personalized approach to healthcare and voice would be exciting.
- Voice could help people search for the healthcare they need.
- Voice could help people track changes in their blood pressure or heart rate and predict whether an intervention is necessary.

Links and Resources in this Episode
- Mayo Clinic
- Mayo Clinic First Aid Skill
- Dr. Teri Fisher on Twitter
- Dr. Teri Fisher on LinkedIn
- Please leave a review on iTunes
In this episode, Teri welcomes Brian Roemmele, the "Oracle of Voice" and the "Modern Day Thomas Edison."

Brian is the consummate Renaissance man: a scientist, researcher, analyst, connector, thinker and doer. He is credited with coining the term "Voice First". Over the long, winding arc of his career, Brian has built and run payments and tech businesses, worked in media, including the promotion of top musicians, and explored a variety of other subjects along the way.

Brian actively shares his findings and observations across fora like Forbes, Huffington Post, Newsweek, Slate, Business Insider, Daily Mail, Inc, Gizmodo, Medium, Quora (an exclusive Quora top writer for 2013, 2014, 2015, 2016 and 2017), Twitter (quoted and published), Around the Coin (the earliest cryptocurrency podcast), Breaking Banks Radio, and This Week In Voice on VoiceFirst.fm, surfacing everything from Bitcoin to voice commerce.

Key Points from Brian!
- The future of voice: The Last Interface, the Intelligence Amplifier and the Wisdom Keeper.

Where Voice Technology Is Going
- The talk he gave at the Alexa Conference was dubbed "The Last Interface" and was built around the question "What if?"
- The Last Interface refers to the last interface that we will have with technology.
- We type to computers because they cannot understand our volition and our intent.
- Computers are already intelligent enough, with current technology, to take a user's context and present the information they are searching for. That's the premise of The Last Interface.

Intelligence Amplification
- Brian presented this idea by searching through history for how we developed the concept of why we speak (why we developed language). He found that we did it because our brain got too large.
- Humans had to offload memories into archival systems, which became known as writing. Typing is an extension of an archival system: we store the things we can't pass on generationally in an offloaded system.
- Computers took over that, and now we archive in systems and places like websites, Google, PDFs and others, but it's still an archival system, and it still doesn't transmit the volition and intent of an individual.
- The short-term aspect is what Brian calls "The Intelligence Amplifier".
- He doesn't fully believe in the concept of AI (Artificial Intelligence), because he doesn't think we can fully define what intelligence is in humans and where it comes from; therefore, we cannot artificially create it in any way, shape or form that is human-like.
- We have been trying to amplify our intelligence by archiving our world and our stories, whether allegorical, mythological or "factual". Factual is as we see it today: all of our facts today will, 1,000 to 10,000 years from now, look allegorical to people, because they will no longer be facts; they will be seen as primitive.
- The Intelligence Amplifier takes in everything around us. How this works: with the technology that exists today, from the moment you're born to the moment you die, there is a device with a camera and a microphone. Assume that it has the highest security you can imagine and that it never goes on the internet (it has no internet connection). It records everything you've ever seen, everything you've ever read, every comment you've made, every comment you've heard, and everything is archived.
- All those things will be presented to you as the basis of your AI, to derive context and to understand your paradigm (how you make you as you, because you are the sum total of the experiences, good and bad, that define us as human beings), and so it starts amplifying your intelligence.
- During his talk at the 2019 Alexa Conference, he pointed out that human beings discard (exformation) over 99% of everything that comes through our senses.
- With the Intelligence Amplifier, the best of us can be amplified.

The Wisdom Keeper
- When we die, everything is thrown away, but not in the world of The Last Interface, because the next stage is called The Wisdom Keeper.
- Your Wisdom Keeper is important because it is the sum total of all your experiences, the essence of your experiences. All that data will be stored on your person in the form of holographic crystal memory.
- Every human being has some wisdom to contribute to the world. A person's Wisdom Keeper will be their testament: who they were.

Holographic Crystal Memory
- The breaches of people's personal data like emails and pictures, and having that data leaked to the greater public, will lead to some sort of rebellion against the idea that we will all have our privacy ripped apart. We are made to be private, because that creates the dignity of an individual.
- The cloud has proven itself ineffectual at storing even a few emails. Brian is not advocating storing everything in the cloud; he argues that we are going to do it regardless, and hopes that people will be guided on it.
- When we have an intelligence amplifier in the Wisdom Keeper world, the penalty for hacking this non-internet-connected device without the permission of the individual will be equivalent to a murder-one charge.
- With current technology, we can easily store 15 to 20 years of a person's life, and these devices will keep getting smaller.
- In the future, we will have holographic crystal memory. Crystalline structures are incredibly stable for holding information over long periods of time, and nano-doping within crystals creates the substrate that allows information storage.
- The sum total of our experiences will be stored in holographic crystal memory because it will survive the ages. It is the archival system with the throughput, bandwidth and storage (petabyte capabilities) to store every single waking and sleeping moment of a person's life.

Links and Resources in this Episode
- The User Illusion: Cutting Consciousness Down to Size
- Dr. Teri Fisher on Twitter
- Dr. Teri Fisher on LinkedIn
- Please leave a review on iTunes
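As a rough sanity check on the "petabyte capabilities" figure, here is a back-of-envelope calculation of what continuous audio/video capture would require. The 2 Mbit/s compressed bitrate is my own assumption, not a figure from the episode.

```python
# Back-of-envelope storage estimate for lifelong continuous A/V capture.

BITRATE_BPS = 2_000_000                 # assumed ~2 Mbit/s compressed audio+video
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def storage_tb(years, bitrate_bps=BITRATE_BPS):
    """Terabytes needed to record continuously for `years` at the given bitrate."""
    total_bits = bitrate_bps * SECONDS_PER_YEAR * years
    return total_bits / 8 / 1e12        # bits -> bytes -> TB

twenty_years = storage_tb(20)           # ~158 TB: feasible on today's storage
lifetime_pb = storage_tb(80) / 1000     # ~0.63 PB for a full lifetime
```

At these assumed rates, 15 to 20 years of capture lands in the low hundreds of terabytes and a full lifetime lands below a petabyte, which is at least order-of-magnitude consistent with the episode's claims.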
In this episode, Teri welcomes Dr. Chris Landon, a Clinical Assistant Professor of Family Practice and Pediatrics at the University of California in Los Angeles.

Dr. Landon served as a Clinical Assistant Professor of Pediatrics at the University of Southern California. He has pioneered device development in the area of pediatric pulmonary medicine, serves as Director of Pediatrics at Ventura County Medical Center, and also has a TV show and podcast.

Key Points from Dr. Landon!
- Innovating healthcare technology through GECKO (Gamification, Education, Communication, Knowledge, and Organization/Ordering).

Attending HIMSS 2019
- Dr. Landon attended the conference in 2018 and gave a lecture on a med-platform that was integrating devices and other technology solutions. He was invited back in 2019, his fifth year of attending the conference.
- He hosts Get Moving TV, a TV show that has produced some 200 episodes on technology in Ventura County, California.
- He has worked on numerous technology projects, and the people he meets at the HIMSS Conference lead to other projects.
- They are trying to build up an innovation center in Ventura County. There was an aerospace industry in Ventura a long time ago, but it all closed shop; they're now trying to build that back up.

Uses of Voice Technology in Advancing Healthcare
- They have been working with Orbita to develop a voice technology where a person can say short sentences into a phone and have that voice analyzed for congestive heart failure, COPD and other diseases.
- They work on solving bilingual education problems. Being able to deliver language to people who are illiterate is very important to Dr. Landon.
- They are working on solutions to the opioid use disorder problem in the United States and worldwide. There could be a technology that analyzes the voices of people who call in to report opioid use cases.
- They also work on the issues of newborns with cystic fibrosis. There could be a voice technology in people's homes that responds to their questions about such diseases.

Links and Resources in this Episode
- Get Moving TV on YouTube
- TDC Labs and Studio
- Landon Pediatric Foundation
- TDC Labs and Studio Podcast
- Dr. Teri Fisher on Twitter
- Dr. Teri Fisher on LinkedIn
- Please leave a review on iTunes
In this episode, Teri shares his interview on the Boldly Podcast by Joule Inc.The podcast is hosted by Steve Mortimer, Joule’s Vice President of Business Development. Steve interviews thought leaders who are making strides in Canadian health care and beyond.They share their ideas that are shaping the world and Boldly spreads them. Teri had the opportunity to join the host to talk about voice technology, how it’s disrupting health care, and how transformational the technology is expected to be in future. Enjoy!Voice Technology and How it’s Impacting SocietyComputers are adapting to the way we communicate as human beings.We are entering an era of ambient computing where there will be microphones around us and we are going to be interacting with a computer through our voice, but won’t be voice only. Sometimes a screen (something visual) will be required to provide something visual.Voice is powerful because it’s the most natural and much more efficient than typing or texting.Speaking is actually 3 to 4 times more efficient that typing or texting. The average person can type at least 40 words per minute while the average person will speak about 150 words per minute.With voice, a person can multitask and do just about anything while talking.Using Voice Technology to Solve Health Care Challenges in CanadaTo have an efficient health care system, there must the right care, at the right time and at the right place. Voice can ensure all three.The right care: When somebody has a health care concern, they don’t know the best place to access the system and what the right care is for them. Voice assistants can fill this gap perfectly. When someone wakes up feeling unwell, they can talk to their AI voice assistant, tell her how they feel, and the voice assistant will use good evidence-base medicine and proven algorithms to provide care.The right time: The voice assistant will go through a symptom checklist to determine what is wrong with someone. 
The assistant will act as a virtual triage nurse in the home. This could direct the resources of the health care system as a whole at the individual level. The voice assistant can decide how urgently a person needs to be seen by a doctor and also direct them to the right health care location.

How Voice can Help Patients Navigate the Health Care System
Voice first computing will bring about patient-first health care, where the patient is the leader. Voice assistants may eventually help flag diseases through vocal biomarkers (the way the patient sounds) and give guidance on what to do. The patient interacts with the voice assistant, and the assistant helps them figure out where to tap into the health care system. That will take some pressure off health care workers and overcrowded health care facilities. As voice assistants develop, they will become more and more effective at guiding patients.

Present Situation
With voice technology, we are at an early stage, similar to where the smartphone was 11 years ago.
Right now, health care oriented voice applications are relatively simple and mostly provide information to patients. For example, the Mayo Clinic has a First Aid skill, a complete voice-enabled interaction that somebody who needs first aid advice can call upon through voice.
There are skills that provide health care information and other types of skills.
There is ongoing research and development of more complex skills with more back-and-forth interaction between a patient and the voice assistant (caregiver). 2018 has been referred to as the year of pilot studies for such skills, and 2019 is expected to be the year they come to market.
Privacy is the biggest barrier to integrating voice first into health care systems. There aren't regulations yet that allow voice assistants to store medical information, but this is being worked on.
Once people realize the convenience that voice first will provide in health care, they will adopt the technology more.

The Future of Voice First in the Next 10 Years
As the AI in voice assistants becomes more intelligent, it will be able to better understand what a patient is experiencing and become a guide to the health care system.
There is going to be a decentralization of health care: we will have little mini-clinics in each person's home, anchored by voice assistants. It's going to be a way for a person to interact with the health care system from home, which means fewer patients will put demands on health care facilities because they will get the necessary advice at home.
An exciting area over the next 5 to 10 years is vocal biomarkers: pulling metadata out of voice, similar to the way metadata is pulled from a photo in digital photography. Algorithms can pick up the emotion in a voice, so devices can infer somebody's emotional state based on the way they are talking. Devices can quantify the emotion in a voice by analyzing the audio waveforms, and can pick up changes in the way someone is using words. All of that can help determine whether someone has a condition such as dementia or Parkinson's disease.
Research suggests we will be able to use voice as another vital sign: a device will listen to the way a person is speaking and be able to suggest a diagnosis or flag risks for certain diseases.

Links and Resources in this Episode
Dr. Teri Fisher on Twitter
Dr. Teri Fisher on LinkedIn
Please leave a review on iTunes
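The "metadata in the waveform" idea above can be made concrete with two classic acoustic features. This is a minimal stand-alone sketch, not any vendor's actual biomarker pipeline: it computes RMS energy (a rough proxy for vocal intensity) and zero-crossing rate (which rises with noisier, breathier speech) over a synthetic tone standing in for a recording.

```python
import math

def rms_energy(samples):
    """Root-mean-square energy: a rough proxy for vocal intensity/arousal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    return crossings / (len(samples) - 1)

# A synthetic 100 Hz "voiced" tone sampled at 8 kHz stands in for a real recording.
sr = 8000
wave = [math.sin(2 * math.pi * 100 * t / sr) for t in range(sr)]

print(round(rms_energy(wave), 3))          # ~0.707 for a unit sine
print(round(zero_crossing_rate(wave), 3))  # ~0.025 (two crossings per cycle)
```

Real systems extract dozens of such features (pitch, jitter, pause statistics, MFCCs) and feed them to trained models; the point here is only that the "metadata" is ordinary signal statistics.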
In this episode, Teri welcomes Harry Pappas, the Founder and CEO of the Intelligent Health Association (IHA). Harry is a serial entrepreneur and has never worked in the corporate world. His core goal is to get the healthcare community to adopt new technology: software, apps, voice and other forms of technology that can have a dramatic impact on improving patient outcomes, patient care and patient safety while driving down the cost of healthcare for all citizens. When it comes to healthcare and technology, Harry knows his stuff.
HIMSS is the largest health IT show in the United States. In 2018, it drew over 42,000 healthcare professionals from about 150 countries. In 2019, the event will be in Orlando, Florida, kicking off on February 12th.

Key points from Harry!
What they are doing at the Intelligent Health Association (IHA) and how they are going to feature voice technology at The HIMSS Global Conference & Exhibition.

How he got into the Voice Space
He had never planned to start an educational association for healthcare technology, but a number of personal events involving his mother and other family members pushed him into helping transform healthcare by developing educational events that help the healthcare community better understand the technology revolution we are in right now.
He has been in technology his whole life and in the healthcare space for about 15 years.
He combined the knowledge and experience he gained from working in commercial enterprises and major hospitals, and applied it to innovating healthcare technology.

How he sees Voice changing things at HIMSS
At HIMSS, for the last three years, they have been demonstrating, under the radar, the use of voice in the operating room, in the labor and delivery room, and even in the trauma ED.
Due to the tremendous proliferation of voice skills in so many areas of the hospital, they can now demonstrate a lot of new skills in many more
areas. They have the iHome, a health and wellness smart home where they will demonstrate voice skills for better health and wellness in the home, showing how the technology can enable seniors to age in place and provide them with remote healthcare coverage.
They have added a voice award program to HIMSS and are also doing a one-day conference on voice technology in healthcare, where some of the top hospitals will demonstrate for visitors what they are using voice for and how. They also have a women-in-voice panel with a great lineup of speakers.

Links and Resources in this Episode
The HIMSS Global Conference & Exhibition
Alexa in Healthcare Consortium
Dr. Teri Fisher on Twitter
Dr. Teri Fisher on LinkedIn
Please leave a review on iTunes
In this episode, Teri shares a recording of the talk he gave at the Alexa Conference 2019. Teri was recently at the 2019 Alexa Conference in Chattanooga, Tennessee, and was privileged to give a couple of talks, one of which he broadcast live, titled "No Appointment Necessary: Your Healthcare Team Now Lives in your Home." The talk was about how voice technology is completely changing the way we experience healthcare, and how this will unfold over the next number of years. He produced the recording of that talk as a podcast for those of us who didn't get the opportunity to attend the conference or tune into the live episode. Enjoy the talk!

Scenario: Imagine waking up and feeling like something is off. You have been rolling over in bed all night, you have a scratchy throat and a headache, and it feels like you've been sweating all night. You start thinking you'll have to call in to work, you don't know how you'll get your kids to school, and maybe you need to see the doctor. You drag yourself into the car, drive to the doctor's office (assuming you've been able to make an appointment), and wait in the waiting room, shivering. The doctor says you have to go to the pharmacy to pick up your prescription, so you do. You go back home and climb into bed. What an ordeal!

Healthcare Systems
There are a lot of great things going on in different healthcare systems, but there are also a lot of problems.
Teri has always struggled to figure out how we can make a change in healthcare systems. Attempts have been made, but things are very much the same.
With voice technology, we will be able to radically transform the way we all experience the whole healthcare journey.

Reference: the movie "Elysium"
In the movie, people have sick bays (little mini-clinics) in their homes. It gives us an idea of where we could be going in the future. There is no doctor in the picture; it's all based on what's going on in the home.
With voice technology, we are now in the primitive stages of such a scenario: we can start to interact with these devices through our voices in a frictionless way, so that pressure is taken off the healthcare system, with a profound impact on patients.
Technology has caught up to the point where we no longer have to adapt to it; the technology is adapting to us through our most natural interface, the voice.

Changing the way we experience healthcare
We have yet to see a major change in healthcare. The ideal solution comes down to three factors: the right care, the right time, and the right place.
The right care: To know the right care for us whenever we feel unwell, we need some type of resource to explain it to us. British Columbia, Canada has the HealthLink BC line, which anyone can call 24 hours a day, 7 days a week, to reach a health service navigator who connects them with live health care professionals who can advise them accordingly. This type of service could be built into a voice assistant.
The right time: This is a big problem in healthcare. People don't get the healthcare they need at the exact time they need it.
Voice assistants can become the triage nurses in our homes and play an important role in determining the right time for someone to get proper healthcare.
The right place: It's always hard to determine where to go when one is feeling sick: the hospital, the doctor's office, the community clinic, a travel clinic, a therapist's office, etc.

Vocal biomarkers
These are the metadata of voice. That metadata differs when someone says the same thing in different situations. Using algorithms and AI, we can pick out patterns in that metadata to make diagnoses, monitor cognitive decline, and enable many other possibilities that haven't been thought of yet. Currently, vocal biomarkers can be used to aid in diagnosis, provide real-time emotional insights, and help detect cognitive diseases.

Patient-centered healthcare
The concept puts the patient at the centre of the healthcare team, with everybody doing their best to look after the patient and make sure the patient is at the forefront of everybody's mind. The problem with the concept is the maze of bureaucracy in healthcare systems, which is very difficult for patients to navigate.

Patient-first healthcare
Since we are going "voice first," we need to think about "patient-first healthcare," where the patient is the leader of their healthcare. This can be achieved if the patient can tap into the technology in their home, get the guidance they need, and access the healthcare they need, as needed, at the right time and at the right place.
The Alexa devices in our homes can become little med-bays (little medical clinics). That would relieve a lot of pressure on the healthcare system and its overworked workers, and it would also greatly improve the patient experience and the overall quality of care.

Links and Resources in this Episode
Aging in place
Lifepod
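The right-care/right-time/right-place framing can be pictured as a routing decision. This is a deliberately toy, rule-based sketch of the idea; real voice triage would use validated clinical algorithms (like those behind HealthLink BC), not a hard-coded symptom table like this one.

```python
# Hypothetical symptom tables, for illustration only; not clinical guidance.
URGENT = {"chest pain", "difficulty breathing", "severe bleeding"}
SAME_DAY = {"high fever", "persistent vomiting"}

def triage(symptoms):
    """Map reported symptoms to a (right time, right place) recommendation."""
    reported = {s.lower() for s in symptoms}
    if reported & URGENT:
        return ("emergency", "hospital emergency department")
    if reported & SAME_DAY:
        return ("same day", "doctor's office or walk-in clinic")
    return ("routine", "self-care at home; see a doctor if symptoms persist")

print(triage(["scratchy throat", "headache"]))  # routine, stay home
print(triage(["chest pain"]))                   # emergency, go to the ED
```

The value a voice assistant adds is exactly this kind of routing, done conversationally and backed by real evidence-based decision trees rather than a three-line lookup.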
In this special episode for the New Year 2019, Teri looks back at the last 10 or so episodes, in which he had the opportunity to ask each guest what voice first health means to them. Teri has interviewed CEOs, thought leaders, researchers, industry leaders, developers, and all kinds of people working in the voice first health space.

Key points!
The common thread among the guests on the podcast so far has been their interest in voice technology and healthcare.

Voice First Health Podcast, Episode 15: Dave Kemp
Dave talked about how the hearables industry is transforming as voice technology enters it. Dave is part of Oak Tree Products, which provides medical supplies and devices to the hearing technology industry. He runs the Future Ear Blog, where he documents the technological breakthroughs occurring in the hearables niche.

What Voice First Health Means to Dave
There are so many efficiencies to be had here. The area for improvement is all the clerical work in the actual medical setting, from how doctors record notes to healthcare administration processes: capitalizing on the new efficiencies to make the whole medical system more efficient.
He thinks the smart assistant will eventually be a person's personal nurse. Through voice analysis it will be able to understand a person's state of health. Amazon has a patent on Alexa being able to tell if a person is sick based on the inflection of their voice, prompting them to seek medical help.

Voice First Health Podcast, Episode 16: Erum Azeez Khan
Erum spoke about creating voice applications for the aging population, specifically seniors living on senior living campuses.
She also talked about how voice technology is affecting those seniors.

What Voice First Health Means to Erum
They look at how much caregivers can do through voice without having to go to a screen, because when caregivers look at a screen, their attention is directed away from the patient or resident under their care. They focus on human-centric design to make connections stronger. Voice first helps strengthen relationships: it's a way to stay engaged with other people and the tasks at hand, and it also creates transparency because it's a multiplayer experience.

Voice First Health Podcast, Episode 17: Dr. John Loughnane
Dr. Loughnane spoke about some of the fascinating and impactful work they are doing in the voice first health space.

What Voice First Health Means to Dr. Loughnane
It means patients having the ability to engage with augmented care from their medical, social, and behavioral health providers in a way that has never been possible to deliver in the past.

Voice First Health Podcast, Episode 19: Brian Roemmele
Brian has been studying voice technology for decades and is one of the most eminent thought leaders in voice technology. He talked about his vision of a future with a truly voice first assistant.

What Voice First Health Means to Brian
Elderly people are becoming isolated because of the reality of societal existence today: we used to live communally as big families, but now we separate ourselves. The elderly don't get to speak very often because nobody wants to talk to them anymore. Voice first technology will enable the elderly to at least create some dialogue and ask questions. The technology is extending their usable lives.
They are able to inform themselves, reach out into the world, and get access to podcasts and information they would not otherwise have had access to, because the user interface is not getting in the way.

Voice First Health Podcast, Episode 20: Jim Schwoebel
Jim is the founder and CEO of Neurolex, a company using speech analysis to detect health conditions from the data wrapped in the waveforms we create every time we utter a word or phrase.

What Voice First Health Means to Jim
It's looking within voice and using that information to improve healthcare.

Voice First Health Podcast, Episode 21: Stuart Patterson
Stuart is the CEO of Lifepod, a company that has created a product enabling proactive conversations from Alexa with a senior living at home who needs some extra help, guidance, and reminders in their daily activities.

What Voice First Health Means to Stuart
He is always excited by the possibility that a caregiver can support and monitor their patient or parent from a distance, whether short or long.

Links and Resources in this Episode
Voice First Health Podcast, Episode 15: Dave Kemp
Voice First Health Podcast, Episode 16: Erum Azeez Khan
Voice First Health Podcast, Episode 17: Dr. John Loughnane
Voice First Health Podcast, Episode 19: Brian Roemmele
Alexa in Canada Podcast, Episode 54: Brian Roemmele
Voice First Health Podcast, Episode 20: Jim Schwoebel
Voice First Health Podcast, Episode 21: Stuart Patterson
Teri's Live Broadcast at the Alexa Conference
Teri's Live Workshop at the Alexa Conference
Dr. Teri Fisher on Twitter
Dr. Teri Fisher on LinkedIn
Please leave a review on iTunes
In this episode, Teri welcomes Stuart Patterson, the founder and CEO of Lifepod, a revolutionary new service best thought of as "2-way Alexa for the elderly." It enables seniors to use their voice to live more comfortably and safely at home by providing essential services such as news and weather updates, medication reminders, access to transport, control of smart-home appliances, and fall detection. LifePod also offers the adult children and caregivers of these elders a powerful new way, using a mobile portal and AI based on data from local IoT sensors, to monitor, keep in touch with, and support their elderly loved ones without having to be there in person.
Stuart is an experienced business leader who has led early- and mid-stage ventures in a variety of markets including voice and virtual assistants, mobile and online apps and services, video content, speech recognition and synthesis, identity management, telephony services, and clean-tech solutions.

Key points from Stuart!
The combination of voice technology with IoT devices to address the needs of the elderly aging in place, in an inexpensive and easy-to-use way.

Lifepod
Lifepod was invented by serial inventor Dennis Fountaine, who invented the first wireless earpieces. Dennis developed the first prototypes of Lifepod, as well as the first prototype of the Lifepod caregiver portal, which allows caregivers to set up the routines that speak via Lifepod. He created it to integrate with Alexa so that it could have the greatest impact. Stuart and his business partner bought all the rights to Lifepod from Dennis.
Lifepod uses Alexa, and they are in talks with Google to incorporate Google Assistant. They have also done a demonstration with Samsung for its Bixby assistant.
Lifepod uses voice technology to add a "virtual caregiver capability" to its service. Voice assistants use reactive voice modalities, which enable them to react to what users ask them to do.
Lifepod adds the concept of proactive voice, which allows virtual assistants to speak to the user without being woken up or spoken to first. Caregivers can schedule dialogues to play throughout a day or week. Lifepod also invokes other skills, for example asking if you would like to listen to your favorite music or have Lifepod order an Uber for you. They focus on the demographic that has had difficulty using the reactive mode of voice assistants.

The Future
They hope to develop technology that will enable Lifepod to detect whether a user is already talking, so it can avoid interrupting the user's conversation or ask to be excused before speaking. They also intend to incorporate presence detection so that Lifepod never speaks in an empty room. Smart speaker producers are working on incorporating multi-room capability.

Onboarding Users
They preconfigure the Lifepod to make it as simple as possible for both the elderly users and caregivers, and caregivers can configure it to meet the unique needs of each individual elderly user.
Lifepod has three types of routines: wellness check-ins, social and other reminders, and other voice services (skill linking, such as giving the weather or the news).
A caregiver can configure wellness check-ins like "How are you feeling?", and if the user says they're not feeling well, Lifepod asks, "Would you like your daughter, son, or professional caregiver to contact you by phone today?"
They also have configurable alerts within the check-ins and reminders, which send relevant text messages to caregivers' phones.

Current Stage
Lifepod is currently in beta testing. They have 20 prototypes and will soon have 200 beta Echo clones with the Lifepod name on them and the Lifepod firmware in them. They are beta testing with individual members of the public in Boston and California.
They are involving companies and institutions like Boston's Commonwealth Care Alliance, where they have almost 50 beta devices in different patients' homes. They are also running smaller pilots with senior living facilities and home care agencies. They have a wait list and hope to be available to the public by April 2019, though for some time they may only be available through institutional partners like Commonwealth Care Alliance.

What Voice First Health Means to Stuart
He is always excited by the possibility that a caregiver can support and monitor their patient or parent from a distance, whether short or long.

Links and Resources in this Episode
Lifepod Website
Reach out to Stuart at Stuart@Lifepod.com
Lifepod on Linkedin
Dr. Teri Fisher on Twitter
Dr. Teri Fisher on LinkedIn
Please leave a review on iTunes
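The three routine types described in this episode (wellness check-ins, reminders, and skill linking) amount to caregiver-authored schedules of proactive prompts. Here is a hypothetical sketch of how such a configuration might be represented; the field names and schema are illustrative inventions, not the actual Lifepod portal format.

```python
# Hypothetical caregiver-configured routines; schema invented for illustration.
routines = [
    {"type": "check_in", "time": "09:00",
     "prompt": "Good morning! How are you feeling today?",
     "alert_caregiver_if": "not feeling well"},
    {"type": "reminder", "time": "12:30",
     "prompt": "It's time to take your midday medication."},
    {"type": "skill_link", "time": "17:00",
     "prompt": "Would you like to hear today's weather?", "skill": "weather"},
]

def due_routines(now):
    """Return the prompts scheduled for the given HH:MM time."""
    return [r["prompt"] for r in routines if r["time"] == now]

print(due_routines("12:30"))  # the midday medication reminder
```

The proactive twist is that the assistant, not the user, initiates each of these dialogues at the scheduled time, which is exactly what distinguishes Lifepod's model from a reactive voice assistant.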
In this episode, Teri welcomes Jim Schwoebel, the founder and CEO of Neurolex, a diagnostics company that is applying speech analysis to detect various health conditions early, before full-blown symptoms occur. Their core vision is to pioneer a universal voice test, like a blood test with extracted features and reference ranges, for use in primary care to refer patients to specialists faster.
Jim is a Georgia Tech-trained biomedical engineer and co-founder/partner in the Atlanta-based accelerator CyberLaunch. He got the idea for Neurolex after seeing his brother hospitalized for a psychotic episode and eventually diagnosed with schizophrenia. He wondered how these types of conditions could be diagnosed earlier, while pre-clinical intervention was still possible. The company is currently in the midst of over 20 research trials taking place around the world, helped by 30 fellows they have recruited to gather a massive dataset on voice diagnosis.

Key points from Jim!
Voice signals and how they can be correlated with various types of illnesses and diseases, and how patterns in voice could potentially help clinicians diagnose mental illness.

Research Findings
With just a voice sample, you can predict with very high accuracy who would or would not develop a psychotic episode.

Voice Samples
Collecting samples is more of an in-clinic procedure because there are a lot of issues with taking samples at home. It's usually a short test, done like a voice survey.
It takes 3 to 5 minutes. For some diseases, there are alternative voice sample collection tests. The voice responses are sent to the cloud or processed locally in the clinic, then a report is generated so that the health provider can use that information to infer the health of the patient. Jim believes that with time they will find more robust models that can be used within home environments.

Audio Data Modeling
They apply techniques that use MFCC coefficients and ASR models on labeled voice data.
For small datasets, they use old-school techniques like support vector machine modeling or logistic regression. They frame it either as a binary classification problem or as a regression problem, estimating the scale itself, question by question, from a voice file.
They are continuously learning which new features and traits are correlated with voice features. They also transcribe the audio and extract features from the text. Getting enough data is the biggest challenge right now.

Voice Sample Sources
They get them from academic collaborations, for example with the University of Washington, where undergraduates go into clinics and collect data from patients. Patients have to consent to giving a voice sample.
They've created a product called SurveyLex that helps them create, design, and deploy voice surveys in the cloud, like a SurveyMonkey survey. They have optimized it for research use and it gathers a lot of data quickly. Different health entities use the product on a subscription basis.

The Voice Genome Project
They've been brainstorming how to engage external collaborators in a more comprehensive way and how to centralize their work, because so far it's too scattered: they have separate work at Harvard, MIT, Stanford, and UCSF. They are trying to create one survey using SurveyLex.
They will launch it in January 2019. The first step will be getting a lot of survey information tied to voice information, mainly self-reported health inventories labeled with voice files. To contribute, people can donate their voice and be part of the research study, or become a research collaborator. Collaborators can analyze the data beyond what Neurolex has done.

Meaning of Voice First Health to Jim
It's looking within voice and using that information to improve healthcare.

Links and Resources in this Episode
Neurolex.ai
Reach out to Jim at js@neurolex.co
Dr. Teri Fisher on Twitter
Dr. Teri Fisher on LinkedIn
Please leave a review on iTunes
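The small-dataset approach Jim describes, a simple classifier such as logistic regression over features extracted from a voice file, can be sketched end to end. This is a self-contained toy, assuming synthetic two-dimensional "voice features" in place of real MFCC-derived ones; it is not Neurolex's actual pipeline.

```python
import math
import random

random.seed(0)

# Synthetic stand-ins for two extracted voice features (e.g. mean pitch and
# pause rate), with the positive class shifted upward. Real pipelines would
# extract MFCCs and other acoustic features from recordings.
data = [([random.gauss(0, 1), random.gauss(0, 1)], 0) for _ in range(50)] + \
       [([random.gauss(2, 1), random.gauss(2, 1)], 1) for _ in range(50)]

# Logistic regression trained by stochastic gradient descent on log-loss.
w, b = [0.0, 0.0], 0.0
lr = 0.1
for _ in range(200):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - y                     # gradient of log-loss wrt the logit
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

correct = sum(
    ((1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))) > 0.5) == (y == 1)
    for x, y in data
)
print(f"training accuracy: {correct / len(data):.2f}")  # well above chance
```

With tens rather than thousands of labeled samples, exactly this kind of low-capacity model (or an SVM) is the sensible choice, which is why the episode contrasts it with the larger-data MFCC/ASR modeling.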
In this episode, Teri welcomes Brian Roemmele: scientist, researcher, analyst, connector, thinker, and doer. He is referred to as the "Oracle of Voice" and is credited with coining the term "Voice First." Over the long, winding arc of his career, Brian has built and run payments and tech businesses, worked in media, including promoting top musicians, and explored a variety of other subjects along the way.
Brian actively shares his findings and observations across fora like Forbes, Huffington Post, Newsweek, Slate, Business Insider, Daily Mail, Inc, Gizmodo, Medium, Quora (an exclusive Quora top writer for 2013 through 2017), Twitter (quoted and published), Around the Coin (the earliest cryptocurrency podcast), Breaking Banks Radio, and This Week In Voice on VoiceFirst.fm, covering everything from Bitcoin to voice commerce.

Key points from Brian!
The exciting aspects of a true voice first personal assistant.

A Thought Experiment
From the moment you are born, there is a device that your parents and family agree to have with you at all times. The device is highly secured, is not connected to the internet, and only sends and retrieves information for you. It records the story of your life in audio and video imagery that you will probably never look at. That is the beginning of your voice first assistant (voice first, not voice only).
It becomes a memory system for you. Throughout your life you will refer back to it with simple commands like, "Alfred, where was I on March 22, 2037 at 3 pm?"
The only way all of that works is if there is a highly regarded form of privacy: none of the information in the personal assistant will be in the cloud or on the internet.
Now imagine you are a 49-year-old father of three, driving along in your self-driving car, and unfortunately the car drives off a cliff due to a glitch in the program.
What's left is phenomenal: your voice first personal assistant. There will have to be rules and laws about how that information is stored or erased. A program might eliminate the things you don't want your children to have raw access to, but you will want them to have access to the sum total of your knowledge and experience. That will form the book of your life.
That book of your life might be rendered as a hologram or embodied in a robotic system. When your son turns 28 years old, he might turn to your essence (your voice first assistant, which is still there) and say, "Dad, I'm getting married today and I need some advice. How did you do it?" The voice first assistant can respond in the third person or in the first person and advise him based on your experience.
What we will have is the ability to audit the memories that these people have allowed to be audited, and to have conversations with those memories.

The Voice First Revolution
We will use our computers in a much different way. Social media will look a lot different than it does today. We will still look things up, but we will be asking for the best results.
Voice commerce (shopping) will be a massive industry, just as web commerce and mobile commerce made the internet. Advertising will be very different: we will not be interrupted or allow our experiences to be adulterated; instead, we will seek out the experts and influencers that our smart assistants, having researched on our behalf, have identified for us.

The Meaning of Voice First Health to Brian
Elderly people are becoming isolated because of the reality of societal existence today. We used to live communally as big families, but now we separate ourselves. The elderly don't get to speak very often because nobody wants to talk to them anymore. Voice first technology will enable the elderly to at least create some dialogue and ask questions. The technology is extending their usable lives.
They are able to inform themselves, reach out into the world, and get access to podcasts and information that they would not otherwise have had access to, because the user interface is not getting in the way.

Links and Resources in this Episode
Voice First Revolution with Brian Roemmele (Part 1 of this Interview)
Brian on Twitter
Brian on Quora
Dr. Teri Fisher on Twitter
Dr. Teri Fisher on LinkedIn
Please leave a review on iTunes