Podcasts about Movidius

  • 13 PODCASTS
  • 14 EPISODES
  • 34m AVG DURATION
  • ? INFREQUENT EPISODES
  • Mar 18, 2025 LATEST

POPULARITY

[Chart: mentions by year, 2017–2024]



Latest podcast episodes about Movidius

Entangled Things
Episode 110: Silicon's Quantum Leap: Balancing Heat & Qubit Fidelity with Brendan Barry

Entangled Things

Mar 18, 2025 · 39:24


In Episode 110, Patrick and Ciprian are joined by Brendan Barry, CTO of Equal1, to discuss the cutting-edge world of silicon-based quantum computing.

Vast and Curious, cu Andreea Roșca
Valentin Mureșan. If I could give just one piece of advice: find the right people and set them free. Why you should put money and resources into social infrastructure

Vast and Curious, cu Andreea Roșca

Mar 17, 2023 · 58:11


In the year that Timișoara is Capital of Culture, I spoke with Valentin Mureșan about his experience bringing people together to create change. Vali has two decades of experience as an entrepreneur and, in recent years, has used his resources and knowledge to co-found organizations that in turn bring together people and resources for community projects in Timișoara: the investment fund Growceanu, the cultural hub Faber, BanatIT, Fundația Comunitară Timișoara, Grădinescu, and Timotion. His theory comes from his experience in technology: what you have to do, he says, is build a platform that becomes the infrastructure for other people's involvement. He believes in the power of well-intentioned people and in using diversity as an engine of progress. Valentin is a co-founder of Movidius, a technology company specialized in developing software platforms for drones. Movidius developed a processor capable of seeing and processing images and was acquired by Intel in 2016. Until 2020, Valentin was Intel's country manager for Romania. Since 2021 he has been the personal advisor to the mayor of Timișoara on digitalization and smart city matters and coordinates the strategy for transforming the city into a "smart city." We talked about how he works with others, how he chooses his partners and the stakes he takes on, the principles that matter when building platforms for collaboration and change, why we need smart cities, and how he sees his role in this whole web of organizations and initiatives. We also talked about gardening and why he invests his money and time in creating hotbeds of change. **** This podcast is supported by Dedeman, a 100% Romanian entrepreneurial company that believes in the power to change the world through ambition, perseverance, and involvement. Dedeman supports new ideas, innovation, education, and the entrepreneurial spirit, and is a strategic partner of The Vast&The Curious. Together, we create opportunities for meaningful conversations and questions that make us better, as people and as organizations. **** The Vast&Curious podcast is supported by AROBS, the largest technology company listed on the stock exchange. AROBS is a Romanian company founded 25 years ago in Cluj, with offices in eight countries and partners and clients in Europe, America, and Asia. AROBS believes in a culture of involvement, continuous growth, and long-term partnership. It is one of the few Romanian companies that offers every employee free shares as part of the standard benefits package. **** Notes, a summary of the conversation, and the books and people referenced in the podcast can be found at andreearosca.ro. To receive new episodes, you can subscribe to the newsletter at andreearosca.ro. If you listen to this podcast, please leave a review on Apple Podcasts. It takes a few seconds and helps us improve the topics and quality and interview new interesting people.

Embedded Insiders
Industry Leaders Make Big Push for Small AI

Embedded Insiders

Dec 4, 2020 · 33:06


In this edition of the Embedded Insiders, Brandon and Rich discuss the semantics of AI and intelligent technology and what qualifies as a smart system these days. Have marketing engines turned these into overused terms? Are they even being used correctly? Later, the Insiders take a deeper dive into the practicalities of Edge AI with Bill Pearson, the Vice President of the Internet of Things group at Intel. Together, they investigate the market challenges of realizing ROI on initial AI deployments and the hardware/software gaps preventing developers from launching commercial-grade solutions faster. Finally, Alex Harrowell, a Senior Analyst covering enterprise AI, speaks with Perry in this week's Tech Market Madness to discuss a new report from Omdia Research titled "Artificial Intelligence for Edge Devices Report."

Business And Investing Sherpa
7: Intel (INTC) Dividend Growth Stock Passive Income Investment Analysis

Business And Investing Sherpa

Oct 29, 2020 · 19:04


On today's Passive Income Investment Idea Episode we are analyzing Intel, ticker INTC. Intel is a High Quality, Wide Moat Company Selling at a Significant Discount to Fair Value. Intel Corp is one of the world's largest chipmakers. It designs and manufactures microprocessors for the global personal computer and data center markets. Intel pioneered the x86 architecture for microprocessors. It is also the prime proponent of Moore's law for advances in semiconductor manufacturing. While Intel's server processor business has benefited from the shift to the cloud, the firm has also been expanding into new adjacencies as the personal computer market has declined. These include areas such as the Internet of Things, memory, artificial intelligence, and automotive. Intel has been active on the merger and acquisitions front, recently acquiring Altera, Mobileye, Nervana, Movidius, and Habana Labs in order to assist its efforts in non-PC arenas. As of Oct. 28, 2020, Intel has an A+ S&P Credit Rating with a 32% LT Debt/Capital. Morningstar rates Intel as Wide Moat with a Negative Moat Trend, Standard Stewardship, and Medium Fair Value Uncertainty, currently trading Significantly Undervalued at a 35% Discount to Fair Value and a 5 Star Rating.

Analysis of the historical trends using FastGraphs shows that both Earnings and Operating Cash Flow have similar correlation with stock price over time. The historical Price to Earnings Multiple Range has been Wide, ranging from 12-18. Over the past 10 years, however, the range has been between 12-13. Currently the Blended PE is 9.34, an approximate 25% discount to the mid-point of the historical range. This translates to a 10.71% Blended Adjusted Earnings Yield. The consensus of 16 Analysts expects Intel to grow earnings an average of 1.62% per year between today and the end of 2022. The trailing 10 YR Earnings Growth has averaged 9% per year and traded at an average P/E ratio of 12.96. Therefore, the company is projected to grow Earnings Slower than in the past. As such, we should use the low end of the historical multiple's range at best for valuing the company today. Assuming Intel grows Earnings at a rate of 1.62% annually and reverts to a valuation of 12 times Earnings, an investment today would return over 16% Annualized. The current price is $45.64 and our Target Sell Price based on Projected Earnings is $61. Moreover, you would lock in a Dividend Yield of 2.9% with a Low Payout Ratio of 25%. As a bonus, the Current and Trailing Twelve Month Buyback Yield is 9.47% for a Total Yield of 12.33%. Additionally, Intel has High Operating Performance with 5 Yr Average Return on Equity of 22%, Return on Invested Capital of 16% and Net Margin of 23%.

Overall, here at the Business and Investing Sherpa, we are opportunistically adding more shares of this High Quality, Wide Moat Company to our Portfolio to generate Passive Income of around 3% in the form of Dividends while targeting a Total Annualized Return of 16% with a current target Sell Price of $61 between now and the end of 2022. For an even better option, we are Selling Cash-Covered Puts to generate either guaranteed returns over 15% annualized or an even better entry position into Intel at a lower price. To learn more, be sure to take our Online Course about Selling Cash-Covered Puts.
***** If you'd like to get access to all of our Investment Analysis Reports along with all of our Online Courses on Investing for Passive Income and Total Return to Reach Financial Independence as well as our Monthly Portfolio Updates then simply go to BusinessAndInvestingSherpa.com and sign up for a PRO Membership. ***** --- Support this podcast: https://podcasters.spotify.com/pod/show/businessinvestingsherpa/support
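For readers who want to verify the arithmetic behind the 16% annualized figure, here is a minimal sketch of the valuation math in Python, using the numbers quoted in the episode (the roughly 2.2-year horizon from late October 2020 to the end of 2022 is an assumption):

```python
# Sketch of the annualized-return math quoted in the episode.
# All inputs come from the episode text; the holding period is assumed.
price_today = 45.64                      # current share price
blended_pe = 9.34                        # current blended P/E
eps_today = price_today / blended_pe     # implied earnings per share (~$4.89)

growth = 0.0162                          # consensus annual earnings growth
target_pe = 12                           # low end of the historical P/E range
years = 2.2                              # ~today (Oct 2020) to end of 2022

eps_future = eps_today * (1 + growth) ** years
target_price = eps_future * target_pe    # lands near the $61 target
dividend_yield = 0.029

# Price appreciation plus reinvested dividends, annualized
total_growth = (target_price / price_today) * (1 + dividend_yield) ** years
annualized = total_growth ** (1 / years) - 1
print(f"target price ~ ${target_price:.0f}, annualized return ~ {annualized:.1%}")
```

Running this reproduces the episode's headline numbers: a target price of about $61 and an annualized return in the 16-17% range.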

Sixteen:Nine
Stephen Borg, meldCX

Sixteen:Nine

Aug 12, 2020 · 42:46


The 16:9 PODCAST IS SPONSORED BY SCREENFEED - DIGITAL SIGNAGE CONTENT

There are times when I come across an unfamiliar company and it's clear, really quickly, what they do and offer. But other times, not so much. When digital signage industry veteran Raffi Vartian joined a company called meldCX a few months ago, my core response was, "OK, that's great! Glad you're sorted out. Ummm, who???" Since that time, he's walked me through what the Australian-based company, which is now growing its footprint in North America and elsewhere, was all about. If the company has an elevator pitch, it would be useful if the building that elevator's in has a lot of floors. It gets complicated. My simpleton explanation is that the company offers a platform as a service that makes it much easier and faster for software vendors, integrators and solutions providers to stick to what they're good at. The customer worries about the user experience and key functions of an application, which can sit on top of a meldCX technology stack that has already got things like OS compatibility and scalability worked out. So, when a client asks a vendor for a solution that could be very complicated, a lot of that complication has already been handled via the meldCX platform. So the job can be accelerated and the costs controlled. I spoke with founder Stephen Borg, who splits his time between Australia and the U.S. He walked me through the origins of the company, how it works with software vendors and integrators, and related an interesting and different take on using computer vision to keep facilities and devices sanitized in the midst of a pandemic.

Subscribe to this podcast: iTunes * Google Play * RSS

TRANSCRIPT

Stephen, thank you for joining me. You're in Australia, I'm in Nova Scotia. So, I think we're like 14 hours difference in time zones and all that. But, we'll make this work. For those who don't know much about meldCX or anything, can you give me the rundown on what the company's about?

Stephen: Yeah. So really, we started meldCX about four years ago and it started as a research project. So I got a team together, internal people, and external partners and customers, and we started it as a research project and said, what are the common problems in delivering devices to physical space? How can we do this better? And what triggered that research was my background in the AOPEN group, the work with Chrome and Fujitsu; we had a common thread of problems and they were just assumptions at the time. But we looked at them and said, okay, what are the things that stop a rollout? Where are the unnecessary costs? What stops it in its second phase? Because we find a lot of customers don't know what they don't know until they get three years into their cycle and find out they hit a brick wall. So what are all those points? Then we researched and built some codebase. We did that for about two years before we decided to commercialize it. And then we won two or three significant global customers out of that research and decided that meldCX would take its own path, become its own entity, seek its own investment. We commercialized it in the middle of 2019. And in that short period of time, we have around 80 customers, like enterprise customers, across four continents. It's been a massive take-up, so it's been a very exciting journey.

Now was the research work for AOPEN or for Fujitsu or was it a JV or…?

Stephen: Yeah. So I started it as a piece of work that I kicked off with a team looking at what are the common problems.
So we looked at Fujitsu data, we looked at AOPEN data. We worked with various customers, we worked with different partners, major providers, and it really started as just a bit of paper. Then from there, we decided there is some significant gap here and there are areas where we can help. So, we took that and said, okay, let's do some test cases, and initially it was funded by myself and a team of interested people, and we had some great support from AOPEN and the Acer group, around some goodwill, some developers, some research analysts and the like.

I'm just trying to wrap my head around what the outcome or output of this would be. A little bit of what I talked about with Raffi was about the idea of making Chrome devices like the AOPEN Chromeboxes more extensible, so that they could work with things beyond just plugging into the back of a computer or back of a monitor, so they could work with printers, other external devices, that sort of thing. Is that kind of the gist of it?

Stephen: We found two things. Chrome taught us a lot. Okay, I helped architect the first sort of commercial Chromebox with Google, and what we quickly found was there are two distinct development camps, and that's across signage, kiosk, and interactive devices. So you have a development camp that looks at quite thick architecture, is very versed in modifying drivers or going deep into Windows and modifying it and bastardizing Android, so to speak. You have that sort of skill set, and then you have a very dynamic backend, highly functional, web-first orientation, and these developers needed to meet in the middle somewhere. And we discovered the hard way with Chrome, because we were trying to bring customers across to this new web-first environment without the tools or the plumbing to get across. And then conversely, you had some really cool tech coming down the pipe that didn't even consider a physical environment. You know, physical security, reliability, no popups on a screen that people can't touch. So that was phase one, and we ended up enabling some big clients on Chrome, doing some things such as payments, PIN pad integrations, biometrics integrations, accelerators like Movidius; those types of things we enabled in Chrome initially. And then we made a decision to say, okay, what we want to do is take these digital building blocks, and if a customer uses them, they should be able to run on any operating system. So now, if a customer has built their app using meldCX tools, that can run on Android, that can run on Windows, soon Linux, without changing the codebase from Chrome or vice versa.

Would you call this middleware?

Stephen: Yeah, in some ways it's middleware, but what we do is quite unique. The middleware covers three stages, that is, the original deploy piece. Typically middleware just allows you to build and propagate. What we do is we allow you to either build using it or use our existing modules. So we have a customer that wanted to add some AI elements to their existing app and didn't have the team to do it, and they just plugged in some of our modules. Or you can run applications side by side and make them talk to each other. So we wanted it to be really flexible. We didn't want to have to tell people that you must build in Meld to use Meld. That's a big leap and it's something that's a bit of a barrier at the start. So we didn't create or force any customers to go into any proprietary language or tech.
You can just add these tools or refer to these tools and create a high-end device, even if you've had no experience building a kiosk per se. So we let customers take content or apps they've created on Adobe or web apps and turn them into devices that can operate online, offline, talk to local peripherals, etc., using our tools and our sort of process.

I'm thinking about a creative agency that I knew in New York a few years ago that was working with a very large athletic wear company. And I was doing some consulting. These were guys who were very good at creative and very good at interactive user experience and all that sort of stuff. But they were being asked to do everything: coding, hardware sourcing, and putting together the touch screen overlays, the whole nine yards. And I'm thinking about what they were saying: "We're having to do this because our client wants us to do it, but this is not our skillset at all. Please help." What would happen if that kind of a company was then told, "We want you to do this interactive user experience, we also want you to do payments off of this, and we also want it to interact with smartphones or that sort of thing," and they would be deer in the headlights. Is this the sort of thing where if they knew that meldCX exists, they could jack their way into that and it would enable them to produce something that's hardened, secure, and reliable?

Stephen: Yeah, exactly. So we just had a customer roll-out, which was really unique: contact tracing applications for pubs and clubs and bars. And it was an agency and their integration aspects were quite complex, so we enabled the Chrome device to do Apple Pass and Google Pass so they can send digital tokens or loyalty cards to their customers; tapping as they walk into the establishment, it would contact trace, plus give them points. Now the agency scoped out a year-long project. We delivered that in two months on meldCX, right? Because all they needed to do was focus on the UI, and we had already done all the certifications, the Apple compliance, the Google compliance, and really, they just used our widgets, got it up and running, and the customer is rolling out now. So in that case, not only did we help the initial build process, but ongoing, Meld manages the OS. So Meld won't let the OS go past the build. So for example, if it is Chrome and you've built your app on v83, it won't allow Chrome to update past v83 until you've told it to update. And if it picks up a critical security patch, it might notify you of the impact of that, and you can test it without having a physical device. You can test it in an emulator. In this case, they were using a development team in Melbourne and a development team in India, and they tested virtually using our emulator, so they don't even need physical devices. So that's a great example.

I know "middleware" is a very simplified way of trying to describe it, but since I'm a simple person, would I describe this in certain respects as a middleware as a service?

Stephen: Yeah, so we have two essential products or product lines. One is a PaaS (Platform as a Service) product, so that is for someone who wants to build their own app. It gives you all the tools. It gives you things like PCI compliance, advanced security, even tokenization of devices, a whole range of builder widgets so you can use those blocks. In fact, we've had quite a few ISVs build their applications or move their applications across to Meld, really just repointed to the Meld resources rather than rebuilding anything.
And then they can go off and run multiple operating systems. We were dealing with a signage provider (that we'll announce soon), and I think they had a team of 30 devs and they had seven dedicated to operating systems, and after moving across to Meld, now they don't have any dedicated to the operating system, which is a sunk cost; they have them focusing on features. So that's one of the things we're providing, and we also help them become an enterprise. So now they can use our certifications, our security compliance, our SSO, all those things that corporate entities need as a minimum requirement; they can just utilize what we've already done, right?

I completely get what you're saying. My worry would be that in a hyper-competitive marketplace, like the digital signage software marketplace, many of these companies compete on price. Layering you in adds more cost. Although, you've said it removes a lot of costs, because in this case, this company doesn't need seven guys, or engineers, focused on operating systems. But how do they balance that out? Does it become net savings?

Stephen: Look, there are two aspects. Signage, you're right, it's very competitive, and I wouldn't see, for example, an entry-level signage player that's playing a web URL having the need for something like Meld, unless it was their first foray into Chrome and they didn't want to do the development, they just wanted to point to us. On the signage space, we're working with partners that want to move up the food chain. And what I mean by that is they want to be an enterprise, they want to have multiple touchpoints within the customer, and they potentially want to use other aspects of Meld. So Meld has its PaaS platform and it does have SaaS modules as well. So we have products such as advanced machine vision. And in Meld, you can schedule machine vision models or AI models. You can schedule content and apps all in the same way and pair them together. We just worked with a global car company, and they have an app that they spent a lot of money building, an agency built it, and they wanted to add some visual elements...

An agency costing a lot of money???

Stephen: (Laughter) Yeah, and I looked at it and went oh well, but they didn't want to go back to the agency and wanted to use Meld to add some AI elements, and what we ended up achieving for them is that we used the cameras within the devices and gave them content sentiment analysis, tokenization of people using it, so if they went into a pop-up that was in a shopping center and then later went into the car dealer, the car dealer wouldn't get any personal details, but they'll see, "Look, this family of four was playing with this car in a shopping center for an hour and they got to this configuration price point," and that dealer would end up with that profile as they're walking in. They did that, and a lot of that was prebuilt with those tools in Meld. They just used those tools and ran it side by side with the application, and that was a six-week process. So they're the type of customers or partners we're using, where they're taking it to that next step. And also, even some small signage providers, when they go enterprise now with all the security requirements like SSO, data restriction compliance, GDPR, all of that's really overwhelming for them. So we take care of that. As long as they stick to the guidelines we set in place, they can be compliant too, and they can really punch above their weight.
Is one of those guidelines that you have to use Chrome devices, or is that just one of the ways you can do this?

Stephen: No. So, we use Chrome and Windows. So one of the guidelines is, for example, the hardware. We're hardware agnostic as well, so as long as the hardware has some security components, like it has a TPM or we can access the firmware to create and assign digital certificates, we allow it into our network. So we won't allow a customer to, say, add an Android device because that can't be secured. We are PCI Level One, so the highest PCI standards. So we will ensure that the devices meet that standard if they want to be able to use any of those certificates, if that makes sense.

Yeah. Google made a big splash about four or five years ago about entering the digital signage market. And at that point, there were a number of Chrome devices and there was a feeling, and I was among them, and I thought, okay, this could be a big deal, but then it never really went too far. There's only a handful of companies that are using Chrome, Chromeboxes and other devices, but for the most part, the world has moved on, and Android came back and Android is getting a lot more serious, and there are lots of special-purpose devices, set-top box kinds of devices, that are being used. I think it's interesting that you started down the path of Chrome, but I suspect it's going to be important to communicate, at least in the context of the digital signage ecosystem, that this is not just a pure Chrome play and they don't have to go down that path.

Stephen: Yeah, that's correct. And look, we love working with Chrome. I think it's come a long way. And one of the reasons why I think adoption wasn't so rapid in this space is what I explained earlier. You have a lot of people who are used to hacking an operating system and bending it the way they want it to bend, but then you tend to compromise security, you compromise feature updates. There's a lot of compromises when you're doing that. So what we tried to do is take the Chrome methodology and make Chrome more adaptable to this market. We're doing offline content, talking to peripherals, running multiple apps at the same time. So I haven't come across anything of the like that we can't do in Chrome that you can do in other operating systems. I think Chrome forces you to be compliant, to maintain security standards, and there are not that many players that have the skills to work within that compliance framework. So initially we made that easier, and now we use that same compliance framework, which is class-leading for an operating system, across the other operating systems. We've worked very closely with Microsoft to control updates, and we're about to release some dedicated Android devices that are secure, have digital certificates back and forth, and can only run apps generated from Meld. So even if it's your own APK, if it wasn't generated from Meld, it won't have authority. So it's super secure. You can still update the Chrome browser within Android, independently of Android, so it's very flexible but maintains that security-first principle.

You mentioned machine vision, and I believe the product is called Viana. You're bringing computer vision, at least in the context of digital signage, into a pretty crowded marketplace in terms of the number of companies that are selling variations on video analytics for audience measurement and so on. What's the distinction about Viana that sets you apart from the other guys?

Stephen: Sure.
So Viana actually didn't start with the sort of visual analytics we see in signage. It started on some really deep learning projects. One, which you can look up, is called Project Sally, where for the postal service in Australia, we did handwriting recognition and package recognition to be able to sort parcels at a kiosk device. You can go up to this kiosk, drop your handwritten parcel on the platen, and it will detect if it needs a customs declaration, pre-fill most of it, dimensions, calculate the cost and everything else. So that was quite deep learning, because if anyone tried to scan my handwriting, you'd need a really decent model.

For mine, it's not going to work.

Stephen: (Laughter) So we did that, and we got our synthetic data set generating 14 million impressions a week of variations of handwriting, and we started saying, okay, how do we do things a little bit differently around visual analytics? How do you go beyond just saying, okay, this is how many females or males of this age have walked past this screen? You know, how do we take it to the next level?

It's kind of a been there, done that thing.

Stephen: Exactly, right? And we're not going to engage in something that's highly saturated unless we can add some differentiation. So we sat down and worked through it and said, okay, what are we trying to actually get here? So we're not just trying to get the number of eyeballs; what we're trying to get is the amount of attention time, and we're trying to understand the content sentiment and how that relates to other systems, other processes or advertised media. So we not only built our own custom model that looks at content sentiment analysis but applies various metrics and various sorts of triggers and integrations that make it really easy to do more. And then we took it a step further, and all the training models are based on synthetics. So we haven't gone out there and pointed a camera at the public and started training. You know, you have a natural bias doing that. So what we've done is all our training data is synthetically generated. It doesn't have the ability to even understand race, let alone be skewed to race, but it does understand things like age, gender, beard, glasses, brands of clothing they might be wearing, are they wearing a hat in a hat store? It gets really detailed, and we can pick up quite a comprehensive profile of that person that is entering your establishment, and you can start drilling in and say, okay, I want to understand more. I'm thinking of bringing game caps into my store, how many people were in caps of this type, and you can really start drilling down and understanding that level of detail.

And one of the modules that has come out of Viana is at the moment called Sami?

Stephen: Yup. In fact, we started this project prior to COVID. It's an interesting story. I was sitting in one of our offices, and being from Melbourne, I was there quite late and the cleaners came in. And they came in, checked in, sat at the conference table, cleaned that table. They were there for two hours, emptied the bin, and left. And I'm thinking, there has to be a better way to understand what's being cleaned, what's being done. How do we go away from this clipboard on the side of a wall saying this has been cleaned, when we don't know if it's been done? So we started that project and we got the provisional patent for it, and then COVID hit and we said, okay, this is ideal for COVID.
What it essentially does is that it can plug into any camera system, or digital camera system, or you can use it with a USB camera if you choose to, and it looks at hand motion, distances, body distances from objects. And what it starts to do is, for example, if you have a conference room, you can highlight a table or highlight those areas, and it will start self-learning the digital structure or framework of that room and it'll start monitoring touchpoints. So I might say, "After each conference, I want an SMS to go to X person to go clean it." So what would happen is that person gets an SMS (or Messenger or any type of message), walks into the room, accepts it, and the camera will look for the hand motions showing that it's been cleaned, and it will show the hotspot areas that people were engaged with prior to cleaning. So you can really take any inanimate object and point these cameras towards it and set a threshold. You might say, after three interactions or people standing nearby, we want this cleaned, and you can even set a range for hands or a range for airborne, that is, if someone's coughed in that area. You might want to set a meter range around that individual going in, and not only will it encourage you to clean, but it will record a complete digital manifest of that. So you'll get that pop-up, you'll engage with it, you'll clean it. It will monitor all the hand motions. We don't keep any details of faces. We've done a lot of training on what a cleaning motion is, and it will send you an image of the hotspot areas, and if you've cleaned those hotspot areas, it'll send you a notification saying you're done, and it will keep a central digital manifest of it all.

So I think that's interesting for the business environment, but I would imagine where it could get really interesting would be in things like food processing environments, where they're worried about Listeria outbreaks and everything else, where you've got to have cleaning compliance, versus the boardroom table. Yes, it should be clean, but it's probably not the end of the world if it wasn't.

Stephen: That's right. We're getting companies coming to us in all sorts of spaces around this. Food preparation areas, pharmaceuticals. We have an interesting one right now, a very large spectacles retailer, and what they're doing right now because of the COVID situation is every hour, they have two people in-store, retail associates, cleaning every single spectacle in the place. So they're using us to have focus areas. So the cleaning can be more frequent, but less broad. And in fact, you can have triggers, so you can even use it on any kiosk, doesn't matter what operating system, what OS. We have a module that sits on the kiosk and can monitor touches, and it doesn't require a camera, and it will send you information saying this kiosk has hit a threshold. We're working with an airport right now, and the first thing it would do is, if that kiosk hit a threshold, it will shut down that kiosk and encourage you to go to the next kiosk until someone can clean it, and as you go into that cleaning mode, it will show you the impressions and all the hotspots where most of the touches were. And if you're using a virtual eraser, it will not let you finish that process until you've rubbed all of it out, and it will even ask you to say, please clean the PIN pad, please clean this and that, as a digital checklist. And that's rolling out this month as well.
That's part of the Sami suite. So, if I'm charged with cleaning these things (and please God, I don't want that job), you would see a screen that has what amounts to a heat map on it that's visualizing what in particular needs to be cleaned, and as you wipe that down, the heat map colors are changing or the heat map is going away and it's going back to the normal screen. Is that a good way of describing it?

Stephen: That's correct. And the main point is the digital manifest, so the person that's cleaning it will have to be standing right in front of it. They'll click on their phone, they could have got a message of some sort, and then it will go into that mode, and you can associate that person with that compliant cleaning regime. The first thing it would do is make you clean the whole surface, and then it would make you focus on areas and have that sort of visualization, so that way you can have a deeper clean. And there's some AI behind it: how many touches or how long the engagement is versus how much you have to clean up, based on the type of solution. So if it's Clorox, it might say, this is how long you need to do it. Customers can vary that in the dashboard. So they can say, it's this many impressions, or I want this cleaned for X minutes, or I want it to not allow customers to use it. And we've just had a customer that wanted to add face masks to that, so it stops the kiosk for anyone signing into that kiosk or using that kiosk unless they have a mask. They just added two Meld modules together and created that scenario.

Yeah. I worry about a lot of these companies that are coming out with hardware products that are squarely focused on dealing with pandemic issues right now, because it's going to take longer than most people expect, but this problem will go away, and I wonder if these products will be relevant at that point, versus what you're describing, which is great in the current health safety environment, but it's going to work for a whole bunch of other reasons down the road in a whole bunch of other different scenarios.

Stephen: Exactly. So we originally started these concepts because a lot of customers use our touch screens for food or food ordering. E. coli is very stubborn and it stays on surfaces for a long time, so we originally started this for things such as Listeria, E. coli and general cleanliness and bacteria. And we're very lucky to have one of our large teams, AOPEN at the time, in Taiwan, because they see a lot of work around this space, and Taiwan seems to be leading the world around this space. They seem to be in the best state for COVID. So we've got a lot of feedback from them on this, and having a purely hardware solution to solve this problem, which may or may not be short term... it really needs to be multi-use and have a broader purpose than just this, and really that's what we're focused on. It's good housekeeping. It's allowing you to create a digital manifest and to make sure it's actually done, because we actually did a research piece before we started. We're working with a very large building management company, so they own buildings in the city, and then they go lease them back out and manage the buildings. And they didn't actually know compliance. The only method of compliance they had was when the cleaner badged in and badged out; that was it. They didn't know if anything was done, which could be dangerous in this environment.
And also, just generally, you want to know, if you're paying for that cleaning service, that it's actually being done.

Yeah. Where's the company at, in terms of working its way into the marketplace? You've hired Raffi Vartian. I believe you have a guy down in Dallas or Austin. Where are you at, and how do companies engage with you? Are you working through a channel, is it a direct connection? How do people find meldCX and get the conversation going?

Stephen: Yeah. So we started off in Australia, so we've got quite a big Australia team and some resources in the Asia Pacific region. We decided to kick off the US because, one, we have quite a few customers that are in flight, so you'll see, by the end of this year, them going live with some significant rollouts. So we hired two people initially. That is Edward Doan, he's actually ex-Chrome, he was part of the core Chrome team and led parts of that team, and he's come across to lead the meldCX business in the US. And Raffi Vartian. And we tend to look at it in an interesting way, in that, if the project is unique and we believe that projects can come down the pipe and can be used by our partners, we will engage the customer directly for a period of time. So for example, in the first version of Sami, we worked closely with our customers, who allowed us into their environments to create training data and do that type of thing, and then we'll make that sort of publicly available and work with partners to deliver to those clients. So we are a partner-centric business. We tend to use ISVs and SIs of all types. We do work with some agencies, and some consultancy firms as well, but we do have some multinational, bleeding-edge type use cases that we will engage in directly and then make those facilities, or even sometimes the sample code, available to our partners so they can go and modify it and do it for their customers.

Okay, so to find you guys, is it meldCX.com?

Stephen: Yup. meldCX.com.

Perfect. All right, Stephen, thank you so much for taking some time with me from all the way over there in Australia.

Stephen: Yeah, thanks for your time.
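Stephen's description of Sami reduces to a simple event loop: count interactions per surface, notify a cleaner once a threshold is crossed, and log every cleaning to a digital manifest. Below is a toy sketch of that logic; every name and the threshold value are hypothetical, since meldCX's actual implementation is not public:

```python
from collections import defaultdict
from datetime import datetime

CLEAN_THRESHOLD = 3  # e.g. "after three interactions ... we want this cleaned"

class SurfaceMonitor:
    """Toy version of the touch-threshold logic described in the interview."""

    def __init__(self):
        self.touches = defaultdict(list)  # surface id -> touch coordinates
        self.manifest = []                # the "digital manifest" of cleanings

    def record_touch(self, surface, x, y):
        """Log a touch; fire a cleaning notification past the threshold."""
        self.touches[surface].append((x, y))
        if len(self.touches[surface]) >= CLEAN_THRESHOLD:
            self.notify_cleaner(surface)

    def notify_cleaner(self, surface):
        # The real product would send an SMS/Messenger message instead.
        print(f"clean {surface}; hotspots: {self.touches[surface]}")

    def mark_cleaned(self, surface, cleaner):
        """Record who cleaned what and when, then reset the hotspot map."""
        self.manifest.append(
            (datetime.now(), surface, cleaner, len(self.touches[surface]))
        )
        self.touches[surface].clear()

monitor = SurfaceMonitor()
for pos in [(10, 20), (12, 22), (11, 19)]:  # three touches on one kiosk
    monitor.record_touch("kiosk-1", *pos)
monitor.mark_cleaned("kiosk-1", "associate-7")
```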

IoT Dev Chat
IoT Dev Chat: The Insight.tech Podcast

IoT Dev Chat

Oct 4, 2019 · 23:02


Industrial environments are tough on vision hardware. To ensure reliability, you must design for low power. But what does that mean for performance and cost? Find out in this conversation between Johnny Chen, Solutions Architect at OnLogic, and Kenton Williston, Editor-in-Chief of insight.tech.

Embedded Insiders
The AI Race Is Heating up for Embedded Chipmakers

Embedded Insiders

Dec 27, 2018 · 8:29


As the calendar flips to 2019, embedded processor companies continue to ramp up their AI and machine learning offerings. Two leading processor vendors are doubling down on different architectures to capture these sockets, with Intel betting on Movidius vision processors and (formerly Altera's) FPGA technology and NVIDIA continuing to deliver GPU-based solutions. Tune in as the Embedded Insiders recap recent announcements from the two giants, and look ahead at what to expect from AI processors in the future.

Practical AI
UBER and Intel’s Machine Learning platforms

Practical AI

Nov 19, 2018 · 28:49 · Transcription Available


We recently met up with Cormac Brick (Intel) and Mike Del Balso (Uber) at O’Reilly AI in SF. As the director of machine intelligence in Intel’s Movidius group, Cormac is an expert in porting deep learning models to all sorts of embedded devices (cameras, robots, drones, etc.). He helped us understand some of the techniques for developing portable networks to maximize performance on different compute architectures. In our discussion with Mike, we talked about the ins and outs of Michelangelo, Uber’s machine learning platform, which he manages. He also described why it was necessary for Uber to build out a machine learning platform and some of the new features they are exploring.
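The episode doesn't spell out Cormac's techniques, but one staple of porting networks to embedded targets such as Movidius VPUs is 8-bit quantization of weights. The numpy sketch below is purely illustrative of the general idea (affine quantization), not Intel's actual toolchain:

```python
import numpy as np

def quantize_affine(w, num_bits=8):
    """Map float weights onto uint8 with an affine scale/zero-point,
    a common scheme when shrinking models for edge accelerators."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = int(round(qmin - w.min() / scale))
    q = np.clip(np.round(w / scale + zero_point), qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized values."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_affine(w)
print("max abs error:", np.abs(w - dequantize(q, s, z)).max())
```

The reconstruction error stays within half a quantization step, which is why 8-bit weights usually cost little accuracy while cutting memory and bandwidth by 4x versus float32.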

Changelog Master Feed
UBER and Intel’s Machine Learning platforms (Practical AI #21)

Changelog Master Feed

Nov 19, 2018 · 28:49 · Transcription Available


We recently met up with Cormac Brick (Intel) and Mike Del Balso (Uber) at O’Reilly AI in SF. As the director of machine intelligence in Intel’s Movidius group, Cormac is an expert in porting deep learning models to all sorts of embedded devices (cameras, robots, drones, etc.). He helped us understand some of the techniques for developing portable networks to maximize performance on different compute architectures. In our discussion with Mike, we talked about the ins and outs of Michelangelo, Uber’s machine learning platform, which he manages. He also described why it was necessary for Uber to build out a machine learning platform and some of the new features they are exploring.

Windows Insider Podcast
What’s Up with Machine Learning?

Windows Insider Podcast

Apr 25, 2018 · 33:30


Everyone's favorite new buzzword is "machine learning" (or "ML"), but what exactly is ML and how is it already transforming everyday life and business? We chat with Microsoft engineers about machine learning and the significance of Windows ML, a new AI platform for developers available through the upcoming Windows 10 update. We cover how ML is changing the field of app development and how developers can get started with Windows ML. Finally, a Windows Insider gives us a tour under the hood of his app and discusses how machine learning is baked into the app's evolution.

Episode transcription

JASON HOWARD: Welcome to the Windows Insider Podcast. I'm your host, Jason Howard, and you're listening to Episode 14, What's Up with Machine Learning? In this episode we chat about ML, its future influence on app development, and the impact of Microsoft's recent Windows machine learning announcement. Here in the studio with our first guests is Dona Sarkar.

DONA SARKAR: Hi. I'm Dona Sarkar, Chief Ninja Cat and head of the Windows Insider Program. I'm here today in the studio with some special guests from Microsoft to talk all about everyone's favorite new buzzword, machine learning. I would love for our guests to introduce themselves. Clint, would you like to go first?

CLINT RUTKAS: Hi. I'm Clint Rutkas. I am a Windows developer community champion. So if you guys have APIs you want in the system, please talk to me.

DONA SARKAR: Exactly. You'll see him on Twitter a lot talking about the Windows SDK. So for all of your Windows SDK needs, tweet @ClintRutkas. And then Lucas.

LUCAS BRODZINSKI: Hi. I'm Lucas Brodzinski. I'm the program manager lead of the Windows AI platform team.

DONA SARKAR: That is awesome. What does that mean?

LUCAS BRODZINSKI: Well, we're teaching the robots how to think. You know, we've added capabilities to Windows for people to do machine learning inference on the edge. So we're introducing the intelligent edge to Windows.

DONA SARKAR: That is really cool. Thank you for joining us.

LUCAS BRODZINSKI: Thank you for having me.

CLINT RUTKAS: I actually think it's even more than that. Think about it: we're adding machine learning, the ability for every Windows device, not just desktop devices, to be able to do machine learning. So I think the big question is, what is machine learning and why do we care?

DONA SARKAR: That's exactly the very first question I have for both of you, which is, let's go all the way back, back, back. What is machine learning and why is it different than AI?

LUCAS BRODZINSKI: Cool, totally. So the way to think about AI and machine learning is machine learning is a subset of AI. The whole concept of AI is you're trying to get a computer to act intelligently, kind of like a human would. So you can get a computer to do a function like a human would and get a response from the computer as a human would. Machine learning is a specific technique to try and do that. So for instance, if I'm having a conversation with you guys in real life, like I am right now, you know, I can read your facial expressions and I can kind of change my approach to the conversation based on the facial expressions you guys are giving me. So that's my intelligence. And we would love to teach computers to be able to react to human interaction in that way. One potential technique to go about doing that is emotion detection, which there are machine learning models to do.
However, machine learning is this technique towards building out this larger intelligence, which is AI.

CLINT RUTKAS: So I think the question is, why would you use machine learning? Let's say for whatever reason you want to build out a vegetable detector. Let's say I wanted to detect a carrot versus broccoli versus cauliflower. So what is a carrot? So would I do it based on color? So I have an if-statement that says, okay, well, if it's shaped kind of like a triangle, if it's orange and it's roughly this long in the photo, that's a carrot. Well, there's purple carrots.

DONA SARKAR: Right.

CLINT RUTKAS: So now I have to add in an additional if-statement there. And then, okay, well, now, what's the difference between a carrot and broccoli? That's a bit more easy. But what's the difference between broccoli and cauliflower? If you ask a kid that doesn't know, has never seen them, they might go, like, this is a baby version of that. So all those things, once you start having to factor in more and more and more, that code becomes extremely unwieldy, and then that's when machine learning comes in, because now you can start giving -- start training your model: this is exactly what a carrot is. Here are all the different examples, all the different images we have of carrots, from different angles, different viewpoints, different coloring, different variants. Same thing with broccoli and cauliflower. And then magically now we can start getting high confidences with that model, and all I had to do was call a couple lines of code.

LUCAS BRODZINSKI: You really hit the nail on the head there. There are some problems we face as developers where, you know, our human intuition can solve that problem very, very easily and quickly. However, when we sit down to write code to fix that problem, it gets a little hard. So, you know, to write the code to detect the difference between two different types of apples can get pretty challenging. The cool thing about machine learning is, like you said, it creates this model that abstracts that problem away from the developer, so the developer can feed a model an input, an image of an apple. The model does a lot of computational work to figure out the small nuance differences between different species of apples based on all the training dataset that went into making that model, and the developer just gets an answer of what type of apple it is.

DONA SARKAR: So just to cut you both off for a second rudely, what is "the model?" You guys are saying, train the model, you know, give the developer the model. What is that?

CLINT RUTKAS: Okay, so I think maybe a good thing we should probably talk about with machine learning is maybe how it works and what are the big components. So you have an engine, the inference engine, you have, I'll say, the training system, and then you have the model. The model is actually what is kind of evaluated. So if you said, is this an apple, you give the system the model of what is an apple. Is that a good way to think about it?

LUCAS BRODZINSKI: Yeah, the best way to think about it is, given this large set of data, you can train on that data, which basically means you apply a lot of math to it, and you come up with an algorithm that notices patterns, that can solve functions. And all of that is contained within this model. So the model is the thing that describes the data that you fed it during training.

DONA SARKAR: I see, okay.
CLINT RUTKAS: And then you have the inference engine, and the reason why it's called an inference engine is because we're not 100 percent confident. So we're inferring, is this thing an apple? It may be an apple, we may be 99 percent sure it's an apple, but we're not 100 percent sure. So it's not a definitive answer, but you have to have a confidence that, yes, if it's, you know, let's say above 80 percent, we're pretty positive this is an apple. Speech recognition is a great example of this, where you may say, turn on the lights. It's going to give you a fairly high confidence rating if the model properly interpreted your natural language, but it's never 100 percent sure.

DONA SARKAR: That's right. And do you feel like right now machine learning has already taken over our lives a little bit, that it's already kind of infiltrated tools and services that we use on a day-to-day basis? Do you feel that that is true? And if so, what are some examples that normal people will understand?

LUCAS BRODZINSKI: Yeah, totally. So, you know, the most recent example is if you look at the Windows Photos app, you can actually go into the Photos app and type in what you want to search for. So you can type in "dog" into the search field, and suddenly, all of your photo albums will be searched for what the computer thinks is a dog inside the picture. And as a user, you're presented with all the pictures that have a dog in them. And that's using machine learning to do image classification and find specific things in images, in this case being a dog.

DONA SARKAR: That's pretty awesome.

CLINT RUTKAS: Yeah, think about all the speech recognition that is in the world now. So if you have, let's say, an Amazon Echo Dot or a Harman Kardon Cortana device, if you talk to it, that's machine learning. You have machine learning built directly into Windows. If you search as well, that's all machine learning. If you go to a search engine, that's machine learning. There's tons of areas in our lives where we have it, we just don't realize what it's called yet.

DONA SARKAR: So we think of it more like computing rather than machine learning?

CLINT RUTKAS: Yeah. I mean, machine learning, I view it as much more of a topic programmers care about, because it either benefits or hurts us the most when it comes to programming what we need to program. As an end user you just want your answer. It's like going to a restaurant. You don't care how the food is made, as long as it's made sanitary, but you get the food and you're happy. You don't care if it's one person making it or 20 people making it, you just get your yummy food.

DONA SARKAR: Okay. So that phrase, machine learning, is quite buzzy these days. Everyone thinks they're working on machine learning or wants to work on machine learning. And I think it was the most used term in job descriptions last year. That and AI. So why do you think people, who may not be technical, are so excited about this phrase? What do you think is the potential going forward? We know it's been used a lot, but how can it be used to transform all these other somewhat old school industries, like think hotel, transportation, manufacturing, et cetera?

LUCAS BRODZINSKI: Sure. So, you know, I think we're living in this time where you really have two massive things coming together to kind of fuel all this. One of them is data. There's a lot of data out in the world. And the key thing for machine learning is you need a lot of data to be able to rationalize over.
The other part of it is having access to a lot of compute. The process of training a machine learning model can be quite rigorous from a computation perspective. And we're at a point where these two things, having the data and having the compute power, have come together. And when you think about sort of all the cool end user scenarios that are possible, I mean, wouldn't it be great if we could have systems that, based on sort of the weather forecast, could predict what kind of hotel availability there may be in a specific city? That's just one example of how you can make sense of all this data that's around us in a way that could benefit a user.

CLINT RUTKAS: So think outside just the user, think about how this could benefit humanity. So with machine learning, think about growing crops, where you can directly use machine learning and models to determine, is this a good area for that crop, is something bad happening, should we create targeted pesticide usage versus just blanketing everything. Or disaster recovery potentially. Like there's so many different areas where you could do things smarter and faster with machine learning. Manufacturing is another great example. We showed this at Windows Developer Day. Imagine you're building out a circuit board, and for whatever reason something hiccoughs and a single transistor is skipped. With machine learning you can quickly look at it and say, oh, this is missing. And it's the same model then that would detect if a capacitor was missing, for the most part. I'm looking at Lucas to verify.

LUCAS BRODZINSKI: Yeah, no, that's exactly right.

CLINT RUTKAS: You can use that same thing, and then now, rather than have to do a recall of, you know, 100,000 units, you caught it before it even shipped out.

DONA SARKAR: That's right.

LUCAS BRODZINSKI: Yeah, and, you know, to build on that example, there are cases where making sense of the data that's available today requires a lot of specialized expertise in an area.

DONA SARKAR: That's right.

LUCAS BRODZINSKI: And sometimes, that expertise is not always available. With machine learning, what you can do is allow the computer to make sense of all the data that a human expert would have accumulated over years, and make some predictions. You know, the hope of machine learning is to create a model that is accurate enough to sort of mimic what a human would have done in that situation. And that's the really cool part about it, too, because you're potentially unlocking a lot of scenarios where we just don't have enough human experts to do something, and the machine could help in those cases.

DONA SARKAR: That news article that just came out, like the farmers in India who are figuring out how to grow crops more efficiently using machine learning, because they definitely don't have the computational expertise to look at petabytes of data on crop growing, so they've been using machine learning to do that, I thought that was such a cool story.

CLINT RUTKAS: Yeah.

LUCAS BRODZINSKI: Yeah, totally.

DONA SARKAR: That applies in like every country in the world, agriculture as a thing, so yeah. Okay, so recently, Microsoft, we made a big announcement about the next Windows 10 update and machine learning. Do you mind sharing with our listeners what the announcement was?

CLINT RUTKAS: So in Windows 10 Version 1803, Windows Machine Learning is built in. So that means every system that is running version 1803 will have machine learning built in.
And it smartly takes over. If you're on a GPU, it will leverage the GPU. If your device only has a CPU, it will only leverage the CPU. As a programmer you also have some toggles so you can pick and choose. This also runs on basically any system -- correct me if I'm wrong here, Lucas -- that runs 1803; it will just work.

LUCAS BRODZINSKI: Yeah, and the cool thing about it is what we've announced that's going to ship in our next major update is a preview that solves a bunch of problems for developers. So historically, when a developer has approached machine learning problems, there were a couple of barriers to entry that made the process a little hard. So first, as a developer you would have to figure out, hey, I have this model file that came from somewhere. And that somewhere could have been one of a handful of different training frameworks. And each one of them had its own sort of file format associated with it. And the very first task you would have to do as a developer is to ask, well, given this model, I need the corresponding evaluation engine that ships with my software to be available to evaluate this model. With Windows ML we've taken that pain point away, because every single version of Windows has Windows ML in it and is able to evaluate that model. The other problem was having this handful of different training frameworks and different formats meant that as a developer you had this giant format issue of, hey, there's, you know, six-plus different formats. So, Windows ML has the ability to take ONNX as a model input format. ONNX is something that we're working with industry partners to standardize as the exchange format for ML models. So as a developer that problem's gone away, too.

CLINT RUTKAS: And we have conversion tools to get your existing models into ONNX as well.

DONA SARKAR: Oh, that's nice.

LUCAS BRODZINSKI: Yeah, exactly. And there are already frameworks that can produce ONNX natively as well. So Azure Machine Learning can output ONNX today. CNTK has the ability to save files as ONNX as well. And more frameworks will be coming online, and the converters are there. But thirdly, and you touched upon this point, Clint, as a developer sometimes I need extra computational horsepower in order to evaluate a model. I want to be able to use the hardware that's on my clients' machines. And previously, as a developer I would have to target hardware specifically and not in an abstract manner. So I would have to know what specific GPUs are available on my customer's machines and write code specific to those GPUs. With Windows ML we've abstracted that hardware problem, and as Clint said, we can do model evaluation on any DirectX 12 GPU or the CPU, and the developer can choose or let Windows decide which one to use.

DONA SARKAR: That's pretty cool.

CLINT RUTKAS: And what's even cooler is it's built to be future proof, I guess future proof with quotes. So we announced this at Windows Developer Day: it will also work on a VPU.

LUCAS BRODZINSKI: Right, so what we want to do is, we recognize there's a bunch of new ML silicon out in the world that's not exactly a GPU. But we want to be able to talk to the silicon in a way where a developer doesn't have to make this decision about, well, how do I talk to that hardware specifically.
So at Windows Developer Day we showed an early engagement with Movidius, which is Intel's vision processing unit, to be able to do evaluations using this driver model that we're working on in order to bring these devices into Windows. CLINT RUTKAS:  So imagine in the future you have a device that has one of these chips.  Windows ML will just leverage the best hardware you have available on your system. DONA SARKAR:  Right, without you having to do a bunch of extra work and learn this new thing.  Okay, that's cool. So machine learning technology is now in Windows, super exciting, but what made you guys on the team actually working on it decide to include it in the 1803 update? CLINT RUTKAS:  So we've been working on this for years in various different ways, in various different subsystems. So I think the better way to think about it is how long it takes to actually get a feature into Windows.  Windows is everywhere.  It's in servers, it's in desktops, it's in a plethora of devices.  So we've been working on features like this and many others, and it takes years to actually get here. So building out all the required pieces took a bit, and now it's finally in a state where we can ship it externally and allow developers to start getting their hands on it and really get their hands dirty, without us literally changing out the plumbing back and forth.  It's one thing for us inside of Microsoft to have to deal with some of this stuff; it's a totally different thing when an external developer has to deal with that kind of sausage making.  So now we feel that it's strong, it's in a shippable state, and we'd love to get feedback and have developers start using it. LUCAS BRODZINSKI:  Yeah, and to Clint's point, we've been doing this for a long time.  We've made a lot of investments in our cloud solutions around AI.  So Azure Machine Learning allows you to do machine learning training.  We have Cognitive Services that allow you to use prebuilt AI in the cloud.  And as Clint was saying, we've finally got it to a point where we needed to allow developers to make the edge intelligent as well and do some of these operations without necessarily being able to talk to the cloud. DONA SARKAR:  That's right.  That is awesome.  It sounds like this introduction of Windows Machine Learning is really going to change the game for app developers going forward. LUCAS BRODZINSKI:  Yeah, totally. DONA SARKAR:  So Windows Machine Learning is here, it's in the product by the time this podcast airs, any app developer can use it.  It's in preview, so that's to be noted.  But how do you foresee the field of app development changing as a result of introducing this technology? LUCAS BRODZINSKI:  Yeah.  Well, just imagine the intelligence that you can introduce to your app if you had the ability to recognize patterns without necessarily having to write all the code to do that. So for instance, if you could, given a camera input, realize that, you know, there are these two people standing right in front of me, they're wearing maybe a red shirt, and I know if they're wearing a red shirt they're probably a vendor at an event.  So maybe I want to provide them with some information about the event.  Imagine all the code that you would have to write if you were going to do that without machine learning.  So one of the exciting things is developers will be able to take on these much richer scenarios in a way that doesn't require them to write this code.  
Now, I think that's just going to unlock a giant cloud of creativity around how devs approach this space. CLINT RUTKAS:  And I think that's one amazing example.  The other amazing example to me is it allows developers to start doing AI computing on the edge.  And when we say on the edge, we mean the end user's device.  There are still times where you're going to have to go up to the cloud and leverage that big horsepower available in the cloud.  But as a developer you can't do everything in the cloud, because of the latency between calls.  Imagine you're dealing with a $30 million device that must have micro-millisecond precision.  I'm not sure if I just made up a term, but I just made up a term. DONA SARKAR:  That's cool. LUCAS BRODZINSKI:  And that roundtrip for going up to the cloud and back could be too big of a gap.  But there are also times where, hey, I might have to make a decision, my model locally is unsure of what's going on.  Then I can send that query up and leverage that big, rich horsepower of the cloud and get a much more definitive answer.  So you can start doing cost reductions and everything and just make the most of what you have available to you as a developer. DONA SARKAR:  That's really cool. LUCAS BRODZINSKI:  Yeah, totally.  I'll sum it up as doing intelligence on the edge: machine learning evaluations on the edge give you performance, they give you scalability, and they also give you flexibility.  I mean, there are going to be times where you want to be able to do machine learning evaluations, but you can't send your data to the cloud, whether it's due to customer preferences or due to no connectivity.  Having Windows ML allows you to do that on the edge in those cases where you couldn't do it otherwise. DONA SARKAR:  That is really, really awesome.  So these are all of the upsides and all the goodness.  Are there any unknowns or challenges that you two can foresee?  Radio silence on the radio. CLINT RUTKAS:  Okay, so I would say an interesting thing is it's a new skillset for people to start thinking about.  Some people may think of it as: okay, so I have this model.  This model, I didn't code it, I don't know what's in it, I don't know how to debug it.  But at the same time, to me as a developer, when I step back and think about it, I'm okay with that.  Because you can verify your inputs and your outputs.  You can also be sure, like, hey, I've done enough unit testing, I trust this thing.  Also, think about all the APIs you call where you didn't code that thing.  I got this external library from someone.  I didn't code it.  I can't directly debug it.  But I'm okay with that. To me it's the same concept; it's just another skill, another tool in your toolbox to make you a more productive developer. LUCAS BRODZINSKI:  Yeah, I think for me, where there's a challenge there's an opportunity, and I think one of the coolest aspects of this is we're going to see two communities that in the past may not have had the closest collaboration start getting really, really close together. And really what I'm talking about there is the data science community and the developer community. DONA SARKAR:  Ah, yeah. 
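A minimal sketch of that local-first, cloud-fallback pattern: evaluate on the device, and only pay for the round trip when the local model is unsure. EvaluateLocallyAsync is a hypothetical stand-in for a Windows ML call, and the endpoint URL is a placeholder:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class HybridClassifier
{
    static readonly HttpClient Http = new HttpClient();

    public static async Task<string> ClassifyAsync(byte[] image)
    {
        var (label, confidence) = await EvaluateLocallyAsync(image);

        // Confident enough: answer on the edge, no round trip, no latency cost.
        if (confidence >= 0.85f)
            return label;

        // Unsure: spend the round trip and ask the richer model in the cloud.
        var response = await Http.PostAsync(
            "https://example.com/classify", new ByteArrayContent(image)); // placeholder service
        return await response.Content.ReadAsStringAsync();
    }

    // Stand-in for a local Windows ML evaluation returning a top label and its score.
    static Task<(string label, float confidence)> EvaluateLocallyAsync(byte[] image) =>
        Task.FromResult(("apple", 0.60f));
}
```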
LUCAS BRODZINSKI:  And when you think about it, you know, historically the data science community has made these massive advancements in machine learning, and a lot of these advancements were geared at, hey, how do I get better accuracy out of a model, how do I create a new algorithm to do something that just wasn't possible before? And those are great. From a developer perspective you may have some other concerns to worry about.  So, for instance, you might be worrying about, well, how do I get an answer within some tiny threshold of time to make my app useful, and how do I do that in a way where, for instance, my install size is not massive? And I think you're going to start seeing these two communities come together and start cross-pollinating needs, wants, and desires, and together being able to both train and operationalize models; you're just going to see the space evolve hugely. DONA SARKAR:  That is amazing.  So you guys can be super honest, do I need to call Sarah Connor on the phone?  Are we going to be ruled by machine overlords? LUCAS BRODZINSKI:  You should always have Sarah Connor on speed dial. DONA SARKAR:  So guys, for a dev like me who has written UWPs and Win32s, how can I get started on machine learning the hell out of my app? CLINT RUTKAS:  So my opinion is, go to some of the galleries with models already ready.  Here's how easy it is to start coding once you have an ONNX model, and you can download any model right now and convert it.  All you have to do is take that ONNX file and drag it into Visual Studio, into your UWP, and I believe also Win32. LUCAS BRODZINSKI:  We have Win32 and UWP APIs. DONA SARKAR:  That's cool. CLINT RUTKAS:  So you drag it into your solution, and it auto-creates the CS file for you.  From there you get basically your input, your output and your engine.  And basically, you load your model, you call evaluate and you parse your results.  It's basically three lines of code, really. LUCAS BRODZINSKI: Yeah, totally.  CLINT RUTKAS:  I made it sound really simple.  LUCAS BRODZINSKI: No, but you know what, it actually is that simple.  The great thing is, with the tooling that we added to Visual Studio, as a developer, if you have this ONNX file, you don't have to worry about what's inside of it.  We've done our best to handle all the nitty-gritty in a way where you're really just plumbing data types from your app to the data types that the model expects. My way of getting started actually uses some of our other tech that we have in the cloud today.  The easiest thing to remember is that there are three steps to using Windows ML.  You have to load a model, so that means you have to have a model.  You take some inputs from your application and you bind them to Windows ML.  And then you call evaluate. So how do you get that model?  My favorite way of getting a model is using customvision.ai, which is a service that Microsoft offers that lets you classify a bunch of images with labels and create a model, so you can then feed in new images and detect whatever labels you added to the images in the training set. Once you have that model, you bind your application data, whether, you know, it's a picture that you loaded or something from the camera, and you call evaluate.  
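Those three steps, load, bind, evaluate, look roughly like the sketch below. The fruit.onnx file and the "data" and "classLabel" feature names are hypothetical stand-ins for whatever your model (for example, a customvision.ai export) declares, and the type names come from the released Windows ML API, which differed slightly in the 1803 preview:

```csharp
using System;
using System.Threading.Tasks;
using Windows.AI.MachineLearning;
using Windows.Media;
using Windows.Storage;

public static class FruitClassifier
{
    public static async Task<string> ClassifyAsync(VideoFrame frame)
    {
        // 1. Load: read the ONNX model packaged with the app.
        var file = await StorageFile.GetFileFromApplicationUriAsync(
            new Uri("ms-appx:///Assets/fruit.onnx"));
        var model = await LearningModel.LoadFromStorageFileAsync(file);
        var session = new LearningModelSession(model);

        // 2. Bind: plumb an app-side image into the model's input feature.
        var binding = new LearningModelBinding(session);
        binding.Bind("data", ImageFeatureValue.CreateFromVideoFrame(frame));

        // 3. Evaluate: run the model and read back the predicted label.
        var result = await session.EvaluateAsync(binding, "fruit-run");
        return result.Outputs["classLabel"].ToString();
    }
}
```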
If I wanted to make an app that, going back to the apple example, detects different types of fruit, I could feed a bunch of images of various types of fruit, each labeled with what fruit is in the image, into customvision.ai, and it will go off and do all the training for me and just give me a model file that I can then go use in my application. DONA SARKAR:  That is cool.  So say, for your family, you could take pictures of all of them, label who they are, and then build some sort of family tree thing. LUCAS BRODZINSKI:  Exactly. DONA SARKAR:  That's really, really awesome. CLINT RUTKAS:  In all fairness, it's more than a couple of photos. LUCAS BRODZINSKI:  Family reunions will never be the same. DONA SARKAR:  Yeah, all of them. CLINT RUTKAS:  I'm going to need to take a photo of you from every angle.  That'd be great, yeah. DONA SARKAR:  Ah, okay, there you go. Well, you guys, it's Friday night and I know what I'm going to do.  I'm going to go home and ML the hell out of my UWP is what I'm going to do. LUCAS BRODZINSKI:  Awesome.  Love to hear it. DONA SARKAR:  Because we are cool like that. LUCAS BRODZINSKI:  That is the way to spend a Friday evening. CLINT RUTKAS:  I know. DONA SARKAR:  Clint and Lucas, thank you so much for being here and talking to Windows Insiders about ML. Many of them are coming to Build.  They're going to be insanely excited about this. CLINT RUTKAS:  I cannot wait for Build. DONA SARKAR:  I am very excited for Build. And I'm going to go and actually try to ML some stuff.  And when I get stuck, I know who to call. Thank you so much for being here. LUCAS BRODZINSKI:  Thanks for having us. DONA SARKAR:  Have a wonderful day. CLINT RUTKAS:  Thank you. DONA SARKAR:  Bye. JASON HOWARD:  As our previous guests have mentioned, machine learning is a pathway to achieving artificial intelligence, or machines that can make decisions like humans, only faster and more efficiently.  Machine learning is built into the AI of so many everyday tools.  For example, Cortana can chat with you about your favorite sports team because its chitchat function is continually learning from users how to talk about a variety of topics. How are developers from around the world tapping into the power of machine learning and AI?  Up next, we take a tour under the hood of an AI bot built by a Windows Insider from Kenya. Irving Amukasa was awarded an Insiders for Good fellowship from Microsoft for his innovative work with an artificial intelligence app. IRVING AMUKASA:  Yes! JASON HOWARD:  SophieBot answers user questions about sexual and reproductive health.  Anything you've been curious about but too embarrassed to ask, SophieBot will happily answer. Welcome to the show, Irving.  Thanks for joining us. IRVING AMUKASA:  Thanks for inviting me.  I'm really honored. JASON HOWARD:  So I'm going to ask you a tough one, because doing live demos is always a tricky proposition.  Is it possible to open SophieBot and ask her a question so we can hear her? IRVING AMUKASA:  I hope the sound is good.  Let's see if she can talk well. SOPHIEBOT:  Hi there.  I'm Sophie. What would you like to know? IRVING AMUKASA:  What is sexual health? SOPHIEBOT:  Good health is considered to be a state of complete physical, mental, and social wellbeing, and not merely the absence of disease or infirmity.  The principles of sexual health are the same.  Physical wellbeing means good health and hygiene for your genitals and related systems.  
Mental wellbeing means feeling okay about your sexual desires and needs and not guilty or depressed about them.  Social wellbeing means not facing social discrimination because of your sexual desires and needs or because of any problem that may result from unsafe sex.  The combination of all these factors makes for a sexually healthy you. IRVING AMUKASA:  You have the option to turn off that sound; before, it was text only, no sound. JASON HOWARD:  It sounded like she gave a very thorough answer. IRVING AMUKASA:  Yes. JASON HOWARD:  So let me ask you, what were the existing challenges that prompted you to design SophieBot? IRVING AMUKASA:  Yes, first things first: on this side of the world, it's awkward and hard to talk about sexual health openly.  Even asking a question is close to taboo.  And sexual health workers and centers aren't as friendly.  That was problem one.  Problem two is the lack of verifiable information out there.  So those two main problems helped us design SophieBot. JASON HOWARD:  So part of it was the stigma of actually asking those questions, but the other half of it is making sure that the answers you're getting are true and correct and will actually, you know, guide you in the right direction. IRVING AMUKASA:  Yes, that's it. JASON HOWARD:  Here at Microsoft we've found that a significant portion of user interactions with Cortana are actually like a social response or a silly joke type question.  Which is super interesting, because it shows users the human side of AI as a whole. In your view, why is being able to interact with a humanlike bot appealing, rather than using a digital encyclopedia or just a basic search engine? IRVING AMUKASA:  On SophieBot, not everyone asks us about sexual health.  Our most popular question, we learned, was people wanting to see Sophie's face.  So we also get those social types of questions.  It's inherent in our nature to ask a question, send out a message, and get feedback and build on top of that.  You can't do that with a blog, you can't do that with any other medium: directly, instantly sending out a message and getting actual feedback.  That element of communication is ingrained in us, and that's why messaging bots are big.  And messaging apps are popular because they know that one little secret. JASON HOWARD:  So real quick, for our listeners, I want to talk a little bit about the difference between AI and machine learning. IRVING AMUKASA:  Yes. JASON HOWARD:  As you know, in this episode we're talking about machine learning.  And as some of the other guests have discussed, the terms machine learning and AI, while they're sometimes used interchangeably, are actually very different things.  Machine learning is a particular method of achieving AI, which is of course allowing machines to have access to tons of data, using algorithms, and learning how to perform tasks rather than, you know, a developer hand-coding everything line by line. Can you talk some about how SophieBot uses machine learning to become better? IRVING AMUKASA:  So let me go back to AI in general.  SophieBot started off with older technology where you had to provide everything, what we call the Artificial Intelligence Markup Language.  It falls under something called rule-based AI.  But that wasn't enough to provide answers to our users, so we had to move up the learning curve.  Machine learning comes in two ways on SophieBot.  First is us getting insights from the questions people are asking.  
We don't know who you are, but we keep track of the questions you ask and the answers we provide for you.  So when users ask questions, it's insightful for us to know which topics are more prevalent and which is the most popular question. So point one is us finding out which is the most popular question.  We don't do a simple tally of each question, because people ask similar questions in different ways, so you can't just tally and manually count.  People have asked about STIs, or people have asked about HIV and AIDS. What we do instead is use a machine learning model that looks at the words in every single question, the frequency of those words, and how much they weigh in each question asked, so we can have a popularity graph from the most popular question down to the least popular question. So that's how we use machine learning specifically on SophieBot.  It isn't for answering questions; it's for getting insights into the questions that have already been asked. JASON HOWARD:  Yeah, so you're highlighting key words from what people say to make sure that, you know, even if somebody does ask it differently, you know how to respond appropriately.  But then, because people are very different in how they address topics, they may not use the right words to get the answer they're actually looking for.  So it sounds like you're going to use some of the machine learning to figure out what they're trying to ask, even if they're not asking the question the right way. IRVING AMUKASA:  Yes.  Also including typos.  If somebody doesn't know how to spell chlamydia or gonorrhea, machine learning points them in the right direction. JASON HOWARD:  So not only are you giving them the right answer, but you're helping take some of the human error out of it as well? IRVING AMUKASA:  Yes. JASON HOWARD:  That's brilliant. See, that's one of the fun things about this whole concept of machine learning and AI in general.  Even if we don't get it quite right, and we as humans are the ones who set up the constructs we're working in, we can still use this cloud-based learning and machine learning and the whole concept of AI to correct ourselves and make sure we're still going in the right direction. IRVING AMUKASA:  Yes. JASON HOWARD:  So as machine learning becomes more and more sophisticated in the future, what's your vision for the next evolution of SophieBot? IRVING AMUKASA:  Interesting.  So the next evolution of SophieBot is an end-to-end system that can take any dataset of questions and answers and be able to automate them.  That's the next evolution of SophieBot.  So rather than us elaborately designing full process flows or designing questions and answers, we go to someone who already has a huge dataset of questions and answers. JASON HOWARD:  So are you trying to take her further than answering questions about sexual health?  Are you trying to expand her beyond that?  Or are you trying to make her more adept and capable in the space she's functioning in currently? IRVING AMUKASA:  We're doing both.  In essence, that's our business model.  We don't make money from you coming to us to ask questions; we're funded to do that for the population.  But we are a business, and we are going to make SophieBot sustainable.  The goal is to build that model and be able to monetize that end-to-end model in other domains, not just sexual health. JASON HOWARD:  That's awesome.  
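As a toy illustration of that weighting idea (not SophieBot's actual code), the sketch below scores every word by how often it appears across all questions, then ranks each question by the total weight of its words, so differently phrased questions about the same topic land near each other:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class QuestionPopularity
{
    public static void Rank(IEnumerable<string> questions)
    {
        var tokenized = questions
            .Select(q => q.ToLowerInvariant().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
            .ToList();

        // Global frequency of each word across every question asked.
        var frequency = tokenized
            .SelectMany(words => words)
            .GroupBy(word => word)
            .ToDictionary(g => g.Key, g => g.Count());

        // Rank questions by the summed frequency of their words, most popular first.
        foreach (var words in tokenized.OrderByDescending(ws => ws.Sum(w => frequency[w])))
            Console.WriteLine($"{words.Sum(w => frequency[w]),4}  {string.Join(" ", words)}");
    }
}
```

A production system would add stemming, the typo tolerance Irving mentions, and proper TF-IDF or embedding-based similarity; this sketch only shows the frequency-weighting intuition.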
Well, Irving, I've got to say, thank you so much for taking the time to be here with us today. IRVING AMUKASA:  No, thanks a lot, and thanks for having me and have a nice day. JASON HOWARD:  Cheers, man. IRVING AMUKASA:  Bless you. JASON HOWARD:  That's a wrap for Episode 14.  Get the Windows Insider Podcast automatically every month by subscribing on your favorite podcast app.  You can also find all of our past episodes on the Windows Insider website.  Thanks for listening and until next time, Insiders. NARRATION:  The Windows Insider Podcast is produced by Microsoft Production Studios and the Windows Insider team, which includes Tyler Ahn -- that's me -- Michelle Paison, Ande Harwood, and Kristie Wang.  Visit us on the Web at insider.windows.com.  Follow @windowsinsider on Instagram and Twitter.  Support for the Windows Insider Podcast comes from Microsoft, empowering every person and every organization on the planet to achieve more.  Please subscribe, rate, and review this podcast wherever you get your podcasts.  Moral support and inspiration come from Ninja Cat, reminding us to have fun and pursue our passions.  Thanks, as always, to our program's co-founders, Dona Sarkar and Jeremiah Marble. Join us next month for another fascinating discussion from the perspectives of Windows Insiders.  END

The Leadership Podcast
TLP094: Sell The Problem, Not the Solution

The Leadership Podcast

Play Episode Listen Later Apr 18, 2018 48:22


Brian Caulfield is one of the most accomplished tech founders and venture capitalists in all of Ireland. A serial entrepreneur turned VC, he is Managing Partner at Draper Esprit, the leading European venture capital firm. Brian gives back to his community by acting as a private investor/advisor to a number of early stage technology companies. He talks with Jim and Jan about the culture of leadership in Europe and how it differs from the United States, the role of AI and innovation in creating a more fruitful landscape for leaders, and the importance of self-awareness and of selling the problem rather than the solution, and gives the traits he feels are most important to becoming a strong and successful leader.   Key Takeaways [4:51] Good technology is only a small part of the success in any business. If you are going after the wrong market opportunity or have the wrong team, that will be more influential than the strength of your technology. [6:48] Giving people a deep understanding of the problems and challenges gets more commitment to the solution. [11:06] Brian discusses the issue of fragmentation among startups in Ireland, and how it affects leadership. Ireland needs more organization and focus on its own indigenous innovators to create an environment for early stage companies. [23:44] One of the key tenets Brian teaches other emerging leaders is to develop their individual decision-making skills, and the importance of self-awareness. [28:06] As a leader it is quite important to give honest feedback, with evidence, about the situation. [31:21] The more examples people have of others successfully making the leap into entrepreneurship, the more apt they are to feel as though it’s possible for themselves.   [42:54] Brian cites Shay Garvey as one of his biggest mentors and inspirations as a leader. He fostered Brian both personally and professionally, and gave him a positive view of building a great business. [46:03] The five traits that mark a leader: market knowledge, focus and drive, passion and conviction, the ability to listen, and being charismatic and compelling.   LinkedIn @BrianCaulfield Website: DraperEsprit.com Twitter: @BrianCVC   Quotable Quotes Sell the problem, not the solution. The best businesses are built by people who have a passion for the problem they are solving. People must think through and fully understand the problems by themselves. Great leaders come up with their own pros and cons about a situation. Talent is universal. Opportunity is not so evenly spread around different locations around the world. Bio Brian Caulfield is an entrepreneur and venture capitalist. He is Managing Partner at Draper Esprit, the leading European venture capital firm, and is based in Dublin, Ireland. Prior to joining Draper Esprit, Brian was a partner at Trinity Venture Capital, where he sat on the boards of or led investments in AePONA, ChangingWorlds, CR2, SteelTrace and APT. Previously, Brian co-founded both Exceptis Technologies (sold to Trintech Group in November 2000) and Similarity Systems, a business-focused data quality management software company that was acquired by Informatica in January 2006. Brian’s Draper Esprit investments include Movidius, Datahug, RhodeCode, Mobile Travel Technologies and Clavis Insight. He also sits on the board of the Irish Times, Ireland’s leading daily newspaper.  He is a private investor/advisor to a number of early stage technology companies. Brian is a Computer Engineering graduate of Trinity College Dublin. 
He was the 2007 recipient of the Irish Software Association "Technology Person of the Year" award and has been inducted into the Irish Internet Association’s Hall of Fame. In 2010 he also received the Halo Business Angel Network’s Business Angel of the Year award. Brian is a former Chairman of the Irish Venture Capital Association.     Books Mentioned in this Episode   Labor 2030: The Collision of Demographics, Automation and Inequality

Commercial Drones FM
#058 - Intel's Drone Ecosystem with Anil Nanduri

Commercial Drones FM

Play Episode Listen Later Nov 8, 2017 39:03


Intel is a global, multi-billion dollar public company, and they have their hands in all areas of technology. As a legacy Silicon Valley juggernaut, they've found that innovation is key to survival. Anil Nanduri, GM of the Drone Group at Intel, joins Ian to explain Intel's strategy around drones. From acquiring a multirotor manufacturer (Ascending Technologies), a fixed-wing manufacturer (Mavinci), and a computer vision chip manufacturer (Movidius), to making their own entertainment drones, and even creating impressive technology like RealSense, Intel knows more than anyone how important it is to diversify and create a thriving drone ecosystem.

Intel Chip Chat
Movidius Myriad X: Computer Vision and Deep Learning at the Edge - Intel® Chip Chat episode 547

Intel Chip Chat

Play Episode Listen Later Aug 28, 2017 9:55


Remi El-Ouazzane, Vice President at Intel New Technology Group and General Manager of Movidius™, joins us to share his excitement about the Movidius™ Myriad™ X VPU (Vision Processing Unit), which launched on August 28th. Movidius joined Intel in late 2016 in order to further its mission to give sight to machines. In this interview, Remi highlights the capabilities of Movidius Myriad X for computer vision and deep learning inference in edge devices and forecasts the end-user applications that will be enabled by Movidius Myriad X. Remi also discusses the Movidius Neural Compute Stick, the world's first USB-based deep learning inference kit and self-contained AI accelerator. For more information on the Movidius Myriad X VPU, please visit http://movidius.com/myriadX. For more information on the Movidius Neural Compute Stick, please visit https://developer.movidius.com/.

Tech Café
Sac de nodes & hotchips : I.A. quelqu’un ?

Tech Café

Play Episode Listen Later Sep 22, 2016 68:54


News It was hot: Hot Chips 28. Food for thought: AMD wants to assert itself with Zen. NVIDIA: news about Parker; will a cousin of the PX2 automotive chip let Nintendo roll over its competitors? IBM flexes its muscles against Intel with its Power9. And ARM does too, with its ARMv8-A. And Intel? It details... Skylake. But it is backing research into inter-core communication. And it is acquiring Movidius, a designer of chips (the Myriad line) for AI. A.I. anyone? Google, for its part, has been using its "TPUs" for months. IBM has created a new type of component that behaves like an artificial neuron. And HP has its own approach too... Things are buzzing around PCM and memristors! Bag of nodes The "process shrink" brings its benefits everywhere, even at the entry level (NOTE: footnotes are a treat!), for example the Kirin 650 at 16nm FF+. So, 14nm or 16nm FF+? What exactly is a "node"? Chronicle of a death foretold: it's official, miniaturization ends after 2021. Keep Calm and Carry On: meanwhile, Intel is starting 10nm! TSMC is preparing its own for 2017, and even 7nm for 2018! Which will most likely not be the same as Intel's, planned for... 2022! Participants The components segment is prepared and developed by Guillaume Poggiaspalla. Presented by Guillaume Vendé.