Podcasts about Leap Motion

  • 94 podcasts
  • 111 episodes
  • 49m average duration
  • Infrequent episodes
  • Latest episode: Aug 9, 2024


Latest podcast episodes about Leap Motion

Voices of VR Podcast – Designing for Virtual Reality
#1410: Fostering Resonite’s Collaborative Maker Culture with Creator Jam Founder Medra

Voices of VR Podcast – Designing for Virtual Reality

Play Episode Listen Later Aug 9, 2024 119:50


Medra is the CEO and founder of Creator Jam, a non-profit that has been consistently organizing weekly and monthly collaborative-building events within the social VR platform Resonite (and originally within Neos VR) since June 2, 2019. Resonite's technological architecture allows content to be rendered and edited collaboratively in real time, which has led to a rich maker culture within Resonite, as embodied by the Creator Jam community.

Neos VR and then Resonite were both developed by Tomáš Mariančík (aka Frooxius). When I interviewed Mariančík in 2015 at GDC, I regarded him as one of the few "mad geniuses" of VR. He had placed 3rd in two game jams at that point: the Oculus & IndieCade 2013 VR Jam with SightLine, and the Leap Motion 2014 3D Jam with World of Comenius. He would go on to win 3rd place in the Oculus & Samsung 2015 VR Jam with Neos The Universe after participating in the Rothenberg Ventures River Accelerator at the invitation of Tipatat Chennavasin. When I interviewed Chennavasin in the spring of 2015, he talked about recruiting Mariančík for the Rothenberg Ventures River Accelerator:

"The mandate I had was, find awesome people in VR and support them and help them. I got to do that. SightLine: The Chair was one of my favorite demos of all time in VR. When I saw that, I was never so jealous or angry. I was like, 'This guy's brilliant, smarter than me. Oh my god! This is the demo I wish I was smart enough to think of doing.' And I fortunately found his contact. He was in the Czech Republic, Tomas. I sent them a Skype message and it was just this amazing thing where I had to explain what's a VC, what's an accelerator program, how can it help them, what are they doing, and they were just kind of shocked, a little suspicious. But fortunately, after a couple talks, we got them out here and it's just been amazing to see them, support them, and help them flourish as entrepreneurs, but also developers and innovators in the VR space."

After participating in the Rothenberg Ventures River VR incubator, Mariančík and fellow Solirax co-founder Karel Hulec started working on his deep vision of Neos VR. The following spring, Mariančík wrote a blog post titled "How VR became my day job because I didn't give up," which he described to r/Oculus as the "Frooxius origin story." He details how difficult it was to make it as a full-time VR developer. Despite his many successes in different game jams, he wasn't receiving much support from Oculus. Oculus always knew how to support game developers, but didn't quite know what to do with the more education-focused and experimental apps exploring the boundaries of perception that Mariančík was working on. And on top of it all, the depth of Mariančík's vision for the medium was probably 5-10 years ahead of anyone else's. He reflected on this in his origin-story post:

"The trouble is that Neos is a very complex and expansive idea and requires a lot of time to work through all the aspects of it and thorough consideration to make sure everything fits neatly together, because I needed to find a set of basic elements which are both very simple and elegant, but interact in so many ways that they provide extremely flexible, but also consistent system. I always know what to do from moment to moment, but there's just a lot of things to do. So I keep working for months, piecing the system and working through it all.
But before the system comes together, there isn't much to show for it. What I found most difficult is watching as everyone else is showing off their VR projects with quickly made solutions, but with a fraction of functionality, gaining attention, winning awards, participating at events, demoing in public, getting invited to VR shows and podcasts, while I'm piecing together my big vision in the shadows."

Disruptive CEO Nation
Episode 252: Accelerating Medical Advancements with Charles Fisher, CEO and Co-Founder of Unlearn.ai, San Francisco, CA USA

Disruptive CEO Nation

Play Episode Listen Later Jun 12, 2024 24:50


Charles Fisher is a pioneering figure in the application of artificial intelligence to medicine and healthcare. With a PhD in biophysics from Harvard and a robust background in machine learning research, Fisher believes that AI has the potential to revolutionize medicine by addressing complex challenges, and he pursues that with the handpicked team he has assembled at Unlearn.ai. He highlights the rapid advancement of AI technologies, while also acknowledging that the current systems are not yet all-encompassing but improve annually. Fisher emphasizes that high-quality medical datasets are crucial for training effective AI models and views the creation of large datasets as essential for unlocking significant improvements in healthcare.

Here are some highlights from our chat:
- AI advancements in medicine are underhyped, with rapid progress driven by machine learning technologies, better computers, and data accumulation.
- The medical field lags behind consumer applications in AI due to the need for comprehensive medical datasets to train models effectively.
- Digital twins in healthcare are used to create virtual replicas of patients, improving outcomes and personalizing treatments.
- AI technology can predict outcomes in clinical trials, reducing the need for placebos and making studies more efficient and aligned with patient needs.
- AI has the potential to accelerate medical research, reduce drug development costs, and lead to significant improvements in solving medical problems within the next 5-10 years.

Charles Fisher is the CEO and co-founder of Unlearn.ai. Charles is a scientist with interests at the intersection of physics, machine learning, and computational biology. Previously, he worked as a machine learning engineer at Leap Motion and a computational biologist at Pfizer. He was a Philippe Meyer Fellow in theoretical physics at École Normale Supérieure in Paris, France, and a postdoctoral scientist in biophysics at Boston University. Charles holds a Ph.D. in biophysics from Harvard University and a B.S. in biophysics from the University of Michigan.

Connect with Charles: Website: https://www.unlearn.ai/ LinkedIn: https://www.linkedin.com/in/charleskfisher/

Connect with Allison: Feedspot has named Disruptive CEO Nation one of the Top 25 CEO Podcasts on the web, and it is ranked the number 10 CEO podcast to listen to in 2024! https://podcasts.feedspot.com/ceo_podcasts/ LinkedIn: https://www.linkedin.com/in/allisonsummerschicago/ Website: https://www.disruptiveceonation.com/ Twitter: @DisruptiveCEO

#CEO #AI #Tech #Enterprise #startup #startupstory #founder #business #businesspodcast #podcast

Learn more about your ad choices. Visit megaphone.fm/adchoices

XCrossing
ep73 Final Fantasy XVIの触覚表現はスゴいという話から、ハードを生かすソフトウェアの話で盛り上がる

XCrossing

Play Episode Listen Later Mar 26, 2024 44:12


Three hosts who have been drawn to technologies that transform "experience," such as hand tracking and haptics, and who have tried out all kinds of gadgets. In the second half, the conversation turns to the software that is indispensable to the evolution of gadgets.
01:07 SHURE's MOVEMIC wireless lavalier mic looks promising
03:00 No receiver needed, connects directly to a smartphone, and picks up the speaker's voice even in noisy environments
09:01 Use the receiver when shooting or recording with a DSLR or PC
12:51 What happened when we bought and tried the once-hyped Leap Motion...
18:09 Hand-tracking accuracy, feedback, and sound
19:16 A researcher explains just how impressive the haptic design of Final Fantasy XVI is
20:18 Sound engineers tuning the haptics
21:57 A definition of haptic expression: something that evokes past memories, experiences, and bodily sensations
23:31 Improving the software dramatically changes what the hardware can express (yet nobody invests there)
25:22 Does software lack a "tactile feel"?
27:25 What software makes possible vs. people who assume the hardware is doing it
29:39 A researcher who holds a game controller to his ear to "listen" to haptics
31:15 To act directly on humans, everything has to become analog
32:34 Japanese companies whose UX doesn't evolve because they lag behind in software?
36:37 Robots, haptics, and passing on skills
40:35 The amazing evolution of sushi robots
Links to topics covered in this episode: SHURE MOVEMIC, DJI MIC, Leap Motion 2, the story of how impressive FF16's haptics are.
Three people working in the tech industry discuss topics in technology and creativity, exchanging points of view. Takuya Oikawa @takoratta: expert in product management and building product development organizations (self-introduction episodes ep1, ep2). Nobuhiro Seki @NobuhiroSeki: jack-of-all-trades doing startup investment in New York, USA (self-introduction episode ep52). Mika Ueno @mikamika59: freelancer working in marketing and product management (self-introduction episode ep53). Official X: @x_crossing_ https://x-crossing.com

the artisan podcast
S3 | E3 | the artisan podcast | eros marcello | demystifying AI

the artisan podcast

Play Episode Listen Later Oct 22, 2023 25:23


www.theotheeros.com LinkedIn | Instagram | X

Eros Marcello is a software engineer, developer, and architect specializing in human-interfacing artificial intelligence, with a special focus on conversational AI systems, voice assistants, chatbots, and ambient computing. Eros has been doing this since 2015, and even though the rest of us laymen in the industry are hearing about AI everywhere today, for Eros this has been something he's been passionately working on for quite a few years. Super excited to have him here to talk to us about artificial intelligence and help demystify some of the terminology that you all may be hearing out there.

Katty: I'm so excited to welcome Eros Marcello to this conversation to learn a little bit more about AI. He is fully well versed in it and has been working in AI since 2015, when it was not even a glimmer in my eye, so I'm glad to have somebody here who's an expert in that space. Eros, glad to have you here. I would love to just jump into the conversation with you. For many of us, this buzz that we're hearing everywhere sounds new, as if it's just suddenly come to fruition. But that is clearly not the case: it's been around for a long time, and you've been involved in it for a long time. Can you take us, as a creative, as an artist, as an architect, as an engineer, through your genesis? How did you get involved and how did you get started? Let's just start at the beginning.

Eros: The beginning could be charted back, sequentially, to working in large-format facilities in, surprise surprise, the music industry, which was the initial interest and was on the decline. You'd have these alternate audio projects, sound design projects, that would come into the last remaining, especially in the Northeast and SoCal areas, the last era of large-format, analog-based facilities with large recording consoles and hardware and tape machines. I got to experience that, which was a great primer for AI for many reasons; we'll get more into that later. So what happened was that you'd have voiceover coming in for telephony systems, and they would record these sterile, high-fidelity captures of voice that would become the UI sound banks, or be used for speech synthesis engines for call centers. That exposure to what was to come with voice tech, the call center world, really started shifting my gears toward what AI and machine learning were and how I might fit into it. Fast forward, I got into digital signal processing and analog emulation, making high-caliber tools for Pro Tools, Logic, Cubase, Mac and PC for sound and music production, specifically analog circuitry emulation and magnetic tape emulation "in the box," as it's called. That gave me my design and engineering acumen. Come 2015/2016, Samsung came along and said, you've done voice-over, you know NLP, machine learning, and AI, because I had studied it, acquired the theoretical knowledge, and had an understanding of the fundamentals. I didn't know where I fit yet, and then they're like, so you know about this, plus you're into voice, plus you have a design background from the software you worked on. I worked on the first touchscreen recording console, called the Raven MTX, for a company called Slate Digital.
So I had accidentally created the trifecta required to build what they wanted, which was Bixby, Samsung's iteration of Siri for the Galaxy S8, and they wanted me to design the persona… and that, as they say, is history. Samsung Research America became my playground. They moved me up from LA to the Bay Area and that was it. It hasn't really stopped since; it's been a meteoric ascension upward. They didn't even know what to call it back then. They called it a UX writing position, but UX writers don't generate large textual datasets, annotate data, and then batch- and live-test neural networks. Because that's what I was doing, I was essentially doing computational linguistics on the fly. And on top of it, in my free time I ingratiated myself with a gentleman by the name of Gus, who was head of deep learning research there, and because I just happened to know all of these areas that fascinated me in the machine learning space, and because I was a native English speaker, I found a niche where they allowed me to not only join the meetings, but help them prepare formalized research and presentations, which only expanded my knowledge base. I mean, we were looking into really cutting-edge stuff at the time: AutoML, hyperparameter tuning and ParamILS, and things in the realm of generative adversarial neural networks, which turned me on to the work of Ian Goodfellow, who was an Apple employee at the time and has since gone back to Google DeepMind. He's the father of generative adversarial neural networks, he's called the GANfather, and that's really it, the rest is history. I got into Forbes when I was at Samsung, and my Hyperloop team got picked to compete at SpaceX, so a lot happened in the space of maybe 90 days.

Katty: You were at the right place at the right time, but you were certainly there at a time when opportunities that exist today didn't exist then, and you were able to forge that. I can also see that there are jobs coming up in AI that don't exist today. It's just such an exciting time to be in this space and really forge forward and craft a path based on passion, and yours clearly was there. So you've used a lot of words that are regular nomenclature for you, but for some of the audience may not be. Can you take us through… adversarial? I don't even know what you said, adversarial…

Eros: Yes, generative adversarial neural networks. A neural network is the foundational machine learning technique, where you provide curated samples of data, be it images or text, to a machine learning algorithm, a neural network, which is trained, as it's called, on these samples so that when it's deployed in the real world it can do things like image recognition, facial recognition, natural language processing, and understanding. It does it by showing it; it's called supervised learning, so it's explicitly hand-labeled data, you know, this picture is of a dog versus this is a picture of a cat, and then when you deploy that system in production or in a real-world environment it does its best to assign confidence scores, or domain accuracy, to whether it's a cat or a dog. You take generative adversarial neural networks, and that is the precipice of what we see today: the core of Midjourney and Stable Diffusion and image-to-image generation, when we're seeing prompt-to-image tools.
Suffice it to say, generative adversarial networks are what's creating a lot of these images, or still-image-to-3D tools: you have one sample of data, then you have this sort of discriminator, and there's a weighting process that occurs, and that's how a new image is produced. Because the pixel density is diffused, dispersed by brightness and contrast across the image, that can actually generate new images. (A toy code sketch of this generator-versus-discriminator setup appears at the end of this transcript.)

Katty: So for example, if an artist is just dabbling with DALL-E, let's say, and they put in the prompt they need to put in to create something, that's really where it's coming from; it's all the data that has already been fed into the system.

Eros: Right. Like Transformers, which again are the type of neural network used in ChatGPT or Claude; they are really advanced recurrent neural networks. And recurrent neural networks were used a lot for NLP and language understanding systems and language generation and text generation systems. Prior to that, they had a very hard ceiling and floor, and Transformers are the next step. But yeah, more or less prompt to image. Again, tons of training that parses the semantics and assigns that to certain images, and then to create that image there are sequence-to-sequence processes going on. Everyone's using something different; there are different techniques and approaches, but more or less you have Transformers. Your key buzzwords are Transformers, large language models, generative AI, and generative neural networks. It's in that microcosm of topics that we're seeing a lot of this explode, and yes, they have existed for a while.

Katty: Where should somebody start? Let's say you have a traditional digital designer who doesn't really come from an engineering or math background, like you didn't, and they can see that this is impacting or creating opportunities within their space. Where should they start?

Eros: First and foremost, leveling up what they can do. Again, that fundamental understanding, that initial due diligence, I think, sets the tone and stage for success or failure in any regard, but especially with this. Because you're dealing with double exponential growth and democratization, to the tune where it's not even the SotA, state-of-the-art, large language models that are the most astounding. You see in the news that OpenAI is looking at certain economic realities of maintaining them. What is really eclipsing everything, and what's unique to this boom over, say, the dot-com bubble or even the initial AI bubble, is the amount of open source effort being apportioned, and that is the genie out of the bottle for sure when it comes to something where you can now automate automation to certain degrees. So we're going to be seeing very aggressive advancement, and that's why people are actually overwhelmed by everything. I mean, there's a new thing that comes out not even by the day but seemingly by the minute. I'm exploring, for Blackdream, AI hallucinations, which for the uninitiated is the industry term they decided to go with for erroneous or left-field output from these large language models. I'm exploring different approaches to actually leverage that as an ideation feature, so the sky is the limit when it comes to what you can do with these things and the different ways people are going to use them.
Just because it's existed doesn't mean it's necessarily old news, as much as it has fermented into this highly productized, commoditized thing now, which is innovation in and of itself. So where they would start is really leveling up and identifying what these things can do, and not trying to compete with them on their own battlefield. For the low-hanging fruit, you have to leverage these tools to handle that, and quadruple down on your high-caliber skill set, on what makes you unique, on your specific brand, even though that word makes me cringe a little bit sometimes, but on your strengths, on what a machine can't do and what's not conducive to make a machine do, and it does boil down to common sense. Especially if you're a subject matter expert in your domain: a digital designer will know, OK, well, DALL-E obviously struggles here and there, you know, it can make a logo, but can it make this 3D scene to the exact specifications that I can? I mean, there's still a lot of headroom that is so hyper-specific it would never be economically or financially conducive to get that specific with these kinds of tools that handle generalized tasks. What we're vying for is artificial general intelligence, so we're going to see a kind of reversal where it's that narrow skill set that is going to be, I think, ultimately important. Where you start is what you are already good at, and make sure you level up those skills tenfold. People who are just getting by, who dabble or who are just so-so, they're going to be displaced. I would say they start by embracing the challenge, not looking at it as a threat but as an opportunity, and again hyper-focusing on what they can do that's technical, that's complex, quadrupling down on that, highlighting and marketing that point, and then automating a lot of the lower-tier work that comes with it, with these tools, where and when appropriate.

Katty: I would imagine, just from a thinking standpoint and a strategy standpoint and the creative process that one needs to go through, that's going to be even more important than before, because in order to be able to give the prompts to AI, you really have to strategize where you want to take it, what you want to do with it; otherwise it's information in and you're going to get garbage out.

Eros: Right, absolutely. And it depends on the tool, it depends on the approach of the company, the manufacturer, the creators of the tool. You know Midjourney, their story is really interesting. The gentleman who founded it originally founded Leap Motion, which was that gesture-based platform in the 2010s that had minor success. He ended up founding Midjourney and denying Apple two acquisition attempts, and they're using Discord as a means for deployment and many other things simultaneously, to great effect. So it's the Wild West right now, but it's an exciting time to be involved, because it's kind of like when Auto-Tune got re-popularized. It all comes back to that music and audio background, because Auto-Tune was originally a hardware box. That's what Cher used on her song, and then in the 2010s you had T-Pain and Lil Wayne and everybody come along, it became a software plug-in, and all of a sudden it was on everything, and now it's had its day, it had its 15 minutes again, and then it dialed back to where it's used for vocal correction. It's used as a utility now rather than a buzzy effect.

Katty: Another thing to demystify..
Deep fake, what is that?

Eros: Yes, deep fakes. They can be voice cloning, which is neural speech synthesis, and then you have deep fakes that are visual, so you have face swapping, as it's called. You have very convincing deep-faked speeches, and you have voice clones that, more or less, if you're not paying attention, can sound real, and they're getting better again by the day.

Katty: What are the IP implications of that, even with the content that's created on some of these other sources?

Eros: On the IP implications, Japan has passed rules regarding the data that's used and regenerated. It kind of goes back to this: if you alter something enough, patent or intellectual property laws don't cover it because it's altered, and proving it becomes an arbitrary task with a subjective result.

Katty: You are the founder and chief product architect of BlackDream.ai. Tell us a little bit more about that. What is the core focus?

Eros: So initially, again, it was conceived to research computer vision systems and adversarial machine intelligence. There's adversarial prompt injection, where you can make a prompt go haywire if you understand the idiosyncrasies of the specific model you're dealing with, or if, in the construction of the model, you found a way to cause perturbations in the dataset, basically dilute or compromise the data it's being trained on with malice. To really study those effects, how to create playbooks against them, how to make zero-trust, fault-tolerant playbooks and methodologies, that was the ultimate idea. There are a couple of moving parts to it. It's part consultancy to establish market fit, so we're at the point now where, again, Sand Hill Road has been calling, but I've bootstrapped and consulted as a means of revenue first to establish market fit. So I've worked for companies and with companies, consulted for defense initiatives, for SAIC, and I'm partnering with some others. I have some other strategic partnerships that are currently in play. We have two offices: our headquarters, which is a live-work situation at NASA Ames / Moffett Field in Mountain View, CA, so we are in the heart of Silicon Valley, and a satellite office at NASA Kennedy Space Center, in the Astronauts Memorial building. The longevity of that is just a nice-to-have at this point, because we are Silicon Valley-based for many reasons, but it's good to be present on both coasts. So there's an offensive cybersecurity element that's being explored, but predominantly it's myself as the sole proprietor with some third-party resources, more or less friends from my SpaceX/Hyperloop team and some folks I've brokered relationships with along the way at companies I've contracted with or consulted for. I've made sure to be vigilant for anyone who's without an agenda, just to make sure that I maintain relationships with high performers and radically awesome and talented people, which I think I've been successful in doing. So I have a small crew of nonpareil, second-to-none talent in the realms of deep learning, GPU acceleration, offensive cybersecurity, and even social robotics, human-interfacing AI as I like to call it.
So that's what Blackdream.ai is focusing on: adversarial machine intelligence research and development for the federal government and defense and military applications.

Katty: This image of an iceberg comes to mind, where we only see the tip of it above the water, with the fun everybody's having with the DALL-Es and the ChatGPTs, but the implications of what's happening in the depths of it… fascinating! Thank you for being with us and allowing us to maybe dip our toe a little bit under the water and see a little bit of what's going on there. I don't know if I'm clearer about it or if a lot more research now needs to be done on my part to learn even further about it. But I really want to thank you for coming here. I know you're very active in the space, you speak constantly about AI, and you're coming up soon on "Voice and AI". Where can people find you if they want to reach out and talk to you some more about this, or have some interest in learning more about Blackdream.ai?

Eros: The website, Blackdream.AI, is about to be launched. On LinkedIn I think I'm the only Eros Marcello around, and there's www.theotheeros.com; that website is sort of a portfolio. Don't judge me, I'm not a web designer, but I did my best and it came out OK. And then you have LinkedIn and Instagram, it's Eros Marcello; on Twitter/X it's ErosX Marcello. I try to make sure that I'm always up to something cool, so I'm not an influencer by any stretch, or a thought leader, but I certainly am always getting into some interesting stuff, be it offices at NASA Kennedy Space Center or being stranded in Puerto Rico… you never know. It's all a little bit of reality television sprinkled into the tech.

Katty: Before I let you go, what's the last message you want to leave the audience with?

Eros: Basically, you know, I grew up playing in hardcore punk bands. Pharma and defense, AI for government, and Apple AI engineer, none of that was necessarily in the cards for me, I didn't assume. So my whole premise is, I know I may be speaking about some things at higher levels, or dealing more in the technicalities, but the whole premise is that you have to identify, as a creative, that this is a technical space, and the technical is ultimately going to inform the design. And I didn't come out of the womb or hail from parents who are AI engineers. This isn't a talent, this is an obsession. So if I can learn this type of knowledge and apply it, especially in the rather succinct amount of time I have, that means anyone can. There's no secret sauce or method to it: watch YouTube videos, read papers, tutorials, tutorials, tutorials. Anyone can get this type of knowledge, and I think it's requisite that they do, to bolster and support and scale their creative efforts. So this is going to be a unique situation in space and time where the more technical you can get, or understand, or at least grasp, the more it will directly enrich and benefit your creative output, and I think that's a very rare kind of symmetry that isn't really inherent in a lot of other things. But if I can do it, anyone can.

Katty: I love it. Thank you for this peek into what's going on: the defense component of it, the cybersecurity component of it, the IP component of it… there are just so many implications that are things we need to talk about and think about, so thank you for starting that conversation.
Eros: Absolute pleasure. I appreciate you having me on; hopefully we do this again soon.
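
For readers who want to see the generator-versus-discriminator idea from the conversation above in concrete form, here is a minimal, classroom-style sketch of a generative adversarial network trained on a synthetic 1-D distribution. It only illustrates the GAN concept Eros describes, not how Midjourney, Stable Diffusion, or any production image tool is actually built (modern image generators largely rely on diffusion models); the network sizes, learning rates, and target distribution are arbitrary choices made for the example.

```python
# A toy generative adversarial network (GAN) in PyTorch. The generator learns to
# imitate samples from a 1-D Gaussian; the discriminator learns to tell real
# samples from generated ones. Purely illustrative; all hyperparameters are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    # The "real data" distribution the generator has to imitate.
    return torch.randn(n, 1) * 0.5 + 3.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

batch = 64
for step in range(2000):
    # Discriminator step: label real samples 1, generated samples 0.
    real = real_samples(batch)
    fake = generator(torch.randn(batch, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call fakes "real".
    fake = generator(torch.randn(batch, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, 8))
print(f"real mean ~3.0, generated mean {samples.mean().item():.2f}")
```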

Axial Podcast
Founding Unlearn and Revolutionizing Clinical Trials with Charles Fisher

Axial Podcast

Play Episode Listen Later Jul 9, 2023 64:09


Charles Fisher is the Founder and CEO of Unlearn, which creates digital twins with AI to enable more efficient clinical trials. Founded in 2017, Unlearn started as a generative AI company looking for a problem and ended up focused on clinical trials, transforming how they are run. A digital twin is a longitudinal clinical record generated from a patient's baseline data, collected before they receive their first treatment, that predicts how the patient would likely progress in the trial if given a placebo. This increases the statistical power of the trial without enrolling more patients. Unlearn sells its TwinRCT product to companies and institutions to optimize their clinical trials, signing a multi-year collaboration with German Merck in 2022 and receiving EMA qualification for phase 2/3 trials later that year, initially focused on AD and MS. Unlearn is improving trial efficiency, with 12%-39% fewer subjects required, and helping lower the cost of developing new medicines for neurodegeneration, inflammatory disease, and more. Charles discusses his career, starting as a scientist with interests at the intersection of physics, machine learning, and computational biology, then joining Leap Motion as a machine learning engineer and Pfizer as a computational biologist, all leading up to starting Unlearn.
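
To make the digital-twin idea above a bit more concrete, here is a minimal sketch of the general statistical trick of using a model's predicted control outcome as a prognostic covariate in a randomized trial analysis. This is not Unlearn's TwinRCT code or methodology; the simulated numbers, the simple OLS adjustment, and every variable name are assumptions made purely to illustrate why a good prediction of the placebo trajectory can shrink the standard error of the treatment-effect estimate (i.e., add statistical power) without enrolling more patients.

```python
# Minimal sketch (not Unlearn's actual method): a "digital twin" prediction of a
# patient's untreated trajectory used as a prognostic covariate to sharpen the
# treatment-effect estimate. All numbers and names are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

n = 200                                    # trial participants
true_effect = 1.0                          # true benefit of treatment

# Each patient's "would-be" outcome under placebo (unknown in practice).
baseline_outcome = rng.normal(10.0, 3.0, size=n)

# A hypothetical digital-twin model predicts that outcome from baseline data,
# imperfectly (correlated with the truth, plus prediction error).
twin_prediction = baseline_outcome + rng.normal(0.0, 1.0, size=n)

# Randomize 1:1 to treatment vs. placebo and observe noisy outcomes.
treated = rng.integers(0, 2, size=n)
observed = baseline_outcome + true_effect * treated + rng.normal(0.0, 1.0, size=n)

def ols_effect_and_se(X, y):
    """Return the OLS coefficient and standard error for the last column of X."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return coef[-1], np.sqrt(cov[-1, -1])

ones = np.ones(n)

# Analysis 1: unadjusted comparison (intercept + treatment indicator only).
eff_unadj, se_unadj = ols_effect_and_se(np.column_stack([ones, treated]), observed)

# Analysis 2: adjust for the twin prediction as a covariate.
eff_adj, se_adj = ols_effect_and_se(
    np.column_stack([ones, twin_prediction, treated]), observed
)

print(f"unadjusted:    effect={eff_unadj:.2f}  se={se_unadj:.2f}")
print(f"twin-adjusted: effect={eff_adj:.2f}  se={se_adj:.2f}  (smaller se = more power)")
```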

Power Law with John Coogan
David Holz (Midjourney)

Power Law with John Coogan

Play Episode Listen Later Jun 20, 2023 52:11


In today's episode, I'm diving into the career of David Holz, a figure who echoes the spirit of the early Silicon Valley era. Holz first entered the tech scene as the founder and CEO of Leap Motion. This company, separate from Magic Leap, introduced a fresh user interface for computers, using hand gestures and cameras. Despite its innovative approach, Leap Motion found itself ahead of its time and was sold to Ultrahaptics in 2019. Post-Leap Motion, Holz didn't rest. He established a studio to seek out new possibilities. This exploration led him to the intersection of AI and art. Through self-funding, he developed Midjourney, a product that brings AI-generated art to life, and introduced it as a Discord bot earlier this summer. This episode aims to explore Holz's journey, his experiences with the boom and bust of Leap Motion, and his pivot towards AI-generated art with Midjourney. It's a story of resilience, creativity, and the timeless Silicon Valley spirit of turning setbacks into opportunities.

AIA Podcast
ChatGPT БЕСПЛАТНО и доступ к Bing ЗА ПОЛЧАСА / Обзор ИИ, которыми мы пользуемся

AIA Podcast

Play Episode Listen Later Jun 16, 2023 144:40


Register at Anywhere Club: https://bit.ly/awclub-ru We've finally gotten around to reviewing the neural networks and AI tools we use ourselves! In addition, as has become tradition, we discuss news from the world of AI, explain how AI helps us create content, reveal ways to get free and fast access to ChatGPT and Bing without SMS or phone registration, and discuss copilots and questions of ethics, because how could we skip that? "Artificial Intelligence Anywhere" is a podcast in which Vitya Shelenchenko and Lyosha Kartynnik aka АйТиБорода break down news from the world of neural networks and artificial intelligence. New episodes come out every two weeks on the Anywhere Club channels. Enjoy!

Note to File
Charles Fisher, Unlearn

Note to File

Play Episode Listen Later Apr 6, 2023 39:44


Charles Fisher is the founder of Unlearn, which is using AI to simulate potential health outcomes for individual patients. Charles is a scientist with interests at the intersection of physics, machine learning, and computational biology. Previously, Charles worked as a machine learning engineer at Leap Motion and a computational biologist at Pfizer. He was a Philippe Meyer Fellow in theoretical physics at École Normale Supérieure in Paris, France, and a postdoctoral scientist in biophysics at Boston University. Charles holds a Ph.D. in biophysics from Harvard University and a B.S. in biophysics from the University of Michigan. This week we discuss digital twins, the impact of AI on clinical trials, clinical trial data ownership, and much more.

Halbwissen Hoch 2
Stümperei³-OpenAI und KI-mit Jan aus Ostfriesland

Halbwissen Hoch 2

Play Episode Listen Later Feb 19, 2023 42:52


Stephan euphorically reports on his latest revolutionary discussion rounds outside the podcast, in which he engaged with chatbots possessing only fake powers of thought and apparently felt better entertained, or at least better understood. Jan agrees and recounts in detail how, with the help of this universally learned intelligence, he paints pictures in the style of other artists several times a day ... correction: has them "painted." The ever-backward-looking Pietsch, on the other hand, regards the whole thing with his usual suspicion and rightly asks how this oh-so-wonderful synthetic mind is supposed to help him load the dishwasher in everyday life... Are generated speeches for shareholder meetings even necessary? How does a university professor distinguish a student's genuine incompetence from the shortcomings of the imperfect machine? And will a carelessly auto-generated schmaltzy poem do a better job of winning over the object of one's affection in the future? Stümperei³ - AI, BI, KI?!? Something with "i" ...

Haptics Club
#36 Making digital worlds feel more human, with Tom Carter, Ultraleap CEO

Haptics Club

Play Episode Listen Later Jan 20, 2023 59:16


We sit down in this episode with Tom Carter, co-founder, former CTO, and now CEO of Ultraleap. Ultraleap raised an $82 million Series D in 2021 for their hand tracking and mid-air haptic technology. We chat about the entrepreneurial and technological story of Ultraleap: Ultrahaptics first, the Leap Motion acquisition later, and the fusion of the two companies projecting into the future. One of the rare successful haptic ventures. A longer chat, but totally worth it!

The Matt Baxter Show
Charles Fisher - Founder & CEO at Unlearn

The Matt Baxter Show

Play Episode Listen Later Oct 26, 2022 42:14


Charles is a scientist with interests at the intersection of physics, machine learning, and computational biology. Previously, Charles worked as a machine learning engineer at Leap Motion and a computational biologist at Pfizer. He was a Philippe Meyer Fellow in theoretical physics at École Normale Supérieure in Paris, France, and a postdoctoral scientist in biophysics at Boston University. Charles holds a Ph.D. in biophysics from Harvard University and a B.S. in biophysics from the University of Michigan. Thank you for today's episode!

AI in Action Podcast
E380 Charles Fisher, CEO at unlearn.ai

AI in Action Podcast

Play Episode Listen Later Sep 16, 2022 14:52


In today's show, we're delighted to welcome back Charles Fisher, CEO at unlearn.ai in San Francisco. Founded in 2017, Unlearn helps biopharma companies develop drugs more ethically and efficiently by leveraging generative machine learning models to reduce the number of patients required for placebo controls in clinical trials. They enable clinical trial sponsors to run RCTs that are up to twice as fast, require fewer participants, produce better evidence about therapeutic efficacy, and in which fewer participants need to be given placebos. Charles is a scientist with interests at the intersection of physics, machine learning and computational biology. Previously, Charles worked as a machine learning engineer at Leap Motion and was a computational biologist at Pfizer. He was a Philippe Meyer Fellow in theoretical physics at École Normale Supérieure in Paris, France and a postdoctoral scientist in biophysics at Boston University. Charles holds a Ph.D. in biophysics from Harvard University and a B.S. in biophysics from the University of Michigan. In the episode, Charles talks about: the work they do in medical research, their success over the past 12 months, goals they are working towards in the next year, plans for growing the team, how they attract top talent to the company, and the vision for the future at unlearn.ai.

Ingenios@s de Sistemas
Episodio 80 - Herramienta: ShotCut

Ingenios@s de Sistemas

Play Episode Listen Later Aug 5, 2022 19:13


This week I bring you a video editor with some special and unique features. It is an open source tool, cross-platform on Windows, Linux, and Mac, with support for a large number of formats.
Format and capture features: compatibility with the latest audio and video formats thanks to FFmpeg; support for popular image formats such as BMP, GIF, JPEG, PNG, SVG, TIFF, and WebP, as well as image sequences; Lottie and rawr JSON animations; no importing needed thanks to native timeline editing; frame-accurate seeking for many formats; a multi-format timeline that mixes and matches resolutions and frame rates within one project; webcam capture; audio capture; 4K resolution support; playback of network streams (HTTP, HLS, RTMP, RTSP, MMS, UDP); free video generator plugins (for example, color bars and plasma); color, text, noise, and counter generators; EDL export (CMX3600 Edit Decision List); export of a single frame as an image or a video as an image sequence; video files with an alpha channel, both read and write; HDR-to-SDR tone mapping; optional full-range video input, processing, and export.
Audio features: audio meters for loudness, peak level, waveform, and spectrum analysis; volume control; audio filters (balance, band-pass, compressor, copy channel, delay, downmix, 3-band / 15-band / parametric EQ, expander, gain, high-pass, invert, limiter, low-pass, noise gate, one-pass and two-pass normalization, notch, pan, pitch, reverb, stereo enhancer, swap channels); audio mixing across all tracks; audio fade in/out and video fade from/to black with easy-to-use fade controls on the timeline; cross-fade audio and video dissolve transitions simply by overlapping clips on the same timeline track; JACK transport sync; tone generator; stereo, mono, and 5.1 surround; pitch compensation for video speed changes; recording directly to the timeline, for voice-over for example.
Video effects: video compositing across video tracks; 3-way color wheels (shadows, mids, highlights) for color correction and grading; an eyedropper tool to pick a neutral color for white balance; deinterlacing; auto-rotation with manual override.
Editing features: trimming in the source clip player or on the timeline with an optional ripple mode; easy cut, copy, and paste operations; append, insert, overwrite, lift, and delete editing on the timeline; 3-point editing; Blackmagic Design SDI and HDMI hardware support for input monitoring and preview; Leap Motion and Contour Design Shuttle PRO support for jog/shuttle control; webcam capture and audio capture from the system device (microphone, line-in).
https://shotcut.org/ https://shotcut.org/howtos/contour-shuttle/ https://www.ultraleap.com/

The FOSS Pod
Project North Star With Bryan Chris Brown

The FOSS Pod

Play Episode Listen Later Apr 9, 2022 55:49


We've often said that open source is for more than just software, and on this episode Project North Star's Bryan Chris Brown is here to prove it with a freely available, modifiable design for an augmented reality headset you can build yourself. Join us for a discussion about topics like the ins and outs of open hardware design, sourcing parts from unlikely places, printing lenses out of resin, doing architecture work in the Unreal Engine, how standards like OpenXR fit into the picture, and a whole lot more. Check out Project North Star's extensive community documentation, and follow Bryan on Twitter. The FOSS Pod is brought to you by Google Open Source. Find out more at https://opensource.google

Sixteen:Nine
Saurabh Gupta, Ultraleap

Sixteen:Nine

Play Episode Listen Later Nov 24, 2021 37:13


If you have been in the industry for a while, you'll maybe remember all the excitement around using gesture technology to control screens. That was followed by the letdown of how crappy and feeble these gesture-driven touchless working examples turned out to be. Like just about everything, the technology and the ideas have got a lot better, and there is a lot of renewed discussion about how camera sensors, AI and related technologies can change up how consumers both interact ... and transact. Ultraleap is steadily developing a product that lets consumers interact with and experience digital displays using sensors and, when it makes sense, haptic feedback. The company was formed in 2019 when Ultrahaptics acquired Leap Motion, and the blended entity now operates out of both Silicon Valley and Bristol, England. Leap Motion was known for a little USB device and a lot of code that could interpret hand gestures in front of a screen as commands, while Ultrahaptics used ultrasound to project tactile sensations directly onto a user's hands, so you could feel a response and control that isn't really there. Or something like that. It's complicated stuff. I had an interesting chat with Saurabh Gupta, who is charged with developing and driving a product aimed at the digital OOH ad market, one of many Ultraleap is chasing. We got into a bunch of things - from how the tech works, to why brands and venues would opt for touchless, when touchscreens are so commonplace, as is hand sanitizer.

TRANSCRIPT

David: Hey, Saurabh, thank you for joining me. Let's get this out of the way. What is an Ultraleap and how did it come about?

Saurabh Gupta: Hey, Dave, nice to be here. Thank you for having me. Ultraleap is a technology company and our mission is to deliver solutions that remove the boundaries between physical and digital worlds. We have two main technologies. We have computer vision-based hand tracking and gesture recognition technology that we acquired, and on the other side of the equation we have mid-air haptic technology using ultrasound. The whole premise of how we came about is that we started out as a haptics company; that's what our founder and CEO, Tom Carter, built when he was in college. The breakthrough idea, being able to deliver the sense of touch in mid-air using ultrasound, was how we started. And to be able to project haptic sensations in mid-air, one of the key components is understanding where the hands are in space, and for that we were using computer vision technology by Leap Motion to track and locate users' hands in space. We then had an opportunity to make an acquisition, and some of your listeners may already know about Leap Motion. Leap Motion has been a pioneer in gesture-based hand tracking technology since 2010. They've got 10-plus years of pedigree in really refining gesture-based hand tracking models. So we had an opportunity to purchase them; in 2019 we completed the acquisition and rebranded ourselves to Ultraleap. So that's how we started. As stated in our mission, it's all about focusing on user experience for the use cases of how users are interacting with their environment, and that environment could be a sort of 2D screen in certain applications, the application that we'll probably talk about today, but also other aspects of augmented reality and virtual reality, which are on the horizon and are emerging technologies that are gaining more ground. So that's the central approach.
How can we enhance the interactivity that users have with a physical environment through input and output technology offerings, with gesture as the input and haptics as the output?

David: The whole gesture thing through the years has been kind of an interesting journey, so to speak. I can remember some of the early iterations of Microsoft Kinect gesture sensors, and display companies and solutions providers doing demos showing you can control a screen by waving your hand, lifting it up and down and this and that, and I thought this is not going to go anywhere. It's just too complicated. There's too much of a learning curve and everything else. Now, the idea as it's evolved, and like all technology got a lot better, is more intuitive, but it's still something of a challenge, right? There's still a bit of a curve because we're now conditioned to touching screens.

Saurabh Gupta: Yeah, you're right. One of the key aspects here is that gesture has been around. There's been research that goes back to the early 90s, if not the 80s, but computer vision technology in general has come a long way. The deep learning models that are powering our hand tracking technology today are a lot more sophisticated. They are more robust, they are more adaptable, and they are able to train on a lot of real-world inputs. So what that really means is that since the computing power and the technology behind recognizing gestures has improved, a lot of that has manifested itself in a more approachable user experience, and I completely accept the fact that there is a gap and we've got 10-plus years of learned behavior of using a touchscreen. We use a touchscreen every day, carry it in our pockets, but you also have to understand that when touchscreens became prevalent, there was the keyboard before that. So the point I'm making here is that we are pushing the envelope on new technologies and a new paradigm of interactivity. Yes, there is a learning curve, but those are the things that we are actually actively solving for: the gesture tracking technology should be so refined that it is inclusive and able to perform in any environment, and I think we've made some really good steps towards that. You may have heard of our recent announcement of our latest hand tracking offering, called Gemini. The fundamental thing with Gemini is that it's based on years and years of research and analysis on making the computer vision and deep learning models that power that platform robust, low-latency, and high-yield in terms of productivity, with really high initialization, which means that as part of the user experience, when you walk up to an interface, you expect to use it right away. We know we can do that with touchscreens, but if you put this technology complementary to an interface, what we are solving for at Ultraleap is: when somebody walks up to a screen and puts up their hand to start to interact, the computer vision technology should instantly recognize that there's a person looking to interact. That's number one, and I think with Gemini, with the deep model work that we've done, we've made some good progress there. Number two, once the technology recognizes that a person wants to interact, can we make it more intuitive for the person to be as or more productive than she would be with a touchscreen interface? And that's where I think we've made more progress. I will say that we need to make more progress there, but here are some of the things that we've done, Dave.
We have a distance call to interact, which is a video tutorial attraction loop that serves as an education piece. And I'll give you a stat. We ran a really large public pilot in the Pacific Northwest at an airport, and the use case there was immigration check-in, so people coming off the plane, before they go talk to a border security agent, some of them fill out their information on a kiosk. So we outfitted some kiosks with our gesture-based technology and the rest were the controls, which were all touchscreen-based, and over multiple weeks we ran this study with active consumers who had very little to no prior experience using gestures, and we did this A/B test where we measured the gesture adoption rate on the kiosks without a call to interact, before a call to interact, and after a call to interact, and it increased the gesture adoption rate by 30%, which means it certainly is helping people understand how to use the interface. The second stat that came from it: at the end of the pilot, we were almost at a 65% gesture adoption rate, which means more than 6 out of 10 people who used that interface used gesture as the dominant input method. And the third piece was how long it took them to finish their session. We measured that the time using the gesture-based interaction was slightly higher than for the control group using a touchscreen, but it wasn't much, only 10% higher. Now, one can look at that stat and say that in a transactional setting where, you know, it's going to take you 30 seconds to order a burger, adding an extra second can be a problem, but at the same time, those stats are encouraging for us to think about when we look at that as the baseline to improve from.

David: So if I'm listening to this and I'm trying to wrap my head around what's going on here, this is not a gesture where you're standing 3 feet away from a screen and doing the Tom Cruise Minority Report thing, where you're waving your arm and doing this and that. Can you describe it? Because you're basically doing touch-like interactions, and the ultrasonic jets or blasts of air or whatever are giving you the feedback to guide you, right?

Saurabh Gupta: So we've got two avenues that we're going at this from. One is the self-service type offering, so think of check-in kiosks or ordering kiosks at restaurants, or even digital wayfinding, digital directories. We are solving for those primarily, at least in the first phase, led by our gesture tracking technology. So gesture is the input modality, complementary to touch. What we do is build a touch-free application, a ready-to-use application that is available today on Windows-based media players or systems, to convert existing touchscreen-based user interfaces to gesture. But what we've done is make the transition a lot more intuitive and easier, because we've done a lot of research on this and replicated interaction methods, or gestures as you would call them. I hate to use gestures as a word, because it gets tagged with weird hand poses and things like that, people pinching and all of that. For us, it's all about how we can replicate the same usage that a typical average consumer will have when she interacts with a touchscreen-based interface.
So we came up with an interaction method that we call Airpush, which is basically, to explain it to your listeners, all about using your finger and moving towards an interactive element on screen. What happens is the button gets pressed even before you reach it, based on your forward motion. Now, the smart math behind all of this is that not only do we track motion, but we also track velocity, which means that for people who are aggressive in their button pressing, who do short jabs, we can cater to those, and for people who are more careful in their approach as they move towards the screen, the system is adaptable to cater to all interaction styles. And we track all the fingers, so you can use multiple fingers too, or different fingers as well. So these are some of the things we've included in our application. That's one side. The second side is all about interactive advertising and immersion, and that's where I think we use our haptic technology more, to engage and involve the user in the interactive experience they're going into. So for self-service and more transactional use cases, we're using primarily our hand gesture technology, and for immersive experiential marketing, or even the digital out-of-home advertising type of use cases, we are leading with our haptic-based technology. (A rough code sketch of this kind of velocity-based press detection appears at the end of this transcript.)

David: And you're involved on the digital out-of-home side, right? That's part of your charge?

Saurabh Gupta: That's correct. I lead Ultraleap's out-of-home business. In the out-of-home business, we have both self-service retail and digital out-of-home advertising businesses that we focus on.

David: So how would that manifest itself? In terms of, I am at a train station or I'm out somewhere and there's a digital out-of-home display and I go up and interact with it, and you're saying it's a more robust and rich experience than just boinking away at a touchscreen. What's going on? What would be a good example of that?

Saurabh Gupta: A good example of digital out-of-home activations is that we've partnered with CEN (Cinema Entertainment Network), where we've augmented some of their interactive in-cinema displays that are being sold from a programmatic perspective. Now, the interactive piece is still being worked into the programmatic side of things, but that's one example of an interactive experience in a place-based setting. The other example is experiential marketing activations that we've done with Skoda in retail malls, and also an activation that we did with Lego for Westfield. So these are some of the experiences that we've launched and released with our haptics technology, and on the self-service side we've been working with a lot of providers in the space you may have heard of. Our recent pilot concluded with PepsiCo, where we are trialing gestures for ordering kiosks for their food and beverage partners. So these are some of the things going on on both sides of the business.

David: So for the Lego one or the Skoda one, what would a consumer experience?

Saurabh Gupta: So these are all interactive experiences. For Lego, it was about building a Lego together.
So basically, using our haptic technology, which obviously takes gestures as the input, users were moving Lego blocks and making an object that was displayed on a really large LED screen at one of the retail outlets in London. A user would walk up, use their hands in front of our haptic device to control the pieces on the screen, then join them together and make a Lego model out of it, and while they're doing that, they're getting the tactile sensation of joining the pieces, and that all adds up to a really immersive, engaging experience within a digital out-of-home setting.

David: So you get the sensation that you're snapping Lego pieces together?

Saurabh Gupta: Yeah, snapping pieces together, controlling, so you get the agency of control, and it's one of those sensations that gives you a very high memorability factor. I don't know whether you track the news, but this was in 2019: we did a really extensive activation with Warner Brothers in LA, and at one of the cinemas down there, for Warner Brothers' three upcoming movies, Shazam, The Curse of La Llorona, and Detective Pikachu, we added interactive movie posters using haptics in the cinema lobby, complementing the digital poster network that already existed at that location. Over the course of the activation, which was around six weeks long, we had almost 150,000 people go through the cinema, and in partnership with QBD we did a lot of analytics around the performance of an interactive movie poster experience within a digital out-of-home setting, and got some really great stats. We measured the conversion rate of an interactive experience versus a static digital signage experience. The conversion rate was almost 2x, there was a 33% increase in dwell time, meaning people were spending more time in front of an interactive sign versus a static sign, attention span was significantly higher at 75%, and there was a 42% lift in brand favorability. So these are really interesting stats that gave us the confidence that haptic technology combined with a gesture-based interface has a lot of value in delivering memorable experiences. And that's the whole point with advertising, right? You want to present experiences that provide a positive association of your branded message with your target consumer, and we feel that our technology allows that connection to be made.

David: One of the assumptions/expectations when the pandemic broke out was that this was the end of touchscreens, nobody's ever going to want to touch a screen again, interactivity was dead, and I made a lot of those assumptions myself, and it turns out the opposite has happened. The touchscreen manufacturers have had a couple of pretty good years, and the idea is that with a touchscreen, you can wipe it down and clean your hands and do all that stuff. But you're at a far greater risk standing four feet away from somebody across a counter, ordering a burger or a ticket or whatever it may be. So when you're speaking with solutions providers, end-user customers and so on, are you getting the question of, "Why do I need to be touchless?"

Saurabh Gupta: Yeah, it's a fair point, Dave, and let me clarify that. Look, from our perspective, we are focusing on building the right technology and building the right solutions that elevate the user experience.
Hygiene surely is part of that equation, but I accept your point that there are far greater risks for germ transmission than shared surfaces, I totally accept that, and yes, there is a TCO argument, the total cost of ownership argument, that has to be made here also. The point that I will make is that, being a scale-up organization focused on new technology, we have to believe that we are pushing the technology envelope, and what we are focusing on is elevating the user experience beyond what the current model provides. So yes, there will be some use cases where we are not a good fit, but contactless or touchless as a category, maybe the pandemic catalyzed it, maybe it expedited things, but that category in itself is growing significantly. A couple of stats here, right? For contactless payment as a category, 88% of all retail transactions in 2020 were contactless. That's a pretty big number, and assuming that retail is a $25 trillion market, that's a huge chunk.

David: But that's about speed and convenience though, right?

Saurabh Gupta: Totally. But all I'm saying is contactless as a category is preferable from a user perspective. Now, as for gesture-based interactivity as a part of that user flow, we fundamentally believe that gesture-based interactivity plays a part in the overall user journey. So let me give you an example. Some of the retailers that we are talking to are thinking about new and interesting ways to remove levels of friction from a user's in-store experience. There are multiple technologies being trialed at the moment. You may have heard of Amazon's Just Walk Out stores as an example. You don't even have to take out your wallet, and that is completely based on computer vision. But there are other retailers who are looking to use technology to better recognize who their loyal customers are. Think of how we all used to have loyalty cards for Costco or any other retailer. They're removing that friction to say, when you walk through the door, you've done your shopping and you're at the payment counter, we can recognize who you are. And if we recognize who you are, we can give you an offer at the last mile, and in that scenario, they are integrating gestures as part of a completely contactless flow. This is where I think we are gaining some traction. There is a product that we are a part of that hasn't been announced yet. I can't go into details specifically on who it is and when it's going to be released, but we are part of a computer vision based, fully automated checkout system that uses gesture as the last mile for confirmation and things of that nature. That's where we are gaining traction. The overall point here is that we are focusing on showcasing and delivering value on how you can do certain things in a more natural and intuitive way. So think of digital wayfinding at malls, right? You have these giant screens that are traditionally touchscreens. When you think of that experience, it has a lot of friction in it, because first of all, you can't use touch as effectively on a large screen, because you can't swipe from left to right to turn a map, as an example. We fundamentally believe that the product could be better with gesture. You can gesture to zoom in, zoom out, rotate a map, and find your direction to a store. That experience can be augmented by adding gesture capability as opposed to using a touchscreen-based interface.
So those are the high-value use cases that we are focusing on.

David: So it's not really a case where you're saying, you don't need a touchscreen overlay anymore for whatever you're doing, Mr. Client, you just use this instead. It's tuned to a particular use case and an application scenario, as opposed to this being better than a touch overlay?

Saurabh Gupta: I think that is a mission that we are driving towards, which is, we know that there is potentially a usability gap between gesture, in terms of its evolution, and touchscreen. We are looking to bridge that gap and get to a point where we can show more productivity using gesture. And the point is that with our technology, and this is something that you referenced a second ago, you can turn any screen into a touchscreen. So you don't necessarily need a touchscreen that you then convert to gesture. You can convert any LCD screen into an interactive screen. So there is a cost argument there as well.

David: What's the kit, like what are you adding?

Saurabh Gupta: Just a camera and a USB cable, and some software.

David: And if you're using haptic feedback, how does that work?

Saurabh Gupta: So haptics is a commercial off-the-shelf product. It's another accessory that gets added to the screen. However, it contains the camera in it, so you don't need an additional camera. It also connects to external power and a USB back to the media player.

David: So as long as you've got a USB on the media player, you're good, and right now your platform is Windows-based. Do you have Android or Linux?

Saurabh Gupta: Good question, Dave. So right now we are Windows-based, but we know it's of strategic importance for us to enable support on additional platforms, so we are starting to do some work on that front. You'll hear some updates from us early next year on at least the hand tracking side of things being available on more platforms than just Windows.

David: How do the economics work? I suspect you get this question around, "All right. If I added a touch overlay to a display, it's going to cost me X. If I use this instead, it's going to cost me Y." Is it at that kind of parity, or is one a lot more than the other?

Saurabh Gupta: It depends on screen size, Dave, to be honest. The higher in screen size you go, the wider the gap is. I would say that for a 21 or 23 inch screen and up, the economics are in our favor for a comparable system.

David: And are you constrained by size? I think of all the LED video walls that are now going into retail and public spaces and so on, and those aren't touch enabled. You really wouldn't want to do that, and in the great majority of cases with this, in theory, you could turn a potentially fragile, please-don't-touch surface like that into an interactive surface. But are you constrained to only doing things like a 55 inch canvas or something?

Saurabh Gupta: This will require a little bit of technical explanation. The Lego example that I talked about was targeted at, I would say, a large outdoor LED screen. The concept here is about one-to-one interactivity. So what do I mean by one-to-one interactivity? One-to-one interactivity means that in our interface, when the user approaches the screen, there is an onscreen cursor that shows up, and that onscreen cursor is the control point for the user.
Now, for us to achieve one-to-one interactivity, where the cursor is at the same height and there's no parallax between where the finger is and where the cursor is, the sensor has to be mounted at the screen, and when it is mounted at the screen, based on our current camera technology, we can control up to a 42 inch screen with one-to-one interactivity. But we've also been showing examples where, if you mount the sensor slightly in front of the display, you can cover a wider area, and we've been able to showcase our technology being used on up to a 75 inch LCD screen in portrait mode.

David: So then any larger than that, the scale gets a little wonky, right? Because you've got a person standing in front of a very large display and it just starts to get a little weird.

Saurabh Gupta: Yeah. It's like putting a large TV in a small living room; you need to be slightly further away, because otherwise it gets too overwhelming. For that, we have worked with certain partners who have done some really interesting work, like this company called IDUM. They built a pedestal that encloses our tracking device, and that can be placed several feet from a large immersive canvas, like an LED wall, in a museum-type activation, for example, and people can walk by and control the whole screen from that pedestal slightly further away from the screen.

David: So it's like a Crestron controller or something, except for a big LED display!

Saurabh Gupta: Exactly. It's like a trackpad in front of the screen, but slightly further away.

David: Gotcha. All right. Time flew by, man. We're already deep into this. You were telling me before we hit record that your company will be at NRF and you may also have people wandering around IEC, but if people want to know more about your company, they go to ultraleap.com?

Saurabh Gupta: That's correct. Ultraleap.com, we have all the information there, and David, it was great to talk to you. Thank you for the opportunity.
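For readers who want a concrete feel for the Airpush idea Gupta describes at the top of this transcript, here is a minimal, hypothetical Python sketch of velocity-aware press detection. The names, threshold, and lookahead values are illustrative assumptions for this article, not Ultraleap's actual implementation; the only point it demonstrates is that projecting the fingertip forward by its approach velocity lets a fast jab register earlier than a slow, careful approach.

# Hypothetical sketch of velocity-aware "press" detection, inspired by the
# Airpush-style interaction described above. All names and numbers here are
# illustrative assumptions, not Ultraleap's real code.
from dataclasses import dataclass

@dataclass
class FingerSample:
    z_mm: float      # fingertip distance from the screen plane, in millimetres
    vz_mm_s: float   # approach velocity toward the screen (positive = moving closer)

def is_press(sample: FingerSample,
             contact_threshold_mm: float = 20.0,
             lookahead_s: float = 0.15) -> bool:
    """Fire a 'press' before physical contact by projecting the fingertip forward.

    A fast jab (high approach velocity) triggers while the finger is still far
    from the screen; a slow, deliberate approach only triggers close to it,
    which is the adaptive behaviour described in the interview.
    """
    projected_z = sample.z_mm - sample.vz_mm_s * lookahead_s
    return projected_z <= contact_threshold_mm

# A quick jab 40 mm out, approaching at 300 mm/s, already registers as a press...
print(is_press(FingerSample(z_mm=40.0, vz_mm_s=300.0)))  # True
# ...while a careful approach at the same distance does not yet.
print(is_press(FingerSample(z_mm=40.0, vz_mm_s=50.0)))   # False

In a real system this check would run per finger on every tracking frame, with hysteresis and debouncing so a press is not released the instant the velocity drops; those details are omitted here.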

Thoughts of a Trillionaire
UI for XR - Why We Need Dynamic, Accessible, and Intuitive Design

Thoughts of a Trillionaire

Play Episode Listen Later Sep 14, 2021 38:43


Today, I talk about the importance of making XR interfaces intuitive. This involves the use of affordances, feedback, and accessibility. I go into a tweet that inspired this: https://twitter.com/playfulbacon/status/1437472482961739778?s=19 and the dynamic interaction experiments by Leap Motion: https://blog.leapmotion.com/design-playground-3d-user-interfaces/ and then share my own ideas.

MoneyBall Medicine
Charles Fisher on Using Digital Twins to Speed Clinical Trials

MoneyBall Medicine

Play Episode Listen Later Oct 29, 2020 38:55


Charles Fisher is the founder and CEO at Unlearn, a San Francisco company using purpose-built machine learning algorithms that use historical clinical trial data to create "digital twins" of actual participants in controlled drug trials to help predict how each participant would have fared if they'd been given a placebo. By comparing a patient's actual record to their digital twin, Fisher says, the company can pinpoint the treatment effect at the patient level and conduct trials with fewer placebo patients. Fisher tells Harry that Unlearn's software can help drug companies run clinical trials "twice as fast, using half as many people." Fisher's own history is somewhat unconventional for someone in the pharmaceutical business. He holds a B.S. in biophysics from the University of Michigan and a Ph.D. in biophysics from Harvard University. He was a postdoctoral scientist in biophysics at Boston University and a Philippe Meyer Fellow in theoretical physics at École Normale Supérieure in Paris, France, then went on to work as a computational biologist at Pfizer and a machine learning engineer at Leap Motion, a startup building virtual reality interfaces. Unlearn built a custom machine-learning software stack because it wasn't convinced that existing ML packages from other companies could help with the simulation of clinical data. Fisher says the company focuses on the quality rather than the quantity of its training data, with a preference for the rich, detailed, longitudinal kind of data that comes from past clinical trials. The outcome is a simulated medical record for each treated patient in a trial, in the same data format used for the trial itself, that predicts how that patient would have responded if they had received a placebo instead of the treatment. These simulated records can be used to augment existing randomized controlled trials or provide an AI-based "control arm" for trials that don't have a placebo group. Please rate and review MoneyBall Medicine on Apple Podcasts! Here's how to do that from an iPhone, iPad, or iPod touch:• Launch the "Podcasts" app on your device. If you can't find this app, swipe all the way to the left on your home screen until you're on the Search page. Tap the search field at the top and type in "Podcasts." Apple's Podcasts app should show up in the search results.• Tap the Podcasts app icon, and after it opens, tap the Search field at the top, or the little magnifying glass icon in the lower right corner.• Type MoneyBall Medicine into the search field and press the Search button.• In the search results, click on the MoneyBall Medicine logo.• On the next page, scroll down until you see the Ratings & Reviews section. Below that, you'll see five purple stars.• Tap the stars to rate the show.• Scroll down a little farther. You'll see a purple link saying "Write a Review."• On the next screen, you'll see the stars again. You can tap them to leave a rating if you haven't already.• In the Title field, type a summary for your review.• In the Review field, type your review.• When you're finished, click Send.• That's it, you're done. Thanks!

The Health Technology Podcast
Charles Fisher: Digital Twins for Clinical Trials

The Health Technology Podcast

Play Episode Listen Later Aug 31, 2020 41:42


Charles K. Fisher, PhD, CEO & Founder, Unlearn.AI. Charles is a scientist with interests at the intersection of physics, machine learning, and computational biology. Previously, Charles worked as a machine learning engineer at Leap Motion and a computational biologist at Pfizer. He was a Philippe Meyer Fellow in theoretical physics at École Normale Supérieure in Paris, France, and a postdoctoral scientist in biophysics at Boston University. Charles holds a Ph.D. in biophysics from Harvard University and a B.S. in biophysics from the University of Michigan.

The Healthtech Podcast
#111: The Story of Unlearn.ai with Charles Fisher

The Healthtech Podcast

Play Episode Listen Later Jul 9, 2020 58:20


Charles is a scientist with interests at the intersection of physics, machine learning, and computational biology and is the founder of Unlearn.ai . Previously, Charles worked as a machine learning engineer at Leap Motion and a computational biologist at Pfizer. He was a Philippe Meyer Fellow in theoretical physics at École Normale Supérieure in Paris, France, and a postdoctoral scientist in biophysics at Boston University. Charles holds a Ph.D. in biophysics from Harvard University and a B.S. in biophysics from the University of Michigan. Unlearn creates Digital Twins to populate Intelligent Control Arms in clinical studies. While others leverage existing data, Unlearn generates a new type of data, for any type of patient. A Digital Twin is a longitudinal, computationally generated clinical record that describes what would have happened if a specific patient received a placebo. https://www.linkedin.com/in/drckf/ | www.unlearn.ai Get in touch: www.jamessomauroo.com | www.somx.co.uk

Bob Cooney's Deep Dive Webinar - Podcast
EP88 Bob Cooney's Virtual Reality Deep Dive - Christal Ho of Entermission (Part 1)

Bob Cooney's Deep Dive Webinar - Podcast

Play Episode Listen Later Jun 21, 2020 21:13


As the escape room business matures, operators are looking for better economic models to increase longevity and returns on investment. Some have tried VR but 4 players in 400 square feet is a major drop in efficiency from rooms where the average number of players is 6 or more. Entermission promises 6 players in 170 square feet, with a multi-sensory experience that rivals The VOID. Entermission is the most profitable VR attraction available today for escape room owners. In this episode, Bob will be talking with Christal Ho, the founder of Enter the Mission, from the Room Escape Show in San Antonio, TX. They'll discuss escape rooms, VR and why they are converging. Let's dive in! WHAT YOU'LL LEARN FROM THIS EPISODE: Christal's background What changed during 5 years of running her business How Christal went from running an Escape Hunt to running the VR rooms in Sydney Christal talks about the VR location in Sydney What's the total space in the location in Sydney How people found the Leap Motion technology in the seated VR experience What's the role of the game master in the learning curve Is keeping people in a closed room important for Sydney's VR experience? Enterthemission Website: https://enterthemission.com/vr-business/ Christal's LinkedIn: linkedin.com/in/christalho Listen to PART 2 and 3 here: https://anchor.fm/bobcooney-vrdeepdive/episodes/EP89-Bob-Cooneys-Virtual-Reality-Deep-Dive---Christal-Ho-of-Entermission-Part-2-efo0dt https://anchor.fm/bobcooney-vrdeepdive/episodes/EP90-Bob-Cooneys-Virtual-Reality-Deep-Dive---Christal-Ho-of-Entermission-Part-3-efo0fm

The Voicebot Podcast
Mike Zagorsek of SoundHound Discusses What it Takes to Deploy a Custom Voice Assistant - Voicebot Podcast Ep 155

The Voicebot Podcast

Play Episode Listen Later Jun 21, 2020 72:35


Mike Zagorsek is vice president of product marketing at SoundHound. Prior to SoundHound, Zagorsek was head of product marketing at Square and Leap Motion, and before that was Director of Interactive at Apple. Way back in 1999, he was a Manager at Sapient where he helped companies enter the digital age much like he is now ushering them into the voice age. SoundHound is a leading white label solution for building custom voice assistants and has made news recently for powering Snapchat's new assistant. In today's interview, we discuss that implementation along with Pandora, Mercedes, and others. 

Charged Tech Podcast
Theranos of AR

Charged Tech Podcast

Play Episode Listen Later Jun 2, 2020 62:29


Frederique, Owen, and Zach talk tech! The show notes for this episode follow without hyperlinks, I'll add links shortly! Clubhouse, Facebook Catch Up, Twitter/Trump/Zuck, Nebula, YourStack, Tella, Apple Glasses, Focals by North, Snap Spectacles, Oculus, Daniel Suarez, Magic Leap loses CEO and gets funding, Leap Motion is the company we confused it with, HoloLens by Microsoft, Microsoft Build: Fluid, Owen's website page "What I use", Houseparty, FB Messenger Rooms, Joe Rogan Show moves to Spotify, Video podcasts on Spotify, Revision3, Spotify Concerts, and Join us in Discord!

XR for Business
Building an XR Vocabulary for Businesses, with XR Bootcamp's Ferhan Ozkan

XR for Business

Play Episode Listen Later Mar 17, 2020 26:52


Code is a big part of what makes XR work, of course. But for most businesses, knowing the DNA of the technology will be less important than knowing how to best use it. XR Bootcamp co-founder Ferhan Ozkan is enabling businesses interested in XR to enable themselves. Alan: Welcome to the XR for Business podcast with your host, Alan Smithson. Today, we’re speaking with Ferhan Ozkan, the co-founder of XR Bootcamp, a platform to teach professionals how to create VR and AR applications, and support companies to bridge their skills gap in XR development through an intensive onsite program, cutting edge curriculum, and industry renowned lecturers with a focus on industry portfolio projects. I am personally very, very honored to be on the advisory board of XR Bootcamp and helping them really develop the future of how organizations will train their staff on how to build XR technologies. And so with that, I’d love to welcome Ferhan to the show. Ferhan, welcome to the show, my friend. Ferhan: Hi, Alan. Pleasure to be here. Thanks for inviting. Alan: It’s absolutely my pleasure. I just want to give you a little bit of history about you. XR Bootcamp started from VR First, which was an organization bringing VR labs into universities and colleges around the world. Is that correct? Ferhan: Yes. Yes. Back then — almost four years ago — we started as VR First. The main mission was to democratize VR and AR around the world. And you also supported us on these times, because it was hard to find headsets as a developer, as a startup. And we actually tried to tackle this problem with the help of major headset manufacturers – Oculus, HTC, Leap Motion, Intel — and they supported us to create VR/AR labs around the world. And we are quite happy with the impact being created now, these labs are actually really become big and creating amazing projects. And we are actually proud to have this network and enable this network. Yeah, we are now actually around 800 university that we can reach and over 400 startup clusters. But as a lab that we have supported and seeded — as in equipment and other support — we reach to almost 52 labs. And now we see that these labs become actually quite impactful in their own region to create a regional VR/AR development scene, and VR/AR startup and clusters, and they are even creating VR/AR programs — academic programs — and industrial based trainings. Alan: Ferhan, when did you guys realize that bringing this type of knowledge into the enterprise was the next step? Ferhan: It is quite interesting, because we talk with institutions not only in educational, but government institutions. They reach to us after hearing about VR/AR. “Can we educate the people in our health institutions? Can we train the people, the employees that is actually working in the– airport workers, like on the aviation industry?” And we understood that there is actually already an initiative happening on different parts of the world, on different industries based on each government’s or each region’s industry focus. And then we decided, “OK, what we can do first of all to start the VR/AR innovation in each key destination?” So as I mentioned, seeding the equipment was the first one. I remember in the beginning of 2017, we had some kind of survey, and unfortunately for every 51 developer, there was only one headset in any institution or in a startup cluster. So think of like you want to create something, but you cannot even access the VR headset, which is a shame for this region. 
So we first of all started this seed equipment program, and then training programs come afterwards. And the biggest supporters or beneficiaries were actually the

Agencyphonics
Agencyphonics Episode #8 Networking Lessons with Ash Phillips of Yena

Agencyphonics

Play Episode Listen Later Feb 13, 2020 77:10


Yena is a virtual acceleration ecosystem, by subscription. Essentially they give their community the tools, learning and experience needed to start & grow a business faster/easier. With 24/7 access to a community of rebels, digital business support and opportunities to win work. https://www.linkedin.com/in/ashphillips/ https://www.linkedin.com/company/yena/ https://joinyena.com/ https://offset.earth/ Books, websites and links mentioned in our Vodcast include: Agencynomics https://amzn.to/2xHuPQK (includes more information about the 30/30/30/10) Free online Personality Tests: https://www.16personalities.com/ Steve Jobs - Connecting the Dots: https://www.youtube.com/watch?v=wz0j7z38LIg Friend of a Friend by David Burkus: https://amzn.to/310FkgF https://joinyena.com/ Art of Charm Podcast Series by Joe Rogan James Potter - The LinkedIn Man https://thelinkedinman.com/ https://www.linkedin.com/in/thelinkedinman/ Baader-Meinhof: the phenomenon where one stumbles upon some obscure piece of information—often an unfamiliar word or name—and soon afterwards encounters the same subject again, often repeatedly www.cact.us Ultrahaptics, now Ultraleap, which acquired Leap Motion last year, run by Tom Carter https://www.theverge.com/2019/5/30/18645604/leap-motion-vr-hand-tracking-ultrahaptics-acquisition-rumor This episode was brought to you by Forecast https://hubs.la/H0xtd2S0 Wirehive, Cactus, the Agencynomics Community and The Cactus Academy https://www.wirehive.com/ https://www.cact.us/ https://community.agencynomics.com/ https://thecactus.academy/ Agencynomics is a social enterprise dedicated to helping Agencies to adopt the practices they need to propel their Agency. Agencynomics and Agencyphonics are not for profit. https://www.buymeacoffee.com/spg4 Please follow us on Twitter, Instagram and LinkedIn @Agencynomics

Bob Cooney's Deep Dive Webinar - Podcast
EP48 Bob Cooney's Virtual Reality Deep Dive - Tony Rogers of VAai (Part 3)

Bob Cooney's Deep Dive Webinar - Podcast

Play Episode Listen Later Jan 31, 2020 18:37


A fast-evolving technology has the power to create new opportunities for businesses. Many incorporate technologies such as AI, AR or VR into their business models to make them stand out from the others, as well as to help them succeed amid this technological growth. Tony Rogers is a United States Air Force veteran with a National Defense Service Medal, a Global War on Terrorism Expeditionary Medal and an Honorable Discharge in 2005 after serving in two Gulf wars. Tony is a motivational speaker and expert on AI, AR and VR, and he does strategic consulting around augmented reality. In this episode, Bob will be talking with Tony about how the coming technological shifts are going to impact the business and more. Let's dive in! WHAT YOU'LL LEARN FROM THIS EPISODE: Valve's new headset and news related to it Will the new Valve headset be inside-out tracked How can the hand tracking issue be solved Which headsets have announced hand tracking in the headset Will Leap Motion survive AI beats the best Dota 2 esports team Tony's LinkedIn: https://www.linkedin.com/in/anrogers1 Listen to PART 1 and 2 here: https://anchor.fm/bobcooney-vrdeepdive/episodes/EP46-Bob-Cooneys-Virtual-Reality-Deep-Dive---Tony-Rogers-of-VAai-Part-1-eai868 https://anchor.fm/bobcooney-vrdeepdive/episodes/EP47-Bob-Cooneys-Virtual-Reality-Deep-Dive---Tony-Rogers-of-VAai-Part-2-eai876

Marsha Collier & Marc Cohen Techradio by Computer and Technology Radio / wsRadio
09/09/19 Tech news of the week: Facebook, Tesla, Samsung, 5G, Leap Motion AR, Tech from the 70s, Netflix tips Part 2

Marsha Collier & Marc Cohen Techradio by Computer and Technology Radio / wsRadio

Play Episode Listen Later Sep 9, 2019 28:12


Leap Motion AR; Tech that changed us in the 70s; Netflix tips; Movies & TV

Marsha Collier & Marc Cohen Techradio by Computer and Technology Radio / wsRadio
09/09/19 Tech news of the week: Facebook, Tesla, Samsung, 5G, Leap Motion AR, Tech from the 70s, Netflix tips Part 1

Marsha Collier & Marc Cohen Techradio by Computer and Technology Radio / wsRadio

Play Episode Listen Later Sep 9, 2019 30:00


Facebook Dating; Tesla's new business; Samsung changes; Verizon 5G in NFL; How police use your Ring videos

AR Show with Jason McDowall
Amy Peck (EndeavorVR) on Enterprises’ Slow Adoption of AR and the Promise in Education

AR Show with Jason McDowall

Play Episode Listen Later Jun 25, 2019 80:23


Amy Peck is the founder & CEO of EndeavorVR, a strategy & consulting firm focused on Augmented and Virtual Reality. There she serves as the essential guide to connect companies and educational institutions to the right resources, partners, and approach to be successful with these new technologies. In addition, Amy is the co-chair of the Enterprise Committee at the VRAR Association, a contributing author to Charlie Fink's recent book, Convergence, and a frequent keynote speaker, panelist, and moderator. Most recently, you saw Amy on the main stage at the AWE conference in late May. Amy has also worked in Business Development at Leap Motion, and more recently as the Chief Strategy Officer at Periscape VR, a location-based VR company. In this conversation, Amy and I discuss the viability of the location-based VR market and the potential for AR & VR in childhood education. We get into the current opportunities and challenges in bringing spatial computing to the enterprise. One of these challenges is the difficulty in explaining a technology that needs to be directly experienced, so much so that Amy now insists C-level executives put on a headset as a first step in the consulting process. We also talk about VR & AR in healthcare, and the potential impact of blockchain technology. You can find all of the show notes at thearshow.com.

The Art Of Struggle.
VR/AR company Leap Motion sold to UltraHaptics for $30 million valuating the company at an estimated $300 million

The Art Of Struggle.

Play Episode Listen Later Jun 11, 2019 4:30


Leap Motion, the AR and VR technology company behind the open-source North Star AR headset, has reportedly been purchased by the UK-based firm UltraHaptics for $30 million, as reported by The Wall Street Journal. Full Article https://www.wsj.com/articles/leap-motion-once-a-virtual-reality-high-flier-sells-itself-to-u-k-rival-11559210520 #virtualreality #vr #d #augmentedreality #gaming #htcvive #oculus #ar #playstation #oculusrift #art #technology #psvr #video #vive #videogames #ps #games #vrgame #photography #vrgaming #oculusvr #design #insta #architecture #lasertag #virtualtour #vrart #mixedreality #bhfyp --- Support this podcast: https://anchor.fm/pixel-sultan/support

Shattered Gradients: A Weekly Deep Learning Review
3. David Holz on Building a Career in Silicon Valley and the Future of Deep Learning (Live at ISEF)

Shattered Gradients: A Weekly Deep Learning Review

Play Episode Listen Later May 30, 2019 37:17


David Holz, founder and CTO of Leap Motion, interviews with Brendon and Anish on building a career in Silicon Valley and the future of deep learning, live at Intel ISEF 2019 in Phoenix, Arizona. Holz offers insightful commentary on the beginnings of his own career, including Intel ISEF, the world of research, and the founding and early days of Leap Motion. He goes on to discuss advice for students looking to start a career in deep learning, the most promising areas of the rapidly growing field, and some future directions it may take. Many thanks to Mr. Holz for joining us to share his experience and insights! Please send comments to shatteredgradients@gmail.com.

Research VR Podcast - The Science & Design of Virtual Reality
Building Leap Motion's "Project North Star" Kits (Noah Zerkin, PM at Noah Labs) - 094

Research VR Podcast - The Science & Design of Virtual Reality

Play Episode Listen Later May 16, 2019 93:56


Leap Motion's open source Project North Star has been fascinating VR/AR enthusiasts for the last year, but due to the scarcity of sourcing the components, finding a fully built kit is quite rare. Noah Zerkin, based out of Shenzhen, China has been focusing on fabricating and building the necessary components to build kits. He joins the podcast to chat about the headset, its advantages, and the challenges of what goes into building one. Video podcast: https://youtu.be/G010rKFJMyY recorded in Feb 2019

#12minconvos
Amy Peck is a recognized thought-leader in the VR/AR space /Ep2171

#12minconvos

Play Episode Listen Later Mar 19, 2019 13:54


Amy Peck   Amy Peck is the founder & CEO of EndeavorVR, a leading global VR/AR (XR) strategy & consulting firm. Amy is a recognized thought-leader in the space and speaks globally on the future of XR. She has deep connections within the industry and works with Fortune 500 companies, digital media/production companies and tech startups on XR strategy, content, development, product and GTM. Amy holds several Advisory Board roles: Geo.Network, a volumetric geospatial control platform; VantagePoint, VR Sexual Harassment Training; Venture Partner, Capital Region AR/VR Accelerator; XR Marin Leadership Council; Virtual World Society Ambassador. Amy is a mentor at Draper U, Women in XR Fund, Korean Innovation Center and River Studios and is an active member of several Women in Tech & STEM Education organizations. Amy is a member of the VR/AR Association's (VRARA) Digital Healthcare Committee and Co-chairs the VRARA Enterprise Committee. Amy's true passion for this technology emerged at Leap Motion, where she launched the Enterprise division working with clients across every major vertical including healthcare, entertainment, retail, hospitality, education and government. It became clear that there was a need to help foster XR development in the commercial arena and, more importantly, shepherd these critical early wins. EndeavorVR was conceived for this very purpose. Amy continues her work with Fortune 500 clients, helping them adopt XR for both internal process and consumer-facing initiatives, including marketing, customer experience, productivity and training. She engages the top content creators, designers, developers and platform partners to bring these solutions to fruition. Amy's forte is distilling complex technologies into meaningful solutions across many verticals and business lines, delivering strategy and resources at scale. Her own personal mission is to see XR accessible to all - fundamentally improving every aspect of our lives and changing the way we work, learn, communicate and play. SOCIAL MEDIA LINKS: LinkedIn • Twitter Listen to another #12minconvo

Flux
22: Isaac Cohen —An XR Trailblazer on how to Build Humane Technology

Flux

Play Episode Listen Later Jan 3, 2019 42:38


In this episode I sit down with Isaac Cohen (Cabbibo), a fascinating creator at the forefront of VR and AR who has released a number of apps and experiences that push the boundaries - you can find some of his pieces on Steam. He has a background in physics and interface design, previously worked at Leap Motion, and has been an artist in residence at Unity and Adobe. Isaac shares his views on why realism in VR is the wrong approach and how developers need to approach it in a transformative rather than a derivative way. He describes the ARKit workflow, how he thinks about using AR to give users agency, and why emotional efficiency is important in computing. He gets into some of the UX insights he's picked up along the way, which creators inspire his work, what today's corporates can learn from the long-term research done at places like Xerox PARC, and the role artists play in pointing them towards the right questions. Full transcript on Medium. This episode was produced by Adriene Lilly and Allison Behringer.

EAST FORMS Drum & Bass
Zed Promo Mix // EAST FORMS Drum&Bass

EAST FORMS Drum & Bass

Play Episode Listen Later Dec 23, 2018 38:20


Zed Promo Mix // EAST FORMS Drum&Bass. Zed is a unique phenomenon for Vladivostok: since 2013 he has been playing his sets live (in a "livelooping" style, as part of his "LoopMafia project"). He took up DJing to explore the positive vibrations created by artists around the world. A devotee of the lighter side of Drum&Bass, he prefers to play Intelligent, Liquid Funk and Soulful using Ableton Live. In his setup Zed uses the LeapMotion VR controller, which lets him add extra dynamics to the mix with hand gestures, so his visual delivery looks like conducting. Tracklist: Meta & Impish - Thank you [Influenza Media] Keeno - Nocturne [Medschool] Dub FX - Run (Random Movement Remix) [Audioporn] Naibu - The Way You Turn [Horizons Music] Tokyo Prose - Fragmented You feat. Milton [Samurai Red Seal] Calibre - Space Time feat. Cleveland Watkiss [Signature] Etherwood - Cast Away [Medschool] Tokyo Prose - Sunsets feat. LSB & DRS [Samurai Red Seal] Halogenix - The Night feat. Solah [1985 Music] Arctic Lake - Heal Me (Spectrasoul Remix) [UKF] EtherWood - You're Missing Life [Medschool]

Applelianos
4x41 Apple ha intentado adquirir Leap Motion

Applelianos

Play Episode Listen Later Nov 5, 2018 53:47


Today we discuss several news stories about the world with our co-hosts Lucas and Dekkar, very interesting news that we go over in this audio. As you know, sharing on your social networks helps us a lot. Greetings, Applelianos.     // Show Notes and Links • Apple has tried to acquire Leap Motion • Apple declares the iPhone 5 dead and no longer supports it • The first iPhone with 5G connectivity may arrive in 2020 • Leap Motion inside virtual reality: Orion • AR Screen hackathon project • WatchOS 5.1.1 • Spotify beta for the Apple Watch       // Follow us on Twitter • Applelianos       // Applelianos Telegram • Applelianos (Official)       If you want to receive real-time notifications of our live shows, you need to download this app.     If you enjoyed this episode, don't forget to leave us a review on iTunes.

Slush
Michael Buckwald, CEO and Co-founder of Leap Motion (Slush 2017)

Slush

Play Episode Listen Later Oct 15, 2018 28:53


Henni Roini talks with Michael Buckwald, CEO and Co-founder of Leap Motion, at Slush 2017.

Personal Branding Podcast
8 Clever Ways to Market Your New E-commerce Website

Personal Branding Podcast

Play Episode Listen Later Jul 17, 2018 14:52


8 Clever Ways to Market Your New E-commerce Website. Marketing an ecommerce website is not an easy task. You need to get your approach right if you want to reach more customers and generate a lot of revenue. Here are 8 clever ecommerce marketing strategies that you need to put into action today. 1- Take The Help of Inbound Marketing Marketing can be of two types – "push marketing" and "pull marketing". Under push marketing, you push customers towards your store even if they aren't interested; an ad in a newspaper is an example of push marketing. Under pull marketing, you pull customers towards your store when they are highly interested; a visitor reaching your site through organic search is an example of pull marketing. Inbound marketing consists of all the strategies that come under pull marketing. These strategies help to generate visits that are highly relevant and have a greater chance of converting into a sale. Here are some highly useful techniques of inbound marketing: Search Engine Optimization: SEO helps to generate quality inbound leads by getting your website ranked at the top of the organic search results. Content Marketing: Content is the lifeline of the internet. Content marketing helps to increase the visibility of your brand and to build lasting relationships with your audience. Social Media Marketing: Nowadays, you are marketing to millennials, and social media is the perfect place to reach them. An effective social media marketing strategy improves brand awareness and lets you engage with a broad audience. Email Marketing: Email remains one of the most cost-effective inbound marketing strategies: it's easy to get started, delivers the right message at the right time, and helps to generate a regular flow of leads. SMS Marketing: You can reach a large number of people at the same time with the help of SMS marketing. SMS lets you reach customers instantly, as and when you need it. Webinar Marketing: Webinars are a cost-effective marketing strategy that helps to build lucrative relationships with your customers. Webinars add to brand loyalty and help to convert a new visitor into a lead. 2- Create and Promote Your Blog Blogging is an inexpensive way for ecommerce business owners to drive traffic to their website. Here are the top benefits of blog promotion: It helps to boost your existing search engine optimization strategies. Blogging is an efficient way to connect with your existing and potential customers. People are able to easily recognize your brand, and it educates them about your business vision and personality. Preparing blog posts on long-tail keywords lets you rank for new keywords, which means you have a greater chance of raising your leads and conversions. The below chart from HubSpot shows that businesses that blog generate 55% more visitors compared to the ones that don't. Blogging is an opportunity for social sharing. People can share blog posts that they like, and this indirectly helps to raise your brand's publicity on social channels. 3- Advertise Your Store Offline Offline promotion is equally important to give wings to your online business. Here are a few effective ways to market your ecommerce business offline: Take part in local business meetings to network with other businesses and to generate a lot of local brand publicity. Participate in trade shows to directly promote your products. Make use of tools like Shopify POS to accept payments from customers with their methods of choice.
POS is a convenient medium to accept payments when you are at trade shows, meetings, kitty parties or workshops. Take the help of billboard advertising to educate people about your brand and offers. Offer free products to people, as this will encourage more prospects to start buying from your e-store. 4- Leverage The Power of Influencers Influencer marketing is a highly effective strategy when it comes to reaching new prospects in a short span of time. Nowadays, there are so many online stores, so why would anyone buy from your store? People always prefer stores that offer quality products and a trustworthy service. Trust is the biggest factor that differentiates a business from a brand. Influencers help businesses gain trust and build goodwill. 5- Target Your Audience With Virtual Reality VR is the next phase in the evolution of ecommerce marketing. Customers like to have a personalized experience, and VR can create magical experiences for them. Here is how ecommerce retailers can leverage the power of virtual reality to generate brand buzz and new leads: Retailers can create virtual showrooms or virtual stores to offer customers a virtual experience that lets them feel like they are viewing the products in a physical store. Customers can visualize how a product actually looks and feels, as a "try before you buy" feature. You can use the Google Cardboard SDKs to start building an immersive experience of your own. As per a survey by Walker and Retail, people prefer shopping via physical stores as compared to online. With the help of Leap Motion's advanced hand tracking technology, you can allow your customers to make selections and interact using their hand movements, all happening in the virtual world. This means your customers can choose products, place them into their cart and order them easily, all with a simple wave of the hand! Isn't this magical? Moreover, VR combined with the power of AR (Augmented Reality) can do wonders. For example, a store selling furniture online can allow shoppers to visualize how a bed would actually look when placed in the bedroom. All this happens with the help of an AR app where the customer uses the mobile camera and points it at the place in the bedroom where the bed needs to go. The moment this is done, the app places a 3D model of the bed over the live image of the bedroom. Hence, it becomes extremely easy for the shopper to visualize how the […]

Research VR Podcast - The Science & Design of Virtual Reality
Designing Impossible Musical Instruments in VR w/ Zach Kinstner - 074

Research VR Podcast - The Science & Design of Virtual Reality

Play Episode Listen Later Jul 13, 2018 56:58


Designer/Developer Zach Kinstner joins the podcast to talk about designing his VR music creation tool, Exa: Infinite Instrument. Topics: - designing Hover UI for hands with Leap Motion - building better musical instruments in VR - buttons that can be pushed from behind (for keyboards and such)

VU Dream - VR/AR Podcast
Ep. 13 - Tom Buchanan | Bringing Intuitive Touch, Gesture and Interaction into Immersive VR World's

VU Dream - VR/AR Podcast

Play Episode Listen Later Jun 20, 2018 28:46


Tom Buchanan is the Co-Founder & Chief of Product at Contact CI. Started by Syracuse University students in 2014, the founding team began Contact CI's work on extending the reach of human hands. Mutual passions for biomimetics, wearables, immersive technology, gaming, and robotics have continually helped shape the team's vision for natural and realistic human-computer interfaces. Since leaving campus, the team has expanded, secured crucial partnerships, raised initial funding, and is preparing to ship the first Maestro gloves to external partners this quarter. The goal today remains the same, with the team headquartered in Cincinnati, passionately developing Maestro to enable intuitive hand controls. - Trained in the integration of native SteamVR tracking into VR input devices, which led to the development and implementation of native 6-degree-of-freedom tracked objects designed for a room-scale environment - Formulated agile development cycles that halved the weight from our E3 2016 to our SVVR 2017 models and then halved the weight again for Leap Motion integration - Prototyped a motion capture glove capable of 19 of the 27 degrees of freedom of the hand - Co-founded and has worn many hats during the early stages of development for the company - Managed the integration of soft and hard goods in the product development process - PCB design and R&D - Haptic systems R&D - Demoed our motion capture system with HTC Vive integration at E3 2016 - Demoed the first prototype of motion capture and force feedback haptics at SVVR 2017 Expo - 2015 Kairos K50 company member - One of the earliest developers with access to the HTC Vive Tracker https://www.linkedin.com/in/tfbuchanan4/ contactci.co VU Dream - http://www.vudream.com/ VU Dream's FB - https://www.facebook.com/VUDreamOfficial/ VU Dream's Twitter - https://twitter.com/VUDreamOfficial VU Dream's Instagram - https://www.instagram.com/vudreamofficial

Itera Ideia
Papo semanal #12 - Interfaces inovadoras e Virtual Wearables

Itera Ideia

Play Episode Listen Later May 16, 2018 43:45


Medium post: http://bit.ly/2IL2eBY Open Design group on Telegram: https://t.me/iteraideia Enough nostalgia, the Weekly Chat is back! We talk about the studies of new interfaces presented at the ACM CHI conference and about an MIT project that lets us interact with computers WITH THE POWER OF THE MIND! (…or almost) To wrap up, we talk about Leap Motion's Project North Star. Remember the Google Glass promise that never arrived? Well, the folks at Leap Motion are proposing a new approach, with new interaction patterns, using every human being's primordial interface: our own hands!

Research VR Podcast - The Science & Design of Virtual Reality
Interaction Design Sprints Wrap-up w/ Leap Motion - 65

Research VR Podcast - The Science & Design of Virtual Reality

Play Episode Listen Later May 8, 2018 85:47


Designers Martin Schubert and Barrett Fox join the Podcast to talk about their biweekly Design Sprints at Leap Motion, working on one of the most important aspects of VR and AR. Apologies in advance for the non-ideal audio quality for this episode.

MIXEDCAST: Podcast über VR, AR, KI
VRODOCAST #89: Facebook F8, Pimax-Controller und neue AR-Brillen

MIXEDCAST: Podcast über VR, AR, KI

Play Episode Listen Later Apr 26, 2018 42:28


VRODO Podcast #89 clearly stands in the shadow of VRODO Podcast #90, which comes out next week and covers Facebook's big F8 announcement. But since next week isn't here yet, this week you'll have to make do with speculation about next week. Facebook F8: What big announcement awaits us? According to one report, Facebook will launch the Go headset at its in-house F8 developer conference. The release date is said to be May 1, and the price is likely to be just over 200 euros. Is that already the big thing, or does Facebook CEO Zuckerberg have another ace up his sleeve? Perhaps the company will make the volumetric cameras interesting again, which were announced last year and have played no role since. News about the social VR app Spaces is a given, and Oculus Go will likely launch exclusively with the mass livestream app Venues. We speculate about what else might happen. Pimax Knuckles: Can the Chinese compete with Valve? A few days ago, the Chinese VR startup Pimax announced new virtual reality controllers. They are compatible with Pimax's own 8K headset and with the HTC Vive, and potentially with any other VR headset that runs on SteamVR and Valve's Lighthouse. They are scheduled to ship in the third quarter of 2018; the price is not yet set. At first glance the devices look promising: they offer the comfortable form factor of Oculus Touch and the hands-free finger tracking of the already announced but not yet released Valve Knuckles. But there is reason for skepticism: the market launch of the Pimax headset has been anything but satisfactory, and the company is already at least half a year behind its own plans. Will it go better with the controllers? And if so, which software will support the Pimax-exclusive finger tracking? Does the world need new augmented reality glasses, or not? In the last two years there was a flood of VR headsets, many of them conceived with no regard for the market, which is fortunately slowly ebbing. Is this phenomenon now repeating itself with augmented reality glasses? In recent days, Leap Motion and the Chinese manufacturer AntVR presented two new, similarly constructed AR headsets. Are the devices technically interesting, and do they have realistic market chances? Or are the products only good for the lab and the hunt for investors?

VR Hermits
Episode 38: Wine rack of todo list items

VR Hermits

Play Episode Listen Later Apr 16, 2018 64:56


This week Dave educates Joe on the Unity Profiler using the Update method as an example. VR Hermits went to a VR MeetUp, tried Leap Motion, and made some new friends. Joe gives a short update on the VR Bowling game. He also talks about his plans for the next project. 10000 Update Calls (https://blogs.unity3d.com/es/2015/12/23/1k-update-calls/) VR Columbus (https://www.meetup.com/VR-Columbus/) Shredded: A Dystopian Novel (https://www.amazon.com/Shredded-Dystopian-Novel-Charles-ODonnell/dp/153721568X) COGG (https://www.meetup.com/The-Cogg/) Contact us Joe on Twitter (https://twitter.com/vrhermit_joe) Dave on Twitter (https://twitter.com/vrhermit_dave) Email: vrhermits@gmail.com

Consultor IT
072. Leap Motion: Detectando las manos

Consultor IT

Play Episode Listen Later Apr 14, 2017 26:45


Leap Motion is a device that aims to eliminate the keyboard and mouse so that we can use our computer just by making gestures in the air. Mind-blowing, right? Stay and listen to the podcast and you'll find out what it is like, how it works, whether it's really worth it, and more. The post 072. Leap Motion: Detectando las manos appeared first on Luis Peris.

.NET Rocks!
Tim Huckaby Brings Us Up to Date on Gesture

.NET Rocks!

Play Episode Listen Later May 16, 2013 60:51


Carl and Richard talk to Tim Huckaby about the latest developments around gesture computing. The conversation covers the latest version of the Kinect SDK (with grip!), as well as SoftKinetic, Panasonic D-IMager and Leap Motion. Tim also speculates on some of the new things coming from Microsoft with Xbox and Kinect (leaked specs in the links below), and how gesture is here to stay. It's everywhere already! Support this podcast at https://redcircle.com/net-rocks/donations

The Citadel Cafe: A Sci-Fi and Fantasy Podcast
The Citadel Cafe 055: Alien Velociraptors

The Citadel Cafe: A Sci-Fi and Fantasy Podcast

Play Episode Listen Later Mar 1, 2013 69:28


Our guest this week is multi-show podcaster Ryan Murphy from The Gamers' Inn, Zombies Ate My Podcast and Sleeping With The Elephant. We talk some podcast environment tech with Ryan and he shares some interesting theories on House of Cards. Joel leaps forward to May with Leap Motion and Paton is ringside with The Robot Combat League. Bonus! Lou […] Hosted on Acast. See acast.com/privacy for more information.

SONIC TALK Podcasts
SONIC TALK 295 - 121212 Soundcheck Day

SONIC TALK Podcasts

Play Episode Listen Later Dec 13, 2012 72:41


We kick off with a brief appearance from Darren Price from Underworld - as that's where Dave Spiers is today (thanks for the use of the bandwidth!). We then dive into the Leap Motion and the AirHarp experiment 7:00, Mark Tinley's Christmas Single as the Pink New Dolphins - Sometimes Suicide - a very catchy tune with a serious message 16:10. Then we talk about serious studio disasters - several of which appear to include people smashing up gear... 31:35. The Bass Bot TT TB-303 clone - so similar, can it be legal? 45:00. Waves EMI REDD plugins - an emulation of the classic REDD mixing desk from Abbey Road, and that's it! 58:00
