In this captivating episode of Entangled Things, Srinjoy Ganguly returns to join Patrick and Ciprian for an engaging exploration of AI-assisted optimization and its synergy with quantum computing. As he embarks on his PhD journey in quantum computing in the UK, Srinjoy shares expert insights into the transformative impact of AI and quantum technologies on the future of innovation. Additionally, he offers practical advice for newcomers eager to step into the world of quantum computing. Don't miss this enlightening conversation at the crossroads of groundbreaking technology and exciting opportunities. Srinjoy Ganguly is the founder and CEO of AdroitERA, an EdTech firm that provides training on cutting-edge technologies, and an IBM-recognized Quantum Educator. He holds a Master's in Quantum Computing Technologies from the Technical University of Madrid, Spain, and an MSc in Artificial Intelligence from the University of Southampton, UK. He has over four years of experience in Quantum Computing and over five years in Machine Learning, Deep Learning, and AI. He has completed research-based courses on 5G signal processing systems from IIT Kanpur. He led, mentored, and taught the Quantum Machine Learning (QML) study space at QResearch QWorld and authored a book on Quantum Computing with Silq Programming. He has conducted Faculty Development Training at IIIT Pune by special invitation, given expert talks on QML at IEEE SPS, and conducted several webinars at various institutes on QML and Quantum Computing. He has been specially appointed and invited by Woxsen University as a Visiting Faculty to teach Quantum Computing to MBA students. He has also supervised research interns on QNLP, ZX calculus, and Quantum Music as part of QIntern 2021. His research interests include Quantum Machine Learning, Quantum Natural Language Processing (QNLP), Graphical Calculus for Quantum Computing (ZX Calculus), and Quantum Image Processing.
Renaud Béchade, founder of Anzaetek, a quantum software company based in Korea, is interviewed by Yuval Boger. Renaud shares insights about the company's work in quantum machine learning (QML) for hospitals, focusing on managing limited medical data, federated learning, and potential quantum solutions for personalized medicine. They also discuss hardware-software co-design, the quantum ecosystem in Korea, and the future of quantum applications beyond healthcare, including in finance and optimization. Renaud reflects on key learnings from recent conferences, potential breakthroughs in quantum error correction, and much more.
Large language models are … large. Forget Bitcoin and recharging electric vehicles; the grid could be toppled by powering AI in a few years. It would be ideal if AI could run on underpowered edge devices. What if there were a quantum-inspired way to make LLMs smaller without sacrificing overall performance in combined metrics? We explore a way to do that, and other advanced ideas like selectively removing information from models, in this episode. Join Host Konstantinos Karagiannis for a chat about how businesses can solve many of the woes associated with bringing AI in-house with Sam Mugel from Multiverse. For more on Multiverse, visit https://multiversecomputing.com/. Read the CompactifAI paper: https://arxiv.org/abs/2401.14109. Visit Protiviti at www.protiviti.com/US-en/technology-consulting/quantum-computing-services to learn more about how Protiviti is helping organizations get post-quantum ready. Follow host Konstantinos Karagiannis on all socials: @KonstantHacker and follow Protiviti Technology on LinkedIn and Twitter: @ProtivitiTech. Questions and comments are welcome! Theme song by David Schwartz, copyright 2021. The views expressed by the participants of this program are their own and do not represent the views of, nor are they endorsed by, Protiviti Inc., The Post-Quantum World, or their respective officers, directors, employees, agents, representatives, shareholders, or subsidiaries. None of the content should be considered investment advice, as an offer or solicitation of an offer to buy or sell, or as an endorsement of any company, security, fund, or other securities or non-securities offering. Thanks for listening to this podcast. Protiviti Inc. is an equal opportunity employer, including minorities, females, people with disabilities, and veterans.
Samso Insight Episode 119 is with Andrew Sparke, Executive Chair of QMines Limited (ASX: QML). As the copper price continues to reach all-time highs (Figure 1), companies such as QMines Limited suddenly become an interesting proposition. The resource is small, but a recent Pre-Feasibility Study is showing that the numbers could work. In this episode of Samso Insights, Andrew Sparke gives us a rundown on what could be a copper producer in Queensland, Australia. Figure 1: Copper price chart. (Source: Trading Economics) The supply issue for copper has long been talked about, and the market seems to have finally caught onto the nearly desperate nature of supply. The aging copper mines are facing rising costs, and some of the major mines are also facing sovereign issues. To add to the supply issue, several developing mines are facing questions about jurisdiction. I like companies like QMines as they are always undervalued and constantly facing funding issues. As the market tightens, these stories begin to get noticed and their valuations begin to move. This is not an endorsement of QMines in any way, as there are still hurdles that could be deal-breakers for the company. My comment is merely an observation that has historically stood the test of time. Check out the Samso Insight conversation with Andrew and make your own decision. Samso's Conclusion: QMines is a company that may offer investors an opportunity to get in on the copper run - a rising commodity story that is still early in its journey, with many unknowns that could prove either trivial hurdles or deal-breakers. Andrew has explained how the story should work, but as we know, he is the Executive Chair and his thoughts would be deemed slightly biased. That being said, my view is that one has to look at the options out there in the marketplace for a story that fits the current narrative of "Need More Copper". QMines, assuming the numbers continue to stack up, will be one of the ones on my watchlist.
Fortunately for us punters, the low valuation of companies due to bearish sentiment in commodities has somewhat naturally reduced our risk. As for the copper price, if the narrative is to be believed, it has a lot of legs left. Some narratives have gone further and put the copper price at levels much higher than Figure 1. I agree that it will go higher, but I don't have a good sense of how high. There is no doubt that the old copper mines are facing rising costs, and this is not by a small margin. One must remember that if household living expenses are said to have increased by around 30%, an increase of 30% or more at a mine will make a big difference. I would say that the cost of mining any commodity at the depths these old copper mines have reached will be significant. At the end of the day, DYOR is the key to any decision-making, and one has to keep a keen eye on the copper space. When you think about what the implications will mean, the opportunity for adding shareholder value is enormous.
Chapters:
00:00 Start
00:20 Introduction
00:57 Andrew introduces QMines
02:45 Going through the details of the PFS
07:05 The upside of Mt Chalmers
09:36 Any metallurgical issues?
11:25 The products
11:37 The pyrite value story
12:57 Where is the disconnect between value and share price?
15:19 The issue of using copper-equivalent numbers
17:30 The pros and cons of taking a position in QMines
23:17 The copper market
26:06 Why QMines?
20:57 Timing for investors' exit?
27:43 The CAPEX advantage
28:45 Conclusion
Originally we planned to talk about Qt exclusively in the context of cross-platform desktop development. But along the way we realised that this view is far too narrow, and Qt's capabilities are much broader. Andrey Bocharnikov, a desktop developer and tech lead at Mode, took us on a rich excursion into the world of Qt. We discussed the history of the technology in the context of the industry challenges of its time, and dug into Qt's core components: the object model, widgets, QML, and the standard library. We looked at working with Qt in languages other than C++, and also surveyed the Qt development job market. This is the broadest and most detailed overview of the technology, just the way you like it! We also look forward to your likes, reposts and comments in messengers and social networks! Telegram chat: https://t.me/podlodka Telegram channel: https://t.me/podlodkanews Facebook page: www.facebook.com/podlodkacast/ Twitter account: https://twitter.com/PodlodkaPodcast Hosts in this episode: Katya Petrova, Stas Tsyganov. Useful links: Max Schlee, "Qt 5.10. Professional C++ Programming" https://github.com/telegramdesktop/tdesktop https://habr.com/ru/articles/501798/ https://scythe-studio.com/en/blog/flutter-vs-react-native-vs-qt-in-2022 https://blog.felgo.com/cross-platform-app-development/react-native-flutter-felgo-framework-comparison https://felgo.com/cross-platform-mobile-development-react-native-comparison https://www.youtube.com/@KDABtv https://www.qt.io/product/features https://doc.qt.io/qtforpython-6/faq/whatisqt.htm
QMines (ASX: QML) chairman Andrew Sparke joins Small Caps to discuss the company's recent exploration excitement, which has confirmed the potential of its promising Mt Chalmers copper discovery in Queensland. Recent drilling hits at the Artillery Road target have confirmed that QMines looks to be onto a significant new base metal discovery. The company initially grabbed interest in late July when drilling intersected high-grade mineralisation southwest of the Mt Chalmers West Lode. It quickly followed that up in early August with a drill test of an Artillery Road electromagnetic prospect confirming potential for a large discovery. The company is now looking to build on that early success with RC drilling continuing on a 5,000m, 30-hole program. The company has also identified a further 30-plus electromagnetic anomalies to test.
Articles:
https://smallcaps.com.au/maiden-drill-hole-confirms-sulphides-qmines-new-copper-target-artillery-road/
https://smallcaps.com.au/qmines-makes-high-grade-discovery-southwest-mt-chalmers-copper-gold-project/
For more information on QMines: https://smallcaps.com.au/stocks/asx-qml/
See omnystudio.com/listener for privacy information.
In the third episode of this mini-series on the Future of Technology, we will hear from Vint Cerf, Vice President & Chief Internet Evangelist at GOOGLE, and widely known as one of the “Fathers of the Internet,” and Alexandre Blais, Professor & Scientific Director of the Quantum Institute at UNIVERSITÉ DE SHERBROOKE. Vint and Alexandre will walk us through the challenges and opportunities that Quantum Learning presents. They will define Quantum Learning and explore: how can its development impact society as a whole? What are the challenges of making Quantum Machine Learning (QML) a reality? Vinton G. Cerf, Vice President & Chief Internet Evangelist, GOOGLE. In this role, he is responsible for identifying new enabling technologies to support the development of advanced, Internet-based products and services from Google. He is also an active public face for Google in the Internet world. Widely known as one of the “Fathers of the Internet,” Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. In December 1997, President Clinton presented the U.S. National Medal of Technology to Cerf and his colleague, Robert E. Kahn, for founding and developing the Internet. Kahn and Cerf were named the recipients of the ACM A.M. Turing Award in 2004 for their work on the Internet protocols. In November 2005, President George Bush awarded Cerf and Kahn the Presidential Medal of Freedom for their work. The medal is the highest civilian award given by the United States to its citizens. In April 2008, Cerf and Kahn received the prestigious Japan Prize. Cerf is a recipient of numerous awards and commendations in connection with his work on the Internet. Cerf holds a Bachelor of Science degree in Mathematics from Stanford University and Master of Science and Ph.D. degrees in Computer Science from UCLA.
Alexandre Blais, Physics Professor & Scientific Director of the Quantum Institute, UNIVERSITÉ DE SHERBROOKE. Alexandre Blais is a professor of physics at the Université de Sherbrooke and Scientific Director of the Institut quantique at the same institution. His research focuses on superconducting quantum circuits for quantum information processing and microwave quantum optics. After completing a PhD at the Université de Sherbrooke in 2002, he was a postdoc at Yale University from 2003 to 2005, where he participated in the development of circuit quantum electrodynamics, a leading quantum computer architecture. Since then, his theoretical work has continued to have an impact in academic and industrial laboratories worldwide. Alexandre is a Fellow of the American Physical Society, a Guggenheim Fellow of the John Simon Guggenheim Memorial Foundation, a member of CIFAR's Quantum Information Science program and of the College of the Royal Society of Canada. His research contributions have earned him a number of academic awards, including NSERC's Doctoral Prize, NSERC's Steacie Prize, the Canadian Association of Physicists' Herzberg and Brockhouse Medals, the Prix Urgel-Archambault from the Association francophone pour le savoir, as well as the Rutherford Memorial Medal of the Royal Society of Canada and a teaching award from the Université de Sherbrooke. Thanks for listening! Please be sure to check us out at www.eaccny.com or email membership@eaccny.com to learn more!
Boeing is all about connecting the world and soaring to new heights – in every sense of the phrase. Nestled inside this giant is a large team dedicated to using quantum computing and sensing to ensure innovation in aeronautics, ranging from materials science to navigation and other use cases. Join Host Konstantinos Karagiannis for an uplifting chat with Jay Lowell from Boeing. For more on Boeing, visit www.boeing.com/innovation/. Visit Protiviti at www.protiviti.com/postquantum to learn more about how Protiviti is helping organizations get post-quantum ready. Follow host Konstantinos Karagiannis on Twitter and Instagram: @KonstantHacker and follow Protiviti Technology on LinkedIn and Twitter: @ProtivitiTech. Contact Konstantinos at konstantinos.karagiannis@protiviti.com. Questions and comments are welcome! Theme song by David Schwartz. Copyright 2021.
**plasma-frameworks and QML** , **plasma-integration** , **plasma-nm** , **plasma-pa** , **plasma-sdk** , **plasma-systemmonitor** , **plasma-vault** , **plasma-wayland-protocols** from the Slackware **kde** package set. shasum -a256=59dfd6f4eccd5b507a3ad1f12147dc32b5890e9036a5900196584c8726e453eb
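The SHA-256 line above is published so you can verify the downloaded package set before installing. A minimal sketch of the same check in Python (the filename below is a placeholder; substitute the actual package file and its published digest):

```python
import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large packages don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (placeholder filename and the digest published above):
# sha256_of("plasma-frameworks.txz") == "59dfd6f4eccd5b507a3ad1f12147dc32b5890e9036a5900196584c8726e453eb"
```

This mirrors what `shasum -a 256` or `sha256sum` computes on the command line.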
As NISQ-era quantum computing improves, we're on the cusp of practical advantage within a couple of years. But some companies want to see impressive performance today. That's where quantum-inspired solutions can provide up to triple the power, all on classical hardware. Learning quantum-inspired programming could even help coders migrate to real quantum computers in the future. Could these approaches even improve ChatGPT and stable diffusion AI such as Midjourney or DALL-E? Join Host Konstantinos Karagiannis for a quantum-inspired chat with Roman Orus from Multiverse Computing. For more on Multiverse Computing, visit https://multiversecomputing.com/. To experiment with Microsoft Azure QIO, visit https://learn.microsoft.com/en-us/azure/quantum/optimization-overview-introduction. Visit Protiviti at www.protiviti.com/postquantum to learn more about how Protiviti is helping organizations get post-quantum ready. Follow host Konstantinos Karagiannis on Twitter and Instagram: @KonstantHacker and follow Protiviti Technology on LinkedIn and Twitter: @ProtivitiTech. Contact Konstantinos at konstantinos.karagiannis@protiviti.com. Questions and comments are welcome! Theme song by David Schwartz. Copyright 2021.
Season 3 kicks off with new episode numbering and an impressive new xTALKS.AI where we'll be talking about...
Welcome to the one hundred and eighty-fifth episode of the "Porozmawiajmy o IT" podcast. The topic of today's conversation is the Qt framework. My guest today is Łukasz Kosiński, a long-time expert on and enthusiast of the Qt framework. He has experience working on cross-platform Qt projects for the medical and automotive industries, as well as for consumer electronics and defence. He describes himself as a Qt and QML specialist with strong C++ skills, and he is the founder of Scythe Studio, a company providing Qt development services. In this episode we discuss Qt in the following contexts: what is the Qt framework? What is the history of its creation and development? What modules does it consist of? In which industries is it used? What is the QML language, and can it be mixed with C++? How does Qt compare to solutions such as Flutter or React Native? How, and from which sources, should you learn Qt? What does the job market around this framework look like? Podcast subscription: subscribe on Apple Podcasts, Google Podcasts, Spreaker, Stitcher, Spotify, via RSS, or in your favourite podcast app on your smartphone (search for "Porozmawiajmy o IT"). I'd also ask you to like the fan page on Facebook. Links: Łukasz's LinkedIn profile - https://www.linkedin.com/in/lukasz-kosinski-developer/ Łukasz's blog - https://binarnie.pl/ Scythe Studio - https://scythe-studio.com/pl SOLID.Jobs - https://solid.jobs/ Support: support the podcast on Patronite - https://patronite.pl/porozmawiajmyoit/ If you have any questions or comments, feel free to write to me at krzysztof@porozmawiajmyoit.pl https://porozmawiajmyoit.pl/185
Hello to all our listeners of the Ubuntu Colombia podcast! Today we're talking about a rather particular topic that was discussed at the time on Ubuntu Colombia's social networks: an introduction to developing native apps using QML, one of the languages supported by Ubuntu Touch, and its CLI. You can find more information at the following links: https://doc.qt.io/qt-6/qtqml-index.html, https://ubports.com/blog/ubports-news-1/post/introduction-to-clickable-147, https://clickable-ut.dev/en/latest/, https://ubports.com
Quantum computing is built on the ideas of giants. These so-called quantum foundations contain complicated concepts, including entanglement. In fact, the 2022 Nobel Prize in Physics was awarded to three scientists who expanded our understanding of entanglement. How does this key concept work? What are some other fascinating core ideas behind Quantum Information Science (QIS)? Join host Konstantinos Karagiannis for a chat with Bob Coecke from Quantinuum to explore quantum foundations. For more on Quantinuum, visit www.quantinuum.com/. Watch the new talk on quantum natural language processing (QNLP) that Bob references here: https://youtu.be/pFc2PmxVMt8. Visit Protiviti at www.protiviti.com/postquantum to learn more about how Protiviti is helping organizations get post-quantum ready. Follow host Konstantinos Karagiannis on Twitter and Instagram: @KonstantHacker and follow Protiviti Technology on LinkedIn and Twitter: @ProtivitiTech. Contact Konstantinos at konstantinos.karagiannis@protiviti.com. Questions and comments are welcome! Theme song by David Schwartz. Copyright 2021.
In this xTALK.AI we have the pleasure of interviewing Javier Mancilla, Quantum Machine Learning Leader. As is now typical of our xTALKS.AI, we talk about many topics: the current state of artificial intelligence, ML, QML, quantum computing, AutoML, social and economic impact, business and the soul... superintelligences and the destiny of humanity. Another xTALK.AI that we hope will be epic and historic. Enjoy it as much as we did making it. A pleasure to listen to the maestro Javier. Javier Mancilla Montero: LinkedIn Nimoy: LinkedIn alt.bank: LinkedIn Support the show
Quantum machine learning, or QML, is one of the three major application categories for quantum computing, along with optimization and simulation. As we're working with customers at Protiviti to find advantageous use cases in QML, we rely daily on a tool called PennyLane from Xanadu. Join host Konstantinos Karagiannis, and special cohost Emily Stamm, for a chat with Nathan Killoran from Xanadu to learn about this powerful, free software. For more on Xanadu, visit https://www.xanadu.ai/. For more on PennyLane, visit https://pennylane.ai/. To read the paper “Is quantum advantage the right goal for quantum machine learning?” visit https://arxiv.org/abs/2203.01340. Visit Protiviti at www.protiviti.com/postquantum to learn more about how Protiviti is helping organizations get post-quantum ready. Follow host Konstantinos Karagiannis on Twitter and Instagram: @KonstantHacker and follow Protiviti Technology on LinkedIn and Twitter: @ProtivitiTech. Questions and comments are welcome! Theme song by David Schwartz, copyright 2021.
QMines (ASX: QML) chairman Andrew Sparke joins Small Caps to discuss the company's planned exploration program at its Mt Chalmers copper-gold project in Queensland. The company is accelerating its plans to explore several priority targets with a view to delivering a third resource upgrade in the first half of 2022. QMines is currently preparing to commence a 2,000m, 10-hole program to test the Tracker 3 copper prospect. The company's resource growth strategy involves three other priority targets: Woods Shaft, Mt Chalmers mine and Mt Chalmers North. QMines also recently became one of three publicly listed resource stocks to achieve 'carbon neutral' status under the Australian government's Climate Active program.
Articles:
https://smallcaps.com.au/qmines-drill-tracker-3-copper-zinc-anomalies-mt-chalmers-project/
https://smallcaps.com.au/qmines-achieves-carbon-neutral-status-mt-chalmers-copper-gold-project/
https://smallcaps.com.au/qmines-fast-tracks-exploration-grow-mt-chalmers-copper-gold-resource/
For more information on QMines: https://smallcaps.com.au/stocks/QML/
During this time of lockdown, the Centre for Quantum Software and Information (QSI) at the University of Technology Sydney has launched an online seminar series. With talks once or twice a week from leading researchers in the field, meQuanics is supporting this series by mirroring the audio from each talk. If you listen to this episode, I would encourage you to visit and subscribe to the UTS:QSI YouTube page to see each of these talks with the associated slides, which help them make more sense. https://youtu.be/3OS7Pq6JoDY Q# is a quantum-focused domain-specific language explicitly designed to correctly, clearly and completely express quantum algorithms. TITLE: Empowering Quantum Machine Learning Research with Q# SPEAKER: Dr Christopher Granade AFFILIATION: Quantum Systems, Microsoft, Washington, USA HOSTED BY: A/Prof Chris Ferrie, UTS Centre for Quantum Software and Information ABSTRACT: In this talk, I will demonstrate how the Q# quantum programming language can be used to start exploring quantum machine learning, using a binary classification problem as an example. I will describe recent work in QML algorithms for classification, and show how Q# allows implementing and using this classifier through high-level quantum development features. Finally, I will discuss how these approaches can be used as part of a reproducible research process to share your explorations with others.
There are many things you do almost every day in your job as a clinical veterinarian, usually multiple times a day. Sometimes these things become so routine that you don't even stop to think about what you do and how you do it, and whether there could potentially be better ways that will deliver better results. And you DEFINITELY never get around to asking the questions you've always wondered about. Routine pathology sampling is high on this list. How do we ensure that we get good quality FNA smears that have the best chances of being diagnostic? Should you pull back on the plunger when you sample or just wiggle the needle around in there? What can and can't you FNA? (Did you know that you can get great results from FNA'ing bone pathology?!) What about blood sampling? Surely we can't mess that up, can we?! And CSF? That's just for the specialist centres, right? Maybe not. Our guest for this episode is Dr Rebekah Liffman. Rebekah is a clinical pathologist at ASAP laboratories in Victoria (http://www.asaplab.com.au/Home.aspx), which is one of the labs that make up the SVS Pathology Network. If you listened to our previous pathology episode with Dr Flaminia Coiacetto you'd know that YOUR favourite local lab is probably also part of the SVS pathology network: Vetpath in western Australia and the NT, QML if you're a Queenslander, Vetnostics in New South Wales and the ACT, TML in Tasmania, and ASAP laboratory in Victoria and South Australia. A big thank you to SVS for supplying us with the brains for this episode and for supporting this series of episodes. Go to https://thevetvault.com/podcasts/ for show notes and to check out our guests' favourite books, podcasts and everything else we talk about in the show. If you want to lift your clinical game, go to https://vvn.supercast.tech for a free 2-week trial of our short and sharp high-value clinical podcasts. We love to hear from you. 
If you have a question for us or you'd like to give us some feedback please leave us a voice message by going to our episode page on the anchor app (https://anchor.fm) and hitting the record button, via email at thevetvaultpodcast@gmail.com, or just catch up with us on Instagram. (https://www.instagram.com/thevetvault/) And if you like what you heard then please share the love by clicking on the share button wherever you're listening and sending a link to someone who you know will enjoy listening. --- Send in a voice message: https://anchor.fm/vet-vault/message
Yes, it's totally fine. In fact, it's the preferred way to send your spleens to the lab. Here's another thing that I learnt from this episode: pathologists don't necessarily look like Uncle Fester from the Addams Family! Join us with the very un-Uncle-Fester-like anatomical pathologist Dr Flaminia Coiacetto for more things that you didn't know about how to ensure better histopath results. (And happier pathologists!) From sample handling, preparation and storage, to what the ideal history looks like. Minia also tells us about the common special stains: how they work and when to use them. If you enjoy listening to Flaminia, you should check out her video on how to do a necropsy at https://www.vetnostics.com.au/our-services/educational-resources/. Thank you to the SVS Pathology Network (Vetnostics, QML, TML, Vetpath and ASAP labs) for lending us their pathologist, and for supporting our new pathology series of podcasts. Go to https://thevetvault.com/podcasts/ for show notes and to check out our guests' favourite books, podcasts and everything else we talk about in the show. If you want to lift your clinical game, go to https://vvn.supercast.tech for a free 2-week trial of our short and sharp high-value clinical podcasts. We love to hear from you. If you have a question for us or you'd like to give us some feedback please leave us a voice message by going to our episode page on the anchor app (https://anchor.fm) and hitting the record button, via email at thevetvaultpodcast@gmail.com, or just catch up with us on Instagram. (https://www.instagram.com/thevetvault/) And if you like what you heard then please share the love by clicking on the share button wherever you're listening and sending a link to someone who you know will enjoy listening. --- Send in a voice message: https://anchor.fm/vet-vault/message
What's coming next for the Linux desktop, and some exclusive news from System76. Plus, we try out Element's new voice messages and share our thoughts.
Quantum computing and the space race share some commonalities if you ask Jack and me, but that's not all we're here to talk about today. Join Jack Ceroni and myself for a discussion of PennyLane, QML, getting started with quantum, and what in the world QNodes are. PennyLane tutorials: https://pennylane.ai/qml/ PennyLane website: https://pennylane.ai/ PennyLane Community Demos: https://pennylane.ai/qml/demos_community.html Jack's PennyLane Tutorials: https://pennylane.ai/qml/demos/tutorial_qaoa_intro.html https://pennylane.ai/qml/demos/tutorial_vqt.html https://pennylane.ai/qml/demos/tutorial_qgrnn.html Jack's Website: https://lucaman99.github.io/ Quantum Decision Diagrams: https://iic.jku.at/eda/research/quantum_dd/tool/ Sound effects obtained from https://www.zapsplat.com https://www.minds.com/1ethanhansen 1ethanhansen@protonmail.com QRL: Q0106000c95fe7c29fa6fc841ab9820888d807f41d4a99fc4ad9ec5510a5334c72ef8d0f8c44698 Monero: 47e9C55PhuWDksWL9BRoJZ2N5c6FwP9EFUcbWmXZS8AWfazgxZVeaw7hZZmXXhf3VQgodWKwVq629YC32tEd1STkStwfh5Y Ethereum: 0x9392079Eb419Fa868a8929ED595bd3A85397085B --- Send in a voice message: https://anchor.fm/quantumcomputingnow/message Support this podcast: https://anchor.fm/quantumcomputingnow/support
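For listeners wondering what a QNode actually is: in PennyLane, a QNode binds a quantum function to a device and evaluates to ordinary classical numbers, such as expectation values. As a dependency-free illustration (plain Python working the math by hand, not the PennyLane API), here is what a one-qubit QNode computes for the circuit RX(θ) followed by a Pauli-Z measurement, whose expectation value is cos θ:

```python
import math

def rx_expval_z(theta):
    """Simulate by hand: apply RX(theta) to |0>, return the <Z> expectation.

    RX(theta)|0> = [cos(theta/2), -i*sin(theta/2)], so
    <Z> = P(measure 0) - P(measure 1) = cos^2(theta/2) - sin^2(theta/2) = cos(theta).
    """
    a0 = math.cos(theta / 2)      # amplitude of |0>
    a1_mag = math.sin(theta / 2)  # magnitude of the |1> amplitude
    return a0 * a0 - a1_mag * a1_mag

# theta = 0 leaves the qubit in |0>, so <Z> is 1; theta = pi flips it to |1>, so <Z> is -1.
```

In PennyLane proper, the same circuit would be a `@qml.qnode(dev)`-decorated function returning `qml.expval(qml.PauliZ(0))`; the point is that the QNode's output is just this kind of classical number.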
Rob and Jason are joined by Raymond Chen from Microsoft. They first talk about Herb Sutter's virtual ISO Plenary Trip Report and some new features voted into the C++23 draft. Then they talk to Raymond about his career working on Windows and The Old New Thing blog.
News:
Trip report: Winter 2021 ISO C++ standards meeting (virtual)
Learn C++, Qt and QML the easy way
Links:
Raymond's blog, The Old New Thing. Here's the post that made his July 4 family picnic a little more stressful. Raymond is managing editor of the Windows Universal Samples and the Windows Classic Samples repos on GitHub. Here's a YouTube playlist of Raymond's One Dev Minute short videos. Raymond can be found on GitHub as @oldnewthing, and his necktie's Twitter account is @ChenCravat.
Sponsors:
Visual Assist
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.10.24.353318v1?rss=1 Authors: Wieder, M., Fass, J., Chodera, J. D. Abstract: The computation of tautomer ratios of druglike molecules is enormously important in computer-aided drug discovery, as over a quarter of all approved drugs can populate multiple tautomeric species in solution. Unfortunately, accurate calculation of aqueous tautomer ratios (the degree to which these species must be penalized in order to correctly account for tautomers in modeling binding for computer-aided drug discovery) is surprisingly difficult. While quantum chemical approaches to computing aqueous tautomer ratios using continuum solvent models and rigid-rotor harmonic-oscillator thermochemistry are currently state of the art, these methods are still surprisingly inaccurate despite their enormous computational expense. Here, we show that a major source of this inaccuracy lies in the breakdown of the standard approach to accounting for quantum chemical thermochemistry using rigid-rotor harmonic-oscillator (RRHO) approximations, which are frustrated by the complex conformational landscape introduced by the migration of double bonds, the creation of stereocenters, and the introduction of multiple conformations separated by low energetic barriers induced by the migration of a single proton. Using quantum machine learning (QML) methods that allow us to compute potential energies with quantum chemical accuracy at a fraction of the cost, we show how rigorous alchemical free energy calculations can be used to compute tautomer ratios in vacuum, free from the limitations introduced by RRHO approximations. Furthermore, since the parameters of QML methods are tunable, we show how we can train these models to correct limitations in the underlying learned quantum chemical potential energy surface using free energies, enabling these methods to learn to generalize tautomer free energies across a broader range of predictions.
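To make "alchemical free energy calculation" concrete for non-specialists (a generic textbook sketch, not the authors' method or code), the simplest free-energy perturbation estimator is Zwanzig's formula, ΔF = -kT ln⟨exp(-ΔU/kT)⟩, averaged over energy differences ΔU = U_B(x) - U_A(x) sampled from end state A:

```python
import math

def zwanzig_delta_f(delta_us, kT=1.0):
    """Zwanzig free-energy perturbation estimate between two states.

    delta_us: energy differences U_B(x) - U_A(x) evaluated on
    configurations x sampled from state A (units of kT's units).
    Returns the estimated free-energy difference F_B - F_A.
    """
    n = len(delta_us)
    # <exp(-dU/kT)> over the samples from state A
    avg = sum(math.exp(-du / kT) for du in delta_us) / n
    return -kT * math.log(avg)
```

In the paper's setting, the potential energies entering ΔU would come from the learned QML potential rather than a force field; this sketch only shows the estimator's shape.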
Copyright belongs to the original authors. Visit the link for more info.
This week, Quantum brings you the latest quantum news and also dives into the details of QML (Quantum Machine Learning).
Frank & Lorraine Pyefinch of Best Practice Software are two iconic and down to earth players in the Australian Practice Management System game. Dr Frank Pyefinch is not only founder of Best Practice, but also originally the founder of Medical Director - the number 1 and 2 practice management systems for Australian GPs today, and they have been for many years. As CEO of Best Practice, Frank brings with him a long and proud history working as a busy GP, and Lorraine as a registered nurse - so together they understand firsthand the challenges and needs of the medical community when it comes to software and technology. Overview [02:07] Genie was first created because Frank doesn't like Macs [02:45] The first PMS in Australia (Medical Director) was created by Frank because the poisons act changed in Australia allowing typed scripts, which included computer generated ones. [06:38] The break-even point for MD back in the early 90s was 200 sites. This seemed an ambitious goal at the time. Today Best Practice Software has over 4500 sites. [06:49] The name “Medical Director” came from Lorraine looking through the job classifieds in Aus Doc magazine and liking the attributes of a ‘Medical Director'. [07:58] The original Medical Director logo was created by Lorraine with the kids' etch-a-sketch in the back of the family car [08:30] The first copy of Medical Director was sold on its launch at the AMA's annual computer day conference in 1992. [09:00] In 1994/95 advertisements started to be inserted into the Medical Director software, which subsidised the program heavily. [09:30] In 1999 Medical Director was sold to Health Communication Network (HCN). Frank and Lorraine went to HCN with the business. [10:30] Frank and Lorraine left HCN in 2003 as they were dissatisfied with the increasingly intrusive advertising being placed in MD to raise revenue. 
They sat out the exclusion period in their contract, and during that time Frank went back to being a GP in Bundaberg while writing Best Practice. [12:00] There were no standards for medical software at that time. If there were, it's likely the product would never have been built. [14:00] Frank and Lorraine have seen medical software evolve from a text-mode DOS interface, to a graphical user interface, to the introduction of tablets and touch screens. Now there is a bigger emphasis on communication, and also a shift to the cloud, which is driving the development of their Titanium product, to be released next year. [15:07] Frank still does some programming in Best Practice even today, because he enjoys it. [17:34] Some of the government brain waves aren't clearly thought out, such as the PHNs collecting data for the QI PIP. [18:55] The biggest cause of support issues for Best Practice is Medicare claims not reconciling due to the archaic nature of the Medicare adapter. BP is hoping Medicare shifts to web services before BP releases Titanium, so they don't need to integrate with legacy technology in the cloud. [21:15] During the rollout of the then PCEHR, now My Health Record, during the Royle Review, Frank and Lorraine suggested that doctors should be remunerated for uploading summaries to My Health Record, as it was additional admin work they were not being paid for. [26:51] The BP Partner Program has been launched in order to give partners more controlled access to the BP database so they don't need to hack their way in, and only get access to what they need - protecting the partner, the patient, the practice and BP. [31:10] Pathology request forms in PMS systems are standardised: SNP and QML, two competing pathology providers, came together in the early 90s, approached the PMS providers and standardised the format of the forms, which set a format for future pathology vendors. 
This didn't happen with radiology, which is all over the place. [33:30] The ADHA is making strides towards their goal of interoperability, for example with secure messaging, although whether secure messaging is the best way to go about it is debatable; perhaps web services to a central repository would be a more modern approach. [35:30] Titanium has not been released yet due to the sheer amount of work involved in building 30 years of product development from scratch in the cloud. The business was also distracted by recent acquisitions which expanded their reach into the allied health and NZ markets. Ultimately all products will be rolled into Titanium, their cloud product. [40:15] BP are soon releasing their patient app; they see it as a future direction for practices wanting to engage more with patients. [44:50] A big consideration for BP in rolling out the patient app was the potential risk of needing to support millions of patients using the app - shifting from a B2B approach to B2C. Links Talking HealthTech Podcast Talking HealthTech Community Best Practice Software Best Health Patient App Best Practice Partner Program Best Practice Titanium Medical Director Genie QI PIP PHNs Medicare My Health Record Sullivan Nicolaides Pathology QML Pathology ADHA – Australian Digital Health Agency Transcript [00:00:00] Pete: [00:00:00] Welcome to Talking HealthTech. My name is Peter Birch, and this is a podcast of conversations with doctors, developers, and decision makers that are playing in the Australian HealthTech scene today. [00:00:12] With me today are two very iconic and extremely down to earth players in the Australian Practice Management System game. I'm talking about none other than Frank and Lorraine Pyefinch of Best Practice Software. [00:00:23] Dr. Frank Pyefinch is not only founder of Best Practice. But he's also originally the founder of Medical Director, the number one and two Practice Management Systems for Australian GP's today, and they both have been for many years. Dr. 
Frank Pyefinch is CEO of Best Practice and he brings with him a long and proud history of working as a busy GP, and Lorraine as a registered nurse, so together they understand firsthand the challenges and needs of the medical community when it comes to software and technology. Best Practice has dominated the market for a long time as the first choice for GPs around Australia when it comes to selecting a PMS, and I look forward to [00:01:00] finding out why in my conversation with both of you. Frank and Lorraine, how are you doing? [00:01:03] Frank: Hi. Well, good. [00:01:06] Pete: This is a first for me. I'm actually recording from your office. So I've taken it out on the road, which is great. But I originally thought I was going to go to Bundaberg, but you've got a few offices I see. [00:01:16] Frank: We moved from Bundaberg about five years ago, and came to Brisbane because our two children had to come down for university and getting them to go back to Bundaberg was quite difficult. So every birthday and Christmas, it was down to Brisbane. After a couple of years, we decided we might as well just move here. [00:01:35] Lorraine: The main office is still in Bundaberg, so we've got about 65 staff working there. We've got four offices all up, so we've got one here in Brisbane with just over 40 staff and another 9 down in Sydney, and then, over in New Zealand, we've got more than 40 in Hamilton, in the North Island. [00:01:55] Pete: So I always used to say that Bundaberg was the HealthTech capital of Australia, or the Silicon Valley of [00:02:00] Australian HealthTech. [00:02:00] Frank: It was certainly in the 90s, when Paul was still living there and wrote Genie. [00:02:07] Lorraine: We have a funny story about Paul because, you see, I clearly remember the night Paul came round to our house, after Frank had first started to show off the original Medical Director, and I remember them sitting in the study and I could hear Paul going "Oh wow, that's really good, Frank". 
And then he asked Frank the fatal question: "Does it run on a Mac?" And Frank said, "No, I hate Macs". And so Paul went, "Ya ha! I'm going to write Medical Director for a Mac!" [00:02:34] Pete: [00:02:34] As I sit here, I look like I'm sponsored by Mac, sitting in front of you. Hey, look, so, there's a lot that we can cover off. Obvious question. You guys have a lot of history in this space. So where do you start? How did this all start? [00:02:44] Frank: [00:02:44] It really started in the late 80s, when Lorraine was doing a Bachelor of Health Science at Central Queensland Uni, and so we had to buy a computer for her to do her course, and I got interested in it. And [00:03:00] started using it for little database projects at home, like recording the rainfall every day and logging what bottles of wine we had in the cupboard and things. [00:03:11] And around about the same time, in about 1989, the Queensland Government changed the Poisons Act to allow prescriptions to be typewritten, as opposed to handwritten. And of course, typewritten also included computer generated. And so I thought this was really neat because I had something like 25 patients in a local nursing home. [00:03:33] And almost every week I'd get a list of prescription requests for them, and I could sometimes sit for an hour after I'd finished at six o'clock at night writing out prescriptions for the nursing home, and I thought if I could put all these patients' names into a computer, into a database, and then put the drugs in against the names, [00:03:55] I'd be able to just go through and tag which ones I wanted to print and print them out. [00:04:00] And so I did that and started using it at work. [00:04:04] Pete: [00:04:04] When was that? That was back in the 80s? [00:04:06] Frank: [00:04:06] It was about 89 / 90 when I really started... 
And then I started using it day to day with my regular patients as well, because once I'd written that in, it could write scripts; it didn't have to be restricted to the nursing home patients. So I bought a computer and put it on my desk with a dot matrix printer. And in those days we had to supply our own prescription paper, which I had to get printed, and so I started using it for all my patients. Then one of my partners started using it too, and we actually networked it by putting a cable up through the ceiling and down the other side and into his room. [00:04:43] And so we had a little network of two computers and progressively it just grew from there. And I started putting other things in. I got a list of PBS medications from the pharmacy next door. The pharmacist had written his own computer program for [00:05:00] point of sale, and so he gave me a big list of all the medications with their PBS listings. [00:05:05] I was able to use that to create pick-lists of drugs and so on. And once I had that, it was possible to use that data in other ways, so I put in things like listing allergies, and then I could cross check between the scripts and the allergies, and it just grew. And yeah, progressively we added more and more things, and over the early 90s, through 90 to about 92, it became what was ultimately Medical Director, the first release. And how Medical Director really came about was that a GP in Narangba, which is just north of Brisbane, heard about the fact that I was writing computer generated scripts and he was really keen to do the same. So he contacted me and said, "Can I have a look at your program?" [00:05:50] So I packaged it up onto a three and a half inch floppy disc and posted it down to him, and he put it on and played around with it, got back in touch and said, "This is amazing. [00:06:00] This is just what I've been looking for. There's nothing else like it anywhere in Australia." And he said, "You should be selling it." 
[00:06:06] And I thought, hmm, I'm a GP. I'm not a sales person or a computer expert. It was just a hobby really for my own use. But we had a chat about it and decided... [00:06:21] Lorraine: [00:06:21] I went to TAFE and did a short course on how to write a business plan because I thought we'd better have a business plan. They were very popular back in the 90s, so I wrote that business plan. I remember coming home to Frank one night from TAFE and saying very proudly, "So I've worked out our break-even point: we have to have 200 sites to break even." And Frank said, "Oh, that's a bit ambitious, isn't it?" [00:06:43] Pete: [00:06:43] How many sites do you have now? [00:06:45] Frank: [00:06:45] Four and a half thousand. [00:06:49] Lorraine: [00:06:49] Medical Director was interesting because even the name... We came up with the name, I like to say I named the babies in the family, but we came up with the [00:07:00] name because at the time there were a lot of really gimmicky names, you know, [00:07:04] Frank: [00:07:04] Medi-mouse. [00:07:06] Lorraine: [00:07:06] I was actually flicking through Aus Doc magazine and got to the classifieds at the back. And they had all these ads looking for a Medical Director, and I was reading the attributes of what a Medical Director was, and I thought, yeah, that's actually something: responsible, in charge, reliable, all those sorts of things. [00:07:32] So I thought, well, that's the kind of image we wanted, something that helps the practice and makes it more efficient. Even just the handwriting, because there were a lot of concerns about medication errors, and just being able to have a typewritten prescription removed any ambiguity over what a handwritten script might have said to whoever was dispensing. So that's sort of where it started. The logo, the MD logo, [00:08:00] I was sitting in the back of the car with the kids' etch-a-sketch when we were coming back from holidays. 
Came up with the MD, the original one; they don't use that one anymore. [00:08:10] Pete: [00:08:10] They've still got the name though. [00:08:12] Frank: [00:08:12] So we started selling it in 1992. And in fact, we had a table at the AMA's annual computer day that they used to have back in those days. And we were in a corner with a table, and we had a printer and we were actually printing scripts on fake sample script paper, and we sold the first one on the day. [00:08:34] At the trade display, and that was September 92, and basically it just took off from there. And I think in 94 we had passed our 200 site target to break even, and I had to take increasingly longer periods of time away from the practice. And so, in about 94 or 95, we teamed up [00:09:00] with some advertising people down in Sydney, and that's when we started putting the ads into Medical Director, which subsidized the program quite heavily. [00:09:10] It was never free. People keep telling me that we used to give it away free, but we never actually did; it was heavily subsidized by the advertising. And over the period through 95 to 99 we built up to about 1500 sites. I think it was at that time we sold the business to Health Communication Network, and we worked there for four years, but during that time I didn't do any general practice, and by the end of that time I thought we were starting to lose touch with the coalface, and at the same time we thought the product was being pushed in directions that we didn't want to see it going, in that it was being used as a cash cow with increasing amounts of advertising and more intrusive advertising. [00:09:54] Lorraine: [00:09:54] When it was our business, Frank used to have pretty tight editorial control over [00:10:00] where and how many ads appeared, and so it was more of an exclusive spot that the pharmaceuticals paid for. We disagreed with, I think, the way that that seemed to be a lucrative revenue 
stream for the business, and we didn't agree with what... [00:10:17] Pete: [00:10:17] Yeah, I mean, you obviously can't do that at all now. [00:10:20] Frank: [00:10:20] No, no. It went from the customers being the doctors to the customers being the drug companies, which was not what we wanted to see. So in 2003 we both left, and then had a year to sit out the exclusion period from my contract. And during that time, I went back to general practice 12 hours a week in Bundaberg. And we decided during that period that there was still room for someone to come in and produce a product targeting doctors that had no advertising in it. And so that was why we started working on BP. [00:10:56] Lorraine: [00:10:56] And by then, our old product Medical Director was the market [00:11:00] dominant... [00:11:00] Frank: [00:11:00] It had 85% market share at that time. [00:11:03] Lorraine: [00:11:03] So it had gone, in that space of less than 10 years, from probably less than 5% of doctors using computers in their surgery to being the norm for the vast majority. So, I mean, ultimately patient safety, by the fact that prescriptions were legible, had improved remarkably in that time. [00:11:24] Frank: [00:11:24] And I mean, we've added so much allergy checking, interaction checking, disease interaction checking. So there was a lot of patient safety sort of features built into the product. And it actually reached a point where at one point the medical defense people were saying that if you weren't using a computer for prescribing, then you probably weren't practicing to the standard that was expected at the time. So if you had a misadventure due to a handwritten script, you would probably lose the case. [00:12:00] Lorraine: [00:12:00] I suppose we look back on it now, there were no standards for software in Australia at that time. There really aren't now. Frank created the standard, I suppose; he set the bar. 
If there had been standards in place, it might have actually been more difficult to do what we do, because you look at some of the government mandated work and think, well, we probably wouldn't have designed it like that. [00:12:27] Frank: [00:12:27] Well, it was very much designed by a clinician, and that's why, I think, it took off: because the workflows were very intuitive and very natural to the clinicians. Once they started using it, it really improved their efficiency, improved the note-taking, improved patient safety. There were all positives. [00:12:48] Pete: [00:12:48] It sounds very much designed to solve a problem rather than designed to show off some fancy tech. [00:12:54] Frank: [00:12:54] Yeah, it was very much from a user. And when I wasn't working in general [00:13:00] practice during the HCN period, I started to feel that it was losing some of its relevance because it wasn't keeping pace with what clinicians were using. [00:13:11] And so while we lived in Bundaberg, I was working 10 hours a week. I did that for 10 years until we left in 2014. [00:13:24] Pete: [00:13:24] So, you know, you've built it up to what it is today, and there's a lot of people walking around in this office, and you've got other offices as well. [00:13:31] No doubt you've paved the way and kind of set the pace for a lot of people, but you've also had to keep up with the industry and everything that's happening around it, and user needs, and just general advancements in technology. It's a very big question for people with such vast experience, but what would you say are some of the biggest things that have changed in that time period, from when you first created the thing to now? [00:13:54] Frank: [00:13:54] When I first created it, we were using a text-mode DOS [00:14:00] interface where everything basically was done by typing. There was no mouse. 
There was none of the sort of touch screens or any of the voice activated stuff that you see today. [00:14:13] Pete: [00:14:13] You didn't say, Hey Siri... [00:14:15] Frank: [00:14:15] Couldn't do that back in 1990. So we've seen it move from that to Windows, to becoming a graphical user interface. We've seen the introduction of tablets and touch screens and all the rest of it. We've seen a much bigger emphasis on communication, which is something that's still evolving with secure messaging and that sort of stuff. Now we're seeing the move to the cloud, which is why we have so many people in the offices that we have. Redeveloping for the cloud obviously has a whole raft of issues that you didn't have when you had an office based solution. And the security is obviously a major issue. [00:15:00] We've got quite highly paid people working on the design and the architecture to make sure that we get it right. In the old days, I did a lot of the programming. I still do some, but only on the legacy product, because I don't understand the new technologies well enough to know that we'd be doing the best job possible. [00:15:18] Pete: [00:15:18] I didn't think you'd do any programming at all nowadays? [00:15:21] Frank: [00:15:21] I enjoy it, I love it. That's why I started doing it in the first place, because I really enjoyed it. So yeah, I still do a bit of work on it. I do have a few special projects. I do a bit of decision support work along with some of the pathology labs. 
I like to keep working on the actual program, but I'm not doing any work on the cloud version; it's all young guys who have much sharper brains than I do. [00:15:51] Pete: [00:15:51] We will get into cloud in a bit too, because I want to cover off a little bit on that, but just back to the needs of the customers being the doctors, the clinicians, general [00:16:00] practice. Today, what do you think are the big things that GPs need a hand with, or some of the biggest challenges that they face? Or just generally the environment we're in, which is creating challenges for them. [00:16:14] Lorraine: [00:16:14] I think there's certainly been a shift towards more corporatized medicine. So there's a lot of doctors that are working as employees or contractors to the surgery. We certainly started in an environment where most practitioners owned their own surgery or were in a group practice. So there's changes along there. A lot of them aren't decision makers anymore. [00:16:35] So, you know, there's a different set of needs for non-practitioner owners. Certainly there's financial issues in medicine these days. For a long time there was no increase in Medicare rebates, which meant that, for a good number of years, the income that doctors could generate was limited. Those challenges, I think, are always there. The ageing [00:17:00] of doctors... [00:17:01] Frank: [00:17:01] Increasing chronic disease. [00:17:03] Lorraine: [00:17:03] Managing chronic diseases and other things there's more emphasis on. It'd be interesting to see how the PHNs go with that. There's still a lot of question marks around data security. [00:17:15] Pete: [00:17:15] That's all linked to the QI PIP, isn't it? [00:17:20] Lorraine: [00:17:20] QI PIP, yeah. I mean, a lot of it hasn't been clearly articulated, so, you know, it's a bit of a work in progress. 
[00:17:28] Frank: [00:17:28] I mean, government often come up with brain waves that aren't clearly thought out, and we've seen it with the QI PIP, where they're using the PHNs to collect the data. [00:17:45] So there's a lot of, not distrust of the PHNs, but not all GPs are willing to give the PHNs data, whereas they'd be more inclined to upload it to a central repository that was directly managed, [00:18:00] say, by the Department of Health or someone like [00:18:02] Pete: [00:18:03] that. I mean, the funding model in Medicare and everything around that space. Are there any thoughts you've got around any progressions that have been made, particularly around technology? There's a lot of people that have thoughts on how Medicare is supporting the changing needs of patients or clinicians or the way that healthcare is delivered. Is that impacting you in any way? [00:18:21] Frank: [00:18:21] Medicare itself is really just an insurance organization. So the claiming we have automated within Best Practice as best we can; it is all done through a little, what they call an adapter, which is quite old and, I don't think, has been upgraded for four or five years now. So they're not terribly forward moving. The adapter has a lot of issues, and we've had to do some pretty tricky programming to get the Medicare claims to reconcile at times. And it's one of our biggest [00:19:00] support issues that we have from practices: wanting to know how they can get the Medicare payments to add up between what they've claimed and what they've actually received. They have been talking for years about replacing the adapter with web services, which is a much more modern way of transmitting data to and from Medicare, but it hasn't happened yet. 
We're hoping that it will happen in time for our cloud program, because we don't really want to implement the adapter in our modern program; talking to those sorts of legacy products is actually quite difficult sometimes, and trouble prone, which is then going to cause us more support issues. So we'd rather they moved forward, but they've been very slow, Medicare, not pushing anything really. They're very reactive. [00:19:59] Pete: [00:19:59] What about, [00:20:00] dare I say, My Health Record? I think I've got to a point in this podcast where I haven't asked one question about My Health Record. But I'm gonna ask you guys about My Health Record, and whether it's your take on it, or what's needed to increase uptake of it, or how that's kind of working, what kind of thoughts have you got around that space? [00:20:18] Frank: [00:20:18] I personally, as a clinician, was quite keen on the concept of My Health Record. The original use cases involved issues where people were away from home on holiday or whatever and got sick, and their full record would be available to a clinician at that location. Or people were admitted to a hospital and unable to give a history if they were unconscious after a car accident, that sort of thing. The hospitals would be able to look it up, so there's lots of good that clinicians could see in it, but the implementation has probably let it down. When they did the, was it the Royce review? [00:20:58] Lorraine: [00:20:58] Royle. [00:21:00] Richard Royle. [00:21:01] Frank: [00:21:01] The Royle Review, about four years ago now, after it had been released for about a year and the uptake was very slow. He was commissioned to basically write a report saying why was this the case and what could be done to turn it around? And that's when they renamed it from PCEHR to My Health Record, like that was gonna make a big difference. As part of his report, he interviewed a lot of people who were involved with it, including us. And... 
We gave him some suggestions for increasing uptake. And our biggest suggestion was that the GPs get paid an extra item number for curating the online health record, because it does take a couple of minutes at the end of a consultation to check that the shared health summary is up to date and accurate, and then to upload it. [00:21:55] And if you see 40 patients a day and you put an extra two minutes onto every [00:22:00] consultation, that's 80 minutes a day of unpaid work. And at the time the health minister, well, I think it was Nicola Roxon, said that it might push the level B consultation to a level C, and that was fine if that happened, but in most consultations it doesn't. If you've got a 10 minute consultation and you add two minutes, you don't go from a B to a C; you stay a B. So essentially GPs were being asked to do work that they weren't going to be paid for. And in the current climate, and the climate at that time, no one had time to do extra work. And the GP is the person who actually has the least to benefit from My Health Record, because they have all the data in the desktop system already. [00:22:40] So curating it and uploading it is of no real value to them personally. It's good for hospitals, it's good for paramedics, it's good for occasional visiting GPs if you're visiting somewhere else, but for your own regular GP, that data is already on his system. So being on My Health Record is of no [00:23:00] great value. [00:23:01] So I think they're not going to get uptake until they can sort that out, basically. But I mean, it was also flawed in the sense that it was a very document based architecture that they used. So everything that gets uploaded is like a PDF basically, and that gives it no flexibility. You can't do anything really clever with the data. All you can do is just look at the documents. 
You can't graph the data as pathology results go up; you can't use the atomized data the way you can with results that come into your local system. So it's not as flexible or as useful as it probably could have been. And they recognize that and they're in the process of redesigning it, but we'll wait and see what they come up with. [00:23:49] Lorraine: [00:23:49] I mean, it's always an ongoing challenge with government dealing with new programs and things like that. Often, with the people that are making these announcements, you know, there's been no design behind it. It makes [00:24:00] it really difficult from a developer's point of view to actually understand what they're trying to achieve and how they're going to get there. And often there's very little input into those specs. So from an industry point of view, I know the MSIA spends a lot of time trying to encourage more discussion with, um, with developers. [00:24:20] But I mean, we also see, from a patient's point of view with regard to My Health Record, we think that, for example, our app that we're releasing in the next couple of months, Best Health, you know, that gives the patient a copy of the health summary, all of the key things that they would need to know. [00:24:35] So if they are on holiday and need to see a doctor, they've got it there anyway. So it's probably more convenient in that format. [00:24:42] Frank: [00:24:42] Doesn't help if you're unconscious after a car accident to get into your phone. Yeah, the phone is probably lost in the crash. And, um, even if it wasn't, no one knows your PIN. [00:24:57] Pete: [00:24:57] Well, [00:25:00] that's interesting. What about partners? There's all these other vendors that focus on a very niche kind of area, and you guys are the central hub for information. 
Everyone wants to play with you, I guess, because that's how they engage with their target market and also, hopefully, leverage some of the information there to ultimately improve patient outcomes. You've had a bit of a ramp up, or at least I've seen work on your partner network and focus on that recently, so it seems like it's a big interest for you right now? [00:25:28] Frank: [00:25:28] It's complicated. We've got something like 300 or 400 people who want to interface with us one way or another, or have, or want to, and that was becoming unmanageable for a start. But then also some of the people who already were interfacing were doing things in a slightly less than perfect way, I'll say. And so as part of the partner network, we've given them more controlled access, [00:26:00] so that they don't need to be, in a sense, hacking the database for their own purposes. We'll give them controlled access to what they need and keep them away from what they don't need, because if you've got an online appointment booking system, you don't really need to be reading any clinical data at all. And so the partner program tightened up and standardized things so that it was all much more secure, because obviously patient privacy and the privacy act have changed, and mandatory data breach notification and stuff all became real in the last five years or so. And so we had to make the program keep up with that. [00:26:41] And as part of that, the tightening up of the security layer that we've done in the last couple of releases was necessary. [00:26:50] Lorraine: [00:26:50] Yeah. I mean, we've always been open to engaging with people who have niche products that we don't do. I mean, we stick to our knitting; [00:27:00] we don't think we can be all things to all practices. [00:27:02] I mean, that's the interesting thing about general practice. They're so diverse and the needs are all very different. The way they run their businesses is all very different. 
So you can't be all things to all people all the time, as the old saying goes. So we don't object to that at all, but we have to be very confident that we know exactly what those third parties are doing, and why, and how... Because we are allowing them to access that info. Well, not us, but the practice does, and we've got to do whatever we can as a vendor to make sure that our customers don't get themselves into any tricky situations. So the more you can protect the customer from making a mistake, the better. [00:27:43] Frank: [00:27:43] Yeah. I mean, it's a hard balance in some ways. We have always looked at it as: the data belong to the practice. So we've always given them the ability to access it and allow third parties to access it. But some of the third parties have sort of taken [00:28:00] advantage of that to do things that were never really intended. [00:28:04] And the practice has not always known what was being done with the data. So as part of our partner program, we now have a contract where they have to agree not to use any data for purposes other than [00:28:18] Lorraine: [00:28:18] other than what has [00:28:19] Frank: [00:28:19] been signed up for. [00:28:21] I mean, that's a small protection, it's just signing a document, but at least we've got something in place, whereas before we had nothing. And so it's a difficult balance between giving people access to data and not giving them too much access. [00:28:39] Pete: [00:28:39] Need to find that right balance. So I, surprisingly, get asked a fair bit by vendors that might have been developing something how they can integrate with more practice management systems, or integrate better with them. I can put some contact details of the partnership program for Best Practice in the show notes, if that would be good with you? [00:28:57] Yeah. Easy. [00:28:58] Lorraine: [00:28:58] I'm surprised they [00:29:00] haven't already spoken to it. 
[00:29:01] Pete: [00:29:01] So I think sometimes it's, you know, you just get lost in how to do things. [00:29:06] Lorraine: [00:29:06] It's funny. You know, you hear all these buzzwords, connectivity and secure messaging and all that sort of stuff. [00:29:12] I mean, we look back, and over the last, you know, 25 plus years we've been involved in every single trial for discharge summaries from hospitals, for example. And a lot of those trials were great. They were so successful, but they never proceeded. The ecosystem for health is quite complex. [00:29:28] If you're talking about connectivity, a lot of them are big overseas vendors that have hospital systems, and system administrators within the health departments themselves. Unless there's a will there to proceed with that kind of thing, it makes it very difficult. [00:29:44] And yet there's so much money spent in the public health system, tertiary care, when in actual fact most of the interaction on a day to day basis is in general practice. [00:29:55] Frank: [00:29:55] State based public hospitals seem to forget that general practice [00:30:00] exists, basically. [00:30:02] Yeah ok, [00:30:02] Pete: [00:30:02] Well [00:30:03] Lorraine: [00:30:03] it's not their remit, but [00:30:05] Frank: [00:30:05] it's not, I mean, it's this sort of crazy idea we have of having a federal health system that runs primary care and then a state based system that runs tertiary care. [00:30:15] And it's different in every state. They use different software, different systems. Sometimes in the past, even between the hospitals in one state, they've used different systems, although that is gradually becoming less of an issue. Yeah. [00:30:30] Lorraine: [00:30:30] I mean, we like it when there's a national approach and they do it once and everyone uses the same format. 
[00:30:37] Frank: [00:30:37] Unfortunately, we're facing the SafeScript thing for the real time prescription monitoring, where every state seems to be going to go its own way and use a different method for tracking real time prescriptions. [00:30:51] Pete: [00:30:51] That makes things easy for you... [00:30:52] Frank: [00:30:52] It doesn't make things easy at all! And it's just typical of the way governments seem to run in this [00:31:00] country. [00:31:01] Lorraine: [00:31:01] It's really inefficient from that point of view. I look back in the mid nineties: two of the largest pathology companies in Queensland, Sullivan Nicolaides and QML, which is Queensland Medical Laboratory, were really strong competitors, and there was a big divide between them. But they both got together and stumped up some cash and contacted the PMS software vendors around at the time, including us at Medical Director, and said, we're going to do pathology results, and we're also going to standardize the way that requests are made. And they came up, to their credit, with the same format of the form. And then whenever any other lab from any other state would contact us, we'd say, this is the format for the form, you've got to use that. And so suddenly in pathology everyone was using the same format, and it was so simple. Whereas radiology is all over the shop, because they all still have their own. [00:31:56] Frank: [00:31:56] Particularly in the early nineties, most [00:32:00] radiology practices were just sub-double-digit numbers of radiologists, and they didn't have the big conglomerates. [00:32:08] Whereas the path labs have always been quite large, and there are not so many of them, so it's easier to get them to come to some agreement. [00:32:18] Lorraine: [00:32:18] So I suppose after all this experience in the industry, our advice is: do it once, do it well. [00:32:25] Right. [00:32:26] Frank: [00:32:26] Sadly it's not happening though. 
Real time prescription monitoring is looking like being a bit of a nightmare. [00:32:32] Lorraine: [00:32:32] And also PHNs, you know, they're all wanting data, but they're all ultimately collecting the same sort of data for the federal government. It'd be terrible if they all decided they wanted it in a different format. It'd make the life of all software vendors really difficult, you know, when it's the same information really. [00:32:52] Frank: [00:32:52] We've seen a bit of that in New Zealand with the PHOs collecting data, right? Even [00:32:58] Lorraine: [00:32:58] though they're all collecting [00:32:59] Frank: [00:32:59] the same [00:33:00] stuff, they all have different formats and different ways of transmitting it. [00:33:05] Lorraine: [00:33:05] And the overhead, from our point of view, is quite costly. So you don't want to do that. [00:33:10] There's no need to do that. [00:33:11] Pete: [00:33:11] You were talking earlier about government institutions and associations. Looking at the ADHA, the Australian Digital Health Agency, and putting it around the other way: what are the things that practice management systems can be doing to help the ADHA in their big quest for the big buzzword, interoperability? [00:33:28] Frank: [00:33:28] They have made some strides towards that, especially in the last couple of years. And I know Tim Kelsey made secure messaging one of his priorities, and we have been involved in the trials that they did one or two years ago, which have now resulted in a further round of funding [00:33:47] for all of the vendors to implement the new work. And so there is progress being made. I guess my thought, though, is: is secure messaging really the best [00:34:00] way to be doing it? Should we be looking to something like the prescription exchanges, where they use web services to put documents into a central repository, which can then be accessed by different people? 
[00:34:13] So say a referral to a specialist, rather than going point to point with secure messaging, could be sent centrally and then downloaded by the specialist, or by one of a group of specialists that the patient decides is the one that they want to go to. Yeah. I mean, secure messaging is coming. But whether it's what we really want, I'm not entirely certain. [00:34:37] Lorraine: [00:34:37] The directory has always been the sticking point, because there was no national directory to make sure you [00:34:44] Frank: [00:34:44] Every secure messaging company has its own directory, and they didn't communicate. It makes [00:34:49] Pete: [00:34:49] it hard to connect, which is the whole point. So [00:34:53] Lorraine: [00:34:53] there's work being done now towards a federated one. [00:34:58] That's good. That's [00:34:58] Pete: [00:34:58] Good. Look, lastly, [00:35:00] to wrap things up, I'm looking at what you guys are working on, because there's a lot of people out there working at the Best Practice office here on your new thing coming up, and I'm glad that you mentioned cloud before, because Titanium has been on your website for a long time. [00:35:14] Frank: [00:35:14] It's [00:35:14] Pete: [00:35:14] been, there's been a lot [00:35:15] Frank: [00:35:15] of construction for a long time. [00:35:18] Pete: [00:35:18] So it's interesting, looking at cloud in practice management land. Is that a deliberate strategy from you guys, of kind of seeing how things play out or understanding what the market needs, or is it just about building the right thing for [00:35:33] Frank: [00:35:33] the market? [00:35:34] I think there are a couple of things. One is that when we started the Titanium project, we weren't really designing it for the cloud. We were designing it as a web application, but not specifically as a cloud application. And so about two years into the project, we kind of changed direction of it. 
[00:35:53] And as I said, the security and the sort of concerns in the cloud are quite different to what we [00:36:00] were originally doing. So it changed direction halfway through. But the other issue that's holding it back a bit is the sheer amount of work that needs to be done to be able to fully replace Best Practice. It's a really rich, functional piece of software, which has ultimately taken nearly 30 years to get to where it is, if you count the Medical Director time as being a sort of [00:36:25] Lorraine: [00:36:25] precursor first run. [00:36:30] Frank: [00:36:30] So just getting that functionality takes time. Practices, in different ways, use every bit of functionality that we've given them, because we put it in there for a purpose. We've seen that the practice needs this or that, and so we've put it in, and we can't take it away from them. [00:36:50] So getting to that level of richness where we can actually move people from BP Premier to Titanium is just taking a long time. We [00:37:00] also, in a way, got distracted a bit when we took over the Houston business and took over vip.net, and ultimately bought BP Allied, which used to be called My Practice, because there was a lot of catch up work that needed to be done on those products to get them to our level of quality. [00:37:20] And we've done that, we've achieved that, but it did divert resources for a couple of years into work that we hadn't originally anticipated doing. And I mean, sure, we gained some resources when we took over Houston, but it was a bit of a diversion for a time. Ultimately, those products are all going to be replaced by Titanium, so we have to include New Zealand, we have to include Allied, all into the Titanium workload, which again adds time. So it's just slow. [00:37:53] Pete: [00:37:53] So that'll cover trans-Tas... [00:37:57] Frank: [00:37:57] yeah. [00:37:57] Pete: [00:37:57] Yeah. [00:37:59] Across the [00:38:00] ditch. 
[00:38:00] Lorraine: [00:38:00] Yeah, [00:38:01] Frank: [00:38:01] that's right. I mean, yeah, we pretty, [00:38:03] Pete: [00:38:03] that's a valiant effort in itself, just covering... [00:38:06] Frank: [00:38:06] The aim is ultimately to only have one product, but through configuration and preferences and whatnot we can make it appeal to GPs, Allied Health and Specialists. [00:38:18] And we do see that some of the allied health may need a lot less functionality than the GP practices use. So it may be that we actually release a sort of Titanium for Allied Health before we release Titanium for GPs. [00:38:36] Pete: [00:38:41] That's a valiant effort in itself, just to be able to cover all of those needs. [00:38:47] It can stretch, you know, many kilometers wide and you only get a couple of centimeters deep in covering all the needs of not just GPs, which, like you say, is 30 years of expertise. That's why [00:39:00] you are where you are. Um, but to build it again from scratch and then include Specialists and [00:39:05] Frank: [00:39:05] Allied Health. The other issue is that during the time that we're working on it, we still have to maintain the existing products, because people are using them. [00:39:18] Things are changing, and they're asking for work to be done on the secure messaging and so on. And we can't stop doing that. And so BP Premier is getting richer, and the Titanium workload is getting bigger with every passing day. So that is also a bit of an issue. Amazing. [00:39:38] Pete: [00:39:38] Well, look, I'm not going to keep you too much longer from all of that work that does need to be done. Before we bail, are there any parting thoughts or any kind of final things that we didn't cover off? [00:39:47] Frank: [00:39:47] Um, we didn't talk much about the app. 
I don't know if [00:39:51] Pete: [00:39:51] You tell me more about that, because you've got a patient app that's being worked on. [00:39:56] Frank: [00:39:56] It's actually been out [00:39:57] Lorraine: [00:39:57] in trials [00:39:59] Frank: [00:39:59] for months [00:40:00] in a small number of sites for user testing. And it's proven to be quite popular in those sites. So we're actually looking at a full launch in October. The first release of the app is all about communication between the practice and the patient. [00:40:22] We see that as being a bit of a future direction, and the practices and patients will, um, be more easily able to communicate. So the way we've designed it, for example, um, when a GP checks a result, they can, directly from the checking results screen, from the inbox, send a message to the app, which goes securely, and the patient will get a notification on their phone, but they will have to have the PIN numbers and whatnot to get in and read the message. [00:40:55] So it's much more secure than SMS. And so we'd be using it for [00:41:00] appointment reminders, we can use it for reminders for things like cervical screening and whatnot. We can use it to inform people of their results. We can use it to send documents, in particular health fact sheets, patient education material, appointment reminders. [00:41:21] Ultimately, though, we're aiming to do things like prescription ordering, so repeat prescriptions, requests for specialist referrals, if the people don't really need to be seen, if it's a routine annual ophthalmology review or something. And it'll be optional for practices as to how far they take those things, but it gives them the option. [00:41:49] So it's another option in communicating. I mean, people don't want to send letters anymore because it's way more expensive than sending an SMS, and with the patient app, the [00:42:00] communications costs will be much less than even SMS. 
So it's giving practices a better way of doing things, and a more secure way. [00:42:11] Pete: [00:42:11] Are practices asking for an app? Because there's a few apps out there that do, I guess, a similar thing on the surface of what you've described. [00:42:21] Frank: [00:42:21] They do. We think this kind of rolls it all into one easy app. I mean, ultimately it will allow you to make your online appointment through the practice's online appointment system. You'll get a message from the GP to say, I want to talk to you about your results, and you can immediately, in the same app, [00:42:46] make your appointment. And then you get the reminder come into your app a day later, or whenever the practice's appointment reminders are due. You can check in at the front desk, if the practice doesn't want everyone to be physically seen by the receptionist. Some practices insist on that. [00:43:00] There are others that use check-in kiosks. [00:43:04] So this will essentially replace a check-in kiosk, because you can use the app to check in. [00:43:11] Lorraine: [00:43:11] It doesn't restrict patients from seeing more than one practice. And the reality is that, you know, a lot of people don't always have, you know, they might have a family GP, but they might also use a, you know, bulk-billing clinic when they go and get a sick certificate or something like that. [00:43:27] So [00:43:27] Frank: [00:43:27] some people have one in town, near work, of course, and then one out near [00:43:32] Lorraine: [00:43:32] home. So if those surgeries are using Best Practice, then theoretically, um, the patient will be able to register at both, but nominate one as their main one, and it'll consolidate everything. You know, if they've been diagnosed with something at one, it'll actually update their app. 
[00:43:52] Frank: [00:43:52] Ultimately, when Titanium finally makes it out into the real world, you could have your physio and your [00:44:00] podiatrist and everyone else all in the one app, so you don't need an app for the physio and an app for the ophthalmologist and two apps for the general practices. Originally, when we were discussing the app, one option was for us to sort of white-label it, so that the practice could put their own logo on the front and every practice could have an app that interfaced. [00:44:23] But when we thought about it and how people might use it, it made more sense to have just one app with our branding on it, and allow multiple surgeries to connect to it. [00:44:35] Pete: [00:44:35] And that'll be a bit of a shift for you too, because if it's going to be something that's patient facing with your branding on it, that's new for you guys. [00:44:44] Frank: [00:44:44] It's new for us. [00:44:46] I mean, we've discussed at length the issues of supporting patients, because in the past we've only ever provided support to practices and users. So there are implications of having [00:45:00] potentially 12 million people, um, using the app. That won't happen, but even 1 million: if they each have a minor problem, it's a lot of support. [00:45:11] So that's why we did a sort of restricted release before doing the full release, to try and make certain that there are no issues that are going to come back and become an unmanageable problem. And at the moment it's looking good. So we're happy to release it in October. [00:45:31] Pete: [00:45:31] So much happening. A lot of new innovations, a lot of history there too, so much to digest. I'll put some links and some information in the show notes of the podcast. Frank and Lorraine, thank you so much for your time. [00:45:44] Frank: [00:45:44] Thank you. [00:45:46] Pete: [00:45:46] Thanks for listening to Talking HealthTech. My name is Peter Birch. 
Go do some stuff on our socials, visit the website, share it with some people and give us a nice review and a five star rating because it all helps to spread the word and get people talking. Until next time I'm outta here.
We react to the news that IBM is buying Red Hat, cover some feedback that sets us straight, and are pleasantly surprised by Qt Design Studio.
Mike's adventures with Qt land him on Windows 10 this week battling DLL hell. He shares the latest developments in his attempt to build his next app with Qt. Plus some feedback, thoughts on AMP, and why dynamic linking keeps Mike up at night.
Mike shares more first impressions of Qt, the surprising places we’ve found QML in the wild, and why or why not to use Qt. Plus we answer some questions, share some travel hacks, and discuss the top programming languages of 2018, as declared by the IEEE.
Looking at Lumina Desktop 2.0, 2 months of KPTI development in SmartOS, OpenBSD email service, an interview with Ryan Zezeski, NomadBSD released, and John Carmack's programming retreat with OpenBSD. This episode was brought to you by Headlines Looking at Lumina Desktop 2.0 (https://www.trueos.org/blog/looking-lumina-desktop-2-0/) A few weeks ago I sat down with Lead Developer Ken Moore of the TrueOS Project to get answers to some of the most frequently asked questions about Lumina Desktop from the open source community. Here is what he said on Lumina Desktop 2.0. Do you have a question for Ken and the rest of the team over at the TrueOS Project? Make sure to read the interview and comment below. We are glad to answer your questions! Ken: Lumina Desktop 2.0 is a significant overhaul compared to Lumina 1.x. Almost every single subsystem of the desktop has been streamlined, resulting in a nearly-total conversion in many important areas. With Lumina Desktop 2.0 we will finally achieve our long-term goal of turning Lumina into a complete, end-to-end management system for the graphical session and removing all the current runtime dependencies from Lumina 1.x (Fluxbox, xscreensaver, compton/xcompmgr). The functionality from those utilities is now provided by Lumina Desktop itself. Going along with the session management changes, we have compressed the entire desktop into a single, multi-threaded binary. This means that if any rogue script or tool starts trying to muck about with the memory used by the desktop (probably even more relevant now than when we started working on this), the entire desktop session will close/crash rather than allowing targeted application crashes to bypass the session security mechanisms. By the same token, this also prevents “man-in-the-middle” type of attacks because the desktop does not use any sort of external messaging system to communicate (looking at you dbus). 
This also gives a large performance boost to Lumina Desktop. The entire system for how a user's settings get saved and loaded has been completely redone, making it a “layered” settings system which allows the default settings (Lumina) to get transparently replaced by system settings (OS/Distributor/SysAdmin), which can in turn get replaced by individual user settings. This results in the actual changes in the user setting files being kept to a minimum and allows for a smooth transition between updates to the OS or Desktop. This also provides the ability to “restrict” a user's desktop session (based on a system config file) to the default system settings and read-only user sessions for certain business applications. The entire graphical interface has been written in QML in order to fully utilize hardware-based GPU acceleration with OpenGL, while the backend logic and management systems are still written entirely in C++. This results in blazing fast performance on the backend systems (myriad multi-threaded C++ objects) as well as a smooth and responsive graphical interface with all the bells and whistles (drag and drop, compositing, shading, etc). Q: Are there future plans to implement something like Lumina in a MAC Jail? While I have never tried out Lumina in a MAC jail, I do not see anything on that page which should stop it from running in one right now. Lumina is already designed to be run as an unprivileged user and is very smart about probing the system to find out what is and is not available before showing anything to the user. The only thing that comes to mind is that you might need to open up some other system devices so that X11 itself can draw to the display (graphical environment setup is a bit different than CLI environment). Q: I look forward to these changes. I know the last time I used it when I would scroll I would get flashes like the refresh rate was not high enough. 
It will be nice to have a fast system as well, as I know with the more changes Linux is becoming slower. Not once it has loaded, but in the loading process. I will do another download when these changes come out and install again and maybe stay this time. If I recall correctly, one of the very first versions of Lumina (pre-1.0) would occasionally flicker. If that is still happening, you might want to verify that you are using the proper video driver for your hardware and/or enable the compositor within the Lumina settings. Q: Why was the Enlightenment project not considered for TrueOS? It is BSD licensed and is written in C. This was a common question about 4(?) years ago with the first release of the Lumina desktop, and it basically boiled down to long-term support and reliability of the underlying toolkit. Some of the things we had to consider were: cross-platform/cross-architecture support, dependency reliability and support framework (Qt5 > EFL), and runtime requirements and dependency tracking (Qt5 is lighter than the EFL). That plus the fact that the EFL specifically states that it is Linux-focused and the BSDs are just an afterthought (especially at the time we were doing the evaluation). Q: I have two questions. 1) The default layout of Unity (menu bar with actual menu entries on top and icon dock on the side) is one of the few things I liked about my first voyage into non-Windows systems, and have been missing since moving on to other distros (and now also other non-Linux systems). However in 1.4.0 screenshots on Lumina's site, the OSX-like layout has the menu attached to the window. Will 2.0 be able to have the menus on the bar? 2) Is there any timeline for a public release, or are you taking a “when it's ready” approach? In Lumina you can already put panels on the left/right side of the screen and get something like the layout of the Unity desktop. 
The embedded menu system is not available in Lumina because that is not a specification supported by X11 and the window manager standards at the present time. The way that functionality is currently run on Linux is a hacky bypass of the display system which only really works with the GTK3 and Qt5 toolkits, resulting in very odd overall desktop behavior in mixed environments where some apps use other graphical toolkits. We are targeting the 18.06 STABLE release of TrueOS for Lumina 2, but that is just a guideline and if necessary we will push back the release date to allow for additional testing/fixing as needed. A long two months (https://blog.cooperi.net/a-long-two-months) illumos/SmartOS developer Alex Wilson describes the journey of developing KPTI for illumos > On Monday (January 1st) I had the day off work for New Year's day, as is usual in most of the western world, so I slept in late. Lou and her friend decided to go to the wax museum and see several tourist attractions around SF, and I decided to pass the day at home reading. That afternoon, work chat started talking about a Tumblr post by pythonsweetness about an Intel hardware security bug. At the time I definitely did not suspect that this was going to occupy most of my working life for the next (almost) two months. Like many people who work on system security, I had read Anders Fogh's post about a "Negative Result" in speculative execution research in July of 2017. At the time I thought it was an interesting writeup and I remember being glad that researchers were looking into this area. I sent the post to Bryan and asked him about his thoughts on it at the time, to which he replied saying that "it would be shocking if they left a way to directly leak out memory in the speculative execution". None of us seriously thought that there would be low-hanging fruit down that research path, but we also felt it was important that there was someone doing work in the area who was committed to public disclosure. 
At first, after reading the blog post on Monday, we thought (or hoped) that the bug might "just" be a KASLR bypass and wouldn't require a lot of urgency. We tried to reach out to Intel at work to get more information but were met with silence. (We wouldn't hear back from them until after the disclosure was already made public.) The speculation on Tuesday intensified, until finally on Wednesday morning I arrived at the office to find links to late Tuesday night tweets revealing exploits that allowed arbitrary kernel memory reads. Wednesday was not a happy day. Intel finally responded to our emails -- after they had already initiated public disclosure. We all spent a lot of time reading. An arbitrary kernel memory read (an info leak) is not that uncommon as far as bugs go, but for the most part they tend to be fairly easy to fix. The thing that makes the Meltdown and Spectre bugs particularly notable is that in order to mitigate them, a large amount of change is required in very deep low-level parts of the kernel. The kind of deep parts of the kernel where there are 20-year old errata workarounds that were single-line changes that you have to be very careful to not accidentally undo; the kind of parts where, as they say, mortals fear to tread. On Friday we saw the patches Matthew Dillon put together for DragonFlyBSD for the first time. These were the first patches for KPTI that were very straightforward to read and understand, and applied to a BSD-derived kernel that was similar to those I'm accustomed to working on. To mitigate Meltdown (and partially one of the Spectre variants), you have to make sure that speculative execution cannot reach any sensitive data from a user context. This basically means that the pages the kernel uses for anything potentially sensitive have to be unmapped when we are running user code. 
Traditionally, CPUs that were built to run a multi-user, UNIX-like OS did this by default (SPARC is an example of such a CPU which has completely separate address spaces for the kernel and userland). However, x86 descends from a single-address-space microcontroller that has grown up avoiding backwards-incompatible changes, and has never really introduced a clean notion of multiple address spaces (segmentation is the closest feature really, and it was thrown out for 64-bit AMD64). Instead, operating systems for x86 have generally wound up (at least in the post-AMD64 era) with flat address space models where the kernel text and data is always present in the page table no matter whether you're in user or kernel mode. The kernel mappings simply have the "supervisor" bit set on them so that user code can't directly access them. The mitigation is basically to stop doing this: to stop mapping the kernel text, data and other memory into the page table while we're running in userland. Unfortunately, the x86 design does not make this easy. In order to be able to take interrupts or traps, the CPU has to have a number of structures mapped in the current page table at all times. There is also no ability to tell an x86 CPU that you want it to switch page tables when an interrupt occurs. So, the code that we jump to when we take an interrupt, as well as space for a stack to push context onto have to be available in both page tables. And finally, of course, we need to be able to figure out somehow what the other page table we should switch to is when we enter the kernel. When we looked at the patches for Linux (and also the DragonFlyBSD patches at the time) on Friday and started asking questions, it became pretty evident that the initial work done by both was done under time constraints. Both had left the full kernel text mapped in both page tables, and the Linux trampoline design seemed over-complex. 
I started talking over some ideas with Robert Mustacchi about ways to fix these and who we should talk to, and reached out to some of my old workmates from the University of Queensland who were involved with OpenBSD. It seemed to me that the OpenBSD developers would care about these issues even more than we did, and would want to work out how to do the mitigation right. I ended up sending an email to Philip Guenther on Friday afternoon, and on Saturday morning I drove an hour or so to meet up with him for coffee to talk page tables and interrupt trampolines. We wound up spending a good six hours at the coffee shop, and I came back with several pages of notes and a half-decent idea of the shape of the work to come. One detail we missed that day was the interaction of per-CPU structures with per-process page tables. Much of the interrupt trampoline work is most easily done by using per-CPU structures in memory (and you definitely want a per-CPU stack!). If you combine that with per-process page tables, however, you have a problem: if you leave all the per-CPU areas mapped in all the processes, you will leak information (via Meltdown) about the state of one process to a different one when taking interrupts. In particular, you will leak things like %rip, which ruins all the work being done with PIE and ASLR pretty quickly. So there are two options: you can either allocate the per-CPU structures per process (so you end up with $NCPUS * $NPROCS of them), or you can make the page tables per-CPU. OpenBSD, like Linux and the other implementations so far, decided to go down the road of per-CPU, per-process pages to solve this issue. For illumos, we took the other route. In illumos, it turned out that we already had per-CPU page tables. Robert and I re-discovered this on the Sunday of that week. We use them for 32-bit processes due to having full P>V PAE support in our kernel (which is, as it turns out, relatively uncommon amongst open-source OSes). 
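A back-of-envelope comparison makes the trade-off between the two options concrete. All sizes here are illustrative assumptions (one page per per-CPU trampoline area, one top-level page-table page per CPU), not measurements from illumos, OpenBSD, or Linux:

```cpp
#include <cstdint>

// Assumed sizes, for illustration only.
constexpr uint64_t kPerCpuAreaBytes = 4096; // one trampoline page per area
constexpr uint64_t kTopLevelPtBytes = 4096; // one top-level page-table page

// Option 1: per-CPU areas duplicated per process ($NCPUS * $NPROCS of them).
constexpr uint64_t perProcessCost(uint64_t ncpus, uint64_t nprocs) {
    return ncpus * nprocs * kPerCpuAreaBytes;
}

// Option 2: per-CPU page tables; each CPU carries its own top level.
constexpr uint64_t perCpuCost(uint64_t ncpus) {
    return ncpus * kTopLevelPtBytes;
}
```

Under these assumptions, 64 CPUs and 1,000 processes put option 1 at roughly 250 MiB of trampoline pages, while option 2 needs only 256 KiB of extra top-level pages (the rest of each CPU's page table is shared or updated in place).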
The logic to deal with creating, managing and updating them was all already written, and after reading the code we concluded we could basically make a few small changes and re-use all of it. So we did. By the end of that second week, we had a prototype that could get to userland. But when working on this kind of kernel change we have a rule of thumb: after the first 70% of the patch is done and we can boot again, it's time for the second 70%. In fact, it turned out to be more like the second 200% for us -- a tedious long tail of bugs to solve that ended up necessitating some changes in the design as well. At first we borrowed the method that Matt Dillon used for DragonFlyBSD, putting the temporary "stack" space and state data for the interrupt trampolines into an extra page tacked onto the end of *%gs (in illumos the structure that lives there is the cpu_t). If you read the existing logic in interrupt handlers for dealing with %gs, though, you will quickly notice that the corner cases start to build up. There are a bunch of situations where the kernel temporarily alters %gs, and some of the ways to mess it up have security consequences that end up being worse than the bug we're trying to fix. As it turns out, there are no fewer than three different ways that ISRs on illumos use to try to get the right cpu_t into %gs, and they are all subtly different. Telling which one you should use when requires a bunch of test logic, which in turn requires branches and changes to the CPU state -- difficult to do in a trampoline where you're trying to avoid altering that state as much as possible until you've got the real stack online to push things onto. I kept in touch with Philip Guenther and Mike Larkin from the OpenBSD project throughout the weeks that followed. 
In one of the discussions we had, we talked about the NMI/MCE handlers and the fact that their handling on OpenBSD at the time neglected some nasty corner cases around interrupting an existing trap handler. A big part of the solution to those issues was to use a feature called IST, which allows you to unconditionally change stacks when you take an interrupt. Traditionally, x86 only changes the stack pointer (%rsp on AMD64) while taking an interrupt when there is a privilege level change. If you take an interrupt while already in the kernel, the CPU does not change the stack pointer, and simply pushes the interrupt stack frame onto the stack you're already using. IST makes the change of stack pointer unconditional. Used unwisely, this is a bad idea: if you stay on that stack and turn interrupts back on, you could take another interrupt and clobber the frame you're already in. However, in it I saw a possible way to simplify the KPTI trampoline logic and avoid having to deal with %gs. A few weeks into the project, John Levon joined us at work. He had previously worked on a bunch of Xen-related stuff, as well as on other parts of the kernel very close to where we were working, so he quickly got up to speed with the KPTI work. He and I drafted out a "crazy idea" on the whiteboard one afternoon: we would use IST for all interrupts on the system, and put the "stack" they used in the KPTI page on the end of the cpu_t. The trampolines could then easily use stack-relative addresses to get the page table to change to, then pivot their stack to the real kernel stack memory, and throw away (almost) all the conditional logic. A few days later, we had convinced each other that this was the way to go. Two of the most annoying x86 issues we had to work around were related to the SYSENTER instruction. This instruction is used to make "fast" system calls in 32-bit userland. 
It has a couple of unfortunate properties: firstly, it doesn't save or restore RFLAGS, so the kernel code has to take care of this (and be very careful not to clobber any of it before saving or after restoring it). Secondly, if you execute SYSENTER with the TF ("trap"/single-step flag) set by a debugger, the resulting debug trap's frame points at kernel code instead of the user code where it actually happened. The first one requires some careful gymnastics on the entry and return trampolines specifically for SYSENTER, while the second is a nasty case that is incidentally made easier by using IST. With IST, we can simply make the debug trap trampoline check for whether we took the trap in another trampoline's code, and reset %cr3 and the destination stack. This works for single-stepping into any of the handlers, not just the one for SYSENTER. To make debugging easier, we decided that traps like the debug/single-step trap (as well as faults like page faults, #GP, etc.) would push their interrupt frame in a different part of the KPTI state page to normal interrupts. We applied this change to all the traps that can interrupt another trampoline (based on the instructions we used). These "paranoid" traps also set a flag in the KPTI struct to mark it busy (and jump to the double-fault handler if it is), to work around some bugs where double-faults are not correctly generated. It's been a long and busy two months, with lots of time spent building, testing, and validating the code. We've run it on as many kinds of machines as we could get our hands on, to try to make sure we catch issues. The time we've spent on this has been validated several times in the process by finding bugs that could have been nasty in production. One great example: our patches on Westmere-EP Xeons were causing busy machines to throw a lot of L0 I-cache parity errors. This seemed very mysterious at first, and it took us a few times seeing it to believe that it was actually our fault. 
This was actually caused by the accidental activation of a CPU erratum on Westmere (B52, "Memory Aliasing of Code Pages May Cause Unpredictable System Behaviour") -- it turned out we had made a typo and put the "cacheable" flag into a variable named flags instead of attrs, where it belonged, when setting up the page tables. This was causing performance degradation on other machines, but on Westmere it caused cache parity errors as well. This is a great example of the surprising consequences that small mistakes in this kind of code can end up having. In the end, I'm glad that that erratum existed, otherwise it might have been a long time before we caught the bug. As of this week, Mike and Philip have committed the OpenBSD patches for KPTI to their repository, and the patches for illumos are out for review. It's a nice kind of symmetry that the two projects that started on the work together right after the public disclosure are both almost ready to ship at the same time at the other end. I'm feeling hopeful, and looking forward to further future collaborations like this with our cousins, the BSDs. The illumos work has since landed, on March 12th (https://github.com/joyent/illumos-joyent/commit/d85fbfe15cf9925f83722b6d62da49d549af615c) *** OpenBSD Email Service (https://github.com/vedetta-com/caesonia) Features:
- Efficient: configured to run on min. 
512MB RAM and 20GB SSD, a KVM (cloud) VPS for around $2.50/mo
- 15GB+ uncompressed Maildir, rivals top free-email providers (grow by upgrading the SSD)
- Email messages are gzip-compressed, at least 1/3 more space with the level 6 default
- Server-side full-text search (headers and body) can be enabled (to use the extra space)
- Mobile-data friendly: IMAPS connections are compressed
- Subaddress (+tag) support, to filter and monitor email addresses
- Virtual domains, aliases, and credentials in files, Berkeley DB, or SQLite3
- Naive Bayes rspamd filtering with supervised learning: the lowest false-positive spam detection rates
- Carefree automated Spam/ and Trash/ cleaning service (default: older than 30 days)
- Automated quota management, gently assists when over quota
- Easy backup MX setup: using the same configuration, install in minutes on a different host
- Worry-free automated master/master replication with the backup MX, prevents accidental loss of email messages
- Resilient: the backup MX can be used as primary, even when the primary is not down, both perfect replicas
- Flexible: switching roles is easy, making the process of changing VPS hosts a breeze (no downtime)
- DMARC (with DKIM and SPF) email-validation system, to detect and prevent email spoofing
- Daily (spartan) stats, to keep track of things
- Your sieve scripts and managesieve configuration, let's get started
Considerations: By design, email message headers need to be public for exchanges to happen. The body of the message can be encrypted by the user, if desired. Moreover, there is no way to prevent the host from having access to the virtual machine. Therefore, full disk encryption (at rest) may not be necessary. Given the low memory requirements and the single-purpose concept of an email service, Roundcube or other web-based IMAP email clients should be on a different VPS. Antivirus software users (usually) have the service running on their devices. 
ClamAV can easily be incorporated into this configuration, if affected by the types of malware it protects against, but will require around 1GB additional RAM (or another VPS). Every email message is important, if properly delivered, for Bayes classification. At least 200 ham and 200 spam messages are required to learn what one considers junk. By default (change to use case), an rspamd score above 50% will send the message to Spam/. Moving messages in and out of Spam/ changes this score. Above 95%, the message is flagged as "seen" and can be safely ignored. Spamd is effective at greylisting and stopping high-volume spam, if it becomes a problem. It will be an option when IPv6 is supported, along with bgp-spamd. System mail is delivered to an alias mapped to a virtual user served by the service. This way, messages are guaranteed to be delivered via an encrypted connection. With the default configuration, it is not possible for real users to create aliases, nor to mail an external address. e.g. puffy@mercury.example.com is wheel, with an alias mapped to (virtual) puffy@example.com, and user (puffy) can be different for each. Interview - Ryan Zezeski - rpz@joyent.com (mailto:rpz@joyent.com) / @rzezeski (https://twitter.com/rzezeski) News Roundup John Carmack's programming retreat to hermit coding with OpenBSD (https://www.facebook.com/permalink.php?story_fbid=2110408722526967&id=100006735798590) After a several-year gap, I finally took another week-long programming retreat, where I could work in hermit mode, away from the normal press of work. My wife has been generously offering it to me the last few years, but I'm generally bad at taking vacations from work. As a change of pace from my current Oculus work, I wanted to write some from-scratch-in-C++ neural network implementations, and I wanted to do it with a strictly base OpenBSD system. Someone remarked that it is a pretty random pairing, but it worked out ok. 
Despite not having actually used it, I have always been fond of the idea of OpenBSD — a relatively minimal and opinionated system with a cohesive vision and an emphasis on quality and craftsmanship. Linux is a lot of things, but cohesive isn't one of them. I'm not a Unix geek. I get around ok, but I am most comfortable developing in Visual Studio on Windows. I thought a week of full immersion work in the old school Unix style would be interesting, even if it meant working at a slower pace. It was sort of an adventure in retro computing — this was fvwm and vi. Not vim, actual BSD vi. In the end, I didn't really explore the system all that much, with 95% of my time in just the basic vi / make / gdb operations. I appreciated the good man pages, as I tried to do everything within the self contained system, without resorting to internet searches. Seeing references to 30+ year old things like Tektronix terminals was amusing. I was a little surprised that the C++ support wasn't very good. G++ didn't support C++11, and LLVM C++ didn't play nicely with gdb. Gdb crashed on me a lot as well, I suspect due to C++ issues. I know you can get more recent versions through ports, but I stuck with using the base system. In hindsight, I should have just gone full retro and done everything in ANSI C. I do have plenty of days where, like many older programmers, I think “Maybe C++ isn't as much of a net positive as we assume...”. There is still much that I like, but it isn't a hardship for me to build small projects in plain C. Maybe next time I do this I will try to go full emacs, another major culture that I don't have much exposure to. I have a decent overview understanding of most machine learning algorithms, and I have done some linear classifier and decision tree work, but for some reason I have avoided neural networks. 
On some level, I suspect that Deep Learning being so trendy tweaked a little bit of contrarian in me, and I still have a little bit of a reflexive bias against “throw everything at the NN and let it sort it out!” In the spirit of my retro theme, I had printed out several of Yann LeCun's old papers and was considering doing everything completely offline, as if I was actually in a mountain cabin somewhere, but I wound up watching a lot of the Stanford CS231N lectures on YouTube, and found them really valuable. Watching lecture videos is something that I very rarely do — it is normally hard for me to feel the time is justified, but on retreat it was great! I don't think I have anything particularly insightful to add about neural networks, but it was a very productive week for me, solidifying “book knowledge” into real experience. I used a common pattern for me: get first results with hacky code, then write a brand-new and clean implementation with the lessons learned, so they both exist and can be cross-checked. I initially got backprop wrong both times; comparison with numerical differentiation was critical! It is interesting that things still train even when various parts are pretty wrong — as long as the sign is right most of the time, progress is often made. I was pretty happy with my multi-layer neural net code; it wound up in a form that I can just drop into future efforts. Yes, for anything serious I should use an established library, but there are a lot of times when just having a single .cpp and .h file that you wrote every line of is convenient. My conv net code just got to the hacky-but-working phase; I could have used another day or two to make a clean and flexible implementation. 
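The numerical-differentiation cross-check he mentions is easy to illustrate. A minimal sketch (a single weight with squared loss, not his actual network code): compute the backprop-style analytic derivative and compare it against a central finite difference — if they disagree beyond floating-point noise, the gradient code is wrong.

```cpp
#include <cmath>

// Toy "network": one weight w, squared loss against target y.
double loss(double w, double x, double y) {
    double d = w * x - y;
    return d * d;
}

// Analytic gradient, as backprop would compute it: d/dw (wx - y)^2.
double analytic_grad(double w, double x, double y) {
    return 2.0 * (w * x - y) * x;
}

// Central finite difference, the independent check.
double numeric_grad(double w, double x, double y, double eps = 1e-5) {
    return (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2.0 * eps);
}
```

In a real implementation the same comparison is run per parameter on a small network; a relative error much above about 1e-6 for double precision is a strong sign the backprop is buggy.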
One thing I found interesting was that when testing on MNIST with my initial NN before adding any convolutions, I was getting significantly better results than the non-convolutional NN reported for comparison in LeCun ’98 — right around 2% error on the test set with a single 100-node hidden layer, versus 3% for both wider and deeper nets back then. I attribute this to the modern best practices — ReLU, Softmax, and better initialization. This is one of the most fascinating things about NN work — it is all so simple, and the breakthrough advances are often things that can be expressed with just a few lines of code. It feels like there are some similarities with ray tracing in the graphics world, where you can implement a physically based light transport ray tracer quite quickly, and produce state-of-the-art images if you have the data and enough runtime patience. I got a much better gut-level understanding of overtraining / generalization / regularization by exploring a bunch of training parameters. On the last night before I had to head home, I froze the architecture and just played with hyperparameters. “Training!” is definitely worse than “Compiling!” for staying focused. Now I get to keep my eyes open for a work opportunity to use the new skills! I am dreading what my email and workspace are going to look like when I get into the office tomorrow. Stack-register Checking (https://undeadly.org/cgi?action=article;sid=20180310000858) Recently, Theo de Raadt (deraadt@) described a new type of mitigation he has been working on together with Stefan Kempf (stefan@): How about we add another new permission! This is not a hardware permission, but a software permission. It is opportunistically enforced by the kernel. The permission is MAP_STACK. If you want to use memory as a stack, you must mmap it with that flag bit. The kernel does so automatically for the stack region of a process. Two other types of stack occur: thread stacks, and alternate signal stacks. 
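As a hedged sketch of what the tightened ABI asks of programs that allocate their own stacks, an allocation conforming to the new rule might look like this (MAP_STACK is also defined on Linux, where it is merely a hint; the enforcement described here is OpenBSD's):

```cpp
#include <sys/mman.h>
#include <cstddef>

// Allocate anonymous memory explicitly marked as stack memory.
// Without MAP_STACK, pointing the stack register at this region
// would get the process killed under the new check.
void *alloc_stack(std::size_t len) {
    void *p = mmap(nullptr, len,
                   PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON | MAP_STACK,
                   -1, 0);
    return p == MAP_FAILED ? nullptr : p;
}
```

Thread stacks and alternate signal stacks are the two cases where programs commonly supply their own stack memory like this, rather than using the stack region the kernel sets up automatically.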
Those are handled in clever ways. When a system call happens, we check if the stack-pointer register points to such a page. If it doesn't, the program is killed. We have tightened the ABI. You may no longer point your stack register at non-stack memory. You'll be killed. This checking code is MI (machine-independent), so it works for all platforms. For more detail, see Theo's original message (https://marc.info/?l=openbsd-tech&m=152035796722258&w=2). This is now available in snapshots, and people are finding the first problems in the ports tree already. So far, few issues have been uncovered, but as Theo points out, more testing is necessary: Fairly good results. A total of 4 problems have been found so far. go, SBCL, and two cases in src/regress which failed the new page-alignment requirement. The SBCL and go ones were found at build time, since they use themselves to complete their builds. But more page-alignment violations may be found in ports at runtime. This is something I worry about a bit. So please, everyone out there can help: use snapshots which contain the stack-check diff, update to new packages, and test all possible packages. Really need a lot of testing for this, so please help out. So, everybody, install the latest snapshot and try all your favorite ports. This is the time to report issues you find, so there is a good chance this additional security feature is present in 6.3 (and works with third-party software from packages). NomadBSD 1.0 has been released (https://freeshell.de/~mk/projects/nomadbsd.html) NomadBSD is a live system for flash drives, based on FreeBSD® 11.1 (amd64) Change Log: The setup process has been improved. Support for optional geli encryption of the home partition has been added. Auto-detection of NVIDIA graphics cards and their corresponding driver has been added. (Thanks to holgerw and lme from BSDForen.de) An rc script to start the GEOM disk scheduler on the root device has been added. 
More software has been added: accessibility/redshift (starts automatically) audio/cantata audio/musicpd audio/ncmpc ftp/filezilla games/bsdtris mail/neomutt math/galculator net-p2p/transmission-qt5 security/fpm2 sysutils/bsdstats x11/metalock x11/xbindkeys Several smaller improvements and bugfixes. Screenshots https://freeshell.de/~mk/projects/nomadbsd-ss1.png https://freeshell.de/~mk/projects/nomadbsd-ss2.png https://freeshell.de/~mk/projects/nomadbsd-ss3.png https://freeshell.de/~mk/projects/nomadbsd-ss4.png https://freeshell.de/~mk/projects/nomadbsd-ss5.png https://freeshell.de/~mk/projects/nomadbsd-ss6.png Beastie Bits KnoxBug - Nagios (http://knoxbug.org/2018-03-27) vBSDcon videos landing (https://www.youtube.com/playlist?list=PLfJr0tWo35bc9FG_reSki2S5S0G8imqB4) AsiaBSDCon 2017 videos (https://www.youtube.com/playlist?list=PLnTFqpZk5ebBTyXedudGm6CwedJGsE2Py) DragonFlyBSD Adds New "Ptr_Restrict" Security Option (https://www.phoronix.com/scan.php?page=news_item&px=DragonFlyBSD-Ptr-Restrict) A Dexter needs your help (https://twitter.com/michaeldexter/status/975603855407788032) Mike Larkin at bhyvecon 2018: OpenBSD vmm(4) update (https://undeadly.org/cgi?action=article;sid=20180309064801) [HEADS UP] - OFED/RDMA stack update (https://lists.freebsd.org/pipermail/freebsd-arch/2018-March/018900.html) *** Feedback/Questions Ron - Interview someone using DragonflyBSD (http://dpaste.com/3BM6GSW#wrap) Brad - Gaming and all (http://dpaste.com/3X4ZZK2#wrap) Mohammad - Sockets vs TCP (http://dpaste.com/0PJMKRD#wrap) Paul - All or at least most of Bryan Cantrill's Talks (http://dpaste.com/2WXVR1X#wrap) ***
On May 31st, one of the sessions of the BlackBerry 10 Jam World Tour took place in Barcelona, where RIM presented to developers the new features of version 10 of its operating system. Diego (@dfreniche) was there, attended the sessions and workshops, and tried out the different SDKs. Over the course of the program [...]