Multiplexing: a method of combining multiple signals into one signal over a shared medium.
In this milestone 50th episode of the Nextflow podcast, host Phil Ewels sits down with Krešimir Beštak, a PhD student and active contributor to the nf-core community, to explore an exciting frontier in bioinformatics: microscopy and spatial omics. While Nextflow is traditionally associated with genomics workflows, Krešimir shares how it's being used to power image analysis pipelines like MCMICRO, supporting complex research into cardiovascular disease and cancer diagnostics. Based at the University Hospital Heidelberg, Krešimir discusses his transition from master's student to PhD researcher, the translational applications of spatial proteomics, and how Nextflow enables reproducible workflows far beyond its original scope.

00:00 Podcast Ep 50: Krešimir
00:09 Welcome and introductions
03:13 Introduction to Spatial Omics
04:26 Multiplexing markers
06:38 Metabolite microscopy
07:23 Myocardial multiomics
08:58 Microscopy data analysis
11:32 nf-core/mcmicro
13:05 2D vs 3D microscopy
13:38 Computational bottlenecks within analysis
15:43 Other nf-core imaging pipelines
18:36 Downstream analysis after molkart
19:53 Manual interventions
23:27 Minerva
25:39 Google Maps for cells
27:16 Microscopy community around Nextflow
30:34 How to get involved
31:38 Changes in Nextflow for microscopy
32:43 Nextflow Ambassador program
33:17 Conclusion
The potential of mRNA medicines was postulated for years, but it took the COVID pandemic and emergency use authorizations for that potential to be demonstrated. By now, most of us have received at least one mRNA-based vaccine, and the platform has been mostly derisked. However, if you're not one of the major players in this space, generating high-purity mRNA, let alone a GMP-grade mRNA-based drug product, can still be quite challenging. Dr. Christian Cobaugh, CEO of Vernal Biosciences in Vermont, has been working in the mRNA field for more than a decade and is passionate about the potential of mRNA medicines. He's also been in the field long enough to know firsthand the challenges of high-purity mRNA and lipid nanoparticle supply. Join us as Christian walks us through his story, the start of Vernal Biosciences, and their progress toward their mission of democratizing access to mRNA technology. Our conversation touches on the molecular biology of making mRNA, the use of digital PCR and other methods in monitoring development and release of mRNA drug products, and the potential applications of mRNA as a platform (some of which you might not have guessed). Whether you're new to the technology or have chosen mRNA as a focus area, you're sure to find this conversation engaging and intriguing, and our guest insightful. Visit the Absolute Gene-ius page to learn more about the guests, the hosts, and the Applied Biosystems QuantStudio Absolute Q Digital PCR System.
A step-by-step process on how to multiplex a property:
Zoning and Land Use Regulations
Property Acquisition
Permits and Approvals
Building Codes and Regulations
Financing and Budgeting
Construction and Project Management

Attend Our Event On Multiplexes: Sign Up For The Next Webinar
Realist: Join the best community in Canadian Real Estate, realist.ca
Attend a Meetup: Meetups
Get a Pre-Approval: G & H Mortgage Group
Get Financing with Landbank: LandBank
Nick: instagram.com/mybuddynick, tiktok.com/@mybuddynick, twitter.com/mybuddynick89
Dan: twitter.com/daniel_foch, instagram.com/danielfoch, tiktok.com/@danielfoch
See omnystudio.com/listener for privacy information.
Organ transplantation is a modern marvel, with more than 157,000 solid organ, and more than 9,000 marrow and blood, transplants occurring worldwide in 2022. Organ donor and recipient matching and compatibility screening have advanced significantly in recent decades as molecular methods have progressed rapidly to support this and other fields. Specifically, typing of human leukocyte antigens (HLAs) has expanded to consider ethnic population variation, and cell-free DNA (cfDNA) monitoring is now being used to monitor recipients for biomarkers that indicate organ rejection. Our guest for this episode, Dr. Lee Ann Baxter-Lowe, Director of the HLA Laboratory at Children's Hospital Los Angeles, has been working in the field of transplantation science for virtually her entire career. Join us for a great explanation of the science and a first-hand recounting of developing the assays, from decades ago, before thermal cyclers existed, to her cutting-edge work using digital PCR to progress the field even further. Lee Ann also shares very personal aspects of her career journey in her conversation with Cassie. This includes her describing the scientific “studies” of her and her cousin as children, her venturing into the world of HLA typing when it was emerging, and the role her family has played in her career, which gets personal quickly when she shares that her husband is currently dealing with a blood malignancy. Visit the Absolute Gene-ius page to learn more about the guests, the hosts, and the Applied Biosystems QuantStudio Absolute Q Digital PCR System.
Polymerase chain reaction (PCR) was invented in 1983 by Kary Mullis, who shared the 1993 Nobel Prize in Chemistry with Michael Smith. Since then, PCR has been a cornerstone method and a pillar of discovery and applied science. The various types of PCR are sometimes confusing, and the relative pros and cons of each method are not always clear, which is why it's so great to have this episode's guest explain them all in a simple and clear-cut way. Dave Bauer, PhD, is an Application Scientist at Thermo Fisher Scientific who specializes in real-time PCR (qPCR) and digital PCR (dPCR). He has an educational background in physics, mathematics, and biology, but what's more important is that Dave loves to help others learn and to break down a topic's complexities to make it more understandable and approachable. In this episode we hear Dave explain the difference between qPCR and dPCR, the importance of Poisson statistics to dPCR, dead volume, reaction chamber volume consistency, and more. We learn how qPCR and dPCR complement each other and how they relate to sequencing methods for applications like single nucleotide polymorphism (SNP) detection. As you've come to expect from Absolute Gene-ius, you also get a good sense of who Dave is and how he got to his current role. We learn about how he knew right away that academia wasn't for him, how he ended up unexpectedly working in forensics after his PhD, and how he eventually landed in his current Application Scientist role. Dave shares some great insights and advice, including how students should care less about their degree's name and more about what techniques they're learning and using in their studies.
Visit the Absolute Gene-ius page to learn more about the guest, the hosts, and the Applied Biosystems QuantStudio Absolute Q Digital PCR System. This episode includes the following sound effects from freesound.org, licensed under CC BY-ND 4.0:
“Sax Jazz,” by alonart
“Balloon Pop / Christmas cracker / Confetti Cannon,” by Breviceps
“Crowd Cheering,” by SoundsExciting
In the first part of this podcast episode, Shiv Sivaguru is in conversation with Vadeesh Budramane, Founder and CEO of AlgoShack, who shares his career journey.
Vadeesh talks about starting out in embedded programming with Time Division Multiplexing, building and learning numerous protocols and standards.
He has also multiplexed his time across different domains, such as automotive, medical instruments, healthcare, and telecommunications.
Vadeesh shares why he started AlgoShack, a firm focused on QE, to combat the challenge of building quality software by automating test automation.
He shares his experiences on how critical DevOps processes are in the medical field and the importance of thinking about the holistic environment.
He describes how he built a keen sense of the telecom domain and its different protocols.
When he took up assignments in healthcare, the type of stakeholders and touchpoints turned him back into a learner of the domain.
Vadeesh shares his passion for teaching and learning, thanks his Wipro days for inculcating it, and notes it was further strengthened while working for HCG.
Vadeesh decided to start a firm in test automation for embedded systems, and we will hear more about the start-up journey in the next episode.

Vadeesh is currently CEO at AlgoShack and has held several roles, including Senior Vice President at Sutherland Global Services, Director & Head of Healthcare Vertical at Computer Sciences Corporation, and Managing Director at FCG. Vadeesh has 32 years of experience focused on product engineering, innovation, and IP creation. In his 15+ years in senior leadership positions, Vadeesh has been responsible for strategic planning, end-to-end operations, and P&L. He has built globally competitive leadership teams, orchestrated strategic customer engagements, and created significant value for stakeholders. Vadeesh is a leader with experience in managing offshore delivery centers and vertical delivery with a team size of 3000+ people and a $250m P&L.
He has experience across the healthcare, ISV, telecom, and real-time embedded systems industry verticals, and across the North America, UK, and Europe markets, including global delivery, customer engagement, business development, and P&L responsibilities. He is a leader with hands-on experience in seeding and developing globally competitive leadership teams, developing competencies, transforming delivery organizations, and driving business models for outcomes. Vadeesh can be contacted at https://www.linkedin.com/in/vadeeshbudramane/
Visit the Absolute Gene-ius page to learn more about the guest, the hosts, and the Applied Biosystems QuantStudio Absolute Q Digital PCR System.

The details of what make digital PCR (dPCR) different from real-time, or quantitative, PCR (qPCR) are relatively simple but not always explained very well. Likewise, it's not always clear which use cases are a good fit for dPCR, and which simply don't require its power. The power of digital PCR is real, if you understand it.

In this episode we enlist Marcia Slater, a self-described “PCR guru,” to explain digital PCR and its power. She covers the basic differences between dPCR and qPCR and then delves into the details of where dPCR derives its power and where it shines. With over 20 years' experience in helping customers troubleshoot PCR, Marcia makes it easy to understand key terms and concepts related to dPCR, including:
Sub-reactions
Poisson statistics
Statistical power and confidence intervals
Controls and false negatives vs. true negatives
Dead volume
Dynamic range
Multiplexing

Marcia also covers some great examples of where the absolute quantification of dPCR is a great fit and how it's even used to qualify and quantify standards for qPCR. Multiplexing, and how it's used for molecular integrity evaluations in gene therapy applications, is also discussed. As always with the Gene-ius series, you'll also get to learn about more than Marcia's science chops. We learn about her unlikely career path from growing up on a livestock farm to her storied role in helping produce “data so beautiful it should be framed.” We even get into her rediscovered love of raising animals, including her beloved panda alpaca with a name you cannot forget!
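Poisson statistics is the heart of dPCR's absolute quantification: because a partition can hold more than one target copy, concentration is inferred from the fraction of negative partitions rather than from the positive count directly. A minimal sketch of the generic calculation (the partition counts and volume below are illustrative, not tied to any particular instrument):

```python
import math

def dpcr_copies_per_partition(n_total, n_negative):
    """Estimate mean target copies per partition (lambda) from the
    fraction of negative partitions, via the Poisson zero term:
    P(0) = exp(-lambda)  =>  lambda = -ln(n_negative / n_total)."""
    p_neg = n_negative / n_total
    return -math.log(p_neg)

def dpcr_concentration(n_total, n_negative, partition_volume_ul):
    """Target concentration in copies per microliter of partitioned volume."""
    lam = dpcr_copies_per_partition(n_total, n_negative)
    return lam / partition_volume_ul

# Illustrative run: 20,000 partitions, 14,000 negative, 0.75 nL partitions.
lam = dpcr_copies_per_partition(20000, 14000)   # ~0.357 copies/partition
conc = dpcr_concentration(20000, 14000, 0.75e-3)
```

Note how lambda, not the raw positive fraction (0.30 here), is the unbiased estimate: at higher loadings the two diverge sharply, which is exactly why dPCR retains a wide dynamic range.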
In this episode of Elixir Wizards, Owen and Dan talk to Mat Trudel, Phoenix contributor and creator of the Bandit web server, about the future of Phoenix, web transports, and HTTP/3. Mat explains the challenges and benefits of implementing HTTP/3 support in Phoenix, provides in-depth insights into the evolution of web protocols, and encourages developers to keep pushing the boundaries of web development and to contribute to the growth of the open-source community.

Main topics discussed in this episode:
The evolution of web protocols and how HTTP/3 is changing the landscape
The challenges and benefits of implementing HTTP/3 support in Phoenix
How a home AC project revealed a gap in web server testing tools and inspired Bandit
How web transports like Cowboy and Ranch are used to build scalable web servers
WebSock for multiplexing data over a single WebSocket connection
Mat's philosophy on naming projects and his passion for malapropisms
The Bandit project and how it can help developers better understand web protocols
Autobahn, a testing suite for WebSocket protocol specification conformance
The importance of community involvement in open-source projects
Encouragement for more people to use Bandit and report bugs

Links mentioned:
SmartLogic is Hiring: https://smartlogic.io/about/jobs
PagerDuty: https://www.pagerduty.com
Phoenix Framework: https://www.phoenixframework.org/
Cowboy: https://ninenines.eu/docs/en/cowboy/2.9/guide/introduction/
Ranch: https://github.com/ninenines/ranch
Bandit: https://hexdocs.pm/bandit/Bandit.html
Autobahn: https://github.com/crossbario/autobahn-testsuite
HTTP Cats: https://http.cat/
Mat Trudel at Empex 2022, "A Funny Thing Happened On The Way To The Phoenix": https://www.youtube.com/watch?v=FtZBTUvRt0g
Thousand Island: https://hexdocs.pm/thousand_island/ThousandIsland.html

Special Guest: Mat Trudel.
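The kind of multiplexing the episode attributes to WebSock, carrying several logical streams over one WebSocket connection, boils down to tagging each frame with a stream identifier and routing on it at the far end. A toy sketch (the JSON framing here is invented for illustration; it is not WebSock's or Bandit's actual wire format):

```python
import json

def mux_frame(stream_id, payload):
    """Wrap a payload with a stream identifier so several logical
    streams can share one physical WebSocket connection."""
    return json.dumps({"stream": stream_id, "data": payload})

def demux_frames(frames):
    """Route incoming frames back into per-stream message lists."""
    streams = {}
    for raw in frames:
        frame = json.loads(raw)
        streams.setdefault(frame["stream"], []).append(frame["data"])
    return streams

# Two logical streams interleaved on one "wire":
wire = [mux_frame(1, "a"), mux_frame(2, "b"), mux_frame(1, "c")]
# demux_frames(wire) -> {1: ["a", "c"], 2: ["b"]}
```

HTTP/2 and HTTP/3 do the same thing at the protocol level with binary stream IDs, which is why a single connection can serve many concurrent requests.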
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.02.18.528865v1?rss=1 Authors: Yuan, L., Chen, X., Zhan, H., Gilbert, H. L., Zador, A. M. Abstract: Neurons in the cortex are heterogeneous, sending diverse axonal projections to multiple brain regions. Unraveling the logic of these projections requires single-neuron resolution. Although a growing number of techniques have enabled high-throughput reconstruction, these techniques are typically limited to dozens or at most hundreds of neurons per brain, requiring that statistical analyses combine data from different specimens. Here we present axonal BARseq, a high-throughput approach based on reading out nucleic acid barcodes using in situ RNA sequencing, which enables analysis of even densely labeled neurons. As a proof of principle, we have mapped the long-range projections of greater than 8000 mouse primary auditory cortex neurons from a single brain. We identified major cell types based on projection targets and axonal trajectory. The large sample size enabled us to systematically quantify the projections of intratelencephalic (IT) neurons, and revealed that individual IT neurons project to different layers in an area-dependent fashion. Axonal BARseq is a powerful technique for studying the heterogeneity of single neuronal projections at high throughput within individual brains. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.12.28.522139v1?rss=1 Authors: Roig, J. L. A., Di Bello, F., Hassen, S. B. H., Astrand, E., Hamed, S. B. Abstract: The frontal eye field (FEF) is a prefrontal cortical area classically associated with spatial attention, perception, and oculomotor functions. FEF exhibits complex response properties through mixed selectivity neurons, allowing a high-dimensional representation of the information. However, recent studies have shown that FEF encodes information in a low-dimensional regime, hence limiting the coding capacity of the neural population. How the FEF encodes multiple sources of information with such limited encoding capacity remains elusive. To address this question, we trained two macaques to perform a visual attention task while we recorded FEF neuronal activity using multi-contact electrodes. FEF neurons encoded task- (spatial location of cue, POS; time in trial, CTOA) and behaviour- (reaction time, RT; focus of attention, TA) related parameters prior to target onset. We found a clear modulation of the RT and TA as a function of the CTOA. Using dPCA, we characterized the functional relationship between neural populations associated with each parameter and investigated how this functional relationship predicts behavior. We found that CTOA variability was associated with two different components, the activation of which was correlated with the TA and the RT, respectively. These CTOA-related components were non-orthogonal with the RT- and TA-related components. These results suggest that when different sources of information are implemented during task performance, they show a very precise geometrical configuration in non-orthogonal components. This allows high simultaneous information encoding capacity at the cost of an impaired capacity to use attention information and enhanced responsiveness toward external stimuli. Copy rights belong to original authors.
Visit the link for more info. Podcast created by Paper Player, LLC
Information theory is the research discipline that establishes the fundamental limits for information transfer, storage, and processing. Major advances in wireless communications have often been a combination of information-theoretic predictions and engineering efforts that turn them into mainstream technology. Erik G. Larsson and Emil Björnson invited the information-theorist Giuseppe Caire, Professor at TU Berlin, to discuss how the discipline is shaping current and future wireless networks. The conversation first covers the journey from classical multiuser information theory to Massive MIMO technology in 5G. The rest of the episode goes through potential future developments that can be assessed through information theory: distributed MIMO, orthogonal time-frequency-space (OTFS) modulation, coded caching, reconfigurable intelligent surfaces, terahertz bands, and the use of ever larger numbers of antennas. The following papers are mentioned: “OTFS vs. OFDM in the Presence of Sparsity: A Fair Comparison” (https://doi.org/10.1109/TWC.2021.3129975) , “Joint Spatial Division and Multiplexing”(https://arxiv.org/pdf/1209.1402.pdf), and “Massive MIMO has Unlimited Capacity” (https://arxiv.org/pdf/1705.00538.pdf). Music: On the Verge by Joseph McDade. Visit Erik's website https://liu.se/en/employee/erila39 and Emil's website https://ebjornson.com/
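The spatial multiplexing at the core of the Massive MIMO discussion has a simple information-theoretic signature: capacity grows linearly in the number of parallel spatial streams but only logarithmically in SNR. A sketch of the idealized sum rate (assuming equal per-stream SNR and perfectly separated streams, which real channels only approximate):

```python
import math

def spatial_multiplexing_capacity(n_streams, snr_db):
    """Idealized sum rate (bits/s/Hz) of n parallel spatial streams,
    each seeing the given per-stream SNR: n * log2(1 + SNR)."""
    snr = 10 ** (snr_db / 10)
    return n_streams * math.log2(1 + snr)

# Doubling the number of streams doubles the rate at fixed per-stream SNR,
# whereas doubling the SNR only adds about one bit/s/Hz per stream --
# the information-theoretic argument for adding antennas rather than power.
```

This is the multiplexing-gain intuition; the papers cited in the episode analyze how closely practical systems, with channel estimation errors and interference, can approach it.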
The big trend in oncology over the past few years has been immunotherapy. This trend is converging with the digital transformation we are undergoing in pathology: we seem to be at the crossroads of immuno-oncology, digital pathology, image analysis, artificial intelligence, and the increasing ability to multiplex. Our guest is Keith Wharton Jr., MD, PhD from Ultivue, a company looking to reveal actionable biology through multiplex immunofluorescence to make immunotherapy a reality for patients with cancer. Keith is a board-certified anatomic pathologist with diverse achievements in research, drug and diagnostic development, and clinical investigation. He leads Ultivue's Pathology and Biomarker Analytics team. We'll be discussing the need for multiplexing: How much information can multiplexing add above and beyond standard methods? When do we reach the point of diminishing returns? What are some challenges to implementation, and what are the risks of complexity and system failure? In addition, we'll talk about specific applications for immunofluorescence and multiplexing versus the current state of the practice, including immunohistochemistry and quantifying tumor-infiltrating lymphocytes on good old-fashioned H&E sections.
Among the topics Lu Rahman will be discussing with Wylie are the key challenges scientists face with biomarker and drug discovery and how high-throughput screening can help alleviate them. High-throughput screening with multiplexing is valuable for drug discovery, and Wylie will provide insight on this as well as offering expert opinion on the future of multiplex immunoassays in drug and biomarker discovery. Tune in to hear what critical factors Wylie believes researchers should consider when choosing an assay for high-throughput screening, as well as to gain a greater understanding of robustness and assay reproducibility. Wylie both outlines the challenges in this field of drug discovery and suggests ways in which they can be overcome. Listen now.
Our guest is John Waller, PhD, Chief Operations Officer & Founder of OracleBio. John has over 15 years' experience as a biology project leader in the pharmaceutical industry, for companies including AstraZeneca and Merck & Co. He has expertise in the integration of translational biomarkers into drug discovery programs and considerable experience in developing in vitro, ex vivo, and in vivo models involving image analysis across numerous therapeutic areas. OracleBio is a global leader in quantitative digital pathology, providing image analysis services to pharma and biotech clients worldwide. Leveraging multiple software platforms, the company delivers robust data packages within a quality management framework to support clinical trials and translational research. As image analysis experts, OracleBio specializes in cellular phenotyping of multiplex-stained tissue and has built a strong reputation as the go-to company for complex image analysis. Their mission is to enhance decision-making within R&D by leveraging digital pathology to deliver robust data and actionable insights. We are talking about the role of digital pathology in drug development and drug discovery. What are machine learning and deep learning? How are they different from artificial intelligence? And what role do they play in image analysis and digital pathology? How is multiplexing evolving, what are its limits, and how is it going to change what we do? What is the need for cloud computing in digital pathology? What are the advantages and disadvantages of putting our data and processes into the cloud?
Sarah Callori is a tough act to follow, but Clinton Reese joins Carmela this week to see if he can make the connections and come back again next week! Here are today's clues: 1. Sidekick, Check, Deck Officer, Procreate. 2. Pressure, Orange, Sausage, Sedaka and Swift Song. 3. Jack, Number, Tag, Smart. 4. Branch, Long, 70s Band Who Were Renamed "New Order", Forms of Multiplexing.
This episode is brought to you by Visiopharm.

In this third and last episode of the multiplex mini-series with Regan Baird from Visiopharm, we look at the considerations when choosing an image analysis software for phenotyping. The two main points to consider are segmentation assistance and data visualization.

Segmentation assistance:
Before different markers are attributed to different cells in the tissue and cell phenotypes are determined, cell boundaries need to be delineated. The automatic delineation of these boundaries by image analysis software is called cell segmentation. Cells in tissue slides can have different shapes and sizes, which depend on the plane of sectioning, the heterogeneity of the investigated tissue, and the disease stage. This makes the task of segmentation challenging. Unlike in single-cell confocal microscopy images, where the cell borders are very well demarcated, in tissue they often need to be estimated. A separate segmentation (e.g., membrane) marker can help significantly, but a perfect cell segmentation is not attainable. To best estimate the cell boundaries, rule-based classical computer vision approaches or artificial intelligence (AI)-powered approaches can be used. In rule-based approaches, we are working with well-defined features on which the segmentation is based, but we need to make concessions. AI-powered models are only as good as the examples we train them on. To combine the advantages of both, Visiopharm offers AI-based nuclear segmentation as the starting point and a rule-based, marker-based second step to obtain the most reliable cell segmentation for phenotyping.

Data visualization:
The adequate visualization and handling of the obtained data depend on the software used.
To understand and interpret the multidimensional multiplex and phenotyping data, we need to interpret graphs, plots, two-dimensional reduction plots, and other data visualizations for all the images in multiplex studies. In order to evaluate how well the phenotyping has performed and to export meaningful results, the correct visualization tools need to be used. If you need assistance or have questions about multiplexing and phenotyping, visit Visiopharm’s website and contact the Visiopharm team.

This episode’s resources:
Multiplexing mini-series Part 1: Introduction to multiplex for tissue image analysis (part 1) w/ Regan Baird, Visiopharm
Multiplexing mini-series Part 2: How to make sense of multiplex data with phenotyping? (part 2) w/ Regan Baird
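The rule-based side of cell segmentation discussed in this episode ultimately reduces to classical computer-vision primitives. One such primitive, shown here purely for illustration (this is not Visiopharm's actual algorithm), is connected-component labeling, which separates non-touching nuclei in a thresholded binary mask:

```python
def label_components(mask):
    """4-connected component labeling of a binary mask: each group of
    touching foreground pixels gets one label, so non-touching "nuclei"
    come out as separate objects. (Real cell segmentation must also
    split touching cells, which is the hard part.)"""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1
                stack = [(r, c)]          # flood-fill this component
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not labels[y][x]:
                        labels[y][x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return current, labels

# A tiny thresholded "image" with two separate blobs:
mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]
# label_components(mask)[0] -> 2 separate "nuclei"
```

AI-based segmentation replaces the hand-written threshold-and-label rules with a learned pixel classifier, but the labeling step that turns pixels into countable objects survives in both approaches.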
This episode we speak with Dr. Michael Johnson of Visikol. We will discuss 3D imaging in digital pathology and look at its current and future uses. What is the current state of the art in multiplexing, particularly for clinical use? We will find out more about organoids, what they are, and what role they play in research, and finally, what Visikol has in store for the future.
This episode is brought to you by Visiopharm.

Multiplex tissue staining can generate large amounts of data to help identify distinct information about particular cells in tissue. Immuno-oncology is a field where it is common practice to use multiplexing, in particular for cell phenotyping in tissue. Phenotyping is the ability to classify every individual cell in the tissue based on the biomarker panel used. The panels are designed to identify cells of different lineages as well as cell activation states within each lineage, which is of utmost importance for personalized therapeutic approaches in oncology. Although multiplex data can be visualized manually, e.g., by switching different fluorescence channels on and off, its interpretation requires computational assistance. If the multiplex assay contains only a few markers, the rules for detecting potential phenotypes can be designed manually, but as the number of markers increases, the number of potential phenotypes increases exponentially. In order to sort through the cellular phenotypes in higher-plexes, machine learning-based auto-clustering has been implemented. This method is based on the way cells are characterized in flow cytometry and has been adapted to automatically identify phenotypes of cells in tissue images. The adequate visualization and handling of the generated data depend on the software used. In the next episode, we will be talking about the considerations when choosing an image analysis software program for phenotyping. To learn more, visit Visiopharm’s website.

This episode’s resources:
Multiplexing mini-series Part 1: Introduction to multiplex for tissue image analysis (part 1) w/ Regan Baird, Visiopharm
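The manual rule design described above, feasible for a few markers but exploding combinatorially in higher plexes (n markers give up to 2^n positivity patterns), can be sketched as an ordered rule table over marker positivity. The marker names, thresholds, and phenotype labels below are hypothetical, chosen only to show the shape of the approach:

```python
# Hypothetical marker panel and intensity thresholds (arbitrary units).
THRESHOLDS = {"CD3": 10.0, "CD8": 8.0, "CD68": 12.0}

# Ordered rules, most specific first: required positive markers -> label.
RULES = [
    ({"CD3", "CD8"}, "cytotoxic T cell"),
    ({"CD3"}, "T cell"),
    ({"CD68"}, "macrophage"),
]

def positive_markers(cell):
    """Markers whose measured intensity meets its threshold."""
    return {m for m, t in THRESHOLDS.items() if cell.get(m, 0.0) >= t}

def phenotype(cell):
    """Assign the first rule whose required markers are all positive."""
    pos = positive_markers(cell)
    for required, label in RULES:
        if required <= pos:
            return label
    return "other"

# phenotype({"CD3": 15, "CD8": 9}) -> "cytotoxic T cell"
```

With dozens of markers, maintaining such a table by hand becomes impractical, which is exactly why the flow-cytometry-style auto-clustering mentioned above takes over in higher-plex assays.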
This episode is brought to you by Visiopharm.

After experimenting with multidimensional, multimarker, and multicolor single-cell imaging modalities during his postdoc at Beth Israel Deaconess Medical Center in Boston, Regan Baird found looking at 2D images of tissue stained just with hematoxylin and eosin (H&E) a bit simplistic…and then he was tasked with doing tissue image analysis (IA). When relying just on H&E, IA can be a very challenging task. So, to both simplify it and extract more information from the tissue, multiplex staining can be implemented. In this three-part episode miniseries, Regan Baird, Ph.D., scientific sales manager at Visiopharm, introduces us to the concepts of multiplexing and cell phenotyping as well as to image analysis approaches relevant for multiplex data analysis.

Multiplexing in the context of the life sciences refers to taking multiple measurements at the same time on the same specimen. With tissue slides, the easiest method of multiplexing is immunohistochemistry (IHC)-based virtual multiplexing, where consecutive sections of tissue are each stained with a single IHC marker and later each slide is imaged and co-registered to simulate the presence of several IHC markers in the tissue of interest.
More complicated, but more precise, methods allowing for visualizing cellular colocalization of biomarkers include: multicolor bright-field IHC (visualizing up to five biomarkers per tissue but reliably colocalizing a maximum of only two biomarkers per cell); immunofluorescence (IF), potentially with spectral unmixing, which increases the number of biomarkers per tissue section, as well as per cell, to nine; and imaging mass cytometry, where heavy metals are used instead of chromogens or fluorophores, which increases the number of biomarkers up to 60 in a single section of tissue. All these multiplex modalities have their advantages and disadvantages, and the choice of the appropriate method should be guided by the design of the experiment as well as the scientific and/or diagnostic questions we want to address. For example, a currently widely used application of IF multiplexing is phenotyping cells in the tissue. This not only allows for the characterization of single cells but also lets us interrogate and investigate spatial relationships between different cell populations, giving us information about the interactome of different cells and the environment in which they live. To learn more about phenotyping, join us for the next episode of the Multiplexing Miniseries next week.

This episode’s resources:
Multiplexing mini-series Part 2: How to make sense of multiplex data with phenotyping? (part 2) w/ Regan Baird
Visiopharm
Top 20 Pathology podcasts you must follow in 2021
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.11.06.371906v1?rss=1 Authors: McCarthy, M., Anglin, C., Pear, H., Boleman, S., Klaubert, S., Birtwistle, M. R. Abstract: Fluorescent antibodies are a workhorse of biomedical science, but fluorescence multiplexing has been notoriously difficult due to spectral overlap between fluorophores. We recently established proof-of-principle for fluorescence Multiplexing using Spectral Imaging and Combinatorics (MuSIC), which uses combinations of existing fluorophores to create unique spectral signatures for increased multiplexing. However, a method for labeling antibodies with MuSIC probes has not yet been developed. Here, we present a method for labeling antibodies with MuSIC probes. We conjugate a DBCO-Peg5-NHS ester linker to antibodies, a single-stranded DNA docking strand to the linker, and finally, hybridize two MuSIC-compatible, fluorescently-labeled oligos to the docking strand. We validate the labeling protocol with spin-column purification and absorbance measurements, which show a degree of labeling of ~9.66 linker molecules / antibody. We demonstrate the approach using (i) Cy3, (ii) Tex615, and (iii) a Cy3-Tex615 combination as three different MuSIC probes attached to three separate batches of antibodies. We incubated MuSIC probe-labeled antibodies with protein A beads to create single and double positive beads that are analogous to single cells. Spectral flow cytometry experiments demonstrate that each MuSIC probe can be uniquely distinguished, and the fraction of beads in a mixture with different staining patterns is accurately measured. The approach is general and might be more broadly applied to cell type profiling or tissue heterogeneity studies in clinical, biomedical, and drug discovery research. Copy rights belong to original authors. Visit the link for more info
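The multiplexing gain MuSIC targets is combinatorial: n spectrally resolvable fluorophores yield n single-fluorophore probes plus C(n, 2) pairwise combinations, which is exactly the Cy3 / Tex615 / Cy3-Tex615 pattern in the abstract. A quick sketch of the count (fluorophore names beyond those in the paper are placeholders):

```python
from itertools import combinations

def music_probe_count(fluorophores, max_combo=2):
    """Count unique spectral signatures obtainable from singles up to
    max_combo-wise combinations of fluorophores (the paper demonstrates
    singles and one pair; larger combinations are a natural extension)."""
    probes = []
    for k in range(1, max_combo + 1):
        probes.extend(combinations(fluorophores, k))
    return len(probes)

# Three fluorophores used singly give 3 probes; allowing pairs gives 3 + 3 = 6.
# The gain grows quickly: 8 fluorophores give 8 + 28 = 36 pairwise signatures.
```

This super-linear growth in distinguishable probes per added fluorophore is the whole point of the combinatorics in MuSIC.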
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.01.278671v1?rss=1 Authors: Tompkins, K. J., Houtti, M., Litzau, L. A., Aird, E. J., Everett, B. A., Nelson, A. T., Pornschloegl, L., Limon-Swanson, L. K., Evans, R. L., Evans, K., Shi, K., Aihara, H., Gordon, W. R. Abstract: Replication initiator proteins (Reps) from the HUH-endonuclease superfamily process specific single-stranded DNA (ssDNA) sequences to initiate rolling circle/hairpin replication in viruses, such as crop ravaging geminiviruses and human disease causing parvoviruses. In biotechnology contexts, Reps are the basis for HUH-tag bioconjugation and a critical adeno-associated virus genome integration tool. We solved the first co-crystal structures of Reps complexed to ssDNA, revealing a key motif for conferring sequence specificity and anchoring a bent DNA architecture. In combination, we developed a deep sequencing cleavage assay termed HUH-seq to interrogate subtleties in Rep specificity, and demonstrate how differences can be exploited for multiplexed HUH-tagging. Together, our insights allowed us to engineer a Rep chimera to predictably alter sequence specificity. These results have important implications for modulating viral infections, developing Rep-based genomic integration tools, and enabling massively parallel HUH-tag barcoding and bioconjugation applications. Copy rights belong to original authors. Visit the link for more info
We chat with the authors of CoronaHiT, which lets you sequence up to 94 SARS-CoV-2 samples on a single MinION flowcell. This reduces the cost of sequencing 3-fold, with a simpler, faster protocol. Justin O'Grady and David Baker join Andrew Page and Nabil-Fareed Alikhan to chat about how it all works, how it came into being and why it's awesome. Preprint: https://doi.org/10.1101/2020.06.24.162156
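Running 94 samples on one flowcell works because each sample's reads carry a unique barcode that software uses to separate them again afterwards. A rough sketch of that demultiplexing step, with invented barcodes and reads (real pipelines use defined barcode sets and tolerate sequencing errors):

```python
# Hypothetical barcode-based demultiplexing: assign each read to the
# sample whose barcode it starts with, or to an "unassigned" bin.
BARCODES = {"ACGT": "sample_01", "TGCA": "sample_02"}  # invented barcodes

def demultiplex(reads, barcodes, bc_len=4):
    """Group reads by the sample whose barcode prefixes the read."""
    bins = {sample: [] for sample in barcodes.values()}
    bins["unassigned"] = []
    for read in reads:
        sample = barcodes.get(read[:bc_len], "unassigned")
        bins[sample].append(read)
    return bins

reads = ["ACGTAAAA", "TGCACCCC", "GGGGTTTT"]  # invented reads
bins = demultiplex(reads, BARCODES)
```

The per-sample cost saving comes entirely from this: one flowcell run, then a cheap software pass to split the output.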
A technique for transmitting multiple channels of information over a single transmission medium. The FDM, TDM, and CDMA techniques are introduced.
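Of these, time-division multiplexing (TDM) is the easiest to sketch: each channel gets a recurring time slot on the shared medium, and the receiver recovers a channel by reading every n-th slot. A minimal illustration (the channel data is made up, and real TDM also needs framing and synchronization):

```python
# Minimal time-division multiplexing sketch: interleave fixed time slots
# from several channels onto one shared stream, then recover them.

def tdm_multiplex(channels):
    """Interleave equal-length channel streams into one slot sequence."""
    return [sample for frame in zip(*channels) for sample in frame]

def tdm_demultiplex(stream, n_channels):
    """Recover each channel by taking every n-th slot of the shared stream."""
    return [stream[i::n_channels] for i in range(n_channels)]

voice = ["v0", "v1", "v2"]  # illustrative channel data
data  = ["d0", "d1", "d2"]
video = ["x0", "x1", "x2"]

line = tdm_multiplex([voice, data, video])
recovered = tdm_demultiplex(line, 3)
```

FDM instead separates channels by carrier frequency and CDMA by spreading code, but the goal is the same: several logical channels sharing one physical medium.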
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.05.23.110882v1?rss=1 Authors: Robinson, E. M., Wiener, M. Abstract: The perception and measurement of spatial and temporal dimensions have been widely studied. However, whether these two dimensions are processed independently is still being debated. Additionally, whether EEG components are uniquely associated with time or space, or whether they reflect a more general measure of magnitude, remains unknown. While undergoing EEG, subjects traveled a randomly predetermined spatial or temporal interval and were then instructed to reproduce the interval traveled. In the task, the subjects' travel speed varied between the estimation and reproduction phases of each trial so that one dimension could not inform the other. Behaviorally, subject performance was more variable when reproducing time than space but overall just as accurate; notably, behavior was not correlated between tasks. EEG data revealed that during estimation the contingent negative variation (CNV) tracked the probability of the upcoming interval regardless of dimension. However, during reproduction the CNV exclusively oriented to the upcoming temporal interval at the start of reproduction. Further, a dissociation between relatively early frontal beta and late posterior alpha oscillations was observed for time and space reproduction, respectively. Our findings indicate that time and space are neurally separable dimensions, yet are hierarchically organized across task contexts within the CNV signal. Copyright belongs to the original authors. Visit the link for more info
I try and sneak up on Charles Lamanna a third time, but he was ready for it, "fool me once". Recently promoted to CVP, Citizen Application Platform, I wanted to check in with Charles, who was working from home, about some of the things that are going on. We covered a lot of topics, including the post-virus workplace, RPA, API Limits, Multiplexing and Restricted Entities. Enjoy! BTW, don't forget, Mark Smith (@nz365guy) and I do PowerUpLive every Tuesday at 5PM EST, click here to be alerted, and here's a link to the replays! Full transcript follows: Charles Lamanna: Hello, it's Charles Lamanna. Steve Mordue: Charles, Steve Mordue. How's it going? Charles Lamanna: Hey Steve. I guess this is being recorded, huh? Steve Mordue: You bet. This is our third time. Have you got some time? Charles Lamanna: I do, always. I have a lot of time locked in my house right now. Steve Mordue: Yeah. It's going to be interesting for people who are listening to this in the future, we are recording this on March 27th 2020. The country is on lockdown and we're still heading upwards, so we don't know where this thing will go or end or what things will look like, but that's where we are now, and the whole campus has been basically shut down except for essential people. You're all working from home. Charles Lamanna: Yeah, for a little over three weeks now actually. We did the MVP Summit from home, we did the partner advisory council from home. We even did a virtual offsite where for four days, we all joined a Teams meeting for eight hours each day. Steve Mordue: Oh my God. How are you finding it, compared to going in the office and being with the team? Was it a massive loss of productivity for your staff, or is it still okay? Charles Lamanna: I'd say there definitely is a slight loss of productivity. It's not as bad as I thought it would be, but I mean, I never thought I'd miss my office so much. I really miss just ... you get used to it for a few years, you get everything in place. 
Steve Mordue: There's a bunch of businesses ... I look out my apartment window at downtown Tampa, at all these office buildings that are full of law firms and all sorts of people, with a bunch of cubicle farms within them, with people that could actually be doing their jobs from anywhere and could have for years, and now of course are. And I'm wondering how many of these companies that were reluctant to do remote, that felt like "I need to keep eyes on you," will, by the time we get through this, have figured out how to do it remote. I wonder how many of those remote workers will end up coming back to an office. It could be a huge shift. Charles Lamanna: It definitely will. It's interesting. I was in a talk yesterday and we were talking about how, when the original SARS outbreak happened, that actually is what launched eCommerce in APJ. It's around then that JD.com and Alibaba basically took off; mobile ordering took off during that time because people were locked at home, and then the rest is history, right? Those are the second-largest eCommerce properties out there in the world, second only to Amazon. So definitely, I would imagine the way people work and the technology people use will be fundamentally different on the other side of this. Steve Mordue: Well, I'll tell you what, it's almost prescient the way you guys decided to invest deeply in Teams over the past year, before any of this was out there. And now looking back, that's looking like a really brilliant move. Charles Lamanna: Yeah, there's a lot of impact. A lot of people are in trouble, but it's just so exciting to be able to use something like Teams to do remote learning and tele-health, and just video plus chat plus meetings integrated; Teams is really the only one doing that right now, and it's just phenomenal for someone working remote or working virtually like us right now. 
Steve Mordue: You said we did recently go through MVP Summit, which converted into a virtual event at the last minute, and it was not horrible for a virtual event scrambled together for the first time. But I'm also wondering about events in the future, if this may change a lot of events into virtual events even when they don't need to be anymore. But it feels like the technology needs to get one step better on the idea. It wasn't really built for virtual events at scale like that. But it seems like you guys are in a spot to really say, you know what, we could figure that out and make a virtual event application actually built specifically for that purpose, and potentially get rid of a lot of future ... because I tell you, there's no executive out there that's happy about approving travel expenses for his team to go to some in-person event if he could sit at home or sit online. What for? Charles Lamanna: And what's really interesting is, I read Ben Thompson, he has a site called Stratechery and it's one of my favorite things to read, and he talks about, is this the end of large conferences? Because when you move to all digital, you realize you get such bigger reach. I mean, you get 100,000 people, no problem. That's almost impossible to do in person, and for a fraction of the cost, and it can be way more tailored, and you don't have to worry about double booking. That's another example where maybe things start to change fundamentally in the future. Steve Mordue: And I think conferences for years have been more about, or at least equally about, the social aspect: seeing people in person, going out to the bar after the event, having fun going out to dinner, seeing some town, that sort of thing. And that part of course is probably what makes a lot of people want to go to a conference, and I have guys you and I both know that go to a conference and don't go to any sessions. Guys like myself and Mark Smith, we don't go to any sessions. We just go out there. 
We do little things like we had you on, and stuff like that, just for fun, and mainly are out there for the drinking. So we're going to miss it. Charles Lamanna: Yeah. Well I guess, the one thing too is just having a group of 20 ... that's the one thing I've learned: Teams is amazing; with four people you can have a really good conversation, or a big broadcast, but if you want to have a group discussion it's hard. And that's something that works so well because you get all these people together that would never be in the room at the same time in person at these conferences, and you can have some really interesting conversations. Steve Mordue: That would be the thing to figure out, because in person at, like, MVP Summit, you guys get up in front of, I don't know, a hundred of us and we're all raising our hands, taking our turns, asking questions until we run you guys off the stage in fear. But now in this virtual setting, there's no real raise-your-hand. It's just the loudest person, the one that doesn't stop talking, gets to continue until his question gets out of his mouth, and that would be an area where it seems like they could do some improvement. Charles Lamanna: Yeah, and I think [crosstalk 00:06:35] is raise your hand in Teams, so you can press the button to raise your hand. I can't wait for that. Steve Mordue: That would be awesome. So you could mute everybody and they'd have to raise their hand and ... well, there you go. That's already heading the direction it would need to, because that's what you'd need really for some kind of a virtual conference. Charles Lamanna: Yes. That way also I can just never answer you when you raise your hand. No, I'm just kidding. Steve Mordue: Yeah, exactly. "Oh, it's Steve raising his hand again. We'll just ignore him." Charles Lamanna: Yep. 
Steve Mordue: A lot of stuff that you guys announced at MVP Summit, and of course as everybody knows that's mostly NDA for now so we can't talk about a lot of that stuff, but there's a couple of things that we could talk about. One theme I think I heard, which I wouldn't think is NDA, was this idea of: make everything we have work better. When you guys are building like you've been building, at the pace you've been building, it's like somebody threw matches in a box of fireworks with the stuff that's coming out. It takes a while for all of those wires to get connected and everything to be singing like you'd want it. And sometimes it's like, you know what, this is working darn good. Let's get this other thing launched, and then we get a bunch of stuff that's working darn good but not perfect. So it definitely feels like there's a motion now to go back over top of all this awesome stuff you've launched and connect those last few wires, get this stuff really working as good as it could work. Is that a fair statement? Charles Lamanna: That's exactly right. And the mantra we keep repeating internally is "end-to-end", because what you'll see is there'll be components that work well individually, but there'll just be huge seams or gaps when you try to wire them up together, and our whole vision has been that you want to wire these things together. That's why we talk about one Dynamics 365, one Power Platform. So we have this big focus on making sure scenarios that span applications or span parts of the platform actually work well end to end, and it's going in and wiring those things up and spackling over the creases and putting a new coat of paint on it. It's not fundamental and it's not necessarily something that will pop in a demo or in a keynote, but it'll just make a huge difference for our customers. 
And we see it already. We track our net promoter score very closely, like how do the makers and end users rate the product as they leverage it, and we just see it: as we systematically improve these end-to-end experiences, that net promoter score just keeps going up and up and up and up. Steve Mordue: I know we're one Microsoft now, which is a nice term, but in reality, these are lots of groups that are focused on their things. You've got the Office group focusing on their things, Biz Apps focusing on theirs, Azure focusing on theirs, and you've got within your own group a bag of things like VRP and Power Platform whose wiring you're working on, and at least that's in your realm. You can make that happen, but then you've got the Azure AD group go do something out there that messes up something for us, or you talk about a gap, like a gap maybe between something we're doing over here and something that's happened over on the Office side, and those are the kinds of things you don't have direct control over. You've got to try and influence, and almost make a case internally to those teams that, hey, this is good, or get Satya to make a case, get somebody to make the case. Charles Lamanna: Yeah, and I think that is a challenge. As any organization gets bigger, I'd say it's not perfectly well mixed. Kind of like the ocean, right? The ocean is big enough that it's not perfectly well mixed. But I think the fact that it's actually a cultural tenet of Microsoft now to operate in the one-Microsoft interest is useful: listening and being willing to have the dialogue on, is this truly better at the macro level? Is this a global maximum for Microsoft to go do this capability, even if for the things you directly own it's maybe not a maximum for you? And this opens the door to have that dialogue of, hey, we need this feature for, say, Outlook at [inaudible 00:10:38] so that our Outlook mail app can be better and we can get people off the COM add-in. 
That's an example of a really tight partnership between Outlook and us. Charles Lamanna: And systematically, the Outlook team is completely willing and has shipped feature after feature to go make that Dynamics and Power Apps mail app richer and richer. Just the most recent example is to finally bring delegation to the mail app, and that's come over the last three and a half months. So that definitely is a challenge, but it's eminently surmountable and solvable. Steve Mordue: I would imagine there's some degree of quid pro quo, right? I mean, hey, you guys helped me out with this. I know there's nothing in it for you, but it'll help me, and then when I have an opportunity later, I'll help you guys out. So we're all kind of open-arm instead of crossed-arms when [inaudible 00:11:27] approaching these other things. So how big is your list of things you owe other people? Charles Lamanna: For Power Apps, I owe a lot. But what's great is a lot of these things aren't zero-sum, where in order for it to be good for one product or one team at Microsoft, it has to be bad for the other. The reality is, Power Apps inside of Teams, I'll use that as an example. As Power Apps, we're very excited about that because customers are asking for an integrated experience inside Teams. I want it on my left rail in the app bar, or I want it as a tab inside my channel. Those are real customer demands, and on the other side, Teams wants to go support as many line-of-business applications as possible inside Teams. And we all know what's the fastest way to go create a bunch of line-of-business apps? It's not to go write code, it's to go use a low-code solution like Power Apps. So you actually can go help accelerate the platform and the line-of-business awareness in Teams, and you can help Power Apps reach new customers and a broader base just by doing that one feature. 
Charles Lamanna: So it is very much a win-win situation, and adopting that mentality through one Microsoft, that really the Microsoft cloud is what customers want, and customers want to go trust and transact with Microsoft and not individual product teams, is a cultural shift that has really grown under Satya with great success. So I would say, I don't know if a product like Power Apps could have been successful 15 years ago, but now we definitely have the environment where you can have something like Power Apps embedded in SharePoint, embedded in Teams, the platform for Dynamics, and a standalone business, and have that not be dissonant or in conflict. Steve Mordue: It's interesting. I think the companies that have embraced Teams, and it was frankly a slow go to get people to bite because it looked a lot different than what they were used to and how they did business, but now the ones that have really gone into it are maniacal about Teams, and Teams is like their new desktop. They're operating in Teams all day long now, like, I can't imagine how we ever did anything before Teams. So we're still at that inflection point with Teams where I think there's a huge number of customers yet to discover what a lot of customers have about how transformative that can be. And you had to have Power Apps along for that ride. I think that ride is just getting started. Steve Mordue: It's interesting, these times right now. There's an awkwardness about marketing or promoting things that make sense because of a virus. For example, I tactfully tried to write a couple of posts here recently and stood back. I was thinking, does that look opportunistic? But the one was this idea that I'd mentioned earlier: lots of companies sending people home to work from home. 
Well, these companies that have had on-premises systems, and still have them, have been reluctant to move to the cloud; that move to a remote workforce is going to be much more complicated than it would have been for those that had already gone full cloud, people just logging in. They've got all the security they need. Some of these VPN solutions just were never designed, or never invested in enough, to support the entire workforce. What are your thoughts about that? Do you have an opinion on that? Charles Lamanna: Yeah. I think the way we view it is, number one, things have changed right now. That's just the reality. People are in different working environments, people are under different economic pressure. There's a very real frontline response necessary to go and combat COVID-19 out in the field. So things have changed. That's number one. And the second thing that we've adopted is, because things have changed, we need to be flexible. And if you look across what we've done at Microsoft, even just specifically in the area that I work on, we took the April release, or the 2020 wave 1 release. Originally it was going to be a mandatory upgrade in April. Talking to a lot of customers, they said, "We can't get the workforce to test it. Please don't do this change, we can't take it." Charles Lamanna: So we extended the opt-in window for the wave 1 release to May, for an extra month, and we'll keep evaluating stuff like that constantly. But we did that, and that's a big change for us because we really have trumpeted that clockwork. It's always in April, it's going to come up. But we just felt like that was the right thing to do. We've also done a bunch of programs where for six months you can get Power Apps or Dynamics CE free if you're a healthcare, hospital, life science or government organization, because we want to go help. So there are literally dozens and dozens and dozens of state and local governments and hospitals that we're working with right now inside my team. 
And we wanted to make sure we could help them in a way where it was clear we were not trying to profiteer off of the crisis. Steve Mordue: It is that fine line though, because obviously there'll be a lot of these folks that'll take you up on those opportunities, and then when all this stuff passes, at some point you guys are going to reach out and say, "Hey, that thing we were giving you for free for so long, we'd like it back, or have you start paying," and it is a fine line; the super cynics could look at it very cynically, I guess. The other thing that is interesting to me: I was talking about how in this time, revenue is going to be a challenge for businesses. Revenue is going to drop for most businesses that are out there. There'll be certain businesses that will ... in every crisis there's always some businesses that do better than others, but most are going to have a little downturn. And their revenue growth is going to be largely out of their control at the moment. The government could shut down the people that are buying your product, or who knows, and it's not something you can control like you could before. Steve Mordue: So what you can control is your costs. That's really all you can control right now. It's the cost side, and both of those drop to the bottom line the same way, right? Charles Lamanna: Yeah. Steve Mordue: And obviously you could lay off people, as people are doing, but it seems like the time for people to really look into their organization for where money is leaking out. Because historically, to solve a problem like that, maybe with a business application, we're looking at Dynamics 365 or Salesforce or some big application: costs a lot of money, takes a lot of time to get implemented, to plug up a leaky ship that's losing some money. Whereas now with Power Apps, we really have the ability to go, let's identify those leaks. Let's spin up a Power App in a week or two weeks and solve this problem. 
Steve Mordue: We're doing one right now for a Fortune 500 company that discovered [inaudible 00:18:20] $50,000 a month. And in a big company, you can not notice that. I would notice it, but they didn't notice it until someone suddenly noticed it. We're literally going to plug that hole with a Power App at a total development cost of about 15 grand. And it's just amazing when you think about how many of those sorts of things there are, and now's a good time for people to really focus on where money is leaking out of their business, and there are some lower-cost, low-code, quick tools now that could potentially plug those leaks that we didn't even have before. Charles Lamanna: Yeah. And as a company, we actually view Power Apps and Power Automate together as two products that we envision doing quite well even during an economic downturn, for that reason. Because you don't have to hire a very expensive developer to go solve the problem, or even if you go work with a services company to implement it, they can implement it much more quickly than they would if they had to go write code. And we're working with, as I said, a Fortune 100, a very large company I was talking to just this week, and they said, well, before, we were talking about Power Apps all about transformation: how do we go drive revenue forward? And now, for the next six months, we're going to pivot and we're going to be focused on driving efficiencies in our business processes and retiring other IT solutions which overlap and can be replaced with Power Apps. Charles Lamanna: So they're now going to go hunt for, like, this licensing thing they pay one million bucks a year for, this one they pay two million bucks a year for. Can they just spend a little bit of effort, move that to Power Apps, and be able to shut down those licenses once and for all? 
So that's the benefit of the flexibility of the platform, and the ROI is so clean that we think there's going to be a lot of opportunity between Power Apps and Power Automate with the new RPA capabilities. Steve Mordue: We'll talk about RPA in a second, but you did make a point there; it's funny how their original thought was to use it to grow revenue, and because of the situation we're in now, they're looking at another use case which frankly was just as valid before any virus or anything else was out there. It's interesting that it took something like that to have them say, well, what's the other huge, obvious thing we could solve? Charles Lamanna: Exactly. Steve Mordue: So RPA is an interesting one. There was a lot of talk, a lot of excitement about RPA. And I know that you're probably still somewhat limited on what you can talk about, but whatever you can say, what are you thinking about that? Charles Lamanna: RPA, we're going to GA with the April wave. So wave 1, just in a week or so. We announced the licensing details for RPA four weeks ago or so, I think on March 2nd, and what's exciting is the combination of the capabilities of it being a true low-code offering, a typical Power Platform offering, plus the reasonable licensing options that we have, which are generally, I'd say, the most affordable you're going to find out there for an RPA solution. We think we can actually start to democratize enterprise-grade automation: make it possible to really have business users, IT, pro developers, partners, service companies all use the same platform to go automate and drive efficiencies. So that's the exciting bit, because Power Automate and Flow have been around, Microsoft Flow before that had been around for a while, but have really been, I'd say, capped to a degree around personal, team and light departmental automation. 
Charles Lamanna: But now with the RPA functionality, we're starting to see enterprise-wide invoice processing, quarterly earnings preparation, basically resolving accounts receivable, things like that. Very heavy workloads built on top of Power Automate, the same low-code tool that has been there for a few years. So we're very excited about it for that reason. And in a world where you want to go trim costs, there's real opportunity to go drive efficiency using Power Automate over time. Steve Mordue: Yep. Definitely. It wouldn't be a talk with me if I didn't bust your balls about some stuff. Charles Lamanna: Let me hear it. What is it about? Steve Mordue: In one of our last calls, the hot topic at the time was these API limits, and you said this isn't something we want customers to think about; you actually thought of it more like an asterisk on your cable bill. It shouldn't be a factor. Yet it continues to persist in people's minds. The conversation has not gone away. We've got people claiming that they're running into limits and doing stuff like that. What are your thoughts around that now that it's actually out there and we're seeing how it's landed in people's organizations? Charles Lamanna: I do still hear a little bit of noise from customers or partners that are running into it. But it is dramatically less, because it doesn't impact 99% of customers; it wouldn't impact that 99% of customers. So since it's rolled out, we've heard a lot less noise, but there still does exist some noise. And the thing that we could- Steve Mordue: Would you call it air? Would you just call it false noise? Because you guys have the analytics in the background; you know exactly what's happening. You'd know if, once you launched this, suddenly half of your customer base was hitting this wall, and you know that that's not happening. So is it still the feeling that the ones that are squawking are either of that small percentage, or just fearmongers? 
Charles Lamanna: I think there are ... I'd say I'd break down three very valid concerns that we hear. The first is, we don't have enough reporting to make it clear and easy to understand where you stand with the API limits. We have early-stage reporting in the Power Platform admin center, but we don't have enough. So there are a lot of improvements coming for that by wave 2 of this year. So by the end of the next release wave for Dynamics, you'll be able to go in and understand exactly how your API limits are being used and if there's any risk. And that's just going to be exposing telemetry that we ourselves look at today, and we think that will help with a lot of the concern that people are facing. So that's one. Charles Lamanna: The second is, we have people that are using a lot of the Dynamics products. They're using Customer Insights, they're using Power Apps, they're using Customer Engagement, they're using Marketing. And their concern is that all these application workflows, like imagine Customer Insights taking data from CE, or Marketing doing segmentation on CE, are actually generating a lot of API calls. So as they keep adding more and more apps, which we like of course, and we think that's the whole special value prop of Dynamics, they are generating a huge amount of API calls. So this is something we're going back and looking at, to see how we count the application API calls from Microsoft-delivered apps, and also what API inclusions should come with those other licenses. So that is something we're looking at, and we don't have enforcement today so people aren't really feeling the pinch, but people are looking at it and saying, "Hey, I can see that I'm making a lot of API calls because of these other apps." That's the second one. Charles Lamanna: And the third thing is, we have customers who have a web app or some other service which calls into CDS in the background, and that generates a lot of load, and that is causing friction. 
Those are probably the people that we intended these changes to impact, because those are people where maybe they have 10 user licenses but they generate like a billion API calls a day. So that's probably not correct. But we are seeing noise in a few places there. And that last one, I think, is one where we're not going to do anything to simplify, whereas the first two are things we're going to go try to simplify and improve over time. Steve Mordue: A couple of other things before I let you go. One is, multiplexing is a concept that's been around for a very, very long time. Back when we had CALs, back when it was a physical app installed on machines and stuff like that. Now we're in this different world with all these cloud apps and services bumping into each other. But multiplexing is still this big gray box for lots of folks. And even in the Microsoft documentation, it's kind of contradictory in some places. What's the story? We've got Salesforce connectors, we've got SAP connectors, we've got all these other kinds of connectors that almost seem to be in direct conflict with some of this multiplexing. How do you guys figure that out? What is multiplexing going to look like in the future? Charles Lamanna: I would say the spirit of the law when it comes to multiplexing is: if you're doing something to reduce the number of user licenses you'd have to get for users, then you're probably doing multiplexing. And the problem is, converting that to a letter of the law has historically created confusion to a degree, as well as accidentally prevented things that we don't want to prevent, based on how the language is written. And I'll give an example. If I use a connector to, say, Salesforce or SAP, I still have to be licensed for Salesforce or SAP, because you're connecting with your identity to Salesforce or SAP. So we feel like that's totally aligned with the spirit, and those partners feel good with it. 
Charles Lamanna: One of the places where there was some weirdness was, say I have a Power App connecting to my Dynamics CE data, but I'm not using any of the Dynamics CE logic. Is that multiplexing? Technically, four months ago that was multiplexing, the way the licensing guide was written. But that was not the intent and that was not the spirit of the law. So we've gone and changed that, actually, to say if you're licensed for Power Apps and you're writing a Power App to connect to Dynamics data, but not using the Dynamics app logic or app experience, then that's totally fine and not multiplexing. And that was changed, I think, in late January, early February, because some people pointed out, this doesn't make sense. And we said, "You know, you're right. That's not where we want the impact of that to be." So we went and changed it. Charles Lamanna: But at its core, if you're doing something to circumvent a user license, and you'll know you're doing it because it will feel unnatural, because the system's not built to behave that way, that's multiplexing and not allowed. Everything else, the intent is to have it be allowed. Steve Mordue: So if your goal is to game the system, you're multiplexing. Charles Lamanna: Yeah, and you'll know it. If you're like, okay, I'm going to create one system account and people will use a web portal I build in Azure, and the system account will then have to fake authorization talking to CDS, you're in bad territory when you're doing that. Steve Mordue: Yeah. A lot of that comes from customers. Customers are like, "Can't we take a Power App and then have a custom entity that by workflow goes and recreates a record in a restricted entity?" I'm like, "No, what are you talking about?" Anything you're doing to try and go around the fence is probably going to fall into that funny territory. But- Charles Lamanna: Yeah. 
And a challenge we always have is, how do we convert these ideas into a digestible licensing guide? And I think it's almost like writing a law, like legislating, but there are no judges to actually go interpret the law. Steve Mordue: And we also know that when it's written down in a licensing guide, it almost might as well not be said. If we can't get it technically enforced at various levels, we can only point back to the licensing guide. We as partners should be telling customers, "Yep, not allowed to do that." But without technical enforcement, these licensing guides are just something you can beat them over the head with when they misbehave. And speaking of restricted entities, when we last talked, you had mentioned, yep, there may be some more coming. That was a very long time ago and we haven't seen them. Is the thinking still along the lines of that being how we're going to protect some of the first-party IP, or do we maybe have some different thoughts on different ways to protect it in this new world of a common data service, open-source data model, et cetera? Charles Lamanna: We actually do ... we are working on something, I can't quite tip my hand yet, that will better allow you to share data and share schema from the common data model and the common data service in the apps without running into the concept of the restricted entities. So there is something in the works that we're working towards, and I would say at a high level, restricted entities as a concept are largely antithetical to our common data service, common data model and vision. They were just the least bad option to make sure that we can appropriately license Dynamics apps. So we are working feverishly on many proposals to get out of that restricted entity business, but still have a model which more appropriately captures and protects the value of the Dynamics apps without introducing restricted entities. So there, I'd say stay tuned. 
The best minds are definitely working on it, and I've seen a very digestible and good proposal that is running up the chain right now, and that'll get us in a much better place later this year. Steve Mordue: I had that assumption, since you talked about adding some and so much time had gone by. My thinking was, because I never liked the idea of the restricted entities for the reasons you just said, it felt like a quick, down-and-dirty, temporary solution, and I had the assumption that since we hadn't heard any more, you guys were actually coming up with a better idea. So very glad to hear that. I'm sure everybody will be glad to hear that. So I know you've got to get back to work. You're a busy guy. Anything else you want to convey to folks out there right now? Charles Lamanna: The biggest go-do I'd have for folks right now at this point in time would be: go play with Power Automate, learn the new RPA functionality. It's a huge addition to Dynamics CE. It's a great thing for support and customer service workloads. It's a great thing for finance workloads. We have one customer that went from 22 finance ops people down to three just using Power Automate and RPA. Plus, if you use Power Apps, it's a great way to go extend it. So I say go give Power Automate and RPA a try. That is the number one thing to pay attention to, and that's the number one thing we're going to be talking about at the virtual launch event. That would be my call to action. That'd be the one thing I'd say. And the second thing would be, I even wore shorts thinking Steve would maybe video call me today, but it's too bad you can't see it. Steve Mordue: That's very nice. Charles Lamanna: But maybe I take a picture and send it to you about a merry pigmas. 
So that's the current state here: I work from home, but I say- Steve Mordue: We're all letting the hair grow and- Charles Lamanna: Yeah, I had a call with our PR and AR folks, our analyst relations folks, because I had an interview on Wednesday, and they said, "You're going to shave, right? You're going to shave before you get on camera with him." So yeah. But anyway, exciting times. As always, a pleasure. Steve Mordue: Listen, you never have to shave to talk to me. Charles Lamanna: Awesome. Thank you. I appreciate that. Steve Mordue: All right Charles, thanks for the time. Charles Lamanna: Yeah, always good to chat with you, Steve. Have a good weekend. Stay safe.
Today's topic sounds like a mouthful -- Dense Wavelength Division Multiplexing in 5G networks -- but the conversation boils down to making 5G more accessible and affordable for everyone. On this episode of the MarketScale Software and Technology podcast, host Shelby Skrhak sat down with Maury Wood, Business Development Manager of North American Key Accounts for EXFO, to discuss how they help customers deploy and test Wave Division Multiplexing in their respective networks. Before joining EXFO, Wood worked for years in the semiconductor industry, which gave him an understanding of how component-level innovations impact system-level cost and benefits. It turns out, though, that his dual degrees in computer engineering and music gave him the perfect analogy to help us understand how data travels. "The beautiful thing about sound is that our ears are able to hear simultaneously low sounds and high-frequency sounds, which explains why music is so beautiful to us," Wood said. "The same thing applies with light going through the atmosphere to our eyes, or laser light going through glass fiber. "It turns out different frequencies of light don't mix or interfere with each other. So, if you're carrying information on a lower frequency of light, you can pass that with a higher frequency lightwave and be able to recover those at the far end without any interference."
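Wood's music analogy maps directly onto the signal processing behind wavelength-division multiplexing: carriers at different frequencies can share one medium and still be separated cleanly at the receiver. A toy numerical sketch of that idea (the sample rate and the 50 Hz / 200 Hz "notes" are arbitrary illustrative choices, not values from the episode):

```python
import numpy as np

# Two tones share one "medium"; a Fourier transform separates them again.
fs = 1000                            # samples per second
t = np.arange(0, 1, 1 / fs)          # one second of signal
low = np.sin(2 * np.pi * 50 * t)     # 50 Hz "low note"
high = np.sin(2 * np.pi * 200 * t)   # 200 Hz "high note"
combined = low + high                # both carried on the same channel

# At the far end: look at the spectrum and pick out the carriers.
spectrum = np.abs(np.fft.rfft(combined))
freqs = np.fft.rfftfreq(len(combined), 1 / fs)
peaks = freqs[spectrum > 100]        # bins holding significant energy
print(peaks)                         # prints: [ 50. 200.]
```

Because the two frequencies occupy different spectral bins, neither interferes with recovering the other, which is exactly the property DWDM exploits with laser light in glass fiber.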
This podcast from Bio-Rad highlights five important things to consider when you are selecting secondary antibodies for multiplexing experiments.
Welcome to episode three of SonaCast. Team Sona recently attended the Rapid Point of Care Test Development Workshop organised by Millipore Sigma and Kinematic Automation in St Louis, Missouri. In this episode Sona’s chief technology officer Dr Kulbir Singh talks to Michael Mansfield of Millipore Sigma and Hugues Augier de Cremiers of Merck about the growing importance of multiplexing for lateral flow assays. Producer - Darren Evans communications@sonanano.com Interviewer: Dr Kulbir Singh Guests: Michael Mansfield of Millipore Sigma and Hugues Augier de Cremiers of Merck Special thanks to Michael Boyd, owner of Podcast Atlantic Edited by Michael Boyd of Podcast Atlantic Music Far Away by MK2 Logo by Alexandra Evans of Happy Elephant Creative Email us at communications@sonanano.com Visit us at: www.sonanano.com Follow us on Twitter: @SonaNanotech Connect with us on LinkedIn
In this episode I'm joined by one of my favorite productivity writers, Tiago Forte. Tiago is part of a new generation of productivity thinkers who are exploring new ways of working in the digital age. I've found his writing both refreshing and insightful, and when I discovered that he also has a serious interest in meditation & spirituality I knew I'd have to invite him onto Buddhist Geeks. The first part of our dialogue explores Tiago's background and work, and then we get into the relationship between network thinking, productivity paradigms, and different types of meditation. Memorable Quotes "You can't understand a paradigm from within it." - Tiago Forte "What can we borrow from network metaphors, telecommunications, the theory of constraints, mindfulness & meditation, to make the way the world is going into an opportunity instead of a threat?" - Tiago Forte Episode Links Tiago Forte (https://www.fortelabs.co) "The Untethered Soul" by Michael Singer "Design Your Work" by Tiago Forte Building a Second Brain RibbonFarm: experiments in refactored perception The Throughput of Learning by Tiago Forte From Multitasking to Multiplexing by Tiago Forte The Rise of the Full-Stack Freelancer by Tiago Forte "Networkologies" by Christopher Vitale "Deep Work" by Cal Newport Metcalfe's law
How do you send multiple messages across a single channel?
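One classic answer to that question is time-division multiplexing: give each message alternating time slots on the shared channel, then split the slots back apart at the receiver. A toy sketch of the idea (the function names and byte-stream framing are illustrative, not from the episode):

```python
from itertools import chain, zip_longest

def mux(a, b, pad=0):
    """Interleave two streams slot by slot onto one shared channel."""
    slots = zip_longest(a, b, fillvalue=pad)
    return bytes(chain.from_iterable(slots))

def demux(channel):
    """Recover the two original streams from alternating time slots."""
    return channel[0::2], channel[1::2]

line = mux(b"HELLO", b"WORLD")
print(line)          # prints: b'HWEOLRLLOD'
print(demux(line))   # prints: (b'HELLO', b'WORLD')
```

Real systems add framing and synchronization so the receiver knows where slot boundaries fall, but the core trick is exactly this interleaving.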
This week on BSDNow, we have a very special guest joining us to tell us a tale of the early days in BSD history. That plus some new OpenSSH goodness, shell scripting utilities and much more. Stay tuned for your place to B...SD! This episode was brought to you by Headlines Call For Testing: OpenSSH 7.4 (http://marc.info/?l=openssh-unix-dev&m=148167688911316&w=2) Getting ready to head into the holidays for the end of 2016 means some of us will have spare time on our hands. What a perfect time to get some call for testing work done! Damien Miller has issued a public CFT for the upcoming OpenSSH 7.4 release, which considering how much we all rely on SSH I would expect will get some eager volunteers for testing. What are some of the potential breakers? “* This release removes server support for the SSH v.1 protocol. ssh(1): Remove 3des-cbc from the client's default proposal. 64-bit block ciphers are not safe in 2016 and we don't want to wait until attacks like SWEET32 are extended to SSH. As 3des-cbc was the only mandatory cipher in the SSH RFCs, this may cause problems connecting to older devices using the default configuration, but it's highly likely that such devices already need explicit configuration for key exchange and hostkey algorithms anyway. sshd(8): Remove support for pre-authentication compression. Doing compression early in the protocol probably seemed reasonable in the 1990s, but today it's clearly a bad idea in terms of both cryptography (cf. multiple compression oracle attacks in TLS) and attack surface. Pre-auth compression support has been disabled by default for >10 years. Support remains in the client. ssh-agent will refuse to load PKCS#11 modules outside a whitelist of trusted paths by default. The path whitelist may be specified at run-time. sshd(8): When a forced-command appears in both a certificate and an authorized keys/principals command= restriction, sshd will now refuse to accept the certificate unless they are identical. 
The previous (documented) behaviour of having the certificate forced-command override the other could be a bit confusing and error-prone. sshd(8): Remove the UseLogin configuration directive and support for having /bin/login manage login sessions.“ What about new features? 7.4 has some of those to wake you up also: “* ssh(1): Add a proxy multiplexing mode to ssh(1) inspired by the version in PuTTY by Simon Tatham. This allows a multiplexing client to communicate with the master process using a subset of the SSH packet and channels protocol over a Unix-domain socket, with the main process acting as a proxy that translates channel IDs, etc. This allows multiplexing mode to run on systems that lack file-descriptor passing (used by current multiplexing code) and potentially, in conjunction with Unix-domain socket forwarding, with the client and multiplexing master process on different machines. Multiplexing proxy mode may be invoked using "ssh -O proxy ..." sshd(8): Add a sshd_config DisableForwarding option that disables X11, agent, TCP, tunnel and Unix domain socket forwarding, as well as anything else we might implement in the future. Like the 'restrict' authorized_keys flag, this is intended to be a simple and future-proof way of restricting an account. sshd(8), ssh(1): Support the "curve25519-sha256" key exchange method. This is identical to the currently-supported method named "curve25519-sha256@libssh.org". sshd(8): Improve handling of SIGHUP by checking to see if sshd is already daemonised at startup and skipping the call to daemon(3) if it is. This ensures that a SIGHUP restart of sshd(8) will retain the same process-ID as the initial execution. sshd(8) will also now unlink the PidFile prior to SIGHUP restart and re-create it after a successful restart, rather than leaving a stale file in the case of a configuration error. bz#2641 sshd(8): Allow ClientAliveInterval and ClientAliveCountMax directives to appear in sshd_config Match blocks. 
sshd(8): Add %-escapes to AuthorizedPrincipalsCommand to match those supported by AuthorizedKeysCommand (key, key type, fingerprint, etc.) and a few more to provide access to the contents of the certificate being offered. Added regression tests for string matching, address matching and string sanitisation functions. Improved the key exchange fuzzer harness.“ Get those tests done and be sure to send feedback, both positive and negative. *** How My Printer Caused Excessive Syscalls & UDP Traffic (https://zinascii.com/2014/how-my-printer-caused-excessive-syscalls.html) “3,000 syscalls a second, on an idle machine? That doesn't seem right. I just booted this machine. The only processes running are those required to boot the SmartOS Global Zone, which is minimal.” This is a story from 2014, about debugging a machine that was being slowed down by excessive syscalls and UDP traffic. It is also an excellent walkthrough of the basics of DTrace. “Well, at least I have DTrace. I can use this one-liner to figure out what syscalls are being made across the entire system.” dtrace -n 'syscall:::entry { @[probefunc,probename] = count(); }' “Wow! That is a lot of lwp_sigmask calls. Now that I know what is being called, it's time to find out who is doing the calling. I'll use another one-liner to show me the most common user stacks invoking lwp_sigmask.” dtrace -n 'syscall::lwp_sigmask:entry { @[ustack()] = count(); }' “Okay, so this mdnsd code is causing all the trouble. What is the distribution of syscalls for the mdnsd program?” dtrace -n 'syscall:::entry /execname == "mdnsd"/ { @[probefunc] = count(); } tick-1s { exit(0); }' “Lots of signal masking and polling. What the hell! Why is it doing this? What is mdnsd anyways? Is there a man page? Googling for mdns reveals that it is used for resolving host names in small networks, like my home network. It uses UDP, and requires zero configuration. Nothing obvious to explain why it's flipping out. I feel helpless. 
I turn to the only thing I can trust, the code.” “Woah boy, this is some messy looking code. This would not pass illumos cstyle checks. Turns out this is code from Darwin—the kernel of OSX.” “Hmmm…an idea pops into my computer animal brain. I wonder…I wonder if my MacBook is also experiencing abnormal syscall rates? Nooo, that can't be it. Why would both my SmartOS server and MacBook both have the same problem? There is no good technical reason to link these two. But, then again, I'm dealing with computers here, and I've seen a lot of strange things over the years—I switch to my laptop.” sudo dtrace -n 'syscall::: { @[execname] = count(); } tick-1s { exit(0); }' Same thing, except mdnsd is called discoveryd on OS X. “I ask my friend Steve Vinoski to run the same DTrace one-liner on his OSX machines. He has both Yosemite and the older Mountain Lion. But, to my dismay, neither of his machines are exhibiting high syscall rates. My search continues.” “Not sure what to do next, I open the OSX Activity Monitor. In desperation I click on the Network tab.” “HOLE—E—SHIT! Two-Hundred-and-Seventy Million packets received by discoveryd. Obviously, I need to stop looking at code and start looking at my network. I hop back onto my SmartOS machine and check network interface statistics.” “Whatever is causing all this, it is sending about 200 packets a second. At this point, the only thing left to do is actually inspect some of these incoming packets. I run snoop(1M) to collect events on the e1000g0 interface, stopping at about 600 events. Then I view the first 15.” “A constant stream of mDNS packets arriving from IP 10.0.1.8. I know that this IP is not any of my computers. The only devices left are my iPhone, AppleTV, and Canon printer. Wait a minute! The printer! Two days earlier I heard some beeping noises…” “I own a Canon PIXMA MG6120 printer. It has a touch interface with a small LCD at the top, used to set various options. 
Since it sits next to my desk I sometimes lay things on top of it like a book or maybe a plate after I'm done eating. If I lay things in the wrong place it will activate the touch interface and cause repeated pressing. Each press makes a beeping noise. If the object lays there long enough the printer locks up and I have to reboot it. Just such events occurred two days earlier.” “I fire up dladm again to monitor incoming packets in realtime. Then I turn to the printer. I move all the crap off of it: two books, an empty plate, and the title for my Suzuki SV650 that I've been meaning to sell for the last year. I try to use the touch screen on top of the printer. It's locked up, as expected. I cut power to the printer and whip my head back to my terminal.” No more packet storm. “Giddy, I run DTrace again to count syscalls.” “I'm not sure whether to laugh or cry. I laugh, because, LOL computers. There's some new dumb shit you deal with everyday, better to roll with the punches and laugh. You live longer that way. At least I got to flex my DTrace muscles a bit. In fact, I felt a bit like Brendan Gregg when he was debugging why OSX was dropping keystrokes.” “I didn't bother to root cause why my printer turned into a UDP machine gun. I don't intend to either. I have better things to do, and if rebooting solves the problem then I'm happy. Besides, I had to get back to what I was trying to do six hours before I started debugging this damn thing.” There you go. The Internet of Terror has already been on your LAN for years. Making Getaddrinfo Concurrent in Python on Mac OS and BSD (https://emptysqua.re/blog/getaddrinfo-cpython-mac-and-bsd/) We have a very fun blog post today to pass along, originally authored by A. Jesse Jiryu Davis. Specifically, the tale of one man's quest to make DNS resolution concurrent in Python on Mac OS and BSD. 
To give you a small taste of this tale, let us pass along just the introduction “Tell us about the time you made DNS resolution concurrent in Python on Mac and BSD. No, no, you do not want to hear that story, my friends. It is nothing but old lore and #ifdefs. But you made Python more scalable. The saga of Steve Jobs was sung to you by a mysterious wizard with a fanciful nickname! Tell us! Gather round, then. I will tell you how I unearthed a lost secret, unbound Python from old shackles, and banished an ancient and horrible Mutex Troll. Let us begin at the beginning.“ Is your interest piqued? It should be. I'm not sure we could do this blog post justice trying to read it aloud here, but definetly recommend if you want to see how he managed to get this bit of code working cross platform. (And it's highly entertaining as well) “A long time ago, in the 1980s, a coven of Berkeley sorcerers crafted an operating system. They named it after themselves: the Berkeley Software Distribution, or BSD. For generations they nurtured it, growing it and adding features. One night, they conjured a powerful function that could resolve hostnames to IPv4 or IPv6 addresses. It was called getaddrinfo. The function was mighty, but in years to come it would grow dangerous, for the sorcerers had not made getaddrinfo thread-safe.” “As ages passed, BSD spawned many offspring. There were FreeBSD, OpenBSD, NetBSD, and in time, Mac OS X. Each made its copy of getaddrinfo thread safe, at different times and different ways. Some operating systems retained scribes who recorded these events in the annals. Some did not.” The story continues as our hero battles the Mutex Troll and quests for ancient knowledge “Apple engineers are not like you and me — they are a shy and secretive folk. They publish only what code they must from Darwin. Their comings and goings are recorded in no bug tracker, their works in no changelog. 
To learn their secrets, one must delve deep.” “There is a tiny coven of NYC BSD users who meet at the tavern called Stone Creek, near my dwelling. They are aged and fierce, but I made the Sign of the Trident and supplicated them humbly for advice, and they were kindly to me.” Spoiler: “Without a word, the mercenary troll shouldered its axe and trudged off in search of other patrons on other platforms. Never again would it hold hostage the worthy smiths forging Python code on BSD.” *** Using release(7) to create FreeBSD images for OpenStack (https://diegocasati.com/2016/12/13/using-release7-to-create-freebsd-images-for-openstack-yes-you-can-do-it/) Following a recent episode where we covered a walkthrough on how to create FreeBSD guest OpenStack images, we wondered if it would be possible to integrate this process into the FreeBSD release(7) process, so the images could be generated consistently and automatically. Being the awesome audience that you are, one of you responded by doing exactly that. “During a recent BSDNow podcast, Allan and Kris mentioned that it would be nice to have a tutorial on how to create a FreeBSD image for OpenStack using the official release(7) tools. With that, it came to me that: #1 I do have access to an OpenStack environment and #2 I am interested in having FreeBSD as a guest image in my environment. Looks like I was up for the challenge.” “Previously, I've had success running FreeBSD 11.0-RELEASE on OpenStack but more could/should be done. For instance, as suggested by Allan, wouldn't it be nice to deploy the latest code from FreeBSD? Running -STABLE or even -CURRENT? Yes, it would. Also, wouldn't it be nice to customize these images for a specific need? I'd say ‘Yes' for that as well.” “After some research I found that the current openstack.conf file, located at /usr/src/release/tools/, could use some extra tweaks to get where I wanted. I've created and attached that to a bugzilla on the same topic. 
You can read about that here (https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213396).” Steps: Fetch the FreeBSD source code and extract it under /usr/src. Once the code is in place, follow the regular build(7) process and perform a make buildworld buildkernel. Change into the release directory (/usr/src/release) and perform a make cloudware-release WITH_CLOUDWARE=yes CLOUDWARE=OPENSTACK VMIMAGE=2G. “That's it! This will generate a qcow2 image 1.4G in size and a raw image of 2G. The entire process uses the release(7) toolchain to generate the image and should work with newer versions of FreeBSD.” + The patch has already been committed to FreeBSD (https://svnweb.freebsd.org/base?view=revision&revision=310047) Interview - Rod Grimes - rgrimes@freebsd.org (mailto:rgrimes@freebsd.org) Want to help fund the development of GPU Passthru? Visit bhyve.org (http://bhyve.org/) *** News Roundup Configuring the FreeBSD automounter (http://blog.khubla.com/freebsd/configuring-the-freebsd-automounter) Ever had to configure the FreeBSD auto-mounting daemon? Today we have a blog post that walks us through a few of the configuration knobs you have at your disposal. First up, Tom shows us his /etc/fstab file, and the various UFS partitions he has set up with the ‘noauto' flag so they are not mounted at system boot. His amd.conf file is pretty basic, with just options enabled to restart mounts and unmount on exit. Where most users will most likely want to pay attention is in the crafting of an amd.map file. Within this file, we have the various command-foo which performs mounts and unmounts of targeted disks / file-systems on demand. Pay special attention to all the special chars, since those all matter and a stray or missing ; could be a source of failure. Lastly, a few knobs in rc.conf will enable the various services, and a reboot should confirm the functionality. 
*** l2k16 hackathon report: LibreSSL manuals now in mdoc(7) (http://undeadly.org/cgi?action=article&sid=20161114174451) Hackathon report by Ingo Schwarze “Back in the spring two years ago, Kristaps Dzonsons started the pod2mdoc(1) conversion utility, and less than a month later, the LibreSSL project began. During the general summer hackathon in the same year, g2k14, Anthony Bentley started using pod2mdoc(1) for converting LibreSSL manuals to mdoc(7).” “Back then, doing so still was a pain, because pod2mdoc(1) was still full of bugs and had gaping holes in functionality. For example, Anthony was forced to basically translate the SYNOPSIS sections by hand, and to fix up .Fn and .Xr in the body by hand as well. All the same, he speedily finished all of libssl, and in the autumn of the same year, he mustered the courage to commit his work.” “Near the end of the following winter, i improved the pod2mdoc(1) tool to actually become convenient in practice and started work on libcrypto, converting about 50 out of the about 190 manuals. Max Fillinger also helped a bit, converting a handful of pages, but i fear i tarried too much checking and committing his work, so he quickly gave up on the task. After that, almost nothing happened for a full year.” “Now i was finally fed up with the messy situation and decided to put an end to it. So i went to Toulouse and finished the conversion of the remaining 130 manual pages in libcrypto, such that you can now view the documentation of all functions” Interactive Terminal Utility: smenu (https://github.com/p-gen/smenu) Ok, I've made no secret of my love for shell scripting. Well today we have a new (somewhat new to us) tool to bring your way. Have you ever needed to deal with large lists of data, perhaps as the result of a long specially crafted pipe? What if you need to select a specific value from a range and then continue processing? Enter ‘smenu' which can help make your scripting life easier. 
“smenu is a selection filter just like sed is an editing filter. This simple tool reads words from the standard input, presents them in a cool interactive window after the current line on the terminal and writes the selected word, if any, on the standard output. After having unsuccessfully searched the NET for what I wanted, I decided to try to write my own. I have tried hard to make its usage as simple as possible. It should work, even when using an old vt100 terminal and is UTF-8 aware.“ What this means is that in your interactive scripts, you can much more easily present the user with a cursor-driven menu to select from a range of possible choices. (Without needing to craft a bunch of dialog flags) Take a look, and hopefully you'll be able to find creative uses for your shell scripts in the future. *** Ubuntu still isn't free software (http://mjg59.dreamwidth.org/45939.html) “Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries. This does not affect your rights under any open source licence applicable to any of the components of Ubuntu. If you need us to approve, certify or provide modified versions for redistribution you will require a licence agreement from Canonical, for which you may be required to pay. For further information, please contact us” “Mark Shuttleworth just blogged (http://insights.ubuntu.com/2016/12/01/taking-a-stand-against-unstable-risky-unofficial-ubuntu-images/) about their stance against unofficial Ubuntu images. The assertion is that a cloud hoster is providing unofficial and modified Ubuntu images, and that these images are meaningfully different from upstream Ubuntu in terms of their functionality and security. 
Users are attempting to make use of these images, are finding that they don't work properly and are assuming that Ubuntu is a shoddy product. This is an entirely legitimate concern, and if Canonical are acting to reduce user confusion then they should be commended for that.” “The appropriate means to handle this kind of issue is trademark law. If someone claims that something is Ubuntu when it isn't, that's probably an infringement of the trademark and it's entirely reasonable for the trademark owner to take action to protect the value associated with their trademark. But Canonical's IP policy goes much further than that - it can be interpreted as meaning[1] that you can't distribute works based on Ubuntu without paying Canonical for the privilege, even if you call it something other than Ubuntu. [1]: And by "interpreted as meaning" I mean that's what it says and Canonical refuse to say otherwise” “If you ask a copyright holder if you can give a copy of their work to someone else (assuming it doesn't infringe trademark law), and they say no or insist you need an additional contract, it's not free software. If they insist that you recompile source code before you can give copies to someone else, it's not free software. 
Asking that you remove trademarks that would otherwise infringe trademark law is fine, but if you can't use their trademarks in non-infringing ways, that's still not free software.” “Canonical's IP policy continues to impose restrictions on all of these things, and therefore Ubuntu is not free software.” Beastie Bits OPNsense 16.7.10 released (https://opnsense.org/opnsense-16-7-10-released/) OpenBSD Foundation Welcomes First Iridium Donor: Smartisan (http://undeadly.org/cgi?action=article&sid=20161123193708&mode=expanded&count=8) Jan Koum donates $500,000 to FreeBSD (https://www.freebsdfoundation.org/blog/foundation-announces-new-uranium-donor/) The Soviet Russia, BSD makes you (https://en.wikipedia.org/wiki/DEMOS) Feedback/Questions Jason - Value (http://pastebin.com/gRN4Lzy8) Hamza - Shell Scripting (http://pastebin.com/GZYjRmSR) Blog link (http://aikchar.me/blog/unix-shell-programming-lessons-learned.html) Dave - Migrating to FreeBSD (http://pastebin.com/hEBu3Drp) Dan - Which BSD? (http://pastebin.com/1HpKqCSt) Zach - AMD Video (http://pastebin.com/4Aj5ebns) ***
Shownotes
SSH client escape sequences: https://lonesysadmin.net/2011/11/08/ssh-escape-sequences-aka-kill-dead-ssh-sessions/
Connection multiplexing: http://man.openbsd.org/ssh_config https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Multiplexing#Setting_Up_Multiplexing http://blog.scottlowe.org/2015/12/11/using-ssh-multiplexing/
authorized_keys: http://man.openbsd.org/OpenBSD-current/man8/sshd.8 http://gitolite.com/gitolite/glssh.html#how-does-gitolite-use-all-this-ssh-magic
SSH tunnels: http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html http://unix.stackexchange.com/questions/46235/how-does-reverse-ssh-tunneling-work
sshuttle: http://sshuttle.readthedocs.io/en/stable/ https://github.com/sshuttle/sshuttle.git
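The connection multiplexing links above all come down to a few ssh_config directives. A minimal sketch (the host name and socket path are placeholders, not from the shownotes):

```
# ~/.ssh/config
# The first connection becomes the master; later ones reuse its TCP session.
Host example.org
    ControlMaster auto
    # One control socket per remote user/host/port combination
    ControlPath ~/.ssh/sockets/%r@%h:%p
    # Keep the idle master alive for 10 minutes after the last session exits
    ControlPersist 10m
```

With this in place, subsequent ssh, scp, or git-over-ssh connections to the same host ride over the already-established connection, skipping the TCP and key exchange handshakes.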
When we build Arduino projects, we can run into cases where we need more digital outputs than our Arduino board provides, and we face the choice of grabbing another board with more outputs or somehow expanding the ones we have. In this episode we look at a technique for increasing the number of digital outputs on an Arduino, and at a chip that serves this purpose. But before continuing, we have two reasons to celebrate at programarfacil. The first is that our first course, "Aprende a programar con Arduino" ("Learn to program with Arduino"), is now available to everyone. Visit our campus and take a look; if you have any questions you can use the channels we always keep open for you: the contact form, email at info@programarfacil.com, Twitter (@programarfacilc) or Facebook. The course is aimed at people who want to get started in programming and in Arduino. We will learn to program with visual languages, whose learning curve is practically flat, and since our motto is to learn steadily while having fun, we have prepared two games (tic-tac-toe and Twister) so you can enjoy yourself while you learn. For the bravest, there will be challenges at the end of each module. The second reason to celebrate is that in a few weeks we complete our first year publishing this podcast, and we want to celebrate it with all of you. We would love for you to send us messages with your impressions of this first year of the show, messages you want to share with us and with the rest of the listeners. You can do so in two ways: on social media with the hashtag #AniversarioPF, or by email at info@programarfacil.com. You can send text messages and also audio messages. If you send audio, don't forget to introduce yourself in the recording. 
Thank you all for being with us this year; in a few weeks we will celebrate it together.

Now on to the topic of this episode: increasing the digital outputs of an Arduino. Often we have pins to spare, but it is still worth optimizing, since it is always good to leave pins free for the future so we don't have to rethink the circuit design later. Other times we simply don't have enough pins for our project: the Arduino UNO has 14 digital I/O pins, while the Arduino MEGA has 54. Running out of pins alone is no reason to buy an Arduino MEGA, which is much more powerful than the UNO; the reason should carry more weight than that. Another thing to keep in mind is the power consumption of our circuits, but we plan to dedicate a whole episode to that. All the board specifications are available in the second PDF we send to subscribers of our mailing list. If you're not subscribed yet, here's one more reason to do it: you'll receive several PDFs packed with information about Arduino.

The Charlieplexing technique

Shift registers

With these chips we can turn 3 digital outputs into 8, and we can control not just LEDs but anything we would connect to a digital output. In particular we're talking about the 74HC595, which costs less than 1 € and comes in almost any Arduino starter kit. These chips can be daisy-chained to keep increasing the number of digital outputs, always driven from the same 3 digital outputs of the Arduino board.
For example, if we wanted an Arduino UNO with its 14 digital outputs to have the 54 outputs of an Arduino MEGA, we would need 54 / 8 = 6.75 ≈ 7 chips.

The three Arduino outputs go to three inputs of the chip, each with its own function: one carries the clock signal, another the data we want to send, and the third acts as a latch trigger, transferring the data to the chip's 8 outputs.

To make these chips easier to use, Arduino has a built-in function, shiftOut, which performs synchronous communication with the chip. Through it we drive the clock signal and send the data, which ultimately determines which of the chip's outputs end up active or not (HIGH or LOW). For those who want to dig deeper into this chip and this function, there is an article explaining it in detail.

Other chips perform similar functions. The 74F675A is very similar to the previous one but provides 16 digital outputs, although the price difference is also larger: it is cheaper to buy two 74HC595s than one of these. Another chip does the "opposite" job, increasing the number of digital inputs: the 74HC165, but we will dedicate another episode or article to it.

The most typical usage example is the Knight Rider (KITT) scanner lights, but with 8 LEDs and many variations: lighting the LEDs in even or odd positions, only the first n LEDs... whatever your imagination wants.

Resource of the day: Instructables

A project created by people from the MIT Media Lab, some of whom were founders of Squid Labs. On this website you will find thousands of projects aimed at the maker world and DIY (Do It Yourself). People can freely share their projects and their work, creating a community of great people and opportunities.
It will probably serve as a source of inspiration that can help and guide you toward your goals. Many thanks to everyone for the comments and ratings you leave us on iVoox, iTunes, and Spreaker; they give us a lot of encouragement to keep this project going.
This review highlights recent advances in multiplexed bioanalyses benefitting from microfluidic integration.
Everyone wants to get online, and despite the rise of mobile networks, landline networks are still the fastest and most reliable way to access the Internet. Yet nobody seems truly satisfied: complicated tariffs, fake "flat rates", and murky coverage realities in city and countryside make access to the "information superhighway" tricky to impossible. There are of course reasons for this, but they are little known. In conversation with Tim Pritlove, Clemens Schrimpe gives an insight into the history of network provision, today's technology, the reasons why DSL connections often fail to deliver what they could, and what new problems regarding quality of service and net neutrality can be expected in the future. Topics: dial-up networking in the 80s; X.25; telephone tariffs in Berlin; tariffs by moon phase; Dortmund and Karlsruhe as origins of the German Internet; the ISP boom of Web 1.0; expansion guarantees for the landline network; Internet provision in Iceland; the introduction of ISDN after reunification; fiber optics as a wrong turn; the rise of DSL technology; the structure of a DSL connection; the ATM infrastructure; transfer hierarchies through multiplexing; sense and nonsense of regional tariffs; the advance of Gigabit Ethernet; why small towns have no fast Internet and why Telekom's competitors usually don't stand a chance; the decline of the ISDN network; upstream vs. downstream; DSL profiles; symmetric DSL; fixed bandwidth calculation vs. the technical feasibility of faster DSL connections; subletting of the DSL infrastructure; DSL training; the extent of the copper-line infrastructure; fiber in the ground, in subway tunnels, rivers, and canals; regulation and deregulation of the network market; banned provider cooperation; the information superhighway; the economic benefit of high bandwidths; Internet over cable TV networks; LTE as a supplement to the landline network; net neutrality and quality of service; forced proxies and interference in the data stream; paid preferential treatment of individual data services or providers.
Multiplexing, port numbers, available transport-layer services.
MIS-635: Computers and Society - Audio Only - mis635_m6b -