Join Col. Anthony Krockel on an exhilarating dive into the world of military aviation training at Pensacola's Whiting Field. Get an insider's glimpse into how Navy, Marine Corps, and Coast Guard aviators are forged in the flames of rigorous training and innovative simulation techniques. Tune in to learn how you can earn the prestigious Wings of Gold!

WHAT YOU'LL LEARN FROM THIS EPISODE
- The prerequisites needed to begin military aviation training and the path to earning Wings of Gold
- A comprehensive overview of the military aviation training phases
- The significance of advanced simulators and contractor-operated pilot training in enhancing the quality and efficiency of pilot preparation
- Insights into the dedication and perseverance required to succeed in the challenging environment of military aviation training
- Innovative training methods aimed at modernizing the preparation of student aviators

RESOURCES/LINKS MENTIONED
- Training Air Wing Five
- ICARUS Devices

ABOUT COL. ANTHONY KROCKEL
Col. Krockel graduated from UCLA with a BA in International Relations in 1998 and completed flight training in the CH-46E. He deployed several times in support of Operation Iraqi Freedom and, later, with the 26th Marine Expeditionary Unit in the Middle East. In 2006, he became the Joint Mission Planning System-Expeditionary Project Manager at Naval Air Systems Command and later earned an MS in Project Management. He transitioned to the MV-22, serving in various roles, including Aircraft Maintenance Officer and Executive Officer, and eventually assumed command of VMM-365, participating in multiple deployments, including humanitarian missions to Haiti and Puerto Rico. He worked as a program analyst at the Pentagon, graduated with distinction from the Inter-American Defense College, and later became the Director for Training, Exercises and Readiness at III Marine Expeditionary Force in Japan before taking command of Training Air Wing FIVE in 2023.

CONNECT WITH COL. ANTHONY KROCKEL
LinkedIn: Anthony Krockel

CONNECT WITH US
Are you ready to take your preparation to the next level? Don't wait until it's too late. Use the promo code "R4P" and save 10% on all our services. Check us out at www.spitfireelite.com! If you want to recommend someone to guest on the show, email Nik at podcast@spitfireelite.com, and if you need a professional pilot resume, go to www.spitfireelite.com/podcast/ for FREE templates!

SPONSORS
Are you a pilot just coming out of the military and looking for the perfect second home for your family? Look no further! Reach out to Marty and his team by visiting www.tridenthomeloans.com to get the best VA loans available anywhere in the US.

If you're a professional pilot looking for a great financial planning partner for your retirement, taxes, and investments, go to www.tpope.ceterainvestors.com/contact or call 704-717-9300 ext. 120 to schedule a consultation appointment with Timothy P. Pope, CFP®.

Be ready for takeoff anytime with 3D-stretch, stain-repellent, and wrinkle-free aviation uniforms by Flight Uniforms. Just go to www.flightuniform.com and use the code SPITFIREPOD20 to get a special 20% discount on your first order.
Welcome to The Hangar Z Podcast, brought to you by Vertical Helicasts. In this two-part series, we are honored to sit down with Bobby Swartz to discuss his remarkable aviation career. Swartz's military aviation story began with Naval Aviation flight training from 2009 to 2011. In 2012, he completed the MV-22B Initial Pilot Qualification and underwent rigorous flight training, laying the foundation for a career filled with notable achievements. Swartz's career trajectory led him through various impactful roles within the Marine Corps, showcasing expertise and flight leadership at every step. He initially served as a Ground Safety Officer in VMM-561 at Miramar and VMM-265 in Okinawa, Japan, while continuing to learn the Osprey. His dedication to safety and excellence has been a consistent thread throughout his career. The majority of Swartz's career was spent at VMM-161 at Marine Corps Air Station Miramar in San Diego, where he held key positions such as Director of Safety & Standardization and Aviation Safety Officer while attaining Night Systems Instructor and Flight Lead designations in the Osprey. Swartz's dedication and excellence led him to Washington, D.C., where he was assigned to the prestigious Marine Helicopter Squadron One (HMX-1), which is in charge of transporting the U.S. President, Presidential staff, and dignitaries worldwide. Since leaving active duty in 2021, Swartz has earned the title of first officer at a major U.S. airline flying the 737 and continues to serve the United States as a Marine Corps reservist flying the Osprey out of Miramar. Thank you to our sponsors, Precision Aviation Group, Metro Aviation, and Astronautics Corporation of America.
Welcome back to The Hangar Z Podcast, brought to you by Vertical Helicasts, for part two of our conversation with Bobby Swartz about his remarkable aviation career, from Naval Aviation flight training and the MV-22B through HMX-1 to his current roles as a 737 first officer and Marine Corps reservist flying the Osprey out of Miramar. Thank you to our sponsors Bell, Dallas Avionics, and SHOTOVER.
Virtual Machine Manager (VMM) is a popular Application Centric Infrastructure (ACI) feature that integrates network fabric operations with a virtual machine domain, creating a single experience for configuring, managing, and troubleshooting workloads that span the physical and virtual realms. Leveraging VMM, enterprises can take the first step in unifying their network and compute teams to expedite time-to-value for common network changes, prevent human error, and gain unprecedented visibility into the virtual domain, enabling use cases such as classification, policy redirect, and day-2 workflow automation. In addition, adding to the ACI integration family, we are pleased to announce a VMM integration with Nutanix, enabled by the new ACI 6.0.3F release.

Resources:
- https://www.cisco.com/site/us/en/products/networking/cloud-networking/application-centric-infrastructure/index.html
- https://www.cisco.com/go/nexusascode
- https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-741487.html
- https://www.cisco.com/c/m/en_us/solutions/data-center-virtualization/application-centric-infrastructure/case-study-infographic.html?oid=ifddnc028959

Cisco Champion Application: smarturl.it/CiscoChampion2024Application

Cisco Guest: Jim Mathers, Engineering Product Manager, Cisco

Cisco Champion Hosts:
- Michael Witte (https://www.linkedin.com/in/themikewitte/), Principal Solutions Architect, WWT
- Alexander Deca (https://www.linkedin.com/in/alexanderdeca/), Principal Network Engineer, NTT
- Liam Keegan (https://www.linkedin.com/in/liamjkeegan/), Solutions Architect, 24/7 Networks

Moderator: Amilee, Customer Voices and Cisco Champion Program
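One concrete way to see what a VMM integration exposes is to read the fabric's VMM domain objects over the APIC REST API (managed-object class vmmDomP). The sketch below is illustrative, not from the episode: the APIC hostname and domain name are hypothetical, and only the class-query URL scheme and the "imdata" response envelope follow the documented APIC REST conventions.

```python
import json

def vmm_class_url(apic_base: str, mo_class: str) -> str:
    """Build a class-level query URL in the APIC REST style."""
    return f"{apic_base}/api/node/class/{mo_class}.json"

def domain_names(response_body: str) -> list[str]:
    """Pull VMM domain names out of an APIC-style 'imdata' envelope."""
    payload = json.loads(response_body)
    return [entry["vmmDomP"]["attributes"]["name"]
            for entry in payload.get("imdata", [])]

if __name__ == "__main__":
    # Hypothetical response for a fabric with one Nutanix VMM domain.
    sample = json.dumps({
        "totalCount": "1",
        "imdata": [{"vmmDomP": {"attributes": {"name": "nutanix-dom1"}}}],
    })
    print(vmm_class_url("https://apic.example.com", "vmmDomP"))
    print(domain_names(sample))
```

In practice the GET would be issued inside an authenticated APIC session; the parsing above only shows the shape of what comes back.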
We are joined on stage at VMM by some of the elites preparing to run. We get to speak to Ha Hau, Sange Sherpa, Jeff Campbell, Trùn Nguyễn, and David Lloyd. Follow them live this weekend at https://live.dottrack.asia/mountainmarathon23/
Leostream plays in the VDI space without being a VDI company. They are all about unlocking the complexity of hybrid managed work. In this episode I talk with Karen Gondoly, CEO of Leostream. Leostream is a driver in the evolving hybrid cloud space and a leader in the management of end-user resources hosted in the data center and cloud. They are the 'Ultimate Connection Broker' and evolved from the early days of virtualization, doing P2V and VMM management, to now being 'The One Broker to Rule Them All'! Karen and I talk about how their flagship Leostream platform provides a comprehensive and scalable solution for organizations to deliver and manage desktops, remote sessions, and applications hosted in a public cloud or a private data center. Leostream was founded in 2002 and is based in Waltham, Massachusetts.
☑️ Support the channel by buying a coffee - https://ko-fi.com/gtwgt
☑️ Technology and Technology Partners Mentioned: VMware, Microsoft, Citrix, AWS, Azure, Google, OpenStack, RDP, VDI, HTML5.
☑️ Raw Talking Points:
- History and founding
- Karen's background
- History of VDI and why it's got a bad reputation
- Playing in the space without being VDI
- General state of VDI
- How you can compete with VMware/Citrix
- VMM to broker
- Early years in virtualization
- Move to public clouds
- OpenStack/VMware/public clouds
- Scaling and automation/IaC
- Provisioning of workloads
- Verticals
- PoC for enterprise at scale
- Partners and resellers
- The platform's 4 components: Connection Broker, Agent, Connect, Gateway
- Deployment options
- Marketplace
- Emmy
- Hybrid cloud approach
☑️ Web: https://leostream.com
☑️ Downloads and Resources: https://leostream.com/resources-2/downloads
☑️ Crunchbase Profile: https://www.crunchbase.com/organization/leostream
☑️ Interested in being on #GTwGT? Contact via Twitter @GTwGTPodcast or go to https://www.gtwgt.com
☑️ Subscribe to YouTube: https://www.youtube.com/@GTwGTPodcast... 
Web - https://gtwgt.com Twitter - https://twitter.com/GTwGTPodcast Spotify - https://open.spotify.com/show/5Y1Fgl4... Apple Podcasts - https://podcasts.apple.com/us/podcast... ☑️ Music: https://www.bensound.co
Sintana Energy (TSX-V:SNN) CEO Robert Bose takes Proactive's Stephen Gunnion through the petroleum and natural gas exploration and development company's strategy of acquiring high-quality assets in politically stable and technically de-risked countries with substantial reserves potential. He says Sintana's current projects include four large, highly prospective offshore licenses in the Walvis, Luderitz, and Orange Basins and one onshore petroleum exploration license in Namibia, as well as the unconventional resource at VMM-37 in Colombia's Magdalena Basin. Bose says the company specifically targets assets that carry limited further capital requirements. #ProactiveInvestors #SintanaEnergy #Exxon #Chevron #TSX
Dr. Karen Griebling talks with Stephanie Smittle '03 about teaching, composing, learning on the job, seeing students flourish, memories of dear colleagues, Waltz Night, and more.

Discography:

Wildfire! Vienna Modern Masters (c) 2008, VMM 2052. Songs from This Dancing Ground of Sky, track 14: "Evergreen." Poetry by Peggy Pond-Church, New Mexico; music by Karen Griebling. Yvonne Love, soprano; Aura Ensemble, Houston, Texas; Rob Smith, conductor.

Music for the Cross Town Trio (c) 2009, Centaur CRC 3082. Petroglyph Dances, track 12: "Rain in the Mountain" (after a lithograph by Gustav Baumann). Music by Karen Griebling. The Cross Town Trio: Jackie Lamar, saxophone; Karen Griebling, viola; John Krebs, piano.

Apparitions II (c) 2013, Emeritus 20132. moduli mundi, track 15: "Boethius." Music by Karen Griebling, commissioned by Karen Fannin. Andy Wen, saxophone; John Krebs, piano.

Richard III: A Crown of Roses, A Crown of Thorns (c) 2015, Centaur CRC 3553, track 4: "Beware the Fury of Women Scorned!" Music and libretto by Karen Griebling. Stephanie Smittle and Kara Claybrook, sopranos; the Arkansas Symphony Orchestra, Geoffrey Robson, conductor.

Fractal Heart (c) 2016, Centaur 3570, track 14: "Vox animalae." Music and poetry by Karen Griebling. Stephanie Smittle, soprano; Stephanie Dickinson, piano.

"Songs from Eric's Book," poetry by Eric Crozier, music by Karen Griebling, 1994/2004, from the live premiere at the 2022 SheScores Festival, July 23, 2022, Harkness Chapel, Case Western Reserve University, Cleveland, Ohio. Gabrielle Haigh, soprano; Margi Griebling-Haigh, oboe; Karen Griebling, viola; Scott R. Haigh, bass; Eric Charnofsky, piano.
Following an epic weekend up in the Vietnam hills in Sa Pa for the Vietnam Mountain Marathon, we catch up with David Lloyd, the race director, to discuss the race and how it all played out.

We cover:
- Firstly, how are you feeling after an intense weekend organising the Vietnam Mountain Marathon?
- Background of VMM and what was prepared for the 2022 edition
- Planning the route
- Torrential rain for almost 24 hours running up to the kick-off of the 100M, and how that affected planning at the beginning of the race
- Some of the performances that impressed David most
- Any stories from the race (I saw that 3rd spot on the 42k had broken his hand)
- The support teams and aid station management
- Plans for the Vietnam Jungle Marathon in 1 month

VMM Facebook - https://www.facebook.com/VietnamMountainMarathon/
Instagram - https://www.instagram.com/vietnamtrailseries/?hl=en
Website - https://vietnamtrailseries.com/
When we sat down to record this episode, we ended up in a situation like we did with our episode with Ben Armstrong: too much content for one episode! To those familiar with Hyper-V, this likely doesn't come as a surprise, since we're discussing the various management tools available for Hyper-V, along with the overall management story for Microsoft's hypervisor. In this episode, we sit down with Eric Siron to discuss modern-day usage of the traditional Hyper-V management tools, which include:
- Hyper-V Manager
- Failover Cluster Manager
- PowerShell
- System Center Virtual Machine Manager (SCVMM)
In the next episode, we'll focus on the new management tools for Hyper-V, such as Windows Admin Center and Azure Arc.

In this episode:
- Hyper-V management vs. VMware management - 2:05
- An example of management assumptions for VMware admins trying Hyper-V - 8:43
- Networking woes in Windows Server - 12:12
- Why choice of tools is a strength of Hyper-V - 17:12
- Thoughts on System Center Virtual Machine Manager - 24:08
- An example of where VMM does NOT fit - 28:00

Resources for Hyper-V management tools:
- Andy's Hyper-V Datacenter Deployment Script
- Andy's VMware Datacenter Deployment Script
- PowerShell Direct
- Ben Armstrong on Twitter
- Ben Armstrong as a guest on the Sysadmin Dojo Podcast talking about Hyper-V
- Webinar on Azure Stack HCI
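Because the PowerShell module in the list above is itself scriptable, Hyper-V can also be driven from other languages by shelling out to it. A minimal sketch, assuming a Windows host with the Hyper-V PowerShell module installed; the helper names are ours, and Get-VM is the only real cmdlet referenced:

```python
import subprocess

def hyperv_command(cmdlet: str, **params: str) -> list[str]:
    """Compose a powershell.exe invocation for a single Hyper-V cmdlet."""
    args = " ".join(f"-{name} {value}" for name, value in params.items())
    return ["powershell.exe", "-NoProfile", "-Command",
            f"{cmdlet} {args}".strip()]

def list_vms() -> str:
    """Run Get-VM and return its console output (Windows + Hyper-V only)."""
    result = subprocess.run(hyperv_command("Get-VM"),
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # Build (without running) a query for one hypothetical VM.
    print(hyperv_command("Get-VM", Name="web01"))
```

The same pattern works for any of the tools discussed: they all ultimately manipulate the same Hyper-V objects that the cmdlets expose.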
Dr. Mark Shaw interviews Josh Brunk, Executive Director of Village Mercy Ministries, a residential men's program located in South Africa. Village Mercy (https://villagemercy.co.za/) is listed on The Addiction Connection as a resource for those looking for biblical residential addictions programs. About Village Mercy: "Helping People Enslaved To Addictions Find Freedom Through Jesus Christ." Village Mercy is a not-for-profit, intensive, Biblical-Counselling-based residential discipleship program for men that uses a Bible-centered approach to counsel people trapped in addictions so they can find freedom through the knowledge of Jesus Christ. Located in South Africa, where "this problem (of addictions) has been identified by the National Drug Master Plan, as a fuel for crime, poverty, reduced productivity, unemployment, dysfunctional family life, political instability, the escalation of chronic diseases, such as AIDS and TB, injury and premature death," Village Mercy stands as a beacon of hope and liberation for all men caught in the grip of this cruel tyrant, and for their families. How to Get Involved: PRAY Prayer is central to the ministries of Village Mercy because the challenges we deal with are fundamentally spiritual in nature. Please pray for us. Pray for godliness, wisdom, care, patience, love, faithfulness, endurance, positive attitudes, and joy amongst the leadership. Pray that those who are ministering may continue being conformed into the image of their Savior and Lord, Jesus Christ. Please pray that we may aim for faithfulness to God's call rather than success. Pray that those who come to Village Mercy for help might come to a saving knowledge of God through our interaction with them. Pray for more healthy biblical churches throughout South Africa that the men who graduate can go back to. And pray for God's leading in this ministry. GIVE Donate to The Addiction Connection and leave a comment or memo designated to "South Africa". 
Another way you can help us is by donating supplies, providing groceries, and giving financially for the various needs of the house. Sign up for our newsletter and contact us for ways to get involved in our ministry and to find out more details about how to support us. HIRE US Hire the residents of VMM to do paid manual/physical labour for you. Part of what we emphasize in the program is the necessity of work: during the week, the guys in the program do a lot of supervised manual labour (yard work, cleaning, mowing, cleaning pools, and providing manpower for a lot of handiwork jobs) to help pay the expenses of the program and, in return, equip themselves with job skills. REFERRALS Mention Village Mercy to those who may be struggling with addictions, or those who may need training in the area of this ministry. If you think of other ways to lend a hand, please contact us. Thank you.
Doing "acquisition" means obtaining new contacts from new people. Until a few years ago, in direct marketing, it meant buying lists of contact records to reach out to; with the advent of online direct marketing, it has become more important to win users over, in the sense that you need to reach profiled, in-target users who will follow you over time, and once acquired, you need to keep them. Retention is different: it is the communication activity aimed at customers you have already acquired. The focus therefore shifts from the quantity of users acquired to the quality of users acquired, meaning people interested in our products or services, in our shared values, in our brand...
Among the most in-demand soft skills today is the ability to solve problems. It is the cognitive process that starts from a problem, and its analysis, and arrives at its resolution. According to the 2017 Unioncamere report, for 49.6% of Italian companies problem solving is one of the fundamental soft skills in candidate selection, together with communication skills and the ability to work in a team. The process involves defining the problem; gathering all the qualitative and quantitative information needed for analysis; proposing several solutions; choosing the most impactful and sustainable one; defining the action plan; and implementing it. Naturally, with digital technology entering our lives, the problem solver has become a digital problem solver: someone able to solve problems with the help of digital tools and solutions...
By now the "net" is part of our lives: every day we go online to search, shop, and share; we visit sites, chat, communicate on social networks, and keep ourselves informed... To browse and reach all of this you need a "navigator," called a browser. It is a program for browsing the Internet that forwards a request for a document to the network and displays the document once it arrives. There are several browsers, each with its own characteristics. The most used is Google Chrome, with 80% of Internet users, followed by Mozilla Firefox, MS Edge, Apple Safari, and Internet Explorer. Being software, they have structures with potential entry points, and it is through these flaws that cyber criminals launch their attacks to extort information or carry out scams...
How important is brand reputation? How do you manage reputational damage? Reputational risk is the risk that a company suffers economic damage because of a negative perception of its image among its stakeholders. It is normally considered a second-level risk, that is, one derived from a previous error whose seriousness or peculiarity leads to a loss of "trust" or "credibility," since it stems from adverse events attributable to other risk categories, relating for example to the operational, legal, compliance, or strategic areas...
Among the many definitions of AI, here is the one the European Parliament gives on its website: "Artificial intelligence (AI) is the ability of a machine to display human capabilities such as reasoning, learning, planning, and creativity." It is, in other words, the capacity of a machine to solve human problems by learning. These are systems able to adapt their behaviour by relating to their environment, analysing the effects of previous actions, learning, and working autonomously, starting from an initial data set...
A conversion is an action to which we can attribute, even indirectly, an economic value. Filling in a form, providing an e-mail address, subscribing to a newsletter, requesting a quote, making a purchase: these are all conversions, including actions that take time to generate economic value.
Social engineering is the study of people's behaviour with the aim of identifying their weaknesses or weak points in order to manipulate them and extract useful information, such as credit card and bank account numbers, social network access, passwords, e-mail, and much more. These are techniques that exploit human psychology to extort information used to defraud users or steal their identity. Recent studies have shown that these attacks are increasingly aimed at smartphone users: because the smartphone is the main device used today, because we usually interact with it in a distracted and impulsive way, and because mobile devices often do not show all the information associated with a site or e-mail, have small screens, and simplify interactions...
Is Google really the search engine most used by users? How is its popularity distributed? What are the other search engines? The answer to the first question is: yes. Google is the most used search engine in the world! Statista data from June 2021 indicate that 87.76% of global searches go through Google, 5.56% through Bing, and 2.71% through Yahoo...
According to Article 4 of the General Data Protection Regulation (GDPR), a data breach, or "personal data breach," is a breach of security leading, accidentally or unlawfully, to the destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored, or otherwise processed. Under the guidelines of the European Data Protection Board (EDPB), a data breach falls into three macro-categories:
- Confidentiality breach: disclosure of information without the data subject's consent
- Availability breach: loss of access to the data
- Integrity breach: loss of the data's integrity
To cite some examples from the website of the Italian Data Protection Authority (Garante della Privacy):
- access to or acquisition of data by unauthorised third parties;
- theft or loss of computing devices containing personal data;
- deliberate alteration of personal data;
- inability to access data due to accidental causes or to external attacks, viruses, malware, etc.;
- loss or destruction of personal data due to accidents, adverse events, fires, or other disasters;
- unauthorised disclosure of personal data.
First of all, it must be clear that a data breach cannot be prevented 100%; however, a series of preventive actions can limit the risk, while a series of post-breach activities to inform the Garante della Privacy is mandatory.
Prevention rests on a few key actions:
1. Invest time and resources in continuous, ongoing training for stakeholders
2. Draw up regulations for complying with GDPR rules and for the use of company IT resources
3. Monitor logs, i.e. periodically review the chronological record of activity from an operating system, database, or other programs, to spot anomalies
4. Monitor non-company end devices as well, using dedicated programs
5. Draw up a detailed report on all suspicious activity
6. Periodically verify compliance with protocols and procedures
In the event of a data breach, some actions are mandatory:
- Inform the Garante della Privacy within 72 hours for private companies (Art. 33 GDPR)
- Inform the Garante della Privacy within 48/24 hours for public bodies (Art. 33 GDPR)
- Inform the data subjects (Art. 34 GDPR), in certain specific cases...
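The 72-hour clock in Article 33 is mechanical enough to track in code. A minimal sketch; the function names are ours, and it models only the 72-hour private-company window described above:

```python
from datetime import datetime, timedelta

# Art. 33 GDPR: notify the supervisory authority, where feasible,
# within 72 hours of becoming aware of the breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest notification time once the controller becomes aware."""
    return aware_at + NOTIFICATION_WINDOW

def is_overdue(aware_at: datetime, now: datetime) -> bool:
    """True if the Art. 33 window has already closed."""
    return now > notification_deadline(aware_at)

if __name__ == "__main__":
    aware = datetime(2024, 5, 1, 9, 0)
    print(notification_deadline(aware))                    # 2024-05-04 09:00:00
    print(is_overdue(aware, datetime(2024, 5, 3, 9, 0)))   # False
```

A real incident-response workflow would of course log the awareness timestamp the moment the breach is confirmed, since the deadline runs from awareness, not from the breach itself.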
In the third episode of "Presenting the Past," Shirley Sneve, Vice President of Broadcasting for Indian Country Today, reflects on her work with Indian Country Today, Vision Maker Media (VMM), and archiving with the AAPB. Sneve also comments on the history of Native American public broadcasting and presents excerpts from a few of the documentaries that VMM has supported that present a diversity of perspectives on traditional and contemporary Native American culture.
The term malware comes from the combination of the words MALicious softWARE: highly sophisticated programs created and spread with malicious intent, aiming to steal or extort sensitive data for criminal purposes. Malware is often associated with the image of a kid in a hoodie, but these programs are produced by very dangerous criminal organisations. Among their main characteristics is certainly their capacity for uncontrolled spread, which is why for years we called these programs viruses. Historically, the first malware is traced back to a program called Creeper, dating to 1971, although it had no malicious features. In the early 1980s worms spread, which, like their namesake, replicate themselves inside computers. At the start of the new millennium the first e-mail viruses spread: software such as ILOVEYOU and PIKACHU infected millions of computers, showing the world how vulnerable the network was. In 2013 came CryptoLocker, the ransomware that defined the category, able to encrypt the infected computer while the cyber criminals demanded a ransom to unlock the device with all the data inside...
In this episode, Jeannette talks to Vicky Martin, a tattoo artist who specialises in medical tattooing, especially for breast cancer patients, work she combines with NLP and other techniques to help the women she works with tap into the power of their minds. She explains how she ensures that they leave her feeling whole on the inside as well as on the outside: stronger, better equipped to tackle the next challenge, and able to enjoy their lives to the full. Vicky has taken on the mighty Facebook, which keeps closing down her social media accounts because it classed her before-and-after images of tattoos on women who have had mastectomies as pornography. Come join Vicky's cause, as it really does make a huge difference! SIGN UP TO MSOPI HERE: http://bit.ly/bbbmsopi Vicky also shares several tips that anyone can use to begin to tap into the power of their mind to transform their lives. Vicky shares how she went from not being able to leave the house for 4 years to running a successful globally franchised business, being a sought-after speaker, and developing the VICKY MARTIN METHOD VMM®. Vicky finishes up by explaining what she is doing to ensure that images of breast cancer scar transformations can be freely shared on social media without the poster's account getting automatically taken down.

KEY TAKEAWAYS
- Mindset is critical. The way you think really does transform your life.
- Tough times teach you invaluable lessons that lead to your future wins.
- Your mind cannot not answer a question. That is why asking the right questions makes such a huge difference to your life.
- You can anchor a feeling, good or bad, to a sound, touch, or smell to create a trigger you can engage at any time to generate strength. In the podcast, Vicky explains how.
- Don't be put off by what others say. Take action and do what you know is right.
- Don't shy away from challenges that frighten you. Acknowledge the fear and do it anyway.
- Start imagining your future in the best possible way.
BEST MOMENTS
'If you dig deep enough in the dirt, down there is a diamond. I love people to leave me feeling complete on the outside and strong on the inside.'
'The hard things at the time are just your wins in the future.'
'Your mind likes what it is used to.'

This is the perfect time to get focused on what YOU want to really achieve in your business, career, and life. It's never too late to be BRAVE and BOLD and unlock your inner BRILLIANCE. If you'd like to join Jeannette's FREE Business Discovery Session so you can SCALE UP your business and career, just DM Jeannette at info@jeannettelinfootassociates.com or sign up via Jeannette's linktree: https://linktr.ee/JLinfoot

VALUABLE RESOURCES
Brave, Bold, Brilliant podcast series

EPISODE RESOURCES
Website - https://vickymartinmethod.com/about-vicky/
The Third Door by Alex Banayan
Feel the Fear and Do It Anyway by Susan Jeffers

ABOUT THE GUEST
After almost burning out in a high-powered corporate job as an auditor and hitting one of the lowest points in her life, Vicky Martin wanted something that fulfilled the creative, people-focused, and entrepreneurial aspects of her personality. So she became a tattooist, specialising in the art of micro-pigmentation. Every year she learns new styles and techniques, and since 2012 she has been sharing her knowledge via her training school. She is passionate about giving breast cancer survivors the absolute best medical tattooing results to finish their long, tough journey and leave them feeling whole again. Fuelled by this passion and using her experience, blended with the global techniques she has learned, she created her method for tattooing the areola. Vicky now teaches her 3D Areola method (VICKY MARTIN METHOD VMM®) to tattooists across the world. Vicky also studied neuroscience. Her experiences have shown her that anyone can change their life through the way they think. That's why her VMM® method is equally about the artwork and the mindset.
She is also a qualified Neuro-Linguistic Programming (NLP) practitioner and hypnotist. She regularly speaks at events around the world on how to be UNSTOPPABLE, and she has created an online course and event called UNSTOPPABLE. She works to ensure that everyone can learn the skills they need to be able to live each day being the best possible version of themselves. Her mission is to educate everyone she meets to live a fearless, happy life. "What you do today can improve all your tomorrows, believe and stand tall" ABOUT THE HOST Jeannette Linfoot is a highly regarded senior executive, property investor, board advisor, and business mentor with over 25 years of global professional business experience across the travel, leisure, hospitality, and property sectors. Having bought, run, and sold businesses all over the world, Jeannette now has a portfolio of her own businesses and also advises and mentors other business leaders to drive forward their strategies as well as their own personal development. Jeannette is a down-to-earth leader, a passionate champion for diversity & inclusion, and a huge advocate of nurturing talent so every person can unleash their full potential and live their dreams. CONTACT THE HOST Jeannette's website https://www.jeannettelinfootassociates.com/ YOUTUBE LinkedIn Facebook Instagram Email - info@jeannettelinfootassociates.com Podcast Description Jeannette Linfoot talks to incredible people about their experiences of being Brave, Bold & Brilliant, which have allowed them to unleash their full potential in business, their careers, and life in general. From the boardroom tables of 'big' international business to the dining room tables of entrepreneurial start-ups, how to overcome challenges, embrace opportunities and take risks, whilst staying 'true' to yourself is the order of the day. See omnystudio.com/listener for privacy information.
Digital technology has changed how we think about training, which today is far more interactive, up to date, immersive, and effective, partly because it can be spread over time and accessed easily. eLearning means online learning which, thanks to new technologies and new research, can involve the student in an emotionally engaging experience. Over time, several forms of eLearning have developed: 1) Flipped learning, which reverses the traditional teaching method on the premise that a student's basic cognitive skills, such as memorising, listening, and watching, can mostly be exercised at home, while higher cognitive skills, such as creating, building, and applying, can be practised in class with other students and a tutor acting as facilitator. 2) Interactive learning, based on interaction with the screen and with eLearning content, supported by immersive video, headsets, and augmented reality. Widely used in schools, it is spreading increasingly into large organisations for staff training. 3) Game-based learning, where learning happens through games or video games. Humans have always played; as children we learn by playing, which is why this learning process is so effective, supported by technologies that can create environments similar, or even superior, to reality...
Digital technology has allowed anyone to create content to share across different channels, in different formats, at different times, and on different topics, all with simplicity and immediacy. Content can be written text, images, presentations, infographics, video, and audio. By extending to everyone the ability to create, work on, and distribute content, there has been a proliferation of it, but also a drop in quality, which has shifted from the professional level towards the popular. Audience expectations have changed too: people consume content at every moment of the day, everywhere, for training and/or information, to be entertained or moved, to get to know the people they follow, to connect with others, or simply as an unconscious reaction...
Universal Serial Bus, abbreviated USB, is a serial communication standard used for power delivery and for communication between computer peripherals. Created in 1995 and released a year later, it spread steadily until it became the main means of communication between devices. Today it is found in every car and every smartphone charger, even though in the last few years the expanding cloud has been taking some of its share. Despite this diffusion, not everyone knows that over the years USB sticks have also facilitated many cyber attacks, letting digital criminals access our devices and steal, or lock up, the data inside...
WHAT IS A DIGITAL MINDSET? HOW IMPORTANT IS IT? Every day we hear about digital technology and how it is changing the world, its processes, and its opportunities; but to seize those opportunities you need digital skills and abilities, and a mentality open to change, to the human perception of technology, to lateral thinking: the Digital Mindset. A mindset is the way we think about and approach reality, a dynamic set of mental structures formed through the experiences and knowledge gained from living in a digitalised world. It has two opposing aspects. It can be "fixed": a static mental approach, not inclined to change or experiment. A person with a fixed mindset resists, is afraid, does not fully understand the dynamics, prefers the status quo, and has no desire to grow and improve. Or it can be "growth": a flexible, dynamic mental approach, open to change, to analysing data, and to drawing conclusions only after experimenting. A growth-minded person does not resist; on the contrary, they seize the opportunities others miss, want to improve, experiment, listen, and are curious and sociable. Its essential components should be explored along the cognitive and behavioural dimensions...
Mobile has changed our lives! Today, with a smartphone or tablet, we make bank transactions, connect through apps, browse the web and social networks, check our email, share and chat, use navigation and geolocation apps, make purchases, book holidays, access public services, and capture our lives in photos and videos... and much more. Naturally, these devices accumulate sensitive personal and professional data relevant to whoever uses them, so they are a gold mine for those who run fraudulent schemes for profit or to damage reputations. We often tend to forget the risks we are exposed to, precisely because of the mobility at their core: we carry all our sensitive data with us, along with the risks that go with it...
Over the years we have seen the applications we log into multiply, and with them the passwords we have to remember. Social media, banks, public bodies, the corporate network, websites, tools: all are entry points to services or profiles that require a password. Studies show that people have never paid much attention to their passwords, and as login requests multiplied they fell into some common errors, such as always using the same password; using weak, easy-to-memorise passwords (their own name and date of birth, for example); or using the same password for personal and professional accounts. Password theft can cause very serious problems: access to your bank account, theft of your digital identity, and entry into all your other accounts and confidential information. The damage can also hit your professional life, perhaps through a malicious file planted inside the corporate network, or through the reputational fallout of a compromised digital identity. We must be very careful and apply a few simple rules...
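The common mistakes described above can be illustrated with a toy checker. This is only a sketch: every function name, threshold, and sample value here is an illustrative assumption, not guidance from the episode.

```python
# Minimal sketch of the common password mistakes described above:
# weak passwords built from personal data, and reuse across accounts.
# All names, thresholds, and sample values are illustrative.

def password_warnings(password, personal_terms, used_elsewhere):
    """Return a list of warnings for an obviously risky password."""
    warnings = []
    if len(password) < 12:
        warnings.append("too short: prefer 12+ characters")
    lowered = password.lower()
    for term in personal_terms:
        if term.lower() in lowered:
            warnings.append(f"contains personal info: {term!r}")
    if used_elsewhere:
        warnings.append("reused across accounts: use a unique password")
    return warnings

# A name-plus-birth-year password, reused elsewhere, trips every check.
print(password_warnings("mario1985", ["Mario", "1985"], used_elsewhere=True))
```

A long, unique password containing no personal terms would return an empty list.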
By now we know how many opportunities and challenges digital offers, and for companies it has become necessary to evolve their organisations with new ideas, new processes, a new mentality and, consequently, new job roles. For this reason, many professions are taking on strategic roles in this direction. In this podcast we hear about the main ones...
How do you manage your time? With information overload and the multiplication of professional disciplines, time management has become a fundamental skill. Time is a democratic resource, the same for everyone, and how we use it determines the success of our actions, in private life as much as in professional life. We must learn to distinguish urgent, important, and priority actions. • Urgent: an objective variable, since it is determined by time. These are actions that must be carried out at very short notice or that require immediate attention. Here you need a clear picture of the time available, and you need to know the deadlines in order to plan efficiently. • Important: the activities that add value to your work; they are fundamental. Importance is a subjective concept: what is important to you may not be to others, and what is important today may not be tomorrow. • Priority: actions that are both urgent and important, i.e. relevant and fundamental to the work and due on a short deadline; whatever comes first in the temporal ranking. Companies usually have decisive objectives, which must be achieved by a set date; desirable objectives, which can be achieved but may change over time and in content; and important objectives, which it is advisable to achieve. To determine whether the actions at hand are priority, important, or urgent, the Eisenhower Matrix comes to our aid, the tool used to plan the operations for the Normandy landings.
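The urgent/important distinction above is exactly what the Eisenhower Matrix encodes; a minimal sketch, where the quadrant labels follow the matrix's usual recommendations and the function name is illustrative:

```python
# Sketch of the Eisenhower Matrix classification described above.
# Quadrant labels follow the matrix's usual recommendations;
# the function name is illustrative.

def eisenhower_quadrant(urgent, important):
    """Map (urgent, important) to one of the matrix's four quadrants."""
    if urgent and important:
        return "do first (priority)"
    if important:
        return "schedule"      # important but not urgent
    if urgent:
        return "delegate"      # urgent but not important
    return "eliminate"         # neither urgent nor important

# A priority action: both urgent and important.
print(eisenhower_quadrant(urgent=True, important=True))
```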
Many studies now show that use and abuse of the internet is producing dependencies similar to those more commonly associated with alcohol, smoking, drug addiction, gambling, sex, and food. It is an addiction that is engulfing young people above all, and it presents problems such as: 1. A need to spend ever more time online 2. The development of conditions such as agitation, anxiety, and depression when time spent on the internet is reduced or suspended 3. A need to spend more time online than planned 4. An inability to stop browsing 5. Continued internet use despite awareness of the psycho-physical harm 6. A great deal of time spent on online activities. It is commonly called "Internet Addiction Disorder", a term given to us by the American psychiatrist Ivan Goldberg, understood as the syndrome of excessive, uncontrolled use of the web.
On 23 October 2019, on the blog of the journal Nature.com, Google announced that it had achieved "quantum supremacy" in a test run on a 53-qubit quantum processor, completing in 3 minutes and 20 seconds an extremely complex calculation that the world's most powerful (traditional) IBM computer would have taken 10,000 years to perform. Of course, no computer could disprove that claim, but studies and applications of quantum computers have held the world's attention for years. The first to speak of Quantum Computing was Paul Benioff, in the early 1980s, with the quantum Turing machine; in the years that followed, several academics began studying the application of quantum mechanics to computing. In recent years, though, the attention of the main industry players, and beyond, has multiplied, with heavy bets on a technology that could change how computers are used. What distinguishes Quantum Computing from traditional computers?
SEM, Search Engine Marketing, covers the activities aimed at acquiring traffic and visibility on search engines. They divide into: • SEO, Search Engine Optimization, based on activities that build natural, or organic, visibility. • SEA, Search Engine Advertising, based on paid activities, i.e. sponsored ads. SEO activities divide into: • On-site, concerning the architecture, structure, and semantics of the site, which drives progress up the SERP. The main activities include choosing keywords, writing image descriptions and "alt tags", building the site architecture and sitemap, reviewing code and titles, and creating content that is useful to the user and helps organic ranking. Here an SEO Audit is carried out: scanning the site and identifying issues, then intervening to improve ranking. • Off-site, covering link building (authoritative external links that make our own site authoritative) plus Social Media Management, blogging, and guest posting: content and activity outside the site that bring traffic and indexing. SEA activities are based on choosing and buying keywords that match users' searches, and on creating paid ads on a "pay per click" basis, i.e. you pay only when the user clicks the link. One of the main KPIs to monitor is the CTR, Click-Through Rate, the ratio of clicks to impressions. Purchasing works through an auction process.
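The CTR KPI mentioned above is a simple ratio of clicks to impressions; a minimal sketch, with illustrative sample numbers:

```python
# Sketch of the CTR (Click-Through Rate) KPI described above:
# the ratio of clicks to impressions, expressed here as a percentage.
# The sample numbers are illustrative.

def ctr(clicks, impressions):
    """Click-Through Rate as a percentage of impressions."""
    if impressions == 0:
        return 0.0  # an ad never shown has no meaningful CTR
    return 100.0 * clicks / impressions

# 35 clicks out of 1,000 ad impressions.
print(ctr(35, 1000))  # 3.5
```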
Today most communication travels by email, which in many cases holds sensitive information of particular value to the individual, the organisation, and the community. Over time, email has become the main tool of fraudsters and criminals, who carry out most of their schemes through phishing. The word "phishing" blends "phreaking", the name given to the first technology scams, with "fishing": running a technology scam by casting a lure, as in fishing, to win a prize through deception. Phishers send an email, a chat message, a text to a smartphone, and so on, with deceptive content, inviting the recipient to act by dangling the prospect of an advantage or a way out of an unpleasant situation. The attack's chance of success depends on the victim's awareness that an attack is possible and on their ability to spot the deception.
"Technological singularity" means the moment in a civilisation's development when technological progress accelerates so far that it exceeds human beings' capacity to comprehend it: essentially, the moment when Artificial Intelligence surpasses human intelligence. We should ask ourselves: how far can we trust machines? How should we relate to them? What obligations do we have towards them? We are talking about Digital Ethics!
On 17 December 2000 a website went online: bonsaikitten.com. It immediately went viral because it provided instructions for growing a cat in bonsai form: like the trees, you could supposedly raise a miniature cat to keep at home. Many newspapers criticised the site for promoting cruelty; in Italy, too, papers such as La Repubblica and il Manifesto ran articles on the matter. Only a year later did the FBI investigate and trace the server to the Massachusetts Institute of Technology, where a group of students had decided to create the hoax: an ironic joke whose impact they could not have imagined. This was one of the first online fakes to spread on the back of an ethically questionable premise. A fake news story is untrue information that spreads like wildfire through circulation on social networks, and on other online channels, with the aim of publishing false news to stir controversy, attack a politician, or attack the values of institutions and organisations. A second form of fake is click bait: writing enticing headlines that make people want to check and verify the article, so that they click the link and land on shopping sites or other dubious sites set up for attempted sales. There are also fake online shops, where the risks are again considerable, so it is always worth checking which site you are actually buying from. Before sharing a piece of information, or taking it as true, you should carry out a series of checks to verify the originality and truthfulness of the content you are about to read or share. 
There are a number of precautions and rules worth following before accepting online content as your own, because online it is very easy to create fake news. "Fact checking" is the practice to apply in order to establish the truthfulness of content. • First of all, verify the source: check who wrote the content and look for links, if there are any, leading back to the original. • Be suspicious of click bait, the enticing headlines designed to make you click out of curiosity, and always verify the traceability of the post. • Check the web address: important domains are sometimes cloned to land people on pages of the attacker's choosing. Changing a few letters is enough, so pay attention and read the URL. • Check the geolocation: many posts, especially on social media, carry a location, so just open Google Maps and check where the event is claimed to have happened. • Finally, check the credibility of social profiles: well-known figures, for example, are verified and carry a blue check. Fake news spreads through a circular flow of information: a source launches fake content and it gradually spreads; it is hard to trace the author, which is why that kind of information gets taken as true, much as happened to La Repubblica and il Manifesto in the case described above. The best advice is to pay attention to the information we read and share, and always try to get back to the source. However difficult, always try to verify the domain and the sites it refers to, and try to understand why that content may have been created and why you should, or should not, take it at face value. 
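The "read the URL" step above can be sketched as a toy check: a cloned domain typically differs from the genuine one by only a character or two. The allow-list, similarity threshold, and function name below are illustrative assumptions, not from the episode.

```python
# Toy sketch of the "read the URL" check described above: flag domains
# that are suspiciously close to, but not equal to, a known domain.
# The allow-list and the 0.85 threshold are illustrative.

from difflib import SequenceMatcher

KNOWN_DOMAINS = ["repubblica.it", "ilmanifesto.it"]  # illustrative allow-list

def looks_like_clone(domain):
    """Flag domains that are near-misses of a known domain."""
    for known in KNOWN_DOMAINS:
        ratio = SequenceMatcher(None, domain, known).ratio()
        if domain != known and ratio > 0.85:
            return True
    return False

print(looks_like_clone("repubb1ica.it"))  # True: one character swapped
print(looks_like_clone("repubblica.it"))  # False: the exact domain is fine
```

A real check would also normalise case, strip the scheme and path, and handle internationalised (punycode) domains, which this sketch ignores.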
There are also sites that let you run checks on fakes, for example on images: reverse image search shows where an image has already been used or shared, and whether it has been retouched. Several tools exist, including Google Images, which is quite simple to use, TinEye, and Yandex; sites that help detect manipulation in videos or images, such as FotoForensics or the InVID browser plug-in; and sites that collect, verify, and archive fake news, which we can consult at any time to check whether a piece of information is true, such as Bufale.net or attivissimo.net. In conclusion, the online digital ecosystem is very complex. Anyone can intervene: just think of Wikipedia, fundamentally an encyclopaedia created by its users, where it is easy to find untrue information placed there for personal or corporate ends, or by someone wanting to steer us towards misinformation. So always verify information and use all the tools available, because before sharing a hoax, and turning it into a hoax squared, it pays to be very, very careful.
If you love competitive running and enjoy the spirit of a trail series, then this episode will cover what you need to know and much more. However, if you see this title and immediately think it is NOT for you, you may do a 180, like I did. I am assured you don't have to be a super athlete to have fun and enjoy the spectacular scenery you will experience by joining any one of the Topas Trail Series marathons. David Lloyd, Sport and Race Director for the Vietnam Trail Series and my guest on today's show, tells me you can go as slow or as fast as you like; you only have to be of average fitness and enjoy walking and being outdoors. Yes, "that's me", I hear you say! Okay, so training starts now! I am warning you, David will woo you with the idea as he talks about the stunning views and the fun shared with other competitors. David says, "Joining one of these events is a great leveller. You may be walking or running with someone who is by day a shopkeeper, a banker, or a tech genius. It doesn't make any difference when you are on that trail and climbing those mountains. You are all competing for the same trophy, and possibly feeling the same fears, exhaustion, and muscle pain." He will make you laugh about some of the experiences of the most ardent competitors. You'll hear about how much planning and work goes into managing groups of sometimes 3,000-4,000 people: from accommodation to tasty food to medical support. David offers up some of those special "stand-out moments" that you just HAD to be there to capture. Plus, you will hear how these marathons contribute substantially to the work of Operation Smile, Newborns Vietnam, Blue Dragon, Hue Help and Sapa o Chau. These charities are clearly close to David's heart, and he says they also carry out local projects in villages along the route. A little bit about David Lloyd: David Lloyd is originally from the UK but has lived in Hoi An since 2011. His background includes journalism and photojournalism. 
He has had work published in The New York Times, and he has also written guidebooks on Vietnam and Laos. He is an active participant and a competitive runner and cyclist. David has been the full-time Sport and Race Director of all events for Topas Travel since 2016, when he was involved in the original race. By race, I mean the Vietnam Trail Series. Marathon dates coming up: VMM - 12/6 (NEW) Mountain - 27/8 Jungle - 16/10 You can find out all about this trail series here - www.vietnamtrailseries.com www.topasecolodge.com www.topastravel.vn www.topasriversidelodge.com Please check out the transcript at www.whataboutvietnam.com Please rate and review on iPhone - https://apple.co/3djTvox
In 2011, Jim Lecinski coined the term Zero Moment of Truth: the moment zero before the purchase, before the sale. It is the moment when all the available information is gathered online, something the democratisation of information has made possible, before making a purchase: a collection, comparison, and sharing of the values of the service, the product, and the brand before deciding whether to become a buyer. The idea has several declinations, because from the Zero Moment of Truth we move to the First Moment of Truth, when a comparison is made in the store, for example. I see a company's advertising, the product interests me, and I start gathering information; I go to the store, compare, look, observe, and decide whether I like the product or service. Then comes the Second Moment of Truth: I have the experience, whether a trial, the purchase itself, or the post-purchase phase. I live the brand's experience and values and try to work out whether the product meets my expectations. And so we reach the Third Moment of Truth, also called the Ultimate Moment of Truth; "ultimate", yet interchangeable with the first, because this is the fan phase that restarts the cycle and feeds everything back to the top. It is the phase in which users share and recount their experience, and can become ambassadors, or fans, or indeed the opposite: haters, people who had a bad experience and speak ill of their experience with the brand. 
This obviously has significant repercussions on reputation. We must consider the user's entire journey, the consumer's entire journey, across all these phases, and create communication that is omnichannel and multichannel, with multiple touchpoints; communication that is open, one-to-many but also many-to-one, equipped with awareness and listening, with problem solving, and with the ability to adapt and offer the best possible experience to our user, our consumer. In online communication, all of this translates into a process, a journey, built on 5 keywords: Know, Like, Trust, Sell, and Love. Know: I make myself known, activating the processes and activities that build notoriety: the famous awareness phase. Like: I deploy tools and content that make people like me. This phase also covers the consistency and order maintained across touchpoints: if I visit your site it has to appeal to me, just like your Facebook page or your LinkedIn company page. Trust: confidence in the brand and in what the brand communicates and conveys. A very important phase, and one that can lead to conversion. Sell: the sale comes only after the previous step. Online you cannot expect to make yourself known and sell straight away; you must pass through all these phases so that trust forms and the consumer, the user, chooses you. Love: your user or consumer falls in love with you (a fairly broad notion of "falls in love"), to the point of becoming a fan and recommending you to friends, the famous word of mouth, sharing the brand's values and restarting the whole process from notoriety: I tell someone about the brand, and it becomes known to someone else. 
We cannot expect to skip any phase, strategic or tactical, of this process, because all of us as consumers live this journey, and all of us want this information before deciding whether to have a purchase experience, and afterwards whether to share it, make it our own, become Ambassadors, and speak well of it: word of mouth, in short, that delivers results of real "value" for the brand.
In his book "Marketing Without Marketing", Alex Wipperfurth defines brand anarchy as "the loss of power and control by the company over its own brand, generated by consumer-users in order to transform and communicate a different personality, set of values, and market positioning for the brand". It can happen both offline and online, but it has found its fullest development with social media, the main engine of virality. Today companies must stay close to their users and consumers, conveying their values, vision, and mission, and always paying close attention to online reputation. This relationship of sharing, openness, co-creation, and co-participation can have devastating effects on reputation, in many cases with significant economic consequences, if not managed with care. Today, in fact, a brand's message can be taken and modified in order to seize control of the brand's notoriety, values, and positioning. Brand hijackers can be individuals or communities, and the main motivations that drive them to create brand anarchy can be dissatisfaction with, or commercial activity against, the brand; self-affirmation or visibility; play, or a sense of belonging. In practice, it consists in manipulating or transforming a message, visual (above all video) or informational, revealing what goes on behind the scenes, or even exploiting the brand's notoriety for selfish or hostile ends. It can sometimes escalate into digital identity theft, which can have a heavy impact on the brand. Social networks generate and accelerate both positivity and negativity. Precisely for this reason, companies should do everything they can to strengthen their community of fans, so that they can defend themselves better when attacks come. 
The power and value of brand anarchy are often underestimated; on the contrary, the phenomenon should be studied carefully, because the products of brand anarchy often succeed, especially when they are well made, creative, and intelligent! How can we prevent it, or at least contain it? • By registering trademarks and social media pages. • With buzz-monitoring activity. • With legal action against those held responsible for the infringement. Among the most famous cases is the March 2010 campaign created by Greenpeace activists against Nestlé, and specifically against KitKat. A parody of the KitKat "Take a Break" advert was produced, in which the protagonist opens a packet and bites into a piece shaped like an orangutan's finger. The bite makes the finger bleed, splattering blood over the keyboard. A very striking message, denouncing the multinational's use of palm oil tied to unsustainable operations in Indonesia, which caused major deforestation and, in consequence, damage to orangutan habitats. Activists later stormed Nestlé's headquarters in the United Kingdom, making a great deal of noise. The video went viral, and the company made the mistake of having it removed from YouTube. Greenpeace immediately republished it on Vimeo and then on its own site, generating a Streisand effect: a negative reaction from a public that would not tolerate the attempt to hide the "evidence". The phenomenon must be studied, analysed, and contained in advance if very serious repercussions are to be avoided, always remembering that monitoring your own online activity is fundamental.
Personal branding is the set of strategies that can help you promote yourself, your skills, abilities, and experience. Marketing techniques have become ever more important for developing coherent, effective communication that reflects our professional selves, but also who we are as people. It carries risks, of course, because the content we spread on the web could trigger negative reactions from our audience, in many cases with significant impacts on our lives. That is why we must always be aware of what we choose to publish, always asking whether the content fits the image we want to present to our employer, our family, our friends, and the recruiters hunting for talent. The content we post can take unexpected directions, and even without meaning to, we can inadvertently create problems for ourselves or for others. So before pressing "publish", think and ask yourself questions, whatever the content: a video, a post, or a comment. Take care of your reputation: monitor what you publish and what is said about you, always reply while remembering there is a person on the other side, weigh what to post, and be aware. Social media matter when a company and/or a person wants to address the mass market, using them as a showcase for their activity. We can define social networks as all the technologies that allow users to build virtual social networks in which they can create, exchange, and share text, images, video, and audio. Over time, more and more people have gone from being passive consumers of content to creators of it, generating intense competition and limiting, or at least complicating, the chance to stand out. Chris Anderson says that "the mass market has turned into a mass of markets". So what should you do? Use storytelling to recount your work, your professional experience, your knowledge, and your lived experience, keeping in mind the audience you are addressing, the platform you are using, and the tone of voice you want to adopt. Behave as you would in offline life: think before you act and consider the dynamics of relationships. Check privacy rules before posting, and make sure you are not damaging anyone's image and that you have the right to publish the content. Learn and apply netiquette, the rules that prescribe how a user on the web should behave towards other users. Also known as "the golden rules of internet etiquette", they are: 1. Write correctly: watch your spelling and punctuation. 2. Beware of CAPITAL LETTERS: on the web, they amount to SHOUTING. 3. Do not publish other people's personal information or embarrassing photos. 4. If you publish text, photos, or videos from other websites, cite the source. 5. When you comment on a post, do not attack the author personally. Respect other people's values, beliefs, and feelings. 6. If you join a discussion, stick to the topic and try to contribute added value. 7. Do not invite all your contacts to games or pages. Select those who might genuinely be interested. 8. Remember that using derogatory terms or words of hatred and racial, religious, or sexual prejudice is punishable by law.
Growth Hacking is the set of innovative tactics and techniques that use very few resources to rapidly grow a user base which, in a short time, generates significant economic returns. Here the term "hacker" is not meant in the everyday sense of a computer criminal, but in the sense of someone creative, someone who thinks outside the box. It becomes essential to learn to look at things from different angles in order to find the lever that, once pulled, generates growth and scalability in a short time. Growth Hacking is often associated with marketing, albeit unconventional marketing, but in reality it can touch any part of the organization: processes, products, internal systems, communication, and values. Fundamentally it consists of building an "agile" team of 3 to 9 people with skills that are both multidisciplinary and vertical, both technical and humanistic. It is not rare to find an anthropologist or a psychologist on the team, working with web designers, social media managers, copywriters, and so on. In Growth Hacking, people are at the center! The process builds on other well-known iterative models, such as Lean or Scrum, and is divided into 4 phases:
• Analysis: collecting, storing, and normalizing data.
• Ideation: generating ideas and developing lateral thinking.
• Prioritization: choosing the most impactful and sustainable idea.
• Execution: putting the idea into practice, to be scaled later.
After execution, the process restarts from analysis. Work is organized in "sprints", fixed and repeatable time-boxes usually of 1 or 2 weeks, in which the results of the previous tasks are analyzed and new tasks are kicked off, to be completed before the next sprint. The analysis phase is based on the Pirate Funnel (AAARRR), which makes it possible to evaluate every stage of the user's journey, also defining the individual metrics to measure and analyze in order to find the problem and the solution. The stages of the funnel are:
• Awareness: the user discovers us.
• Acquisition: we acquire the user.
• Activation: the user becomes active.
• Retention: the user comes back.
• Revenue: the user pays.
• Referral: the user brings other users.
In the ideation phase many tools are used, but brainstorming is certainly the most common and the most effective. Moving on to prioritization, it is important to involve the whole team, pushing them to vote on the ideas based on their impact on processes and their ease of execution. Finally, every tool available, digital or not, is used to prototype the solution and test it on a small sample. The analysis and data-collection phase then starts again, until the solution can be scaled. This methodology was born on July 26, 2010 thanks to Sean Ellis, entrepreneur and startup angel investor, when he introduced Growth Hacking in a post on his blog. Some strategies used in Growth Hacking are:
• Viral Acquisition: getting users to talk about and share the product's features.
• Paid Acquisition: advertising activities to increase the product's visibility.
• Content Marketing: creating innovative content, such as news and videos, that feeds itself through shares on social networks.
• Email Marketing: continuously engaging and nurturing users.
• A/B Testing (Split Testing): comparing multiple prototypes to determine which has the greater impact.
• SEO: optimizing the site to attract more users.
Among the best-known cases of companies that grew exponentially in a short time are Dropbox, Airbnb, and Udemy.
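The Pirate Funnel described above can be made concrete with a small script. This is only an illustrative sketch, not part of any Growth Hacking toolkit: the user counts are invented, and the point is simply how per-stage conversion rates expose the weakest step of the journey, which is where the team's next sprint should focus.

```python
# Illustrative sketch of the AAARRR ("Pirate") funnel: compute the
# conversion rate between consecutive stages to find the weakest step.
# Stage names follow the chapter; the user counts are made up.

funnel = [
    ("Awareness", 10_000),   # the user discovers us
    ("Acquisition", 2_500),  # we acquire the user
    ("Activation", 1_000),   # the user becomes active
    ("Retention", 400),      # the user comes back
    ("Revenue", 120),        # the user pays
    ("Referral", 20),        # the user brings other users
]

def conversion_rates(stages):
    """Return (from_stage, to_stage, rate) for each consecutive pair."""
    rates = []
    for (name_a, count_a), (name_b, count_b) in zip(stages, stages[1:]):
        rates.append((name_a, name_b, count_b / count_a))
    return rates

def weakest_step(stages):
    """The stage transition with the lowest conversion rate."""
    return min(conversion_rates(stages), key=lambda r: r[2])

for a, b, rate in conversion_rates(funnel):
    print(f"{a} -> {b}: {rate:.0%}")

print("Weakest step:", weakest_step(funnel)[:2])
```

With these made-up numbers the Revenue-to-Referral step converts worst, so an experiment on referral mechanics would be prioritized first.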
Today's business environment is far more turbulent, with sudden changes; it is very complex and hard to adapt to. It therefore becomes necessary to activate "operational creativity" in organizations: the ability to identify and analyze situations and to generate creative ideas which, once prioritized, lead to solutions for growth and sustainability. The problem-solving process consists of 5 phases:
• Problem finding: searching for the problem.
• Problem setting: defining the problem itself.
• Problem solving: producing the alternatives for solving the problem (this is the creative phase).
• Decision taking: making the decision (which becomes a yes/no).
• Decision making: executing the chosen solution.
Operational creativity rests on know-how, skills, and knowledge; on intelligence, understood as the ability to connect independent elements of our knowledge in order to find a solution; on the ability to carry out good analysis, digging deep enough to reach the problem; and on the pressure of deadlines: if we have plenty of time, we tend not to make a creative decision.
The two paths available for solving problems with operational creativity are quantitative and qualitative. The quantitative paths go from more to less: more revenue, more customers... doing more things. Or using less time, fewer resources. The qualitative paths go through the new and the different, and also require the ability to copy well, from a competitor or from a related but different industry, in order to create something new. The phases of the creative process can be summarized as:
✔ Analysis: the ability to "lift the plot", to understand what lies underneath.
✔ Diagnosis: "knowing through the facts", learning from reality.
✔ Synthesis: "bringing together" all this information to arrive at a solution.
In conclusion, we can say that operational creativity requires learning to imagine and to play with ideas, which certainly adds a great deal to our knowledge; that the creative process develops at the moment the problem is defined; and, finally, that for an idea to be creative it must be applicable.
The Italian word for awareness, consapevolezza, derives from con-sapevole, from the root of sapere, "to know": knowing with cognition, with ethics, with behavior, with lucidity, with rationality, with conscience. It depends on the brainstem; neuroscience tells us there is a part of our brain that touches on this area, this subject. Awareness is very important. A few days ago I published a post on LinkedIn about communicative coherence across channels, social networks, and platforms, and many of the comments contained observations that made me think. Some people wrote that on Facebook they behave as if they were among friends or former classmates, and do whatever they want. It doesn't work that way. Awareness is needed, because Facebook is a global platform that hosts us; it is a platform with its own rules, policies, and governance, and it is a multimillion-dollar company whose sole purpose is to keep you inside the platform with every tool at its disposal, psychological as well as entertainment-based, in order to hold people's attention. You need awareness of the habitat. Remember that you are a guest on a platform where you control nothing except what you decide to make public. But that content of yours can travel in directions, at speeds, and to an extent that you cannot predict. So awareness is needed all the same: you must be aware of the habitat you are in, and aware of the fact that it is not you who sets the rules, but perhaps an algorithm, or whoever is behind it. Then you need awareness of your audience. Have you ever thought about your target? About how to address them? With what tone of voice? You are online: you are not addressing a small audience, but a potentially enormous one, whatever your goal.
You must be aware of who you are talking to, of what they want to hear and in what format, and of where and why they should listen to you. So ask yourself these questions: who is your target, your audience? And try to work on it with awareness. You need communicative awareness: are you aware of which tone of voice to use? Are you aware that what you communicate can have an impact on anyone, globally? Are you aware of how you should address people? Of the rules of netiquette? Of your way of communicating, your way of expressing yourself, and how you should express yourself? These questions about awareness matter greatly if you want a healthy, clean, and useful online life. Finally, you need self-awareness. If you are not aware of yourself, it is very hard to be aware of anything else. You must learn to know yourself; it is a demanding process, in many situations even a painful one, because it is a maieutic process, one you must carry out on yourself, without too many points of comparison, highly introspective, and sometimes it takes a long time. When you communicate online, what you are shows, and over time it can become something that turns against you. So, in short, awareness is needed, always keeping these four levels in mind.
The more Volatile, Uncertain, Complex, and Ambiguous the context, the greater the need for organizations to have responsible people on staff, and the greater the need for responsible leaders who know how to educate others to responsibility. The problem with responsibility is that it cannot be given; it can only be taken. Responsibility is tied to how people live in general, not just to the workplace. The term responsibility comes from the Latin respònsus: committing to answer, to someone or to oneself, for one's actions and the consequences that follow from them. It is therefore based on the response we are able to give to an event. Two attitudes are possible: victims of circumstance and masters of one's own destiny. The victim attitude belongs to those who ask "how come" I find myself in this situation. It is the attitude of those who externalize responsibility to others, who blame the quality of a product that doesn't sell, or who claim that the listener doesn't understand when their message fails to get through. The attitude of mastery over one's own destiny, instead, belongs to those who focus not so much on the circumstances as on their own capacity to respond to a given situation. So: what can I do to sell the product? What can I do so that the other person understands me? I am not responsible for whether the product is good or not, but I am responsible for how I present it.
I am responsible for making my message reach the other person in a comprehensible form. A responsible attitude, then, is that of someone who concentrates on the responses available to us now. In a Volatile, Uncertain, Complex, and Ambiguous world and market, one of companies' priorities today is to fill their ranks with people who ask "what can I do now?" in response to problems and events, taking responsibility, even for making mistakes. Managers can be victims or masters of their own destiny: a victim manager will blame the team for its performance, while a master manager asks how he can respond to the problem, setting himself the goal of re-educating his people.
In this episode, we clear up the myth about the scrub of death, look at Wayland and Weston on FreeBSD, note that Intel QuickAssist is here, and check out OpenSMTPD on OpenBSD. This episode was brought to you by

Headlines

Matt Ahrens answers questions about the "Scrub of Death"

In working on the breakdown of that ZFS article last week, Matt Ahrens contacted me and provided some answers he has given to these questions in the past, allowing me to answer them using HIS exact words.

"ZFS has an operation, called SCRUB, that is used to check all data in the pool and recover any data that is incorrect. However, if a bug makes errors on the pool persist (for example, on a system with bad non-ECC RAM), then SCRUB can damage a pool instead of recovering it. I heard it called the "SCRUB of death" somewhere. Therefore, as far as I understand, using SCRUB without ECC memory is dangerous."

> I don't believe that is accurate. What is the proposed mechanism by which scrub can corrupt a lot of data, with non-ECC memory?

> ZFS repairs bad data by writing known good data to the bad location on disk. The checksum of the data has to verify correctly for it to be considered "good". An undetected memory error could change the in-memory checksum or data, causing ZFS to incorrectly think that the data on disk doesn't match the checksum. In that case, ZFS would attempt to repair the data by first re-reading the same offset on disk, and then reading from any other available copies of the data (e.g. mirrors, ditto blocks, or RAID-Z reconstruction). If any of these attempts results in data that matches the checksum, then the data will be written on top of the (supposed) bad data. If the data was actually good, then overwriting it with the same good data doesn't hurt anything.

> Let's look at what will happen with 3 types of errors with non-ECC memory:

> 1. Rare, random errors (e.g. particle strikes - say, less than one error per GB per second).
If ZFS finds data that matches the checksum, then we know that we have the correct data (at least at that point in time, with probability 1-1/2^256). If there are a lot of memory errors happening at a high rate, or if the in-memory checksum was corrupt, then ZFS won't be able to find a good copy of the data, so it won't do a repair write. It's possible that the correctly-checksummed data is later corrupted in memory, before the repair write. However, the window of vulnerability is very, very small - on the order of milliseconds between when the checksum is verified and when the write to disk completes. It is implausible that this tiny window of memory vulnerability would be hit repeatedly.

> 2. Memory that pretty much never does the right thing (e.g. a huge rate of particle strikes, all memory always reads 0, etc.). In this case, critical parts of kernel memory (e.g. instructions) will be immediately corrupted, causing the system to panic and not be able to boot again.

> 3. One or a few memory locations have "stuck bits", which always read 0 (or always read 1). This is the scenario discussed in the message which (I believe) originally started the "Scrub of Death" myth: https://forums.freenas.org/index.php?threads/ecc-vs-non-ecc-ram-and-zfs.15449/ This assumes that we read in some data from disk to a memory location with a stuck bit, "correct" that same bad memory location by overwriting the memory with the correct data, and then write the bad memory location to disk. However, ZFS doesn't do that. (It seems the author thinks that ZFS uses parity, which it only does when using RAID-Z. Even with RAID-Z, we also verify the checksum, and we don't overwrite the bad memory location.)

> Here's what ZFS will actually do in this scenario: If ZFS reads data from disk into a memory location with a stuck bit, it will detect a checksum mismatch and try to find a good copy of the data to repair the "bad" disk.
ZFS will allocate a new, different memory location to read a 2nd copy of the data, e.g. from the other side of a mirror (this happens near the end of dsl_scan_scrub_cb()). If the new memory location also has a stuck bit, then its checksum will also fail, so we won't use it to repair the "bad" disk. If the checksum of the 2nd copy of the data is correct, then we will write it to the "bad" disk. This write is unnecessary, because the "bad" disk is not really bad, but it is overwriting the good data with the same good data.

> I believe that this misunderstanding stems from the idea that ZFS fixes bad data by overwriting it in place with good data. In reality, ZFS overwrites the location on disk, using a different memory location for each read from disk. The "Scrub of Death" myth assumes that ZFS overwrites the location in memory, which it doesn't do.

> In summary, there's no plausible scenario where ZFS would amplify a small number of memory errors, causing a "scrub of death". Additionally, compared to other filesystems, ZFS checksums provide some additional protection against bad memory.

"Is it true that ZFS verifies the checksum of every block on every read from disk?"

> Yes

"And if that block is incorrect, that ZFS will repair it?"

> Yes

"If yes, is it possible to set an option or flag to change that behavior? For example, I would like ZFS to verify checksums during any read, but not change anything and only report issues when they appear. Is that possible?"

> There isn't any built-in flag for doing that. It wouldn't be hard to add one, though. If you just wanted to verify data, without attempting to correct it, you could read or scan the data while the pool is imported read-only.

"If using a mirror, when a file is read, is it fully read and verified from both sides of the mirror?"

> No, for performance purposes, each block is read from only one side of the mirror (assuming there is no checksum error).
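Matt's stuck-bit argument can be checked with a toy model. The sketch below is not ZFS code and simplifies heavily; it only mimics the logic he describes: every copy read from disk lands in a fresh buffer, each buffer is verified against the block's stored checksum, and only a copy that verifies is ever written back. A "stuck bit" in a buffer therefore produces a failed verification, never a bad repair write.

```python
# Toy model of the scrub/repair logic described above (NOT real ZFS code):
# a repair write happens only with data whose checksum verifies, so a
# stuck memory bit can spoil a read buffer but never the on-disk data.
import hashlib

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

GOOD = b"important block contents"
EXPECTED = checksum(GOOD)  # ZFS keeps the checksum in the parent block

def read_copy(data: bytes, stuck_bit: bool) -> bytes:
    """Read one copy into a fresh buffer; optionally corrupt bit 0."""
    buf = bytearray(data)
    if stuck_bit:
        buf[0] ^= 0x01          # simulate a stuck memory bit
    return bytes(buf)

def scrub_block(copies_stuck: list, disk: bytes) -> bytes:
    """Try each copy (mirror side, ditto block, ...); repair only if one
    verifies. Returns the block as left on disk after the scrub."""
    for stuck in copies_stuck:
        buf = read_copy(disk, stuck)
        if checksum(buf) == EXPECTED:   # a verified good copy
            return buf                  # (re)write verified data only
    return disk                         # no good copy: no repair write

# First buffer hits the stuck bit, second buffer is clean: disk stays good.
assert scrub_block([True, False], GOOD) == GOOD
# Every buffer hits a stuck bit: the checksum never verifies, nothing is
# written, and the on-disk block is left alone rather than "repaired" badly.
assert scrub_block([True, True], GOOD) == GOOD
print("no scenario writes unverified data")
```

In no branch does unverified data reach the disk, which is the crux of why the "scrub of death" needs a mechanism that the described design simply does not have.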
"What is the difference between a scrub and copying every file to /dev/null?"

> That won't check all copies of the file (e.g. it won't check both sides of the mirror).

***

Wayland, and Weston, and FreeBSD - Oh My! (https://euroquis.nl/bobulate/?p=1617)

KDE's CI system for FreeBSD (that is, what upstream runs to continuously test KDE git code on the FreeBSD platform) is missing some bits and failing some tests because of Wayland. Or rather, because FreeBSD now has Wayland, but not Qt5-Wayland, and no Weston either (the reference implementation of a Wayland compositor). Today I went hunting for the bits and pieces needed to make that happen. Fortunately, all the heavy lifting has already been done: there is a Weston port prepared, and there was a Qt5-Wayland port well-hidden in the Area51 plasma5/ branch. I have taken the liberty of pulling them into the Area51 repository as branch qtwayland. That way we can nudge Weston forward, and/or push Qt5-Wayland in separately. Nicest from a testing perspective is probably doing both at the same time. I picked a random "Hello World" Wayland tutorial and also built a minimal Qt program (using QMessageBox::question, my favorite function to hate right now, because of its i18n characteristics). Then, setting XDG_RUNTIME_DIR to /tmp/xdg, I could start Weston (as an X11 client), wayland-hello (as a Wayland client, displaying in Weston) and qt-hello (as either an X11 client, or as a Wayland client). So this gives users of Area51 (while shuffling branches, granted) a modern desktop and modern display capabilities. Oh my! It will take a few days for this to trickle up and/or down so that the CI can benefit and we can make sure that KWin's tests all work on FreeBSD, but it's another good step towards tight CI and another small step towards KDE Plasma 5 on the desktop on FreeBSD.

pkgsrcCon 2017 report (https://blog.netbsd.org/tnf/entry/pkgsrccon_2017_report)

This year's pkgsrcCon returned to London once again.
It was last held in London back in 2014. The 2014 con was the first pkgsrcCon I attended; I had been working on Darwin/PowerPC fixes for some months and presented on the progress I'd made with a 12" G4 PowerBook. I took away a G4 Mac Mini that day to help spare the PowerBook for use and dedicate a machine to building and testing. The offer of PowerPC hardware donations was repeated at this year's con, thanks to jperkin@ who showed up with a backpack full of Mac Minis (more on that later). Since 2014 we have held cons in Berlin (2015) & Krakow (2016). In Krakow we had talks about a wide range of projects over 2 days, from Haiku Ports to Common Lisp to midipix (building native PE binaries for Windows) and back to the BSDs. I was very pleased to continue the theme of a diverse program this year. Aside from pkgsrc and NetBSD, we had talks about FreeBSD, OpenBSD, Slackware Linux, and Plan 9. Things began with a pub gathering on the Friday for the pre-con social; we hung out and chatted till almost midnight on a wide range of topics, such as supporting a system using NFS on MS-DOS, the origins of pdksh, corporate IT, culture, and many other topics. On parting I was asked about the starting time on Saturday, as there was some conflicting information. I learnt that the registration email had stated a start 30 minutes later than I had scheduled for & advertised on the website. Lesson learnt: register for your own event! Not a problem; I still needed to set up a webpage for the live video stream, so I could do both when I got back. With some trimming here and there I had a new schedule. I posted that to the pkgsrcCon website and moved on to setting up a basic web page containing a snippet of JavaScript to play a live video stream from Scale Engine. 2+ hours later, it was pointed out that the XSS protection headers on pkgsrc.org broke the functionality. Thanks to jmcneill@ for debugging and providing a working page.
Saturday started off with Giovanni Bechis speaking about pledge in OpenBSD and adding support to various packages in their ports tree; alnsn@ then spoke about installing packages from a repo hosted on the Tor network. After a quick coffee break we were back to hear Charles Forsyth speak about how Plan 9 and Inferno dealt with portability and building software, and the problems which are avoided by the environment there. This was followed by a very energetic rant by David Spencer from the Slackbuilds project on packaging 3rd-party software. Slackbuilds is a packaging system for Slackware Linux which was inspired by FreeBSD ports. For the first slot after lunch, agc@ gave a talk on the early history of pkgsrc, followed by Thomas Merkel on using vagrant to test pkgsrc changes with ease, locally. khorben@ covered his work on adding security to pkgsrc, and bsiegert@ covered the benefits of performing our bulk builds in the cloud and the challenges we currently face. My talk was about some topics and ideas which had inspired me or caught my attention, and how they could maybe apply to my work. The title of the talk was taken from the name of Andrew Weatherall's Saint Etienne remix, possibly referring to two different styles of track (dub & vocal) merged into one, or something else. I meant it in terms of the applicability of thoughts and ideas. After me, agc@ gave a second talk on the evolution of the Netflix Open Connect appliance, which runs FreeBSD, and Vsevolod Stakhov wrapped up the day with a talk about the technical implementation details of the successor to pkgtools in FreeBSD, called pkg, and how it could be of benefit for pkgsrc. For day 2 we gathered for a hack day at the London Hack Space. I had burned some CDs of the most recent macppc builds of NetBSD 8.0_BETA and -current to install and upgrade Mac Minis.
I set up the donated G4 Minis for everyone in a dual-boot configuration and moved on to taking apart my MacBook Air to inspect the wifi adapter, as I wanted to replace it with something which works on FreeBSD. It was not clear from the ifixit teardown photos what size the card was; it seemed like a normal mini-PCIe card, but it turned out to be far smaller. Thomas had the same card in his, so we are not alone. Thomas has started putting together a driver for the Broadcom card; the project is still in its early days and lacks support for encrypted networks, but hopefully it will appear on review.freebsd.org in the future. weidi@ worked on fixing SunOS bugs in various packages, and later in the night we set up a NetBSD/macppc bulk build environment together on his Mac Mini. Thomas set up an OpenGrok instance to index the source code of all the software available for packaging in pkgsrc. This makes evaluating a change easier and assessing its scope of impact a little quicker, without having to run through a potentially lengthy bulk build to realise the impact. bsiegert@ cleared his ticket and email backlog for pkgsrc, and alnsn@ got NetBSD/evbmips64-eb booting on his EdgeRouter Lite. On Monday we reconvened at the Hack Space again and worked some more. I started putting together the talks page with the details from Saturday and the slides which I had received, in preparation for the videos which would come later in the week. By 3pm pkgsrcCon was over. I was pretty exhausted but really pleased to have had a few days of techie fun. Many thanks to The NetBSD Foundation for purchasing a camera to use for streaming the event, and for a speedy response all round by the board. The Open Source Specialist Group at BCS, The Chartered Institute for IT, and the London Hack Space for hosting us. Scale Engine for providing the streaming facility. weidi@ for hosting the recorded videos.
Allan Jude for pointers, Jared McNeill for debugging, NYCBUG and Patrick McEvoy for tips on streaming, and the attendees and speakers. This year we had speakers from the USA, Italy, Germany and London E2. Looking forward to pkgsrcCon 2018! The videos and slides are available here (http://www.pkgsrc.org/pkgsrcCon/2017/talks.html) and at the Internet Archive (http://archive.org/details/pkgsrcCon-2017).

News Roundup

QuickAssist Driver for FreeBSD is here and pfSense Support Coming (https://www.servethehome.com/quickassist-driver-freebsd-pfsupport-coming/)

This week we have something that STH readers will be excited about. Before I started writing for STH, I was a reader and had been longing for QuickAssist support ever since STH's first Rangeley article over three and a half years ago. It was clear from the get-go that Rangeley was going to be the preeminent firewall appliance platform of its day. The scope of products that were impacted by the Intel Atom C2000 series bug showed us it was indeed. For my personal firewalls, I use pfSense on that Rangeley platform, so I have been waiting to use QuickAssist with my hardware for almost an entire product generation.

+ New Hardware and QuickAssist Incoming to pfSense (Finally)

pfSense (and a few other firewalls) are based on FreeBSD. FreeBSD tends to lag behind mainstream Linux in driver support, but it is popular for embedded security appliances. While STH is the only site to have done QuickAssist benchmarks for OpenSSL and IPsec VPNs pre-Skylake, we expect more platforms to use it now that the new Intel Xeon Scalable Processor Family is out. With the Xeon Scalable platforms, the "Lewisburg" PCH has QuickAssist options of up to 100Gbps, or 2.5x faster than the previous-generation add-in cards we tested (40Gbps). We now have more and better hardware for QAT, but we were still devoid of a viable FreeBSD QAT driver from Intel. That has changed.
Our Intel Xeon Scalable Processor Family (Skylake-SP) Launch Coverage Central has been the focus of the STH team's attention this week. There was another important update from Intel that got buried: a publicly available Intel QuickAssist driver for FreeBSD. You can find the driver on 01.org, dated July 12, 2017. Drivers are great, but we still need support to be enabled in the OS and at the application layer. Patrick forwarded me this tweet from Jim Thompson (lead at Netgate, the company behind pfSense): The Netgate team has been a key company pushing QuickAssist appliances in the market, usually based on Linux. To see that QAT is coming to FreeBSD and that they were working to integrate it into "pfSense soon" is more than welcome. For STH readers, get ready. It appears to be actually and finally happening: QuickAssist on FreeBSD and pfSense.

OpenBSD on the Huawei MateBook X (https://jcs.org/2017/07/14/matebook)

The Huawei MateBook X is a high-quality 13" ultra-thin laptop with a fanless Core i5 processor. It is obviously biting the design of the Apple 12" MacBook, but it does have some notable improvements, such as a slightly larger screen, a more usable keyboard with adequate key travel, and 2 USB-C ports. It also uses more standard PC components than the MacBook, such as a PS/2-connected keyboard, a removable M.2 WiFi card, etc., so its OpenBSD compatibility is quite good. In contrast to the Xiaomi Mi Air, the MateBook is actually sold in the US and comes with a full warranty and much higher build quality (though at twice the price). It is offered in the US in a "space gray" color for the Core i5 model and a gold color for the Core i7. The fanless Core i5 processor feels snappy and doesn't get warm during normal usage on OpenBSD. Doing a make -j4 build at full CPU speed does cause the laptop to get warm, though the palmrest maintains a usable temperature. The chassis is all aluminum and has excellent rigidity in the keyboard area.
The 13.0" 2160x1440 glossy IPS "Gorilla glass" screen has a very small bezel, and its hinge is properly weighted to allow opening the lid with one hand. There is no wobble in the screen when open, even when jostling the desk that the laptop sits on. It has a reported brightness of 350 nits. I did not experience any of the UEFI boot variable problems that I did with the Xiaomi, and the MateBook booted quickly into OpenBSD after re-initializing the GPT table during installation.

OpenSMTPD under OpenBSD with SSL/VirtualUsers/Dovecot (https://blog.cagedmonster.net/opensmtpd-under-openbsd-with-ssl-virtualusers-dovecot/)

During the 2013 AsiaBSDCon, the OpenBSD team presented its mail solution, OpenSMTPD. Developed by the OpenBSD team, it carries the much-appreciated philosophy of its developers: security, simplicity/clarity, and advanced features.

Basic configuration: OpenSMTPD is installed by default, so we can immediately start with a simple configuration.
> We listen on our interfaces, and we specify the path of our aliases file so we can manage redirections.
> Mails will be delivered for the domain cagedmonster.net to mbox (the local users' mailbox), and the same goes for the aliases.
> Finally, we accept relaying local mails exclusively.
> We can now enable smtpd at system startup and start the daemon.

Advanced configuration including TLS: You can use SSL with a self-signed certificate (which will not be trusted) or a certificate generated by a trusted authority. Let's Encrypt uses Certbot to generate your certificate; you can check this page for further information. Let's focus on the first.
Generation of the certificate:
We fix the permissions:
We edit the config file:
> We now have a mail server with SSL; it's time to configure our IMAP server, Dovecot, and manage the creation of virtual users.

Dovecot setup, and creation of Virtual Users: We will use the package system of OpenBSD, so please check the configuration of your /etc/pkg.conf file.
Enable the service at system startup:
Set up the Virtual Users structure:
Add the passwd table for smtpd:
Modify the OpenSMTPD configuration:
> We declare the files used for our Virtual Accounts, we include SSL, and we configure mail delivery via the Dovecot LMTP socket. We'll create our user lina@cagedmonster.net and set its password.
Configure SSL.
Configure dovecot.conf.
Configure mail.con.
Configure login.conf: make sure that the value of openfiles-cur in /etc/login.conf is equal to or greater than 1000!
Start Dovecot.

***

OpenSMTPD and Dovecot under OpenBSD with MySQL support and SPAMD (https://blog.cagedmonster.net/opensmtpd-and-dovecot-under-openbsd-with-mysql-support-and-spamd/)

This article is the continuation of my previous tutorial, OpenSMTPD under OpenBSD with SSL/VirtualUsers/Dovecot. We'll use the same configuration and add some features so we can:
Use our domains, aliases, and virtual users with a MySQL database (MariaDB under OpenBSD).
Deploy SPAMD with OpenSMTPD for a strong antispam solution.

+ Setup of the MySQL support for OpenSMTPD & Dovecot
+ We create our SQL database named "smtpd"
+ We create our SQL user "opensmtpd", grant it the privileges on our SQL database, and set its password
+ We create the structure of our SQL database
+ We generate our passwords with Blowfish (remember, it's OpenBSD!) for our users
+ We create our tables and include our data
+ We push everything to our database
+ Time to configure OpenSMTPD
+ We create our mysql.conf file and configure it
+ Configuration of dovecot.conf
+ Configuration of auth-sql.conf.ext
+ Configuration of dovecot-sql.conf.ext
+ Restart our services

OpenSMTPD & SPAMD: SPAMD is a service simulating a fake SMTP server, relying on strict compliance with the RFCs to determine whether the server delivering a mail is a spammer or not.
+ Configuration of SPAMD : + Enable SPAMD & SPAMLOGD at system startup : + Configuration of SPAMD flags + Configuration of PacketFilter + Configuration of SPAMD + Start SPAMD & SPAMLOGD Running a TOR relay on FreeBSD (https://networkingbsdblog.wordpress.com/2017/07/14/freebsd-tor-relay-using-priveledge-seperation/) There are 2 main steps to getting a TOR relay working on FreeBSD: 1) installing and configuring Tor, and 2) using an edge router to do port translation. In my case I wanted TOR to run its services on ports 80 and 443, but any port under 1024 requires root access on UNIX systems. So I used port mapping on my router to map the ports. Begin by installing TOR and ARM from: /usr/ports/security/tor/ /usr/ports/security/arm/ Arm is the Anonymizing Relay Monitor: https://www.torproject.org/projects/arm.html.en It provides useful monitoring graphs and can be used to configure the torrc file. Next, edit the torrc file (see the blog article for the edit). It is handy to add the following lines to /etc/services so you can more easily modify your pf configuration.
torproxy 9050/tcp #torsocks torOR 9090/tcp #torOR torDIR 9099/tcp #torDIR To allow TOR services my pf.conf has the following lines: # interfaces lan_if="re0" wifi_if="wlan0" interfaces="{wlan0,re0}" tcp_services = "{ ssh torproxy torOR torDIR }" # options set block-policy drop set loginterface $lan_if # pass on lo set skip on lo scrub in on $lan_if all fragment reassemble # NAT nat on $lan_if from $wifi_if:network to !($lan_if) -> ($lan_if) block all antispoof for $interfaces #In NAT pass in log on $wifi_if inet pass out all keep state #ICMP pass out log inet proto icmp from any to any keep state pass in log quick inet proto icmp from any to any keep state #SSH pass in inet proto tcp to $lan_if port ssh pass in inet proto tcp to $wifi_if port ssh #TCP Services on Server pass in inet proto tcp to $interfaces port $tcp_services keep state The final part is mapping the ports as follows: TOR directory port: LANIP:9099 -> WANIP:80 TOR router port: LANIP:9090 -> WANIP:443 Now enable TOR (note that `sudo echo ... >> /etc/rc.conf` would fail, since the redirection is performed by the unprivileged shell): $ echo 'tor_enable="YES"' | sudo tee -a /etc/rc.conf Start TOR: $ sudo service tor start *** Beastie Bits OpenBSD as a “Desktop” (Laptop) (http://unixseclab.com/index.php/2017/06/12/openbsd-as-a-desktop-laptop/) Sascha Wildner has updated ACPICA in DragonFly to Intel's version 20170629 (http://lists.dragonflybsd.org/pipermail/commits/2017-July/625997.html) Dport, Rust, and updates for DragonFlyBSD (https://www.dragonflydigest.com/2017/07/18/19991.html) OPNsense 17.7 RC1 released (https://opnsense.org/opnsense-17-7-rc1/) Unix's mysterious && and || (http://www.networkworld.com/article/3205148/linux/unix-s-mysterious-andand-and.html#tk.rss_unixasasecondlanguage) The Commute Deck : A Homebrew Unix terminal for tight places (http://boingboing.net/2017/06/16/cyberspace-is-everting.html) FreeBSD 11.1-RC3 now available (https://lists.freebsd.org/pipermail/freebsd-stable/2017-July/087407.html) Installing DragonFlyBSD with ORCA when you're totally blind
(http://lists.dragonflybsd.org/pipermail/users/2017-July/313528.html) Who says FreeBSD can't look good (http://imgur.com/gallery/dc1pu) Pratik Vyas adds the ability to do paused VM migrations for VMM (http://undeadly.org/cgi?action=article&sid=20170716160129) Feedback/Questions Hrvoje - OpenBSD MP Networking (http://dpaste.com/0EXV173#wrap) Goran - debuggers (http://dpaste.com/1N853NG#wrap) Abhinav - man-k (http://dpaste.com/1JXQY5E#wrap) Liam - university setup (http://dpaste.com/01ERMEQ#wrap)
This week on BSD Now we cover the latest FreeBSD Status Report, a plan for Open Source software development, centrally managing bhyve with Ansible, libvirt, and pkg-ssh, and a whole lot more. This episode was brought to you by Headlines FreeBSD Project Status Report (January to March 2017) (https://www.freebsd.org/news/status/report-2017-01-2017-03.html) While a few of these projects indicate they are a "plan B" or an "attempt III", many are still hewing to their original plans, and all have produced impressive results. Please enjoy this vibrant collection of reports, covering the first quarter of 2017. The quarterly report opens with notes from Core, The FreeBSD Foundation, the Ports team, and Release Engineering On the project front, the Ceph on FreeBSD project has made considerable advances, and is now usable as the net/ceph-devel port via the ceph-fuse module. Eventually they hope to have a kernel RADOS block device driver, so fuse is not required CloudABI update, including news that the Bitcoin reference implementation is working on a port to CloudABI eMMC Flash and SD card updates, allowing higher speeds (max speed changes from ~40 to ~80 MB/sec). As well, the MMC stack can now also be backed by the CAM framework. Improvements to the Linuxulator More detail on the pNFS Server plan B that we discussed in a previous week Snow B.V. is sponsoring a Dutch translation of the FreeBSD Handbook using the new .po system *** A plan for open source software maintainers (http://www.daemonology.net/blog/2017-05-11-plan-for-foss-maintainers.html) Colin Percival describes in his blog “a plan for open source software maintainers”: I've been writing open source software for about 15 years now; while I'm still wet behind the ears compared to FreeBSD greybeards like Kirk McKusick and Poul-Henning Kamp, I've been around for long enough to start noticing some patterns. In particular: Free software is expensive.
Software is expensive to begin with; but good quality open source software tends to be written by people who are recognized as experts in their fields (partly thanks to that very software) and can demand commensurate salaries. While that expensive developer time is donated (either by the developers themselves or by their employers), this influences what their time is used for: Individual developers like doing things which are fun or high-status, while companies usually pay developers to work specifically on the features those companies need. Maintaining existing code is important, but it is neither fun nor high-status; and it tends to get underweighted by companies as well, since maintenance is inherently unlikely to be the most urgent issue at any given time. Open source software is largely a "throw code over the fence and walk away" exercise. Over the past 15 years I've written freebsd-update, bsdiff, portsnap, scrypt, spiped, and kivaloo, and done a lot of work on the FreeBSD/EC2 platform. Of these, I know bsdiff and scrypt are very widely used and I suspect that kivaloo is not; but beyond that I have very little knowledge of how widely or where my work is being used. Anecdotally it seems that other developers are in similar positions: At conferences I've heard variations on "you're using my code? Wow, that's awesome; I had no idea" many times. I have even less knowledge of what people are doing with my work or what problems or limitations they're running into. Occasionally I get bug reports or feature requests; but I know I only hear from a very small proportion of the users of my work. I have a long list of feature ideas which are sitting in limbo simply because I don't know if anyone would ever use them — I suspect the answer is yes, but I'm not going to spend time implementing these until I have some confirmation of that. A lot of mid-size companies would like to be able to pay for support for the software they're using, but can't find anyone to provide it. 
For larger companies, it's often easier — they can simply hire the author of the software (and many developers who do ongoing maintenance work on open source software were in fact hired for this sort of "in-house expertise" role) — but there's very little available for a company which needs a few minutes per month of expertise. In many cases, the best support they can find is sending an email to the developer of the software they're using and not paying anything at all — we've all received "can you help me figure out how to use this" emails, and most of us are happy to help when we have time — but relying on developer generosity is not a good long-term solution. Every few months, I receive email from people asking if there's any way for them to support my open source software contributions. (Usually I encourage them to donate to the FreeBSD Foundation.) Conversely, there are developers whose work I would like to support (e.g., people working on FreeBSD wifi and video drivers), but there isn't any straightforward way to do this. Patreon has demonstrated that there are a lot of people willing to pay to support what they see as worthwhile work, even if they don't get anything directly in exchange for their patronage. It seems to me that this is a case where problems are in fact solutions to other problems. To wit: Users of open source software want to be able to get help with their use cases; developers of open source software want to know how people are using their code. Users of open source software want to support the work they use; developers of open source software want to know which projects users care about. Users of open source software want specific improvements; developers of open source software may be interested in making those specific changes, but don't want to spend the time until they know someone would use them.
Users of open source software have money; developers of open source software get day jobs writing other code because nobody is paying them to maintain their open source software. I'd like to see this situation get fixed. As I envision it, a solution would look something like a cross between Patreon and Bugzilla: Users would be able to sign up to "support" projects of their choosing, with a number of dollars per month (possibly arbitrary amounts, possibly specified tiers; maybe including $0/month), and would be able to open issues. These could be private (e.g., for "technical support" requests) or public (e.g., for bugs and feature requests); users would be able to indicate their interest in public issues created by other users. Developers would get to see the open issues, along with a nominal "value" computed based on allocating the incoming dollars of "support contracts" across the issues each user has expressed an interest in, allowing them to focus on issues with higher impact. He poses three questions about whether people (users and software developers alike) would be interested in this, and whether paying or being paid, respectively, appeals to them Check out the comments (and those on https://news.ycombinator.com/item?id=14313804 (Hacker News)) as well for some suggestions and discussion on the topic *** OpenBSD vmm hypervisor: Part 2 (http://www.h-i-r.net/2017/04/openbsd-vmm-hypervisor-part-2.html) We asked for people to write up their experience using OpenBSD's VMM. This blog post is just that This is going to be a (likely long-running, infrequently-appended) series of posts as I poke around in vmm. A few months ago, I demonstrated some basic use of the vmm hypervisor as it existed in OpenBSD 6.0-CURRENT around late October, 2016. We'll call that video Part 1. Quite a bit of development was done on vmm before 6.1-RELEASE, and it's worth noting that some new features made their way in.
Work continues, of course, and I can only imagine the hypervisor technology will mature plenty for the next release. As it stands, this is the first release of OpenBSD with a native hypervisor shipped in the base install, and that's exciting news in and of itself To get our virtual machines onto the network, we have to spend some time setting up a virtual ethernet interface. We'll run a DHCP server on that, and it'll be the default route for our virtual machines. We'll keep all the VMs on a private network segment, and use NAT to allow them to get to the network. There is a way to directly bridge VMs to the network in some situations, but I won't be covering that today. Create an empty disk image for your new VM. I'd recommend 1.5GB to play with at first. You can do this without doas or root if you want your user account to be able to start the VM later. I made a "vmm" directory inside my home directory to store VM disk images in. You might have a different partition you wish to store these large files in. Boot up a brand new vm instance. You'll have to do this as root or with doas. You can download a -CURRENT install kernel/ramdisk (bsd.rd) from an OpenBSD mirror, or you can simply use the one that's on your existing system (/bsd.rd) like I'll do here. The command will start a VM named "test.vm", display the console at startup, use /bsd.rd (from our host environment) as the boot image, allocate 256MB of memory, attach the first network interface to the switch called "local" we defined earlier in /etc/vm.conf, and use the test image we just created as the first disk drive. Now that the VM disk image file has a full installation of OpenBSD on it, build a VM configuration around it by adding the below block of configuration (with modifications as needed for owner, path and lladdr) to /etc/vm.conf I've noticed that VMs with much less than 256MB of RAM allocated tend to be a little unstable for me. 
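The workflow just described looks roughly like the sketch below. This follows the blog's example names and sizes, but the vmctl flag spelling is from the 6.1-era manpages and may differ in later releases; the owner name and lladdr are placeholders:

```
# create an empty 1.5GB disk image -- no root needed if you own the directory
mkdir -p ~/vmm
vmctl create ~/vmm/test.img -s 1.5G

# boot the installer: console attached, /bsd.rd from the host as the kernel,
# 256MB RAM, NIC on the "local" switch from /etc/vm.conf, our image as disk 0
doas vmctl start "test.vm" -c -b /bsd.rd -m 256M -n local -d ~/vmm/test.img
```

Afterwards, a /etc/vm.conf block along these lines lets the owner start the VM without root:

```
vm "test.vm" {
        disable
        owner youruser
        memory 256M
        disk "/home/youruser/vmm/test.img"
        interface {
                switch "local"
                lladdr fe:e1:bb:d1:23:45
        }
}
```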
You'll also note that in the "interface" clause, I hard-coded the lladdr that was generated for it earlier. By specifying "disable" in vm.conf, the VM will show up in a stopped state that the owner of the VM (that's you!) can manually start without root access. Let us know how VMM works for you *** News Roundup openbsd changes of note 621 (http://www.tedunangst.com/flak/post/openbsd-changes-of-note-621) More stuff, more fun. Fix script to not perform tty operations on things that aren't ttys. Detected by pledge. Merge libdrm 2.4.79. After a forced unmount, also unmount any filesystems below that mount point. Flip previously warm pages in the buffer cache to memory above the DMA region if uvm tells us it is available. Pages are not automatically promoted to upper memory. Instead it's used as additional memory only for what the cache considers long term buffers. I/O still requires DMA memory, so writing to a buffer will pull it back down. Makefile support for systems with both gcc and clang. Make i386 and amd64 so. Take a more radical approach to disabling colours in clang. When the data buffered for write in tmux exceeds a limit, discard it and redraw. Helps when a fast process is running inside tmux running inside a slow terminal. Add a port of witness(4) lock validation tool from FreeBSD. Use it with mplock, rwlock, and mutex in the kernel. Properly save and restore FPU context in vmm. Remove KGDB. It neither compiles nor works. Add a constant time AES implementation, from BearSSL. Remove SSHv1 from ssh. and more... *** Digging into BSD's choice of Unix group for new directories and files (https://utcc.utoronto.ca/~cks/space/blog/unix/BSDDirectoryGroupChoice) I have to eat some humble pie here. In comments on my entry on an interesting chmod failure, Greg A. 
Woods pointed out that FreeBSD's behavior of creating everything inside a directory with the group of the directory is actually traditional BSD behavior (it dates all the way back to the 1980s), not some odd new invention by FreeBSD. As traditional behavior it makes sense that it's explicitly allowed by the standards, but I've also come to think that it makes sense in context and in general. To see this, we need some background about the problem facing BSD. In the beginning, two things were true in Unix: there was no mkdir() system call, and processes could only be in one group at a time. With processes being in only one group, the choice of the group for a newly created filesystem object was easy; it was your current group. This was felt to be sufficiently obvious behavior that the V7 creat(2) manpage doesn't even mention it. Now things get interesting. 4.1c BSD seems to be where mkdir(2) is introduced and where creat() stops being a system call and becomes an option to open(2). It's also where processes can be in multiple groups for the first time. The 4.1c BSD open(2) manpage is silent about the group of newly created files, while the mkdir(2) manpage specifically claims that new directories will have your effective group (ie, the V7 behavior). This is actually wrong. In both mkdir() in sys_directory.c and maknode() in ufs_syscalls.c, the group of the newly created object is set to the group of the parent directory. Then finally in the 4.2 BSD mkdir(2) manpage the group of the new directory is correctly documented (the 4.2 BSD open(2) manpage continues to say nothing about this). So BSD's traditional behavior was introduced at the same time as processes being in multiple groups, and we can guess that it was introduced as part of that change. When your process can only be in a single group, as in V7, it makes perfect sense to create new filesystem objects with that as their group.
It's basically the same case as making new filesystem objects be owned by you; just as they get your UID, they also get your GID. When your process can be in multiple groups, things get less clear. A filesystem object can only be in one group, so which of your several groups should a new filesystem object be owned by, and how can you most conveniently change that choice? One option is to have some notion of a 'primary group' and then provide ways to shuffle around which of your groups is the primary group. Another option is the BSD choice of inheriting the group from context. By far the most common case is that you want your new files and directories to be created in the 'context', ie the group, of the surrounding directory. If you fully embrace the idea of Unix processes being in multiple groups, not just having one primary group and then some number of secondary groups, then the BSD choice makes a lot of sense. And for all of its faults, BSD tended to relatively fully embrace its changes. While it leads to some odd issues, such as the one I ran into, pretty much any choice here is going to have some oddities. Centrally managed Bhyve infrastructure with Ansible, libvirt and pkg-ssh (http://www.shellguardians.com/2017/05/centrally-managed-bhyve-infrastructure.html) At work we've been using Bhyve for a while to run non-critical systems. It is a really nice and stable hypervisor even though we are using an earlier version available on FreeBSD 10.3. This means we lack Windows and VNC support among other things, but it is not a big deal. After some iterations in our internal tools, we realised that the installation process was too slow and we always repeated the same steps. Of course, any good sysadmin will scream "AUTOMATION!" and so did we. Therefore, we started looking for different ways to improve our deployments.
We had a look at existing frameworks that manage Bhyve, but none of them had a feature that we find really important: having a centralized repository of VM images. For instance, SmartOS applies this method successfully by having a backend server that stores a catalog of VMs and Zones, meaning that new instances can be deployed in a minute at most. This is a game changer if you are really busy in your day-to-day operations. The following building blocks are used: The ZFS snapshot of an existing VM. This will be our VM template. A modified version of oneoff-pkg-create to package the ZFS snapshots. pkg-ssh and pkg-repo to host a local FreeBSD repo in a FreeBSD jail. libvirt to manage our Bhyve VMs. The ansible modules virt, virt_net and virt_pool. Once automated, the installation process needs 2 minutes at most, compared with the 30 minutes needed to manually install a VM, while also allowing us to deploy many guests in parallel. NetBSD maintainer in the QEMU project (https://blog.netbsd.org/tnf/entry/netbsd_maintainer_in_the_qemu) QEMU - the FAST! processor emulator - is a generic, Open Source, machine emulator and virtualizer. It defines the state of the art in modern virtualization. This software has been developed for multiplatform environments with support for NetBSD since virtually forever. It's the primary tool used by the NetBSD developers and release engineering team. It is run with continuous integration tests for daily commits and executes regression tests through the Automatic Test Framework (ATF). The QEMU developers warned the Open Source community - with version 2.9 of the emulator - that they will eventually drop support for suboptimally supported hosts if nobody steps in and takes maintainership to refresh the support. This warning was directed at the major BSDs, Solaris, AIX and Haiku. Thankfully the NetBSD position has been filled, restoring official maintenance status for NetBSD.
Beastie Bits OpenBSD Community Goes Gold (http://undeadly.org/cgi?action=article&sid=20170510012526&mode=flat&count=0) CharmBUG's Tor Hack-a-thon has been pushed back to July due to scheduling difficulties (https://www.meetup.com/CharmBUG/events/238218840/) Direct Rendering Manager (DRM) Driver for i915, from the Linux kernel to Haiku with the help of DragonflyBSD's Linux Compatibility layer (https://www.haiku-os.org/blog/vivek/2017-05-05_[gsoc_2017]_3d_hardware_acceleration_in_haiku/) TomTom lists OpenBSD in license (https://twitter.com/bsdlme/status/863488045449977864) London NetBSD Meetup on May 22nd (https://mail-index.netbsd.org/regional-london/2017/05/02/msg000571.html) KnoxBUG meeting May 30th, 2017 - Introduction to FreeNAS (http://knoxbug.org/2017-05-30) *** Feedback/Questions Felix - Home Firewall (http://dpaste.com/35EWVGZ#wrap) David - Docker Recipes for Jails (http://dpaste.com/0H51NX2#wrap) Don - GoLang & Rust (http://dpaste.com/2VZ7S8K#wrap) George - OGG feed (http://dpaste.com/2A1FZF3#wrap) Roller - BSDCan Tips (http://dpaste.com/3D2B6J3#wrap) ***
This week on the show, we've got all sorts of goodies to discuss. Starting with, vmm, vkernels, raspberry pi and much more! Some iX folks are visiting from out of town. This episode was brought to you by Headlines vmm enabled (http://undeadly.org/cgi?action=article&sid=20161012092516&mode=flat&count=15) VMM, the OpenBSD hypervisor, has been imported into current It has similar hardware requirements to bhyve: an Intel Nehalem or newer CPU with the hardware virtualization features enabled in the BIOS AMD support has not been started yet OpenBSD is the only supported guest It would be interesting to hear from viewers who have tried it: how does it do, and what still needs more work *** vkernels go COW (http://lists.dragonflybsd.org/pipermail/commits/2016-October/624675.html) The DragonflyBSD feature, vkernels, has gained new Copy-On-Write functionality Disk images can now be mounted RO or RW, but changes will not be written back to the image file This allows multiple vkernels to share the same disk image “Note that when the vkernel operates on an image in this mode, modifications will eat up system memory and swap, so the user should be cognizant of the use-case. Still, the flexibility of being able to mount the image R+W should not be underestimated.” This is another feature we'd love to hear from viewers who have tried it out. *** Basic support for the RPI3 has landed in FreeBSD-CURRENT (https://wiki.freebsd.org/arm64/rpi3) The long awaited bits to allow FreeBSD to boot on the Raspberry Pi 3 have landed There is still a bit of work to be done, some of it mentioned in Oleksandr's blog post: Raspberry Pi support in HEAD (https://kernelnomicon.org/?p=690) “Raspberry Pi 3 limited support was committed to HEAD. Most drivers should work with the upstream dtb; RNG requires attention because callout mode seems to be broken and there is no IRQ in the upstream device tree file. SMP is work in progress.
There are some compatibility issues with the VCHIQ driver due to some assumptions that are true only for the ARM platform.” This is exciting work. No HDMI support (yet), so if you plan on trying this out make sure you have your USB->Serial adapter cables ready to go. Full instructions to get started with your RPI 3 can be found on the FreeBSD Wiki (https://wiki.freebsd.org/arm64/rpi3) Relatively soon, I imagine there will be a RaspBSD build for the RPI3 to make it easier to get started Eventually there will be official FreeBSD images as well *** OpenBSD switches softraid crypto from PKCS5 PBKDF2 to bcrypt PBKDF. (https://github.com/openbsd/src/commit/2ba69c71e92471fe05f305bfa35aeac543ebec1f) After the discussion a few weeks ago when a user wrote a tool to brute force their forgotten OpenBSD Full Disk Encryption password (from a password list of possible variations of their password), it was discovered that OpenBSD defaulted to using just 8192 iterations of PKCS #5 PBKDF2 with SHA1-HMAC as the key derivation function The number of iterations can be manually controlled by the user when creating the softraid volume By comparison, FreeBSD's GELI full disk encryption used a benchmark to pick a number of iterations that would take more than 2 seconds to complete, generally resulting in a number of iterations over 1 million on most modern hardware. The algorithm is based on a SHA512-HMAC However, inefficiency in the implementation of PBKDF2 in GELI resulted in the implementation being 50% slower than some other implementations, meaning the effective security was only about 1 second per attempt, rather than the intended 2 seconds. The improved PBKDF2 implementation is out for review currently. This commit to OpenBSD changes the default key derivation function to be based on bcrypt and a SHA512-HMAC instead.
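For context, the iteration count mentioned above is what bioctl(8) lets you set with -r when creating a CRYPTO softraid volume. A hedged sketch (the chunk device name and round count are placeholder example values):

```
# create an encrypted softraid volume with an explicit number of KDF rounds
bioctl -c C -r 16384 -l /dev/sd1a softraid0
```

With the commit discussed here, omitting -r means the KDF and round count are chosen for you (benchmarked bcrypt PBKDF rather than the old fixed 8192 PBKDF2 iterations).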
OpenBSD also now uses a benchmark to pick a number of iterations that will take approximately 1 second per attempt “One weakness of PBKDF2 is that while its number of iterations can be adjusted to make it take an arbitrarily large amount of computing time, it can be implemented with a small circuit and very little RAM, which makes brute-force attacks using application-specific integrated circuits or graphics processing units relatively cheap. The bcrypt key derivation function requires a larger amount of RAM (but still not tunable separately, i. e. fixed for a given amount of CPU time) and is slightly stronger against such attacks, while the more modern scrypt key derivation function can use arbitrarily large amounts of memory and is therefore more resistant to ASIC and GPU attacks.” The upgrade to bcrypt, which has proven to be quite resistant to cracking by GPUs, is a significant enhancement to OpenBSD's encrypted softraid feature *** Interview - Josh Paetzel - email@email (mailto:email@email) / @bsdunix4ever (https://twitter.com/bsdunix4ever) MeetBSD ZFS Panel FreeNAS - graceful network reload Pxeboot *** News Roundup EC2's most dangerous feature (http://www.daemonology.net/blog/2016-10-09-EC2s-most-dangerous-feature.html) Colin Percival, FreeBSD's unofficial EC2 maintainer, has published a blog post about “EC2's most dangerous feature” “As a FreeBSD developer — and someone who writes in C — I believe strongly in the idea of "tools, not policy". If you want to shoot yourself in the foot, I'll help you deliver the bullet to your foot as efficiently and reliably as possible. UNIX has always been built around the idea that systems administrators are better equipped to figure out what they want than the developers of the OS, and it's almost impossible to prevent foot-shooting without also limiting useful functionality.
The most powerful tools are inevitably dangerous, and often the best solution is to simply ensure that they come with sufficient warning labels attached; but occasionally I see tools which not only lack important warning labels, but are also designed in a way which makes them far more dangerous than necessary. Such a case is IAM Roles for Amazon EC2.” “A review for readers unfamiliar with this feature: Amazon IAM (Identity and Access Management) is a service which allows for the creation of access credentials which are limited in scope; for example, you can have keys which can read objects from Amazon S3 but cannot write any objects. IAM Roles for EC2 are a mechanism for automatically creating such credentials and distributing them to EC2 instances; you specify a policy and launch an EC2 instance with that Role attached, and magic happens making time-limited credentials available via the EC2 instance metadata. This simplifies the task of creating and distributing credentials and is very convenient; I use it in my FreeBSD AMI Builder AMI, for example. Despite being convenient, there are two rather scary problems with this feature which severely limit the situations where I'd recommend using it.” “The first problem is one of configuration: The language used to specify IAM Policies is not sufficient to allow for EC2 instances to be properly limited in their powers. For example, suppose you want to allow EC2 instances to create, attach, detach, and delete Elastic Block Store volumes automatically — useful if you want to have filesystems automatically scaling up and down depending on the amount of data which they contain. 
The obvious way to do this would be to "tag" the volumes belonging to an EC2 instance and provide a Role which can only act on volumes tagged to the instance where the Role was provided; while the second part of this (limiting actions to tagged volumes) seems to be possible, there is no way to require specific API call parameters on all permitted CreateVolume calls, as would be necessary to require that a tag is applied to any new volumes being created by the instance.” “As problematic as the configuration is, a far larger problem with IAM Roles for Amazon EC2 is access control — or, to be more precise, the lack thereof. As I mentioned earlier, IAM Role credentials are exposed to EC2 instances via the EC2 instance metadata system: In other words, they're available from http://169.254.169.254/. (I presume that the "EC2ws" HTTP server which responds is running in another Xen domain on the same physical hardware, but that implementation detail is unimportant.) This makes the credentials easy for programs to obtain... unfortunately, too easy for programs to obtain. UNIX is designed as a multi-user operating system, with multiple users and groups and permission flags and often even more sophisticated ACLs — but there are very few systems which control the ability to make outgoing HTTP requests. We write software which relies on privilege separation to reduce the likelihood that a bug will result in a full system compromise; but if a process which is running as user nobody and chrooted into /var/empty is still able to fetch AWS keys which can read every one of the objects you have stored in S3, do you really have any meaningful privilege separation? To borrow a phrase from Ted Unangst, the way that IAM Roles expose credentials to EC2 instances makes them a very effective exploit mitigation mitigation technique.” “To make it worse, exposing credentials — and other metadata, for that matter — via HTTP is completely unnecessary.
EC2 runs on Xen, which already has a perfectly good key-value data store for conveying metadata between the host and guest instances. It would be absolutely trivial for Amazon to place EC2 metadata, including IAM credentials, into XenStore; and almost as trivial for EC2 instances to expose XenStore as a filesystem to which standard UNIX permissions could be applied, providing IAM Role credentials with the full range of access control functionality which UNIX affords to files stored on disk. Of course, there is a lot of code out there which relies on fetching EC2 instance metadata over HTTP, and trivial or not it would still take time to write code for pushing EC2 metadata into XenStore and exposing it via a filesystem inside instances; so even if someone at AWS reads this blog post and immediately says "hey, we should fix this", I'm sure we'll be stuck with the problems in IAM Roles for years to come.” “So consider this a warning label: IAM Roles for EC2 may seem like a gun which you can use to efficiently and reliably shoot yourself in the foot; but in fact it's more like a gun which is difficult to aim and might be fired by someone on the other side of the room snapping his fingers. Handle with care!” *** Open-source storage that doesn't suck? Our man tries to break TrueNAS (http://www.theregister.co.uk/2016/10/18/truenas_review/) The storage reviewer over at TheRegister got their hands on a TrueNAS and gave it a try “Data storage is difficult, and ZFS-based storage doubly so. There's a lot of money to be made if you can do storage right, so it's uncommon to see a storage company with an open-source model deliver storage that doesn't suck.” “To become TrueNAS, FreeNAS's code is feature-frozen and tested rigorously. Bleeding-edge development continues with FreeNAS, and FreeNAS comes with far fewer guarantees than does TrueNAS.” “iXsystems provided a Z20 hybrid storage array. The Z20 is a dual-controller, SAS-based, high-availability, hybrid storage array. 
The testing unit came with a 2x 10GbE NIC per controller and retails around US$24k. The unit shipped with 10x 300GB 10k RPM magnetic hard drives, an 8GB ZIL SSD and a 200GB L2ARC SSD. 50GiB of RAM was dedicated to the ARC by the system's autotune feature.” The review tests the performance of the TrueNAS, which they found acceptable for spinning rust, but they also tested the HA features. While the look of the UI didn't impress them, the functionality and built-in help did. “The UI contains truly excellent mouseover tooltips that provide detailed information and rationale for almost every setting. An experienced sysadmin will be able to navigate the TrueNAS UI with ease. An experienced storage admin who knows what all the terms mean won't have to refer to a wiki or the more traditional help manual, but the same can't be said for the uninitiated.” “After a lot of testing, I'd trust my data to the TrueNAS. I am convinced that it will ensure the availability of my data to within any reasonable test, and do so as a high availability solution. That's more than I can say for a lot of storage out there.” “iXsystems produce a storage array that is decent enough to entice away some existing users of the likes of EMC, NetApp, Dell or HP. Honestly, that's not something I thought possible going into this review. It's a nice surprise.” *** OpenBSD now officially on GitHub (https://github.com/openbsd) Got a couple of new OpenBSD items to bring to your attention today. First up, for those who didn't know, OpenBSD development has (always?) taken place in CVS, similar to NetBSD and previously FreeBSD. However, today Git fans can rejoice, since there is now an “official” read-only GitHub mirror of their sources for public consumption. Since this is read-only, I will assume (unless told otherwise) that pull requests and whatnot aren't taken. But this will come in handy for the “git-enabled” among us who need an easier way to check out OpenBSD sources.
There is also not yet a guarantee about the stability of the exporter. If you base a fork on the GitHub branch, and something goes wrong with the exporter, the data may be re-exported with different hashes, making it difficult to rebase your fork. How to install LibertyBSD or OpenBSD on a libreboot system (https://libreboot.org/docs/bsd/openbsd.html) For the second part of our OpenBSD stories, we have a pretty detailed document posted over at LibreBoot.org with details on how to bootstrap OpenBSD (or LibertyBSD) using their open-source BIOS replacement. We've covered blog posts and other tidbits about this process in the past, but this seems to be the definitive version (so far) to reference. Some of the niceties include instructions on getting the USB image formatted not just on OpenBSD, but also on FreeBSD, Linux and NetBSD. Instructions on how to boot without full-disk encryption are provided, with a mention that Libreboot + GRUB does not support FDE (yet). I would imagine somebody will need to port over the OpenBSD FDE crypto support to GRUB, as was done with GELI at some point. Lastly, some instructions on how to configure GRUB and troubleshoot if something goes wrong will help round out this story. Give it a whirl, let us know if you run into issues. Editorial Aside - Personally, I find the libreboot stuff fascinating. It really is one of the last areas where we don't have full open-source control of our systems. With the growth of EFI, it seems we rely on a closed-source binary / mini-OS of sorts just to boot our open-source solutions, which needs to be addressed. Hats off to the LibreBoot folks for taking on this important challenge.
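For the curious, the USB image step mentioned above typically boils down to a single dd; a sketch only, as it might look on an OpenBSD host (the image name install60.fs and the device sd1 are assumptions, so follow the Libreboot document for the authoritative, per-OS commands):

```sh
# Write the install image to a USB stick from an OpenBSD host.
# /dev/rsd1c is an assumption: check dmesg for your actual USB device,
# and triple-check it, since dd will happily overwrite the wrong disk.
dd if=install60.fs of=/dev/rsd1c bs=1m
```

On FreeBSD or Linux the idea is the same; only the device naming (e.g. /dev/da0 or /dev/sdb) changes.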
*** FreeNAS 9.10 – LAGG & VLAN Overview (https://www.youtube.com/watch?v=wqSH_uQSArQ) A video tutorial on FreeNAS's official YouTube Channel. Covers the advanced networking features, Link Aggregation and VLANs. Covers what the features do, and in the case of LAGG, how each of the modes works and when you might want to use it. *** Beastie Bits Remote BSD Developer Position is up for grabs (https://www.cybercoders.com/bsd-developer-remote-job-305206) Isilon is hiring for a FreeBSD Security position (https://twitter.com/jeamland/status/785965716717441024) Google has ported the Networked real-time multi-player BSD game (https://github.com/google/web-bsd-hunt) A bunch of OpenBSD Tips (http://www.vincentdelft.be) The last OpenBSD 6.0 Limited Edition CD has sold (http://www.ebay.com/itm/-/332000602939) Dan spots George Neville-Neil on TV at the Airport (https://twitter.com/DLangille/status/788477000876892162) gnn on CNN (https://www.youtube.com/watch?v=h7zlxgtBA6o) SoloBSD releases v 6.0 built upon OpenBSD (http://solobsd.blogspot.com/2016/10/release-solobsd-60-openbsd-edition.html) Upcoming KnoxBug looks at PacBSD - Oct 25th (http://knoxbug.org/content/2016-10-25) Feedback/Questions Morgan - Ports and Packages (http://pastebin.com/Kr9ykKTu) Mat - ZFS Memory (http://pastebin.com/EwpTpp6D) Thomas - FreeBSD Path Length (http://pastebin.com/HYMPtfjz) Cy - OpenBSD and NetHogs (http://pastebin.com/vGxZHMWE) Lars - Editors (http://pastebin.com/5FMz116T) ***
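The LAGG modes discussed in the FreeNAS video earlier map onto FreeBSD's standard lagg(4) and vlan(4) interfaces; a minimal sketch of the kind of thing FreeNAS configures under the hood (the interface names em0/em1 and the VLAN tag 10 are assumptions, not taken from the video):

```sh
# Bundle em0 and em1 into one LACP aggregate (needs a LACP-capable switch)
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport em0 laggport em1 192.0.2.10/24

# Layer a tagged VLAN (ID 10) on top of the aggregate
ifconfig vlan10 create vlan 10 vlandev lagg0
ifconfig vlan10 inet 192.0.2.20/24
```

Other laggproto values (failover, loadbalance, roundrobin) correspond to the modes the video walks through.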
It's already our two-year anniversary! This time on the show, we'll be chatting with Scott Courtney, vice president of infrastructure engineering at Verisign, about this year's vBSDCon. What's it have to offer in an already-crowded BSD conference space? We'll find out. This episode was brought to you by Headlines OpenBSD hypervisor coming soon (https://www.marc.info/?l=openbsd-tech&m=144104398132541&w=2) Our buddy Mike Larkin never rests, and he posted some very tight-lipped console output (http://pastebin.com/raw.php?i=F2Qbgdde) on Twitter recently. From what little he revealed at the time (https://twitter.com/mlarkin2012/status/638265767864070144), it appeared to be a new hypervisor (https://en.wikipedia.org/wiki/Hypervisor) (that is, x86 hardware virtualization) running on OpenBSD -current, tentatively titled "vmm". Later on, he provided a much longer explanation on the mailing list, detailing a bit about what the overall plan for the code is. Originally started around the time of the Australia hackathon, the work has since picked up more steam, and has gotten a funding boost from the OpenBSD Foundation. One thing to note: this isn't just a port of something like Xen or Bhyve; it's all-new code, and Mike explains why he chose to go that route. He also answered some basic questions about the requirements, when it'll be available, what OSes it can run, what's left to do, how to get involved and so on. *** Why FreeBSD should not adopt launchd (http://blog.darknedgy.net/technology/2015/08/26/0/) Last week (http://www.bsdnow.tv/episodes/2015_08_26-beverly_hills_25519) we mentioned a talk Jordan Hubbard gave about integrating various parts of Mac OS X into FreeBSD. One of the changes, perhaps the most controversial item on the list, was the adoption of launchd to replace the init system (replacing init systems seems to cause backlash, we've learned). In this article, the author talks about why he thinks this is a bad idea. He doesn't oppose the integration into FreeBSD-derived
projects, like FreeNAS and PC-BSD, only vanilla FreeBSD itself - this is also explained in more detail. The post includes both high-level descriptions and low-level technical details, and provides an interesting outlook on the situation and possibilities. Reddit had quite a bit (https://www.reddit.com/r/BSD/comments/3ilhpk) to say (https://www.reddit.com/r/freebsd/comments/3ilj4i) about this one, some in agreement and some not. *** DragonFly graphics improvements (http://lists.dragonflybsd.org/pipermail/commits/2015-August/458108.html) The DragonFlyBSD guys are at it again, merging newer support and fixes into their i915 (Intel) graphics stack. This latest update brings them in sync with Linux 3.17, and includes Haswell fixes, DisplayPort fixes, improvements for Broadwell and even Cherryview GPUs. You should also see some power management improvements, longer battery life and various other bug fixes. If you're running DragonFly, especially on a laptop, you'll want to get this stuff on your machine quick - big improvements all around. *** OpenBSD tames the userland (https://www.marc.info/?l=openbsd-tech&m=144070638327053&w=2) Last week we mentioned OpenBSD's tame framework getting support for file whitelists, and said that the userland integration was next - well, now here we are. Theo posted a mega diff of nearly 100 smaller diffs, adding tame support to many areas of the userland tools. It's still a work-in-progress version; there's still more to be added (including the file path whitelist stuff). Some classic utilities are even being reworked to make taming them easier - the "w" command (https://www.marc.info/?l=openbsd-cvs&m=144103945031253&w=2), for example. The diff provides some good insight on exactly how to restrict different types of utilities, as well as how easy it is to actually do so (and en masse). More discussion can be found on HN (https://news.ycombinator.com/item?id=10135901), as one might expect. If you're a software developer, and especially if your software
is in ports already, consider adding some more fine-grained tame support in your next release. *** Interview - Scott Courtney - vbsdcon@verisign.com (mailto:vbsdcon@verisign.com) / @verisign (https://twitter.com/verisign) vBSDCon (http://vbsdcon.com/) 2015 News Roundup OPNsense, beyond the fork (https://opnsense.org/opnsense-beyond-the-fork) We first heard about (http://www.bsdnow.tv/episodes/2015_01_14-common_sense_approach) OPNsense back in January, and they've since released nearly 40 versions, spanning over 5,000 commits. This is their first big status update, covering some of the things that've happened since the project was born. There's been a lot of community growth and participation, mass bug fixing, new features added, experimental builds with ASLR and much more - the report touches on a little of everything. *** LibreSSL nukes SSLv3 (http://undeadly.org/cgi?action=article&sid=20150827112006) With their latest release, LibreSSL began to turn off SSLv3 (http://disablessl3.com) support, starting with the "openssl" command. At the time, SSLv3 wasn't disabled entirely because some things in the OpenBSD ports tree required it (Apache being one odd example). They've now flipped the switch, and the process of complete removal has started. From the Undeadly summary, "This is an important step for the security of the LibreSSL library and, by extension, the ports tree. It does, however, require lots of testing of the resulting packages, as some of the fallout may be at runtime (so not detected during the build). That is part of why this is committed at this point during the release cycle: it gives the community more time to test packages and report issues so that these can be fixed. When these fixes are then pushed upstream, the entire software ecosystem will benefit. In short: you know what to do!"
With this change and a few more to follow shortly, LibreSSL won't actually support SSL anymore - time to rename it "LibreTLS". *** FreeBSD MPTCP updated (http://caia.swin.edu.au/urp/newtcp/mptcp/tools/v05/mptcp-readme-v0.5.txt) For anyone unaware, Multipath TCP (https://en.wikipedia.org/wiki/Multipath_TCP) is "an ongoing effort of the Internet Engineering Task Force's (IETF) Multipath TCP working group, that aims at allowing a Transmission Control Protocol (TCP) connection to use multiple paths to maximize resource usage and increase redundancy." There's been work out of an Australian university to add support for it to the FreeBSD kernel, and the patchset was recently updated. Included in this latest version is an overview of the protocol, how to get it compiled in, current features and limitations, and some info about the routing requirements. Some big performance gains can be had with MPTCP, but only if both the client and server systems support it - getting it into the FreeBSD kernel would be a good start. *** UEFI and GPT in OpenBSD (https://www.marc.info/?l=openbsd-cvs&m=144092912907778&w=2) There hasn't been much fanfare about it yet, but some initial UEFI and GPT-related commits have been creeping into OpenBSD recently. Some support (https://github.com/yasuoka/openbsd-uefi) for UEFI booting has landed in the kernel, and more bits are being slowly enabled after review. This comes along with a number (https://www.marc.info/?l=openbsd-cvs&m=143732984925140&w=2) of (https://www.marc.info/?l=openbsd-cvs&m=144088136200753&w=2) other (https://www.marc.info/?l=openbsd-cvs&m=144046793225230&w=2) commits (https://www.marc.info/?l=openbsd-cvs&m=144045760723039&w=2) related to GPT, much of which is being refactored and slowly reintroduced. Currently, you have to do some disklabel wizardry to bypass the MBR limit and access more than 2TB of space on a single drive, but it should "just work" with GPT (once everything's in). The UEFI bootloader support has been committed
(https://www.marc.info/?l=openbsd-cvs&m=144115942223734&w=2), so stay tuned for more updates (http://undeadly.org/cgi?action=article&sid=20150902074526&mode=flat) as further (https://twitter.com/kotatsu_mi/status/638909417761562624) progress (https://twitter.com/yojiro/status/638189353601097728) is made *** Feedback/Questions John writes in (http://slexy.org/view/s2sIWfb3Qh) Mason writes in (http://slexy.org/view/s2Ybrx00KI) Earl writes in (http://slexy.org/view/s20FpmR7ZW) ***