POPULARITY
Happy 10th birthday, Raspberry Pi! The tiny computer has come a long way in just ten short years. It all started when Raspberry Pi Foundation founders Eben Upton and Rob Mullins set out to create an affordable, easy-to-use computer that students could use to learn coding. And they succeeded: Raspberry Pi has become one of the most popular computers in the world, with millions of units sold.

The Raspberry Pi Hardware

The first devices were not intended to be the massive platform they are today. Instead, the plan was simply to make a few thousand devices to encourage children to learn to code. Raspberry Pi devices were first sold in 2012, and the response was overwhelming. Not only did students love them, but makers and hobbyists snapped them up as well. It quickly became clear that there was a much larger market for the tiny computers than originally anticipated.

The Raspberry Pi Foundation has always been focused on education, and it continues to work with schools and organizations around the world to promote coding and computer science education. Beyond its educational initiatives, the Foundation has also developed several tools and resources that make it easier for makers of all levels to create amazing projects.

Over the years, Raspberry Pi has undergone several iterations, each one more powerful than the last. The original Model B was followed by the Model B+, the Raspberry Pi Zero, the A+ and other A-series boards, the Raspberry Pi Compute Module, and many more.

The Raspberry Pi 4 shows just how much Raspberry Pi has changed over the years. The original Model B had just 256MB of RAM and a 700MHz single-core processor. The latest Raspberry Pi 4 has a quad-core processor clocked at up to 1.5 GHz, as well as up to 8 GB of RAM.
It also features improved networking, with dual-band 802.11ac Wi-Fi and Bluetooth Low Energy (BLE) on board.

In addition to hardware changes, the Raspberry Pi Foundation has made several changes to the operating system over the years. The original Raspberry Pi devices ran a modified version of Debian Linux, but the Foundation later developed its own operating system, Raspbian. Raspbian is based on Debian and is optimized for the Raspberry Pi hardware. Since then, the platform has transitioned to Raspberry Pi OS, another Linux-based operating system.

The Raspberry Pi community

As amazing as all of the changes to Raspberry Pi have been, perhaps the most impressive thing about the tiny computer is the community that has grown up around it. There are now millions of Raspberry Pi devices in use all over the world, and there are countless projects and applications for them.

From small projects like retro gaming consoles and media centers to large-scale deployments like industrial control systems and weather stations, Raspberry Pi is being used for everything. The possibilities are truly endless, and the Raspberry Pi community continues to come up with new and innovative ways to use the tiny computers.

As Raspberry Pi celebrates its tenth birthday, it's clear that the best is yet to come. Thank you for being a part of this incredible journey, and we can't wait to see what the next ten years have in store for Raspberry Pi.
This week, Avram Piltch talks about some of the best and least-known aspects of the Raspberry Pi computer. The Raspberry Pi entered the market 8 years ago, but with a different purpose than most might think. It was originally intended for Cambridge University, with a planned production of about 1,000 units. Today, the brand has sold 31 million units, far more than the organization ever expected to produce.

In those 8 years, there have been at least 18 models made available, with at least one specially produced model. Across those models, the RAM has gone from 256MB on the original 1B to an optional 4GB on the 4B. The processing power has also increased significantly, from a single-core 700MHz processor on the original to the quad-core 1.5GHz processor on the current model.

Somehow, even with all of the processing power enhancements over the years, the Raspberry Pi has technically gotten less expensive. The selling price has remained $35, but when you compare the value of the dollar in 2012 versus 2020, the original model would have sold for almost $40 in today's dollars. That means we have gotten years' worth of hardware enhancements for fewer relative dollars than the original.

While the Raspberry Pi can be found in tons of applications, from web servers to robotics, there is one truly unique location for one of the computers: space. There are two "Astro Pis," which are specially modified Raspberry Pi B+ models (first generation). The computers had to be modified to deal with the oddities of space and to survive onboard the International Space Station. The European Space Agency runs contests that allow school children to have their code run on these computers.

There's a lot more to know about the Raspberry Pi, which can be found in Avram's article at Tom's Hardware.
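The "cheaper in real terms" claim above is easy to sanity-check. A minimal sketch, assuming a cumulative US CPI increase of roughly 13% between 2012 and 2020 (the exact factor depends on the index and months chosen):

```python
# Rough check of the "cheaper in real terms" claim above.
# ASSUMPTION: cumulative US CPI inflation from 2012 to 2020 of about 13%;
# the exact figure varies by source.
PRICE_2012 = 35.00
INFLATION_FACTOR = 1.13  # approximate, hypothetical

price_in_2020_dollars = PRICE_2012 * INFLATION_FACTOR
print(f"${PRICE_2012:.2f} in 2012 is roughly ${price_in_2020_dollars:.2f} in 2020 dollars")
```

That lands just under $40, matching the "almost $40 in today's dollars" figure in the text.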
In this episode of Intel Chip Chat, Jim Gordon, GM of Ecosystem Business Development, Strategy and Communication for Intel, joins us to talk about Intel's next-generation Intel® Xeon E processors and Intel® Software Guard Extensions (Intel® SGX). Jim talks about how Intel delivers the building blocks for security to CSPs, ISVs and other ecosystem partners, enabling them to improve the security capabilities they offer to their customers, both in hardware and software. Intel now has general availability of the new Intel® Xeon E-2200 processors for servers, with double the Intel SGX enclave size – now 256MB. Take a listen to learn how this is leading to even more possibilities for new security use cases and developer innovation in hardware-based security. To learn more about developing solutions with Intel SGX, visit https://software.intel.com/sgx.
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today we're going to look at the emergence of Google's Android operating system. Before we look at Android, let's look at what led to it. It starts with Frank Canova, who built a device he showed off as "Angler" at COMDEX in 1992. This would be released as the Simon Personal Communicator by BellSouth and manufactured as the IBM Simon by Mitsubishi. The Palm, Newton, Symbian, and Pocket PC (or Windows CE) would come out shortly thereafter and rise in popularity over the next few years. CDMA would slowly come down in cost over the next decade. Now let's jump to 2003. At the time, you had Microsoft Windows CE, the Palm Treo was maturing and supported dual-band GSM, Handspring had merged into the Palm hardware division, and Symbian could be licensed, but I never met a phone of theirs I liked; the Nokia phones looked about the same as many printer menu screens. One other device that is more relevant, because of the humans behind it, was the T-Mobile Sidekick, which actually had a cool flippy motion to open the keyboard! Keep that Sidekick in mind for a moment. Oh, and let's not forget a fantastic name. The mobile operating systems were limited. Each was proprietary. Most were menu driven and reminded us more of an iPod, released in 2001. I was a consultant at the time and remember thinking it was insane that people would pay hundreds of dollars for a phone. At the time, flip phones were all the rage. A cottage industry of applications sprung up, like Notify, that made use of app frameworks on these devices to connect my customers to their Exchange accounts so their calendars could sync wirelessly. The browsing experience wasn't great. The messaging experience wasn't great. The phones were big and clunky.
And while you could write apps for Symbian in Qt Creator, Flash Lite, or Python for S60, few bothered. That's when Andy Rubin left Danger, the company he cofounded that made the Sidekick, and joined up with Rich Miner, Nick Sears, and Chris White in 2003 to found a little company called Android Inc. They wanted to make better mobile devices than were currently on the market, and they set out to write an operating system based on Linux that could rival anything out there. Rubin was no noob when cofounding Danger. He had been a robotics engineer in the 80s and a manufacturing engineer at Apple for a few years, and then got his first mobility engineering gig when he bounced to General Magic, a spinoff from Apple, to work on Magic Cap from '92 to '95. He then helped build WebTV from '95 to '99. Many in business academia have noted that Android existed before Google, and that's why it's as successful as it is today. But Google bought Android in 2005, years before the actual release of Android. Apple had long been rumor-milling a phone, which would mean a mobile operating system as well. Android was sprinting towards a release that was somewhat Blackberry-like, focused on competing with similar devices on the market at the time, like the Blackberries that were all the rage. Obama and Hillary Clinton were all about theirs. As a consultant, I was stoked to become a Blackberry Enterprise Server reseller and used that to deploy all the things. The first iPhone was released in 2007. I think we sometimes think that along came the iPhone and Blackberries started to disappear. It took years. But the fall was fast. While the iPhone was also impactful, the Android-based devices were probably more so. That release of the iPhone kicked Andy Rubin in the keister, and he pivoted over from the Blackberry-styled keyboard to a touch screen, which changed… everything. Suddenly this weird innovation wasn't yet another frivolous, expensive Apple extravagance.
The logo helped grow the popularity as well, I think. Internally at Google, Dan Morrill started creating what were known as Dandroids. But the bugdroid, as it's known, was designed by Irina Blok on the Android launch team. It was eventually licensed under Creative Commons, which resulted in lots of different variations of the logo; a sharp contrast to the control Apple puts around the usage of its own logo. The first version of the shipping Android code came along in 2008, and the first phone that really shipped with it wasn't until the HTC Dream in 2009. This device had a keyboard you could press but also had a touch screen, although we hadn't gotten a virtual keyboard yet. It shipped with an ARM11, 192MB of RAM, and 256MB of storage. But you could expand it up to 16 gigs with a microSD card. Oh, and it had a trackball. It had 802.11b and g, Bluetooth, and shipped with Android 1.0. But it could be upgraded up to 1.6, Donut. The hacker in me just… couldn't help but mod the thing, much as I couldn't help but jailbreak the iPhone back before I got too lazy not to. Of course, the Dev Phone 1 shipped soon after, which didn't require you to hack it, something Apple waited until 2019 to copy. The screen was smaller than that of an iPhone. The keyboard felt kinda junky. The app catalog was lacking. It didn't really work well in an office setting. But it was open source. It was a solid operating system, and it showed promise as to the future of not-Apple in a post-Blackberry world. Note: any time a politician uses a technology, it's about 5 minutes past being dead tech. Of Blackberry, iOS, and Android, Android was last in devices sold using those platforms in 2009, although the G1, as the Dream was also known, quickly took 9% market share. But then came Eclair. Unlike sophomore efforts from bands, there's something about a 2.0 release of software. By the end of 2010 there were more Androids than iOS devices.
2011 was the peak year of Blackberry sales, with over 50 million being sold, but those were the laggards spinning out of the buying tornado, funding the R&D for the fruitless next few Blackberry releases. Blackberry market share would zero out in just 6 short years. The iPhone continued a nice climb over the past 8 years. But Android sales are now in the billions per year. Ultimately, to quote Time, Blackberry's "failure to keep up with Apple and Google was a consequence of errors in its strategy and vision." If you had to net-net that, touch vs menus was a substantial part of it. By 2017 the combined Android and iOS market share was 99.6%. In 2013, Sundar Pichai, now Google's CEO, took over Android when Andy Rubin was embroiled in sexual harassment charges; Rubin now acts as CEO of Playground Global, an incubator for hardware startups. The open source nature of Android, and it being ready to fit into devices from manufacturers like HTC, led to advancements that inspired and were inspired by the iPhone, leading us to the state we're in today. Let's look at the releases per year and per innovation:
* 1.0, API 1, 2008: Included early Google apps like Gmail, Maps, Calendar, of course a web browser, a media player, and YouTube
* 1.1 came in February the next year and was code-named Petit Four
* 1.5 Cupcake, 2009: Gave us an on-screen keyboard and third-party widgets, then apps on the Android Market, now known as the Google Play Store. Thus came the HTC Dream. Open source everything.
* 1.6 Donut, 2009: Customizable screen sizes and resolutions, CDMA support. And the short-lived Dell Streak! Because of this resolution work we got the joy of learning all about the tablet. Oh, and Universal Search and more emphasis on battery usage!
* 2.0 Eclair, 2009: The advent of the Motorola Droid, turn-by-turn navigation, real-time traffic, live wallpapers, speech to text. But the pinch-to-zoom from iOS sparked a war with Apple. We also got the ability to limit accounts.
Oh, and new camera modes that would have impressed even George Eastman, and Bluetooth 2.1 support.
* 2.2 Froyo, four months later in 2010: under-the-hood tuning, voice actions, and Flash support, something Apple has never had. And here came the HTC Incredible S as well as one of the most popular mobile devices ever built: the Samsung Galaxy S2. This was also the first hotspot option, and we got 3G and better LCDs. That whole tethering thing, it took a year for the iPhone to copy.
* 2.3 Gingerbread: With 2010 came Gingerbread. The green from the robot came into Gingerbread, with the black and green motif moving front and center. More sensors, NFC, a new download manager, and copy and paste got better.
* 3.0 Honeycomb, 2011: The most important thing was when Matias Duarte showed up and reinvented the Android UI. The holographic design traded out the green and blue and gave you more screen space. This kicked off a permanent overhaul and brought a card UI for recent apps. Enter the Galaxy S9 and the Huawei Mate 2.
* 4.0 Ice Cream Sandwich, later in 2011: Duarte's designs started really taking hold. For starters, let's get rid of buttons. That's important and has been a critical change for other devices as well. It reunited tablets and phones with a single vision. On-screen buttons brought the card-like appearance into app switching. Smarter swiping added swiping to dismiss, which changed everything for how we handle email and texts with gestures. You can thank this design for Tinder.
* 4.1 to 4.3 Jelly Bean, 2012: Added some sweet, sweet fine-tuning to the foundational elements from Ice Cream Sandwich. Google Now was supposed to give us predictive intelligence; we got interactive notifications, expanded voice search, and advanced search, still with the card-based everything for results. We also got multiuser support for tablets. And the Android Quick Settings pane. We also got widgets on the lock screen - but those are a privacy nightmare and didn't last for long.
Automatic widget resizing, wireless display projection support, and restricted profiles on multiple user accounts, making it a great parent device. Enter the Nexus 10. AND TWO-FINGER DOWN SWIPES.
* 4.4 KitKat, 2013: Ended the era of the dark screen; lighter screens and neutral highlights moved in. I mean, The Matrix was way before that, after all. OK, Google showed up, furthering the competition with Apple and Siri. Hands-free activation. A panel on the home screen, and a stand-alone launcher. AND EMOJIS ON THE KEYBOARD. Increased NFC security.
* 5.0 Lollipop, 2014: Brought 64-bit support, Bluetooth Low Energy, and a flatter interface. But more importantly, we got annual releases like iOS.
* 6.0 Marshmallow, 2015: Gave us Doze mode, sticking it to the iPhone with even more battery-saving features. App security and prompts to grant apps access to resources like the camera and phone were added. The Nexus 5X and 6P brought fingerprint scanners and USB-C.
* 7.0 Nougat, 2016: Gave us quick app switching, a different lock screen and home screen wallpaper, split-screen multitasking, and gender- and race-inclusive emojis.
* 8.0 Oreo, 2017: Gave us floating video windows, which got kinda cool once app makers started adding support for them in their apps. We also got a new file browser, which came to iOS in 2019. And more battery enhancements, with prettied-up battery menus. Oh, and notification dots on app icons, borrowed from Apple.
* 9.0 Pie, 2018: Brought notch support and navigation similar to that of the iPhone X, adapting to a soon-to-be bezel-free world. And of course, the battery continues to improve. This brings us into the world of the Pixel 3.
* 10, likely sometime in 2019.
While the initial release of Android shipped with the Linux 2.6 kernel, that has been updated as appropriate over the years, with version 3 in Ice Cream Sandwich and version 4 in Nougat. Every release of Android tends to have an increment in the Linux kernel. Now, Android is open source. So how does Google make money?
Let's start with what Google does best: advertising. Google makes a few cents every time you click on an ad in messages or web pages or any other little spot they've managed to drop an ad into. Then there's the Google Play Store. Apple makes 70% more revenue from apps than Android, despite the fact that Android apps have twice the number of installs. The old adage is that if you don't pay for a product, you are the product. I don't tend to think Google goes overboard with all that, though. And Google is probably keeping Caterpillar in business just to buy big enough equipment to move their gold bars from one building to the next on campus. Any time someone's making money, lots of other people want a taste. Like Oracle, which owns a lot of the open source components used in Android. And the competition between iOS and Android makes both products better for consumers! Now look out for Android Auto, Android Things, Android TV, Chrome OS, the Google Assistant and others - given that other types of vendors can make use of Google's open source offerings to cut R&D costs and get to market faster! But more importantly, Android has contributed substantially to the rise of ubiquitous computing no matter how much money you have. I like to think the long-term impact of such a democratization of mobility and the Internet will make the world a little less Idiocracy and a little more Wikipedia. Thank you so very much for tuning into another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!
Which accessories should you use for night photography? As we have said many times, we often meet fellow photographers who want to change their gear, thinking that with better equipment they will take better night photographs, and that is not the case. https://youtu.be/HxclMOwqVV8 The interesting part of heading out in search of a good night photograph is good planning, not only of the photograph itself but of how to capture it, so you get the most out of the location with your equipment. We recommend a book on advanced processing that can help a great deal in achieving that extra something our equipment doesn't give us. It is titled Técnicas Avanzadas de edición digital, and we leave you the link below. It's a real treat to have a book like this; note that it is an ebook, not available on paper. Which is the best lens for night photography? We then talked about lenses, and I shared my experience with two brands, mainly Samyang and Irix. The first makes a "cheap" lens that gives good results, but its value for money has been called into question by the arrival of new brands such as Irix. As we said during the livestream, it is an option, but if we want superior quality we lean towards the Irix 15mm; given the price difference, there's no contest. Memory cards for night photography We also talked about the importance of a good memory card. There are many, but we will focus on SD, which is the standard right now, and there are a few peculiarities to look out for. So far we can find three types: standard, HC, and XC.
SD -> 16MB, 32MB, 64MB, 128MB, 256MB, 512MB, 1GB, 2GB. These do not support the UHS bus.
SDHC -> 4GB, 8GB, 16GB, 32GB. These support the UHS bus.
SDXC -> 64GB, 128GB, 200GB, 512GB, 2TB. These support the UHS bus.
UHS bus type
UHS-I: 104MB/s maximum read/write speed.
UHS-II: 312MB/s maximum read/write speed.
UHS bus class
U1: 10MB/s minimum write speed.
U3: 30MB/s minimum write speed.
Speed class
Class 2: 2MB/s
Class 4: 4MB/s
Class 6: 6MB/s
Class 10: 10MB/s
Storage capacity: the amount of data the card can hold. Card types according to the initials: the initials SD, SDHC and SDXC reflect the cards' capacities. SD (Secure Digital) cards were the first to launch, competing with other formats such as xD or Memory Stick, and their capacity reached up to 2GB. Then came SDHC (SD High Capacity) cards, which offered greater reliability and speed when saving large files and, in theory, could reach 2TB, but the SD Association set their limit at 32GB. And finally SDXC (SD eXtended Capacity) cards, which do reach 2TB; since they hold far more data they need more speed, which is why they come with the UHS bus, and they are designed to be formatted as exFAT. The buses were created for the higher-performance cards, since they needed to write large amounts of data at high speed, coming from FullHD video and beyond (2K, 4K...), without the recording being interrupted because of the card. After these technical details, we recommended a couple of cards. As you can see, when it comes to taking a good night photograph there are many things to keep in mind, from planning and preparation to being clear about how to take the photograph we have in our heads. As a bonus, we leave you the link to a small but very efficient power bank. See you in the next livestream.
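The class markings above map directly to minimum sustained write speeds, so you can check whether a card can keep up with a given video bitrate. A small sketch; the table mirrors the figures listed above, and the 100 Mb/s "4K" bitrate is an assumed example value, not from the text:

```python
# Check whether a card's speed marking can sustain a given video bitrate.
# The minimum-write-speed table (in MB/s) mirrors the figures above;
# the example bitrate is an assumed illustrative value.
MIN_WRITE_MBS = {
    "Class 2": 2, "Class 4": 4, "Class 6": 6, "Class 10": 10,
    "U1": 10, "U3": 30,
}

def can_record(marking: str, bitrate_mbit_s: float) -> bool:
    """True if the marking's minimum write speed (megabytes/s)
    covers the video bitrate (megabits/s)."""
    return MIN_WRITE_MBS[marking] * 8 >= bitrate_mbit_s

print(can_record("U1", 100))  # False: 10 MB/s is only 80 Mb/s
print(can_record("U3", 100))  # True: 30 MB/s is 240 Mb/s
```

Note the unit trap: card speeds are quoted in megabytes per second, video bitrates in megabits per second.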
Running OpenBSD/NetBSD on FreeBSD using grub2-bhyve, vermaden’s FreeBSD story, thoughts on OpenBSD on the desktop, history of file type info in Unix dirs, multibooting a Pinebook KDE neon image, and more. ##Headlines OpenBSD/NetBSD on FreeBSD using grub2-bhyve When I was writing a blog post about the process title, I needed a couple of virtual machines with OpenBSD, NetBSD, and Ubuntu. Before that day I had mainly used FreeBSD and Windows with bhyve. I spent some time trying to set up OpenBSD using bhyve and UEFI as described here. I had numerous problems trying to use it, and this was the day I discovered the grub2-bhyve tool, and I love it! grub2-bhyve allows you to load a kernel using the GRUB bootloader. GRUB supports most operating systems with a standard configuration, so exactly the same method can be used to install NetBSD or Ubuntu. First, let’s install grub2-bhyve on our FreeBSD box: # pkg install grub2-bhyve To run grub2-bhyve we need to provide at least the name of the VM. In bhyve, if the memsize is not specified, the default VM is created with 256MB of memory. # grub-bhyve test GNU GRUB version 2.00 Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists possible device or file completions. grub> After running the grub-bhyve command we enter the GRUB loader. If we type the ls command, we see all the available devices. In the case of grub2-bhyve there is one additional device called “(host)” that is always available and allows the host filesystem to be accessed. We can list files under that device. grub> ls (host) grub> ls (host)/ libexec/ bin/ usr/ bhyve/ compat/ tank/ etc/ boot/ net/ entropy proc/ lib/ root/ sys/ mnt/ rescue/ tmp/ home/ sbin/ media/ jail/ COPYRIGHT var/ dev/ grub> To exit the console, simply type ‘reboot’. I would like to install my new operating system on a ZVOL ztank/bhyve/post.
On another terminal, we create it: # zfs create -V 10G ztank/bhyve/post If you don’t use ZFS for some crazy reason, you can also create a raw blob using the truncate(1) command. # truncate -s 10G post.img I recommend installing an operating system from the disk image (installXX.fs for OpenBSD and NetBSD-X.X-amd64-install.img for NetBSD). Now we need to create a device map for GRUB, pointing hd0 at the install image and hd1 at the ZVOL:
# cat > /tmp/post.map << EOF
(hd0) /directory/to/disk/image
(hd1) /dev/zvol/ztank/bhyve/post
EOF
With the map in place, GRUB sees both disks: grub> ls (hd0) (hd0,msdos4) (hd0,msdos1) (hd0,openbsd9) (hd0,openbsd1) (hd1) (host) The hd0 (in this example the OpenBSD image) contains multiple partitions. We can check what is on it. grub> ls (hd0,msdos4)/ boot bsd 6.4/ etc/ And this is the partition that contains the kernel. Now we can set a root device, load the OpenBSD kernel, and boot: grub> set root=(hd0,msdos4) grub> kopenbsd -h com0 -r sd0a /bsd grub> boot After that, we can run the bhyve virtual machine. In my case it is: # bhyve -c 1 -w -u -H -s 0,amd_hostbridge -s 3,ahci-hd,/directory/to/disk/image -s 4,ahci-hd,/dev/zvol/ztank/bhyve/post -s 31,lpc -l com1,stdio post Unfortunately, explaining the whole bhyve(8) command line is beyond this article. After installing the operating system, remove hd0 from the mapping file and the image from the bhyve(8) command. If you don’t want to type all those GRUB commands, you can simply redirect them to the standard input. cat
Second round of ZFS improvements in FreeBSD, Postgres finds that non-FreeBSD/non-Illumos systems are corrupting data, interview with Kevin Bowling, BSDCan list of talks, and cryptographic right answers. Headlines [Other big ZFS improvements you might have missed] 9075 Improve ZFS pool import/load process and corrupted pool recovery One of the first tasks during the pool load process is to parse a config provided from userland that describes what devices the pool is composed of. A vdev tree is generated from that config, and then all the vdevs are opened. The Meta Object Set (MOS) of the pool is accessed, and several metadata objects that are necessary to load the pool are read. The exact configuration of the pool is also stored inside the MOS. Since the configuration provided from userland is external and might not accurately describe the vdev tree of the pool at the txg that is being loaded, it cannot be relied upon to safely operate the pool. For that reason, the configuration in the MOS is read early on. In the past, the two configurations were compared, and if there was a mismatch then the load process was aborted and an error was returned. This was a good way to ensure a pool did not get corrupted, but it made the pool load process needlessly fragile in cases where the vdev configuration changed or the userland configuration was outdated. Since the MOS is stored in 3 copies, the configuration provided by userland doesn't have to be perfect in order to read its contents. Hence, a new approach has been adopted: the pool is first opened with the untrusted userland configuration just so that the real configuration can be read from the MOS. The trusted MOS configuration is then used to generate a new vdev tree, and the pool is re-opened. When the pool is opened with an untrusted configuration, writes are disabled to avoid accidentally damaging it.
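The two-step open described above can be sketched abstractly. This is purely an illustration of the control flow, not ZFS code; every name and structure here is hypothetical (the real logic lives in C inside the pool load path):

```python
# Toy illustration of the two-step pool load described above.
# All names and structures are hypothetical, not real ZFS interfaces.

MOS_CONFIG = {"vdevs": ["disk0", "disk1", "disk2"]}   # authoritative, on disk
USERLAND_CONFIG = {"vdevs": ["disk0", "disk1"]}       # possibly stale cache

def load_pool(userland_config):
    # Step 1: open with the untrusted config, writes disabled, only far
    # enough to read the trusted configuration out of the MOS.
    first_open = {"config": userland_config, "writable": False}
    trusted_config = MOS_CONFIG  # would be read through first_open

    # Step 2: rebuild the vdev tree from the trusted config and re-open
    # the pool for real, now with writes enabled.
    return {"config": trusted_config, "writable": True}

pool = load_pool(USERLAND_CONFIG)
print(pool["config"]["vdevs"])  # full vdev list despite the stale input
```

The point of the sketch: the outdated userland config only has to be good enough to find the MOS, after which the on-disk truth takes over.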
During reads, some sanity checks are performed on block pointers to see if each DVA points to a known vdev; when the configuration is untrusted, instead of panicking the system if those checks fail, we simply avoid issuing reads to the invalid DVAs. This new two-step pool load process now allows rewinding pools across vdev tree changes such as device replacement, addition, etc. Loading a pool from an external config file in a clustering environment also becomes much safer now, since the pool will import even if the config is outdated and didn't, for instance, register a recent device addition. With this code in place, it became relatively easy to implement a long-sought-after feature: the ability to import a pool with missing top-level (i.e. non-redundant) devices. Note that since this almost guarantees some loss of data, this feature is for now restricted to a read-only import. 7614 zfs device evacuation/removal This project allows top-level vdevs to be removed from the storage pool with “zpool remove”, reducing the total amount of storage in the pool. This operation copies all allocated regions of the device to be removed onto other devices, recording the mapping from old to new location. After the removal is complete, read and free operations to the removed (now “indirect”) vdev must be remapped and performed at the new location on disk. The indirect mapping table is kept in memory whenever the pool is loaded, so there is minimal performance overhead when doing operations on the indirect vdev. The size of the in-memory mapping table will be reduced when its entries become “obsolete” because they are no longer used by any block pointers in the pool. An entry becomes obsolete when all the blocks that use it are freed. An entry can also become obsolete when all the snapshots that reference it are deleted, and the block pointers that reference it have been “remapped” in all filesystems/zvols (and clones).
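The indirect-mapping idea can be shown with a toy lookup table. This is a deliberate simplification of the mechanism described above, not ZFS's actual data structure; the offsets, vdev names, and reference counts are invented:

```python
# Toy model of the indirect vdev mapping described above. Reads aimed at
# the removed vdev are redirected via the table; an entry is dropped
# ("becomes obsolete") once no block pointers reference it.
# All offsets and names are invented for illustration.
indirect_map = {
    0x1000: ("vdev1", 0x8000),
    0x2000: ("vdev2", 0x4000),
}
refcount = {0x1000: 2, 0x2000: 1}  # block pointers still using each entry

def read(old_offset):
    # Every read or free against the removed vdev consults the mapping
    # to find the new, concrete location on another vdev.
    return indirect_map[old_offset]

def free(old_offset):
    # When the last block using an entry is freed (or remapped away),
    # the entry is obsolete and can be condensed out of the table.
    refcount[old_offset] -= 1
    if refcount[old_offset] == 0:
        del indirect_map[old_offset]

print(read(0x2000))             # redirected to ('vdev2', 0x4000)
free(0x2000)                    # last reference gone; entry removed
print(0x2000 in indirect_map)   # False: the table shrank
```

This mirrors why the in-memory table stays small: it only grows with still-referenced relocations, and condensing removes obsolete entries.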
Whenever an indirect block is written, all the block pointers in it will be “remapped” to their new (concrete) locations if possible. This process can be accelerated by using the “zfs remap” command to proactively rewrite all indirect blocks that reference indirect (removed) vdevs. Note that when a device is removed, we do not verify the checksum of the data that is copied. This makes the process much faster, but if it were used on redundant vdevs (i.e. mirror or raidz vdevs), it would be possible to copy the wrong data when we have the correct data on, e.g., the other side of the mirror. Therefore, mirror and raidz devices cannot be removed. You can use ‘zpool detach’ to downgrade a mirror to a single top-level device, so that you can then remove it. 7446 zpool create should support efi system partition This one was not actually merged into FreeBSD, as it doesn’t apply currently, but I would like to switch the way FreeBSD deals with full disks to be closer to IllumOS, to make automatic spare replacement a hands-off operation. Since we support whole-disk configuration for the boot pool, we also need whole-disk support with UEFI boot, and for this, zpool create should create an EFI system partition. I have borrowed the idea from Oracle Solaris, introducing a zpool create -B switch to provide a way to specify that a boot partition should be created. However, there is still a question: how big should the system partition be? For the time being, I have set the default size to 256MB (that's the minimum size for FAT32 with 4k blocks). To support a custom size, a "bootsize" property, set on creation, is introduced, so a custom size can be set as: zpool create -B -o bootsize=34MB rpool c0t0d0. After the pool is created, the "bootsize" property is read-only. When the -B switch is not used, the bootsize defaults to 0 and is shown in zpool get output with no value. Older zfs/zpool implementations can ignore this property.
**Digital Ocean** PostgreSQL developers find that every operating system other than FreeBSD and illumos might corrupt your data. Some time ago I ran into an issue where a user encountered data corruption after a storage error. PostgreSQL played a part in that corruption by allowing a checkpoint to succeed after what should've been a fatal error. TL;DR: Pg should PANIC on fsync() EIO return. Retrying fsync() is not OK, at least on Linux. When fsync() returns success it means "all writes since the last fsync have hit disk" but we assume it means "all writes since the last SUCCESSFUL fsync have hit disk". Pg wrote some blocks, which went to OS dirty buffers for writeback. Writeback failed due to an underlying storage error. The block I/O layer and XFS marked the writeback page as failed (AS_EIO), but had no way to tell the app about the failure. When Pg called fsync() on the FD during the next checkpoint, fsync() returned EIO because of the flagged page, to tell Pg that a previous async write failed. Pg treated the checkpoint as failed and didn't advance the redo start position in the control file. All good so far. But then we retried the checkpoint, which retried the fsync(). The retry succeeded, because the prior fsync() *cleared the AS_EIO bad page flag*. The write never made it to disk, but we completed the checkpoint, and merrily carried on our way. Whoops, data loss. The clear-error-and-continue behaviour of fsync is not documented as far as I can tell. Nor is fsync() returning EIO unless you have a very new Linux man-pages with the patch I wrote to add it. But from what I can see in the POSIX standard we are not given any guarantees about what happens on fsync() failure at all, so we're probably wrong to assume that retrying fsync() is safe. We already PANIC on fsync() failure for WAL segments. We just need to do the same for data forks at least for EIO.
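The safe pattern being argued for (crash instead of retrying a failed fsync) can be sketched in a few lines. This is an illustrative Python example, not PostgreSQL's actual code: the point is that after fsync() fails with EIO, the kernel may already have dropped the dirty pages, so a retried fsync() can report success without the data ever reaching disk.

```python
# Sketch of the "PANIC on fsync() EIO" pattern discussed above.
import errno
import os
import tempfile

def durable_write(path, data):
    """Write data and fsync it; treat EIO as fatal instead of retrying."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        try:
            os.fsync(fd)
        except OSError as e:
            if e.errno == errno.EIO:
                # The moral equivalent of Pg's PANIC: crash and recover
                # from the WAL rather than trust a later fsync() retry.
                raise SystemExit("fatal: fsync() reported lost writes")
            raise
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    durable_write(os.path.join(d, "block"), b"checkpoint data")
    print("ok")  # success path: data written and fsynced
```

The design choice is the one from the thread: a checkpoint must only be considered durable if every fsync() since the previous checkpoint actually succeeded.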
This isn't as bad as it seems because AFAICS fsync only returns EIO in cases where we should be stopping the world anyway, and many FSes will do that for us. Upon further looking, it turns out it is not just Linux brain damage: Apparently I was too optimistic. I had looked only at FreeBSD, which keeps the page around and dirties it so we can retry, but the other BSDs apparently don't (FreeBSD changed that in 1999). From what I can tell from the sources below, we have: Linux, OpenBSD, NetBSD: retrying fsync() after EIO lies. FreeBSD, illumos: retrying fsync() after EIO tells the truth. A NetBSD PR to solve the issues notes that:
- I/O errors are not reported back to fsync at all.
- Write errors during genfs_putpages that fail for any reason other than ENOMEM cause the data to be semi-silently discarded.
- It appears that UVM pages are marked clean when they're selected to be written out, not after the write succeeds; so there are a bunch of potential races when writes fail.
- It appears that write errors for buffercache buffers are semi-silently discarded as well.
Interview - Kevin Bowling: Senior Manager Engineering of LimeLight Networks - kbowling@llnw.com / @kevinbowling1
BR: How did you first get introduced to UNIX and BSD?
AJ: What got you started contributing to an open source project?
BR: What sorts of things have you worked on in the past?
AJ: Tell us a bit about LimeLight and how they use FreeBSD.
BR: What are the biggest advantages of FreeBSD for LimeLight?
AJ: What could FreeBSD do better that would benefit LimeLight?
BR: What has LimeLight given back to FreeBSD?
AJ: What have you been working on more recently?
BR: What do you find to be the most valuable part of open source?
AJ: Where do you think the most improvement in open source is needed?
BR: Tell us a bit about your computing history collection. What are your three favourite pieces?
AJ: How do you keep motivated to work on Open Source?
BR: What do you do for fun?
AJ: Anything else you want to mention?
News Roundup BSDCan 2018 Selected Talks The schedule for BSDCan is up. Lots of interesting content, and we are looking forward to it. We hope to see lots of you there. Make sure you come introduce yourselves to us. Don't be shy. Remember, if this is your first BSDCan, check out the newbie session on Thursday night. It'll help you get to know a few people so you have someone you can ask for guidance. Also, check out the hallway track, the tables, and come to the hacker lounge. iXsystems Cryptographic Right Answers Crypto can be confusing. We all know we shouldn't roll our own, but what should we use? Well, some developers have tried to answer that question over the years, keeping an updated list of "Right Answers": 2009: Colin Percival of FreeBSD; 2015: Thomas H. Ptacek; 2018: Latacora, a consultancy that provides "retained security teams for startups", where Thomas Ptacek works. We're less interested in empowering developers and a lot more pessimistic about the prospects of getting this stuff right. There are, in the literature and in the most sophisticated modern systems, "better" answers for many of these items. If you're building for low-footprint embedded systems, you can use STROBE and a sound, modern, authenticated encryption stack entirely out of a single SHA-3-like sponge construction. You can use NOISE to build a secure transport protocol with its own AKE. Speaking of AKEs, there are, like, 30 different password AKEs you could choose from. But if you're a developer and not a cryptography engineer, you shouldn't do any of that. You should keep things simple and conventional and easy to analyze; "boring", as the Google TLS people would say.
Cryptographic Right Answers
- **Encrypting data**: Percival, 2009: AES-CTR with HMAC. Ptacek, 2015: (1) NaCl/libsodium's default, (2) ChaCha20-Poly1305, or (3) AES-GCM. Latacora, 2018: KMS or XSalsa20+Poly1305.
- **Symmetric key length**: Percival, 2009: Use 256-bit keys. Ptacek, 2015: Use 256-bit keys. Latacora, 2018: Go ahead and use 256-bit keys.
- **Symmetric "signatures"**: Percival, 2009: Use HMAC. Ptacek, 2015: Yep, use HMAC. Latacora, 2018: Still HMAC.
- **Hashing algorithm**: Percival, 2009: Use SHA256 (SHA-2). Ptacek, 2015: Use SHA-2. Latacora, 2018: Still SHA-2.
- **Random IDs**: Percival, 2009: Use 256-bit random numbers. Ptacek, 2015: Use 256-bit random numbers. Latacora, 2018: Use 256-bit random numbers.
- **Password handling**: Percival, 2009: scrypt or PBKDF2. Ptacek, 2015: In order of preference, use scrypt, bcrypt, and then if nothing else is available PBKDF2. Latacora, 2018: In order of preference, use scrypt, argon2, bcrypt, and then if nothing else is available PBKDF2.
- **Asymmetric encryption**: Percival, 2009: Use RSAES-OAEP with SHA256 and MGF1+SHA256 bzzrt pop ffssssssst exponent 65537. Ptacek, 2015: Use NaCl/libsodium (box / crypto_box). Latacora, 2018: Use NaCl/libsodium (box / crypto_box).
- **Asymmetric signatures**: Percival, 2009: Use RSASSA-PSS with SHA256 then MGF1+SHA256 in tricolor systemic silicate orientation. Ptacek, 2015: Use NaCl, Ed25519, or RFC 6979. Latacora, 2018: Use NaCl or Ed25519.
- **Diffie-Hellman**: Percival, 2009: Operate over the 2048-bit Group #14 with a generator of 2. Ptacek, 2015: Probably still DH-2048, or NaCl. Latacora, 2018: Probably nothing. Or use Curve25519.
- **Website security**: Percival, 2009: Use OpenSSL. Ptacek, 2015: Remains: OpenSSL, or BoringSSL if you can. Or just use AWS ELBs. Latacora, 2018: Use AWS ALB/ELB or OpenSSL, with Let's Encrypt.
- **Client-server application security**: Percival, 2009: Distribute the server's public RSA key with the client code, and do not use SSL. Ptacek, 2015: Use OpenSSL, or BoringSSL if you can. Or just use AWS ELBs. Latacora, 2018: Use AWS ALB/ELB or OpenSSL, with Let's Encrypt.
- **Online backups**: Percival, 2009: Use Tarsnap. Ptacek, 2015: Use Tarsnap. Latacora, 2018: Store PMAC-SIV-encrypted arc files to S3 and save fingerprints of your backups to an ERC20-compatible blockchain. Just kidding. You should still use Tarsnap.
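For illustration, several of the "boring" answers above are one standard-library call away in modern languages. A Python sketch (hashlib, hmac and secrets are stdlib; the scrypt parameters shown are common choices, an assumption on my part rather than Latacora's exact words):

```python
import hashlib
import hmac
import secrets

# Symmetric "signatures": HMAC with SHA-256 and a 256-bit key
key = secrets.token_bytes(32)
tag = hmac.new(key, b"message", hashlib.sha256).hexdigest()

# Random IDs: 256-bit random numbers
request_id = secrets.token_hex(32)  # 32 random bytes, hex-encoded

# Password handling: scrypt (in hashlib since Python 3.6)
salt = secrets.token_bytes(16)
pwhash = hashlib.scrypt(b"correct horse", salt=salt, n=2**14, r=8, p=1)

print(len(tag), len(request_id), len(pwhash))  # 64 64 64
```

In real code, verify HMAC tags with hmac.compare_digest() rather than ==, so the comparison runs in constant time.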
Seriously though, use Tarsnap. Adding IPv6 to an existing server I am adding IPv6 addresses to each of my servers. This post assumes the server is up and running FreeBSD 11.1 and you already have an IPv6 address block. This does not cover the creation of an IPv6 tunnel, such as that provided by HE.net. This assumes native IPv6. In this post, I am using IPv6 addresses from the IPv6 Address Prefix Reserved for Documentation (i.e. 2001:DB8::/32). You should use your own addresses. The IPv6 block I have been assigned is 2001:DB8:1001:8d00::/64. I added this to /etc/rc.conf:

```
ipv6_activate_all_interfaces="YES"
ipv6_defaultrouter="2001:DB8:1001:8d00::1"
ifconfig_em1_ipv6="inet6 2001:DB8:1001:8d00:d389:119c:9b57:396b prefixlen 64 accept_rtadv" # ns1
```

The IPv6 address I have assigned to this host is completely random (within the given block). I found a random IPv6 address generator and used it to select d389:119c:9b57:396b as the address for this service within my address block. I don't have the reference, but I did read that randomly selecting addresses within your block is a better approach. In order to invoke these changes without rebooting, I issued these commands:

```
[dan@tallboy:~] $ sudo ifconfig em1 inet6 2001:DB8:1001:8d00:d389:119c:9b57:396b prefixlen 64 accept_rtadv
[dan@tallboy:~] $ sudo route add -inet6 default 2001:DB8:1001:8d00::1
add net default: gateway 2001:DB8:1001:8d00::1
```

If you do the route add first, you will get this error:

```
[dan@tallboy:~] $ sudo route add -inet6 default 2001:DB8:1001:8d00::1
route: writing to routing socket: Network is unreachable
add net default: gateway 2001:DB8:1001:8d00::1 fib 0: Network is unreachable
```

Beastie Bits
- Ghost in the Shell – Part 1
- Enabling compression on ZFS - a practical example
- Modern and secure DevOps on FreeBSD (Goran Mekić)
- LibreSSL 2.7.0 Released
- zrepl version 0.0.3 is out!
- [ZFS User Conference](http://zfs.datto.com/)
- Tarsnap

Feedback/Questions
- Benjamin - BSD Personal Mailserver
- Warren - ZFS volume size limit (show #233)
- Lars - AFRINIC
- Brad - OpenZFS vs OracleZFS

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Capcom Generation 2: The Chronicles of Arthur Developer: Capcom / Publisher: Capcom *Ghosts'n Goblins (Makaimura) *Ghouls'n Ghosts (Dai Makaimura) *Super Ghouls'n Ghosts (Cho Makaimura) * _D_* Developer: WARP / Publisher: Acclaim A horror-themed interactive movie and adventure game directed by Kenji Eno. The first entry in the D series starring "digital actress" Laura, and controlled through full-motion video (FMV) sequences. The game must be completed within two hours, without a save or pause function. * _Deep Fear_* Developer: SEGA AM7 & System Sacom (Mansion & Lunacy) / Publisher: SEGA A 1998 survival horror game and SEGA's answer to Capcom's Resident Evil. The last Saturn game released in Europe. Features include: real-time item use, aiming while moving, and oxygen management. Music was composed by Kenji Kawai (The Ring, Ghost in the Shell, Ranma, Kamen Rider). Creatures and characters designed by manga artist Yasushi Nirasawa (Kamen Rider). * _Dracula X_* Developer: Konami / Publisher: Konami The Saturn version contains exclusive features, including a new playable character (Maria Renard), new items, new enemies, two new areas (Cursed Prison and Underground Garden) and a secret outfit for Richter Belmont. A third equippable hand is available to Alucard for equipping usable items such as healing items. Badly ported: the developers did not take advantage of the Saturn's true 2D sprite mode, instead forcing it to render using the same method as the PlayStation, which lacks one. This resulted in stretched graphics (the PS1 runs at 256x224 resolution, while the Saturn does 352x240), longer loading times, and altered transparency effects due to the lack of alpha transparencies for 3D polygons. * _Enemy Zero_* Developer: WARP / Publisher: SEGA A 1996 survival horror adventure directed by Kenji Eno. The second game to star "digital actress" Laura, voiced by Jill Cunniff of Luscious Jackson.
* _House of the Dead_* Developer (Ported): Tantalus / Publisher: SEGA The port suffered from rushed development; the framerate drops to 20fps. Extra game modes were added, including character selection and a boss battle mode. * _Lunacy (Gekka Mugentan Torico)_* Developer: System Sacom / Publisher: SEGA An interactive movie with a simple interface and complex puzzles. The game is primarily a long series of interconnecting FMV sequences. * _Mr. Bones_* Developer: Zono Incorporated & Angel Studios (RE2 on N64) / Publisher: SEGA A multi-genre game created for SEGA by Ed Annunziata. Very few levels share the same gameplay or perspective. Genres include action/platform, music/rhythm, puzzle & memorization. The game's soundtrack was composed and performed by famed guitarist Ronnie Montrose. * _Night Warriors: Darkstalkers Revenge_* Developer: Capcom / Publisher: Capcom An exclusive port to Saturn; the original Darkstalkers for the PS had no confirmed release date at the time, which resulted in angry PS owners. The original Darkstalkers used 128MB for all characters, while Vampire Hunter uses 256MB in total, resulting in an average of 500 extra patterns for all characters. * _Resident Evil_* Developer: Capcom / Publisher: Capcom The Saturn version added an unlockable Battle minigame where the player must traverse a series of rooms from the main game and eliminate all enemies within them using weapons selected by the player. The minigame features two exclusive enemies not in the main game: a zombie version of Wesker and a gold-colored Tyrant. The player's performance is graded at the end of the minigame. The Japanese version is the most gore-laden of all the platforms. The Saturn version also features exclusive enemy monsters (re-skinned Hunters called Ticks and a second Tyrant prior to the game's final battle) and exclusive outfits for Jill and Chris. Developer: Capcom / Publisher: Capcom Released in Japan only in 1998 and required the 4MB RAM cart.
Saturn version contains all 15 characters from original Vampire Savior, plus the 3 Night Warriors who were left out of original arcade release. 4MB RAM cart helps reproduce fluid animation of the arcade version. However, while Shadow is available in the Saturn version, Marionette is not.
This week on BSD Now we cover the latest FreeBSD Status Report, a plan for Open Source software development, centrally managing bhyve with Ansible, libvirt, and pkg-ssh, and a whole lot more. This episode was brought to you by our sponsors. Headlines FreeBSD Project Status Report (January to March 2017) (https://www.freebsd.org/news/status/report-2017-01-2017-03.html) While a few of these projects indicate they are a "plan B" or an "attempt III", many are still hewing to their original plans, and all have produced impressive results. Please enjoy this vibrant collection of reports, covering the first quarter of 2017. The quarterly report opens with notes from Core, The FreeBSD Foundation, the Ports team, and Release Engineering. On the project front, the Ceph on FreeBSD project has made considerable advances, and is now usable as the net/ceph-devel port via the ceph-fuse module. Eventually they hope to have a kernel RADOS block device driver, so fuse is not required. CloudABI update, including news that the Bitcoin reference implementation is working on a port to CloudABI. eMMC Flash and SD card updates, allowing higher speeds (max speed changes from ~40 to ~80 MB/sec). As well, the MMC stack can now also be backed by the CAM framework. Improvements to the Linuxulator. More detail on the pNFS Server plan B that we discussed in a previous week. Snow B.V. is sponsoring a Dutch translation of the FreeBSD Handbook using the new .po system *** A plan for open source software maintainers (http://www.daemonology.net/blog/2017-05-11-plan-for-foss-maintainers.html) Colin Percival describes in his blog “a plan for open source software maintainers”: I've been writing open source software for about 15 years now; while I'm still wet behind the ears compared to FreeBSD greybeards like Kirk McKusick and Poul-Henning Kamp, I've been around for long enough to start noticing some patterns. In particular: Free software is expensive.
Software is expensive to begin with; but good quality open source software tends to be written by people who are recognized as experts in their fields (partly thanks to that very software) and can demand commensurate salaries. While that expensive developer time is donated (either by the developers themselves or by their employers), this influences what their time is used for: Individual developers like doing things which are fun or high-status, while companies usually pay developers to work specifically on the features those companies need. Maintaining existing code is important, but it is neither fun nor high-status; and it tends to get underweighted by companies as well, since maintenance is inherently unlikely to be the most urgent issue at any given time. Open source software is largely a "throw code over the fence and walk away" exercise. Over the past 15 years I've written freebsd-update, bsdiff, portsnap, scrypt, spiped, and kivaloo, and done a lot of work on the FreeBSD/EC2 platform. Of these, I know bsdiff and scrypt are very widely used and I suspect that kivaloo is not; but beyond that I have very little knowledge of how widely or where my work is being used. Anecdotally it seems that other developers are in similar positions: At conferences I've heard variations on "you're using my code? Wow, that's awesome; I had no idea" many times. I have even less knowledge of what people are doing with my work or what problems or limitations they're running into. Occasionally I get bug reports or feature requests; but I know I only hear from a very small proportion of the users of my work. I have a long list of feature ideas which are sitting in limbo simply because I don't know if anyone would ever use them — I suspect the answer is yes, but I'm not going to spend time implementing these until I have some confirmation of that. A lot of mid-size companies would like to be able to pay for support for the software they're using, but can't find anyone to provide it. 
For larger companies, it's often easier — they can simply hire the author of the software (and many developers who do ongoing maintenance work on open source software were in fact hired for this sort of "in-house expertise" role) — but there's very little available for a company which needs a few minutes per month of expertise. In many cases, the best support they can find is sending an email to the developer of the software they're using and not paying anything at all — we've all received "can you help me figure out how to use this" emails, and most of us are happy to help when we have time — but relying on developer generosity is not a good long-term solution. Every few months, I receive email from people asking if there's any way for them to support my open source software contributions. (Usually I encourage them to donate to the FreeBSD Foundation.) Conversely, there are developers whose work I would like to support (e.g., people working on FreeBSD wifi and video drivers), but there isn't any straightforward way to do this. Patreon has demonstrated that there are a lot of people willing to pay to support what they see as worthwhile work, even if they don't get anything directly in exchange for their patronage. It seems to me that this is a case where problems are in fact solutions to other problems. To wit: Users of open source software want to be able to get help with their use cases; developers of open source software want to know how people are using their code. Users of open source software want to support the work they use; developers of open source software want to know which projects users care about. Users of open source software want specific improvements; developers of open source software may be interested in making those specific changes, but don't want to spend the time until they know someone would use them.
Users of open source software have money; developers of open source software get day jobs writing other code because nobody is paying them to maintain their open source software. I'd like to see this situation get fixed. As I envision it, a solution would look something like a cross between Patreon and Bugzilla: Users would be able to sign up to "support" projects of their choosing, with a number of dollars per month (possibly arbitrary amounts, possibly specified tiers; maybe including $0/month), and would be able to open issues. These could be private (e.g., for "technical support" requests) or public (e.g., for bugs and feature requests); users would be able to indicate their interest in public issues created by other users. Developers would get to see the open issues, along with a nominal "value" computed based on allocating the incoming dollars of "support contracts" across the issues each user has expressed an interest in, allowing them to focus on issues with higher impact. He poses three questions to his readers: whether users and developers alike would be interested in such a service, and whether paying and being paid, respectively, would appeal to them. Check out the comments (and those on https://news.ycombinator.com/item?id=14313804) as well for some suggestions and discussion on the topic *** OpenBSD vmm hypervisor: Part 2 (http://www.h-i-r.net/2017/04/openbsd-vmm-hypervisor-part-2.html) We asked for people to write up their experience using OpenBSD's VMM. This blog post is just that. This is going to be a (likely long-running, infrequently-appended) series of posts as I poke around in vmm. A few months ago, I demonstrated some basic use of the vmm hypervisor as it existed in OpenBSD 6.0-CURRENT around late October, 2016. We'll call that video Part 1. Quite a bit of development was done on vmm before 6.1-RELEASE, and it's worth noting that some new features made their way in.
Work continues, of course, and I can only imagine the hypervisor technology will mature plenty for the next release. As it stands, this is the first release of OpenBSD with a native hypervisor shipped in the base install, and that's exciting news in and of itself. To get our virtual machines onto the network, we have to spend some time setting up a virtual ethernet interface. We'll run a DHCP server on that, and it'll be the default route for our virtual machines. We'll keep all the VMs on a private network segment, and use NAT to allow them to get to the network. There is a way to directly bridge VMs to the network in some situations, but I won't be covering that today. Create an empty disk image for your new VM. I'd recommend 1.5GB to play with at first. You can do this without doas or root if you want your user account to be able to start the VM later. I made a "vmm" directory inside my home directory to store VM disk images in. You might have a different partition you wish to store these large files in. Boot up a brand new vm instance. You'll have to do this as root or with doas. You can download a -CURRENT install kernel/ramdisk (bsd.rd) from an OpenBSD mirror, or you can simply use the one that's on your existing system (/bsd.rd) like I'll do here. The command will start a VM named "test.vm", display the console at startup, use /bsd.rd (from our host environment) as the boot image, allocate 256MB of memory, attach the first network interface to the switch called "local" we defined earlier in /etc/vm.conf, and use the test image we just created as the first disk drive. Now that the VM disk image file has a full installation of OpenBSD on it, build a VM configuration around it by adding the below block of configuration (with modifications as needed for owner, path and lladdr) to /etc/vm.conf. I've noticed that VMs with much less than 256MB of RAM allocated tend to be a little unstable for me.
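The vm.conf block itself isn't reproduced in these notes. Based on OpenBSD's vm.conf(5), it would look roughly like this; the owner, path and lladdr below are placeholders to adjust for your system, as the article says:

```
vm "test.vm" {
    disable                      # leave stopped; the owner starts it manually
    owner user                   # account allowed to control the VM without root
    memory 256M
    disk "/home/user/vmm/test.img"
    interface {
        switch "local"
        lladdr fe:e1:ba:d0:45:67 # the hard-coded lladdr generated earlier
    }
}
```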
You'll also note that in the "interface" clause, I hard-coded the lladdr that was generated for it earlier. By specifying "disable" in vm.conf, the VM will show up in a stopped state that the owner of the VM (that's you!) can manually start without root access. Let us know how VMM works for you *** News Roundup openbsd changes of note 621 (http://www.tedunangst.com/flak/post/openbsd-changes-of-note-621) More stuff, more fun. Fix script to not perform tty operations on things that aren't ttys. Detected by pledge. Merge libdrm 2.4.79. After a forced unmount, also unmount any filesystems below that mount point. Flip previously warm pages in the buffer cache to memory above the DMA region if uvm tells us it is available. Pages are not automatically promoted to upper memory. Instead it's used as additional memory only for what the cache considers long term buffers. I/O still requires DMA memory, so writing to a buffer will pull it back down. Makefile support for systems with both gcc and clang. Make i386 and amd64 so. Take a more radical approach to disabling colours in clang. When the data buffered for write in tmux exceeds a limit, discard it and redraw. Helps when a fast process is running inside tmux running inside a slow terminal. Add a port of witness(4) lock validation tool from FreeBSD. Use it with mplock, rwlock, and mutex in the kernel. Properly save and restore FPU context in vmm. Remove KGDB. It neither compiles nor works. Add a constant time AES implementation, from BearSSL. Remove SSHv1 from ssh. and more... *** Digging into BSD's choice of Unix group for new directories and files (https://utcc.utoronto.ca/~cks/space/blog/unix/BSDDirectoryGroupChoice) I have to eat some humble pie here. In comments on my entry on an interesting chmod failure, Greg A. 
Woods pointed out that FreeBSD's behavior of creating everything inside a directory with the group of the directory is actually traditional BSD behavior (it dates all the way back to the 1980s), not some odd new invention by FreeBSD. As traditional behavior it makes sense that it's explicitly allowed by the standards, but I've also come to think that it makes sense in context and in general. To see this, we need some background about the problem facing BSD. In the beginning, two things were true in Unix: there was no mkdir() system call, and processes could only be in one group at a time. With processes being in only one group, the choice of the group for a newly created filesystem object was easy; it was your current group. This was felt to be sufficiently obvious behavior that the V7 creat(2) manpage doesn't even mention it. Now things get interesting. 4.1c BSD seems to be where mkdir(2) is introduced and where creat() stops being a system call and becomes an option to open(2). It's also where processes can be in multiple groups for the first time. The 4.1c BSD open(2) manpage is silent about the group of newly created files, while the mkdir(2) manpage specifically claims that new directories will have your effective group (ie, the V7 behavior). This is actually wrong. In both mkdir() in sys_directory.c and maknode() in ufs_syscalls.c, the group of the newly created object is set to the group of the parent directory. Then finally in the 4.2 BSD mkdir(2) manpage the group of the new directory is correctly documented (the 4.2 BSD open(2) manpage continues to say nothing about this). So BSD's traditional behavior was introduced at the same time as processes being in multiple groups, and we can guess that it was introduced as part of that change. When your process can only be in a single group, as in V7, it makes perfect sense to create new filesystem objects with that as their group.
It's basically the same case as making new filesystem objects be owned by you; just as they get your UID, they also get your GID. When your process can be in multiple groups, things get less clear. A filesystem object can only be in one group, so which of your several groups should a new filesystem object be owned by, and how can you most conveniently change that choice? One option is to have some notion of a 'primary group' and then provide ways to shuffle around which of your groups is the primary group. Another option is the BSD choice of inheriting the group from context. By far the most common case is that you want your new files and directories to be created in the 'context', ie the group, of the surrounding directory. If you fully embrace the idea of Unix processes being in multiple groups, not just having one primary group and then some number of secondary groups, then the BSD choice makes a lot of sense. And for all of its faults, BSD tended to relatively fully embrace its changes. While it leads to some odd issues, such as the one I ran into, pretty much any choice here is going to have some oddities. Centrally managed Bhyve infrastructure with Ansible, libvirt and pkg-ssh (http://www.shellguardians.com/2017/05/centrally-managed-bhyve-infrastructure.html) At work we've been using Bhyve for a while to run non-critical systems. It is a really nice and stable hypervisor even though we are using an earlier version available on FreeBSD 10.3. This means we lack Windows and VNC support among other things, but it is not a big deal. After some iterations in our internal tools, we realised that the installation process was too slow and we always repeated the same steps. Of course, any good sysadmin will scream "AUTOMATION!" and so did we. Therefore, we started looking for different ways to improve our deployments.
We had a look at existing frameworks that manage Bhyve, but none of them had a feature we find really important: a centralized repository of VM images. For instance, SmartOS applies this method successfully by having a backend server that stores a catalog of VMs and Zones, meaning that new instances can be deployed in a minute at most. This is a game changer if you are really busy in your day-to-day operations. The following building blocks are used:

A ZFS snapshot of an existing VM. This will be our VM template.
A modified version of oneoff-pkg-create to package the ZFS snapshots.
pkg-ssh and pkg-repo to host a local FreeBSD repo in a FreeBSD jail.
libvirt to manage our Bhyve VMs.
The Ansible modules virt, virt_net and virt_pool.

Once automated, the installation process takes 2 minutes at most, compared with the 30 minutes needed to install a VM manually, and it also allows us to deploy many guests in parallel.

NetBSD maintainer in the QEMU project (https://blog.netbsd.org/tnf/entry/netbsd_maintainer_in_the_qemu)

QEMU - the FAST! processor emulator - is a generic, open-source machine emulator and virtualizer. It defines the state of the art in modern virtualization. This software has been developed for multiplatform environments, with support for NetBSD since virtually forever. It's the primary tool used by the NetBSD developers and the release engineering team: it is run in continuous integration tests for daily commits and executes regression tests through the Automatic Test Framework (ATF). With version 2.9 of the emulator, the QEMU developers warned the Open Source community that they will eventually drop support for suboptimally supported hosts if nobody steps in and takes over maintainership to refresh that support. This warning was directed at the major BSDs, Solaris, AIX and Haiku. Thankfully the NetBSD position has been filled, restoring official NetBSD maintenance.
Beastie Bits

OpenBSD Community Goes Gold (http://undeadly.org/cgi?action=article&sid=20170510012526&mode=flat&count=0)
CharmBUG's Tor Hack-a-thon has been pushed back to July due to scheduling difficulties (https://www.meetup.com/CharmBUG/events/238218840/)
Direct Rendering Manager (DRM) driver for i915, from the Linux kernel to Haiku with the help of DragonFlyBSD's Linux compatibility layer (https://www.haiku-os.org/blog/vivek/2017-05-05_[gsoc_2017]_3d_hardware_acceleration_in_haiku/)
TomTom lists OpenBSD in license (https://twitter.com/bsdlme/status/863488045449977864)
London NetBSD Meetup on May 22nd (https://mail-index.netbsd.org/regional-london/2017/05/02/msg000571.html)
KnoxBUG meeting May 30th, 2017 - Introduction to FreeNAS (http://knoxbug.org/2017-05-30)

***

Feedback/Questions

Felix - Home Firewall (http://dpaste.com/35EWVGZ#wrap)
David - Docker Recipes for Jails (http://dpaste.com/0H51NX2#wrap)
Don - GoLang & Rust (http://dpaste.com/2VZ7S8K#wrap)
George - OGG feed (http://dpaste.com/2A1FZF3#wrap)
Roller - BSDCan Tips (http://dpaste.com/3D2B6J3#wrap)

***
Is Game Center missing? The new iPods are now available, the iPod Touch 4G isn’t quite the fastest iOS device out there, field test mode is back, Exolife, and a whole lot more on episode 27 of TWiiPhone!
Discover Fx Factory Pro, a set of plug-ins published by Noise Industries, compatible with Final Cut Express 4 and Final Cut Pro.

System requirements (according to the Noise Industries site):

Mac OS X Leopard version 10.5, or Mac OS X Tiger version 10.4.9 (or later)
A PowerPC G4, G5 or Intel Mac
Final Cut Studio 2: Final Cut Pro version 6.0 or later, Motion version 3.0 or later
Final Cut Studio: Final Cut Pro version 5.1.2 to 5.1.4, Motion version 2.1.2
Final Cut Express version 4.0 or later
One of the following graphics cards:
• ATI Radeon 9550, 9650, 9600, 9600 XT, 9700, 9800, 9800 XT
• ATI Radeon X800, X850, X1600, X1900 XT
• ATI Radeon HD 2400 XT, 2600 PRO, 2600 XT
• ATI Mobility Radeon 9600, 9700, 9800, X1600
• NVIDIA GeForce 6600, 6800 Ultra DDL, 6800 GT DDL
• NVIDIA GeForce 7300, 7600, 7800 GT, 7800 Ultra
• NVIDIA GeForce 8600M GT, 8800 GT
• NVIDIA Quadro FX 4500, 5600
For best performance, a graphics card with at least 256MB of video memory is recommended.
The following graphics cards are NOT supported:
• NVIDIA GeForce FX Go 5200
• NVIDIA GeForce FX 5200 Ultra
• Intel GMA 950, X3100
• ATI Radeon 8500, 9000, 9200
The MacBook, MacBook Air and Mac mini (all models) are not supported because of the limitations of their integrated graphics.
In this no-holds-barred installment we take a stick to "Unfinished Business." Did it resolve any unanswered questions, or was it just a waste of 45 minutes on the DVR? True to our role as the scrappy underdog, we were forced to record on a PC with 256MB of RAM. Yes, they still exist. No, I didn't think XP could run on that either. The resulting recording had some blips and dropouts, but nothing too bad, and I was able to clean up the worst of it. We apologize all the same.
**_Capcom Generation 2: The Chronicles of Arthur_** Developer: Capcom / Publisher: Capcom
*Ghosts'n Goblins (Makaimura)
*Ghouls'n Ghosts (Dai Makaimura)
*Super Ghouls'n Ghosts (Cho Makaimura)

** _D_** Developer: WARP / Publisher: Acclaim
A horror-themed interactive movie and adventure game directed by Kenji Eno. The first entry in the D series starring "digital actress" Laura, controlled through full-motion video (FMV) sequences. The game must be completed within two hours, with no save or pause function.

** _Deep Fear_** Developer: SEGA AM7 & System Sacom (Mansion & Lunacy) / Publisher: SEGA
A 1998 survival horror game and SEGA's answer to Capcom's Resident Evil; the last Saturn game released in Europe. Features include real-time item use, aiming while moving, and oxygen management. Music was composed by Kenji Kawai (The Ring, Ghost in the Shell, Ranma, Kamen Rider). Creatures and characters were designed by manga artist Yasushi Nirasawa (Kamen Rider).

** _Dracula X_** Developer: Konami / Publisher: Konami
The Saturn version contains exclusive features, including a new playable character (Maria Renard), new items, new enemies, two new areas (Cursed Prison and Underground Garden) and a secret outfit for Richter Belmont. Alucard gains a third equippable hand for usable items such as healing items. The game was badly ported: the developers did not take advantage of the Saturn's 2D sprite capabilities, instead rendering the game the same way as on the PlayStation, which lacks a true 2D sprite mode. This resulted in stretched graphics (the PS1 runs at 256x224 resolution, while the Saturn runs at 352x240), longer loading times, and altered transparency effects due to the lack of alpha transparency for 3D polygons.

** _Enemy Zero_** Developer: WARP / Publisher: SEGA
A 1996 survival horror adventure directed by Kenji Eno. The second game to star "digital actress" Laura, voiced here by Jill Cunniff of Luscious Jackson.
** _House of the Dead_** Developer (port): Tantalus / Publisher: SEGA
The port suffered from a rushed development; the framerate drops to 20fps. Extra game modes were added, including character selection and a boss battle mode.

** _Lunacy (Gekka Mugentan Torico)_** Developer: System Sacom / Publisher: SEGA
An interactive movie with a simple interface and complex puzzles. The game is primarily a long series of interconnecting FMV sequences.

** _Mr. Bones_** Developer: Zono Incorporated & Angel Studios (RE2 on N64) / Publisher: SEGA
A multi-genre game created for SEGA by Ed Annunziata. Very few levels share the same gameplay or perspective; genres include action/platform, music/rhythm, puzzle and memorization. The game's soundtrack was composed and performed by famed guitarist Ronnie Montrose.

** _Night Warriors: Darkstalkers Revenge_** Developer: Capcom / Publisher: Capcom
An exclusive port to the Saturn at a time when the original Darkstalkers for the PlayStation had no confirmed release date, which resulted in angry PlayStation owners. The original Darkstalkers used 128MB for all characters, while Vampire Hunter uses 256MB in total, resulting in an average of 500 extra animation patterns per character.

** _Resident Evil_** Developer: Capcom / Publisher: Capcom
The Saturn version added an unlockable Battle minigame in which the player must traverse a series of rooms from the main game and eliminate all enemies within them using weapons selected by the player. The minigame features two exclusive enemies not in the main game: a zombie version of Wesker and a gold-colored Tyrant. The player's performance is graded at the end of the minigame. The Japanese version is the most gore-laden of all the platforms. The Saturn version also features exclusive enemy monsters (re-skinned Hunters called Ticks and a second Tyrant prior to the game's final battle) and exclusive outfits for Jill and Chris.

Developer: Capcom / Publisher: Capcom Released in Japan only in 1998 and r