Podcasts about 128kb

  • 10 PODCASTS
  • 379 EPISODES
  • 1h 47m AVG DURATION
  • ? INFREQUENT EPISODES
  • Jan 10, 2025 LATEST

POPULARITY (episode activity chart, 2017-2024)


Best podcasts about 128kb

Latest podcast episodes about 128kb

128KB Podcast
Is This The Future Of Handheld Gaming?

Jan 10, 2025 · 84:10


CES has been a dream for the handheld gaming space, with more announcements than we could ever imagine. We take a deep dive in today's 128KB podcast, looking at the biggest news, from SteamOS and the Lenovo Legion Go 2 to Xbox OS and more...

128KB Podcast
We Were Wrong About The Xbox...

Aug 30, 2023 · 32:41


We never thought we'd say this but... we were wrong about the Xbox. The Xbox has always been left out here at 128KB; it's something we've never made content about because, the way we saw it, "we're PC gamers", so we didn't give it the time of day. Over the past several months there have been features piquing our interest: backwards compatibility, Xbox Game Pass and, of course, let's not forget... possibly the most hyped game this year... Starfield. All of this and more contributed to Andy buying an Xbox Series X.

128KB Podcast
Dark Pictures Anthology Games Are BROKEN

Nov 11, 2022 · 18:40


Dark Pictures Anthology Games Are BROKEN! We recently did our preview of the latest entry in the series, Dark Pictures Anthology: The Devil In Me. Andy, from our Nintendo channel 128KB, saw the preview and immediately bought the first 3 games in the series but, unfortunately, the games are completely unplayable due to game-breaking bugs. These bugs have been present since September the 28th! AJ & Andy dive into all of this and more, discussing why the Dark Pictures Anthology games are BROKEN! If you're brave enough, purchase The Devil In Me here: https://geni.us/BBcM5l

The Lunduke Journal of Technology
Linux, Alternative OS, & Retro Computing News - Oct 2, 2022

Oct 3, 2022 · 47:10


It's Sunday! Which means it's time for some Linux, Alternative OS, & Retro Computing news! You know… the important stuff. The podcast & the article! All in one spot! Huzzah!

GNU toolchain hosting moving to… Linux Foundation

It appears that the GNU toolchain projects — which include GCC, Make and glibc — are preparing to move their hosting entirely to… The Linux Foundation. Seriously. From the announcement:

"During the Sourceware / Infrastructure BoF sessions at GNU Cauldron, the GNU Toolchain community in collaboration with the Linux Foundation and OpenSSF, announced the GNU Toolchain Infrastructure project (GTI). The collaboration includes a fund for infrastructure and software supply chain security, which will allow us to utilize the respected Linux Foundation IT (LF IT) services that host kernel.org and to fund other important projects."

This will definitely not end badly. *cough*

VM2 - a modern VMU for the Sega Dreamcast!

A new project to create a modern, updated memory unit for the Sega Dreamcast has raised almost 150 thousand dollars over on Indiegogo. And, I gotta say, it looks kinda awesome.

"The VM2 project aims in the total reproduction and upgrade of the original VMU for our beloved Dreamcast. The VMU was one of the greatest console's assets, but with many design flaws. Now, with the VM2, all of these flaws are eliminated giving the user an experience that truly feels like next-gen! Internally, the VM2 received a totally fresh design with modern electronics. Externally, it is upgraded and at the same time keeps the original looks & feels, as a tribute to the original VMU.

Features & Upgrades:
  • New monochrome backlit LCD
  • Higher screen resolution
  • Micro-SD storage
  • Internal storage of 128KB (200 blocks)
  • Embedded high-capacity battery
  • USB-C port (for charging & connecting to a PC)
  • Original Audio support
  • DreamEye support
  • Original language support (EN/JP)
  • LCD game images streaming to PC
  • Charging from both the controller, and the USB-C port
  • Support for VM2-to-VM2 connection (with future firmware update)"

NES-OS… an OS. For the NES.

There is a new OS — albeit a limited one — for the Nintendo Entertainment System: NES-OS.

"NESOS is an operating system designed for the Nintendo Entertainment and Family Computer Systems. The operating system features two core applications, the word processor, and the settings. The word processor allows users to print characters and certain blocks to the screen, then save that data in the form of a file for later use or editing. The settings app displays system information and lets the user select one of seven cursors, and one of 53 possible desktop background colors. It also acts as the file manager, allowing users to delete their saved files."

It's limited. Highly limited. Only able to save 8 files (of up to 2k each). But, still. Super cool that it was done at all.

Fun side note: This is not the first project called "NES-OS". There was another one created 6 years ago, which you can find on GitHub, which took a completely different approach. That one consisted of a design for a PS/2 hardware interface (so a keyboard could be connected to the NES), a command line, a BrainF interpreter, and two sample games (Life and Snake).

Fedora shipping without some codecs

Fedora 37 has disabled GPU support for some media codecs (such as H.264) due to legal concerns. This has caused many to be quite annoyed — understandably — because not having support for some popular codecs is inconvenient. That said… this is not exactly unprecedented. In fact… this is the very reason that Linux Mint exists at all. Mint was created for the sole purpose of having some media codecs preinstalled… codecs that Ubuntu did not feel like they could legally distribute in many countries at the time. There was, quite literally, no other reason for Mint existing in those early days. Likewise, openSUSE (which I used to be on the Board for) also opted to not distribute many such "legally dubious" media codecs — such as MP3 — back in those days, resulting in multiple openSUSE-based distros that added the codecs in.

System76 ditching GTK for new POP!_OS desktop

It appears that System76, the company behind the Pop!_OS Linux distribution, is working to ditch GTK for their upcoming "Cosmic" desktop environment, instead opting to use the Rust-based Iced framework. According to one of the developers:

"After much deliberation and experimentation over the last year, the engineering team has decided to use Iced instead of GTK. Iced is a native Rust GUI toolkit that's made enough progress lately to become viable for use in COSMIC. Various COSMIC applets have already been written in both GTK and Iced for comparison. The latest development versions of Iced have an API that's very flexible, expressive, and intuitive compared to GTK. It feels very natural in Rust, and anyone familiar with Elm will appreciate its design. COSMIC Settings will be developed in tandem with, and from, this toolkit."

It remains to be seen how this will turn out. I have many questions and thoughts:
  • How will this affect GTK apps running on Pop!_OS?
  • From looking at the screenshots, it's hard to see a visual or usability reason for the change.
  • Is this another example of "let's re-write it in Rust because Rust is a religion and it is blasphemy to not use Rust"?
  • Yet another GUI toolkit and desktop, eh? That sort of thing doesn't have a great track record for working out for distro companies that try it.
  • More fragmentation in an already fragmented Linux GUI application ecosystem.

That said… I certainly appreciate when projects blaze their own trail. So… who knows! Could be great!

The Lunduke Journal Community — About the Lunduke Journal — Subscriber Perks

The Lunduke Journal Weekly Schedule:
  • Monday - Computer History
  • Tuesday - Computer & Linux Satire
  • Wednesday - Podcast (Subscriber Exclusive)
  • Thursday - Computer History (Subscriber Exclusive)
  • Friday - Wildcard day! Anything goes!
  • Saturday - Comics
  • Sunday - Linux, Alternative OS, & Retro Computer News

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lunduke.substack.com/subscribe

Anerzählt
128KB war mal viel Speicher =^_^=

Aug 10, 2022 · 7:07


Back when I was young, a modest 128kb of RAM made us the kings of the schoolyard! Reason enough to spend an entire episode wallowing in memories... so.

Adafruit Industries
EYE on NPI - Microchip's AVR-IoT Cellular Mini Dev Board + Sequans Monarch 2 GM02S cellular module

Jun 28, 2022 · 14:35


This week's EYE ON NPI will pique your curiosity... it's Microchip's AVR-IoT Cellular Mini Development Board featuring the Sequans Monarch 2 GM02S cellular module (https://www.digikey.com/en/product-highlight/m/microchip-technology/avr-iot-cellular-mini-development-board). This dev board really knows how to get us interested, with an AVR128DB48 128KB-flash AVR chip (https://www.digikey.com/short/4pq3d2q3), Sequans GM02S compact LTE cell module (https://www.digikey.com/short/4qbfcn17), Feather format (https://learn.adafruit.com/adafruit-feather), Stemma QT / Qwiic connector (https://learn.adafruit.com/introducing-adafruit-stemma-qt/what-is-stemma-qt) and Arduino library support (https://github.com/microchip-pic-avr-solutions/avr-iot-cellular-arduino-library). This is an excellent dev board to use if you want to take advantage of the huge ecosystem afforded by Arduino, Feather and Qwiic/QT - you should be able to use the many thousands of libraries and hardware accessories for quick prototyping.

As mentioned, this dev board is Feather shaped, with a USB Type C connector, Li-poly battery charger, and a built-in programmer/debugger/serial interface for the AVR chip. The individual microcontroller is an AVR128DB48 AVR (https://www.digikey.com/short/4pq3d2q3) which comes with 128KB flash and 16KB RAM. Think of it as a super-beefy ATmega328P! A SAMD21E 'curiosity' chip is used as the programming interface and also as the serial interface. When plugged in, the board shows up as a disk drive, with a getting-started guide bookmark and some other specification files. While following the getting-started guide we found that you can also drag hex files over to program it, very handy for a quick start! To program it, MCP suggests using the DxCore from SpenceKonde (https://github.com/SpenceKonde/DxCore) in Arduino, so you'll want to get that installed while you follow the rest of the guide.

Next up, time to activate the included SIM card from Truphone (https://activate.truphone.com/) that comes with the dev kit. This SIM is free to activate and is good for up to 150 MB of data transfer and 90 days, which is plenty of time to explore the board before needing to renew. Activating the SIM only took us 5 minutes - don't forget to power cycle after activation to make sure the module and SIM re-authorize.

One of the surprises we had while trying this eval board is the really good documentation and learning system over at https://iot.microchip.com/avr-iot-cellular-mini - we're kinda used to verbose text-based documentation or using specialized software that only runs on Windows. This is the first time we've seen a really nice documentation system with simple step-by-steps, lots of photos, links and a clear navigation system. There are also two interesting in-browser compilation and serial monitor widgets that we spotted, which is a good sign that folks are starting to move towards browser-and-filesystem replacements for toolchains.

The Arduino library code is available over at https://github.com/microchip-pic-avr-solutions/avr-iot-cellular-arduino-library - it looks like it's got PlatformIO support, and you can download a release for installation into the Arduino IDE. (We do recommend someone at MCP try to add a proper release to the Arduino library manager to save one extra step if possible!) Once installed, there are a few helpful examples to get you going. The first one is just connecting to an HTTP endpoint and parsing out the result, and it worked... really well! We were able to connect to the AT&T cellular network and fetch the data within a minute. We would, however, request an HTTPS example, since most folks will want a TLS-secured connection! Since the board is Arduino and Stemma QT compatible, we were able to connect an OLED and extend the example to display the fetched data on the OLED - it only took us 10 minutes to install everything for Arduino library support and extend the code, which is amazingly fast! You know what else is really fast? Digi-Key shipping for the AVR-IoT Cellular Mini Development Board, because it's in stock right now! (https://www.digikey.com/short/bn7mp80w) Order today and you'll be connecting to the LTE cellular network by tomorrow afternoon.

128KB Podcast
Pokemon Arceus Pokedex Leaked - Call of Duty will be an XBOX exclusive- 128KB Quick Bits 1

Jan 22, 2022 · 7:38


128KB Quick Bits Episode 1 - your weekly bite of gaming news including Pokemon Arceus Pokedex Leaked - Call of Duty will be an XBOX exclusive? Follow Us: https://www.youtube.com/128kbNews https://128kb.co.uk

Nebuchadnezzar
Más grande, más largo y sin cortes (y en Dolby Atmos)

Nov 2, 2021 · 124:01


Everything keeps getting bigger and therefore harder to manage. Let's analyse this impossible situation. The first podcast mixed in Dolby Atmos with spatial audio! In the early days of home computing, the capacity and capabilities of systems were tiny compared with today. So was their size. An operating system like MS-DOS fit on a single disk of barely 720KB or 1.44MB, depending on the version. The first version of macOS for the original Macintosh fit on one disk and could run with just 128KB of memory. A game like Day of the Tentacle, with an intro that looked like a cartoon, digital voices in the introduction and a full soundtrack, fit on five 1.44MB disks. The same goes for Monkey Island 2 or DOOM. Back then, because of the limitations of both storage and processing power, programs, games and systems… took up far less space and were easier to keep under control. For developers, making the most of the space was a virtue. Little by little, ever more complex systems arrived and, above all, more capacity, more space… 650MB CDs, then 700MB, DVDs of almost 5 gigs, and hard drives that went from being measured in megs or gigs to being measured in teras. Everything we handle today has grown more complicated and ever bigger, longer, with no cuts in its development: living products that evolve version by version and get heavier with each release. Gigantic and hard to maintain, because can you imagine what it takes today to coordinate and manage a project like Windows or macOS, or Android, or iOS, big-budget games, or software like Photoshop, DaVinci Resolve or Cinema 4D? Everything ever bigger, longer and uncut. Oliver Nabani - Twitter: @olivernabani | Twitch: Se Dice Mashain. Julio César Fernández - Twitter: @jcfmunoz | Twitch: Apple Coding | Podcast: Apple Coding | Training: Apple Coding Academy | Consulting: Gabhel Studios

128KB Podcast
Nintendo Direct Reaction - N64 & Sega Genesis Controllers are REAL!! // 128KB Tech Podcast Ep10

Sep 26, 2021 · 71:42


We take a deep dive into this week's Nintendo Direct in this week's live podcast. We pick out our favourite game announcements from the Direct, as well as getting more than a little excited about the N64 & Sega Mega Drive controllers and the Genesis & N64 games coming to Nintendo Switch Online. Watch The Full Review: https://www.youtube.com/watch?v=5SWCW6maHY0 Follow Us on YouTube: https://www.youtube.com/channel/UCanRIbY9MBoJIWgC7YL2sMA Watch Our Live Gameplay: https://twitch.tv/128kblive Visit 128KB Website: https://128kb.co.uk

128KB Podcast
PS5 Playstation Summer Showcase Special // 128KB Tech Podcast Ep9

Sep 13, 2021 · 85:29


Ep9 of the 128KB Tech Podcast. This week we're running a PlayStation special to cover all of the announcements and releases from the PS5 PlayStation Summer Showcase. We take a deep dive into Star Wars: Knights of the Old Republic coming to PS5, Alan Wake Remastered, Wolverine & Spider-Man 2, as well as looking at GTA 5 on the PS5... is it one re-release too many for the GTA 5 giant? Watch The Full Review: https://youtu.be/ggyrsTkolng Follow Us on YouTube: https://www.youtube.com/channel/UCanRIbY9MBoJIWgC7YL2sMA Watch Our Live Gameplay: https://twitch.tv/128kblive Visit 128KB Website: https://128kb.co.uk

128KB Podcast
GTA 3 Remake, New COD, NVIDIA GPU Shortages Throughout 2022 & Stadia's Last Stand // 128KB Tech Podcast Ep8

Aug 25, 2021 · 70:12


Episode 8 of the 128kb Tech Podcast is here. This week Andy & Aj take a look at the world of Gaming, Hardware & Tech, with the Remake of the GTA III Trilogy leaked, Nvidia's GPU shortages lasting throughout 2022, Skyrim's 10th Anniversary update and Stadia's last Stand! All that and more in this week's 128KB Tech Podcast Visit The Website: https://www.128kb.co.uk Watch The Podcast: https://www.youtube.com/channel/UChJjXF9SuONQ9SmJcV3xNDg Watch The Clips Show: https://www.youtube.com/channel/UC4H3E0C-gQcmgDWNsT67Gqw

128KB Podcast
iPhone 13 Specs, ASUS TUF H3, The Playdate & Zelda's Back // 128KB Tech Podcast Ep7

Aug 2, 2021 · 74:09


Episode 7 of the 128kb Tech Podcast is here. This week Andy & Aj take a look at the iPhone 13 leaks, review the ASUS TUF H3 Headset, Skyward Sword HD, the Playdate & PS5's new Tournament Mode. All that and more in this week's Tech Podcast from 128KB. Visit The Website: https://www.128kb.co.uk Watch The Podcast: https://www.youtube.com/channel/UChJjXF9SuONQ9SmJcV3xNDg Watch The Clips Show: https://www.youtube.com/channel/UC4H3E0C-gQcmgDWNsT67Gqw

128KB Podcast
$1.5 Million Mario, Zelda Skyward Sword HD & Nintendo Switch OLED // 128KB Tech Podcast Ep6

Jul 16, 2021 · 83:27


Episode 6 of the 128kb Tech Podcast is here. This week Andy & Aj take a look at the Nintendo Switch OLED, the $1.5 Million Auction of an N64 game, the M1X iMac Pro & the re-release of Zelda Skyward Sword HD. Watch The Podcast: https://www.youtube.com/channel/UChJjXF9SuONQ9SmJcV3xNDg Watch The Clips Show: https://www.youtube.com/channel/UC4H3E0C-gQcmgDWNsT67Gqw

128KB Podcast
200MP Smartphone Cameras, SteelSeries Rival 5 & the $500,000 NES Game // 128KB Tech Podcast Ep5

Jul 5, 2021 · 88:07


Episode 5 of the 128kb Tech Podcast is here. This week Andy & Aj take a look at Nintendo's news about EMMI & Metroid 5, Playing Gameboy Cartridges on PC, The 200MP Smartphone Camera from Xiaomi, The Steel Series Rival 5 & we get hands on with the Evercade Handheld Retro Games Console. Watch The Podcast: https://www.youtube.com/channel/UChJjXF9SuONQ9SmJcV3xNDg Watch The Clips Show: https://www.youtube.com/channel/UC4H3E0C-gQcmgDWNsT67Gqw

128KB Podcast
Apple Forced To Sideload Apps, Cyberpunk 2077 is Back & Samsung Z Fold 3 // 128KB Tech Podcast Ep4

Jun 28, 2021 · 96:55


Episode 4 of the 128kb Tech Podcast is here. This week Andy & Aj take a look at Apple being forced to sideload apps, Cyberpunk 2077 is back, the amazing retro console the Evercade, Ghost of Tsushima on PC, Pokemon's Lawsuit, the Wii U in 2021 and the Samsung Z Fold/Flip 3. Watch The Podcast: https://www.youtube.com/channel/UChJjXF9SuONQ9SmJcV3xNDg Watch The Clips Show: https://www.youtube.com/channel/UC4H3E0C-gQcmgDWNsT67Gqw

128KB Podcast
The E3 Special - Nintendo Direct & Xbox Exclusives // 128KB Tech Podcast Ep3

Jun 16, 2021 · 91:11


Episode 3 of the 128kb Tech Podcast is here. This week is an E3 Special as we dig into all the best new releases to come from Microsoft, Xbox and the Nintendo Direct event. Watch The Podcast: https://www.youtube.com/channel/UChJjXF9SuONQ9SmJcV3xNDg Watch The Clips Show: https://www.youtube.com/channel/UC4H3E0C-gQcmgDWNsT67Gqw

128KB Podcast
iPhone 13 120hz Display, The Death of Intel & Worst Smartphone of 2021 // 128KB Tech Podcast Ep2

Jun 8, 2021 · 86:07


Episode 2 of the 128kb Tech Podcast is here. This week we look at the rumours of a 120Hz display on the iPhone 13, a 360Hz-display gaming laptop, the death of Intel and probably the worst-looking smartphone of the year. Watch The Podcast: https://www.youtube.com/channel/UChJjXF9SuONQ9SmJcV3xNDg Watch The Clips Show: https://www.youtube.com/channel/UC4H3E0C-gQcmgDWNsT67Gqw

128KB Podcast
Nintendo Switch Pro, ROG Claymore ii & Best Value Macs Ever!! // 128KB Tech Podcast Ep1

Jun 1, 2021 · 91:19


We take a closer look at the world of tech, computing and gaming in this week's 128KB Podcast, including the Nintendo Switch Pro, a $6k Apple Watch, the ROG Claymore ii, the best value iMacs ever, the PC that no one wants and why Cyberpunk 2077 is still broken!! Watch The Podcast: https://www.youtube.com/channel/UChJjXF9SuONQ9SmJcV3xNDg Watch The Clips Show: https://www.youtube.com/channel/UC4H3E0C-gQcmgDWNsT67Gqw

Adafruit Industries
STM32F411 BlackPill supports CircuitPython

Mar 26, 2021 · 0:39


These low cost "Black Pill" STM32F411 boards are a lovely upgrade to the STM32F401. With 512KB of flash and 128KB of RAM, this board has lots of GPIO for projects. And now you can use it without a toolchain or IDE setup - all by installing CircuitPython! Here we loaded the CircuitPython build on with STM32Cube and with a few lines of code have an OLED screen showing the REPL. https://www.adafruit.com/product/4877 #adafruit #circuitpython #blackpill Visit the Adafruit shop online - http://www.adafruit.com ----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Adafruit on Instagram: https://www.instagram.com/adafruit Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/ -----------------------------------------
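
For anyone curious what "a few lines of code" looks like in practice, here is a minimal, hedged CircuitPython sketch of the idea: once a displayio display is created, CircuitPython mirrors the serial console and REPL to it by default. The I2C pins, the 0x3C address and the SSD1306 driver are assumptions for illustration; adjust them to your wiring and the Black Pill board definition.

```python
# Hedged sketch: CircuitPython on an STM32F411 "Black Pill" driving a small
# SSD1306 OLED over I2C. Pin names and the 0x3C address are assumptions.
import board
import busio
import displayio
import adafruit_displayio_ssd1306  # from the Adafruit CircuitPython bundle

displayio.release_displays()  # free any pins a previous display claimed

i2c = busio.I2C(board.SCL, board.SDA)                        # assumed I2C pins
display_bus = displayio.I2CDisplay(i2c, device_address=0x3C)  # assumed address

# Creating the display is enough: CircuitPython shows the serial console /
# REPL on it until your own code installs a different root group.
display = adafruit_displayio_ssd1306.SSD1306(display_bus, width=128, height=64)
```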

Adafruit Industries
EYE on NPI: ST STM32L4P5 series microcontrollers

Apr 22, 2020 · 7:36


This week's EYE on NPI looks at a new microcontroller series from ST Micro. Yes, last week was also an ST part - but this one popped into my NPI feed and I just thought it was so interesting, ST gets a double-header! The STM32L4P5 series of chips (https://www.digikey.com/products/en?keywords=%20%09STM32L4P5) looks like an excellent competitor to the Microchip ATSAMD51 (https://www.digikey.com/products/en?keywords=atsamd51) we use so often - with matching-or-better specifications. Let's take a closer look! You can get more spec's about this chip over at ST's website (https://www.st.com/content/st_com/en/products/microcontrollers-microprocessors/stm32-32-bit-arm-cortex-mcus/stm32-ultra-low-power-mcus/stm32l4-plus-series/stm32l4p5-q5/stm32l4p5ce.html). Here's the overview: The Cortex-M4 core features a single-precision floating-point unit (FPU), which supports all the Arm® single-precision data-processing instructions and all the data types. The Cortex-M4 core also implements a full set of DSP (digital signal processing) instructions and a memory protection unit (MPU) that enhances the application’s security. These devices offer two fast 12-bit ADCs (5 Msps), two comparators, two operational amplifiers, two DAC channels, an internal voltage reference buffer, a low-power RTC, two general-purpose 32-bit timers, two 16-bit PWM timers dedicated to motor control, seven general-purpose 16-bit timers, and two 16-bit low-power timers. The devices support two digital filters for external sigma delta modulators (DFSDMs). In addition, up to 24 capacitive sensing channels are available. They also feature standard and advanced communication interfaces such as: four I2Cs, three SPIs, three USARTs, two UARTs and one low-power UART, two SAIs, two SDMMCs, one CAN, one USB OTG full-speed, one camera interface and one synchronous parallel data interface (PSSI). In particular, we like some of the 'upgrades' we see compared to other chips - the roomy 320KB RAM, 5 MSPS 12-bit ADCs (that's the same as a basic pocket oscilloscope!), 9 x 16-bit timers, CAN bus (usually you have to upgrade to get CAN support!), built in op-amps, and... most interesting to me is a built in TFT manager! Not just parallel (6800/8080) style but the 'real' 24-bit TFT with HSYNC/VSYNC/CLK signals! Usually you have to go to a Cortex M7 to get something like that included (see the iMX RT or STM32H7 series for example). 24-bit TFT can be easily converted to VGA (using some resistors) or even HDMI using off-the shelf adapter chips (https://www.digikey.com/product-detail/en/adafruit-industries-llc/2219/1528-1452-ND/5761220) so it's really a neat thing to see. True TFT output is a rarity because of the frame buffer you normally need. From what ST says in the datasheet the way this is managed in RAM is to have a 8-bit palette of 24-bit colors. So for a classic 4.3" TFT display (https://www.adafruit.com/product/1591) that is 480x272 pixels, that would mean 128KB of RAM to address all pixels. A 320x240 display would be only 75KB. 
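
The RAM figures quoted above follow directly from the panel resolution and the one-byte-per-pixel palette mode; here is a quick worked check (plain arithmetic, no assumptions beyond what the paragraph states):

```python
# Frame buffer size in palette mode: one byte per pixel indexes a 256-colour CLUT.
def framebuffer_kib(width: int, height: int, bytes_per_pixel: int = 1) -> float:
    """Return the frame buffer size in KiB for a given panel resolution."""
    return width * height * bytes_per_pixel / 1024

print(framebuffer_kib(480, 272))  # 127.5 -> the "128KB" figure for the 4.3" panel
print(framebuffer_kib(320, 240))  # 75.0  -> the 75KB figure
# The CLUT itself is tiny by comparison: 256 entries x 24 bits = 768 bytes.
```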
The LCD-TFT display controller provides a 24-bit parallel digital RGB (red, green, blue) and delivers all signals to interface directly to a broad range of LCD and TFT panels with the following features: • One display layer with dedicated FIFO (64 x 32-bit) • Color look-up table (CLUT) up to 256 colors (256 x 24-bit) per layer • Up to 8 input color formats selectable per layer • Flexible blending between two layers using alpha value (per pixel or constant) • Flexible programmable parameters for each layer • Color keying (transparency color) • Up to four programmable interrupt events Right now there's only two packages available - a 144 LQFP STM32L4P5ZGT6 (https://www.digikey.com/product-detail/en/stmicroelectronics/STM32L4P5ZGT6/497-STM32L4P5ZGT6-ND/11590990) and a 169-BGA STM32L4P5AGI6P (https://www.digikey.com/product-detail/en/stmicroelectronics/STM32L4P5AGI6P/497-STM32L4P5AGI6P-ND/11591137) but according to the datasheet there will be 48, 64 and 100 pin variant QFN & QFP's. For now we recommend picking up the STM32L4P5AGI6PU Discovery also known as STM32L4P5G-DK on Digi-Key (https://www.digikey.com/product-detail/en/stmicroelectronics/STM32L4P5G-DK/497-STM32L4P5G-DK-ND/11613090). Which has a built in debugger/programmer and is directly supported in STM32 Cube IDE. By the way, if you have not yet, subscribe to Digi-Key's new product feed at https://www.digikey.com/en/product-highlight/rss or visit the website (https://www.digikey.com/en/product-highlight/) for a nice interface to search through the latest exciting NPIs from Digi-Key! Visit the Adafruit shop online - http://www.adafruit.com LIVE CHAT IS HERE! http://adafru.it/discord Adafruit on Instagram: https://www.instagram.com/adafruit Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/

BSD Now
Episode 249: Router On A Stick | BSD Now 249

Jun 6, 2018 · 85:17


OpenZFS and DTrace updates in NetBSD, NetBSD network security stack audit, Performance of MySQL on ZFS, OpenSMTP results from p2k18, legacy Windows backup to FreeNAS, ZFS block size importance, and NetBSD as router on a stick. ##Headlines ZFS and DTrace update lands in NetBSD merge a new version of the CDDL dtrace and ZFS code. This changes the upstream vendor from OpenSolaris to FreeBSD, and this version is based on FreeBSD svn r315983. r315983 is from March 2017 (14 months ago), so there is still more work to do in addition to the 10 years of improvements from upstream, this version also has these NetBSD-specific enhancements: dtrace FBT probes can now be placed in kernel modules. ZFS now supports mmap(). This brings NetBSD 10 years forward, and they should be able to catch the rest of the way up fairly quickly ###NetBSD network stack security audit Maxime Villard has been working on an audit of the NetBSD network stack, a project sponsored by The NetBSD Foundation, which has served all users of BSD-derived operating systems. Over the last five months, hundreds of patches were committed to the source tree as a result of this work. Dozens of bugs were fixed, among which a good number of actual, remotely-triggerable vulnerabilities. Changes were made to strengthen the networking subsystems and improve code quality: reinforce the mbuf API, add many KASSERTs to enforce assumptions, simplify packet handling, and verify compliance with RFCs. This was done in several layers of the NetBSD kernel, from device drivers to L4 handlers. In the course of investigating several bugs discovered in NetBSD, I happened to look at the network stacks of other operating systems, to see whether they had already fixed the issues, and if so how. Needless to say, I found bugs there too. A lot of code is shared between the BSDs, so it is especially helpful when one finds a bug, to check the other BSDs and share the fix. The IPv6 Buffer Overflow: The overflow allowed an attacker to write one byte of packet-controlled data into ‘packetstorage+off’, where ‘off’ could be approximately controlled too. This allowed at least a pretty bad remote DoS/Crash The IPsec Infinite Loop: When receiving an IPv6-AH packet, the IPsec entry point was not correctly computing the length of the IPv6 suboptions, and this, before authentication. As a result, a specially-crafted IPv6 packet could trigger an infinite loop in the kernel (making it unresponsive). In addition this flaw allowed a limited buffer overflow - where the data being written was however not controllable by the attacker. The IPPROTO Typo: While looking at the IPv6 Multicast code, I stumbled across a pretty simple yet pretty bad mistake: at one point the Pim6 entry point would return IPPROTONONE instead of IPPROTODONE. Returning IPPROTONONE was entirely wrong: it caused the kernel to keep iterating on the IPv6 packet chain, while the packet storage was already freed. The PF Signedness Bug: A bug was found in NetBSD’s implementation of the PF firewall, that did not affect the other BSDs. In the initial PF code a particular macro was used as an alias to a number. This macro formed a signed integer. NetBSD replaced the macro with a sizeof(), which returns an unsigned result. The NPF Integer Overflow: An integer overflow could be triggered in NPF, when parsing an IPv6 packet with large options. This could cause NPF to look for the L4 payload at the wrong offset within the packet, and it allowed an attacker to bypass any L4 filtering rule on IPv6. 
The IPsec Fragment Attack: I noticed some time ago that when reassembling fragments (in either IPv4 or IPv6), the kernel was not removing the MPKTHDR flag on the secondary mbufs in mbuf chains. This flag is supposed to indicate that a given mbuf is the head of the chain it forms; having the flag on secondary mbufs was suspicious. What Now: Not all protocols and layers of the network stack were verified, because of time constraints, and also because of unexpected events: the recent x86 CPU bugs, which I was the only one able to fix promptly. A todo list will be left when the project end date is reached, for someone else to pick up. Me perhaps, later this year? We’ll see. This security audit of NetBSD’s network stack is sponsored by The NetBSD Foundation, and serves all users of BSD-derived operating systems. The NetBSD Foundation is a non-profit organization, and welcomes any donations that help continue funding projects of this kind. DigitalOcean ###MySQL on ZFS Performance I used sysbench to create a table of 10M rows and then, using export/import tablespace, I copied it 329 times. I ended up with 330 tables for a total size of about 850GB. The dataset generated by sysbench is not very compressible, so I used lz4 compression in ZFS. For the other ZFS settings, I used what can be found in my earlier ZFS posts but with the ARC size limited to 1GB. I then used that plain configuration for the first benchmarks. Here are the results with the sysbench point-select benchmark, a uniform distribution and eight threads. The InnoDB buffer pool was set to 2.5GB. In both cases, the load is IO bound. The disk is doing exactly the allowed 3000 IOPS. The above graph appears to be a clear demonstration that XFS is much faster than ZFS, right? But is that really the case? The way the dataset has been created is extremely favorable to XFS since there is absolutely no file fragmentation. Once you have all the files opened, a read IOP is just a single fseek call to an offset and ZFS doesn’t need to access any intermediate inode. The above result is about as fair as saying MyISAM is faster than InnoDB based only on table scan performance results of unfragmented tables and default configuration. ZFS is much less affected by the file level fragmentation, especially for point access type. ZFS stores the files in B-trees in a very similar fashion as InnoDB stores data. To access a piece of data in a B-tree, you need to access the top level page (often called root node) and then one block per level down to a leaf-node containing the data. With no cache, to read something from a three levels B-tree thus requires 3 IOPS. The extra IOPS performed by ZFS are needed to access those internal blocks in the B-trees of the files. These internal blocks are labeled as metadata. Essentially, in the above benchmark, the ARC is too small to contain all the internal blocks of the table files’ B-trees. If we continue the comparison with InnoDB, it would be like running with a buffer pool too small to contain the non-leaf pages. The test dataset I used has about 600MB of non-leaf pages, about 0.1% of the total size, which was well cached by the 3GB buffer pool. So only one InnoDB page, a leaf page, needed to be read per point-select statement. To correctly set the ARC size to cache the metadata, you have two choices. First, you can guess values for the ARC size and experiment. Second, you can try to evaluate it by looking at the ZFS internal data. Let’s review these two approaches. 
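
As a very rough illustration of the first, guess-and-experiment approach, here is a back-of-the-envelope sketch. The 850GB dataset size is the one from the benchmark above; the 0.1% metadata ratio and its scaling with recordsize are only the rule-of-thumb figures discussed next, not measured values.

```python
# Back-of-the-envelope ARC sizing: metadata grows with the number of records,
# so halving the recordsize roughly doubles the metadata to keep cached.
def arc_metadata_estimate_gib(dataset_gib: float, recordsize_kib: int,
                              ratio_at_128k: float = 0.001) -> float:
    """Scale the ~0.1%-of-data rule of thumb (stated for a 128 KiB recordsize)
    by the relative record count."""
    return dataset_gib * ratio_at_128k * (128 / recordsize_kib)

for rs in (128, 16, 8):
    print(f"recordsize {rs:>3} KiB -> ~{arc_metadata_estimate_gib(850, rs):.1f} GiB of metadata")
# 128 KiB -> ~0.9 GiB, 16 KiB -> ~6.8 GiB, 8 KiB -> ~13.6 GiB (orders of magnitude only)
```

If the dataset uses a 16 KiB recordsize to match InnoDB pages (an assumption), the estimate lands in the same ballpark as the 7GB ARC option mentioned below.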
You’ll read/hear often the ratio 1GB of ARC for 1TB of data, which is about the same 0.1% ratio as for InnoDB. I wrote about that ratio a few times, having nothing better to propose. Actually, I found it depends a lot on the recordsize used. The 0.1% ratio implies a ZFS recordsize of 128KB. A ZFS filesystem with a recordsize of 128KB will use much less metadata than another one using a recordsize of 16KB because it has 8x fewer leaf pages. Fewer leaf pages require less B-tree internal nodes, hence less metadata. A filesystem with a recordsize of 128KB is excellent for sequential access as it maximizes compression and reduces the IOPS but it is poor for small random access operations like the ones MySQL/InnoDB does. In order to improve ZFS performance, I had 3 options: Increase the ARC size to 7GB Use a larger Innodb page size like 64KB Add a L2ARC I was reluctant to grow the ARC to 7GB, which was nearly half the overall system memory. At best, the ZFS performance would only match XFS. A larger InnoDB page size would increase the CPU load for decompression on an instance with only two vCPUs; not great either. The last option, the L2ARC, was the most promising. ZFS is much more complex than XFS and EXT4 but, that also means it has more tunables/options. I used a simplistic setup and an unfair benchmark which initially led to poor ZFS results. With the same benchmark, very favorable to XFS, I added a ZFS L2ARC and that completely reversed the situation, more than tripling the ZFS results, now 66% above XFS. Conclusion We have seen in this post why the general perception is that ZFS under-performs compared to XFS or EXT4. The presence of B-trees for the files has a big impact on the amount of metadata ZFS needs to handle, especially when the recordsize is small. The metadata consists mostly of the non-leaf pages (or internal nodes) of the B-trees. When properly cached, the performance of ZFS is excellent. ZFS allows you to optimize the use of EBS volumes, both in term of IOPS and size when the instance has fast ephemeral storage devices. Using the ephemeral device of an i3.large instance for the ZFS L2ARC, ZFS outperformed XFS by 66%. ###OpenSMTPD new config TL;DR: OpenBSD #p2k18 hackathon took place at Epitech in Nantes. I was organizing the hackathon but managed to make progress on OpenSMTPD. As mentioned at EuroBSDCon the one-line per rule config format was a design error. A new configuration grammar is almost ready and the underlying structures are simplified. Refactor removes ~750 lines of code and solves _many issues that were side-effects of the design error. New features are going to be unlocked thanks to this. Anatomy of a design error OpenSMTPD started ten years ago out of dissatisfaction with other solutions, mainly because I considered them way too complex for me not to get things wrong from time to time. The initial configuration format was very different, I was inspired by pyr@’s hoststated, which eventually became relayd, and designed my configuration format with blocks enclosed by brackets. When I first showed OpenSMTPD to pyr@, he convinced me that PF-like one-line rules would be awesome, and it was awesome indeed. It helped us maintain our goal of simple configuration files, it helped fight feature creeping, it helped us gain popularity and become a relevant MTA, it helped us get where we are now 10 years later. That being said, I believe this was a design error. A design error that could not have been predicted until we hit the wall to understand WHY this was an error. 
One-line rules are semantically wrong, they are SMTP wrong, they are wrong. One-line rules are making the entire daemon more complex, preventing some features from being implemented, making others more complex than they should be, and they no longer serve our goals. To get to the point: we should move to two-line rules :-)

The problem with one-line rules

OpenSMTPD decides to accept or reject messages based on one-line rules such as:

accept from any for domain poolp.org deliver to mbox

Which can essentially be split into three units:
  • the decision: accept/reject
  • the matching: from any for domain poolp.org
  • the (default) action: deliver to mbox

To ensure that we meet the requirements of the transactions, the matching must be performed during the SMTP transaction, before we take a decision for the recipient. Given that the rule is atomic, that it doesn't have an identifier and that the action is part of it, the only two ways to make sure we can remember the action to take later on at delivery time are to either:
  • save the action in the envelope, which is what we do today, or
  • evaluate the envelope again at delivery.

And this is where it gets tricky… both solutions are NOT ok. The first solution, which we've been using for a decade, was to save the action within the envelope and kind of carve it in stone. This works fine… however it comes with the downsides that errors fixed in configuration files can't be caught up by envelopes, that the delivery action must be validated way ahead of time during the SMTP transaction which is much trickier, that the parsing of delivery methods takes place as the _smtpd user rather than the recipient user, and that envelope structures that are passed all over OpenSMTPD carry delivery-time information, and more, and more, and more. The code becomes more complex in general, less safe in some particular places, and some areas are nightmarish to deal with because they have to deal with completely unrelated code that can't be dealt with later in the code path. The second solution can't be done. An envelope may be the result of nested rules, for example an external client, hitting an alias, hitting a user with a .forward file resolving to a user.
An envelope on disk may no longer match any rule, or it may match a completely different rule. Even if we could ensure that it matched the same rule, evaluating the ruleset again may spawn new envelopes, which would violate the transaction. Trying to imagine how we could work around this leads to more and more and more RFC violations, incoherent states, duplicate mails, etc… There is simply no way to deal with this with atomic rules: the matching and the action must be two separate units that are evaluated at two different times. Failure to do so will necessarily imply that you're either using our first solution and all its downsides, or that you are currently in a world of pain trying to figure out why everything is burning around you. The minute the action is written to an on-disk envelope, you have failed. A proper ruleset must define a set of matching patterns resolving to an action identifier that is carved in stone, AND a set of named actions that is resolved dynamically at delivery time. Follow the link above to see the rest of the article.

Break

##News Roundup

Backing up a legacy Windows machine to a FreeNAS with rsync

I have some old Windows servers (10 years and counting) and I have been using rsync to back them up to my FreeNAS box. It has been working great for me. First of all, I do have my Windows servers backed up in virtualized format. However, those are only one-time snapshots that I run once in a while. These are classic ASP IIS web servers that I can easily put up on a new VM. However, many of these legacy servers generate gigabytes of data a day in their repositories, and running a VM conversion daily is not ideal. My solution was to use some sort of rsync setup just for the data repos. I've tried some applications that didn't work too well with Samba shares, and these old servers have slow I/O, so copying files to an external SATA or USB drive was not ideal either. We've moved on from Windows to Linux and do not have any Windows file servers with the capacity to provide network backups. Hence, I decided to use DeltaCopy with FreeNAS. So here is a little write-up on how to set it up. I have 4 Windows 2000 servers backing up daily with this method.

First, download DeltaCopy and install it. It is open source and pretty much free. It is basically a wrapper for cygwin's rsync. When you install it, it will ask you to install the Server services, which allow you to run it as an rsync server on Windows. You don't need to do this. Instead, you will just be using the DeltaCopy Client application. But before we do that, we will need to configure our rsync service for our Windows clients on FreeNAS. In FreeNAS, go under Services, select Rsync > Rsync Modules > Add Rsync Module. Then fill out the form, giving the module a name and setting the path. In my example, I simply called it WIN and linked it to a user called backupuser. This process is much easier than trying to configure the daemon's rsyncd.conf file by hand.

Now, on the Windows client, start the DeltaCopy Client. You will create a new Profile. You will need to enter the IP of the rsync server (FreeNAS) and specify the module name, which here is called the "Virtual Directory Name." When you pull the select menu, the list of Rsync Modules you created earlier in FreeNAS will populate. You can set authentication. On the server, you can restrict by IP and do other things to lock down your rsync. Next, you will add folders (and/or files) you want to synchronize. Once the paths are set up, you can run a sync by right-clicking the profile name.
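
DeltaCopy is essentially a GUI wrapper around cygwin's rsync, so for reference here is a hedged sketch of roughly the equivalent plain-rsync push against the "WIN" module and backupuser account configured above. The host address and source path are placeholders, and this is an illustration rather than what the write-up itself does.

```python
# Hedged illustration: roughly what the DeltaCopy profile boils down to,
# an rsync push to the "WIN" rsync-daemon module on the FreeNAS box.
import subprocess

FREENAS_HOST = "192.168.1.50"      # placeholder: address of the FreeNAS box
SOURCE = "C:/inetpub/wwwroot/"     # placeholder: data repository to back up

subprocess.run(
    [
        "rsync", "-av", "--delete",                 # archive mode, mirror deletions
        SOURCE,
        f"rsync://backupuser@{FREENAS_HOST}/WIN/",  # daemon module named "WIN"
    ],
    check=True,
)
```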
Here, I made a test sync to a home folder of a virtualized windows box. As you can see, I mounted the rsync volume on my mac to see the progress. The rsync worked beautifully. DeltaCopy did what it was told. Once you get everything working. The next thing to do is set schedules. If you done tasks schedules in Windows before, it is pretty straightforward. DeltaCopy has a link in the application to directly create a new task for you. I set my backups to run nightly and it has been working great. There you have it. Windows rsync to FreeNAS using DeltaCopy. The nice thing about FreeNAS is you don’t have to modify /etc/rsyncd.conf files. Everything can be done in the web admin. iXsystems ###How to write ATF tests for NetBSD I have recently started contributing to the amazing NetBSD foundation. I was thinking of trying out a new OS for a long time. Switching to the NetBSD OS has been a fun change. My first contribution to the NetBSD foundation was adding regression tests for the Address Sanitizer (ASan) in the Automated Testing Framework(ATF) which NetBSD has. I managed to complete it with the help of my really amazing mentor Kamil. This post is gonna be about the ATF framework that NetBSD has and how to you can add multiple tests with ease. Intro In ATF tests we will basically be talking about test programs which are a suite of test cases for a specific application or program. The ATF suite of Commands There are a variety of commands that the atf suite offers. These include : atf-check: The versatile command that is a vital part of the checking process. man page atf-run: Command used to run a test program. man page atf-fail: Report failure of a test case. atf-report: used to pretty print the atf-run. man page atf-set: To set atf test conditions. We will be taking a better look at the syntax and usage later. Let’s start with the Basics The ATF testing framework comes preinstalled with a default NetBSD installation. It is used to write tests for various applications and commands in NetBSD. One can write the Test programs in either the C language or in shell script. In this post I will be dealing with the Bash part. Follow the link above to see the rest of the article ###The Importance of ZFS Block Size Warning! WARNING! Don’t just do things because some random blog says so One of the important tunables in ZFS is the recordsize (for normal datasets) and volblocksize (for zvols). These default to 128KB and 8KB respectively. As I understand it, this is the unit of work in ZFS. If you modify one byte in a large file with the default 128KB record size, it causes the whole 128KB to be read in, one byte to be changed, and a new 128KB block to be written out. As a result, the official recommendation is to use a block size which aligns with the underlying workload: so for example if you are using a database which reads and writes 16KB chunks then you should use a 16KB block size, and if you are running VMs containing an ext4 filesystem, which uses a 4KB block size, you should set a 4KB block size You can see it has a 16GB total file size, of which 8.5G has been touched and consumes space - that is, it’s a “sparse” file. The used space is also visible by looking at the zfs filesystem which this file resides in Then I tried to copy the image file whilst maintaining its “sparseness”, that is, only touching the blocks of the zvol which needed to be touched. The original used only 8.42G, but the copy uses 14.6GB - almost the entire 16GB has been touched! What’s gone wrong? 
I finally realised that the difference between the zfs filesystem and the zvol is the block size. I recreated the zvol with a 128K block size That’s better. The disk usage of the zvol is now exactly the same as for the sparse file in the filesystem dataset It does impact the read speed too. 4K blocks took 5:52, and 128K blocks took 3:20 Part of this is the amount of metadata that has to be read, see the MySQL benchmarks from earlier in the show And yes, using a larger block size will increase the compression efficiency, since the compressor has more redundant data to optimize. Some of the savings, and the speedup is because a lot less metadata had to be written Your zpool layout also plays a big role, if you use 4Kn disks, and RAID-Z2, using a volblocksize of 8k will actually result in a large amount of wasted space because of RAID-Z padding. Although, if you enable compression, your 8k records may compress to only 4k, and then all the numbers change again. ###Using a Raspberry Pi 2 as a Router on a Stick Starring NetBSD Sorry we didn’t answer you quickly enough A few weeks ago I set about upgrading my feeble networking skills by playing around with a Cisco 2970 switch. I set up a couple of VLANs and found the urge to set up a router to route between them. The 2970 isn’t a modern layer 3 switch so what am I to do? Why not make use of the Raspberry Pi 2 that I’ve never used and put it to some good use as a ‘router on a stick’. I could install a Linux based OS as I am quite familiar with it but where’s the fun in that? In my home lab I use SmartOS which by the way is a shit hot hypervisor but as far as I know there aren’t any Illumos distributions for the Raspberry Pi. On the desktop I use Solus OS which is by far the slickest Linux based OS that I’ve had the pleasure to use but Solus’ focus is purely desktop. It’s looking like BSD then! I believe FreeBSD is renowned for it’s top notch networking stack and so I wrote to the BSDNow show on Jupiter Broadcasting for some help but it seems that the FreeBSD chaps from the show are off on a jolly to some BSD conference or another(love the show by the way). It looks like me and the luvverly NetBSD are on a date this Saturday. I’ve always had a secret love for NetBSD. She’s a beautiful, charming and promiscuous lover(looking at the supported architectures) and I just can’t stop going back to her despite her misgivings(ahem, zfs). Just my type of grrrl! Let’s crack on… Follow the link above to see the rest of the article ##Beastie Bits BSD Jobs University of Aberdeen’s Internet Transport Research Group is hiring VR demo on OpenBSD via OpenHMD with OSVR HDK2 patch runs ed, and ed can run anything (mentions FreeBSD and OpenBSD) Alacritty (OpenGL-powered terminal emulator) now supports OpenBSD MAP_STACK Stack Register Checking Committed to -current EuroBSDCon CfP till June 17, 2018 Tarsnap ##Feedback/Questions NeutronDaemon - Tutorial request Kurt - Question about transferability/bi-directionality of ZFS snapshots and send/receive Peter - A Question and much love for BSD Now Peter - netgraph state Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
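
To make the record-size discussion above concrete, here is a small sketch of the read-modify-write amplification for a 4 KiB random write at different record sizes; it assumes the write falls inside a single record and ignores compression and RAID-Z padding.

```python
# Touching a few bytes forces ZFS to read and rewrite the whole record.
def write_amplification(update_kib: float, recordsize_kib: int) -> float:
    """Bytes rewritten divided by bytes logically changed, assuming the
    update lands inside a single record."""
    return recordsize_kib / min(update_kib, recordsize_kib)

for rs in (4, 16, 128):
    print(f"recordsize {rs:>3} KiB, 4 KiB random write -> "
          f"{write_amplification(4, rs):.0f}x data rewritten")
# 4 KiB   ->  1x (block size matches the workload, e.g. ext4 inside a zvol)
# 16 KiB  ->  4x (the InnoDB-page example)
# 128 KiB -> 32x (the default recordsize under a small random-write workload)
```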

BSD Now
214: The history of man, kind

Oct 4, 2017 · 90:20


The costs of open sourcing a project are explored, we discover why PS4 downloads are so slow, delve into the history of UNIX man pages, and more. This episode was brought to you by Headlines The Cost Of Open Sourcing Your Project (https://meshedinsights.com/2016/09/20/open-source-unlikely-to-be-abandonware/) Accusing a company of “dumping” their project as open source is probably misplaced – it's an expensive business no-one would do frivolously. If you see an active move to change software licensing or governance, it's likely someone is paying for it and thus could justify the expense to an executive. A Little History Some case study cameos may help. From 2004 onwards, Sun Microsystems had a policy of all its software moving to open source. The company migrated almost all products to open source licenses, and had varying degrees of success engaging communities around the various projects, largely related to the outlooks of the product management and Sun developers for the project. Sun occasionally received requests to make older, retired products open source. For example, Sun acquired a company called Lighthouse Design which created a respected suite of office productivity software for Steve Jobs' NeXT platform. Strategy changes meant that software headed for the vault (while Jonathan Schwartz, a founder of Lighthouse, headed for the executive suite). Members of the public asked if Sun would open source some of this software, but these requests were declined because there was no business unit willing to fund the move. When Sun was later bought by Oracle, a number of those projects that had been made open source were abandoned. “Abandoning” software doesn't mean leaving it for others; it means simply walking away from wherever you left it. In the case of Sun's popular identity middleware products, that meant Oracle let the staff go and tried to migrate customers to other products, while remaining silent in public on the future of the project. But the code was already open source, so the user community was able to pick up the pieces and carry on, with help from Forgerock. It costs a lot of money to open source a mature piece of commercial software, even if all you are doing is “throwing a tarball over the wall”. That's why companies abandoning software they no longer care about so rarely make it open source, and those abandoning open source projects rarely move them to new homes that benefit others. If all you have thought about is the eventual outcome, you may be surprised how expensive it is to get there. Costs include: For throwing a tarball over the wall: Legal clearance. Having the right to use the software is not the same as giving everyone in the world an unrestricted right to use it and create derivatives. Checking every line of code to make sure you have the rights necessary to release under an OSI-approved license is a big task requiring high-value employees on the “liberation team”. That includes both developers and lawyers; neither come cheap. Repackaging. To pass it to others, a self-contained package containing all necessary source code, build scripts and non-public source and tool dependencies has to be created since it is quite unlikely to exist internally. Again, the liberation team will need your best developers. Preserving provenance. Just because you have confidence that you have the rights to the code, that doesn't mean anyone else will. 
The version control system probably contains much of the information that gives confidence about who wrote which code, so the repackaging needs to also include a way to migrate the commit information. Code cleaning. The file headers will hopefully include origin information but the liberation team had better check. They also need to check the comments for libel and profanities, not to mention trade secrets (especially those from third parties) and other IP issues. For a sustainable project, all the above plus: Compliance with host governance. It is a fantastic idea to move your project to a host like Apache, Conservancy, Public Software and so on. But doing so requires preparatory work. As a minimum you will need to negotiate with the new host organisation, and they may well need you to satisfy their process requirements. Paperwork obviously, but also the code may need conforming copyright statements and more. That's more work for your liberation team. Migration of rights. Your code has an existing community who will need to migrate to your new host. That includes your staff – they are community too! They will need commit rights, governance rights, social media rights and more. Your liberation team will need your community manager, obviously, but may also need HR input. Endowment. Keeping your project alive will take money. It's all been coming from you up to this point, but if you simply walk away before the financial burden has been accepted by the new community and hosts there may be a problem. You should consider making an endowment to your new host to pay for their migration costs plus the cost of hosting the community for at least a year. Marketing. Explaining the move you are making, the reasons why you are making it and the benefits for you and the community is important. If you don't do it, there are plenty of trolls around who will do it for you. Creating a news blog post and an FAQ — the minimum effort necessary — really does take someone experienced and you'll want to add such a person to your liberation team. Motivations There has to be some commercial reason that makes the time, effort and thus expense worth incurring. Some examples of motivations include: Market Strategy. An increasing number of companies are choosing to create substantial, openly-governed open source communities around software that contributes to their business. An open multi-stakeholder co-developer community is an excellent vehicle for innovation at the lowest cost to all involved. As long as your market strategy doesn't require creating artificial scarcity. Contract with a third party. While the owner of the code may no longer be interested, there may be one or more parties to which they owe a contractual responsibility. Rather than breaching that contract, or buying it out, a move to open source may be better. Some sources suggest a contractual obligation to IBM was the reason Oracle abandoned OpenOffice.org by moving it over to the Apache Software Foundation for example. Larger dependent ecosystem. You may have no further use for the code itself, but you may well have other parts of your business which depend on it. If they are willing to collectively fund development you might consider an “inner source” strategy which will save you many of the costs above. But the best way to proceed may well be to open the code so your teams and those in other companies can fund the code. Internal politics. 
From the outside, corporations look monolithic, but from the inside it becomes clear they are a microcosm of the market in which they exist. As a result, they have political machinations that may be addressed by open source. One of Oracle's motivations for moving NetBeans to Apache seems to have been political. Despite multiple internal groups needing it to exist, the code was not generating enough direct revenue to satisfy successive executive owners, who allegedly tried to abandon it on more than one occasion. Donating it to Apache meant that couldn't happen again. None of this is to say a move to open source guarantees the success of a project. A “Field of Dreams” strategy only works in the movies, after all. But while it may be tempting to look at a failed corporate liberation and describe it as “abandonware”, chances are it was intended as nothing of the kind. Why PS4 downloads are so slow (https://www.snellman.net/blog/archive/2017-08-19-slow-ps4-downloads/) From the blog that brought us “The origins of XXX as FIXME (https://www.snellman.net/blog/archive/2017-04-17-xxx-fixme/)” and “The mystery of the hanging S3 downloads (https://www.snellman.net/blog/archive/2017-07-20-s3-mystery/)”, this week it is: “Why are PS4 downloads so slow?” Game downloads on PS4 have a reputation of being very slow, with many people reporting downloads being an order of magnitude faster on Steam or Xbox. This had long been on my list of things to look into, but at a pretty low priority. After all, the PS4 operating system is based on a reasonably modern FreeBSD (9.0), so there should not be any crippling issues in the TCP stack. The implication is that the problem is something boring, like an inadequately dimensioned CDN. But then I heard that people were successfully using local HTTP proxies as a workaround. It should be pretty rare for that to actually help with download speeds, which made this sound like a much more interesting problem. Before running any experiments, it's good to have a mental model of how the thing we're testing works, and where the problems might be. If nothing else, it will guide the initial experiment design. The speed of a steady-state TCP connection is basically defined by three numbers: the amount of data the client is willing to receive on a single round-trip (TCP receive window), the amount of data the server is willing to send on a single round-trip (TCP congestion window), and the round trip latency between the client and the server (RTT). To a first approximation, the connection speed will be: speed = min(rwin, cwin) / RTT With this model, how could a proxy speed up the connection? The speed through the proxy should be the minimum of the speed between the client and proxy, and the proxy and server. It should only possibly be slower. With a local proxy the client-proxy RTT will be very low; that connection is almost guaranteed to be the faster one. The improvement will have to be from the server-proxy connection being somehow better than the direct client-server one. The RTT will not change, so there are just two options: either the client has a much smaller receive window than the proxy, or the client is somehow causing the server's congestion window to decrease (e.g. the client is randomly dropping received packets, while the proxy isn't).
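To put rough numbers on that formula, here is a small back-of-the-envelope calculation. It is only illustrative: the 70 ms RTT is an assumed value, and the window sizes are simply the ones that show up in the measurements below.

    #include <stdio.h>

    /* speed = min(rwin, cwin) / RTT, assuming the server's congestion
     * window is not the bottleneck. All values are illustrative. */
    static double throughput_bytes_per_sec(double rwin_bytes, double rtt_sec)
    {
        return rwin_bytes / rtt_sec;
    }

    int main(void)
    {
        const double rtt = 0.070; /* assumed 70 ms round-trip time */
        const double windows[] = { 7 * 1024.0, 128 * 1024.0, 650 * 1024.0 };

        for (int i = 0; i < 3; i++) {
            double bps = throughput_bytes_per_sec(windows[i], rtt);
            printf("rwin = %4.0f KB -> ~%8.1f KB/s (~%5.2f Mbit/s)\n",
                   windows[i] / 1024.0, bps / 1024.0, bps * 8 / 1e6);
        }
        return 0;
    }

At that RTT a 7kB window works out to roughly 100 KB/s while a 650kB window allows close to 10 MB/s, which roughly matches the "100 times longer" figure in the findings below.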
After setting up a test rig, where the PS4's connection was bridged through a Linux box so packets could be captured, and artificial latency could be added, some interesting results came up: The differences in receive windows at different times are striking. And more importantly, the changes in the receive windows correspond very well to specific things I did on the PS4. When the download was started, the game Styx: Shards of Darkness was running in the background (just idling in the title screen). The download was limited by a receive window of under 7kB. This is an incredibly low value; it's basically going to cause the downloads to take 100 times longer than they should. And this was not a coincidence; whenever that game was running, the receive window would be that low. Having an app running (e.g. Netflix, Spotify) limited the receive window to 128kB, for about a 5x reduction in potential download speed. Moving apps, games, or the download window to the foreground or background didn't have any effect on the receive window. Playing an online match in a networked game (Dreadnought) caused the receive window to be artificially limited to 7kB. I ran a speedtest at a time when downloads were limited to a 7kB receive window. It got a decent receive window of over 400kB; the conclusion is that the artificial receive window limit appears to only apply to PSN downloads. When a game was started (causing the previously running game to be stopped automatically), the receive window could increase to 650kB for a very brief period of time. Basically it appears that the receive window gets unclamped when the old game stops, and then clamped again a few seconds later when the new game actually starts up. I did a few more test runs, and all of them seemed to support the above findings. The only additional information from that testing is that the rest mode behavior was dependent on the PS4 settings. Originally I had it set up to suspend apps when in rest mode. If that setting was disabled, the apps would be closed when entering rest mode, and the downloads would proceed at full speed. The PS4 doesn't make it very obvious exactly what programs are running. For games, the interaction model is that opening a new game closes the previously running one. This is not how other apps work; they remain in the background indefinitely until you explicitly close them. So, FreeBSD and its network stack are not to blame. Sony used a poor method to try to keep downloads from interfering with your gameplay. The impact of changing the receive window is highly dependent upon RTT, so it doesn't work as evenly as actual traffic shaping or queueing would. An interesting deep dive; it is well worth reading the full article and checking out the graphs. *** OpenSSH 7.6 Released (http://www.openssh.com/releasenotes.html#7.6) From the release notes: This release includes a number of changes that may affect existing configurations: ssh(1): delete SSH protocol version 1 support, associated configuration options and documentation. ssh(1)/sshd(8): remove support for the hmac-ripemd160 MAC. ssh(1)/sshd(8): remove support for the arcfour, blowfish and CAST ciphers. Refuse RSA keys smaller than 1024 bits.
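As an aside on that receive-window clamping: the article does not show how the PS4 system software actually implements it, and the sketch below is not claimed to be Sony's mechanism. It only illustrates the standard knob an application or OS service could use, SO_RCVBUF, since the receive window a TCP connection advertises can never exceed its receive buffer. The function name and address are made up for illustration.

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Cap the socket's receive buffer before connecting; a ~7kB buffer
     * throttles the transfer much like the clamped PSN downloads,
     * regardless of how fast the link is. */
    int connect_with_small_window(const char *ip, int port, int rcvbuf_bytes)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                       &rcvbuf_bytes, sizeof(rcvbuf_bytes)) < 0) {
            close(fd);
            return -1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(port);
        inet_pton(AF_INET, ip, &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

Calling something like connect_with_small_window("192.0.2.10", 80, 7 * 1024) on a long-latency path reproduces the kind of slowdown measured above; the point is simply that clamping the window is a crude lever compared with real traffic shaping or queueing.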

BSD Now
188: And then the murders began

BSD Now

Play Episode Listen Later Apr 5, 2017 83:39


Today on BSD Now, the latest Dragonfly BSD release, RaidZ performance, another OpenSSL Vulnerability, and more; all this week on BSD Now. This episode was brought to you by Headlines DragonFly BSD 4.8 is released (https://www.dragonflybsd.org/release48/) Improved kernel performance This release further localizes cache lines and reduces/removes cache ping-ponging on globals. For bulk builds on many-cores or multi-socket systems, we have around a 5% improvement, and certain subsystems such as namecache lookups and exec()s see massive focused improvements. See the corresponding mailing list post with details. Support for eMMC booting, and mobile and high-performance PCIe SSDs This kernel release includes support for eMMC storage as the boot device. We also sport a brand new SMP-friendly, high-performance NVMe SSD driver (PCIe SSD storage). Initial device test results are available. EFI support The installer can now create an EFI or legacy installation. Numerous adjustments have been made to userland utilities and the kernel to support EFI as a mainstream boot environment. The /boot filesystem may now be placed either in its own GPT slice, or in a DragonFly disklabel inside a GPT slice. DragonFly, by default, creates a GPT slice for all of DragonFly and places a DragonFly disklabel inside it with all the standard DFly partitions, such that the disk names are roughly the same as they would be in a legacy system. Improved graphics support The i915 driver has been updated to match the version found with the Linux 4.6 kernel. Broadwell and Skylake processor users will see improvements. Other user-affecting changes Kernel is now built using -O2. VKernels now use COW, so multiple vkernels can share one disk image. powerd() is now sensitive to time and temperature changes. Non-boot-filesystem kernel modules can be loaded in rc.conf instead of loader.conf. *** #8005 poor performance of 1MB writes on certain RAID-Z configurations (https://github.com/openzfs/openzfs/pull/321) Matt Ahrens posts a new patch for OpenZFS Background: RAID-Z requires that space be allocated in multiples of P+1 sectors, because this is the minimum size block that can have the required amount of parity. Thus blocks on RAIDZ1 must be allocated in a multiple of 2 sectors; on RAIDZ2 multiple of 3; and on RAIDZ3 multiple of 4. A sector is a unit of 2^ashift bytes, typically 512B or 4KB. To satisfy this constraint, the allocation size is rounded up to the proper multiple, resulting in up to 3 "pad sectors" at the end of some blocks. The contents of these pad sectors are not used, so we do not need to read or write these sectors. However, some storage hardware performs much worse (around 1/2 as fast) on mostly-contiguous writes when there are small gaps of non-overwritten data between the writes. Therefore, ZFS creates "optional" zio's when writing RAID-Z blocks that include pad sectors. If writing a pad sector will fill the gap between two (required) writes, we will issue the optional zio, thus doubling performance. The gap-filling performance improvement was introduced in July 2009. Writing the optional zio is done by the I/O aggregation code in vdev_queue.c. The problem is that it is also subject to the limit on the size of aggregate writes, zfs_vdev_aggregation_limit, which is by default 128KB. For a given block, if the amount of data plus padding written to a leaf device exceeds zfs_vdev_aggregation_limit, the optional zio will not be written, resulting in a ~2x performance degradation.
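Before getting to the fix, here is a minimal sketch of the rounding rule just described. It is not the OpenZFS code: the sector counts are illustrative, and the number of parity sectors a block actually needs depends on the pool layout and is not modelled here; only the round-up to a multiple of parity+1 is shown.

    #include <stdio.h>

    /* Round a RAID-Z allocation up to a multiple of (parity + 1) sectors,
     * which is where the unused "pad sectors" come from. */
    static unsigned raidz_round_up(unsigned needed_sectors, unsigned parity)
    {
        unsigned mult = parity + 1;   /* RAIDZ1 -> 2, RAIDZ2 -> 3, RAIDZ3 -> 4 */
        return ((needed_sectors + mult - 1) / mult) * mult;
    }

    int main(void)
    {
        unsigned needed = 33;         /* illustrative data + parity sector count */
        for (unsigned p = 1; p <= 3; p++) {
            unsigned alloc = raidz_round_up(needed, p);
            printf("RAIDZ%u: %u sectors needed -> %u allocated (%u pad)\n",
                   p, needed, alloc, alloc - needed);
        }
        return 0;
    }

Those pad sectors are exactly the gaps that the "optional" zio's are meant to fill, which is why skipping them on large blocks roughly halves throughput on the affected hardware.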
The solution is to aggregate optional zio's regardless of the aggregation size limit. As you can see from the graphs, this can make a large difference in performance. I encourage you to read the entire commit message; it is well written and very detailed. *** Can you spot the OpenSSL vulnerability (https://guidovranken.wordpress.com/2017/01/28/can-you-spot-the-vulnerability/) This code was introduced in OpenSSL 1.1.0d, which was released a couple of days ago. This is in the server SSL code, ssl/statem/statem_srvr.c, ssl_bytes_to_cipher_list(), and can easily be reached remotely. Can you spot the vulnerability? So there is a loop, and within that loop we have an ‘if' statement that tests a number of conditions. If any of those conditions fail, OPENSSL_free(raw) is called. But raw isn't the address that was allocated; raw is incremented every loop. Hence, there is a remote invalid free vulnerability. But not quite. None of those checks in the ‘if' statement can actually fail; earlier on in the function, there is a check that verifies that the packet contains at least 1 byte, so PACKET_get_1 cannot fail. Furthermore, earlier in the function it is verified that the packet length is a multiple of 3, hence PACKET_copy_bytes and PACKET_forward cannot fail. So, does the code do what the original author thought, or expected it to do? But what about the next person that modifies that code, maybe changing or removing one of the earlier checks, allowing one of those if conditions to fail, and execute the bad code? Nonetheless OpenSSL has acknowledged that the OPENSSL_free line needs a rewrite: Pull Request #2312 (https://github.com/openssl/openssl/pull/2312) PS I'm not posting this to ridicule the OpenSSL project or their programming skills. I just like reading code and finding corner cases that impact security, which is an effort that ultimately works in everybody's best interest, and I like to share what I find. Programming is a very difficult enterprise and everybody makes mistakes. Thanks to Guido Vranken for the sharp eye and the blog post *** Research Debt (http://distill.pub/2017/research-debt/) I found this article interesting as it relates to not just research, but a lot of technical areas in general Achieving a research-level understanding of most topics is like climbing a mountain. Aspiring researchers must struggle to understand vast bodies of work that came before them, to learn techniques, and to gain intuition. Upon reaching the top, the new researcher begins doing novel work, throwing new stones onto the top of the mountain and making it a little taller for whoever comes next. People expect the climb to be hard. It reflects the tremendous progress and cumulative effort that's gone into the research. The climb is seen as an intellectual pilgrimage, the labor a rite of passage. But the climb could be massively easier. It's entirely possible to build paths and staircases into these mountains. The climb isn't something to be proud of. The climb isn't progress: the climb is a mountain of debt. Programmers talk about technical debt: there are ways to write software that are faster in the short run but problematic in the long run. Poor Exposition – Often, there is no good explanation of important ideas and one has to struggle to understand them. This problem is so pervasive that we take it for granted and don't appreciate how much better things could be. Undigested Ideas – Most ideas start off rough and hard to understand.
They become radically easier as we polish them, developing the right analogies, language, and ways of thinking. Bad abstractions and notation – Abstractions and notation are the user interface of research, shaping how we think and communicate. Unfortunately, we often get stuck with the first formalisms to develop even when they're bad. For example, an object with extra electrons is negative, and pi is wrong Noise – Being a researcher is like standing in the middle of a construction site. Countless papers scream for your attention and there's no easy way to filter or summarize them. We think noise is the main way experts experience research debt. There's a tradeoff between the energy put into explaining an idea, and the energy needed to understand it. On one extreme, the explainer can painstakingly craft a beautiful explanation, leading their audience to understanding without even realizing it could have been difficult. On the other extreme, the explainer can do the absolute minimum and abandon their audience to struggle. This energy is called interpretive labor Research distillation is the opposite of research debt. It can be incredibly satisfying, combining deep scientific understanding, empathy, and design to do justice to our research and lay bare beautiful insights. Distillation is also hard. It's tempting to think of explaining an idea as just putting a layer of polish on it, but good explanations often involve transforming the idea. This kind of refinement of an idea can take just as much effort and deep understanding as the initial discovery. + The distillation can often times require an entirely different set of skills than the original creation of the idea. Almost all of the BSD projects have some great ideas or subsystems that just need distillation into easy to understand and use platforms or tools. Like the theoretician, the experimentalist or the research engineer, the research distiller is an integral role for a healthy research community. Right now, almost no one is filling it. Anyway, if that bit piqued your interest, go read the full article and the suggested further reading. *** News Roundup And then the murders began. (https://blather.michaelwlucas.com/archives/2902) A whole bunch of people have pointed me at articles like this one (http://thehookmag.com/2017/03/adding-murders-began-second-sentence-book-makes-instantly-better-125462/), which claim that you can improve almost any book by making the second sentence “And then the murders began.” It's entirely possible they're correct. But let's check, with a sampling of books. As different books come in different tenses and have different voices, I've made some minor changes. “Welcome to Cisco Routers for the Desperate! And then the murders begin.” — Cisco Routers for the Desperate, 2nd ed “Over the last ten years, OpenSSH has become the standard tool for remote management of Unix-like systems and many network devices. And then the murders began.” — SSH Mastery “The Z File System, or ZFS, is a complicated beast, but it is also the most powerful tool in a sysadmin's Batman-esque utility belt. And then the murders begin.” — FreeBSD Mastery: Advanced ZFS “Blood shall rain from the sky, and great shall be the lamentation of the Linux fans. And then, the murders will begin.” — Absolute FreeBSD, 3rd Ed Netdata now supports FreeBSD (https://github.com/firehol/netdata) netdata is a system for distributed real-time performance and health monitoring. 
It provides unparalleled insights, in real-time, of everything happening on the system it runs (including applications such as web and database servers), using modern interactive web dashboards. From the release notes: apps.plugin ported for FreeBSD Check out their demo sites (https://github.com/firehol/netdata/wiki) *** Distrowatch Weekly reviews RaspBSD (https://distrowatch.com/weekly.php?issue=20170220#raspbsd) RaspBSD is a FreeBSD-based project which strives to create a custom build of FreeBSD for single board and hobbyist computers. RaspBSD takes a recent snapshot of FreeBSD and adds on additional components, such as the LXDE desktop and a few graphical applications. The RaspBSD project currently has live images for Raspberry Pi devices, the Banana Pi, Pine64 and BeagleBone Black & Green computers. The default RaspBSD system is quite minimal, running a mere 16 processes when I was logged in. In the background the operating system runs cron, OpenSSH, syslog and the powerd power management service. Other than the user's shell and terminals, nothing else is running. This means RaspBSD uses little memory, requiring just 16MB of active memory and 31MB of wired or kernel memory. I made note of a few practical differences between running RaspBSD on the Pi verses my usual Raspbian operating system. One minor difference is RaspBSD turns off the Pi's external power light after booting. Raspbian leaves the light on. This means it looks like the Pi is off when it is running RaspBSD, but it also saves a little electricity. Conclusions: Apart from these little differences, running RaspBSD on the Pi was a very similar experience to running Raspbian and my time with the operating system was pleasantly trouble-free. Long-term, I think applying source updates to the base system might be tedious and SD disk operations were slow. However, the Pi usually is not utilized for its speed, but rather its low cost and low-energy usage. For people who are looking for a small home server or very minimal desktop box, RaspBSD running on the Pi should be suitable. Research UNIX V8, V9 and V10 made public by Alcatel-Lucent (https://media-bell-labs-com.s3.amazonaws.com/pages/20170327_1602/statement%20regarding%20Unix%203-7-17.pdf) Alcatel-Lucent USA Inc. (“ALU-USA”), on behalf of itself and Nokia Bell Laboratories agrees, to the extent of its ability to do so, that it will not assert its copyright rights with respect to any non-commercial copying, distribution, performance, display or creation of derivative works of Research Unix®1 Editions 8, 9, and 10. Research Unix is a term used to refer to versions of the Unix operating system for DEC PDP-7, PDP-11, VAX and Interdata 7/32 and 8/32 computers, developed in the Bell Labs Computing Science Research Center. The version breakdown can be viewed on its Wikipedia page (https://en.wikipedia.org/wiki/Research_Unix) It only took 30+ years, but now they're public You can grab them from here (http://www.tuhs.org/Archive/Distributions/Research/) If you're wondering what happened with Research Unix, After Version 10, Unix development at Bell Labs was stopped in favor of a successor system, Plan 9 (http://plan9.bell-labs.com/plan9/); which itself was succeeded by Inferno (http://www.vitanuova.com/inferno/). 
*** Beastie Bits The BSD Family Tree (https://github.com/freebsd/freebsd/blob/master/share/misc/bsd-family-tree) Unix Permissions Calculator (http://permissions-calculator.org/) NAS4Free release 11.0.0.4 now available (https://sourceforge.net/projects/nas4free/files/NAS4Free-11.0.0.4/11.0.0.4.4141/) Another BSD Mag released for free downloads (https://bsdmag.org/download/simple-quorum-drive-freebsd-ctl-ha-beast-storage-system/) OPNsense 17.1.4 released (https://forum.opnsense.org/index.php?topic=4898.msg19359) *** Feedback/Questions gozes asks via twitter about how to get involved in FreeBSD (https://twitter.com/gozes/status/846779901738991620) ***

The Artist in American History
01 - Videogame History: E.T. on the Atari 2600

The Artist in American History

Play Episode Listen Later Mar 16, 2017 6:51


E.T. by Atari is widely regarded as one of the worst videogames ever created. Based upon the wildly popular film by Steven Spielberg, it was made in under six weeks by a single developer working on hardware that was, by 1982 standards, utterly archaic. The Atari 2600, the console on which the game was released, had just 128 bytes of RAM – not 128KB of RAM, but 128 bytes. Building the game on such notoriously underpowered hardware at such ridiculously short notice was a catastrophe. $20 million had been spent by Atari on acquiring the license, but only a few thousand dollars were invested into the actual development of the game, which was shipped in vast numbers. At least four million copies of E.T. were manufactured and though the game was initially a commercial success, selling upwards of one and a half million copies, it left a vast inventory unsold, which Atari eventually shipped to a landfill site in New Mexico and buried. The burial of hundreds of thousands of unsold E.T. cartridges was bad enough, but the game's quality was so notoriously poor that the real damage was caused by the copies which were actually sold. The game found its way into a million and a half homes in time for Christmas, 1982 and, in so doing, helped to sour the American public's taste for videogames, proving that a well-loved brand was no guarantee of quality. E.T., alongside several other notoriously bad Atari 2600 games from that same era, was an advertisement for why people should not want to play videogames and, in 1983, the market for computer games in the United States collapsed. To be sure, Atari was not the only company responsible for the market crash, but it was a massive contributor. By 1985 the value of videogame sales in the United States had declined from several billion dollars to perhaps one hundred million as consumers across the country lost trust and interest in the medium. E.T., for all its hype and initial success, practically destroyed a medium which had been growing massively since its explosion into American homes in the 1970s. E.T. was the anti-Pong. Other than a footnote in pop culture and business history, then, where does all of this leave the notoriously bad E.T.? Is it as bad as its reputation would have us believe; is it really the worst videogame ever made? The simple answer to that question is no. In spite of the fact that it was rushed to market and that it is marred by some terrible design choices, Atari's E.T. possesses a degree of charm, particularly when its six-week production cycle is taken into account. Granted, one must sometimes look deep to uncover it whilst forgiving some pretty significant flaws, but as a piece of retro Americana it carries appeal. The gameplay revolves around E.T.'s quest to assemble the phone that will allow him to ‘phone home'. In order to accomplish this, the titular character is able to move around a type of low-resolution quasi-open world. Players are not forced to go in any one particular direction, though there is little variety and little to see wherever they do go. As the player explores the world, such as it is, they find themselves chased by government agents, though the real threat faced by players is the game's extremely buggy nature. The map is littered with pits, wherein the pieces of the phone are to be found, but falling into such craters is as much a matter of chance as it is a matter of design.
Once in a pit, players must extend E.T.'s neck to ascend upwards but might well find that they become snared in a pit-loop, immediately falling back into the same hole from which they have emerged. Sometimes these loops can be broken, often they cannot... http://www.darrenreidhistory.co.uk

BSD Now
168: The Post Show Show

BSD Now

Play Episode Listen Later Nov 16, 2016 84:11


This week on BSDNow. Allan and I are back from MeetBSD! A good time was had by all, lots to discuss, so let's jump right into it on your place to B...SD! This episode was brought to you by Headlines Build a FreeBSD 11.0-release Openstack Image with bsd-cloudinit (https://raymii.org/s/tutorials/FreeBSD_11.0-release_Openstack_Image.html) We are going to prepare a FreeBSD image for Openstack deployment. We do this by creating a FreeBSD 11.0-RELEASE instance, installing it and converting it using bsd-cloudinit. We'll use the CloudVPS public Openstack cloud for this. Create an account there and install the Openstack command line tools, like nova, cinder and glance. A FreeBSD image with Cloud Init will automatically resize the disk to the size of the flavor and it will add your SSH key right at boot. You can use Cloud Config to execute a script at first boot, for example, to bootstrap your system into Puppet or Ansible. If you use Ansible to manage OpenStack instances you can integrate it without manually logging in or doing anything manually. Since FreeBSD 10.2-RELEASE there is an rc script which, when the file /firstboot exists, expands the root filesystem to the full disk. While bsd-cloudinit does this as well, if you don't need the whole cloudinit stack (when you use a static ssh key, for example), you can touch that file to make sure the disk is expanded at the first boot. A detailed tutorial that shows how to create customized cloud images using the FreeBSD install media. There is also the option of using the FreeBSD release tools to build custom cloud images in a more headless fashion. Someone should make a tutorial out of that. *** iXsystems Announces TrueOS Launch (https://www.ixsystems.com/blog/ixsystems-announces-trueos-launch/) As loyal listeners to this show, you've no doubt heard by now that we are in the middle of undergoing a shift in moving PC-BSD -> TrueOS. Last week during MeetBSD this was made “official” with iX issuing our press release and I was able to give a talk detailing many of the reasons and things going on with this change. The talk should be available online here soon(ish), but for a quick recap: TrueOS is moving to a rolling-release model based on FreeBSD -CURRENT. Lumina has become the default desktop for TrueOS. LibreSSL is enabled top to bottom. We are in the middle of working on conversion to OpenRC for run-control replacement. The TrueOS pico was announced, which is our “Thin-Client” solution, right now allowing you to use a TrueOS server paired with an RPI2 device.
*** Running FreeBSD 11 on Raspberry Pi (https://vzaigrin.wordpress.com/2016/10/16/running-freebsd-11-on-raspberry-pi/) This article covers some of the changes you will notice if you upgrade your RPI to FreeBSD 11.0 It covers some of the changes to WiFi in 11.0 Pro Tip: you can get a list of WiFi devices by doing: sysctl net.wlan.devices There are official binary packages for ARM with 11.0, so you can just ‘pkg install' your favourite apps Many of the LEDs are exposed via the /dev/led/ interface, which you can just echo 0 or 1 to, or use morse(6) to send a message gpioctl can be used to control the various GPIO pins The post also covers how to setup the real-time clock on the Raspberry Pi There is also limited support for adjusting the CPU frequency of the Pi There are also tips on configuring a one-wire temperature sensor *** void-zones-tools for FreeBSD (https://github.com/cyclaero/void-zones-tools) Adblock has been in the news a bit recently, with some of the more popular browser plugins now accepting brib^...contributions to permit specific ads through. Well today the ad-blockers strike back. We have a great tutorial up on GitHub which demonstrates one of the useful features of using Unbound in FreeBSD to do your own ad-blocking with void-zones. Specifically, void-zones are a way to return NXDOMAIN when DNS requests are made to known malicious or spam sites. Using void-zones-tools software will make managing this easy, by being able to pull in known lists of sites to block from several 3rd party curators. When coupled with our past tutorials on setting up your own FreeBSD router, this may become very useful for a lot of folks who want to do ad-blocking ad at a lower level, allowing it to filter smart-phones or any other devices on a network. *** News Roundup BSD Socket API Revamp (https://raw.githubusercontent.com/sustrik/dsock/master/rfc/sock-api-revamp-01.txt) Martin Sustrik has started a draft RFC to revamp the BSD Sockets API: The progress in the area of network protocols is distinctively lagging behind. While every hobbyist new to the art of programming writes and publishes their small JavaScript libraries, there's no such thing going on with network protocols. Indeed, it looks like the field of network protocols is dominated by big companies and academia, just like programming as a whole used to be before the advent of personal computers. the API proposed in this document doesn't try to virtualize all possible aspects of all possible protocols and provide a single set of functions to deal with all of them. Instead, it acknowledges how varied the protocol landscape is and how much the requirements for individual protocols differ. Therefore, it lets each protocol define its own API and asks only for bare minimum of standardised behaviour needed to implement protocol composability. As a consequence, the new API is much more lightweight and flexible than BSD socket API and allows to decompose today's monolithic protocol monsters into small single-purpose microprotocols that can be easily combined together to achieve desired functionality. 
The idea behind the new design is to allow the software author to define their own protocols via a generic interface, and easily stack them on top of the existing network protocols, be they the basic protocols like TCP/IP, or a layer 7 protocol like HTTP. Example of creating a stack of four protocols:

    int s1 = tcp_connect("192.168.0.111:5555");
    int s2 = foo_start(s1, arg1, arg2, arg3);
    int s3 = bar_start(s2);
    int s4 = baz_start(s3, arg4, arg5);

It also allows applying generic transformations to the protocols:

    int tcps = tcp_connect("192.168.0.111:80");
    /* Websockets is a connected protocol. */
    int ws = websock_connect(tcps);
    uint16_t compression_algorithm;
    mrecv(ws, &compression_algorithm, 2, -1);
    /* Compression socket is unconnected. */
    int cs = compress_start(ws, compression_algorithm);

*** Updated version of re(4) for DragonflyBSD (http://lists.dragonflybsd.org/pipermail/users/2016-November/313140.html) Sephe over at the Dragonfly project has issued a CFT for a newer version of the “re” driver. For those who don't know, that is for Realtek NICs; specifically, his updates add features: I have made an updated version of re(4), which leverages Realtek driver's chip/PHY reset/initialization code. I hope it can resolve all kinds of weirdness we encountered on this chip so far. Testers, you know what to do! Give this a whirl and let him know if you run into any new issues, or better yet, give feedback if it fixes some long-standing problems you've run into in the past. *** Hackathon reports from OpenBSD's B2K16 b2k16 hackathon report: Jeremy Evans on ports cleaning, progress on postgres, nginx, ruby and more (http://undeadly.org/cgi?action=article&sid=20161112112023) b2k16 hackathon report: Landry Breuil on various ports progress (http://undeadly.org/cgi?action=article&sid=20161112095902) b2k16 hackathon report: Antoine Jacoutot on GNOME's path forward, various ports progress (http://undeadly.org/cgi?action=article&sid=20161109030623) We have a trio of hackathon reports from OpenBSD's B2K16 (recently held in Budapest). First up - Jeremy Evans gives us his rundown, which starts with sweeping some of the cruft out of the barn: I started off b2k16 by channeling tedu@, and removing a lot of ports, including lang/ruby/2.0, lang/io, convertors/ruby-json, databases/dbic++, databases/ruby-swift, databases/ruby-jdbc-*, x11/ruby-profiligacy, and mail/ruby-mailfactory. After that, he talks about improvements made to postgres, nginx and ruby ports, fixing things such as pg_upgrade support, breaking nginx down into sub-packages and a major ruby update to about 50% of the packages. Next up - Landry Breuil tells us about his trip, which also started with some major ports pruning, including some stale XFCE bits and drupal6. One of the things he mentions is the Tor browser: Found finally some time again to review properly the pending port for Tor Browser, even if i don't like the way it is developed (600+ patches against upstream firefox-esr !? even if relationship is improving..) nor will endorse its use, i feel that the time that was spent on porting it and updating it and maintaining it shouldn't be lost, and it should get commited - there are only some portswise minor tweaks to fix. Had a bit of discussions about that with other porters... Lastly, Antoine Jacoutot gives us a smaller update on his work: First task of this hackathon was for Jasper and I to upgrade to GNOME 3.22.1 (version 3.22.2 hit the ports tree since).
As usual I already updated the core libraries a few days before so that we could start with a nice set of fully updated packages. It ended up being the fastest GNOME update ever, it all went very smoothly. We're still debating the future of GNOME on OpenBSD though. More and more features require systemd interfaces and without a replacement it may not make sense to keep it around. Implementing these interfaces requires time which Jasper and I don't really have these days... Anyway, we'll see. All-n-all, a good trip it sounds like with some much needed hacking taking place. Good to see the cruft getting cleaned up, along with some new exciting ports landing. *** July to September 2016 Status Report (https://www.freebsd.org/news/status/report-2016-07-2016-09.html) The latest FreeBSD quarterly status report is out It includes the induction of the new Core team, and reports from all of the other teams, including Release Engineering, Port Manager, and the FreeBSD Foundation Some other highlights: Capsicum Update The Graphics Stack on FreeBSD Using lld, the LLVM Linker, to Link FreeBSD VirtualBox Shared Folders Filesystem evdev support (better mouse, keyboard, and multi-touch support) ZFS Code Sync with Latest OpenZFS/Illumos The ARC now mostly stores compressed data, the same as is stored on disk, decompressing them on demand. The L2ARC now stores the same (compressed) data as the ARC without recompression, and its RAM usage was further reduced. The largest size of indirect block possible has been increased from 16KB to 128KB, and speculative prefetching of indirect blocks is now performed. Improved ordering of space allocation. The SHA-512t256 and Skein hashing algorithms are now supported. *** Beastie Bits How to Host Your Own Private GitHub with Gogs (http://www.cs.cmu.edu/afs/cs/user/predragp/www/git.html) Nvidia Adds Telemetry To Latest Drivers (https://yro.slashdot.org/story/16/11/07/1427257/nvidia-adds-telemetry-to-latest-drivers) KnoxBUG Upcoming Meeting (http://knoxbug.org/2016-11-29) Feedback/Questions William - Show Music (http://pastebin.com/skvEgkLK) Ray - Mounting a Cell Phone (http://pastebin.com/nMDeSFGM) Ron - TrueOS + Radeon (http://pastebin.com/p5bC1jKU) (Follow-up - He used nvidia card) Kurt - ZFS Migration (http://pastebin.com/ud9vEK2C) Matt Dillon (Yes that Matt Dillon) - vkernels (http://pastebin.com/VPQfsUks) ***

Anerzählt Archiv 1-300
128KB war mal viel Speicher

Anerzählt Archiv 1-300

Play Episode Listen Later Feb 16, 2016 7:07


When I was young, a modest 128KB of RAM was still enough to make us the kings of the schoolyard! Reason enough to spend a whole episode wallowing in memories...

Club de Jazz
Club de Jazz 6/07/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later Jul 4, 2011 133:07


Club de Jazz 6/07/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 22/06/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later Jun 20, 2011 121:34


Club de Jazz 22/06/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 15/06/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later Jun 13, 2011 128:29


Club de Jazz 15/06/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 8/06/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later Jun 6, 2011 131:28


Club de Jazz 8/06/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 1/06/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later May 30, 2011 119:45


Club de Jazz 1/06/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 25/05/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later May 23, 2011 129:14


Club de Jazz 25/05/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 18/05/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later May 16, 2011 120:28


Club de Jazz 18/05/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 11/05/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later May 9, 2011 187:34


Club de Jazz 11/05/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 4/05/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later May 3, 2011 164:51


Club de Jazz 4/05/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 27/04/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later Apr 25, 2011 120:44


Club de Jazz 27/04/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 20/04/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later Apr 18, 2011 132:11


Club de Jazz 20/04/2011 (128KB) www.elclubdejazz.com

Club de Jazz
Club de Jazz 13/04/2011 (128KB) www.elclubdejazz.com

Club de Jazz

Play Episode Listen Later Apr 11, 2011 119:00


Club de Jazz 13/04/2011 (128KB) www.elclubdejazz.com
