POPULARITY
Date: April 6, 2025, Sunday worship service. Messenger: Pastor 吉田耕三. Scripture: Mark 1:21-28. Length/size: 34:12 (31.31MB)
Date: August 18, 2024, Sunday worship service. Messenger: Pastor 門谷信愛希. Scripture: Esther 3:1-15. Length/size: 46:21 (21.31MB). Summary: Five years have passed since Esther was received by King Xerxes as his queen. In the royal court a new man of power, Haman, has begun to rise. Haman hates Mordecai, whose attitude does not change no matter whom he faces, and devises an astonishing plan to wipe out Mordecai's entire people. What drove him that far? And how should believers walk when facing rulers who behave as though they were God? We seek wisdom from Scripture.
Nicole and I talk about Be The Match, in other words becoming a donor to help fight lymphoma. Then: what kind of plumber to hire, and just how minimal is Eric? Start your Amazon shopping using our affiliate link: https://amazon.com/shop/gardenfork Get My Email Newsletter: https://www.gardenfork.tv/email/ My Stationary Bike https://amzn.to/3z0XQFN GardenFork receives compensation when you use our affiliate links. This is how we pay the bills ;) GF Sweaters and T Shirts https://teespring.com/stores/gardenfork-2 Email me: radio@gardenfork.tv Watch us on YouTube: www.youtube.com/gardenfork Music used on the podcast is licensed by AudioBlocks and Unique Tracks ©2023 GardenFork Media LLC All Rights Reserved GardenFork Radio is produced in Brooklyn, NY Author : Eric Rochow Episode Type : full Episode : 643 Rating : Clean File Info : audio/mpeg | mp3 | 31MB
Date: January 7, 2024, Sunday worship service. Messenger: Pastor 栗原延元. Scripture: Jeremiah 3:22-23. Length/size: 30:55 (28.31MB)
Date: May 22, 2022, Sunday worship service. Messenger: Pastor 門谷信愛希. Scripture: Genesis 1:1. Length/size: 40:45 (37.31MB)
Date: October 24, 2021, Sunday worship service. Messenger: Pastor 吉田耕三. Scripture: Isaiah 60:1-5. Length/size: 35:17 (32.31MB)
Date: September 26, 2021, Sunday worship service. Messenger: Pastor 吉田耕三. Scripture: Psalm 53:1-6. Length/size: 34:12 (31.31MB)
Date: May 31, 2020, Sunday worship service. Messenger: Pastor 門谷信愛希. Scripture: Acts 16:6-10. Length/size: 44:22 (20.31MB). Summary: The "Holy Spirit" is known as part of the triune God and as the God who dwells within believers. The day the Spirit descended on the world falls on the fiftieth day counted from the resurrection of Jesus Christ, which is why it is called "Pentecost." This time we open a passage in which "God the Holy Spirit" is concretely at work in the real, everyday life of believers. How does the Spirit speak to us, and how can we come to know God's will through the Spirit? We look to Scripture for insight.
What ZFS blockpointers are, zero-day rewards offered, KDE on FreeBSD status, new FreeBSD core team, NetBSD WiFi refresh, poor man’s CI, and the power of Ctrl+T. ##Headlines What ZFS block pointers are and what’s in them I’ve mentioned ZFS block pointers in the past; for example, when I wrote about some details of ZFS DVAs, I said that DVAs are embedded in block pointers. But I’ve never really looked carefully at what is in block pointers and what that means and implies for ZFS. The very simple way to describe a ZFS block pointer is that it’s what ZFS uses in places where other filesystems would simply put a block number. Just like block numbers but unlike things like ZFS dnodes, a block pointer isn’t a separate on-disk entity; instead it’s an on disk data format and an in memory structure that shows up in other things. To quote from the (draft and old) ZFS on-disk specification (PDF): A block pointer (blkptr_t) is a 128 byte ZFS structure used to physically locate, verify, and describe blocks of data on disk. Block pointers are embedded in any ZFS on disk structure that points directly to other disk blocks, both for data and metadata. For instance, the dnode for a file contains block pointers that refer to either its data blocks (if it’s small enough) or indirect blocks, as I saw in this entry. However, as I discovered when I paid attention, most things in ZFS only point to dnodes indirectly, by giving their object number (either in a ZFS filesystem or in pool-wide metadata). So what’s in a block pointer itself? You can find the technical details for modern ZFS in spa.h, so I’m going to give a sort of summary. A regular block pointer contains: various metadata and flags about what the block pointer is for and what parts of it mean, including what type of object it points to. Up to three DVAs that say where to actually find the data on disk. There can be more than one DVA because you may have set the copies property to 2 or 3, or this may be metadata (which normally has two copies and may have more for sufficiently important metadata). The logical size (size before compression) and ‘physical’ size (the nominal size after compression) of the disk block. The physical size can do odd things and is not necessarily the asize (allocated size) for the DVA(s). The txgs that the block was born in, both logically and physically (the physical txg is apparently for dva[0]). The physical txg was added with ZFS deduplication but apparently also shows up in vdev removal. The checksum of the data the block pointer describes. This checksum implicitly covers the entire logical size of the data, and as a result you must read all of the data in order to verify it. This can be an issue on raidz vdevs or if the block had to use gang blocks. Just like basically everything else in ZFS, block pointers don’t have an explicit checksum of their contents. Instead they’re implicitly covered by the checksum of whatever they’re embedded in; the block pointers in a dnode are covered by the overall checksum of the dnode, for example. Block pointers must include a checksum for the data they point to because such data is ‘out of line’ for the containing object. (The block pointers in a dnode don’t necessarily point straight to data. If there’s more than a bit of data in whatever the dnode covers, the dnode’s block pointers will instead point to some level of indirect block, which itself has some number of block pointers.) There is a special type of block pointer called an embedded block pointer. 
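Before turning to embedded block pointers, here is a rough C sketch of the regular block pointer fields just described. To be clear, this is not the real blkptr_t from spa.h, which packs everything into 128 bytes with bit-field encodings; the field names and types below are illustrative only, meant to make the list above concrete.

```c
#include <stdint.h>

/* Illustrative only -- NOT the on-disk blkptr_t from spa.h. The real
 * structure packs all of this into 128 bytes using bit-field encodings;
 * this sketch just gives names to the pieces described above. */
struct example_dva {
    uint64_t vdev;          /* which top-level vdev holds this copy */
    uint64_t offset;        /* offset of the copy within that vdev */
    uint64_t asize;         /* allocated (on-disk) size of the copy */
};

struct example_block_pointer {
    struct example_dva dva[3];   /* up to three DVAs (copies=2/3, or metadata) */
    uint8_t  object_type;        /* what kind of object this points to */
    uint8_t  level;              /* 0 = data block, >0 = indirect block */
    uint8_t  checksum_type;      /* e.g. fletcher4, sha256 */
    uint8_t  compression;        /* e.g. lz4, or off */
    uint64_t lsize;              /* logical size (before compression) */
    uint64_t psize;              /* "physical" size (nominal, after compression) */
    uint64_t logical_birth_txg;  /* txg in which this pointer was written */
    uint64_t physical_birth_txg; /* added with dedup; tracks dva[0] */
    uint64_t checksum[4];        /* checksum of the pointed-to data; verifying
                                  * it means reading the whole block */
};
```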
Embedded block pointers directly contain up to 112 bytes of data; apart from the data, they contain only the metadata fields and a logical birth txg. As with conventional block pointers, this data is implicitly covered by the checksum of the containing object. Since block pointers directly contain the address of things on disk (in the form of DVAs), they have to change any time that address changes, which means any time ZFS does its copy on write thing. This forces a change in whatever contains the block pointer, which in turn ripples up to another block pointer (whatever points to said containing thing), and so on until we eventually reach the Meta Object Set and the uberblock. How this works is a bit complicated, but ZFS is designed to generally make this a relatively shallow change with not many levels of things involved (as I discovered recently). As far as I understand things, the logical birth txg of a block pointer is the transaction group in which the block pointer was allocated. Because of ZFS’s copy on write principle, this means that nothing underneath the block pointer has been updated or changed since that txg; if something changed, it would have been written to a new place on disk, which would have forced a change in at least one DVA and thus a ripple of updates that would update the logical birth txg. However, this doesn’t quite mean what I used to think it meant because of ZFS’s level of indirection. If you change a file by writing data to it, you will change some of the file’s block pointers, updating their logical birth txg, and you will change the file’s dnode. However, you won’t change any block pointers and thus any logical birth txgs for the filesystem directory the file is in (or anything else up the directory tree), because the directory refers to the file through its object number, not by directly pointing to its dnode. You can still use logical birth txgs to efficiently find changes from one txg to another, but you won’t necessarily get a filesystem level view of these changes; instead, as far as I can see, you will basically get a view of what object(s) in a filesystem changed (effectively, what inode numbers changed). (ZFS has an interesting hack to make things like ‘zfs diff’ work far more efficiently than you would expect in light of this, but that’s going to take yet another entry to cover.) ###Rewards of Up to $500,000 Offered for FreeBSD, OpenBSD, NetBSD, Linux Zero-Days Exploit broker Zerodium is offering rewards of up to $500,000 for zero-days in UNIX-based operating systems like OpenBSD, FreeBSD, NetBSD, but also for Linux distros such as Ubuntu, CentOS, Debian, and Tails. The offer, first advertised via Twitter earlier this week, is available as part of the company’s latest zero-day acquisition drive. Zerodium is known for buying zero-days and selling them to government agencies and law enforcement. The company runs a regular zero-day acquisition program through its website, but it often holds special drives with more substantial rewards when it needs zero-days of a specific category. BSD zero-day rewards will be on par with Linux payouts The US-based company held a previous drive with increased rewards for Linux zero-days in February, with rewards going as high as $45,000. In another zero-day acquisition drive announced on Twitter this week, the company said it was looking again for Linux zero-days, but also for exploits targeting BSD systems. This time around, rewards can go up to $500,000, for the right exploit. 
Zerodium told Bleeping Computer they'll be aligning the temporary rewards for BSD systems with their usual payouts for Linux distros. The company's usual payouts for Linux privilege escalation exploits can range from $10,000 to $30,000. Local privilege escalation (LPE) rewards can even reach $100,000 for "an exploit with an exceptional quality and coverage," such as, for example, a Linux kernel exploit affecting all major distributions. Payouts for Linux remote code execution (RCE) exploits can bring in from $50,000 to $500,000 depending on the targeted software/service and its market share. The highest rewards are usually awarded for LPEs and RCEs affecting CentOS and Ubuntu distros. Zero-day price varies based on exploitation chain The acquisition price of a submitted zero-day is directly tied to its requirements in terms of user interaction (no click, one click, two clicks, etc.), Zerodium said. Other factors include the exploit reliability, its success rate, the number of vulnerabilities chained together for the final exploit to work (more chained bugs means more chances for the exploit to break unexpectedly), and the OS configuration needed for the exploit to work (exploits are valued more if they work against default OS configs). Zero-days in servers "can reach exceptional amounts" "Price difference between systems is mostly driven by market shares," Zerodium founder Chaouki Bekrar told Bleeping Computer via email. Asked about the logic behind these acquisition drives that pay increased rewards, Bekrar told Bleeping Computer the following: "Our aim is to always have, at any time, two or more fully functional exploits for every major software, hardware, or operating systems, meaning that from time to time we would promote a specific software/system on our social media to acquire new codes and strengthen our existing capabilities or extend them." "We may also react to customers' requests and their operational needs," Bekrar said. It's becoming a crowded market Since Zerodium drew everyone's attention to the exploit brokerage market in 2015, the market has gotten more and more crowded, but also more sleazy, with some companies being accused of selling zero-days to government agencies in countries with oppressive or dictatorial regimes, where they are often used against political opponents, journalists, and dissidents, instead of going after real criminals. The latest company to break into the zero-day brokerage market is Crowdfense, which recently launched an acquisition program with prizes of $10 million, of which it has already paid $4.5 million to researchers. Twitter Announcement Digital Ocean http://do.co/bsdnow ###KDE on FreeBSD – June 2018 The KDE-FreeBSD team (a half-dozen hardy individuals, with varying backgrounds and varying degrees of involvement depending on how employment is doing) has a status message in the #kde-freebsd channel on freenode. Right now it looks like this: http://FreeBSD.kde.org | Bleeding edge http://FreeBSD.kde.org/area51.php | Released: Qt 5.10.1, KDE SC 4.14.3, KF5 5.46.0, Applications 18.04.1, Plasma-5.12.5, Kdevelop-5.2.1, Digikam-5.9.0 It's been a while since I wrote about KDE on FreeBSD, what with Calamares and third-party software happening as well. We're better at keeping the IRC topic up-to-date than a lot of other sources of information (e.g. the FreeBSD quarterly reports, or the f.k.o website, which I'll just dash off and update after writing this).
In no particular order: Qt 5.10 is here, in a FrankenEngine incarnation: we still use WebEngine from Qt 5.9 because — like I've said before — WebEngine is such a gigantic pain in the butt to update with all the necessary patches to get it to compile. Our collection of downstream patches to Qt 5.10 is growing, slowly. None of them are upstreamable (e.g. libressl support) though. KDE Frameworks releases are generally pushed to ports within a week or two of release. Actually, now that there is a bigger stack of KDE software in FreeBSD ports, the updates take longer because we have to do exp-runs. Similarly, Applications and Plasma releases are reasonably up-to-date. We dodged a bullet by not jumping on Plasma 5.13 right away, I see. Tobias is the person doing almost all of the drudge-work of these updates; he deserves a pint of something in Vienna this summer. The freebsd.kde.org website has been slightly updated; it was terribly out-of-date. So we're mostly-up-to-date, and mostly all packaged up and ready to go. Much of my day is spent in VMs packaged by other people, but it's good to have a full KDE developer environment outside of them as well. (PS. Gotta hand it to Tomasz for the amazing application for downloading and displaying a flamingo … niche usecases FTW) ##News Roundup New FreeBSD Core Team Elected Active committers to the project have elected your tenth FreeBSD Core Team. Allan Jude (allanjude) Benedict Reuschling (bcr) Brooks Davis (brooks) Hiroki Sato (hrs) Jeff Roberson (jeff) John Baldwin (jhb) Kris Moore (kmoore) Sean Chittenden (seanc) Warner Losh (imp) Let's extend our gratitude to the outgoing Core Team members: Baptiste Daroussin (bapt) Benno Rice (benno) Ed Maste (emaste) George V. Neville-Neil (gnn) Matthew Seaman (matthew) Matthew, after having served as the Core Team Secretary for the past four years, will be stepping down from that role. The Core Team would also like to thank Dag-Erling Smørgrav for running a flawless election. To read about the responsibilities of the Core Team, refer to https://www.freebsd.org/administration.html#t-core. ###NetBSD WiFi refresh The NetBSD Foundation is pleased to announce a summer 2018 contract with Philip Nelson (phil%NetBSD.org@localhost) to update the IEEE 802.11 stack, basing the update on the FreeBSD current code. The goals of the project are: Minimizing the differences between the FreeBSD and NetBSD IEEE 802.11 stacks so future updates are easier. Adding support for the newer protocols 802.11n and 802.11ac. Improving SMP support in the IEEE 802.11 stack. Adding Virtual Access Point (VAP) support. Updating as many NIC drivers as time permits for the updated IEEE 802.11 stack and VAP changes. Status reports will be posted to tech-net%NetBSD.org@localhost every other week while the contract is active. iXsystems ###Poor Man's CI - Hosted CI for BSD with shell scripting and duct tape Poor Man's CI (PMCI - Poor Man's Continuous Integration) is a collection of scripts that, taken together, work as a simple CI solution that runs on Google Cloud. While there are many advanced hosted CI systems today, and many of them are free for open source projects, none of them seem to offer a solution for the BSD operating systems (FreeBSD, NetBSD, OpenBSD, etc.) The architecture of Poor Man's CI is system agnostic. However, in the implementation provided in this repository the only supported systems are FreeBSD and NetBSD. Support for additional systems is possible. Poor Man's CI runs on the Google Cloud.
It is possible to set it up so that the service fits within the Google Cloud "Always Free" limits. In doing so the provided CI is not only hosted, but is also free! (Disclaimer: I am not affiliated with Google and do not otherwise endorse their products.) ARCHITECTURE A CI solution listens for "commit" (or more usually "push") events, builds the associated repository at the appropriate place in its history and reports the results. Poor Man's CI implements this very basic CI scenario using a simple architecture, which we present in this section. Poor Man's CI consists of the following components and their interactions: Controller: Controls the overall process of accepting GitHub push events and starting builds. The Controller runs in the Cloud Functions environment and is implemented by the files in the controller source directory. It consists of the following components: Listener: Listens for GitHub push events and posts them as work messages to the workq PubSub. Dispatcher: Receives work messages from the workq PubSub and a free instance name from the Builder Pool. It instantiates a builder instance named name in the Compute Engine environment and passes it the link of a repository to build. Collector: Receives done messages from the doneq PubSub and posts the freed instance name back to the Builder Pool. PubSub Topics: workq: Transports work messages that contain the link of the repository to build. poolq: Implements the Builder Pool, which contains the names of available builder instances. To acquire a builder name, pull a message from the poolq. To release a builder name, post it back into the poolq. doneq: Transports done messages (builder instance terminate and delete events). These messages contain the names of freed builder instances. builder: A builder is a Compute Engine instance that performs a build of a repository and shuts down when the build is complete. A builder is instantiated from a VM image and a startx (startup-exit) script. Build Logs: A Storage bucket that contains the logs of builds performed by builder instances. Logging Sink: A Logging Sink captures builder instance terminate and delete events and posts them into the doneq. BUGS The Builder Pool is currently implemented as a PubSub; messages in the PubSub contain the names of available builder instances. Unfortunately a PubSub retains its messages for a maximum of 7 days. It is therefore possible that messages will be discarded and that your PMCI deployment will suddenly find itself out of builder instances. If this happens you can reseed the Builder Pool by running the commands below. However this is a serious BUG that should be fixed. For a related discussion see https://tinyurl.com/ybkycuub. $ ./pmci queuepost poolq builder0 # ./pmci queuepost poolq builder1 # ... repeat for as many builders as you want The Dispatcher is implemented as a Retry Background Cloud Function. It accepts work messages from the workq and attempts to pull a free name from the poolq. If that fails it returns an error, which instructs the infrastructure to retry. Because the infrastructure does not provide any retry controls, this currently happens immediately and the Dispatcher spins unproductively. This is currently mitigated by a "sleep" (setTimeout), but the Cloud Functions system still counts the Function as running and charges it accordingly. While this fits within the "Always Free" limits, it is something that should eventually be fixed (perhaps by the PubSub team). For a related discussion see https://tinyurl.com/yb2vbwfd.
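The poolq acquire/release idea above is simply a queue used as a pool: pulling a message acquires a builder name, posting one releases it. As a local, hedged analogue (the real thing uses Google Cloud PubSub, not in-process memory, and the builder names below are invented), here is a tiny C sketch of that protocol:

```c
#include <stdio.h>

/* In-process stand-in for the poolq described above: a fixed ring of
 * builder names. pool_pull() == acquiring a builder, pool_post() ==
 * releasing it. The real PMCI does this with PubSub messages. */
#define POOL_MAX 4

static const char *pool[POOL_MAX];
static int head, count;

static int pool_post(const char *name)            /* release a builder */
{
    if (count == POOL_MAX)
        return -1;
    pool[(head + count++) % POOL_MAX] = name;
    return 0;
}

static const char *pool_pull(void)                /* acquire a builder */
{
    if (count == 0)
        return NULL;                              /* no free builder: retry later */
    count--;
    const char *name = pool[head];
    head = (head + 1) % POOL_MAX;
    return name;
}

int main(void)
{
    pool_post("builder0");                        /* seeding, as described above */
    pool_post("builder1");
    const char *b = pool_pull();                  /* dispatcher takes a builder */
    printf("dispatching build on %s\n", b ? b : "(none)");
    if (b)
        pool_post(b);                             /* collector returns it when done */
    return 0;
}
```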
###The Power of Ctrl-T Did you know that you can check what a process is doing by pressing CTRL+T? Has it ever happened to you that you were waiting for something to finish that can take a lot of time, with no easy way to check its status? Like a dd, cp, mv and many others. All you have to do is press CTRL+T in the terminal where the process is running. This will output what's happening and will not interrupt or mess with it in any way. This causes the operating system to send the SIGINFO signal to the foreground process. On FreeBSD it looks like this: ping pingtest.com PING pingtest.com (5.22.149.135): 56 data bytes 64 bytes from 5.22.149.135: icmp_seq=0 ttl=51 time=86.232 ms 64 bytes from 5.22.149.135: icmp_seq=1 ttl=51 time=85.477 ms 64 bytes from 5.22.149.135: icmp_seq=2 ttl=51 time=85.493 ms 64 bytes from 5.22.149.135: icmp_seq=3 ttl=51 time=85.211 ms 64 bytes from 5.22.149.135: icmp_seq=4 ttl=51 time=86.002 ms load: 1.12 cmd: ping 94371 [select] 4.70r 0.00u 0.00s 0% 2500k 5/5 packets received (100.0%) 85.211 min / 85.683 avg / 86.232 max 64 bytes from 5.22.149.135: icmp_seq=5 ttl=51 time=85.725 ms 64 bytes from 5.22.149.135: icmp_seq=6 ttl=51 time=85.510 ms As you can see it not only outputs the name of the running command but the following parameters as well: 94371 – PID 4.70r – how long the process has been running 0.00u – user time 0.00s – system time 0% – CPU usage 2500k – resident set size of the process, or RSS > An even better example is with the following cp command: cp FreeBSD-11.1-RELEASE-amd64-dvd1.iso /dev/null load: 0.99 cmd: cp 94412 [runnable] 1.61r 0.00u 0.39s 3% 3100k FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 15% load: 0.91 cmd: cp 94412 [runnable] 2.91r 0.00u 0.80s 6% 3104k FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 32% load: 0.91 cmd: cp 94412 [runnable] 4.20r 0.00u 1.23s 9% 3104k FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 49% load: 0.91 cmd: cp 94412 [runnable] 5.43r 0.00u 1.64s 11% 3104k FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 64% load: 1.07 cmd: cp 94412 [runnable] 6.65r 0.00u 2.05s 13% 3104k FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 79% load: 1.07 cmd: cp 94412 [runnable] 7.87r 0.00u 2.43s 15% 3104k FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 95% > I pressed CTRL+T six times. Without that, all the output would have been the first line. > Another example of how the process changes states: wget https://download.freebsd.org/ftp/releases/amd64/amd64/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-dvd1.iso --2018-06-17 18:47:48-- https://download.freebsd.org/ftp/releases/amd64/amd64/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-dvd1.iso Resolving download.freebsd.org (download.freebsd.org)… 96.47.72.72, 2610:1c1:1:606c::15:0 Connecting to download.freebsd.org (download.freebsd.org)|96.47.72.72|:443… connected.
HTTP request sent, awaiting response… 200 OK Length: 3348465664 (3.1G) [application/octet-stream] Saving to: 'FreeBSD-11.1-RELEASE-amd64-dvd1.iso' FreeBSD-11.1-RELEASE-amd64-dvd1.iso 1%[> ] 41.04M 527KB/s eta 26m 49sload: 4.95 cmd: wget 10152 waiting 0.48u 0.72s FreeBSD-11.1-RELEASE-amd64-dvd1.iso 1%[> ] 49.41M 659KB/s eta 25m 29sload: 12.64 cmd: wget 10152 waiting 0.55u 0.85s FreeBSD-11.1-RELEASE-amd64-dvd1.iso 2%[=> ] 75.58M 6.31MB/s eta 20m 6s load: 11.71 cmd: wget 10152 running 0.73u 1.19s FreeBSD-11.1-RELEASE-amd64-dvd1.iso 2%[=> ] 85.63M 6.83MB/s eta 18m 58sload: 11.71 cmd: wget 10152 waiting 0.80u 1.32s FreeBSD-11.1-RELEASE-amd64-dvd1.iso 14%[==============> ] 460.23M 7.01MB/s eta 9m 0s > The bad news is that CTRL+T doesn't work with the Linux kernel, but you can use it on macOS/OS X: ---> Fetching distfiles for gmp ---> Attempting to fetch gmp-6.1.2.tar.bz2 from https://distfiles.macports.org/gmp ---> Verifying checksums for gmp ---> Extracting gmp ---> Applying patches to gmp ---> Configuring gmp load: 2.81 cmd: clang 74287 running 0.31u 0.28s > PS: If I recall correctly Feld showed me CTRL+T, thank you! (A minimal SIGINFO handler sketch follows after these notes.) Beastie Bits Half billion tries for a HAMMER2 bug (http://lists.dragonflybsd.org/pipermail/commits/2018-May/672263.html) OpenBSD with various Desktops OpenBSD 6.3 running twm window manager (https://youtu.be/v6XeC5wU2s4) OpenBSD 6.3 jwm and rox desktop (https://youtu.be/jlSK2oi7CBc) OpenBSD 6.3 cwm youtube video (https://youtu.be/mgqNyrP2CPs) pf: Increase default state table size (https://svnweb.freebsd.org/base?view=revision&revision=336221) *** Tarsnap Feedback/Questions Ben Sims - Full feed? (http://dpaste.com/3XVH91T#wrap) Scott - Questions and Comments (http://dpaste.com/08P34YN#wrap) Troels - Features of FreeBSD 11.2 that deserve a mention (http://dpaste.com/3DDPEC2#wrap) Fred - Show Ideas (http://dpaste.com/296ZA0P#wrap) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) iXsystems It's all NAS (https://www.ixsystems.com/blog/its-all-nas/)
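Tying the Ctrl+T segment above back to code: on the BSDs and macOS, pressing Ctrl+T makes the terminal deliver SIGINFO to the foreground process, and a long-running program can catch it to print its own status line, just as ping, cp, and wget do. A minimal hedged sketch in C; the loop and the progress message are invented for illustration:

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* SIGINFO is defined on the BSDs and macOS; it does not exist on Linux,
 * which is why Ctrl+T does nothing there. */
static volatile sig_atomic_t got_siginfo;

static void on_siginfo(int sig)
{
    (void)sig;
    got_siginfo = 1;            /* async-signal-safe: only set a flag */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_siginfo;
    sigaction(SIGINFO, &sa, NULL);

    for (long i = 0; ; i++) {   /* stand-in for real work */
        if (got_siginfo) {
            got_siginfo = 0;
            fprintf(stderr, "progress: %ld iterations\n", i);
        }
        usleep(1000);
    }
}
```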
Date: June 10, 2018, Sunday worship service. Messenger: Pastor 門谷信愛希. Scripture: Genesis 21:1-7. Length/size: 44:22 (20.31MB). Summary:
Date: January 14, 2018, Sunday worship service. Messenger: Pastor 吉田耕三. Scripture: Isaiah 10:5-34. Length/size: 33:00 (31MB)
Date: October 15, 2017, Sunday worship service. Messenger: Pastor 吉田耕三. Scripture: Isaiah 7:1-9. Length/size: 33:00 (31MB)
Date: August 6, 2017, Sunday worship service. Messenger: Pastor 門谷信愛希. Scripture: Genesis 18:16-33. Length/size: 44:21 (20.31MB). Summary:
Today on BSD Now, the latest DragonFly BSD release, RAID-Z performance, another OpenSSL vulnerability, and more; all this week on BSD Now. This episode was brought to you by Headlines DragonFly BSD 4.8 is released (https://www.dragonflybsd.org/release48/) Improved kernel performance This release further localizes cache lines and reduces/removes cache ping-ponging on globals. For bulk builds on many-cores or multi-socket systems, we have around a 5% improvement, and certain subsystems such as namecache lookups and exec()s see massive focused improvements. See the corresponding mailing list post with details. Support for eMMC booting, and mobile and high-performance PCIe SSDs This kernel release includes support for eMMC storage as the boot device. We also sport a brand new SMP-friendly, high-performance NVMe SSD driver (PCIe SSD storage). Initial device test results are available. EFI support The installer can now create an EFI or legacy installation. Numerous adjustments have been made to userland utilities and the kernel to support EFI as a mainstream boot environment. The /boot filesystem may now be placed either in its own GPT slice, or in a DragonFly disklabel inside a GPT slice. DragonFly, by default, creates a GPT slice for all of DragonFly and places a DragonFly disklabel inside it with all the standard DFly partitions, such that the disk names are roughly the same as they would be in a legacy system. Improved graphics support The i915 driver has been updated to match the version found with the Linux 4.6 kernel. Broadwell and Skylake processor users will see improvements. Other user-affecting changes Kernel is now built using -O2. VKernels now use COW, so multiple vkernels can share one disk image. powerd() is now sensitive to time and temperature changes. Non-boot-filesystem kernel modules can be loaded in rc.conf instead of loader.conf. *** #8005 poor performance of 1MB writes on certain RAID-Z configurations (https://github.com/openzfs/openzfs/pull/321) Matt Ahrens posts a new patch for OpenZFS Background: RAID-Z requires that space be allocated in multiples of P+1 sectors, because this is the minimum size block that can have the required amount of parity. Thus blocks on RAIDZ1 must be allocated in a multiple of 2 sectors; on RAIDZ2 multiple of 3; and on RAIDZ3 multiple of 4. A sector is a unit of 2^ashift bytes, typically 512B or 4KB. To satisfy this constraint, the allocation size is rounded up to the proper multiple, resulting in up to 3 "pad sectors" at the end of some blocks. The contents of these pad sectors are not used, so we do not need to read or write these sectors. However, some storage hardware performs much worse (around 1/2 as fast) on mostly-contiguous writes when there are small gaps of non-overwritten data between the writes. Therefore, ZFS creates "optional" zio's when writing RAID-Z blocks that include pad sectors. If writing a pad sector will fill the gap between two (required) writes, we will issue the optional zio, thus doubling performance. The gap-filling performance improvement was introduced in July 2009. Writing the optional zio is done by the I/O aggregation code in vdev_queue.c. The problem is that it is also subject to the limit on the size of aggregate writes, zfs_vdev_aggregation_limit, which is by default 128KB. For a given block, if the amount of data plus padding written to a leaf device exceeds zfs_vdev_aggregation_limit, the optional zio will not be written, resulting in a ~2x performance degradation.
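To make the padding rule above concrete, here is a small hedged C sketch. It is not the actual OpenZFS vdev_raidz code and it ignores how parity columns are laid out; it shows only the rounding described: allocations are padded up to a multiple of nparity + 1 sectors, leaving up to nparity unused pad sectors.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Round an allocation (in ashift-sized sectors) up to a multiple of
 * nparity + 1, which is what creates the "pad sectors" described above. */
static uint64_t raidz_round_up_sectors(uint64_t alloc_sectors, unsigned nparity)
{
    uint64_t mult = (uint64_t)nparity + 1;
    return (alloc_sectors + mult - 1) / mult * mult;
}

int main(void)
{
    /* example: an allocation that needs 5 sectors of data plus parity */
    for (unsigned p = 1; p <= 3; p++) {
        uint64_t rounded = raidz_round_up_sectors(5, p);
        printf("RAIDZ%u: 5 sectors -> %" PRIu64 " allocated (%" PRIu64 " pad)\n",
               p, rounded, rounded - 5);
    }
    return 0;
}
```

It is the optional writes for these pad sectors that can be dropped when data plus padding exceeds the 128KB zfs_vdev_aggregation_limit, which is the degradation the issue describes.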
The solution is to aggregate optional zio's regardless of the aggregation size limit. As you can see from the graphs, this can make a large difference in performance. I encourage you to read the entire commit message, it is well written and very detailed. *** Can you spot the OpenSSL vulnerability (https://guidovranken.wordpress.com/2017/01/28/can-you-spot-the-vulnerability/) This code was introduced in OpenSSL 1.1.0d, which was released a couple of days ago. This is in the server SSL code, ssl/statem/statem_srvr.c, ssl_bytes_to_cipher_list(), and can easily be reached remotely. Can you spot the vulnerability? So there is a loop, and within that loop we have an 'if' statement that tests a number of conditions. If any of those conditions fail, OPENSSL_free(raw) is called. But raw isn't the address that was allocated; raw is incremented every loop. Hence, there is a remote invalid free vulnerability. But not quite. None of those checks in the 'if' statement can actually fail; earlier on in the function, there is a check that verifies that the packet contains at least 1 byte, so PACKET_get_1 cannot fail. Furthermore, earlier in the function it is verified that the packet length is a multiple of 3, hence PACKET_copy_bytes and PACKET_forward cannot fail. So, does the code do what the original author thought, or expected it to do? But what about the next person that modifies that code, maybe changing or removing one of the earlier checks, allowing one of those if conditions to fail, and execute the bad code? Nonetheless OpenSSL has acknowledged that the OPENSSL_free line needs a rewrite: Pull Request #2312 (https://github.com/openssl/openssl/pull/2312) PS I'm not posting this to ridicule the OpenSSL project or their programming skills. I just like reading code and finding corner cases that impact security, which is an effort that ultimately works in everybody's best interest, and I like to share what I find. Programming is a very difficult enterprise and everybody makes mistakes. Thanks to Guido Vranken for the sharp eye and the blog post ***
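For readers who did not click through, here is a stripped-down, hedged C illustration of the shape of the bug described above. This is not the actual OpenSSL code; the function and names are invented. The bug is an error path that frees the cursor pointer (raw), which is advanced inside the loop, instead of the original allocation; the sketch shows the safe form, with a comment marking where the original went wrong.

```c
#include <stdlib.h>
#include <string.h>

/* Invented example, NOT OpenSSL code: the same "free an advanced
 * pointer" pattern described above. */
int copy_triplets(const unsigned char *in, size_t len, unsigned char **out)
{
    unsigned char *buf, *raw;
    size_t i;

    if (len == 0 || len % 3 != 0)        /* length is checked up front... */
        return 0;

    buf = raw = malloc(len);
    if (raw == NULL)
        return 0;

    for (i = 0; i < len; i += 3, raw += 3) {
        /* ...so, like the checks in the post, this condition can never
         * actually fail. But if a later edit made it reachable, freeing
         * `raw` here would be an invalid free, since `raw` has been
         * advanced; the base pointer must be freed instead. */
        if (!memcpy(raw, in + i, 3)) {
            free(buf);                   /* correct: free(buf), not free(raw) */
            return 0;
        }
    }
    *out = buf;
    return 1;
}
```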
Research Debt (http://distill.pub/2017/research-debt/) I found this article interesting as it relates to not just research, but a lot of technical areas in general. Achieving a research-level understanding of most topics is like climbing a mountain. Aspiring researchers must struggle to understand vast bodies of work that came before them, to learn techniques, and to gain intuition. Upon reaching the top, the new researcher begins doing novel work, throwing new stones onto the top of the mountain and making it a little taller for whoever comes next. People expect the climb to be hard. It reflects the tremendous progress and cumulative effort that's gone into the research. The climb is seen as an intellectual pilgrimage, the labor a rite of passage. But the climb could be massively easier. It's entirely possible to build paths and staircases into these mountains. The climb isn't something to be proud of. The climb isn't progress: the climb is a mountain of debt. Programmers talk about technical debt: there are ways to write software that are faster in the short run but problematic in the long run. Poor Exposition – Often, there is no good explanation of important ideas and one has to struggle to understand them. This problem is so pervasive that we take it for granted and don't appreciate how much better things could be. Undigested Ideas – Most ideas start off rough and hard to understand. They become radically easier as we polish them, developing the right analogies, language, and ways of thinking. Bad abstractions and notation – Abstractions and notation are the user interface of research, shaping how we think and communicate. Unfortunately, we often get stuck with the first formalisms to develop even when they're bad. For example, an object with extra electrons is negative, and pi is wrong Noise – Being a researcher is like standing in the middle of a construction site. Countless papers scream for your attention and there's no easy way to filter or summarize them. We think noise is the main way experts experience research debt. There's a tradeoff between the energy put into explaining an idea, and the energy needed to understand it. On one extreme, the explainer can painstakingly craft a beautiful explanation, leading their audience to understanding without even realizing it could have been difficult. On the other extreme, the explainer can do the absolute minimum and abandon their audience to struggle. This energy is called interpretive labor Research distillation is the opposite of research debt. It can be incredibly satisfying, combining deep scientific understanding, empathy, and design to do justice to our research and lay bare beautiful insights. Distillation is also hard. It's tempting to think of explaining an idea as just putting a layer of polish on it, but good explanations often involve transforming the idea. This kind of refinement of an idea can take just as much effort and deep understanding as the initial discovery. + The distillation can often require an entirely different set of skills than the original creation of the idea. Almost all of the BSD projects have some great ideas or subsystems that just need distillation into easy to understand and use platforms or tools. Like the theoretician, the experimentalist or the research engineer, the research distiller is an integral role for a healthy research community. Right now, almost no one is filling it. Anyway, if that bit piqued your interest, go read the full article and the suggested further reading. *** News Roundup And then the murders began. (https://blather.michaelwlucas.com/archives/2902) A whole bunch of people have pointed me at articles like this one (http://thehookmag.com/2017/03/adding-murders-began-second-sentence-book-makes-instantly-better-125462/), which claim that you can improve almost any book by making the second sentence "And then the murders began." It's entirely possible they're correct. But let's check, with a sampling of books. As different books come in different tenses and have different voices, I've made some minor changes. "Welcome to Cisco Routers for the Desperate! And then the murders begin." — Cisco Routers for the Desperate, 2nd ed "Over the last ten years, OpenSSH has become the standard tool for remote management of Unix-like systems and many network devices. And then the murders began." — SSH Mastery "The Z File System, or ZFS, is a complicated beast, but it is also the most powerful tool in a sysadmin's Batman-esque utility belt. And then the murders begin." — FreeBSD Mastery: Advanced ZFS "Blood shall rain from the sky, and great shall be the lamentation of the Linux fans. And then, the murders will begin." — Absolute FreeBSD, 3rd Ed Netdata now supports FreeBSD (https://github.com/firehol/netdata) netdata is a system for distributed real-time performance and health monitoring.
It provides unparalleled insights, in real-time, of everything happening on the system it runs (including applications such as web and database servers), using modern interactive web dashboards. From the release notes: apps.plugin ported for FreeBSD Check out their demo sites (https://github.com/firehol/netdata/wiki) *** Distrowatch Weekly reviews RaspBSD (https://distrowatch.com/weekly.php?issue=20170220#raspbsd) RaspBSD is a FreeBSD-based project which strives to create a custom build of FreeBSD for single board and hobbyist computers. RaspBSD takes a recent snapshot of FreeBSD and adds on additional components, such as the LXDE desktop and a few graphical applications. The RaspBSD project currently has live images for Raspberry Pi devices, the Banana Pi, Pine64 and BeagleBone Black & Green computers. The default RaspBSD system is quite minimal, running a mere 16 processes when I was logged in. In the background the operating system runs cron, OpenSSH, syslog and the powerd power management service. Other than the user's shell and terminals, nothing else is running. This means RaspBSD uses little memory, requiring just 16MB of active memory and 31MB of wired or kernel memory. I made note of a few practical differences between running RaspBSD on the Pi versus my usual Raspbian operating system. One minor difference is RaspBSD turns off the Pi's external power light after booting. Raspbian leaves the light on. This means it looks like the Pi is off when it is running RaspBSD, but it also saves a little electricity. Conclusions: Apart from these little differences, running RaspBSD on the Pi was a very similar experience to running Raspbian and my time with the operating system was pleasantly trouble-free. Long-term, I think applying source updates to the base system might be tedious and SD disk operations were slow. However, the Pi usually is not utilized for its speed, but rather its low cost and low-energy usage. For people who are looking for a small home server or very minimal desktop box, RaspBSD running on the Pi should be suitable. Research UNIX V8, V9 and V10 made public by Alcatel-Lucent (https://media-bell-labs-com.s3.amazonaws.com/pages/20170327_1602/statement%20regarding%20Unix%203-7-17.pdf) Alcatel-Lucent USA Inc. ("ALU-USA"), on behalf of itself and Nokia Bell Laboratories agrees, to the extent of its ability to do so, that it will not assert its copyright rights with respect to any non-commercial copying, distribution, performance, display or creation of derivative works of Research Unix® Editions 8, 9, and 10. Research Unix is a term used to refer to versions of the Unix operating system for DEC PDP-7, PDP-11, VAX and Interdata 7/32 and 8/32 computers, developed in the Bell Labs Computing Science Research Center. The version breakdown can be viewed on its Wikipedia page (https://en.wikipedia.org/wiki/Research_Unix) It only took 30+ years, but now they're public. You can grab them from here (http://www.tuhs.org/Archive/Distributions/Research/) If you're wondering what happened with Research Unix: after Version 10, Unix development at Bell Labs was stopped in favor of a successor system, Plan 9 (http://plan9.bell-labs.com/plan9/); which itself was succeeded by Inferno (http://www.vitanuova.com/inferno/).
*** Beastie Bits The BSD Family Tree (https://github.com/freebsd/freebsd/blob/master/share/misc/bsd-family-tree) Unix Permissions Calculator (http://permissions-calculator.org/) NAS4Free release 11.0.0.4 now available (https://sourceforge.net/projects/nas4free/files/NAS4Free-11.0.0.4/11.0.0.4.4141/) Another BSD Mag released for free downloads (https://bsdmag.org/download/simple-quorum-drive-freebsd-ctl-ha-beast-storage-system/) OPNsense 17.1.4 released (https://forum.opnsense.org/index.php?topic=4898.msg19359) *** Feedback/Questions gozes asks via Twitter about how to get involved in FreeBSD (https://twitter.com/gozes/status/846779901738991620) ***
A recording from vBSDCon 2015 of the talk titled "Supporting a BSD Project" with Ed Maste and George Neville-Neil. File Info: 65Min, 31MB. Ogg Link: https://archive.org/download/bsdtalk259/bsdtalk259.ogg
Recording of the vBSDCon 2013 talk "Migrating from GCC to LLVM/CLANG" with David Chisnall. File Info: 1 hour, 31MB. Ogg Link: https://archive.org/download/bsdtalk233/bsdtalk233.ogg
BSC: The best comedy podcast in Brazil! Fun and entertainment by Bobos Sem Corte
[soundcloud url="https://api.soundcloud.com/tracks/140396268" params="color=8a00ff&auto_play=false&hide_related=false&show_artwork=true" width="100%" height="166" iframe="true" /] In episode 77, BSC borrows the formula most used by afternoon TV shows to talk about a little bit of everything: series and movie tips, the garbage collectors' protests, a rant about the unions, and a thank-you to our dear listeners! Length: 43 min | Download: 31MB. Leave your comment below or send an email to contato@bobossemcorte.com. Follow upcoming shows and download past ones: RSS Feed, Subscribe on iTunes, View on your smartphone, Listen on Stitcher | Get the Stitcher mobile app: iOS - Android - Kindle Fire. Listen on TuneIn Radio | Get the TuneIn Radio mobile app: iOS - Android - Windows Phone - BlackBerry. Related podcasts: BSC#24 - Villains, with the crew from the MdM podcast; BSC#73 - The TV of the 2000s, with Bruno Carvalho from 99 Vidas; BSC#45 - The '90s, with Di Raphael; BSC#55 - Reality People, with Jean Massumi (BBB3). Also visit: Facebook, Twitter
In December, Verónica Martín Jiménez, deputy editor of Diario de Avisos, contacted me to propose a very interesting radio collaboration: a special program reviewing the science news of 2010 on Radio Televisión Canaria, together with Juanjo Martín, director of the program Galaxias y Centellas. The program aired on January 2, and in it we looked back at 2010 from a scientific and technological point of view. This podcast is therefore a bit different from the usual, since it is a real radio program, recorded in the Radio Televisión Canaria studios in Santa Cruz de Tenerife. DOWNLOAD THE PODCAST: - 109MB direct download, .MP3 format - 31MB direct download, .OGG format - 109MB download in compressed .ZIP format - 109MB download via Megaupload - Download from iVoox - Download in other formats - Download on iTunes. SUBSCRIBE TO THE HISTORY AND SCIENCE PODCAST: LA ALDEA IRREDUCTIBLE
The Pecha Kucha presentation essentially involves 20 media-rich slides with little or no text, voice-narrated for no longer than 20 seconds per slide, for a total presentation time of 6 minutes, 40 seconds. In this episode you'll see an overview of Pecha Kucha and how your students can create Pecha Kucha presentations of their own. Thanks to the following people in my Twitter PLN, friends, and the blogosphere for their expertise and experience which helped me create this: Aaron Ball, @MrAaronBall, Casual Teacher Blog; Jeff Johnson, @iLeadCommunity, @iPadEducators; Sylvia Tolisano, @Langwitches, Langwitches blog post "Presentation21 Make-Over"; Dean Shareski, @shareski, Ideas & Thoughts from an Edtech blog post "My First Crack at Keynote and Pecha Kucha"; Scott Walker, TeacherTech blog post "PowerPoint: Students Use Pecha Kucha to Streamline Presentations"; Joni Dunlap, Thoughts on Teaching blog post "Pecha Kucha, an alternative format for presentations"; Pecha Kucha 20 x 20. Here is the PowerPoint presentation I used in the screencast: Using the Pecha Kucha Presentation Technique with Students (6.2MB PPTX file). Right-click to download: Windows Media Version (720 x 480, 31MB)
Date: 08/01/2009. Length: 00:17:26. Size: 12.31MB. Thought Leaders: Jassi Chadha, Head of Analytics Practice, Cognizant; Patrick Brundage, Practice Leader, Life Sciences Analytics, Cognizant; and Nagaraja Srivatsan, VP and Head of Life Sciences, North America, Cognizant. In this special PharmaVOICE 100 podcast, our thought leaders discuss the current state of the life-sciences industry, the changes it is experiencing now, and what the industry will look like in the next decade. Play Podcast. For more information on how you can be featured in a podcast, contact Dan Limbach at dlimbach@pharmavoice.com or call him at (847) 594-0157
PODCAST IRREDUCTIBLE, CHAPTER 25 - NIKOLA TESLA. I had really been looking forward to making this podcast. Nikola Tesla is a figure who surprises anyone approaching his life for the first time... discovering that many of the things we take for granted today did not happen the way we think, and learning that a large part of our modern world was not created by whom we believe. Tesla's mind worked tirelessly until the last day of his life, and he certainly deserves the recognition he did not receive while alive. Today, however, we can hear many stories about Nikola Tesla... some of them are not true, others are distorted or exaggerated... The fiction and all the magic surrounding Tesla's name only serve to embellish, sometimes unnecessarily, a life that is already interesting enough on its own. To my fondness for this figure you can add everything I have learned during these two weeks of research, editing, and recording of the podcast, which I have condensed into little more than half an hour... I hope you enjoy the work I present today... Best regards. DOWNLOAD THE PODCAST: - 65MB direct download, .MP3 format - 31MB direct download, .OGG format - 65MB download in compressed .ZIP format - 65MB download via Megaupload - Download in other formats - Download on iTunes. The music used in this podcast is under a Creative Commons license: - Jaime Heras - Roger Subirana - David Ospina - Axial Ensemble - The song "Flying on a cloud" by The Dada Weatherman. SUBSCRIBE TO THE HISTORY AND SCIENCE PODCAST: LA ALDEA IRREDUCTIBLE
PODCAST LA ALDEA IRREDUCTIBLE, CHAPTER 21 - JOHN NASH. We have now reached chapter 21 of these audio files on history and science... who would have thought! This time we will get to know the life and work of a mathematician whom most of us met through the movies: Ron Howard's film with Russell Crowe, A Beautiful Mind. In this podcast we will meet John Nash, a very complex figure with a biography that is complicated to tell. In 1994, John Nash was awarded the Nobel Prize for his contributions to game theory in economics. The equations he worked out at Princeton in the 1950s were a revolution within the economic theories we had up to that point. Besides his life, we will delve (as far as we are able) into the inner workings of his equations and theories... I invite you to set out on a journey through genius, madness, mathematics, and film... John Nash. DOWNLOAD THE PODCAST: - 68MB direct download, .MP3 format - 31MB direct download, .OGG format - 68MB download in compressed .ZIP format - 68MB download via Megaupload - Download in other formats - Download on iTunes. The music used in this podcast is under a Creative Commons license: - Roger Subirana - David Ospina - Enzo Carlino - DJ Fab - Evan - Alexander Blu - The song "Funny Thing" by Daniel Gray. SUBSCRIBE TO THE HISTORY AND SCIENCE PODCAST: LA ALDEA IRREDUCTIBLE