POPULARITY
After 13 YEARS Keysight have finally released the new MegaZoom V ASIC oscilloscope! Let's take an initial look at the HD3 / HD300 series released today. NOTE: I only had this scope for 24 hours from when it arrived to when I released this video, so please excuse the crudity of the video, I didn't …
Welcome to the daily304 – your window into Wonderful, Almost Heaven, West Virginia. Today is Sunday, Jan. 28 Learn about the centenarian citizen who gave the town of Hundred its name. Discover wonders of space at Green Bank Observatory. And Wheeling's annual Restaurant Week begins Feb. 23 … on today's daily304. #1 – From WV EXPLORER – There was a time soon after the completion of the B&O Railroad when passengers would clamber to the car windows to catch a glimpse of “Old Hundred.” He was a marvel of a man who, even after his hundredth birthday, labored in his fields to the delight of passersby. The town of Hundred (population 299 as of 2010) was named for that man, Henry Church. Church, as he was known for the first 99 years of his life, was born in Suffolk, England, in 1750. He was among several young men sent to the colonies to serve under Lord Charles, Earl Cornwallis, in the 63rd British Light Infantry. After the Revolutionary War, he remained in America, married, raised eight children, and eventually settled in present-day Wetzel County. A patriotic old gentleman, he served in the War of 1812. He died September 14, 1860. According to his tombstone, located on the hill above the town in the Hundred Cemetery, he was 109 years, nine months, and one day old when he passed. Read more: https://wvexplorer.com/2024/01/22/old-hundred-west-virginia-henry-church-wetzel-county/ #2 – From WVNS-TV – Out of the many wild and wonderful places in West Virginia, there is one truly interstellar place to go, especially for astronomy lovers. The Green Bank Observatory, established in 2016, is located in the Green Bank area of Pocahontas County, and is home to the largest fully-steerable telescope in the world. The Robert C. Byrd Green Bank Telescope -- also known as the GBT -- measures 300 feet in diameter and operates from 0.1GHz to 116 GHz, making it the top telescope that operates on those wavelengths. Many scientific discoveries have been made at the Green Bank Observatory, with one of the most recent being the accidental discovery of a Dark Primordial Galaxy due to the GBT being turned in the wrong direction. Guided tours are offered at the facility. Visit www.greenbankobservatory.org to learn more. Read more: https://www.wvnstv.com/news/west-virginia-news/mountain-state-destinations-green-bank-observatory/ #3 – From WTRF-TV – Wheeling officials are set for the return of its Restaurant Week, which will take place Friday, February 23, to Saturday, March 2. This year, the event will be extended to include Leap Day, allowing an additional day for diners to enjoy and experience Wheeling's local flavors by featuring specials from the area's favorite dining establishments. The initiative aims to support local businesses and boost restaurant traffic during ongoing downtown construction. Participation in Wheeling Restaurant Week is open to any locally-owned, non-franchise, and non-corporately owned restaurant within the City of Wheeling. Visit Wheeling, WV, will host a landing page on their website featuring a submission form for interested restaurants. Visit www.wheelingcvb.com to learn more. Read more: https://www.wtrf.com/news/restaurant-week-is-returning-to-the-ohio-valley/ Find these stories and more at wv.gov/daily304. The daily304 curated news and information is brought to you by the West Virginia Department of Commerce: Sharing the wealth, beauty and opportunity in West Virginia with the world. Follow the daily304 on Facebook, Twitter and Instagram @daily304. 
Or find us online at wv.gov and just click the daily304 logo. That's all for now. Take care. Be safe. Get outside and enjoy all the opportunity West Virginia has to offer.
About Evoo, the brand: https://best.kevin.games/evoo-company-review Chris G's EVC141-12 Review: https://www.youtube.com/watch?v=olPUSDFY9MQ Motile M142: https://www.pcworld.com/article/398581/motile-m142-review-ryzen-finds-a-home-in-this-surprisingly-good-budget-notebook.html Evoo EVC141-12BK listing - BrandsMart: https://www.brandsmartusa.com/evoo/249126/14-1-elite-series-amd-ryzen-5-3500u-processor-8gb-ram-256gb-ssd-ultra-slim-laptop-black.htm Evoo EVC141-12BK listing - Walmart: https://www.walmart.com/ip/EVOO-14-1-Ultra-Slim-Notebook-Elite-Series-FHD-Display-AMD-Ryzen-5-3500U-Processor-Radeon-Vega-8-Graphics-8GB-RAM-256GB-SSD-HD-Webcam-Windows-10-Home/419496306 Evoo EVC141-12 Laptop Specs: Windows 10 Home 14.1” FHD Display, 1920 x 1080, 60Hz AMD Ryzen™ 5 3500U Mobile Processor with Radeon™ Vega 8 Graphics (2.1GHz, Up to 3.7GHz) 256GB Solid State Drive 8GB Memory (RAM) HD Front Camera Up to 10 hours of battery life Bluetooth 4.2 HDMI Port x 1 USB 2.0 x 1 USB 3.1 x 2 USB Type-C x 1 (Data Transfer Only) Kensington Lock x 1 Ethernet Port x 1
The question of who should get access to the so-called “lower three,” or the part of the broadband spectrum ranging from 3.1GHz to 3.45GHz, has the Federal Communications Commission (FCC) and industry in a stand-off with the Department of Defense. A new report due to be released next month could be the key to ending the stalemate. Congressional authority to auction off parts of the broadband spectrum to industry expired in March, and while telecommunications companies and the FCC continue their push to renew the auctions, leaders in both the Pentagon and Congress say the lower three is vital to national security interests. The lower three is used for some of the Pentagon's radar capabilities. Both sides are waiting for the results of a report due to be released in September about the risks and capabilities of the lower three.
On my current iMac (Late 2012, 21.5-inch model: 3.1GHz quad-core Intel Core i7, 16GB of memory, 1TB of storage, probably a Fusion Drive), DaVinci is far too heavy, so I'm considering a replacement. I signed up for the Kojima/WAON co-branded card, and since there is currently a 15% WAON point rebate, my wife is all for it.
Ryan dives deep into the strategy of Minecraft Legends to let you know if it's a gem! A copy of Minecraft Legends was provided by Xbox Canada for review purposes. #xbox #xboxgamepass #minecraftlegends PC Specs: CPU: Intel Core i7-12700F 2.1GHz (4.9GHz Turbo) 8 P-Core, 4 E-Core, 20 Thread GPU: NVIDIA GeForce RTX 3070 8GB RAM: RGB 16GB DDR4 3200 MHz (2x8GB) ★ LINKS ★ ► Support Carpool Gaming on Patreon: https://patreon.com/carpoolgaming ► Join our amazing Discord community: https://discord.com/invite/WR3qcXJq9n ► Get your Carpool Gaming merch: https://bit.ly/cpgmerch ► Subscribe on YouTube: https://youtube.com/carpoolgaming ► Follow on Twitter: https://twitter.com/carpoolgaming Thanks so much to everyone who supports us on https://patreon.com/carpoolgaming ★ ULTIMATE PRODUCERS ★ Robby Bobby Miller: https://twitch.tv/robbybobbymiller Trucker Sloth Tony Baker: https://youtube.com/quest4pixels Johnathan Brown: https://linktr.ee/pme.jib Lee Navarro: https://www.phoenixoverdrive.com/ ★ PLATINUM PRODUCERS ★ Markus McCracken RJ Kern ★ GOLD MEMBERS ★ Anna BobLoblaw Bowsah Cecily Carrozza Dannohh Drellesh Emily O'Kelley Fulish Fuji Hoppel Jose Jimenez Marcus O'Neill Tim Paullin
Following Premiere, I decided to master DaVinci Resolve as well, so I bought a Udemy online course at the sale price: "Taught by a certified trainer: DaVinci Resolve 18 for absolute beginners (2023 edition), a video-editing basics course starting from zero" https://www.udemy.com/course/2021davinci-resolve-17/learn/lecture/33280620 I upgraded the iMac's OS and somehow managed to get it installed. But when I actually tried to use it, startup was slow and switching between modes was slow, exposing the "not enough power" problem I had been trying not to look at. I started this on a whim, but is DaVinci really this heavy? Our iMac is a 21.5-inch, Late 2012 model: 3.1GHz quad-core Intel Core i7, 16GB (8GB x2) 1600MHz DDR3 memory, 1.12TB Fusion Drive, NVIDIA GeForce GT 650M 512MB graphics, OS 10.15.7 Catalina. https://support.apple.com/kb/sp665?locale=ja_JP The current M1 iMac is a little over 200,000 yen, which is hard on the household budget. Could I get a trade-in? I searched, and no luck! Too old, apparently? Disappointing. If I do buy one, maybe during a New Year's sale? That's a long way off. Or maybe it will get cheaper as last year's model once a new one comes out? (00:00) 1. Opening (01:13) 2. An iMac after ten years (03:58) 3. Tried running DaVinci and it was painfully slow (08:11) 4. Tried running DaVinci and it was painfully slow... iMac? Mac mini? (14:09) 5. Wait for the new model and buy the outgoing one?
There are Swiss Army knives with a blade, a little saw, scissors and a screwdriver, and then there are Swiss Army knives with no mechanical tools but digital ones instead: an antenna to receive and transmit signals up to 1GHz, an infrared receiver and transmitter, an NFC reader, an RFID reader and many other interesting things, all together and well integrated. With both kinds you can do legal things and somewhat less legal ones. The Flipper Zero project on Kickstarter. The official Flipper Zero site. Where to buy it: Lab401. Where to buy it: Joom. Where to buy it: the official shop. The Italian community. The Reddit channel. The YouTube channel of my friend Davide Gatti. My favourite Victorinox. The podcast Cosa c'entra. Pillole di Bit (https://www.pilloledib.it/) is an independent podcast made by Francesco Tucci; if you want to get in touch with me you can choose from several platforms: - Telegram (or just the channel dedicated to episode comments) - TikTok (an experiment for now) - Twitter - BlueSky - My personal blog ilTucci.com - My personal Telegram channel Le Cose - My personal Mastodon - The podcast's Mastodon - Email (if you want to write to me directly and have more room for your message). I always reply. If you like this podcast, you can help make it happen! How? - With a one-off donation via Satispay - With a one-off or recurring donation via PayPal - With a sponsored purchase on Amazon (open this link and put whatever you want in the cart) - By activating one of Ehiweb's services through my sponsored link. If you have donated more than 5€, remember to fill in the form to receive the gadgets! The site is kindly hosted by ThirdEye (write to domini AT thirdeye.it), an excellent service I warmly recommend, and the podcast is joyfully edited with PODucer, a Mac application by Alex Raccuglia
Welcome to In-Depth Finance (财经有深度), jointly produced by Xueqiu and Ximalaya. Xueqiu is China's leading integrated wealth-management platform combining investment discussion and trading; smart investors are all here. Hello everyone, I'm your host 匪石-34. Today's piece is titled "No Need to Be Pessimistic About Chinese Chips", by RabitRun. The US recently rolled out export controls on high-performance chips aimed at China. My first job after graduate school was at a chip company, and I stayed long enough to learn the industry. Looking back, that was a kind of luck: with chips now the pivotal square on the US-China chessboard, it lets me think some of this through for myself. After the various sanctions of the past two years, plus today's measures, the US has basically played out the cards that have real value in its chip war against China. What remains is going after civilian uses, or targeting older process nodes, but those do little for American technological or military security, while sanctions there hit US cash flow hard, and some things simply cannot be sanctioned at all. US sanctions certainly hurt China. But looked at another way, they are simply the "cost", or the "trigger", of developing domestic technology. Over the last two years, domestic substitution has been in full swing across industries: materials, software, chemicals, components, energy, equipment, and also mid- and low-end chips. The vast majority of manufacturing does not require anything as high-end as AI chips. Domestic technology is usually imperfect at first, because without the 0-to-1 start there is no entry into the engineering loop of trial and error, feedback, iteration, and improvement, and so no later progress to speak of either. Now domestic firms are gaining a large number of technical opportunities and commercial returns and entering a positive engineering and business cycle. That did not fall from the sky; this "benefit" was brought by the sanctions. We cannot take only the benefit and refuse the cost; they are two sides of the same coin. Back to chips: can China's chip industry develop? As long as the US and China have not completely fallen out, the pressure is a cause for anxiety, because no complete break means everyone still competes under one commercial logic and on one commercial stage, and semiconductors depend heavily on their commercial ecosystem. Fall five or ten years behind and you basically lose the ability to generate cash; the business model stops working. But if things do completely break down, the logic is no longer commercial; it becomes the great-power logic of "you cannot choke me to death", measured in decades. Under commercial logic, high-end semiconductors are nearly winner-take-all: once TSMC's chips reached 5nm, Intel's 10nm became junk, and first-tier companies would not choose Intel. Under the choke-point logic, if the US, Europe, and Japan are at 5nm and China gets to 10nm, that is actually a great success for China, because it is enough to protect its own research, commercial use, and even national defense. If we are not in the same commercial cycle, the worst case is burning a bit more power, and we can still iterate and improve. Commercial logic and great-power competition logic are not the same. As for industrial and technical difficulty, a useful comparison is China's defense system, which was also built up under self-reliance. Many people will say defense and the semiconductor industry are not on the same level of difficulty. Note: I am talking about the entire defense system. Compared with the defense system of a superpower, a semiconductor production line is a complex system one to two orders of magnitude lower. Collections of complex systems form subsystems, collections of subsystems form giant systems, and collections of giant systems form a system of systems. China's defense system went from only being able to build medium-sized weapons to today's "we want everything" world-class arsenal in about forty years. As for the domestic chip industry's own soil, the rough shape of social demand, capital investment, technical foundations, and a commercial cycle is already in place. Many people say "the chip industry needs international cooperation, with countless companies along the chain all open to each other", and so on. All true. But it is in fact entirely possible for a great power to swallow most of the key links of the chip industry. First, the reason so many countries and companies participate today is that the climate for division of labor used to be good: commercial cooperation and healthy competition. That climate is now gone. Second, the participation of so many countries and companies is the result of natural technology diffusion. Semiconductors originated in the US, spread to Japan, then partly to Europe, then to South Korea and Taiwan; each new entrant gradually took a slice, and the technology exporters were never wiped out either. Chips are actually a small industry: before this round of chip mania, the Fortune Global 500 long contained only one chip company, Intel. If you look only at leading-edge chips, the industry is smaller still. That this not-so-large industry has so many participating countries is the natural result of technology diffusion and comparative advantage under commercial logic; it is absolutely not the movie plot of "one country could not chew through it alone and had to bring in several helpers". There is another important reason: the chip supply chain is full of tiny niche markets, lots of obscure little things like the "ballpoint-pen ball", where one or two small companies worldwide are enough. The early arrivals have already filled the bathtub, and there is no point in anyone else climbing in. That was the old commercial reasoning; it is certainly not the reasoning now. Finally, look at the most fundamental resource of all: people. Outside China, the main chip players are the US, Japan, Europe, South Korea, and Taiwan, about one billion people in total. Those billion people do cooperate, but they cooperate while split across different countries, time zones, languages, and markets. If a fragmented billion can build this together, there is no reason 1.4 billion people, with lower internal costs of interconnection and cooperation, cannot, and those 1.4 billion have even greater manufacturing capacity. China's homegrown semiconductor industry already has real scale; even on a pessimistic reading it is ahead of the West of 2000. Intel's Pentium 4, which burst onto the scene in 2000, was built on a 0.25-micron process and had only just passed a 1GHz single-core clock. The lithography machine from Shanghai Micro Electronics that a few years ago nobody wanted because it was "too backward" is a 90-nanometer machine. Western leading-edge processes went from 250nm in 2000 to below 10nm today over twenty years. Retracing that road will certainly not take us twenty years. The first movers enjoy a better cycle of commercial returns, but as said above, high-performance chips are no longer a purely commercial question for China. Moreover, they were pioneering in uncharted territory, while we are reproducing known results. We know the technical principles of DUV and EUV, and even the machines themselves, so we can avoid many detours. In technology development, a clear path and fewer detours matter most; they save enormous effort and time. By that reckoning, a bit more than ten years can take us to where the West is today. Of course the West will keep moving forward while we catch up, but our chip industry will still be healthy enough to develop in research, industry, commerce, and even national defense.
Jon Mykrantz, VP of Enterprise Sales, speaks to Don Witt of The Channel Daily News, a TR publication, about the composition of 5G, which has three bandwidth categories: Low-Band (below 1GHz): long distance. Mid-Band (C-band, 1GHz to 6GHz): faster speeds over shorter distances. High-Band (millimeter wave, 24GHz and above): very fast over short distances. In addition to the shorter range, because of its millimeter wavelength the signal tends to get absorbed by walls, trees, or other normal obstacles in its path. In those cases, it would be wise to add a repeater as part of the solution, to not only repeat the signal but also amplify it. The repeater will assist in turning corners and redirecting the signal to areas previously served poorly or not at all. Listen in and find out how WilsonPro can help you with your future 5G installations. Deep experience in high tech: with 30+ years of experience and 250 active patents, they know what it takes to optimize wireless communications. Their repeaters are outfitted with the latest in 5G amplification technology, and remote management is available. The newest technology in signal amplification: WilsonPro is constantly working to improve their customizable 5G cellular signal repeater solutions for fixed wireless applications. Count on them to outperform: their reliable support team goes above and beyond to find the best solution for every client, every time. For more information, go to: https://www.wilsonpro.com/
Join Scott as he answers questions and further discusses the USB Host API for CircuitPython. Next week will be on Friday at 2pm Pacific and be the last stream with Scott as host for a while. Foamyguy will stream in the time slot starting in two weeks. Join the Discord at https://adafru.it/discord All notes for Deep Dives are available at https://github.com/adafruit/deep-dive-notes 0:00 Getting Started 6:15 Let's get the show on the road… 8:22 next week - last deep dive before taking leave / foamyguy 9:20 deep dive notes on github https://github.com/adafruit/deep-dive-notes/ 12:00 rock climbing shirt discussion 13:32 examples of interrupt usage in CircPy? 14:55 async io enabled in recent CP 16:50 let's talk usbhost 18:00 Desktop notes from last week 21:15 Last weeks Ben Eater USB videos very helpful 22:00 Usb Host vs. Peripheral and TinyUSB TUD vs TUH D)evice, H)ost 24:27 So how does collision detection work on a USB bus? Do the devices just not communicate until the host asks it for a data packet? 26:06 speed - see Jan Axelson's USB Complete book 29:00 Linux can do USB host / look at python for API ideas 30:00 PyUsb on github 33:06 "USB 3.0 suspends device polling, which is replaced by interrupt-driven protocol." 35:03 return to the Python USB api 35:40 SparkFun has a power delivery dev board. 36:00 pyOCD github ( uses pyUSB ) 36:30 pyDFU ( uses usb.core and usb.util ) 38:30 Chip Shortage is real :-( 38:56 Usb in a nutshell discussion https://www.beyondlogic.org/usbnutshell/usb4.shtml 41:10 blinka is also using pyusb (via pyftdi) 42:20 https://rpilocator.com/ and Octapart 43:50 endpoints vs. hubs 44:50 pyusb tutorial.rst on github 46:32 port.c shared-bindings / sublime merge diff 47:55 usb_host diffs PyUSB API compiles 51:05 circuitpy_defns.mk rules 52:58 __init__.c in shared-bindings/usb 53:35 Device.c 55:20 read size_or_buffer argument 56:04 shared bindings circuit python stub special processing “//|” prefix 57:25 added some type annotations and documentation near implementation 58:03 diff tools for mac users / sublime merge is supported on mac 59:48 https://www.git-tower.com/blog/diff-tools-mac/ 1:01:32 back to the api usb_core_device_read to common_hal_usb_core_device_read 1:05:16 ctrl_transfer - either direction 1:08:10 mention of doxygen - “doxygen is used on all of our Arduino documentation. Sphinx on all the CP.” 1:09:07 detach_kernel_driver 1:10:12 still in Devie. Locals dict table 1:11:50 just found an IBM presentation called Documentation in the modern age that has a slide titled "Cool kids do Sphinx" and another titled "Cool kids used to do Doxygen" 1:13:20 shared modules - stubs 1:16:05 CP Pre-commit auto formatting 1:18:20 shell window - load code imx1060 dev board 1:20:03 get CP running on the dev board 1:21:40 Makefile for imx, compare to tinyUSB 1:28:52 pyocd prompts - ( after inition make -j 32 ) 1:31:04 plug in another usb device - connect vi “tio” 1:32:25 import usb_host 1:32:50 dir(board) 1:34:35 nxp.com i.MX. RT1170 clocked at 1Ghz 1:36:31 i.MX RT Crossover MCUs 1:37:15 teensy 4.1 has a header for the second USB port 1:37:35 USB Host Cable 1:39:49 question about the new esp32 board 1:40:31 teensy 4.0 board has usb ports broken out on pads 1:41:16 i.MX RT1060 Evaluation Kit image of the “B” version (EVK vs EVKB) 1:42:25 regarding your keep host initialized on reboot: would the endpoints/pipes need to be re-initialized on reboot or would remain the state too? 
1:43:30 Design resources / design files 1:44:25 download and look at sch-31358_a3 schematic / pinout 1:46:18 USB OTG2 1:46:28 pins.c USB_DP1, USB_DM1 (and DP2) pins 1:49:45 dm or dn? (pin names) 1:50:20 back to datasheet 1:51:05 Data Sheet ( also has 3522 page reference manuals ) 1:52:20 looking for the “pad” name - try to match the data sheet 1:53:00 EVKB adds an M.2 connector for radios and other headers for audio and other expansion 1:54:30 I guess my question on pyusb is can we write a driver for the IntelliKeys that sends that EZUSB firmware at startup and then reads/writes like the driver GDSports wrote for Arduino on the M0? 1:55:05 How do we add a custom device? 1:55:52 ATMakers USB Host Mode conversation (youtube ) 1:57:17 For the record, GDSports on GitHub did most of the USB Host work (I just got the original driver open sourced) There is a great teardown by @scanlime as well 1:58:30 OTG1_DN, DP name to match reference manual 1:59:11 what would happen if you connected a hub? 2:01:07 pin object on imx ( mcu_pin_obj_t ): 2:02:37 So, just to setup my weekend do you think if I get the IntelliKeys driver ported to CPython/pyusb I'll be on a useful path? 2:05:20 if the pins were not defined for device mode, why add them for host mode? 2:07:39 wrap up / next week with Scott, then foamyguy taking over 2:10:48 Pet the cat 2:11:34 have a good weekend everyone Visit the Adafruit shop online - http://www.adafruit.com
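For readers who want a concrete feel for the API walked through in the stream above, here is a minimal sketch of the PyUSB-style usb.core interface that CircuitPython's USB host work mirrors. It is a hedged illustration, not code from the stream: the vendor/product IDs, endpoint address, and read size are placeholders, and the usb_host.Port() pin names are board-specific assumptions.

```python
# Hedged sketch of the PyUSB-style API discussed in the stream above.
# IDs, endpoint address, and pin names below are placeholders/assumptions.
import board
import usb_host
import usb.core

# Initialize the second USB port as a host. Pin names vary per board; on the
# i.MX RT1060 EVK discussed in the stream they correspond to the USB_DP1/USB_DM1
# style pins exposed in pins.c. (board.USB_HOST_DP/DM are hypothetical names.)
port = usb_host.Port(board.USB_HOST_DP, board.USB_HOST_DM)

# Find the first attached device matching a made-up vendor/product ID.
device = usb.core.find(idVendor=0x239A, idProduct=0x80F4)
if device is None:
    raise RuntimeError("no matching USB device found")

# Control transfer: a standard GET_DESCRIPTOR request for the 18-byte
# device descriptor (the transfer direction is encoded in bmRequestType).
descriptor = device.ctrl_transfer(
    bmRequestType=0x80,   # device-to-host, standard, device recipient
    bRequest=0x06,        # GET_DESCRIPTOR
    wValue=0x0100,        # descriptor type DEVICE, index 0
    wIndex=0,
    data_or_wLength=18,
)
print(bytes(descriptor))

# Read from IN endpoint 0x81 into a buffer (the size_or_buffer argument
# mentioned around the 55-minute mark); returns the number of bytes read.
buf = bytearray(64)
count = device.read(0x81, buf)
print(count, bytes(buf[:count]))
```

On desktop CPython with PyUSB, the usb.core portion looks the same; only the usb_host port initialization at the top is CircuitPython-specific.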
Doug K6JEY joins to get us on the air above 1GHz
Let's oversimplify something in the computing world. Which is what you have to do when writing about history. You have to put your blinders on so you can get to the heart of a given topic without overcomplicating the story being told. And in the evolution of technology we can't mention all of the advances that led to each subsequent evolution. It's wonderful and frustrating all at the same time. And that value judgement of what goes in and what doesn't can be tough. Let's start with the fact that there are two main types of processors in our devices. There's the x86 chipset developed by Intel and AMD and then there are the RISC-based processors, which are ARM and, for the old school people, also include PowerPC and SPARC. Today we're going to set aside the x86 chipset that was dominant for so long and focus on how the RISC and so ARM family emerged. First, let's think about what the main difference is between ARM and x86. RISC and so ARM chips have a focus on reducing the number of instructions required to perform a task to as few as possible, and so RISC stands for Reduced Instruction Set Computing. Intel, other than with the Atom series chips, has focused the x86 chips on high performance and high throughput. Big and fast, no matter how much power and cooling is necessary. The ARM processor uses simpler instructions, which means there's less logic and so more instructions are required to perform certain logical operations. This increases memory use and can increase the amount of time to complete an execution, which ARM developers address with techniques like pipelining, or instruction-level parallelism on a processor. Seymour Cray came up with this to split up instructions so each core or processor handles a different one, and so Star, Amdahl and then ARM implemented it as well. The x86 chips are Complex Instruction Set Computing chips, or CISC. Those will do larger, more complicated tasks, like floating point operations or memory searches, on the chip. That often requires more consistent and larger amounts of power. ARM chips are built for low power. The reduced complexity of operations is one reason, but it's also in the design philosophy. This means fewer heat sinks and often accounting for less consistent streams of power. This 130-watt x86 versus 5-watt ARM difference can mean slightly lower clock speeds, but the chips can cost more since people will spend less on heat sinks and power supplies. This also makes the ARM excellent for mobile devices. The inexpensive MOS 6502 chips helped revolutionize the personal computing industry in 1975, finding their way into the Apple II and a number of early computers. They were RISC-like but CISC-like as well. They took some of the instruction set architecture family from the IBM System/360 through to the PDP, Data General Nova, Intel 8080, and Zilog, and so after the emergence of Windows, Intel finally captured the personal computing market and the x86 flourished. But the RISC architecture actually goes back to the ACE, developed in 1946 by Alan Turing. It wasn't until the 1970s that Carver Mead from Caltech and Lynn Conway from Xerox PARC saw that the number of transistors was going to plateau on chips while workloads on chips were growing exponentially. ARPA and other agencies needed more and more instructions, so they instigated what we now refer to as the VLSI project, a DARPA program initiated by Bob Kahn to push into the 32-bit world. They would provide funding to different universities, including Stanford and the University of North Carolina. 
Out of those projects, we saw the Geometry Engine, which led to a number of computer-aided design, or CAD, efforts to aid in chip design. Those workstations, when linked together, evolved into tools used on the Stanford University Network, or SUN, which would effectively spin out of Stanford as Sun Microsystems. And across the bay at Berkeley we got a standardized Unix implementation that could use the tools being developed in the Berkeley Software Distribution, or BSD, which would eventually become the operating system used by Sun, SGI, and now OpenBSD and other variants. And the efforts from the VLSI project led to Berkeley RISC in 1980 and Stanford MIPS, as well as the multi-chip wafer. The leader of that Berkeley RISC project was David Patterson, who still serves as vice chair of the RISC-V Foundation. The chips would add more and more registers but with fewer specializations. This led to the need for more memory. But UC Berkeley students shipped a faster chip than was otherwise on the market in 1981. And the RISC II was usually double or triple the speed of the Motorola 68000. That led to the Sun SPARC and DEC Alpha. There was another company paying attention to what was happening in the RISC project: Acorn Computers. They had been looking into using the 6502 processor until they came across the scholarly works coming out of Berkeley about their RISC project. Sophie Wilson and Steve Furber from Acorn then got to work building an instruction set for the Acorn RISC Machine, or ARM for short. They had the first ARM working by 1985, which they used to build the Acorn Archimedes. The ARM2 would be faster than the Intel 80286 and by 1990, Apple was looking for a chip for the Apple Newton. A new company called Advanced RISC Machines, or Arm, would be founded, and from there they grew, with Apple being a shareholder through the 90s. By 1992, they were up to the ARM6 and the ARM610 was used for the Newton. DEC licensed the ARM architecture to develop the StrongARM, selling chips to other companies. Acorn would be broken up in 1998 and parts sold off, but ARM would live on until acquired by Softbank for $32 billion in 2016. Softbank is currently in acquisition talks to sell ARM to Nvidia for $40 billion. Meanwhile, John Cocke at IBM had been working on the RISC concepts since 1975 for embedded systems and by 1982 moved on to start developing their own 32-bit RISC chips. This led to the POWER instruction set which they shipped in 1990 as the RISC System/6000, or as we called them at the time, the RS/6000. They scaled that down to the PowerPC and in 1991 forged an alliance with Motorola and Apple. DEC designed the Alpha. It seemed as though the computer industry was Microsoft and Intel vs the rest of the world, using a RISC architecture. But by 2004 the alliance between Apple, Motorola, and IBM began to unravel and by 2006 Apple moved the Mac to an Intel processor. But something was changing in computing. Apple shipped the iPod back in 2001, effectively ushering in the era of mobile devices. By 2007, Apple released the first iPhone, which shipped with a Samsung ARM. You see, the interesting thing about ARM is they don't fab chips like Intel does - they license technology and designs. Apple licensed the Cortex-A8 from ARM for the iPhone 3GS by 2009 but had an ambitious lineup of tablets and phones in the pipeline. And so in 2010 they did something new: they made their own system on a chip, or SoC. 
Continuing to license some ARM technology, Apple pushed on, getting between 800MHz and 1GHz out of the chip and using it to power the iPhone 4, the first iPad, and the long overdue second-generation Apple TV. The next year came the A5, used in the iPad 2 and first iPad Mini, then the A6 at 1.3GHz for the iPhone 5, and the A7 for the iPhone 5s and iPad Air. That was the first 64-bit consumer SoC. In 2014, Apple released the A8 processor for the iPhone 6, which came in speeds ranging from 1.1GHz to the 1.5GHz chip in the 4th generation Apple TV. By 2015, Apple was up to the A9, which clocked in at 1.85GHz for the iPhone 6s. Then we got the A10 in 2016, the A11 in 2017, the A12 in 2018, the A13 in 2019, and the A14 in 2020 with neural engines, 4 GPUs, and 11.8 billion transistors compared to the 30,000 in the original ARM. And it's not just Apple. Samsung has been on a similar tear, firing up the Exynos line in 2011 and continuing to license ARM designs up to the Cortex-A55 with similar features to the Apple chips, namely used on the Samsung Galaxy A21. And the Snapdragon. And the Broadcoms. In fact, the Broadcom SoC was used in the Raspberry Pi (developed in association with Broadcom) in 2012. The 5 models of the Pi helped bring on a mobile and IoT revolution. And so nearly every mobile device now ships with an ARM chip, as do many a device we place around our homes so our digital assistants can help run our lives. Over 100 billion ARM processors have been produced, well over 10 for every human on the planet. And the number is about to grow even more rapidly. Apple surprised many by announcing they were leaving Intel to design their own chips for the Mac. Given that the PowerPC chips were RISC, the ARM chips in the mobile devices are RISC, and the history Apple has with the platform, it's no surprise that Apple is going back in that direction with the M1, Apple's first system on a chip for a Mac. And the new MacBook Pro screams. Even software running in Rosetta 2 on my M1 MacBook is faster than on my Intel MacBook. And at 16 billion transistors, with an 8-core GPU and a 16-core neural engine, I'm sure developers are hard at work developing the M3 on these new devices (since, you know, I assume the M2 is done by now). What's crazy is, I haven't felt like Intel had a competitor other than AMD in the CPU space since Apple switched from the PowerPC. Actually, those weren't great days. I haven't felt that way since I realized no one but me had a DEC Alpha or when I took the SPARC off my desk so I could finally play Civilization. And this revolution has been a constant stream of evolutions, 40 years in the making. It started with an ARPA grant, but various evolutions from there died out. And so really, it all started with Sophie Wilson. She helped give us the BBC Micro and the ARM. She was part of the move to Element 14 from Acorn Computers and then ended up at Broadcom when they bought the company in 2000, and continues to act as the Director of IC Design. We can definitely thank ARPA for sprinkling funds around prominent universities to get us past 10,000 transistors on a chip. Given that chips continue to progress at such a lightning pace, I can't imagine where we'll be at in another 40 years. But we owe her (and her coworkers at Acorn and the team at VLSI, now NXP Semiconductors) for their hard work and innovations.
In this special edition of the TechCentral podcast, Duncan McLeod chats to three top industry experts to unpack communications regulator Icasa's invitations to apply for broadband spectrum and for the wholesale open-access network (Woan). McLeod is joined by Steve Song, Kerron Edmunson and Mortimer Hope to discuss the ITAs in detail, including what they mean for South Africa's telecommunications industry and consumers. Song, who wears many hats, including as fellow in residence at the Mozilla Foundation, Hope, who runs a policy and regulatory consultancy and who is a former director for Africa at the GSMA, and Edmunson, an attorney who specialises in telecoms policy and regulation, start by giving their high-level views of the spectrum ITA. What's good about it, what's not so good about it, and what needs to be fixed? The conversation then delves into greater detail, looking at the spectrum lots that Icasa has created for the auction and whether these make sense. Is Icasa trying to engineer a particular outcome? And if it is, is that a good thing or a bad thing? The three panellists then unpack the reserve prices set by Icasa. Are they too high? Too low? Just right? Is an auction the best model to use to allocate spectrum in an emerging market like South Africa? What will be the impact on retail prices? Who can afford to bid? What will it mean for competition? The discussion then turns to the digital dividend bands – 700MHz and 800MHz – still occupied by analogue television broadcasters. Should Icasa be licensing these bands at this stage? Will bidders be prepared to pay top dollar for access to bands they can't fully utilise until at least 2022? Should South Africa give up on digital terrestrial television and free up all sub-1GHz spectrum for mobile? The panellists also tackle the obligations attached to the spectrum licences – are they fair? Do they make sense? What about the requirements around mobile virtual network operators? Does the requirement to support MVN
TechCentral — Telkom group executive for regulation Siyabonga Mahlangu joins TechCentral's Duncan McLeod for a discussion on communications regulator Icasa's emergency temporary spectrum relief and what it means for operators, including Telkom. Mahlangu explains the importance of Telkom getting access to sub-1GHz mobile spectrum for the first time -- even if it's on a temporary basis, for now -- and what the company is able to do with this spectrum given that it's still being used by terrestrial television broadcasters. He also outlines why Telkom believes Icasa has erred in making 40MHz of spectrum available in the 2.3GHz band, saying the regulator has effectively expropriated the company's spectrum, which it is not entitled to do. Don't miss the discussion!
…made up of one prime (super) core at 3.1GHz. The big cores of the Snapdragon 865 run at 2.84GHz, so the new chip's clock speed is nearly 10% higher. There is therefore good reason to believe the chip in the Note 20+ is the Snapdragon 865+.
A set of Geekbench 5 (Beta 1) scores believed to be from Apple's A14 chip has leaked, making it arguably the most powerful smartphone processor of 2020. The A14's single-core score is 1658, and its multi-core score reaches 4612.
This week: another Apple event is coming December 2! We'll tell you what to expect. Plus: the benchmarks prove it—the new 16-inch MacBook Pro is the powerful update we've all been waiting for. And, finally, it's hard to admit, but Apple's new Mac shows there may be some advantages to a post-Ive era. We discuss. This episode supported by The NETGEAR Orbi WiFi 6 router gives you ultra-fast speeds and wider coverage throughout your home – it's the biggest revolution in WiFi ever. Check it out today at your local Best Buy and at Netgear.com/bestwifi6. Easily create a beautiful website all by yourself, at Squarespace.com/cultcast. Use offer code CultCast at checkout to get 10% off your first purchase of a website or domain. Cult of Mac's watch store is full of beautiful straps that cost way less than Apple's. See the full curated collection at Store.Cultofmac.com CultCloth will keep your iPhone 11 Pro, Apple Watch, Mac and iPad sparkling clean, and for a limited time use code CULTCAST at checkout to score a free CleanCloth with any order at CultCloth.co. On the show this week @erfon / @lkahney / @lewiswallace This week's stories Apple to host surprise event for apps and games on December 2 Members of the press received surprise invitations from Apple this morning to attend a first-of-its-kind event to honor 2019's top apps and games on December 2 in New York City. 16-inch MacBook Pro beats predecessors by overcoming thermal throttling The higher tier MBP has the same i9-9880H processor as the previous model. In a 30 minute stress test, Dave 2D measured the CPU average speed at 3.1GHz where the previous model was at 2.7GHz. Other stress tests show a similar result: the same CPU is about 10% faster in the 16-inch MBP. 13-inch MacBook Pro dropping horrible Butterfly keyboard The smaller MacBook Pro will reportedly be upgraded next year with a keyboard that users can trust, just as the new 16-inch version recently was. If true, it means the current 13-inch MacBook Pro will be the last with the infamous Butterfly keyboard. This improved version is supposedly scheduled for the first half of 2020. How a Magic Keyboard made it into the new 16-inch MacBook Pro In a new interview, Apple marketing SVP Phil Schiller talked about redesigning Apple's notebook keyboard. And whether or not the non-butterfly keyboard will make it to other Apple laptops anytime soon. 16-inch MacBook Pro shows the advantages of a post-Jony Ive Apple [Opinion] Now that Jony's stepping away, will Apple temper the enthusiasm of the design team and make its products more practical?
OpenBSD on Microsoft Surface Go, FreeBSD Foundation August Update, What’s taking so long with Project Trident, pkgsrc config file versioning, and MacOS remnants in ZFS code. ##Headlines OpenBSD on the Microsoft Surface Go For some reason I like small laptops and the constraints they place on me (as long as they’re still usable). I used a Dell Mini 9 for a long time back in the netbook days and was recently using an 11" MacBook Air as my primary development machine for many years. Recently Microsoft announced a smaller, cheaper version of its Surface tablets called Surface Go which piqued my interest. Hardware The Surface Go is available in two hardware configurations: one with 4Gb of RAM and a 64Gb eMMC, and another with 8Gb of RAM with a 128Gb NVMe SSD. (I went with the latter.) Both ship with an Intel Pentium Gold 4415Y processor which is not very fast, but it’s certainly usable. The tablet measures 9.65" across, 6.9" tall, and 0.3" thick. Its 10" diagonal 3:2 touchscreen is covered with Gorilla Glass and has a resolution of 1800x1200. The bezel is quite large, especially for such a small screen, but it makes sense on a device that is meant to be held, to avoid accidental screen touches. The keyboard and touchpad are located on a separate, removable slab called the Surface Go Signature Type Cover which is sold separately. I opted for the “cobalt blue” cover which has a soft, cloth-like alcantara material. The cover attaches magnetically along the bottom edge of the device and presents USB-attached keyboard and touchpad devices. When the cover is folded up against the screen, it sends an ACPI sleep signal and is held to the screen magnetically. During normal use, the cover can be positioned flat on a surface or slightly raised up about 3/4" near the screen for better ergonomics. When using the device as a tablet, the cover can be rotated behind the screen which causes it to automatically stop sending keyboard and touchpad events until it is rotated back around. The keyboard has a decent amount of key travel and a good layout, with Home/End/Page Up/Page Down being accessible via Fn+Left/Right/Up/Down but also dedicated Home/End/Page Up/Page Down keys on the F9-F12 keys which I find quite useful since the keyboard layout is somewhat small. By default, the F1-F12 keys do not send F1-F12 key codes and Fn must be used, either held down temporarily or Fn pressed by itself to enable Fn-lock which annoyingly keeps the bright Fn LED illuminated. The keys are backlit with three levels of adjustment, handled by the keyboard itself with the F7 key. The touchpad on the Type Cover is a Windows Precision Touchpad connected via USB HID. It has a decent click feel but when the cover is angled up instead of flat on a surface, it sounds a bit hollow and cheap. Surface Go Pen The touchscreen is powered by an Elantech chip connected via HID-over-i2c, which also supports pen input. A Surface Pen digitizer is available separately from Microsoft and comes in the same colors as the Type Covers. The pen works without any pairing necessary, though the top button on it works over Bluetooth so it requires pairing to use. Either way, the pen requires an AAAA battery inside it to operate. The Surface Pen can attach magnetically to the left side of the screen when not in use. A kickstand can swing out behind the display to use the tablet in a laptop form factor, which can adjust to any angle up to about 170 degrees. 
The kickstand stays firmly in place wherever it is positioned, which also means it requires a bit of force to pull it out when initially placing the Surface Go on a desk. Along the top of the display are a power button and physical volume rocker buttons. Along the right side are the 3.5mm headphone jack, USB-C port, power port, and microSD card slot located behind the kickstand. Charging can be done via USB-C or the dedicated charge port, which accommodates a magnetically-attached, thin barrel similar to Apple’s first generation MagSafe adapter. The charging cable has a white LED that glows when connected, which is kind of annoying since it’s near the mid-line of the screen rather than down by the keyboard. Unlike Apple’s MagSafe, the indicator light does not indicate whether the battery is charged or not. The barrel charger plug can be placed up or down, but in either direction I find it puts an awkward strain on the power cable coming out of it due to the vertical position of the port. Wireless connectivity is provided by a Qualcomm Atheros QCA6174 802.11ac chip which also provides Bluetooth connectivity. Most of the sensors on the device such as the gyroscope and ambient light sensor are connected behind an Intel Sensor Hub PCI device, which provides some power savings as the host CPU doesn’t have to poll the sensors all the time. Firmware The Surface Go’s BIOS/firmware menu can be entered by holding down the Volume Up button, then pressing and releasing the Power button, and releasing Volume Up when the menu appears. Secure Boot as well as various hardware components can be disabled in this menu. Boot order can also be adjusted. A temporary boot menu can be brought up the same way but using Volume Down instead. ###FreeBSD Foundation Update, August 2018 MESSAGE FROM THE EXECUTIVE DIRECTOR Dear FreeBSD Community Member, It’s been a busy summer for the Foundation. From traveling around the globe spreading the word about FreeBSD to bringing on new team members to improve the Project’s Continuous Integration work, we’re very excited about what we’ve accomplished. Take a minute to check out the latest updates within our Foundation sponsored projects; read more about our advocacy efforts in Bangladesh and community building in Cambridge; don’t miss upcoming Travel Grant deadlines, and new Developer Summits; and be sure to find out how your support will ensure our progress continues into 2019. We can’t do this without you! Happy reading!! Deb August 2018 Development Projects Update Fundraising Update: Supporting the Project August 2018 Release Engineering Update BSDCam 2018 Recap October 2018 FreeBSD Developer Summit Call for Participation SANOG32 and COSCUP 2018 Recap MeetBSD 2018 Travel Grant Application Deadline: September 7 ##News Roundup Project Trident: What’s taking so long? What is taking so long? The short answer is that it’s complicated. Project Trident is quite literally a test of the new TrueOS build system. As expected, there have been quite a few bugs, undocumented features, and other optional bits that we discovered we needed that were not initially present. All of these things have to be addressed and retested in a constant back and forth process. While Ken and JT are both experienced developers, neither has done this kind of release engineering before. JT has done some release engineering back in his Linux days, but the TrueOS and FreeBSD build system is very different. Both Ken and JT are learning a completely new way of building a FreeBSD/TrueOS distribution. 
Please keep in mind that no one has used this new TrueOS build system before, so Ken and JT want to not only provide a good Trident release, but also provide a model or template for other potential TrueOS distributions too! Where are we now? Through perseverance, trial and error, and a lot of head-scratching we have reached the point of having successful builds. It took a while to get there, but now we are simply working out a few bugs with the new installer that Ken wrote as well as finding and fixing all the new Xorg configuration options which recently landed in FreeBSD. We also found that a number of services have been removed or replaced between TrueOS 18.03 and 18.06 so we are needing to adjust what we consider the “base” services for the desktop. All of these issues are being resolved and we are continually rebuilding and pulling in new patches from TrueOS as soon as they are committed. In the meantime we have made an early BETA release of Trident available to the users in our Telegram Channel for those who want to help out in testing these early versions. Do you foresee any other delays? At the moment we are doing many iterations of testing and tweaking the install ISO and package configurations in order to ensure that all the critical functionality works out-of-box (networking, sound, video, basic apps, etc). While we do not foresee any other major delays, sometimes things happen that our outside of our control. For an example, one of the recent delays that hit recently was completely unexpected: we had a hard drive failure on our build server. Up until recently, The aptly named “Poseidon” build server was running a Micron m500dc drive, but that drive is now constantly reporting errors. Despite ordering a replacement Western Digital Blue SSD several weeks ago, we just received it this past week. The drive is now installed with the builder back to full functionality, but we did lose many precious days with the delay. The build server for Project Trident is very similar to the one that JT donated to the TrueOS project. JT had another DL580 G7, so he donated one to the Trident Project for their build server. Poseidon also has 256GB RAM (64 x 4GB sticks) which is a smidge higher than what the TrueOS builder has. Since we are talking about hardware, we probably should address another question we get often, “What Hardware are the devs testing on?” So let’s go ahead and answer that one now. Developer Hardware JT: His main test box is a custom-built Intel i7 7700K system running 32GB RAM, dual Intel Optane 900P drives, and an Nvidia 1070 GTX with four 4K Acer Monitors. He also uses a Lenovo x250 ThinkPad alongside a desk full of x230t and x220 ThinkPads. One of which he gave away at SouthEast LinuxFest this year, which you can read about here. However it’s not done there, being a complete hardware hoarder, JT also tests on several Intel NUCs and his second laptop a Fujitsu t904, not to mention a Plethora of HP DL580 servers, a DL980 server, and a stack of BL485c, BL460c, and BL490c Blades in his HP c7000 and c3000 Bladecenter chassis. (Maybe it’s time for an intervention for his hardware collecting habits) Ken: For a laptop, he primarily uses a 3rd generation X1 Carbon, but also has an old Eee PC T101MT Netbook (dual core 1GHz, 2GB of memory) which he uses for verifying how well Trident works on low-end hardware. 
As far as workstations go, his office computer is an Intel i7 with an NVIDIA Geforce GTX 960 running three 4K monitors and he has a couple other custom-built workstations (1 AMD, 1 Intel+NVIDIA) at his home. Generally he assembled random workstations based on hardware that was given to him or that he could acquire cheap. Tim: is using a third gen X1 Carbon and a custom built desktop with an Intel Core i5-4440 CPU, 16 GiB RAM, Nvidia GeForce GTX 750 Ti, and a RealTek 8168 / 8111 network card. Rod: Rod uses… No one knows what Rod uses, It’s kinda like how many licks does it take to get to the center of a Tootsie-Roll Tootsie-Pop… the world may just never know. ###NetBSD GSoC: pkgsrc config file versioning A series of reports from the course of the summer on this Google Summer of Code project The goal of the project is to integrate with a VCS (Version Control System) to make managing local changes to config files for packages easier GSoC 2018 Reports: Configuration files versioning in pkgsrc, Part 1 Packages may install code (both machine executable code and interpreted programs), documentation and manual pages, source headers, shared libraries and other resources such as graphic elements, sounds, fonts, document templates, translations and configuration files, or a combination of them. Configuration files are usually the means through which the behaviour of software without a user interface is specified. This covers parts of the operating systems, network daemons and programs in general that don’t come with an interactive graphical or textual interface as the principal mean for setting options. System wide configuration for operating system software tends to be kept under /etc, while configuration for software installed via pkgsrc ends up under LOCALBASE/etc (e.g., /usr/pkg/etc). Software packaged as part of pkgsrc provides example configuration files, if any, which usually get extracted to LOCALBASE/share/examples/PKGBASE/. Don’t worry: automatic merging is disabled by default, set $VCSAUTOMERGE to enable it. In order to avoid breakage, installed configuration is backed up first in the VCS, separating user-modified files from files that have been already automatically merged in the past, in order to allow the administrator to easily restore the last manually edited file in case of breakage. VCS functionality only applies to configuration files, not to rc.d scripts, and only if the environment variable $NOVCS is unset. The version control system to be used as a backend can be set through $VCS. It default to RCS, the Revision Control System, which works only locally and doesn’t support atomic transactions. Other backends such as CVS are supported and more will come; these, being used at the explicit request of the administrator, need to be already installed and placed in a directory part of $PATH. GSoC 2018 Reports: Configuration files versioning in pkgsrc, part 2: remote repositories (git and CVS) pkgsrc is now able to deploy configuration from packages being installed from a remote, site-specific vcs repository. User modified files are always tracked even if automerge functionality is not enabled, and a new tool, pkgconftrack(1), exists to manually store user changes made outside of package upgrade time. Version Control software is executed as the same user running pkgadd or make install, unless the user is “root”. In this case, a separate, unprivileged user, pkgvcsconf, gets created with its own home directory and a working login shell (but no password). 
The home directory is not strictly necessary; it exists to facilitate migrations between repositories and VCS changes, and it also serves to store keys used to access remote repositories. Using git instead of rcs is simply done by setting VCS=git in pkginstall.conf GSoC 2018 Reports: Configuration files versioning in pkgsrc, part 3: remote repositories (SVN and Mercurial) GSoC 2018 Reports: Configuration files versioning in pkgsrc, part 4: configuration deployment, pkgtools and future improvements Support for configuration tracking is in the pkginstall scripts that get built into binary packages and are run by pkg_add upon installation. The idea behind the proposal suggested that users of the new feature should be able to store revisions of their installed configuration files, and of package-provided defaults, in either local or remote repositories. With this capability in place, it doesn't take much to make the scripts "pull" configuration from a VCS repository at installation time. That's what setting VCSCONFPULL=yes in pkginstall.conf after having enabled VCSTRACKCONF does: You are free to use official, third party prebuilt packages that have no customization in them, enable these options, and point pkgsrc to a private conf repository. If it contains custom configuration for the software you are installing, an attempt will be made to use it and install it on your system. If it fails, pkginstall will fall back to using the defaults that come inside the package. RC scripts are always deployed from the binary package, if existing and PKGRCDSCRIPTS=yes in pkginstall.conf or the environment. This will be part of packages, not a separate solution like configuration management tools. It doesn't support running scripts on the target system to customize the installation, it doesn't come with its own domain-specific language, and it won't run as a daemon or require remote logins to work. It's quite limited in scope, but you can define a ROLE for your system in pkginstall.conf or in the environment, and pkgsrc will look for configuration you or your organization crafted for such a role (e.g., public, standalone webserver vs reverse proxy or node in a database cluster) ###A little bit of the one-time MacOS version still lingers in ZFS Once upon a time, Apple came very close to releasing ZFS as part of MacOS. Apple did this work in its own copy of the ZFS source base (as far as I know), but the people in Sun knew about it and it turns out that even today there is one little lingering sign of this hoped-for and perhaps prepared-for ZFS port in the ZFS source code. Well, sort of, because it's not quite in code. Lurking in the function that reads ZFS directories to turn (ZFS) directory entries into the filesystem independent format that the kernel wants is the following comment: objnum = ZFS_DIRENT_OBJ(zap.za_first_integer); /* MacOS X can extract the object type here such as: uint8_t type = ZFS_DIRENT_TYPE(zap.za_first_integer); */ Specifically, this is in zfs_readdir in zfs_vnops.c. ZFS maintains file type information in directories. This information can't be used on Solaris (and thus Illumos), where the overall kernel doesn't have this in its filesystem independent directory entry format, but it could have been on MacOS ('Darwin'), because MacOS is among the Unixes that support d_type. The comment itself dates all the way back to this 2007 commit, which includes the change 'reserve bits in directory entry for file type', which created the whole setup for this. 
I don’t know if this file type support was added specifically to help out Apple’s MacOS X port of ZFS, but it’s certainly possible, and in 2007 it seems likely that this port was at least on the minds of ZFS developers. It’s interesting but understandable that FreeBSD didn’t seem to have influenced them in the same way, at least as far as comments in the source code go; this file type support is equally useful for FreeBSD, and the FreeBSD ZFS port dates to 2007 too (per this announcement). Regardless of the exact reason that ZFS picked up maintaining file type information in directory entries, it’s quite useful for people on both FreeBSD and Linux that it does so. File type information is useful for any number of things and ZFS filesystems can (and do) provide this information on those Unixes, which helps make ZFS feel like a truly first class filesystem, one that supports all of the expected general system features. ##Beastie Bits Mac-like FreeBSD Laptop Syncthing on FreeBSD New ZFS Boot Environments Tool My system’s time was so wrong, that even ntpd didn’t work OpenSSH 7.8/7.8p1 (2018-08-24) EuroBSD (Sept 20-23rd) registration Early Bird Period is coming to an end MeetBSD (Oct 18-20th) is coming up fast, hurry up and register! AsiaBSDcon 2019 Dates ##Feedback/Questions Will - Kudos and a Question Peter - Fanless Computers Ron - ZFS disk clone or replace or something Bostjan - ZFS Record Size Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
TechCraft, a tech and video-game entertainment show. OR: TechCraft, a clever mix of high tech, video games & fun! Our links: TechCraft site: www.techcraft.fr YouTube live: http://live.techcraft.fr RSS feed: http://techcraft.podcloud.fr/rss E-Mail: podcast@techcraft.fr Twitter: @TechCraftPDC Facebook: https://www.facebook.com/TechCraftPDC Slack: http://soulcityteam.slack.com Podradio: http://podradio.fr/podcast/110 PodCloud: https://podcloud.fr/podcast/techcraft iTunes: https://itunes.apple.com/fr/podcast/gamecraft/id796213889 YouTube channel: https://www.youtube.com/techcraftpdc Periscope channel: http://www.periscope.tv/techcraftpdc High-tech news: Seven: Let's hire Techplomats, it's the future! Quenton: Snap, my location! Binzen: Spotify and sponsored songs. Seven: One Plus 5 = ? Binzen: electoral data leaks in the USA. Gaming news: Kaldin: Rockstar and modding don't mix. Kaldin: Agent 47 is saved! Initiative of the week: Quenton: Solar paint! Take that, Elon! News in brief: Seven: 4G on the 3G band (2.1GHz) now available for Bouygues and SFR! Kaldin: The Steam summer sale starts tonight! Seven: In 360° videos, keep your focus straight ahead! Seven: Viva Technology, 2nd edition: it's over! CONCLUSION TechCraft site: www.techcraft.fr E-Mail: podcast@techcraft.fr Slack: soulcityteam.slack.com Twitter: @TechCraftPDC
This week on BSDNow, reports from AsiaBSDcon, TrueOS and FreeBSD news, Optimizing IllumOS Kernel, your questions and more. This episode was brought to you by Headlines AsiaBSDcon Reports and Reviews () AsiaBSDcon schedule (https://2017.asiabsdcon.org/program.html.en) Schedule and slides from the 4th bhyvecon (http://bhyvecon.org/) Michael Dexter's trip report on the iXsystems blog (https://www.ixsystems.com/blog/ixsystems-attends-asiabsdcon-2017) NetBSD AsiaBSDcon booth report (http://mail-index.netbsd.org/netbsd-advocacy/2017/03/13/msg000729.html) *** TrueOS Community Guidelines are here! (https://www.trueos.org/blog/trueos-community-guidelines/) TrueOS has published its new Community Guidelines The TrueOS Project has existed for over ten years. Until now, there was no formally defined process for interested individuals in the TrueOS community to earn contributor status as an active committer to this long-standing project. The current core TrueOS developers (Kris Moore, Ken Moore, and Joe Maloney) want to provide the community more opportunities to directly impact the TrueOS Project, and wish to formalize the process for interested people to gain full commit access to the TrueOS repositories. These describe what is expected of community members and committers They also describe the process of getting commit access to the TrueOS repo: Previously, Kris directly handed out commit bits. Now, the Core developers have provided a small list of requirements for gaining a TrueOS commit bit: Create five or more pull requests in a TrueOS Project repository within a single six month period. Stay active in the TrueOS community through at least one of the available community channels (Gitter, Discourse, IRC, etc.). Request commit access from the core developers via core@trueos.org OR Core developers contact you concerning commit access. Pull requests can be any contribution to the project, from minor documentation tweaks to creating full utilities. At the end of every month, the core developers review the commit logs, removing elements that break the Project or deviate too far from its intended purpose. Additionally, outstanding pull requests with no active dissension are immediately merged, if possible. For example, a user submits a pull request which adds a little-used OpenRC script. No one from the community comments on the request or otherwise argues against its inclusion, resulting in an automatic merge at the end of the month. In this manner, solid contributions are routinely added to the project and never left in a state of “limbo”. The page also describes the perks of being a TrueOS committer: Contributors to the TrueOS Project enjoy a number of benefits, including: A personal TrueOS email alias: @trueos.org Full access for managing TrueOS issues on GitHub. Regular meetings with the core developers and other contributors. Access to private chat channels with the core developers. Recognition as part of an online Who's Who of TrueOS developers. The eternal gratitude of the core developers of TrueOS. A warm, fuzzy feeling. 
Intel Donates $250,000 to the FreeBSD Foundation (https://www.freebsdfoundation.org/news-and-events/latest-news/new-uranium-level-donation-and-collaborative-partnership-with-intel/) More details about the deal: Systems Thinking: Intel and the FreeBSD Project (https://www.freebsdfoundation.org/blog/systems-thinking-intel-and-the-freebsd-project/) Intel will be more actively engaging with the FreeBSD Foundation and the FreeBSD Project to deliver more timely support for Intel products and technologies in FreeBSD. Intel has contributed code to FreeBSD for individual device drivers (e.g. NICs) in the past, but is now seeking a more holistic “systems thinking” approach. Intel Blog Post (https://01.org/blogs/imad/2017/intel-increases-support-freebsd-project) We will work closely with the FreeBSD Foundation to ensure the drivers, tools, and applications needed on Intel® SSD-based storage appliances are available to the community. This collaboration will also provide timely support for future Intel® 3D XPoint™ products. Thank you very much, Intel! *** Applied FreeBSD: Basic iSCSI (https://globalengineer.wordpress.com/2017/03/05/applied-freebsd-basic-iscsi/) iSCSI is often touted as a low-cost replacement for fibre-channel (FC) Storage Area Networks (SANs). Instead of having to set up a separate fibre-channel network for the SAN, or invest in the infrastructure to run Fibre-Channel over Ethernet (FCoE), iSCSI runs on top of standard TCP/IP. This means that the same network equipment used for routing user data on a network could be utilized for the storage as well. This article will cover a very basic setup where a FreeBSD server is configured as an iSCSI Target, and another FreeBSD server is configured as the iSCSI Initiator. The iSCSI Target will export a single disk drive, and the initiator will create a filesystem on this disk and mount it locally. Advanced topics, such as multipath, ZFS storage pools, failover controllers, etc. are not covered. The real magic is the /etc/ctl.conf file, which contains all of the information necessary for ctld to share disk drives on the network. Check out the man page for /etc/ctl.conf for more details; below is the configuration file that I created for this test setup. Note that on a system that has never had iSCSI configured, there will be no existing configuration file, so go ahead and create it. Then, enable ctld and start it: sysrc ctld_enable="YES" service ctld start You can use the ctladm command to see what is going on: root@bsdtarget:/dev # ctladm lunlist (7:0:0/0): Fixed Direct Access SPC-4 SCSI device (7:0:1/1): Fixed Direct Access SPC-4 SCSI device root@bsdtarget:/dev # ctladm devlist LUN Backend Size (Blocks) BS Serial Number Device ID 0 block 10485760 512 MYSERIAL 0 MYDEVID 0 1 block 10485760 512 MYSERIAL 1 MYDEVID 1 Now, let's configure the client side: In order for a FreeBSD host to become an iSCSI Initiator, the iscsid daemon needs to be started. sysrc iscsid_enable="YES" service iscsid start Next, the iSCSI Initiator can manually connect to the iSCSI target using the iscsictl tool. While setting up a new iSCSI session, this is probably the best option. Once you are sure the configuration is correct, add the configuration to the /etc/iscsi.conf file (see man page for this file).
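Since the show notes do not reproduce the author's actual configuration, here is a minimal sketch of what such an /etc/ctl.conf could look like. The IQN and portal address are taken from the iscsictl example below, while the portal-group name and the backing file paths are made up for illustration; see ctl.conf(5) for the full syntax:

    portal-group pg0 {
        discovery-auth-group no-authentication
        listen 192.168.22.128
    }

    target iqn.2017-02.lab.testing:basictarget {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
            path /data/iscsi_lun0.img
        }
        lun 1 {
            path /data/iscsi_lun1.img
        }
    }

On the initiator side, the persistent counterpart of the manual iscsictl command would be an /etc/iscsi.conf entry roughly like this (again only a sketch, not the article's file; see iscsi.conf(5)):

    basictarget {
        TargetAddress = 192.168.22.128
        TargetName    = iqn.2017-02.lab.testing:basictarget
    }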
For iscsictl, pass the IP address of the target as well as the iSCSI IQN for the session: iscsictl -A -p 192.168.22.128 -t iqn.2017-02.lab.testing:basictarget You should now have a new device (check dmesg), in this case, da1. The guide then walks through partitioning the disk, laying down a UFS file system, and mounting it. It then walks through how to disconnect iSCSI, in case you don't want it anymore. This all looked nice and easy, and it works very well. Now let's see what happens when you try to mount the iSCSI from Windows. Ok, that wasn't so bad. Now, instead of sharing an entire disk on the host via iSCSI, share a zvol. Now your Windows machine can be backed by ZFS. All of your problems are solved. Interview - Philipp Buehler - pbuehler@sysfive.com (mailto:pbuehler@sysfive.com) Technical Lead at SysFive, and Former OpenBSD Committer News Roundup Half a dozen new features in mandoc -T html (http://undeadly.org/cgi?action=article&sid=20170316080827) mandoc (http://man.openbsd.org/mandoc.1)'s HTML output mode got some new features Even though mdoc(7) is a semantic markup language, traditionally none of the semantic annotations were communicated to the reader. [...] Now, at least in -T html output mode, you can see the semantic function of marked-up words by hovering your mouse over them. In terminal output modes, we have had the ctags(1)-like internal search facility built around the less(1) tag jump (:t) feature for quite some time now. We now have a similar feature in -T html output mode. To jump to (almost) the same places in the text, go to the address bar of the browser, type a hash mark ('#') after the URI, then the name of the option, command, variable, error code etc. you want to jump to, and hit enter. Check out the full report by Ingo Schwarze (schwarze@) and try out these new features *** Optimizing IllumOS Kernel Crypto (http://zfs-create.blogspot.com/2014/05/optimizing-illumos-kernel-crypto.html) Sašo Kiselkov, of ZFS fame, looked into the performance of the OpenSolaris kernel crypto framework and found it lacking. The article also spends a few minutes on the different modes and how they work. Recently I've had some motivation to look into the KCF on Illumos and discovered that, unbeknownst to me, we already had an AES-NI implementation that was automatically enabled when running on Intel and AMD CPUs with AES-NI support. This work was done back in 2010 by Dan Anderson. This was great news, so I set out to test the performance in Illumos in a VM on my Mac with a Core i5 3210M (2.5GHz normal, 3.1GHz turbo). The initial tests of “what the hardware can do” were done in OpenSSL. So now comes the test for the KCF. I wrote a quick'n'dirty crypto test module that just performed a bunch of encryption operations and timed the results. KCF got around 100 MB/s for each algorithm, except half that for AES-GCM. OpenSSL had done over 3000 MB/s for CTR mode, 500 MB/s for CBC, and 1000 MB/s for GCM. What the hell is that?! This is just plain unacceptable. Obviously we must have hit some nasty performance snag somewhere, because this is comical. And sure enough, we did. When looking around in the AES-NI implementation I came across this bit in aes_intel.s that performed the CLTS instruction. This is a problem: 3.1.2 Instructions That Cause VM Exits Conditionally: CLTS. The CLTS instruction causes a VM exit if the bits in position 3 (corresponding to CR0.TS) are set in both the CR0 guest/host mask and the CR0 read shadow.
The CLTS instruction signals to the CPU that we're about to use FPU registers (which is needed for AES-NI), which in VMware causes an exit into the hypervisor. And we've been doing it for every single AES block! Needless to say, performing the equivalent of a very expensive context switch every 16 bytes is going to hurt encryption performance a bit. The reason why the kernel is issuing CLTS is because for performance reasons, the kernel doesn't save and restore FPU register state on kernel thread context switches. So whenever we need to use FPU registers inside the kernel, we must disable kernel thread preemption via a call to kpreempt_disable() and kpreempt_enable() and save and restore FPU register state manually. During this time, we cannot be descheduled (because if we were, some other thread might clobber our FPU registers), so if a thread does this for too long, it can lead to unexpected latency bubbles. The solution was to restructure the AES and KCF block crypto implementations in such a way that we execute encryption in meaningfully small chunks. I opted for 32k bytes, for reasons which I'll explain below. Unfortunately, doing this restructuring work was a bit more complicated than one would imagine, since in the KCF the implementation of the AES encryption algorithm and the block cipher modes is separated into two separate modules that interact through an internal API, which wasn't really conducive to high performance (we'll get to that later). Anyway, having fixed the issue here and running the code at near native speed, this is what I get: AES-128/CTR: 439 MB/s AES-128/CBC: 483 MB/s AES-128/GCM: 252 MB/s Not disastrous anymore, but still, very, very bad. Of course, you've got to keep in mind, the thing we're comparing it to, OpenSSL, is no slouch. It's got hand-written highly optimized inline assembly implementations of most of these encryption functions and their specific modes, for lots of platforms. That's a ton of code to maintain and optimize, but I'll be damned if I let this kind of performance gap persist. Fixing this, however, is not so trivial anymore. It pertains to how the KCF's block cipher mode API interacts with the cipher algorithms. It is beautifully designed and implemented in a fashion that creates minimum code duplication, but this also means that it's inherently inefficient. ECB, CBC and CTR gained the ability to pass an algorithm-specific "fastpath" implementation of the block cipher mode, because these functions benefit greatly from pipelining multiple cipher calls into a single place. ECB, CTR and CBC decryption benefit enormously from being able to exploit the wide XMM register file on Intel to perform encryption/decryption operations on 8 blocks at the same time in a non-interlocking manner. The performance gains here are on the order of 5-8x. CBC encryption benefits from not having to copy the previously encrypted ciphertext blocks into memory and back into registers to XOR them with the subsequent plaintext blocks, though here the gains are more modest, around 1.3-1.5x. After all of this work, this is how the results now look on Illumos, even inside of a VM: Algorithm/Mode 128k ops AES-128/CTR: 3121 MB/s AES-128/CBC: 691 MB/s AES-128/GCM: 1053 MB/s So the CTR and GCM speeds have actually caught up to OpenSSL, and CBC is actually faster than OpenSSL. On the decryption side of things, CBC decryption also jumped from 627 MB/s to 3011 MB/s. Seeing these performance numbers, you can see why I chose 32k for the operation size in between kernel preemption barriers.
Even on the slowest hardware with AES-NI, we can expect at least 300-400 MB/s/core of throughput, so even in the worst case, we'll be hogging the CPU for at most ~0.1ms per run. Overall, we're even a little bit faster than OpenSSL in some tests, though that's probably down to us encrypting 128k blocks vs 8k in the "openssl speed" utility. Anyway, having fixed this monstrous atrocity of a performance bug, I can now finally get some sleep. To make these tests repeatable, and to ensure that the changes didn't break the crypto algorithms, Saso created a crypto_test kernel module. I have recently created a FreeBSD version of crypto_test.ko, for much the same purposes. Initial performance on FreeBSD is not as bad, if you have the aesni.ko module loaded, but it is not up to speed with OpenSSL. You cannot directly compare to the benchmarks Saso did, because the CPUs are vastly different. Performance results (https://wiki.freebsd.org/OpenCryptoPerformance) I hope to do some more tests on a range of different sized CPUs in order to determine how the algorithms scale across different clock speeds. I also want to look at, or get help and have someone else look at, implementing some of the same optimizations that Saso did. It currently seems like there isn't a way to perform additional crypto operations in the same session without regenerating the key table. Processing additional buffers in an existing session might offer a number of optimizations for bulk operations, although in many cases, each block is encrypted with a different key and/or IV, so it might not be very useful. *** Brendan Gregg's special freeware tools for sysadmins (http://www.brendangregg.com/specials.html) These tools need to be in every (not so) serious sysadmin's toolbox. Triple ROT13 encryption algorithm (beware: export restrictions may apply) /usr/bin/maybe, in case true and false provide too little choice... The bottom command lists all the processes using the least CPU cycles. Check out the rest of the tools. Have you written similar tools and want us to cover them on the show? Send us an email to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) *** A look at 2038 (http://www.lieberbiber.de/2017/03/14/a-look-at-the-year-20362038-problems-and-time-proofness-in-various-systems/) I remember the Y2K problem quite vividly. The world was going crazy for years, paying insane amounts of money to experts to fix critical legacy systems, and there was a never-ending stream of predictions from the media on how it was all going to fail. Most didn't even understand what the problem was, and I remember one magazine writing something like the following: Most systems store the current year as a two-digit value to save space. When the value rolls over on New Year's Eve 1999, those two digits will be “00”, and “00” means “halt operation” in the machine language of many central processing units. If you're in an elevator at this time, it will stop working and you may fall to your death. I still don't know why they thought a computer would suddenly interpret data as code, but people believed them. We could see a nearby hydropower plant from my parents' house, and we expected it to go up in flames as soon as the clock passed midnight, with at least two airplanes crashing in our garden at the same time. Then nothing happened.
I think one of the most “severe” problems was the police not being able to open their car garages the next day because their RFID tokens had both a start and end date for validity, and the system clock had actually rolled over to 1900, so the tokens were “not yet valid”. That was 17 years ago. One of the reasons why Y2K wasn't as bad as it could have been is that many systems had never used the “two-digit-year” representation internally, but used some form of “timestamp” relative to a fixed date (the “epoch”). The actual problem with time and dates rolling over is that systems calculate timestamp differences all day. Since a timestamp derived from the system clock seemingly only increases with each query, it is very common to just calculate diff = now - before and never care about the fact that now could suddenly be lower than before because the system clock has rolled over. In this case diff is suddenly negative, and if other parts of the code make further use of the suddenly negative value, things can go horribly wrong. A good example was a bug in the generator control units (GCUs) aboard Boeing 787 “Dreamliner” aircraft, discovered in 2015. An internal timestamp counter would overflow roughly 248 days after the system had been powered on, triggering a shutdown to “safe mode”. The aircraft has four generator units, but if all were powered up at the same time, they would all fail at the same time. This sounds like an overflow caused by a signed 32-bit counter counting the number of centiseconds since boot, overflowing after 248.55 days, and luckily no airline had been using their Boeing 787 models for such a long time between maintenance intervals. The “obvious” solution is to simply switch to 64-Bit values and call it a day, which would push overflow dates far into the future (as long as you don't do it like the IBM S/370 mentioned before). But as we've learned from the Y2K problem, you have to assume that computer systems, computer software and stored data (which often contains timestamps in some form) will stay with us for much longer than we might think. The years 2036 and 2038 might be far in the future, but we have to assume that many of the things we make and sell today are going to be used and supported for more than just 19 years. Also many systems have to store dates which are far in the future. A 30 year mortgage taken out in 2008 could have already triggered the bug, and for some banks it supposedly did. sys_gettimeofday() is one of the most used system calls on a generic Linux system and returns the current time in the form of a UNIX timestamp (time_t data type) plus fraction (suseconds_t data type). Many applications have to know the current time and date to do things, e.g. displaying it, using it in game timing loops, invalidating caches after their lifetime ends, performing an action after a specific moment has passed, etc. In a 32-Bit UNIX system, time_t is usually defined as a signed 32-Bit Integer. When kernel, libraries and applications are compiled, the compiler will turn this assumption into machine code and all components later have to match each other. So a 32-Bit Linux application or library still expects the kernel to return a 32-Bit value even if the kernel is running on a 64-Bit architecture and has 32-Bit compatibility. The same holds true for applications calling into libraries. This is a major problem, because there will be a lot of legacy software running in 2038.
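To make the "diff = now - before" failure concrete, here is a tiny illustration of my own (not code from the article), using hypothetical values around the signed 32-bit rollover:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        /* A signed 32-bit time_t rolls over from 2147483647
         * (2038-01-19 03:14:07 UTC) to -2147483648 (back in 1901). */
        int32_t before = INT32_MAX;   /* last second before the rollover         */
        int32_t now    = INT32_MIN;   /* what the clock reports one second later */

        /* Widen to 64 bits, as happens implicitly in much real code. */
        int64_t diff = (int64_t)now - (int64_t)before;
        printf("diff = %" PRId64 " seconds\n", diff);   /* -4294967295, not 1 */
        return 0;
    }

Compiled and run, this prints diff = -4294967295 instead of the expected 1 second, and any timeout, lease or cache-expiry logic built on such a difference misbehaves exactly as described above.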
Systems which used an unsigned 32-Bit Integer for time_t push the problem back to 2106, but I don't know about many of those. The developers of the GNU C library (glibc), the default standard C library for many GNU/Linux systems, have come up with a design for year 2038 proofness for their library. Besides the time_t data type itself, a number of other data structures have fields based on time_t or the combined struct timespec and struct timeval types. Many methods beside those intended for setting and querying the current time use timestamps. 32-Bit Windows applications, or Windows applications defining _USE_32BIT_TIME_T, can be hit by the year 2038 problem too if they use the time_t data type. The __time64_t data type had been available since Visual C++ 7.1, but only Visual C++ 8 (default with Visual Studio 2005) expanded time_t to 64 bits by default. The change will only be effective after a recompilation; legacy applications will continue to be affected. If you live in a 64-Bit world and use a 64-Bit kernel with 64-Bit only applications, you might think you can just ignore the problem. In such a constellation all instances of the standard time_t data type for system calls, libraries and applications are signed 64-Bit Integers which will overflow in around 292 billion years. But many data formats, file systems and network protocols still specify 32-Bit time fields, and you might have to read/write this data or talk to legacy systems after 2038. So solving the problem on your side alone is not enough. Then the article goes on to describe how all of this will break your file systems. Not to mention your databases and other file formats. Also see Theo de Raadt's EuroBSDCon 2013 Presentation (https://www.openbsd.org/papers/eurobsdcon_2013_time_t/mgp00001.html) *** Beastie Bits Michael Lucas: Get your name in “Absolute FreeBSD 3rd Edition” (https://blather.michaelwlucas.com/archives/2895) ZFS compressed ARC stats to top (https://svnweb.freebsd.org/base?view=revision&revision=r315435) Matthew Dillon discovered HAMMER was repeating itself when writing to disk. Fixing that issue doubled write speeds (https://www.dragonflydigest.com/2017/03/14/19452.html) TedU on Meaningful Short Names (http://www.tedunangst.com/flak/post/shrt-nms-fr-clrty) vBSDcon and EuroBSDcon Call for Papers are open (https://www.freebsdfoundation.org/blog/submit-your-work-vbsdcon-and-eurobsdcon-cfps-now-open/) Feedback/Questions Craig asks about BSD server management (http://pastebin.com/NMshpZ7n) Michael asks about jails as a router between networks (http://pastebin.com/UqRwMcRk) Todd asks about connecting jails (http://pastebin.com/i1ZD6eXN) Dave writes in with an interesting link (http://pastebin.com/QzW5c9wV) > applications crash more often due to errors than corruptions. In the case of corruption, a few applications (e.g., Log-Cabin, ZooKeeper) can use checksums and redundancy to recover, leading to a correct behavior; however, when the corruption is transformed into an error, these applications crash, resulting in reduced availability. ***
This week we're talking about the latest HTC Nexus leak. We have an actual image to look at. Chris calls it the HTC "Sexus." We're also talking about what it's like to play Pokemon Go, Android 7.0 Nougat, Snapchat Memories, and much more! Top Stories First look at HTC Nexus Quick Hits Pokemon Go is a go Nougat is Android 7.0 Huawei uses Canon EOS for fake P9 sample Snapchat Memories Moto Z DROID coming July 14th Amazon Echo allows you to pick Spotify or Pandora for default Correction: AT&T actually didn’t copy T-Mobile #tbthursday: 1GHz processors Wins/Fails Joe: Pokemon Go is pretty fun / Snapchat updates are the worst Chris: OnePlus 3 RAM fix update / Google dropping local guides Ashley: Square Cash is amazing / App Picks Joe: ASAP Launcher Chris: Final Fantasy VII Ashley: PixBit Icon Pack
Faculty of Physics - Digital Dissertations of the LMU - Part 05/05
Spectroscopy of fundamental vibrational transitions offers a label-free alternative for high-chemical-contrast measurements. These transitions can be interrogated either directly by using mid-infrared light or indirectly through Raman scattering. This thesis aims to advance dual-comb spectroscopy to improve the acquisition speed, resolution and spectral coverage of vibrational spectroscopy. Dual-comb spectroscopy is a time-domain technique which combines optical frequency combs (coherent light sources with a spectrum consisting of discrete, evenly spaced lines) and Fourier transform spectroscopy. For linear spectroscopy, a mid-infrared optical parametric oscillator was developed and characterized. Its idler-pulse duration can be as short as a few cycles (~3 to 6 cycles), with a central wavelength tunable from 2180 nm to 3732 nm (2679 cm⁻¹ to 4587 cm⁻¹), allowing more than 2500 nm (2861 cm⁻¹) of total coverage while maintaining an average power of tens of milliwatts. The high peak power of this system was exploited for spectral broadening; generation of phase-coherent supercontinua was achieved in waveguides made from either silicon or chalcogenide glass, producing octave-spanning spectra from ~1500 nm to 3300 nm (3030 cm⁻¹ to 6666 cm⁻¹) for silicon and from ~1600 nm to beyond 3860 nm (2590 cm⁻¹ to 6250 cm⁻¹) for chalcogenide glass. Two optical parametric oscillators were constructed, advancing toward a dual-comb mid-infrared spectrometer. Since the optical parametric oscillators are not stabilized, an additional correction scheme was set up and characterized. Coherent Raman scattering was also investigated as a means to access optically active and inactive fundamental vibrational transitions. Several spectroscopy setups were developed to measure the Raman blue- or red-shifted light in the forward- and backward-scattered directions, as well as differential detection between blue- and red-shifted light. There is a dead time between consecutive interferograms, up to a factor of 1000 longer than the measurement time. This dead time could be reduced by an order of magnitude by using one laser with a ~1 GHz repetition rate and one with a ~100 MHz repetition rate instead of two lasers with ~100 MHz repetition rates. All implementations achieved excellent acquisition times (in the microsecond range), signal-to-noise ratios up to 1000 and a spectral coverage of about 1200 cm⁻¹. These advantages enabled measuring spectrally resolved images in a first, rudimentary microscopy setup.
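For orientation, here is the standard relation by which dual-comb spectroscopy maps the optical spectrum into the radio-frequency domain (my own addition for context, not a result from the thesis): if the two combs have repetition rates f_rep and f_rep + Δf_rep, then

    optical comb line:   ν_n = f_0 + n · f_rep
    RF beat note:        f_n = Δf_0 + n · Δf_rep
    compression factor:  m = f_rep / Δf_rep,   interferogram period T = 1 / Δf_rep

so each pair of neighbouring comb lines beats at its own radio frequency, a single fast photodetector replaces the moving mirror of a classical Fourier-transform spectrometer, and complete interferograms can be recorded on microsecond time scales.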
The X47B demonstrates autonomous refueling, 3D Robotics releases the Solo, India weaponizes small drones for crowd control, opinions on how the FAA can do a better job, and Auburn University plans to provide UAS pilot training. News X-47B Demonstrates Unmanned Aerial Refueling For The First Time The Navy's X-47B Unmanned Combat Air Vehicle has successfully demonstrated autonomous aerial refueling, plugging into the aerial refueling basket behind a KC-707 tanker. 3D Robotics takes on DJI with Solo 'smart drone' The 3D Robotics Solo may be the smartest drone ever. 3D Robotics released the Solo ready-to-fly quadcopter. They call it “The Smart Drone” and it includes an onboard 1GHz computer in addition to the Pixhawk 2 flight controller. It has full access to the GoPro camera (not included) and can stream live video. Price is US$1,000, or US$1,400 including a GoPro gimbal mount. Security from the sky: Indian city to use pepper-spray drones for crowd control The Senior Superintendent of Police in the northern Indian city of Lucknow says they'll use small drones with pepper spray to control mobs and unruly crowds. The drones they are using cost between $9,560 and $19,300, and will be fitted with a camera and pepper spray. Lucknow police have already used camera-equipped drones to monitor crowds at a recent religious festival. FAA Speeds Up Small Drone Exemptions. But Why Not Just Issue Blanket Exemption? This opinion piece argues that rather than issue exemptions one-by-one for sUAS operations, the FAA should issue a blanket exemption. Auburn University receives nation's first FAA authorization to operate Unmanned Aircraft Systems Flight School Auburn University says it has received FAA approval for a new Unmanned Aircraft Systems Flight School as part of its Aviation Center. Bill Hutto, director of the Auburn University Aviation Center, said, "We will conduct commercial flight training for operators of unmanned aircraft systems outdoors and untethered. We will have the ability to offer training courses at different locations here and around the state for Auburn students, faculty, members of other public agencies and the general public." FAA permits Amazon to test new UAV model Amazon had complained that the UAS approved by the FAA in March was already obsolete, due to the length of time it took to get the COA. Amazon has now received a letter from the FAA granting operation of “the Amazon-manufactured multirotor small UAS that has been described to the FAA in a confidential filing.” 33 UAV Experts Reveal Favorite Drone Accessory UAV Coach asked 33 experts, “If you could only choose one drone accessory, which one would you choose and why?” The site, which seeks to help people fly their quadcopters, “wanted to discover what some of the top industry professionals, drone bloggers, news sites, companies, and pilots would use to enhance their flights if they only had one option.” The group of experts includes past guests Tim Trott and Parker Gyokeres. Oh, and also our own David Vanderhoof. Video of the Week Dragonfly - Vanuatu Disaster Relief 2015 This very interesting video documents the relief provided by the 240-foot super motor yacht "Dragonfly" after Tropical Cyclone Pam pummeled the islands of Vanuatu. Much of the video was shot with a quadcopter, and it very clearly illustrates the complete destruction of the island. Mentioned DJI Developer DJI has a developer program and SDK which supports the Phantom 2 Vision and Phantom 2 Vision+. Support for the Phantom 3 and Inspire 1 is coming soon.
iOS and Android operating systems are supported now, with Windows Phone support coming soon. Star Wars: The Force Awakens Official Teaser #2 Lucasfilm and director J.J. Abrams take you back again to a galaxy far, far away as Star Wars returns to the big screen with “Star Wars: The Force Awakens.”
Gareth and James featuring the one and only Daniel Carter @mobilemandan
Direct Download | iTunes | Download the iPhone App | Download the Android App | RSS Feed
Featuring - Gareth and James and special guest Dan Carter - @mobilemandan
Email us: Podcast@tracyandmatt.co.uk Tel: 0208 123 3757
Show Notes
G-Man - Week two with Note 2
The BlackBerry secret signal code
EE Re-Branding (still no price plans.... BOOOOOO!!!)
Russian LG Nexus review appears online
iPad Mini event announced for 23rd
Tablet Table
Microsoft Surface Pricing: £399.00 - 32 GB without Black Touch Cover; £479.00 - 32 GB with Black Touch Cover; £559.00 - 64 GB with Black Touch Cover
Sony Vaio Duo 11
Archos GamePad video debut
Nexus 7 32GB version is advertised in the Argos Christmas gift guide at £199.99
Bargain Basement
BlackBerry Bold 9790
HTC Desire X £164.99 ex VAT, £197.99 inc VAT
Nexus 7 32GB version is advertised in the Argos Christmas gift guide at £199.99
HTC Radar, Phones4U, £79.99 SIM free (upgradable to 7.8)
Samsung Galaxy Note Mobile Phone - White £399.99
Blackberry 8520 Curve Vodafone Mobile Phone - Black £79.00
Samsung Galaxy SIII Mobile Phone - Blue £399.00
Potato Listeners Garden
Hey Gareth, found this deal and thought it would be good for the time now my sites.. http://www.currys.co.uk/gbuk/asus-google-nexus-7-tablet-pc-16-gb-15648510-pdt.html?intcmpid=display~RR~Computing~15648510
Chris: Greetings from Northern Cyprus. What do you think about the Rikomagic MK802+? While not exactly a phone or even a tablet, it is built on mobile technology. They advertise it as a little stick with Android 4.0, a 1 GHz processor and a gig of RAM, an HDMI plug and some USB ports. Supposedly this turns your TV into a big Android device, and if it works right I'll be DLNA streaming media to it from my other devices around the house. (You may remember my Asus TF101 with a plethora of hard drives.) The price is something like 40 to 50 pounds, which is both cool and scary. Do you know of it? Is it a good buy? Or a case of: you get what you pay for? I'm an avid listener of your show and hope you have some good input on this. And I still have Andy's X10 in my drawer :-) lindarne
What's the number with James?
Ash - 6
Flightsimgeek - 33
Steve Litchfield - 49!!!!
Bandozer - 49
App Attic
Infinity from BBThemes
Major Mayhem - PlayBook
Perfect Viewer
BeyondPod - Android podcasting app (great for listening to MTA)
------
Email us: Podcast@tracyandmatt.co.uk Tel: 0208 123 3757
Gareth Myles - @garethmyles
James Richardson - @j4mes73
Matt and Tracy Davis - @tracyandmatt
Mobile Tech Addicts Facebook
Many thanks to The Stetz for the music
Subscribe in iTunes to our weekly podcast | RSS Feed for our weekly podcast | Download the iPhone App
HotHardware - Technology, Computer and Gadget Reviews and Industry News
http://hothardware.com - The Photon 4G features a 1GHz dual-core NVIDIA Tegra 2 processor along with 1GB of RAM and Android 2.3 Gingerbread. Although the Photon 4G has been out for a few months, it still has some compelling features that other phones don't offer. For example, the Photon 4G is the first WiMAX-equipped phone from Sprint to feature global GSM roaming. By HotHardware Tags : 4G, Android, Apps, Gingerbread, Google, HotHardware, Motorola, Photon, Review, Smartphone, Sprint
HotHardware - Technology, Computer and Gadget Reviews and Industry News
http://hothardware.com - The Droid Bionic is the first smartphone from Verizon Wireless to offer 4G LTE connectivity paired with a powerful dual-core 1GHz processor and 1GB of RAM. This smartphone is currently the thinnest 4G LTE smartphone from Verizon Wireless at 10.99mm, though other phones in the pipeline will steal this title soon. Was it worth the wait? We take an in-depth, hands-on look at the phone to find out. By HotHardware Tags : 4G, Android, Bionic, Cell, Cellphone, Droid, Google, HotHardware, LTE, Motorola, Phone, Verizon, Wireless, droid, fourth, generation, review, smartphone, wireless