Prasanna Sundararajan, founder and CEO of rENIAC, Inc., joins Chip Chat to talk about using Intel® FPGAs to accelerate demanding data-centric applications. rENIAC's solutions leverage Intel FPGAs to help customers surmount the unprecedented challenges presented by new data opportunities. Its Data Engine solution provides an intermediary layer between Apache Cassandra* clients and database nodes, bringing storage closer to the network through intelligent caching and accelerating transactional and AI applications. In this interview, Sundararajan discusses his and rENIAC's work, the bottlenecks that rENIAC and its customers have encountered when using NoSQL databases like Apache Cassandra, and how the rENIAC Data Engine, Intel FPGAs, and Intel® Optane™ SSDs are helping customers eliminate these bottlenecks. To learn more about rENIAC, please visit https://www.reniac.com/. To learn more about Intel FPGAs, please visit http://intel.com/fpga and follow http://twitter.com/intelfpga on Twitter. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Intel, the Intel logo, and Optane are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation
Chuck Tato, Director of the Wireline and NFV Business Division in the Programmable Solutions Group at Intel, joins Chip Chat to help welcome the Intel® FPGA Programmable Acceleration Card N3000, Intel's first FPGA for NFV and 5G workloads. Designed with networking in mind, the Intel FPGA Programmable Acceleration Card N3000 is a highly efficient programmable solution with the right capabilities to support the high-throughput, line-rate applications happening in the networking arena today. In this interview, Tato speaks to the value of virtualizing network functions and the use of FPGAs to enable the low latency and high throughput that can sometimes be elusive in virtualized solutions. Tato additionally discusses the architecture and performance characteristics of these FPGAs and how the Data Plane Development Kit (DPDK) facilitates the integration of Intel FPGAs into existing Intel® Xeon® Scalable processor-based solutions. For more information on the Intel FPGA Programmable Acceleration Card N3000, please visit https://intel.com/FPGA and https://twitter.com/IntelFPGA. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation
Prasanth Pulavarthi, Principal Program Manager for AI Infrastructure at Microsoft, and Padma Apparao, Principal Engineer and Lead Technical Architect for AI at Intel, discuss a collaboration that enables developers to switch from one deep learning operating environment to another regardless of software stack or hardware configuration. ONNX is an open format that unties developers from specific machine learning frameworks so they can easily move between software stacks. It also reduces ramp-up time by sparing them from learning new tools. Many hardware and software companies have joined the ONNX community over the last year and added ONNX support in their products. Microsoft has enabled ONNX in Windows and Azure and has released ONNX Runtime, which provides a full implementation of the ONNX-ML spec. With the nGraph API, developed by Intel, developers can optimize their deep learning software without having to learn the specific intricacies of the underlying hardware. It enables portability between Intel® Xeon® Scalable processors and Intel® FPGAs as well as Intel® Nervana™ Neural Network Processors (Intel® Nervana™ NNPs). Intel is integrating the nGraph API into ONNX Runtime to provide developers with accelerated performance on a variety of hardware. For information about ONNX as well as tutorials and ways to get involved in the ONNX community, visit https://onnx.ai/. To learn more about ONNX Runtime, visit https://azure.microsoft.com/en-us/blog/onnx-runtime-for-inferencing-machine-learning-models-now-in-preview/. To learn more about the Intel nGraph API, visit https://ai.intel.com/ngraph-a-new-open-source-compiler-for-deep-learning-systems/. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at https://ai.intel.com/.
Intel, the Intel logo, Intel® Xeon® Scalable processors, Intel® FPGAs, and Intel® Nervana™ Neural Network Processors (Intel® Nervana™ NNPs) are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation
In this interview from Microsoft Ignite, Dr. Ted Way, Senior Program Manager for Microsoft, stops by to talk about Microsoft Azure* Machine Learning, an end-to-end, enterprise-grade data science platform. Microsoft takes a holistic approach to machine learning and artificial intelligence, developing and deploying complex algorithms as well as accelerating them on hardware. Azure Machine Learning is powered by Project Brainwave, using Intel® FPGAs to deliver real-time AI in the form of image recognition and classification, language understanding, speech-to-text, and text-to-speech. Intel FPGAs shine when processing unstructured data and serving a response with very low latency. At Ignite, Microsoft announced four new algorithms – ResNet-152, DenseNet-121, VGG-16, and SSD-VGG – which will allow users even more flexibility when using the Azure Machine Learning platform. To get started with Azure Machine Learning and Intel FPGAs, visit http://aka.ms/rtai. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation
Jake Smith, Director for Data Center Solutions and Technologies at Intel and the host of Intel's Conversations in the Cloud podcast, joins Intel Chip Chat to cover all the big ideas and news from Microsoft Ignite 2018. Smith and host Allyson Klein discuss the importance of Windows Server 2019's support for next-generation Intel® Xeon® Scalable processors and Intel® Optane™ DC Persistent Memory and SSDs, and the world-record-setting performance achieved using these technologies and Windows Server* Storage Spaces Direct*. Smith additionally speaks to key Windows Server 2019 features that are driving IT interest in the platform and Microsoft/Intel collaborations around Intel® Select Solutions, Azure Confidential Computing with Intel® Software Guard Extensions (Intel® SGX), Microsoft Project Brainwave* with Intel® FPGAs, and cloud HPC/AI compute in Azure based on Intel® Xeon® Platinum processors. For more information on Smith's work, please visit intel.com/beready, follow Smith on Twitter at @jakesmithintel, and look for Intel Conversations in the Cloud at soundcloud.com/intelcitc. 13.798 million IOPS based on https://twitter.com/CosmosDarwin/status/1044331604871663624. 2x generation-on-generation performance improvement with Storage Spaces Direct based on https://twitter.com/CosmosDarwin/status/1044331604871663624 and https://blogs.technet.microsoft.com/filecab/2016/07/26/storage-iops-update-with-storage-spaces-direct/. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
For more information go to www.intel.com/benchmarks. Performance results are based on testing as of 9/25/2018 and may not reflect all publicly available security updates. See configuration disclosure for details. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Intel, the Intel logo, Xeon, and Optane are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation.
Tony Kau, Software, IP, and Artificial Intelligence Marketing Director at Intel, discusses the new Intel® FPGA Deep Learning Acceleration Suite for the Intel® OpenVINO™ toolkit. The software tool suite enables AI inferencing on Intel FPGAs, delivering reduced latency along with improved performance, power efficiency, and cost efficiency for inference workloads. The suite lets software developers work with familiar frameworks and networks for machine vision and other AI-related workloads. Visit intel.com/fpga for more information.
PC Perspective Podcast #421 - 10/13/16

Join us this week as we discuss our review of the iPhone 7, the Drobo 5C, Intel FPGAs and more! You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE. The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only); video version on iTunes
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only); video version RSS feed
- MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Allyn Malventano, Josh Walrath, Jeremy Hellstrom, and Sebastian Peak
Program length: 1:22:35

Join our spam list to get notified when we go live!
Patreon
Fragging Frogs VLAN 14
Hiring! We are hunting for a video producer and editor

Week in Review:
- 0:09:09 Drobo 5C 5-bay USB Type-C DAS Review - More Bays!
- 0:21:35 Apple iPhone 7 and 7 Plus Review: More and Less

Today's episode is brought to you by Casper!

News items of interest:
- 0:49:40 MSI Will Support Kaby Lake Processors On All 100-Series Motherboards
- 0:51:58 Intel Launches Stratix 10 FPGA With ARM CPU and HBM2
- 0:56:10 Western Digital Gets Back in the SSD Game With Blue and Green SSDs!
- 1:01:50 Introducing the XG-U2008 switch - 10G networking for only $249
- 1:05:50 NVIDIA Releases GeForce 373.06 Drivers

Hardware/Software Picks of the Week:
- Ryan: UE Boom 2
- Jeremy: Hee hee, you really want Win7?
- Josh: Wipe that drive!
- Allyn: Precision power supply on the cheap (Rigol)
- Sebastian: If you have to use a dongle, make it high-res!

http://pcper.com/podcast
http://twitter.com/ryanshrout and http://twitter.com/pcper

Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!