POPULARITY
It was a puzzle that John Wilson simply couldn't resist. Intel had long sold processors to the federal government on a commercial basis, but the rising importance of High Performance Computing (HPC) demanded a new approach. Undeterred by the maze of federal acquisition regulations, Wilson volunteered to stand up a dedicated government unit, a move that he tells us helped unlock cutting-edge HPC research. The work took him to the edges of “bleeding-edge technology,” even if it also meant navigating the detailed rigors of government compliance.

That knack for transformation would serve Wilson well when he later encountered another pivotal moment: the day the moving truck arrived at his new home in Oregon, just as Intel announced the dissolution of the very business group he was joining. Rather than panic, he thrived, moving on to master complex FP&A and business development roles. It was the same mindset that guided him in standing up an entirely separate legal entity to better serve government contracts, broadening his view of finance from purely operational tasks to strategic decision-making.

Today, as CFO of Sabey Data Centers, Wilson continues to fuse vision with pragmatism. He has drawn from his HPC experience, where technology evolves at breakneck speeds, to guide Sabey's approach to data center design and expansion. Collaborating with teams to manage billion-dollar investments, he remains resolute on two fronts: balancing the need for innovation with disciplined capital allocation, and preserving a culture of “good stewardship” that ensures long-term stability for tenants ranging from tech giants to smaller enterprises.
This episode looks at the crucial role of technology infrastructure in the age of artificial intelligence. Interviews: with Filippo Ligresti of Dell, to understand what businesses and consumers are asking for today, including with AI; with Claudio Bassoli of HPE, on the growth of data centers and High Performance Computing (HPC) and the need for cybersecurity; and with Anna Barbara of Poli.design, analyzing AI's influence on design, the importance of combining emotional and artificial intelligence, and the need for continuous training. Want to know more? https://businesscommunity.it/gigi Contact me: https://forms.gle/jtcv577NAd6gLWbi8 For all episodes: https://lts.businesscommunity.it
Yuan is a principal software engineer at Red Hat, working on OpenShift AI. Previously, he led AI infrastructure and platform teams at various companies. He holds leadership positions in open source projects, including Argo, Kubeflow, and Kubernetes WG Serving. Yuan has authored three technical books and is a regular conference speaker, technical advisor, and leader at various organizations. Eduardo is an environmental engineer who got derailed into software engineering. Eduardo has been working on making containerized environments the de facto solution for High Performance Computing (HPC) for over 8 years now. He began as a core contributor to the niche Singularity Containers project, today known as Apptainer under the Linux Foundation. In 2019 Eduardo moved up the ladder to work on making Kubernetes better for performance-oriented applications. Nowadays Eduardo works at NVIDIA on the Core Cloud Native team, working on enabling specialized accelerators for Kubernetes workloads. Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod
News of the week:
Docker official Terraform provider
Tetrate and Bloomberg Envoy AI Gateway
KubeCon+CloudNativeCon North America 2024 laptop drive
Remaining KCDs for 2024
Links from the interview:
Yuan Tang
Eduardo Arango
WG Serving
KServe
KServe: Serving models with OCI images
LLM Gateway
Dynamic Resource Allocation
On today's episode of Heavy Networking, Rob Sherwood joins us to discuss the impact that High Performance Computing (HPC) and artificial intelligence computing are having on data center network design. It's not just a story about leaf/spine architecture. That's the boring part. There's also power and cooling issues, massive bandwidth requirements, and changes in how we...
Join us in this episode as we dive into the latest developments in the world of Bitcoin mining. From CleanSpark's acquisition of Grid to the rise of High-Performance Computing (HPC) in the industry, Sam Callahan, Anthony Power, and more discuss the shifting landscape of Bitcoin mining companies. Discover how the demand for reliable power is reshaping the energy sector and how miners are adapting to the changing market dynamics.
00:00:15 - Exciting News: CleanSpark Acquires Grid
00:03:32 - Coinbase Lawsuit Against SEC and FDIC
00:08:12 - FOIA Requests and Government Transparency
00:15:08 - Power of Affirmative Litigation in Bitcoin Industry
00:19:19 - Marathon Mining Shift to High-Performance Computing
00:24:23 - Shift Towards High-Performance Computing in Bitcoin Mining Industry
00:32:08 - Core Scientific's Game-Changing Deal in High-Performance Computing
00:36:37 - Increasing Demand for Compute Power and Energy Strain
00:43:22 - Bitcoin Miners' Pivot to High-Performance Computing
00:47:26 - Wall Street Interest in High-Performance Computing Over Bitcoin Mining
Use code "CAFE" for a discount at https://www.pacificbitcoin.com
"Welcome to Bitcoin" is a FREE 1-hour course hosted by Natalie Brunell, perfect for helping you to orange-pill family members over the holidays, at https://Swan.com/welcome
Swan Team Members:
Sam Callahan: https://twitter.com/samcallah
Tomer Strolight: https://twitter.com/TomerStrolight
John Haar: https://twitter.com/john_at_swan
Dante Cook: https://twitter.com/Dante_Cook1
Produced by: https://twitter.com/Producer_Jacob
Swan Bitcoin is the best way to accumulate Bitcoin with automatic recurring buys and instant buys from $10 to $10 million. Get started in just 5 minutes. Your first $10 purchase is on us: https://swanbitcoin.com/yt
Download the all new Swan app!
iOS: https://apps.apple.com/us/app/swan-bitcoin/id1576287352
Android: https://play.google.com/store/apps/details?id=com.swanbitcoin.android&pli=1
Are you a high net worth individual, or do you represent a corporation that might be interested in learning more about Bitcoin? Swan Private guides corporations and high net worth individuals toward building generational wealth with Bitcoin. Find out more at https://swan.com/private
Get paid to recruit new Bitcoiners: https://swan.com/enlist
Connect with Swan on social media:
Twitter: https://twitter.com/Swan
In this episode, Neil speaks to Professor Jack Dongarra, a renowned figure in the supercomputing and high-performance computing (HPC) world. He is a Professor at the University of Tennessee, a Distinguished Researcher at Oak Ridge National Laboratory (ORNL), and a Turing Fellow at the University of Manchester. He is the inventor of the LINPACK library, which is still used today to benchmark the TOP500 list of the most powerful supercomputers, and was one of the key people involved in the creation of the Message Passing Interface (MPI). They discuss what HPC is, the challenges and opportunities in the field, and the future of HPC. They also touch on the role of machine learning and AI in HPC, the competitiveness of the United States in the field, and potential future technologies in HPC. Professor Dongarra shares his insights and advice based on his extensive experience in the field.
As part of their discussion they cover two papers from Prof Dongarra:
1) High-Performance Computing: Challenges and Opportunities: https://arxiv.org/abs/2203.02544
2) Can the United States Maintain Its Leadership in High-Performance Computing? - A report from the ASCAC Subcommittee on American Competitiveness and Innovation to the ASCR Office: https://www.osti.gov/biblio/1989107/
Chapters:
00:00 Introduction
04:18 Defining HPC and its Impact
08:11 Challenges and Opportunities in HPC
28:20 The Competitiveness of the United States in HPC
44:31 The Future of HPC: Technologies and Innovations
49:30 Insights and Advice from Professor Jack Dongarra
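A small aside on the LINPACK benchmark mentioned here: it ranks machines by how quickly they solve a dense n-by-n linear system, and the standard operation count for that solve is roughly (2/3)n^3 + 2n^2 floating-point operations. Below is a quick Python sketch of the arithmetic; the matrix size and runtime are made-up illustration values, not figures from the episode.

```python
def hpl_flops(n: int) -> float:
    """Approximate floating-point operations for an n x n LINPACK solve."""
    return (2.0 / 3.0) * n**3 + 2.0 * n**2

# Hypothetical run: a 100,000 x 100,000 system solved in 500 seconds.
n, seconds = 100_000, 500.0
rate = hpl_flops(n) / seconds          # sustained flop/s for this run
print(f"~{rate / 1e12:.1f} Tflop/s")   # TOP500 reports this kind of figure as Rmax
```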
Healthcare has become a data-driven industry as the volume of data continues to grow and organizations are able to use that data to provide more personalized care. However, data alone won't advance healthcare. New systems and processing power that can turn data into insights are the key to accelerating discoveries and improving care. At the HIMSS 2024 conference, we brought together an incredible panel hosted by AMD and Dell Technologies to discuss the ways high performance computing (HPC) is key to helping healthcare organizations solve the big challenges healthcare faces. Plus, we looked at how security has to be baked into these solutions and how the right platform can simplify your infrastructure management.
Learn more about AMD: https://www.amd.com/en/campaigns/amd-and-dell
Learn more about Dell Technologies: https://www.dell.com/en-us/lp/dt/industry-healthcare-it
Health IT Community: https://www.healthcareittoday.com/
Bias in LLMs, why the US is still great for start-ups, supercomputers, brain signals to combat cyber attacks, digital twins of ourselves, and the philosophy behind all of it. My guests this week include a research institution, a supercomputer provider, two start-ups and a philosopher. This lateral discussion echoes the need for interconnected conversation to develop AI and its end-use. Francesco Ferrero, Director of the IT for Innovative Services department at the Luxembourg Institute of Science and Technology (LIST), discusses the newly launched LIST AI Sandbox. This looks at 16 LLMs (large language models) and ranks them on their social or ethical biases, including ageism, LGBTIQ+phobia, political bias, racism, religious bias, sexism and xenophobia. You can use this open source sandbox here: https://ai-sandbox.list.lu/ Arnaud Lambert is the CEO of LuxProvide, which works on digital intelligence using Luxembourg's supercomputer MeluXina. LuxProvide is part of a broader European supercomputing initiative; LuxProvide, the University of Luxembourg and LuxInnovation jointly manage the Luxembourg National Competence Centre in High-Performance Computing (HPC). Their customer base is broad, as they encourage the use of HPC in data analytics and AI across industry, academia and public administrations. Emil Jimenez is Founder and CEO of MindBank AI. The idea was sparked by his daughter having a conversation with Siri. Emil decided to build a digital twin of himself so that he can live forever. Since then, it has grown to become a generative AI personal digital twin, using learning algorithms to duplicate your mind, optimise your mental health and personal development, and ultimately achieve immortality. Emil promotes the use of 'Augmented' rather than 'Artificial' Intelligence, to enhance our life from birth to death, and beyond. Nathaniel Rose is a neuro-technology researcher and the Co-Founder of Lymbic AI. This uses biometric brain signals to build authentication, to combat cyber attacks. We are likely to see more of these brain-computer interfaces as authentication exploits keep pace with technology. Nathaniel talks about the state of neurotech and its potential in combating synthetic fraud and deep fakes. Both Emil and Nathaniel explain honestly why the US market is still great for start-ups. Rick Serrano is, amongst other things, a philosopher. He co-authored "Artificial Intelligence: the need for a human-centric approach". Rick talks about the framework we need to keep ethics at the centre of the AI momentum: consciousness, transparency, traceability, responsibility, training, IP and regulation. Subscribe to the Podcast and get in touch! Please do subscribe to the podcast on Apple and / or Spotify. It would be great if you could rate and review too - it helps others find us. Tune in on Today Radio Saturdays at 11am, Sundays at noon and Tuesdays at 10am.
In this episode, we welcome back our first repeat guest, diving deep into the evolving landscape of Bitcoin mining, the impact of new regulations, and the exciting developments at Giga Energy. This episode dives into:
The Growth and Challenges of Bitcoin Mining: Insights into the rapid expansion of hash rate, the increasing ease of operating ASICs due to advancements in container solutions, and the impact of management software improvements.
International Expansion and Regulatory Changes: Discussion on Giga Energy's international projects, particularly in Argentina and the Middle East, and how recent elections and regulatory shifts are influencing the global Bitcoin mining landscape.
The Future of Energy and Bitcoin Mining: Exploration of how Bitcoin mining can aid in flare gas mitigation, support oil and gas operators, and contribute to making energy grids more resilient.
M&A Activity and Market Consolidation: Analysis of recent mergers and acquisitions in the Bitcoin mining industry, including the strategic partnership between Hut8 and US Bitcoin Corp, and the implications for the sector's future.
The Role of Bitcoin ETFs and Investment Trends: Examination of the recent approval of Bitcoin ETFs, their potential impact on the mining sector, and the broader implications for Bitcoin as an asset class.
Emerging Trends in High-Performance Computing (HPC) and AI: Discussion on the convergence of Bitcoin mining infrastructure with high-performance computing and AI, and the potential for diversification and innovation within the industry.
Giga Energy's Strategic Focus and Hiring Initiatives: An overview of Giga Energy's focus on flare mitigation, international expansion, and product development, along with a call for talented individuals to join their growing team.
Empower Conference and Industry Networking: Preview of the upcoming Empower conference, highlighting the importance of technical presentations, industry networking, and the role of events in fostering community and collaboration within the Bitcoin mining ecosystem.
This episode offers a comprehensive look at the current state and future prospects of Bitcoin mining, featuring insights from Giga Energy's latest initiatives and the broader industry trends shaping the sector.
In this episode, "Laurence Liew - The AI Readiness Index - Singapore", Lauren Hawker Zafer is joined by Laurence Liew. This conversation is a gateway into the vitals of AI literacy and how it has become increasingly recognized as a vital skillset worldwide, especially in Singapore. Find out what the mind driving the adoption of AI in the Singapore ecosystem through the 100 Experiments, AI Apprenticeship Programmes and the Generational AI Talent Development initiative has to say! Subscribe now to Redefining AI to catch up with each and every episode coming your way! Who is Laurence Liew? Laurence Liew is the Director for AI Innovation at AI Singapore. He is driving the adoption of AI by the Singapore ecosystem through the 100 Experiments, AI Apprenticeship Programmes and the Generational AI Talent Development initiative. A visionary and serial technopreneur, Laurence:
was appointed the first Red Hat partner and authorised training centre in the Asia Pacific in 1999
built A*STAR IHPC's first High-Performance Computing (HPC) cluster in 2001 (the initial HPC clusters in NUS, NTU, and SMU were mostly built by Laurence and his team)
built and operated Singapore's first Grid (pre-cloud) platform for IDA's National Grid Pilot Platform in 2003
architected the Cloud business and technology for then Singapore Computer Systems' Alatum Cloud (now owned by Singtel/NCS) in 2007
led Platform Computing Inc's business in South Asia and R&D team in Singapore; IBM acquired Platform Computing in 2009
led Revolution Analytics Inc's business in Asia and R&D team in Singapore; Microsoft acquired Revolution Analytics in 2015
joined AI Singapore in June 2017 as the first employee
The Singapore government has appointed Laurence to represent Singapore at the Global Partnership on AI (GPAI), an OECD initiative. He is the current Co-Chair of the Innovations and Commercialisation working group and Co-Chair of the "Broad Adoption of AI by SME" committee. #ai #data #redefiningai #techpodcast #generativeai
Season Three - Spotlight Two. Our second spotlight of this season is a snippet from our upcoming episode: Laurence Liew - The AI Readiness Index - Singapore. This conversation is a gateway into the vitals of AI literacy and how it has become increasingly recognized as a vital skillset worldwide, especially in Singapore. Find out what the mind driving the adoption of AI in the Singapore ecosystem through the 100 Experiments, AI Apprenticeship Programmes and the Generational AI Talent Development initiative has to say! Subscribe now to Redefining AI to catch up with each and every episode coming your way! Who is Laurence Liew? Laurence Liew is the Director for AI Innovation at AI Singapore. He is driving the adoption of AI by the Singapore ecosystem through the 100 Experiments, AI Apprenticeship Programmes and the Generational AI Talent Development initiative. A visionary and serial technopreneur, Laurence:
was appointed the first Red Hat partner and authorised training centre in the Asia Pacific in 1999
built A*STAR IHPC's first High-Performance Computing (HPC) cluster in 2001 (the initial HPC clusters in NUS, NTU, and SMU were mostly built by Laurence and his team)
built and operated Singapore's first Grid (pre-cloud) platform for IDA's National Grid Pilot Platform in 2003
architected the Cloud business and technology for then Singapore Computer Systems' Alatum Cloud (now owned by Singtel/NCS) in 2007
led Platform Computing Inc's business in South Asia and R&D team in Singapore; IBM acquired Platform Computing in 2009
led Revolution Analytics Inc's business in Asia and R&D team in Singapore; Microsoft acquired Revolution Analytics in 2015
joined AI Singapore in June 2017 as the first employee
The Singapore government has appointed Laurence to represent Singapore at the Global Partnership on AI (GPAI), an OECD initiative. He is the current Co-Chair of the Innovations and Commercialisation working group and Co-Chair of the "Broad Adoption of AI by SME" committee. Listen to the full episode, as soon as it comes out, by subscribing to Redefining AI, and please do share your excitement about the episode with your own network! #ai #data #redefiningai #techpodcast #generativeai
In this fascinating episode, Jan Stomphorst and Ronald Kers sit down with Joris Cramwinckel, Head of Cloud Native Transformation at Ortec Finance. Together they dive deep into the world of High Performance Computing (HPC) workloads, with a focus on sustainability. Discover where in Europe the energy mix in data centers is greenest, and how this affects the sustainability of our technological future. We also take you into the intriguing Project Kepler. This project uses efficient eBPF probes to monitor various perf counters, kernel scheduling parameters and system configurations. The result? Exposing energy consumption per container and per Pod through the Prometheus metrics provider API. This data can be used for sustainability reporting, or by Red Hat OpenShift controllers to optimize workload schedules and configurations and so meet energy-saving targets. A collaboration between ET, IBM Research and possibly Intel. And as if that weren't enough, we also cover KEDA, the Kubernetes-based Event Driven Autoscaler. Discover how KEDA drives the scaling of any container in Kubernetes based on the number of events to be processed. Don't miss this in-depth discussion of cutting-edge technologies! Listen now and let us know your insights in the comments.
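For context on the Kepler discussion above: Kepler publishes its per-container energy counters as Prometheus metrics, so they can be pulled with an ordinary PromQL query. Below is a minimal Python sketch of that idea; the endpoint URL is a placeholder and the metric name kepler_container_joules_total is an assumption based on Kepler's naming scheme, so check both against your own deployment.

```python
import requests

# Assumed endpoint; adjust to your cluster's Prometheus service.
PROM_URL = "http://prometheus.example.internal:9090/api/v1/query"

# Per-pod energy use over the last hour, in joules.
# Metric name is an assumption based on Kepler's naming; verify locally.
query = "sum by (pod_name) (increase(kepler_container_joules_total[1h]))"

resp = requests.get(PROM_URL, params={"query": query}, timeout=10)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    pod = result["metric"].get("pod_name", "<unknown>")
    joules = float(result["value"][1])
    print(f"{pod}: {joules / 3600:.3f} Wh over the last hour")
```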
On this episode of The Six Five – On The Road, hosts Daniel Newman and Patrick Moorhead welcome Lenovo's Scott Tease, Vice President, GM High Performance Computing and AI, and Imperial College London's Andrew Richards, Director of Research Computing Services for a conversation on Lenovo's partner ecosystem, HPC and AI research during SC23 in Denver, Colorado. Their discussion covers: Lenovo's vast partner ecosystem in the public sector and the goal for the industry that drives these collaborations The current collaboration between Lenovo and Imperial College London An overview of Lenovo Neptune™ Water Cooling Technology and how it can help organizations reach their own sustainability goals How this technology will support High Performance Computing (HPC) and AI researchers in the future, given the tremendous potential impact and focus on empowering future generations of researchers and students
On this episode of The Six Five – On The Road, hosts Daniel Newman and Patrick Moorhead welcome Lenovo's Giovanni Di Filippo, President of EMEA ISG and Dieter Kranzlmueller, Chair of the Board of Directors of the Leibniz Supercomputing Centre (LRZ) for a conversation on Lenovo's partnership with the LRZ and how they are driving innovation forward in the High Performance Computing (HPC) space, together. Their discussion covers: The Leibniz Supercomputing Centre's (LRZ) role and objectives in advancing supercomputers and how the partnership with Lenovo helps drive strategy The advancements made with Lenovo's partnership with the LRZ, driving innovation forward in the High Performance Computing (HPC) space together How Lenovo's partnership with LRZ has evolved over this past year, and the milestones marking the progression of their partnership
Guy Currier, Futurum Group contributor, joins the podcast from Supercomputing 2023 (SC23). Keith and Guy discuss the news out of SC23 and how AI infrastructure intersects with and relates to HPC infrastructure.
As enterprises try to deploy infrastructure to support AI applications, they generally discover that the demands of these applications can disrupt their architecture plans. This episode of On-Premise IT, sponsored by Pure Storage, discusses the disruptive impact of AI on the enterprise with Justin Emerson, Allyson Klein, Keith Townsend, and Stephen Foskett. Heavy-duty AI processing requires specialized hardware that more closely resembles High-Performance Computing (HPC) than conventional enterprise IT architecture. But as more enterprise applications leverage accelerators like GPUs and DPUs, and become more disaggregated, AI starts to make more sense. Power is one key consideration, since companies are more aware of sustainability and are impacted by limited power availability in the datacenter, and efficient external storage can be a real benefit here. This is still general-purpose infrastructure, but it increasingly incorporates accelerators to improve power efficiency. One issue for general-purpose infrastructure is the concern over security, and enterprise AI applications will certainly benefit from broad access to a variety of enterprise data. Enterprise use of AI will require a new data infrastructure that supports the demands of AI applications but also enables data sharing and integration with AI applications. © Gestalt IT, LLC for Gestalt IT: AI Infrastructure Disrupts Enterprise IT with Justin Emerson from Pure Storage
Justin Hotard leads the High Performance Computing (HPC) & AI business group at Hewlett Packard Enterprise (HPE). Tune in to In AI we Trust? this week as he discusses supercomputing, HPE's commitment to open source models for global standardization and using responsible data to ensure responsible AI. –Resources mentioned in this episode: What are supercomputers and why are they important? An expert explains (Justin Hotard & the World Economic Forum) Fueling AI for good with supercomputing (Justin Hotard & HPE) Hewlett Packard Enterprise ushers in next era in AI innovation with Swarm Learning solution built for the edge and distributed sites (HPE)
In this episode we talk with Universidad Andrés Bello in Chile about how they have relied on AWS High Performance Computing (HPC) services to do science at the university's Center for Bioinformatics and Integrative Biology.
The Argonne Leadership Computing Facility (ALCF) and Intel are working together on Aurora, an exascale supercomputer. As the ALCF and Intel prepare for Aurora, learn about the convergence of High Performance Computing (HPC), Artificial Intelligence (AI) and big data analytics. Learn how the ALCF is preparing its software and its users for Aurora, including some challenges that can be overcome utilizing some of the latest Intel technologies. We also talk about the science Aurora will enable, and why you never know what questions will be answered, or arise, when using a supercomputer. See what next-generation scientific problems are being slated to run on Aurora: intel.com/content/www/us/en/customer-spotlight/stories/argonne-aurora-customer-story.html
See also:
ALCF - Aurora: alcf.anl.gov/aurora
DOE's INCITE Program: doeleadershipcomputing.org
Intel oneAPI: oneapi.intel.com
Intel oneAPI Toolkits: intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html
Intel oneAPI HPC Toolkit: intel.com/content/www/us/en/developer/tools/oneapi/hpc-toolkit.html
Exascale Computing Project website: exascaleproject.org/
Aurora Early Science Program: alcf.anl.gov/science/early-science-program
Argonne National Lab's Aurora Exascale System: intel.com/content/www/us/en/customer-spotlight/stories/argonne-aurora-customer-story.html
Intel and Argonne Developers Carve Path Toward Exascale: intel.com/content/www/us/en/newsroom/news/developers-carve-path-toward-exascale.html
Introducing the Aurora Supercomputer - Powered by Intel: intel.com/content/www/us/en/high-performance-computing/supercomputing/aurora-video.html
Episode host Caroline Nickerson, a PhD student in the University of Florida's Department of Agricultural Education and Communication, kicks off the Streaming Science "AI in Action" series with three experts who provide an introduction to AI and exciting aspects of the field. Dive into what makes AI at UF unique, explore the partnership that UF has with NVIDIA (one of the world's leading technology companies), and contemplate why AI should matter to you, the listener. This episode's topics include AI at UF, the role of AI in research and industry, and how scientists are moving forward to create new and unique solutions to address current global issues. The featured guests include:
Dr. David Reed, Associate Provost for Strategic Initiatives at UF and the Associate Director for Research and Collections at the Florida Museum of Natural History
Dr. Jonathan Bentz, Senior Manager, Solutions Architect - Higher Education/Research, High Performance Computing (HPC) and Deep Learning at NVIDIA, and NVIDIA AI Technology Center (NVAITC) lead for US/Canada
Dr. Damon Woodard, Associate Professor, Department of Electrical and Computer Engineering; Director, Florida Institute for National Security; and Director, Applied Artificial Intelligence Group
Dr. Neil Ashton leads the Computational Engineering Product & Technology Strategy at Amazon Web Services, with a particular focus on Computational Fluid Dynamics and High Performance Computing (HPC). He is also a Visiting Fellow at the Department of Engineering Science at the University of Oxford and a Fellow of the Institution of Mechanical Engineers. Prior to these positions he worked in Formula 1 with the Lotus F1 team (now the Alpine F1 team). He is passionate about pushing the state of the art of CFD forward and ultimately enabling an entirely virtual engineering design process.
DDK Pod Episode 24: High Performance Computing. Genuine high performance is cool. Words like 'high performance' often get thrown around by marketing departments, but within IT there is a pretty tight definition of what High Performance Computing (HPC) actually is and why it's different to other types of computing. We discuss both what it is and some of the super interesting applications it can be used for. Also covered is our pick of the IT industry news and a few recommendations for our listeners.
Timecodes:
01:06 – The News
09:22 – High Performance Computing
34:03 – DDK Recommends Stuff
41:16 – Get in touch
Get in touch with the show:
Email us: ddkpod@ddklimited.com
Tweet us: @ddklimited
Our website: www.ddklimited.com
Find us on LinkedIn: DDK Limited
Audio edited by Charlie McConville, www.interflowcreations.co.uk
This episode covers High Performance Computing (HPC), its importance in the context of the cloud, the workloads it supports, client problems, how cloud can solve them, and how IBM can help. IBM Cloud can accelerate results by scaling out vast numbers of tasks and reduce costs by optimizing the technology used. HPC on IBM Cloud enables you to apply large numbers of compute assets to problems that are either too large for standard computers or take too long. IBM Cloud also gives you access to state-of-the-art computing, avoiding the wait for on-premise hardware refreshes.
Dr. Markus Oppel, senior scientist at the University of Vienna, talks with Sebastian Kaiser about the applications of High Performance Computing (HPC) in theoretical chemistry. Markus Oppel is responsible for local HPC clusters that researchers use to study the effects of light on molecules. In this episode he discusses the different kinds of data workloads and computer simulations that run on the clusters. Contact: Markus Oppel - markus.oppel@univie.ac.at Resource: HPE HPC
Most product development and manufacturing companies are not using High-Performance Computing (HPC) because it is too complicated. That's why they suffer from engineering slow-mo. MSBAI is developing Guru, an AI engineering assistant that will allow anyone to access the best enterprise engineering software. Let's talk with Allan, MSBAI founder and CEO.
With the explosion of data, businesses of all sizes are required to handle massive and complex data workloads. How? Supercomputers. Helping to make High Performance Compute accessible for the masses is our guest - Pete Ungaro, HPE’s General Manager, High Performance Computing (HPC) and Mission Critical Solutions (MCS).
A conversation about High Performance Computing, the discipline of harnessing large numbers of computers to tackle truly enormous problems. Tune in to hear what kinds of hard problems computers can help solve, and how, given a really big research problem, you would go about attacking it with large-scale computing.
She joined Ensono in May 2020 from Hewlett Packard Enterprise (HPE), where she served as the Vice President and General Manager for the North America Compute, Software Defined, High-Performance Computing (HPC) and Artificial Intelligence (AI) team. Prior to her North America roles at Ensono and HPE, she lived with her family in Asia-Pacific for 8.5 years, where she held various technology leadership roles at Dell Technologies and HPE. She has held multiple roles at HPE in Asia-Pacific, from strategy and operations to leading the APJeC Datacenter and Hybrid Cloud business based in Singapore. Prior to joining HPE, she was a Director at Dell Technologies (Dell) in APJ, where she led the Enterprise Infrastructure and Options businesses for the region based in Seoul, South Korea, and Singapore. She started her technology career at Dell, based in Austin, Texas, and North Carolina, where she held roles ranging from product management to global business and product development. She graduated with a Bachelor of Arts in English Literature from the University of Pennsylvania in Philadelphia, and currently lives in the Chicago area with her family. Join Randy Seidl and David Nour on this episode of the Sales Community Tech Sales Insights podcast with Founding Advisory Board Member of the Sales Community, SVP and Managing Director of North America at Ensono, Paola Doebel. Don't forget, we turn the show notes from our podcasts into unique #TechSalesInsights position papers, so come join us in the Sales Community to learn more. Send in a voice message: https://anchor.fm/salescommunity/message
Get ready to sit down with Vinita Ananth, a successful Group PM Manager within the Azure group at Microsoft. She talks about her experience transitioning into the tech space and shares some of the initiatives taking place at Microsoft to recruit and retain diverse talent. Vinita recognizes that the pool of strong diverse candidates is smaller than traditional pools; however, the talent does exist if you seek it out. She points out how Microsoft has started making job descriptions more inclusive by removing genderized language and promoting job postings at forums or events that attract diverse talent. Vinita is supportive of the push to remove CS degree requirements from software roles, since industry needs are different from what is taught at universities. She earned her degree in Electrical Engineering but was able to learn programming concepts using her technical background and problem-solving skills. She volunteers to teach in a young entrepreneurial program and is amazed at how quickly high school students pick up new concepts. Similarly, she believes apprenticeship programs would be successful in the tech sector, especially when working with people who are already grounded in science and technology. In regards to retaining diverse talent, Vinita would advise companies to make the workplace more inclusive and build a sense of belonging for diverse talent. She highlights how Microsoft has adapted during the pandemic to help their employees. Additionally, Vinita encourages anyone who is supportive of a diverse workplace to become a visible advocate for D&I. Vinita Ananth is an entrepreneur and an accomplished leader in a spectrum of areas, from global technology, technology-enabled business services, enterprise cloud, and social media to analytics, with 20+ years of rich worldwide experience living and working across the US, EMEA, and APAC. She is currently the leader of SAP, High-Performance Computing (HPC), Legacy (Mainframe, AIX, Legacy), and Azure VMware Solution (AVS) Customer Engineering at Microsoft Azure. Vinita holds a Bachelor's in Electrical Engineering from India, and an MBA in Marketing & Finance from Santa Clara University, California. In her personal time, she loves hiking and playing tennis; outside of work, the focus is her family.
Vinita Ananth: www.linkedin.com/in/vinitaananth
Microsoft Azure: www.azure.microsoft.com
More episodes of the SnackWalls Podcast: http://podcast.snackwalls.com
SnackWalls is powered by San Diego Code School: https://sdcs.io
Please share, like, and subscribe for more reach
Mark and Brian are together this week, hosting our guests Senanu Aggor and Ilias Katsardis as we discuss High Performance Computing with Google. HPC uses powerful computers to solve problems that would otherwise be too large or take too long for standard machines. Innovation and advances in cloud technology have made this resource more accessible, more scalable, and more affordable. Senanu lists some great use cases for HPC, including vehicle manufacturing and the medical field, and describes how these markets benefit from the extra power HPC offers. Ilias talks tech and helps us understand the evolution of the Google HPC offering and the architecture most often used with HPC. He explains the benefits of HPC on the cloud over the old way, emphasizing the flexibility of choosing machines based on your code rather than forcing your code onto small machines. Storage of data is flexible, scalable, and secure as well. Diminishing VM-to-VM latency has been an important advancement in HPC, and Ilias describes how Google has decreased latency. Google Cloud customers are using the HPC offering for all kinds of large computing jobs, and Senanu details some of these real-world instances. From Covid vaccine research to disaster evacuation planning, HPC on the cloud is changing the way we process data. Later, Ilias tells our listeners how to get started with their HPC project.
Senanu Aggor
Senanu Aggor is the Product Marketing Manager for Google Cloud's High Performance Computing (HPC) solution.
Ilias Katsardis
Ilias Katsardis is the HPC Solution Lead for the Customer Engineering team (EMEA) at Google. In this role, Ilias brings over 14 years of experience in the cloud computing and high-performance computing industries to promote Google Cloud's state-of-the-art infrastructure for complex HPC workloads. Previously, he worked as an applications analyst at Cray Inc., where he was a dedicated analyst to the European Centre for Medium-Range Weather Forecasts (ECMWF), and, prior to that, was an HPC application specialist at ClusterVision. Ilias also founded two startups: Airwire Networks in 2006 and Performance Hive in 2017.
Cool things of the week
What's happening in BigQuery: Time unit partitioning, Table ACLs and more blog
BigQuery explained: Blog series blog
BigQuery Spotlight videos
Cloud Functions vs. Cloud Run video
Interview
High Performance Computing site
GCP Podcast Episode 237: NVIDIA with Bryan Catanzaro podcast
GCP Podcast Episode 167: World Pi Day with Emma Haruka Iwao podcast
Compute Engine site
Compute Engine Machine Types site
Cloud Storage site
Cloud Firestore site
Google Cloud with Intel site
Cloud GPUs site
Best practices for running tightly coupled HPC applications on Compute Engine site
Super Computing Event site
Stackchat at home
This week, Max Saltonstall is talking cyber analytics with Eric Dull from Deloitte.
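Since low VM-to-VM latency is the headline enabler for tightly coupled workloads in this conversation, here is a minimal sketch of the kind of MPI ping-pong microbenchmark commonly used to measure it, written with mpi4py. This is an illustration, not something from the episode; the message size and iteration count are arbitrary choices.

```python
# Run with, e.g.: mpirun -np 2 python pingpong.py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = np.zeros(8, dtype="b")  # tiny 8-byte message to expose latency
iters = 10_000

comm.Barrier()
start = time.perf_counter()
for _ in range(iters):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    else:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # Each iteration is one round trip; half of that is one-way latency.
    print(f"one-way latency ~ {elapsed / iters / 2 * 1e6:.1f} us")
```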
A large volume of valuable data is created in Healthcare and Life Sciences. Dr. Natalia Jimenez, distinguished expert and member of the Atos scientific community, discusses some of the uses of High Performance Computing (HPC) in Health and Life Sciences. Listeners will learn how HPC is enabling precision medicine and how acceleration and simulation capabilities are needed in the areas of genomics and the drug discovery process.
Benchmarking and optimizing application performance is an important topic in building High Performance Computing (HPC) solutions. In this podcast, Jeroen Bronkhorst talks with hands-on expert Jan Thorbecke (Cray) about, among other things, the purpose and necessity of benchmarking, what it takes to optimize application performance, and the potential impact this has on the performance of an HPC system.
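As background to this benchmarking conversation: a first-order model that comes up in nearly every discussion of HPC application performance is Amdahl's law, which bounds the speedup of a program with parallel fraction p on N processors by S(N) = 1 / ((1 - p) + p/N). A small Python illustration follows; the 95% parallel fraction is an example value, not a figure from the episode.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Upper bound on speedup when fraction p of runtime parallelizes over n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the runtime parallel, speedup saturates quickly:
for n in (8, 64, 512, 4096):
    print(f"{n:>5} workers: {amdahl_speedup(0.95, n):5.1f}x (asymptotic limit: 20x)")
```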
Scott Jeschonek joins Scott Hanselman to talk about Azure HPC Cache. Whether you are rendering a movie scene, searching for variants in a genome, or running machine learning against a data set, HPC Cache can provide very low latency, high throughput access to the required file data. What's more, your data can remain in its Network Attached Storage (NAS) environment in your data center while you drive your jobs into Azure Compute.
[0:08:00] - Azure HPC Cache in the Azure portal
Azure HPC Cache overview
What is Azure HPC Cache?
Azure HPC Cache docs
Azure HPC Cache pricing
Create a free account (Azure)
In the latest episode of Digital Centre's podcast series, host Omer Wilson talks with CEO and founder of XTREME-D, Naoki Shibata, about the growing need for High-Performance Computing (HPC) and Artificial Intelligence (AI). With HPC systems generating more data and IoT systems adding to the global data tsunami, Naoki and Omer, dialling in from Japan and Singapore, explore how AI is helping enterprises to process it all. The global HPC market is anticipated to reach USD 50 billion by the year 2023, growing at 8% CAGR. What is driving this growth, and how can data centres support the demand?
Data Centre 4.0: The future of the Digital Ecosystem
In this podcast series from Digital Centre, data centre expert and APAC-based marketer Omer Wilson will be in conversation with technology and industry experts on the future of the data centre. Today's tech-fuelled transformation era is putting traditional data centres under unprecedented pressure. This pressure is coming from businesses, users and consumers, from every physical, virtual and cloud-centric angle. What does the future hold for the data centre?
Subscribe to our newsletter to be notified of episodes as they come out
On this week's Conversations in the Cloud, we are joined by Brock Taylor, Director of HPC Solutions Engineering at Intel. High Performance Computing (HPC) is crucial for research, academia and, increasingly, for enterprises. But customers were struggling to keep homegrown ecosystems validated and components tied together. That challenge prompted the development of Intel® Select Solutions for HPC. These solutions offer quick-to-deploy infrastructure optimized for analytics clusters and HPC applications. Pre-validation reduces complexity and helps make HPC more accessible and easier to maintain. Brock outlines the current portfolio of Intel Select Solutions for HPC: Simulation & Modeling, Simulation & Visualization, Genomic Analytics, and HPC & AI Converged Clusters. The quick adoption of the HPC portfolio continues to impress Brock. Partners appreciate working with Intel to define all key hardware, software, and integration requirements, but there is still room for vendor differentiation, adding their own value on top of the Intel requirements. The new HPC & AI Converged Clusters solution stemmed from the rapid rise of AI and allows both workload types to run on the same architecture. Explore Intel Select Solutions for HPC and other performance-optimized configurations at www.intel.com/selectsolutions. For more information about HPC please visit www.intel.com/hpc.
In this very special fully-connected episode of Practical AI, Daniel interviews Chris. They discuss High Performance Computing (HPC) and how it is colliding with the world of AI. Chris explains how HPC differs from cloud/on-prem infrastructure, and he highlights some of the challenges of an HPC-based AI strategy.
Allan is a highly driven engineer with a passion for invention and advancing the state of the art. He has a strong background in aerodynamics, High Performance Computing (HPC), and Artificial...
Find out about Nor-Tech's cutting-edge High Performance Computing (HPC) technology, and their easy to deploy hardware and software solutions, in this Conversations in the Cloud podcast featuring Dominic Daninger, VP of Engineering for Nor-Tech. Named to CRN's list of top 40 Data Center infrastructure providers, Nor-Tech's HPC technology is backed by the company's no-wait-time support guarantee and a world-class team of HPC engineers. Dominic discusses Nor-Tech's demo cluster, a no-cost, no-strings opportunity for their customers to test-drive applications so that when they receive their product they are able to deploy it seamlessly. And if they run into any issues, Nor-Tech's support team is right there to help them get up and running quickly. Dominic also talks about how HPC is moving into the commercial space and how engineers and designers are using simulation and modeling tools to design a variety of products, from diapers to toothpaste tubes. By using these tools, their customers can be confident that when they go into production their designs will work the first time without subsequent redesigns—saving them time and money. As part of their easy to deploy promise, Nor-Tech offers the NT-HPC Simulation & Modeling solution, an Intel Select Solution that is a pre-validated selection of components designed to meet the demands of HPC applications and workflows. He talks about the advantages this solution will offer when it is updated soon with 2nd Generation Intel Xeon Scalable processors as well as Optane DC persistent memory, which will provide higher performance and improved memory bandwidth. To learn more about Nor-Tech's HPC solutions, go to Nor-Tech.com. If you want to run test cases to build an ROI for management, you can go to simulationclusters.com. To learn more about Intel Select Solutions, go to Intel.com/SelectSolutions.
Transcript
In this episode, we dive into IoT, High Performance Computing (HPC), and scaling strategies. Diego Tamburini takes us deep into the digital transformation occurring in the manufacturing world. Along the way we discuss IoT, moving systems and data into Azure, and the interesting issues that can come up in time-series data. We also go deep into 3D printing for manufacturing, and into running rendering jobs from standalone desktop applications.
Show Links
Big Compute: HPC and Batch
Azure Batch
Manufacturing home page
Azure for Manufacturing
Diego Tamburini
Diego Tamburini is the Principal Industry Lead for Azure Manufacturing in the Microsoft Industry Experience team, where he focuses on developing technical content to help manufacturing companies and software developers deliver their solutions on Azure, at scale. He also champions partners who deliver manufacturing solutions using Azure. Follow Diego on LinkedIn or Twitter.
David Starr
In addition to being the host of this podcast, David is a Principal Cloud Solutions Architect at Microsoft, focusing on healthcare. He loves creating with code and with large systems supporting scale. He enjoys all aspects of developing, delivering and operating software systems. He is also passionate about the end-to-end process, methods and techniques, and the patterns and practices of high-performing software development teams, using skills that transcend specific technology stacks. Follow David on LinkedIn or Twitter.
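On the time-series point raised in this episode: a recurring issue with IoT telemetry is that readings arrive late or out of order, so aggregation should key on the event timestamp rather than arrival order. Here is a minimal pandas sketch of that idea, with fabricated sensor values purely for illustration (not data from the show).

```python
import pandas as pd

# Fabricated telemetry: note the out-of-order timestamps, as often
# happens when devices buffer readings during connectivity gaps.
readings = pd.DataFrame(
    {
        "ts": pd.to_datetime(
            ["2024-01-01 00:00:05", "2024-01-01 00:02:10",
             "2024-01-01 00:00:55", "2024-01-01 00:01:20"]
        ),
        "temp_c": [21.0, 22.4, 21.3, 21.8],
    }
)

# Sort on event time, then aggregate into one-minute buckets.
per_minute = (
    readings.sort_values("ts")
    .set_index("ts")
    .resample("1min")["temp_c"]
    .mean()
)
print(per_minute)
```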
Computer Science/Software Engineering College Courses Review
Bonus round! I attended the Costa Rica Big Data School, a five-day event where two speakers from the Texas Advanced Computing Center spoke about current computational subjects like object-oriented programming in Python, High Performance Computing (HPC), Hadoop, and other important technologies. Hope you guys can find valuable knowledge here!
In this podcast, the Radio Free HPC team looks at the semi-annual TOP500 BoF presentation by Jack Dongarra. The TOP500 list of supercomputers serves as a “Who's Who” in the field of High Performance Computing (HPC). It started as a list of the most powerful supercomputers in the world and has evolved into a major source of… (RFHPC214: A Look at TOP500 Trends on the Road to Exascale)
Dr. Bill Magro, Intel Fellow and Chief Technologist for High Performance Computing (HPC) at Intel, joins Chip Chat to discuss Intel Select Solutions for Simulation & Modeling. Dr. Magro works on Intel's HPC strategy, helping customers overcome their HPC challenges and driving new capabilities back into the product roadmap. In this interview, he discusses the evolution of HPC and how its scope has grown to incorporate workloads like AI and advanced analytics. He then focuses on Intel Select Solutions for Simulation & Modeling and how they lower costs and enable more customers to take advantage of HPC. To learn more about Intel Select Solutions for Simulation & Modeling, please visit https://intel.com/selectsolutions.
Gudrun wanted to talk with our new colleague about his main research topic, kinetic theory. This way of thinking was developed for the modeling of gases and is inspired by physical ideas that regard kinetic energy as an inherent property of matter. Kinetic gas theory looks at the microscopic level in order, ultimately, to better explain macroscopic quantities such as heat and temperature. In a so-called ideal gas, an unimaginably large number of small point masses move freely, in a disordered and random fashion, through space according to Newtonian mechanics, colliding now and then, and we perceive and measure the degree of the particles' motion as heat. The unit first assigned to this quantity was the calorie, from the Latin calor = heat. Today the correct SI unit for energy (and thus also for heat) is the joule. The measurable quantity temperature is, put simply, the mechanical energy in the gas system, and the model thus provides a kinetic theory of heat. One can also view it as a many-particle system of microscopic particles from which, in clearly defined (and distinct) limit processes, macroscopic quantities and their behavior can be derived. The study of these limits is a mathematically very demanding task and remains an open field of research to this day, in which specific questions are answered only piece by piece. One difficulty is that very different scales inevitably coexist, and their interaction must be correctly captured and understood. Moreover, as a rule, every limit that yields interesting research results is a singularity within the theory. As early as 1900, Hilbert named the axiomatic treatment of physics between mechanics and probability theory as one of the important mathematical problems for the 20th century. We have made progress since then, but a great deal remains to be done. For example, the possible correlation between particle motions in gases is an open question (except for short times). One advantage over Hilbert's time is that today we can also use computers to develop and analyze models. For this, of course, suitable numerical methods must be developed. In Martin Frank's work these are typically integro-differential equations with a hyperbolic partial differential equation, modeling motion without damping. By their very formulation these problems have many dimensions, namely 3 position and 3 velocity components at every point of the computational domain. Such simulations are therefore only feasible on large parallel machines and rely on High Performance Computing (HPC). This also explains Martin Frank's dual role: he is responsible for the further development of the HPC group at KIT's computing center, and he applies mathematics to problems that can only be treated with the help of HPC. Particularly interesting in this theory is the mutual influence of numerics and analysis in the treatment of small parameters. There are also points of contact with the Lattice Boltzmann Research Group at KIT, which develops and applies the software package OpenLB. And although kinetic theory historically established itself above all as a theory of gases, the modeling approach is useful well beyond gases.
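For orientation, here is the standard textbook form of the kinetic equations described above (this general form is common knowledge, not a formula quoted from the episode). The unknown is a phase-space density f(t, x, v), which makes the six dimensions explicit, three in position and three in velocity, and the collision term is an integral operator in v, which is why these models are integro-differential equations:

\[
\partial_t f + v \cdot \nabla_x f = Q(f, f), \qquad f = f(t, x, v), \quad x \in \mathbb{R}^3, \; v \in \mathbb{R}^3,
\]

where the transport part \(\partial_t f + v \cdot \nabla_x f\) is hyperbolic (motion without damping) and \(Q(f, f)\) models the collisions between particles.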
For example, financial markets can be viewed as composed of very many independently acting agents. The result of the agents' actions is the stock price, so to speak the temperature of the stock market. On the basis of this model one can then investigate questions such as: why are there so many rich people? It is also a matter of finding the right modeling assumptions for new applications. For example, one result of classical gas theory is the Beer-Lambert law. It states that photons are attenuated exponentially as they pass through clouds. Measurements show, however, that this does not actually hold for real clouds. Why not? To answer that, one has to look very closely. At first sight it means that the underlying Boltzmann equation is too strong a simplification for clouds. Specifically, the assumption that a cloud can be treated as a homogeneous medium is probably not valid, i.e., the scattering centers (the water droplets) are not homogeneously distributed. To derive a better model than the Boltzmann equation, one would of course need to know: what kind of inhomogeneity is present? Martin Frank studied mathematics and physics at TU Darmstadt because he had already developed a strong interest in theoretical physics at school. During his studies he specialized in applied analysis and continued working in this area after his diploma in mathematics at TU Darmstadt. During this time he also completed a diploma in physics. In his doctorate at TU Kaiserslautern, however, he turned mainly to numerical mathematics. In his own university teaching, and also in special programs for school students, he moves between project-centered and theory-centered teaching and learning.
Literature and further information:
M. Frank, C. Roeckerath: Gemeinsam mit Profis reale Probleme lösen, Mathematik Lehren 174, 2012.
M. Frank, M. Hattebuhr, C. Roeckerath: Augmenting Mathematics Courses by Project-Based Learning, Proceedings of the 2015 International Conference on Interactive Collaborative Learning, 2015.
Simulating Heavy Ion Beams Numerically using Minimum Entropy Reconstructions (SHINE).
M. Frank, W. Sun: Fractional Diffusion Limits of Non-Classical Transport Equations.
P. Otte, M. Frank: Derivation and analysis of Lattice Boltzmann schemes for the linearized Euler equations, Comput. Math. Appl. 72, 311–327, 2016.
M. Frank et al.: The Non-Classical Boltzmann Equation, and Diffusion-Based Approximations to the Boltzmann Equation, SIAM J. Appl. Math. 75, 1329–1345, 2015.
M. Frank, T. Goudon: On a generalized Boltzmann equation for non-classical particle transport, Kinet. Relat. Models 3, 395–407, 2010.
M. Frank: Approximate models for radiative transfer, Bull. Inst. Math. Acad. Sinica (New Series) 2, 409–432, 2007.
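For reference, the Beer-Lambert law mentioned above in its standard form (again the textbook statement, not a formula from the episode): in a homogeneous medium with total cross section \(\Sigma_t\), a beam of initial intensity \(I_0\) is attenuated exponentially with path length \(s\),

\[
I(s) = I_0 \, e^{-\Sigma_t s},
\]

or equivalently, the free path lengths between scattering events follow the exponential distribution \(p(s) = \Sigma_t \, e^{-\Sigma_t s}\). The non-classical transport equations in the references above relax exactly this assumption and allow a non-exponential \(p(s)\), as one would expect when the scattering centers (the water droplets) are inhomogeneously distributed.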
In this episode of Intel Shift on SMACtalk, Daniel Newman speaks with Armughan Ahmad, Dell’s Senior VP of Hybrid Cloud and Ready Solutions. In this fast-paced conversation, Newman and Ahmad discuss Dell’s recent IoT announcements at their IQT day in New York City. Ahmad shared how Dell is making a massive investment in its IoT Group and what this will mean for customers and consumers as the investment rolls out over the next three years. Additionally, the conversation sheds light on the rapid expansion of Dell Technologies and points listeners toward a better understanding of the impact of rapidly growing Edge Computing and how it will tie into current cloud strategies. For this and much more, download and listen to this don’t-miss podcast from Intel Shift. Armughan Ahmad - Senior Vice President and General Manager, Hybrid Cloud & Ready Solutions. Armughan Ahmad serves as Senior Vice President and General Manager of Hybrid Cloud & Ready Solutions at Dell EMC, where he leads solutions and technology alliance teams globally that deliver innovative Hybrid Cloud, Software-Defined, High Performance Computing (HPC), Big Data and Analytics, and Business Applications workload solutions for large enterprise, public institution, and small and medium business customers and partners. Prior to joining Dell, Armughan served as Vice President at Hewlett-Packard, where he led the growth of HP’s Enterprise Group, delivering converged and secure infrastructure solutions through partner channels. Previously, Armughan held executive management roles at 3Com, Enterasys, Cabletron, and other technology firms ranging from $10M start-ups to $100bn corporations, delivering hardware, software, and services solutions for vertical industries globally. Armughan is a graduate of Sheridan College, where he studied computer science. He serves on numerous non-profit boards as a passionate promoter of third-world economic trade and development initiatives.
SGI UV 300 was designed to address the computational and data access challenges facing large business and technical computing environments by overcoming the limitations associated with traditional High Performance Computing (HPC) clusters. A growing number of enterprise and technical computing applications require the ability to process and analyze extremely large data sets, requiring large numbers […]
EasyBuild is a software build and installation framework that lets you manage (scientific) software on High Performance Computing (HPC) systems efficiently. EasyBuild homepage: http://hpcugent.github.io/easybuild
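To make the idea concrete, here is a minimal sketch of an easyconfig, the Python-syntax recipe format EasyBuild uses to describe how a piece of software should be built. The package name, version, and URLs below are hypothetical placeholders; a real recipe would be tailored to the actual software being installed.

# Minimal illustrative easyconfig (hypothetical package; EasyBuild recipes use Python syntax).
easyblock = 'ConfigureMake'        # generic build procedure: configure, make, make install

name = 'ExampleTool'
version = '1.0.0'

homepage = 'https://example.org/exampletool'
description = "Hypothetical scientific tool, used here only to illustrate the recipe format."

# Compiler toolchain the software is built with.
toolchain = {'name': 'GCC', 'version': '13.2.0'}

source_urls = ['https://example.org/downloads/']
sources = ['%(namelower)s-%(version)s.tar.gz']   # template expands to exampletool-1.0.0.tar.gz

# Sanity check: after installation, these paths must exist.
sanity_check_paths = {
    'files': ['bin/exampletool'],
    'dirs': [],
}

moduleclass = 'tools'

With such a recipe saved as ExampleTool-1.0.0-GCC-13.2.0.eb, one would typically run "eb ExampleTool-1.0.0-GCC-13.2.0.eb --robot", where --robot tells EasyBuild to resolve and build any missing dependencies automatically; EasyBuild then also generates an environment module so users can load the installed software.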
In this livecast from the Intel Developer Forum (IDF) in San Francisco, James Reinders, Director and Chief Evangelist of Intel Software, chats about his work to educate the industry about parallel programming and how the modern code community and parallelism are rapidly growing in the High Performance Computing (HPC) industry. He explains how Intel's work with the Modern Code Developer Community is enabling the broader developer community to gain the skills needed to unlock the full potential of Intel hardware and enable the next decade of discovery in parallel computing. To learn more, follow James on Twitter https://twitter.com/jamesreinders