Podcasts about SAP HANA

  • 99 podcasts
  • 290 episodes
  • 31m average duration
  • 1 new episode per month
  • Latest episode: Apr 7, 2025

POPULARITY (trend chart, 2017–2024)


Best podcasts about SAP HANA

Latest podcast episodes about SAP HANA

Unexplored Territory
#094 - Discussing SAP HANA support for vSAN ESA 8.x with Erik Rieger!

Unexplored Territory

Play Episode Listen Later Apr 7, 2025 45:46


Broadcom recently announced vSAN ESA support for SAP HANA. Erik Rieger is Broadcom's Principal SAP Global Technical Alliance Manager and Architect, so I invited him on the show to go over what this actually means and why it is important for customers. For more details, make sure to check: SAP note 3406060 (SAP HANA on VMware vSphere 8 and vSAN 8), the SAP HANA and VMware support pages, SAP HANA on HCI powered by vSAN, and the vSphere and SAP HANA best practices. Disclaimer: The thoughts and opinions shared in this podcast are our own and our guests', and not necessarily those of Broadcom, VMware by Broadcom, or SAP.

SAP Cloud Platform Podcast
Episode 117: Discovering Knowledge Graph within SAP HANA Cloud

SAP Cloud Platform Podcast

Play Episode Listen Later Apr 1, 2025 44:26 Transcription Available


In this new episode, Niklas Siemer, Product Specialist for SAP Business Technology Platform, talks to Shabana Samsudheen, Senior Product Manager for SAP HANA Cloud. We take a deep dive into the new Knowledge Graph engine of SAP HANA Cloud, talking about what graphs are, what they are used for, typical use cases, and how to use them in SAP HANA Cloud.

Inside SAP S/4HANA
Episode 118: Upgrades in SAP S/4HANA Cloud Private Edition

Inside SAP S/4HANA

Play Episode Listen Later Feb 26, 2025 14:30 Transcription Available


In this episode of Inside SAP S/4HANA Cloud, Fernanda Rodrigues discusses the critical importance of regular upgrades for SAP S/4HANA Cloud Private Edition. Joined by SAP experts Franziska Rup and Jens Fieger, they dive into SAP's release strategy and provide insights on integrating updates as part of your IT strategy. Learn how to leverage new innovations and reduce business downtimes through best practices and strategic planning. Whether it's improving security, operational efficiency, or keeping your systems agile, find out how regular upgrades can drive business success. What topic would you like us to discuss next? Send an email to insides4@sap.com

SAP Developers
SAP Developer News November 21st, 2024

SAP Developers

Play Episode Listen Later Nov 22, 2024 7:05 Transcription Available


This episode covers SAUGx, SAP BTP ABAP Environment - 2411, SAP BTP Innobytes, SAP HANA 2.0 SPS 08

Unofficial SAP on Azure podcast
#215 - The one with ANF Extension 1 Features (Ralf Klahr, Geert van Teylingen, Bernd Herth) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Nov 2, 2024 48:15


In episode 215 of our SAP on Azure video podcast we talk about Azure NetApp Files. With the new Extension 1, ANF supports availability zone volume placement as the new default placement method. This upgrade removes the need for AVset pinning and eliminates the need for proximity placement groups. Using availability zone volume placement aligns with the Microsoft recommendation on how to deploy SAP HANA infrastructures to achieve the best performance with high availability, maximum flexibility, and simplified deployment. To tell us more about this, we are happy to welcome Ralf Klahr, Geert van Teylingen and Bernd Herth to our show. Find all the links mentioned here: https://www.saponazurepodcast.de/episode215 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure #ANF

The Six Five with Patrick Moorhead and Daniel Newman
Optimizing IBM Power Environments with FalconStor StorSafe - Six Five Media In the Booth

The Six Five with Patrick Moorhead and Daniel Newman

Play Episode Listen Later Oct 24, 2024 9:17


Host Steven Dickens chats with FalconStor's VP of Engineering, Ron Morita, and VP of Customer Success, Abdul Hashmi, for Six Five Media In the Booth at IBM TechXchange. They discuss solutions for modernizing and optimizing IBM Power environments using FalconStor's StorSafe for data protection, cloud migration, and secure backup storage. Check out the full interview to learn more.

Unofficial SAP on Azure podcast
#210 - The one with 32TB VMs for SAP HANA (Abbas Ali Mir, Ralf Klahr, Momin Qureshi and Chris Nolan) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Sep 27, 2024 48:07


In episode 210 of our SAP on Azure video podcast we talk about 32 TB virtual machines on Azure! Recently we talked about the release of virtual machines with 32 TB of memory on Azure, which allows you to run the most demanding SAP workloads on Azure. Some innovations take time to be adopted, but apparently that was not the case with the 32 TB systems. I am really happy to have a team with us today that has already worked on customer projects using these 32 TB VMs: Abbas, Ralf, Momin and Chris. Find all the links mentioned here: https://www.saponazurepodcast.de/episode210 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure #SAPHANA #VirtualMachines

AWS на русском
050. AWS news from the first half of 2024

AWS на русском

Play Episode Listen Later Jul 18, 2024 39:04


In this new episode of the "AWS на русском" podcast, Mikhail and I discuss the latest news and updates from the AWS world. We share impressions from recent trips and events, and talk about new features and services that can help you in your work. Covered in this episode: Amazon WorkSpaces Pools: an efficient solution for virtual desktops; we discuss not only this feature but also the broader approach to remote development, including VDI and CodeCatalyst with dev environments. Amazon CodeCatalyst: now supports GitLab and Bitbucket repositories, with custom blueprints and the Amazon Q feature for development. Anthropic's Claude 3.5 Sonnet model in Amazon Bedrock: even more intelligence at a lower cost, plus a general overview of the Claude 3 models and other notable models such as Mistral and Llama 3 from Meta, and a discussion of new features that have reached GA, such as Guardrails, Agents, and Model Evaluation. AWS listens to its customers: an Amazon S3 update means there is no longer a charge for requests that return HTTP error codes. In brief: Security: Amazon GuardDuty Malware Protection for Amazon S3. Security: IAM Access Analyzer update, with expanded custom policy checks and guided revocation. Security: AWS adds passkey support for multi-factor authentication (MFA) for root and IAM users. EC2: new LARGE Amazon EC2 U7i instances for large in-memory databases, certified for SAP HANA. Certifications: two new AWS certifications, AWS Certified AI Practitioner and AWS Certified Machine Learning Engineer – Associate. Tune in to stay on top of all the latest updates.

HANA Cafe NL
Recap SAP CodeJam Machine Learning using SAP HANA with Vitaliy and Rakesh (S09E06)

HANA Cafe NL

Play Episode Listen Later Jun 25, 2024 36:30


What is the similarity between SAP, HANA and the Titanic? An SAP CodeJam, hosted at the office of ITrainee, where we could work with #SAPHANA, Machine Learning, Python, Pandas and EDA (which stands for ... :-) A recap with host Vitaliy Rudnytskiy and participant Rakesh Jain. Topics covered: the difference between AI and Machine Learning; new job titles such as 'AI engineer'; remove the hype before you start; it is all about the business case; a new way to interact with your ERP system; and the SAP Developer Challenge 'SAP HANA Multimodel using Python in BAS'.
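
For anyone who wants to recreate the CodeJam flow at home, here is a minimal sketch using SAP's hana_ml Python client to pull a HANA table into pandas for exploratory data analysis. The host, port, credentials and the TITANIC table/schema names are placeholders, not details from the episode.

```python
# Minimal sketch: exploratory data analysis on an SAP HANA table with hana_ml and pandas.
# Connection details and the table/schema names below are placeholders.
from hana_ml.dataframe import ConnectionContext

cc = ConnectionContext(
    address="<your-hana-host>",  # placeholder
    port=443,
    user="<user>",
    password="<password>",
)

# Reference a server-side table as a lazy hana_ml DataFrame (no data is moved yet).
hdf = cc.table("TITANIC", schema="CODEJAM")

# Push simple profiling down to HANA, then bring the small result set to the client.
print(hdf.describe().collect())

# Pull a sample into pandas for local EDA (plots, value counts, etc.).
pdf = hdf.head(1000).collect()
print(pdf.dtypes)

cc.close()
```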

Unofficial SAP on Azure podcast
#197 - The one with Converting VMs SCSI to NVMe (Philipp Leitenbauer) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Jun 21, 2024 28:18


In episode 197 of our SAP on Azure video podcast we talk about how to improve storage throughput. In our last episode we quickly mentioned how features and capabilities are constantly added to services in the cloud. We talked about this in the context of Azure Monitor for SAP solutions, but obviously these enhancements and optimizations are also happening at a lower level. Especially in SAP HANA installations, I/O throughput is super critical. In the past, SCSI was used to connect your disks to the VM. Now we can leverage NVMe (non-volatile memory express) using Azure Boost. To tell us more about this and actually show how to convert from SCSI to NVMe, I am glad to have Philipp Leitenbauer with us today. Find all the links mentioned here: https://www.saponazurepodcast.de/episode197 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure #Infrastructure #Performance
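
As a rough companion to the episode, here is a hedged sketch of flipping a VM's disk controller from SCSI to NVMe with the Azure Python SDK. The resource names are placeholders, and the disk_controller_type property is my reading of the Azure Compute API rather than the episode's own demo; the VM size, image and in-guest drivers must support NVMe, so verify everything against current Azure documentation before trying this.

```python
# Hedged sketch: switching a VM's disk controller type from SCSI to NVMe with azure-mgmt-compute.
# Subscription, resource group and VM names are placeholders; the disk_controller_type property
# is assumed from the Azure Compute API and should be checked against current documentation.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
vm_name = "<vm-name>"                   # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# The controller type can only be changed while the VM is deallocated.
client.virtual_machines.begin_deallocate(resource_group, vm_name).result()

vm = client.virtual_machines.get(resource_group, vm_name)
vm.storage_profile.disk_controller_type = "NVMe"  # assumed property name and value

client.virtual_machines.begin_create_or_update(resource_group, vm_name, vm).result()
client.virtual_machines.begin_start(resource_group, vm_name).result()
```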

SAP Developers
SAP Developer News May 9th, 2024

SAP Developers

Play Episode Listen Later May 9, 2024 7:06 Transcription Available


This episode covers SAP HANA integration with leading open-source software, DYK ADT Content Assist, April 2024 | cap≽ire, and Supporting Multiple API Gateways with SAP API Management.

SAP Cloud Platform Podcast
Episode 105: Discovering the SAP HANA Cloud - Vector Engine

SAP Cloud Platform Podcast

Play Episode Listen Later Mar 28, 2024 43:45 Transcription Available


In this new episode, Niklas Siemer, Product Specialist for SAP Business Technology Platform, talks to Thomas Hammer, Lead Product Manager for SAP HANA Cloud. Together they dive into SAP's database-as-a-service offering, SAP HANA Cloud, and focus on its new capability, the vector engine. You will get an understanding of what the new engine is, why it is important, its use cases, and more.
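
To make the vector engine concrete, here is a minimal sketch of storing and searching embeddings from Python over the hdbcli driver. Connection details and table/column names are placeholders; the REAL_VECTOR type and COSINE_SIMILARITY function follow SAP's published vector engine documentation, but double-check the exact SQL against your SAP HANA Cloud release.

```python
# Minimal sketch: storing embeddings in SAP HANA Cloud's vector engine and running a
# similarity search. Connection details and object names are placeholders; REAL_VECTOR
# and COSINE_SIMILARITY are taken from SAP's vector engine documentation.
from hdbcli import dbapi

conn = dbapi.connect(
    address="<your-hana-cloud-host>",  # placeholder
    port=443,
    user="<user>",
    password="<password>",
)
cur = conn.cursor()

# A tiny table with a 3-dimensional vector column (real embeddings are much wider).
cur.execute("CREATE COLUMN TABLE DOCS (ID INT PRIMARY KEY, TEXT NVARCHAR(1000), EMB REAL_VECTOR(3))")
cur.execute("INSERT INTO DOCS VALUES (1, 'hello world', TO_REAL_VECTOR('[0.1, 0.2, 0.3]'))")
cur.execute("INSERT INTO DOCS VALUES (2, 'goodbye world', TO_REAL_VECTOR('[0.9, 0.1, 0.0]'))")

# Rank rows by cosine similarity to a query vector.
cur.execute(
    "SELECT ID, TEXT, COSINE_SIMILARITY(EMB, TO_REAL_VECTOR('[0.1, 0.2, 0.25]')) AS SCORE "
    "FROM DOCS ORDER BY SCORE DESC"
)
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```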

Data Engineering Podcast
Reconciling The Data In Your Databases With Datafold

Data Engineering Podcast

Play Episode Listen Later Mar 17, 2024 58:14


Summary A significant portion of data workflows involve storing and processing information in database engines. Validating that the information is stored and processed correctly can be complex and time-consuming, especially when the source and destination speak different dialects of SQL. In this episode Gleb Mezhanskiy, founder and CEO of Datafold, discusses the different error conditions and solutions that you need to know about to ensure the accuracy of your data. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster (https://www.dataengineeringpodcast.com/dagster) today to get started. Your first 30 days are free! Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Join us at the top event for the global data community, Data Council Austin. From March 26-28th 2024, we'll play host to hundreds of attendees, 100 top speakers and dozens of startups that are advancing data science, engineering and AI. Data Council attendees are amazing founders, data scientists, lead engineers, CTOs, heads of data, investors and community organizers who are all working together to build the future of data and sharing their insights and learnings through deeply technical talks. As a listener to the Data Engineering Podcast you can get a special discount off regular priced and late bird tickets by using the promo code dataengpod20. Don't miss out on our only event this year! Visit dataengineeringpodcast.com/data-council (https://www.dataengineeringpodcast.com/data-council) and use code dataengpod20 to register today! Your host is Tobias Macey and today I'm welcoming back Gleb Mezhanskiy to talk about how to reconcile data in database environments Interview Introduction How did you get involved in the area of data management? Can you start by outlining some of the situations where reconciling data between databases is needed? What are examples of the error conditions that you are likely to run into when duplicating information between database engines? When these errors do occur, what are some of the problems that they can cause? 
When teams are replicating data between database engines, what are some of the common patterns for managing those flows? How does that change between continual and one-time replication? What are some of the steps involved in verifying the integrity of data replication between database engines? If the source or destination isn't a traditional database engine (e.g. data lakehouse) how does that change the work involved in verifying the success of the replication? What are the challenges of validating and reconciling data? Sheer scale and cost of pulling data out (have to do it in place); performance, pushing databases to the limit, especially hard for OLTP and legacy systems; cross-database compatibility; data types. What are the most interesting, innovative, or unexpected ways that you have seen Datafold/data-diff used in the context of cross-database validation? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Datafold? When is Datafold/data-diff the wrong choice? What do you have planned for the future of Datafold? Contact Info LinkedIn (https://www.linkedin.com/in/glebmezh/) Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story. Links Datafold (https://www.datafold.com/) Podcast Episode (https://www.dataengineeringpodcast.com/datafold-proactive-data-quality-episode-205/) data-diff (https://github.com/datafold/data-diff) Podcast Episode (https://www.dataengineeringpodcast.com/data-diff-open-source-data-integration-validation-episode-303) Hive (https://hive.apache.org/) Presto (https://prestodb.io/) Spark (https://spark.apache.org/) SAP HANA (https://en.wikipedia.org/wiki/SAP_HANA) Change Data Capture (https://en.wikipedia.org/wiki/Change_data_capture) Nessie (https://projectnessie.org/) Podcast Episode (https://www.dataengineeringpodcast.com/nessie-data-lakehouse-data-versioning-episode-416) LakeFS (https://lakefs.io/) Podcast Episode (https://www.dataengineeringpodcast.com/lakefs-data-lake-versioning-episode-157) Iceberg Tables (https://iceberg.apache.org/) Podcast Episode (https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/) SQLGlot (https://github.com/tobymao/sqlglot) Trino (https://trino.io/) GitHub Copilot (https://github.com/features/copilot) The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
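
Since the episode centers on cross-database reconciliation with data-diff, here is a minimal sketch of comparing the same table across two engines using the project's documented Python API. The connection strings, table names and key columns are placeholders; consult the data-diff repository for the options your specific databases need.

```python
# Minimal sketch: comparing one table across two database engines with data-diff.
# Connection strings, table names and key columns are placeholders.
from data_diff import connect_to_table, diff_tables

source = connect_to_table("postgresql://user:pass@source-host/app_db", "orders", "id")
target = connect_to_table("snowflake://user:pass@account/db/schema?warehouse=WH", "ORDERS", "ID")

# diff_tables yields ('-', row) for rows present only in the source and ('+', row) for rows
# present only in the target (rows differing under the same key show up as both), so an
# empty iterator means the tables are in sync.
mismatches = 0
for sign, row in diff_tables(source, target):
    mismatches += 1
    print(sign, row)

print(f"{mismatches} differing rows found")
```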

The Guiding Voice
A Technocrat and a Thought Leader's Multi-potent Identity | Hemanth Volikatla | #TGV410

The Guiding Voice

Play Episode Listen Later Dec 23, 2023 39:41


A Tech Thought Leader's Multi-Potent Identity | Hemanth Volikatla | #TGV410. Tune into #TGV410 to get clarity on the above topic. Here are the pointers from Hemanth's conversation with Naveen Samala on The Guiding Voice: first rapid fire, introduction and context setting; the toughest lessons learned in Hemanth's professional journey; what he learned from his favorite failure(s); his family background, his dad, and their influence on Hemanth's career; how he managed to become an expert in a gamut of technologies; learning from major clients; how he keeps himself up to date; forgoing a million-dollar contract; his best accomplishment amongst the many awards he has received; witty answers to the rapid-fire questions; and one piece of advice for individuals aspiring to dream and become BIG. ABOUT THE GUEST: Hemanth has grown from software engineer to software architect across SAP, Microsoft, and Java, sizing and configuring infrastructure environments for SAP applications to meet different customers' requirements, and his expertise spans various database environments, including the latest SAP HANA. Currently focused on customer technical advisory, his team navigates the complexities of modern cloud environments like Azure, AWS, and Google Cloud Platform. As an entrepreneur and mentor, Hemanth has played a pivotal role in planning careers and leading practices in SAP and other technologies for corporates and engineers. Connect with Hemanth on LinkedIn: https://www.linkedin.com/in/hemanth-volikatla-05173625/ Connect with the host on LinkedIn: Naveen Samala: LinkedIn | Personal Website. Support our mission: to contribute, consider making a donation (any amount of your choice) via PayPal: Donate Here. Explore productivity: become a productivity monk by enrolling in this course: Productivity Monk Course. Discover "TGV Inspiring Lives" on Amazon: Volume 1 available on Kindle and Paperback. Connect in your preferred language: #TGV is available in Hindi & Telugu on YouTube. Audio podcast: listen to #TGV on Spotify in Hindi and Telugu. Follow on Twitter: @GuidingVoice, @NaveenSamala. Hosted on Acast. See acast.com/privacy for more information.

SAP Developers
SAP Developer News December 7th, 2023

SAP Developers

Play Episode Listen Later Dec 7, 2023 4:49 Transcription Available


This episode covers SAP Builders Group, “Did You Know” shorts Nr. 5 – Using Content Assist in ABAP Development Tools, AI Foundation on SAP BTP, Build Intelligent Data Apps with SAP HANA, and the State of JS Survey 2023

SAP Developers
SAP Developer News November 23rd, 2023

SAP Developers

Play Episode Listen Later Nov 23, 2023 5:38 Transcription Available


This episode covers SAP Build Apps Announcements, "Did You Know” shorts Nr. 3 - Using cds-plugin-ui5 for CAP and UI5 app coordination, openSAP Announcement, Advance your skills in building applications with SAP HANA and BTP Guidance Framework and Reference Architecture for SAP BTP

Der AWS-Podcast auf Deutsch
69 - From 0 to 100 to the Cloud - The Success Story of Vitesco Technologies with AWS

Der AWS-Podcast auf Deutsch

Play Episode Listen Later Nov 14, 2023 30:23


Welcome to a new episode of “Cloud Horizonte,” the official German AWS podcast. In this episode we have the pleasure of speaking with Thomas Buck, CIO of Vitesco Technologies. Vitesco Technologies is an innovative company that emerged from a Continental AG business unit and specializes in electromobility and automotive drive technologies. We get a fascinating look at the remarkable journey this company has taken since it was launched as an independent business. The situation Vitesco Technologies faced is truly noteworthy: in March 2020 the decision was made to spin the business off as an independent company under the name Vitesco Technologies, with the process to be completed by September 2021. And therein lay the challenge: Vitesco had to migrate an enormous number of applications, data sets, and platforms within an 18-month window. The company had no infrastructure of its own - no data centers, no network - and no time to build any for the migration. The solution to this impressive undertaking was a bold decision: Vitesco Technologies opted for a pure cloud approach to provide the required infrastructure and to start separating the applications as quickly as possible. Working closely with AWS as its cloud provider, the company was able to build the necessary infrastructure. The results speak for themselves: more than 350 applications were successfully moved to virtual storage, including engineering environments and central production-control systems. Employees at around 50 sites access these applications over a global SASE network. An impressive 1.6 billion records were migrated, and a central financial reporting system based on SAP/HANA was successfully built. Today, employees use around 85 percent of all applications via the cloud, and this year Vitesco will even tackle SAP itself and migrate it to the cloud. Thanks to cloud technology, Vitesco Technologies was able to develop some highly efficient methods for optimizing its landscape after the “lift and shift” phase. CIO Thomas Buck shares his conviction that lift and shift absolutely makes sense and encourages other companies to take the path of migrating to AWS first and then looking for further optimization opportunities. AWS has confirmed that Vitesco Technologies outperforms its competitors in most categories, particularly technical ones. This success story shows that digital natives are setting new standards in cloud excellence and maturity. Listen to this engaging episode of “Cloud Horizonte” to hear more about the remarkable journey of Vitesco Technologies and the transformative power of cloud technology that enabled them to realize their vision. Stay tuned and learn how cloud technology is driving companies like Vitesco Technologies forward in their digital transformations.

Unofficial SAP on Azure podcast
#166 - The one with updates on Premium v2 storage (Peter Kalan, Anbu Govindasamy) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Nov 3, 2023 50:21


In episode 166 of our SAP on Azure video podcast we go deep on Premium v2 storage with Peter Kalan and Anbu Govindasamy. We look at the different storage options for SAP on Azure, show some performance comparisons, dynamic tiering and snapshotting. We also talk about how to migrate to Premium v2. Find all the links mentioned here: https://www.saponazurepodcast.de/episode166 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure ## Summary created by AI * Features and benefits of premium SSD V2: Premium SSD V2 is a new version of Azure storage that offers sub-millisecond latency, customizable size and performance, and lower cost than premium SSD V1. It can support large database workloads such as SAP HANA, Oracle, and DB2. * How to use premium SSD V2 for SAP HANA: Premium SSD V2 does not require write accelerator for SAP HANA log, and can use different VM sizes than M series. It also allows changing the IOPS and throughput of the disks on the fly without downtime. It is certified by SAP and has best practice guidelines for optimal configuration. * Demo of premium SSD V2: The episode shows a demo of how to create and configure premium SSD V2 disks, how to check the sector size and latency, how to increase the throughput dynamically, and how to take snapshots and create new disks from them. * Comparison of premium SSD V2 and V1: The episode compares the price and performance of premium SSD V2 and V1, and shows that V2 is cheaper and faster than V1. It also explains how to optimize the disk size and number to get the best performance for the same price.
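
One of the points in the episode is that Premium SSD v2 lets you change IOPS and throughput on a live disk. Here is a hedged sketch of doing that with the Azure Python SDK; the subscription, resource group and disk names are placeholders, and the DiskUpdate property names reflect my reading of the Azure Compute API, so verify them against current Azure documentation.

```python
# Hedged sketch: dialing Premium SSD v2 IOPS/throughput up or down without detaching the disk.
# Names are placeholders; disk_iops_read_write / disk_m_bps_read_write are assumed from the
# Azure Compute API and should be verified against current documentation.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DiskUpdate

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
disk_name = "<hana-log-disk>"           # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Request higher performance for a peak window (e.g. before a large SAP HANA data load).
poller = client.disks.begin_update(
    resource_group,
    disk_name,
    DiskUpdate(disk_iops_read_write=20000, disk_m_bps_read_write=600),
)
disk = poller.result()
print(disk.disk_iops_read_write, disk.disk_m_bps_read_write)
```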

Unofficial SAP on Azure podcast
#165 - The one with Configuration Editor with SAP Deployment Automation Framework (Kimmo Forss & Hemanth Damecharla) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Oct 27, 2023 55:43


In episode 165 of our SAP on Azure video podcast we talk about how SAP HANA on Azure Large Instances will be retired by 30 June 2025 and the transition to Virtual Machines, Configuring Azure NetApp Files (ANF) Application Volume Group (AVG) for zonal SAP HANA deployment, Changes to the Azure reservation exchange policy, a look back at the ABAP Story, A Cool use of Open AI in Eclipse, Microsoft Business Applications Launch Event introduces wave of new AI-powered capabilities for Dynamics 365 and Power Platform, and Azure OpenAI powered SAP Self-Services. Then we take a closer look at SDAF, the SAP Deployment Automation Framework, which started as a nice tool to simplify the deployment of SAP systems in a consistent and repeatable way. In the meantime SDAF has grown into a huge project and is leveraged by the Azure Center for SAP solutions and by lots of partners and customers. Kimmo Forss and Hemanth Damecharla show us the latest features, like the Configuration Editor and the integration with Azure DevOps. Find all the links mentioned here: https://www.saponazurepodcast.de/episode165 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure #SDAF #Infrastructure ## Summary created by AI Key Topics: * News from the weekend: The team shared some news about Azure VMs, reserved instances, exchange policy, and SAP on Azure video podcast. * Introduction of Kimmo and Hemanth: Kimmo and Hemanth are part of the SAP on Azure development team based in Helsinki, Finland, working on the SAP Deployment Automation Framework. * Overview of SAP Deployment Automation Framework: The framework is an open source tool that helps customers deploy and configure SAP systems on Azure using Terraform and Ansible, with a modular and extensible design. * New features and demos of the framework: The team showed some new features and demos of the framework, such as the configuration editor, the application service, the private DNS, the application manifest, and the software download. * Questions and feedback: The team answered some questions and feedback from the audience, such as the support for different SAP versions and platforms, the open source contribution model, the validation and testing process, and the deployment time and complexity.

Enterprise Tech Spotlight with Keith Hales
How InfoSystems Can Provide Reliability and Scalability for Your Business with IBM Power Systems

Enterprise Tech Spotlight with Keith Hales

Play Episode Listen Later Oct 3, 2023 13:39


IBM Power Systems have proven themselves in a wide range of applications, from analytics and AI to databases like Oracle and DB2, as well as backup and recovery. One of the standout applications is SAP HANA, which thrives on IBM Power Systems' robust architecture. In this most recent episode of Enterprise Tech Spotlight, Keith Hales, John Von Mann, and Ben Lucas discuss the reliability and scalability of IBM Power Systems, as well as how clients can partner with InfoSystems to get the solutions they need for their business. Listen to the most recent episode of Enterprise Tech Spotlight now. VISIT THE INFOSYSTEMS WEBSITE: https://infosystemsinc.com/ CHECK OUT INFOSYSTEMS CYBER: https://infosystemscyber.com INFOSYSTEMS ON LINKEDIN: https://www.linkedin.com/company/infosystems-inc- INFOSYSTEMS ON FACEBOOK: https://www.facebook.com/InfoSystems

SAP Developers
SAP Developer News September 14th, 2023

SAP Developers

Play Episode Listen Later Sep 14, 2023 7:56 Transcription Available


This episode covers Rapid Fire, What's new for SAP Integration Suite – August 2023, SAP TechEd, SAP Developer Challenge Week 2 and Not-to-be-missed Events for Developers using SAP HANA

SAP Learning Insights
SAP Skills with Lars Satow

SAP Learning Insights

Play Episode Listen Later Sep 4, 2023 22:17 Transcription Available


In this episode of the SAP Learning Insights podcast, David Chaviano interviews Lars Satow about SAP skills and how to pursue them. Lars, a member of the SAP Skills Transformation Program, discusses the importance of skills in today's job market and the difficulties companies face in finding workers with the right skills. He highlights the need for a unified language for SAP-related skills and the efforts being made to standardize and catalog these skills. Lars also mentions that SAP offers free learning journeys and other learning assets on learning.sap.com, to support skill development for everyone. And of course, at the end of this episode, Lars shares his final words of wisdom with you.

Enterprise Tech Spotlight with Keith Hales
Unveiling the Future of SAP HANA with IBM Power Systems

Enterprise Tech Spotlight with Keith Hales

Play Episode Listen Later Sep 4, 2023 19:36


Is SAP HANA the right fit for your business? In this episode of Enterprise Tech Spotlight, Keith Hales is joined by Bharvi Parikh and Paul McCann from IBM to talk about SAP HANA, a multi-model database that stores data in memory instead of keeping it on disk. Keith, Bharvi, and Paul explore IBM Power solutions and the differentiators between what IBM offers and other companies, and share some real-world examples of companies that currently run their SAP environment on IBM solutions. Listen to the most recent episode of Enterprise Tech Spotlight now. VISIT THE INFOSYSTEMS WEBSITE: https://infosystemsinc.com/ CHECK OUT INFOSYSTEMS CYBER: https://infosystemscyber.com INFOSYSTEMS ON LINKEDIN: https://www.linkedin.com/company/infosystems-inc- INFOSYSTEMS ON FACEBOOK: https://www.facebook.com/InfoSystems

Unofficial SAP on Azure podcast
#156 - The one with the AI SDK for ABAP (Gopal Nair) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Aug 18, 2023 41:37


In episode 156 of our SAP on Azure video podcast we talk about SAP on Azure NetApp Files Sizing Best Practices, SAP HANA Azure virtual machine storage configurations, NFS v4.1 volumes on Azure NetApp Files for SAP HANA, Announcing public preview of new Mv3 Medium Memory Virtual Machines, the SAP CDC Connector on Azure and Part 2 of the SAP S/4HANA Cloud ABAP Environment integration journey with Microsoft blog post, this time about Azure OpenAI & AI SDK for ABAP. Then we take a closer look at the Microsoft AI SDK for ABAP. A few months ago, we talked about a revolutionary new SDK in our news of the week section: the AI SDK for ABAP. This SDK empowers ABAP developers to consume Azure OpenAI services without leaving their ABAP code or having to know the intricate details of Azure OpenAI. I had the opportunity to present and demo the SDK with the German Speaking SAP User Group a few months back, which led to some exciting discussions with customers and actual follow-up projects. Today, we are thrilled to have the brain behind the AI SDK for ABAP with us: Gopal Nair, a Principal Software Engineer at Microsoft. We're really looking forward to Gopal introducing the topic and sharing some examples with us. In fact, he has so many examples that we're planning to do a few follow-up sessions with him, so stay tuned! (this intro was written by Azure OpenAI) Find all the links mentioned here: https://www.saponazurepodcast.de/episode156 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure #AzureOpenAI #OpenAI #ABAP #abapGit

Screaming in the Cloud
How Cloudflare is Working to Fix the Internet with Matthew Prince

Screaming in the Cloud

Play Episode Listen Later Aug 10, 2023 42:30


Matthew Prince, Co-founder & CEO at Cloudflare, joins Corey on Screaming in the Cloud to discuss how and why Cloudflare is working to solve some of the Internet's biggest problems. Matthew reveals some of his biggest issues with cloud providers, including the tendency to charge more for egress than ingress and the fact that the various clouds don't compete on a feature vs. feature basis. Corey and Matthew also discuss how Cloudflare is working to change those issues so the Internet is a better and more secure place. Matthew also discusses how transparency has been key to winning trust in the community and among Cloudflare's customers, and how he hopes the Internet and cloud providers will evolve over time.About MatthewMatthew Prince is co-founder and CEO of Cloudflare. Cloudflare's mission is to help build a better Internet. Today the company runs one of the world's largest networks, which spans more than 200 cities in over 100 countries. Matthew is a World Economic Forum Technology Pioneer, a member of the Council on Foreign Relations, winner of the 2011 Tech Fellow Award, and serves on the Board of Advisors for the Center for Information Technology and Privacy Law. Matthew holds an MBA from Harvard Business School where he was a George F. Baker Scholar and awarded the Dubilier Prize for Entrepreneurship. He is a member of the Illinois Bar, and earned his J.D. from the University of Chicago and B.A. in English Literature and Computer Science from Trinity College. He's also the co-creator of Project Honey Pot, the largest community of webmasters tracking online fraud and abuse.Links Referenced: Cloudflare: https://www.cloudflare.com/ Twitter: https://twitter.com/eastdakota TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the things we talk about here, an awful lot is cloud providers. There sure are a lot of them, and there's the usual suspects that you would tend to expect with to come up, and there are companies that work within their ecosystem. And then there are the enigmas.Today, I'm talking to returning guest Matthew Prince, Cloudflare CEO and co-founder, who… well first, welcome back, Matthew. I appreciate your taking the time to come and suffer the slings and arrows a second time.Matthew: Corey, thanks for having me.Corey: What I'm trying to do at the moment is figure out where Cloudflare lives in the context of the broad ecosystem because you folks have released an awful lot. You had this vaporware-style announcement of R2, which was an S3 competitor, that then turned out to be real. And oh, it's always interesting, when vapor congeals into something that actually exists. Cloudflare Workers have been around for a while and I find that they become more capable every time I turn around. You have Cloudflare Tunnel which, to my understanding, is effectively a VPN without the VPN overhead. And it feels that you are coming at building a cloud provider almost from the other side than the traditional cloud provider path. Is it accurate? Am I missing something obvious? 
How do you see yourselves?Matthew: Hey, you know, I think that, you know, you can often tell a lot about a company by what they measure and what they measure themselves by. And so, if you're at a traditional, you know, hyperscale public cloud, an AWS or a Microsoft Azure or a Google Cloud, the key KPI that they focus on is how much of a customer's data are they hoarding, effectively? They're all hoarding clouds, fundamentally. Whereas at Cloudflare, we focus on something of it's very different, which is, how effectively are we moving a customer's data from one place to another? And so, while the traditional hyperscale public clouds are all focused on keeping your data and making sure that they have as much of it, what we're really focused on is how do we make sure your data is wherever you need it to be and how do we connect all of the various things together?So, I think it's exactly right, where we start with a network and are kind of building more functions on top of that network, whereas other companies start really with a database—the traditional hyperscale public clouds—and the network is sort of an afterthought on top of it, just you know, a cost center on what they're delivering. And I think that describes a lot of the difference between us and everyone else. And so oftentimes, we work very much in conjunction with. A lot of our customers use hyperscale public clouds and Cloudflare, but increasingly, there are certain applications, there's certain data that just makes sense to live inside the network itself, and in those cases, customers are using things like R2, they're using our Workers platform in order to be able to build applications that will be available everywhere around the world and incredibly performant. And I think that is fundamentally the difference. We're all about moving data between places, making sure it's available everywhere, whereas the traditional hyperscale public clouds are all about hoarding that data in one place.Corey: I want to clarify that when you say hoard, I think of this, from my position as a cloud economist, as effectively in an economic story where hoarding the data, they get to charge you for hosting it, they get to charge you serious prices for egress. I've had people mishear that before in a variety of ways, usually distilled down to, “Oh, and their data mining all of their customers' data.” And I want to make sure that that's not the direction that you intend the term to be used. If it is, then great, we can talk about that, too. I just want to make sure that I don't get letters because God forbid we get letters for things that we say in the public.Matthew: No, I mean, I had an aunt who was a hoarder and she collected every piece of everything and stored it somewhere in her tiny little apartment in the panhandle of Florida. I don't think she looked at any of it and for the most part, I don't think that AWS or Google or Microsoft are really using your data in any way that's nefarious, but they're definitely not going to make it easy for you to get it out of those places; they're going to make it very, very expensive. And again, what they're measuring is how much of a customer's data are they holding onto whereas at Cloudflare we're measuring how much can we enable you to move your data around and connected wherever you need it. And again, I think that that kind of gets to the fundamental difference between how we think of the world and how I think the hyperscale public clouds thing of the world. 
And it also gets to where are the places where it makes sense to use Cloudflare, and where are the places that it makes sense to use an AWS or Google Cloud or Microsoft Azure.Corey: So, I have to ask, and this gets into the origin story trope a bit, but what radicalized you? For me, it was the realization one day that I could download two terabytes of data from S3 once, and it would cost significantly more than having Amazon.com ship me a two-terabyte hard drive from their store.Matthew: I think that—so Cloudflare started with the basic idea that the internet's not as good as it should be. If we all knew what the internet was going to be used for and what we're all going to depend on it for, we would have made very different decisions in how it was designed. And we would have made sure that security was built in from day one, we would have—you know, the internet is very reliable and available, but there are now airplanes that can't land if the internet goes offline, they are shopping transactions shut down if the internet goes offline. And so, I don't think we understood—we made it available to some extent, but not nearly to the level that we all now depend on it. And it wasn't as fast or as efficient as it possibly could be. It's still very dependent on the geography of where data is located.And so, Cloudflare started out by saying, “Can we fix that? Can we go back and effectively patch the internet and make it what it should have been when we set down the original protocols in the '60s, '70s, and '80s?” But can we go back and say, can we build a new, sort of, overlay on the internet that solves those problems: make it more secure, make it more reliable, make it faster and more efficient? And so, I think that that's where we started, and as a result of, again, starting from that place, it just made fundamental sense that our job was, how do you move data from one place to another and do it in all of those ways? And so, where I think that, again, the hyperscale public clouds measure themselves by how much of a customer's data are they hoarding; we measure ourselves by how easy are we making it to securely, reliably, and efficiently move any piece of data from one place to another.And so, I guess, that is radical compared to some of the business models of the traditional cloud providers, but it just seems like what the internet should be. And that's our North Star and that's what just continues to drive us and I think is a big reason why more and more customers continue to rely on Cloudflare.Corey: The thing that irks me potentially the most in the entire broad strokes of cloud is how the actions of the existing hyperscalers have reflected mostly what's going on in the larger world. Moore's law has been going on for something like 100 years now. And compute continues to get faster all the time. Storage continues to cost less year over year in a variety of ways. But they have, on some level, tricked an entire generation of businesses into believing that network bandwidth is this precious, very finite thing, and of course, it's going to be ridiculously expensive. You know, unless you're taking it inbound, in which case, oh, by all means back the truck around. It'll be great.So, I've talked to founders—or prospective founders—who had ideas but were firmly convinced that there was no economical way to build it. Because oh, if I were to start doing real-time video stuff, well, great, let's do the numbers on this. 
And hey, that'll be $50,000 a minute, if I read the pricing page correctly, it's like, well, you could get some discounts if you ask nicely, but it doesn't occur to them that they could wind up asking for a 98% discount on these things. Everything is measured in a per gigabyte dimension and that just becomes one of those things where people are starting to think about and meter something that—from my days in data centers where you care about the size of the pipe and not what's passing through it—to be the wrong way of thinking about things.Matthew: A little of this is that everybody is colored by their experience of dealing with their ISP at home. And in the United States, in a lot of the world, ISPs are built on the old cable infrastructure. And if you think about the cable infrastructure, when it was originally laid down, it was all one-directional. So, you know, if you were turning on cable in your house in a pre-internet world, data fl—Corey: Oh, you'd watch a show and your feedback was yelling at the TV, and that's okay. They would drop those packets.Matthew: And there was a tiny, tiny, tiny bit of data that would go back the other direction, but cable was one-directional. And so, it actually took an enormous amount of engineering to make cable bi-directional. And that's the reason why if you're using a traditional cable company as your ISP, typically you will have a large amount of download capacity, you'll have, you know, a 100 megabits of down capacity, but you might only have a 10th of that—so maybe ten megabits—of upload capacity. That is an artifact of the cable system. That is not just the natural way that the internet works.And the way that it is different, that wholesale bandwidth works, is that when you sign up for wholesale bandwidth—again, as you phrase it, you're not buying this many bytes that flows over the line; you're buying, effectively, a pipe. You know, the late Senator Ted Stevens said that the internet is just a series of tubes and got mocked mercilessly, but the internet is just a series of tubes. And when Cloudflare or AWS or Google or Microsoft buys one of those tubes, what they pay for is the diameter of the tube, the amount that can fit through it. And the nature of this is you don't just get one tube, you get two. One that is down and one that is up. And they're the same size.And so, if you've got a terabit of traffic coming down and zero going up, that costs exactly the same as a terabit going up and zero going down, which costs exactly the same as a terabit going down and a terabit going up. It is different than your home, you know, cable internet connection. And that's the thing that I think a lot of people don't understand. And so, as you pointed out, but the great tragedy of the cloud is that for nothing other than business reasons, these hyperscale public cloud companies don't charge you anything to accept data—even though that is actually the more expensive of the two operations for that because writes are more expensive than reads—but the inherent fact that they were able to suck the data in means that they have the capacity, at no additional cost, to be able to send that data back out. 
And so, I think that, you know, the good news is that you're starting to see some providers—so Cloudflare, we've never charged for egress because, again, we think that over time, bandwidth prices go to zero because it just makes sense; it makes sense for ISPs, it makes sense for connectiv—to be connected to us.And that's something that we can do, but even in the cases of the cloud providers where maybe they're all in one place and somebody has to pay to backhaul the traffic around the world, maybe there's some cost, but you're starting to see some pressure from some of the more forward-leaning providers. So Oracle, I think has done a good job of leaning in and showing how egress fees are just out of control. But it's crazy that in some cases, you have a 4,000x markup on AWS bandwidth fees. And that's assuming that they're paying the same rates as what we would get at Cloudflare, you know, even though we are a much smaller company than they are, and they should be able to get even better prices.Corey: Yes, if there's one thing Amazon is known for, it as being bad at negotiating. Yeah, sure it is. I'm sure that they're just a terrific joy to be a vendor to.Matthew: Yeah, and I think that fundamentally what the price of bandwidth is, is tied very closely to what the cost of a port on a router costs. And what we've seen over the course of the last ten years is that cost has just gone enormously down where the capacity of that port has gone way up and the just physical cost, the depreciated cost that port has gone down. And yet, when you look at Amazon, you just haven't seen a decrease in the cost of bandwidth that they're passing on to customers. And so, again, I think that this is one of the places where you're starting to see regulators pay attention, we've seen efforts in the EU to say whatever you charge to take data out is the same as what you should charge it to put data in. We're seeing the FTC start to look at this, and we're seeing customers that are saying that this is a purely anti-competitive action.And, you know, I think what would be the best and healthiest thing for the cloud by far is if we made it easy to move between various cloud providers. Because right now the choice is, do I use AWS or Google or Microsoft, whereas what I think any company out there really wants to be able to do is they want to be able to say, “I want to use this feature at AWS because they're really good at that and I want to use this other feature at Google because they're really good at that, and I want to us this other feature at Microsoft, and I want to mix and match between those various things.” And I think that if you actually got cloud providers to start competing on features as opposed to competing on their overall platform, we'd actually have a much richer and more robust cloud environment, where you'd see a significantly improved amount of what's going on, as opposed to what we have now, which is AWS being mediocre at everything.Corey: I think that there's also a story where for me, the egress is annoying, but so is the cross-region and so is the cross-AZ, which in many cases costs exactly the same. And that frustrates me from the perspective of, yes, if you have two data centers ten miles apart, there is some startup costs to you in running fiber between them, however you want to wind up with that working, but it's a sunk cost. 
But at the end of that, though, when you wind up continuing to charge on a per gigabyte basis to customers on that, you're making them decide on a very explicit trade-off of, do I care more about cost or do I care more about reliability? And it's always going to be an investment decision between those two things, but when you make the reasonable approach of well, okay, an availability zone rarely goes down, and then it does, you get castigated by everyone for, “Oh it even says in their best practice documents to go ahead and build it this way.” It's funny how a lot of the best practice documents wind up suggesting things that accrue primarily to a cloud provider's benefit. But that's the way of the world I suppose.I just know, there's a lot of customer frustration on it and in my client environments, it doesn't seem to be very acute until we tear apart a bill and look at where they're spending money, and on what, at which point, the dawning realization, you can watch it happen, where they suddenly realize exactly where their money is going—because it's relatively impenetrable without that—and then they get angry. And I feel like if people don't know what they're being charged for, on some level, you've messed up.Matthew: Yeah. So, there's cost to running a network, but there's no reason other than limiting competition why you would charge more to take data out than you would put data in. And that's a puzzle. The cross-region thing, you know, I think where we're seeing a lot of that is actually oftentimes, when you've got new technologies that come out and they need to take advantage of some scarce resource. And so, AI—and all the AI companies are a classic example of this—right now, if you're trying to build a model, an AI model, you are hunting the world for available GPUs at a reasonable price because there's an enormous scarcity of them.And so, you need to move from AWS East to AWS West, to AWS, you know, Singapore, to AWS in Luxembourg and bounce around to find wherever there's GPU availability. And then that is crossed against the fact that these training datasets are huge. You know, I mean, they're just massive, massive, massive amounts of data. And so, what that is doing is you're having these AI companies that are really seeing this get hit in the face, where they literally can't get the capacity they need because of the fact that whatever cloud provider in whatever region they've selected to store their data isn't able to have that capacity. And so, they're getting hit not only by sort of a double whammy of, “I need to move my data to wherever there's capacity. And if I don't do that, then I have to pay some premium, an ever-escalating price for the underlying GPUs.” And God forbid, you have to move from AWS to Google to chase that.And so, we're seeing a lot of companies that are saying, “This doesn't make any sense. We have this enormous training set. If we just put it with Cloudflare, this is data that makes sense to live in the network, fundamentally.” And not everything does. Like, we're not the right place to store your long-term transaction logs that you're only going to look at if you get sued. There are much better places, much more effective places do it.But in those cases where you've got to read data frequently, you've got to read it from different places around the world, and you will need to decrease what those costs of each one of those reads are, what we're seeing is just an enormous amount of demand for that. 
And I think these AI startups are really just a very clear example of what company after company after company needs, and why R2 has had—which is our zero egress cost S3 competitor—why that is just seeing such explosive growth from a broad set of customers.Corey: Because I enjoy pushing the bounds of how ridiculous I can be on the internet, I wound up grabbing a copy of the model, the Llama 2 model that Meta just released earlier this week as we're recording this. And it was great. It took a little while to download here. I have gigabit internet, so okay, it took some time. But then I wound up with something like 330 gigs of models. Great, awesome.Except for the fact that I do the math on that and just for me as one person to download that, had they been paying the listed price on the AWS website, they would have spent a bit over $30, just for me as one random user to download the model, once. If you can express that into the idea of this is a model that is absolutely perfect for whatever use case, but we want to have it run with some great GPUs available at another cloud provider. Let's move the model over there, ignoring the data it's operating on as well, it becomes completely untenable. It really strikes me as an anti-competitiveness issue.Matthew: Yeah. I think that's it. That's right. And that's just the model. To build that model, you would have literally millions of times more data that was feeding it. And so, the training sets for that model would be many, many, many, many, many, many orders of magnitude larger in terms of what's there. And so, I think the AI space is really illustrating where you have this scarce resource that you need to chase around the world, you have these enormous datasets, it's illustrating how these egress fees are actually holding back the ability for innovation to happen.And again, they are absolutely—there is no valid reason why you would charge more for egress than you do for ingress other than limiting competition. And I think the good news, again, is that's something that's gotten regulators' attention, that's something that's gotten customers' attention, and over time, I think we all benefit. And I think actually, AWS and Google and Microsoft actually become better if we start to have more competition on a feature-by-feature basis as opposed to on an overall platform. The choice shouldn't be, “I use AWS.” And any big company, like, nobody is all-in only on one cloud provider. Everyone is multi-cloud, whether they want to be or not because people end up buying another company or some skunkworks team goes off and uses some other function.So, you are across multiple different clouds, whether you want to be or not. But the ideal, and when I talk to customers, they want is, they want to say, “Well, you know that stuff that they're doing over at Microsoft with AI, that sounds really interesting. I want to use that, but I really like the maturity and robustness of some of the EC2 API, so I want to use that at AWS. And Google is still, you know, the best in the world at doing search and indexing and everything, so I want to use that as well, in order to build my application.” And the applications of the future will inherently stitch together different features from different cloud providers, different startups.And at Cloudflare, what we see is our, sort of, purpose for being is how do we make that stitching as easy as possible, as cost-effective as possible, and make it just make sense so that you have one consistent security layer? 
And again, we're not about hoarding the data; we're about connecting all of those things together. And again, you know, from the last time we talked to now, I'm actually much more optimistic that you're going to see, kind of, this revolution where egress prices go down, you get competition on feature-by-features, and that's just going to make every cloud provider better over the long-term.

Corey: This episode is sponsored in part by Panoptica. Panoptica simplifies container deployment, monitoring, and security, protecting the entire application stack from build to runtime. Scalable across clusters and multi-cloud environments, Panoptica secures containers, serverless APIs, and Kubernetes with a unified view, reducing operational complexity and promoting collaboration by integrating with commonly used developer, SRE, and SecOps tools. Panoptica ensures compliance with regulatory mandates and CIS benchmarks for best practice conformity. Privacy teams can monitor API traffic and identify sensitive data, while identifying open-source components vulnerable to attacks that require patching. Proactively addressing security issues with Panoptica allows businesses to focus on mitigating critical risks and protecting their interests. Learn more about Panoptica today at panoptica.app.

Corey: I don't know that I would trust you folks to the long-term storage of critical data or the store of record on that. You don't have the track record on that as a company the way that you do for being the network interchange that makes everything just work together. There are areas where I'm thrilled to explore and see how it works, but it takes time, at least from the sensible infrastructure perspective of trusting people with track records on these things. And you clearly have the network track record on these things to make this stick. It almost—it seems unfair to you folks, but I view you as Cloudflare is a CDN, that also dabbles in a few other things here in there, though, increasingly, it seems it's CDN and security company are becoming synonymous.

Matthew: It's interesting. I remember—and this really is going back to the origin story, but when we were starting Cloudflare, you know, what we saw was that, you know, we watched as software—starting with companies like Salesforce—transition from something that you bought in the box to something that you bought as a service [into 00:23:25] the cloud. We watched as, sort of, storage and compute transition from something that you bought from Dell or HP to something that you rented as a service. And so the fundamental problem that Cloudflare started out with was if the software and the storage and compute are going to move, inherently the security and the networking is going to move as well because it has to be as a service as well, there's no way you can buy a, you know, Cisco firewall and stick it in front of your cloud service. You have to be in the cloud as well.

So, we actually started very much as a security company. And the objection that everybody had to us as we would sort of go out and describe what we were planning on doing was, “You know, that sounds great, but you're going to slow everything down.” And so, we became just obsessed with latency. And Michelle, my co-founder, and I were business students and we had an advisor, a guy named Tom [Eisenmann 00:24:26] in business school.
And I remember going in and that was his objection as well and so we did all this work to figure it out.

And obviously, you know, I'd say computer science, and anytime that you have a problem around latency or speed, caching is an obvious part of the solution to that. And so, we went in and we said, “Here's how we're going to do it: [unintelligible 00:24:47] all this protocol optimization stuff, and here's how we're going to distribute it around the world and get close to where users are. And we're going to use caching in the places where we can do caching.” And Tom said, “Oh, you're building a CDN.” And I remember looking at him and then I'm looking at Michelle. And Michelle is Canadian, and so I was like, “I don't know that I'm building a Canadian, but I guess. I don't know.”

And then, you know, we walked out in the hall and Michelle looked at me and she's like, “We have to go figure out what the CDN thing is.” And we had no idea what a CDN was. And even when we learned about it, we were like, that business doesn't make any sense. Like because again, the CDNs were the first ones to really charge for bandwidth. And so today, we have effectively built, you know, a giant CDN and are the fastest in the world and do all those things.

But we've always given it away basically for free because fundamentally, what we're trying to do is all that other stuff. And so, we actually started with security. Security is—you know, my—I've been working in security now for over 25 years and that's where my background comes from, and if you go back and look at what the original plan was, it was how do we provide that security as a service? And yeah, you need to have caching because caching makes sense. What I think is the difference is that in order to do that, in order to be able to build that, we had to build a set of developer tools for our own team to allow them to build things as quickly as possible.

And, you know, if you look at Cloudflare, I think one of the things we're known for is just the rapid, rapid, rapid pace of innovation. And so, over time, customers would ask us, “How do you innovate so fast? How do you build things fast?” And part of the answer to that, there are lots of ways that we've been able to do that, but part of the answer to that is we built a developer platform for our own team, which was just incredibly flexible, allowed you to scale to almost any level, took care of a lot of that traditional SRE functions just behind the scenes without you having to think about it, and it allowed our team to be really fast. And our customers are like, “Wow, I want that too.”

And so, customer after customer after customer after customer was asking and saying, you know, “We have those same problems. You know, if we're a big e-commerce player, we need to be able to build something that can scale up incredibly quickly, and we don't have to think about spinning up VMs or containers or whatever, we don't have to think about that. You know, our customers are around the world. We don't want to have to pick a region for where we're going to deploy code.” And so, where we built Cloudflare Workers for ourself first, customers really pushed us to make it available to them as well.

And that's the way that almost any good developer platform starts out. That's how AWS started.
That's how, you know, the Microsoft developer platform, and so the Apple developer platform, the Salesforce developer platform, they all start out as internal tools, and then someone says, “Can you expose this to us as well?” And that's where, you know, I think that we have built this. And again, it's very opinionated, it is right for certain applications, it's never going to be the right place to run SAP HANA, but the company that builds the tool [crosstalk 00:27:58]—

Corey: I'm not convinced there is a right place to run SAP HANA, but that's probably unfair of me.

Matthew: Yeah, but there is a startup out there, I guarantee you, that's building whatever the replacement for SAP HANA is. And I think it's a better than even bet that Cloudflare Workers is part of their stack because it solves a lot of those fundamental challenges. And that's been great because it is now allowing customer after customer after customer, big and large startups and multinationals, to do things that you just can't do with traditional legacy hyperscale public cloud. And so, I think we're sort of the next generation of building that. And again, I don't think we set out to build a developer platform for third parties, but we needed to build it for ourselves and that's how we built such an effective tool that now so many companies are relying on.

Corey: As a Cloudflare customer myself, I think that one of the things that makes you folks standalone—it's why I included security as well as CDN is one of the things I trust you folks with—has been—

Matthew: I still think CDN is Canadian. You will never see us use that term. It's like, Gartner was like, “You have to submit something for the CDN-like ser—” and we ended up, like, being absolute top-right in it. But it's a space that is inherently going to zero because again, if bandwidth is free, I'm not sure what—this is what the internet—how the internet should work. So yeah, anyway.

Corey: I agree wholeheartedly. But what I've always enjoyed, and this is probably going to make me sound meaner than I intend it to, it has been your outages. Because when computers inherently at some point break, which is what they do, you personally and you as a company have both taken a tone that I don't want to say gleeful, but it's sort of the next closest thing to it regarding the postmortem that winds up getting published, the explanation of what caused it, the transparency is unheard of at companies that are your scale, where usually they want to talk about these things as little as possible. Whereas you've turned these into things that are educational to those of us who don't have the same scale to worry about but can take things from that are helpful. And that transparency just counts for so much when we're talking about things as critical as security.

Matthew: I would definitely not describe it as gleeful. It is incredibly painful. And we, you know, we know we let customers down anytime we have an issue. But we tend not to make the same mistake twice. And the only way that we really can reliably do that is by being just as transparent as possible about exactly what happened.

And we hope that others can learn from the mistakes that we made. And so, we own the mistakes we made and we talk about them and we're transparent, both internally but also externally when there's a problem. And it's really amazing to just see how much, you know, we've improved over time.
So, it's actually interesting that, you know, if you look across—and we measure, we test and measure all the big hyperscale public clouds, what their availability and reliability is and measure ourselves against it, and across the board, second half of 2021 and into the first half of 2022 was the worst for every cloud provider in terms of reliability. And the question is why?

And the answer is, Covid. I mean, the answer to most things over the last three years is in one way, directly or indirectly, Covid. But what happened over that period of time was that in April of 2020, internet traffic and traffic to our service and everyone who's like us doubled over the course of a two-week period. And there are not many utilities that you can imagine that if their usage doubles, that you wouldn't have a problem. Imagine the sewer system all of a sudden has twice as much sewage, or the electrical grid has twice as much demand, or the freeways have twice as many cars. Like, things break down.

And especially the European internet came incredibly close to just completely failing at that time. And we all saw where our bottlenecks were. And what's interesting is actually the availability wasn't so bad in 2020 because people were—they understood the absolute critical importance that while we're in the middle of a pandemic, we had to make sure the internet worked. And so, we—there were a lot of sleepless nights, there's a—and not just with us, but with every provider that's out there. We were all doing Herculean tasks in order to make sure that things came online.

By the time we got to the sort of the second half of 2021, what everybody did, Cloudflare included, was we looked at it, and we said, “Okay, here were where the bottlenecks were. Here were the problems. What can we do to rearchitect our systems to do that?” And one of the things that we saw was that we effectively treated large data centers as one big block, and if you had certain pieces of equipment that failed in a way, that you would take that entire data center down and then that could have cascading effects across traffic as it shifted around across our network. And so, we did the work to say, “Let's take that one big data center and divide it effectively into multiple independent units, where you make sure that they're all on different power suppliers, you make sure they're all in different [crosstalk 00:32:52]”—

Corey: [crosstalk 00:32:51] harder than it sounds. When you have redundant things, very often, the thing that takes you down the most is the heartbeat that determines whether something next to it is up or not. It gets a false reading and suddenly, they're basically trying to clobber each other to death. So, this is a lot harder than it sounds like.

Matthew: Yeah, and it was—but what's interesting is, like, we took it all that into account, but the act of fixing things, you break things. And that was not just true at Cloudflare. If you look across Google and Microsoft and Amazon, everybody, their worst availability was second half of 2021 or into 2022. But both internally and externally, we talked about the mistakes we made, we talked about the challenges we had, we talked about—and today, we're significantly more resilient and more reliable because of that.
And so, transparency is built into Cloudflare from the beginning.

The earliest story of this, I remember, there was a 15-year-old kid living in Long Beach, California who bought my social security number off of a Russian website that had hacked a bank that I'd once used to get a mortgage. He then used that to redirect my cell phone voicemail to a voicemail box he controlled. He then used that to get into my personal email. He then used that to find a zero-day vulnerability in Google's corporate email where he could privilege-escalate from my personal email into Google's corporate email, which is the provider that we use for our email service. And then he used that as an administrator on our email at the time—this is back in the early days of Cloudflare—to get into another administration account that he then used to redirect one of Cloudflare's customers to a website that he controlled.

And thankfully, it wasn't, you know, the FBI or the Central Bank of Brazil, which were all Cloudflare customers. Instead, it was 4chan because he was a 15-year-old hacker kid. And we fixed it pretty quickly and nobody knew who Cloudflare was at the time. And so potential—

Corey: The potential damage that could have been caused at that point with that level of access to things, like, that is such a ridiculous way to use it.

Matthew: And—yeah [laugh]—my temptation—because it was embarrassing. He took a bunch of stuff from my personal email and he put it up on a website, which just to add insult to injury, was actually using Cloudflare as well. And I wanted to sweep it under the rug. And our team was like, “That's not the right thing to do. We're fundamentally a security company and we need to talk about when we make mistakes on security.” And so, we wrote a huge postmortem on, “Here's all the stupid things that we did that caused this hack to happen.” And by the way, it wasn't just us. It was AT&T, it was Google. I mean, there are a lot of people that ended up being involved.

Corey: It builds trust with that stuff. It's painful in the short term, but I believe with the benefit of hindsight, it was clearly the right call.

Matthew: And it was—and I remember, you know, pushing ‘publish' on the blog post and thinking, “This is going to be the end of the company.” And quite the opposite happened, which was all of a sudden, we saw just an incredible amount of people who signed up the next day saying, “If you're going to be that transparent about something that was incredibly embarrassing when you didn't have to be, then that's the sort of thing that actually makes me trust that you're going to be transparent in the future.” And I think learning that lesson early on has been just an incredibly valuable lesson for us and made us the company that we are today.

Corey: A question that I have for you about the idea of there being no reason to charge in one direction but not the other. There's something that I'm not sure that I understand on this. If I run a website, to use your numbers of a terabit out—because it's a web server—and effectively nothing in—because it's a webserver; other than the request, nothing really is going to come in—that ingress bandwidth becomes effectively unused and also free. So, if I have another use case where I'm paying for it anyway, if I'm primarily caring about an outward direction, sure, you can send things in for free. Now, there's a lot of nuance that goes into that. But I'm curious as to what the—is there a fundamental misunderstanding in that analysis of the bandwidth market?

Matthew: No.
And I think that's exactly, exactly right. And it's actually interesting. At Cloudflare, our infrastructure team—which is the one that manages our connections to the outside world, manages the hardware we have—meets on a quarterly basis with our product team. It's called the Hot and Cold Meeting.

And what they do is they go over our infrastructure, and they say, “Okay, where are we hot? Where do we have not enough capacity?” If you think of any given server, an easy way to think of a server is that it has, sort of, four resources that are available to it. This is, kind of, a vast simplification, but one is the connectivity to the outside world, both transit in and out. The second is the—

Corey: Otherwise it's just a complicated space heater.

Matthew: Yeah [laugh]. The other is the CPU. The other is the longer-term storage. We use only SSDs, but sort of, you know, hard drives or SSD storage. And then the fourth is the short-term storage, or RAM that's in that server.

And so, at any given moment, there are going to be places where we are running hot, where we have a sort of capacity level that we're targeting and we're over that capacity level, but we're also going to be running cold in some of those areas. And so, the infrastructure team and the product team get together and the product team has requests on, you know, “Here's some more places we would be great to have more infrastructure.” And we're really good at deploying that when we need to, but the infrastructure team then also says, “Here are the places where we're cold, where we have excess capacity.” And that turns into products at Cloudflare. So, for instance, you know, the reason that we got into the zero-trust space was very much because we had all this excess capacity.

We have 100 times the capacity of something like Zscaler across our network, and we can add that—that is primar—where most of our older products are all about outward traffic, the zero-trust products are all about inward traffic. And the reason that we can do everything that Zscaler does, but for, you know, a much, much, much more affordable price, is we're going to basically just layer that on the network that already exists. The reason we don't charge for the bandwidth behind DDoS attacks is DDoS attacks are always about inbound traffic and we have just a ton of excess capacity around that inbound traffic. And so, that unused capacity is a resource that we can then turn into products, and very much that conversation between our product team and our infrastructure team drives how we think about building new products. And we're always trying to say how can we get as much utilization out of every single piece of equipment that we run everywhere in the world.

The way we build our network, we don't have custom machines or different networks for every product. We build all of our machines—they come in generations. So, we're on, I think, generation 14 of servers where we spec a server and it has, again, a certain amount of each of those four [bits 00:39:22] of capacity. But we can then deploy that server all around the world, and we're buying many, many, many of them at any given time so we can get the best cost on that.
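A minimal, hypothetical sketch of the "hot and cold" review described here: classify each of the four per-server resources as hot (over a utilization target, needs capacity) or cold (spare capacity that could back new products). The utilization numbers and the 70% target below are invented for illustration only; they are not Cloudflare figures.

```python
# Toy hot/cold capacity report over the four resources mentioned above.
# All values are made up for illustration only.

TARGET_UTILIZATION = 0.70  # assumed planning threshold

fleet_utilization = {       # fraction of capacity currently in use (hypothetical)
    "network": 0.55,
    "cpu": 0.82,
    "disk": 0.40,
    "ram": 0.68,
}

for resource, used in fleet_utilization.items():
    if used > TARGET_UTILIZATION:
        status = "HOT - needs more capacity"
    else:
        status = "cold - spare capacity for new products"
    print(f"{resource:>7}: {used:.0%} used (target {TARGET_UTILIZATION:.0%}) -> {status}")
```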
But our product team is very much in constant communication with our infrastructure team and saying, “What more can we do with the capacity that we have?” And then we pass that on to our customers by adding additional features that work across our network and then doing it in a way that's incredibly cost-effective.

Corey: I really want to thank you for taking the time to, basically once again, suffer slings and arrows about networking, security, cloud, economics, and so much more. If people want to learn more, where's the best place for them to find you?

Matthew: You know, used to be an easy question to answer because it was just, you know, go on Twitter and find me but now we have all these new mediums. So, I'm @eastdakota on Twitter. I'm eastdakota.com on Bluesky. I'm @real_eastdakota on Threads. And so, you know, one way or another, if you search for eastdakota, you'll come across me somewhere out there in the ether.

Corey: And we will, of course, put links to that in the show notes. Thank you so much for your time. I appreciate it.

Matthew: It's great to talk to you, Corey.

Corey: Matthew Prince, CEO and co-founder of Cloudflare. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment that I will of course not charge you inbound data rates on.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Unofficial SAP on Azure podcast
#154 - The one with Oracle multi-volume performance with Azure NetApp Files (Geert van Teylingen) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Aug 4, 2023 54:18


In episode 154 of our SAP on Azure video podcast we talk about Automated registration of SAP systems at scale with Azure Center for SAP solutions, Private Preview for DR for Shared Disks with Azure Site Recovery, SAP OData APIs via Azure API Management and link it with Azure Static Web App, Manufacturing Ontologies - with SAP, availability of ABAP Platform Trial and Warmest Welcome to our new SAP Champions! Then we take a look at Azure NetApp Files. We mainly talk about this in the context of SAP HANA on Azure. But as we briefly touched on earlier as well, ANF also plays a huge role with Oracle workloads. So today we want to take another look at updates on Azure NetApp Files, what's new with "ANF and SAP HANA" and then look at large scale deployments with Oracle on Azure. For this I am happy to have Geert with us again on the show. Find all the links mentioned here: https://www.saponazurepodcast.de/episode154 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure #ANF #Oracle

SAP Developers
SAP Developer News August 3rd, 2023

SAP Developers

Play Episode Listen Later Aug 3, 2023 5:35 Transcription Available


This episode covers the ABAP Platform Trial, SAP BTP ABAP Environment – Manage System Hibernation, UX Practices and Mobile App Design, UI5 TypeScript Tutorial and a new SAP CodeJam topic: Machine Learning with SAP HANA.

Data Protection Gumbo
205: Plumbing the Depths of Unstructured Data - Superna

Data Protection Gumbo

Play Episode Listen Later Jul 18, 2023 28:00


Alex Hesterberg, CEO at Superna embarks on a captivating exploration of the expanding world of unstructured data and data security trends. The discussion gets fervid as Alex enlightens us about the amplified use of unstructured data platforms in recovery and resiliency along with its application in tier zero platforms like SAP HANA. His insights about data integration into business intelligence tools and the potential of AI and ML technologies like ChatGPT are truly riveting.

Unofficial SAP on Azure podcast
#148 - The one with Visual Studio and Power Platform (Robert Boban) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Jun 23, 2023 41:50


In episode 148 of our SAP on Azure video podcast we talk about Azure Premium SSD v2 Disk Storage in more regions, SQL Server 2022 is released for SAP NetWeaver, Data protection with BlueXP backup and recovery with SAP HANA on Azure NetApp Files, SAP Start/Stop Automation using Azure Center for SAP Solutions, Principal propagation in a multi-cloud solution between Microsoft Azure and SAP, Part VII Invoke RFCs and BAPIs with Kerberos delegation from Microsoft Power Platform, SAP BTP ABAP Environment integration journey with Microsoft Excel and GitHub Copilot Early Adoption phase at SAP. Then Robert Boban walks us through a code-first scenario and how Visual Studio can be used to publish and connect to Power Platform, and leverage Azure API Management to create connectors directly from the pro-code environment. https://www.saponazurepodcast.de/episode148 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure #PowerPlatform #VisualStudio

The Cloud Pod
209: The Cloud Pod Whispers Sweet Nothings To Our Code (**why wont you work**)

The Cloud Pod

Play Episode Listen Later Apr 28, 2023 44:35


Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan and Jonathan are your hosts this week as we discuss all the latest news and announcements in the world of the cloud and AI - including Amazon's new AI, Bedrock, as well as new AI tools from other developers. We also address the new updates to AWS's CodeWhisperer, and return to our Cloud Journey Series where we discuss *insert dramatic music* - Kubernetes!  Titles we almost went with this week: ⭐I'm always Whispering to My Code as an Individual

The Cloud Pod
202: The Bing is dead! Long live the Bing

The Cloud Pod

Play Episode Listen Later Mar 10, 2023 35:56


On this episode of The Cloud Pod, the team talks about the possible replacement of CEO Sundar Pichai after Alphabet stock went up by just 1.9%, the new support feature of Amazon EKS for Kubernetes, three partner specializations just released by Google, and how clients have responded to the AI Powered Bing and Microsoft Edge. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights

Unofficial SAP on Azure podcast
#134 - The one with SAP Business One (Moshe Nachman & Richard Duffy) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Mar 10, 2023 2:09


In episode 134 of our SAP on Azure video podcast we talk about using Azure NetApp Files AVG for SAP HANA to deploy HANA with multiple partitions, Secure Communication to Fetch SAP NetWeaver SAP Control and SAP RFC Metric Data in Azure Monitor for SAP, ChatGPT Integration with SAP S/4HANA, Build solutions faster with Microsoft Power Platform and next-generation AI and Introducing Microsoft Dynamics 365 Copilot. Then we talk about a question that we got in several comments: SAP Business One. Especially how you can extend and integrate your SAP Business One system with Azure services -- like Logic Apps. So today we have Richard Duffy and Moshe Nachman joining us. Both of them are experts in SAP Business One and Moshe has published some great blog posts on how to do exactly this integration and extension https://www.saponazurepodcast.de/episode134 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure

Unofficial SAP on Azure podcast
(fixed) #134 - The one with SAP Business One (Moshe Nachman & Richard Duffy) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Mar 10, 2023 44:32


In episode 134 of our SAP on Azure video podcast we talk about using Azure NetApp Files AVG for SAP HANA to deploy HANA with multiple partitions, Secure Communication to Fetch SAP NetWeaver SAP Control and SAP RFC Metric Data in Azure Monitor for SAP, ChatGPT Integration with SAP S/4HANA, Build solutions faster with Microsoft Power Platform and next-generation AI and Introducing Microsoft Dynamics 365 Copilot. Then we talk about a question that we got in several comments: SAP Business One. Especially how you can extend and integrate your SAP Business One system with Azure services -- like Logic Apps. So today we have Richard Duffy and Moshe Nachman joining us. Both of them are experts in SAP Business One and Moshe has published some great blog posts on how to do exactly this integration and extension https://www.saponazurepodcast.de/episode134 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure

Hablamos de SAP
034 - Píldoras de SAP

Hablamos de SAP

Play Episode Listen Later Feb 20, 2023 67:55


Today I talk with David Almagro, someone I met thanks to another person who has been on the show, Enric Castellá. I like the content he publishes and the way he approaches it: short videos, in clear, accessible language. He shares his view of the Basis side of things, now that the cloud "bogeyman" is arriving… it looks like he won't be running out of work; that said, you have to retrain and work in a different way. We talk, for example, about RISE… A single contract? What about the SLAs? Do the numbers add up for me? Does SAP do everything for me? He tells us about his experience working with SAP HANA and gives us a series of interesting tips for facing the migrations that are coming our way… Greenfield? Brownfield? He also explains how FUEs work, based on his own experience; well worth keeping in mind if you are about to renegotiate your licenses. As always, the best thing is to listen to it...

Unofficial SAP on Azure podcast
#125 - The one with Securing SAP with Azure Services (Evren Buyruk) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Jan 6, 2023 83:40


In episode 125 of our SAP on Azure video podcast we talk about Looking back on 2022, Azure Backup for SAP HANA, Shortage of Security experts, ABAP Cloud, SAP Business One and Microsoft Azure and how the Power Platform can help you modernize your mainframe application. Then we take another deep dive on SAP and security. Bad actors are getting more aggressive and it is easier to attack a company using different tools and services. That's why it is so important to also use services to protect your IT systems -- not only SAP. Evren Buyruk joins us again to talk about different offerings on Azure that help you protect your SAP workload. https://www.saponazurepodcast.de/episode125 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #SAPonAzure

Unofficial SAP on Azure podcast
#123 - The one with working with partners (Robert Boban) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Dec 16, 2022 30:30


In episode 123 of our SAP on Azure video podcast we talk about Cross Zonal Restore of Azure Virtual Machines from Azure Backup, Azure NetApp Files cross-zone replication, Optimize HANA deployments with Azure NetApp Files application volume group for SAP HANA and Coca-Cola United automating over 50,000 orders in complex SAP invoicing process with Microsoft Power Platform. "Partners make more possible". From the very beginning partners were and are a central pillar for Microsoft's success. If we look at SAP and Microsoft, this is even more important. We now have a lot of pure SAP partners that have acquired the required Microsoft knowledge to help customers. Robert Boban talks about the mentoring and continuous coaching concept that enables our partners to work with SAP and Microsoft. https://www.saponazurepodcast.de/episode123 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #SAPonAzure

Unofficial SAP on Azure podcast
#117 - The one with integrating SAP Digital Supply Chain with Microsoft (Domnic Benedict & Bartosz Jarkowski) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Nov 4, 2022 47:44


In episode 117 of our SAP on Azure video podcast we talk about latest improvements to efficiency in Microsoft datacenters, sustainability guidance for Azure Kubernetes Services, using availability zones for HA in Azure NetApp Files, Premium SSD v2 storage configurations for SAP HANA, SAP events on Azure Event Grid, GA of Power Apps Ideas and Calling the Microsoft Graph on behalf of the SAP-authenticated user. Then we have Domnic Benedict from SAP and Bartosz Jarkowski joining us to talk about integrating the SAP Digital Supply Chain with Microsoft services. The integration with Microsoft Teams via Adaptive Cards notifies suppliers and also enables collaboration on key information from IBP in Teams. The usage of Azure Machine Learning for SAP Digital Supply Chains allows customers to use individual ML models to do forecasting with IBP. https://www.saponazurepodcast.de/episode117 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/

Let’s Talk Cloud Networking - Unscripted
EP33 - Trillions of Dollars moving to the cloud and businesses leveraging multiple-clouds " - advice and tips from the most elite AWS blackbelts EVGENY VAGANOV & ABDUL RAHIM “

Let’s Talk Cloud Networking - Unscripted

Play Episode Listen Later Jul 1, 2022 63:22


Podcast 33 - “Evgeny Vaganov and Abdul Rahim have ~12 years of combined experience at #awscloud and have worked with thousands of customers moving to cloud. In episode 33, we asked them to share their cloud journey, lessons learned and advice for customers. Some key points:
- Many cloud deployments start as non-mission-critical, in a single cloud, and organically grow into a giant mess that is hard to untangle, with several design flaws, lack of visibility, security holes and operational/governance nightmares.
- CSPs by design focus less on networking features as they have to prioritize durability, performance, availability and ensure the environment is secure. Pace of innovation is slow as they try to recreate 30 years' worth of capabilities in a cloud way, which will take time.
- Every single customer they met was either multi-cloud already or looking to extend into other clouds. A single CSP alone CANNOT meet the requirements of enterprises.
- A key point they love about Aviatrix is its "end to end focus b/w apps and users".
- Aviatrix has put the focus and control back on networking and security, and Aviatrix ACE (#aviatrixace) is the most beautiful opportunity. Think CCIE in 1995 but much bigger in terms of impact, as cloud transformation will be 10x bigger and 100x faster. [Note: Rahim has 3x CCIEs]
- Industry clouds are becoming more prominent, with many vendors offering "Specialty as a Service - SaaS" on top of multiple CSPs' infra, which is like a "utility" model. Think Splunk, Snowflake, SAP HANA, Netflix all becoming Over the Top [OTT] providers over multiple CSPs. It will become a more common trend, and many CSPs may look to acquire certain businesses just for their vertical expertise as well [like Oracle/Cerner and Goldman announcing their own financial cloud].
- Aviatrix is a perfect fit for industry clouds... a cookie cutter approach to offer their software in a secure, consistent manner on top of any cloud and intelligently connecting to end consumers. Revenue is directly proportional to how fast they onboard customers and expand in a consistent manner.
Both Evgeny and Rahim offered a 1:1 consulting session for any customers looking for advice. Reach out directly or contact Aviatrix info@aviatrix.com. Podcast link here. Hope you will enjoy. https://lnkd.in/geQG2zX6 --- Send in a voice message: https://anchor.fm/netjoints/message

Unofficial SAP on Azure podcast
#97 - The one with the Health & Inventory Check for SAP (Jitendra Singh & Ross Sponholtz) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Jun 17, 2022 50:02


In episode 97 of our SAP on Azure video podcast we talk about a new video on Microsoft Purview and SAP, a detailed recovery guide for SAP HANA and Azure NetApp Files, scale-up high-availability on disaster region architecture, Customer engagement initiative for SAP Private Link, and Martin's and Will's presentation at the Integrate conference in London. Then Ross and Jitendra join us to talk about the Health and Inventory checks for SAP, part of the AzSAP Tools & Framework. The VM/OS health check ensures that setup and operation are done in a recommended way, while the inventory check provides an "early-watch-kind of report" over your SAP landscape on Azure. https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/main/Tools%26Framework https://www.saponazurepodcast.de/episode097

Engineering Kiosk
#22 NoSQL: ACID, BASE, Ende einer Ära Teil 2

Engineering Kiosk

Play Episode Listen Later Jun 7, 2022 60:36


Besides relational databases there is a whole other world: NoSQL. But what does NoSQL actually stand for? No SQL? Not Only SQL? What is the story behind the hype? Why was this kind of database invented? What are they good for? Do NoSQL databases also follow the ACID concept? What is eventual consistency? And what kind of databases are Neo4J, M3, Cassandra, and Memcached? An episode full of buzzwords … let's hope for a bingo.
Bonus: Why Wolfgang doesn't drive a Manta and whether Andy will soon be driving a Ferrari to go shopping.
Feedback to stehtisch@engineeringkiosk.dev or via Twitter to https://twitter.com/EngKiosk
Links:
ACID: https://de.wikipedia.org/wiki/ACID
BASE: https://db-engines.com/de/article/BASE
CAP theorem: https://de.wikipedia.org/wiki/CAP-Theorem
Eventual consistency: https://de.wikipedia.org/wiki/Konsistenz_(Datenspeicherung)#Verteilte_Systeme
Michael Stonebraker / The End of an Architectural Era (It's Time for a Complete Rewrite): http://nms.csail.mit.edu/~stavros/pubs/hstore.pdf
MongoDB: https://www.mongodb.com/
Presto: https://prestodb.io/
SAP HANA: https://www.sap.com/germany/products/hana.html
Redis: https://redis.io/
Neo4J: https://neo4j.com/
M3: https://m3db.io/
InfluxDB: https://www.influxdata.com/
VictoriaMetrics: https://victoriametrics.com/
Cassandra: https://cassandra.apache.org/
Memcached: https://memcached.org/
MySQL: https://www.mysql.com/de/
MySQL Memcached Plugin: https://dev.mysql.com/doc/refman/5.6/en/innodb-memcached.html
Chapter markers:
(00:00:00) Intro
(00:00:53) Wolfgang's car, the relief package in Germany
(00:03:23) Today's topic: NoSQL databases and CO2 savings through database optimizations
(00:07:20) What is different from episode 19 (databases), and is NoSQL even still a topic?
(00:08:39) What do we mean by the term NoSQL and where does it actually come from?
(00:15:58) Tip: for side projects, scale vertically rather than horizontally
(00:16:50) NoSQL: more specialized solutions with a focus on simplicity and ease of use
(00:18:38) Do you still need database administrators (DBAs) today?
(00:21:13) The job of the classic system administrator is still relevant
(00:23:15) Are there really no database schemas in the NoSQL world?
(00:27:23) Schema-less options in relational databases, and offloading work to the database or the software
(00:30:53) NoSQL softened the ACID properties, and why ACID is a drawback for scaling
(00:33:28) The NoSQL acronym BASE
(00:36:15) The client has to use the database properly to get ACID guarantees
(00:41:35) What does NoSQL actually mean? No SQL? Not Only SQL?
(00:43:38) In-memory databases and what SAP has to do with them
(00:48:02) What kind of database is Neo4J and which use case does it cover?
(00:50:49) What kind of database is M3 and which use case does it cover?
(00:53:06) What kind of database is Cassandra and which use case does it cover?
(00:54:20) What kind of database is Memcached and which use case does it cover?
(00:58:44) Outro
Hosts:
Wolfgang Gassler (https://twitter.com/schafele)
Andy Grunwald (https://twitter.com/andygrunwald)
Engineering Kiosk Podcast: questions to stehtisch@engineeringkiosk.dev or via Twitter to https://twitter.com/EngKiosk

openSAP Invites
Episode 23: Learn How SUSE Linux Enterprise Server Can Help Move SAP Workloads to the Public Cloud

openSAP Invites

Play Episode Listen Later Apr 20, 2022 44:05 Transcription Available


Learn about SUSE Linux Enterprise Server, the leading Linux platform for hosting SAP workloads, including SAP HANA, SAP NetWeaver and SAP S/4HANA.

SAP Cloud Platform Podcast
Episode 81 - Build and run containerized business apps on Kubernetes!!

SAP Cloud Platform Podcast

Play Episode Listen Later Mar 24, 2022 28:32


In this episode of SAP Integration & Extension Talk, March 2022, we deep dive into a conversation with a Development Architect from the SAP BTP Runtimes & Core Stakeholder Engagement team, where we start with understanding the differences between Kyma and the SAP BTP Kyma runtime. Then we talk about how businesses can benefit from building and running containerized business applications on Kubernetes. We also discuss what's new in the SAP BTP Kyma runtime and where we can learn more and get hands-on knowledge.

The Guiding Voice
Managing Databases in the CLOUD | Pritam Sahoo | #TGV184

The Guiding Voice

Play Episode Listen Later Jan 12, 2022 27:29


Genre: Cloud Services, Career Development. Cloud services are services provided by certain companies which allow remote access to data over the internet. These services are very important as many companies have branches overseas, which makes it difficult for employees to collaborate and work on certain projects. One of the emerging providers of reliable cloud resources is Google Cloud, which will be discussed in detail in this episode.
In the episode:
Start 0:00:00
Pritam's career so far and top 3 things that helped him in his career. 0:02:55
What are different types of databases and how do they help the businesses? 0:04:22
What does Google Cloud offer to meet various DB requirements? 0:06:07
Data Warehousing options available on Google Cloud. 0:09:21
Google Cloud BigTable 0:12:56
How to get started with Database Migrations for homogeneous and heterogeneous scenarios? 0:15:51
Rapid fire questions 0:19:10
What would be your advice for those aspiring to make it big in their careers? 0:23:21
Trivia on Websites 0:25:48
About the guest: Pritam has 15 years of IT industry experience, with specialization in Cloud (IaaS, PaaS), and pre-sales experience in various geographies and market segments. His specialities include Google Cloud, AWS, Oracle Public Cloud, PaaS, IaaS, Hybrid Cloud, Cloud Security, Oracle Identity & Access Management, Oracle Database Technology related products, SAP Databases, SAP HANA.
Important Links:
Google Cloud Databases: https://cloud.google.com/products/databases
Relational Databases: Cloud SQL https://cloud.google.com/sql and Cloud Spanner https://cloud.google.com/spanner
Non-relational Databases: BigTable https://cloud.google.com/bigtable and Firestore https://cloud.google.com/firestore
Relational DB Migrations, i.e. Database Migration Service: https://cloud.google.com/database-migration
Serverless Multicloud Data Warehouse Service on Google Cloud: https://cloud.google.com/bigquery/
Connect with Pritam on LinkedIn: https://www.linkedin.com/in/pritam-sahoo-77a438a/
Naveen Samala: https://www.linkedin.com/mwlite/in/naveensamala
Sudhakar Nagandla: https://www.linkedin.com/in/nvsudhakar
Watch the video interview on YouTube: https://youtu.be/PO7Kk_yNkzM

SAP Learning Insights
SAP HANA – with Mark Green

SAP Learning Insights

Play Episode Listen Later Sep 22, 2021 34:07


In this episode, David Chaviano talks to Mark Green. Mark is an SAP instructor and learning content developer focusing on the SAP HANA learning portfolio. In addition, he is an active moderator of the SAP HANA Modeling Learning Room on SAP Learning Hub. Mark gives insights into how you can start learning about SAP HANA and SAP HANA modeling, the difference between SAP HANA and SAP S/4HANA, how to make the best use of the SAP HANA Modeling Learning Room and how to prepare to get certified in that topic area.

The Business of Open Source
Discussing the Latest Cloud Trends with Cloud Comrade Co-founder Andy Waroma

The Business of Open Source

Play Episode Listen Later Aug 12, 2020 24:50


Highlights from this episode include:
Key market drivers that are causing Cloud Comrade's clients to containerize applications — including the role that the global pandemic is playing.
The pitfalls of approaching cloud migration with a cost-first strategy, and why Andy doesn't believe in this approach.
Common misconceptions that can arise when comparing cloud TCO to on-premise infrastructure.
How today's enterprises tend to view cloud computing versus cloud-native. Andy also mentions a key requirement that companies have to have when integrating cloud services.
Andy's thoughts on build versus buy when integrating cloud services at the enterprise level.
Why cloud migration is a relatively safe undertaking for companies because it's easy to correct mistakes.
Why businesses need to re-think AI and to be more realistic in terms of what can actually be automated.
Andy's must-have engineering tool, which may surprise you.
Links:
Cloud Comrade LinkedIn: https://www.linkedin.com/company/cloud-comrade/
Follow Andy on Twitter: @andywaroma
Connect with Andy on LinkedIn: https://www.linkedin.com/in/andyw/
Transcript

Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.

Emily: Welcome to The Business of Cloud Native. I'm Emily Omier, your host, and today I'm here with Andy Waroma. Andy, I just wanted to start with having you introduce yourself.

Andy: Yeah, hi. Thanks, Emily, for having me on your podcast. My name is Andy Waroma, and I'm based in Singapore, but originally from Finland. I've been [unintelligible] in Singapore for about 20 years, and for 11 years I spent with a company called SAP focusing on business software applications. And then more recently, about six years ago, I co-founded together with my ex-colleague from SAP, a company called Cloud Comrade, and we have been running Cloud Comrade now for six years and Cloud Comrade focuses on two things: number one, on cloud migrations; and number two, on cloud managed services across the Southeast Asia region.

Emily: What kind of things do you help companies understand when you're helping with cloud migrations? Is this like, like, a lift and shift? To what extent are you helping them change the architecture of their applications?

Andy: Good question. So, typically, if you look at the Southeast Asian market, we are probably anywhere between one to two years behind that of the US market. And I always like to say that the benefit that we have in Southeast Asia is that we have a time machine at our disposal. So, whatever has happened in the US in the past 18 months or so, it's going to be happening also in Singapore and Southeast Asia.
And for the first three to four years of this business, we saw a lot of lift and shift migrations, but more recently, we have been asked to go and containerize applications to microservices, revamp applications from a monolithic approach to a much more flexible and cloud-native approach, and we just see those requirements increasing as companies understand what kind of innovation they can do on different cloud platforms.

Emily: And what do you think is driving, for your clients, this desire to containerize applications?

Andy: Well, if you asked me three months ago, I probably would have said it's about innovation, and business advantage, and getting ahead in the market, and investing in the future. Now, with the global pandemic situation, I would say that most companies are looking at two things: they're looking at cost savings, and they are also looking at automation. And I think cost savings is quite obvious; most companies need to know how they can reduce their IT expenditure, how they can move from CAPEX to OPEX, how they can be targeting their resources up and down depending on the business demand that they have. And at the same time, they're also not looking to hire a lot of new people into their internal IT organization. So, therefore, most of our customers want to see their applications be as automated as possible. And of course, microservices, CI/CD pipelines, and everything else helps them to achieve that somewhat. But first and foremost, of course, it's about all services that Cloud provides in general. And then once they have been moving some of those applications and getting positive experiences, that's where we typically see the phase two kicking in, going into cloud-native microservices, containers, Kubernetes, Docker, and so forth.

Emily: And do you think when companies are going into this, thinking, “Oh, I'm going to really reduce my costs.” Do you think they're generally successful?

Andy: I don't think in a way that they think they are. So, especially if I'm looking at the Southeast Asian markets: Singapore, Malaysia, Thailand, Philippines, Indonesia, and perhaps other countries like Vietnam, Myanmar, and Cambodia, it's a very cost-conscious market, and I always, also like to say that when we go into a meeting, the first question that we get from the customers is, “How much?” It is not even what are we going to be delivering, but how much it's going to cost them. That's the first gate of assessment. So, it's very much of an on-premise versus cloud comparison in the beginning.

And I think if companies go in with that type of a mindset, that's not necessarily the winning strategy for them. What they will come to know after a while is that, for example, setting up disaster recovery systems in an on-premise environment, especially in a separate location, is extremely expensive, and doing something like that on the Cloud is going to be very cost-efficient. And that's when they start seeing cost savings. But typically, what they will start seeing on Cloud is a process cost-saving, so how they can do things faster, quicker, and be more flexible in terms of responding to end-user demands.

Emily: At the beginning of the process, how much do you think your customers generally understand about how different the cost structure is going to be?

Andy: So, we have more than 200 customers, and we have done more than 500 projects over the six years, and there's a vast range of customers.
We have done work with companies with a few people; we have done companies with Fortune 10 organizations, and everything in between, in all kinds of different industries: manufacturing, finance, insurance, public sector, industrial level things, nonprofits, research organizations. So, we can't really say that each customer is the same. There are customers who are very sophisticated and they know exactly what they want when going to a cloud platform, but then there are, of course, many other customers who need to be advised much more in the beginning, and that's where we typically have tools and processes in place where we can assess what is the right proposed strategy for our customers. And then we discuss these ideas in our workshops, and then let the customer decide what will be the best way for them forward, [unintelligible] cost, timelines, and then also potential product risks.

Emily: Are there any common themes around things that companies tend to not understand correctly, or misconceptions that, let's say, just come up often; maybe not all the time, but frequently?

Andy: Yeah. I think the most common misconception is that we work a lot with AWS, Microsoft Azure, Google Cloud, Alibaba Cloud, and other cloud platform providers. And typically, when customers are looking to migrate across, they might have virtualized their on-premise environment, and they're looking at the virtual machine to virtual machine cost comparison: how much does it cost them to run one virtual machine on-premise versus running it on cloud. And when they do that, usually the TCO calculations will start breaking down, because the customer's not taking into consideration all the security that they get with these cloud platforms, the network aspects of it, the compliance aspects, like for example, compliance certifications, regulatory aspects, and all of these things. So, if the customer was to do a total TCO of what they're getting on cloud versus what they're getting on their on-premise environment, then they would actually see the huge cost-saving benefits that they are getting. But, at the moment, many of the customers just look at virtual machine to virtual machine comparison, which in our opinion, is wrong.

Emily: So, it can be really hard for companies to do this, sort of, apples to apples comparison because they're not taking everything into account?

Andy: Yeah, to be fair to them, many times you only start getting some of those benefits of Cloud if you completely decommission your on-premise data center, but if you still have the on-premise data center costs and you have the cloud costs, then you can't fully exploit the benefits of Cloud.

Emily: Do you see any common misconceptions related to how you would have answered three months ago, with regard to innovation, or agility?

Andy: I think we have seen a huge influx of projects coming to us right now, and I see that a lot of companies who have been sitting on the fence in the previous years, they have now realized that the economic situation is changing very, very rapidly. And it doesn't matter if they have been the market leader, or one of the challengers, they need to do something. And previously, many companies felt that the best approach would be to go to, maybe, one of the large consulting providers, and just get something from them regardless of the price, or alternatively, go and shop around with 10, 15, 20 different vendors and try and squeeze the best price out of them, and [unintelligible] the implementation at the lowest possible cost.
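As a purely illustrative sketch of the VM-to-VM versus total-TCO point above: every number below is invented and does not reflect real pricing from any provider or customer; it only shows how the two framings can point in opposite directions.

```python
# Hypothetical monthly costs (USD). A naive VM-to-VM check compares only the
# first item on each side; a total-TCO view sums everything.

monthly_on_prem = {
    "vm_equivalent_hardware":  6000,
    "dr_site":                 4000,  # standby disaster-recovery location
    "security_and_compliance": 2500,
    "network_and_facilities":  1500,
    "ops_labor":               3000,
}

monthly_cloud = {
    "vm_instances":            7500,  # looks worse in a VM-to-VM comparison
    "dr_standby":              1200,  # mostly paid only when failed over
    "security_and_compliance":  500,  # largely bundled with the platform
    "network":                  800,
    "ops_labor":               1500,
}

print("Naive VM-to-VM:", monthly_on_prem["vm_equivalent_hardware"], "vs", monthly_cloud["vm_instances"])
print("Total TCO:     ", sum(monthly_on_prem.values()), "vs", sum(monthly_cloud.values()))
# The naive comparison favours on-prem (6000 vs 7500); the total TCO favours cloud (17000 vs 11500).
```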
I think now, with the pandemic and economic situation, many companies have been rationalizing their IT strategy and their IT spending, and what they are doing is trying to find the most suitable vendor given the current situation, forgo lengthy RFP processes, do something quickly on Cloud, do some kind of proof of concept, and if it looks good for them, just get going: don't waste internal resources, don't waste external resources, and try to achieve something quickly.

Emily: Do you feel like, in general, your clients tend to understand the difference between just cloud computing and cloud-native?

Andy: Um, I think that definition is probably hazy. I think it's probably somewhat hazy for me as well, [laughs], and I think in some cases the cloud platform providers want to keep it that way. Typically, the customers we are dealing with are relatively traditional enterprises, and we always advise them: rather than doing a lot of transformation on-premise first, why don't we just help you lift and shift and move to cloud, and once you're on Cloud, you can start doing all kinds of experimentation there with cloud-native services. And it's not just about moving these companies to cloud-native services, which have certain benefits, for example serverless technologies that reduce costs a lot; it also requires the companies to have a pretty clear understanding of their internal processes and, critically, an understanding of the roles and responsibilities within those organizations. When you start deploying software and applications in a different way onto a cloud platform, the technology is one part of it, but the process and the people are the other part, and if the organizations internally are not ready to change the process, they won't be able to reap the benefits of cloud-native services.

Emily: And how challenging do you think it is for companies to do the sort of organizational work needed to match this change in the way they work together with a different technology?

Andy: I think it is very challenging, and we see a big skill gap there. Many companies, even large companies, might have trouble attracting talent. Typically, one of the reasons is that even if a company wants to do something on Cloud, they usually choose only one cloud platform, and many IT people who want to get into this space don't want to focus on just one cloud platform. They want to be focused on multiple platforms, and they want to understand the differences between, let's say, AWS, Azure, Google Cloud, and other providers, and that's why people who are really skilled probably end up in consulting companies like ours and others. And that, I believe, is the whole reason for our existence: many companies cannot do these things in-house, and therefore they would like to outsource the initial work, and also the ongoing work in terms of managed services, to companies like us.

Emily: Do companies often ask you, “Should we build this internally? Should we outsource this for somebody else to do? Should we buy a tool that does this automatically?” Is that a question that they have, and how do you evaluate an answer?

Andy: [unintelligible] this depends a little bit on the industry.
So, for example, there are a number of so-called unicorns in the region, and these companies have very big in-house IT and development departments; they might have thousands of developers, and they feel that the best way for them to create differentiation in the market is to build things themselves. But we also deal a lot, like I mentioned before, with traditional manufacturing companies, for instance, and they will go for packaged applications such as SAP, Oracle, or .NET applications, and I think that's the best approach for them, rather than starting to build something in-house, which would not necessarily make sense given that these companies are not in the IT industry but in, let's say, the food and beverage industry. But they certainly build, and like to build, some services on top of those traditional software packages, such as analytics and analytical reports, for example predictive forecasting, and that's something that brings uniqueness to their organization and their processes, and competitive advantage. Those are mostly the things they're exploring on the Cloud, weighing whether it makes sense to buy something prebuilt, develop it in-house, or outsource it to another organization.

Emily: What do you think your clients tend to be pleasantly surprised about? What ends up being a lot easier than they expect?

Andy: First of all, IT has always been notorious for failed projects. There have been too many of those, and if you look at Cloud implementations, we really haven't had any. I think Cloud is safe in the sense that even if you have estimated the compute requirements, or the storage requirements, or whatever it might be at the initial phase, and it turns out that assessment is wrong, it's very easy to rectify that and change the services. If we brought up, let's say, an SAP system on a certain type of instance and two days later we find out that the instance was too big or too small, we can easily go and change that. Also, to take that SAP example, SAP customers typically want to move to SAP HANA at some point in time, but they're unsure whether that makes sense for them from an organizational and business point of view, and we tell them: why don't you just move to Cloud, and then we can provision new instances with SAP HANA. We can migrate your test environment to an SAP HANA environment, and then you can give it a go. So, Cloud allows you to do a lot of this experimentation, which leads back to the pleasant surprise that there aren't really any failed IT projects, or even if there are failures, they are quite minor compared to some of the IT failures they might have seen in the past.

Emily: Interesting. That's a pretty big upside.

Andy: It definitely is, and we have had some tough, difficult implementations in the past, big migration projects, but with the [unintelligible] from the customer side, and also internal resources, we have always been able to overcome them. And even if things get heated at some point in time, usually, at the end of the day, the final project sign-off meetings are relatively pleasant and joyous.

Emily: Do you see a major difference in strategy by industry? So, a technically sophisticated company versus a traditional manufacturing company, are they going to follow dramatically different strategies to build competitive advantage using technology?

Andy: Mmm.
So, we are really focused on the infrastructure, so when it comes to infrastructure, managed services, [security managed] services, I'm looking at the infrastructure layer, and typically the customers are either choosing to use something on Linux or choosing to use something on Windows. Maybe they use some server technologies and others, but from that point of view, you can't really tell, if you're looking at a customer's cloud console, whether they are a manufacturing company or a finance organization. So, in that sense, there is no difference.

Where the differences come in, though, is in, for example, financial industries like banks and insurance companies; that's where government rules, regulations, and compliance come in, and there's a lot of paperwork outside of the cloud implementation that needs to be done. There are a lot of guidelines we need to follow on how to provision the infrastructure to be compliant with, let's say, banking or insurance regulations. That's one industry which is different from the others. In healthcare in the US, for example, there is HIPAA compliance, so HIPAA-compliant applications need to be built that way from the ground up; that's also a significant difference. The third one is the public sector. Like I mentioned before, almost half of our business comes from the public sector, and that's where security and compliance are extremely important; it sometimes almost goes overboard to a certain extent, but I can understand why. The security and networking requirements are extreme, and that's probably, for me, the most different industry out of the whole customer base that we have. And then the last one is multinational companies. Multinational companies have to adhere to their own IT policies and audit requirements, and many times they also have a lot of relatively unique requirements they need to follow. Those are also different types of projects from the rest. Whether it's a media company, a manufacturing company, or a research organization, I don't see too much difference in the demands and requirements they put on the infrastructure platform.

Emily: And what would you say is something that your clients sometimes come to you looking for help with, that you still, kind of, don't have a great answer for?

Andy: Machine learning. AI. A lot of companies talk about it, a lot of companies have been pitched on it, and a lot of companies come to us and say they want to do a machine learning project, and I say, “Well, fine. But what do you want to get out of those machine learning algorithms, and what do you want to build?” And essentially, once you start drilling down, what they want is one red button that they press, and everything else happens magically underneath to resolve all of their business problems. And I think all of us know that that's not how machine learning works. That's not how AI works, or at least it doesn't work like that right now. So, the expectations that vendors are setting about what can be done with machine learning and AI need to be revised, I think. And customers also need to be a bit more realistic in their expectations of what can be automated and done through these new technologies versus what they've been doing in the past.

Emily: Basically, if you want machine learning, you have to know it won't answer all of your questions.
You have to know what you want answered; otherwise, you're not even asking the right questions.

Andy: Pretty much. And with machine learning, just like with analytics, you require vast data sets that need to be ready in the organization, and the data sets need to be relatively clean as well; otherwise, it's hard to come up with models that are accurate. The old-fashioned saying of garbage in, garbage out is very true, not just in the analytics space but also in the machine learning space.

Emily: Is there anything else that you'd like to add about the topic of cloud-native and what companies get out of it in a business sense?

Andy: I believe that we are living right now, in 2020, in a very unique time, and at some point, when we start looking back, maybe a few years from now, we'll see that cloud-native started rocketing and taking off like no one expected in this year; it's happening right now. Every company is focusing on it one way or the other, and I think those companies that take advantage of these difficult economic times will be the winners once we come out of this economic and pandemic situation.

Emily: I've actually heard that sentiment from many people, so that makes me think it's probably correct. Oh, my last question for you—I almost forgot—what is your can't-live-without engineering tool? So, is there a tool that you would find it difficult or impossible to do your job without?

Andy: It's still, after 30 years, email. So, from that point of view, you know—but if I'm looking at our team and what they're really focusing on right now, they're working on things like Terraform, or, on the AWS side, CloudFormation, tools that bring about automation. And the difficulty, sometimes, in Cloud is that there are so many different things that you can do. Customers have a perception that when they go to Cloud, they do a one-off project and then they're kind of done and dusted, and off they go to think about the next thing. But there are new services coming in all the time, and there is drift that happens on the cloud platform in terms of security, ports, network settings, instances that get brought up, and everything else, so you can only manage the cloud platform if you manage it [unintelligible] through automation. And I think that some of the key tools for us right now are starting to be Terraform from HashiCorp, CloudFormation from AWS, and similar kinds of services from Google and Microsoft that allow us to keep the platform in the kind of compliance the customer originally intended it to be in. [A minimal drift-check sketch follows the transcript.]

Emily: Where could listeners connect with you, or follow you?

Andy: We have a pretty strong presence on LinkedIn, so just search for Cloud Comrade on LinkedIn. That's where we post news about our industry, our company, our people, and our customers on a daily basis, and I look forward to connecting with your listeners as well.

Emily: Well, thank you so much, Andy. It was very interesting to learn about what you do, the types of companies that you work with, and what the Southeast Asian market looks like.

Andy: Thank you so much, Emily, for your invitation.

Emily: Thanks for listening. I hope you've learned just a little bit more about the business of cloud native. If you'd like to connect with me or learn more about my positioning services, look me up on LinkedIn: I'm Emily Omier, that's O-M-I-E-R, or visit my website, which is emilyomier.com.
Thank you, and until next time.

Announcer: This has been a HumblePod production. Stay humble.
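
As a side note to the cost discussion in the interview: below is a minimal sketch of the difference between a plain virtual-machine-to-virtual-machine comparison and a fuller TCO comparison. All figures are illustrative placeholders, not real pricing data or any vendor's calculator.

def monthly_tco(compute, storage, network=0.0, security=0.0,
                compliance=0.0, dr=0.0, ops_labor=0.0):
    # Total monthly cost of ownership for one workload, in arbitrary currency units.
    return compute + storage + network + security + compliance + dr + ops_labor

# "VM to VM" view: compute and storage only.
on_prem_naive = monthly_tco(compute=900, storage=200)
cloud_naive = monthly_tco(compute=1100, storage=250)

# Fuller view: add the items the naive comparison leaves out, such as security
# tooling, compliance work, a separate disaster-recovery site, and the staff
# time needed to operate all of it.
on_prem_full = monthly_tco(compute=900, storage=200, network=150, security=300,
                           compliance=200, dr=800, ops_labor=1200)
cloud_full = monthly_tco(compute=1100, storage=250, network=100, security=50,
                         compliance=20, dr=150, ops_labor=400)

print(f"VM-to-VM view: on-premise {on_prem_naive} vs cloud {cloud_naive}")
print(f"Full TCO view: on-premise {on_prem_full} vs cloud {cloud_full}")

On the naive view the cloud looks more expensive; once the bundled items are counted, the picture can flip, which is the point made in the interview about VM-to-VM comparisons being misleading.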
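
And as a companion to the automation and drift point at the end of the interview: a minimal sketch of a scheduled drift check built around Terraform's plan command. It assumes the terraform CLI is installed and the given directory holds an already-initialized configuration; the directory path and the alerting step are placeholders, not part of anything discussed in the episode.

import subprocess
import sys

def check_drift(workdir: str) -> bool:
    # terraform plan -detailed-exitcode returns 0 when the live platform matches
    # the configuration, 1 on error, and 2 when changes (i.e. drift) are detected.
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 1:
        raise RuntimeError("terraform plan failed:\n" + result.stderr)
    return result.returncode == 2

if __name__ == "__main__":
    workdir = sys.argv[1] if len(sys.argv) > 1 else "."
    if check_drift(workdir):
        # In practice this is where a team would send an alert or open a ticket.
        print("Drift detected: live settings no longer match the declared configuration.")
    else:
        print("No drift: the platform still matches the Terraform configuration.")

Run on a schedule, a check like this catches the kind of drift in ports, network settings, and instances described in the interview, without anyone having to eyeball the console.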

The Connected Enterprise Podcast
Chris Walker from HTRI

The Connected Enterprise Podcast

Play Episode Listen Later Mar 11, 2019 21:39


Based in the Houston, Texas region, HTRI is a world leader in heat exchange systems research, design, training, and testing software. For 55 years HTRI has collected and interpreted data, developed thermal process design and simulation software, delivered customized services and testing software, and trained its customers in their use. Chris Walker shares HTRI's transition to the SAP HANA in-memory database and the benefits this journey delivered to HTRI and its customers.

Azure Friday (HD) - Channel 9
SAP HANA infrastructure automation with Terraform and Ansible

Azure Friday (HD) - Channel 9

Play Episode Listen Later Nov 7, 2018


Page Bowers and Donovan Brown discuss how SAP customers are moving to Azure to take advantage of SAP-certified HANA virtual machines such as the Azure M-series. Learn how you can use Terraform and Ansible to speed up SAP HANA deployments on Azure to 30 minutes as opposed to hours or days.

Jump to: [01:48] Demo start

For more information:
  • Automated SAP Deployments in Azure Cloud (GitHub)
  • SAP on Azure
  • Automating SAP deployments in Microsoft Azure using Terraform and Ansible (blog post)
  • Create a free account (Azure)

Follow @donovanbrown, @AzureFriday, @Azure

Azure Friday (Audio) - Channel 9
SAP HANA infrastructure automation with Terraform and Ansible

Azure Friday (Audio) - Channel 9

Play Episode Listen Later Nov 7, 2018


Page Bowers and Donovan Brown discuss how SAP customers are moving to Azure to take advantage of SAP-certified HANA virtual machines such as the Azure M-series. Learn how you can use Terraform and Ansible to speed up SAP HANA deployments on Azure to 30 minutes as opposed to hours or days.

Jump to: [01:48] Demo start

For more information:
  • Automated SAP Deployments in Azure Cloud (GitHub)
  • SAP on Azure
  • Automating SAP deployments in Microsoft Azure using Terraform and Ansible (blog post)
  • Create a free account (Azure)

Follow @donovanbrown, @AzureFriday, @Azure