Alphabet (Google's parent company) made the largest acquisition in the company's history last week. It is prepared to pay no less than 32 billion dollars for the young and relatively small Wiz. Cloud security is hot, that much is clear. But is it also a sensible acquisition? We discuss it in this episode of Techzine Talks.

Last year Alphabet was already in the market for Wiz, but at the time the founders and owners of the Israeli cloud security company considered 23 billion insufficient. That is remarkable in itself, because Wiz's revenue was, and still is, not that high in absolute terms. The company expects to reach an ARR (Annual Recurring Revenue) of 1 billion dollars in the course of this year. That is certainly impressive given Wiz's young age (5 years old), but an acquisition bid of 23 and now 32 billion dollars looks very steep at first glance.

Wiz is, however, quite a remarkable company, one that has worked its way into the top rankings of the CNAPP (Cloud-Native Application Protection Platform) world in a very short time. It has done so primarily with the Cloud Security Posture Management (CSPM) part of its offering, which, according to anyone you speak with or anything you read about it, is very good indeed. In any case, it has been the main reason for the company's rapid rise. Alongside Wiz Cloud there are now also Wiz Code and Wiz Defend, which focus on code security and on detection and response respectively.

What does Alphabet (Google) want with Wiz?

The key question, of course, is what Google Cloud Platform's (GCP's) plans are for Wiz. Will it fully integrate Wiz into GCP and offer it as a unique part of the Google public cloud? Or will it let Wiz operate more or less independently and thereby pursue multi-cloud security? The latter has always been Wiz's goal. It would be unfortunate for Wiz's customers if that were to change now.
In any case, the acquisition of Wiz makes GCP's security offering considerably more complete. With Chronicle and Mandiant it already had SIEM, threat intelligence, and incident response; now cloud security is added.

Weren't Alphabet and Google supposed to be getting smaller?

At 32 billion, the Wiz deal is, as noted, by far the largest acquisition Alphabet has ever made. The previous record was the 12.5 billion it paid for Motorola. For Wiz's customers, let us hope this acquisition ends better than the Motorola one, which Google handled rather poorly: it sold that unit on to Lenovo fairly quickly, at a hefty loss. Besides being a very large acquisition, it is also a rather curious one in our view. Google is under fire on other fronts because of its dominance: there are calls for Google to divest its Chrome business, and relations between the EU and Google are far from smooth, particularly around Search. Against that backdrop, it is remarkable that another part of the same company is making an enormous acquisition. Listen to Techzine Talks to learn everything about this mega-acquisition and what it means for Google Cloud, the market, and customers in the market for cloud security.
Key Moments:
A journey from intern to CEO (05:10)
Encouraging a harmonized relationship between humans and AI (09:58)
Why embracing stress can drive urgency and effective change (17:18)
Generative AI's impact on the skills landscape (30:39)
Fostering a data-driven company culture (36:41)
Embrace change, and quickly (40:25)

Key Quotes:
“AI does amazing things, like summarizations and semantic search. Humans do amazing things like curation of knowledge, making sure it's accurate, connecting the dots, and creating relationships. So bringing the power of humans-in-the-loop, especially given a broader trust deficit, felt like the right thing to do at this point in time.”
“I think ultimately what guides us is we want to be useful to our users and our customers. That's the guiding light. Because why do we exist as an organization or a community? We should all just go home. If we don't actually have a mission and purpose that adds value, then we don't have a purpose. So the question is, what is that? What is the highest purpose?”
“When you think about the future of software development, there's a lot of doomsdayers about job losses. I think it's going to be the opposite. I think AI reduces the barrier to entry. I think a lot of people will be “developers”, even though they may be doing very different things.”

Mentions: WeAreDevelopers World Congress 2023, OverflowAI, Overflow API, Stack Overflow for Teams, Amp It Up (book)

Bio: Prashanth Chandrasekar is Chief Executive Officer of Stack Overflow and is responsible for driving Stack Overflow's overall strategic direction and results. Prashanth is a proven technology executive with extensive experience leading and scaling high-growth global organizations. Previously, he served as Senior Vice President & General Manager of Rackspace's Cloud & Infrastructure Services portfolio of businesses, including the Managed Public Clouds, Private Clouds, Colocation and Managed Security businesses.
Before that, Prashanth held a range of senior leadership roles at Rackspace, including Senior Vice President & General Manager of Rackspace's high-growth global business focused on the world's leading public clouds, including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and Alibaba Cloud, which became the fastest-growing business in Rackspace's history. Prior to joining Rackspace, Prashanth was a Vice President at Barclays Investment Bank, focused on providing Strategic and Mergers & Acquisitions (M&A) advice for clients in the Technology, Media and Telecom (TMT) industries. Hear more from Cindi Howson here. Sponsored by ThoughtSpot.
In this episode, Tony Safoian interviews Mario Ciabarra, the CEO and founder of Quantum Metric. They discuss Mario's background and journey as an entrepreneur, as well as the evolution of Quantum Metric and its product. They highlight the importance of understanding and listening to customers to improve digital experiences. They also introduce the concept of generative AI and how it is being implemented in the Quantum Metric platform. The conversation explores the potential of generative AI in improving customer experiences and driving business growth. It highlights the importance of real-time data analysis and the ability to understand and address customer friction points. The use of Google Cloud Platform (GCP) and Gemini Pro is discussed as a powerful solution for leveraging generative AI. The conversation also emphasizes the value of partnerships and the role of data in determining winners and losers in the market. The future of the industry is predicted to involve faster disruption cycles and a focus on having the right data at the right moment. Don't miss this insightful episode filled with personal anecdotes and cutting-edge technological discussions. Tune in now, and remember to LIKE, SHARE, & SUBSCRIBE for more!
Podcast Library | YouTube Playlist
Host: Tony Safoian | CEO at SADA
Guest: Mario Ciabarra | CEO at Quantum Metric
To learn more, visit our website here: SADA.com
Welcome to Day 3 of our Google Cloud Platform (GCP) series! In this video, we delve into the crucial aspects of Load Balancers, Understanding Data States, and Exploring Block and File Storage. We also provide an in-depth exploration of Google Cloud VPC – Virtual Private Cloud.
Welcome to our in-depth guide on Identity and Access Management (IAM) in the Google Cloud Platform (GCP). In this video, we will break down everything you need to know about IAM, its importance, and how to effectively use it to secure your cloud environment.
Welcome to your ultimate guide on Google Cloud Platform (GCP)! Whether you're a developer, a business owner, or just curious about cloud solutions, this video will provide a comprehensive overview of what GCP has to offer and how it can transform your digital strategy.
Kapil Thangavelu, CTO and co-founder of Stacklet.io and the leading force behind an open source project called Cloud Custodian, talks about his journey in open source, beginning with his transition from Windows to dabbling in Linux, marking his shift toward open source development. He talks about the creation and development of Cloud Custodian while at Capital One, highlighting how the cloud management tool has grown to support multiple cloud providers, including Microsoft Azure, Google Cloud Platform (GCP), Oracle, and Tencent Cloud. He gives credit to the tool's vast community of over 400 contributors and thousands of users, and attributes its success to welcoming contributions, not only in the form of code but also in essential non-code contributions like documentation. He ends the conversation by addressing the future of open source, expressing concern over changes in licenses and the tailoring of open source projects to fit a more commercial, rather than community-based, landscape. 00:00 Introduction 02:05 The Genesis of Cloud Custodian 05:48 Expanding Cloud Custodian to Multiple Platforms 06:21 The Versatility and Use Cases of Cloud Custodian 14:11 The Challenges and Future of Open Source 17:28 Closing Remarks and Reflections Resources: Cloud Custodian - State of the Mop Guest: Kapil Thangavelu is a Co-Founder and CTO at Stacklet, building products to help companies be well managed in the cloud. He started his career in open source working in the Zope and Plone (CMS) communities as a consultant. Over the last decade he's spent time building open source projects and accelerating cloud innovation at Canonical, Capital One, and Amazon.
The cloud, or cloud computing, is a model for delivering computing services over the Internet. Computing resources, such as servers, storage, networking, and software, are provided as an on-demand service, which means users only pay for the resources they use. Google offers a wide range of cloud services, including IaaS, PaaS, and SaaS. These services can be used for a variety of purposes, including:

Enterprise applications: Google Cloud Platform (GCP) offers a wide range of services for building and running enterprise applications, such as servers, storage, networking, databases, and analytics.
Productivity and collaboration: Google Workspace (formerly G Suite) offers a suite of cloud productivity applications including Gmail, Calendar, Drive, Docs, Sheets, Slides, Forms, and Sites.
Data analytics: Google Cloud Dataproc, Dataflow, Data Fusion, BigQuery, and Machine Learning Engine offer a variety of services for processing, analyzing, and visualizing data.

Google Cloud usage is growing rapidly across all sectors. Companies are adopting Google Cloud to save costs, improve agility and flexibility, and gain access to new technologies.

The future of Google Cloud
Google Cloud usage is expected to keep growing in the coming years. The main trends driving this growth are:

The expansion of the Internet of Things (IoT): IoT will connect ever more devices to the Internet, which will require a large amount of computing resources.
Artificial intelligence (AI): AI requires a great deal of computing power, which makes the cloud an ideal platform for its development and deployment.
Machine learning (ML): ML is a branch of AI used to train models that learn from data.
The cloud provides a scalable and efficient platform for training ML models.

Advantages of Google Cloud
Google Cloud offers a number of advantages for businesses, including:
Cost savings: Google Cloud allows companies to reduce hardware, software, and staffing costs.
Agility and flexibility: Google Cloud allows companies to scale their computing resources quickly and easily.
Access to new technologies: Google Cloud allows companies to adopt new technologies without having to invest in their own infrastructure.

Disadvantages of Google Cloud
Google Cloud also presents some challenges, including:
Security: data security is a major concern for companies using Google Cloud.
Connectivity: companies need a reliable Internet connection to use Google Cloud.
Vendor dependence: companies depend on Google to provide their cloud services, which can pose a risk if Google stops operating.

Recommended books: https://infogonzalez.com/libros --- Send in a voice message: https://podcasters.spotify.com/pod/show/infogonzalez/message
Cloud computing delivers IT services in which resources, such as storage, processing, and applications, are made available across a network, often the Internet, on a pay-as-you-go basis. It allows users to access and use shared resources, such as servers, storage, and applications, over the Internet. This eliminates the need for organizations to invest in and maintain their own IT infrastructure, which can be costly and time-consuming. Cloud computing services are typically provided by third-party companies, such as AWS, Azure, and Google Cloud Platform (GCP). There are three main cloud computing service models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).

Load Balancers in Cloud Computing
A load balancer's responsibility in cloud computing is to make sure that no single server gets overwhelmed by requests. To achieve this, it splits incoming network traffic among several servers, improving resource utilization, failover capability, and performance, and it frequently directs traffic to the most capable servers. With Internet traffic continuing to grow at roughly 100% per year, the current volume of Internet traffic is expected to more than double within the next few years. View More: What is a Load Balancer in Cloud Computing?
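The traffic-splitting idea above can be sketched as a simple round-robin distributor. This is a minimal illustration, not a real cloud load balancer; the server names are hypothetical, and production balancers also weigh server health, capacity, and latency when choosing a target:

```python
from itertools import count

class RoundRobinBalancer:
    """Toy round-robin load balancer: each incoming request is
    assigned to the next server in a fixed rotation, so no single
    server absorbs all the traffic."""

    def __init__(self, servers):
        self.servers = servers
        self._counter = count()  # monotonically increasing request index

    def route(self):
        # Rotate through the server list in order.
        return self.servers[next(self._counter) % len(self.servers)]

balancer = RoundRobinBalancer(["server-a", "server-b", "server-c"])
assignments = [balancer.route() for _ in range(6)]
print(assignments)  # six requests spread evenly: two per server
```

With three servers and six requests, each server handles exactly two requests; real balancers achieve the same evening-out effect dynamically.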
When integrating with other workloads, sending confidential information, such as passwords or access tokens, over a network or hard-coding them in the software is not recommended. If these secrets are compromised, attackers can use them to gain unauthorized access to systems and data, potentially resulting in significant security breaches. We have already seen examples of major security incidents caused by the theft of credentials from public sources such as GitHub or local machines. This highlights the importance of choosing secure methods to perform authentication and authorization over the internet. Accessing data outside the cloud environment is often necessary when integrating cloud workloads. Google Cloud Platform (GCP) provides a solution called Workload Identity Federation (WIF) that enables users to access the customer's data in GCP from external sources through token exchange operations. This eliminates the need to store service account keys insecurely and reduces the risk of unauthorized access to the data. WIF allows secure and seamless access to GCP resources from external sources without storing and managing service account keys or other sensitive information outside of GCP. What is Cloud Workload Security? Cloud workload security refers to the technologies, methods, and policies in place to safeguard cloud workloads from possible security risks such as unauthorized access, data breaches, and other cyber threats. It involves securing virtual machines, containers, and other components that comprise cloud-based applications. Cloud workload security ensures that cloud workloads remain secure throughout their lifecycle, from deployment to decommissioning. It typically includes a range of security measures, such as access control, network security, data encryption, and threat detection and response. View More: How Vulnerable is GCP's Multicloud Workload Solution?
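Conceptually, the token-exchange operation WIF relies on follows the OAuth 2.0 token-exchange pattern (RFC 8693): an external identity token goes in, and a short-lived cloud access token comes back, so no long-lived service account key is ever stored outside GCP. The sketch below only builds such a request payload for illustration; the token and audience values are hypothetical placeholders, and the exact fields of GCP's STS endpoint should be checked against Google's documentation:

```python
def build_token_exchange_request(external_token: str, audience: str) -> dict:
    """Sketch of an OAuth 2.0 token-exchange (RFC 8693 style) payload.
    The caller presents a token from an external identity provider and
    asks for a short-lived access token scoped to the given audience."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
        "subject_token": external_token,  # JWT issued by the external IdP
        "audience": audience,             # identifies the workload identity pool (placeholder)
    }

# Hypothetical values, for illustration only.
payload = build_token_exchange_request("example-external-idp-jwt", "example-pool-audience")
print(payload["grant_type"])
```

The security benefit is that the `subject_token` is short-lived and verifiable by the cloud provider, unlike a static service account key that an attacker could steal and reuse indefinitely.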
In today's episode of the eCom Logistics Podcast, we welcome Supriya Iyer, Director of Google Global Networking Supply Chain & Commercial Operations at Google. For this part of our Women in Supply Chain, Retail, and Ecommerce series, we dive into the power of the value chain, establishing reliable partnerships, and operating networks at scale. Supriya talks about the need for transparency and establishing connections, especially in today's global supply chain ecosystem. She shares her insights on the Google Cloud Platform (GCP) and other innovative solutions in the industry today.

ABOUT SUPRIYA
Supriya leads Supply Chain and Commercial Operations for Google's Global Networking division, with a focus on predictable materials supply and operations to run the network at scale. She is a business leader with 20+ years of experience in transforming value chains and growing small teams into mature organizations that deliver high-quality products and services. Supriya also enjoys fast-paced and dynamic environments and has extensive experience in balancing this with fostering a people-first culture and stakeholder engagement. Previously, she held global leadership positions at VMware, GE, and Imperial Chemical Industries (ICI). She embraces the values of integrity, inclusion, belonging, and leading with empathy. Supriya is passionate about bringing diverse talent into Google and growing it in both technical and business roles. Supriya holds a Master's degree in Computer Science and a Master's degree in Mathematics. In her free time, she enjoys reading, hiking, cooking, and traveling.
HIGHLIGHTS
01:55 How Supriya's journey in supply chain and logistics started
09:54 There are specialized lines of GCP solutions that are centered on supply chain
17:06 Utilizing AI/ML in today's world
26:12 Tackling partnerships in the supply chain ecosystem
34:55 Leaning into sustainability at a global scale

QUOTES
12:10 Know when GCP can come into place - Supriya: "So there is a maturity before you are able to truly leverage AI/ML. So I just want to focus about, know your data, know what you need to run your business, make sure that data health is in a good place, and then you can leverage AI and ML, right?"
33:00 Leverage third-party solutions if they make sense - Supriya: "You don't have to be vertically integrated, you can rely on partnerships. Not in every location, I do not want to manage our own warehouses and do all of that. So there are great partners who specialize in this. So who has the latest when it comes to technology? Who has the best practices and the best people who do it day in and day out?"

Find out more about Supriya in the link below:
LinkedIn: https://www.linkedin.com/in/supriyaiyer/
Google Cloud Platform (GCP) allows DevOps teams to cooperate seamlessly - but it could leave your organization vulnerable to security attacks. Learn how to prevent that with Britive! Check it out at https://www.britive.com/blog/3-frictionless-strategies-to-boost-your-gcp-iam
Andrea Caldini, VP of network engineering at Verizon, has seen a lot of wireless technology evolution during her 20 years at the telecom giant. This includes the carrier's initial 3G launch based on CDMA technology, its radical move to 4G LTE more than a decade ago, and its more recent push into 5G. “I remember at some point thinking 64 kb/s was really fast,” Caldini joked during an interview with SDxCentral. Caldini cited Verizon's early 5G work, including its early work toward 5G standards that were initially outside of the normal standards bodies. Verizon has also been able to inject a lot more spectrum into its 5G services based on that technology standard's ability to support larger “chunks” of spectrum. Caldini cited the carrier's extensive millimeter wave (mmWave) spectrum holdings that support significant capacity and its ongoing deployment of its C-Band spectrum that is providing a broader reach. As part of that push, Verizon has been able to expand those network updates broadly across the organization, including into its Verizon Business Group. That group has been a driver of Verizon's recent business operations. Verizon 5G and the Private MEC Space That work has also begun to spread more into the private 5G space, which Caldini said is a “huge opportunity here,” and is “a gateway into mobile edge compute.” Verizon's MEC efforts include agreements with all three major hyperscalers – Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) – to provide optionality to enterprises. This allows the carrier to support two deployment models: private MEC and public MEC. The private MEC path involves an on-premises device deployment that allows an enterprise to maintain total control over its data. The carrier runs this on top of its agreements with AWS, Microsoft, and GCP. The public MEC work taps into nearly 20 locations where Verizon is collocated with the hyperscalers.
This model is one that Verizon executives have previously stated provides a connection point within 150 miles of most enterprises. “As you're creating these solutions, you're looking to have your workloads closer, so you might have a low-latency need and need to have that workload closer,” Caldini said, adding that this private and public MEC integration then allows an enterprise to adjust where it wants to run applications and still keep everything under strict control. “They all come together as you create these new services to support a business need.” Learn more about your ad choices. Visit megaphone.fm/adchoices
Multi-cloud network architecture is a cloud architecture in which businesses use a combination of services from different cloud providers. Organizations with a multi-cloud architecture may employ services from two or more cloud platforms, such as AWS, Google Cloud Platform (GCP), or Oracle Cloud, rather than just one. At a high level, a multi-cloud network architecture comprises four layers: Cloud Core, Cloud Security, Cloud Access, and Cloud Operations.
CWSI, one of Europe's most experienced mobile and cloud security specialists, has announced it will reduce over-permission risks for businesses in Ireland and the UK with the launch of Microsoft Entra Permissions Management. CWSI is one of the first Microsoft partners to provide the solution, which it's introducing in response to increased multi-cloud adoption among its customers. Microsoft Entra acts as a unified permissions management tool for businesses, enabling them to automate privilege management across multiple cloud platforms in a consistent and thorough manner. It enables privilege management in both cloud infrastructure and Infrastructure as a Service (IaaS) platforms and is designed to work across Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). The technology proactively monitors and remediates permission risks for any identity across multi-cloud environments in real time, while mitigating over-privilege issues and ensuring compliance. It enables organisations to control their infrastructure; protect, verify, and govern data; and detect exposures in their security postures. It permits the benchmarking of a cloud platform against proven industry standards such as CIS and NIST, allowing real-time monitoring of the risk surface by the CISO and security teams. The automation and tools provided ensure that companies get the maximum capability across multiple vendors' platforms through the normalisation that Entra Permissions Management provides to the administrative user. Where gaps in training or certification and production workload pressure bring the risk of error, Entra Permissions Management reduces or removes it. CWSI's highly skilled and accredited Microsoft experts will provide ongoing managed support to customers following the deployment of the Entra Permissions Management solution.
It can guide them through the stages, identify the priorities, and deliver a hardened operational platform with minimal permission risk. CWSI is a five-time Microsoft Gold Partner and was recently named Microsoft Ireland's Partner of the Year 2022 for Security. CWSI launched its dedicated Microsoft Security and Endpoint Management practice in 2019 and has since invested more than €2.5 million in its ongoing development. Last year, it became the first Irish Managed Security Service Provider (MSSP) to become a member of the Microsoft Intelligent Security Association (MISA). MISA is an ecosystem of independent software vendors and managed security service providers that have integrated their solutions with Microsoft security products to better defend against a world of increasing threats. Ronan Murphy, CEO, CWSI: “We're excited to deepen our partnership with Microsoft following the introduction of this innovative cloud solution to our Microsoft portfolio. It comes at a crucial time when the number of machine identities is growing rapidly, which is increasing the risk of breaches and over-privilege in organisations. High turnover rates and past employees retaining privileged access to data is also a major concern for businesses and this solution gives organisations better control over its permissions, ultimately reducing cyber risks. “Identity is the foundation of security and with the rapid acceleration of digital transformation, technology and the cloud now touch business operations on a daily basis. There are millions of interactions happening every second. Security challenges have become broader, and they require a broader solution – Microsoft Entra Permissions Management is enabling organisations to confidently reduce their risk in real-time.” See more stories here. More about Irish Tech News Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No.1 Tech Podcast too.
You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: If you'd like to be featured in an upcoming Podcast ema...
Cloud computing is an Internet-based technology that allows consumers to access a variety of services. It enables you to store your data on the internet and access it from any computer. The cloud provides virtually limitless capacity for storing data across various cloud storage types, chosen according to the data's availability, performance, and frequency-of-access requirements. Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) are some popular Cloud Service Providers (CSPs). Each platform provider has its own architecture, built on complex procedures. Top Trending Cloud Certification In 2022
Telcos must adapt to avoid being squeezed out of the burgeoning network-as-a-service (NaaS) market, claims a new report. According to ABI Research, the NaaS market is primed for growth. It expects that by 2030, nearly 90 percent of global enterprises will have migrated at least 25 percent of their network infrastructure to be consumed within a NaaS model. The growth is being fuelled by increasing enterprise demand for cloud-native agility, multi-cloud accessibility, and services that can dynamically scale to support digital transformation. This demand, the analyst firm reckons, means the NaaS market could be worth as much as $150 billion by 2030. Telcos, provided they are in a position to capitalise, could snag up to $75 billion of that sum. “Telcos must seize the opportunity to dominate the NaaS market, as revenue generated from connectivity provision will continue to decline. However, their investment strategy, business, operational, and ‘go-to-market' models are not ready to deliver a competitive NaaS solution,” said Reece Hayden, distributed and edge computing analyst at ABI Research, in a statement on Tuesday. “The market is immature and highly fragmented, but telco market revenue will exceed $75 billion by 2030 if they act now and transform technology, culture, and structure to better align with the requirements of the NaaS market.” When Hayden refers to technology, he means telcos should virtualise their network infrastructure to offer cloud-native services. They should also focus investments on network automation, and roll out value-added services like 5G slice-as-a-service, for example. In terms of culture, ABI said telcos need to develop vertical-specific sales strategies and adopt a consultative process to help bridge the gap between enterprises' awareness of NaaS, and their understanding of it.
“To drive short-run sales, suppliers must educate and tailor their sales strategy to focus on first adopters – start-ups and SMEs – and specific verticals,” Hayden said. Finally, when it comes to structure, ABI recommends telcos reduce internal fragmentation, focus on cross-business service continuity, and establish strong partnerships across the industry. “Although it seems like an expensive and risky uphill battle, developing NaaS will be crucial to the long-term upside,” Hayden said. And if telcos don't get their collective act together, interconnection providers and hyperscalers will be only too happy to fill the void. ABI notes that cloud connectivity providers such as Megaport and PacketFabric already offer agile NaaS solutions, while Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure have extensive cloud-focused NaaS offerings and huge reach. “Telecom operators remain in the best position to lead the market as long as they recognise their service and innovation limitations, invest and restructure successfully, and focus their messaging appropriately,” Hayden said. However, “if telcos miss this opportunity and drop the ball, interconnection providers and hyperscalers will be waiting and willing to catch it.”
Worldwide, public cloud services are forecast to grow by 20.4% (22.0% in constant currency) in 2022. Organizations continue to accelerate cloud adoption, which is driving a five-year compound annual growth rate of 19.6% (19.4% in constant currency). The continued growth of cloud quite naturally leads to a discussion of some of the primary providers, often referred to in the market as the hyperscale vendors. Examples of hyperscale vendors include Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and Alibaba's Alicloud. Others may also be considering services from Facebook or Apple.

A hyperscale vendor is a very large-scale cloud provider with global reach and impact. The potential for a hyperscale cloud provider's failure is a classic high-impact/low-frequency risk. More specifically, “concentration” risk concerns the degree to which an organization's business, processes, and data depend on a specific vendor's offerings and services. Understanding the scope and scale of cloud service provider failure is critical in order to take steps to manage risk.

Cloud services democratize access to resilience and redundancy capabilities that were previously cost-effective only for the largest enterprises. However, it is your responsibility to take advantage of these capabilities through resilient design that emphasizes avoiding disasters rather than recovering from them. As organizations embrace cloud capabilities, the most effective teams strive to avoid service interruptions by building systems that are highly available. This allows them to sidestep failures in underlying systems and continue operating with minimal impact to users, rather than recovering from an outage.

Traditional disaster recovery (DR) thinking is rooted in the failure of physical data centers.
However, in public cloud infrastructure and platform as a service (IaaS and PaaS), the failures with broad customer impact are rarely rooted in data center issues. Rather, they are the result of software bugs that cause one or more cloud services to fail. IT leaders should rethink traditional disaster recovery architectures in favor of more effective resiliency patterns, not only when deploying new applications in public cloud IaaS or PaaS, but also when migrating existing applications to public cloud environments.
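The resiliency pattern described here, sidestepping failures rather than recovering from them, can be sketched as a simple failover loop across regional replicas. This is a minimal illustration only: the fetcher functions are hypothetical stand-ins for real regional service clients, and a production system would catch its SDK's specific error types.

```python
# Minimal sketch of high availability through failover rather than disaster
# recovery: try each regional replica in order and return the first success.
# The fetchers below simulate regional endpoints.
def fetch_with_failover(fetchers):
    """Return the first successful fetcher's result; re-raise if all fail."""
    last_error = None
    for fetch in fetchers:
        try:
            return fetch()
        except RuntimeError as err:  # stand-in for a real client's error types
            last_error = err
    raise last_error

def us_east():
    raise RuntimeError("simulated regional outage")

def eu_west():
    return "response from eu-west replica"

print(fetch_with_failover([us_east, eu_west]))  # response from eu-west replica
```

The point mirrors the text above: the application keeps serving users through a regional failure instead of invoking a recovery procedure after an outage.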
Google Cloud Platform (GCP) adoption with cybersecurity first principle strategies. In this session looking at cloud platforms through the lens of first principle thinking, Rick and the Hash Table review the Google Cloud Platform (GCP). They identify some fundamental architectural differences between GCP and the other cloud providers that make GCP more effective at zero trust. The Hash Table gives detailed technical advice about data management and risk assessments through GCP and strategies for using GCP to support cybersecurity, and defines our new favorite concepts: cyber shenanigans, conditions of weirdness (COWs), and cyber COW tipping. Bob Turner joins Rick at the CyberWire's Hash Table to discuss securing the University of Wisconsin at Madison's big data lake project using the Google Cloud Platform (GCP) and the GCP zero trust architecture: BeyondCorp. Cybersecurity professional development and continued education. You will learn about: GCP networking, GCP security strategy and data management, cyber shenanigans, conditions of weirdness (COWs), and cyber COW tipping. CyberWire is the world's most trusted news source for cybersecurity information and situational awareness. Join the conversation with Rick Howard on LinkedIn and Twitter, and follow CyberWire on social media and join our community of security professionals: LinkedIn, Twitter, YouTube, Facebook, Instagram. Additional first principles resources for your cybersecurity program: for more Google Cloud Platform and cybersecurity first principles resources, check the topic essay.
Google Cloud Platform (GCP) adoption with cybersecurity first principle strategies. In this session looking at cloud platforms through the lens of first principle thinking, Rick Howard reviews the Google Cloud Platform (GCP). He identifies some fundamental architectural differences between GCP and the other cloud providers that make GCP more effective at zero trust. Cybersecurity professional development and continued education. You will learn about: GCP networking, GCP security strategy and data management, and BeyondCorp. CyberWire is the world's most trusted news source for cybersecurity information and situational awareness. Join the conversation with Rick Howard on LinkedIn and Twitter, and follow CyberWire on social media and join our community of security professionals: LinkedIn, Twitter, YouTube, Facebook, Instagram. Additional first principles resources for your cybersecurity program: for more Google Cloud Platform and cybersecurity first principles resources, check the topic essay.
It's said we can all stand to make improvements when it comes to empathy. In software engineering, empathy is required to create something that the end user can easily figure out; it's unacceptable to build something you think is great but expect customers to figure it out on their own, just because you think they should. Search engine giant, cloud services leader and Kubernetes creator Google realizes this. In this latest episode of The New Stack Makers podcast, The New Stack Founder and Publisher Alex Williams and TNS News Editor Darryl Taft sit down with Google's Kim Bannerman, program manager for Empathetic Engineering, and Kelsey Hightower, principal developer advocate, Google Cloud Platform (GCP), to discuss Google's Customer Empathy Program and end-user satisfaction.
In this podcast we are visited by Miles Hischier and Narjit Patel of HiView Solutions, a trusted partner for Google Cloud services and solutions relied on by many industry leaders. Partner with HiView to receive licenses, migration services, and ongoing support for Google Workspace, Google Cloud Platform (GCP), Google Voice, and more. Visit them online at https://hiviewsolutions.com/. This episode is brought to you by the team at LaunchUX. If you are looking for the best in website design and search engine optimization, you need to connect with LaunchUX.
“What kind of company would be a good candidate to use GCP?” From Amazon Web Services to Google Cloud Platform (GCP), Carly Wild has served in the cloud space for 14 years. Working as a Partner Sales Manager, Carly joins this episode of Life in the Cloud to give us a more in-depth look into GCP. Carly takes us back to GCP’s origins and how GCP began with its storage solution back in 2008. She points out that the underlying services and tech have been around as long as Google itself. Next, Kris puts Carly in the hot seat, asking how a business would determine which cloud provider is the best fit. Then, Carly delves into trends in the industry and imagines future innovations surrounding data; she expects information extracted from data to be put to ever more practical uses. Carly also shares her favorite part about working for GCP. Listen to the episode now to learn more. Don’t forget to subscribe to the show on iTunes, Spotify, or wherever you get your podcasts. See you in the next episode!
In this Intel Conversations in the Cloud audio podcast: Hanan Youssef, Senior Product Leader at Google Cloud Platform (GCP), joins host Jake Smith to talk about how General Purpose N2 virtual machine (VM) customers will be able to automatically upgrade to the 3rd Generation Intel Xeon Scalable processors and immediately see the benefits for their […]
Hanan Youssef, Senior Product Leader at Google Cloud Platform (GCP), joins host Jake Smith to talk about how General Purpose N2 virtual machine (VM) customers will be able to automatically upgrade to the 3rd Generation Intel® Xeon Scalable processors and immediately see the benefits for their workloads. Hanan talks about her experience with Google Compute Engine and how customers use N2 to build social media apps, run ecommerce, and to power the latest online games at great price-performance. Hanan also talks about the near decade of collaboration between Intel and Google Cloud and how the two companies have developed performance improvements and optimizations for various customer needs, such as SAP HANA and high performance computing (HPC). For more information about 3rd Gen Intel Xeon Scalable processors, visit: https://intel.com/xeonscalable Follow Hanan on Twitter at: https://twitter.com/hsyousef Follow Jake on Twitter at: https://twitter.com/jakesmithintel
Jesse Trucks is the Minister of Magic at Splunk, where he consults on security and compliance program designs and develops Splunk architectures for security use cases, among other things. He brings more than 20 years of experience in tech to this role, having previously worked as director of security and compliance at Peak Hosting, a staff member at freenode, a cybersecurity engineer at Oak Ridge National Laboratory, and a systems engineer at D.E. Shaw Research, among several other positions. Of course, Jesse is also the host of Meanwhile in Security, the podcast about better cloud security you're about to listen to.
Links: aws.amazon.com/compliance, aws.training, docs.microsoft.com/azure/security
Transcript
Jesse: Welcome to Meanwhile in Security, where I, your host Jesse Trucks, guide you to better security in the cloud.
Announcer: If you have several PostgreSQL databases running behind NAT, check out Teleport, an open-source identity-aware access proxy. Teleport provides secure access to anything running behind NAT, such as SSH servers or Kubernetes clusters and—new in this release—PostgreSQL instances, including AWS RDS. Teleport gives users superpowers like authenticating via SSO with multi-factor, listing and seeing all database instances, and getting instant access to them using popular CLI tools or web UIs. Teleport ensures best security practices like role-based access, preventing data exfiltration, providing visibility, and ensuring compliance. Download Teleport at goteleport.com. That's goteleport.com.
Jesse: Trilogy of Threes and a New Mantra. Trilogy of Threes. Good security practices and good security programs are built on three separate but intertwined principles, each of which has three parts. Simon Sinek's Golden Circle framework lays the foundation for why you have a security program, which is a balance between risks to critical assets and services and business objectives.
The next part of applying the Golden Circle to your security program is how you accomplish these objectives and mitigate your risk through the People, Process, and Technology (PPT) framework. The PPT method helps you define the roles that are needed to implement your security program, the overview of processes or actions within your security program, and the types of technology that support your security program. The final part of applying the Golden Circle encompasses what specific things you do to implement your security program using the Holy Trinity of Security: confidentiality, integrity, and availability, or the CIA triad. In your security program, you should define who should be allowed access to any data or service, how you monitor and protect any data or services, and how you keep data or services available for users. Although understanding how to build a security program from nothing is incredibly important, most of us are already operating within an existing security program. Many of us will have influence only on the specific implementation of tools for the Holy Trinity, CIA. All this theory is crucial to understand, but you still have a job to do. So, let's get practical. Where to start today. Searching online for ‘Top X for AWS Security' returns an expectedly long list of pages, and there are shed-loads of fantastic tips in the results. However, reading through many of them, including AWS's own blog entry on the topic, shows that proper cloud security involves large projects and possibly fully re-architecting your entire environment. As is often the case in these things, all the best security advice in the cloud is to do security right from the very beginning. Yet this is like discovering a new love of playing the piano late in life like I did, [laugh] only to be told that the right way to learn to play the piano is to take lessons as a child. That isn't very useful advice, now is it?
Of course, it's too late to become a child piano prodigy, but it's not too late to take up the piano and do well. Fundamentals. In traditional non-cloud environments, physical security for everything leading up to touching a machine is usually the purview of a different part of the organization, or an entirely different organization, than the security team or group responsible for system, network, and application security. Generally, most information or cybersecurity starts with accessing the software-based systems on a physical device's console or through a network connection. This, of course, includes accessing the network through some software path, usually a TCP or UDP-based protocol. In cloud environments, the cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP), maintain and are wholly responsible for the entire physical environment and the virtual platform or platforms made available to their customers, including all security and availability required for protecting the buildings and hardware, up through the hypervisors presenting services that allow customers to run systems. All security above the hypervisor is the customer's responsibility, from the operating system or OS through applications and services running on those systems. For example, if you run Windows systems for Active Directory Services and Linux systems for your organization's online presence, then you own all things in the Windows and Linux OSes, the services running on those systems, and the data on those systems. This is called the shared responsibility model. AWS provides details on their compliance site aws.amazon.com/compliance as well as in a short video on their training and certification site aws.training. Microsoft describes their model on their documentation site docs.microsoft.com/azure/security.
Google has lots of information in various places on their Google Cloud Platform (GCP) site, including a guided tour of the physical security of their data centers, but finding a simple explanation like the other two major services have available eluded me. Google does have a detailed explanation of their shared responsibility matrix, as they call it, which is an 87-page PDF. Luckily, given AWS's overwhelming popularity over the other cloud providers, I tend to focus mostly on AWS. I didn't read the whole GCP document.
Announcer: If your mean time to WTF for a security alert is more than a minute, it's time to look at Lacework. Lacework will help you get your security act together for everything from compliance service configurations to container app relationships, all without the need for PhDs in AWS to write the rules. If you're building a secure business on AWS with compliance requirements, you don't really have time to choose between antivirus or firewall companies to help you secure your stack. That's why Lacework is built from the ground up for the Cloud: low effort, high visibility, and detection. To learn more, visit lacework.com. That's lacework.com.
Jesse: Basic AWS training. Amazon provides ample training and online tutorials on all things AWS. This includes AWS basics through advanced AWS architecture and various specialty areas like machine learning and security, among others. I encourage everyone who touches anything in AWS to go through their training courses online at aws.training. If you are new to AWS or cloud in general, go take AWS Cloud Practitioner Essentials, and then take some primers in AWS security: AWS Security Fundamentals; Introduction to AWS Identity and Access Management, or IAM; and AWS Foundations: Securing Your AWS Cloud. These are all eLearning-based and free. This will be some of the best nine to ten hours you can spend to build a foundation for securing your AWS infrastructure. Learning is great; doing is better.
Whether you've taken the relevant AWS training or just want to dive in and make your AWS security better today, you'll want to go make a difference in your risk and exposure as quickly as possible. After all, unless you're listening to this as a seasoned security professional, you're probably here to learn how to make your security better as quickly and easily as possible. Anyone looking at the list of courses I've suggested and considering my fundamental approach might be trying to discern which first principles of good security I'll talk about first. If you're thinking along those lines, you might miss some of the very basics. As with all things in the tech world, there are some basics that can't be repeated often enough. The simplest and most blatantly obvious advice is to secure your S3 buckets. Let's cover that again so nobody misses the point. Secure. Your. S3. Buckets. Now, repeat that 27 times every morning while you get ready for work, before you touch your keyboard. This is the cloud version of securing FTP, meaning FTP isn't too bad a protocol, but it's notorious for being misconfigured and allowing anonymous FTP uploads and downloads. If you want to fall into a hole learning everything there is to this, go read the Security Best Practices for Amazon S3 portion of the S3 User Guide. If you don't have time or energy for wading through that lengthy but valuable tome, check some basics for maximum ROI with minimal effort. If you allow public access to S3 files directly, you should seriously reconsider your solution. There are dozens of ways to provide access to files that aren't as risky as opening direct access to data storage. You should block public access at the account level by going to the S3 services section in the AWS Management Console and, in the menu on the left, selecting ‘Block Public Access Settings for this Account.' If you can't do this immediately, go lock down all buckets that don't have this insane requirement to be open to the public.
Do this by selecting the bucket and blocking access in the Permissions tab. You should always be thinking of the fundamentals of great security, and you should always be learning and improving your skills, of course. You should also continually make little changes and review the basics. Some new project will go live and some S3 bucket will have horrible permission settings, or some other fundamental violation of security best practices will occur. We should always be looking out for violations of the basics, even while we work on the larger projects with greater apparent impact. I repeated my mantra 27 times today. Have you?
Jesse: Thanks for listening. Please subscribe and rate us on Apple Podcasts, Google Podcasts, Spotify, or wherever you listen to podcasts.
Announcer: This has been a HumblePod production. Stay humble.
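The bucket-lockdown mantra in the episode above boils down to four Block Public Access settings, so a quick audit helper is easy to sketch. This is a hand-rolled illustration: the configuration dicts are written inline, where in a real audit they would come from an AWS API call (noted in the comment), and the function name is our own.

```python
# Hedged sketch: flag S3 buckets whose Block Public Access settings are not
# all enabled. In practice the dict would come from
# boto3.client("s3").get_public_access_block(...)["PublicAccessBlockConfiguration"];
# here we use hand-written example dicts instead of calling AWS.
REQUIRED_SETTINGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def is_locked_down(config):
    """True only if all four Block Public Access settings are enabled."""
    return all(config.get(key, False) for key in REQUIRED_SETTINGS)

risky = {"BlockPublicAcls": True, "IgnorePublicAcls": True,
         "BlockPublicPolicy": False, "RestrictPublicBuckets": True}
safe = {key: True for key in REQUIRED_SETTINGS}

print(is_locked_down(risky))  # False
print(is_locked_down(safe))   # True
```

Running a check like this across every bucket is one way to keep catching the "some new project went live with horrible permissions" case the episode warns about.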
With innovative organizations like Apple and Spotify moving their cloud workloads to Google Cloud Platform (GCP), you can't dispute the advantages that Google brings to all of their cloud services. At the heart of the platform reside well-engineered data services, including BigQuery, Dataflow and others, known as among the most cost-effective and fastest in the industry. Please join us as Great Data Minds Advisor Jesus Diaz, a certified GCP Data Engineer and Cloud Architect, walks us through GCP's data services and their benefits.
Though there are many challenges when training across cloud platforms, there are many similarities that make the process easier. Mike’s guest today is Elkhan Yusubov, a principal cloud architect, an Azure subject-matter expert, and an Azure Technical Trainer. He’s here to talk about cross-training on Google Cloud Platform (GCP) for Azure pros. Elkhan’s goal is to help other people, as cross-platform expertise can only benefit everyone!
In this episode, we talk about:
- Elkhan’s hot take on what he has seen at GCP
- Getting multiple teams to work together within multi-cloud environments
- Main challenges with learning GCP for those experienced in Azure
- The Associate Cloud Engineer Certification for GCP
- Things that have happened in Elkhan’s career since he started sharing his journey
- Skills that can be transferred from one cloud platform to another
- The Architecture Framework that covers the low-hanging fruit
- Focusing on delivering value to clients rather than personal interests
- Developments since COVID changed the world
- Biggest motivations for Elkhan during his cloud journey
- Why failure is vital to success in your cloud journey
Resources from this episode:
- Elkhan on LinkedIn
- GCP Certification: Associate Cloud Engineer
- Azure for GCP Professionals
- Google Cloud for Azure professionals
Introduction
Even if it doesn't appeal to you, you might want to think about it when you work in a larger microservice landscape or have a serious big data platform: proving data consistency in a microservice landscape. When we google this subject, I already get 1.2 million results, so there's something going on here. To ensure data consistency, several practices are available:
- Saga Pattern
- Reconciliation
- Event Log
- Orchestration vs. Choreography
- Single-Write With Events
- Change-First
- Event-First
- Consistency by Design
- Accepting Inconsistency
But in this episode, we won't go over these practices.
What this episode covers
We will dive into the verification part: the proof of the correct operation of your implementation. Within bol.com we implemented a Data Quality Service (DQS). Actually, the second generation is already in place. The first generation focused on immutable data; in the second, improved version mutable data is covered as well. We will go over these questions to explain how we prove data consistency in a microservice landscape:
- How did we come up with our solution?
- What is our approach?
- How does it relate to our big data, BigQuery storage?
Statements
As a starter, we discuss these statements first:
- Why care, it is just data...
- The microservice is not the issue, the independent data storage solution is, so let's get back to centralized databases (makes testing also a lot easier)
- An architect should be the guest of this show, as it's part of his/her role to fix this
- Data consistency is not a problem for software engineers. It should be fixed by our infrastructure solutions
Guests
Mykola Gurov – Of course, you all know him since he was in our very first episode about Kotlin. Or otherwise from one of his testing in production talks.
Jack of all trades.
Chris Gunnink – Software Engineer on a crusade - DQS
Sourygna Luangsay – Tech Lead in experimentation, forecasting, the finance product and a lot more products
Notes
BigQuery - bol.com adoption story
BigQuery - Google's data warehouse running in the Google Cloud Platform (GCP)
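The verification idea this episode describes, proving that two independently stored copies of the data still agree, can be sketched as a count-and-checksum reconciliation. The record dicts and field names below are illustrative stand-ins, not bol.com's actual DQS schema; a real implementation would stream records from the service's own store and from BigQuery.

```python
# Hedged sketch of reconciliation: compute an order-independent count and
# checksum on both sides; if the fingerprints match, the copies agree.
import hashlib

def fingerprint(records):
    """Order-independent (count, checksum) over id/payload records."""
    records = list(records)
    digest = 0
    for rec in records:
        h = hashlib.sha256(f"{rec['id']}:{rec['payload']}".encode()).hexdigest()
        digest ^= int(h, 16)  # XOR keeps the checksum order-independent
    return len(records), digest

service_side = [{"id": 1, "payload": "a"}, {"id": 2, "payload": "b"}]
warehouse_side = [{"id": 2, "payload": "b"}, {"id": 1, "payload": "a"}]  # same data, reordered

print(fingerprint(service_side) == fingerprint(warehouse_side))  # True
```

An XOR checksum is a deliberately crude choice for the sketch; it cannot see a record duplicated an even number of times on one side, which is one reason the count travels alongside it.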
Multi-cloud clusters - a feature available in MongoDB Atlas, a global cloud database service - takes the concept a step further by enabling a single application to use multiple clouds. With multi-cloud clusters, data is distributed across different public clouds (Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure), enabling deployment of a single database across multiple providers simultaneously. In this episode, Nic and I sit down with Andrew Davidson, VP of Cloud Product at MongoDB to discuss this latest innovation to MongoDB Atlas. Sign up for a free Atlas account and take a look at this feature in action.
Neil Bunn, Head of Customer Engineering, Media and Telco, for Google Cloud Canada, reviews the coolest features of Google cloud technology and offers an unexpected reason why GCP should be at the top of telcos’ list (hint: think Anthos).
Initially led by software as a service (SaaS), the transition to the public cloud is one of the most important changes we’ve witnessed in information technology to date. From the early days of SaaS to the current stage, where adoption of infrastructure, platform and function as a service (IaaS, PaaS, FaaS) is catching on like wildfire, there’s an increasing awareness that in the end state of this shift, few aspects of how we do our jobs will be unchanged. This Security Voices episode is the first of five where we dig into the details of how the public cloud is transforming cybersecurity. Teri Radichel joins us to explain key concepts in public cloud technology, the differences from on-premises, migration options and more. If you’ve ever wondered what is meant by “lift and shift” or “cloud native”, this is for you. Teri’s background as a trainer, author and researcher shines through: she describes broad concepts in easily understood terms, but she also doesn’t spare the details for those who are already cloud savvy. Beyond the core concepts, Teri compares and contrasts the security models across Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). As she walks us through the differences between the three platforms, you get a sense of the complexity faced by those straddling an on-premises environment as well as the public cloud, not to mention several clouds at once. From networking to identity and access management models, no cloud service provider is quite like the other. Moreover, the fierce competition between Google, Microsoft and Amazon is driving such rapid changes in their platforms that any grip you have on exactly how things are is a slippery one at best. In spite of the challenges, Teri explains her belief that one can achieve better security in the cloud than on-premises. Doing so requires thinking differently, however, such as Teri’s advice to handle data as we would handle money.
We hope this episode lays the groundwork for you for understanding the current state of public cloud security as in the next show we dive into the trenches with a cloud security practitioner at Yelp.
KubeCon 2020 Europe runs from August 17 - 20 and Nigel Poulton joins the podcast to discuss how Kubernetes adoption has evolved and ramped up so quickly. We also discuss the Kubernetes application deployment demo Nigel recorded on Google Cloud Platform (GCP) that leverages both Cloud Volume Services (CVS) and Trident. For more on KubeCon 2020 Europe and to visit the NetApp booth, please visit cncf.io.
On the podcast this week, Mark Mirchandani and Brian Dorsey talk with fellow Googlers John Laham and Stewart Reichling about Traffic Director, a managed control plane for service mesh. Traffic Director solves many common networking problems developers face when breaking apart monoliths into multiple, manageable microservices. We start the conversation with some helpful definitions of terms like data plane (the plane that data passes through when one service calls on another) and service mesh (the art of helping these microservices speak with each other) and how Traffic Director and the Envoy Proxy use these concepts to streamline distributed services. Envoy Proxy can handle all sorts of networking solutions, from policy enforcement to routing, without adding hundreds of lines of code to each project piece. The proxy can receive a request, process it, and pass it on to the next correct piece, speeding up your distributed system processes. But Envoy can do more than the regular proxy. With its xDS APIs, services can configure proxies automatically, making the process much more efficient. In some instances, the same benefits developers see with a distributed system can be gained from distributed proxies as well. To make distributed proxy configuration easy and manageable, a managed control plane system like Traffic Director is the solution. Traffic Director not only helps you facilitate communication between microservices, it also syncs distributed states across regions, monitors your infrastructure, and more.
Stewart Reichling
Stewart is a Product Manager on Google Cloud Platform (GCP), based out of Cambridge, Massachusetts. Stewart leads Product Management for Traffic Director (Google’s managed control plane for open service mesh) and Internal HTTP(S) Load Balancing (Google’s managed, Envoy-based Layer 7 load balancer). He is a graduate of Georgia Institute of Technology and has worked across strategy, marketing and product management at Google.
John Laham
John is an infrastructure architect and cloud solutions architect who works with customers to help them build their applications and platforms on Google Cloud. Currently, he leads a team of consultants and engineers as part of the Google Cloud Professional Services organization, aligned to the telco, media, entertainment and gaming verticals.
Cool things of the week
- Week four sessions of Cloud Next: Security site
- Weekly Cloud Talks by DevRel Week 2 site
- Weekly Cloud Talks by DevRel Week 3 site
- Cost optimization on Google Cloud for developers and operators site
- GCP Podcast Episode 217: Cost Optimization with Justin Lerma and Pathik Sharma podcast
Interview
- Traffic Director site
- Envoy Proxy site
- NGINX site
- HAProxy site
- Kubernetes site
- Cloud Run site
- Service Mesh with Traffic Director site
- Traffic Director Documentation site
- gRPC site
- Traffic Director and gRPC—proxyless services for your service mesh blog
Tip of the week
This week, we’re talking about IAM Policy Troubleshooter.
What’s something cool you’re working on?
Brian is working on the Weekly Cloud Talks by DevRel we mentioned in the cool things this week and continuing his Terraform studies. Check out the Immutable Infrastructure video we talked about last week.
Sound Effect Attribution
“Jingle Romantic” by Jay_You of Freesound.org
The move to the cloud is hardly a new topic. For many, the value of services like Microsoft 365 has proved itself over the opening months of 2020: having key services running in the public cloud has delivered the flexibility, scale and accessibility that it promised. Beyond that, we are also starting to see ever-increasing tactical use of public cloud: integration into backup and archive tools, storage, cloud-based security appliances; the list continues to grow. But what about those big enterprise core applications like SAP, the systems that are running every element of a business? Are we starting to move them as well? I ask that question because of an announcement that caught my attention this week (w/c June 15th 2020) from NetApp, proudly declaring they are the first to offer SAP HANA certified storage via their Cloud Volumes Service (CVS) inside of Google Cloud Platform (GCP). What I wanted to know was why? NetApp has no doubt put a lot of time, effort and money into gaining this certification, but is there an opportunity for this, and are companies moving platforms as potentially complex as SAP HANA into the cloud at all? I'm happy to admit that I don't know a lot about SAP and their products, but luckily I have a friend who does, and he was happy to join me on this week's show to share some of his experience. Andre Schmitz is a Senior Consultant for Datacenter Infrastructure in Germany for Bechtle and has been delivering major enterprise systems like SAP HANA for 17 years, so he was well placed to give me his view on the market. We cover:
- What is SAP HANA?
- Why are people struggling to shift to it?
- Complexity and the cloud.
- Is there a shift for enterprise applications to the cloud?
- Are NetApp's enterprise storage services a useful addition? Are they going to help with a shift to the cloud?
- The importance of enterprise-class storage features in the cloud.
- Moving to the cloud: here are some things to consider.
This was Andre's podcast debut and he did a great job at sharing a wide range of experience, some insight into the challenges of having systems like SAP HANA running in the cloud and some thoughts on whether NetApp (or anyone else) offering these kinds of enterprise-class features has real value. If you have an idea for a show or would like to be a guest on a future show why not email me at podcast@techstringy.com. Until next time, thanks for listening. Full show notes are here :- https://wp.me/p4IvtA-1M8
The Cloud Pod Gets Their Groove Back — Episode 74. Your co-hosts have cooked up a good one on this week's episode of The Cloud Pod. A big thanks to this week's sponsor: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights: Your co-hosts cover DockerCon 2020. Chef announced several new features at ChefConf 2020. Google Cloud Platform (GCP) teaches you how to take an online certification exam. General News: Prince Ali. Mirantis has released the first major update to Docker Enterprise since it acquired the platform in November — a loss for the startup community. Over 60,000 people registered for the online DockerCon, the first DockerCon after the loss of Enterprise. During the keynote, Docker CEO Scott Johnston announced a strategic partnership.
In this episode I catch up with Miles Matthias who is a Solutions Architect at Stripe and founder of CTOLunches.com. Miles and his team have designed and implemented Kubernetes for some of the largest customers running on Google Cloud Platform (GCP).
As the fundamental data abstractions used by developers have changed over time, event streams are now the present and the future. Coming from decades of experience in messaging, Dan Rosanova (Senior Group Product Manager for Confluent Cloud, Confluent) discusses the pros and cons of cloud event streaming services on Google Cloud Platform (GCP), Microsoft Azure, and Confluent Cloud. He also compares major stream processing and messaging services: Cloud Pub/Sub vs. Azure Event Hubs vs. Confluent Cloud, and outlines major differences among them. Also on the table in today's episode are cloud lock-in, the anxieties around it, and where cloud marketplaces are headed. EPISODE LINKS: Don't Get Locked Up in Avoiding Lock-In; Join the Confluent Community Slack; Fully managed Apache Kafka as a service! Try free.
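The services Dan compares all rest on the same core mechanism: each event carries a key, the key is hashed to a partition, and order is guaranteed per key rather than globally. A minimal sketch of that idea in plain Python (no vendor SDK assumed; the partition count and event names are illustrative):

```python
# Minimal sketch of key-based partitioning, the mechanism that lets event
# streaming services (Kafka-style services and Pub/Sub ordering keys alike)
# preserve per-key ordering while scaling out across partitions.
from collections import defaultdict
import zlib

NUM_PARTITIONS = 4

def partition_for(key: str) -> int:
    # Stable hash so the same key always lands on the same partition.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

partitions = defaultdict(list)
events = [("order-1", "created"), ("order-2", "created"),
          ("order-1", "paid"), ("order-1", "shipped")]

for key, payload in events:
    partitions[partition_for(key)].append((key, payload))

# All events for "order-1" share one partition, in publish order.
order1 = [payload for key, payload in partitions[partition_for("order-1")]
          if key == "order-1"]
print(order1)  # per-key order is preserved: created, paid, shipped
```

Global ordering across keys is exactly what none of these services promise, which is one of the trade-offs the episode digs into.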
In this Intel Chip Chat audio podcast with Allyson Klein: On this episode of Chip Chat, Dan Speck, Vice President Technology at Burwood Group, talks with host Allyson Klein about performance improvements gained from the new C2 compute instances offered by Google Cloud Platform (GCP), powered by 2nd Generation Intel Xeon Scalable processors. Customers, like […]
On this episode of Chip Chat, Dan Speck, Vice President Technology R&D at Burwood Group, talks with host Allyson Klein about performance improvements gained from the new C2 compute instances offered by Google Cloud Platform (GCP), powered by 2nd Generation Intel® Xeon® Scalable processors. Customers, like genomics researchers needing large scale parallel workloads, rely on Burwood Group to help design and architect workloads that can operate efficiently in the cloud. To meet these needs, the company carefully considered performance, flexibility, and security before selecting the new C2 instance from GCP. To learn more about Burwood Group, visit www.burwood.com. Notices & Disclaimers Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
Learn why Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) will increase their dominance of public cloud computing post-coronavirus. End users can deploy containers and other solutions/platforms to support multi-cloud solutions and maintain control over costs/performance.
Time series data management continues to underpin huge swaths of application deployments today across on-premises and, increasingly, cloud-native environments. Whether it's video streaming, real-time financial security data management, energy utility management or any application that requires time stamps for often very complex datasets at massive scales, time series data will play an integral role. Today's time series data platforms can typically be used for data analysis and forecasting by processing millions of data points per second. Pricing has also become affordable for a growing number of enterprises seeking high-powered data analysis as a way to distinguish themselves from the competition. These organizations also do not necessarily have the financial backing that the world's largest financial institutions or Fortune 100 companies have at their disposal. In this The New Stack Makers podcast, Chris Churilo, responsible for technical product marketing at InfluxData, offers some background and perspective on why organizations increasingly rely on time series databases to “make products or services better.” Churilo also discusses why organizations are shifting their databases and management to cloud environments and why InfluxData recently extended its InfluxDB Cloud 2.0 serverless time series Platform as a Service (PaaS) to include Google Cloud Platform (GCP) as well as Amazon Web Services (AWS) cloud environments. “Time series data is useful for monitoring anything that you want to make improvements on,” Churilo said. “So, of course, your cloud infrastructure is one thing that you definitely want to always be monitoring to make sure that you can provide the best service, especially if you have applications sitting on top of it that are customer-facing or even internally-facing — no one can tolerate having a slow application.” Read more here: https://thenewstack.io/get-better-monitoring-with-a-time-series-database/
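The core operation a time series database performs at those "millions of points per second" scales, rolling raw timestamped samples up into windowed aggregates, can be illustrated with a tiny downsampling sketch. This is plain Python to show the idea, not the InfluxDB API:

```python
# Toy illustration of time series downsampling: roll raw timestamped
# points up into per-window means, the aggregation a time series
# database like InfluxDB performs at far larger scale.
from collections import defaultdict

def downsample_mean(points, window_s=60):
    """points: iterable of (unix_ts, value); returns {window_start: mean}."""
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each timestamp to the start of its window.
        buckets[ts - ts % window_s].append(value)
    return {start: sum(vals) / len(vals) for start, vals in buckets.items()}

# CPU samples taken every 20 seconds over two minutes.
samples = [(0, 10.0), (20, 20.0), (40, 30.0), (60, 40.0), (80, 50.0)]
print(downsample_mean(samples))  # {0: 20.0, 60: 45.0}
```

In a real TSDB this rollup runs continuously over the ingest stream, which is what makes the monitoring use cases Churilo describes practical.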
From previously focusing on Confluent Schema Registry to now making connectors for Confluent Cloud, Magesh Nandakumar (Software Engineer, Confluent) discusses what connectors do, how they simplify data integrations, and how they enable sophisticated customer use cases. With connectors built for Confluent Cloud on Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS), users can implement Apache Kafka® within their existing systems in an easy way. There's a lot that Magesh is looking forward to when the world of connectors and the world of cloud collide. EPISODE LINKS: Why Kafka Connect? ft. Robin Moffatt; Join the Confluent Community Slack; Fully managed Apache Kafka as a service! Try free. Get 30% off Kafka Summit London registration with the code KSL20Audio.
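For readers new to connectors: a Kafka Connect connector is driven entirely by a small JSON configuration that names a connector class, the topics to read or write, and connection details, which is why they "simplify data integrations". A hypothetical sketch of such a payload (the class name, topic, and database URL are illustrative, not a specific Confluent connector):

```python
# Hypothetical sketch of a Kafka Connect connector configuration.
# The connector class, topic, and JDBC URL below are illustrative.
import json

connector = {
    "name": "orders-sink",
    "config": {
        "connector.class": "com.example.JdbcSinkConnector",  # illustrative
        "tasks.max": "2",            # parallelism for the connector
        "topics": "orders",          # Kafka topic(s) to read from
        "connection.url": "jdbc:postgresql://db:5432/shop",  # illustrative
    },
}

# In practice this JSON is POSTed to the Connect REST API's /connectors
# endpoint (or supplied through a managed-cloud UI) to start the connector.
payload = json.dumps(connector, indent=2)
print(payload)
```

The point of the config-only model is that no integration code gets written at all; fully managed cloud connectors take this one step further by running the Connect workers for you.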
In this session, we'll explore how companies can adapt to multi-cloud environments using Google Cloud Anthos and Splunk Enterprise to maintain end-to-end visibility into these hybrid workloads. Google Kubernetes Engine (GKE) On-Prem, part of Anthos, brings the efficiency, speed, and scale of cloud to managing Kubernetes clusters in your datacenter. Combined with Splunk Connect for Kubernetes, we'll show you how to get a single pane of glass to manage, monitor and secure your Kubernetes clusters across your organization. We'll also do a deep dive on Google Cloud Platform (GCP) security controls and how to export security findings from Cloud Security Command Center, and cloud asset changes from Cloud Asset Inventory, into Splunk Enterprise for further forensic analysis, to accelerate incident resolution and ensure compliance. Speaker(s): Alex Cain, Sr. Product Manager | Getting Data In, Splunk; Nic Stone, Solutions Engineer, Splunk. Slides PDF link - https://conf.splunk.com/files/2019/slides/FN2132.pdf?podcast=1577146225 Product: Splunk Enterprise, Splunk Cloud Track: Foundations/Platform Level: Intermediate
Splunk [Enterprise Cloud and Splunk Cloud Services] 2019 .conf Videos w/ Slides
Splunk [Foundations/Platform Track] 2019 .conf Videos w/ Slides
In this episode I chat with Google Developer Expert (GDE) John Hanley about Google Cloud Platform (GCP) and what you need to know to get started.
Listen to Angelbeat CEO describe why IT professionals prioritize certifications in IT Security, and on the Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure public cloud platforms.
SHOW: 410
DESCRIPTION: Aaron and Brian talk with Kevin McHale (Senior Staff Engineer at Etsy) about the migration of their Big Data / Data Science platform from on-premises to Google Cloud, the business drivers for the migration, and the lessons the team has learned throughout the multi-year process.
SHOW SPONSOR LINKS: Digital Ocean Homepage; Get Started Now and Get a free $50 Credit on Digital Ocean; Datadog Homepage - Modern Monitoring and Analytics; Try Datadog yourself by starting a free, 14-day trial today. Listeners of this podcast will also receive a free Datadog T-shirt; [FREE] Try an IT Pro Challenge; Get 20% off VelocityConf passes using discount code CLOUD
CLOUD NEWS OF THE WEEK: GitHub Actions now includes CI/CD
SHOW INTERVIEW LINKS: Etsy Homepage; Etsy's Engineering Blog ("Code as Craft"); Migrating Etsy to Google Cloud
SHOW NOTES:
Topic 1 - Welcome to the show. Before we dive into your work at Etsy, tell us a little bit about your background prior to Etsy - you're pretty good at math.
Topic 2 - You've been working on two very interesting projects at Etsy - both building/evolving the data platform, and helping to manage the migration to Google Cloud Platform (GCP). Let's start by talking about the Etsy data platform.
Topic 3 - What have been some of the business drivers that are pushing the data platform to collect more information, to become more cloud-native, and to better enable data pipelines?
Topic 4 - At some point in 2017-18, Etsy decided to migrate some of the platforms to Google Cloud. Tell us about that decision process, and how the migration has been going. What have been some of the lessons learned?
Topic 5 - How does working with Google Cloud (or just being in the public cloud) help accelerate the work that you're doing on evolving the Etsy data platform?
FEEDBACK? Email: show at thecloudcast dot net; Twitter: @thecloudcastnet and @ServerlessCast
In the Internet age, an old saying goes that it’s best to be there first. And there’s truth in that, too. When it comes to public cloud computing, Amazon was first, and they’re a clear No. 1 in the space. Sometimes it’s hard to be in third place, like Google Cloud Platform (GCP) is, behind Amazon Web Services (AWS) and Microsoft Azure. On the other hand, Google is a huge company with lots of resources, and they’re committed to challenging those two cloud titans. Can Google win? It depends on what they have to offer, and that’s the subject of this episode of “10 on Tech.” ActualTech Media Partner James Green interviews author and technologist Dan Sullivan, and they discuss GCP’s past, present, and future possibilities. Highlights of the show include: Why GCP holds third place among public cloud platforms The advantages GCP has as a platform What Google Anthos is, and why it’s important Whether or not GCP is the best fit for Kubernetes, since Google created it Great resources to learn more about GCP Resource links from the show: YouTube Google Cloud Platform Channel -- https://www.youtube.com/user/googlecloudplatform YouTube TensorFlow Channel -- https://www.youtube.com/channel/UC0rqucBdTuFTjJiefW5t-IQ Google Anthos -- https://cloud.google.com/anthos/ Cloud Next 19 Opening Keynote -- https://www.youtube.com/watch?v=XGrlWVWlpgE Google Cloud Platform training on Coursera -- https://www.coursera.org/courses?query=google We hope you enjoy this episode; and don’t forget to subscribe to the show on iTunes, Google Play, or Stitcher.
Alphabet, Google’s parent company, recently reported Q2 FY19 earnings and shared a very interesting bit of information; Google Cloud’s annual run rate is now $8B. The last and only time Alphabet shared Google Cloud’s annual run rate was in February of 2018 when it was $4B. In this podcast, Practice Leader, Adam Mansfield, discusses how they achieved this impressive growth in such a short period of time and covers what enterprise customers should know (and expect) if they are considering Google Cloud Platform (GCP) or contemplating a move from Microsoft Office 365 to Google G Suite.
CEO Sonal Puri of Webscale shares how her company helps B2C and B2B sellers of all sizes manage their cloud infrastructure globally, helping them take advantage of the cloud’s almost infinite scalability while optimizing costs, security, and performance. In this episode, we discuss: As a startup, how Webscale is disrupting the digital infrastructure space as “the digital cloud company” for more than 1000 online stores globally Webscale’s ability to manage applications in the public cloud on behalf of customers across all the “hyperscale” cloud providers, including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Alibaba Cloud, and others The company’s experience with online commerce is key to helping clients manage their infrastructure for maximum benefit with minimal cost Large hyperscale providers are providing very reliable, scalable cloud infrastructure/hosting on demand, while Webscale runs its own services on top: security, predictive autoscaling, performance, caching, content optimization, bot management, etc. 
Webscale provides an easy-to-use interface/portal for customers It also ensures cloud computing consumption is always right-sized so customers don’t pay for more than they use Infrastructure as a commodity or utility, like electricity, allowing retailers/brands to pay only for what they need, when they need it How today’s technology and cloud offerings allow smaller retailers to access the same tools as leading global brands, leveling the playing field Why it makes sense to use a vendor like Webscale to manage your cloud infrastructure Migrating applications to the cloud from traditional hosting models can be confusing and challenging Costs start to add up quickly if you don’t manage your infrastructure actively Security concerns are only escalating and require constant monitoring 24x7 global support includes expertise across multiple clouds Why even the data layer no longer presents scalability issues if the infrastructure is set up correctly, even for retailers who see 20x or 30x traffic spikes during peak seasons Migrating infrastructure to the cloud typically takes anywhere from 1-2 weeks up to 2 months, depending on complexity and the amount of testing required With good business requirements, Webscale quickly figures out the right solution using available providers Why Sonal believes the infrastructure market will undergo further commoditization and look more and more like a utility, with probably three hyperscale providers and perhaps a small group of very targeted cloud providers (e.g. a health cloud or database cloud)
In this week's episode, David Linthicum chats with Deloitte Consulting LLP's Ken Corless and Gary Arora about all things serverless: how it relates to NoOps, upsides, downsides, and the human element. In a wide-ranging discussion, they tackle what serverless is, and isn't, and how it's moving towards NoOps. They also discuss hybrid environments, security, automation, and the bright future for serverless. Disclaimer: As referenced in this podcast, "Google" refers to Google Cloud Platform (GCP).
Google's parent company, Alphabet, reported Q4 and full-year earnings that beat expectations across the board. The company did not provide a lot of detail regarding its cloud business, which is part of its "Other Business" category. What they did mention is that they doubled the number of Google Cloud Platform (GCP) deals worth more than $1M and doubled the number of multi-year contracts. Regarding G Suite, they noted that they now have 5 million paying customers, up from 4 million a year ago. It is clear that Google is going to invest and scale when it comes to its Google Cloud business. Enterprises will need to chart their course appropriately, and this podcast details key points to consider on why you should plan and how to plan.
This episode features Gautam Bajaj, an engineer and data scientist who has worked with technologies mostly related to AI and machine learning. He has worked in India, the USA, and at a leading video game company in Japan, and is now consulting in Tokyo. We talk about getting into the field of data science and AI and about the different working cultures in Japan, India, the USA and Sweden. We get into the possibilities and challenges of AI and machine learning, such as scaling and keeping up with developments in such a rapidly evolving field. Some things mentioned in this episode:
- Yann LeCun, AI course
- Hadoop: a framework for distributed processing and big data
- Kubernetes: a tool for running applications in "the cloud"
- Medium, a blog platform
- Recommender/recommendation system
- Convolutional neural network (CNN)
- Reinforcement learning
- Generative adversarial networks (GANs)
- Python
- Amazon Web Services (AWS): cloud, web hosting etc.
- Google Cloud Platform (GCP): cloud, web hosting etc.
We do not currently have any external partners and all opinions expressed are solely our own. Nothing discussed on this podcast should be considered as any kind of investment advice. In this episode:
- Gautam Bajaj, gautam1237 at gmail dot com
- Martin Nordgren, works at Tobii, former engineer at Dirac, @martinjnordgren
Contact us: dataspaning.se; @dataspaning on Twitter; dataspaning@gmail.com
Google Cloud Platform (GCP) turned off a customer that it thought was doing something out of bounds. This led to an Internet outrage, and GCP tried to explain itself and prevent the problem in the future. Today, we're talking to Daniel Compton, an independent software consultant who focuses on Clojure and large-scale systems. He's currently building Deps, a private Maven repository service. As a third-party observer, we pick Daniel's brain about the GCP issue, especially because he wrote a post called Google Cloud Platform - The Good, Bad, and Ugly (It's Mostly Good). Some of the highlights of the show include: Recommendations: use enterprise billing (costs thousands of dollars); add a phone number and an extra credit card to your Google account; get a support contract. Google describing what happened and how it plans to prevent it in the future seemed reasonable, but why did it take this for Google to make changes? GCP has inherited cultural issues that don't work in the enterprise market, and is painfully learning that it needs to change some things. Google tends to focus on writing services aimed purely at developers; it struggles to put itself in the shoes of corporate-enterprise IT shops. GCP has a few key design decisions that set it apart from AWS, focusing on global resources rather than regional resources. When picking a provider, is there a clear winner, AWS or GCP? Consider the company's values, internal capabilities, resources needed, and workload. GCP's tendency to end service on something people are still using, versus AWS never ending a service, tends to push people in one direction. GCP has built a smaller set of services that are easy to get started with, while AWS has an overwhelming number of services. Different philosophies: not every developer writes software as if they work at Google; AWS meets customers where they are, fixes issues, and drops prices. GCP understands where it needs to catch up and continues to iterate and release features. Links: Daniel Compton; Daniel Compton on Twitter; Google Cloud Platform - The Good, Bad, and Ugly (It's Mostly Good); Deps; The REPL; Postmortem for GCP Load Balancer Outage; AWS Athena; Digital Ocean
Mark and Melanie are your hosts again this week as we talk with Steren Giannini and Stewart Reichling discussing what’s new with App Engine. Particularly its new second generation runtime, allowing headless Chrome, and better language support! And automatic scalability to make your life easier, too. App Engine also has an interesting way of inspiring new Google products. Tune in to learn more! Steren Giannini Steren Giannini is a Product Manager on Google Cloud Platform (GCP). He graduated from École Centrale Lyon, France and then was CTO of a startup that created mobile and multi-device solutions. After joining Google, Steren launched Stackdriver Error Reporting and now focuses on GCP’s serverless offering. Recently, Steren has been working on upgrading App Engine’s auto scaling system and bringing Node.js to App Engine standard environment. Stewart Reichling Stewart Reichling is a Product Manager on Google Cloud Platform (GCP). He is a graduate of Georgia Institute of Technology and has worked across Strategy, Marketing and Product Management at Google. He currently works on bringing new runtimes (Python, Node.js, +more to come!) to App Engine and Cloud Functions. 
Cool things of the week Robot dance party: How we created an entire animated short at Next ‘18 blog What’s happening in BigQuery: integrated machine learning, maps, and more blog Protecting against the new “L1TF” speculative vulnerabilities blog Interview App Engine site Deploying Node.js on App Engine standard environment video Introducing headless Chrome support in Cloud Functions and App Engine blog Node 8 site Python 3.7.0 site App Engine PHP 7.2 Runtime Environment Beta site Headless Chrome site GCPPodcast Episode 23: Humble Bundle with Andy Oxfeld podcast Google Cloud Datastore site App Engine Task Queue site Ubuntu site gVisor site Open-sourcing gVisor, a sandboxed container runtime blog App Engine Documentation site gcloud app deploy site To send feedback, email stewartr@google.com or steren@google.com App Engine Google Group forum Operating Serverless Apps with Google Stackdriver video App Engine’s new auto scaling system - scheduler blog Question of the week What does it mean when the recommendation is to update your image? Getting Image Vulnerabilities site Updating Managed Instance Groups site Node Images site Where can you find us next? Melanie will be at Deep Learning Indaba and Strangeloop. Mark will be at Pax Dev and Pax West starting August 28th. In September, he’ll be at Tokyo NEXT and Strangeloop.
In this episode, Bjorn, John and I discuss the various components of the Google Cloud Platform (GCP).
Do you want to know more about digitization and what makes Google Cloud Platform (GCP) different? This time we have a special guest with us: Greg DeMichillie, one of Google's top speakers and a keynote presenter at Google's biggest and most important conferences. He is also frequently called on as an expert by organizations and forums like the G8 or Davos. Greg works daily in Google's Office of the CTO as a Director of Product Management for Google Cloud, and his job is to evangelize opportunities and talk about the future. In this podcast Filip Van Laenen, Alexander Rosbach and Ida Ryland talk with Greg DeMichillie about some of the products and services on the Google Cloud Platform, and about innovation at Google in general.
The delightful Sam Ramji joins Mark and Melanie this week to talk about Google Cloud Platform, Open Source, Distributed Systems and Philosophy and how they are all interrelated. Sam Ramji A 20+ year veteran of the Silicon Valley and Seattle technology scenes, Sam Ramji is VP Product Management for Google Cloud Platform (GCP). He was the founding CEO of Cloud Foundry Foundation, was Chief Strategy Officer for Apigee (APIC), designed and led Microsoft's open source strategy, founded the Outercurve Foundation, and drove product strategy for BEA WebLogic Integration. Previously he built distributed systems and client software at firms including Broderbund, Fair Isaac, and Ofoto. He is an advisor to multiple companies including Accenture, Insight Engines, and the Linux Foundation, and served on the World Economic Forum's Industrial Internet Working Group. He received his B.S. in Cognitive Science from UCSD in 1994. Cool things of the week An example escalation policy — CRE life lessons blog The new Google Arts & Culture, on exhibit now blog Five Days of Kubernetes 1.9 blog Kubernetes Comic site Interview The Case for Learned Index Structures paper CAP Theorem wikipedia Databricks site Spinnaker site Tensor Processing Units site 38 Special - Hold On Loosely youtube Question of the week I would like to run a Google Cloud Function every day/week/hour etc - but there is no cron ability in Cloud Functions (yet?). How can I do this now? Functions Cron github Where can you find us next? Melanie is speaking at AI Congress in London Jan 30th and she will be at FOSDEM in Brussels in Feb. Mark will be at the Game Developer's Conference | GDC in March.
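The question of the week above, running a Cloud Function on a schedule without built-in cron, usually comes down to an HTTP-triggered function that an external scheduler (App Engine cron at the time, or Cloud Scheduler once it launched) hits on a timer. A minimal sketch; the handler follows the Flask-style signature Python Cloud Functions use, and the job body is illustrative:

```python
# Sketch of an HTTP-triggered Cloud Function meant to be invoked on a
# timer by an external scheduler (App Engine cron or Cloud Scheduler).
# The periodic work itself is a placeholder.
from datetime import datetime, timezone

def scheduled_job(request):
    """Entry point: the scheduler simply issues an HTTP request to this URL."""
    ran_at = datetime.now(timezone.utc).isoformat()
    # ... do the periodic work here (cleanup, report, sync, etc.) ...
    return f"job ran at {ran_at}", 200

# Local smoke test: Cloud Functions passes a Flask request object, but this
# handler never reads it, so None suffices for a dry run.
body, status = scheduled_job(None)
print(status)  # 200
```

The Functions Cron project linked in the show notes wires up essentially this pattern; today Cloud Scheduler makes the timer side a managed service as well.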