A special and highly requested rerelease: Dr. Vibha Mahendra, a board-certified Obstetric Anesthesiologist, joins me on this episode to discuss anesthesia options during labor and delivery and why they are sometimes ineffective, potentially leading to trauma for the patient. Dr. M also provides insight into how providers can problem-solve these pain relief issues and communicate with patients in a way that mitigates trauma. This was such an insightful conversation, and she does a great job of educating patients in a way that puts them in a better position to advocate for pain relief, or to communicate with their support people so they can advocate on their behalf.

On this episode, you will hear:
- What is OB anesthesia?
- What's in an epidural?
- Differences between a spinal block and an epidural
- Why are some epidurals ineffective?
- What happens when there is an ineffective spinal during a c-section?
- The decision to convert to general anesthesia and why some doctors delay
- Is it necessary to be restrained during a c-section?

Guest Bio:
Vibha Mahendra, MD is a board-certified Obstetric Anesthesiologist, public health advocate, educator, and founder of SafePartum. Dr. Mahendra is passionate about improving maternal care by directly engaging and empowering all healthcare providers who care for pregnant patients, in a way that aligns with their practice environment and profession. Her professional interests span all of high-risk pregnancy, but she especially loves Obstetric Critical Care and Cardio-Obstetrics! Visit https://www.safepartum.com/academy for more resources on the management of high-risk conditions in pregnancy, labor & delivery, and postpartum. SafePartum also has an upcoming OB Critical Care & Emergencies Course! Here is a link to the registration page to learn more about and sign up for this incredible course coming in September.
https://fun-acorn-864.myflodesk.com/

For more birth trauma content and a community full of love and support, head to my Instagram at @thebirthtrauma_mama. Learn more about the support and services I offer through The Birth Trauma Mama Therapy & Support Services.

Disclaimer: The views and opinions expressed by guests on The Birth Trauma Mama Podcast are their own and do not necessarily reflect the official stance, views, or positions of The Birth Trauma Mama Podcast. The content shared is for informational purposes only and should not be considered professional or medical advice and/or endorsement.
King Mahendra of Nepal remains one of the most controversial and influential figures in Nepal's history. In this podcast, we explore his leadership, diplomacy, and economic policies, diving deep into his role in shaping Nepal's industrialization, tourism, and Panchayat system. Was King Mahendra autocratic, or was he a visionary leader who modernized Nepal? We discuss his political rivalry with BP Koirala, the Panchayat system, and the economic policies of King Mahendra that transformed Nepal. His impact on Nepal's tourism development, including his vision for Mount Everest and Mahendra Cave, played a crucial role in boosting Nepal's economy. A major topic of debate is why Nepal banned cannabis despite its historical and cultural significance. We uncover how the royal family used cannabis legally and the political pressures that led to its prohibition. Additionally, we analyze Indira Gandhi and King Mahendra's diplomatic clashes, shedding light on Nepal's foreign relations history. Was King Mahendra a dictator, or did his leadership bring long-term stability? Join us as we break down his vision, policies, and the lasting impact of Nepal's monarchy. Don't forget to like, comment, and subscribe for more insights into Nepal's political history!
Who were the Rajputs in Nepal, and how did they shape the country's history? In this fascinating podcast, Prof. Mahendra Prasad Singh dives deep into the Nepal Rajput history, tracing their migration to Nepal and their role in the Lichhavi dynasty Nepal. We explore how the Newar Rajputs' origin is linked to the Chauhan dynasty Nepal, why they changed their caste identity, and how the Lichhavi book Nepal provides hidden insights into their rule. Learn about Jaystithi Malla Rajput influence, the Rajputs of Terai Nepal, and how their legacy still exists today. The discussion also covers Prithvi Narayan Shah Rajputs and how he loosened borders to maintain national security. We uncover the Jung Bahadur Rana British connection, the Hada and Chauhan Rajputs, and their impact on Nepal's aristocracy. Plus, we compare Nepal vs India Rajputs, exploring their unique historical paths. What caused the end of the Lichhavi period, and how did it affect the Rajput identity in Nepal? How did the Pradhan Hada Rajputs Nepal shape political power? This episode unveils the Rajput influence on Nepalese history, offering deep insights into their hidden past.
Shakuntala Pradhan is a pioneering classical musician and professor from Dharan, Nepal. One of the first-generation women to sing on Radio Nepal, Pradhan went to Baroda, India, to study classical music. Representing the struggles of the women classical musicians of her generation, Pradhan spoke to SBS Nepali about her musical journey, the challenges faced by women classical musicians of her time, and the state of classical music in academia.
Yusril Ihza Mahendra projected to fill the Coordinating Minister for Law and Human Rights post in Prabowo's Cabinet | Analyst: Ambiguous, Nasdem afraid of being in opposition | Vice President to attend the Prabowo-Gibran inauguration. *We would like to hear your suggestions and comments on the podcast you have just listened to, via email to podcast@kbrprime.id
This week on rabble radio, labour reporter Gabriela Calugay-Casuga sits down with Mahendra Pandey. Mahendra shares his experience as a former migrant worker in Saudi Arabia, as well as his work organizing migrant workers today.

About our guest

Mahendra Pandey is the senior manager of forced labor and human trafficking at Humanity United. Through this role, he focuses on the human trafficking in labor migration portfolio. Before getting involved in advocacy work, Pandey worked in Saudi Arabia as a migrant worker and experienced first-hand the poor working conditions that many Nepali migrant workers face. While in Saudi Arabia, he developed a Nepali migrant rights network, the Pravasi Nepali Coordination Committee (PNCC). Pandey holds a master's degree in digital media and storytelling from American University in Washington, D.C. and recently completed a leadership, organizing, and action course at Harvard University. To learn more about Humanity United, visit this link.

If you like the show, please consider subscribing on Apple Podcasts, Spotify, YouTube, or wherever you find your podcasts. And please rate, review, and share rabble radio with your friends — it takes two seconds to support independent media like rabble. Follow us on social media across channels @rabbleca.
Mahendra Sekaran, Corporate Vice President, Teams Phone, M365 at Microsoft, discusses the journey of Microsoft Teams, his role, understanding IC3 (Intelligent Conversations and Communications Cloud) and ACS (Azure Communication Services) and how they relate to Microsoft Teams, and leveraging AI in development.

- The Microsoft Teams journey and Mahendra's perspective
- Understanding IC3, ACS and Microsoft Teams PSTN connectivity
- Thoughts on Teams Phone Mobile
- The importance of reliability, security and customer feedback in driving improvements
- Future developments, including leveraging AI to enhance communication experiences and simplify telephony adoption

Thanks to Neat, this episode's sponsor, for their continued support.
Send us a Text Message.

The Chef JKP Podcast, hosted by James Knight-Paccheco, features an insightful interview with Panchali Mahendra, CEO of Atelier House Hospitality. They discuss various aspects of the hospitality industry, including childhood food memories, career transitions, challenges faced, and predictions for the restaurant industry. Panchali shares valuable insights on managing successful restaurants, consultancy roles, team dynamics, and the importance of sustainability in the industry.

Topics discussed:
- Journey in the Hospitality Industry
- Challenges Faced
- Consultancy in Hospitality
- Business Ideas and Project Development
- Restaurant Project Execution
- Financial Planning and Marketing
- Leadership and Female Representation
- Sustainability and Industry Predictions
- Restaurant Landscape in the UAE

Here are key takeaways and lessons:
- Passion over pride projects, and the importance of patience, listening, and leadership in managing diverse personalities in a team.
- Strategic budget allocation for marketing, and the significance of staggered restaurant openings for quality focus.
- Creating a family-like environment for employees, transparency, and respect for labour contribute to low attrition rates.
- The industry predicts a shift towards smaller, lower-CapEx restaurants, luxury markets, and fine casual dining.

Follow Panchali HERE
This show is brought to you by Chef Middle East, follow them HERE
Support the Show.
Follow The Chef JKP Podcast on Instagram HERE
In the season's final episode, hosts Lois Houston and Nikita Abraham interview senior OCI instructor Mahendra Mehra about the security practices that are vital for OKE clusters on OCI. Mahendra shares his expert insights on the importance of Kubernetes security, especially in today's digital landscape where the integrity of data and applications is paramount. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we spoke about self-managed nodes and how you can manage Kubernetes deployments. Nikita: Today is the final episode of this series on OCI Container Engine for Kubernetes. We're going to look at the security side of things and discuss how you can implement vital security practices for your OKE clusters on OCI, and safeguard your infrastructure and data. 00:59 Lois: That's right, Niki! We can't overstate the importance of Kubernetes security, especially in today's digital landscape, where the integrity of your data and applications is paramount. 
With us today is senior OCI instructor, Mahendra Mehra, who will take us through Kubernetes security and compliance practices. Hi Mahendra! It's great to have you here. I want to jump right in and ask you, how can users add a service account authentication token to a kubeconfig file? Mahendra: When you set up the kubeconfig file for a cluster, by default, it contains an Oracle Cloud Infrastructure CLI command to generate a short-lived, cluster-scoped, user-specific authentication token. The authentication token generated by the CLI command is appropriate to authenticate individual users accessing the cluster using kubectl and the Kubernetes Dashboard. However, the generated authentication token is not appropriate to authenticate processes and tools accessing the cluster, such as continuous integration and continuous delivery tools. To ensure access to the cluster, such tools require long-lived non-user-specific authentication tokens. One solution is to use a Kubernetes service account. Having created a service account, you bind it to a cluster role binding that has cluster administration permissions. You can create an authentication token for this service account, which is stored as a Kubernetes secret. You can then add the service account as a user definition in the kubeconfig file itself. Other tools can then use this service account authentication token when accessing the cluster. 02:47 Nikita: So, as I understand it, adding a service account authentication token to a kubeconfig file enhances security and enables automated tools to interact seamlessly with your Kubernetes cluster. So, let's talk about the permissions users need to access clusters they have created using Container Engine for Kubernetes. Mahendra: For most operations on Container Engine for Kubernetes clusters, IAM leverages the concept of groups. A user's permissions are determined by the IAM groups they belong to, including dynamic groups. 
The access rights for these groups are defined by policies. IAM provides granular control over various cluster operations, such as the ability to create or delete clusters; add, remove, or modify node pools; and dictate the create, delete, and view operations a user can perform on Kubernetes objects. All these controls are specified at the group and policy levels. In addition to IAM, the Kubernetes role-based access control authorizer can enforce additional fine-grained access control for users on specific clusters via Kubernetes RBAC Roles and ClusterRoles. 04:03 Nikita: What are Kubernetes RBAC Roles and ClusterRoles, Mahendra? Mahendra: A Role defines permissions for resources within a specific namespace, while a ClusterRole is a global object that grants access to global objects as well as non-resource URLs, such as the API version and health endpoints on the API server. Kubernetes RBAC also includes RoleBindings and ClusterRoleBindings. A RoleBinding grants permissions to subjects, which can be users, services, or groups interacting with the Kubernetes API. It specifies the operations allowed for a given subject in the cluster. A RoleBinding is always created in a specific namespace. When associated with a Role, it grants users the permissions specified within that Role for objects within that namespace. When associated with a ClusterRole, it grants access only to the namespaced objects defined in that ClusterRole, within the RoleBinding's namespace. A ClusterRoleBinding, on the other hand, is a global object. It associates ClusterRoles with users, groups, and service accounts, but it cannot be associated with a namespaced Role. A ClusterRoleBinding is used to provide access to global objects, non-namespaced objects, or to namespaced objects in all namespaces. 05:36 Lois: Mahendra, what's IAM's role in this? How do IAM and Kubernetes RBAC work together? Mahendra: IAM provides broader permissions, while Kubernetes RBAC offers fine-grained control.
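The objects described above, together with the service account token approach mentioned earlier in the conversation, can be sketched as Kubernetes manifests. This is a minimal illustration, not an OKE-specific recipe: the service account name, namespace, and secret name are hypothetical, and cluster-admin is used only because the episode mentions binding CI/CD tooling to a role with cluster administration permissions.

```yaml
# Hypothetical service account for CI/CD tooling.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-tooling
  namespace: kube-system
---
# Long-lived, non-user-specific token for the service account,
# stored as a Kubernetes secret of type service-account-token.
apiVersion: v1
kind: Secret
metadata:
  name: ci-tooling-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: ci-tooling
type: kubernetes.io/service-account-token
---
# A ClusterRoleBinding grants the ClusterRole globally to the subject;
# a RoleBinding in a namespace would scope the same grant to that namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ci-tooling-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: ci-tooling
  namespace: kube-system
```

The token from the secret can then be added as a user entry in the kubeconfig file so that external tools authenticate as the service account rather than as an individual user.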
Users authorized either by IAM or Kubernetes RBAC can perform Kubernetes operations. When a user attempts to perform any operation on a cluster, except for the create Role and create ClusterRole operations, IAM first determines whether a group or dynamic group to which the user belongs has the appropriate and sufficient permissions. If so, the operation succeeds. If the attempted operation also requires additional permissions granted via a Kubernetes RBAC Role or ClusterRole, the Kubernetes RBAC authorizer then determines whether the user or group has been granted the appropriate Kubernetes Role or ClusterRole. 06:41 Lois: OK. What kind of permissions do users need to define custom Kubernetes RBAC Roles and ClusterRoles? Mahendra: It's common to define custom Kubernetes RBAC Roles and ClusterRoles for precise control. To create these, a user must have existing Roles or ClusterRoles with equal or higher privileges. By default, users don't have any RBAC roles assigned, but there are default ClusterRoles, like cluster-admin, which grants super user privileges. 07:12 Nikita: I want to ask you about securing and handling sensitive information within Kubernetes clusters, and ensuring a robust security posture. What can you tell us about this? Mahendra: When creating Kubernetes clusters using OCI Container Engine for Kubernetes, there are two fundamental approaches to storing application secrets. We can opt for storing and managing secrets in an external secrets store, accessed seamlessly through the Kubernetes Secrets Store CSI driver. Alternatively, we have the option of storing Kubernetes secret objects directly in etcd. 07:53 Lois: OK, let's tackle them one by one. What can you tell us about the first method, storing secrets in an external secret store? Mahendra: This integration allows Kubernetes clusters to mount multiple secrets, keys, and certificates into pods as volumes.
The Kubernetes Secrets Store CSI driver facilitates seamless integration between our Kubernetes clusters and external secret stores. With the Secrets Store CSI driver, our Kubernetes clusters can mount and manage multiple secrets, keys, and certificates from external sources. These are accessible as volumes, making it easy to incorporate them into our application containers. OCI Vault is a notable external secrets store. And Oracle provides the Oracle Secrets Store CSI driver provider to enable Kubernetes clusters to seamlessly access secrets stored in Vault. 08:54 Nikita: And what about the second method? How can we store secrets as Kubernetes secret objects in etcd? Mahendra: In this approach, we store and manage our application secrets using Kubernetes secret objects. These objects are directly managed within etcd, the distributed key value store used for Kubernetes cluster coordination and state management. In OKE, etcd reads and writes data to and from block storage volumes in OCI block volume service. By default, OCI ensures security of our secrets and etcd data by encrypting it at rest. Oracle handles this encryption automatically, providing a secure environment for our secrets. Oracle takes responsibility for managing the master encryption key for data at rest, including etcd and Kubernetes secrets. This ensures the integrity and security of our stored secrets. If needed, there are options for users to manage the master encryption key themselves. 10:06 Lois: OK. We understand that managing secrets is a critical aspect of maintaining a secure Kubernetes environment, and one that users should not take lightly. Can we talk about OKE Container Image Security? What essential characteristics should container images possess to fortify the security posture of a user's applications? Mahendra: In the dynamic landscape of containerized applications, ensuring the security of containerized images is paramount. 
It is not uncommon for the operating system packages included in images to have vulnerabilities. Managing these vulnerabilities enables you to strengthen the security posture of your system and respond quickly when new vulnerabilities are discovered. You can set up Oracle Cloud Infrastructure Registry, also known as Container Registry, to scan images in a repository for security vulnerabilities published in the publicly available Common Vulnerabilities and Exposures Database. 11:10 Lois: And how is this done? Is it automatic? Mahendra: To perform image scanning, Container Registry makes use of the Oracle Cloud Infrastructure Vulnerability Scanning Service and Vulnerability Scanning REST API. When new vulnerabilities are added to the CVE database, the container registry initiates automatic rescanning of images in repositories that have scanning enabled. 11:41 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 12:20 Nikita: Welcome back! Mahendra, what are the benefits of image scanning? Mahendra: You can gain valuable insights into each image scan conducted over the past 13 months. This includes an overview of the number of vulnerabilities detected and an overall risk assessment for each scan. Additionally, you can delve into comprehensive details of each scan featuring descriptions of individual vulnerabilities, their associated risk levels, and direct links to the CVE database for more comprehensive information. This historical and detailed data empowers you to monitor, compare, and enhance image security over time. 
You can also disable image scanning on a particular repository by removing the image scanner. 13:11 Nikita: Another characteristic that container images should have is unaltered integrity, right? Mahendra: For compliance and security reasons, system administrators often want to deploy software into a production system only when they are satisfied that the software has not been modified since it was published, which would compromise its integrity. Ensuring the unaltered integrity of software is paramount for compliance and security in production environments. 13:41 Lois: Mahendra, what are the mechanisms that guarantee this integrity within the context of Oracle Cloud Infrastructure? Mahendra: Image signatures play a pivotal role in not only verifying the source of an image but also ensuring its integrity. Oracle's Container Registry facilitates this process by allowing users or systems to push images and sign them using a master encryption key sourced from the OCI Vault. It's worth noting that an image can have multiple signatures, each associated with a distinct master encryption key. These signatures are uniquely tied to an image OCID, providing granularity to the verification process. Furthermore, the process of image signing mandates the use of an RSA asymmetric key from the OCI Vault, ensuring a robust and secure validation of the image's unaltered integrity. 14:45 Nikita: In the context of container images, how can users ensure the use of trusted sources within OCI? Mahendra: System administrators need the assurance that the software being deployed in a production system originates from a source they trust. Signed images play a pivotal role, providing a means to verify both the source and the integrity of the image. To further strengthen this, administrators can create image verification policies for clusters, specifying which master encryption keys must have been used to sign images.
This enhances security by configuring Container Engine for Kubernetes clusters to allow the deployment of images signed with specific encryption keys from Oracle Cloud Infrastructure Registry. Users or systems retrieving signed images from OCIR can trust the source and be confident in the image's integrity. 15:46 Lois: Why is it imperative for users to use signed images from Oracle Cloud Infrastructure Registry when deploying applications to a Container Engine for Kubernetes cluster? Mahendra: This practice is crucial for ensuring the integrity and authenticity of the deployed images. To achieve this enforcement, it's important to note that an image in OCIR can have multiple signatures, each linked to a different master encryption key. This multikey association adds layers of security to the verification process. A cluster's image verification policy comes into play, allowing administrators to specify up to five master encryption keys. This policy serves as a guideline for the cluster, dictating which keys are deemed valid for image signatures. If a cluster's image verification policy doesn't explicitly specify encryption keys, any signed image can be pulled regardless of the key used. Any unsigned image can also be pulled, potentially compromising the security measures. 16:56 Lois: Mahendra, can you break down the essential permissions required to bolster security measures within a user's OKE clusters? Mahendra: To enable clusters to include a master encryption key in image verification policies, you must give clusters permission to use keys from OCI Vault. For example, to grant this permission to a particular cluster in the tenancy, we must use a policy that allows any user to use keys in the tenancy where request.user.id is set to the cluster's OCID. Additionally, for clusters to seamlessly pull signed images from Oracle Cloud Infrastructure Registry, it's vital to provide permissions for accessing repositories in OCIR.
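As a rough sketch, the two grants just described might look like the following IAM policy statements. The cluster OCID is a placeholder, and the exact resource types (keys, repos) and verbs are an assumption to be checked against the OCI policy reference for your tenancy:

```
# Let a specific cluster (identified by its OCID) use Vault keys,
# so it can evaluate signatures named in its image verification policy.
Allow any-user to use keys in tenancy
  where request.user.id = 'ocid1.cluster.oc1..<placeholder>'

# Let the same cluster read container image repositories in OCIR,
# so it can pull the signed images it is allowed to deploy.
Allow any-user to read repos in tenancy
  where request.user.id = 'ocid1.cluster.oc1..<placeholder>'
```

Scoping the statements to a compartment rather than the whole tenancy is the usual way to narrow these grants further.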
17:43 Lois: I know this may sound like a lot, but OKE container image security is vital for safeguarding your containerized applications. Thank you so much, Mahendra, for being with us through the season and taking us through all of these important concepts. Nikita: To learn more about the topics covered today, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham… Lois Houston: And Lois Houston, signing off! 18:16 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In this episode, hosts Lois Houston and Nikita Abraham speak with senior OCI instructor Mahendra Mehra about the capabilities of self-managed nodes in Kubernetes, including how they offer complete control over worker nodes in your OCI Container Engine for Kubernetes environment. They also explore the various options that are available to effectively manage your Kubernetes deployments. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast! I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Last week, we discussed how OKE virtual nodes can offer you a complete serverless Kubernetes experience. Nikita: Yeah, and in today's episode, we'll focus on self-managed nodes, where you get complete control over the worker nodes within your OKE environment. We'll also talk about how you can manage your Kubernetes deployments. 00:57 Lois: To tell us more about this, we have Mahendra Mehra, a senior OCI instructor with Oracle University. Hi Mahendra! Welcome back! Let's get started with self-managed nodes. Can you tell us what they are? 
Mahendra: In Container Engine for Kubernetes, a self-managed node is essentially a worker node that you personally create and host on a compute instance or instance pool within the compute service. Unlike managed nodes or virtual nodes, self-managed nodes are not grouped into node pools by default. They are often referred to as Bring Your Own Nodes, abbreviated as BYON. If you wish to streamline administration and manage multiple self-managed nodes collectively, you can utilize the compute service to create a compute instance pool for hosting these nodes. This allows for greater flexibility and customization in your Kubernetes environment. 01:58 Nikita: Mahendra, what are some practical usage scenarios for OKE self-managed nodes? Mahendra: These nodes offer a range of advantages for specific use cases. Firstly, for specialized workloads, leveraging the compute service allows you to configure compute instances with shape and image combinations that may not be available for managed nodes or virtual nodes. This includes options like GPU shapes for hardware-accelerated workloads or high-frequency processor cores for demanding high-performance computing tasks. Secondly, if you require complete control over your compute instance configuration, self-managed nodes are the ideal choice. This gives you the flexibility to tailor each node to your specific requirements. Additionally, self-managed nodes are particularly well suited for Oracle Cloud Infrastructure cluster networks. These nodes provide high-bandwidth, low-latency RDMA connectivity, making them a preferred option for certain networking setups. Lastly, the use of compute instance pools with self-managed nodes enables the creation of infrastructure for handling complex distributed computing tasks. This can greatly enhance the efficiency of your Kubernetes environment. Consider these points carefully to determine the optimal use of OKE self-managed nodes in your deployments.
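For context, a self-managed node typically joins its cluster through a cloud-init script supplied to the compute instance at creation time. The sketch below follows the bootstrap pattern in Oracle's documentation as best understood here; the endpoint and CA certificate values are placeholders, and the script path and flags should be verified against the current OKE docs for your image version:

```shell
#!/usr/bin/env bash
# Cloud-init sketch for a self-managed OKE node (values are placeholders).
# The API server endpoint and the base64-encoded cluster CA certificate
# come from the enhanced cluster this node is joining.
bash /etc/oke/oke-install.sh \
  --apiserver-endpoint "10.0.0.10" \
  --kubelet-ca-cert "<base64-encoded-cluster-ca-certificate>"
```

Because OKE does not check version skew for you, the Kubernetes version baked into the instance's image must stay within the allowed difference from the control plane version discussed below.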
03:30 Lois: What do we need to consider before creating a self-managed node and integrating it into a cluster? Mahendra: There are two crucial aspects to address. Firstly, you need to confirm that the cluster to which you plan to add a self-managed node is configured appropriately. Secondly, it's essential to choose the right image for the compute instance hosting the self-managed node. 03:53 Nikita: Can you dive a little deeper into these prerequisites? Mahendra: To successfully integrate a self-managed node into your cluster, you must ensure that the cluster is an enhanced cluster. This is a crucial prerequisite for the addition of self-managed nodes. The flannel CNI plugin for pod networking should be utilized, not the VCN-native pod networking CNI plugin. This ensures optimal pod networking for your self-managed nodes. The control plane nodes of the cluster must be running Kubernetes version 1.25 or later. This is essential for compatibility and optimal performance. Lastly, maintain compatibility between the Kubernetes versions on the control plane nodes and worker nodes, with a maximum allowable difference of two minor versions. This ensures a smooth and stable operation of your Kubernetes environment. Keep these cluster requirements in mind as you prepare to add self-managed nodes to your OKE cluster. 04:55 Lois: What about the image requirements when creating self-managed nodes? Mahendra: Choose either an Oracle Linux 7 or an Oracle Linux 8 image for your self-managed nodes. Ensure that the selected image has a release date of March 28, 2023 or later. Obtain the image OCID, also known as the Oracle Cloud Identifier, from the respective sources. When specifying an image, be mindful of the Kubernetes version it contains. It's your responsibility to select an image with a Kubernetes version that aligns with the Kubernetes version skew support policy. Keep in mind that Container Engine for Kubernetes does not automatically check the compatibility.
So it's up to you to ensure harmony between the Kubernetes version on the self-managed node and the cluster's control plane nodes. These considerations will help you make informed choices when configuring images for your self-managed nodes. 05:57 Nikita: I really like the flexibility and customization OKE self-managed nodes offer. Now I want to switch gears a little and ask you about OCI Service Operator for Kubernetes. Can you tell us a bit about it? Mahendra: OCI Service Operator for Kubernetes is an open-source Kubernetes add-on that transforms the way we manage and connect OCI resources within our Kubernetes clusters. This powerful operator enables you to effortlessly create, configure, and interact with OCI resources directly from your Kubernetes environment, eliminating the need for constant navigation between the Oracle Cloud Infrastructure Console, CLI, or other tools. With the OCI Service Operator, you can seamlessly leverage kubectl to call the operator framework APIs, providing a streamlined and efficient workflow. 06:53 Lois: On what framework is the OCI Service Operator built? Mahendra: OCI Service Operator for Kubernetes is built using the open-source Operator Framework toolkit. The Operator Framework manages Kubernetes-native applications called operators in an effective, automated, and scalable way. The Operator Framework comprises essential components like Operator SDK. This leverages the Kubernetes controller-runtime library, providing high-level APIs and abstractions for writing operational logic. Additionally, it offers tools for scaffolding and code generation. 07:35 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. 
Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 08:14 Nikita: Welcome back! Mahendra, are there any other components within OCI Service Operator to manage Kubernetes deployments? Mahendra: The other essential component is Operator Lifecycle Manager, also abbreviated as OLM. OLM extends Kubernetes by introducing a declarative approach to install, manage, and upgrade operators within a cluster. The OCI Service Operator for Kubernetes is intelligently packaged as an Operator Lifecycle Manager bundle, simplifying the installation process on Kubernetes clusters. This comprehensive bundle encapsulates all necessary objects and definitions, including CRDs, RBACs, ConfigMaps, and deployments, making it effortlessly deployable on a cluster. 09:02 Lois: So much that users can take advantage of! What about OCI Service Operator's integration with other OCI services? Mahendra: One of its standout features is its seamless integration with a range of OCI services. The first one is Autonomous Database, specifically tailored for transaction processing, mixed workloads, analytics, and data warehousing. Enjoy automated patching, upgrades, and tuning, allowing routine maintenance tasks to be performed without human intervention. The next on the list is MySQL HeatWave, a fully-managed Database Service designed for developing and deploying secure cloud-native applications using widely adopted MySQL open-source database. Third on the list is OCI Streaming service. Experience a fully managed, scalable, and durable solution for ingesting and consuming high-volume data streams in real time. Next is Service Mesh. This service offers a set of capabilities to facilitate communication among microservices within a cloud-native application. The communication is centrally managed and secured, ensuring a smooth and secure interaction. 
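In practice, "calling the operator framework APIs with kubectl" means applying custom resources that the operator reconciles into OCI resources. Purely as an illustration of the shape such a resource takes — the real group, version, kind, and spec fields are defined by the operator's CRDs, so treat every name below as a placeholder:

```python
import json

# Hypothetical custom resource for an OCI Streaming stream; the actual
# apiVersion, kind, and spec fields come from the installed CRDs.
stream_cr = {
    "apiVersion": "oci.oracle.com/v1beta1",   # assumed group/version
    "kind": "Stream",                          # assumed kind
    "metadata": {"name": "orders-stream"},     # placeholder name
    "spec": {"partitions": 3},                 # placeholder field
}

# kubectl accepts JSON as well as YAML, so a manifest like this could be
# piped to `kubectl apply -f -` once its fields match the installed CRD.
print(json.dumps(stream_cr, indent=2))
```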
The OCI Service Operator for Kubernetes serves as a versatile bridge, seamlessly connecting your Kubernetes clusters with these powerful Oracle Cloud Infrastructure services. 10:31 Nikita: That's awesome! I've also heard about Ingress Controllers. Can you tell us what they are? Mahendra: A Kubernetes Ingress Controller serves as the enforcer of rules defined in a Kubernetes Ingress. Its primary role is to manage, load balance, and route incoming traffic to specific service pods residing on worker nodes within the cluster. At the heart of this process is the Kubernetes Ingress Resource. Think of it as a blueprint, a rich configuration holding routing rules and options, specifically crafted for handling HTTP and HTTPS traffic. It serves as a powerful orchestrator for managing external communication with services inside the cluster. 11:15 Lois: Mahendra, how do Ingress Controllers bring about efficiency? Mahendra: Efficiency comes with consolidation. With a single ingress resource, you can neatly gather routing rules for multiple services. This eliminates the need to create a Kubernetes service of type LoadBalancer for each service seeking external or private network traffic. The OCI native ingress controller is a powerhouse. It crafts an OCI Flexible Load Balancer, your gateway to efficient request handling. The OCI native ingress controller seamlessly adapts to changes in routing rules with real-time updates. 11:53 Nikita: And what about integration with an OKE cluster? Mahendra: Absolutely. It harmonizes with the cluster for streamlined traffic management. Operating as a single pod on a randomly selected worker node, it ensures a balanced workload distribution. 12:08 Lois: Moving on, let's talk about running applications on ARM-based nodes and GPU nodes. We'll start with ARM-based nodes. Mahendra: Typically, developers use ARM-based worker nodes in Kubernetes cluster to develop and test applications. 
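The consolidation described above looks like this in practice: one Ingress resource carrying routing rules for several services, instead of one LoadBalancer service each. This is a standard `networking.k8s.io/v1` Ingress built as a Python dict (the service names and paths are hypothetical; kubectl accepts JSON manifests as well as YAML):

```python
import json

ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "demo-routes"},  # hypothetical name
    "spec": {
        "rules": [{
            "http": {
                "paths": [
                    # Two services share one external entry point.
                    {"path": "/shop", "pathType": "Prefix",
                     "backend": {"service": {"name": "shop-svc",
                                             "port": {"number": 80}}}},
                    {"path": "/blog", "pathType": "Prefix",
                     "backend": {"service": {"name": "blog-svc",
                                             "port": {"number": 80}}}},
                ]
            }
        }]
    },
}

print(json.dumps(ingress, indent=2))
```

With the OCI native ingress controller installed, a resource like this is what gets realized as an OCI Flexible Load Balancer.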
Selecting the right infrastructure is crucial for optimal performance. 12:28 Nikita: What kind of options do developers have when running applications on ARM-based nodes? Mahendra: When it comes to running applications on ARM-based nodes, you have a range of options at your fingertips. First up, consider the choice between ARM-based bare metal shapes and flexible VM shapes. Each comes with its own unique advantages. Now, let's talk about the heart of it all, the Ampere A1 Compute instances. These instances are driven by the cutting-edge Ampere Altra processor, ensuring high performance and efficiency for your workloads. You must specify the ARM-based node pool shapes during cluster or node pool creation. Whether you choose to navigate through the user-friendly console, leverage the flexibility of the API, or command with precision through the CLI, the process remains seamless. 13:23 Lois: Can you define pods to run exclusively on ARM-based nodes within a heterogeneous cluster setup? Mahendra: In scenarios where a cluster comprises node pools with ARM-based shapes alongside other shapes, such as AMD64, you can employ a powerful tool called node selector in the pod specification. This allows you to precisely dictate that an application should exclusively run on ARM-based worker nodes, ensuring your workloads align with the desired architecture. 13:55 Nikita: And before we end this episode, can you explain why developers must run applications on GPU nodes? Mahendra: Originally designed for graphics manipulations, GPUs prove highly efficient in parallel data processing. This makes them a top choice for deploying data-intensive applications. Our GPU nodes utilize cutting-edge NVIDIA graphics cards, ensuring efficient and powerful data processing. Seamless access to this computing prowess is made possible through CUDA libraries. To ensure smooth integration, be sure to select a GPU shape and opt for an Oracle Linux GPU image preloaded with the essential CUDA libraries. 
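The node selector mentioned above uses the standard `kubernetes.io/arch` node label. A minimal pod spec sketch (the pod name and image are placeholders) that pins a workload to ARM-based worker nodes in a mixed-shape cluster:

```python
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "arm-only-demo"},  # hypothetical name
    "spec": {
        # Well-known label reported by the kubelet; restricts scheduling
        # to arm64 (e.g. Ampere A1) worker nodes only.
        "nodeSelector": {"kubernetes.io/arch": "arm64"},
        "containers": [{"name": "app", "image": "nginx:alpine"}],
    },
}

print(pod["spec"]["nodeSelector"])
```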
CUDA here is Compute Unified Device Architecture, which is a parallel computing platform and application-programming interface model created by NVIDIA. It allows developers to use NVIDIA graphics-processing units for general-purpose processing, rather than just rendering graphics. 14:57 Nikita: Thank you, Mahendra, for another insightful session. We appreciate you joining us today. Lois: For more information on everything we discussed, go to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. You'll find plenty of demos and skill checks to supplement your learning. Join us next week when we'll discuss vital security practices for your OKE clusters on OCI. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 15:28 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Want to gain insights into how virtual nodes provide a serverless Kubernetes experience? Join hosts Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, as they compare managed nodes and virtual nodes. Continuing from the previous episode, they explore how virtual nodes enhance Kubernetes deployments in Oracle Cloud Infrastructure. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hey everyone! In our last episode, we examined OCI Container Engine for Kubernetes, including its key features and benefits. Lois: Yeah, that was an interesting one. Today, we're going to discuss virtual nodes and their role in enhancing Kubernetes deployments in Oracle Cloud Infrastructure. Nikita: We're going to compare virtual nodes and managed nodes, and look at their differences and advantages. To take us through all this, we have Mahendra Mehra with us. Mahendra is a senior OCI instructor with Oracle University. 01:09 Lois: Hi Mahendra! 
From our discussion last week, we know that when creating a node pool with Container Engine for Kubernetes, we have the option of specifying the type of Oracle nodes as either managed nodes or virtual nodes. But I'm sure there are some key differences in the features supported by each type, right? Mahendra: The primary point of differentiation between virtual nodes and managed nodes is in their management approach. When it comes to managed nodes, users are responsible for managing the nodes. They have the flexibility to configure them to meet their specific requirements. Users are also responsible for upgrading Kubernetes on managed nodes and for managing cluster capacity. You can create managed nodes and node pools in both basic clusters and enhanced clusters, whereas virtual nodes provide a serverless Kubernetes experience, enabling users to run containerized applications at scale. The Kubernetes software is upgraded and security patches are applied while respecting the application's availability requirements. You can only create virtual nodes and virtual node pools in enhanced clusters. 02:17 Nikita: What about differences in terms of resource allocation? Are there any differences we should be aware of? Mahendra: When it comes to managed nodes, the resource allocation is at the node pool level, and users specify CPU and memory resource requirements for a given node pool. With virtual nodes, the resource allocation is done at the pod level, where you can specify the CPU and memory resource requirements, but this time, as requests and limits in the pod specification. 02:45 Lois: What about differences in the approach to load balancing? Mahendra: When it comes to managed nodes, load balancing is between the worker nodes, whereas in virtual nodes, load balancing is between pods. Also, load balancer security list management is never enabled, and you always must manually configure security rules. 
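With virtual nodes, the requests and limits in the pod specification are what drive allocation (and, as discussed later, billing). A sketch of that per-container stanza, with placeholder names and values:

```python
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "vn-app"},  # hypothetical name
    "spec": {
        "containers": [{
            "name": "app",
            "image": "nginx:alpine",  # placeholder image
            # On virtual nodes, these per-pod values, not a node pool
            # shape, determine the resources provisioned.
            "resources": {
                "requests": {"cpu": "500m", "memory": "1Gi"},
                "limits":   {"cpu": "1",    "memory": "2Gi"},
            },
        }],
    },
}
```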
When using virtual nodes, load balancers distribute traffic directly among pod IP addresses rather than node ports. 03:12 Lois: And when it comes to pod networking? Mahendra: Under managed nodes, both the VCN-Native Pod Networking CNI plugin and the flannel CNI plugin are supported. When it comes to virtual nodes, only VCN-Native Pod Networking is supported. Also, only one VNIC is attached to each virtual node. Remember, IP addresses are not pre-allocated before pods are created. And the VCN-Native Pod Networking CNI plugin is not shown as running in the kube-system namespace. Pod subnet route tables must have route rules defined for a NAT gateway and a service gateway. 03:48 Nikita: OK… I have a question, Mahendra. When it comes to scaling Kubernetes clusters and node pools, can users adjust the cluster capacity in response to their changing requirements? Mahendra: When it comes to managed nodes, customers can scale the cluster and node pool up and down by changing the number of managed node pools and nodes, respectively. They also have an option to enable autoscaling to automatically scale managed node pools and pods. When it comes to virtual nodes, the operational overhead of cluster capacity management is handled for you by OCI. A virtual node pool scales automatically and can support up to 1,000 pods per virtual node. Users also have an option to increase the number of virtual node pools or virtual nodes to scale up the cluster or node pool, respectively. 04:37 Lois: And what about the pricing for each? Mahendra: Under managed nodes, you pay for the compute instances that execute applications, whereas under virtual nodes, you pay for the exact compute resources consumed by each Kubernetes pod. 04:55 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! 
So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 05:34 Nikita: Welcome back! We were just discussing how when you have to choose between virtual nodes and managed nodes for your Kubernetes cluster, you need to consider several key points of differentiation, like the management approach, resource allocation, load balancing, pod networking, scaling, and pricing. Lois: Yeah, it's important to understand the benefits and drawbacks of each approach to make informed decisions. Mahendra, now let's talk about the prerequisites to configure clusters with virtual nodes and the IAM policies that are required to use virtual nodes. Mahendra: Before you can use virtual nodes, you always have to set up at least one IAM policy, which is required in all circumstances by both tenancy administrators and non-administrator users. This basically means, to create and use clusters with virtual nodes and virtual node pools, you must endorse Container Engine for Kubernetes service to allow virtual nodes to create container instances in the Container Engine for Kubernetes service tenancy with a VNIC connected to a subnet of a VCN in your tenancy. All you need to do is create a policy in the root compartment with policy statements from the official documentation page. You will find them under the Working with Virtual Nodes section within the Container Engine topic. 06:55 Lois: Mahendra, how do you create and configure virtual nodes and virtual node pools? Mahendra: Creating virtual nodes is a pivotal step and it involves setting up a virtual node pool in a new cluster. This is exclusively applicable to enhanced clusters. You can initiate this process using the console, the CLI, or the API. Configuring your virtual node pools involves defining critical parameters. 
Firstly, we have the node count. This represents the number of virtual nodes you wish to create within your virtual node pool. These nodes will be strategically placed in the availability domains that you specify. Now, it's important to carefully consider the placement of these nodes. You can distribute them across different availability domains, ensuring high availability for your applications. Additionally, you have the option to place these nodes in a regional subnet, which is the recommended approach for optimal performance. 07:53 Nikita: Isn't the pod shape another important parameter? Can you tell us a bit about it? Mahendra: Pod shape refers to the type of shape you want for pods running on your virtual nodes within the virtual node pool. The pod shape is crucial as it determines the processor type on which you want your pods to run. It is important to note that only shapes available in your tenancy and supported by Container Engine for Kubernetes will be shown. So choose a shape that aligns with the requirements of your applications and services. A noteworthy point is that you explicitly specify the CPU and memory resource requirements for virtual nodes in the pod specification file. This ensures that your virtual nodes have the necessary resources to handle the workloads of your applications. Precision in specifying these requirements is key to achieving optimal performance. 08:49 Lois: What is the network setup for virtual nodes? Mahendra: The pod running on virtual nodes utilize VCN-native pod networking, and it's crucial to specify how these pods in the node pool communicate with each other. This involves setting up a pod subnet, which is a regional subnet configured specially to host pods. The pod subnet you specify for virtual nodes must be private. Oracle recommends that the pod subnet and the virtual node subnets are the same. 
In addition to subnet configurations, you have the option to use security rules in network security groups (NSGs) to control access to the pod subnet. This involves defining security rules within one or more NSGs that you specify, up to a maximum of five network security groups. Also, it is worth noting that using network security groups is recommended over using security lists. Now, let's shift our focus to virtual node communication. For this, you will configure a virtual node subnet. This subnet can be either a regional subnet, which is recommended, or an availability domain-specific subnet. And it's designed to host your virtual nodes. 10:02 Nikita: What are some key considerations for virtual node subnets? Mahendra: If you've specified load balancer subnets, ensure that the virtual node subnets are different. As with pod communication, Oracle recommends that the pod subnet and the virtual node subnet are the same, with the added condition that the virtual node subnet must be private. 10:23 Lois: Mahendra, can you take us through the fundamental tasks involved in managing virtual nodes and virtual node pools? Mahendra: Whether you're creating a new enhanced cluster using the Console, or looking to scale up an existing one, the creation process is versatile. Creating virtual nodes involves establishing a virtual node pool. Virtual nodes can only be created within enhanced clusters. The listing virtual nodes task offers visibility into virtual nodes within a virtual node pool. Whether you prefer the Console, the CLI, or the API, you have the flexibility to choose the method that suits your workflow best. For a comprehensive understanding of your virtual node pools, navigate to the Cluster List page, and click on the name of the cluster. This will unveil the specifics of the virtual node pool you are interested in. Now let's talk about updating virtual node pools. 
Whether you're initiating a new enhanced cluster or expanding an existing one, the update process ensures your cluster aligns with your evolving requirements. You can easily update the virtual node pool's name for clarity. You can also dynamically change the number of virtual nodes to meet the workload demands, and you can fine-tune the node placement using options like availability domain and fault domain settings. Moving on to an essential aspect of node pool management, that is, deletion. It's crucial to understand that deleting a node pool is a permanent action. Once deleted, the node pool cannot be recovered. 12:04 Lois: Before we wrap up, Mahendra, can you talk about the critical factors when allocating CPU, memory, and storage resources to pods provisioned by virtual nodes within your OKE cluster? Mahendra: To ensure optimal performance, OKE calculates CPU and memory allocations at the pod level, a distinctive feature when using virtual nodes. This approach stands in contrast to the traditional worker node-level allocation. The allocation process takes into account several factors. The first one is the CPU and memory requests and limits, which are specified for each container in the pod spec file, if present. Secondly, the number of containers in the pod. The total number of containers impacts the overall resource requirements. And thirdly, kube-proxy and container runtime requirements: a small but essential consideration, taking up 0.25 GB of memory and negligible CPU. Pod CPU and memory requests must meet a minimum of 0.125 OCPUs and 0.5 GB of memory. 13:12 Nikita: Thank you, Mahendra, for this really insightful session. If you're interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Lois: You'll find demos that you can watch as well as skill checks that you can attempt to better your understanding. 
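The allocation factors Mahendra lists can be turned into a back-of-the-envelope calculation. This is only a sketch of the stated rules (sum the per-container requests, add the 0.25 GB kube-proxy/container runtime overhead, and apply the 0.125 OCPU / 0.5 GB floors); OKE's exact accounting may differ:

```python
KUBE_OVERHEAD_GB = 0.25            # kube-proxy + container runtime, per the episode
MIN_OCPU, MIN_MEM_GB = 0.125, 0.5  # stated per-pod minimums

def pod_allocation(containers):
    """containers: list of (ocpu_request, memory_gb_request) tuples,
    one per container in the pod spec."""
    ocpu = sum(c for c, _ in containers)
    mem = sum(m for _, m in containers) + KUBE_OVERHEAD_GB
    return max(ocpu, MIN_OCPU), max(mem, MIN_MEM_GB)

# Two containers: their requests are summed, overhead added on top.
print(pod_allocation([(0.25, 1.0), (0.25, 0.5)]))  # (0.5, 1.75)

# A tiny pod still pays the stated minimums.
print(pod_allocation([(0.05, 0.1)]))               # (0.125, 0.5)
```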
In our next episode, we'll journey into the world of self-managed nodes and discuss how to manage Kubernetes deployments. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:45 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Curious about how OCI Container Engine for Kubernetes (OKE) can transform the way your development team builds, deploys, and manages cloud-native applications? Listen to hosts Lois Houston and Nikita Abraham explore OKE's key features and benefits with senior OCI instructor Mahendra Mehra. Mahendra breaks down complex concepts into digestible bits, making it easy for you to understand the magic behind OKE. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! If you've been listening to us these last few weeks, you'll know we've been discussing containerization, the Oracle Cloud Infrastructure Registry, and the basics of Kubernetes. Today, we'll dive into the world of OCI Container Engine for Kubernetes, also referred to as OKE. Nikita: We're joined by Mahendra Mehra, a senior OCI instructor with Oracle University, who will take us through the key features and benefits of OKE and also talk about working with managed nodes. Hi Mahendra! Thanks for joining us today. 01:09 Lois: So, Mahendra, what is OKE exactly? 
Mahendra: Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully managed, scalable, and highly available service that empowers you to effortlessly deploy your containerized applications to the cloud. But that's just the beginning. OKE can transform the way you and your development team build, deploy, and manage cloud native applications. 01:36 Nikita: What would you say are some of its most defining features? Mahendra: One of the defining features of OKE is the flexibility it offers. You can specify whether you want to run your applications on virtual nodes or opt for managed nodes. Regardless of your choice, Container Engine for Kubernetes will efficiently provision them within your existing OCI tenancy on Oracle Cloud Infrastructure. Creating an OKE cluster is a breeze, and you have a couple of fantastic tools at your disposal: the Console and the REST API. These make it super easy to get started. OKE relies on Kubernetes, which is an open-source system that simplifies the deployment, scaling, and management of containerized applications across clusters of hosts. Kubernetes is an incredible system that groups containers into logical units known as pods. And these pods make managing and discovering your applications very simple. Not to mention, Container Engine for Kubernetes uses Kubernetes versions that are certified as conformant by the Cloud Native Computing Foundation, also abbreviated as CNCF. And here's the icing on the cake. Container Engine for Kubernetes is ISO-compliant, certified against three ISO/IEC standards: 27001, 27017, and 27018. That's your guarantee of a secure and reliable platform. 03:08 Lois: That's great. But how do you access all this power? Mahendra: You can define and create your Kubernetes cluster using the intuitive Console and the robust REST API. 
Once your clusters are up and running, you can manage them using the Kubernetes command line, also known as kubectl, the user-friendly Kubernetes dashboard, and the powerful Kubernetes API. 03:32 Nikita: I love the idea of an intuitive console and being able to manage everything from a centralized place. Lois: Yeah, that's fantastic! Mahendra, can you talk us through the magic that happens behind the scenes? What's Oracle's role in all this? Mahendra: All the master nodes or control plane nodes are managed by Oracle. This includes components like etcd, the API server, and the controller manager among others. To ensure reliability, we make sure multiple copies of these master components are distributed across different availability domains. And we don't stop there. We also manage the Kubernetes dashboard and even handle the self-healing mechanism of both the cluster and the worker nodes. All of these are meticulously created and managed within your Oracle tenancy. 04:19 Lois: And what happens at the user's end? What is their responsibility? Mahendra: At your end, you have the power to manage your worker nodes. Using different compute shapes, you can create and control them in your own user tenancy. So, as you can see, it's a perfect blend of Oracle's expertise and your control. 04:38 Nikita: So, in your opinion, why should users consider OKE their go-to solution for all things Kubernetes? Mahendra: Imagine a world where building and maintaining Kubernetes environments, be it master nodes or worker nodes, is no longer complex, costly, or even time-consuming. OKE is here to make your life easier by seamlessly integrating Kubernetes with various container life cycle management products, which includes container registries, CI/CD frameworks, networking solutions, storage options, and top-notch security features. 
And speaking of security, OKE gives you the tools you need to manage and control team access to production clusters, ensuring granular access to the Kubernetes cluster in a straightforward process. It empowers developers to deploy containers quickly, provides DevOps teams with visibility and control for seamless Kubernetes management, and brings together Kubernetes container orchestration with Oracle's advanced cloud infrastructure. This results in robust control, top-tier security, IAM, and consistent performance. 05:50 Nikita: OK…a lot of benefits! Mahendra, I know there have been ongoing enhancements to the OKE service. So, when creating a new cluster with Container Engine for Kubernetes, what is the cluster type we should specify? Mahendra: The first type is the basic clusters. Basic clusters support all the core functionality provided by Kubernetes and Container Engine for Kubernetes. Basic clusters come with a service-level objective, but not a financially backed service-level agreement. This means that Oracle guarantees a certain level of availability for the basic cluster, but there is no monetary compensation if that level is not met. On the other hand, we have the enhanced clusters. Enhanced clusters support all available features, including features not supported by basic clusters. 06:38 Lois: OK. So, can you tell us more about the features supported by enhanced clusters? Mahendra: As we move towards a more digitized world, the demand for infrastructure continues to rise. However, with virtual nodes, managing the infrastructure of your cluster becomes much simpler. The burden of manually scaling, upgrading, or troubleshooting worker nodes is removed, giving you more time to focus on your applications rather than the underlying infrastructure. Virtual nodes provide a great solution for managing large clusters with a high number of nodes that require frequent updates or scaling. 
With this feature, you can easily simplify the management of your cluster and focus on what really matters, that is, your applications. Managing cluster add-ons can be a daunting task. But with enhanced clusters, you can now deploy and configure them in a more granular way. This means that you can manage both essential add-ons like CoreDNS and kube-proxy as well as a growing portfolio of optional add-ons like the Kubernetes Dashboard. With enhanced clusters, you have complete control over the add-ons you install or disable, the ability to select specific add-on versions, and the option to opt in or opt out of automatic updates by Oracle. You can also manage add-on-specific customizations to tailor your cluster to meet the needs of your application. 08:05 Lois: Do users need to worry about deploying add-ons themselves? Mahendra: Oracle manages the lifecycle of add-ons so that you don't have to worry about deploying them yourself. This level of control over add-ons gives you the flexibility to customize your cluster to meet the unique needs of your applications, making managing your cluster a breeze. 08:25 Lois: What about scaling? Mahendra: Scaling your clusters to meet the demands of your workload can be a challenging task. However, with enhanced clusters, you can now provision more worker nodes in a single cluster, allowing you to deploy larger workloads on the same cluster, which can lead to better resource utilization and lower operational overhead. Having fewer larger environments to secure, monitor, upgrade, and manage is generally more efficient and can help you save on cost. Remember, there are limits to the number of worker nodes supported on an enhanced cluster, so you should review the Container Engine for Kubernetes limits documentation and take the additional considerations into account when defining enhanced clusters with a large number of managed nodes. 09:09 Nikita: Ensuring the security of my cluster would be of utmost importance to me, right? 
How would I do that with enhanced clusters? Mahendra: With enhanced clusters, you can now strengthen cluster security through the use of workload identity. Workload identity enables you to define OCI IAM policies that authorize specific pods to make OCI API calls and access OCI resources. By scoping the policies to Kubernetes service account associated with application pods, you can now allow the applications running inside those pods to directly access the API based on the permissions provided by the policies. 09:48 Nikita: Mahendra, what type of uptime and server availability benefits do enhanced clusters provide? Mahendra: You can now rely on a financially backed service level agreement tied to Kubernetes API server uptime and availability. This means that you can expect a certain level of uptime and availability for your Kubernetes API server, and if it degrades below the stated SLA, you'll receive compensation. This provides an extra level of assurance and helps ensure that your cluster is highly available and performant. 10:20 Lois: Mahendra, do you have any tips for us to remember when creating basic and enhanced clusters? Mahendra: When using the console to create a cluster, a new cluster is created as an enhanced cluster by default unless you explicitly choose to create a basic cluster. If you don't select any enhanced features during cluster creation, you have the option to create the new cluster as a basic cluster. When using CLI or API to create a cluster, you can specify whether to create a basic cluster or an enhanced cluster. If you don't explicitly specify the type of cluster to create, a new cluster is created as a basic cluster by default. Creating a new cluster as an enhanced cluster enables you to easily add enhanced features later even if you didn't select any enhanced features initially. If you do choose to create a new cluster as a basic cluster, you can still choose to upgrade the basic cluster to an enhanced cluster later on. 
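Workload identity scopes IAM policies to a Kubernetes service account, so the cluster-side half of the setup is ordinary Kubernetes objects. A sketch with hypothetical names (the matching OCI IAM policy statements live on the IAM side and are described in the OKE documentation):

```python
# The service account that the IAM policy statements would reference.
service_account = {
    "apiVersion": "v1",
    "kind": "ServiceAccount",
    "metadata": {"name": "payments-sa", "namespace": "payments"},  # hypothetical
}

# Pods that run under this service account inherit the OCI API
# permissions granted to it; other pods in the cluster get nothing.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "payments-app", "namespace": "payments"},
    "spec": {
        "serviceAccountName": "payments-sa",
        "containers": [{"name": "app", "image": "payments:1.0"}],  # placeholder
    },
}
```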
However, you cannot downgrade an enhanced cluster to a basic cluster. These points are really important to keep in mind when choosing between a basic cluster and an enhanced cluster for your usage. 11:34 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 12:13 Nikita: Welcome back! I want to move on to serverless Kubernetes with virtual nodes. But I think before we do that, we first need to have a basic understanding of what managed nodes are. Mahendra: Managed nodes run on compute instances within your tenancy, and are at least partly managed by you. In the context of Kubernetes, a node is a compute host that can be either a virtual machine or a bare metal host. As you are responsible for managing managed nodes, you have the flexibility to configure them to meet your specific requirements. You are responsible for upgrading Kubernetes on managed nodes and for managing cluster capacity. Nodes are responsible for running a collection of pods or containers, and they comprise two system components: the kubelet, which is the host brain, and the container runtime such as CRI-O or containerd. 13:07 Nikita: Ok… so what are virtual nodes, then? Mahendra: Virtual nodes are fully managed and highly available nodes that look and act like real nodes to Kubernetes. They are built using the open source CNCF Virtual Kubelet Project, which provides the translation layer between OCI and Kubernetes. 13:25 Lois: So, what makes Oracle's managed virtual Kubernetes product different? 
Mahendra: OCI is the first major cloud provider to offer a fully managed Virtual Kubelet product that provides a serverless Kubernetes experience through virtual nodes. Virtual nodes are configured by customers and are located within a single availability domain and fault domain within OCI. Virtual nodes have two main components: pod management and container instance management. Virtual nodes delegate all the responsibility of managing the lifecycle of pods to the virtual kubelet, while on a managed node, the kubelet is responsible for managing all the lifecycle state. The key distinction of virtual nodes is that they support up to 1,000 pods per virtual node, with the expectation of supporting more in the future. 14:15 Nikita: What are the other benefits of virtual nodes? Mahendra: Virtual nodes offer a fully managed experience where customers don't have to worry about managing the underlying infrastructure of their containerized applications. Virtual nodes simplify scaling patterns for customers. Customers can scale their containerized applications up or down quickly without worrying about the underlying infrastructure, and they can focus solely on their applications. With virtual nodes, customers only pay for the resources that their containerized applications use. This allows customers to optimize their costs and ensures that they are not paying for any unused resources. Virtual nodes can support over 10 times the number of pods that a normal node can. This means that customers can run more containerized applications on virtual nodes, which reduces operational burden and makes it easier to scale applications. Customers can leverage Container Instances, a serverless offering from OCI, to take advantage of many OCI functionalities natively. These functionalities include strong isolation and ultimate elasticity with respect to compute capacity. 
15:26 Lois: When creating a cluster using Container Engine for Kubernetes, we have the flexibility to customize the worker nodes within the cluster, right? Could you tell us more about this customization? Mahendra: This customization includes specifying two key elements. Firstly, you can select the operating system image to be used for worker nodes. This image serves as a template for the worker node's virtual hard drive and determines the operating system and other software installed. Secondly, you can choose the shape for your worker nodes. The shape defines the number of CPUs and the amount of memory allocated to each instance, ensuring it meets your specific requirements. This customization empowers you to tailor your OKE cluster to your exact needs. It is important to note that you can define and create OKE clusters using both the console and the REST API. This level of control is especially valuable for your development team when building, deploying, and managing cloud native applications. You have the option to specify whether applications should run on virtual nodes or managed nodes. And Container Engine for Kubernetes efficiently provisions them on Oracle Cloud Infrastructure within your existing OCI tenancy. This flexibility ensures that you can adapt your OKE cluster to suit the specific requirements of your projects and workloads. 16:56 Lois: Thank you so much, Mahendra, for giving us your time today. For more on the topics we discussed, visit mylearn.oracle.com and look for the OCI Container Engine for Kubernetes Specialist course. Join us next week as we dive deeper into working with OKE virtual nodes. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 17:18 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. 
See you again on the next episode of the Oracle University Podcast.
In this episode, Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, dive into the fundamentals of Kubernetes. They talk about how Kubernetes tackles challenges in deploying and managing microservices, and enhances software performance, flexibility, and availability. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to another episode of the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! We've spent the last two episodes getting familiar with containerization and the Oracle Cloud Infrastructure Registry. Today, it's going to be all about Kubernetes. So if you've heard of Kubernetes but you don't know what it is, or you've been playing with Docker and containers and want to know how to take it to the next level, you'll want to stay with us. Lois: That's right, Niki. 
We'll be chatting with Mahendra Mehra, a senior OCI instructor with Oracle University, about the challenges in containerized applications within a complex business setup and how Kubernetes facilitates container orchestration and improves its effectiveness, resulting in better software performance, flexibility, and availability. 01:20 Nikita: Hi Mahendra. To start, can you tell us when you would use Kubernetes? Mahendra: While deploying and managing microservices in a distributed environment, you may run into issues such as failures or container crashes, or issues with scheduling containers to specific machines depending upon their configuration. You also might face issues while upgrading or rolling back the applications which you have containerized. Scaling up or scaling down containers across a set of machines can be troublesome. 01:50 Lois: And this is where Kubernetes helps automate the entire process? Mahendra: Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. You can think of Kubernetes as you would a conductor for an orchestra. Similar to how a conductor would say how many violins are needed, which ones play first, and how loud they should play, Kubernetes would say how many webserver front-end containers or back-end database containers are needed, what they serve, and how many resources are to be dedicated to each one. 02:27 Nikita: That's so cool! So, how does Kubernetes work? Mahendra: In Kubernetes, there is a master node, and there are multiple worker nodes. Each worker node can handle multiple pods. Pods are just a bunch of containers clustered together as a working unit. If a worker node goes down, Kubernetes starts new pods on the functioning worker node. 02:47 Lois: So, the benefits of Kubernetes are… Mahendra: Kubernetes can containerize applications of any scale without any downtime. 
Kubernetes can self-heal containerized applications, making them resilient to unexpected failures. Kubernetes can autoscale containerized applications as per the workload and ensure optimal utilization of cloud resources. Kubernetes also greatly simplifies the process of deployment operations. With Kubernetes, however complex an operation is, it can be performed reliably by executing a couple of commands at most. 03:19 Nikita: That's great. Mahendra, can you tell us a bit about the architecture and main components of Kubernetes? Mahendra: The Kubernetes cluster has two main components. One is the control plane, and the other is the data plane. The control plane hosts the components used to manage the Kubernetes cluster. And the data plane basically hosts all the worker nodes that can be virtual machines or physical machines. These worker nodes basically host pods which run one or more containers. The containers running within these pods are making use of Docker images, which are managed within the image registry. In the case of OCI, it is the container registry. 03:54 Lois: Mahendra, you mentioned nodes and pods. What are nodes? Mahendra: It is the smallest unit of computing hardware within Kubernetes. Its job is to encapsulate one or more applications as containers. A node is a worker machine that has a container runtime environment within it. 04:10 Lois: And pods? Mahendra: A pod is a basic object of Kubernetes, and it is in charge of encapsulating containers, storage resources, and network IPs. One pod represents one instance of an application within Kubernetes. And these pods are launched in a Kubernetes cluster, which is composed of nodes. This means that a pod runs on a node but can easily be instantiated on another node. 04:32 Nikita: Can you run multiple containers within a pod? Mahendra: A pod can even contain more than one container if these containers are relatively tightly coupled. 
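To make the pod object described here concrete, the following is a minimal sketch of a Pod manifest, built as a plain Python dict and serialized to JSON (kubectl accepts JSON manifests as well as YAML). The pod name, labels, and image are illustrative placeholders, not values from the episode.

```python
import json

# A minimal Pod manifest as a Python dict. Applying its JSON form with
# `kubectl apply -f pod.json` would schedule one container onto a node.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "web-frontend",
        "labels": {"app": "web"},  # labels let services select this pod
    },
    "spec": {
        "containers": [
            {
                "name": "web",
                "image": "nginx:1.25",            # image pulled from a registry
                "ports": [{"containerPort": 80}],  # port the container listens on
            }
        ]
    },
}

manifest = json.dumps(pod, indent=2)
print(manifest)
```

Note that the pod itself is ephemeral: if it is re-created, it gets a new private IP, which is exactly why services (discussed next) provide the stable endpoint.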
A pod is usually meant to run one application container inside of it, but you can run multiple containers inside one pod. Usually, it is only the case if you have one main application container and a helper container or some sidecar containers that have to run inside of that pod. Every pod is assigned a unique private IP address, using which the pods can communicate with one another. Pods are meant to be ephemeral, which means they die easily. And if they do, upon re-creation, they are assigned a new private IP address. In fact, Kubernetes can scale the number of these pods to adapt to the incoming traffic, consequently creating or deleting pods on demand. Kubernetes guarantees the availability of the pods and replicas specified, but not the liveness of each individual pod. This means that other pods that need to communicate with this application or component cannot rely on the underlying individual pod's IP address. 05:35 Lois: So, how does Kubernetes manage traffic to this ever-changing number of pods with changing IP addresses? Mahendra: This is where another component of Kubernetes called services comes in as a solution. A service gets allocated a virtual IP address and lives until explicitly destroyed. Requests to the service get redirected to the appropriate pods; thus, the service offers a stable endpoint for inter-component or application communication. And the best part here is that the lifecycles of the service and the pods are not connected. So even if the pod dies, the service and the IP address will stay, so you don't have to change their endpoints anymore. 06:13 Nikita: What types of services can you create with Kubernetes? Mahendra: There are two types of services that you can create. The external service is created to allow external users to connect to the containerized applications within the pod. Internal services can also be created that restrict the communication within the cluster. Services can be exposed in different ways by specifying a particular type. 
06:33 Nikita: And how do you define these services? Mahendra: There are three types you can choose from when defining a service. The first one is the ClusterIP, which is the default service type that exposes services on an internal IP within the cluster. This type makes the service only reachable from within the cluster. You can specify the type of service as NodePort. NodePort basically exposes the service on the same port of each selected node in the cluster using network address translation and makes the service accessible from the outside of the cluster using the node IP and the NodePort combination. This is basically a superset of ClusterIP. You can also go for a LoadBalancer type, which basically creates an external load balancer in the current cloud. OCI supports LoadBalancer types. It also assigns a fixed external IP to the service. And the LoadBalancer type is a superset of NodePort. 07:25 Lois: There's another component called ingress, right? When do you use that? Mahendra: An ingress is used when we have multiple services on our cluster, and we want the user requests routed to the services based on their path, and also, if you want to talk to your application with a secure protocol and a domain name. Unlike NodePort or LoadBalancer, ingress is not actually a type of service. Instead, it is an entry point that sits in front of multiple services within the cluster. It can be defined as a collection of routing rules that govern how external users access services running inside a Kubernetes cluster. Ingress is most useful if you want to expose multiple services under the same IP address, and these services all use the same Layer 7 protocol, typically HTTP. 08:10 Lois: Mahendra, what about deployments in Kubernetes? Mahendra: A deployment is an object in Kubernetes that lets you manage a set of identical pods. Without a deployment, you will need to create, update, and delete a bunch of pods manually. 
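As a rough illustration of the service types described a moment ago, here is a sketch of a NodePort Service manifest, again as a Python dict dumped to JSON. The service name, selector labels, and port numbers are placeholders chosen for the example.

```python
import json

# A NodePort Service: reachable inside the cluster on `port`, and from
# outside on every selected node at `nodePort` (default range 30000-32767).
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-service"},
    "spec": {
        "type": "NodePort",           # ClusterIP (default), NodePort, or LoadBalancer
        "selector": {"app": "web"},   # routes traffic to pods labeled app=web
        "ports": [
            {
                "port": 80,         # the service's own port inside the cluster
                "targetPort": 80,   # the container port traffic is forwarded to
                "nodePort": 30080,  # port opened on each node for external access
            }
        ],
    },
}

print(json.dumps(service, indent=2))
```

Changing `"type"` to `"LoadBalancer"` on OCI would additionally provision an external load balancer with a fixed external IP, as described above.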
With the deployment, you declare a single object in a YAML file, and the object is responsible for creating the pods, making sure they stay up-to-date and ensuring there are enough of them running. You can also easily autoscale your applications using a Kubernetes deployment. In a nutshell, the Kubernetes deployment object lets you deploy a replica set of your pods, update the pods and the replica sets. It also allows you to roll back to your previous deployment versions. It helps you scale a deployment. It also lets you pause or resume a deployment. 08:59 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 09:37 Nikita: Welcome back! We were talking about how useful a Kubernetes deployment is in scaling operations. Mahendra, how do pods communicate with each other? Mahendra: Pods communicate with each other using a service. For example, my application has a database endpoint. Let's say it's a MySQL service that it uses to communicate with the database. But where do you configure this database URL or endpoint? Usually, you would do it in the application properties file or as some kind of an external environment variable. But usually, it's inside the build image of the application. So for example, if the endpoint of the service or the service name, in this case, changes to something else, you would have to adjust the URL in the application. And this will cause you to rebuild the entire application with a new version, and you will have to push it to the repository. 
You'll then have to pull that new image into your pod and restart the whole thing. For a small change like a database URL, this is a bit tedious. So for that purpose, Kubernetes has a component called ConfigMap. A ConfigMap is a Kubernetes object that maintains a key-value store that can easily be used by other Kubernetes objects, such as pods, deployments, and services. Thus, you can define a ConfigMap composed of all the specific variables for your environment. In Kubernetes, now you just need to connect your pod to the ConfigMap, and the pod will read all the new changes that you have specified within the ConfigMap, which means you don't have to go on to build a new image every time a configuration changes. 11:07 Lois: So then, I'm just wondering, if we have a ConfigMap to manage all the environment variables and URLs, should we be passing our username and password in the same file? Mahendra: The answer is no. Passwords or other credentials within a ConfigMap in a plain text format would be insecure, even though it's an external configuration. So for this purpose, Kubernetes has another component called secret. Kubernetes secrets are secure objects which store sensitive data, such as passwords, OAuth tokens, and SSH keys, within your cluster. Using secrets gives you more flexibility in a pod lifecycle definition and control over how sensitive data is used. It reduces the risk of exposing the data to unauthorized users. 11:50 Nikita: So, you're saying that the secret is just like ConfigMap or is there a difference? Mahendra: A secret is just like a ConfigMap, but the difference is that it is used to store secret data credentials, for example, database usernames and passwords, and it's stored in the base64 encoded format. The kubelet service stores this secret in a temporary file system. 12:11 Lois: Mahendra, how does data storage work within Kubernetes? 
Mahendra: So let's say we have this database pod that our application uses, and it has some data or generates some data. What happens when the database container or the pod gets restarted? Ordinarily, the data would be gone, and that's problematic and inconvenient, obviously, because you want your database data or log data to be persisted reliably for the long term. To achieve this, Kubernetes has a solution called volumes. A Kubernetes volume basically is a directory that contains data accessible to containers in a given pod within the Kubernetes platform. Volumes provide a plug-in mechanism to connect ephemeral containers with persistent data stores elsewhere. The data within a volume will outlast the containers running within the pod. Containers can shut down and restart because they are ephemeral units. Data remains saved in the volume even if a container crashes because a container crash is not enough to cut off a pod from a node. 13:10 Nikita: Another main component of Kubernetes is a StatefulSet, right? What can you tell us about it? Mahendra: Stateful applications are applications that store data and keep track of it. All databases such as MySQL, Oracle, and PostgreSQL are examples of stateful applications. In a modern web application, we see stateless applications connecting with stateful applications to serve user requests. For example, a Node.js application is a stateless application that receives new data on each request from the user. This application is then connected with a stateful application, such as a MySQL database, to process the data. MySQL stores the data and keeps updating the database on the user's request. Now, assume you deployed a MySQL database in the Kubernetes cluster and scaled this to another replica, and a frontend application wants to access the MySQL cluster to read and write data. Read requests will be forwarded to both of these pods. However, write requests will only be forwarded to the first, primary pod. 
And the data will be synchronized with the other pods. You can achieve this by using StatefulSets. Deleting or scaling down a StatefulSet will not delete the volumes associated with the stateful applications. This gives you data safety. If you delete the MySQL pod or if the MySQL pod restarts, you still have access to the data in the same volume. So overall, a StatefulSet is a good fit for those applications that require unique network identifiers; stable persistent storage; ordered, graceful deployment and scaling; as well as ordered, automatic rolling updates. 14:43 Lois: Before we wrap up, I want to ask you about the features of Kubernetes. I'm sure there are countless, but can you tell us the most important ones? Mahendra: Health checks are used to check the container's readiness and liveness status. Readiness probes are intended to let Kubernetes know if the app is ready to serve traffic. Networking plays a significant role in container orchestration to isolate independent containers, connect coupled containers, and provide access to containers from external clients. Service discovery allows containers to discover other containers and establish connections to them. Load balancing is a dedicated service that knows which replicas are running and provides an endpoint that is exposed to the clients. Logging allows us to oversee the application behavior. Rolling updates allow you to update a deployed containerized application with minimal downtime using different update scenarios. The typical way to update such an application is to provide new images for its containers. Containers, in a production environment, can grow from a few to many in no time. Kubernetes makes managing multiple containers an easy task. And lastly, resource usage monitoring: resources such as CPU and RAM must be monitored within the Kubernetes environment. 
Kubernetes resource usage looks at the amount of resources that are utilized by a container or pod within the Kubernetes environment. It is very important to keep an eye on the resource usage of the pods and containers, as more usage translates to more cost. 16:18 Nikita: I think we can wind up our episode with that. Thank you, Mahendra, for joining us today. Kubernetes sure can be challenging to work with, but we covered a lot of ground in this episode. Lois: That's right, Niki! If you want to learn more about the rich features Kubernetes offers, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Remember, all the training is free, so you can dive right in! Join us next week when we'll take a look at the fundamentals of Oracle Cloud Infrastructure Container Engine for Kubernetes. Until then, Lois Houston… Nikita: And Nikita Abraham, signing off! 16:57 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In this episode, hosts Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, discuss how Oracle Cloud Infrastructure Registry simplifies the development-to-production workflow for developers. Listen to Mahendra explain important container registry concepts, such as images, repositories, image tags, and image paths, as well as how they relate to each other. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs. Lois: Hi there! This is our second episode on OCI Container Engine for Kubernetes, and today we're going to spend time discussing container registries with our colleague and senior OCI instructor, Mahendra Mehra. Nikita: We'll talk about how you can become proficient in managing Oracle Cloud Infrastructure Registry, a vital component in your container workflow. 00:58 Lois: Hi Mahendra, can you explain what Oracle Cloud Infrastructure Registry, or OCIR, is and how it simplifies the container image management process? Mahendra: OCIR is an Oracle-managed registry designed to simplify the development-to-production workflow for developers. 
It offers a range of functionalities, serving as a private Docker registry for internal use where developers can easily store, share, and manage container images. The strength of OCIR lies in its highly available and scalable architecture. Built on OCI to ensure reliable deployment of applications, OCIR can be used not only as a private registry but also as a public registry, facilitating the pulling of images from public repositories for users with internet access. 01:55 Lois: But what sets OCIR apart? Mahendra: What sets OCIR apart is its compliance with the Open Container Initiative standards, allowing the storage of container images conforming to the OCI specifications. It goes a step further by supporting manifest lists, sometimes known as multi-architecture images, accommodating diverse architectures like ARM and AMD64. Additionally, OCIR extends its support to Helm charts. Security is a priority with OCIR, offering private access through a service gateway. This means that OCI resources within a VCN in the same region can securely access OCIR without being exposed to the public internet. 02:46 Nikita: OK. What are some other key advantages of OCIR? Mahendra: Firstly, OCIR seamlessly integrates with the Container Engine for Kubernetes, ensuring a cohesive container management experience. In terms of security, OCIR provides flexibility by allowing registries to be either private or public, giving administrators control over accessibility. It is intricately integrated with IAM, offering straightforward authentication through OCI Identity. Another notable benefit is regional availability. You can efficiently pull container images from the same region as your deployments. For high performance, availability, and low latency in image operations, OCIR leverages the robust infrastructure of OCI, enhancing the overall reliability of image push and pull operations. 
OCIR ensures anywhere access, allowing you to use the container CLI for image operations from various locations, be it on the cloud, on-premises, or even from personal laptops. 03:57 Lois: I believe OCIR has repository quotas? Is there a cap on them? Mahendra: In each enabled region for your tenancy, you can establish up to 500 repositories with a cumulative storage limit of 500 GB. Each repository is capable of holding up to 100,000 images. Importantly, charges apply only for stored images. 04:21 Nikita: That's good to know, Mahendra. I want to move on to basic container registry concepts. Maybe we can start with what an image is. Mahendra: An image is basically a read-only template with instructions for creating a container. It holds the application that you want to run as a container, along with any dependencies that are required. The container registry is an Open Container Initiative-compliant registry. As a result, you can store any artifacts that conform to Open Container Initiative specifications, such as Docker images, manifest lists, sometimes also known as multi-architecture images, and Helm charts. 05:02 Lois: And what's a repository then? Mahendra: It's a meaningfully named collection of related images which are grouped together for convenience in a container registry. There are different versions of the same source image, which are grouped together into the same repository. You can have multiple images stored under this repository. The only thing that you need to keep changing is the image version. Every image version is given a tag. And the tag uniquely identifies the image. 05:33 Lois: Is it possible to make the repository public or private? Mahendra: Depending upon your need, a repository can be made private or public. One important thing to note is that the user needs to have an OCI username and authentication token before being able to push/pull an image from OCIR. 
05:52 Nikita: There are so many terms that you come across when working with repositories and container registry, right? Could you take us through them and explain how they relate to each other? I've heard of the region key and tenancy namespace. Mahendra: The region key identifies the container registry region that you are using. A tenancy namespace is an auto-generated random and immutable string of alphanumeric characters. The tenancy namespace can be retrieved from the value of your object storage namespace field. Repository name is the name of a repository in container registry, to and from which you can push and pull images. Repository names can include one or more slash characters and are unique across all the compartments in the entire tenancy. You should note that although a repository name can include slash characters, the slash does not represent a hierarchical directory structure. It is simply one character in the string of characters. As a convenience, you might choose to start the name of different repositories with the same string. A registry identifier is the combination of your container registry region key and the tenancy namespace. 07:07 Lois: What about an image tag and an image path? How do they differ from each other? Mahendra: A tag or an image tag is a string used to refer to a particular image in a known registry. The term "image name" is sometimes used as a shorthand way to refer to a particular image in a particular repository. A tag can be a numerical value or it can be a string. An image path is a fully qualified path to a particular image in a registry. It extends the repository path by adding tags associated with the image. 07:46 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free. 
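Putting these naming concepts together: the fully qualified image path combines the registry identifier (region key plus `.ocir.io`), the tenancy namespace, the repository name, and the tag. The small helper below assembles one; the region key, tenancy namespace, repository name, and tag used in the example are made-up placeholder values, not a real tenancy.

```python
def ocir_image_path(region_key: str, tenancy_namespace: str,
                    repo_name: str, tag: str) -> str:
    """Build a fully qualified OCIR image path:
    <region-key>.ocir.io/<tenancy-namespace>/<repo-name>:<tag>
    The repo name may itself contain slashes, but as noted above,
    those slashes do not imply a directory hierarchy."""
    return f"{region_key}.ocir.io/{tenancy_namespace}/{repo_name}:{tag}"

# Placeholder values for illustration only.
path = ocir_image_path("iad", "ansh81vru1zp", "project01/acme-web-app", "v2.0.test")
print(path)  # iad.ocir.io/ansh81vru1zp/project01/acme-web-app:v2.0.test
```

This is the string you would pass to `docker tag` before pushing, or to `docker pull` when fetching the image.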
So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 08:24 Nikita: Welcome back! Mahendra, from what you've told us, OCIR seems like such a pivotal tool for modern containerized workflows, with its seamless integration, robust security measures, regional accessibility, and efficient image management. So, how do we actually manage OCIR? Mahendra: Managing OCIR can be done in three ways. Starting with managing the repository itself, followed by managing the images within the repository, and, last but not least, managing the overall security of your repository alongside the images. 08:58 Nikita: Can we dive into each of these approaches in a little more detail? How does managing the repository itself work? Mahendra: You can create an empty repository in a compartment and give it a name that's unique across all the compartments in the entire tenancy. There is a limit to the number of repositories you can have in a given region in a tenancy. So, when you no longer need a repository, it makes sense to delete it from the Oracle Cloud Infrastructure Registry. Make a note that when you delete a repository, it can take up to 48 hours for the deletion to take effect and for the storage to actually be released. When you create a new repository in Oracle Cloud Infrastructure Registry, you specify the compartment in which you want to create it. Having created the repository in one compartment, you can subsequently move it to a different compartment. The reasons can be many: to change the users who are authorized to use the repository, or to change how the billing for the repository is charged. 09:52 Lois: OK. And what about managing images within the repository? 
Mahendra: You can view the images stored on OCIR using the OCI Console or using the Docker images command from your Docker client after logging in to the OCIR repo. To push an image, you first use the Docker tag command to create a copy of the local source image as a new image. As the name for the new image, you specify the fully qualified path to the target location in your container registry where you want to push the image, including the name of a repository. In order to pull an image, you must be logged in to the OCIR registry using the auth token and use the Docker pull command followed by the fully qualified name of the image you wish to download on your Docker client. 10:36 Nikita: What happens when you no longer need an old image or you simply want to clean up the list of image tags in a repository? Mahendra: You can delete images from the Oracle Cloud Infrastructure Registry. You can undelete an image you've previously deleted for up to 48 hours after you deleted it. After that time, the image is permanently removed from the container registry. You can set up image retention policies to automatically delete images that meet particular selection criteria. 11:02 Lois: What sort of selection criteria? Mahendra: Criteria can be images that have not been pulled for a certain number of days or images that have not been tagged for a certain number of days. It can also be images that have not been given particular Docker tags that you've specified as exempt from automatic deletion. There's an hourly process that checks images against the selection criteria, and any that meet the selection criteria are automatically deleted. In each region in a tenancy, there's a global image retention policy. The default criterion of the policy is to retain all images so that no images are automatically deleted. However, you can change the global image retention policy so that the images are deleted if they meet certain criteria that you specify. 
A region's global image retention policy applies to all the repositories within that region unless it is explicitly overridden by one or more custom image retention policies. Only one custom image retention policy at a time can be applied to a repository. If a repository has already been added to a custom retention policy and you want to add the repository to a different custom retention policy, you have to remove the repository from the first retention policy before adding it to the second one. 12:15 Lois: Mahendra, what should we keep in mind when we're dealing with the global image retention policy? Mahendra: Global image retention policies are specific to a particular region. To delete images consistently in different regions in your tenancy, you need to set up image retention policies in each region with identical selection criteria. If you want to prevent images from being deleted on the basis of Docker tags they've been given, you need to specify those tags as exempt in a comma-separated list. When you want to clean up the list of images in a repository without actually deleting the images, you can remove the tags from the images in OCIR. Removing tags in this way is referred to as untagging. 12:53 Nikita: OK…and the last approach was managing the overall security of your repository alongside the images, right? Mahendra: While managing security, you are given fine-grained control over the operations that users are allowed to perform on repositories within the Container Registry. Using the concept of users and groups, you can control repository access by setting up Identity and Access Management policies at the tenancy and at the compartment level. You can write policies to allow inspect, read, use, and manage operations on the repository based on the requirements. You can set up Oracle Cloud Infrastructure Registry to scan images in a repository for security vulnerabilities published in the publicly available Common Vulnerabilities and Exposures databases. 
To perform image scanning, the container registry makes use of the Oracle Cloud Infrastructure Vulnerability Scanning service and the Vulnerability Scanning REST API. 13:46 Nikita: What do I need to have in place before I can push and pull Docker images to and from Oracle Cloud Infrastructure Registry? Mahendra: The first thing is, your tenancy must be subscribed to one or more of the regions in which the container registry is available. You can check this in the Oracle documentation. The next thing is, you need to have access to the Docker command line interface to push and pull images on your local machine. The third thing is, users must belong to a group to which a policy grants the appropriate permission, or belong to the tenancy's Administrators group, which by default has access permissions on the container registry. Lastly, users must already have an Oracle Cloud Infrastructure username and an authentication token, which enables them to perform operations on the container registry. 14:29 Lois: Thank you, Mahendra, for sharing your insights on OCIR with us. To watch demos on managing OCIR, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Nikita: Mahendra will be back next week to walk us through the basics of Kubernetes. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 14:53 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Welcome to a new season of the Oracle University Podcast, where we delve deep into the world of OCI Container Engine for Kubernetes. Join hosts Lois Houston and Nikita Abraham as they ask senior OCI instructor Mahendra Mehra about the transformative power of containers in application deployment and why they're so crucial in today's software ecosystem. Uncover key differences between virtualization and containerization, and gain insights into Docker components and commands. Getting Started with Oracle Cloud Infrastructure: https://oracleuniversitypodcast.libsyn.com/getting-started-with-oracle-cloud-infrastructure-1 Networking in OCI: https://oracleuniversitypodcast.libsyn.com/networking-in-oci OCI Identity and Access Management: https://oracleuniversitypodcast.libsyn.com/oci-identity-and-access-management OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! Welcome to a new season of the Oracle University Podcast. This time around, we're going to delve into the world of OCI Container Engine for Kubernetes, or OKE. 
For the next couple of weeks, we'll cover key aspects of OKE to help you create, manage, and optimize Kubernetes clusters in Oracle Cloud Infrastructure. 00:58 Lois: So, whether you're a cloud native developer, a Kubernetes administrator or developer, a DevOps engineer, or a site reliability engineer who wants to enhance your expertise in leveraging the OCI OKE service for cloud native application solutions, you'll want to tune in to these episodes for sure. And if that doesn't sound like you, I'll bet you will find the season interesting even if you're just looking for a deep dive into this service. Nikita: That's right, Lois. In today's episode, we'll focus on concepts of containerization, laying the foundation for your journey into the world of containers. And taking us through all this is Mahendra Mehra, a senior OCI instructor with Oracle University. 01:38 Lois: Hi Mahendra! We're so glad to start our look at containerization with you today. Could you give us an overview? Why is it important in today's software world? Mahendra: Containerization is a form of virtualization that operates by running applications in isolated user spaces known as containers. All these containers share the same underlying operating system. The container engine, pivotal in containerization technologies and container orchestration platforms, serves as the container runtime environment. It effectively manages the creation, deployment, and execution of containers. 02:18 Lois: Can you simplify this for a novice like me, maybe by giving us an analogy? Mahendra: Imagine a container as a fully packaged and portable computing environment. It's like a digital suitcase that holds everything an application needs to run—binaries, libraries, configuration files, dependencies, you name it. And the best part? It's all encapsulated and isolated within the container. 02:46 Nikita: Mahendra, how is containerization making our lives easier today? 
Mahendra: In the old days, running an application meant matching it with your machine's operating system. For example, Windows software required a Windows machine. However, containerization has rewritten this narrative. Now, it's ancient history. With containerization, you create a single software package, a container that gracefully runs on any device or operating system. What's fascinating is that these containers seamlessly run while sharing the host operating system. The container engine is like a shadow abstracted from the host operating system, with limited access to underlying resources. Think of it as a super lightweight virtual machine. The beauty of this is that the containerized application becomes a globetrotter, seamlessly running on bare metal, within VMs, or on cloud platforms without needing tweaks for each environment. 03:52 Nikita: How is containerization different from traditional virtualization? Mahendra: On one side, we have traditional virtualization. It's like having multiple houses on a single piece of land, and each house or virtual machine has its complete setup—walls, roofs, and utilities. This setup, while providing isolation, can be resource-intensive with each virtual machine carrying its entire operating system. Now, let's shift gears to containerization, the modern-day superhero. Imagine a high-rise building where each floor represents a container. These containers share the same building or host operating system, but each has its own private space or isolated user space. Here's the magic. They are super lightweight, don't carry the extra baggage of a full operating system, and can swiftly move between different floors. 04:50 Lois: Ok, gotcha. That sounds pretty efficient! So, what are the direct benefits of containerization? Mahendra: With containerization technology, there's less overhead during startup and no need to set up a separate guest OS for each application since they all share the same OS kernel. 
Because of this high efficiency, containerization is commonly used for packaging up the many individual microservices that make up modern applications. Containerization unfolds a spectrum of benefits, delivering unparalleled portability as containers run uniformly across diverse platforms. This agility, fostered by open source container engines, empowers developers with cross-platform flexibility. Containerized applications, known for their lightweight nature, reduce cost, boost efficiency, and accelerate start times. Fault isolation ensures robustness, allowing containers to operate independently without affecting others. Efficiency thrives as containers share the OS kernel and reusable layers, optimizing server utilization. The ease of management is achieved through orchestration platforms like Kubernetes, which automate essential tasks. Security remains paramount as container isolation and defined permissions fortify the infrastructure against malicious threats. Containerization emerges not just as a technology but as a transformative force, redefining how we build, deploy, and manage applications in the digital landscape. 06:37 Lois: It sure makes deployment efficient, scalable, and seamless! Mahendra, various components of Docker architecture work together to achieve containerization goals, right? Can you walk us through them? Mahendra: A developer or a DevOps professional communicates with the Docker engine through the Docker client, which may run on the same computer as the Docker engine in development environments, or connect through a remote shell. So whenever a developer fires a Docker command, the client sends it to the Docker Daemon, which carries it out. The communication between the Docker client and the Docker host usually takes place through REST APIs. The Docker clients can communicate with more than one Daemon at a time. The Docker Daemon is a persistent background process that manages Docker images, containers, networks, and storage volumes. 
The Docker Daemon constantly listens to Docker API requests from the Docker clients and processes them. Docker registries are services that provide locations from which you can store and download Docker images. In other words, a Docker registry contains repositories that host one or more Docker images. Public registries include Docker Hub and Docker Cloud, and private registries can also be used. Oracle Cloud Infrastructure offers you services like OCIR, which is also called a container registry, where you can host your own private or public registry. 08:02 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free. So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 08:39 Nikita: Welcome back! Mahendra, I'm wondering how virtual machines are different from containers. How do virtual machines work? Mahendra: A hypervisor or a virtual machine monitor is software, firmware, or hardware that creates and runs virtual machines. It is placed between the hardware and the virtual machines, and is necessary to virtualize the server. Within each virtual machine runs a unique guest operating system. VMs with different operating systems can run on the same physical server. A Linux VM can sit alongside a Windows VM and so on. Each VM has its own binaries, libraries, and applications that it services. And a VM may be many gigabytes in size. 09:22 Lois: What kind of benefits do we see from virtual machines? 
Mahendra: This technique provides a variety of benefits like the ability to consolidate applications into a single system, cost savings through reduced footprints, and faster server provisioning. But this approach has its own drawbacks. Each VM includes a separate operating system image, which adds overhead in memory and storage footprint. As it turns out, this issue adds complexity to all the stages of the software development lifecycle, from development and test to production and disaster recovery. It also severely limits the portability of applications between different cloud providers and traditional data centers. And this is where containers come to the rescue. 10:05 Lois: OK…how do containers help in this situation? Mahendra: Containers sit on top of a physical server and its host operating system—typically Linux or Windows. Each container shares the host OS kernel and usually the binaries and libraries as well. But the shared components are read only. Sharing OS resources such as libraries significantly reduces the need to reproduce the operating system code. A server can run multiple workloads with a single operating system installation. Containers are thus exceptionally lightweight. They are only megabytes in size and take just seconds to start. What this means in practice is you can put two or three times as many applications on a single server with containers as you can with virtual machines. Compared to containers, virtual machines take minutes to start and are an order of magnitude larger than an equivalent container, measured in gigabytes versus megabytes. 11:01 Nikita: So then, is there ever a time you should use a virtual machine? Mahendra: You should use a virtual machine when you want to run applications that specifically require a new OS, or when isolation and security are your priority over everything else. In most scenarios, a container will provide a lighter, faster, and more cost-effective solution than virtual machines. 
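The "megabytes and seconds" claim above is easy to check for yourself on any machine with a working Docker install. This sketch (an illustration, not part of the course; it assumes the small public `alpine` image) times a full create-run-destroy cycle and prints the image's size:

```shell
# Requires a local Docker daemon; 'alpine' is a roughly 5 MB public image.
# The whole cycle -- create, run, destroy -- typically completes in seconds.
time docker run --rm alpine echo "hello from a container"

# Compare image sizes: container images are megabytes,
# whereas a typical VM disk image is gigabytes.
docker images alpine --format "{{.Repository}}: {{.Size}}"
```

Running the same workload in a fresh VM would involve booting an entire guest OS first, which is where the minutes-versus-seconds gap comes from.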
11:22 Lois: Now that we've discussed containerization and the different Docker components, can you tell us more about working with Docker images? We first need to know what a Dockerfile is, right? Mahendra: A Dockerfile is a text file that defines a Docker image. You'll use a Dockerfile to create your own custom Docker image. In other words, you use it to define your custom environment to be used in a Docker container. You'll want to create your own Dockerfile when existing images won't meet your project needs due to different runtime requirements, which means that learning about Dockerfiles is an essential part of working with Docker. A Dockerfile is a step-by-step definition for building up a Docker image. It provides a set of standard instructions that Docker will execute when you issue a Docker build command. 12:09 Nikita: Before we wrap up, can you walk us through some Docker commands? Mahendra: Every Dockerfile must start with a FROM instruction. The idea behind this is that you need a starting point to build your image. It can be from scratch or from an existing image available in the Docker registry. The RUN command is used to execute a command and will wait till the command finishes its execution. Since most of the images are Linux-based, a good practice is to set up a directory you will work in. That's the purpose of the WORKDIR line. It defines a directory and moves you into it. The COPY instruction helps you to copy your source code into the image. ENV provides default values for variables that can be accessed within the containers. If your app needs to be reached from outside the container, you must open its listening port using the EXPOSE command. Once your application is ready to run, the last thing to do is to specify how to execute it. You must add the CMD line with the same command and all the arguments you used locally to launch your application. 
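As an illustration, here is a minimal Dockerfile using the instructions just described. The base image, port, and file names are invented for the example, not taken from the course:

```dockerfile
FROM python:3.12-slim                 # starting point: an existing image
WORKDIR /app                          # define a directory and move into it
COPY . .                              # copy your source code into the image
RUN pip install -r requirements.txt   # executes at build time and waits
ENV APP_ENV=production                # default value visible in the container
EXPOSE 8000                           # open the app's listening port
CMD ["python", "app.py"]              # how to launch the application
```

Building it with `docker build -t acme-web-app .` executes each instruction in order and produces a tagged image.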
This command can also be used to execute commands at runtime for the container, but we can be more flexible using the ENTRYPOINT command. Labels are used in a Dockerfile to help organize your Docker images. 13:20 Lois: Thank you, Mahendra, for joining us today. I learned a lot! And if you want to learn more about working with Docker images, go to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. The course is free so you can get started right away. Nikita: Yeah, a fundamental understanding of core OCI services, like Identity and Access Management, networking, compute, storage, and security, is a prerequisite to the course and will certainly serve you well when leveraging the OCI OKE service. And the quickest way to gain this knowledge is by completing the OCI Foundations Associate learning path on MyLearn and getting certified. You can also listen to episodes from our first season, called OCI Made Easy, where we discussed these topics. We'll put a few links in the show notes so you can easily find them. Lois: We're looking forward to having Mahendra join us again next week when we'll talk about container registries. Until next time, this is Lois Houston… Nikita: And Nikita Abraham signing off! 14:24 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Murottal (Qur'an recitation) audio of Juz 'Amma from Masjid Pogung Raya.
In this enlightening episode of "Web3 with Sam Kamani," we dive deep into the evolving world of blockchain and DeFi with our special guest, Akash Mahendra. As a pivotal figure behind Haven1 blockchain and Yield app, Akash shares valuable insights on their projects, challenges in the blockchain space, and the future of Web3. Time Stamps & Key Topics: [00:00:00] Introduction: Welcome Akash Mahendra [00:00:10] Overview of Haven1 and Yield App [00:00:18] Haven1: EVM Compatible Layer One Blockchain [00:00:29] Yield App's Role and Achievements [00:01:14] Haven1's Global Team Dynamics [00:01:54] Plans for Haven1 in 2024 and Market Predictions [00:03:15] Understanding Haven1's EVM Compatibility [00:04:26] Haven1's Unique Security Features [00:05:15] Differentiating Haven1 from Other L1 Blockchains [00:07:05] Institutional Engagement with DeFi on Public Blockchains [00:07:50] Future Trends: Increased Usage in Web3 and DeFi [00:10:39] The Role of Stablecoins in Blockchain Adoption [00:13:14] Encouraging Developer Adoption on Haven1 [00:17:24] Akash's Contrarian Views on Web3 and Centralization [00:20:51] Haven1's Vision for the Future [00:23:04] Strategies for Engaging Developers in Haven1 [00:25:07] Akash's Evolving Views on Web3 and DeFi [00:29:52] The Rapid Pace of Innovation in Web3 [00:31:15] Haven1's Community Engagement and Future Plans [00:34:20] Closing Remarks and Future Outlook for Haven1 Closing Thoughts: Join us as we explore the intricacies of Haven1's approach to a more secure and compliant blockchain solution, and how it aims to shape the future of digital finance. Don't miss this opportunity to gain a deeper understanding of where the blockchain industry is heading from one of its key innovators. Nothing mentioned in this podcast is investment advice and please do your own research. Finally, I don't run ads on my podcast. It would mean a lot if you can leave a review of this podcast on ApplePodcasts or share this podcast with a friend. 
Connect with Akash and Haven1 here. https://www.linkedin.com/in/akash-mahendra-48541121a/ https://yield.app/blog/What-is-Haven1 Connect with me here - https://samkamani.com/#linktree --- Send in a voice message: https://podcasters.spotify.com/pod/show/web3podcast/message
In this episode, Prof. Yusril Ihza Mahendra shares stories about Japan and what we can learn from that country, as well as the challenges that today's young people have to face. Prof. Yusril also discusses how technological advances such as Artificial Intelligence, or AI, should be positioned while still keeping humans at the center. Curious how the story goes? Watch the full video!
They said it would be a disaster. They said it wouldn't go ahead. They said the track wasn't fit for purpose. Well WTF would they know? The lads are joined by India correspondent Mahendra to go over what turned out to be a fantastic Indian Moto GP. Sure there was a distinct and disappointing lack of elephants and a disturbingly high ratio of grandmothers in the crowd, but that didn't detract from what was a great first up effort for the world's most populous nation. It also inspired one of Borrie's best poems yet and if you're a regular listener you know how good that needs to be. (If you're not a regular listener, become one. And join our Patreon Pit Crew. And buy Borrie's books. Or just send us money, we don't care. As long as we're getting paid.) Stop looking at this now and press play. What are you waiting for? IMPORTANT ANNOUNCEMENT None of this would be possible without our magnificent sponsors. They support us, so it's only good manners you should support them. Visit their websites, try and buy their products, sign up for their newsletters, and tell them WE SENT YOU. And we would be really pleased if you bought Borrie's books. They're a great read, they make great gifts, and it helps him to feed his family. If he can feed his family he tends not to rob people. 
All four of his masterpieces are here (https://www.shocknawe.com.au/)… And our sponsors: MADE IN GERMANY (https://www.mig.bike/) AGV HELMETS (https://agvhelmets.com.au/) BMW MOTORRAD (https://www.bmw-motorrad.com.au/en/home.html#/filter-all) HARLEY-DAVIDSON (https://www.harley-davidson.com/au/en/index.html) HONDA (https://motorcycles.honda.com.au/) SUZUKI (https://www.suzukimotorcycles.com.au/) TRIUMPH (https://www.triumphmotorcycles.com.au/) APRILIA (https://www.aprilia.com/au_EN/) MOTO GUZZI (https://www.motoguzzi.com/au_EN/models/v85-tt/) SAVIC MOTORCYCLES (https://www.savicmotorcycles.com/) SC-PROJECT OCEANIA (https://sc-project.com.au/) NOISEGUARD (https://www.noiseguard.com.au/) WORLD ON WHEELS (https://www.worldonwheels.tours/) CMB Financial Services (https://cmbfin.com.au) or call 1300 262 346 Indo Insider Tours (https://indoinsidertours.com). GREY GUMS CAFÉ (https://www.facebook.com/GreyGumCafe/) KEBAND CUSTOM PARTS (https://www.kebandcustomparts.com.au) Chris the Artist (https://www.drautoart.com)
Akash Mahendra is Director of the Haven1 Foundation where he leads strategy, operations, and risk management efforts in support of Haven1 — an EVM-compatible L1 blockchain purpose-built to provide a secure environment for on-chain finance. He is also a Portfolio Manager at the digital wealth platform Yield App. Mahendra started his career as a Legal Enforcement Officer at The Australian Securities and Investments Commission, before diving into Web3 full-time. Prior to joining Haven1, Akash served as the Chief Investment Officer at the Web3 investment firm DAO Capital, and the Head of Operations and Strategy at Steady State, an automated DeFi insurance company, where he honed his expertise in blockchain tech and financial portfolio management. About Haven1 Haven1 is an EVM-compatible layer 1 blockchain designed to offer a secure, trusted, compliant environment to drive the mass adoption of on-chain finance. Architected by the innovators behind the digital wealth platform Yield App, Haven1 incorporates a provable identity framework and robust security guardrails at the network level, to provide retail, professional, and institutional investors alike with an on-chain finance platform free from the challenges and risks that plague the DeFi ecosystem. To learn more about Haven1, visit https://www.haven1.org/ About Yield App Yield App is a digital wealth platform that offers safe custody of digital assets, or allows customers to exchange and earn on their assets in return for market-leading rates. Its mission is to safely unlock the full potential of digital assets, combine them with the most rewarding opportunities available across financial markets and make these available to the masses. Since its public launch in February 2021, Yield App has grown to 90,000+ customers, including 1,000+ high-net-worth clients, who have entrusted Yield App with more than $550 million of their digital wealth. $550MM+ in managed assets, $250MM+ deployed into DeFi, and 90k+ active users. 
To learn more about Yield App, visit yield.app --- Support this podcast: https://podcasters.spotify.com/pod/show/crypto-hipster-podcast/support
In this episode, PinterPolitik had the valuable opportunity to share ideas and opinions with Dr. Yusron Ihza Mahendra. An economist and former Indonesian Ambassador to Japan and the Federated States of Micronesia, he shares his reflections on the 25th anniversary of the 1998 Reformasi. Yusron compares it with Japan's Meiji Restoration. He also offers an interesting idea about stabilizing the rupiah exchange rate using Indonesia's nickel.
Dr. Vibha Mahendra, a board-certified Obstetric Anesthesiologist, joins me on this episode to discuss anesthesia options during labor and delivery and why they are sometimes ineffective, potentially leading to trauma for the patient. Dr. M also provides insight into how providers can problem-solve these pain relief issues and communicate with patients in a way that mitigates trauma. This was such an insightful conversation and I think she does a great job educating the patient in a way that puts them in a better position to advocate or communicate with their support people to advocate for pain relief. On this episode, you will hear:-What is OB Anesthesia?-What's in an epidural?-Differences between a spinal block and an epidural- Why are some epidurals ineffective?- What happens when there is an ineffective spinal during a c-section?- The decision to convert to general anesthesia and why some doctors delay- Is it necessary to be restrained during c-section?Guest Bio:Vibha Mahendra, MD is a board-certified Obstetric Anesthesiologist, public health advocate, educator, and founder of SafePartum. Dr. Mahendra is passionate about improving maternal care by directly engaging and empowering all healthcare providers who care for pregnant patients, in a way that aligns with their practice environment and profession. Her professional interests span all of high-risk pregnancy, but she especially loves Obstetric Critical Care and Cardio-Obstetrics! Visit https://www.safepartum.com/academy for more resources on the management of high-risk conditions in pregnancy, labor & delivery, and postpartum. Safe Partum also has an upcoming OB Critical Care & Emergencies Course! Here is a link to the registration page to learn more about and sign up for this incredible course coming in September. https://fun-acorn-864.myflodesk.com/
In this episode, I have my Uncle Raz and my brother, Krishna, on to discuss the day our Baja (Grandfather) passed away. We talk about how Aakha Chopi Narou Bhani, a beautiful and sad Nepali song by Narayan Gopal, helped our grandfather transition in his final hour. This song was played often by my Grandfather on his Hawaiian slide guitar. We talk about the meaning and significance of this song and how it helped us cope with our loss. Uncle Raz, Krishna, and my Grandmother were in the room when Baja (Grandpa) passed. We talk about what happened that week that brought us to that pivotal moment for all of us as a family. To hear the song played by Baja, Krishna, and Mahendra, you can visit my Uncle Mahendra's channel: https://www.youtube.com/watch?v=v-lh2_5OKWY (1) Aakha Chopi Narou Bhani | Nepali Sadabahar Geet - YouTube. https://www.youtube.com/watch?v=_vCNjaSi-oQ. (2) Ankha chopi narou bhani - Nepali Song Chord. http://www.nepalisongchord.com/song/Ankha_chopi_narou_bhani. (3) Lyrics & Translations of Aakha Chopi Narou Bhani by Narayan Gopal. https://popnable.com/nepal/songs/397879-narayan-gopal-aakha-chopi-narou-bhani/lyrics-and-translations. (4) Aakha chhopi narou bhani(आँखा छोपी नरोउ ... - Smule. https://www.smule.com/song/narayan-gopal-aakha-chhopi-narou-bhani-आ-ख-छ-प-नर-उ-भन-karaoke-lyrics/732459998_2921319/arrangement. About the Hostess: Sharmila Mali is a Self-Love Expert, intuitive healer, Reiki Master Teacher, and Akashic Records Reader (in addition to being a podcaster), and for the past 19 years or so, most of her clients have been women who want to get over their ex. She also teaches intuitive energy healing and Reiki. FB: @SharmilaTheSelfLoveExpert IG: @sharmilatheselfloveexpert Support the Confident Healer: - DONATE, become a patron and donate one time or monthly, it's easy: www.theconfidenthealer.net/support - Share the podcast with someone that needs it. Intro and Outro Music: Ganga Bahadur instrumental, with Mahendra Bahadur on Tabla and Krishna on guitar.
Video courtesy of Mahendra Bahadur: Aakha Chopi Narou Bhani (instrumental) performed by Ganga, Mahendra, and Krishna - YouTube. Produced by Sharmila Mali. Edited by Chris Jessup.
Kelly Deily, associate director of graduate career services at Drexel LeBow, speaks with Prateik Mahendra, MS '18, data engineer of privacy at Meta. Mahendra shares essential tips on navigating a successful job search, his outlook on the current state of the economy and suggestions for managing a rewarding work-life balance.Music by ikson
”In a business culture that values doing (execution) and thinking (strategy), we need to evolve toward feeling – a part that remains ignored in the world of business.” – Mahendra Ramsinghani, The Resilient Founder In today's podcast Mahendra Ramsinghani discusses his book, The Resilient Founder, and some of the practices that listeners can use to build resilience. Topics include: The truth about internal darkness and the entrepreneurial experience. Why it might be time to develop a ‘Psychological Quotient'. Why an unstoppable 24/7 attitude is often counter-productive.
Knowing when to form and build a Board of Directors is a challenge for many startup founders.Once the Board is formed, how to manage and communicate with the Board presents an additional burden on founders.Our guest, Mahendra Ramsinghani, co-authored Startup Boards: Getting the Most of Your Board of Directors with Brad Feld, and has all the insights and advice you need to skillfully navigate this important segment of your startup's structure.Currently, Mahendra is the Founder of Secure Octane - a cybersecurity seed fund based in the San Francisco Bay area.He's invested in over fifty pre-seed and seedstage companies and, as the former Director of Venture Capital Initiatives for Michigan Economic Development Corporation (MEDC), Mahendra led the legislation for two Fund-of-Fund programs in Michigan. Mahendra is also the author of two additional books - The Business of Venture Capital and The Resilient Founder.To purchase a copy of Startup Boards: A Field Guide to Building and Leading an Effective Board of Directors, please visit: https://amzn.to/3N7YfjzConnect with Mahendra via these links:LinkedIn: https://www.linkedin.com/in/mahendraram/Website: https://www.thebusinessofvc.com/Thank you for carving out time to improve your Founder Game - when you do better, your startup will do better - cheers from Boston!Ande ♥https://andelyons.com#boardofdirectors #startupboards #buildingeffectivestartupboards #founderadvice WHAT WE LEARNED00:00 - Meet Seasoned Startup Investor Mahendra Ramsinghani09:00 - Mahendra's origin story13:00 - Not knowing what to do can be either paralyzing or empowering18:00 - Mahendra's investor journey - how it began and how it's going25:00 - Secure Octane32:00 - When does a startup need to establish a Board34:00 - Who should be on a Board and how many members are needed based on stage of funding: Seed/Series A-D36:30 39:00 - Legal requirements for Boards and fiduciary responsibilities41:00 - What are the Board's responsibilities and what are the 
Founder's responsibilities43:20 - Who is in charge45:00 - Communication strategies and intellectual honesty48:00 - How does a Founder deliver bad news to a Board of Directors54:00 - Board of Director's Compensation Packages57:00 - How to off-ramp a toxic Board MemberJOIN STARTUP LIFE LIVE MEETUP GROUPGet an alert whenever I post a new show!https://bit.ly/StartupLifeLIVEWBENC APPLICATION SUPPORTLearn more here: https://bit.ly/GetWBENCSend me an email: ande@andelyons.comCONNECT WITH ME ONLINE: https://twitter.com/AndeLyonshttps://www.linkedin.com/in/andelyons/ https://www.instagram.com/ande_lyons/ TikTok: @andelyonsANDELICIOUS ANNOUNCEMENTSJoin Innovation Women here: https://bit.ly/AndeInnoWomenArlan's Academy: https://arlansacademy.com/Scroobious - use Ande15 discount code: https://www.scroobious.com/How to Raise a Seed Round: https://bit.ly/AAElizabethYinTune in to Mia Voss' Shit We Don't Talk About podcast here: https://shitwedonttalkaboutpodcast.com/SPONSORSHIPIf you resonate with the show's mission of amplifying diverse founder voices while serving first-time founders around the world, please reach out to me to learn more about making an impact through sponsoring the Startup Life LIVE Show! ande@andelyons.com.
Investor Shares What Every Startup Founder Should Know About Structuring a Successful Board with Mahendra Ramsinghani S5 Ep.9. Mahendra Ramsinghani is the author of two books - The Business of Venture Capital and The Resilient Founder - and co-author of Startup Boards with Brad Feld and Matt Blumberg. He is also the Founder of Secure Octane - a cybersecurity seed fund based in San Francisco - and led the legislation for two Fund-of-Fund programs. In this episode, we discussed: The business of venture capital; The philosophy of building meaningful companies; The importance of putting the right Board in place; and The importance of building resilience as a founder. To learn more about the business of venture capital, resilient founders, and startup boards, you can find Mahendra's books here on Amazon: The Business of Venture Capital; The Resilient Founder; Startup Boards. To my listeners, I hope you enjoyed this episode. Don't keep good content to yourself. If you enjoyed this episode, let me know by rating, reviewing, and sharing this episode with others. Click here to review: Podkite link: https://kite.link/where-E2-80-99s-the-funding-3F Subscribe to the podcast at its home on the ALIVE Podcast Network and follow the podcast on your favorite podcast streaming platforms like Apple Podcasts, Google Play, Spotify, and more to get notified when new episodes drop. To be a guest or sponsor the podcast, email whereisthefunding@gmail.com. Follow the podcast on Instagram at whereisthefunding_podcast and follow me, your host Michelle J. McKenzie, and the show page on LinkedIn. Join me next Friday for another episode.
Since its initial publication, “The Business of Venture Capital” has been hailed as the definitive, most comprehensive book on the subject. In its upcoming third edition, this market-leading text explains the multiple facets of the business of venture capital, from raising venture funds, to structuring investments, to generating consistent returns, to evaluating exit strategies.

Mahendra Ramsinghani is the founder of Secure Octane, a venture capital firm based in San Francisco that invests in cybersecurity, among other sectors. He is also the author of multiple books, including “The Resilient Founder” and “Startup Boards,” the latter co-authored with noted VC Brad Feld.

Greg and Mahendra dig into everything that makes VCs work in this episode, including what fund managers think about venture capital, betting on humans over ideas, the characteristics of a good GP, and the mental health of founders.

Episode Quotes:

The single biggest problem with venture capital
10:51: I think that's the single biggest problem in our business: markets don't evolve or adopt technologies fast enough. Or if they adopt certain technologies, they don't adopt every technology. They'll pick one, right? So you end up saying: okay, in this scenario I was the winner, and in this scenario I lost. And so this is a business where you're constantly being humbled and constantly being reminded that you cannot logically plan the outcomes.

What are the key characteristics of a successful venture capitalist?
39:06: The fundamental attributes that play out well are curiosity and openness to learning about new trends. And then the second, and the more important, is the ability to take risks within a shorter period of time and make our decisions quickly, as opposed to trying to belabor how the future might play out five years from now.

Two metrics in measuring fund performance
27:03: What fund managers do, you know, people like me, are giving them the two metrics they want to look at before they start the conversation. So your IRR, you know, is a time-based sort of metric of your performance. And then the second is your cash-on-cash, whether it's TVPI (Total Value to Paid-In) or multiple of investor capital. So those two tend to have now become industry standards.

Show Links:

Recommended Resources:
John Hagel episode

Guest Profile:
Contributor's Profile at Forbes
Contributor's Profile at TechCrunch
Mahendra Ramsinghani on Twitter
Mahendra Ramsinghani on LinkedIn

His Work:
Secure Octane Investments
Startup Boards: A Field Guide to Building and Leading an Effective Board of Directors
The Resilient Founder: Lessons in Endurance from Startup Entrepreneurs
The Business of Venture Capital: The Art of Raising a Fund, Structuring Investments, Portfolio Management, and Exits (Wiley Finance)
It was nightly business conversations at his parents' dinner table that first led Prashanth Mahendra-Rajah to consider alternatives to business when it came to building a career. “As most small business owners do, my parents worked all the time—and as with most small businesses, things could at times be financially challenging,” explains Mahendra-Rajah, who vividly recalls business rent increases, outstanding receivables, and the dynamics behind supply and demand that pervaded his parents' dinner conversations.

Nevertheless, it was this same scrutiny of supply-and-demand dynamics that Mahendra-Rajah credits with helping him to “come full circle” and ultimately led him to business school. At the time, Mahendra-Rajah was working full-time as a senior process engineer for chemical giant FMC Corp., a career-building stint that afforded him the real-world insights required to enrich a master's thesis that he needed in order to complete a chemical engineering degree from Johns Hopkins.

“It was my first job out of college, and the plant manager's M.O. was to always beat me up and demand more cost reductions and better process yields,” recalls Mahendra-Rajah, who notes that his immersion into the business side of manufacturing quickly escalated when FMC received a large order for a synthetic that the company no longer manufactured. “I was given the task of refurbishing an old factory and getting it up and running in a matter of weeks,” remembers Mahendra-Rajah, who adds that the production of the once-discontinued synthetic led the plant manager's mindset to suddenly pivot. “He was pushing me to spend as fast as I could. I was told not to negotiate with suppliers, and if I needed overtime for the maintenance workers, to ‘go for it'—schedule was paramount, cost was secondary,” explains the career finance leader, who credits the experience with helping him to open a door that he had once shut.
Says Mahendra-Rajah: “It kind of brought me back to the table with Mom and Dad and made me realize how so much of our world is really driven by supply and demand and how finance is the oil in the gears." –Jack Sweeney
We know what you might be thinking: there are so many NFT projects already aiming to create a real-world impact, so what's different about this one? Well, in today's episode, Heather and Rich are joined by John and Mahendra, the co-founders of Good Cactus Frens, to talk about what really goes on behind the scenes when an NFT project pairs up with its chosen charity, how it works, their backstory, and more! Let's dive in...

Check out Good Cactus Frens' socials: https://goodcactusfrens.com/ (Website) | https://www.linkedin.com/company/goodcactusfrens/?originalSubdomain=ca (LinkedIn) | https://twitter.com/goodcactusfrens (Twitter)

Get the NFTs Simplified Course for only $19 through our Newbies link https://web3simplified.xyz/?affiliate_id=372221043#course (HERE)!

Questions? Comments? Join our community on https://discord.com/invite/Z7yEcDN6yy (Discord)! You can also follow Web 3 and NFTs for Newbies on social https://bit.ly/3rBGGxI (HERE)

*Disclaimer* This content is for educational and entertainment purposes only. We are not giving financial or investment advice.