Podcasts about Oracle Cloud

Cloud computing service

  • 191 PODCASTS
  • 672 EPISODES
  • 34m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Jun 22, 2025 LATEST

POPULARITY

(popularity chart: 2017-2024)


Best podcasts about Oracle Cloud

Show all podcasts related to oracle cloud

Latest podcast episodes about Oracle Cloud

The Cloudcast
The Rapid Change of Cloud Dynamics

The Cloudcast

Play Episode Listen Later Jun 22, 2025 27:00


Are the dynamics of the hyperscale cloud providers changing? Will the leaders of the last decade continue for the next 3-5 years, or the next decade? Let's explore how the market is now rapidly changing.

SHOW: 934
SHOW TRANSCRIPT: The Cloudcast #934 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"

SHOW SPONSORS:
[US CLOUD] Cut Enterprise IT Support Costs by 30-50% with US Cloud
[VASION] Vasion Print eliminates the need for print servers by enabling secure, cloud-based printing from any device, anywhere. Get a custom demo to see the difference for yourself.

SHOW NOTES:
  • Andy Jassy (Amazon CEO) - Thoughts on Generative AI
  • AWS quarterly revenue growth
  • OpenAI struggling to meet deadline for $20B investment (including Microsoft)
  • Inside Microsoft's complex relationship with OpenAI
  • Google Cloud quarterly revenues
  • BG2 Podcast
  • East Meets West 2025 (keynote presentation)

TECHNOLOGY WAVES CHANGE MARKET DYNAMICS; INCUMBENTS DON'T ALWAYS LEAD
  • Revenues: AWS ($62B, 2021), Azure ($60B, 2021), GCP ($19B, 2021)
  • Revenues (run rate): AWS ($130B, 2025), Azure ($108B, 2025), GCP ($50B, 2025)
  • AWS is on its 3rd CEO since the pandemic; the market is questioning their AI strategy
  • The Azure + OpenAI alignment strategy went from strategic to questioned
  • Google's start-stop AI strategy is coming together, but a company breakup looms
  • Apple's AI strategy seems completely unknown (Siri, Apple Intelligence, devices, models?)
  • Oracle seems to be rebounding around Oracle Cloud Infrastructure
  • New hypercloud AI providers emerging? The Amazon/AWS 20%

FEEDBACK?
Email: show at the cloudcast dot net
Twitter/X: @cloudcastpod
BlueSky: @cloudcastpod.bsky.social
Instagram: @cloudcastpod
TikTok: @cloudcastpod
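The revenue figures in the show notes above imply sharply different growth rates for the three hyperscalers, which is worth a quick back-of-the-envelope check. One caveat: the 2021 numbers are annual revenue while the 2025 numbers are run rates, so the percentages below are indicative only.

```python
# Implied annualized growth (CAGR) from the revenue figures quoted above.
# Caveat: 2021 figures are annual revenue, 2025 figures are run rates,
# so these percentages are indicative, not exact.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

revenues = {  # ($B in 2021, $B run rate in 2025)
    "AWS":   (62, 130),
    "Azure": (60, 108),
    "GCP":   (19, 50),
}

for provider, (y2021, y2025) in revenues.items():
    rate = cagr(y2021, y2025, years=4)
    print(f"{provider}: ~{rate:.0%} implied annual growth")
```

GCP grows fastest off the smallest base, which lines up with the episode's point that technology waves can reshuffle market leadership.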

Patoarchitekci
Bieda-hosting, czyli miejsce na MVP, Side Projects i inne nasze zabawki

Patoarchitekci

Play Episode Listen Later May 23, 2025 41:37


Bieda-hosting ("budget hosting"), a home for MVPs, side projects, and our other toys: it sounds like the manifesto of every developer on a tight budget. Łukasz and Szymon explore the world of cheap VPSes, free tiers, and solutions for side projects, from Mikrus at 197 PLN a year to Oracle Cloud with its free virtual machine. The hosts compare Docker Compose against native installs, discuss GitHub Actions for CI/CD, and make the case for Cloudflare as a must-have. They talk about backups (the happy ones don't make them), secrets management, and reverse proxies with Caddy or Traefik. Bonus: the electricity-cost math for homelabs may surprise you. If you've ever committed secrets to a repo out of laziness, or you're weighing self-hosting, this episode will untangle your dilemmas. Find out whether budget hosting is your path to an MVP, or whether it's worth investing in something better. Now, no slacking off!

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 520: IBM Think 2025 - AI Updates that could shape Enterprise Work

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later May 7, 2025 47:30


One of the biggest downsides of consumer AI? It doesn't have up-to-date access to your enterprise data. Even as frontier labs work tirelessly to connect and integrate AI chatbots with your data, we're a far way off from that happening. Unless you're using a platform like IBM's watsonx. And if you are using watsonx, your go-to enterprise AI platform just got a TON more powerful. IBM just unveiled updates across its watsonx ecosystem at its Think 2025 conference. We've been here covering every step of it, so we're jumping into what you need to know.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the conversation.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
  • IBM Think Conference 2025 Highlights
  • IBM's watsonx AI Platform Updates
  • Enterprise Workflow with watsonx Orchestrate
  • Build Your Own AI Agents Features
  • Prebuilt Domain Agents Overview
  • New Agent Catalog with 50+ Agents
  • IBM and Salesforce AI Collaboration
  • IBM's Partnership with Oracle for AI

Timestamps:
00:00 Amazon's Advanced AI Coding Tool Kiro
03:52 AI Delivers Victim's Court Statement
07:12 IBM Conference Insights and Updates
12:52 Rise of Small Language Models
16:03 watsonx Orchestrate Overview
17:13 Streamlined Internal Workflow Automation
21:02 DIY AI Agents Revolution
23:52 AI Trust Through Transparent Reasoning
28:23 Prebuilt AI Agents Boost Efficiency
31:20 IBM watsonx AI Traceability Insights
35:14 AI Platforms Crossover: watsonx and Salesforce
41:10 IBM's AI Data Platform Enhancement
44:59 IBM watsonx Q&A Invitation

Keywords: IBM Think 2025, AI updates, enterprise work, IBM watsonx, generative AI, enterprise organizations, IBM products, watsonx AI platforms, AI news, Amazon Kiro, code generation tool, AI agents, technical design documents, OpenAI, Google's Gemini 2.5 Pro, web app development, large language models, enterprise systems, dynamic enterprise data, enterprise-grade versions, Meta's Llama, Mistral models, Granite models, small language models, AI agent creation, build your own agents, prebuilt domain agents, Salesforce collaboration, Oracle Cloud, multi-agent orchestration, watsonx data intelligence, unstructured data, open-source models, consumer-grade GPU, data governance, code transformation, semantic understanding, hybrid cloud strategy.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner

WALL STREET COLADA
Tensiones Globales, IA Estratégica y Movimiento en Robotaxis.

WALL STREET COLADA

Play Episode Listen Later May 6, 2025 5:12


In this episode we cover the key developments moving the market:
• Markets in the red on trade tensions and the Fed: Futures: $SPX -0.7%, $US100 -1%, $INDU -0.7%. Fears persist over the lack of progress on trade, Trump's tariff comments, and anticipation of the Fed's rate decision this Wednesday. $NFLX remains under pressure and $F falls -2.7% after withdrawing its annual guidance.
• IBM pushes hybrid AI with strategic alliances: $IBM unveiled new AI products at THINK, targeting enterprise integration with partners such as $ORCL, $AMZN, $MSFT, and $CRM. It launches its watsonx suite on Oracle Cloud and reaffirms a $150B investment in the US for mainframes and quantum computing.
• DoorDash buys Deliveroo: $DASH acquires $DROOF for $3.9B, adding 7M users across 9 countries. It expands its presence in Europe and strengthens its international strategy following the 2022 acquisition of Wolt. The combined group booked $90B in gross orders in 2024.
• Pony.ai and Uber expand robotaxis globally: $PONY +12% in premarket after announcing an alliance with $UBER in the Middle East. Uber is also expanding its collaboration with WeRide $WRD to 15 cities. Uber keeps weaving a global network with more than 15 autonomous-technology alliances.
Hit play and get ready with the most relevant news before the market opens!

Cloud Unplugged
Google AI Agentspace Workflows, Oracle Cloud Breach and OpenAI's New Features

Cloud Unplugged

Play Episode Listen Later Apr 27, 2025 35:39 Transcription Available


In this fact-busting 30-minute episode, Jon and Lewis explore Google's AI AgentSpace technology and how it could let business teams run complete workflows without ever leaving the agent environment. They then test the marketing claims behind OpenAI's latest model features, before dissecting Oracle's cloud data breach. Finally, they map out the potential impact of the proposed Clean Cloud Act on the industry's energy footprint.
Hosts:
https://www.linkedin.com/in/jonathanshanks/
https://www.linkedin.com/in/lewismarshall/

Lex Fridman Podcast
#466 – Jeffrey Wasserstrom: China, Xi Jinping, Trade War, Taiwan, Hong Kong, Mao

Lex Fridman Podcast

Play Episode Listen Later Apr 24, 2025 194:21


Jeffrey Wasserstrom is a historian of modern China. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep466-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript: https://lexfridman.com/jeffrey-wasserstrom-transcript

CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Jeffrey Wasserstrom's Books:
China in the 21st Century: https://amzn.to/3GnayXT
Vigil: Hong Kong on the Brink: https://amzn.to/4jmxWmT
Oxford History of Modern China: https://amzn.to/3RAJ9nI
The Milk Tea Alliance: https://amzn.to/42DLapH

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Oracle: Cloud infrastructure. Go to https://oracle.com/lex
Tax Network USA: Full-service tax firm. Go to https://tnusa.com/lex
Shopify: Sell stuff online. Go to https://shopify.com/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
AG1: All-in-one daily nutrition drink. Go to https://drinkag1.com/lex

OUTLINE:
(00:00) - Introduction
(00:06) - Sponsors, Comments, and Reflections
(10:29) - Xi Jinping and Mao Zedong
(13:57) - Confucius
(21:27) - Education
(29:33) - Tiananmen Square
(40:49) - Tank Man
(50:49) - Censorship
(1:26:45) - Xi Jinping
(1:44:53) - Donald Trump
(1:48:47) - Trade war
(2:01:35) - Taiwan
(2:11:48) - Protests in Hong Kong
(2:44:07) - Mao Zedong
(3:05:48) - Future of China

PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips

SOCIAL LINKS:
- X: https://x.com/lexfridman
- Instagram: https://instagram.com/lexfridman
- TikTok: https://tiktok.com/@lexfridman
- LinkedIn: https://linkedin.com/in/lexfridman
- Facebook: https://facebook.com/lexfridman
- Patreon: https://patreon.com/lexfridman
- Telegram: https://t.me/lexfridman
- Reddit: https://reddit.com/r/lexfridman

Knee-deep in Tech
Episode 304

Knee-deep in Tech

Play Episode Listen Later Apr 22, 2025 35:42


In this news episode, the trio uncovers some hidden PaaS costs in Databricks and breaks down a few subtle but impactful updates in Microsoft Fabric. The recent Oracle Cloud breach, and how it was handled, gets some attention. They also cover the latest improvements to Autopatch and LAPS, including reporting and capabilities that bring added flexibility. They round out with a discussion of MCP and how it relates to the evolution of AI agents, as well as Microsoft Copilot in Azure hitting general availability.
Show Notes
Hosted on Acast. See acast.com/privacy for more information.

Oracle University Podcast
Integrating APEX with OCI AI Services

Oracle University Podcast

Play Episode Listen Later Apr 22, 2025 20:01


Discover how Oracle APEX leverages OCI AI services to build smarter, more efficient applications. Hosts Lois Houston and Nikita Abraham interview APEX experts Chaitanya Koratamaddi, Apoorva Srinivas, and Toufiq Mohammed about how key services like OCI Vision, Oracle Digital Assistant, and Document Understanding integrate with Oracle APEX.   Packed with real-world examples, this episode highlights all the ways you can enhance your APEX apps.   Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we looked at how generative AI powers Oracle APEX and in today's episode, we're going to focus on integrating APEX with OCI AI Services. Lois: That's right, Niki. We're going to look at how you can use Oracle AI services like OCI Vision, Oracle Digital Assistant, Document Understanding, OCI Generative AI, and more to enhance your APEX apps. 01:03 Nikita: And to help us with it all, we've got three amazing experts with us, Chaitanya Koratamaddi, Director of Product Management at Oracle, and senior product managers, Apoorva Srinivas and Toufiq Mohammed. 
In today's episode, we'll go through each Oracle AI service and look at how it interacts with APEX. Apoorva, let's start with you. Can you explain what the OCI Vision service is? Apoorva: Oracle Cloud Infrastructure Vision is a serverless multi-tenant service accessible using the console or REST APIs. You can upload images to detect and classify objects in them. With prebuilt models available, developers can quickly build image recognition into their applications without machine learning expertise. OCI Vision service provides a fully managed model infrastructure. With complete integration with OCI Data Labeling, you can build custom models easily. OCI Vision service provides pretrained models-- Image Classification, Object Detection, Face Detection, and Text Recognition. You can build custom models for Image Classification and Object Detection. 02:24 Lois: Ok. What about its use cases? How can OCI Vision make APEX apps more powerful? Apoorva: Using OCI Vision, you can make images and videos discoverable and searchable in your APEX app.  You can use OCI Vision to detect and classify objects in the images. OCI Vision also highlights the objects using a red rectangular box. This comes in handy in use cases such as detecting vehicles that have violated the rules in traffic images. You can use OCI Vision to identify visual anomalies in your data. This is a very popular use case where you can detect anomalies in cancer X-ray images to detect cancer. These are some of the most popular use cases of using OCI Vision with your APEX app. But the possibilities are endless and you can use OCI Vision for any of your image analysis. 03:29 Nikita: Let's shift gears to Oracle Digital Assistant. Chaitanya, can you tell us what it's all about? Chaitanya: Oracle Digital Assistant is a low-code conversational AI platform that allows businesses to build and deploy AI assistants. 
It provides natural language understanding, automatic speech recognition, and text-to-speech capabilities to enable human-like interactions with customers and employees. Oracle Digital Assistant comes with prebuilt templates for you to get started.  04:00 Lois: What are its key features and benefits, Chaitanya? How does it enhance the user experience? Chaitanya: Oracle Digital Assistant provides conversational AI capabilities that include generative AI features, natural language understanding and ML, AI-powered voice, and analytics and insights. Integration with enterprise applications becomes easier with a unified conversational experience, prebuilt chatbots for Oracle Cloud applications, and chatbot architecture frameworks. Oracle Digital Assistant provides advanced conversational design tools, a conversational designer, a dialogue and domain trainer, and native multilingual support. Oracle Digital Assistant is open, scalable, and secure. It provides multi-channel support, automated bot-to-agent transfer, and integrated authentication profiles. 04:56 Nikita: And what about the architecture? What happens at the back end? Chaitanya: Developers assemble digital assistants from one or more skills. Skills can be based on prebuilt skills provided by Oracle or third parties, custom developed, or based on one of the many skill templates available. 05:16 Lois: Chaitanya, what exactly are “skills” within the Oracle Digital Assistant framework?  Chaitanya: Skills are individual chatbots that are designed to interact with users and fulfill specific types of tasks. Each skill helps a user complete a task through a combination of text messages and simple UI elements like select lists. When a user request is submitted through a channel, the Digital Assistant routes the user's request to the most appropriate skill to satisfy the user's request. Skills can combine a multilingual NLP deep learning engine, a powerful dialogflow engine, and integration components to connect to back-end systems. 
 Skills provide a modular way to build your chatbot functionality. Now users connect with a chatbot through channels such as Facebook, Microsoft Teams, or in our case, Oracle APEX chatbot, which is embedded into an APEX application. 06:21 Nikita: That's fascinating. So, what are some use cases of Oracle Digital Assistant in APEX apps? Chaitanya: Digital assistants streamline approval processes by collecting information, routing requests, and providing status updates. Digital assistants offer instant access to information and documentation, answering common questions and guiding users. Digital assistants assist sales teams by automating tasks, responding to inquiries, and guiding prospects through the sales funnel. Digital assistants facilitate procurement by managing orders, tracking deliveries, and handling supplier communication. Digital assistants simplify expense approvals by collecting reports, validating receipts, and routing them for managerial approval. Digital assistants manage inventory by tracking stock levels, reordering supplies, and providing real-time inventory updates. Digital assistants have become a common UX feature in any enterprise application. 07:28 Want to learn how to design stunning, responsive enterprise applications directly from your browser with minimal coding? The new Oracle APEX Developer Professional learning path and certification enables you to leverage AI-assisted development, including generative AI and Database 23ai, to build secure, scalable web and mobile applications with advanced AI-powered features. From now through May 15, 2025, we're waiving the certification exam fee (valued at $245). So, what are you waiting for? Visit mylearn.oracle.com to get started today. 08:09 Nikita: Welcome back! Thanks for that, Chaitanya. Toufiq, let's talk about the OCI Document Understanding service. What is it? Toufiq: Using this service, you can upload documents to extract text, tables, and other key data. 
This means the service can automatically identify and extract relevant information from various types of documents, such as invoices, receipts, contracts, etc. The service is serverless and multitenant, which means you don't need to manage any servers or infrastructure. You can access this service using the console, REST APIs, SDK, or CLI, giving you multiple ways to integrate. 08:55 Nikita: What do we use for APEX apps?  Toufiq: For APEX applications, we will be using REST APIs to integrate the service. Additionally, you can process individual files or batches of documents using the ProcessorJob API endpoint. This flexibility allows you to handle different volumes of documents efficiently, whether you need to process a single document or thousands at once. With these capabilities, the OCI Document Understanding service can significantly streamline your document processing tasks, saving time and reducing the potential for manual errors. 09:36 Lois: Ok.  What are the different types of models available? How do they cater to various business needs? Toufiq: Let us start with pre-trained models. These are ready-to-use models that come right out of the box, offering a range of functionalities. The available models are: Optical Character Recognition (OCR), which enables the service to extract text from documents, allowing you to digitize scanned documents effortlessly and precisely extract text content; key-value extraction, useful in streamlining tasks like invoice processing; table extraction, which can intelligently extract tabular data from documents; document classification, which automatically categorizes documents based on their content; and OCR PDF, which enables seamless extraction of text from PDF files. Now, what if your business needs go beyond these pre-trained models? That's where custom models come into play. You have the flexibility to train and build your own models on top of these foundational pre-trained models. 
Models available for training are key value extraction and document classification. 10:50 Nikita: What does the architecture look like for OCI Document Understanding? Toufiq: You can ingest or supply the input file in two different ways. You can upload the file to an OCI Object Storage location. And in your request, you can point the Document Understanding service to pick the file from this Object Storage location.  Alternatively, you can upload a file directly from your computer. Once the file is uploaded, the Document Understanding service can process the file and extract key information using the pre-trained models. You can also customize models to tailor the extraction to your data or use case. After processing the file, the Document Understanding service stores the results in JSON format in the Object Storage output bucket. Your Oracle APEX application can then read the JSON file from the Object Storage output location, parse the JSON, and store useful information at local table or display it on the screen to the end user. 11:52 Lois: And what about use cases? How are various industries using this service? Toufiq: In financial services, you can utilize Document Understanding to extract data from financial statements, classify and categorize transactions, identify and extract payment details, streamline tax document management. Under manufacturing, you can perform text extraction from shipping labels and bill of lading documents, extract data from production reports, identify and extract vendor details. In the healthcare industry, you can automatically process medical claims, extract patient information from forms, classify and categorize medical records, identify and extract diagnostic codes. This is not an exhaustive list, but provides insights into some industry-specific use cases for Document Understanding. 12:50 Nikita: Toufiq, let's switch to the big topic everyone's excited about—the OCI Generative AI Service. What exactly is it? 
Toufiq: OCI Generative AI is a fully managed service that provides a set of state of the art, customizable large language models that cover a wide range of use cases. It provides enterprise grade generative AI with data governance and security, which means only you have access to your data and custom-trained models. OCI Generative AI provides pre-trained out-of-the-box LLMs for text generation, summarization, and text embedding. OCI Generative AI also provides necessary tools and infrastructure to define models with your own business knowledge. 13:37 Lois: Generally speaking, how is OCI Generative AI useful?  Toufiq: It supports various large language models. New models available from Meta and Cohere include Llama2 developed by Meta, and Cohere's Command model, their flagship text generation model. Additionally, Cohere offers the Summarize model, which provides high-quality summaries, accurately capturing essential information from documents, and the Embed model, converting text to vector embeddings representation. OCI Generative AI also offers dedicated AI clusters, enabling you to host foundational models on private GPUs. It integrates LangChain and open-source framework for developing new interfaces for generative AI applications powered by language models. Moreover, OCI Generative AI facilitates generative AI operations, providing content moderation controls, zero downtime endpoint model swaps, and endpoint deactivation and activation capabilities. For each model endpoint, OCI Generative AI captures a series of analytics, including call statistics, tokens processed, and error counts. 14:58 Nikita: What about the architecture? How does it handle user input? Toufiq: Users can input natural language, input/output examples, and instructions. The LLM analyzes the text and can generate, summarize, transform, extract information, or classify text according to the user's request. 
The response is sent back to the user in the specified format, which can include raw text or formatting like bullets and numbering, etc. 15:30 Lois: Can you share some practical use cases for generative AI in APEX apps?  Toufiq: Some of the OCI generative AI use cases for your Oracle APEX apps include text summarization. Generative AI can quickly summarize lengthy documents such as articles, transcripts, doctor's notes, and internal documents. Businesses can utilize generative AI to draft marketing copy, emails, blog posts, and product descriptions efficiently. Generative AI-powered chatbots are capable of brainstorming, problem solving, and answering questions. With generative AI, content can be rewritten in different styles or languages. This is particularly useful for localization efforts and catering to a diverse audience. Generative AI can classify intent in customer chat logs, support tickets, and more. This helps businesses understand customer needs better and provide tailored responses and solutions. By searching call transcripts and internal knowledge sources, generative AI enables businesses to efficiently answer user queries. This enhances information retrieval and decision-making processes. 16:47 Lois: Before we let you go, can you explain what Select AI is? How is it different from the other AI services? Toufiq: Select AI is a feature of Autonomous Database. This is where Select AI differs from the other AI services. Be it OCI Vision, Document Understanding, or OCI Generative AI, these are all fully managed standalone services on Oracle Cloud, accessible via REST APIs. Whereas Select AI is a feature available in Autonomous Database. That means to use Select AI, you need Autonomous Database.  17:26 Nikita: And what can developers do with Select AI? Toufiq: Traditionally, SQL is the language used to query the data in the database. With Select AI, you can talk to the database and get insights from the data in the database using human language. 
At the very basic, what Select AI does is it generates SQL queries using natural language, like an NL2SQL capability.  17:52 Nikita: How does it actually do that? Toufiq: When a user asks a question, the first step Select AI does is look into the AI profile, which you, as a developer, define. The AI profile holds crucial information, such as table names, the LLM provider, and the credentials needed to authenticate with the LLM service. Next, Select AI constructs a prompt. This prompt includes information from the AI profile and the user's question.  Essentially, it's a packet of information containing everything the LLM service needs to generate SQL. The next step is generating SQL using LLM. The prompt prepared by Select AI is sent to the available LLM services via REST. Which LLM to use is configured in the AI profile. The supported providers are OpenAI, Cohere, Azure OpenAI, and OCI Generative AI. Once the SQL is generated by the LLM service, it is returned to the application. The app can then handle the SQL query in various ways, such as displaying the SQL results in a report format or as charts, etc.  19:05 Lois: This has been an incredible discussion! Thank you, Chaitanya, Apoorva, and Toufiq, for walking us through all of these amazing AI tools. If you're ready to dive deeper, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. You'll find step-by-step guides and demos for everything we covered today.  Nikita: Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 19:31 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.  
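The Select AI flow Toufiq walks through in the episode above (AI profile, prompt construction, LLM call, SQL returned to the app) can be sketched roughly as follows. The profile fields, prompt wording, and function names here are illustrative assumptions, not the actual Autonomous Database internals:

```python
# Rough sketch of the Select AI NL2SQL flow described in the episode above.
# The profile structure and prompt wording are assumptions for illustration;
# the real feature is configured inside Autonomous Database via AI profiles.

def build_prompt(profile: dict, question: str) -> str:
    """Combine schema metadata from the AI profile with the user's question."""
    schema_lines = [
        f"Table {t['name']}({', '.join(t['columns'])})" for t in profile["tables"]
    ]
    return (
        "Generate an Oracle SQL query for the question below.\n"
        + "\n".join(schema_lines)
        + f"\nQuestion: {question}\nReturn only SQL."
    )

# Hypothetical AI profile: table names plus which LLM provider to call.
profile = {
    "provider": "oci_generative_ai",
    "tables": [
        {"name": "EMPLOYEES", "columns": ["EMP_ID", "NAME", "SALARY", "DEPT_ID"]},
        {"name": "DEPARTMENTS", "columns": ["DEPT_ID", "DEPT_NAME"]},
    ],
}

prompt = build_prompt(profile, "What is the average salary in each department?")
# This prompt (schema + question) is what gets sent to the configured LLM via
# REST; the generated SQL then comes back to the application to run or display.
print(prompt)
```

The key idea from the episode is that the developer-defined profile, not the end user, supplies the schema context and the LLM credentials.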

The CyberWire
Microsoft squashes Windows Server bug.

The CyberWire

Play Episode Listen Later Apr 17, 2025 36:06


Microsoft issues emergency updates for Windows Server. Apple releases emergency security updates to patch two zero-days. CISA averts a CVE program disruption. Researchers uncover Windows versions of the BrickStorm backdoor. Atlassian and Cisco patch several high-severity vulnerabilities. An Oklahoma cybersecurity CEO is charged with hacking a local hospital. A Fortune 500 financial firm reports an insider data breach. Researchers unmask IP addresses behind the Medusa Ransomware Group. CISA issues a warning following an Oracle data breach. On our Industry Voices segment, we are joined by Rob Allen, Chief Product Officer at ThreatLocker, to discuss a layered approach to zero trust. Former CISA director Chris Krebs steps down from his role at SentinelOne. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. 
Selected Reading New Windows Server emergency updates fix container launch issue (Bleeping Computer) Apple fixes two zero-days exploited in targeted iPhone attacks (Bleeping Computer) CISA Throws Lifeline to CVE Program with Last-Minute Contract Extension (Infosecurity Magazine) MITRE Hackers' Backdoor Has Targeted Windows for Years (SecurityWeek) Vulnerabilities Patched in Atlassian, Cisco Products (SecurityWeek) Edmond cybersecurity CEO accused in major hack at hospital (KOCO News) Fortune 500 firm's ex-employee exposes thousands of clients (Cybernews) Researchers Deanonymized Medusa Ransomware Group's Onion Site (Cyber Security News) CISA warns of potential data breaches caused by legacy Oracle Cloud leak (The Record) Krebs Exits SentinelOne After Security Clearance Pulled (SecurityWeek) The top 10 ThreatLocker policies for 2025 (ThreatLocker) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.  Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices

Oracle University Podcast
AI-Assisted Development in Oracle APEX

Oracle University Podcast

Play Episode Listen Later Apr 15, 2025 12:57


Get ready to explore how generative AI is transforming development in Oracle APEX. In this episode, hosts Lois Houston and Nikita Abraham are joined by Oracle APEX experts Apoorva Srinivas and Toufiq Mohammed to break down the innovative features of APEX 24.1. Learn how developers can use APEX Assistant to build apps, generate SQL, and create data models using natural language prompts.   Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome back to another episode of the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs. Lois: Hi everyone! In our last episode, we spoke about Oracle APEX and AI. We covered the data and AI -centric challenges businesses are up against and explored how AI fits in with Oracle APEX. Niki, what's in store for today? Nikita: Well, Lois, today we're diving into how generative AI powers Oracle APEX. With APEX 24.1, developers can use the Create Application Wizard to tell APEX what kind of application they want to build based on available tables. Plus, APEX Assistant helps create, refine, and debug SQL code in natural language. 01:16 Lois: Right. 
Today's episode will focus on how generative AI enhances development in APEX. We'll explore its architecture, the different AI providers, and key use cases. Joining us are two senior product managers from Oracle—Apoorva Srinivas and Toufiq Mohammed. Thank you both for joining us today. We'll start with you, Apoorva. Can you tell us a bit about the generative AI service in Oracle APEX? Apoorva: It is an abstraction over the popular commercial generative AI products, like OCI Generative AI, OpenAI, and Cohere. APEX makes use of the existing REST infrastructure to authenticate using the web credentials with Generative AI Services. Once you configure the Generative AI Service, it can be used by the App Builder, AI Assistant, and AI Dynamic Actions, like Show AI Assistant and Generate Text with AI, and also the APEX_AI PL/SQL API. You can enable or disable the Generative AI Service on the APEX instance level and on the workspace level. 02:31 Nikita: Ok. Got it. So, Apoorva, which AI providers can be configured in the APEX Gen AI service? Apoorva: First is the popular OpenAI. If you have registered and subscribed for an OpenAI API key, you can just enter the API key in your APEX workspace to configure the Generative AI service. APEX makes use of the chat completions endpoint in OpenAI. Second is the OCI Generative AI Service. Once you have configured an OCI API key on Oracle Cloud, you can make use of the chat models. The chat models are available from the Cohere family and the Meta Llama family. The third is Cohere. The configuration of Cohere is similar to OpenAI. You need to have your Cohere API key. And it provides a similar chat functionality using the chat endpoint. 03:29 Lois: What is the purpose of the APEX_AI PL/SQL public API that we now have? How is it used within the APEX ecosystem? Apoorva: It models the chat operation of the popular Generative AI REST Services. This is the same package used internally by the chat widget of the APEX Assistant.
There are more procedures around consent management, which you can configure using this package. 03:58 Lois: Apoorva, at a high level, how does generative AI fit into the APEX environment? Apoorva: APEX makes use of the existing REST infrastructure—that is the web credentials and remote server—to configure the Generative AI Service. The inferencing is done by the backend Generative AI Service. For the Generative AI use case in APEX, such as NL2SQL and creation of an app, APEX performs the prompt enrichment.  04:29 Nikita: And what exactly is prompt enrichment? Apoorva: Let's say you provide a prompt saying "show me the average salary of employees in each department." APEX will take this prompt and enrich it by adding in more details. It elaborates on the prompt by mentioning the requirements, such as Oracle SQL syntax statement, and providing some metadata from the data dictionary of APEX. Once the prompt enrichment is complete, it is then passed on to the LLM inferencing service. Therefore, the SQL query provided by the AI Assistant is more accurate and in context.  05:15 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now to May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 05:41  Nikita: Welcome back! Let's talk use cases. Apoorva, can you share some ways developers can use generative AI with APEX? Apoorva: SQL is an integral part of building APEX apps. You use SQL everywhere. You can make use of the NL2SQL feature in the code editor by using the APEX Assistant to generate SQL queries while building the apps. The second is the prompt-based app creation. With APEX Assistant, you can now generate fully functional APEX apps by providing prompts in natural language. 
Third is the AI Assistant, which is a chat widget provided by APEX in all the code editors and for creation of apps. You can chat with the AI Assistant by providing your prompts and get responses from the Generative AI Services. 06:37 Lois: Without getting too technical, can you tell us how to create a data model using AI? Apoorva: A SQL Workshop utility called Create Data Model Using AI uses AI to help you create your own data model. The APEX Assistant generates a script to create tables, triggers, and constraints in either Oracle SQL or Quick SQL format. You can also insert sample data into these tables. But before you use this feature, you must create a generative AI service and enable the Used by App Builder setting. If you are using the Oracle SQL format, when you click on Create SQL Script, APEX generates the script and brings you to the script editor page. Whereas if you are using the Quick SQL format, when you click on Review Quick SQL, APEX generates the Quick SQL code and brings you to the Quick SQL page. 07:39 Lois: And to see a detailed demo of creating a custom data model with the APEX Assistant, visit mylearn.oracle.com and search for the "Oracle APEX: Empowering Low Code Apps with AI" course. Apoorva, what about creating an APEX app from a prompt? What's that process like? Apoorva: APEX 24.1 introduces a new feature where you can generate an application blueprint based on a prompt using natural language. The APEX Assistant leverages the APEX Dictionary Cache to identify relevant tables while suggesting the pages to be created for your application. You can iterate over the application design by providing further prompts using natural language and then generating an application based on your needs.
Once you are satisfied, you can click on Create Application, which takes you to the Create Application Wizard in APEX, where you can further customize your application, such as its application icon and other features, and finally, go ahead to create your application. 08:53 Nikita: Again, you can watch a demo of this on MyLearn. So, check that out if you want to dive deeper.  Lois: That's right, Niki. Thank you for these great insights, Apoorva! Now, let's turn to Toufiq. Toufiq, can you tell us more about the APEX Assistant feature in Oracle APEX? What is it and how does it work? Toufiq: APEX Assistant is available in Code Editors in the APEX App Builder. It leverages generative AI services as the backend to answer your questions asked in natural language. APEX Assistant makes use of the APEX dictionary cache to identify relevant tables while generating SQL queries. Using the Query Builder mode of the Assistant, you can generate SQL queries from natural language for Form, Report, and other region types which support SQL queries. Using the general assistance mode, you can generate PL/SQL, JavaScript, HTML, or CSS code, and seek further assistance from generative AI. For example, you can ask the APEX Assistant to optimize the code, format the code for better readability, add comments, etc. APEX Assistant also comes with two quick actions, Improve and Explain, which can help users improve and understand the selected code.
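The schema-aware SQL generation described above, where the user's prompt is enriched with dialect rules and dictionary-cache metadata before it reaches the LLM, can be sketched roughly as follows. This is a toy illustration, not APEX's actual implementation; the function, the relevance test, and the table names are all made up:

```python
def enrich_prompt(user_prompt, dictionary_cache):
    """Wrap a natural-language prompt with dialect rules and schema
    metadata before sending it to the LLM (toy NL2SQL enrichment)."""
    words = set(user_prompt.lower().split())
    # Toy relevance test: keep tables whose name (or its singular form)
    # appears in the prompt. Real implementations are far more involved.
    relevant = {t: cols for t, cols in dictionary_cache.items()
                if t in words or t.rstrip("s") in words}
    schema = "\n".join(f"- {t}({', '.join(cols)})"
                       for t, cols in sorted(relevant.items()))
    return ("Generate a single Oracle SQL SELECT statement only.\n"
            f"Available tables:\n{schema}\n"
            f"User request: {user_prompt}")

# Hypothetical dictionary-cache snapshot.
cache = {"employees": ["empno", "ename", "sal", "deptno"],
         "departments": ["deptno", "dname"]}
print(enrich_prompt("show me the average salary of employees in each department", cache))
```

The enriched prompt now carries the dialect requirement and only the relevant table shapes, which is why the generated SQL comes back in context.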
10:52 Lois: Are there attributes you can configure to leverage even more customization?  Toufiq: The first attribute is the initial prompt. The initial prompt represents a message as if it were coming from the user. This can either be a specific item value or a value derived from a JavaScript expression. The next attribute is use response. This attribute determines how the AI Assistant should return responses. The term response refers to the message content of an individual chat message. You have the option to capture this response directly into a page item, or to process it based on more complex logic using JavaScript code. The final attribute is quick actions. A quick action is a predefined phrase that, once clicked, will be sent as a user message. Quick actions defined here show up as chips in the AI chat interface, which a user can click to send the message to Generative AI service without having to manually type in the message. 12:05 Lois: Thank you, Toufiq and Apoorva, for joining us today. Like we were saying, there's a lot more you can find in the “Oracle APEX: Empowering Low Code Apps with AI” course on MyLearn. So, make sure you go check that out. Nikita: Join us next week for a discussion on how to integrate APEX with OCI AI Services. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 12:28 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.  
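The episode above notes that APEX's Generative AI service abstracts over chat-style endpoints (OpenAI's chat completions, and the Cohere and OCI chat APIs), which all share roughly the same request shape. A minimal sketch of building such a payload — the field names follow the OpenAI-style convention, and the model name is a placeholder, not anything APEX-specific:

```python
import json

def build_chat_payload(user_prompt, system_prompt=None, model="gpt-4o-mini"):
    """Build an OpenAI-style chat completions request body.

    The model name is illustrative; a configured Generative AI service
    would supply its own model and credentials.
    """
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return {"model": model, "messages": messages}

payload = build_chat_payload(
    "show me the average salary of employees in each department",
    system_prompt="Answer with a single Oracle SQL statement.",
)
print(json.dumps(payload, indent=2))
```

The abstraction the episode describes amounts to mapping each provider's variant of this shape behind one configuration surface.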

Cyber Bites
Cyber Bites - 11th April 2025

Cyber Bites

Play Episode Listen Later Apr 10, 2025 7:45


* Cyber Attacks Target Multiple Australian Super Funds, Half Million Dollars Stolen
* Intelligence Agencies Warn of "Fast Flux" Threat to National Security
* SpotBugs Token Theft Revealed as Origin of Multi-Stage GitHub Supply Chain Attack
* ASIC Secures Court Orders to Shut Down 95 "Hydra-Like" Scam Companies
* Oracle Acknowledges "Legacy Environment" Breach After Weeks of Denial

Cyber Attacks Target Multiple Australian Super Funds, Half Million Dollars Stolen
https://www.itnews.com.au/news/aussie-super-funds-targeted-by-fraudsters-using-stolen-creds-616269
https://www.abc.net.au/news/2025-04-04/superannuation-cyber-attack-rest-afsa/105137820

Multiple Australian superannuation funds have been hit by a wave of cyber attacks, with AustralianSuper confirming that four members have lost a combined $500,000 in retirement savings. The nation's largest retirement fund has reportedly faced approximately 600 attempted cyber attacks in the past month alone.

AustralianSuper has now confirmed that "up to 600" of its members were impacted by the incident. Chief member officer Rose Kerlin stated, "This week we identified that cyber criminals may have used up to 600 members' stolen passwords to log into their accounts in attempts to commit fraud." The fund has taken "immediate action to lock these accounts" and notify affected members.

Rest Super has also been impacted, with CEO Vicki Doyle confirming that "less than one percent" of its members were affected—equivalent to fewer than 20,000 accounts based on recent membership reports. Rest detected "unauthorised activity" on its member access portal "over the weekend of 29-30 March" and "responded immediately by shutting down the member access portal, undertaking investigations and launching our cyber security incident response protocols."

While Rest stated that no member funds were transferred out of accounts, "limited personal information" was likely accessed.
"We are in the process of contacting impacted members to work through what this means for them and provide support," Doyle said.HostPlus has confirmed it is "actively investigating the situation" but stated that "no HostPlus member losses have occurred" so far. Several other funds including Insignia and Australian Retirement were also reportedly affected.Members across multiple funds have reported difficulty accessing their accounts online, with some logging in to find alarming $0 balances displayed. The disruption has caused considerable anxiety among account holders.National cyber security coordinator Lieutenant General Michelle McGuinness confirmed that "cyber criminals are targeting individual account holders of a number of superannuation funds" and is coordinating with government agencies and industry stakeholders in response. The Australian Prudential Regulation Authority (APRA) and Australian Securities and Investments Commission (ASIC) are engaging with all potentially impacted funds.AustralianSuper urged members to log into their accounts "to check that their bank account and contact details are correct and make sure they have a strong and unique password that is not used for other sites." The fund also noted it has been working with "the Australian Signals Directorate, the National Office of Cyber Security, regulators and other authorities" since detecting the unauthorised access.If you're a member of any of those funds, watch for official communications and be wary of potential phishing attempts that may exploit the situation.Intelligence Agencies Warn of "Fast Flux" Threat to National Securityhttps://www.cyber.gov.au/about-us/view-all-content/alerts-and-advisories/fast-flux-national-security-threatMultiple intelligence agencies have issued a joint cybersecurity advisory warning organizations about a significant defensive gap in many networks against a technique known as "fast flux." 
The National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), FBI, Australian Signals Directorate, Canadian Centre for Cyber Security, and New Zealand National Cyber Security Centre have collaborated to raise awareness about this growing threat.

Fast flux is a domain-based technique that enables malicious actors to rapidly change DNS records associated with a domain, effectively concealing the locations of malicious servers and creating resilient command and control infrastructure. This makes tracking and blocking such malicious activities extremely challenging for cybersecurity professionals.

"This technique poses a significant threat to national security, enabling malicious cyber actors to consistently evade detection," states the advisory. Threat actors employ two common variants: single flux, where a single domain links to numerous rotating IP addresses, and double flux, which adds an additional layer by frequently changing the DNS name servers responsible for resolving the domain.

The advisory highlights several advantages that fast flux networks provide to cybercriminals: increased resilience against takedown attempts, rendering IP blocking ineffective due to rapid address turnover, and providing anonymity that complicates investigations. Beyond command and control communications, fast flux techniques are also deployed in phishing campaigns and to maintain cybercriminal forums and marketplaces.

Notably, some bulletproof hosting providers now advertise fast flux as a service differentiator. One such provider boasted on a dark web forum about protecting clients from Spamhaus blocklists through easily enabled fast flux capabilities.

The advisory recommends organizations implement a multi-layered defense approach, including leveraging threat intelligence feeds, analyzing DNS query logs for anomalies, reviewing time-to-live values in DNS records, and monitoring for inconsistent geolocation.
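The advisory's detection guidance — many distinct IPs per domain combined with unusually low TTLs — can be sketched as a simple heuristic over resolver logs. The thresholds and log shape below are illustrative, not taken from the advisory; real deployments tune them and add checks like geolocation and ASN diversity:

```python
from collections import defaultdict

def flag_fast_flux(dns_answers, ip_threshold=10, ttl_threshold=300):
    """Flag domains whose answers rotate across many IPs with low TTLs.

    dns_answers: iterable of (domain, ip, ttl) tuples from resolver logs.
    Thresholds are illustrative and would be tuned in practice.
    """
    ips = defaultdict(set)
    ttls = defaultdict(list)
    for domain, ip, ttl in dns_answers:
        ips[domain].add(ip)
        ttls[domain].append(ttl)
    return sorted(
        d for d in ips
        if len(ips[d]) >= ip_threshold
        and sum(ttls[d]) / len(ttls[d]) <= ttl_threshold
    )

# Toy log: one stable domain, one domain cycling through many IPs at TTL 60.
log = [("cdn.example.com", "203.0.113.7", 3600)] * 5
log += [("bad.example.net", f"198.51.100.{i}", 60) for i in range(25)]
print(flag_fast_flux(log))  # → ['bad.example.net']
```

Note that legitimate CDNs also rotate IPs, which is why the advisory pairs heuristics like this with threat intelligence feeds and reputation filtering rather than relying on any single signal.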
It also emphasizes the importance of DNS and IP blocking, reputation filtering, enhanced monitoring, and information sharing among cybersecurity communities.

"Organizations should not assume that their Protective DNS providers block malicious fast flux activity automatically, and should contact their providers to validate coverage of this specific cyber threat," the advisory warns.

Intelligence agencies are urging all stakeholders—both government and providers—to collaborate in developing scalable solutions to close this ongoing security gap that enables threat actors to maintain persistent access to compromised systems while evading detection.

SpotBugs Token Theft Revealed as Origin of Multi-Stage GitHub Supply Chain Attack
https://unit42.paloaltonetworks.com/github-actions-supply-chain-attack/

Security researchers have traced the sophisticated supply chain attack that targeted Coinbase in March 2025 back to its origin point: the theft of a personal access token (PAT) associated with the popular open-source static analysis tool SpotBugs.

Palo Alto Networks Unit 42 revealed in their latest update that while the attack against cryptocurrency exchange Coinbase occurred in March 2025, evidence suggests the malicious activity began as early as November 2024, demonstrating the attackers' patience and methodical approach.

"The attackers obtained initial access by taking advantage of the GitHub Actions workflow of SpotBugs," Unit 42 explained. This initial compromise allowed the threat actors to move laterally between repositories until gaining access to reviewdog, another open-source project that became a crucial link in the attack chain.

Investigators determined that the SpotBugs maintainer was also an active contributor to the reviewdog project.
When the attackers stole this maintainer's PAT, they gained the ability to push malicious code to both repositories.

The breach sequence began when attackers pushed a malicious GitHub Actions workflow file to the "spotbugs/spotbugs" repository using a disposable account named "jurkaofavak." Even more concerning, this account had been invited to join the repository by one of the project maintainers on March 11, 2025 – suggesting the attackers had already compromised administrative access.

Unit 42 revealed the attackers exploited a vulnerability in the repository's CI/CD process. On November 28, 2024, the SpotBugs maintainer modified a workflow in the "spotbugs/sonar-findbugs" repository to use their personal access token while troubleshooting technical difficulties. About a week later, attackers submitted a malicious pull request that exploited a GitHub Actions feature called "pull_request_target," which allows workflows from forks to access secrets like the maintainer's PAT.

This compromise initiated what security experts call a "poisoned pipeline execution attack" (PPE). The stolen credentials were later used to compromise the reviewdog project, which in turn affected "tj-actions/changed-files" – a GitHub Action used by numerous organizations including Coinbase.

One puzzling aspect of the attack is the three-month delay between the initial token theft and the Coinbase breach. Security researchers speculate the attackers were carefully monitoring high-value targets that depended on the compromised components before launching their attack.

The SpotBugs maintainer has since confirmed the stolen PAT was the same token later used to invite the malicious account to the repository. All tokens have now been rotated to prevent further unauthorized access.

Security experts remain puzzled by one aspect of the attack: "Having invested months of effort and after achieving so much, why did the attackers print the secrets to logs, and in doing so, also reveal their attack?"
Unit 42 researchers noted, suggesting there may be more to this sophisticated operation than currently understood.

ASIC Secures Court Orders to Shut Down 95 "Hydra-Like" Scam Companies
https://asic.gov.au/about-asic/news-centre/find-a-media-release/2025-releases/25-052mr-asic-warns-of-threat-from-hydra-like-scammers-after-obtaining-court-orders-to-shut-down-95-companies/

The Australian Securities and Investments Commission (ASIC) has successfully obtained Federal Court orders to wind up 95 companies suspected of involvement in sophisticated online investment and romance baiting scams, commonly known as "pig butchering" schemes.

ASIC Deputy Chair Sarah Court warned consumers to remain vigilant when engaging with online investment websites and mobile applications, describing the scam operations as "hydra-like" – when one is shut down, two more emerge in its place.

"Scammers will use every tool they can think of to steal people's money and personal information," Court said. "ASIC takes action to frustrate their efforts, including by prosecuting those that help facilitate their conduct and taking down over 130 scam websites each week."

The Federal Court granted ASIC's application after the regulator discovered most of the companies had been incorporated using false information. Justice Stewart described the case for winding up each company as "overwhelming," citing a justifiable lack of confidence in their conduct and management.

ASIC believes many of these companies were established to provide a "veneer of credibility" by purporting to offer genuine services. The regulator has taken steps to remove numerous related websites and applications that allegedly facilitated scam activity by tricking consumers into making investments in fraudulent foreign exchange, digital assets, or commodities trading platforms.

In some cases, ASIC suspects the companies were incorporated using stolen identities, highlighting the increasingly sophisticated techniques employed by scammers.
These operations often create professional-looking websites and applications designed to lull victims into a false sense of security.

The action represents the latest effort in ASIC's ongoing battle against investment scams. The regulator reports removing approximately 130 scam websites weekly, with more than 10,000 sites taken down to date – including 7,227 fake investment platforms, 1,564 phishing scam hyperlinks, and 1,257 cryptocurrency investment scams.

Oracle Acknowledges "Legacy Environment" Breach After Weeks of Denial
https://www.bloomberg.com/news/articles/2025-04-02/oracle-tells-clients-of-second-recent-hack-log-in-data-stolen

Oracle has finally admitted to select customers that attackers breached a "legacy environment" and stole client credentials, according to a Bloomberg report. The tech giant characterized the compromised data as old information from a platform last used in 2017, suggesting it poses minimal risk.

However, this account conflicts with evidence provided by the threat actor, including data from late 2024 and records from 2025 posted on a hacking forum. The attacker, known as "rose87168," listed 6 million data records for sale on BreachForums on March 20, including sample databases, LDAP information, and company lists allegedly stolen from Oracle Cloud's federated SSO login servers.

Oracle has reportedly informed customers that cybersecurity firm CrowdStrike and the FBI are investigating the incident. According to cybersecurity firm CybelAngel, Oracle told clients that attackers gained access to the company's Gen 1 servers (Oracle Cloud Classic) as early as January 2025 by exploiting a 2020 Java vulnerability to deploy a web shell and additional malware.

The breach, detected in late February, reportedly involved the exfiltration of data from the Oracle Identity Manager database, including user emails, hashed passwords, and usernames.

When initially questioned about the leaked data, Oracle firmly stated: "There has been no breach of Oracle Cloud.
The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data." However, cybersecurity expert Kevin Beaumont noted this appears to be "wordplay," explaining that "Oracle rebadged old Oracle Cloud services to be Oracle Classic. Oracle Classic has the security incident."

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit edwinkwan.substack.com
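The "pull_request_target" weakness described in the SpotBugs story comes from combining that trigger, which runs with the base repository's secrets, with a checkout of the untrusted fork's code. A hypothetical workflow illustrating the risky pattern — the repository layout, secret name, and build step are invented for illustration, not taken from the actual SpotBugs workflow:

```yaml
# DO NOT copy: illustrates the vulnerable pattern, not a fix.
on: pull_request_target        # runs in the base repo's context, secrets available
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the untrusted fork's code...
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Build
        # ...then executes it while a maintainer token sits in the environment.
        run: ./gradlew build
        env:
          GITHUB_TOKEN: ${{ secrets.MAINTAINER_PAT }}
```

The safer default is the plain `pull_request` trigger, which runs fork code without access to the base repository's secrets; `pull_request_target` should only check out trusted refs.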

Oracle University Podcast
Unlocking the Power of Oracle APEX and AI

Oracle University Podcast

Play Episode Listen Later Apr 8, 2025 15:08


Lois Houston and Nikita Abraham kick off a new season of the podcast, exploring how Oracle APEX integrates with AI to build smarter low-code applications. They are joined by Chaitanya Koratamaddi, Director of Product Management at Oracle, who explains the basics of Oracle APEX, its global adoption, and the challenges it addresses for businesses managing and integrating data. They also explore real-world use cases of AI within the Oracle APEX ecosystem.   Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   -----------------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Thank you for joining us as we begin a new season of the podcast, this time focused on Oracle APEX and how it integrates with AI to help you create powerful applications. This season is for everyone—from beginners and SQL developers to DBAs, data scientists, and low-code enthusiasts. So, if you're interested in using Oracle APEX to build low-code applications that have custom generative AI features, you'll want to stay tuned in. 01:07 Lois: That's right, Niki. Today, we're going to discuss Oracle APEX at a high level, starting with what it is. 
Then, we'll cover a few business challenges related to data and AI innovation that organizations face, and learn how the powerful combination of APEX and AI can help overcome these challenges.  01:27 Nikita: To take us through it all, we've got Chaitanya Koratamaddi with us. Chaitanya is Director of Product Management for Oracle APEX. Hi Chaitanya! For anyone new to Oracle APEX, can you explain what it is and why it's so widely used? Chaitanya: Oracle APEX is the world's most popular enterprise low code application platform. APEX enables you to build secure and scalable enterprise-scale applications with world class features that can be deployed anywhere, cloud or on-premises. And with APEX, you can build applications 20 times faster with 100 times less code. APEX delivers the most productive way to develop and deploy mobile and web applications everywhere. 02:18 Lois: That's impressive. So, what's the adoption rate like for Oracle APEX? Chaitanya: As of today, there are 19 million plus APEX applications created globally. 5,000 plus APEX applications are created on a daily basis and there are 800,000 plus APEX developers worldwide. 60,000 plus customers in 150 countries across various industry verticals. And 75% of Fortune 500 companies use Oracle APEX. 02:56 Nikita: Wow, the numbers really speak for themselves, right? But Chaitanya, why are organizations adopting Oracle APEX at this scale? Or to put it differently, what's the core business challenge that Oracle APEX is addressing? Chaitanya: From databases to all data, you know that the world is more connected and automated than ever. To drive new business value, organizations need to explore and exploit new sources of data that are generated from this connected world. That can be sounds, feeds, sensors, videos, images, and more. Businesses need to be able to work with all types of data and also make sure that it is available to be used together. Typically, businesses need to work on all data at a massive scale. 
For example, supply chains are no longer dependent just on inventory, demand, and order management signals. A manufacturer should be able to understand data describing global weather patterns and how it impacts their supply chains. Businesses need to pull in data from as many social sources as possible to understand how customer sentiment impacts product sales and corporate brands. Our customers need a data platform that ensures all this data works together seamlessly and easily. 04:38 Lois: So, you're saying Oracle APEX is the platform that helps businesses manage and integrate data seamlessly. But data is just one part of the equation, right? Then there's AI. How are the two related?  Chaitanya: Before we start talking about Oracle AI, let's first talk about what customers are looking for and where they are struggling within their AI innovation. It all starts with data. For decades, working with data has largely involved dealing with structured data, whether it is your customer records in your CRM application and orders from your ERP database. Data was organized into database and tables, and when you needed to find some insights in your data, all you need to do is just use stored procedures and SQL queries to deliver the answers. But today, the expectations are higher. You want to use AI to construct sophisticated predictions, find anomalies, make decisions, and even take actions autonomously. And the data is far more complicated. It is in an endless variety of formats scattered all over your business. You need tools to find this data, consume it, and easily make sense of it all. And now capabilities like natural language processing, computer vision, and anomaly detection are becoming very essential just like how SQL queries used to be. You need to use AI to analyze phone call transcripts, support tickets, or email complaints so you can understand what customers need and how they feel about your products, customer service, and brand. 
You may want to use a data source as noisy and unstructured as social media data to detect trends and identify issues in real time.  Today, AI capabilities are very essential to accelerate innovation, assess what's happening in your business, and most importantly, exceed the expectations of your customers. So, connecting your application, data, and infrastructure allows everyone in your business to benefit from data. 07:32 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there's a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting oracle.com/education. 08:06 Nikita: Welcome back! So, let's focus on AI across the Oracle Cloud ecosystem. How does Oracle bring AI into the mix to connect applications, data, and infrastructure for businesses? Chaitanya: By embedding AI throughout the entire technology stack from the infrastructure that businesses run on through the applications for every line of business, from finance to supply chain and HR, Oracle is helping organizations pragmatically use AI to improve performance while saving time, energy, and resources.  Our core cloud infrastructure includes a unique AI infrastructure layer based on our supercluster technology, leveraging the latest and greatest hardware and uniquely able to get the maximum out of the AI infrastructure technology for scenarios such as large language processing. Then there is generative AI and ML for data platforms. On top of the AI infrastructure, our database layer embeds AI in our products such as autonomous database. 
With autonomous database, you can leverage large language models to use natural language queries rather than writing SQL when interacting with the autonomous database. This enables you to achieve faster adoption in your application development. Businesses and their customers can use the Select AI natural language interface combined with Oracle Database AI Vector Search to obtain quicker, more intuitive insights into their own data. Then we have AI services. AI services are a collection of offerings, including generative AI with pre-built machine learning models that make it easier for developers to apply AI to applications and business operations. The models can be custom-trained for more accurate business results. 10:17 Nikita: And what specific AI services do we have at Oracle, Chaitanya?  Chaitanya: We have Oracle Digital Assistant, Speech, Language, Vision, and Document Understanding. Then we have Oracle AI for Applications. Oracle delivers AI built for business, helping you make better decisions faster and empowering your workforce to work more effectively. By embedding classic and generative AI into its applications, Fusion Apps customers can instantly access AI outcomes wherever they are needed without leaving the software environment they use every day to power their business. 11:02 Lois: Let's talk specifically about APEX. How does APEX use the Gen AI and machine learning models in the stack to empower developers? How does it help them boost productivity? Chaitanya: Starting APEX 24.1, you can choose your preferred large language models and leverage native generative AI capabilities of APEX for AI assistants, prompt-based application creation, and more. Using native OCI capabilities, you can leverage native platform capabilities from OCI, like AI infrastructure and object storage, etc. Oracle APEX running on autonomous infrastructure in Oracle Cloud leverages its unique native generative AI capabilities tuned specifically on your data. 
These language models are schema aware and data aware, and take into account the shape of information, enabling your applications to take advantage of large language models pre-trained on your unique data. You can give your users greater insights by leveraging native capabilities, including vector-based similarity search, content summary, and predictions. You can also incorporate powerful AI features to deliver personalized experiences and recommendations, process natural language prompts, and more by integrating directly with a suite of OCI AI services. 12:38 Nikita: Can you give us some examples of this? Chaitanya: You can leverage OCI Vision to interpret visual and text inputs, including image recognition and classification. Or you can use OCI Speech to transcribe and understand spoken language, making both image and audio content accessible and actionable. You can work with disparate data sources like JSON, spatial, graphs, and vectors, and build AI capabilities around your own business data. So, low-code application development with APEX along with AI is a very powerful combination. 13:22 Nikita: What are some use cases of AI-powered Oracle APEX applications?  Chaitanya: You can build APEX applications to include conversational chatbots. Your APEX applications can include image and object detection capability. Your APEX applications can include speech transcription capability. And your applications can include code generation, that is, natural language-to-SQL conversion capability. Your applications can be powered by semantic search capability. Your APEX applications can include text generation capability. 14:00 Lois: So, there's really a lot we can do! Thank you, Chaitanya, for joining us today. With that, we're wrapping up this episode. We covered Oracle APEX, the key challenges businesses face when it comes to AI innovation, and how APEX and AI work together to give businesses an AI edge.  
Nikita: Yeah, and if you want to know more about Oracle APEX, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. Join us next week for a discussion on AI-assisted development in Oracle APEX. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 14:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
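The Select AI capability Chaitanya describes (querying Autonomous Database in natural language rather than hand-written SQL) can be sketched roughly as follows. This is a minimal illustration, not Oracle's documented client code: the action keywords and the connection details shown in comments are assumptions to check against the Select AI documentation.

```python
# Hedged sketch of the Select AI flow from the episode: a natural-language
# prompt is wrapped in a "SELECT AI <action> <prompt>" statement and sent to
# Autonomous Database. The action keywords (showsql, runsql, narrate) and the
# connection shown in comments are assumptions, not verified API details.

def select_ai_statement(action: str, prompt: str) -> str:
    """Compose a Select AI statement from an action keyword and a prompt."""
    return f"SELECT AI {action} {prompt}"

stmt = select_ai_statement("showsql", "how many customers placed orders last month")
print(stmt)  # SELECT AI showsql how many customers placed orders last month

# With a configured AI profile, the statement could be executed through the
# python-oracledb driver (connection details here are hypothetical):
#
#   import oracledb
#   conn = oracledb.connect(user="demo", password="...", dsn="mydb_high")
#   with conn.cursor() as cur:
#       cur.execute(stmt)   # "showsql" would return the generated SQL text
#       print(cur.fetchall())
```

The same helper could submit "narrate" or "runsql" actions; which keywords are actually available depends on the database version and the configured AI profile.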

Defensive Security Podcast - Malware, Hacking, Cyber Security & Infosec

In this episode, Jerry and Andrew discuss various cybersecurity topics, including the recent Oracle Cloud security breach, a GitHub supply chain attack, insider threats, and the implications of AI in cybersecurity. They explore the challenges of maintaining trust in cloud services, the complexities of insider threats, and the evolving landscape of cybercrime driven by AI … Continue reading Defensive Security Podcast Episode 302 →

PEBCAK Podcast: Information Security News by Some All Around Good People
Episode 202 - Malaysia Rejects Ransomware Demand, Security Expert Gets Pwned, Oracle Cloud Breach, Bad Lifestyle Talk

PEBCAK Podcast: Information Security News by Some All Around Good People

Play Episode Listen Later Apr 7, 2025 51:47


Welcome to this week's episode of the PEBCAK Podcast! We've got four amazing stories this week so sit back, relax, and keep being awesome! Be sure to stick around for our Dad Joke of the Week. (DJOW) Follow us on Instagram @pebcakpodcast

Please share this podcast with someone you know! It helps us grow the podcast and we really appreciate it!

Malaysia rejects ransomware demand
https://therecord.media/malaysia-pm-says-country-rejected-ransom-demand-airport-cyberattack

Security expert phished
https://www.darkreading.com/cyberattacks-data-breaches/security-expert-troy-hunt-lured-mailchimp-phish
https://www.troyhunt.com/a-sneaky-phish-just-grabbed-my-mailchimp-mailing-list/

Oracle cloud breached
https://www.cybersecuritydive.com/news/researchers-oracle-cloud-breach/743447/
https://www.bleepingcomputer.com/news/security/oracle-denies-data-breach-after-hacker-claims-theft-of-6-million-data-records/
https://www.cloudsek.com/blog/the-biggest-supply-chain-hack-of-2025-6m-records-for-sale-exfiltrated-from-oracle-cloud-affecting-over-140k-tenants

Bad decision talk
https://www.cnbc.com/2016/02/04/mcdonalds-salad-has-more-calories-than-big-mac.html

Dad Joke of the Week (DJOW)

Find the hosts on LinkedIn:
Chris - https://www.linkedin.com/in/chlouie/
Brian - https://www.linkedin.com/in/briandeitch-sase/
Glenn - https://www.linkedin.com/in/glennmedina/

The Cyberman Show
March 2025 Cybersecurity Recap EP 94

The Cyberman Show

Play Episode Listen Later Apr 6, 2025 17:43


Get up to speed with everything that mattered in cybersecurity this month. In this episode of The Cyberman Show, we break down March 2025's top cyber incidents, threat actor tactics, security product launches, and vulnerabilities actively exploited in the wild. Here's what we cover:

Cyber Security Today
Cybersecurity Month-End Review: Oracle Breach, Signal Group Chat Incident, and Global Cybersecurity Regulations

Cyber Security Today

Play Episode Listen Later Apr 5, 2025 48:19 Transcription Available


In this episode of the cybersecurity month-end review, host Jim Love is joined by Daina Proctor from IBM in Ottawa, Randy Rose from The Center for Internet Security in Saratoga Springs, and David Shipley, CEO of Beauceron Security, from Fredericton. The panel discusses major cybersecurity stories from the past month, including the Oracle Cloud breach and its communication failures, the misuse of Signal by U.S. government officials, and global cybersecurity regulation efforts such as the UK's new critical infrastructure laws. They also cover notable incidents like the Kuala Lumpur International Airport ransomware attack and the NHS Scotland cyberattack, the continuing challenge of EDR bypasses, and the importance of fusing anti-fraud and cybersecurity efforts. The discussion emphasizes the need for effective communication and stringent security protocols amid increasing cyber threats.

00:00 Introduction and Panelist Introductions
01:25 Oracle Cloud Breach: A Case Study in Incident Communication
10:13 Signal Group Chat Controversy
20:16 Leadership and Cybersecurity Legislation
23:30 Cybersecurity Certification Program Overview
24:27 Challenges in Cybersecurity Leadership
24:59 Importance of Data Centers and MSPs
26:53 UK Cybersecurity Bill and MSP Standards
28:09 Cyber Essentials and CMMC Standards
32:47 EDR Bypasses and Small Business Security
39:32 Ransomware Attacks on Critical Infrastructure
43:34 Law Enforcement and Cybercrime
47:24 Conclusion and Final Thoughts

Risky Business
Risky Business #786 -- Oracle is lying

Risky Business

Play Episode Listen Later Apr 2, 2025 55:14


On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news:

* Yes, Oracle Health and Oracle Cloud did get hacked
* The fallout from Signalgate continues
* North Korean IT workers pivot to Europe
* Honeypot data suggests a storm is brewing for Palo Alto VPNs
* Canadian Anon gets arrested for hacking Texas GOP

This week's episode is sponsored by Trail of Bits. Tjaden Hess, a Principal Security Engineer at Trail of Bits who specialises in cryptography, joins the show this week to talk about what a responsible crypto-currency exchange cold wallet setup looks like, and … contrasts that with Bybit. This episode is also available on Youtube.

Show notes:

* Oracle Health breach compromises patient data at US hospitals
* FBI probes Oracle hack tied to healthcare extortion: Report - Becker's Hospital Review | Healthcare News & Analysis
* Oracle Still Denies Breach as Researchers Persist
* Hacker linked to Oracle Cloud intrusion threatens to sell stolen data | Cybersecurity Dive
* Publius on X: "

Defensive Security Podcast - Malware, Hacking, Cyber Security & Infosec

In this episode of the Defensive Security Podcast, hosts Jerry Bell and Andrew Kalat discuss a range of cybersecurity topics, including the recent Oracle Cloud breach, the challenges of asset management in large environments, and the importance of prioritizing vulnerabilities. They also explore the findings from a pen test report, the implications of emerging threats … Continue reading Defensive Security Podcast Episode 301 →

Today in Health IT
2 Minute Drill: Oracle's Double Breach Trouble

Today in Health IT

Play Episode Listen Later Apr 1, 2025 3:27 Transcription Available


Drex covers two separate Oracle security incidents affecting healthcare organizations. The Rose87168 hacking group claims to have stolen 6 million user records from Oracle Cloud, now for sale on the dark web. Oracle denies the breach, but independent researchers confirm the data's authenticity. A second breach on older Cerner servers (not yet migrated to Oracle Cloud) exposed patient medical information, with hackers attempting to extort several US healthcare organizations. The full scope of affected organizations and patients remains unknown, but healthcare customers report dissatisfaction with Oracle's lack of transparency and response to these incidents.

Remember, Stay a Little Paranoid. X: This Week Health | LinkedIn: This Week Health | Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer

Podcast de CreadoresDigitales
⚡ EV wars, leaks, and failures: what's happening on the network?

Podcast de CreadoresDigitales

Play Episode Listen Later Mar 27, 2025 66:05


This week on Creadores Digitales: we talk about a possible data leak at Oracle Cloud that the company denies, but which raises suspicions. We also discuss a worrying bug in Signal that could reveal phone numbers, and the controversy over Cybertruck build quality. On top of that, Telmex is once again making headlines. Don't miss it!

Cyber Bites
Cyber Bites - 28th March 2025

Cyber Bites

Play Episode Listen Later Mar 27, 2025 10:24


* Critical Flaw in Next.js Allows Authorization Bypass
* Hackers Can Now Weaponize AI Coding Assistants Through Hidden Configuration Rules
* Hacker Claims Oracle Cloud Data Theft, Company Refutes Breach
* Chinese Hackers Infiltrate Asian Telco, Maintain Undetected Network Access for Four Years
* Cloudflare Launches Aggressive Security Measure: Shutting Down HTTP Ports for API Access

Critical Flaw in Next.js Allows Authorization Bypass
https://zhero-web-sec.github.io/research-and-things/nextjs-and-the-corrupt-middleware

A critical vulnerability, CVE-2025-29927, has been discovered in the Next.js web development framework, enabling attackers to bypass authorization checks. This flaw allows malicious actors to send requests that bypass essential security measures.

Next.js, a popular React framework used by companies like TikTok, Netflix, and Uber, utilizes middleware components for authentication and authorization. The vulnerability stems from the framework's handling of the "x-middleware-subrequest" header, which normally prevents infinite loops in middleware processing. Attackers can manipulate this header to bypass the entire middleware execution chain.

The vulnerability affects Next.js versions prior to 15.2.3, 14.2.25, 13.5.9, and 12.3.5. Users are strongly advised to upgrade to patched versions immediately. Notably, the flaw only impacts self-hosted Next.js applications using "next start" with "output: standalone." Applications hosted on Vercel and Netlify, or deployed as static exports, are not affected. 
As a temporary mitigation, blocking external user requests containing the "x-middleware-subrequest" header is recommended.

Hackers Can Now Weaponize AI Coding Assistants Through Hidden Configuration Rules
https://www.pillar.security/blog/new-vulnerability-in-github-copilot-and-cursor-how-hackers-can-weaponize-code-agents

Researchers Uncover Dangerous "Rules File Backdoor" Attack Targeting GitHub Copilot and Cursor

In a groundbreaking discovery, cybersecurity researchers from Pillar Security have identified a critical vulnerability in popular AI coding assistants that could potentially compromise software development processes worldwide. The newly unveiled attack vector, dubbed the "Rules File Backdoor," allows malicious actors to silently inject harmful code instructions into AI-powered code editors like GitHub Copilot and Cursor.

The vulnerability exploits a fundamental trust mechanism in AI coding tools: configuration files that guide code generation. These "rules files," typically used to define coding standards and project architectures, can be manipulated using sophisticated techniques including invisible Unicode characters and complex linguistic patterns.

According to the research, nearly 97% of enterprise developers now use generative AI coding tools, making this attack particularly alarming. By embedding carefully crafted prompts within seemingly innocent configuration files, attackers can essentially reprogram AI assistants to generate code with hidden vulnerabilities or malicious backdoors.

The attack mechanism is particularly insidious. 
Researchers demonstrated that attackers could:

* Override security controls
* Generate intentionally vulnerable code
* Create pathways for data exfiltration
* Establish long-term persistent threats across software projects

When tested, the researchers showed how an attacker could inject a malicious script into an HTML file without any visible indicators in the AI's response, making detection extremely challenging for developers and security teams.

Both Cursor and GitHub have thus far maintained that the responsibility for reviewing AI-generated code lies with users, highlighting the critical need for heightened vigilance in AI-assisted development environments.

Pillar Security recommends several mitigation strategies:

* Conducting thorough audits of existing rule files
* Implementing strict validation processes for AI configuration files
* Deploying specialized detection tools
* Maintaining rigorous manual code reviews

As AI becomes increasingly integrated into software development, this research serves as a crucial warning about the expanding attack surfaces created by artificial intelligence technologies.

Hacker Claims Oracle Cloud Data Theft, Company Refutes Breach
https://www.bleepingcomputer.com/news/security/oracle-denies-data-breach-after-hacker-claims-theft-of-6-million-data-records/

Threat Actor Offers Stolen Data on Hacking Forum, Seeks Ransom or Zero-Day Exploits

Oracle has firmly denied allegations of a data breach after a threat actor known as rose87168 claimed to have stolen 6 million data records from the company's Cloud federated Single Sign-On (SSO) login servers.

The threat actor, posting on the BreachForums hacking forum, asserts they accessed Oracle Cloud servers approximately 40 days ago and exfiltrated data from the US2 and EM2 cloud regions. 
The purported stolen data includes encrypted SSO passwords, Java Keystore files, key files, and enterprise manager JPS keys.

Oracle categorically rejected the breach claims, stating, "There has been no breach of Oracle Cloud. The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data."

To substantiate their claims, the hacker shared an Internet Archive URL indicating they uploaded a text file containing their ProtonMail email address to the login.us2.oraclecloud.com server. The threat actor also suggested that SSO passwords, while encrypted, could be decrypted using available files.

The hacker's demands are multifaceted: they are selling the allegedly stolen data for an undisclosed price or seeking zero-day exploits. Additionally, they proposed offering partial data removal for companies willing to pay a specific amount to protect their employees' information.

In a provocative move, rose87168 claimed to have emailed Oracle, demanding 100,000 Monero (XMR) in exchange for breach details. According to the threat actor, Oracle refused the offer after requesting comprehensive information for fixing and patching the vulnerability.

The threat actor alleges that Oracle Cloud servers are running a vulnerable version with a public CVE (Common Vulnerabilities and Exposures) that currently lacks a public proof-of-concept or exploit.

Chinese Hackers Infiltrate Asian Telco, Maintain Undetected Network Access for Four Years
https://www.sygnia.co/threat-reports-and-advisories/weaver-ant-tracking-a-china-nexus-cyber-espionage-operation/

Sophisticated Espionage Campaign Exploits Vulnerable Home Routers

Cybersecurity researchers from Sygnia have uncovered a sophisticated four-year cyber espionage campaign by Chinese state-backed hackers targeting a major Asian telecommunications company. 
The threat actor, dubbed "Weaver Ant," demonstrated extraordinary persistence and technical sophistication in maintaining undetected access to the victim's network.

The attack began through a strategic compromise of home routers manufactured by Zyxel, which served as the initial entry point into the telecommunications provider's environment. Sygnia attributed the campaign to Chinese actors based on multiple indicators, including the specific targeting, campaign objectives, hacker working hours, and the use of the China Chopper web shell—a tool frequently employed by Chinese hacking groups.

Oren Biderman, Sygnia's incident response leader, described the threat actors as "incredibly dangerous and persistent," emphasizing their primary goal of infiltrating critical infrastructure and collecting sensitive information. The hackers demonstrated remarkable adaptability, continuously evolving their tactics to maintain network access and evade detection.

A key tactic in the attack involved operational relay box (ORB) networks, a sophisticated infrastructure comprising compromised virtual private servers, Internet of Things devices, and routers. By leveraging an ORB network primarily composed of compromised Zyxel routers from Southeast Asian telecom providers, the hackers effectively concealed their attack infrastructure and enabled cross-network targeting.

The researchers initially discovered the campaign during the final stages of a separate forensic investigation, when they noticed suspicious account restoration and encountered a web shell variant deployed on a long-compromised server. Further investigation revealed multiple layers of web shells that allowed the hackers to move laterally within the network while remaining undetected.

Sygnia's analysis suggests the campaign's ultimate objective was long-term espionage, enabling continuous information collection and potential future strategic operations. 
The hackers' ability to maintain access for four years, despite repeated elimination attempts, underscores the sophisticated nature of state-sponsored cyber intrusions.

Cloudflare Launches Aggressive Security Measure: Shutting Down HTTP Ports for API Access
https://blog.cloudflare.com/https-only-for-cloudflare-apis-shutting-the-door-on-cleartext-traffic/

Company Takes Bold Step to Prevent Potential Data Exposures

Cloudflare has announced a comprehensive security initiative to completely eliminate unencrypted HTTP traffic for its API endpoints, marking a significant advancement in protecting sensitive digital communications. The move comes as part of the company's ongoing commitment to enhancing internet security by closing cleartext communication channels that could potentially expose critical information.

Starting immediately, any attempts to connect to api.cloudflare.com using unencrypted HTTP will be entirely rejected, rather than simply redirected. This approach addresses a critical security vulnerability where sensitive information like API tokens could be intercepted during initial connection attempts, even before a secure redirect could occur.

The decision stems from a critical observation that initial plaintext HTTP requests can expose sensitive data to network intermediaries, including internet service providers, Wi-Fi hotspot providers, and potential malicious actors. By closing HTTP ports entirely, Cloudflare prevents the transport layer connection from being established, effectively blocking any potential data exposure before it can occur.

Notably, the company plans to extend this feature to its customers, allowing them to opt-in to HTTPS-only traffic for their websites by the last quarter of 2025. 
This will provide users with an additional layer of security at no extra cost.

While the implementation presents challenges—with approximately 2-3% of requests still coming over plaintext HTTP from "likely human" clients and over 16% from automated sources—Cloudflare has developed sophisticated technical solutions to manage the transition. The company has leveraged tools like Tubular to intelligently manage IP addresses and network connections, ensuring minimal disruption to existing services.

The move is part of Cloudflare's broader mission to make the internet more secure, with the company emphasizing that security features should be accessible to all users without additional charges. Developers and users of Cloudflare's API will need to ensure they are using HTTPS connections exclusively moving forward.
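The temporary mitigation recommended in the Next.js item above (rejecting requests that carry the "x-middleware-subrequest" header) can be sketched as a simple predicate. This is an illustrative sketch, not an official patch: in practice the check would sit in the reverse proxy, WAF, or CDN in front of a self-hosted Next.js app, and upgrading to a patched release remains the real fix.

```python
# Sketch of the CVE-2025-29927 stopgap: drop any external request carrying
# the "x-middleware-subrequest" header, which legitimate clients never send.
# The dict-of-headers shape is a stand-in for whatever your proxy exposes.

BLOCKED_HEADER = "x-middleware-subrequest"

def should_block(headers: dict) -> bool:
    """Return True if the request smuggles the middleware-subrequest header.
    Header names are compared case-insensitively, as HTTP requires."""
    return any(name.lower() == BLOCKED_HEADER for name in headers)

print(should_block({"X-Middleware-Subrequest": "middleware:middleware"}))  # True
print(should_block({"Accept": "text/html", "Host": "example.com"}))        # False
```

A rule with the same effect can usually be expressed directly in a WAF or reverse-proxy configuration, which avoids the request ever reaching the application.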

Risky Business
Risky Business #785 -- Signal-gate is actually as bad as it looks

Risky Business

Play Episode Listen Later Mar 26, 2025 59:05


On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news:

* Yes, the Trump admin really did just add a journo to their Yemen-attack-planning Signal group
* The Github actions hack is smaller than we thought, but was targeting crypto
* Remote code exec in Kubernetes, ouch
* Oracle denies its cloud got owned, but that sure does look like customer keymat
* Taiwanese hardware maker Clevo packs its private keys into bios update zip
* US Treasury un-sanctions Tornado Cash, party time in Pyongyang?

This week's episode is sponsored by runZero. Long time hackerman HD Moore joins to talk about how network vulnerability scanning has atrophied, and what he's doing to bring it back en vogue. Do you miss early 2000s Nessus? HD knows it, he's got you fam. This episode is also available on Youtube.

Show notes:

* The Trump Administration Accidentally Texted Me Its War Plans - The Atlantic
* Using Starlink Wi-Fi in the White House Is a Slippery Slope for US Federal IT | WIRED
* Coinbase Initially Targeted in GitHub Actions Supply Chain Attack; 218 Repositories' CI/CD Secrets Exposed
* GitHub Actions Supply Chain Attack: A Targeted Attack on Coinbase Expanded to the Widespread tj-actions/changed-files Incident: Threat Assessment (Updated 3/21)
* Critical vulnerabilities put Kubernetes environments in jeopardy | Cybersecurity Dive
* Researchers back claim of Oracle Cloud breach despite company's denials | Cybersecurity Dive
* The Biggest Supply Chain Hack Of 2025: 6M Records Exfiltrated from Oracle Cloud affecting over 140k Tenants | CloudSEK
* Capital One hacker Paige Thompson got too light a sentence, appeals court rules | CyberScoop
* US scraps sanctions on Tornado Cash, crypto ‘mixer' accused of laundering North Korea money | Reuters
* Tornado Cash Delisting | U.S. Department of the Treasury
* Major web services go dark in Russia amid reported Cloudflare block | The Record from Recorded Future News
* Clevo Boot Guard Keys Leaked in Update Package
* Six additional countries identified as suspected Paragon spyware customers | CyberScoop
* The Citizen Lab's director dissects spyware and the ‘proliferating' market for it | The Record from Recorded Future News
* Malaysia PM says country rejected $10 million ransom demand after airport outages | The Record from Recorded Future News
* Hacker defaces NYU website, exposing admissions data on 1 million students | The Record from Recorded Future News
* Notre Dame uni students say outage creating enrolment, graduation, assignment mayhem - ABC News
* DNA of 15 Million People for Sale in 23andMe Bankruptcy

Today in Health IT
2 Minute Drill: Oracle Cloud Breach Allegations & 23andMe's Bankruptcy with Drex DeFord

Today in Health IT

Play Episode Listen Later Mar 25, 2025 4:13 Transcription Available


Drex covers reports of an alleged Oracle Cloud security incident affecting login infrastructure, with over 6 million records at risk across 140,000 tenants (though Oracle denies any breach), and 23andMe's bankruptcy filing. Security recommendations include rotating credentials, resetting passwords for Oracle Cloud users, and downloading and then deleting personal genetic data from 23andMe as a precautionary measure.

Remember, Stay a Little Paranoid. X: This Week Health | LinkedIn: This Week Health | Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer

The Gate 15 Podcast Channel
Weekly Security Sprint EP 104. It happened to them and can happen to you.

The Gate 15 Podcast Channel

Play Episode Listen Later Mar 25, 2025 23:41


Warm Start:
• That breach cost HOW MUCH? How CISOs can talk effectively about a cyber incident's toll
• Perspective: 25 Years of Evolving Information Sharing Into Actionable Intelligence, new from IT-ISAC Director Scott Algeier.
• The Gate 15 Interview EP 56. Information Sharing, Cybersecurity Politics, Threats, and More & New Podcast – Information Sharing, Cybersecurity Politics, Threats, and More! The Gate 15 Interview will be released on all the usual channels later today. Catch this month's special crossover episode now via the Cybersecurity Advisors Network post and on YouTube!
• Crypto ISAC at WSJ Tech Live: Exploring the Future of Blockchain & Cybersecurity

Main Topics:
• If it can happen to them, it can happen to you, part one. Managing Communications: The Trump Administration Accidentally Texted Me Its War Plans. Considerations for businesses.
• If it can happen to them, it can happen to you, part two. Phishing: A Sneaky Phish Just Grabbed my Mailchimp Mailing List.
• Some thoughts on punishment, consistency, standards, and compassion.
• White House - Achieving Efficiency Through State and Local Preparedness
  o Fact Sheet: President Donald J. Trump Achieves Efficiency Through State and Local Preparedness
  o Trump prioritizes infrastructure resilience against cyber attacks, rolls out National Resilience Strategy

Quick Hits:
• New Dates Added: Live Virtual Presentations on Targeted Violence Prevention. The U.S. Secret Service National Threat Assessment Center (NTAC) is pleased to offer new opportunities to attend live virtual presentations on preventing targeted violence. In these presentations, our expert researchers will share findings and implications from decades of research on targeted violence and offer strategies for preventing acts of violence impacting the places where we work, learn, worship, and otherwise live our daily lives. 
This list of available virtual training events is regularly updated, and presentation topics change from month to month. To learn more about this series of live virtual presentations, or to register for one or more of these events, please follow the link below. Register here.
• FBI PSA - Individuals Target Tesla Vehicles and Dealerships Nationwide with Arson, Gunfire, and Vandalism
• Man drives car into protesters outside a Tesla dealership, nobody hurt, sheriff says
• Attorney General Bondi Statement on Violent Attacks Against Tesla Property
• Violent attacks on Tesla dealerships spike as Musk takes prominent role in Trump White House
• Multiple cars set on fire at Tesla service center in Las Vegas in 'targeted attack'
• Potential Terror Threat Targeted at Health Sector – AHA & Health-ISAC Joint Threat Bulletin
• FBI, healthcare agencies warn of credible threat against hospitals, after multi-city social media terror plot alert
• Exclusive: FBI scales back staffing and tracking of domestic terrorism probes
• This AP map shows sabotage across Europe that has been blamed on Russia and its proxies
• Spring Outlook: Dry in the West, milder than average in the South and East; Drought to develop or persist for Rocky Mountains, Southwest and southern Plains
• Halcyon - Last Year in Ransomware: Overview, Developments and Vulnerabilities
• Chairmen Green, Garbarino, Brecheen Conduct Oversight Of The Federal Government's Response To China-Backed “Typhoon” Intrusions Under Previous Administration
• The Biggest Supply Chain Hack Of 2025: 6M Records Exfiltrated from Oracle Cloud affecting over 140k Tenants
• Risky Bulletin: The looming epochalypse

Security Squawk
Cybersecurity Podcast - Oracle Breach, Microsoft Teams Security, and Emerging Cyber Threats

Security Squawk

Play Episode Listen Later Mar 25, 2025 65:41


In our latest podcast episode, we discuss the evolving landscape of cybersecurity threats, uncovering how sophisticated attacks are impacting various sectors and how organizations are responding.

We begin by examining the recent Oracle Cloud breach, which has potentially exposed 6 million records, affecting over 140,000 businesses. This incident underscores the critical need for robust cloud security measures.

Next, we discuss Microsoft's initiative to bolster Teams' security against phishing and cyber attacks. These enhancements aim to support overwhelmed security teams by integrating advanced protective features into the widely used collaboration platform.

We also explore the shift in cyberattack readiness responsibilities to state and local governments, following an executive order from President Trump. This policy change raises questions about the preparedness of local entities to handle sophisticated cyber threats.

Additionally, we highlight the increasing targeting of Mac users by hackers through sophisticated Apple ID phishing scams. This trend challenges the perception of Macs being inherently more secure and emphasizes the need for vigilance among all users.

We then analyze PowerSchool's 'Trust The Hackers' response to a recent cyber incident, critiquing the approach and discussing the importance of transparency and trust in cybersecurity practices.

Furthermore, we address the cyberattack on DHR Health in Texas, which has raised concerns over the security of healthcare information systems and the protection of patient data.

Oracle University Podcast
Oracle Database@Azure

Oracle University Podcast

Play Episode Listen Later Mar 25, 2025 15:12


The final episode of the multicloud series focuses on Oracle Database@Azure, a powerful cloud database solution. Hosts Lois Houston and Nikita Abraham, along with Senior Manager of CSS OU Cloud Delivery Samvit Mishra, discuss how this service allows customers to run Oracle databases within the Microsoft Azure data center, simplifying deployment and management. The discussion also highlights the benefits of native integration with Azure services, eliminating the need for complex networking setups.   Oracle Cloud Infrastructure Multicloud Architect Professional: https://mylearn.oracle.com/ou/course/oracle-cloud-infrastructure-multicloud-architect-professional-2025-/144474 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last two weeks, we've been talking about different aspects of multicloud. In the final episode of this three-part series, Samvit Mishra, Senior Manager of CSS OU Cloud Delivery, joins us once again to tell us about the Oracle Database@Azure service. Hi Samvit! Thanks for being here today. Samvit: Hi Niki! Hi Lois! Happy to be back. 
01:01 Lois: In our last episode, we spoke about the strategic partnership between Oracle and Microsoft, and specifically discussed the Oracle Interconnect for Azure.  Nikita: Yeah, and Oracle Database@Azure is yet another addition to this partnership. What can you tell us about this service, Samvit? Samvit: The Oracle Database@Azure service, which was made generally available in 2023, runs right inside the Microsoft Azure data center and uses Azure networking. The entire Oracle Cloud Database Service infrastructure resides in the Azure data center, while it is managed by an expert Oracle Cloud Infrastructure operations team.  It provides customers simple and secure access to Oracle Cloud database services within their chosen Azure deployment region, without getting into the complexity of managing networking between the cloud vendors. It is natively integrated with various Microsoft Azure services. This provides a seamless user experience when configuring and using the different Azure services with OCI Oracle database, since much of the complexity associated with the configuration is greatly simplified.  There is no need to set up a private interconnect between Microsoft Azure and OCI because the service itself resides within the Azure data center and uses the Azure network. This is very beneficial in terms of strategic deployment because customers can experience microseconds network latency between the endpoints, while receiving a high-performance database environment.  02:42 Nikita: How do I get started with the Oracle Database@Azure service? Samvit: You begin by purchasing the subscription from Oracle and setting up your billing account. Then you provision the database, resources, and service.  With that you are ready to configure your application to connect to the database and work on the remaining deployment. 
As you continue using the service, you can monitor the different resource metrics using the Azure monitoring services and analyze those logs using Azure Log Analytics.  03:15 Lois: So, the adoption is pretty easy, then. What about the responsibilities? Who is responsible for what? Samvit: The Oracle Cloud operations team is entirely responsible for managing the Exadata Database Infrastructure and the VM cluster resources that are provisioned in the Microsoft Azure data center.  Oracle is responsible for maintaining the service software and infrastructure by applying updates as they are released. Any issues arising from the OCI Database Service and the resources will be addressed by Oracle Support. You have to raise a support ticket for them to investigate and provide a resolution.  And as Azure customers, you have to do rightsizing, based on your workload needs, and provision the Exadata Database Infrastructure and VM cluster in the OCI pod within the Azure data center. You have to provision the database in Exadata Database Service, apply the database and system updates, and take advantage of the cloud automation to maintain and manage the database.  You have to load data, establish the connectivity, and support development on your database. As a customer, you monitor the database and infrastructure metrics and events, and also analyze those logs using the Microsoft Azure-provided native services.  04:42 Nikita: Samvit, what sort of challenges were being faced by customers that necessitated the creation of the Oracle Database@Azure service?  Samvit: A common deployment scenario in customer environments was that a lot of critical applications, which could be packaged applications, in-house applications, or customized third-party applications, used Oracle Database as their primary database solution.  These Oracle databases were deployed in Exadata Infrastructure on-premises or even in Enterprise Server hardware. 
Some customers evaluated and migrated many of their packaged and other applications to Microsoft Azure compute. Since Oracle Exadata was not supported in Azure, they had to configure a hybrid deployment in order to use Oracle databases that reside in the Exadata infrastructure on-premises.  They needed to configure a dedicated and secure network between the Azure data center and their on-premises data center. This added complexity, incurred high costs, had a latency effect, and was even unreliable. There were also cases where customers migrated Oracle databases on Enterprise Server on-premises to Oracle databases hosted on Azure compute.  This did not boost efficiency to a large scale. And those were the only options available when provisioning Oracle Database in Azure because Exadata was not available earlier in Azure. 06:18 Lois: And how has that been resolved now? Samvit: With the Oracle Database@Azure service, customer requirements have been aptly met by allowing them to host their Oracle databases on Exadata infrastructure, right next to their application in the Azure data center.  Customers, while migrating their applications to Azure compute, can also migrate their Oracle databases on-premises on Exadata infrastructure directly to Exadata Database Service in Azure. And Oracle databases that are on Enterprise Server on-premises can be consolidated directly into Exadata Database Service in Azure, providing them the benefits of scalability, security, performance, and availability, all that are inherent property of OCI Oracle Exadata Database Service.  Customers can see growth in the operational efficiency, saving on the overall cost.  07:17 Nikita: Can you take us through the process of deployment?  Samvit: It's quite simple, actually. First, you deploy the Exadata Database Service that is plugged into Azure VNET. Next, you provision the required number of databases, which might be migrated as is or with a consolidated exercise.  
You can use any of the Oracle database tools or utilities to do the migration or even use the Oracle Zero Downtime Migration method to automate the entire Oracle database migration. Finally, migrate your enterprise application into the Azure environment.  Establish the required network configuration to allow communication between the migrated applications and Oracle databases.  And then you are all set to publish your application that is running entirely in Azure. You can leverage other Azure services, like monitoring, log analytics, Power BI, or DevOps tools, to enhance existing or even build and deploy newer enterprise applications that are powered by OCI Oracle Database Service in the back end.  08:25 Lois: What about multi-cloud deployment scenarios where applications reside in Azure, but the Oracle databases are deployed on third-party cloud providers, either as a native solution or in computes? Samvit: These Oracle databases can be migrated to Exadata Database Service in the Oracle Database@Azure service.  There is no need for the complex cross-cloud connectivity setup between the vendors. And at the same time, you experience the lowest latency between the application and the database deployment. 09:05 Want to learn how to design stunning, responsive enterprise applications directly from your browser with minimal coding? The new Oracle APEX Developer Professional learning path and certification enables you to leverage AI-assisted development, including generative AI and Database 23ai, to build secure, scalable web and mobile applications with advanced AI-powered features. From now through May 15, 2025, we're waiving the certification exam fee (valued at $245). So, what are you waiting for? Visit mylearn.oracle.com to get started today. 09:45 Nikita: Welcome back! Samvit, what's the onboarding process like? Samvit: You have to complete the onboarding process to use the service in Microsoft Azure. 
But before you do that, you first have to complete the subscription process. You must have an active Microsoft Azure account subscription that will be used for subscribing and onboarding the Oracle Database@Azure service.  To subscribe to Oracle Database@Azure, you need to purchase an Oracle Database@Azure private offer from Azure Marketplace. As a customer, you will first reach out to Oracle Sales and negotiate a price for the service. Oracle will provide you with the billing account ID and contact details of the person within the organization who will be handling the service.  After this, Oracle will create a private offer in Azure Marketplace.  10:40 Lois: Sorry to interrupt you, but what's a private offer? Samvit: That's alright, Lois. Private offers are basically solutions or services created for customers by a Microsoft partner, which, in this case, is Oracle. Purchase of those private offers happens from the private offer management page of Azure Marketplace.  But there is a prerequisite. The Azure account must be enabled to make private offer purchases on the subscription from Azure Marketplace. You can refer to the Azure documentation to enable the account, if it is not enabled. You review the offer terms and accept the purchase offer, which will take you to the Create Oracle Subscription page.  You validate the subscription and other particulars and proceed with the process. After the service is deployed, the purchase status of the private offer changes to subscribed.  There are a few points to note here. Billing and payment are done via Azure, and you can use Microsoft Azure Consumption Commitment.  You can also use your on-premises licenses with the Bring Your Own License option and the Unlimited License Agreements to pay towards your service consumption. And you also receive Oracle Support rewards for every dollar spent on the service.  12:02 Nikita: OK, now that I'm subscribed, what's next? 
Samvit: After you complete the subscription step, Oracle Database@Azure will appear as an Azure resource, just like any other Azure service, and you can move on to onboarding. Onboarding begins with the linking of your OCI account, which will be used for provisioning and managing database resources.  The account is also used for provisioning infrastructure and software maintenance updates for the database service. You can either provide an existing OCI account or create a new one. Then you set up Identity Federation between the Azure account and the OCI tenancy.  This can authenticate login to the OCI portal using Azure credentials, which you require while performing certain operations in OCI. For example, provisioning databases, getting infrastructure and software maintenance updates, and so on. This is an optional step, but it is recommended that you complete the Federation.  The last step is to authorize users by assigning groups and roles in order to have the needed privileges to perform different operations. For example, some groups of users can manage Exadata Database Service resources in Azure, while some can manage the databases in OCI.  You can refer to OCI documentation to get detailed descriptions of roles and group names. 13:31 Lois: Right. That will ensure you assign the correct permissions to the appropriate users. Samvit: Exactly. Assigning the correct roles and permissions to individuals inside the organization is a necessary step for transacting in the marketplace and guaranteeing a smooth purchasing experience. Azure Marketplace uses Azure Role-Based Access Control to enable you to acquire solutions certified to run on Azure. Those are then going to determine the purchasing privileges within the organization.  14:03 Nikita: There's so much more we can discuss about Oracle Database@Azure, but we have to stop somewhere! Thank you so much, Samvit, for joining us over these last three episodes. Lois: Yeah, it's been great to have you, Samvit. 
Samvit: Thank you for having me. Nikita: Remember, we also have the Oracle Database@Google Cloud service. So, if you want to learn about that, or even if you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Cloud Infrastructure Multicloud Architect Professional course.  Lois: There are a lot of demonstrations that you'll surely find useful. Well, that's all we have for today. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 14:43 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Capital
Radar Empresarial: Alphabet acquires Wiz for $32 billion

Capital

Play Episode Listen Later Mar 20, 2025 4:31


It finally happened: Alphabet is acquiring the cybersecurity firm Wiz for $32 billion, the largest acquisition in the history of Google's parent company. Third time's the charm: back in July 2024, the company had already offered Wiz $23 billion to keep the New York-based startup from going ahead with its initial public offering. At the time, Assaf Rappaport, CEO of Wiz, said in a statement: "While we are flattered by offers we have received, we have decided to continue on our path to building Wiz." The flattery needed more money, and an extra nine billion dollars has convinced the Israeli-founded company. So what makes Wiz special? One of the reasons Wiz could turn down such a juicy offer is precisely that it is backed by Sequoia Capital and Thrive Capital. In July, the company was valued at $11 billion. The deal now hands Google a security business that, incidentally, works with a host of the company's rivals: Wiz provides cybersecurity services to companies such as Amazon Web Services, Microsoft Azure, and Oracle Cloud. Changes under the Trump administration, though, may work in the deal's favor. Paco Canós, CEO and founder of Cyber-C, weighs in. The deal also raises antitrust questions about its viability. Competition regulators in both the United States and Europe have long had Google in their sights. The purchase will clearly make large companies reconsider whether to stay with Wiz, but what level of data could Google gain from this acquisition? Will the U.S. FTC, the UK's CMA, and the European Union authorities approve the deal? Be that as it may, Wiz had long been Google's most coveted target. 
So much so that this purchase becomes the largest acquisition in the company's history, far surpassing the $12.6 billion purchase of Motorola, though well short of the biggest tech deal in history, Disney's acquisition of Fox, valued at $71 billion. The company was founded in January 2020 by Assaf Rappaport, Yinon Costica, Roy Reznik, and Ami Luttwak, who had previously founded Adallom. Its growth has been such that in August 2022, Wiz claimed to be the fastest startup to scale from $1 million to $100 million in annual recurring revenue, doing so between February 2021 and roughly July 2022.

Cloud Stories | Cloud Accounting Apps | Accounting Ecosystem
Unlocking Productivity with AI-Powered NetSuite

Cloud Stories | Cloud Accounting Apps | Accounting Ecosystem

Play Episode Listen Later Feb 27, 2025 20:58


Discover how NetSuite's new AI-powered Text Enhance feature is revolutionising finance teams, streamlining workflows, and boosting productivity with secure, business-specific automation, all while keeping company data protected. Scott Wiltshire from Oracle NetSuite shares insights into AI-driven automation, its impact on finance and operations, and why Australian and NZ businesses should embrace this next-gen technology. Today I'm speaking with Scott Wiltshire, Vice President & GM, Oracle NetSuite ANZ Key Learnings from the Episode: NetSuite's Text Enhance Feature: Uses generative AI to automate content creation within the ERP system. Boosting Productivity: AI-powered automation helps finance teams with collections, journal entry annotations, and other manual documentation tasks. Data Security & Compliance: The AI assistant follows strict security protocols, keeping company data protected within Oracle Cloud. Personalized AI Assistance: AI-generated content is tailored to specific company data, improving accuracy and relevance. Competitive Advantage: Businesses leveraging AI early gain efficiency, scalability, and a productivity boost. SuiteConnect 2025: NetSuite's regional conference in Sydney will unveil more AI-powered innovations and training opportunities. Contact details: Accounting Apps newsletter: http://HeatherSmithAU.COM Accounting Apps Mastermind: https://www.facebook.com/groups/XeroMasterMind  LinkedIn: https://www.linkedin.com/in/HeatherSmithAU/  YouTube Channel: https://www.youtube.com/ANISEConsulting  Threads: https://www.threads.net/@heathersmithau  BlueSky: https://bsky.app/profile/heathersmithau.bsky.social   

Web Tv Bourse Economie Entreprises's posts
When Elon Musk seizes control of the American state apparatus

Web Tv Bourse Economie Entreprises's posts

Play Episode Listen Later Feb 12, 2025 5:37


Two stories from the tech world, the first once again with Elon Musk at the controls, treating the American state as his own business. Joe Biden spoke of oligarchy as he left the White House; intellectual honesty compels us to say that, before our eyes, the United States is turning into a plutocracy. Imagine Bernard Arnault, whom we talk about almost daily in France, having his hands on the French administration. That is quite simply what is happening with Elon Musk, still idolized by certain French politicians in need of glasses, as he advances his pawns. Several articles in the American press reveal that the servers used by the Department of Government Efficiency (the now-famous DOGE) to copy data from USAID (the U.S. Agency for International Development) are operated mainly by Tesla Data Services and Oracle Cloud. The Associated Press reported that DOGE staff (young engineers recruited by Elon Musk) had been able to access classified U.S. government documents. This is only the beginning of a saga that even Hollywood's greatest screenwriters could not have imagined: an American democracy at the mercy of the richest man in the world.

Techzine Talks
Without cultural diversity, Oracle Cloud Infrastructure would not have existed

Techzine Talks

Play Episode Listen Later Feb 10, 2025 42:13


Many larger companies these days seem to be stepping away from things like diversity and inclusion. Oracle takes a completely different view, as we hear from Afiena van den Broek, HR Senior Director Netherlands, and Stefan Diedericks, Global Partner Enablement Leader at the company. They see a great many advantages in teams that are not made up of all the same 'elements'. OCI as we know it today, for instance, would not have come about without culturally diverse teams. Culture & Inclusion, as Oracle calls it, will remain important. Organizations, and the things they make and produce, become better for it; that is the company's position. We sat down to talk about it.

Culture is a broad concept
Throughout the year we very often hear that diverse teams deliver better results for organizations. Conceptually, we understand well enough that different backgrounds and starting points will, in the right situations, ultimately yield a better-balanced end product or result. Ultimately this is mainly about differences in culture, in the broadest sense of the word. It is not only about color, gender, or the other things that quickly come to mind with the term diversity. The segment of society someone comes from also plays a role, as do the things someone has experienced. That is why Oracle no longer speaks of Diversity & Inclusion but of Culture & Inclusion. In the end, though, we also like concrete examples of the impact of having a culturally diverse team; concepts alone generally do not get us very far. We discuss those at length in this podcast too. We look at how, from this perspective, you can approach recruitment for technical roles in your organization, what the organizational goals of such an approach are, and we also discuss concrete examples of technology that would not have existed without the culturally diverse backgrounds of the teams that worked, and still work, on it.

The diverse roots of Oracle Cloud Infrastructure
Where Oracle is concerned, Oracle Cloud Infrastructure (OCI) is a fine example of a technology that would not have come about in its current form without cultural diversity. It is the result of Oracle's activities in South Africa, including acquisitions and the software development that followed. Specifically, this concerns the development of the FastConnect software, which is meant to give cloud regions a very fast direct connection; it is one of the parts of OCI that Oracle still points to with pride today. As Diedericks puts it, "Oracle's cloud journey began in South Africa."

All in all, perhaps not an everyday episode of Techzine Talks this week, although it once again bears the unmistakable signature of Coen and Sander. The best thing to do is simply listen for interesting insights into how Oracle views Culture & Inclusion and what we can, and want to, learn from it.

Cables2Clouds
How to Prepare For an Interview with a Tech Giant - Part 1

Cables2Clouds

Play Episode Listen Later Jan 22, 2025 45:07 Transcription Available


Send us a text
Unlock the secrets to acing interviews with tech giants by tuning into our latest episode of the Cables to Clouds podcast. Join us, Chris Miles and Tim McConnaughey, as we sit down with Kam Agahian, the Senior Director at Oracle Cloud, who brings a treasure trove of insights from his years of experience at the forefront of tech hiring at Amazon and Oracle. Kam shares his unique perspective on what makes a successful interview in the competitive field of cloud networking, offering invaluable advice on optimizing interview strategies and understanding the nuances of the hiring process.

Ever wonder about the most effective ways to prepare for a cloud engineering interview? We explore this topic from both sides of the table, discussing the tricky balance of crafting relevant questions and the ethical considerations of scrutinizing candidates' skills. Through engaging anecdotes, we highlight the need for fairness and focus in interviews, addressing the challenges candidates face when traditional network engineering roles transition into cloud-specific positions. Kam emphasizes the importance of aligning interview practices with the goal of building successful teams and satisfying customers, ensuring that both interviewers and candidates are on the same page.

Finally, our episode delves deep into the preparatory techniques essential for networking roles. From foundational knowledge like IP and TCP/UDP protocols to advanced topics for seasoned professionals, we cover the spectrum of what candidates need to know. Kam's insights into the art of mastering networking interview theory and the value of certifications like the CCIE are a goldmine for anyone looking to sharpen their technical skills. Whether you're an aspiring cloud network engineer or a seasoned professional, this episode offers a rich blend of theory and practical advice to help you navigate and succeed in high-stakes tech environments.

How to connect with Kam: https://www.linkedin.com/in/agahian/
More content from Kam:
https://packetpushers.net/blog/how-to-break-into-a-cloud-engineering-career/
https://www.youtube.com/watch?v=-WTH951b-Ok
https://www.youtube.com/watch?v=B8PeU49H8XI
Check out the Fortnightly Cloud Networking News
https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on Twitter: https://twitter.com/cables2clouds
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj

Oracle University Podcast
Introduction to MySQL

Oracle University Podcast

Play Episode Listen Later Jan 7, 2025 26:21


Join hosts Lois Houston and Nikita Abraham as they kick off a new season exploring the world of MySQL 8.4. Together with Perside Foster, a MySQL Principal Solution Engineer, they break down the fundamentals of MySQL, its wide range of applications, and why it's so popular among developers and database administrators. This episode also covers key topics like licensing options, support services, and the various tools, features, and plugins available in MySQL Enterprise Edition.   ------------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative  podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Happy New Year, everyone! Thank you for joining us as we begin a new season of the podcast, this time focused on the basics of MySQL 8.4. If you're a database administrator or want to become one, this is definitely for you. It's also great for developers working with data-driven apps or IT professionals handling MySQL installs, configurations, and support. 01:03 Lois: That's right, Niki. Throughout the season, we'll be delving into MySQL Enterprise Edition and covering a range of topics, including installation, security, backups, and even MySQL HeatWave on Oracle Cloud.  Nikita: Today, we're going to discuss the Oracle MySQL ecosystem and its various components. We'll start by covering the fundamentals of MySQL and the different licenses that are available. Then, we'll explore the key tools and features to boost data security and performance. Plus, we'll talk a little bit about MySQL HeatWave, which is the cloud version of MySQL.  
01:39 Lois: To take us through all of this, we've got Perside Foster with us today. Perside is a MySQL Principal Solution Engineer at Oracle. Hi Perside! For anyone new to MySQL, can you explain what it is and why it's so widely used? Perside: MySQL is a relational database management system that organizes data into structured tables, rows, and columns for efficient programming and data management. MySQL is transactional by nature. When storing and managing data, actions such as selecting, inserting, updating, or deleting are required. MySQL groups these actions into a transaction. The transaction is saved only if every part completes successfully. 02:29 Lois: Now, how does MySQL work under the hood? Perside: MySQL is a high-performance database that uses its default storage engine, known as InnoDB. InnoDB helps MySQL handle complex operations and large data volumes smoothly. 02:49 Nikita: For the unversed, what are some day-to-day applications of MySQL? How is it used in the real world? Perside: MySQL works well with online transaction processing workloads. It handles transactions quickly and manages large volumes of transaction at once. OLTP, with low latency and high throughput, makes MySQL ideal for high-speed environments like banking or online shopping. MySQL not only stores data but also replicates it from a main server to several replicas. 03:31 Nikita: That's impressive! And what are the benefits of using MySQL?  Perside: It improves data availability and load balancing, which is crucial for businesses that need up-to-date information. MySQL replication supports read scale-out by distributing queries across servers, which increases high availability. MySQL is the most popular database on the web. 04:00 Lois: And why is that? What makes it so popular? What sets it apart from the other database management systems? Perside: First, it is a relational database management system that supports SQL. 
It also works as a document store, enabling the creation of both SQL and NoSQL applications without the need for separate NoSQL databases. Additionally, MySQL offers advanced security features to protect data integrity and privacy. It also uses tablespaces for better disk space management. This gives database administrators total control over their data storage. MySQL is simple, solid in its reliability, and secure by design. It is easy to use and ideal for both beginners and professionals. MySQL is proven at scale by efficiently handling large data volumes and high transaction rates. MySQL is also open source. This means anyone can download and use it for free. Users can modify the MySQL software to meet their needs. However, it is governed by the GNU General Public License, or GPL. GPL outlines specific rules for its use. MySQL offers two major editions. For developers and small teams, the Community Edition is available for free and includes all of the core features needed. For large enterprises, the Commercial Edition provides advanced features, management tools, and dedicated technical support. 05:58 Nikita: Ok. Let's shift focus to licensing. Who is it useful for?  Perside: MySQL licensing is essential for independent software vendors. They're called ISVs. And original manufacturers, they're called OEMs. This is because these companies often incorporate MySQL code into their software products or hardware system to boost the functionality and performance of their product. MySQL licensing is equally important for value-added resellers. We call those VARs. And also, it's important for other distributors. These groups bundle MySQL with other commercially licensed software to sell as part of their product offering. The GPL v.2 license might suit Open Source projects that distribute their products under that license.   07:02 Lois: But what if some independent software vendors, original manufacturers, or value-add resellers don't want to create Open Source products. 
They don't want their source to be publicly available and they want to keep it private? What happens then? Perside: This is why Oracle provides a commercial licensing option. This license allows businesses to use MySQL in their products without having to disclose their source code as required by GPL v2. 07:33 Nikita: I want to bring up the robust support services that are available for MySQL Enterprise. What can we expect in terms of support, Perside?  Perside: MySQL Enterprise Support provides direct access to the MySQL Support team. This team consists of experienced MySQL developers, who are experts in databases. They understand the issues and challenges their customers face because they, too, have personally tackled these issues and challenges. This support service operates globally and is available in 29 languages. So no matter where customers are located, Oracle Support provides assistance, most likely in their preferred language. MySQL Enterprise Support offers regular updates and hot fixes to ensure that the MySQL customer systems stays current with the latest improvements and security patches. MySQL Support is available 24 hours a day, 7 days a week. This ensures that whenever there is an issue, Oracle Support can provide the needed help without any delay. There are no restrictions on how many times customers can receive help from the team because MySQL Enterprise Support allows for unlimited incidents. MySQL Enterprise Support goes beyond simply fixing issues. It also offers guidance and advice. Whether customers require assistance with performance tuning or troubleshooting, the team is there to support them every step of the way.  09:27 Lois: Perside, can you walk us through the various tools and advanced features that are available within MySQL? Maybe we could start with MySQL Shell. Perside: MySQL Shell is an integrated client tool used for all MySQL database operations and administrative functions. 
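The all-or-nothing transaction behavior Perside described earlier (a group of actions is saved only if every part succeeds) can be sketched in a few lines. Since MySQL itself requires a running server, this illustration uses Python's built-in sqlite3 module, which follows the same SQL transaction semantics; the accounts table and the transfer scenario are hypothetical, purely for illustration:

```python
import sqlite3

# In-memory database standing in for a MySQL server; the
# "accounts" table is purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 500), (2, 500)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds between accounts; both updates succeed or neither does."""
    try:
        with conn:  # opens a transaction, commits on success, rolls back on error
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE id = ?",
                (amount, src))
            new_balance = conn.execute(
                "SELECT balance FROM accounts WHERE id = ?", (src,)).fetchone()[0]
            if new_balance < 0:
                raise ValueError("insufficient funds")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?",
                (amount, dst))
        return True
    except ValueError:
        return False

print(transfer(conn, 1, 2, 100))   # True: both updates committed together
print(transfer(conn, 1, 2, 9999))  # False: whole transaction rolled back
print(list(conn.execute("SELECT id, balance FROM accounts ORDER BY id")))
```

The `with conn:` block plays the role of START TRANSACTION followed by COMMIT in MySQL: if any statement inside it raises an error, every change in the group is discarded together.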
It's a top choice among MySQL users for its versatility and powerful features. MySQL Shell offers multi-language support for JavaScript, Python, and SQL. These naturally scriptable languages make coding flexible and efficient. They also allow developers to use their preferred programming language for everything, from automating database tasks to writing complex queries. MySQL Shell supports both document and relational models. Whether your project needs the flexibility of NoSQL's document-oriented structures or the structured relationships of traditional SQL tables, MySQL Shell manages these different data types without any problems. Another key feature of MySQL Shell is its full access to both development and administrative APIs. This makes it easy to automate complex database operations and perform custom development directly from MySQL Shell. MySQL Shell excels at DBA operations. It has extensive tools for database configuration, maintenance, and monitoring. These tools not only improve the efficiency of managing databases, but they also reduce the possibility of human error, making MySQL databases more reliable and easier to manage.  11:37 Nikita: What about the MySQL Server tool? I know that it is the core of the MySQL ecosystem and is available in both the community and commercial editions. But how does it enhance the MySQL experience? Perside: It connects with various devices, applications, and third-party tools to enhance its functionality. The server manages both SQL for structured data and NoSQL for schemaless applications. It has many key components: the parser, which interprets SQL commands; the optimizer, which ensures efficient query execution; and the query cache and buffer pools, which reduce disk reads and speed up access. InnoDB, the default storage engine, maintains data integrity and supports robust transaction and recovery mechanisms. MySQL is designed for scalability and reliability. 
With features like replication and clustering, it distributes data, manages more users, and ensures consistent uptime. 13:00 Nikita: What role does MySQL Enterprise Edition play in MySQL Server's capabilities? Perside: MySQL Enterprise Edition improves MySQL Server by adding a suite of commercial extensions. These exclusive tools and services are designed for enterprise-level deployments and challenging environments. These tools and services include secure online backup, which keeps your data safe with efficient backup solutions. Real-time monitoring provides insight into database performance and health. Seamless integration connects easily with existing infrastructure, improving data flow and operations. Then you have the 24/7 expert support. It offers round-the-clock assistance to optimize and troubleshoot your databases. 14:04 Lois: That's an extensive list of features. Now, can you explain what MySQL Enterprise plugins are? I know they're specialized extensions that boost the capabilities of MySQL server, tools, and services, but I'd love to know a little more about how they work. Perside: Each plugin serves a specific purpose. The firewall plugin protects against SQL injection by allowing only pre-approved queries. The audit plugin logs database activities, tracking who accesses databases and what they do. The encryption plugin secures data at rest, protecting it from unauthorized access. Then we have the authentication plugin, which integrates with systems like LDAP and Active Directory for access control. Finally, the thread pool plugin optimizes performance in high-load situations by effectively controlling how many execution threads are used and how long they run. These plugins and tools are included in the MySQL Enterprise Edition suite. 15:32 Join the Oracle University Learning Community and tap into a vibrant network of over 1 million members, including Oracle experts and fellow learners. 
This dynamic community is the perfect place to grow your skills, connect with like-minded learners, and celebrate your successes. As a MyLearn subscriber, you have access to engage with your fellow learners and participate in activities in the community. Visit community.oracle.com/ou to check things out today! 16:03 Nikita: Welcome back! We've been going through the various MySQL tools, and another important one is MySQL Enterprise Backup, right?  Perside: MySQL Enterprise Backup is a powerful tool that offers online, non-blocking backup and recovery. It makes sure databases remain available and perform optimally during the backup process. It also includes advanced features, such as incremental and differential backups. Additionally, MySQL Enterprise Backup supports compression to reduce backup size and encryption to keep data secure. One of the standard capabilities of MySQL Enterprise Backup is its seamless integration with media management software, or MMS. This integration simplifies the process of managing and storing backups, ensuring that data is easily accessible and secure. Then we have MySQL Workbench Enterprise. It enhances database development and design with robust tools for creating and managing your diagrams and ensuring proper documentation. It simplifies data migration with powerful tools that make it easy to move databases between platforms. For database administration, MySQL Workbench Enterprise offers efficient tools for monitoring, performance tuning, user management, and backup and recovery. MySQL Enterprise Monitor is another tool. It provides real-time MySQL performance and availability monitoring to help track your database's health and performance. It visually finds and fixes problem queries, making it easy to identify and address performance issues. It offers MySQL best-practice advisors to guide users in maintaining optimal performance and security. Lastly, MySQL Enterprise Monitor is proactive, providing forecasting. 
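Earlier in the conversation, Perside described the firewall plugin as protecting against SQL injection by allowing only pre-approved queries. That allow-list idea can be illustrated with a small Python sketch. This is purely conceptual: the real MySQL Enterprise Firewall records and matches normalized statement digests inside the server itself, per account, not in application code.

```python
import re

def normalize(query):
    """Reduce a SQL statement to a digest-like form: literals replaced
    by placeholders, whitespace collapsed, lowercased. This is a crude
    stand-in for the server's real statement-digest normalization."""
    q = re.sub(r"'[^']*'", "?", query)          # string literals -> ?
    q = re.sub(r"\b\d+\b", "?", q)              # numeric literals -> ?
    q = re.sub(r"\s+", " ", q).strip().lower()  # collapse whitespace
    return q

class ToyFirewall:
    """Toy allow-list firewall: 'record' approved statement shapes,
    then only statements matching an approved shape are permitted."""
    def __init__(self):
        self.allowlist = set()

    def record(self, query):
        # "Training" phase: remember this statement's digest.
        self.allowlist.add(normalize(query))

    def permitted(self, query):
        # "Protecting" phase: only pre-approved digests may run.
        return normalize(query) in self.allowlist

fw = ToyFirewall()
fw.record("SELECT * FROM users WHERE id = 42")

# Same statement shape, different literal: allowed.
print(fw.permitted("SELECT * FROM users WHERE id = 7"))         # True
# A classic injection changes the statement's shape: blocked.
print(fw.permitted("SELECT * FROM users WHERE id = 7 OR 1=1"))  # False
```

The real plugin works similarly in spirit: statements observed during a recording phase form the allow-list, and in protecting mode anything outside that list is rejected, so injected clauses that change a statement's shape never execute.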
18:40 Lois: Oh, that's really going to help users stay ahead of potential issues. That's fantastic! What about the Oracle Enterprise Manager Plugin for MySQL? Perside: This one offers availability and performance monitoring to make sure MySQL databases are running smoothly and efficiently. It provides configuration monitoring to help keep track of database settings and configuration. Finally, it collects all available metrics to provide comprehensive insight into the database operation. 19:19 Lois: Are there any tools designed to handle higher loads and improve security? Perside: MySQL Enterprise Thread Pool improves scalability as concurrent connections grow. It makes sure the database can handle increased loads efficiently. MySQL Enterprise Authentication is another tool. This one integrates MySQL with existing security infrastructures. It provides robust security solutions. It supports Linux PAM, LDAP, Windows, Kerberos, and even FIDO for passwordless authentication. 20:02 Nikita: Do any tools offer benefits like customized logging, data protection, and database security? Perside: MySQL Enterprise Audit provides out-of-the-box logging of connections, logins, and queries in XML or JSON format. It also offers simple to fine-grained policies for filtering and log rotation to ensure comprehensive and customizable logging. MySQL Enterprise Firewall detects and blocks out-of-policy database transactions to protect your data from unauthorized access and activities. We also have MySQL Enterprise Asymmetric Encryption. It uses MySQL encryption libraries for key management, signing, and verifying data. It ensures data stays secure during handling. MySQL Transparent Data Encryption, another tool, provides data-at-rest encryption within the database. The master key is stored outside of the database in a KMIP 1.1-compliant key vault to improve database security. 
Finally, MySQL Enterprise Masking offers masking capabilities, including string masking and dictionary replacement. This ensures sensitive data is protected by obscuring it. It also provides random data generators, such as range-based, payment card, email, and social security number generators. These tools help create realistic but anonymized data for testing and development. 22:12 Lois: Can you tell us about HeatWave, the MySQL cloud service? We're going to have a whole episode dedicated to it soon, but just a quick introduction for now would be great. Perside: MySQL HeatWave offers a fully managed MySQL service. It provides deployment, backup and restore, high availability, resizing, and read replicas, all the features you need for efficient database management. This service is a powerful union of Oracle Cloud Infrastructure and MySQL Enterprise Edition 8. It combines robust performance with top-tier infrastructure. With MySQL HeatWave, your systems are always up to date with the latest security fixes, ensuring your data is always protected. Plus, it supports both OLTP and analytics/ML use cases, making it a versatile solution for diverse database needs. 23:22 Nikita: So to wrap up, what are your key takeaways when it comes to MySQL? Perside: When you use MySQL, here is the bottom line. MySQL Enterprise Edition delivers unmatched performance at scale. It provides advanced monitoring and tuning capabilities to ensure efficient database operation, even under heavy loads. Plus, it provides insurance and immediate help when needed, allowing you to depend on expert support whenever an issue arises. Regarding total cost of ownership, or TCO, this edition significantly reduces the risk of downtime and enhances productivity. This leads to significant cost savings and improved operational efficiency. On the matter of risk, MySQL Enterprise Edition addresses security and regulatory compliance. This is to make sure your data meets all necessary standards. 
Additionally, it provides direct contact with the MySQL team for expert guidance. In terms of DevOps agility, it supports automated scaling and management, as well as flexible real-time backups, making it ideal for agile development environments. Finally, concerning customer satisfaction, it enhances application performance and uptime, ensuring your customers have a reliable and smooth experience. 25:18 Lois: Thank you so much, Perside. This is really insightful information. To learn more about all the support services that are available, visit support.oracle.com. This is the central hub for all MySQL Enterprise Support resources.  Nikita: Yeah, and if you want to know about the key commercial products offered by MySQL, visit mylearn.oracle.com and search for the MySQL 8.4: Essentials course. Join us next week for a discussion on installing MySQL. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 25:53 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Hipsters Ponto Tech
Oracle Cloud Infrastructure e ONE – Hipsters Ponto Tech #442


Dec 17, 2024 · 44:49


Today is cloud day! In this episode, we talk about Oracle's cloud service, multicloud, what the market needs in the AI era, and, of course, Oracle Next Education, a program that has been changing the lives of tens of thousands of people across Latin America. Here's who joined the conversation: Paulo Silveira, the host who counts on you, the listener; Amanda Gelumbauskas, Latam Head of Oracle Next Education; Lucas Leung, Latam Marketing Director at Oracle; Paulo Alves, Coordinator of Alura's DevOps school

Oracle University Podcast
Best of 2024: Autonomous Database on Serverless Infrastructure


Dec 17, 2024 · 17:25


Want to quickly provision your autonomous database? Then look no further than Oracle Autonomous Database Serverless, one of the two deployment choices offered by Oracle Autonomous Database.   Autonomous Database Serverless delegates all operational decisions to Oracle, providing you with a completely autonomous experience.   Join hosts Lois Houston and Nikita Abraham, along with Oracle Database experts, as they discuss how serverless infrastructure eliminates the need to configure any hardware or install any software because Autonomous Database handles provisioning the database, backing it up, patching and upgrading it, and growing or shrinking it for you.   Survey: https://customersurveys.oracle.com/ords/surveys/t/oracle-university-gtm/survey?k=focus-group-2-link-share-5   Oracle MyLearn: https://mylearn.oracle.com/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Rajeev Grover, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! We hope you've been enjoying these last few weeks as we've been revisiting our most popular episodes of the year.  
Lois: Today's episode is the last one in this series and is a throwback to a conversation on Autonomous Databases on Serverless Infrastructure with three experts in the field: Hannah Nguyen, Sean Stacey, and Kay Malcolm. Hannah is a Staff Cloud Engineer, Sean is the Director of Platform Technology Solutions, and Kay is Vice President of Database Product Management. For this episode, we'll be sharing portions of our conversations with them.  01:14 Nikita: We began by asking Hannah how Oracle Cloud handles the process of provisioning an  Autonomous Database. So, let's jump right in! Hannah: The Oracle Cloud automates the process of provisioning an Autonomous Database, and it automatically provisions for you a highly scalable, highly secure, and a highly available database very simply out of the box. 01:35 Lois: Hannah, what are the components and architecture involved when provisioning an Autonomous Database in Oracle Cloud? Hannah: Provisioning the database involves very few steps. But it's important to understand the components that are part of the provisioned environment. When provisioning a database, the number of CPUs in increments of 1 for serverless, storage in increments of 1 terabyte, and backup are automatically provisioned and enabled in the database. In the background, an Oracle 19c pluggable database is being added to the container database that manages all the user's Autonomous Databases. Because this Autonomous Database runs on Exadata systems, Real Application Clusters is also provisioned in the background to support the on-demand CPU scalability of the service. This is transparent to the user and administrator of the service. But be aware it is there. 02:28 Nikita: Ok…So, what sort of flexibility does the Autonomous Database provide when it comes to managing resource usage and costs, you know… especially in terms of starting, stopping, and scaling instances? Hannah: The Autonomous Database allows you to start your instance very rapidly on demand. 
It also allows you to stop your instance on demand as well to conserve resources and to pause billing. Do be aware that when you do pause billing, you will not be charged for any CPU cycles because your instance will be stopped. However, you'll still be incurring charges for your monthly billing for your storage. In addition to allowing you to start and stop your instance on demand, it's also possible to scale your database instance on demand as well. All of this can be done very easily using the Database Cloud Console. 03:15 Lois: What about scaling in the Autonomous Database? Hannah: So you can scale up your OCPUs without touching your storage and scale it back down, and you can do the same with your storage. In addition to that, you can also set up autoscaling. So the database, whenever it detects the need, will automatically scale up to three times the base level number of OCPUs that you have allocated or provisioned for the Autonomous Database. 03:38 Nikita: Is autoscaling available for all tiers?  Hannah: Autoscaling is not available for an always free database, but it is enabled by default for other tiered environments. Changing the setting does not require downtime. So this can also be set dynamically. One of the advantages of autoscaling is cost because you're billed based on the average number of OCPUs consumed during an hour. 04:01 Lois: Thanks, Hannah! Now, let's bring Sean into the conversation. Hey Sean, I want to talk about moving an autonomous database resource. When or why would I need to move an autonomous database resource from one compartment to another? Sean: There may be a business requirement where you need to move an autonomous database resource, serverless resource, from one compartment to another. 
Perhaps there's a different subnet that you would like to move that autonomous database to, or perhaps there are some business applications that are accessible or available in that other compartment that you wish to move your autonomous database to take advantage of. 04:36 Nikita: And how simple is this process of moving an autonomous database from one compartment to another? What happens to the backups during this transition? Sean: The way you can do this is simply to take an autonomous database and move it from compartment A to compartment B. And when you do so, the automatic backups that are associated with that autonomous database will be moved with it as well. 05:00 Lois: Is there anything that I need to keep in mind when I'm moving an autonomous database between compartments?  Sean: A couple of things to be aware of when doing this. First of all, you must have the appropriate privileges in both the source compartment and the target compartment in order to move that autonomous database. In addition to that, once the autonomous database is moved to the new compartment, any policies defined in that compartment to govern the authorization and privileges of users there will be applied immediately to the autonomous database that has been moved into it. 05:38 Nikita: Sean, I want to ask you about cloning in Autonomous Database. What are the different types of clones that can be created?  Sean: It's possible to create a new Autonomous Database as a clone of an existing Autonomous Database. This can be done as a full copy of that existing Autonomous Database, or it can be done as a metadata copy, where the objects and tables are cloned, but they are empty. So there are no rows in the tables. And this clone can be taken from a live running Autonomous Database or even from a backup. 
So you can take a backup and clone that to a completely new database. 06:13 Lois: But why would you clone in the first place? What are the benefits of this?  Sean: When cloning or when creating this clone, it can be created in a completely new compartment from where the source Autonomous Database was originally located. So it's a nice way of moving one database to another compartment to allow developers or another community of users to have access to that environment. 06:36 Nikita: I know that along with having a full clone, you can also have a refreshable clone. Can you tell us more about that? Who is responsible for this? Sean: It's possible to create a refreshable clone from an Autonomous Database. And this is one that would be synced with that source database up to so many days. The task of keeping that refreshable clone in sync with that source database rests upon the shoulders of the administrator. The administrator is the person who is responsible for performing that sync operation. Now, actually performing the operation is very simple, it's point and click. And it's an automated process from the database console. And also be aware that refreshable clones can trail the source database or source Autonomous Database up to seven days. After that period of time, the refreshable clone, if it has not been refreshed or kept in sync with that source database, it will become a standalone, read-only copy of that original source database. 07:38 Nikita: Ok Sean, so if you had to give us the key takeaways on cloning an Autonomous Database, what would they be?  Sean: It's very easy and a lot of flexibility when it comes to cloning an Autonomous Database. We have different models that you can take from a live running database instance with zero impact on your workload or from a backup. It can be a full copy, or it can be a metadata copy, as well as a refreshable, read-only clone of a source database. 
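Sean's seven-day rule for refreshable clones can be sketched as simple date arithmetic: a clone refreshed within the last seven days can still be synced with its source; past that window, it becomes a standalone, read-only copy. This is a conceptual illustration only; the actual state transition is managed by the Autonomous Database service, not by client code.

```python
from datetime import datetime, timedelta

# A refreshable clone may trail its source by at most seven days.
REFRESH_WINDOW = timedelta(days=7)

def clone_status(last_refresh, now):
    """Return the clone's state given when it was last synced with its
    source (illustrative logic, not an OCI API)."""
    if now - last_refresh <= REFRESH_WINDOW:
        return "refreshable"
    return "standalone read-only"

print(clone_status(datetime(2024, 12, 1), datetime(2024, 12, 5)))   # refreshable
print(clone_status(datetime(2024, 12, 1), datetime(2024, 12, 10)))  # standalone read-only
```

The administrator's job, as Sean notes, is simply to trigger the sync from the console often enough to stay inside that window.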
08:12 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all of which is available free to subscribers. So, get going! Pick a course of your choice, get certified, join the Oracle University Learning Community, and network with your peers. If you're already an Oracle MyLearn user, go to MyLearn to begin your journey. If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started.  08:50 Nikita: Welcome back! Thank you, Sean, and hi Kay! I want to ask you about events and notifications in Autonomous Database. Where do they really come in handy?  Kay: Events can be used for a variety of notifications, including admin password expiration, ADB services going down, and wallet expiration warnings. There's this service, and it's called the notifications service. It's part of OCI. And this service provides you with the ability to broadcast messages to distributed components using a publish and subscribe model. These notifications can be used to notify you when event rules or alarms are triggered or simply to directly publish a message. In addition to this, there's also something that's called a topic. This is a communication channel for sending messages to subscribers in the topic. You can manage these topics and their subscriptions really easily. It's not hard to do at all. 09:52 Lois: Kay, I want to ask you about backing up Autonomous Databases. How does Autonomous Database handle backups? Kay: Autonomous Database automatically backs up your database for you. The retention period for backups is 60 days. You can restore and recover your database to any point in time during this retention period. You can initiate recovery for your Autonomous Database by using the cloud console or an API call. 
Autonomous Database automatically restores and recovers your database to the point in time that you specify. In addition to a point-in-time recovery, we can also perform a restore from a specific backup set.  10:37 Lois: Kay, you spoke about automatic backups, but what about manual backups?  Kay: You can do manual backups using the cloud console, for example, if you want to take a backup, say, before a major change to make restoring and recovery faster. These manual backups are put in your cloud object storage bucket. 10:58 Nikita: Are there any special instructions that we need to follow when configuring a manual backup? Kay: The manual backup configuration tasks are a one-time operation. Once this is configured, you can go ahead and trigger your manual backup any time you wish after that. When creating the object storage bucket for the manual backups, it is really important-- so I don't want you to forget-- that the name format for the bucket and the object storage follows this naming convention. It should be backup underscore database name. And it's not the display name here when I say database name. In addition to that, the object name has to be all lowercase. So three rules: it's backup underscore database name; the database name is the actual database name, not the display name; and it has to be all in lowercase. Once you've created your object storage bucket to meet these rules, you then go ahead and set a database property, default_backup_bucket. This points to the object storage URL and it's using the Swift protocol. Once you've got your object storage bucket mapped and you've created your mapping to the object storage location, you then need to go ahead and create a database credential inside your database. You may have already had this in place for other purposes, like maybe you were loading data, you were using Data Pump, et cetera. If you don't, you would need to create this specifically for your manual backups. 
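The three naming rules Kay lists (the backup underscore prefix, the actual database name rather than the display name, and all lowercase) can be captured in a small helper. SALESDB below is a hypothetical database name, and the helper itself is an illustration of the convention, not an Oracle API.

```python
def manual_backup_bucket_name(database_name):
    """Build the bucket name per the stated convention:
    'backup_' + the actual database name (not the display name),
    all in lowercase."""
    return "backup_" + database_name.lower()

def is_valid_bucket_name(bucket, database_name):
    """Check an existing bucket name against the convention."""
    return bucket == manual_backup_bucket_name(database_name)

print(manual_backup_bucket_name("SALESDB"))               # backup_salesdb
print(is_valid_bucket_name("backup_salesdb", "SALESDB"))  # True
print(is_valid_bucket_name("Backup_SalesDB", "SALESDB"))  # False (mixed case)
```

Getting this name wrong is the most common reason a manual backup configuration fails, which is why Kay stresses the three rules.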
Once you've done so, you can then go ahead and set your property to that default credential that you created. So once you follow these steps as I pointed out, you only have to do it one time. Once it's configured, you can go ahead and use it from now on for your manual backups. 13:00 Lois: Kay, the last topic I want to talk about before we let you go is Autonomous Data Guard. Can you tell us about it? Kay: Autonomous Data Guard monitors the primary database, in other words, the database that you're using right now.  13:14 Lois: So, if ADB goes down… Kay: Then the standby instance will automatically become the primary instance. There's no manual intervention required. So failover from the primary database to that standby database I mentioned, it's completely seamless and it doesn't require any additional wallets to be downloaded or any new URLs to access APEX or Oracle Machine Learning. Even Oracle REST Data Services. All the URLs and all the wallets, everything that you need to authenticate, to connect to your database, they all remain the same for you if you have to failover to your standby database. 13:58 Lois: And what happens after a failover occurs? Kay: After performing a failover, a new standby for your primary will automatically be provisioned. So in other words, in performing a failover your standby does become your new primary. Any new standby is made for that primary. I know, it's kind of interesting. So currently, the standby database is created in the same region as the primary database. For better resilience, if your database is provisioned, it would be available on AD1 or Availability Domain 1. My secondary, or my standby, would be provisioned on a different availability domain. 14:49 Nikita: But there's also the possibility of manual failover, right? What are the differences between automatic and manual failover scenarios? When would you recommend using each? 
Kay: So in the case of the automatic failover scenario following a disastrous situation, if the primary ADB becomes completely unavailable, the switchover button will turn into a failover button. Because remember, this is a disaster. Automatic failover is automatically triggered. There's no user action required. So if you're asleep and something happens, you're protected. Automatic failover, however, is allowed to succeed only when no data loss will occur. For manual failover scenarios, in the rare case when an automatic failover is unsuccessful, the switchover button will become a failover button and the user can trigger a manual failover should they wish to do so. The system automatically recovers as much data as possible, minimizing any potential data loss. But you can see anywhere from a few seconds to a few minutes of data loss. Now, you should only perform a manual failover in a true disaster scenario, accepting that a few minutes of potential data loss could occur, to ensure that your database is back online as soon as possible.  16:23 Lois: We hope you've enjoyed revisiting some of our most popular episodes over these past few weeks. We always appreciate your feedback and suggestions so remember to take that quick survey we've put out. You'll find it in the show notes for today's episode. Thanks a lot for your support. We're taking a break for the next two weeks and will be back with a brand-new season of the Oracle University Podcast in January. Happy holidays, everyone! Nikita: Happy holidays! Until next time, this is Nikita Abraham... Lois: And Lois Houston, signing off! 16:56 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Oracle University Podcast
Best of 2024: Developing Redwood Applications


Dec 10, 2024 · 21:26


Redwood is a state-of-the-art graphical interface that defines the look and feel of the new Oracle Cloud Redwood Applications.   In this episode, hosts Lois Houston and Nikita Abraham, along with Senior Principal OCI Instructor Joe Greenwald, take a closer look at the intent behind the design and development aspects of the new Redwood experience. They also explore Redwood page templates and components.   Survey: https://customersurveys.oracle.com/ords/surveys/t/oracle-university-gtm/survey?k=focus-group-2-link-share-5   Developing Redwood Applications with Visual Builder: https://mylearn.oracle.com/ou/learning-path/developing-redwood-applications-with-visual-builder/112791   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Nikita: Hello and welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Thanks for joining us for this Best of 2024 series, where we're playing for you our four most popular episodes of the year.    Nikita: Today's episode is #3 of 4 and is a throwback to another conversation with Joe Greenwald, our Senior Principal OCI Instructor. We asked Joe about Oracle's Redwood design system and how it helps us create stunning, world-class enterprise applications and user experiences.  
01:04 Lois: Yeah, Redwood is the basis for all the new Oracle Cloud Applications being re-designed, developed, and delivered. Joe is the best person to ask about all of this because he's been working with our Oracle software development tools since the early 90s and is responsible for OU's Visual Builder Studio and Redwood course content. So, let's dive right in! Joe: Hi Lois. Hi Niki. I am excited to join you on this episode because with the release of 24A Fusion applications, we are encouraging all our customers to adopt the new Redwood design system and components, and take advantage of the world-class look and feel of the new Redwood user experience. Redwood represents a new approach and direction for us at Oracle, and we're excited to have our customers benefit from it. 01:49 Nikita: Joe, you've been working with Oracle user interface development tools and frameworks for a long time. How and why is Redwood different? Joe: I joined Oracle in 1992, and the first Oracle user interface I experienced was Oracle Forms. And that was the character mode. I came from a background of Smalltalk and its amazing, pioneering graphical user interface (GUI) design capabilities. I worked at Apple and I developed my own GUIs for a few years on PCs and Macs. So, Character Mode Forms, what we used to call DMV (Department of Motor Vehicles) screens, was a shock, to say the least. Since then, I've worked with almost every user interface and development platform Oracle has created: Character Mode Forms, GUI Forms, Power Objects, HyperCard on the Macintosh, that was pre-OS X by the way, Sedona, written in native C++ and ActiveX and OLE, which didn't make it to a product but appeared in other things later, ADF Faces, which uses Java to generate HTML pages, and APEX, which uses PL/SQL to generate HTML pages. And I've worked with and wrote training classes for Java Swing, an excellent GUI framework for event-driven desktop and enterprise applications, but it wasn't designed for the web. 
So, it's with pleasure that I introduce you to the Redwood design system, easily the best effort I've ever seen, from its look and feel and holistic, user-goal-centered design philosophy and approach to its cutting-edge WYSIWYG design tools.  03:16 Lois: Joe, is Redwood just another set of styles, colors, and fonts, albeit very nice-looking ones? Joe: The Redwood platform is new for Oracle, and it represents a significant change, not just in the look and feel, colors, fonts, and styles, I mean that too, but it's also a fundamental change in how Oracle is creating, designing, and imagining user interfaces. As you may be aware, all Oracle Cloud Applications are being re-designed, re-engineered, and rebuilt from the ground up, with significant changes to both back-end and front-end architectures. The front end is being redesigned, re-developed, and re-created in pure HTML5, CSS3, and JavaScript using Visual Builder Studio and its design-time browser-based Integrated Development Environment. The back end is being re-architected, re-designed, and implemented in a modern microservice architecture for Oracle Cloud using Kubernetes and other modern technologies that improve performance and work better in the cloud than our current legacy architecture. The new Oracle Cloud Applications platform uses Redwood for its design system—its tools, its patterns, its components, and page templates. Redwood is a richer and more productive platform to create solutions while still being cost-effective for Oracle. It encourages a transformation of the fundamental user experience, emphasizing identifying, meeting, and understanding end users' goals and how the applications are used.  04:39 Nikita: Joe, do you think Oracle's user interface has been improved with Redwood? In what ways has the UI changed? Joe: Yes, absolutely. Redwood has changed a lot of things. When I joined Oracle back in the '90s, there was effectively no user interface division or UI team.
Everybody just did their own thing. There was no user interface lab until one was started in the mid-'90s, when I was asked to give product usability feedback and participate in UI tests and experiments in that lab. I also helped test the products I was teaching at the time. I actually distinctly remember having to take a week to train users on Oracle's Designer CASE tool product just to prep the participants enough to perform usability testing. I can still hear the UI lab manager shaking her head and saying any product that requires a week of training to do usability testing has usability issues! And if you're like me and you've been around Oracle long enough, you know that Oracle hasn't always been known for its user interfaces and has been known to release products that look like they were designed by two or more different companies. All that has changed with Redwood. With Redwood, there's a new internal design group that oversees the design choices of all development teams that develop products. This includes a design system review and an ongoing audit process to ensure that all the products being released, whether Fusion apps or something else, all look and feel similar so it looks like it's designed by a single company with a single thought in mind. Which it is. There's a deeper, consistent commitment to identifying user needs, understanding how the applications are being used, and how they meet those user needs through things like telemetry: gathering metrics from the actual components and the Redwood system itself to see how the applications are being used, what's working well, and what isn't. This telemetry is available to us here at Oracle, and we use it to fine-tune the applications' usability and purpose.  06:29 Lois: That's really interesting, Joe. So, it's a fundamental change in the way we're doing things. What about the GUI components themselves? Are these more sophisticated than simple GUI components like buttons and text fields?
Joe: The graphical components themselves are at a much higher level, more comprehensive, and work better together. And in Redwood, everything is a component. And I'm not just talking about things like input text fields and buttons, though it applies to these more fine-grained components as well. Leveraging Oracle's deep experience in building enterprise applications, we've incorporated that knowledge into creating page templates so that the structure and look and feel of the page is fixed based on our internal design standards. The developer has control over certain portions of it, but the overall look and feel of the page is controlled by Oracle. So there is consistency of look and feel within and across applications. These page templates come with predefined functionalities: headers, titles, properties, and variables to manipulate content and settings, slots to hold other components like search fields, collections, contextual information, badges, and images, as well as primary and secondary actions, and variables for events and event handling through Visual Builder action chains, which handle the various actions and processing of the request on the page. And all these page templates and components are responsive, meaning they respond to changes in the size and orientation of the page. So, when you move from a desktop to a handheld mobile device or a tablet, they respond appropriately and consistently to deliver a clean, easy-to-use interface and experience. 08:03 Nikita: You mentioned WYSIWYG design tools and their integration with Visual Builder Studio's integrated development environment. How does Redwood work with Visual Builder Studio? Joe: This is easily one of my favorite aspects of Redwood and the integration with Visual Builder Studio Designer. The components and page templates are responsive at runtime as well as responsive at design time!
In over 30 years of working with Oracle software development products, this is the first development system and integrated development environment I've seen Oracle produce where what you see is what you get at design time. Now, with products such as Designer and JDeveloper ADF Faces and even APEX—all those page-generation types of products—you have to generate the page, deploy it, and only then can you view the final page to see whether it meets the needs of your user interface. For example, with Designer, there were literally hundreds of configuration parameters that you could set to control how forms and reports looked when they were generated —down to how many buttons on a row or how many rows to a page, that sort of thing, all done in text mode. Then you'd generate and run the page to see what the result was and then go back and modify things until you got what you wanted. I remember hearing the product managers for Oracle ADF Faces being asked…well, a customer asked, “What happens if I put this component here and this component here? What will the page look like?” and they'd say, “I don't know. Render the page and let's see.” That's just crazy talk. With Redwood and its integration with Visual Builder Studio Designer, what you see on the page at design time is literally what you get. And if I make the page narrower or I even convert it to a mobile display while in the Designer itself, I immediately see what the page looks like in that new mode. Everything just moves accordingly, at design time. For example, when changing to a mobile UI, everything stacks up nicely; the components adjust to the page size and change right there in the design environment. Again, I can't emphasize enough the simple luxury of being able to see exactly what the user is going to see on my page and having the ability to change the resolution, orientation, and screen size, and it changes right there immediately in my design environment. 
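As a rough mental model, the responsive behavior described above comes down to breakpoint rules that pick a layout for the current page width, re-evaluated whenever the canvas or device size changes. The widths and layout names below are hypothetical illustrations, not Redwood's actual values; the real components handle all of this internally, at both runtime and design time.

```javascript
// A simplified sketch of breakpoint logic for a responsive layout.
// Widths and layout names are made up for illustration only.
function layoutFor(widthPx) {
  if (widthPx < 600) return "stacked";     // phone: components stack vertically
  if (widthPx < 1024) return "two-column"; // tablet: side-by-side panels
  return "wide";                           // desktop: full layout
}

// Narrowing the design canvas (or the real page) simply re-evaluates the rule:
console.log(layoutFor(1280)); // "wide"
console.log(layoutFor(390));  // "stacked"
```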
10:06 Lois: I'm intrigued by the idea of page templates that are managed by Oracle but still leave room for the developer to customize aspects of the look and feel and functionality. How does that work? Joe: Well, the page templates themselves represent the typical pages you would most likely use in an enterprise application. Things like a welcome page, a search page, and edit and create pages, and a couple of different ways to display summary information, including foldout pages, though this is not an exhaustive list of course. Not only do they provide a logical and complete starting point for the layout of the page itself, but they also include built-in functionality. These templates include functionality for buttons, primary and secondary actions, and areas for holding contextual information, badges, avatars, and images. And this is all built right into the page, and all of them use variables to describe the contents for the various parts, so the contents can change programmatically as the variables' contents change, if necessary. 11:04 Do you have an idea for a new course or learning opportunity? We'd love to hear it! Visit the Oracle University Learning Community and share your thoughts with us on the Idea Incubator. Your suggestion could find a place in future development projects. Visit mylearn.oracle.com to get started.  11:24 Nikita: Welcome back! So, Joe, let's say I'm a developer. How do I get started working with Redwood? Joe: One of the easiest ways to do it is to use Visual Builder Studio Designer and create a new visual application. If you're creating a standalone, bespoke custom application, you can choose a Redwood starter template, which will include all the Redwood components and page templates automatically. Or, if you're extending and customizing an Oracle Fusion application, Redwood is already included.  Either way, when you create a new page, you have a choice of different page templates—welcome page templates, edit pages, search pages, etc. 
—and all you have to do is choose a page that you want and begin configuring it. And actually if you make a mistake, it's easy to switch page templates. All the components, page templates, and so on have documentation right there inside Visual Builder Studio Designer, and we do recommend that you read through the documentation first to get an understanding of what the use case for that template is and how to use it. And some components are more granular, like a collection container which holds a collection of rows of a list or a table and provides capabilities like toolbars and other actions that are already built and defined. You decide what actions you want and then use predefined event listeners that are triggered when an event occurs in the application—like a button being clicked or a row being selected—which kicks off a series of actions to be performed. 12:42 Lois: That sounds easy enough if you know what you're doing. Joe, what are some of the more common pages and what are they used for?  Joe: Redwood page templates can be broken down into categories. There are overview templates like the welcome page template, which has a nice banner, colors, and illustrations that can be used for a welcoming page—like for entering a new application or a new logical section of the application. The dashboard landing page template displays key information values and their charts and graphs, which can come from Oracle Analytics, and automatically switches the display depending on which set of data is selected.  The detail templates include a general overview, which presents read-only information related to a single record or resource. The item overview gives you a small panel to view summary information (for example, information on a customer) and in the main section, you can view details like all the orders for that customer. And you can even navigate through a set of customers, clicking arrows for next-previous navigation. And that's all built in. 
There's no programming required. The fold-out page template folds out horizontally to show you individual panels with more detail that can be displayed about the subject being retrieved as well as overflow and drill-down areas. And there's a collection detail template that will display a list with additional details about the selected item (for example, an order and its order line items). The smart search page does exactly what it says. It has a search component that you use to filter or search the data coming back from the REST data sources and then display the results in a list or a table. You define the filter yourself and apply it using different kinds of comparators, so you can look for strings that start with certain values or contain values, or numerical values that are equal to or less than, depending on what you're filtering for. And then there are the transactional templates, which are meant to make changes. This includes both the simple create and edit and advanced create and edit templates.  The simple create and edit page template edits a single record or creates a single record. And the advanced page template works well if you're working with master-detail, parent-child type relationships. Let's say you want to view the parent and create children for it or even create a parent and the children at the same time. And there's a Gantt chart page for project management–type tracking and a guided process page for multiple-step processes and there's a data management page template specifically for viewing and editing data collections like Excel spreadsheets. 14:55 Nikita: You mentioned that there's a design system behind all this. How is this used, and how does the customer benefit from it? Joe: Redwood comprises both a design system and a development system. The design system has a series of steps that we follow here at Oracle and can suggest that you, our customers and partners, can follow as well. 
This includes understanding the problem, articulating the vision for the page and the application (what it should do), identifying the proper Redwood page templates to use, adding detail and refining the design and then using a number of different mechanisms, including PowerPoint or Figma design tools to specify the design for development, and then monitor engagement in the real world. These are the steps that we follow here at Oracle. The Redwood development process starts with learning how to use Redwood components and templates using the documentation and other content from redwood.oracle.com and Visual Builder Studio. Then it's about understanding the design created by the design team, learning more about components and templates for your application, specifically the ones you're going to use, how they work, and how they work together. Then developing your application using Visual Builder Studio Designer, and finally improving and refining your application. Now, right now, as I mentioned, telemetry is available to us here at Oracle so we can get a sense of the feedback on the pages of how components are being used and where time is being spent, and we use that to tune the designs and components being used. That telemetry data may be available to customers in the future. Now, when you go to redwood.oracle.com, you can access the Redwood pattern book that shows you in detail all the different page templates that are available: smart search page, data grid, welcome page, dashboard landing page, and so on, and you can select these and read more about them as well as the actual design specifications that were used to build the pages—defining what they do and what they respond to. They provide a lot of detailed information about the templates and components, how they work and how they're intended to be used. 16:50 Lois: That's a lot of great resources available. But what if I don't have access to Visual Builder Studio Designer? 
Can I still see how Redwood looks and behaves? Joe: Well, if you go to redwood.oracle.com, you can log in and work with the Redwood reference application, which is a live application working with live data. It was created to show off the various page templates and components, their look and feel and functionality from the Redwood design and development systems. This is an order management application, so you can do things like view filtered pending orders, create new orders, manage orders, and view information about customers and inventory. It uses the different page templates to show you how the application can perform. 17:29 Nikita: I assume there are common aspects to how these page templates are designed, built, and intended to be used. Is that a good way to begin understanding how to work with them? Becoming familiar with their common properties and functionality? Joe: Absolutely! Good point! All pages have titles, and most have primary and secondary actions that can be triggered through a variety of GUI events, like clicking a button or a link or selecting something in a list or a table. The transactional page templates include validation groups that validate whether the data is correct before it is submitted, as well as a message dialog that can pop up if there are unsaved changes and someone tries to leave the page. All the pages can use variables to display information or set properties and can easily display specific contextual information about records that have been retrieved, like adding the Order Number or Customer Name and Number to the page title or section headers. 18:18 Lois: If I were a developer, I'd be really excited to get started! So, let's say I'm a developer. What's the best way to begin learning about Redwood, Joe? Joe: A great place to start learning about the Redwood design and development system is at the redwood.oracle.com page I mentioned. 
We have many different pages that describe the philosophy and fundamental basis for Redwood, the ideas and intent behind it, and how we're using it here at Oracle. It also has a list of all the different page templates and components you can use and a link to the Redwood reference application where you can sign in and try it yourself. In addition, we at Oracle University offer a course called Design and Develop Redwood Applications, and in there, we have both lecture content as well as hands-on practices where you build a lightweight version of the Redwood reference application using data from the Fusion apps application, as well as the pages that I talked about: the welcome page, detail pages, transactional pages, and the dashboard landing page.  And you'll see how those pages are designed and constructed while building them yourself.  It's very important though to take one of the free Visual Builder Developer courses first: either Build Visual Applications Using Visual Builder Studio and/or Develop Fusion Applications Using Visual Builder Studio before you try to work through the practices in the Redwood course because it uses a lot of Visual Builder Designer technology.  You'll get a lot more out of the Redwood practices if you understand the basics of Visual Builder Studio first. The Build Visual Applications Using Visual Builder Studio course is probably a better place to start unless you know for a fact you will be focusing on extending Oracle Fusion Applications using Visual Builder Studio. Now, a lot of the content is the same between the two courses as they share much of the same technology and architectures. 19:58 Lois: Ok, so Build Visual Applications Using Visual Builder Studio and Develop Fusion Applications Using Visual Builder Studio…all on mylearn.oracle.com and all free for anyone who wants to take them, right? Joe: Yes, exactly. And the free Redwood learning path leads to an Associate certification. 
While our courses are a great place to start in preparing for your certification exam, they are not, of course, by themselves sufficient to pass and you will want to study and be familiar with the redwood.oracle.com content as well. The learning path is free, but you do have to pay for the certification exam. Nikita: We hope you enjoyed that conversation. A quick reminder about the short survey we've created to gather your insights and suggestions for the podcast. It's really quick. Just click the link in the show notes to complete the survey. Thank you so much for helping us make the show better. Join us next week for another throwback episode. Until then, this is Nikita Abraham... Lois: And Lois Houston, signing off! 20:58 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Oracle University Podcast
Best of 2024: Introduction to Visual Builder Studio, Visual Builder Cloud Service, Stand-Alone, and JET

Oracle University Podcast

Play Episode Listen Later Nov 26, 2024 24:58


The next generation of front-end user interfaces for Oracle Fusion Applications is being built using Visual Builder Studio and Oracle JavaScript Extension Toolkit. However, many of the terms associated with these tools can be confusing.   In this episode, Lois Houston and Nikita Abraham are joined by Senior Principal OCI Instructor Joe Greenwald. Together, they take you through the different terminologies, how they relate to each other, and how they can be used to deliver the new Oracle Fusion Applications as well as stand-alone, bespoke visual web applications.   Survey: https://customersurveys.oracle.com/ords/surveys/t/oracle-university-gtm/survey?k=focus-group-2-link-share-5   Develop Fusion Applications Using Visual Builder Studio: https://mylearn.oracle.com/ou/course/develop-fusion-applications-using-visual-builder-studio/138392/   Build Visual Applications Using Oracle Visual Builder Studio: https://mylearn.oracle.com/ou/course/build-visual-applications-using-oracle-visual-builder-studio/137749/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Nikita: Hello and welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! If you've been following along with us, you'll know that we've had some really interesting seasons this year. 
We covered Autonomous Database, Artificial Intelligence, Visual Builder Studio and Redwood, OCI Container Engine for Kubernetes, and Oracle Database 23ai New Features. Nikita: And we've had some pretty awesome special guests. Do go back and check out those episodes if any of those topics interest you. 01:04 Lois: As we close out the year, we thought this would be a good time to revisit some of our best episodes. Over the next few weeks, you'll be able to listen to four of our most popular episodes of the year.  Nikita: Right, this is the best of the best–according to you–our listeners.   Lois: Today's episode is #1 of 4 and is a throwback to a discussion with Senior Principal OCI Instructor Joe Greenwald on Visual Builder Studio. Nikita: We asked Joe about Visual Builder Studio and Oracle JavaScript Extension Toolkit, also known as JET. Together, they form the basis of the technology for the next generation of front-end user interfaces for Oracle Fusion Applications, as well as many other Oracle applications, including most Oracle Cloud Infrastructure (OCI) interfaces. 01:48 Lois: We looked at the different terminologies and technologies, how they relate to each other, and how they deliver the new Oracle Fusion Applications and stand-alone, bespoke visual web applications.  So, let's dive right in. Nikita: Joe, I'm somewhat thrown by the terminology around Visual Builder, Visual Studio, and JET. Can you help streamline that for us? Lois: Yeah, things that are named the same sometimes refer to different things, and sometimes things with a different name refer to the same thing. 02:18 Joe: Yeah, I know where you're coming from. So, let's start with Visual Builder Studio. It's abbreviated as VBS and can go by a number of different names. Some of the most well-known ones are Visual Builder Studio, VBS, Visual Builder, Visual Builder Stand-Alone, and Visual Builder Cloud Service. Clearly, this can be very confusing. 
For the purposes of these episodes as well as the training courses I create, I use certain definitions.  02:42 Lois: Can you take us through those? Joe: Absolutely, Lois. Visual Builder Studio refers to a product that comes free with an OCI account and allows you to manage your project-related assets. This includes the project itself, which is a container for all of its assets. You can assign teams to your projects, as well as secure the project and declare roles for the different team members. You manage Git repositories with full graphical and command-line Git support, define package, build, and deploy jobs, and create and run continuous integration/continuous deployment graphical and code-managed pipelines for your applications. These can be visual applications, created using the Visual Builder Integrated Development Environment, the IDE, or non-visual apps, such as Java microservices, Docker builds, NPM apps, and things like that. And you can define environments, which determine where your build jobs can be deployed. You can also define issues, which allow you to identify, track, and manage things like bugs, defects, and enhancements. And these can be tracked in code review merge requests and build jobs, and be mapped to agile sprints and scrum boards. There's also support for wikis for team collaboration, code snippets, and the management of the repository and the project itself. So, VBS uses merge requests to support code reviews before code is merged into Git branches for package, build, and deploy jobs. 04:00 Nikita: OK, what exactly do you mean by that? Joe: Great. So, for example, you could have developers working in one GIT branch and when they're done, they would push their private code changes into that remote branch. Then, they'd submit a merge request and their changes would be reviewed.
Once the changes are approved, their code branch is merged into the main branch, which then automatically runs a CI/CD (continuous integration/continuous deployment) package, build, and deploy job on the code. Also, the CI/CD package, build, and deploy jobs can run against any branches, not just the main branch. So Visual Builder Studio is intended for managing the project and all of its assets. 04:37 Lois: So Joe, what are the different tools used in developing web applications? Joe: Well, Visual Builder, Visual Builder Studio Designer, Visual Builder Designer, Visual Builder Design-Time, Visual Builder Cloud Service, and Visual Builder Stand-Alone all kind of get lumped together. You can kinda see why. What I'm referring to here are the tools that we use to build a visual web application composed of HTML5, CSS3, JavaScript, and JSON (JavaScript Object Notation) for metadata. I call this Visual Builder Designer. This is an Integrated Development Environment, the “IDE,” which runs in your browser. You use a combination of drag and drop, setting properties, and writing and modifying custom and generated code to develop your web applications. You work within a workspace, which is your own private copy of a remote Git branch. When you're ready to start development work, you open an existing workspace or create a new one based on a clone of the remote branch you want to work on. Typically, a new branch would be created for the development work or you would join an existing branch. 05:38 Nikita: What's a workspace, Joe? Is it like my personal laptop and drive? Joe: A workspace is your own private code area that stores any changes you make on the Oracle servers, so your code changes are never lost—even when working in a browser-based, network-based tool. A good analogy is, say I was working at home on my own machine. And I would make a copy of a remote Git branch and then copy that code down to my local machine, make my code changes, do my testing, etc.
and then commit my work—create a logical savepoint periodically—and then when I'm ready, I'd push that code up into the remote branch so it can be reviewed and merged with the main branch. My local machine is my workspace. However, since this code is hosted by Oracle on our servers, and the code and the IDE are all running in your browser, the workspace is a simulation of a local work area on your own computer. So, the workspace is a hosted allocation of resources for you that's private. Other people can't see what's going on in your workspace. Your workspace has a clone of the remote branch that you're working with, and the changes you make are isolated to your cloned code in your workspace. 06:41 Lois: Ok… the code is actually hosted on the server, so each time you make a change in the browser, the change is written back to the server? Is it possible that you might lose your edits if there's a networking interruption? Joe: I want to emphasize that while I started out not personally being a fan of web-based integrated development environments, I have been using these tools for over three years, and in all that time, while I have lost a connection at times—networks are still subject to interruptions—I've never lost any changes that I've made. Ever. 07:11 Nikita: Is there a way to save where you are in your work so that you could go back to it later if you need to? Joe: Yes, Niki, you're asking about commits and savepoints, like in a Git repository or a Git branch. When you reach a logical stopping or development point in your work, you would create a commit or a savepoint. And when you're ready, you would push that committed code in your workspace up to the remote branch where it can be reviewed and then eventually merged, usually with the main Git branch, and then continuous integration/continuous deployment (CI/CD) package, build, and deploy jobs are run.
Now, I'm only giving you a high-level overview, but we cover all this and much more in detail with hands-on practices in our Visual Builder developer courses. Right now, I'm just trying to give you a sense of how these different tools are used. 07:52 Lois: Yeah, that makes sense, Joe. It's a lot to cover in a short amount of time. Now, we've discussed the Visual Builder Designer IDE and workspace. But can you tell us more about Visual Builder Cloud Service and stand-alone environments? What are they used for? What features do they provide? Are they the same or different things? Joe: Visual Builder Cloud Service or Visual Builder Stand-Alone, as it's sometimes called, is a service that Oracle hosts on its servers. It provides hosting for the deployed web application source code as well as database tables for business objects that we build and maintain to store your customer data. This data can come from XLS or CSV files, or even your own Oracle database customer table data. A custom REST proxy makes calls to external third-party REST services on your behalf and supports several popular authentication mechanisms. There is also integration with the Identity Cloud Service (IDCS) to manage users and their access to your web apps. Visual Builder Cloud Service is a for-fee product. You pay licensing fees for how much you use because it's a hosted service. Visual Builder Studio, the project asset management aspect I discussed earlier, is free with a standard OCI license. Now, keep in mind these are separate from something like Visual Builder Design Time and the service that's running in Fusion application environments. What I'm talking about now is creating stand-alone, bespoke, custom visual applications. These are applications that are built using industry-standard HTML5, CSS3, JavaScript, and JSON for metadata and are hosted on the Oracle servers.  09:30 Are you looking for practical use cases to help you plan and apply configurations that solve real-world challenges?  
With the new Applied Learning courses for Cloud Applications, you'll be able to practically apply the concepts learned in our implementation courses and work through case studies featuring key decisions and configurations encountered during a typical Oracle Cloud Applications implementation. Applied Learning scenarios are currently available for General Ledger, Payables, Receivables, Accounting Hub, Global Human Resources, Talent Management, Inventory, and Procurement, with many more to come! Visit mylearn.oracle.com to get started.

10:12

Nikita: Welcome back! Joe, you said Visual Builder Cloud Service or Stand-Alone is a for-fee service. Is there a way I can learn about using Visual Builder Designer to build bespoke visual applications without a fee?

Joe: Yes. Actually, we've added an option where you can run the Visual Builder Designer and learn how to create web apps without using the app hosting, the business object database that stores your customer data, the REST proxy for authentication, or the Identity Cloud Service. So you don't get those features, but you can still learn the fundamentals of developing with Visual Builder Designer. You can call third-party APIs, and you can download the source and run it locally, for example, in a Tomcat server. This is a great and free way to learn how to develop with the Visual Builder Designer.

10:55

Lois: Joe, I want to know more about the kinds of apps you can build in VB Designer and the capabilities that VB Cloud Service provides.

Joe: Visual Builder Designer allows you to build custom, bespoke web applications made of interactive webpages; flows of pages for navigation; events that respond when things happen in the app, for example, GUI events like a button being clicked or values being entered into a text field; variables to store the state of the application; and the ability to make REST calls, all from your browser.
These applications have full access to the Oracle Fusion Applications APIs, provided you have the right security permissions and credentials, of course. They can access your customer business data as business objects in our internally hosted database tables or your own customer database tables. They can access third-party APIs, and all these different data sources can appear in the same visual application, on the same page, at the same time. They use the Identity Cloud Service to identify which users can log in and authenticate against the application. And they all use the new Redwood graphical user interface components and page templates, so they have the same look and feel as all Oracle applications.

12:02

Nikita: But what if you're building or extending Oracle Fusion Applications? Don't things change a little bit?

Joe: Good point, Niki. Yes. You still work within Visual Builder Studio; that doesn't change. VBS maintains your project and all your project-related assets; that is still the same. However, in this case, there is no separate hosted Visual Builder Cloud Service or Stand-Alone instance. Instead, Visual Builder is hosted inside of Fusion apps itself as part of the installation. I won't go into the details of how the architecture works, but the Visual Builder instance that you're running your code against is part of Fusion Applications and is included in the architecture as well as the billing.

All your code changes are maintained and stored within a single container called an extension. This extension is a Git repository that is created for you, or that you can create yourself, depending on how you choose to work within Visual Builder Studio. You create an extension to hold the source code changes that provide a customization or configuration. This means making a change to an existing page or a set of pages, or even adding new pages and flows to your Oracle Fusion Applications.
You use Visual Builder Studio and Visual Builder Designer much as you would use them for bespoke stand-alone visual applications.

13:12

Lois: I'm trying to envision how this workflow is used. How is it different from bespoke VB app development? Or is it different at all?

Joe: So, recall that the Visual Builder Designer is effectively the Integrated Development Environment, the IDE, where you make your code changes by working with the raw HTML5, CSS3, and JavaScript code if need be, or with the Page Designer for drag and drop and setting properties, and then Live mode to test your work. You use a version of VB Designer to view and modify your customizations, and the code is stored in a Git repository called an extension. So, in that sense, the work of developing pages and flows and such is the same. You still start by creating or, more typically, joining a project, and then either create a new extension from scratch or base it on an existing application, or go directly to the page that you want to edit and, on that page, select from your profile menu to edit in Visual Builder Studio.

Now, this is a different lifecycle path from bespoke visual applications. With them, you're not extending an app or modifying individual pages in the same way. When you're working with Fusion apps, you get a choice of which project to add your extension to, and potentially which repository will store your customizations; if one already exists, it's assigned automatically to hold your code changes. You also make your changes and edits only to the portions of the application that have been opened for extensibility by the development team. This is another difference. Once you make your code changes, the workflow is pretty much the same as for a bespoke visual application: do your development work, commit your changes, and push your changes to the remote branch.
And then typically, your code is reviewed, and if it passes and is approved, it's merged with the main branch. Then the package and deploy jobs run to deploy the main code to the production environment, or whatever environment you're targeting. And once the package and deploy jobs complete, the code base is updated, and users who log in see the changes that you've made.

15:03

Nikita: You mentioned creating apps that combine data from Fusion Cloud Applications, customer data, and third-party APIs into one page. Why is that necessary? Why can't you just do all that in one Fusion Applications extension?

Joe: When you create extensions, you are working within the Oracle Fusion Applications ecosystem, that's what they actually call it, which includes a predefined set of users who are, therefore, known to Fusion Applications. So, if you're a user and you're not part of that Fusion Apps ecosystem, you can't access the pages. Period. That's how Fusion Apps works to maintain its security and integrity. Secondly, you're working pretty much solely with the Fusion Applications APIs, data sources coming directly from Fusion Applications, which are also available to you when you're creating bespoke visual apps. When you're working with Fusion Applications in Visual Builder, you don't have access to the business objects that give you access to your own customer database data through Visual Builder-generated REST APIs. Business objects are available only to bespoke visual applications in the hosted VB Cloud Service instance. So, your data sources are restricted to the Oracle Fusion Applications APIs and some third-party APIs that currently work within a narrow set of authentication mechanisms, although there are plans to expand this in the future. Bespoke visual applications, by contrast, are mashup apps that allow you to access all these data sources while leveraging the Redwood Component System, so they look and work like Fusion Apps.
They're a highly popular option for our partners and customers.

16:28

Lois: So, to review, we have two different approaches. You can create a visual application using the for-fee, hosted Visual Builder Cloud Service/Stand-Alone or the one that comes with Oracle Integration Cloud, or you can use the extension architecture for Fusion Applications, where you use the Designer to create your extensions and the code is delivered and deployed to Fusion Applications. You haven't talked about JET yet though, Joe. What is that?

Joe: So, JET is an abbreviation. It stands for Oracle JavaScript Extension Toolkit, and JET is the underlying technology that makes Visual Builder, visual applications, and Visual Builder extensions for Fusion Applications possible. Oracle JavaScript Extension Toolkit is a module-based, open-source toolkit that leverages modern JavaScript, TypeScript, CSS3, and HTML5 to deliver web applications. It's targeted at JavaScript developers working on client-side applications; it is not for backend development. It's a collection of popular, powerful JavaScript libraries and a set of Oracle-contributed JavaScript libraries that make it very simple, easy, and efficient to build front-end applications that can consume and interact with Oracle products and services, especially Oracle Cloud services, though of course it can work with any type of third-party API.

17:44

Nikita: How are JET applications architected, Joe, and how does that relate to Visual Builder pages and flows?

Joe: The architecture of JET applications is what's called a single page architecture. We've all seen these. These are where you have a single webpage (think of your index page that provides the header and footer for your webpage), and then the middle content of the page, represented by modules, allows you to navigate from one page or module to another.
It also provides the data mapping, so the data elements in the variables, the state of the application, and the graphical user interface elements that provide the fields and functionality of the interface are all maintained on the client side. If you're working in pure JET, then you work with these modules at the raw JavaScript code level. And there are a lot of JavaScript developers who want to work like this and create their custom applications from the code up, so to speak. However, JET also provides the basis for Visual Builder visual applications and Fusion Apps visual extensions in Visual Builder.

18:41

Lois: How does JET support VB apps? You didn't talk much about having to write a bunch of JavaScript and HTML5, so I got the impression that this is all done for you by VB Designer?

Joe: Visual Builder applications are composed of HTML5, CSS3, and JavaScript code that is usually generated when the developer drags and drops components onto the Page Designer canvas, sets properties, or creates action chains to respond to events. But there's also a lot of JavaScript Object Notation (JSON) metadata created at the same time that describes the pages, the flows, the navigation, the REST services, the variables, their data types, and other assets needed for the app to function. This JSON metadata is translated at runtime by a large JET library called the Visual Builder Runtime, which runs in the browser and translates, in real time, the metadata and other assets in the Visual Builder source code into JET code and assets, which are what actually execute at runtime. It's very quick, very fast, very efficient, and it provides a layer of abstraction between the raw JET code and the Visual Builder architecture of pages, flows, action chains for executing code, and events to handle things that occur in the user interface, including saving the state in variables that are mapped to GUI components.
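To make that metadata-driven idea concrete, here is a toy sketch in plain JavaScript. The metadata shape and the function names are invented for illustration only; Visual Builder's real JSON format and the Visual Builder Runtime are far more sophisticated.

```javascript
// Hypothetical page metadata: the page is described as data, not code.
const pageMetadata = {
  title: "Expenses",
  components: [
    { type: "heading", text: "Expense Report" },
    { type: "inputText", id: "amount", label: "Amount" },
    { type: "button", label: "Submit" },
  ],
};

// A toy "runtime" that walks the metadata and emits markup, loosely
// analogous to a runtime translating metadata into live UI components.
function renderPage(meta) {
  const renderers = {
    heading: (c) => `<h1>${c.text}</h1>`,
    inputText: (c) => `<label>${c.label}<input id="${c.id}"></label>`,
    button: (c) => `<button>${c.label}</button>`,
  };
  return meta.components.map((c) => renderers[c.type](c)).join("\n");
}

const html = renderPage(pageMetadata);
// html now holds the markup for the heading, the input, and the button
```

The point of the pattern is that the page definition is data: change the JSON and the rendered UI changes, with no hand-written markup.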
For example, if you have an Input Text component, you need a variable to store the value that was entered into that component between page refreshes. The data can move from the Input Text component to the variable, and from the variable to that Input Text component if it's changed programmatically, for example. So, JET manages binding these data values between the variables and the UI components on the page: a change to a variable value or a change to the contents of the component causes the other to change automatically. Now, this is only a small part of what JET and the frameworks and libraries it uses do for applications. JET also provides more complex GUI components like lists, tables, selection lists, check boxes, and all the sorts of things you would expect in a modern GUI application.

20:37

Nikita: You mentioned a layer of abstraction between Visual Builder Studio Designer and JET. What's the benefit of working in Visual Builder Designer versus JET itself?

Joe: The benefit of Visual Builder is that you work at a higher level of abstraction, without having to get down into the more detailed levels of deep JavaScript code, modules, data mappings, HTML code, single page architecture navigation, and the related functionality. You can work at a higher, graphical level, where you drag and drop things onto a design canvas and set properties. The VB architecture insulates you from the more technical bits of JET. This frees the developer to concentrate on application and page design, implementing logic and business rules, and creating a pleasing workflow and look and feel for the user, and it keeps them from getting caught up in the details of making all this work at the code level.
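The two-way binding described here, where a variable and a UI component stay in sync, can be sketched with a tiny observable pattern in plain JavaScript. This is an invented illustration of the general technique, not the actual JET binding API:

```javascript
// Minimal observable: a function that reads/writes a value and notifies subscribers.
function observable(initial) {
  let value = initial;
  const subscribers = [];
  function accessor(newValue) {
    if (arguments.length === 0) return value; // read
    if (newValue !== value) {                 // write, then notify
      value = newValue;
      subscribers.forEach((fn) => fn(value));
    }
    return value;
  }
  accessor.subscribe = (fn) => subscribers.push(fn);
  return accessor;
}

// A page variable and a stand-in for an Input Text component.
const nameVariable = observable("");
const inputComponent = { displayed: "" };

// Binding layer: any change to the variable updates the component.
nameVariable.subscribe((v) => { inputComponent.displayed = v; });

// Programmatic change to the variable; the component updates automatically.
// (In a real framework, typing in the component would write the variable
// back through an event handler, giving the other direction of the binding.)
nameVariable("Ada");
console.log(inputComponent.displayed); // "Ada"
```

The design point is that neither side polls the other: the binding layer subscribes once, and updates propagate automatically on change.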
Now, if needed, you can write custom JavaScript, HTML5, and CSS3 code, though much less than in a JET app, and all of that is part of the VB application source, which becomes part of the code used by JET to execute the application itself. And yet it all works seamlessly together.

21:38

Lois: Joe, I know we have courses in JavaScript, HTML, and CSS. But does a developer getting ready to work in Visual Builder Designer have to go take those courses first, or can they start working in VB Designer right away?

Joe: Yeah, that question does often come up: do I need to learn JET to work with Visual Builder? No, you don't. That's all taken care of for you in the products themselves. I don't really think it helps that much to learn JET if you are going to be a VB developer. In some ways, it could even be a bit distracting, since some of the things you learn to do in JET you would have to unlearn, or not do, because VB does them for you. The things you would have to do manually in code in JET are done for you; this is why we call VB a low-code development tool. I mean, you certainly can learn JET if you want to, but I would spend more time learning about the different GUI components, page templates, and the Visual Builder architecture: events, action chains, and the data provider variables and types. Now, I know JET myself. I started with it before learning Visual Builder, but I use very little of my JET knowledge as a VB developer. Visual Builder Designer provides a nice, abstracted, clean layer of modern visual development on top of JET, leveraging the power and flexibility of JET while keeping the lower-level details out of my way.

22:49

Nikita: Joe, where can I go to get started with Visual Builder?

Joe: Well, for more information, I recommend you take a look at our Develop Fusion Applications course if you're working with Fusion Applications and Visual Builder Studio.
The other course is Develop Visual Applications with Visual Builder Studio, and that's if you're creating stand-alone bespoke applications. Both these courses are free. We also have a comprehensive course that covers JavaScript, HTML5, and CSS3, and while it's not required that you take it to be successful, it can be helpful down the road. I would also say that some basic knowledge of HTML5, CSS3, and JavaScript will certainly serve you well when working with Visual Builder, and you'll learn more as you go along and find that you need to create more sophisticated applications. I would also mention that a lot of the look and feel of Visual Builder visual applications and Fusion apps extensions and customizations comes through JET components, JET styles, JET variables, and CSS variables, so that's something you would want to pursue at some point. There's a JET Cookbook out there; search for Oracle JET and look for the JET Cookbook. It's a good introduction to all of that.

23:50

Nikita: We hope you enjoyed that conversation. To learn about some of the courses Joe mentioned, visit mylearn.oracle.com to get started.

Lois: Before we wrap up, we've got a favor to ask. We've created a short survey to capture your thoughts on the podcast. It'll only take a few minutes of your time. Just click the link in the show notes and share your feedback. We want to make sure we're delivering the best experience possible, so don't hesitate to let us know what's on your mind! Thanks for your support. Join us next week for another throwback episode. Until then, this is Lois Houston…

Nikita: And Nikita Abraham, signing off!

24:30

That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

WBSRocks: Business Growth with ERP and Digital Transformation
WBSP637: Grow Your Business by Understanding Oracle Cloud HCM's Capabilities, an Objective Panel Discussion

WBSRocks: Business Growth with ERP and Digital Transformation

Play Episode Listen Later Oct 21, 2024 58:00


In service-oriented industries, human capital management processes often take precedence over those centered on products, acknowledging the paramount importance of people as assets. While Oracle HCM once dominated the HCM market, the emergence of Workday posed a formidable challenge in the industry. Despite this, Oracle HCM maintains a substantial market share and continues to be a robust competitor to both Workday and SuccessFactors, especially among clients who rely on Oracle for their ERP requirements.

In today's episode, we invited a panel of industry experts for a live discussion on LinkedIn to conduct an independent review of Oracle Cloud HCM's capabilities. We covered a lot of ground, including where Oracle Cloud HCM might be a fit in the enterprise architecture and where it might be overused. Finally, the panel analyzes many data points to help understand the core strengths and weaknesses of Oracle Cloud HCM.

For more information on growth strategies for SMBs using ERP and digital transformation, visit our community at wbs.rocks or elevatiq.com. To ensure that you never miss an episode of the WBS podcast, subscribe on your favorite podcasting platform.

Cloud Wars Live with Bob Evans
Oracle's Bold Moves: Cloud Innovation, Global Expansion, and the Multi-Cloud Era with Mahesh Thiagarajan | Cloud Wars Live

Cloud Wars Live with Bob Evans

Play Episode Listen Later Oct 9, 2024 18:18


Oracle's Expanding Multi-Cloud Strategy

The Big Themes:

  • Oracle's unique cloud architecture: Oracle's differentiated cloud architecture offers customers flexibility across different environments, whether public, private, or government. This hybrid architecture ensures that customers can deploy the same services whether they need an on-premises solution, a fully isolated cloud region, or a multi-cloud setup. Oracle's approach also addresses the specific needs of regulated industries, like finance or healthcare.

  • Global expansion and investments: Oracle is heavily investing in expanding its cloud infrastructure across the globe, including in emerging markets like Saudi Arabia, Malaysia, and Latin America. These investments aim to meet growing local demand for cloud services while enhancing Oracle's global footprint. By establishing more localized data centers, Oracle can reduce latency for customers, ensure data sovereignty, and cater to regional regulatory requirements.

  • Multi-cloud partnerships: Oracle's partnerships with major cloud providers like Microsoft, Google Cloud, and AWS represent a shift in the cloud industry, as it was previously unheard of for these companies to collaborate. Oracle recognized early on that customers were using multi-cloud strategies, so it adapted by integrating with these other platforms. Its willingness to collaborate with competitors reflects its commitment to answering customers' needs.

The Big Quote: "It is very clear that customers drive our strategy . . . Oracle is kicking this multi-cloud era forward, an open era, if you will."

The NetSuite Podcast
Tips from Venture Capitalists on Fundraising Right Now

The NetSuite Podcast

Play Episode Listen Later Oct 7, 2024 40:53


Learn more about NetSuite's Business Grows Here event series: https://tinyurl.com/bdeabwr7   In this episode of the NetSuite Podcast, cohost Megan O'Brien sits down with JD Weinstein, Global Director of Oracle's Venture Capital Practice. He discusses the findings from a panel he moderated at NetSuite's Business Grows Here event stop in St. Louis [2:01]. They then play excerpts of the panel featuring Dan Conner, general partner at Ascend Venture Capital, and Craig Herron, managing principal at iSelect [8:50]. They discuss the advice they have for early-stage founders, including tripling the number of investors they reach out to and tripling the amount of time spent fundraising [15:46]. Dan and Craig cover the status of dry powder since its 2021 highs [27:37]. They conclude by sharing their top takeaways for founders [36:25].   Follow Us Here:   Business Grows Here: https://tinyurl.com/bdeabwr7   JD Weinstein LinkedIn: https://www.linkedin.com/in/jdweinstein/ Dan Conner LinkedIn: https://www.linkedin.com/in/danconner1/ Craig Herron LinkedIn: https://www.linkedin.com/in/craig-herron-3a2801/   Oracle NetSuite LinkedIn: https://social.ora.cl/6000wKFhC X (Twitter): https://social.ora.cl/6007wK2zD Instagram: https://social.ora.cl/6003wK2Hv Facebook: https://social.ora.cl/6005wK2Dv   #NetSuite #VentureCapital #Fundraising   ---------------------------------------------------------   Episode Transcript:   00;00;04;04 - 00;00;40;00 Hello everyone. Thank you so much for tuning in to the NetSuite Podcast. I'm Megan O'Brien, a co-host of the podcast. We have quite a unique episode in store for you all today. Recently, NetSuite has been hosting events in various different cities across the US called Business Grows Here. 
This tour is geared towards helping local entrepreneurs and business leaders discover strategies and tools essential for business expansion, as well as valuable insights on effectively managing all aspects of a growing business from cash flow to overall operations.   00;00;40;02 - 00;01;10;11 The events are tailored to the unique challenges and opportunities of each city and feature local leaders and visionaries. In the Saint Louis tour stop, one of the sessions that really stood out to me was a panel on the current venture capital landscape. It was moderated by JD Weinstein, global director of Oracle's venture capital practice, and featured Dan Conner, general partner at Ascend Venture Capital, and Craig Herron, managing principal at iSelect.   00;01;10;13 - 00;01;35;22 There's a lot of great insight in there around the market build back, what venture capitalists are looking for right now in companies, and how founders can increase their chances of getting funding. After hearing that, I knew I wanted to share the valuable insights with all of you, our wonderful listeners. With that, let's jump in, because you're not going to want to miss out on this episode.   00;01;35;24 - 00;02;02;02 You're listening to the NetSuite Podcast, where we discuss what's happening within NetSuite, why we're doing it, and where we're heading in the future. We'll dive into the details about the software and the people at NetSuite who are behind all the moving parts. We'll also feature customer growth stories discussing the ups and downs of running a company and how one integrated system can help your business continue to scale.   00;02;02;05 - 00;02;22;01 To kick us off, we have JD Weinstein, the global director of Oracle's venture capital practice, who moderated the panel. He joined us for a quick interview just to give an overview of the session and some of his key takeaways. Could you begin by telling our listeners a little bit about yourself and what you do for Oracle? Sure thing.   
00;02;22;03 - 00;02;56;26 My name is JD Weinstein. I joined Oracle just over six years ago and now lead our global venture practice. I've previously worked for various early stage accelerator programs and strategic or corporate venture funds to help entrepreneurs grow their businesses with special advantages. At Oracle, we work alongside VCs globally to help early stage portfolio companies scale with our cloud technology solutions, global customer network, and rich enterprise ecosystem.   00;02;56;28 - 00;03;22;25 So that starts with NetSuite and Oracle Cloud infrastructure, but extends to database to Java and our rich application suite. We also make strategic equity investments alongside our M&A function under our corporate development line of business. You were the moderator for a session at the St Louis Business Grows Here event called Raising Capital to Fuel Growth in an AI-Driven Era.   00;03;22;27 - 00;04;00;10 Could you give us an overview of the panel for all our listeners? Sure. We covered a good bit of ground here, starting with the state of the economy and what it means for venture and founders growing their businesses in this era. I had the pleasure to interview Craig Herron, the managing principal of iSelect, a venture fund focused on the agrifood supply chain and health care, and Dan Connor, a general partner at Ascend Venture Capital, who leads an early stage thematic VC specializing in data-centric companies.   00;04;00;12 - 00;04;30;13 We talked about the state of the economy and what it means, from rising interest rates, fewer public listings, valuation correction to other complex macro headwinds, and how it really translates to start up business building. And then how that has changed fundraising in this climate overall too. We spoke to what makes a great business venture backable. So what the general partners on stage look for in exceptional entrepreneurs.   
00;04;30;16 - 00;04;57;20 And then we also talked to tactical advice on just a general approach to fundraising and how to run a successful process. Hint: exactly like you would a sophisticated enterprise sales strategy. And then, of course, we concluded with the surge of AI capabilities and how we're going to be more productive with less. How that's impacted our industry. Why do you think this session was so important to include in our St Louis Business   00;04;57;20 - 00;05;23;27 Grows Here event? I mean, what is it about today's landscape that made it especially timely? Yeah, I think it's so important that we highlight the investment in commercial activity that's booming in the Midwest and specifically in Saint Louis and broader Missouri for this Business Grows Here event. Oftentimes we get this false perception of only venture activity buzzing on the coasts.   00;05;24;00 - 00;05;50;24 And while the majority of megarounds do happen there, at the earliest stages, we're seeing more and more data show the spread of entrepreneurial ecosystems emerging across this entire mid-continent. Steve Case and The Rise of the Rest phenomena, right? And so, with connectivity everywhere in the world, everybody has access now to build a great company. What was the highlight of the venture capital panel in Saint Louis for you?   00;05;50;25 - 00;06;24;24 Any particularly interesting thoughts you heard? You know, I can recall, I loved a quote that Dan pointed out in the panel, which was really just a description for founders to go back to the fundamentals that I see so many startups miss. Your customers are the most important stakeholders, period. Full stop. Without them, there is no business. So he describes a funny metaphor for saying they look for mission-critical businesses to invest in.   
00;06;24;28 - 00;06;49;03 And so, if a customer, you know, the example he gave was somebody's hair is on fire and you may be selling sandwiches, which could be the best in the world or best in town, but if someone's hair is on fire, they're probably not going to want a sandwich. A much better business would be, you know, leasing fire extinguishers or something else that drives mission criticality.   00;06;49;05 - 00;07;19;13 What are your thoughts on the venture capital landscape as a result of the panel? What did you leave with? I'm really bullish on the venture landscape, as I've always been, and believe that entrepreneurs have the chance to shape the world for the better while advancing humanity. In this particular time, especially when we look at, you know, other hard times in the economy, an astounding number of companies were created from the last '08 Recession.   00;07;19;15 - 00;07;48;29 WhatsApp, Venmo, Pinterest, Slack, Uber, Airbnb, the list goes on. Same thing happened after '01. And just less than half of Fortune 500 companies can actually trace back to being created in a crisis. And so why is that? People look for security, behaviors shift immensely, fear plays in. So the world becomes a pretty giant opportunity for entrepreneurs to take advantage of in these times.   00;07;49;02 - 00;08;14;17 That's such a great description. Kind of uplifting, and I love it. So, to end it, are there any best practices that you have for any listeners here that might be seeking funding right now or in the near future? You know, there's one insight that's always stuck with me profoundly, which is this: Investors invest in lines, not dots
And so each meeting that you have with an investor is a dot or a potential data point. And what investors are really looking for is to connect those dots. They're investing   00;08;40;25 - 00;09;19;24 in that connection, that's fantastic. Thank you so much for joining us, JD. I really appreciate it. Thanks, Megan. Enjoyed it. NetSuite by Oracle. The number one cloud financial system is everything you need to grow all in one place. Financials, inventory and more. Make better decisions faster so you can do more and spend less. See how at NetSuite.com/pod. With that, let's jump into the panel recording with JD, Dan Connor, general partner at Ascend Venture Capital, and Craig Herron, managing principal at iSelect.   00;09;19;27 - 00;09;47;13 So, you know, today's climate in venture it's been an interesting couple of years to say the least. So investors are dealing with, you know, the uncertainty of rising interest rates, fewer public listings and exits, valuation corrections, and a bunch of other complex macro trends. And so hopefully Dan and Craig will just distill this information, maybe give us some insights as to how to raise money in this climate.   00;09;47;15 - 00;10;15;20 I still strongly believe that there is a ton of upside to growing a business in today's world. And so a brief introduction. Maybe we can start with Craig Herron, who is the managing principal of iSelect and then we'll move on to Dan, who is the general partner at Ascend Venture Capital. I'd love for all just to give us brief introductions of who you are and your fund's focus and maybe your core thesis.   00;10;15;22 - 00;10;42;13 So iSelect Fund, we've deployed about $200 million over the last several years. We've invested in 78 companies. 65 of those are still active. Others, the others have exited. We invest in three areas: the agrifood supply chain, health and wellness with a focus on cardio, metabolic disease, and then food is health. 
How do we use nutrition to change the health care system?   00;10;42;15 - 00;11;07;14 Right. First, I'd like to say this is a great event, awesome job to everybody, the production team and all the staff. This is awesome. So I'm Dan Conner. I'm the founder and general partner of Ascend Venture Capital. We're a thematic VC, which means we're focused on a specific theme. Currently, the theme that we're investing under is this data-centric transformation that's happening across every industry.   00;11;07;16 - 00;11;30;00 The expectation that every decision should be made based on data. For instance, 15 years ago, when you would land on a plane, sometimes the pilot would come on the air and say, 'This plane was just landed on autopilot,' and it used to freak people out. But now if you got on the plane and the pilot said, 'I'm just gonna do this one by feel,' people would be terrified.   00;11;30;00 - 00;11;55;08 So that just signifies how much that shift has taken place. We're investing out of our third fund right now, and just making trouble in the venture capital industry. Awesome, we love trouble. So before we go on with a couple of questions, I'd love just a quick show of hands to see who we've got in the room, who are entrepreneurs growing small or early stage startups.   00;11;55;09 - 00;12;22;12 Can I get a show of hands? Traditional SMBs, if you identify, maybe over there? Great. Investors in the crowd? Okay. And then large enterprises, corporates? Okay. How about Cardinals fans? Okay. You guys aren't sleeping yet. Good. Great. So as we all know, many businesses hit an inflection point and need capital to fuel their growth.   00;12;22;12 - 00;12;44;14 But raising venture funding isn't always the most strategic decision. And so I'd love to hear from each of you: what actually makes a great business venture backable? And then maybe what are some qualities of exceptional entrepreneurs? Dan, if you'd like to kick us off. Yeah. 
So initially in our search, we're looking for three things. Number one, does it solve a mission-critical problem?   00;12;44;17 - 00;13;08;22 That is, is it a product solving a problem that ranks among the top three strategic priorities for the customer base? The example I give is: if your hair is on fire, I could try and make a case that our sandwiches are the best in town. You're probably going to need a sandwich at some point in the future, but it's much better to be in the business of leasing fire extinguisher services at that point.   00;13;08;24 - 00;13;35;23 So, mission criticality. Second, is the business transformative? If it works, does it change the whole face of an industry? We're not looking for a slightly better or slightly more socially conscious Uber; we're looking for something that changes the way that things are done in an entire industry. And thirdly, is it a unique value proposition? Are they solving the problem in a way that is completely different from how things are done today?   00;13;35;25 - 00;13;56;24 And are they the only one that's taken that approach? Those three things to me make a venture backable business. So I agree with everything Dan just said. I'd throw two more in there. So first of all, size of market, right? If you're going after a $100 million market, there's really just not enough room to scale there.   00;13;57;01 - 00;14;26;23 We're looking for markets that are, you know, potentially billions of dollars, so if you get a decent market share, there's a big opportunity for the company to scale and grow. And then second is team. You've got to have a venture backable team at that point in time. So typically we're looking for people that have come out of leading research institutions or, you know, have been entrepreneurs already once or,
00;14;26;25 - 00;14;57;10 you know, if they're in the ag business, there's been a career at Monsanto in a specific area, or they've been at Danforth or, you know, some of the other major centers in that given industry. And there's a really great metaphor that we like to share with a number of entrepreneurs that we work with, which is that as you go out to market and raise funding, investors often invest in lines, not dots.   00;14;57;12 - 00;15;31;03 And so what this means is that you may meet an investor on day one and you may walk out with a $10 million blank check. That typically doesn't happen, 99 percent of the time. But what does happen is typically investors are looking to invest in each of these data points. Every time that you meet an investor and you show growth in your company, or you communicate what's going on and how you are overcoming challenges, that progression of how you go through business is often what, you know, we look for in early stage companies.   00;15;31;10 - 00;15;46;03 And so if you think of that strategy, then you can line it up, you know, as you go to market: ‘I want to get as many of these lines in as possible.' And so start these relationships as early as you can and demonstrate growth over time.   00;15;46;05 - 00;16;06;19 So to that theme, I'd love to hear from each of you a little bit more about real tactical advice that you have for early stage founders. Often we get caught up with, you know, the glitz and glamor of venture capital, and they raise these $100 million rounds, and how did you do it?   00;16;06;19 - 00;16;39;04 And $100 million rounds. Yeah, exclamation point. But what if you're just going for an early, call it seed or series A stage, raise and you're kind of new to this practice? What best practices can you recommend? I'll jump in.
So there tend to be a lot of incubators and accelerators out there that are specifically focused on a given industry or segment.   00;16;39;04 - 00;17;12;03 And, you know, often if you've not been a successful entrepreneur previously, they provide a lot of great education. They have networks to introduce you to early adopter customers. And, you know, they can also introduce you to VCs, and they typically have VCs or angels that are interested in investing. I guess secondly, you know, it's great if you're coming with a customer in hand, right?   00;17;12;03 - 00;17;39;18 Because every VC wants to see some sort of traction, an actual proof point that this is going to work in the marketplace. And I guess I'd leave it at those two to start with. Absolutely. Yeah. So, three things I would add to that. Seed and series A have gotten extremely squirrely right now. It's very hard to engage investors.   00;17;39;20 - 00;18;07;05 The number of investors who actually have capital to invest has gone down since 2021, when a lot of folks raised funds and then were immediately unsuccessful at investing them. The best way to lose a small fortune is to start with a big one, and that's what's happened in venture capital recently. So a lot of people have gotten tighter with their funds, declining to invest in new companies at all versus doubling down on existing portfolio companies.   00;18;07;08 - 00;18;31;15 So seed and series A have gotten very squirrely. So the first piece of advice I would give is you've got to triple the number of investors on your list that you plan to reach out to. Triple it. And then triple the time that you plan to dedicate to fundraising, because it's a slog, and the number of times that you have to follow up with one investor to get a response has gotten a lot higher.
00;18;31;17 - 00;18;55;17 So that's what I would say: build a database of three times the number of investors that you're going to reach out to, and reach out to every single one, one by one. It takes hard work, but I know I'm not the only troublemaker out here, so I feel like that's something everybody is dedicated to. Secondly, as you have those calls, as you engage with those investors, you have to also be listening to what they're telling you:   00;18;55;23 - 00;19;24;12 ‘We invest in seed stage companies,' and how they define seed stage companies, or ‘We invest in series A companies, and, you know, you're too early for us.' As they say that to you, you do two things. One, say, ‘Okay, tell me your criteria,' so that you can take good notes on that call and keep that database. And then, if they're saying you're too early for them, say, ‘Would you mind if I put you on my investor newsletter to keep you updated in a lightweight way?'   00;19;24;14 - 00;19;55;01 You'd be surprised how few folks ask permission to put you on their newsletter, or actually follow up with newsletters. It sets you apart. It keeps you front of mind. And then when it comes time to raise that next round, series A or series B, you have a list of folks who have told you their criteria and that they will invest later on if you hit certain milestones, so that it's not the first call when you're reaching out to folks and saying, you know, ‘We're actively fundraising; we've got two weeks left to raise this round or else we're out of business.'   00;19;55;03 - 00;20;16;17 So those three things. First, triple the number of investors you're going to reach out to and the time you dedicate to fundraising. Number two, reach out to every single one and keep good notes on everybody. And three, maintain a monthly newsletter and drip campaign to everyone in that list you connected with.
00;20;16;19 - 00;20;39;25 To add something to what Dan just said, and I agree with all of that: there are a lot of tools out there that are available to you for free, you know, Crunchbase and other things that you can go out and find to do research on the VCs. It's helpful to do the research upfront. Figure out who actually could be investing in your space.   00;20;39;27 - 00;21;04;20 The other thing, from a very tactical standpoint, is that different VCs have different ways in which they source deal flow. So there are VCs out there where if you don't get in through a partner, right, there's no way you're going to get funded. Right? And so sometimes going in at a lower level, through an associate or otherwise, isn't going to get you where you need to be.   00;21;04;23 - 00;21;29;02 But there are other VCs like us where we make team decisions, and so, you know, coming in through any member of our team is perfectly fine. But you should try to figure that out going in, so you make sure you're trying to get a connection to the right person that's actually going to be able to take your deal through, you know, to an investment committee or otherwise.   00;21;29;05 - 00;21;49;15 Yeah, I'll give you one more hack, which is a really strong, warm introduction. It's often hard to get if you're not already sort of well-versed and connected within the networks. But often people think, hey, this investor can introduce me to other investors. A lot of times, right, that network runs strong and they all know one another.
And I'd venture to guess that you all are going to take that call probably 100% of the time if a company that you've invested in is, you know, telling you, ‘Hey, meet XYZ company.' And then, you know, to Dan's point, run this like you're running an enterprise sales operation. It is a numbers game at the end of the day.   00;22;25;25 - 00;22;47;19 And so it can be a slog, but with the right planning and persistence, you know, you will break through to get there. Go ahead. One other thing. When you figure out, you know, who's on your top list, like, ‘this is the perfect VC.
00;24;10;26 - 00;24;42;16 I'd like to ask a question. So of the folks who have talked to venture investors, how many of you have a story of an investor that's just acted in a thoughtless, like inappropriate or unprofessional manner, like ghosting, not showing up, any stories like that? Well, if you haven't, you will, because there's a lot of really, really subpar human behavior that's rampant in the venture industry, especially in the early stages.   00;24;42;18 - 00;25;10;19 So you need to have a hard shell and let's make all that, just to take it and then move on to the next. And then remember those stories for when you're when you're IPOing and you're having drinks with everybody in your team and telling those stories about those investors later on because it's a big problem. Yeah, so, on the other side of the equation, maybe you'll have like what major pitfalls should entrepreneurs avoid?   00;25;10;23 - 00;25;37;29 Is there some proactive insight you all can share to get ahead of some challenges before for the entrepreneurs run up against them? Yeah so first off, not dedicating time, a set period, where fundraising is your main focus. If you're if you're running an organizational sales campaign, you're not you're not just running an undefined period.   00;25;37;29 - 00;25;59;04 You're setting aside certain hours every day where you're tackling a certain number of investor reach outs, investor prospect reach outs, so that you can handle on a weekly basis going forward just week by week and reaching out to as many as you can, engaging as many as you can. That is, it's hard to do.   00;25;59;04 - 00;26;37;25 It's hard to carve out that amount of time in your schedule because you're doing 50 other things. I get it. I founded this firm in 2015. We've been scrambling ever since so setting aside that dedicated time and actually reaching out to literally hundreds of investors, literally hundreds, is something that it will set you apart. 
I don't know about Dan's firm but our firm biases against a CEO at an A or seed level, an entrepreneur hiring an investment banker or other representation, right?   00;26;37;25 - 00;27;06;09 We view it as the job of the CEO to go out and raise money, and that is, you know, we want to see that the CEOs putting in that kind of time, that kind of effort, etc., in order to do that, that role at that level, when you get to a C or D level, yeah, there might be more reason for a banker to get involved, but typically not at seed and A level.   00;27;06;12 - 00;27;26;12 Yeah. Otherwise, you need to make sure you're, you're prepared, right? You're going to, you're going to get a lot of questions. To Dan's point earlier, there are a lot of people out there who, you know, are can be not the nicest people in the middle of pitches. But these are not two of them. The only friendly faces on stage.   00;27;26;14 - 00;27;47;17 Yeah I actually I can honestly say that we've gotten into a lot of deals by not being jerks as opposed to kicked out of deals for the other reason but yeah. To that point, people do try all sorts of different mediums to raise funds because it is quite challenging right now. There's a lot less dry powder, it's called out there.   00;27;47;17 - 00;28;11;10 So since the venture investment dollars have gone down in both dollars and deal amounts since 2021 and ‘22 at the highs, you know, we're living in a new world. So just curious what you all have been seeing maybe in Missouri or more broadly with your funds. Is this impacted investment activity or deal flow for each of you?   00;28;11;13 - 00;28;41;25 We're actually still doing a decent amount of seed and A business. We'll do 6 to 10 seed and A rounds this year, which is pretty consistent with par in the past. In the past, we also were doing more B and C and later stage deals. All of that effort is now solely focused on our existing portfolio and, for that matter, a decent amount of the A businesses as well.   
00;28;41;27 - 00;29;09;11 And in terms of us prospecting, doing outreach, or even taking inbound for seed deals, we've cut back on the amount of time we have available for that, just because I've got issues to deal with in my existing portfolio that take precedence over putting new capital to work for new investments.   00;29;09;14 - 00;29;31;07 I mean, I'm sure Dan's going to echo the same thing, because that is pretty consistent across the board. You're seeing a lot of firms that used to do early stage stuff retrench, do later stage, and change their criteria. They're now growth investors looking for ten-plus million in revenue. So, yeah, it's definitely gotten a lot more challenging.   00;29;31;09 - 00;29;54;14 You know? Yeah. So actually, we're a kid in a candy store right now. During the pandemic, there were 60% more companies started every month than any time before in history. So for every three companies started prior to the pandemic, there were five started every month, just based on how the averages were working out. And that was sustained for three years.   00;29;54;16 - 00;30;19;04 So there are so many new companies right now that have been around for a couple of years and are now looking for seed and series A funding; it's immense. The number of companies that we used to be able to review every month just to keep up was about 300. Now we're scrambling to do 500 a month and we're not keeping up with the number of companies that are entering our search criteria.   00;30;19;07 - 00;30;38;23 We need to be doing more because there are just so many companies, and that's the most exciting time to be investing, because in these times of turmoil, people lose their jobs, or they leave their company because they're sick of it and want to go live on the beach. They bring the knowledge and the funding that they have to start a new idea.
00;30;38;27 - 00;31;04;06 And the most consequential companies get started in these times. Google was founded in the dot-com bust. Airbnb was founded in the global financial crisis. These iconic companies get started in times just like this. And we're scrambling to find those gems in that deal flow. So in Missouri, I guess I'll put it to you this way.   00;31;04;10 - 00;31;29;22 Can you tell me what geography has the monopoly on brilliant founders? Can anyone tell me what the geographic limitation of sending an email is? Can anyone tell me how much more difficult it is to send money from one place in the country to another outside of Missouri?   00;31;29;24 - 00;31;54;09 There's no limit on where you have to be to be able to start an iconic company. The talent pool is everywhere. Brilliant entrepreneurs start up everywhere in the country. Sam Altman's from Saint Louis. Taylor Swift's mom lives in Saint Louis. So there's no limit to where you can go with your company.   00;31;54;12 - 00;32;28;02 You can sell into the e-commerce market from anywhere. I think that's one thing that's changed as well, that folks have gotten a little more devious in terms of the revenue, in terms of the dollars they're bringing in. Every dollar of revenue is, by the math, an infinite-valuation fundraising dollar. And so get devious about new product lines, new revenue lines to actually fund your growth in the future: SBA loans, bank loans, grants.   00;32;28;02 - 00;32;51;12 I mean, there is funding outside of venture capital if that's what you need to drive growth. That's right, that's a really good point. We'll be sure to have an angel panel with Mrs. Swift and Sam Altman next time around here. You know, one more question to you both: given the surge of AI capabilities, it's clear that we're all going to be much more productive with less.
00;32;51;15 - 00;33;16;26 And so for founders and investors alike, it seems to be a trend: we're all moving back to the fundamentals of, you know, not growth at all costs, but maybe growth with efficiency or, dare I say, profitability. So I'm curious, how does this affect your thesis? Or, you know, have your expectations changed when you're meeting early stage companies?   00;33;16;28 - 00;33;35;17 So we're thesis driven, as I mentioned. The thing that we've been focused on since fund one is this data-centric paradigm shift that's been taking place. Every company needs to become a data company to stay competitive. I think that you can make the case that in the future every company is going to have to become an AI company to stay competitive.   00;33;35;19 - 00;33;56;08 But right now, we're in the middle of the hype cycle. So I'm not ready to stake a claim, stake a fund, on investing in that theme. But I think that in the future it may be. On the ground, I think that we're seeing a lot more, as you mentioned, efficiency and productivity coming from individuals in an organization.   00;33;56;10 - 00;34;19;02 And that has been a trend that's been happening for the last 60 years. Super producers are emerging, and finding those other super producers to join your team is a way to grow a company fast. It costs less to start a business. There are fewer people required to start a transformational business. OpenAI   00;34;19;02 - 00;34;48;25 was four people when they were valued at $5 billion. There is a time coming when there will be a single-person company worth $1B, just by the path of that trend. So maybe it's you. So even in our sectors, we are still investing a lot in AI-driven companies.   00;34;48;28 - 00;35;13;07 I'd say the first thing is you really need to make sure you're an AI-driven company and not, you know, just adding it to your name for the sake of it.
It's the hype cycle, as Dan said, because it's pretty quick to tell whether or not you're really an AI company, right? And the kiss of death is labeling yourself that when you're really not.   00;35;13;10 - 00;35;38;18 It really hasn't changed our theses or the way we look at things. You know, on the other side of that, in terms of shifting back toward fundamentals, I'd say certainly in our later stage portfolio, we are much more driven by, you know, how quickly we can get to cash flow. In the early stage portfolio,   00;35;38;18 - 00;36;12;10 there's still a little bit of leeway, but you'd better have a clear line of sight to customers and revenue, if not, preferably, already have customers, revenue, a minimal viable product, etc. You know, in general, bootstrapping or not raising venture funding leads to limited resources, which often leads to better decision making, which then leads to better outcomes.   00;36;12;12 - 00;36;39;21 Right. And so we see it all the time. Too many people get, you know, the $100 million round; you have more cash than you know what to do with. And so actually there is some truth to growing a real, sustainable business in this way too. So we're about out of time, but I wanted to give each of you an opportunity for a piece of parting advice you'd like to share with the entrepreneurs in the crowd that are going out to raise or maybe in the thick of it right now.   00;36;39;24 - 00;37;11;00 So I'd go again with early adopter customers, right? You know, every business needs a customer, so the faster you can get them, the quicker you can get them, it shows somebody you're trying to fundraise with that you've got traction, that the product is actually something somebody wants to buy. And it goes a long, long way towards proving out that whatever your invention or idea is, it's actually going to work.
00;37;11;03 - 00;37;53;19 So focus first on customers, you know, and then think, how do you go from there? Yeah, my advice would be: be bold, go boldly. The thing that binds us is this human element, that everybody we're interacting with should expect to be interacted with in a positive way. And if you can just get in front of that person the right way, think about where they are in their day, what they do on a day-to-day basis, and get in their shoes, whether it's a customer or an investor or a person who you're courting to join your team; make sure that you understand that person's   00;37;53;26 - 00;38;28;29 culture, their driving theses, and figure out how you can craft your story to get in front of that person. I talked about the entrance road to the global economy being wide open and unlimited, but the thing that we remain rooted in is our culture, the way that we do things as tribes, the knowledge that we share as individuals in a group; that is extremely important.   00;38;28;29 - 00;38;53;17 And remembering that as you go out and try to meet new customers and build relationships is crucial, and the only way to do that is to remember that human element. So be bold. That's great. I have the last word on this. So Doug Leone from Sequoia has a great quote where he says, ‘Architect your cap table like you would your product,' meaning,   00;38;53;19 - 00;39;15;21 you know, really just stressing the importance of choosing your early investors and partners. They're going to be going on a very long journey with you if this works out as successful. If it's not successful, you also want to make sure that you chose the right partners alongside that. The one caveat with that is the best deal is the check that gets signed.   00;39;15;21 - 00;39;35;17 So at the end of the day, you've got to do what you've got to do to grow your business. And anyway, so with that, thank you so much for being here.
I know we're just out of time, and thank you for your insights. Thank you. Thank you. Thanks, Oracle. Well, that wraps up another great episode.   00;39;35;20 - 00;40;02;21 One of the pieces of advice that I found most interesting from the panel was being realistic about how much effort it takes to raise money in today's environment, with founders needing to triple the number of investors they reach out to and triple the amount of time spent fundraising. It might be harder to get an investor on board right now, but, like JD said, some of the biggest companies have been built in times when investors weren't as willing to invest.   00;40;02;23 - 00;40;22;16 Thank you so much to JD Weinstein for not only moderating the panel but also taking the time to join us on the podcast. A big thanks as well to our panelists, Dan and Craig. If you want to learn more about NetSuite's Business Grows Here events, be sure to check out the link in our show notes to see if we're coming to a city near you.   00;40;22;18 - 00;40;43;24 As always, a big thanks to our wonderful editing team over at Oracle and to all of you for tuning in. If you want more episodes just like this one, make sure you subscribe to our channel and give us a rating and review. Until next time! You just listened to the NetSuite Podcast. Be sure to tune in every week for more   00;40;43;24 - 00;40;51;04 NetSuite developments, stories, and insights into the benefits of one integrated system to help you run your business.

Cables2Clouds
How to Become a Cloud Network Engineer

Cables2Clouds

Play Episode Listen Later Oct 2, 2024 48:07 Transcription Available


Curious about the evolution of network engineering from the 90s to the age of the cloud? Join us as we have an enlightening conversation with Kam, a seasoned expert with over twenty years in the field, now at the helm of cloud network engineering at Oracle Cloud. Kam walks us through the transformation from traditional network admin roles to today's specialized cloud network engineer, detailing how significant technological shifts and responsibilities have morphed over the decades. From the rise of Active Directory and WAN/LAN advancements to the game-changing impact of automation and private clouds, this episode offers a panoramic view of the journey and milestones that have paved the way for modern cloud networking.Unpack the essential skill sets that define a Cloud Network Engineer today in the second segment of our discussion. As we navigate through the layers of networking, discover why traditional physical layer concerns have become obsolete and how layer three configurations like BGP still hold critical importance. Learn about the nuanced responsibilities that cloud environments introduce, including the significance of load balancing and managing firewalls in a software-defined ecosystem. This is your chance to grasp the core competencies that are now indispensable for any aspiring cloud network professional.Finally, we shift gears to focus on practical advice for transitioning to cloud networking roles. Kam sheds light on the importance of a cloud-first mindset and shares strategies for adapting to multi-cloud environments and Kubernetes. Understand the necessity of continuous learning and certifications for staying competitive in this fast-evolving field, and gain insights into managing application performance and costs effectively. 
Whether you're just starting your cloud journey or looking to refine your expertise, this episode is packed with actionable tips and invaluable perspectives to help you thrive in cloud network engineering.
*Disclaimer* Kam's opinions expressed in this episode do not reflect the opinion of his employer.
Show Links:
@kagahian – Kam Agahian on Twitter
Kam Agahian on LinkedIn
Cloud Network Engineering: A Closer Look – NANOG (via YouTube)
How to Break into a Cloud Engineering Career? – via Packet Pushers
Check out the Fortnightly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on Twitter: https://twitter.com/cables2clouds
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
Art of Network Engineering (AONE): https://artofnetworkengineering.com

Cloud Wars Live with Bob Evans
Oracle's Mike Sicilia Explores AI-Driven Industry Cloud Transformations

Cloud Wars Live with Bob Evans

Play Episode Listen Later Sep 20, 2024 16:39


Oracle's Industry Cloud Innovations
The Big Themes:
AI integration in industry applications: AI, especially GenAI, is playing a significant role in transforming industry applications. Mike explains how AI technologies are enhancing customer experiences, such as in healthcare, where they automate routine tasks and improve patient interactions. This integration is expected to continue growing.
Shift toward Industry Suites: There is a clear shift toward adopting Industry Suites rather than piecemeal cloud solutions. Mike explains that Industry Suites, which integrate various technologies and services, are becoming more popular because they offer a cohesive solution and reduce the need for extensive regression testing and complex integrations. They help organizations maintain cloud benefits, such as innovation and security, without the drawbacks of fragmented systems.
Business transformation through technology: Technology has a huge impact on broader business transformation. Mike notes that what starts as an IT project can evolve into a comprehensive business transformation initiative. For instance, AI-driven offerings in healthcare are not only improving operational efficiency but also transforming business models and patient care.
The Big Quote: “Everybody's here to serve their customers. Patients are customers, people who check into hotels are customers, people who eat at restaurants are customers. Every CEO that I engage with says, 'This transformation helps me serve my customers better, and that's why I want to engage in it.'"

Datascape Podcast
Episode 79 - Recapping Oracle Cloud World 2024 Announcements with Simon Pane and Nelson Calero

Datascape Podcast

Play Episode Listen Later Sep 17, 2024 42:31


In this episode, Warner is joined by Oracle ACE Simon Pane and Oracle ACE Director Nelson Calero to discuss the latest and greatest announcements from the Oracle Cloud World 2024 conference. Major themes include the new multi-cloud services, the new Oracle Database 23ai product, and more!

Sales Game Changers | Tip-Filled  Conversations with Sales Leaders About Their Successful Careers
Kim Lynch's Mission is Bringing Oracle Cloud Solutions to Federal Agencies

Sales Game Changers | Tip-Filled Conversations with Sales Leaders About Their Successful Careers

Play Episode Listen Later Sep 10, 2024 23:38


This is episode 696. Read the complete transcription on the Sales Game Changers Podcast website. Register for the September 13 Women in Sales Leadership Elevation Conference here. Register for the IES Women in Sales Leadership Development programs here. Today's show featured an interview with Kim Lynch, Executive Vice President, Government Intelligence and Defense at Oracle. KIM'S ADVICE:  “Be in the market, be with your customers. I'm out there all the time. It's the best way to develop that customer trust, to be able to understand their challenges. Being with them, walking the hallways, it's so important, for all of us, the engagement with customers. You can't over engage. That's critical.”  

Telecom Reseller
Oracle continues to expand the reach of the Oracle Cloud Infrastructure (OCI) with the new AT&T IoT agreement, Podcast

Telecom Reseller

Play Episode Listen Later Sep 10, 2024 11:45


Oracle continues to expand the reach of Oracle Cloud Infrastructure (OCI) with the new AT&T IoT agreement. Andrew De La Torre, GVP of Technology, Oracle Communications, describes to Don Witt of The Channel Daily News, a TR Publication, how Oracle's fast OCI architecture will work well with the extensive AT&T IoT implementations. @Don Witt, Reporting LIVE for The Channel Daily News

The Oracle and AT&T announcement is a major IoT partnership in the communications community. The Oracle Enterprise Communication Platform (ECP) is effectively a horizontal platform that enables connected devices and real-time communications to be embedded into the industrial solutions that Oracle produces. This covers several very diverse industries, which can use embedded communication technology in their application development, allowing them to participate in real-time communications at Oracle's incredibly fast network speeds. Listen in to Andrew as he provides a very clear understanding of OCI and the security provided.

About: Oracle Cloud is the first public cloud built from the ground up to be a better cloud for every application. By rethinking core engineering and systems design for cloud computing, they created innovations that solve problems that customers have with existing public clouds. They accelerate migrations of existing enterprise workloads, deliver better reliability and performance for all applications, and offer the complete services customers need to build innovative cloud applications. There are six key reasons customers are choosing Oracle Cloud Infrastructure (OCI) for all their cloud workloads:
* Far easier to migrate critical enterprise workloads
* Everything you need to build modern cloud native applications
* Autonomous services automatically secure, tune, and scale your apps
* Oracle Cloud provides the most support for hybrid cloud strategies
* Their approach to security—built in, on by default, at no extra charge
* OCI offers superior price-performance

For more information, go to: Oracle

Oracle University Podcast
Time & Data Handling & Data Storage

Oracle University Podcast

Play Episode Listen Later Sep 3, 2024 15:19


In this episode, hosts Lois Houston and Nikita Abraham discuss improvements in time and data handling and data storage in Oracle Database 23ai. They are joined by Senior Principal Instructor Serge Moiseev, who explains the benefit of allowing databases to have their own time zones, separate from the host operating system. Serge also highlights two data storage improvements: Automatic SecureFiles Shrink, which optimizes disk space usage, and Automatic Storage Compression, which enhances database performance and efficiency. These features aim to reduce the reliance on DBAs and improve overall database management.   Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Principal Technical Editor with Oracle University, and joining me is Lois Houston, Director of Innovation Programs. Lois: Hi there! Over the past two weeks, we've delved into database sharding, exploring what it is, Oracle Database Sharding, its benefits, and architecture. We've also examined each new feature in Oracle Database 23ai related to sharding. If that sounds intriguing to you, make sure to check out those episodes. And just to remind you, even though most of you already know, 23ai was previously known as 23c. 01:04  Nikita: That's right, Lois. 
In today's episode, we're going to talk about the 23ai improvements in time and data handling and data storage with one of our Senior Principal Instructors at Oracle University, Serge Moiseev. Hi Serge! Thanks for joining us today. Let's start with time and data handling. I know there are two new changes here in 23ai: the enhanced time zone data upgrade and the improved system data and system timestamp data handling. What are some challenges associated with time zone data in databases? 01:37 Serge: Time zone definitions change from time to time due to legislative reasons. There are certain considerations. Changes include daylight savings time when we switch, include the activity that affects the Oracle Database time zone files.  Time zone files are modified and used by the administrators. Customers select the time zone file to use whenever it's appropriate. And customers can manage the upgrade whenever it happens.  The upgrades affect columns of type TIMESTAMP with TIME ZONE. Now, the upgrades can be online or offline.  02:24 Lois: And how have we optimized this process now?  Serge: Oracle Database 23c improves the upgrade by reducing the resources used, by selectively using the updates and minimizing the application impact. And only the data that has dependencies on the time zone would be impacted by the upgrade.  The optimization of the time zone file upgrade does not really change the upgrade process, so upgrade can be done offline. Database would be unavailable for a prolonged period of time, which is not optimal for today's database availability requirements.  Online upgrade, in this case, we want to minimize the application impact while the data is being upgraded. With the 23c database enhancement for time zone file change handling, the modified data is minimized, which means that the database updates only impacted rows. And it reduces the impact to the applications and other database operations.  
03:40 Nikita: Serge, how does updating only the impacted rows improve the efficiency of the upgrade process? Serge: The benefits of the enhanced time zone update include customers who manage a large fleet of databases. They will benefit tremendously from lower downtime. The DBAs will benefit due to the faster updates and less resource consumption needed to apply those updates. And that improves the efficiency of the update process.  Tables with no affected data are simply skipped and not touched. All this results in significant resource savings on the upgrade of the time zone files. It applies to all customers that utilize timestamp with time zone columns for their data storage.  04:32 Lois: Excellent! Now, what can you tell us about the improved system data and system timestamp data handling?  Serge: Date and time in Oracle databases depend on the system time as well as the database settings. System time now can be set as the local time zone for an individual database.  04:53 Nikita: How was it before this update? Serge: Before 23c, the time has always matched the time zone of the database host operating system. Now, imagine that we use either multitenant environments or cloud-based environments where the host OS system time zone is not really the same as the application that runs in a different geographic locality or affects data from other locations.  And system time obviously applies not only to the data stored and updated in the database rows, but also to the scheduler, to Flashback, to materialized view refresh, to Recovery Manager, and other time-sensitive features in the database itself.  Now, with the database time versus operating system time, there is a need to be more selective. It is desired that the applications use the same database time in the same time zone as the applications are actually being used in.  
And multitenant and cloud databases will certainly experience a mismatch between the host operating system time zone, which is not local for the applications that run in some other geographical locations or not recognizing some, for example, daylight savings time.  So migration challenge is obviously present. If you want to migrate from a specific on-premises database to either multitenant or cloud, you would experience the host operating system time zone by default.  06:38 Lois: And that's obviously not convenient for the applications, right? Serge: Well, the database-specific time in Oracle Database 23c, any cloud database can set local time zone to whatever the customer's requirements are explicitly. And any pluggable database can also set its own local time zone to customer's requirements, not inheriting the time zone from the container database it is currently running in.  This simplifies migration to multitenant or cloud for applications that are time-sensitive. And it offers more intuitive, easier database monitoring, and development.  07:23 Working towards an Oracle Certification this year? Take advantage of the Certification Prep live events in the Oracle University Learning Community. Get tips from OU experts and hear from others who have already taken their certifications. Once you're certified, you'll gain access to an exclusive forum for Oracle-certified users. What are you waiting for? Visit mylearn.oracle.com to get started.   07:51 Nikita: Welcome back! Let's move on to the data storage improvements. We have two updates here as well, automatic secure file shrink and automatic storage compression. Let's start with the first one. But before we get into it, Serge, can you explain what SecureFiles are?  Serge: SecureFiles are the default storage mechanism for large objects in Oracle Database. They are strongly recommended by Oracle to store and manage large object data.  The LOBs are stored in segments. 
Those segments may incur large amounts of free space over time. Because of the updates to the LOB data, the fragmentation of the space used is growing depending, of course, on the frequency and the scope of the updates.  The storage efficiency could be improved by shrinking segments with the free space removed. And manual secure files shrinking has become available since Oracle Database 21c, requiring administrators to perform these tasks manually.  Traditional SecureFiles required the time-consuming DBA activities. DBAs would need to manually identify eligible LOB segments either using Segment Advisor or PL/SQL or built-in database views.  Once identified, the administrators would manually execute shrink operations on very large LOBs which takes too much time and may result in excessive disk space consumption. For example, code to operate this shrinking would look like ALTER TABLE some table SHRINK SPACE CASCADE.  That would shrink all LOB segments in a particular table. If you want to scope the shrinking to a single column, the code would be required to ALTER TABLE some table MODIFY LOB, followed by the column name SHRINK SPACE.  This affects only a single column in a table with LOBs.  10:01 Lois: So, how has automatic secure shrinking made things better? Serge: Automatic SecureFile shrink removes the emphasis from the DBAs to manually perform these tasks. And it results in the more optimal use of space over time.  It is integrated into the automated database maintenance tasks. The automation once enabled runs every 30 minutes, collects eligible LOB segments, and shrinks them offline. The execution time and freed space would vary depending on the fragmentation and the size of the LOBs. Each shrink execution may reclaim up to 5 gigabytes of unused disk space from each LOB segment that is idle.  On the high level, automatic SecureFile shrink improves the Oracle Database 23c storage usage efficiency. 
It is part of the ongoing Oracle Database improvement effort and transparently reclaims the free space with negligible to no impact on performance of the database operations.  Again, this is done in the background without affecting the running processes. It makes Oracle database 23c less dependent on the DBA activities while reducing the disk space required to store SecureFiles, reducing the usage of LOB segments.  Automatic securefile shrink runs incrementally in small steps over time. Some of the features are tunable. And it is supported for all types of large objects, storage, compressed, encrypted, and duplicated the object segments.  11:50 Nikita: Right, and note that this feature is turned on out-of-the-box in the Autonomous Database 23ai in Oracle Cloud. Now, let's talk about Automatic Storage Compression, Serge.  Serge: With Automatic Storage Compression and Automatic Clustering, the storage compression gives you the background compression functionality. Directly loaded data is first uncompressed to speed up the actual load process. Rows are then moved into hybrid columnar compression format in the background asynchronously.  The automatic clustering applies advanced heuristic algorithms to cluster the stored data depending on the workload and data access patterns and the data access is optimized to more efficiently make use of database table indices, zone maps, and join zone maps.  Automatic Storage Compression advantages include the improvements to Oracle Database 23c storage efficiency as well. It is part of the continuous improvement, part of the ongoing Oracle Database improvement effort. And it brings performance gains, speeds up uncompressed data loads while compressing in the background.  The latencies to load and compress data are because of that also reduced. With the hybrid columnar compression in particular, this works in combination.  
And it results in less DBA activities, makes the Database Management less dependent on the DBA time and availability and effort.  Automatic Storage Compression performs operations asynchronously on the data that has already been loaded. To control Automatic Storage Compression on-premises, it must be enabled explicitly. And you have to have heatmap enabled on your Oracle Database objects.  Table must use hybrid columnar compression and be placed on the tablespace with the SEGMENT SPACE MANAGEMENT AUTO and allowing autoallocation. And this feature, again, is transparent for the Autonomous Database 23c in the Oracle Cloud. 14:21 Lois: Thanks for that quick rundown of the new features, Serge. We really appreciate you for taking us through them. To learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 14:50 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
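The statements Serge describes throughout the episode can be collected into a short SQL sketch. This is illustrative only: all object names (docs, payload, data_ts, sales) are placeholders, the database time zone can only be changed if no TIMESTAMP WITH LOCAL TIME ZONE columns exist, and Hybrid Columnar Compression requires supported storage (such as Exadata):

```sql
-- Database-local time zone (23ai lets each database or PDB set its
-- own zone, independent of the host OS); takes effect on restart.
ALTER DATABASE SET TIME_ZONE = 'America/New_York';

-- Manual SecureFiles shrink, as dictated in the episode:
-- shrink all LOB segments in a table (row movement must be enabled)...
ALTER TABLE docs ENABLE ROW MOVEMENT;
ALTER TABLE docs SHRINK SPACE CASCADE;
-- ...or only the LOB segment of a single column.
ALTER TABLE docs MODIFY LOB (payload) (SHRINK SPACE);

-- On-premises prerequisites for Automatic Storage Compression:
-- Heat Map must be enabled...
ALTER SYSTEM SET heat_map = ON;
-- ...and the table must use Hybrid Columnar Compression on a
-- tablespace with autoallocation and automatic segment space management.
CREATE TABLESPACE data_ts
  DATAFILE 'data_ts01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLE sales (id NUMBER, amount NUMBER)
  COMPRESS FOR QUERY HIGH
  TABLESPACE data_ts;
```

On Autonomous Database 23ai in Oracle Cloud, the shrink and compression automation is on by default, so the last two steps apply only to the on-premises case the episode calls out.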

Cloud Wars Live with Bob Evans
What to Expect from Oracle Cloud World, Workday Rising, HR Tech Conference, and SAP SuccessConnect 2024 | Tinder on Customers

Cloud Wars Live with Bob Evans

Play Episode Listen Later Aug 28, 2024 21:49


Episode 43 | Upcoming Conference Insights

The Big Themes:

Upcoming HR and tech conferences: Oracle CloudWorld will cover AI in Fusion Applications, Oracle's autonomous database, and cloud innovations, with keynotes from Safra Catz and Larry Ellison. Workday Rising will focus on employee empowerment, new global payroll offerings, and AI in HCM. The HR Technology Conference will explore AI solutions for HR functions and system integration. SAP SuccessConnect will showcase global human experience management, AI-enabled learning, and cloud migrations.

AI-driven decision-making and transformation of work practices: The conferences will spotlight how AI tools can enhance decision-making by analyzing data to predict business outcomes, while also transforming work practices. This transformation includes reducing administrative tasks and improving efficiency.

Interconnectedness of systems: The conferences will also feature a strong emphasis on breaking down organizational silos and improving system integration, especially among larger ERP vendors. The aim is to enhance visibility and streamline operations across different departments.

The Big Quote: “What are the partners doing to bring innovation to the forefront? How is [the] implementation of these products changing in light of AI and generative AI? Those would be the main takeaways that we're hoping to see from CloudWorld.”

Sales Game Changers | Tip-Filled  Conversations with Sales Leaders About Their Successful Careers
Watching the Top AI Vendors Emerge with Oracle Cloud Sales Leader Sara Chen

Sales Game Changers | Tip-Filled Conversations with Sales Leaders About Their Successful Careers

Play Episode Listen Later Aug 7, 2024 14:41


This is episode 688. Read the complete transcription on the Sales Game Changers Podcast website. This is a Sales Story and a Tip episode! Watch the video of the interview here. Read more about the Institute for Excellence in Sales Premier Women in Sales Employer (PWISE) designation and program here. Purchase Fred Diamond's best-sellers Love, Hope, Lyme: What Family Members, Partners, and Friends Who Love a Chronic Lyme Survivor Need to Know and Insights for Sales Game Changers now! Today's show featured an interview with Oracle Cloud Regional Sales Leader Sara Chen. SARA'S TIP: "The main takeaway from this experience is that even a large enterprise such as Oracle will pivot and point resources and strategic alignment toward initiatives that are ultimately valuable, even if they're not an existing focus for the company. If you work for a company that values that kind of innovation and the nature of emerging technology, it should be feasible to shift and take advantage of the brand, network, and resources that a large enterprise such as Oracle can provide, and ultimately mobilize quickly in a specific field or area that the market is trending toward, whether that's AI or something else."

Futurum Tech Podcast
It's Showtime! NetApp, Broadcom, AWS, Splunk + Oracle's Cloud Announcements - Infrastructure Matters, Episode 44

Futurum Tech Podcast

Play Episode Listen Later Jun 18, 2024 32:50


In this episode of Infrastructure Matters, hosts Camberley Bates, Steve Dickens, and Krista Macomber give the rundown on conferences and announcements from the Broadcom Mainframe Analyst event, NetApp's analyst event, and the Splunk and AWS re:Inforce conferences, plus announcements from Oracle. Key topics include:

* NetApp Analyst Event: NetApp hosted an event in New York, celebrating their recent success and introducing their new "intelligent data infrastructure" positioning, covering their focus on unified storage, managing data seamlessly across hybrid environments, and advancements in AI and data management.
* Broadcom Mainframe Analyst Event: Broadcom demonstrated their continued investment and growth in mainframe software, including new capabilities such as Watchtower, and their investment in the "vitality program" focused on training and hiring non-traditional candidates for high-paying system programming roles.
* AWS re:Inforce: This AWS event focused on cloud security, highlighting AWS's efforts to enhance their security infrastructure and customer tools, including AI models and new security features.
* Splunk Acquisition by Cisco: Splunk, recently acquired by Cisco, showcased their commitment to innovation and integration within Cisco's ecosystem, maintaining innovation momentum while integrating with Cisco's existing technologies like AppDynamics and ThousandEyes.
* Oracle Cloud and Google: Oracle and Google announced a collaboration to provide customers with interoperability between their respective cloud platforms.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We're writing this one day after the monster release of OpenAI's Sora and Gemini 1.5. We covered this on the ThursdAI space, so head over there for our takes.

IRL: We're ONE WEEK away from Latent Space: Final Frontiers, the second edition and anniversary of our first ever Latent Space event! Also: join us on June 25-27 for the biggest AI Engineer conference of the year!

Online: All three Discord clubs are thriving. Join us every Wednesday/Friday!

Almost 12 years ago, while working at Spotify, Erik Bernhardsson built one of the first open source vector databases, Annoy, based on ANN search. He also built Luigi, one of the predecessors to Airflow, which helps data teams orchestrate and execute data-intensive and long-running jobs. Surprisingly, he didn't start yet another vector database company, but instead in 2021 founded Modal, the “high-performance cloud for developers”. In 2022 they opened doors to developers after their seed round, and in 2023 announced their GA with a $16m Series A.

More importantly, they have won fans among both household names like Ramp, Scale AI, Substack, and Cohere, and newer startups like (upcoming guest!) Suno.ai and individual hackers (Modal was the top tool of choice in the Vercel AI Accelerator).

We've covered the nuances of GPU workloads, and how we need new developer tooling and runtimes for them (see our episodes with Chris Lattner of Modular and George Hotz of the tiny corp to start). In this episode, we run through the major limitations of the actual infrastructure behind the clouds that run these models, and how Erik envisions the “postmodern data stack”.

In his 2021 blog post “Software infrastructure 2.0: a wishlist”, Erik had “Truly serverless” as one of his points:

* The word cluster is an anachronism to an end-user in the cloud! I'm already running things in the cloud where there's elastic resources available at any time. Why do I have to think about the underlying pool of resources?
Just maintain it for me.
* I don't ever want to provision anything in advance of load.
* I don't want to pay for idle resources. Just let me pay for whatever resources I'm actually using.
* Serverless doesn't mean it's a burstable VM that saves its instance state to disk during periods of idle.

Swyx called this Self Provisioning Runtimes back in the day. Modal doesn't put you in YAML hell, preferring to colocate infra provisioning right next to the code that utilizes it, so you can just add GPU (and disk, and retries…).

After 3 years, we finally have a big market push for this: running inference on generative models is going to be the killer app for serverless, for a few reasons:

* AI models are stateless: even in conversational interfaces, each message generation is a fully-contained request to the LLM. There's no knowledge that is stored in the model itself between messages, which means that tear down / spin up of resources doesn't create any headaches with maintaining state.
* Token-based pricing is better aligned with serverless infrastructure than fixed monthly costs of traditional software.
* GPU scarcity makes it really expensive to have reserved instances that are available to you 24/7. It's much more convenient to build with a serverless-like infrastructure.

In the episode we covered a lot more topics like maximizing GPU utilization, why Oracle Cloud rocks, and how Erik has never owned a TV in his life.
Enjoy!

Show Notes
* Modal
* ErikBot
* Erik's Blog
* Software Infra 2.0 Wishlist
* Luigi
* Annoy
* Hetzner
* CoreWeave
* Cloudflare FaaS
* Poolside AI
* Modular Inference Engine

Chapters
* [00:00:00] Introductions
* [00:02:00] Erik's OSS work at Spotify: Annoy and Luigi
* [00:06:22] Starting Modal
* [00:07:54] Vision for a "postmodern data stack"
* [00:10:43] Solving container cold start problems
* [00:12:57] Designing Modal's Python SDK
* [00:15:18] Self-Provisioning Runtime
* [00:19:14] Truly Serverless Infrastructure
* [00:20:52] Beyond model inference
* [00:22:09] Tricks to maximize GPU utilization
* [00:26:27] Differences in AI and data science workloads
* [00:28:08] Modal vs Replicate vs Modular and lessons from Heroku's "graduation problem"
* [00:34:12] Creating Erik's clone "ErikBot"
* [00:37:43] Enabling massive parallelism across thousands of GPUs
* [00:39:45] The Modal Sandbox for agents
* [00:43:51] Thoughts on the AI Inference War
* [00:49:18] Erik's best tweets
* [00:51:57] Why buying hardware is a waste of money
* [00:54:18] Erik's competitive programming background
* [00:59:02] Why does Sweden have the best Counter-Strike players?
* [00:59:53] Never owning a car or TV
* [01:00:21] Advice for infrastructure startups

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:14]: Hey, and today we have in the studio Erik Bernhardsson from Modal. Welcome.

Erik [00:00:19]: Hi. It's awesome being here.

Swyx [00:00:20]: Yeah. Awesome seeing you in person. I've seen you online for a number of years as you were building on Modal and I think you're just making a San Francisco trip just to see people here, right? I've been to like two Modal events in San Francisco here.

Erik [00:00:34]: Yeah, that's right.
We're based in New York, so I figured sometimes I have to come out to capital of AI and make a presence.Swyx [00:00:40]: What do you think is the pros and cons of building in New York?Erik [00:00:45]: I mean, I never built anything elsewhere. I lived in New York the last 12 years. I love the city. Obviously, there's a lot more stuff going on here and there's a lot more customers and that's why I'm out here. I do feel like for me, where I am in life, I'm a very boring person. I kind of work hard and then I go home and hang out with my kids. I don't have time to go to events and meetups and stuff anyway. In that sense, New York is kind of nice. I walk to work every morning. It's like five minutes away from my apartment. It's very time efficient in that sense. Yeah.Swyx [00:01:10]: Yeah. It's also a good life. So we'll do a brief bio and then we'll talk about anything else that people should know about you. Actually, I was surprised to find out you're from Sweden. You went to college in KTH and your master's was in implementing a scalable music recommender system. Yeah.Erik [00:01:27]: I had no idea. Yeah. So I actually studied physics, but I grew up coding and I did a lot of programming competition and then as I was thinking about graduating, I got in touch with an obscure music streaming startup called Spotify, which was then like 30 people. And for some reason, I convinced them, why don't I just come and write a master's thesis with you and I'll do some cool collaborative filtering, despite not knowing anything about collaborative filtering really. But no one knew anything back then. So I spent six months at Spotify basically building a prototype of a music recommendation system and then turned that into a master's thesis. And then later when I graduated, I joined Spotify full time.Swyx [00:02:00]: So that was the start of your data career. You also wrote a couple of popular open source tooling while you were there. 
Is that correct?Erik [00:02:09]: No, that's right. I mean, I was at Spotify for seven years, so this is a long stint. And Spotify was a wild place early on and I mean, data space is also a wild place. I mean, it was like Hadoop cluster in the like foosball room on the floor. It was a lot of crude, like very basic infrastructure and I didn't know anything about it. And like I was hired to kind of figure out data stuff. And I started hacking on a recommendation system and then, you know, got sidetracked in a bunch of other stuff. I fixed a bunch of reporting things and set up A-B testing and started doing like business analytics and later got back to music recommendation system. And a lot of the infrastructure didn't really exist. Like there was like Hadoop back then, which is kind of bad and I don't miss it. But I spent a lot of time with that. As a part of that, I ended up building a workflow engine called Luigi, which is like briefly like somewhat like widely ended up being used by a bunch of companies. Sort of like, you know, kind of like Airflow, but like before Airflow. I think it did some things better, some things worse. I also built a vector database called Annoy, which is like for a while, it was actually quite widely used. In 2012, so it was like way before like all this like vector database stuff ended up happening. And funny enough, I was actually obsessed with like vectors back then. Like I was like, this is going to be huge. Like just give it like a few years. I didn't know it was going to take like nine years and then there's going to suddenly be like 20 startups doing vector databases in one year. So it did happen. In that sense, I was right. I'm glad I didn't start a startup in the vector database space. I would have started way too early. But yeah, that was, yeah, it was a fun seven years as part of it. It was a great culture, a great company.Swyx [00:03:32]: Yeah. 
Just to take a quick tangent on this vector database thing, because we probably won't revisit it but like, has anything architecturally changed in the last nine years?Erik [00:03:41]: I'm actually not following it like super closely. I think, you know, some of the best algorithms are still the same as like hierarchical navigable small world.Swyx [00:03:51]: Yeah. HNSW.Erik [00:03:52]: Exactly. I think now there's like product quantization, there's like some other stuff that I haven't really followed super closely. I mean, obviously, like back then it was like, you know, it's always like very simple. It's like a C++ library with Python bindings and you could mmap big files and into memory and like they had some lookups. I used like this kind of recursive, like hyperspace splitting strategy, which is not that good, but it sort of was good enough at that time. But I think a lot of like HNSW is still like what people generally use. Now of course, like databases are much better in the sense like to support like inserts and updates and stuff like that. I know I never supported that. Yeah, it's sort of exciting to finally see like vector databases becoming a thing.Swyx [00:04:30]: Yeah. Yeah. And then maybe one takeaway on most interesting lesson from Daniel Ek?Erik [00:04:36]: I mean, I think Daniel Ek, you know, he started Spotify very young. Like he was like 25, something like that. And that was like a good lesson. But like he, in a way, like I think he was a very good leader. Like there was never anything like, no scandals or like no, he wasn't very eccentric at all. It was just kind of like very like level headed, like just like ran the company very well, like never made any like obvious mistakes or I think it was like a few bets that maybe like in hindsight were like a little, you know, like took us, you know, too far in one direction or another. 
But overall, I mean, I think he was a great CEO, like definitely, you know, up there, like generational CEO, at least for like Swedish startups.Swyx [00:05:09]: Yeah, yeah, for sure. Okay, we should probably move to make our way towards Modal. So then you spent six years as CTO of Better. You were an early engineer and then you scaled up to like 300 engineers.Erik [00:05:21]: I joined as a CTO when there was like no tech team. And yeah, that was a wild chapter in my life. Like the company did very well for a while. And then like during the pandemic, yeah, it was kind of a weird story, but yeah, it kind of collapsed.Swyx [00:05:32]: Yeah, laid off people poorly.Erik [00:05:34]: Yeah, yeah. It was like a bunch of stories. Yeah. I mean, the company like grew from like 10 people when I joined at 10,000, now it's back to a thousand. But yeah, they actually went public a few months ago, kind of crazy. They're still around, like, you know, they're still, you know, doing stuff. So yeah, very kind of interesting six years of my life for non-technical reasons, like I managed like three, four hundred, but yeah, like learning a lot of that, like recruiting. I spent all my time recruiting and stuff like that. And so managing at scale, it's like nice, like now in a way, like when I'm building my own startup. It's actually something I like, don't feel nervous about at all. Like I've managed a scale, like I feel like I can do it again. It's like very different things that I'm nervous about as a startup founder. But yeah, I started Modal three years ago after sort of, after leaving Better, I took a little bit of time off during the pandemic and, but yeah, pretty quickly I was like, I got to build something. I just want to, you know. Yeah. And then yeah, Modal took form in my head, took shape.Swyx [00:06:22]: And as far as I understand, and maybe we can sort of trade off questions. So the quick history is started Modal in 2021, got your seed with Sarah from Amplify in 2022. 
You just announced your Series A with Redpoint. That's right. And that brings us up to mostly today. Yeah. Most people, I think, were expecting you to build for the data space. Erik: But it is the data space. Swyx: When I think of data space, I come from like, you know, Snowflake, BigQuery, you know, Fivetran, Nearby, that kind of stuff. And what Modal became is more general purpose than that. Yeah. Erik [00:06:53]: Yeah. I don't know. It was like fun. I actually ran into like Edo Liberty, the CEO of Pinecone, like a few weeks ago. And he was like, I was so afraid you were building a vector database. No, I started Modal because, you know, like in a way, like I've worked with data throughout most of my career, like every different part of the stack, right? Like everything from business analytics to like deep learning, you know, like building, you know, training neural networks at scale, like everything in between. And so one of the thoughts, like, and one of the observations I had when I started Modal or like why I started was like, I just wanted to build better tools for data teams. And like very, like sort of abstract thing, but like, I find that the data stack is, you know, full of like point solutions that don't integrate well. And still, when you look at like data teams today, you know, like every startup ends up building their own internal Kubernetes wrapper or whatever. And you know, all the different data engineers and machine learning engineers end up kind of struggling with the same things. So I started thinking about like, how do I build a new data stack, which is kind of a megalomaniac project, like, because you kind of want to like throw out everything and start over. Swyx [00:07:54]: It's almost a modern data stack. Erik [00:07:55]: Yeah, like a postmodern data stack. And so I started thinking about that. And a lot of it came from like, like more focused on like the human side of like, how do I make data teams more productive?
And like, what are the technology tools that they need? And like, you know, drew out a lot of charts of like, how the data stack looks, you know, what are different components. And workflow scheduling is actually very interesting, because it kind of sits in like a nice spot, you know, it's like a hub in the graph of like data products. But it was kind of hard to like, kind of do that in a vacuum, and also to monetize it to some extent. I got very interested in like the layers below at some point. And like, at the end of the day, like most people have code that has to run somewhere. So I think about like, okay, well, how do you make that nice? Like how do you make that? And in particular, like the thing I always like thought about, like developer productivity is like, I think the best way to measure developer productivity is like in terms of the feedback loops, like how quickly when you iterate, like when you write code, like how quickly can you get feedback. And at the innermost loop, it's like writing code and then running it. And like, as soon as you start working with the cloud, like it suddenly takes minutes, because you have to build a Docker container and push it to the cloud and like run it, you know. So that was like the initial focus for me was like, I just want to solve that problem. Like I want to, you know, build something that lets you run things in the cloud and like retain the sort of, you know, the joy of productivity you have when you're running things locally. And in particular, I was quite focused on data teams, because I think they had a couple unique needs that weren't well served by the infrastructure at that time, or like still aren't, in particular, like Kubernetes, I feel like it's like kind of worked okay for back end teams, but not so well for data teams. And very quickly, I got sucked into like a very deep like rabbit hole of like... Swyx [00:09:24]: Not well for data teams because of burstiness.
Yeah, for sure.Erik [00:09:26]: So like burstiness is like one thing, right? Like, you know, like you often have this like fan out, you want to like apply some function over very large data sets. Another thing tends to be like hardware requirements, like you need like GPUs and like, I've seen this in many companies, like you go, you know, data scientists go to a platform team and they're like, can we add GPUs to the Kubernetes? And they're like, no, like, that's, you know, complex, and we're not gonna, so like just getting GPU access. And then like, I mean, I also like data code, like frankly, or like machine learning code like tends to be like, super annoying in terms of like environments, like you end up having like a lot of like custom, like containers and like environment conflicts. And like, it's very hard to set up like a unified container that like can serve like a data scientist, because like, there's always like packages that break. And so I think there's a lot of different reasons why the technology wasn't well suited for back end. And I think the attitude at that time is often like, you know, like you had friction between the data team and the platform team, like, well, it works for the back end stuff, you know, why don't you just like, you know, make it work. But like, I actually felt like data teams, you know, or at this point now, like there's so much, so many people working with data, and like they, to some extent, like deserve their own tools and their own tool chains, and like optimizing for that is not something people have done. So that's, that's sort of like very abstract philosophical reason why I started Model. 
And then, and then I got sucked into this like rabbit hole of like container cold start and, you know, like whatever, Linux, page cache, you know, file system optimizations.Swyx [00:10:43]: Yeah, tell people, I think the first time I met you, I think you told me some numbers, but I don't remember, like, what are the main achievements that you were unhappy with the status quo? And then you built your own container stack?Erik [00:10:52]: Yeah, I mean, like, in particular, it was like, in order to have that loop, right? You want to be able to start, like take code on your laptop, whatever, and like run in the cloud very quickly, and like running in custom containers, and maybe like spin up like 100 containers, 1000, you know, things like that. And so container cold start was the initial like, from like a developer productivity point of view, it was like, really, what I was focusing on is, I want to take code, I want to stick it in container, I want to execute in the cloud, and like, you know, make it feel like fast. And when you look at like, how Docker works, for instance, like Docker, you have this like, fairly convoluted, like very resource inefficient way, they, you know, you build a container, you upload the whole container, and then you download it, and you run it. And Kubernetes is also like, not very fast at like starting containers. So like, I started kind of like, you know, going a layer deeper, like Docker is actually like, you know, there's like a couple of different primitives, but like a lower level primitive is run C, which is like a container runner. And I was like, what if I just take the container runner, like run C, and I point it to like my own root file system, and then I built like my own virtual file system that exposes files over a network instead. 
And that was like the sort of very crude version of model, it's like now I can actually start containers very quickly, because it turns out like when you start a Docker container, like, first of all, like most Docker images are like several gigabytes, and like 99% of that is never going to be consumed, like there's a bunch of like, you know, like timezone information for like Uzbekistan, like no one's going to read it. And then there's a very high overlap between the files are going to be read, there's going to be like lib torch or whatever, like it's going to be read. So you can also cache it very well. So that was like the first sort of stuff we started working on was like, let's build this like container file system. And you know, coupled with like, you know, just using run C directly. And that actually enabled us to like, get to this point of like, you write code, and then you can launch it in the cloud within like a second or two, like something like that. And you know, there's been many optimizations since then, but that was sort of starting point.Alessio [00:12:33]: Can we talk about the developer experience as well, I think one of the magic things about Modal is at the very basic layers, like a Python function decorator, it's just like stub and whatnot. But then you also have a way to define a full container, what were kind of the design decisions that went into it? Where did you start? How easy did you want it to be? And then maybe how much complexity did you then add on to make sure that every use case fit?Erik [00:12:57]: I mean, Modal, I almost feel like it's like almost like two products kind of glued together. Like there's like the low level like container runtime, like file system, all that stuff like in Rust. And then there's like the Python SDK, right? Like how do you express applications? 
And I think, I mean, Swix, like I think your blog was like the self-provisioning runtime was like, to me, always like to sort of, for me, like an eye-opening thing. It's like, so I didn't think about like...Swyx [00:13:15]: You wrote your post four months before me. Yeah? The software 2.0, Infra 2.0. Yeah.Erik [00:13:19]: Well, I don't know, like convergence of minds. I guess we were like both thinking. Maybe you put, I think, better words than like, you know, maybe something I was like thinking about for a long time. Yeah.Swyx [00:13:29]: And I can tell you how I was thinking about it on my end, but I want to hear you say it.Erik [00:13:32]: Yeah, yeah, I would love to. So to me, like what I always wanted to build was like, I don't know, like, I don't know if you use like Pulumi. Like Pulumi is like nice, like in the sense, like it's like Pulumi is like you describe infrastructure in code, right? And to me, that was like so nice. Like finally I can like, you know, put a for loop that creates S3 buckets or whatever. And I think like Modal sort of goes one step further in the sense that like, what if you also put the app code inside the infrastructure code and like glue it all together and then like you only have one single place that defines everything and it's all programmable. You don't have any config files. Like Modal has like zero config. There's no config. It's all code. And so that was like the goal that I wanted, like part of that. And then the other part was like, I often find that so much of like my time was spent on like the plumbing between containers. 
And so my thing was like, well, if I just build this like Python SDK and make it possible to like bridge like different containers, just like a function call, like, and I can say, oh, this function runs in this container and this other function runs in this container and I can just call it just like a normal function, then, you know, I can build these applications that may span a lot of different environments. Maybe they fan out, start other containers, but it's all just like inside Python. You just like have this beautiful kind of nice like DSL almost for like, you know, how to control infrastructure in the cloud. So that was sort of like how we ended up with the Python SDK as it is, which is still evolving all the time, by the way. We keep changing syntax quite a lot because I think it's still somewhat exploratory, but we're starting to converge on something that feels like reasonably good now. Swyx [00:14:54]: Yeah. And along the way you, with this expressiveness, you enabled the ability to, for example, attach a GPU to a function. Totally. Erik [00:15:02]: Yeah. It's like you just like say, you know, on the function decorator, you're like GPU equals, you know, A100, or like GPU equals A10 or T4 or something like that. And then you get that GPU and like, you know, you just run the code and it runs like you don't have to, you know, go through hoops to, you know, start an EC2 instance or whatever. Swyx [00:15:18]: Yeah. So it's all code. Yeah. So one of the reasons I wrote Self-Provisioning Runtimes was I was working at AWS and we had AWS CDK, which is kind of like, you know, the Amazon Basics Pulumi. Yeah, totally. And then it compiles to CloudFormation. Yeah. And then on the other side, you have to like get all the config stuff and then put it into your application code and make sure that they line up. So then you're writing code to define your infrastructure, then you're writing code to define your application.
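The "it's all code" pattern being discussed — a function decorator that attaches an image and a GPU, with fan-out as a method call — looks roughly like this in Modal's Python SDK (a sketch based on the Stub-era API of the period; exact names, entrypoints, and accepted GPU strings have changed across releases):

```python
import modal

stub = modal.Stub("example-app")  # newer SDK versions call this modal.App
image = modal.Image.debian_slim().pip_install("numpy")

@stub.function(image=image, gpu="A100")  # gpu="T4", "A10G", etc. also work
def score(prompt: str) -> int:
    # This body runs in its own container in the cloud with a GPU attached;
    # the image and hardware are declared in code, no Dockerfile, no YAML.
    return len(prompt)

@stub.local_entrypoint()
def main():
    print(score.remote("hello"))         # one remote call, like a normal function
    print(list(score.map(["a", "bb"])))  # fan out across parallel containers
```

Running `modal run app.py` executes `main` locally while `score` executes remotely — the application code and the infrastructure definition converge into one file, which is exactly the self-provisioning-runtime idea.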
And I was just like, this is like obvious that it's going to converge, right? Yeah, totally.Erik [00:15:48]: But isn't there like, it might be wrong, but like, was it like SAM or Chalice or one of those? Like, isn't that like an AWS thing that where actually they kind of did that? I feel like there's like one.Swyx [00:15:57]: SAM. Yeah. Still very clunky. It's not, not as elegant as modal.Erik [00:16:03]: I love AWS for like the stuff it's built, you know, like historically in order for me to like, you know, what it enables me to build, but like AWS is always like struggle with developer experience.Swyx [00:16:11]: I mean, they have to not break things.Erik [00:16:15]: Yeah. Yeah. And totally. And they have to build products for a very wide range of use cases. And I think that's hard.Swyx [00:16:21]: Yeah. Yeah. So it's, it's easier to design for. Yeah. So anyway, I was, I was pretty convinced that this, this would happen. I wrote, wrote that thing. And then, you know, I imagine my surprise that you guys had it on your landing page at some point. I think, I think Akshad was just like, just throw that in there.Erik [00:16:34]: Did you trademark it?Swyx [00:16:35]: No, I didn't. But I definitely got sent a few pitch decks with my post on there and it was like really interesting. This is my first time like kind of putting a name to a phenomenon. And I think this is a useful skill for people to just communicate what they're trying to do.Erik [00:16:48]: Yeah. No, I think it's a beautiful concept.Swyx [00:16:50]: Yeah. Yeah. Yeah. But I mean, obviously you implemented it. What became more clear in your explanation today is that actually you're not that tied to Python.Erik [00:16:57]: No. I mean, I, I think that all the like lower level stuff is, you know, just running containers and like scheduling things and, you know, serving container data and stuff. So like one of the benefits of data teams is obviously like they're all like using Python, right? 
And so that made it a lot easier. I think, you know, if we had focused on other workloads, like, you know, for various reasons, we've like been kind of like half thinking about like CI or like things like that. But like, in a way that's like harder because like you also, then you have to be like, you know, multiple SDKs, whereas, you know, focusing on data teams, you can only, you know, Python like covers like 95% of all teams. That made it a lot easier. But like, I mean, like definitely like in the future, we're going to have others support, like supporting other languages. JavaScript for sure is the obvious next language. But you know, who knows, like, you know, Rust, Go, R, whatever, PHP, Haskell, I don't know.Swyx [00:17:42]: You know, I think for me, I actually am a person who like kind of liked the idea of programming language advancements being improvements in developer experience. But all I saw out of the academic sort of PLT type people is just type level improvements. And I always think like, for me, like one of the core reasons for self-provisioning runtimes and then why I like Modal is like, this is actually a productivity increase, right? Like, it's a language level thing, you know, you managed to stick it on top of an existing language, but it is your own language, a DSL on top of Python. And so language level increase on the order of like automatic memory management. You know, you could sort of make that analogy that like, maybe you lose some level of control, but most of the time you're okay with whatever Modal gives you. And like, that's fine. Yeah.Erik [00:18:26]: Yeah. Yeah. I mean, that's how I look at about it too. 
Like, you know, you look at developer productivity over the last number of decades, like, you know, it's come in like small increments of like, you know, dynamic typing is like one thing, because suddenly for a lot of use cases you don't need to care about type systems, or better compiler technology, or like, you know, the cloud, or like, you know, relational databases. And, you know, you look at that history, it's steadily, you know, developers have been getting like probably 10X more productive every decade for the last four decades or something, which is kind of crazy. Like on an exponential scale, we're talking about like a 10,000X, you know, improvement in developer productivity. What we can build today, you know, is arguably like, you know, a fraction of the cost of what it took to build in the eighties. Maybe it wasn't even possible in the eighties. So that to me, like, that's like so fascinating. I think it's going to keep going for the next few decades. Yeah. Alessio [00:19:14]: Yeah. Another big thing in the Infra 2.0 wishlist was truly serverless infrastructure. On your landing page, you called them native cloud functions, something like that. I think the issue I've seen with serverless has always been people really wanted it to be stateful, even though stateless was much easier to do. And I think now with AI, most model inference is like stateless, you know, outside of the context. So that's kind of made it a lot easier to just put a model, like an AI model, on Modal to run. How do you think about how that changes how people think about infrastructure too? Yeah. Erik [00:19:48]: I mean, I think Modal is definitely going in the direction of like doing more stateful things and working with data and like high IO use cases.
I do think one like massive serendipitous thing that happened like halfway, you know, a year and a half into like the, you know, building model was like Gen AI started exploding and the IO pattern of Gen AI is like fits the serverless model like so well, because it's like, you know, you send this tiny piece of information, like a prompt, right, or something like that. And then like you have this GPU that does like trillions of flops, and then it sends back like a tiny piece of information, right. And that turns out to be something like, you know, if you can get serverless working with GPU, that just like works really well, right. So I think from that point of view, like serverless always to me felt like a little bit of like a solution looking for a problem. I don't actually like don't think like backend is like the problem that needs to serve it or like not as much. But I look at data and in particular, like things like Gen AI, like model inference, like it's like clearly a good fit. So I think that is, you know, to a large extent explains like why we saw, you know, the initial sort of like killer app for model being model inference, which actually wasn't like necessarily what we're focused on. But that's where we've seen like by far the most usage. Yeah.Swyx [00:20:52]: And this was before you started offering like fine tuning of language models, it was mostly stable diffusion. Yeah.Erik [00:20:59]: Yeah. I mean, like model, like I always built it to be a very general purpose compute platform, like something where you can run everything. And I used to call model like a better Kubernetes for data team for a long time. What we realized was like, yeah, that's like, you know, a year and a half in, like we barely had any users or any revenue. And like we were like, well, maybe we should look at like some use case, trying to think of use case. And that was around the same time stable diffusion came out. 
And the beauty of Modal is like you can run almost anything on Modal, right? Like model inference turned out to be like the place where we found initially, well, like clearly this has like 10x better ergonomics than anything else. But we're also like, you know, going back to my original vision, like we're thinking a lot about, you know, now, okay, now we do inference really well. Like what about training? What about fine tuning? What about, you know, end-to-end lifecycle deployment? What about data pre-processing? What about, you know, I don't know, real-time streaming? What about, you know, large data munging? There's also data observability. I think there's so many things, like kind of going back to what I said about like redefining the data stack, like starting with the foundation of compute. Like one of the exciting things about Modal is like we've sort of, you know, we've been working on that for three years and it's maturing, but like there are so many things you can do with just like a better compute primitive, and also go up the stack and like do all this other stuff on top of it. Alessio [00:22:09]: How do you think about, or rather like I would love to learn more about, the underlying infrastructure and like how you make that happen, because with fine tuning and training, it's static memory. Like you know exactly what you're going to load in memory, and it's kind of like a set amount of compute, versus inference, where the data is like very bursty. How do you make batches work with a serverless developer experience? You know, like what are like some fun technical challenges you solved to make sure you get max utilization on these GPUs? What we hear from people is like, we have GPUs, but we can really only get like, you know, 30, 40, 50% maybe utilization.
What's some of the fun stuff you're working on to get a higher number there?Erik [00:22:48]: Yeah, I think on the inference side, like that's where we like, you know, like from a cost perspective, like utilization perspective, we've seen, you know, like very good numbers and in particular, like it's our ability to start containers and stop containers very quickly. And that means that we can auto scale extremely fast and scale down very quickly, which means like we can always adjust the sort of capacity, the number of GPUs running to the exact traffic volume. And so in many cases, like that actually leads to a sort of interesting thing where like we obviously run our things on like the public cloud, like AWS GCP, we run on Oracle, but in many cases, like users who do inference on those platforms or those clouds, even though we charge a slightly higher price per GPU hour, a lot of users like moving their large scale inference use cases to model, they end up saving a lot of money because we only charge for like with the time the GPU is actually running. And that's a hard problem, right? Like, you know, if you have to constantly adjust the number of machines, if you have to start containers, stop containers, like that's a very hard problem. Starting containers quickly is a very difficult thing. I mentioned we had to build our own file system for this. We also, you know, built our own container scheduler for that. We've implemented recently CPU memory checkpointing so we can take running containers and snapshot the entire CPU, like including registers and everything, and restore it from that point, which means we can restore it from an initialized state. We're looking at GPU checkpointing next, it's like a very interesting thing. 
So I think with inference stuff, that's where serverless really shines because you can drive, you know, you can push the frontier of latency versus utilization quite substantially, you know, which either ends up being a latency advantage or a cost advantage or both, right? On training, it's probably arguably like less of an advantage doing serverless, frankly, because you know, you can just like spin up a bunch of machines and try to satisfy, like, you know, train as much as you can on each machine. For that area, like we've seen, like, you know, arguably like less usage, like for modal, but there are always like some interesting use case. Like we do have a couple of customers, like RAM, for instance, like they do fine tuning with modal and they basically like one of the patterns they have is like very bursty type fine tuning where they fine tune 100 models in parallel. And that's like a separate thing that modal does really well, right? Like you can, we can start up 100 containers very quickly, run a fine tuning training job on each one of them for that only runs for, I don't know, 10, 20 minutes. And then, you know, you can do hyper parameter tuning in that sense, like just pick the best model and things like that. So there are like interesting training. I think when you get to like training, like very large foundational models, that's a use case we don't support super well, because that's very high IO, you know, you need to have like infinite band and all these things. And those are things we haven't supported yet and might take a while to get to that. So that's like probably like an area where like we're relatively weak in. Yeah.Alessio [00:25:12]: Have you cared at all about lower level model optimization? There's other cloud providers that do custom kernels to get better performance or are you just given that you're not just an AI compute company? 
Yeah.Erik [00:25:24]: I mean, I think like we want to support like a generic, like general workloads in a sense that like we want users to give us a container essentially or a code or code. And then we want to run that. So I think, you know, we benefit from those things in the sense that like we can tell our users, you know, to use those things. But I don't know if we want to like poke into users containers and like do those things automatically. That's sort of, I think a little bit tricky from the outside to do, because we want to be able to take like arbitrary code and execute it. But certainly like, you know, we can tell our users to like use those things. Yeah.Swyx [00:25:53]: I may have betrayed my own biases because I don't really think about modal as for data teams anymore. I think you started, I think you're much more for AI engineers. My favorite anecdotes, which I think, you know, but I don't know if you directly experienced it. I went to the Vercel AI Accelerator, which you supported. And in the Vercel AI Accelerator, a bunch of startups gave like free credits and like signups and talks and all that stuff. The only ones that stuck are the ones that actually appealed to engineers. And the top usage, the top tool used by far was modal.Erik [00:26:24]: That's awesome.Swyx [00:26:25]: For people building with AI apps. Yeah.Erik [00:26:27]: I mean, it might be also like a terminology question, like the AI versus data, right? Like I've, you know, maybe I'm just like old and jaded, but like, I've seen so many like different titles, like for a while it was like, you know, I was a data scientist and a machine learning engineer and then, you know, there was like analytics engineers and there was like an AI engineer, you know? So like, to me, it's like, I just like in my head, that's to me just like, just data, like, or like engineer, you know, like I don't really, so that's why I've been like, you know, just calling it data teams. 
But like, of course, like, you know, AI is like, you know, like such a massive fraction of our like workloads.Swyx [00:26:59]: It's a different Venn diagram of things you do, right? So the stuff that you're talking about where you need like infinite bands for like highly parallel training, that's not, that's more of the ML engineer, that's more of the research scientist and less of the AI engineer, which is more sort of trying to put, work at the application.Erik [00:27:16]: Yeah. I mean, to be fair to it, like we have a lot of users that are like doing stuff that I don't think fits neatly into like AI. Like we have a lot of people using like modal for web scraping, like it's kind of nice. You can just like, you know, fire up like a hundred or a thousand containers running Chromium and just like render a bunch of webpages and it takes, you know, whatever. Or like, you know, protein folding is that, I mean, maybe that's, I don't know, like, but like, you know, we have a bunch of users doing that or, or like, you know, in terms of, in the realm of biotech, like sequence alignment, like people using, or like a couple of people using like modal to run like large, like mixed integer programming problems, like, you know, using Gurobi or like things like that. So video processing is another thing that keeps coming up, like, you know, let's say you have like petabytes of video and you want to just like transcode it, like, or you can fire up a lot of containers and just run FFmpeg or like, so there are those things too. Like, I mean, like that being said, like AI is by far our biggest use case, but you know, like, again, like modal is kind of general purpose in that sense.Swyx [00:28:08]: Yeah. Well, maybe I'll stick to the stable diffusion thing and then we'll move on to the other use cases for AI that you want to highlight. The other big player in my mind is replicate. Yeah. 
In this, in this era, they're much more, I guess, custom built for that purpose, whereas you're more general purpose. How do you position yourself with them? Are they just for like different audiences or are you just heads on competing?Erik [00:28:29]: I think there's like a tiny sliver of the Venn diagram where we're competitive. And then like 99% of the area we're not competitive. I mean, I think for people who, if you look at like front-end engineers, I think that's where like really they found good fit is like, you know, people who built some cool web app and they want some sort of AI capability and they just, you know, an off the shelf model is like perfect for them. That's like, I like use replicate. That's great. I think where we shine is like custom models or custom workflows, you know, running things at very large scale. We need to care about utilization, care about costs. You know, we have much lower prices because we spend a lot more time optimizing our infrastructure, you know, and that's where we're competitive, right? Like, you know, and you look at some of the use cases, like Suno is a big user, like they're running like large scale, like AI. Oh, we're talking with Mikey.Swyx [00:29:12]: Oh, that's great. Cool.Erik [00:29:14]: In a month. Yeah. So, I mean, they're, they're using model for like production infrastructure. Like they have their own like custom model, like custom code and custom weights, you know, for AI generated music, Suno.AI, you know, that, that, those are the types of use cases that we like, you know, things that are like very custom or like, it's like, you know, and those are the things like it's very hard to run and replicate, right? And that's fine. Like I think they, they focus on a very different part of the stack in that sense.Swyx [00:29:35]: And then the other company pattern that I pattern match you to is Modular. I don't know.Erik [00:29:40]: Because of the names?Swyx [00:29:41]: No, no. Wow. 
No, but yeah, yes, the name is very similar. I think there's something that might be insightful there from a linguistics point of view. Oh no, they have Mojo, the sort of Python SDK. And they have the Modular Inference Engine, which is their sort of their cloud stack, their sort of compute inference stack. I don't know if anyone's made that comparison to you before, but like I see you evolving a little bit in parallel there.Erik [00:30:01]: No, I mean, maybe. Yeah. Like it's not a company I'm like super like familiar, like, I mean, I know the basics, but like, I guess they're similar in the sense like they want to like do a lot of, you know, they have sort of big picture vision.Swyx [00:30:12]: Yes. They also want to build very general purpose. Yeah. So they're marketing themselves as like, if you want to do off the shelf stuff, go out, go somewhere else. If you want to do custom stuff, we're the best place to do it. Yeah. Yeah. There is some overlap there. There's not overlap in the sense that you are a closed source platform. People have to host their code on you. That's true. Whereas for them, they're very insistent on not running their own cloud service. They're a box software. Yeah. They're licensed software.Erik [00:30:37]: I'm sure their VCs at some point going to force them to reconsider. No, no.Swyx [00:30:40]: Chris is very, very insistent and very convincing. So anyway, I would just make that comparison, let people make the links if they want to. But it's an interesting way to see the cloud market develop from my point of view, because I came up in this field thinking cloud is one thing, and I think your vision is like something slightly different, and I see the different takes on it.Erik [00:31:00]: Yeah. And like one thing I've, you know, like I've written a bit about it in my blog too, it's like I think of us as like a second layer of cloud provider in the sense that like I think Snowflake is like kind of a good analogy. 
Like Snowflake, you know, is infrastructure as a service, right? But they actually run on the like major clouds, right? And I mean, like you can like analyze this very deeply, but like one of the things I always thought about is like, why does Snowflake arbitrarily like win over Redshift? And I think Snowflake, you know, to me, one, because like, I mean, in the end, like AWS makes all the money anyway, like and like Snowflake just had the ability to like focus on like developer experience or like, you know, user experience. And to me, like really proved that you can build a cloud provider, a layer up from, you know, the traditional like public clouds. And in that layer, that's also where I would put Modal, it's like, you know, we're building a cloud provider, like we're, you know, we're like a multi-tenant environment that runs the user code. But we're also building on top of the public cloud. So I think there's a lot of room in that space, I think is very sort of interesting direction.Alessio [00:31:55]: How do you think of that compared to the traditional past history, like, you know, you had AWS, then you had Heroku, then you had Render, Railway.Erik [00:32:04]: Yeah, I mean, I think those are all like great. I think the problem that they all faced was like the graduation problem, right? Like, you know, Heroku or like, I mean, like also like Heroku, there's like a counterfactual future of like, what would have happened if Salesforce didn't buy them, right? Like, that's a sort of separate thing. But like, I think what Heroku, I think always struggled with was like, eventually companies would get big enough that you couldn't really justify running in Heroku. So they would just go and like move it to, you know, whatever AWS or, you know, in particular. And you know, that's something that keeps me up at night too, like, what does that graduation risk like look like for modal? 
I always think like the only way to build a successful infrastructure company in the long run in the cloud today is you have to appeal to the entire spectrum, right? Or at least like the enterprise, like you have to capture the enterprise market. But the truly good companies capture the whole spectrum, right? Like I think of companies like, I don't know, like Datadog or Mongo or something that were like, they both captured like the hobbyists and acquired them, but also like, you know, have very large enterprise customers. I think that arguably was like where I, in my opinion, like Heroku struggled was like, how do you maintain the customers as they get more and more advanced? I don't know what the solution is, but I think there's, you know, that's something I would have thought about deeply if I was at Heroku at that time.Alessio [00:33:14]: What's the AI graduation problem? Is it, I need to fine tune the model, I need better economics, any insights from customer discussions?Erik [00:33:22]: Yeah, I mean, better economics, certainly. But although like, I would say like, even for people who like, you know, need like thousands of GPUs, just because we can drive utilization so much better, like we, there's actually like a cost advantage of staying on modal. But yeah, I mean, certainly like, you know, and like the fact that VCs like love, you know, throwing money, at least used to, you know, at companies who need it to buy GPUs. I think that didn't help the problem. And in training, I think, you know, there's less software differentiation. So in training, I think there's certainly like better economics of like buying big clusters. But I mean, my hope is it's going to change, right? Like I think, you know, we're still pretty early in the cycle of like building AI infrastructure. And I think a lot of these companies over the long run, like, you know, they're, except maybe the super big ones, like, you know, Facebook and Google, they're always going to build their own ones.
But like everyone else, to some extent, you know, I think they're better off like buying platforms. And, you know, someone's going to have to build those platforms.Swyx [00:34:12]: Yeah. Cool. Let's move on to language models and just specifically that workload just to flesh it out a little bit. You already said that Ramp is like fine tuning 100 models at once on modal. Closer to home, my favorite example is ErikBot. Maybe you want to tell that story.Erik [00:34:30]: Yeah. I mean, it was a prototype thing we built for fun, but it's pretty cool. Like we basically built this thing that hooks up to Slack. It like downloads all the Slack history and, you know, fine-tunes a model based on a person. And then you can chat with that. And so you can like, you know, clone yourself and like talk to yourself on Slack. I mean, it's like a nice like demo and it's just like, I think like it's like fully contained in modal. Like there's a modal app that does everything, right? Like it downloads Slack, you know, integrates with the Slack API, like downloads the stuff, the data, like just runs the fine-tuning and then like creates like dynamically an inference endpoint. And it's all like self-contained and like, you know, a few hundred lines of code. So I think it's sort of a good kind of use case for, or like it kind of demonstrates a lot of the capabilities of modal.Alessio [00:35:08]: Yeah. On a more personal side, how close did you feel ErikBot was to you?Erik [00:35:13]: It definitely captured like the language. Yeah. I mean, I don't know, like the content, I always feel this way about like AI and it's gotten better. Like when you look at like AI output of text, like, and it's like, when you glance at it, it's like, yeah, this seems really smart, you know, but then you actually like look a little bit deeper. It's like, what does this mean?Swyx [00:35:32]: What does this person say?Erik [00:35:33]: It's like kind of vacuous, right?
And that's like kind of what I felt like, you know, talking to like my clone version, like it's like says like things like the grammar is correct. Like some of the sentences make a lot of sense, but like, what are you trying to say? Like there's no content here. I don't know. I mean, it's like, I got that feeling also with ChatGPT in the like early versions. Right now it's like better, but.Alessio [00:35:51]: That's funny. So I built this thing called smol podcaster to automate a lot of our back office work, so to speak. And it's great at transcripts. It's great at doing chapters. And then I was like, okay, how about you come up with a short summary? And it's like, it sounds good, but it's like, it's not even the same ballpark as what we'd, yeah, end up writing. Right. And it's hard to see how it's going to get there.Swyx [00:36:11]: Oh, I have ideas.Erik [00:36:13]: I'm certain it's going to get there, but like, I agree with you. Right. And like, I have the same thing. I don't know if you've read like AI generated books. Like they just like kind of seem funny, right? Like there's something off, right? But like you glance at it and it's like, oh, it's kind of cool. Like looks correct, but then it's like very weird when you actually read them.Swyx [00:36:30]: Yeah. Well, so for what it's worth, I think anyone can join the modal Slack. Is it open to the public? Yeah, totally.Erik [00:36:35]: If you go to modal.com, there's a button in the footer.Swyx [00:36:38]: Yeah. And then you can talk to ErikBot. And then sometimes I really like pinging ErikBot and then you answer afterwards, but then you're like, yeah, mostly correct or whatever. Any other broader lessons, you know, just broadening out from like the single use case of fine tuning, like what are you seeing people do with fine tuning or just language models on modal in general?
Yeah.Erik [00:36:59]: I mean, I think language models is interesting because so many people get started with APIs and that's just, you know, they're just dominating a space in particular OpenAI, right? And that's not necessarily like a place where we aim to compete. I mean, maybe at some point, but like, it's just not like a core focus for us. And I think sort of separately, it's sort of a question of like, there's economics in that long term. But like, so we tend to focus on more like the areas like around it, right? Like fine tuning, like another use case we have is a bunch of people, Ramp included, are doing batch embeddings on modal. So let's say, you know, you have like a, actually we're like writing a blog post, like we take all of Wikipedia and like parallelize embeddings in 15 minutes and produce vectors for each article. So those types of use cases, I think modal suits really well for. I think also a lot of like custom inference, like yeah, I love that.Swyx [00:37:43]: Yeah. I think you should give people an idea of the order of magnitude of parallelism, because I think people don't understand how parallel it is. So like, I think your classic hello world with modal is like some kind of Fibonacci function, right? Yeah, we have a bunch of different ones. Some recursive function. Yeah.Erik [00:37:59]: Yeah. I mean, like, yeah, I mean, it's like pretty easy in modal, like fan out to like, you know, at least like 100 GPUs, like in a few seconds. And you know, if you give it like a couple of minutes, like we can, you know, you can fan out to like thousands of GPUs. Like we run at relatively large scale. And yeah, we've run, you know, many thousands of GPUs at certain points when we needed, you know, big backfills or some customers had very large compute needs.Swyx [00:38:21]: Yeah. Yeah. And I mean, that's super useful for a number of things. So one of my early interactions with modal as well was with smol developer, which is my sort of coding agent.
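The Wikipedia embedding job Erik mentions reduces to: split the corpus into batches, embed each batch independently, and concatenate the vectors in order. A hedged sketch with a dummy embedding function standing in for a real model (the names `embed_batch` and `embed_corpus` are illustrative, not Modal APIs):

```python
from concurrent.futures import ThreadPoolExecutor

def embed_batch(texts):
    # Dummy length-based 2-d "embedding". A real job would call an embedding
    # model on a GPU container, one batch per container.
    return [[float(len(t)), float(len(t.split()))] for t in texts]

def chunked(seq, size):
    # Split the corpus into independent batches.
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def embed_corpus(articles, batch_size=2):
    # Every batch is independent, so a platform can run them all at once;
    # pool.map yields results in submission order, so vectors line up with
    # the input articles.
    vectors = []
    with ThreadPoolExecutor() as pool:
        for batch_vecs in pool.map(embed_batch, chunked(articles, batch_size)):
            vectors.extend(batch_vecs)
    return vectors
```

With real batch sizes in the thousands and one container per batch, the 15-minute all-of-Wikipedia figure comes from the batches running concurrently rather than in sequence.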
The reason I chose modal was a number of things. One, I just wanted to try it out. I just had an excuse to try it. Akshay offered to onboard me personally. But the most interesting thing was that you could have that sort of local development experience as it was running on my laptop, but then it would seamlessly translate to a cloud service or like a cloud hosted environment. And then it could fan out with concurrency controls. So I could say like, because like, you know, the number of times I hit the GPT-3 API at the time was going to be subject to the rate limit. But I wanted to fan out without worrying about that kind of stuff. With modal, I can just kind of declare that in my config and that's it. Oh, like a concurrency limit?Erik [00:39:07]: Yeah. Yeah.Swyx [00:39:09]: Yeah. There's a lot of control. And that's why it's like, yeah, this is a pretty good use case for like writing this kind of LLM application code inside of this environment that just understands fan out and rate limiting natively. You don't actually have an exposed queue system, but you have it under the hood, you know, that kind of stuff. Totally.Erik [00:39:28]: It's a self-provisioning cloud.Swyx [00:39:30]: So the last part of modal I wanted to touch on, and obviously feel free, I know you're working on new features, was the sandbox that was introduced last year. And this is something that I think was inspired by Code Interpreter. You can tell me the longer history behind that.Erik [00:39:45]: Yeah. Like we originally built it for the use case, like there was a bunch of customers who looked into code generation applications and then they came to us and asked us, is there a safe way to execute code? And yeah, we spent a lot of time on like container security. We used gVisor, for instance, which is a Google product that provides pretty strong isolation of code.
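The concurrency limit Swyx describes above, fanning out against a rate-limited upstream API, amounts to a semaphore gating each call. A stdlib sketch of the concept (Modal declares this limit in configuration rather than in user code; `LimitedClient` is a hypothetical name, and the "API call" is a stub):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class LimitedClient:
    """Fan out many calls while never exceeding `limit` in flight."""

    def __init__(self, limit: int):
        self._gate = threading.Semaphore(limit)  # blocks callers past the limit
        self._lock = threading.Lock()
        self._in_flight = 0
        self.peak = 0  # highest observed concurrency, for demonstration

    def call(self, prompt: str) -> str:
        with self._gate:
            with self._lock:
                self._in_flight += 1
                self.peak = max(self.peak, self._in_flight)
            try:
                # Stub for a rate-limited upstream call (e.g. an LLM API).
                return f"completion for {prompt!r}"
            finally:
                with self._lock:
                    self._in_flight -= 1

def fan_out_limited(prompts, limit=3):
    # Many workers, but the semaphore caps how many calls run at once;
    # map preserves the order of prompts.
    client = LimitedClient(limit)
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(client.call, prompts))
    return results, client.peak
```

The point of the declarative version is that this bookkeeping disappears from application code entirely.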
So we built a product where you can basically like run arbitrary code inside a container and monitor its output or like get it back in a safe way. I mean, over time it's like evolved into more of like, I think the long-term direction is actually I think more interesting, which is that I think modal as a platform where like I think the core like container infrastructure we offer could actually be like, you know, unbundled from like the client SDK and offered to like other, you know, like we're talking to a couple of like other companies that want to run, you know, through their packages, like run, execute jobs on modal, like kind of programmatically. So that's actually the direction like Sandbox is going. It's like turning into more like a platform for platforms is kind of what I've been thinking about it as.Swyx [00:40:45]: Oh boy. Platform. That's the old Kubernetes line.Erik [00:40:48]: Yeah. Yeah. Yeah. But it's like, you know, like having that ability to like programmatically, you know, create containers and execute them, I think, I think is really cool. And I think it opens up a lot of interesting capabilities that are sort of separate from the like core Python SDK in modal. So I'm really excited about it. It's like one of those features that we kind of released and like, you know, then we kind of look at like what users actually build with it and people are starting to build like kind of crazy things. And then, you know, we double down on some of those things because when we see like, you know, potential new product features and so Sandbox, I think in that sense, it's like kind of in that direction.
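The sandbox shape Erik describes, submit code, capture its output, enforce a timeout, can be sketched locally with a subprocess. This is illustration only: a plain subprocess is not a security boundary, which is exactly why the real product runs code inside gVisor-isolated containers. `run_untrusted` is a hypothetical helper, not Modal's API:

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 5.0):
    """Run a Python snippet in a separate interpreter and capture its output.

    Sketches the programmatic loop only (submit code, get stdout/stderr,
    enforce a timeout). Real isolation requires a sandboxed runtime such
    as a gVisor container; a subprocess provides none.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return proc.returncode, proc.stdout, proc.stderr
    except subprocess.TimeoutExpired:
        # Runaway code is killed instead of hanging the caller.
        return None, "", "timed out"
```

A "platform for platforms" exposes roughly this contract over an API: callers hand over code and get back exit status and output, with the runtime, limits, and isolation handled underneath.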
We found a lot of like interesting use cases in the direction of like platformized container runner.Swyx [00:41:27]: Can you be more specific about what you're doubling down on after seeing users in action?Erik [00:41:32]: I mean, we're working with like some companies that, I mean, without getting into specifics like that, need the ability to take their users' code and then launch containers on modal. And it's not about security necessarily, like they just want to use modal as a back end, right? Like they may already provide like Kubernetes as a back end, Lambda as a back end, and now they want to add modal as a back end, right? And so, you know, they need a way to programmatically define jobs on behalf of their users and execute them. And so, I don't know, that's kind of abstract, but does that make sense? I totally get it.Swyx [00:42:03]: It's sort of one level of recursion to sort of be the Modal for their customers.Erik [00:42:09]: Exactly.Swyx [00:42:10]: Yeah, exactly. And Cloudflare has done this, you know, Kenton Varda from Cloudflare, who's like the tech lead on this thing, called it sort of functions as a service as a service.Erik [00:42:17]: Yeah, that's exactly right. FaaSaaS.Swyx [00:42:21]: FaaSaaS. Yeah, like, I mean, like that, I think any base layer, second layer cloud provider like yourself, compute provider like yourself should provide, you know, it's a mark of maturity and success that people just trust you to do that. They'd rather build on top of you than compete with you. The more interesting thing for me is like, what does it mean to serve a computer like an LLM developer, rather than a human developer, right? Like, that's what a sandbox is to me, that you have to redefine modal to serve a different non-human audience.Erik [00:42:51]: Yeah. Yeah, and I think there's some really interesting people, you know, building very cool things.Swyx [00:42:55]: Yeah.
So I don't have an answer, but, you know, I imagine things like, hey, the way you give feedback is different. Maybe you have to like stream errors, log errors differently. I don't really know. Yeah. Obviously, there's like safety considerations. Maybe you have an API to like restrict access to the web. Yeah. I don't think anyone would use it, but it's there if you want it.Erik [00:43:17]: Yeah.Swyx [00:43:18]: Yeah. Any other sort of design considerations? I have no idea.Erik [00:43:21]: With sandboxes?Swyx [00:43:22]: Yeah. Yeah.Erik [00:43:24]: Open-ended question here. Yeah. I mean, no, I think, yeah, the network restrictions, I think, make a lot of sense. Yeah. I mean, I think, you know, long-term, like, I think there's a lot of interesting use cases where like the LLM, in itself, can like decide, I want to install these packages and like run this thing. And like, obviously, for a lot of those use cases, like you want to have some sort of control that it doesn't like install malicious stuff and steal your secrets and things like that. But I think that's what's exciting about the sandbox primitive, is like it lets you do that in a relatively safe way.Alessio [00:43:51]: Do you have any thoughts on the inference wars? A lot of providers are just rushing to the bottom to get the lowest price per million tokens. Some of them, you know, the Sean Randomat, they're just losing money and there's like the physics of it just don't work out for them to make any money on it. How do you think about your pricing and like how much premium you can get and you can kind of command versus using lower prices as kind of like a wedge into getting there, especially once you have model instrumented? What are the tradeoffs and any thoughts on strategies that work?Erik [00:44:23]: I mean, we focus more on like custom models and custom code. And I think in that space, there's like less competition and I think we can have a pricing markup, right? 
Like, you know, people will always compare our prices to like, you know, the GPU power they can get elsewhere. And so how big can that markup be? Like it never can be, you know, we can never charge like 10x more, but we can certainly charge a premium. And like, you know, for that reason, like we can have pretty good margins. The LLM space is like the opposite, like the switching cost of LLMs is zero. If all you're doing is like straight up, like at least like open source, right? Like if all you're doing is like, you know, using some, you know, inference endpoint that serves an open source model and, you know, some other provider comes along and like offers a lower price, you're just going to switch, right? So I don't know, to me that reminds me a lot of like all these like 15-minute delivery wars or like, you know, like Uber versus Lyft, you know, and like maybe going back even further, like I think a lot about like sort of, you know, flip side of this is like, it's actually a positive side, which is like, I thought a lot about like the fiber optics boom of like 98, 99, like the other day, or like, you know, and also like the overinvestment in GPUs today. Like, like, yeah, like, you know, I don't know, like in the end, like, I don't think VCs will have the return they expected, like, you know, in these things, but guess who's going to benefit, like, you know, is the consumers, like someone's like reaping the value of this. And that's, I think an amazing flip side is that, you know, we should be very grateful, the fact that like VCs want to subsidize these things, which is, you know, like you go back to fiber optics, like there was an extreme, like overinvestment in fiber optics network in like 98. And no one made money who did that. But consumers, you know, got tremendous benefits of all the fiber optics cables that were laid, you know, throughout the country in the decades after. I feel something similar abou

Screaming in the Cloud

How Tailscale Builds for Users of All Tiers with Maya Kaczorowski

Dec 19, 2023 · 33:45

Maya Kaczorowski, Chief Product Officer at Tailscale, joins Corey on Screaming in the Cloud to discuss what sets the Tailscale product approach apart, for users of their free tier all the way to enterprise. Maya shares insight on how she evaluates feature requests, and how Tailscale's unique architecture sets them apart from competitors. Maya and Corey discuss the importance of transparency when building trust in security, as well as Tailscale's approach to new feature roll-outs and change management.

About Maya

Maya is the Chief Product Officer at Tailscale, providing secure networking for the long tail. She was most recently at GitHub in software supply chain security, and previously at Google working on container security, encryption at rest and encryption key management. Prior to Google, she was an Engagement Manager at McKinsey & Company, working in IT security for large enterprises. Maya completed her Master's in mathematics focusing on cryptography and game theory. She is bilingual in English and French. Outside of work, Maya is passionate about ice cream, puzzling, running, and reading nonfiction.

Links Referenced:

Tailscale: https://tailscale.com/
Tailscale features:
  VS Code extension: https://marketplace.visualstudio.com/items?itemName=tailscale.vscode-tailscale
  Tailscale SSH: https://tailscale.com/kb/1193/tailscale-ssh
  Tailnet lock: https://tailscale.com/kb/1226/tailnet-lock
  Auto updates: https://tailscale.com/kb/1067/update#auto-updates
  ACL tests: https://tailscale.com/kb/1018/acls#tests
  Kubernetes operator: https://tailscale.com/kb/1236/kubernetes-operator
  Log streaming: https://tailscale.com/kb/1255/log-streaming
Tailscale Security Bulletins: https://tailscale.com/security-bulletins
Blog post "How Our Free Plan Stays Free": https://tailscale.com/blog/free-plan
Tailscale on AWS Marketplace: https://aws.amazon.com/marketplace/pp/prodview-nd5zazsgvu6e6

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud
Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I am joined today on this promoted guest episode by my friends over at Tailscale. They have long been one of my favorite products just because it has dramatically changed the way that I interact with computers, which really should be enough to terrify anyone. My guest today is Maya Kaczorowski, Chief Product Officer at Tailscale. Maya, thanks for joining me.Maya: Thank you so much for having me.Corey: I have to say originally, I was a little surprised to—“Really? You're the CPO? I really thought I would have remembered that from the last time we hung out in person.” So, congratulations on the promotion.Maya: Thank you so much. Yeah, it's exciting.Corey: Being a product person is probably a great place to start with this because we've had a number of conversations, here and otherwise, around what Tailscale is and why it's awesome. I don't necessarily know that beating the drum of why it's so awesome is going to be covering new ground, but I'm sure we're going to come up for that during the conversation. Instead, I'd like to start by talking to you about just what a product person does in the context of building something that is incredibly central not just to critical path, but also has massive security ramifications as well, when positioning something that you're building for the enterprise. It's a very hard confluence of problems, and there are days I am astonished that enterprises can get things done based purely upon so much of the mitigation of what has to happen. Tell me about that. 
How do you even function given the tremendous vulnerability of the attack surface you're protecting?Maya: Yeah, I don't know if you—I feel like you're talking about the product, but also the sales cycle of talking [laugh] and working with enterprise customers.Corey: The product, the sales cycle, the marketing aspects of it, and—Maya: All of it.Corey: —it all ties together. It's different facets of frankly, the same problem.Maya: Yeah. I think that ultimately, this is about really understanding who the customer that is buying the product is. And I really mean that, like, buying the product, right? Because, like, look at something like Tailscale. We're typically used by engineers, or infrastructure teams in an organization, but the buyer might be the VP of Engineering, but it might be the CISO, or the CTO, or whatever, and they're going to have a set of requirements that's going to be very different from what the end-user has as a set of requirements, so even if you have something like bottom-up adoption, in our case, like, understanding and making sure we're checking all the boxes that somebody needs to actually bring us to work.Enterprises are incredibly demanding, and to your point, have long checklists of what they need as part of an RFP or that kind of thing. I find that some of the strictest requirements tend to be in security. So like, how—to your point—if we're such a critical part of your network, how are you sure that we're always available, or how are you sure that if we're compromised, you're not compromised, and providing a lot of, like, assurances and controls around making sure that that's not the case.Corey: I think that there's a challenge in that what enterprise means to different people can be wildly divergent. I originally came from the school of obnoxious engineering where oh, as an engineer, whenever I say something is enterprise grade, that's not a compliment. That means it's going to be slow and moribund. 
But that is a natural consequence of a company's growth after achieving success, where okay, now we have actual obligations to customers and risk mitigation that needs to be addressed. And how do you wind up doing that without completely hobbling yourself when it comes to accelerating feature velocity? It's a very delicate balancing act.Maya: Yeah, for sure. And I think you need to balance, to your point, kind of creating demand for the product—like, it's actually solving the problem that the customer has—versus checking boxes. Like, I think about them as features, or you know, feature requests versus feature blockers or deal blockers or adoption blockers. So, somebody wants to, say, connect to an AWS VPC, but then the person who has to make sure that that's actually rolled out properly also wants audit logs and SSH session recording and RBAC-based controls and lots of other things before they're comfortable deploying that in their environment. And I'm not even talking about the list of, you know, legal, kind of, TOS requirements that they would have for that kind of situation.I think there's a couple of things that you need to do to even signal that you're in that space. One of the things that I was—I was talking to a friend of mine the other day how it feels like five years ago, like, nobody had SOC 2 reports, or very few startups had SOC 2 reports. And it's probably because of the advent of some of these other companies in this space, but like, now you can kind of throw a dart, and you'll hit five startups that have SOC 2 reports, and the amount that you need to show that you're ready to sell to these companies has changed.Corey: I think that there's a definite broadening of the use case. And I've been trying to avoid it, but let's go diving right into it. I used to view Tailscale as, oh it's a VPN. The end. 
Then it became something more where it effectively became the mesh overlay where all of the various things that I have that speak Tailscale—which is frankly, a disturbing number of things that I'd previously considered to be appliances—all talk to one another over a dedicated network, and as a result, can do really neat things where I don't have to spend hours on end configuring weird firewall rules.It's more secure, it's a lot simpler, and it seems like every time I get that understanding down, you folks do something that causes me to yet again reevaluate where you stand. Most recently, I was doing something horrifying in front-end work, and in VS Code the Tailscale extension popped up. “Oh, it looks like you're running a local development server. Would you like to use Tailscale Funnel to make it available to the internet?” And my response to that is, “Good lord, no, I'm ashamed of it, but thanks for asking.” Every time I think I get it, I have to reevaluate where it stands in the ecosystem. What is Tailscale now? I feel like I should get the official description of what you are.Maya: Well, I sure hope I'm not the official description. I think the closest is a little bit of what you're saying: a mesh overlay network for your infrastructure, or a programmable network that lets you mesh together your users and services and services and services, no matter where they are, including across different infrastructure providers and, to your point, on a long list of devices you might have running. People are running Tailscale on self-driving cars, on robots, on satellites, on elevators, but they're also running Tailscale on Linux running in AWS or a MacBook they have sitting under their desk or whatever it happens to be. The phrase that I like to use for that is, like, infrastructure agnostic. We're just a building block.Your infrastructure can be whatever infrastructure you want. 
You can have the cheapest GPUs from this cloud, or you can use the Android phone to train the model that you have sitting on your desk. We just help you connect all that stuff together so you can build your own cloud whatever way you want. To your point, that's not really a VPN [laugh]. The word VPN doesn't quite do it justice. For the remote access to prod use case, so like a user, specifically, like, a developer infra team to a production network, that probably looks the most like a zero-trust solution, but we kind of blur a lot of the lines there for what we can do.Corey: Yeah, just looking at it, at the moment, I have a bunch of Raspberries Pi, perhaps, hanging out on my tailnet. I have currently 14 machines on there, I have my NAS downstairs, I have a couple of EC2 instances, a Google Cloud instance, somewhere, I finally shut down my old Oracle Cloud instance, my pfSense box speaks it natively. I have a Thinkst Canary hanging out on there to detect if anything starts going ridiculously weird, my phone, my iPad, and a few other things here and there. And they all just talk seamlessly over the same network. I can identify them via either IP address, if I'm old, or via DNS if I want to introduce problems that will surprise me at one point or another down the road.I mean, I even have an exit node I share with my brother's Tailscale account for reasons that most people would not expect, namely that he is an American who lives abroad. So, many weird services like banks or whatnot, “Oh, you can't log in to check your bank unless you're coming from US IP space.” He clicks a button, boom, now he doesn't get yelled at to check his own accounts. Which is probably not the primary use case you'd slap on your website, but it's one of those solving everyday things in somewhat weird ways.Maya: Oh, yeah. 
I worked at a bank maybe ten years ago, and they would block—this little bank on the east coast of the US—they would block connections from Hawaii because why would any of your customers ever be in Hawaii? And it was like, people travel and maybe you're—

Corey: How can you be in Hawaii? You don't have a passport.

Maya: [laugh]. People travel. They still need to do banking. Like, it doesn't change, yeah. The internet, we've built a lot of weird controls that are IP-based, that don't really make any sense, that aren't reflective. And like, that's true for individuals—like you're describing, people who travel and need to bank or whatever they need to do when they travel—and for corporations, right? Like the old concept—this is all back to the zero trust stuff—but like, the old concept that you were trusted just because you had an IP address that was in the corp IP range is just not true anymore, right? Somebody can walk into your office and connect to the Wi-Fi, and a legitimate employee can be doing their job from home or from Starbucks, right? Those are acceptable ways to work nowadays.

Corey: One other thing that I wanted to talk about is, I know that in previous discussions with you folks—sometimes on the podcast, sometimes when I more or less corner someone at Tailscale at your developer conference—one of the things that you folks talk about is Tailscale SSH, which is effectively a drop-in replacement for the SSH binary on systems. Full disclosure, I don't use it, mostly because I'm grumpy and I'm old. I also like having some form of separation of duties where you're the network that ties it all together, but something else winds up acting as that authentication step. That said, if I were that interesting that someone wanted to come after me, there are easier ways to get in, so I'm mostly just doing this because I'm persnickety.
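Tailscale SSH, as Corey describes it here, trades SSH key management for tailnet identity; a rough usage sketch follows (flags as in recent versions of the Tailscale CLI; the host and user names are hypothetical, so verify the exact invocation against your installed version):

```shell
# On the host: advertise Tailscale SSH, so the Tailscale daemon
# accepts inbound SSH over the tailnet using your WireGuard identity
sudo tailscale up --ssh

# From any other device on the tailnet: plain ssh, no keys to manage;
# who may connect is governed by the tailnet policy file, not authorized_keys
ssh ubuntu@my-host
```

The separation-of-duties concern Corey raises is exactly what this collapses: the network layer and the authentication step become the same system.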
Are you seeing significant adoption of Tailscale SSH?

Maya: I think there's a couple of features that are missing in Tailscale SSH for it to be as adopted by people like you. The main one that I would say is—so right now, if you use Tailscale SSH, it runs a binary on the host, and you can use your Tailscale credentials, and your Tailscale private key, effectively, to SSH into something else. So, you don't have to manage a separate set of SSH keys or certs or whatever it is you want to do to manage that in your network. Your identity provider identity is tied to Tailscale, and then when you connect to that device, we still need to have an identity on the host itself, like in Unix. Right now, that's not tied to Tailscale. You can adopt an identity of something else that's already on the host, but it's not, like, corey@machine.

And I think that's the number one request that we're getting for Tailscale SSH: to be able to actually generate, or tie to, the individual users on the host for an identity that comes from, like, Google, or GitHub, or Okta, or something like that. I'm not hearing a lot of feedback on the security concerns that you're expressing. I think part of that is that we've done a lot of work around security in general so that you feel like if Tailscale were to be compromised, your network wouldn't need to be compromised. So, Tailscale itself is end-to-end encrypted using WireGuard. We only see your public keys; the private keys remain on the device.

So, in some sense, the, like, quote-unquote, “worst” that we could do would be to add a node to your network and then start to generate traffic from that, or, like, mess with the configuration of your network. These are questions that have come up. In terms of adding nodes to your network, we have a feature called tailnet lock that effectively lets you sign and verify that all the nodes on your network are supposed to be there. One of the other concerns that I've heard come up is, like, what if the binary was compromised.
We develop in open-source, so you can see that that's the case, but, like, you know, there's certainly more stuff we could be doing there to prevent, for example, a software supply chain security attack. Yeah.

Corey: Yeah, but you also have taken significant architectural steps to ensure that you are not placed in a position of undue trust around a lot of these things. Most recently, you raised a Series B that was $100 million, and the fact that you have not gone bankrupt in the year since that happened tells me that you are very clearly not routing all customer traffic through you folks, at least on one of the major cloud providers. And in fact, a little bit of playing slap-and-tickle with Wireshark affirms this: the nodes talk to each other; they do not route their traffic through you folks, by design. So one, great for the budget, I have respect for that data transfer pattern, but also it means that you are not in the position of being a global observer in a way that can be, in many cases, exploited.

Maya: I think that's absolutely correct. So, it was 18 months ago or so that we raised our Series B. When you use Tailscale, your traffic connects peer-to-peer, directly between nodes on your network. And that has a couple of nice properties, some of what you just described, which is that we don't see your traffic. I mean, one, because it's end-to-end encrypted, and two, we're not in the way of capturing it, let alone decrypting it.

Another nice property it has is just, like, latency, right? If your user is in the UK, and they're trying to access something in Scotland, it's not, you know, hair-pinning, bouncing all the way to the West Coast or something like that. It doesn't have to go through one of our servers to get there. Another nice property that comes with that is availability.
So, if our network goes down, if our control plane goes down, you're temporarily not able to add nodes or change your configuration, but everything in your network can still connect to each other, so you're not dependent on us being online in order for your network to work.

And this is actually coming up more and more in customer conversations, where that's a differentiator for us versus a competitor. Different competitors, also. There's a customer case study on our website about somebody who was POC'ing us against a different option, and literally during the POC, the competitor had an outage, unfortunately for them, and we didn't, and they sort of looked at our model, our deployment model, and went, “Huh, this really matters to us.” And not having an outage on our network made our solution seem like the better option.

Corey: Yeah, when the network is down, the computers all turn into basically space heaters.

Maya: [laugh]. Yeah, as long as they're not down because, I guess, they're unplugged or something. But yeah, [laugh] I completely agree. Yeah. But I think there's a couple of these kinds of, like, enterprise things that people are—we're starting to do a better job of explaining and meeting customers where they are, but also people are realizing it actually does matter when you're deploying something at this scale that's such a key part of your network.

So, we talked a bit about availability, we talked a bit about things like latency. On the security side, there's a lot that we've done around, like I said, tailnet lock or that type of thing, but it's also some of the basic security features. Like, when I joined Tailscale, probably the first thing I shipped in some sense as a PM was a change log.
Here's the change log of everything that we're shipping as part of these releases, so that you can have confidence that we're telling you what's going on in your network, when new features are coming out, and you can trust us to be part of your network, to be part of your infrastructure.

Corey: I do want to further call out that you have a—how should I frame this—a typically active security notification page.

Maya: [laugh].

Corey: And I think it is easy to misconstrue that as, “Look at how terrifyingly insecure this is.” Having read through it, I would argue that it is not that you are surprisingly insecure, but rather that you are extraordinarily transparent about things that are relatively minor issues. And yes, they should get fixed, but, “Oh, that could be a problem if six other things happen to fall into place just the right way.” These are not security issues of the type, “Yeah, so it turns out that what we thought was encrypting actually wasn't, and we're just expensive telnet.” No, there's none of that going on.

It's all been relatively esoteric stuff, but you also address it very quickly. And that is odd, as someone who has watched too many enterprise-facing companies respond to third-party vulnerability reports by, rather than fixing the problem, more or less trying to get them not to talk about it, or if they do, to talk about it only using approved language. I don't see any signs of that with what you've done there. Was that a challenging internal struggle for you to pull off?

Maya: I think internally, it was recognizing that security was such an important part of our value proposition that we had to be transparent. But once we kind of got past that initial hump, we've been extremely transparent, as you say. We think we can build trust through transparency, and that's the most important thing in how we respond to security incidents. But code is going to have bugs. It's going to have security bugs.
There's nothing you can do to prevent that from happening.

What matters is how you—and, like, you should. Like, you should try to catch them early in the development process and, you know, shift left and all that kind of stuff, but some things are always going to happen [laugh], and what matters in that case is how you respond to them. And having another, you know, app update that just says “Bug fixes” doesn't help you figure out whether or not you should actually update; it doesn't actually help you trust us. And so, being as public and as transparent as possible about what's actually happening, and when we respond to security issues and how we respond to security issues, is really, really important to us. We have a policy that talks about when we will publish a bulletin.

You can subscribe to our bulletins. We'll proactively email anyone who has a security contact on file, or alternatively, another contact that we have if you haven't provided us a security contact, when you're subject to an issue. I think, by and large, Tailscale has more security bulletins just because we're transparent about them. It's like, we probably have as many bugs as anybody else does. We're just lucky that people report them to us because they see us react to them so quickly, and then we're able to fix them, right? It's a net positive for everyone involved.

Corey: It's one of those hard problems to solve for across the board, just because I've seen companies in the past get more or less brutalized by the tech press when they have been overly transparent. I remember that there was a Reuters article years ago about Slack, for example, because they would pull up their status history and say, “Oh, look at all of these issues here. You folks can't keep your website up.” But no, a lot of it was like, “Oh, file uploads for a small subset of our users are causing a problem,” and so on and so forth.
These relatively minor issues that, in aggregate, are very hard to represent when you're using traffic-light signaling.

So, then you see people effectively going full-on AWS status page, where there's a significant outage lasting over a day, like last month, and what you see, if you go really looking for it, is this yellow thing buried in an absolute sea of green lights, even though that was one of the more disruptive things to have happened this year. So, it's a consistent and constant balance, and I really have a lot of empathy no matter where you wind up landing on that.

Maya: Yeah, I think that's—you're saying it's sort of about transparency, or being able to find the right information. I completely agree. And it's also about building trust, right? If we set expectations as to how we will respond to these things, and then we consistently respond to them, people believe that we're going to keep doing that. And that is almost more important than, like, committing to doing that, if that makes any sense.

I remember having a conversation many years ago with an eng manager I worked with, and we were debating what the SLO for a particular service should be. And he sort of made an interesting point. He's like, “It doesn't really matter what the SLO is. It matters what you actually do, because then people are going to start expecting [laugh] what you actually do.” So, being able to point at this and say, “Yes, here's what we say and here's what we actually do in practice,” I think builds so much more trust in how we respond to these kinds of things and how seriously we take security.

I think one of the other things that came out of the security work is we realized—and I think you talked to Avery, the CEO of Tailscale, on a prior podcast about some of this stuff—but we realized that platforms are broken, and we don't have a great way of pushing automatic updates on a lot of platforms, right?
You know, if you're using the macOS store, or the Android Play Store, or iOS or whatever, you can automatically update your client when there is a security issue. On other platforms, you're kind of stuck. And so, as a result of us wanting to make sure that the fleet is as updated as possible, we've actually built an auto-update feature that's available on all of our major clients now, so people can opt in to getting those updates as quickly as needed when there is a security issue. We want to expose people to as little risk as possible.

Corey: I am not a Tailscale customer. And that bugs me because until I cross that chasm into transferring $1 every month from my bank account to yours, I'm just a whiny freeloader in many respects, which is not at all how you folks have ever made me feel; I want to be very clear on that. But I believe in paying for the services that empower me to do my job more effectively, and Tailscale absolutely qualifies.

Maya: Yeah, understood. I think that you still provide value to us in ways that aren't your data, but in ways that help our business. One of them is that people like you tend to bring Tailscale to work. They tend to have a good experience at home connecting to their Synology, helping their brother connect to his bank account, whatever it happens to be, and they go, “Oh.” Something kind of clicks, and then they see a problem at work that looks very similar, and then they bring it to work. That is our primary path of adoption.

We are a bottom-up adoption, you know, product-led growth product [laugh]. So, we have a blog post called “How Our Free Plan Stays Free” that covers some of that. I think the second thing that I don't want to undersell that a user like you also does is, you have a problem, you hit an issue, and you write into support, and you find something that nobody else has found yet [laugh].

Corey: I am very good at doing that entirely by accident.

Maya: [laugh].
But that helps us because that means that we see a problem that needs to get fixed, and we can catch it way sooner, before it's deployed, you know, at scale, at a large bank, where it's a critical, kind of, somebody's-getting-paged kind of issue, right? We have a couple of bugs like that where we need, you know, a couple of repros from a couple different people in a couple different situations before we can really figure out what's going on. And having a wide user base who is happy to talk to us really helps us.

Corey: I would say it goes beyond that, too. I see things in the world of Tailscale that started off as features that I requested. One of the more recent ones is, it is annoying to me to see on the Tailscale machines list everything I have joined to the tailnet with that silly little up arrow next to it of, “Oh, time to go back and update Tailscale to the latest,” because that usually comes with decent benefits. Great, I have to go through iteratively, or use Ansible, or something like that. Well, now there's a Tailscale update option where it will keep itself current on supported operating systems.

For some unknown reason, you apparently can't self-update the application on iOS or macOS. Can't imagine why. But those things tend to self-update based upon how the OS works, due to all the sandboxing challenges. The only challenge I've got now is a few things that are, more or less, embedded devices that are packaged by the maintainer of that embedded system, where I'm beholden to them. Only until I get annoyed enough to start building a CI/CD system to replace their package.

Maya: I can't wait till you build that CI/CD system. That'll be fun.

Corey: “We wrote this code last night. Straight to the bank with it.” Yeah, that sounds awesome.

Maya: [laugh]. You'd get a couple of term sheets for that, I'm sure.

Corey: There are.
I am curious, looping back to the start of our conversation: we talked about enterprise security requirements, but how do you address enterprise change management? I find that that's something an awful lot of companies get dreadfully wrong. Most recently and most noisily, on my part, is Slack, a service for which I paid thousands of dollars a year, which decided to roll out a UI redesign that, more or less, got in the way of a tremendous number of customers, and there was no way to stop it or revert it. And that made me a lot less likely to build critical-flow business processes that depended upon Slack behaving a certain way.

Just, “Oh, we decided to change everything in the user interface today, just for funsies.” If Microsoft pulled that with Excel, by lunchtime they'd have reverted it, because an entire universe of business users would have marched on Redmond to burn them out otherwise. That carries significant cost for businesses. Yet I still see Tailscale shipping features just as fast as you ever have. How do you square that circle?

Maya: Yeah. I think there's two different kinds of change management, really—because if you think about it, an enterprise needs a way to roll out a product or a feature internally, and then, separately, we need a way to roll out new things to customers, right? And so, on the Tailscale side, we have a change log that tells you about everything that's changing, including new features and including changes to the client. We update that religiously. Like, it's a big deal if something doesn't make it the day that it's supposed to make it. We get very concerned internally about that.

A couple of things that are in that space: we just talked about auto-updates, which make it really easy for you to maintain what's actually rolled out in your infrastructure, but more importantly, let us push changes with a new client release.
Like, for example, in the case of a security incident, we want to be able to publish a version and get it rolled out to the fleet as quickly as possible. One of the things that we don't have here, although I hear requests for it, is the ability to, like, gradually roll out features to a customer. So, like, “Can we change the configuration for 10% of our network and see if anything breaks, and roll back if it does, before rolling forward?” That's a very traditional kind of infra change management thing, but not something I've ever seen in, sort of, the networking security space to this degree, and something that I'm hearing a lot of customers ask for.

In terms of other, like, internal controls that a customer might have, we have a feature called ACL tests. So, if you're going to change the configuration of who can access what in your network, you can actually write tests. Like, your permission file is written in HuJSON, and you can write a set of things like: Corey should be able to access prod; Corey should not be able to access test, or whatever it happens to be—actually, let's flip those around—and when you have a policy change that doesn't pass those tests, you actually get told right away, so you're not rolling that out and accidentally breaking a large part of your network. So, we built several things into the product to do this. In terms of how we notify customers, like I said, the primary method that we have right now is something like a change log, as well as, like, security bulletins for security updates.

Corey: Yeah, it's one of the challenges, on some level, of the problem of, oh, I'm going to set up a service, and then I'm going to go sail around the world, and when I come back in a year or two—depending on how long I spent stranded on an island somewhere—now I get to figure out what has changed.
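The ACL tests Maya describes live alongside the access rules in the tailnet policy file. A minimal sketch, with a hypothetical user and hypothetical tags (not taken from the episode), might look like this:

```json
// Tailnet policy file (HuJSON permits comments and trailing commas)
{
  "acls": [
    // Allow corey to reach production hosts on any port
    {"action": "accept", "src": ["corey@example.com"], "dst": ["tag:prod:*"]},
  ],
  "tests": [
    {
      "src": "corey@example.com",
      "accept": ["tag:prod:443"], // must be reachable, or the policy change is rejected
      "deny": ["tag:test:443"],   // must be blocked, or the policy change is rejected
    },
  ],
}
```

Any later edit to the policy that violates one of these assertions is flagged before it takes effect, which is the "you get told right away" behavior described above.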
And to your credit, you have to affirmatively enable all of the features that you have shipped, but you've gone from, “Oh, it's a mesh network where everything can talk to each other,” to, “I can use an exit node from that thing. Oh, now I can seamlessly transfer files from one node to another with Taildrop,” to, “Oh, Tailscale Funnel. Now, I can expose my horrifying developer environment to the internet.” I used that one year to give a talk at a conference, just because why not?

Maya: [crosstalk 00:27:35].

Corey: Everything evolves to become [unintelligible 00:27:37] email on Microsoft Outlook, or tries to be Microsoft Excel? Oh, no, no. I want you to be building Microsoft PowerPoint for me. And we eventually get there, but that is incredibly powerful functionality, but also terrifying when you think you have a handle on what's going on in a large-scale environment, and suddenly, oh, there's a whole new vector we need to think about. Which is why the thought and consideration you put into that is so apparent and, frankly, so welcome.

Maya: Yeah, you actually kind of made a statement there that I completely missed, which is correct, which is, we don't turn features on by default. They are opt-in features. We will roll out features by default only after they've kind of baked for an incredibly long period of time, and with, like, a lot of fanfare and warning. So, the example that I'll give is, we have a DNS feature that was probably available for maybe 18 months before we turned it on by default for new tailnets. So, we didn't even turn it on for existing folks. It's called Magic DNS.

We don't want to touch your configuration or your network. We know people will freak out when that happens. Knowing, to your point, that you can leave something for a year and come back, and it's going to be the same, is really important. For everyone, but for an enterprise customer as well. Actually, one other thing to mention there.
We have a bunch of really old versions of clients that are running in production, and we want them to keep working, so we try to be as backward compatible as possible.

I think the… I think we still have clients from 2019 that are running and connecting to corp that nobody's updated. And, like, it'd be great if they would update them, but who knows what situation they're in, and if they can connect to them, and all that kind of stuff, but they still work. And the point is that you can have set it up four years ago, and it should still work, and you should still be able to connect to it, and leave it alone and come back to it a year from now, and it should still work and [laugh] still connect without anything changing. That's a very hard guarantee to be able to make.

Corey: And yet, somehow, you've been able to do that. I've never yet seen you folks make a security-oriented decision that left me rolling my eyes, amazed that you didn't make the decision the other way. There are a lot of companies that, while intending very well, have done, frankly, very dumb things. I've been keeping an eye on you folks for a long time, and I would have caught that in public. I just haven't seen anything like that. It's kind of amazing.

Last year, I finally took the extraordinary step of disabling SSH access anywhere except over the tailnet to a number of my things. It lets my logs fill up a lot less, and you've built up to that level of utility-like reliability over a long series of experimentation. I have yet to regret having Tailscale in the mix, which is, frankly, not something I can say about almost any product.

Maya: Yeah. I'm very proud to hear that. And, like, maintaining that trust—back to a lot of the conversation about security and reliability and stuff—is incredibly important to us, and we put a lot of effort into it.

Corey: I really appreciate your taking the time to talk to me about how things continue to evolve over there.
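Restricting SSH to the tailnet, as Corey describes, typically comes down to allowing port 22 only on the Tailscale interface. A sketch using ufw on Linux (a hedged example, assuming Tailscale's default `tailscale0` interface name and a public `eth0`; adapt to your firewall and interface names):

```shell
# Allow SSH only when it arrives over the Tailscale interface
sudo ufw allow in on tailscale0 to any port 22 proto tcp

# Refuse SSH arriving on the public interface
sudo ufw deny in on eth0 to any port 22 proto tcp

sudo ufw enable
```

With rules like these, sshd is reachable only from devices on the tailnet, and public-internet scanners stop filling the auth logs, which is the effect described above.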
Anything that's new and exciting that might have gotten missed? Like, what has come out in, I guess, the last six months or so that is relevant to the business and might be useful for people looking to use it themselves?

Maya: I was hoping you were going to ask me what came out in the last, you know, 20 minutes while we were talking, and the answer is probably nothing, but you never know. But [laugh]—

Corey: With you folks, I wouldn't doubt it. Like, “Oh, yeah, by the way, we had to do a brand treatment redo refresh,” or something on the website? Why not? It now uses telepathy, just because.

Maya: It could, that'd be pretty cool. No, I mean, lots has gone on in the last six months. I think some of the things that might be more interesting to your listeners: we're now in the AWS Marketplace, so if you want to purchase Tailscale through AWS Marketplace, you can. We have a Kubernetes operator that we've released, which lets you both ingress to and egress from a Kubernetes cluster to things that are elsewhere in the world on other infrastructure, and also access the Kubernetes control plane and the API server via Tailscale. I mentioned auto-updates. You mentioned the VS Code extension. That's amazing, the fact that you can kind of connect directly from within VS Code to things on your tailnet. That's a lot of the exciting stuff that we've been doing. And there's boring stuff, you know, like audit log streaming, and that kind of stuff. But it's good.

Corey: Yeah, that stuff is super boring until suddenly, it's very, very exciting. And those are not generally good days.

Maya: [laugh]. Yeah, agreed. It's important, but boring. But important.

Corey: [laugh]. Well, thank you so much for taking the time to talk through all the stuff that you folks are up to. If people want to learn more, where's the best place for them to go to get started?

Maya: tailscale.com is the best place to go.
You can download Tailscale from there, get access to our documentation, all that kind of stuff.

Corey: Yeah, I also just want to highlight that you can buy my attention but never my opinion on things, and my opinion of Tailscale remains stratospherically high, so thank you for not making me look like a fool by saying, “Yes, and now we're pivoting to something horrifying as a business model, and it involves your data.” Thank you for not doing exactly that.

Maya: Yeah, we'll keep doing that. No, no blockchains in our future.

Corey: [laugh]. Maya Kaczorowski, Chief Product Officer at Tailscale. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. This episode has been brought to us by our friends at Tailscale. If you enjoyed this episode, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that will never actually make it back to us because someone screwed up a firewall rule somewhere on their legacy connection.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.