Podcasts about Object storage

  • 83 PODCASTS
  • 160 EPISODES
  • 31m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Apr 22, 2025 LATEST
Popularity of "Object storage" (trend chart, 2017-2024)

Best podcasts about Object storage

Latest podcast episodes about Object storage

Oracle University Podcast
Integrating APEX with OCI AI Services

Oracle University Podcast

Play Episode Listen Later Apr 22, 2025 20:01


Discover how Oracle APEX leverages OCI AI services to build smarter, more efficient applications. Hosts Lois Houston and Nikita Abraham interview APEX experts Chaitanya Koratamaddi, Apoorva Srinivas, and Toufiq Mohammed about how key services like OCI Vision, Oracle Digital Assistant, and Document Understanding integrate with Oracle APEX.   Packed with real-world examples, this episode highlights all the ways you can enhance your APEX apps.   Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we looked at how generative AI powers Oracle APEX and in today's episode, we're going to focus on integrating APEX with OCI AI Services. Lois: That's right, Niki. We're going to look at how you can use Oracle AI services like OCI Vision, Oracle Digital Assistant, Document Understanding, OCI Generative AI, and more to enhance your APEX apps. 01:03 Nikita: And to help us with it all, we've got three amazing experts with us, Chaitanya Koratamaddi, Director of Product Management at Oracle, and senior product managers, Apoorva Srinivas and Toufiq Mohammed. 
In today's episode, we'll go through each Oracle AI service and look at how it interacts with APEX. Apoorva, let's start with you. Can you explain what the OCI Vision service is? Apoorva: Oracle Cloud Infrastructure Vision is a serverless, multi-tenant service accessible using the console or REST APIs. You can upload images to detect and classify objects in them. With prebuilt models available, developers can quickly build image recognition into their applications without machine learning expertise. OCI Vision provides a fully managed model infrastructure. With complete integration with OCI Data Labeling, you can build custom models easily. OCI Vision provides pretrained models: Image Classification, Object Detection, Face Detection, and Text Recognition. You can build custom models for Image Classification and Object Detection. 02:24 Lois: Ok. What about its use cases? How can OCI Vision make APEX apps more powerful? Apoorva: Using OCI Vision, you can make images and videos discoverable and searchable in your APEX app. You can use OCI Vision to detect and classify objects in the images. OCI Vision also highlights the objects using a red rectangular box. This comes in handy in use cases such as detecting vehicles that have violated the rules in traffic images. You can use OCI Vision to identify visual anomalies in your data. This is a very popular use case where you can detect anomalies in X-ray images, for example to help detect cancer. These are some of the most popular use cases of using OCI Vision with your APEX app. But the possibilities are endless, and you can use OCI Vision for any of your image analysis needs. 03:29 Nikita: Let's shift gears to Oracle Digital Assistant. Chaitanya, can you tell us what it's all about? Chaitanya: Oracle Digital Assistant is a low-code conversational AI platform that allows businesses to build and deploy AI assistants.
It provides natural language understanding, automatic speech recognition, and text-to-speech capabilities to enable human-like interactions with customers and employees. Oracle Digital Assistant comes with prebuilt templates for you to get started. 04:00 Lois: What are its key features and benefits, Chaitanya? How does it enhance the user experience? Chaitanya: Oracle Digital Assistant provides conversational AI capabilities that include generative AI features, natural language understanding and ML, AI-powered voice, and analytics and insights. Integration with enterprise applications becomes easier with a unified conversational experience, prebuilt chatbots for Oracle Cloud applications, and chatbot architecture frameworks. Oracle Digital Assistant provides advanced conversational design tools: a conversational designer, a dialogue and domain trainer, and native multilingual support. Oracle Digital Assistant is open, scalable, and secure. It provides multi-channel support, automated bot-to-agent transfer, and integrated authentication profiles. 04:56 Nikita: And what about the architecture? What happens at the back end? Chaitanya: Developers assemble digital assistants from one or more skills. Skills can be based on prebuilt skills provided by Oracle or third parties, custom developed, or based on one of the many skill templates available. 05:16 Lois: Chaitanya, what exactly are “skills” within the Oracle Digital Assistant framework? Chaitanya: Skills are individual chatbots that are designed to interact with users and fulfill specific types of tasks. Each skill helps a user complete a task through a combination of text messages and simple UI elements like select lists. When a user request is submitted through a channel, the Digital Assistant routes it to the most appropriate skill. Skills can combine a multilingual NLP deep learning engine, a powerful dialog flow engine, and integration components to connect to back-end systems.
 Skills provide a modular way to build your chatbot functionality. Now users connect with a chatbot through channels such as Facebook, Microsoft Teams, or in our case, Oracle APEX chatbot, which is embedded into an APEX application. 06:21 Nikita: That's fascinating. So, what are some use cases of Oracle Digital Assistant in APEX apps? Chaitanya: Digital assistants streamline approval processes by collecting information, routing requests, and providing status updates. Digital assistants offer instant access to information and documentation, answering common questions and guiding users. Digital assistants assist sales teams by automating tasks, responding to inquiries, and guiding prospects through the sales funnel. Digital assistants facilitate procurement by managing orders, tracking deliveries, and handling supplier communication. Digital assistants simplify expense approvals by collecting reports, validating receipts, and routing them for managerial approval. Digital assistants manage inventory by tracking stock levels, reordering supplies, and providing real-time inventory updates. Digital assistants have become a common UX feature in any enterprise application. 07:28 Want to learn how to design stunning, responsive enterprise applications directly from your browser with minimal coding? The new Oracle APEX Developer Professional learning path and certification enables you to leverage AI-assisted development, including generative AI and Database 23ai, to build secure, scalable web and mobile applications with advanced AI-powered features. From now through May 15, 2025, we're waiving the certification exam fee (valued at $245). So, what are you waiting for? Visit mylearn.oracle.com to get started today. 08:09 Nikita: Welcome back! Thanks for that, Chaitanya. Toufiq, let's talk about the OCI Document Understanding service. What is it? Toufiq: Using this service, you can upload documents to extract text, tables, and other key data. 
This means the service can automatically identify and extract relevant information from various types of documents, such as invoices, receipts, and contracts. The service is serverless and multitenant, which means you don't need to manage any servers or infrastructure. You can access this service using the console, REST APIs, SDK, or CLI, giving you multiple ways to integrate. 08:55 Nikita: What do we use for APEX apps? Toufiq: For APEX applications, we will be using REST APIs to integrate the service. Additionally, you can process individual files or batches of documents using the ProcessorJob API endpoint. This flexibility allows you to handle different volumes of documents efficiently, whether you need to process a single document or thousands at once. With these capabilities, the OCI Document Understanding service can significantly streamline your document processing tasks, saving time and reducing the potential for manual errors. 09:36 Lois: Ok. What are the different types of models available? How do they cater to various business needs? Toufiq: Let us start with pre-trained models. These are ready-to-use models that come right out of the box, offering a range of functionalities. The available models are: Optical Character Recognition (OCR), which enables the service to extract text from documents so you can digitize scanned documents and precisely extract their text content; key-value extraction, useful in streamlining tasks like invoice processing; table extraction, which can intelligently extract tabular data from documents; document classification, which automatically categorizes documents based on their content; and OCR PDF, which enables seamless extraction of text from PDF files. Now, what if your business needs go beyond these pre-trained models? That's where custom models come into play. You have the flexibility to train and build your own models on top of these foundational pre-trained models.
Models available for training are key-value extraction and document classification. 10:50 Nikita: What does the architecture look like for OCI Document Understanding? Toufiq: You can ingest or supply the input file in two different ways. You can upload the file to an OCI Object Storage location, and in your request, point the Document Understanding service to pick the file up from this Object Storage location. Alternatively, you can upload a file directly from your computer. Once the file is uploaded, the Document Understanding service can process the file and extract key information using the pre-trained models. You can also customize models to tailor the extraction to your data or use case. After processing the file, the Document Understanding service stores the results in JSON format in the Object Storage output bucket. Your Oracle APEX application can then read the JSON file from the Object Storage output location, parse the JSON, and store useful information in a local table or display it on screen to the end user. 11:52 Lois: And what about use cases? How are various industries using this service? Toufiq: In financial services, you can utilize Document Understanding to extract data from financial statements, classify and categorize transactions, identify and extract payment details, and streamline tax document management. In manufacturing, you can perform text extraction from shipping labels and bill of lading documents, extract data from production reports, and identify and extract vendor details. In the healthcare industry, you can automatically process medical claims, extract patient information from forms, classify and categorize medical records, and identify and extract diagnostic codes. This is not an exhaustive list, but it provides insight into some industry-specific use cases for Document Understanding.
Toufiq: OCI Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models that cover a wide range of use cases. It provides enterprise-grade generative AI with data governance and security, which means only you have access to your data and custom-trained models. OCI Generative AI provides pre-trained, out-of-the-box LLMs for text generation, summarization, and text embedding. OCI Generative AI also provides the necessary tools and infrastructure to define models with your own business knowledge. 13:37 Lois: Generally speaking, how is OCI Generative AI useful? Toufiq: It supports various large language models. New models available from Meta and Cohere include Llama2, developed by Meta, and Cohere's Command model, their flagship text generation model. Additionally, Cohere offers the Summarize model, which provides high-quality summaries, accurately capturing essential information from documents, and the Embed model, which converts text to vector embedding representations. OCI Generative AI also offers dedicated AI clusters, enabling you to host foundational models on private GPUs. It integrates LangChain, an open-source framework for developing new interfaces for generative AI applications powered by language models. Moreover, OCI Generative AI facilitates generative AI operations, providing content moderation controls, zero-downtime endpoint model swaps, and endpoint deactivation and activation capabilities. For each model endpoint, OCI Generative AI captures a series of analytics, including call statistics, tokens processed, and error counts. 14:58 Nikita: What about the architecture? How does it handle user input? Toufiq: Users can input natural language, input/output examples, and instructions. The LLM analyzes the text and can generate, summarize, transform, extract information, or classify text according to the user's request.
The response is sent back to the user in the specified format, which can include raw text or formatting like bullets and numbering. 15:30 Lois: Can you share some practical use cases for generative AI in APEX apps? Toufiq: Some of the OCI Generative AI use cases for your Oracle APEX apps include text summarization. Generative AI can quickly summarize lengthy documents such as articles, transcripts, doctor's notes, and internal documents. Businesses can utilize generative AI to draft marketing copy, emails, blog posts, and product descriptions efficiently. Generative AI-powered chatbots are capable of brainstorming, problem solving, and answering questions. With generative AI, content can be rewritten in different styles or languages. This is particularly useful for localization efforts and catering to diverse audiences. Generative AI can classify intent in customer chat logs, support tickets, and more. This helps businesses understand customer needs better and provide tailored responses and solutions. By searching call transcripts and internal knowledge sources, generative AI enables businesses to efficiently answer user queries. This enhances information retrieval and decision-making processes. 16:47 Lois: Before we let you go, can you explain what Select AI is? How is it different from the other AI services? Toufiq: Select AI is a feature of Autonomous Database. This is where Select AI differs from the other AI services. Be it OCI Vision, Document Understanding, or OCI Generative AI, these are all fully managed standalone services on Oracle Cloud, accessible via REST APIs. Whereas Select AI is a feature available in Autonomous Database. That means to use Select AI, you need Autonomous Database. 17:26 Nikita: And what can developers do with Select AI? Toufiq: Traditionally, SQL is the language used to query the data in the database. With Select AI, you can talk to the database and get insights from the data in the database using human language.
At its most basic, Select AI generates SQL queries from natural language, an NL2SQL capability. 17:52 Nikita: How does it actually do that? Toufiq: When a user asks a question, the first step Select AI takes is to look into the AI profile, which you, as a developer, define. The AI profile holds crucial information, such as table names, the LLM provider, and the credentials needed to authenticate with the LLM service. Next, Select AI constructs a prompt. This prompt includes information from the AI profile and the user's question. Essentially, it's a packet of information containing everything the LLM service needs to generate SQL. The next step is generating SQL using the LLM. The prompt prepared by Select AI is sent to the available LLM service via REST. Which LLM to use is configured in the AI profile. The supported providers are OpenAI, Cohere, Azure OpenAI, and OCI Generative AI. Once the SQL is generated by the LLM service, it is returned to the application. The app can then handle the SQL query in various ways, such as displaying the SQL results in a report format or as charts. 19:05 Lois: This has been an incredible discussion! Thank you, Chaitanya, Apoorva, and Toufiq, for walking us through all of these amazing AI tools. If you're ready to dive deeper, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. You'll find step-by-step guides and demos for everything we covered today. Nikita: Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 19:31 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
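The Select AI flow described in the transcript (read the AI profile, assemble a prompt from schema metadata plus the user's question, send it to the configured LLM) can be sketched in miniature. Everything below is an illustrative assumption: the profile structure, field names, and prompt wording are invented for this sketch and are not Select AI's actual internals.

```python
# Hypothetical sketch of the NL2SQL flow: an AI profile supplies schema
# metadata, and a prompt is assembled from it plus the user's question.
# In the real flow, the prompt would then go to the configured LLM provider.
ai_profile = {
    "provider": "oci_generative_ai",      # one of the supported providers
    "tables": ["CUSTOMERS", "ORDERS"],    # tables the LLM may query
    "schemas": {
        "CUSTOMERS": ["CUST_ID", "NAME", "REGION"],
        "ORDERS": ["ORDER_ID", "CUST_ID", "AMOUNT"],
    },
}

def build_prompt(profile: dict, question: str) -> str:
    """Pack schema metadata and the user's question into a single prompt."""
    lines = ["Generate a SQL query for the following schema."]
    for table in profile["tables"]:
        cols = ", ".join(profile["schemas"][table])
        lines.append(f"Table {table} ({cols})")
    lines.append(f"Question: {question}")
    return "\n".join(lines)

prompt = build_prompt(ai_profile, "How many customers are in each region?")
print(prompt)
```

A real implementation would send this prompt over REST to the provider named in the profile and then execute or display the SQL that comes back.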

The GeekNarrator
How do vector (search) databases work? ft: turbopuffer

The GeekNarrator

Play Episode Listen Later Apr 7, 2025 68:58


For memberships: join this channel as a member here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join

Summary: In this conversation, Kaivalya Apte and Simon Eskildsen talk about vector databases, particularly focusing on TurboPuffer. They discuss the importance of vector search, embeddings, and the challenges associated with building efficient search engines. The conversation covers various aspects such as cost considerations, chunking strategies, multi-tenancy, and performance optimization. Simon shares insights on the future of vector search and the significance of observability and metrics in database performance. The discussion emphasizes the need for practical application and experimentation in understanding these technologies.

Chapters:
00:00 Introduction to Vector Databases
10:34 Understanding Vectors and Embeddings
15:03 Example: Designing a Search Engine for Podcasts
27:53 Scaling Challenges in Vector Search
36:46 Indexing and Querying in TurboPuffer
38:12 Understanding Indexing and Query Planning
45:45 Exploring Index Types and Their Performance
50:27 Data Ingestion and Embedding Retrieval
54:19 Use Cases and Challenges in Vector Search
01:01:22 Metrics and Observability in Vector Databases
01:03:52 Future Trends in Vector Search and Databases

References:
How do you build a database on Object Storage? https://youtu.be/RFmajOeUKnE
Turbopuffer: https://turbopuffer.com/
Continuous Recall measurement: https://turbopuffer.com/blog/continuous-recall
Turbopuffer architecture: https://turbopuffer.com/architecture
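As a companion to the episode, the core idea of vector search (embed items, then rank them by similarity to an embedded query) fits in a few lines. This toy brute-force scan is only illustrative; engines like turbopuffer use approximate indexes over object storage rather than scanning every vector, and real embeddings have hundreds of dimensions, not three.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, docs, k=2):
    """Brute-force top-k: score every document embedding against the query."""
    scored = [(cosine_similarity(query, vec), doc_id) for doc_id, vec in docs.items()]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:k]]

# Toy 3-dimensional "embeddings" for three hypothetical podcast episodes.
docs = {
    "ep-vector-dbs": [0.9, 0.1, 0.0],
    "ep-object-storage": [0.1, 0.9, 0.2],
    "ep-kafka": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]
print(top_k(query, docs))  # nearest episodes first
```

The recall measurement the episode mentions is essentially how often an approximate index returns the same results as this exact scan.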

The GeekNarrator
How would you design a database on Object Storage?

The GeekNarrator

Play Episode Listen Later Dec 2, 2024 68:26


Join Kaivalya Apte and Simon Hørup Eskildsen from Turbopuffer as they talk about the complexities of building a database on top of object storage. Discover the key challenges, the nuances of various storage formats, and the critical trade-offs involved. Learn from Simon's rich experience, from his time at Shopify to creating Turbopuffer. This episode covers everything, from approaches to write-ahead logs to multi-tenancy and object storage advancements. Perfect for database enthusiasts and those keen on first-principles thinking!

Chapters:
00:00 Introduction
00:17 Simon's Background and Journey to Turbopuffer
02:42 Challenges in Database Scalability
04:21 Experimenting with Vector Databases
05:02 Cost Implications of Vector Databases
05:52 Architectural Considerations for Search Workloads
07:39 Building a Database on Object Storage
16:14 Designing a Simple Database on Object Storage
26:01 Handling Multiple Writers and Consistency
31:26 Trade-offs in Write Operations
32:36 Optimizing MySQL Write Performance
34:03 Batching Writes in Object Storage
35:08 Time-Based vs Size-Based Batching
36:32 Understanding Amplification in Databases
42:26 Challenges with Cold Queries
44:02 Building and Persisting B-Trees
50:53 Separating Workloads in Databases
56:07 Multi-Tenancy Challenges
01:00:39 Choosing Storage Formats
01:06:10 Key Innovations in Object Storage Databases

Important links:
- https://github.com/sirupsen/napkin-math (numbers)
- https://turbopuffer.com/
- https://turbopuffer.com/architecture
- https://sirupsen.com/napkin/problem-10-mysql-transactions-per-second
- https://sirupsen.com (my blog, napkin math)
- https://sirupsen.com/subscribe (napkin math newsletter)
- https://github.com/rkyv/rkyv (rkyv, Rust)

Become a member of The GeekNarrator to get access to member-only videos, notes and monthly 1:1 with me. Like building stuff? Try out CodeCrafters and build amazing real-world systems like Redis, Kafka, SQLite. Use the link below to sign up and get 40% off on paid subscription.
https://app.codecrafters.io/join?via=geeknarrator

If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet.

Database internals series: https://youtu.be/yV_Zp0Mi3xs

Popular playlists:
Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA-
Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17
Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d
Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN

Stay Curious! Keep Learning!
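One of the chapter topics, time-based vs. size-based batching, can be sketched as a tiny write buffer that flushes when either threshold trips. This is purely illustrative and not Turbopuffer's implementation; the underlying reason to batch is that each object-storage PUT carries a fixed latency and request cost, so larger batches amortize it at the price of higher write latency.

```python
class WriteBuffer:
    """Queue writes and flush when a size or age threshold is reached."""

    def __init__(self, max_items, max_age_seconds, clock):
        self.max_items = max_items          # size-based trigger
        self.max_age = max_age_seconds      # time-based trigger
        self.clock = clock                  # injectable clock, for determinism
        self.pending = []
        self.opened_at = None
        self.flushed_batches = []           # stand-in for PUTs to object storage

    def write(self, record):
        if not self.pending:
            self.opened_at = self.clock()   # batch age starts at its first write
        self.pending.append(record)
        too_big = len(self.pending) >= self.max_items
        too_old = (self.clock() - self.opened_at) >= self.max_age
        if too_big or too_old:
            # In a real system this would be a single PUT of the whole batch.
            self.flushed_batches.append(self.pending)
            self.pending = []

# Simulated clock so the demo is deterministic.
now = [0.0]
buf = WriteBuffer(max_items=3, max_age_seconds=5.0, clock=lambda: now[0])
buf.write("a")
buf.write("b")
buf.write("c")        # third write trips the size threshold
now[0] = 6.0
buf.write("d")        # new batch opens at t=6.0
now[0] = 12.0
buf.write("e")        # batch is now 6s old: time threshold trips
print(buf.flushed_batches)
```

Real systems check the time threshold with a background timer as well, so a lone write is not stranded until the next write arrives; that detail is omitted here.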

Packet Pushers - Full Podcast Feed
Tech Bytes: MinIO Optimizes Object Storage for AI Infrastructure (Sponsored)

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Nov 18, 2024 19:00


Today on the Tech Bytes podcast we welcome back sponsor MinIO to talk about how AI is altering the data infrastructure landscape, and why organizations are looking to build AI infrastructure on-prem. We also dig into MinIO's AIStor, a software-only, distributed object store that offers simplicity, scalability, and performance for AI infrastructure and other high-performance... Read more »

Packet Pushers - Briefings In Brief
Tech Bytes: MinIO Optimizes Object Storage for AI Infrastructure (Sponsored)

Packet Pushers - Briefings In Brief

Play Episode Listen Later Nov 18, 2024 19:00


Today on the Tech Bytes podcast we welcome back sponsor MinIO to talk about how AI is altering the data infrastructure landscape, and why organizations are looking to build AI infrastructure on-prem. We also dig into MinIO's AIStor, a software-only, distributed object store that offers simplicity, scalability, and performance for AI infrastructure and other high-performance... Read more »

The GeekNarrator
Database Internals - SlateDB with Chris Riccomini

The GeekNarrator

Play Episode Listen Later Oct 11, 2024 61:45


Welcome back to another episode! Today, I have a special guest, Chris Riccomini, joining me to delve into the exciting world of databases. In this episode, we focus on SlateDB, a new and innovative database that's making waves in the tech community. We'll cover a wide range of topics, including the architecture of SlateDB, its internals, design decisions, and some fascinating use cases. Chris, a seasoned software engineer with a background at LinkedIn and WePay, shares his journey and the motivations behind creating SlateDB.

The Changelog
Reinventing Kafka on object storage (Interview)

The Changelog

Play Episode Listen Later Aug 29, 2024 104:11


Ryan Worl, Co-founder and CTO at WarpStream, joins us to talk about the world of Kafka and data streaming and how WarpStream redesigned the idea of Kafka to run in modern cloud environments directly on top of object storage. Last year they posted a blog titled, "Kafka is dead, long live Kafka" that hit the top of Hacker News to put WarpStream on the map. We get the backstory on Kafka and why it's so widely used, who created it and for what purpose, and the behind the scenes on all things WarpStream.

Changelog Master Feed
Reinventing Kafka on object storage (Changelog Interviews #606)

Changelog Master Feed

Play Episode Listen Later Aug 29, 2024 104:11


Ryan Worl, Co-founder and CTO at WarpStream, joins us to talk about the world of Kafka and data streaming and how WarpStream redesigned the idea of Kafka to run in modern cloud environments directly on top of object storage. Last year they posted a blog titled, "Kafka is dead, long live Kafka" that hit the top of Hacker News to put WarpStream on the map. We get the backstory on Kafka and why it's so widely used, who created it and for what purpose, and the behind the scenes on all things WarpStream.

Einsen & Nullen
(Object) Storage of the Future - What Comes After the Future?

Einsen & Nullen

Play Episode Listen Later Jul 23, 2024 15:46


"The beauty of the 'software-defined' construct is that I can mix any new technology with existing technologies." In the final episode of our theme series "(Object) Storage of the Future," Christoph Storzum bravely holds the fort once more and explains the "what's next" of storage technologies. We look back over the entire season and discuss the respective strengths of the different storage types (flash vs. disk vs. tape). This episode is presented by Intel and Scality.

Tech AI Radio
Garage: Open-Source Distributed Object Storage

Tech AI Radio

Play Episode Listen Later Jul 20, 2024


Einsen & Nullen
(Object) Storage of the Future - From the Data Lake to the Self-Driving Car

Einsen & Nullen

Play Episode Listen Later Jul 17, 2024 17:50


In the penultimate episode of our theme series "(Object) Storage of the Future," we look at so-called "data lakes" and show why the right storage infrastructure matters in our IT-driven working lives. We examine concrete use cases such as credit card transactions, automatically generated airfares, and "predictive maintenance," as well as self-driving cars and AI! Our guests are once again Ruediger Frank together with Christoph Storzum, and the episode is presented by Scality and Intel!

Einsen & Nullen
(Object) Storage of the Future - On Media Data & Streaming

Einsen & Nullen

Play Episode Listen Later Jul 10, 2024 17:03


In the current episode on "(Object) Storage of the Future," rather than focusing on one particular area, we take a broad sweep: our guests Ruediger Frank, Christoph Storzum & Andreas Viehbeck use a wide range of use cases and everyday examples to demonstrate the flexibility and versatility of object storage technology. This episode is presented by Intel & Scality.

Packet Pushers - Full Podcast Feed
Tech Bytes: High Performance, Scalable Object Storage with MinIO (Sponsored)

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Jul 8, 2024 18:27


Today on the Tech Bytes podcast we talk with Jonathan Symonds, Chief Marketing Officer at MinIO about MinIO’s object storage offering; a software-defined, Amazon S3-compatible object storage that offers high performance and scale for modern workloads and AI/ML. We discuss how MinIO helps customers across industries drive AI innovation and AI architectures, how object storage... Read more »

Packet Pushers - Fat Pipe
Tech Bytes: High Performance, Scalable Object Storage with MinIO (Sponsored)

Packet Pushers - Fat Pipe

Play Episode Listen Later Jul 8, 2024 18:27


Today on the Tech Bytes podcast we talk with Jonathan Symonds, Chief Marketing Officer at MinIO about MinIO’s object storage offering; a software-defined, Amazon S3-compatible object storage that offers high performance and scale for modern workloads and AI/ML. We discuss how MinIO helps customers across industries drive AI innovation and AI architectures, how object storage... Read more »

Packet Pushers - Briefings In Brief
Tech Bytes: High Performance, Scalable Object Storage with MinIO (Sponsored)

Packet Pushers - Briefings In Brief

Play Episode Listen Later Jul 8, 2024 18:27


Today on the Tech Bytes podcast we talk with Jonathan Symonds, Chief Marketing Officer at MinIO about MinIO’s object storage offering; a software-defined, Amazon S3-compatible object storage that offers high performance and scale for modern workloads and AI/ML. We discuss how MinIO helps customers across industries drive AI innovation and AI architectures, how object storage... Read more »

Cabeça de Lab
WHAT IS OBJECT STORAGE? | #MAGALUCLOUD

Cabeça de Lab

Play Episode Listen Later Jul 4, 2024 22:55


Magalu has launched its own cloud service: MagaluCloud. If you're curious about how this service is revolutionizing data storage, or want to learn more about the advantages and applications of Object Storage, this episode is for you! It's an unmissable opportunity to get exclusive insights straight from the developers and discover how MagaluCloud's technology can power your projects. Hit play!
Full production by Rádiofobia Podcast e Multimídia: https://radiofobia.com.br/
---
Follow us on Twitter and Instagram: @luizalabs @cabecadelab
Questions, feedback, and suggestions: email cabecadelab@luizalabs.com or send a DM on Instagram.
Participants:
CELSO PROVIDELO | https://www.linkedin.com/in/cprov/
CLEVERSON GALEGO | https://www.linkedin.com/in/cleverson-gallego-mcp-psm-i-55814a19/
YOHAN RODRIGUES | https://www.linkedin.com/in/yohan-rodrigues/
Magalu Cloud: https://magalu.cloud/object-storage/

Einsen & Nullen
(Object) Storage of the Future - Backup, Restore, Repeat!

Einsen & Nullen

Play Episode Listen Later Jul 3, 2024 17:03


This week we once again welcome Ruediger Frank, Christoph Storzum & Andreas Viehbeck and dive deeper into the theme "(Object) Storage of the Future." Among other things, we explore how companies can protect their valuable business data, what makes a good backup setup, and why the restore process is so often forgotten! This episode is once again presented by Intel & Scality.

NerdCast
NerdCast 7 - Object Storage: Store with Security and Ease

NerdCast

Play Episode Listen Later Jun 28, 2024 30:19


In this seventh episode of Nerd na Cloud, we talk about what Object Storage is and how you can store your data safely and easily in the Magalu Cloud "cloud"!
MAGALU CLOUD
Store your data in Magalu Cloud's Object Storage: https://jovemnerd.page.link/Object_Storage_NNC_7
Click to learn more about Magalu Cloud: https://jovemnerd.page.link/Conheca_O_Magalu_Cloud_7
Become a Magalu Cloud partner: https://jovemnerd.page.link/Magalu_Cloud_Parcerias_7
COVER ART: Randall Random
FULL PRODUCTION BY RADIOFOBIA PODCAST E MULTIMÍDIA
Send your criticism, praise, suggestions, and jabs to nerdcast@jovemnerd.com.br

Einsen & Nullen
(Object) Storage of the Future - Of Data Explosions & Parking Spaces

Einsen & Nullen

Play Episode Listen Later Jun 26, 2024 15:38


With today's episode we kick off our new focus series "(Object) Storage of the Future," presented by Intel and Scality. Our guests this time are Ruediger Frank, Christoph Storzum & Andreas Viehbeck, who explain what so-called object storage is and why it can be compared to valet parking. We also discuss the titular "data explosion" and, to close, turn to the S3 protocol and its relevance for data security in the cloud.

XenTegra - Nutanix Weekly
Nutanix Weekly: Nutanix Unified Storage: Ideal for High-Performance AI and Modern Data-Intensive Workloads

XenTegra - Nutanix Weekly

Play Episode Listen Later Feb 5, 2024 27:45


We recently commissioned International Data Corporation (IDC) to evaluate the use of Nutanix Objects Storage for modern, data-intensive workloads. IDC examined the evolution of object storage, which has ranged from cold data repositories for backups and archives to newer, high-performance object stores for data-intensive workloads. The study also includes insightful interviews with Nutanix customers who are using Nutanix Objects Storage for analytics workloads. https://www.nutanix.com/blog/modern-data-intensive-workloads-requiring-high-performance Host: Phil Sellers (@infraphilip) | Co-Host: Harvey Green III | Co-Host: Jirah Cox | Co-Host: Ben Rogers

Self-Hosted
115: A NAS in Every Home

Self-Hosted

Play Episode Listen Later Jan 26, 2024 48:53


Brian Moses joins us and shares his most recent NAS build and love for 3D printers. Then Alex gets into the hardware he's deploying around the house, and why we don't see eye-to-eye on ZigBee. Special Guest: Brian Moses.

Oracle University Podcast
Autonomous Database Tools

Oracle University Podcast

Play Episode Listen Later Jan 16, 2024 36:04


In this episode, hosts Lois Houston and Nikita Abraham speak with Oracle Database experts about the various tools you can use with Autonomous Database, including Oracle Application Express (APEX), Oracle Machine Learning, and more.   Oracle MyLearn: https://mylearn.oracle.com/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Tamal Chatterjee, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! We spent the last two episodes exploring Oracle Autonomous Database's deployment options: Serverless and Dedicated. Today, it's tool time! Lois: That's right, Niki. We'll be chatting with some of our Database experts on the tools that you can use with the Autonomous Database. We're going to hear from Patrick Wheeler, Kay Malcolm, Sangeetha Kuppuswamy, and Thea Lazarova. Nikita: First up, we have Patrick, to take us through two important tools. Patrick, let's start with Oracle Application Express. What is it and how does it help developers? 01:15 Patrick: Oracle Application Express, also known as APEX-- or perhaps APEX, we're flexible like that-- is a low-code development platform that enables you to build scalable, secure, enterprise apps with world-class features that can be deployed anywhere. 
Using APEX, developers can quickly develop and deploy compelling apps that solve real problems and provide immediate value. You don't need to be an expert in a vast array of technologies to deliver sophisticated solutions. Focus on solving the problem, and let APEX take care of the rest. 01:52 Lois: I love that it's so easy to use. OK, so how does Oracle APEX integrate with Oracle Database? What are the benefits of using APEX on Autonomous Database? Patrick: Oracle APEX is a fully supported, no-cost feature of Oracle Database. If you have Oracle Database, you already have Oracle APEX. You can access APEX from Database Actions. Oracle APEX on Autonomous Database provides a preconfigured, fully managed, and secure environment to both develop and deploy world-class applications. Oracle takes care of configuration, tuning, backups, patching, encryption, scaling, and more, leaving you free to focus on solving your business problems. APEX enables your organization to be more agile and develop solutions faster for less cost and with greater consistency. You can adapt to changing requirements with ease, and you can empower professional developers, citizen developers, and everyone else. 02:56 Nikita: So you really don't need to have a lot of specializations or be an expert to use APEX. That's so cool! Now, what are the steps involved in creating an application using APEX?  Patrick: You will be prompted to log in as the administrator at first. Then, you may create workspaces for your respective users and log in with those associated credentials. Application Express provides you with an easy-to-use, browser-based environment to load data, manage database objects, develop REST interfaces, and build applications which look and run great on both desktop and mobile devices. You can use APEX to develop a wide variety of solutions, import spreadsheets, and develop a single source of truth in minutes. 
Create compelling data visualizations against your existing data, deploy productivity apps to elegantly solve a business need, or build your next mission-critical data management application. There are no limits on the number of developers or end users for your applications. 04:01 Lois: Patrick, how does APEX use SQL? What role does SQL play in the development of APEX applications?  Patrick: APEX embraces SQL. Anything you can express with SQL can be easily employed in an APEX application. Application Express also enables low-code development, providing developers with powerful data management and data visualization components that deliver modern, responsive end user experiences out-of-the-box. Instead of writing code by hand, you're able to use intelligent wizards to guide you through the rapid creation of applications and components. Creating a new application from APEX App Builder is as easy as one, two, three. One, in App Builder, select a project name and appearance. Two, add pages and features to the app. Three, finalize settings, and click Create. 05:00 Nikita: OK. So, the other tool I want to ask you about is Oracle Machine Learning. What can you tell us about it, Patrick? Patrick: Oracle Machine Learning, or OML, is available with Autonomous Database. A new capability that we've introduced with Oracle Machine Learning is called Automatic Machine Learning, or AutoML. Its goal is to increase data scientist productivity while reducing overall compute time. In addition, AutoML enables non-experts to leverage machine learning by not requiring deep understanding of the algorithms and their settings. 05:37 Lois: And what are the key functions of AutoML? Patrick: AutoML consists of three main functions: Algorithm Selection, Feature Selection, and Model Tuning. With Automatic Algorithm Selection, the goal is to identify the in-database algorithms that are likely to achieve the highest model quality. 
Using metalearning, AutoML leverages machine learning itself to help find the best algorithm faster than with exhaustive search. With Automatic Feature Selection, the goal is to denoise data by eliminating features that don't add value to the model. By identifying the most predictive features and eliminating noise, model accuracy can often be significantly improved with a side benefit of faster model building and scoring. Automatic Model Tuning tunes algorithm hyperparameters, those parameters that determine the behavior of the algorithm, on the provided data. Auto Model Tuning can significantly improve model accuracy while avoiding manual or exhaustive search techniques, which can be costly both in terms of time and compute resources. 06:44 Lois: How does Oracle Machine Learning leverage the capabilities of Autonomous Database? Patrick: With Oracle Machine Learning, the full power of the database is accessible with the tremendous performance of parallel processing available, whether the machine learning algorithm is accessed via native database SQL or with OML4Py through Python or R.  07:07 Nikita: Patrick, talk to us about the Data Insights feature. How does it help analysts uncover hidden patterns and anomalies? Patrick: A feature I wanted to call the electromagnet, but they didn't let me. An analyst's job can often feel like looking for a needle in a haystack. So throw the switch and all that metallic stuff is going to slam up onto that electromagnet. Sure, there are going to be rusty old nails and screws and nuts and bolts, but there are going to be a few needles as well. It's far easier to pick the needles out of these few bits of metal than go rummaging around in a pile of hay, especially if you have allergies. That's more or less how our Insights tool works. Load your data, kick off a query, and grab a cup of coffee. Autonomous Database does all the hard work, scouring through this data looking for hidden patterns, anomalies, and outliers. 
Essentially, we run some analytic queries that predict expected values. And where the actual values differ significantly from expectation, the tool presents them. Some of these might be uninteresting or obvious, but some are worthy of further investigation. You get this dashboard of various exceptional data patterns. Drill down on a specific gauge in this dashboard and significant deviations between actual and expected values are highlighted. 08:28 Lois: What a useful feature! Thank you, Patrick. Now, let's discuss some terms and concepts that are applicable to the Autonomous JSON Database with Kay. Hi Kay, what's the main focus of the Autonomous JSON Database? How does it support developers in building NoSQL-style applications? Kay: Autonomous Database supports JavaScript Object Notation, also known as JSON, natively in the database. It supports applications that use the SODA API to store and retrieve JSON data, or SQL queries to store and retrieve JSON-formatted data.  Oracle AJD is Oracle ATP, Autonomous Transaction Processing, but it's designed for developing NoSQL-style applications that use JSON documents. You can promote an AJD service to ATP. 09:22 Nikita: What makes the development of NoSQL-style, document-centric applications flexible on AJD?  Kay: Development of these NoSQL-style, document-centric applications is particularly flexible because the applications use schemaless data. This lets you quickly react to changing application requirements. There's no need to normalize the data into relational tables and no impediment to changing the data structure or organization at any time, in any way. A JSON document has its own internal structure, but no relation is imposed on separate JSON documents. Nikita: What does AJD do for developers? How does it actually help them? 
Kay: So Autonomous JSON Database, or AJD, is designed for you, the developer, to allow you to use simple document APIs and develop applications without having to know anything about SQL. That's a win. But at the same time, it does give you the ability to create highly complex SQL-based queries for reporting and analysis purposes. It has a built-in binary JSON storage type, which is extremely efficient for searching and for updating. It also provides advanced indexing capabilities on the actual JSON data. It's built on Autonomous Database, so that gives you all of the self-driving capabilities we've been talking about, but you don't need a DBA to look after your database for you. You can do it all yourself. 11:00 Lois: For listeners who may not be familiar with JSON, can you tell us briefly what it is?  Kay: So I mentioned this earlier, but it's worth mentioning again. JSON stands for JavaScript Object Notation. It was originally developed as a human-readable way of providing information to interchange between different programs. So a JSON document is a set of fields. Each of these fields has a value, and those values can be of various data types. We can have simple strings, we can have integers, we can even have real numbers. We can have Booleans that are true or false. We can have date strings, and we can even have the special value null. Additionally, values can be objects, and objects are effectively whole JSON documents embedded inside a document. And of course, there's no limit on the nesting. You can nest as far as you like. Finally, we can have arrays, and an array can hold a list of scalar data types or a list of objects. 12:13 Nikita: Kay, how does the concept of schema apply to JSON databases? Kay: Now, JSON documents are stored in something that we call collections. Each document may have its own schema, its own layout, to the JSON. So does this mean that JSON document databases are schemaless? Hmmm. Well, yes. 
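The value types Kay lists, strings, numbers, Booleans, null, nested objects, and arrays, can be sketched in a few lines; the document below is invented purely for illustration:

```python
import json

# A sample JSON document showing the value types described above:
# strings, integers, real numbers, Booleans, null, a nested object,
# and an array. Field names here are made up for this example.
doc_text = """
{
  "name": "Alice",
  "age": 34,
  "score": 98.5,
  "active": true,
  "nickname": null,
  "address": {"city": "Austin", "zip": "78701"},
  "tags": ["json", "nosql", "documents"]
}
"""

doc = json.loads(doc_text)

# Each field carries its own type; the nested object is a document
# in its own right, with no schema imposed from outside.
print(type(doc["age"]).__name__)   # int
print(doc["address"]["city"])      # Austin
print(len(doc["tags"]))            # 3
```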
But there's nothing to fear because you can always use a check constraint to enforce a schema constraint that you wish to introduce to your JSON data. Lois: Kay, what about indexing capabilities on JSON collections? Kay: You can create indexes on a JSON collection, and those indexes can be of various types, including our flexible search index, which indexes the entire content of the document within the JSON collection, without having to know anything in advance about the schema of those documents.  Lois: Thanks Kay! 13:18 AI is being used in nearly every industry—healthcare, manufacturing, retail, customer service, transportation, agriculture, you name it! And, it's only going to get more prevalent and transformational in the future. So it's no wonder that AI skills are the most sought after by employers.  We're happy to announce a new OCI AI Foundations certification and course that is available—for FREE! Want to learn about AI? Then this is the best place to start! So, get going! Head over to mylearn.oracle.com to find out more.  13:54 Nikita: Welcome back! Sangeetha, I want to bring you in to talk about Oracle Text. Now I know that Oracle Database is not only a relational store but also a document store. And you can load text and JSON assets along with your relational assets in a single database.  When I think about Oracle and databases, SQL development is what immediately comes to mind. So, can you talk a bit about the power of SQL as well as its challenges, especially in schema changes? Sangeetha: Traditionally, Oracle has been all about SQL development. And SQL is an incredibly powerful language. But it does take some advanced knowledge to make the best of it. So SQL requires you to define your schema up front. And making changes to that schema can be a little tricky and sometimes a highly bureaucratic task. In contrast, JSON allows you to develop your schema as you go--the schemaless, perhaps schema-later model. 
By imposing less rigid requirements on the developer, it allows a more fluid and agile development style. 15:09 Lois: How does Oracle Text use SQL to index, search, and analyze text and documents that are stored in the Oracle Database? Sangeetha: Oracle Text can perform linguistic analyses on documents as well as search text using a variety of strategies, including keyword searching, context queries, Boolean operations, pattern matching, mixed thematic queries, HTML/XML section searching, and so on. It can also render search results in various formats, including unformatted text, HTML with term highlighting, and original document format. Oracle Text supports multiple languages and uses advanced relevance-ranking technology to improve search quality. Oracle Text also offers advanced features like classification, clustering, and support for information visualization metaphors. Oracle Text is now enabled automatically in Autonomous Database. It provides full-text search capabilities over text, XML, and JSON content. It can also extend current applications to make better use of textual fields, and it supports building new applications specifically targeted at document searching. Now, with all of the power of Oracle Database, a familiar development environment, and rock-solid Autonomous Database infrastructure for your text apps, we can deal with text in many different places and many different types of text. So it is not just in the database. We can deal with data that's outside of the database as well. 17:03 Nikita: How does it handle text in various places and formats, both inside and outside the database? Sangeetha: So in the database, we might be looking at a VARCHAR2 column, a LOB column, or a binary LOB column if we are talking about binary documents such as PDF or Word. Outside of the database, we might have a document on the file system or out on the web with URLs pointing to the document. 
If they are on the file system, then we would have a file name stored in the database table. And if they are on the web, then we would have a URL or a partial URL stored in the database. And we can then fetch the data from those locations and index it. We recognize many different document formats and extract the text from them automatically. So the basic forms we can deal with are plain text, HTML, JSON, and XML, and then formatted documents like Word docs, PDF documents, PowerPoint documents, and many other document types. All of those are automatically handled by the system and then processed into a format ready for indexing. And we are not restricted to English here, either. There are various stages in the index pipeline. A document starts at one end and is taken through the different stages until it finally reaches the index. 18:44 Lois: You mentioned the indexing pipeline. Can you take us through it? Sangeetha: So it starts with a data store. That's responsible for actually fetching the document. So once we fetch the document from the data store, we pass it on to the filter. And now the filter is responsible for processing binary documents into indexable text. So if you have a PDF, let's say a PDF document, that will go through the filter, which will strip out any images and return a stream of HTML text ready for indexing. Then we pass it on to the sectioner, which is responsible for identifying things like paragraphs and sentences. The output from the sectioner is fed into the lexer. The lexer is responsible for dividing the text into indexable words. The output of the lexer is fed into the index engine, which is responsible for laying out the index on disk. Storage, word list, and stop list are some additional inputs there. So storage tells exactly how to lay out the index on disk. The word list holds special preferences, like segmentation. And the stop list is a list of words that we don't want to index. 
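The pipeline Sangeetha walks through (data store, filter, sectioner, lexer, index engine, plus the storage, word list, and stop list inputs) can be sketched as a toy chain of functions. This is a deliberately simplified illustration of the idea, not Oracle Text's actual implementation:

```python
import re
from collections import defaultdict

# A toy version of the indexing pipeline described above. Each stage
# mirrors one component: datastore (fetch), filter (to plain text),
# sectioner (split into sentences), lexer (tokenize), and index engine
# (build an inverted index). Oracle Text's real stages are far richer.

STOP_LIST = {"the", "a", "is", "and"}  # words we don't want to index

def datastore(doc_id, documents):
    return documents[doc_id]                 # fetch the raw document

def filter_stage(raw):
    return re.sub(r"<[^>]+>", " ", raw)      # strip markup down to text

def sectioner(text):
    return re.split(r"[.!?]", text)          # naive sentence splitting

def lexer(sentence):
    words = re.findall(r"[a-z0-9]+", sentence.lower())
    return [w for w in words if w not in STOP_LIST]

def index_engine(documents):
    inverted = defaultdict(set)              # word -> set of doc ids
    for doc_id in documents:
        raw = datastore(doc_id, documents)
        for sentence in sectioner(filter_stage(raw)):
            for word in lexer(sentence):
                inverted[word].add(doc_id)
    return inverted

docs = {1: "<p>Object storage is scalable.</p>",
        2: "Backups need a restore test."}
index = index_engine(docs)
print(sorted(index["storage"]))   # [1]
```

A real CONTEXT index lets you swap or configure each of these stages through preferences, which is what the next remark about customization refers to.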
So each of these stages and inputs can be customized. Oracle has something known as the extensibility framework, which originally was designed to allow people to extend the capabilities of these products by adding new domain indexes. And this is what we've used to implement Oracle Text. So when the kernel sees the phrase INDEXTYPE ctxsys.context, it knows to handle all of the hard work of creating the index. 20:48 Nikita: Other than text indexing, Oracle Text offers additional operations, right? Can you share some examples of these operations? Sangeetha: So beyond the text index, there are other operations that we can do with Oracle Text, some of which are search related. Some examples of those are highlighting, markup, and snippets. Highlighting and markup are very similar. They are ways of fetching the search results back, marked up with highlighting within the document text. Snippet is very similar, but it only brings back the relevant chunks from the document that we are searching for. So rather than getting the whole document back, you just get a few lines showing the search term in context. Then there's theme extraction. So Oracle Text is capable of figuring out what a text is all about. We have a very large knowledge base of the English language, which allows it to understand the concepts and the themes in the document. Then there's entity extraction, which is the ability to find people, places, dates, times, zip codes, et cetera in the text. So this can be customized with your own user dictionary and your own user rules. 22:14 Lois: Moving on to advanced functionalities, how does Oracle Text utilize machine learning algorithms for document classification? And what are the key types of classifications? Sangeetha: Text analytics uses machine learning algorithms for document classification. We can process a large set of documents in a very efficient manner using Oracle's own machine learning algorithms. 
So you can look at that under basically three different headings. First of all, there's classification. And that comes in two different types-- supervised and unsupervised. With supervised classification, you provide the training set, a set of documents that already exhibit the particular characteristics that you're looking for. And then there's unsupervised classification, which allows the system itself to figure out which documents are similar to each other. It does that by looking at features within the documents. Each of those features is represented as a dimension in a massively high-dimensional feature space, and documents are clustered together according to their nearness in that feature space. Again, with named entity recognition, we've already talked about that a little bit. And then finally, there is sentiment analysis, the ability to identify whether the document is positive or negative with regard to a given particular aspect. 23:56 Nikita: Now, for those who are already Oracle database users, how easy is it to enable text searching within applications using Oracle Text? Sangeetha: If you're already an Oracle database user, enabling text searching within your applications is quite straightforward. Oracle Text uses the same SQL language as the database. And it integrates seamlessly with your existing SQL. Oracle Text can be used from any programming language which has a SQL interface, meaning just about all of them.  24:32 Lois: OK, from Oracle Text, I'd like to move on to Oracle Spatial Studio. Can you tell us more about this tool? Sangeetha: Spatial Studio is a no-code, self-service application that makes it easy to access the sorts of spatial features that we've been looking at, in particular, in order to get data prepared for spatial use, visualize results in maps and tables, and also do the analysis and share results. 
Spatial Studio is included at no extra cost with Autonomous Database. The studio web application itself has no additional cost, and it runs on the server. 25:13 Nikita: Let's talk a little more about the cost. How does the deployment of Spatial Studio work, in terms of the server it runs on?  Sangeetha: So, the server that it runs on, if it's running in the Cloud, that computing node, it would have some cost associated with it. It can also run on a free tier with a very small shape, just for evaluation and testing.  Spatial Studio is also available on the Oracle Cloud Marketplace. And there are a couple of self-paced workshops that you can access for installing and using Spatial Studio. 25:47 Lois: And how do developers access and work with Oracle Autonomous Database using Spatial Studio? Sangeetha: Oracle Spatial Studio allows you to access data in Oracle Database, including Oracle Autonomous Database. You can create connections to Oracle Autonomous Databases, and then you work with the data that's in the database. You can also use Spatial Studio to load data to Oracle Database, including Oracle Autonomous Database. So, you can load spreadsheets and files in common spatial formats. And once you've loaded your data or accessed data that already exists in your Autonomous Database, if that data does not already include native geometries, Oracle's native geometry type, then you can prepare the data if it has addresses or if it has latitude and longitude coordinates as a part of the data. 
And then the results can be dropped on the map and visualized so that you can actually see the results of the spatial questions that you're asking. When you've done some work, you can save your work in a project that you can return to later, and you can also publish and share the work you've done. 27:34 Lois: Thank you, Sangeetha. For the final part of our conversation today, we'll talk with Thea. Thea, thanks so much for joining us. Let's get the basics out of the way. How can data be loaded directly into Autonomous Database? Thea: Data can be loaded directly to ADB through applications such as SQL Developer, which can read data files, such as txt and xls, and load directly into tables in ADB. 27:59 Nikita: I see. And is there a better method to load data into ADB? Thea: A more efficient and preferred method for loading data into ADB is to stage the data in cloud object store, preferably Oracle's, but Amazon S3 and Azure Blob Storage are also supported. Any file type can be staged in object store. Once the data is in object store, Autonomous Database can access it directly. Tools can be used to facilitate the data movement between object store and the database. 28:27 Lois: Are there specific steps or considerations when migrating a physical database to Autonomous? Thea: A physical database can't simply be migrated to Autonomous, because the database must be converted to a pluggable database, upgraded to 19c, and encrypted. Additionally, any changes to Oracle-shipped stored procedures or views must be found and reverted. All uses of container database admin privileges must be removed. And all legacy features that are not supported must be removed, such as legacy LOBs. Data Pump (expdp/impdp) must be used for migrating database versions 10.1 and above to Autonomous Database, as it addresses the issues just mentioned. For online migrations, GoldenGate must be used to keep the old and new databases in sync. 
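Thea's migration prerequisites read like a pre-flight checklist, which can be sketched in a few lines; the check names here are invented for illustration and are not part of any Oracle tool:

```python
# An illustrative pre-migration checklist based on the requirements
# described above. The field names are made up for this sketch; they
# do not correspond to any Oracle tool or API.

REQUIRED = {
    "is_pluggable": True,               # converted to a pluggable database
    "version_19c": True,                # upgraded to 19c
    "encrypted": True,                  # database encrypted
    "oracle_objects_unmodified": True,  # Oracle-shipped procs/views reverted
    "no_cdb_admin_privileges": True,    # CDB admin privilege usage removed
    "no_unsupported_features": True,    # e.g., legacy LOBs removed
}

def migration_blockers(db_profile):
    """Return the checklist items that would block a move to Autonomous."""
    return sorted(k for k, v in REQUIRED.items() if db_profile.get(k) != v)

legacy_db = {"is_pluggable": False, "version_19c": False, "encrypted": True,
             "oracle_objects_unmodified": True, "no_cdb_admin_privileges": True,
             "no_unsupported_features": True}

print(migration_blockers(legacy_db))  # ['is_pluggable', 'version_19c']
```

As the transcript notes, a Data Pump export/import takes care of these issues as part of the move, which is why it is the recommended path.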
29:15 Nikita: When you're choosing the method for migration and loading, what are the factors to keep in mind? Thea: It's important to segregate the methods by functionality and limitations of use against Autonomous Database. The considerations are as follows. Number one, how large is the database to be imported? Number two, what is the input file format? Number three, does the method support non-Oracle database sources? And number four, does the method support using Oracle and/or third-party object store? 29:45 Lois: Now, let's move on to the tools that are available. What does the DBMS_CLOUD functionality do? Thea: The Oracle Autonomous Database has built-in functionality called DBMS_CLOUD, specifically designed so the database can move data back and forth with external sources through a secure and transparent process. DBMS_CLOUD allows data movement from the Oracle object store, whether the data comes from any application or data source exported to text-- .csv or JSON-- or is the output of third-party data integration tools. DBMS_CLOUD can also access data stored on Object Storage from the other clouds, AWS S3 and Azure Blob Storage. DBMS_CLOUD does not impose any volume limit, so it's the preferred method to use. SQL*Loader can be used for loading data located on the local client file systems into Autonomous Database. There are limits around OS and client machines when using SQL*Loader. 30:49 Nikita: So then, when should I use Data Pump and SQL Developer for migration? Thea: Data Pump is the best way to migrate a full or partial database into ADB, including databases from previous versions. Because Data Pump will perform the upgrade as part of the export/import process, this is the simplest way to get to ADB from any existing Oracle Database implementation. SQL Developer provides a GUI front end for Data Pump that can automate the whole export and import process from an existing database to ADB. 
SQL Developer also includes an import wizard that can be used to import data from several file types into ADB. A very common use of this wizard is for importing Excel files into ADW. Once a credential is created, it can be used to access a file as an external table or to ingest data from the file into a database table. DBMS_CLOUD makes it much easier to use external tables, and the ORGANIZATION EXTERNAL clause needed in other versions of Oracle Database is not needed. 31:54 Lois: Thea, what about Oracle Object Store? How does it integrate with Autonomous Database, and what advantages does it offer for staging data? Thea: Oracle Object Store is directly integrated into Autonomous Database and is the best option for staging data that will be consumed by ADB. Any file type can be stored in object store, including SQL*Loader files, Excel, JSON, Parquet, and, of course, Data Pump DMP files. Flat files stored on object store can also be used as Oracle Database external tables, so they can be queried directly from the database as part of a normal DML operation. Object store is separate from the storage allocated to the Autonomous Database for database objects, such as tables and indexes. That storage is part of the Exadata system Autonomous Database runs on, and it is automatically allocated and managed. Users do not have direct access to that storage. 32:50 Nikita: I know that one of the main considerations when loading and updating ADB is the network latency between the data source and the ADB. Can you tell us more about this? Thea: Many ways to measure this latency exist. One is the website cloudharmony.com, which provides many real-time metrics for connectivity between the client and Oracle Cloud Services. It's important to run these tests when determining which Oracle Cloud service location will provide the best connectivity. 
The Oracle Cloud Dashboard has an integrated tool that will provide real-time and historic latency information between your existing location and any specified Oracle Data Center. When migrating data to Autonomous Database, table statistics are gathered automatically during direct-path load operations. If direct-path load operations are not used, such as with SQL Developer loads, the user can gather statistics manually as needed. 33:44 Lois: And finally, what can you tell us about the Data Migration Service? Thea: Database Migration Service is a fully managed service for migrating databases to ADB. It provides logical online and offline migration with minimal downtime and validates the environment before migration. We have a requirement that the source database is on Linux. And it would be interesting to see whether we are going to have other use cases that need other, non-Linux operating systems. This requirement is because we are using SSH to directly execute commands on the source database. For this, we are certified on Linux only. Targets in the first release are Autonomous Databases, ATP or ADW, both serverless and dedicated. For the agent environment, we also require a Linux operating system. In general, we're targeting a number of different use cases-- migrating from on-premises, third-party clouds, Oracle legacy clouds, such as Oracle Classic, or even migrating within OCI Cloud, and doing that with or without a direct connection. If you don't have a direct connection behind a firewall, we support offline migration. If you have a direct connection, we support both offline and online migration. For more information on the migration approaches available for your particular situation, check out the Oracle Cloud Migration Advisor. 35:06 Nikita: I think we can wind up our episode with that. Thanks to all our experts for giving us their insights. 
Lois: To learn more about the topics we've discussed today, visit mylearn.oracle.com and search for the Oracle Autonomous Database Administration Workshop. Remember, all of the training is free, so dive right in! Join us next week for another episode of the Oracle University Podcast. Until then, Lois Houston… Nikita: And Nikita Abraham, signing off! 35:35 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
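The credential-plus-external-table flow Thea describes can be sketched as the PL/SQL an ADB user would submit. This is a minimal illustration only: the credential name, table name, columns, and bucket URL are placeholders, not values from the episode.

```python
# Sketch of the DBMS_CLOUD flow from the episode: register a credential,
# then expose a flat file on object store as an external table. All names
# and URLs below are hypothetical placeholders.

def create_credential_sql(cred_name: str, username: str) -> str:
    """PL/SQL to register an object store credential via DBMS_CLOUD."""
    return (
        "BEGIN\n"
        "  DBMS_CLOUD.CREATE_CREDENTIAL(\n"
        f"    credential_name => '{cred_name}',\n"
        f"    username        => '{username}',\n"
        "    password        => :auth_token);\n"
        "END;"
    )

def create_external_table_sql(table_name: str, cred_name: str, file_uri: str) -> str:
    """PL/SQL for an external table over a CSV file on object store.
    Note: no ORGANIZATION EXTERNAL clause is required with DBMS_CLOUD."""
    return (
        "BEGIN\n"
        "  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(\n"
        f"    table_name      => '{table_name}',\n"
        f"    credential_name => '{cred_name}',\n"
        f"    file_uri_list   => '{file_uri}',\n"
        "    format          => json_object('type' value 'csv'),\n"
        "    column_list     => 'sale_id NUMBER, amount NUMBER');\n"
        "END;"
    )

print(create_credential_sql("MY_CRED", "cloud_user"))
```

Once created, the external table can be queried like any other table, or ingested with DBMS_CLOUD.COPY_DATA using the same credential.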

Lead at the Top of Your Game
Leading the World of Object Storage with Garima Kapoor

Lead at the Top of Your Game

Play Episode Listen Later Nov 14, 2023 41:20


IN THIS EPISODE... In this digital age, where the volume of data is growing exponentially, object storage has emerged as a fundamental technology, particularly well-suited for cloud computing and big data applications. It offers the advantages of easy scalability, durability, and accessibility, making it an integral part of modern data management solutions. Unlike traditional file systems, which organize data into hierarchical folders and directories, object storage takes a different approach. My guest today, Garima Kapoor, Ph.D., is the Co-Founder and Chief Operating Officer (COO) of MinIO, Inc., an industry-leading company that has pioneered a high-performance, S3-compatible object store. With a solid educational background and extensive experience, Garima has been instrumental in MinIO's remarkable journey. Under her strategic leadership, MinIO has emerged as a powerhouse in data storage, specializing in large-scale AI/ML, data lake, and database workloads. The innovative object store solution MinIO offers is designed to meet the demanding requirements of modern data-driven applications. It is characterized by its software-defined architecture, enabling seamless deployment on a wide range of environments, including cloud and on-premises infrastructure. ------------ Full show notes, links to resources mentioned, and other compelling episodes can be found at http://LeadYourGamePodcast.com. (Click the magnifying icon at the top right and type “Garima”.) Love the show? Subscribe, rate, review, and share! ------------ JUST FOR YOU: Increase your leadership acumen by identifying your personal Leadership Trigger. Take my free quiz and instantly receive your 5-page report. Need to up-level your workforce or execute strategic People initiatives? 
https://shockinglydifferent.com/contact or tweet @KaranRhodes. ------------- ABOUT GARIMA KAPOOR: Garima Kapoor is a prominent figure in the tech industry, known for her role as the Chief Operating Officer (COO) and co-founder of MinIO, a cutting-edge technology company. With a solid financial background, she initially served as the company's Chief Financial Officer (CFO) before taking on her current leadership position. Garima is not only a successful entrepreneur but also an active investor and advisor to emerging technology companies in the dynamic landscape of Silicon Valley. Her academic journey is equally impressive, holding a Doctor of Philosophy (Ph.D.) in Accounting and Finance from Nirma University, a Masters in Economics from Banasthali Vidyapith, and a Bachelor of Science (BS) degree in Economics from Delhi University. Garima's multifaceted expertise and leadership have played a pivotal role in shaping the success of MinIO and contributing to the advancement of technology in the digital era. ------------ WHAT TO LISTEN FOR: 1. What does MinIO do, and how does it help organizations? 2. What is object storage? 3. What are the tips for building a successful startup? 4. What is the role of fundraising and product development in the growth of a storage company? 5. What is courageous agility, and how does it help to navigate unpredictable paths in leadership and...
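The flat, key-addressed model the episode contrasts with hierarchical file systems can be illustrated with a tiny in-memory sketch. This is a conceptual toy, not any vendor's API: keys like "logs/2023/app.log" are opaque strings, and "folders" exist only as a prefix-listing convention, as in S3.

```python
# Minimal sketch of an object store's flat namespace: no directory tree,
# just key -> bytes, with listing done as a prefix scan.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # flat namespace: key -> bytes

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def list(self, prefix: str = "") -> list[str]:
        # "Folders" are an illusion: listing is just a prefix match.
        return sorted(k for k in self._objects if k.startswith(prefix))

store = ObjectStore()
store.put("logs/2023/app.log", b"started")
store.put("images/cat.png", b"\x89PNG")
print(store.list("logs/"))  # ['logs/2023/app.log']
```

Because there is no hierarchy to rebalance, this model scales out by hashing or range-partitioning keys across nodes, which is what makes object stores a natural fit for the large AI/ML and data lake workloads discussed above.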

Data Protection Gumbo
206: Bridging the On-Prem and Cloud Storage Gap - Cloudian

Data Protection Gumbo

Play Episode Listen Later Jul 25, 2023 22:46


Jon Toor, Chief Marketing Officer at Cloudian, joins us to dissect the benefits of object storage. We delve into the dynamic interplay between cloud and on-premises solutions, and how object storage can be a game-changer for backup environments, offering a secure and cost-effective solution.

Data Protection Gumbo
202: Object Storage vs. Tape Backup: The Ultimate Showdown - OSNEXUS

Data Protection Gumbo

Play Episode Listen Later Jun 27, 2023 29:48


Steven Umbehocker, Founder and CEO of OSNEXUS takes us on a flavorful journey through the world of data storage as he dishes out his knowledge on the heated debate between object storage and tape backup. We'll be stirring the pot on the advantages and disadvantages of both storage options, exploring the concept of immutability, and discussing the different tiers of data storage.
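The immutability concept raised in the episode, where backup objects cannot be altered or deleted until a retention window passes, can be sketched in a few lines. This is a pure illustration of the write-once-read-many (WORM) idea in the spirit of S3 Object Lock, not OSNEXUS's or any vendor's actual API.

```python
# Toy WORM (write-once, read-many) store: objects are immutable and
# cannot be deleted while their retention period is active.

import time

class WormStore:
    def __init__(self):
        self._data = {}          # key -> bytes
        self._retain_until = {}  # key -> epoch seconds

    def put(self, key: str, data: bytes, retention_secs: float) -> None:
        if key in self._data:
            raise PermissionError("object is write-once")
        self._data[key] = data
        self._retain_until[key] = time.time() + retention_secs

    def delete(self, key: str) -> None:
        if time.time() < self._retain_until[key]:
            raise PermissionError("retention period still active")
        del self._data[key]

store = WormStore()
store.put("backup/full-001", b"...", retention_secs=3600)
```

This is the property that makes object storage attractive against ransomware: like a tape sitting on a shelf, a locked object cannot be overwritten, but unlike tape it remains online and instantly readable.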

The Cloudcast
The Economics & Beyond of Object Storage

The Cloudcast

Play Episode Listen Later Jun 7, 2023 36:14


Jon Toor (CMO @CloudianStorage) talks about the history and evolution of object storage, the rise of enterprise-class object storage, and the changing economics of cloud storage.
SHOW: 725
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT - "CLOUDCAST BASICS"
SHOW SPONSORS: CloudZero – Cloud Cost Visibility and Savings. CloudZero provides immediate and ongoing savings with 100% visibility into your total cloud spend. Datadog Application Monitoring: Modern Application Performance Monitoring. Get started monitoring service dependencies to eliminate latency and errors and enhance your users' app experience with a free 14-day Datadog trial. Listeners of The Cloudcast will also receive a free Datadog T-shirt.
SHOW NOTES: Cloudian website
Topic 1 - Welcome to the show, Jon. Tell us a little bit about your background. Your career and Cloudian parallel in many ways.
Topic 2 - Cloudian has been around since before object storage was cool. We first heard about Cloudian back in the OpenStack and early AWS S3 days. Object storage has come a long way. Can you help everyone frame where we were and where we are today?
Topic 3 - We've seen the rise of enterprise-class, S3-compatible object storage for use cases like hybrid cloud, data sovereignty, and more recently analytics such as data lakehouses. Where are you seeing implementations these days as we've moved beyond basic, simple storage behind cloud backends?
Topic 4 - With the recent changes to the world economy, how much does economics come into conversations around the design of solutions? There's often a healthy tension between what is technically possible and what is economically feasible. How does that design conversation play out lately?
Topic 5 - We used to talk about "Data Gravity" all the time. The concept, for those unfamiliar, is that data has a certain weight and attracts more data to existing sources and becomes hard to move over time. We haven't talked about it as much in recent years, and we are seeing the rise of hybrid and multicloud solutions, but folks often don't think about access to the data. Where are folks building large data sets? What are they using them for? Are they ever moving them?
Topic 6 - Last question: Cloudian is well known for its partnerships, alliances, and solutions. You partner with hardware companies, software companies, backup companies, public clouds, etc. It's quite a mix. Has this been a factor in Cloudian's longevity? Tell everyone a little bit about how this came to be and how important you see this for the future.
FEEDBACK? Email: show at the cloudcast dot net. Twitter: @thecloudcastnet

COMPRESSEDfm
136 | The Surprisingly Deep World of Web Forms

COMPRESSEDfm

Play Episode Listen Later May 25, 2023 61:41


Austin joins James and Amy to take a deep dive into Web Forms.
Show Notes:
00:00 Introduction
03:57 Intro to Forms
06:28 Bringing in JavaScript
10:50 Potholes
13:24 Modern Frameworks
16:45 Progressive Enhancement
26:59 What can go wrong with Java?
31:55 Forms in Svelte
35:36 File Uploads
43:50 Serverless and File Systems
48:45 Flaws of Object Storage

Veeam Partner Perspectives with Eric Dougherty
OOTBI? Is that even a word?

Veeam Partner Perspectives with Eric Dougherty

Play Episode Listen Later May 18, 2023 36:51


Yes, there is an Object Storage solution built specifically for Veeam, but it isn't by Veeam. In this episode, Matt Price and Anthony Cusimano from Object First discuss why partners should be teaming with Object First for on-prem object storage for backup and recovery solutions. If you want to know what OOTBI is, you'll have to listen!

Great Things with Great Tech!
Delivering Petabyte Scale Storage with Scality | Episode #64

Great Things with Great Tech!

Play Episode Listen Later May 2, 2023 40:46


Breaking through the scalability barrier with purpose-built Object Storage for increased availability, durability and performance! In this episode I talk with Paul Speciale, CMO at Scality. Scality's software-based storage delivers billions of files to five hundred million users daily with 100% availability. They make standard x86 servers scale to hundreds of petabytes and billions of objects. Scality has transformed the way organizations store and manage their data with their flagship product, Scality RING, and their more recent launch, Scality ARTESCA. Paul and I discuss the company's history, the adaptive storage approach, and how Scality partners with industry giants like HPE, Veeam, Dell, and Cisco. The episode also delves into the growing threat of #ransomware and how Scality's expanded capabilities can help combat it. The company was founded in 2009 in France and is headquartered in the European Union (EU). ☑️  Support the Channel by buying a coffee? - https://ko-fi.com/gtwgt   ☑️  Technology and Technology Partners Mentioned: Veeam, OpenStack, Swift, Object Storage, S3, AWS, HPE, DELL, Cisco, VMware   ☑️ Web: https://scality.com ☑️ Crunch Base Profile: https://www.crunchbase.com/organization/scality ☑️ Interested in being on #GTwGT? Contact via Twitter @GTwGTPodcast or go to https://www.gtwgt.com   ☑️ Subscribe to YouTube: ⁠⁠⁠https://www.youtube.com/@GTwGTPodcast?sub_confirmation=1⁠⁠⁠ Web - https://gtwgt.com Twitter - https://twitter.com/GTwGTPodcast Spotify - https://open.spotify.com/show/5Y1Fgl4DgGpFd5Z4dHulVX Apple Podcasts - https://podcasts.apple.com/us/podcast/great-things-with-great-tech-podcast/id1519439787, ☑️  Music: https://www.bensound.com

Engineering Kiosk
#69 MySQL vs. MariaDB

Engineering Kiosk

Play Episode Listen Later May 2, 2023 70:59


How much of a MySQL drop-in replacement is MariaDB really? MariaDB is a fork of the popular MySQL database, launched to be a drop-in replacement and a direct alternative to MySQL. But how much truth is there to that? Is MariaDB MySQL-compatible? Where do the similarities and differences lie? What was the original reason for the fork? In which areas are the two databases now developing in completely different directions? And what has been happening with storage engines? In this episode, we shed some light on the direct comparison between MySQL and MariaDB. Bonus: what a Weber grill has to do with MySQL and MariaDB. Quick feedback on the episode:
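One practical corner of the drop-in-replacement question is simply telling the two servers apart at runtime: MariaDB reports version strings like "10.11.2-MariaDB", while MySQL reports plain versions such as "8.0.33". A minimal sketch of that check (the version strings here are illustrative examples):

```python
# Distinguish MariaDB from MySQL by the server version string a client
# receives, e.g. from `SELECT VERSION()`.

def server_flavor(version_string: str) -> str:
    return "MariaDB" if "mariadb" in version_string.lower() else "MySQL"

print(server_flavor("10.11.2-MariaDB"))  # MariaDB
print(server_flavor("8.0.33"))           # MySQL
```

Tools that claim compatibility with both often branch on exactly this kind of check, because features such as storage engines and JSON handling have diverged between the two since the fork.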

The Datanation Podcast - Podcast for Data Engineers, Analysts and Scientists
BONUS: What is Object Storage like AWS S3, Minio and more!

The Datanation Podcast - Podcast for Data Engineers, Analysts and Scientists

Play Episode Listen Later Apr 12, 2023


Alex Merced discusses what Object Storage is and the history of file systems. Join the community at datanation.click

Data Protection Gumbo
176: The Evolution of Data Storage: From Disks to Tapes to Objects in the Cloud - OSNEXUS/Seagate Lyve Cloud

Data Protection Gumbo

Play Episode Listen Later Jan 10, 2023 25:01


Steven Umbehocker, Founder and CEO at OSNEXUS, and John Conlin, Territory Manager at Seagate Lyve Cloud, discuss the evolution of backup from disk-to-tape to disk-to-object and disk-to-cloud.

Chief Future Officer
12 The musician - CFO

Chief Future Officer

Play Episode Listen Later Oct 31, 2022 29:17


Intekhab is the CFO of WEKA, which has been recognized as a Visionary for a second year in the 2022 Gartner Magic Quadrant for Distributed File Systems and Object Storage. Intekhab has directed finance at several different tech startups over the last two decades, sits on the board of various non-profits, and serves as an advisor to many startups. In this episode we take a dive into Intekhab's personal journey - from crucial mentorship experiences, plenty of advice he received and is now passing on to younger generations, and the role music has played in his journey and success. For all finance professionals looking to upskill - as well as young people looking to get a foot in the door - this episode is for you.

The Micronaut Podcast
014 - Micronaut Object Storage

The Micronaut Podcast

Play Episode Listen Later Oct 11, 2022 29:00


Google Cloud Platform Podcast
Storage Spotlight with Sean Derrington and Nishant Kohli

Google Cloud Platform Podcast

Play Episode Listen Later Sep 14, 2022 30:59


Host Stephanie Wong chats with storage pros Sean Derrington and Nishant Kohli this week to learn more about cost optimization with storage projects and exciting new launches in the Google Cloud storage space! To start, we talk about the Storage Spotlight of years past and the cool Google Cloud products that Google is unveiling this year. Optimization is a huge theme this year, with a focus not only on cost optimization but also performance and resource use as well. Enterprise readiness and storage everywhere, Sean tells us, are the most important pillars as Google continues to improve offerings. We learn about Hyperdisk and the three customizable attributes users can control and the benefits of Filestore Enterprise for GKE for large client systems. Nishant talks about Cloud Storage and how clients are using it at scale for their huge data projects. Specifically, Google Storage has been working to help clients with large-scale data storage needs to optimize costs with Autoclass. Storage Insights is another new tool launching late this year or early next year that empowers better decision-making through increased knowledge and analytics of storage usage. GKE storage is getting a revamp as well with Backup for GKE to help clients recover applications and data easily. Google Cloud for Backup and DR helps keep projects secure as well. This managed service is easy to use and integrate into all cloud projects and can be used with on prem projects and then backed up into the cloud. This is ideal for clients as they shift to cloud or hybrid systems. Companies like Redivis take advantage of some of these new data features, and Nishant talks more about how Autoclass and other tools have helped them save money and improve their business. Sean Derrington Sean is the Group Product Manager for the storage team. He is a long time storage industry PM veteran; he's worked on Veritas, Symantec, Exablox (storage startup). 
Nishant Kohli: Nishant has a decade-plus of Object Storage experience at Dell/EMC and Hitachi. He's currently Senior Product Manager on the storage team.
Cool things of the week: Cloud Next 2022 site; Integrating ML models into production pipelines with Dataflow blog; Four non-traditional paths to a cloud career (and how to navigate them) blog
Interview: What's New & Next: A Spotlight on Storage site; Google Cloud Online Storage Products site; GCP Podcast Episode 277: Storage Launches with Brian Schwarz and Sean Derrington podcast; GKE site; Filestore site; Filestore Enterprise site; Filestore Enterprise for fully managed, fault tolerant persistent storage on GKE blog; Cloud Storage site; Cloud Storage Autoclass docs; GCP Episode 307: FinOps with Joe Daly podcast; Storage Insights docs; GCP Podcast Episode 318: GKE Turns 7 with Tim Hockin podcast; Backup for GKE docs; Backup and DR Service site; Redivis site
What's something cool you're working on? Stephanie is working on new video content and two Next sessions: one teaching how to simplify and secure your network for all workloads and one talking about how our infrastructure partner ecosystem helps customers.
Hosts: Stephanie Wong

Kubernetes Bytes
Kubernetes COSI 101 with Sid Mani

Kubernetes Bytes

Play Episode Listen Later Aug 20, 2022 56:47


In this episode of Kubernetes Bytes, Ryan and Bhavin talk to the SIG Storage COSI Co-Lead Sid Mani about the Container Object Storage Interface (COSI) project, as it enters the Alpha phase of the maturity cycle. The discussion dives into the need for a different Object Storage standard, how it works with Kubernetes, the vision of the community, and how people/vendors can contribute to the ecosystem. Show links Acorn Labs - https://venturebeat.com/programming-development/open-source-acorn-takes-a-new-approach-to-deploy-cloud-native-application-on-kubernetes/ Ghost Security Emerges from Stealth, Announces Initial $15M in Funding at $50M validation - https://ghost.security/blog/ghost-security-emerges-from-stealth-announces-initial-15m-in-funding?hsLang=en Granulate launches a new free tool for optimizing K8s costs called gMaestro - https://sdtimes.com/kubernetes/granulate-launches-new-free-tool-for-optimizing-kubernetes-costs/ What's new with Kubernetes 1.25 - https://sysdig.com/blog/kubernetes-1-25-whats-new/ Kubernetes volumes for beginners - https://dev.to/iarchitsharma/kubernetes-volume-explained-for-beginners-3doj Intro to eBPF - https://chrisshort.net/intro-to-ebpf XKCD link - https://xkcd.com/927/ Kubernetes CSI 101 episode - https://anchor.fm/kubernetesbytes/episodes/Container-and-Kubernetes-Storage-101-e1647o1/a-a6cgu6a
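For readers curious what the COSI API discussed here looks like from the user side, below is a sketch of a BucketClaim. The field names follow the publicly documented v1alpha1 spec as the project entered alpha; the API may change, and all resource names here are placeholders.

```yaml
# Hypothetical COSI v1alpha1 BucketClaim: a user asks for a bucket the
# way a PVC asks for a volume, referencing an admin-provided BucketClass.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: my-bucket-claim
spec:
  bucketClassName: fast-s3   # placeholder class created by the cluster admin
  protocols:
    - S3
```

The parallel with CSI is deliberate: BucketClass plays the role StorageClass plays for volumes, and a separate BucketAccess object grants workloads credentials, which is why a distinct standard was needed; object stores hand out keys and endpoints rather than mountable block or file devices.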

The Datanation Podcast - Podcast for Data Engineers, Analysts and Scientists

Alex Merced discusses Iceberg’s Object Storage Compatibility features and why they are so useful for Modern data lakes and data lakehouses.
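The core idea behind the object-storage compatibility features discussed here is spreading data files across many key prefixes, so writes do not hot-spot a single key range in stores like S3. A minimal sketch of that idea (the path layout below is an illustration, not Iceberg's exact scheme):

```python
# Sketch: prepend a short hash to each data file's key so files scatter
# across prefixes instead of piling up under one "directory".

import hashlib

def object_store_key(table_location: str, file_name: str) -> str:
    digest = hashlib.md5(file_name.encode()).hexdigest()[:8]
    return f"{table_location}/{digest}/{file_name}"

print(object_store_key("s3://lake/sales/data", "00001.parquet"))
```

Because table formats like Iceberg track file locations in metadata rather than by listing "directories", the physical keys are free to be hash-scattered like this, which is what makes the layout practical on object storage.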

Great Things with Great Tech!
Episode 48 - MinIO

Great Things with Great Tech!

Play Episode Listen Later Jul 4, 2022 35:45


In this episode I talk with Garima Kapoor, Chief Operating Officer and co-founder at MinIO. MinIO has built a high-performance, distributed, S3-compatible object storage solution that is software-defined and hardware-agnostic, designed for large-scale data infrastructure. Garima and I talk about how MinIO was built from scratch with the private cloud as its target but has also become a de facto standard for home labs through a strong focus on community. MinIO also offers high-performance, Kubernetes-native object storage, optimized for cloud-native workloads, and looks to deliver enterprise and service provider use cases around ML/AI, analytics, backup, and archival workloads. MinIO was founded in 2014 and is headquartered in the San Francisco Bay Area. ☑️ Technology and Technology Partners Mentioned: Block Storage, Storage, NVMe, Object Storage, Kubernetes, VMware, S3, AWS, Azure, GCP, Software Defined Storage ☑️ Raw Talking Points: Community; S3 API; Millions of Docker pulls; PUT, GET, DELETE; Open source or purchase subscription; Single binary, 34MB?; Written in Go; CPU offloading; Web browser/interface; Significance of GUI; Accessibility; Cluster with multiple nodes; Scaling and performance; Resilience at scale; Consistency; Unstructured data; Data growth and the move to Object Storage; Kubernetes + CSIs ☑️ Web: https://min.io ☑️ Interested in being on #GTwGT? Contact via Twitter @GTwGTPodcast or go to https://www.gtwgt.com ☑️ Music: https://www.bensound.com

Data Protection Gumbo
146: Backup and Recovery Paparazzi Edition: All-Flash! - VAST Data

Data Protection Gumbo

Play Episode Listen Later May 31, 2022 34:33


Howard Marks, Technologist Extraordinary and Plenipotentiary at VAST Data discusses the rules of nines, pros and cons of using all-flash storage, and using replication to keep RTOs and RPOs in check.

Great Things with Great Tech!
Episode 46 - StorPool

Great Things with Great Tech!

Play Episode Listen Later May 11, 2022 41:37


In this episode I talk with Boyan Krosnov, Chief Product Officer and co-founder at StorPool. StorPool is a market-leading software-defined storage vendor, offering reliable, fast storage platforms with a focus on low-latency throughput, covering public and private cloud platforms and serving managed cloud and service providers as well as enterprises and SaaS vendors. Boyan and I talk about how StorPool takes a hardware-agnostic approach that lets it run across multiple hardware platforms and configurations while still maintaining reliable, fast storage, and how they have ridden the alternative new-age IT stacks to success. StorPool was founded in 2011 and is headquartered in Sofia, Bulgaria. ☑️ Technology and Technology Partners Mentioned: Block Storage, Storage, NVMe, Object Storage, Kubernetes, VMware, KVM, Hyper-V, OpenStack, Cloudstack, Software Defined Storage ☑️ Raw Talking Points: MSP angle; Cloud platforms; SDS 2.0; How is it designed and architected?; Filesystem?; Object Storage based?; Pooling of capacity and performance; Standard storage, storage and compute; Performance methodology: IOPS/latency/storage consumption; Decreasing latency; Proper benchmarking; Resiliency; Failure domains/resolution; Differential?; Storage protocols; Distributed storage?; Running on any hardware; Future of storage with public cloud and more managed data platforms; Continuous improvement process; New-age IT stacks ☑️ Web: https://storpool.com/ ☑️ Interested in being on #GTwGT? Contact via Twitter @GTwGTPodcast or go to https://www.gtwgt.com ☑️ Music: https://www.bensound.com

LINUX Unplugged
452: Synapse Collapse

LINUX Unplugged

Play Episode Listen Later Apr 4, 2022 54:23 Very Popular


How we nearly crashed our Matrix server; what we did wrong and how we're fixing it. Plus an update on elementary OS, GNOME's next chapter, and we kick off the NixOS Challenge. NixOS Challenge Goals: Study the Nix Expression Language (https://nixos.wiki/wiki/Nix_Expression_Language). Set up at least one Nix/NixOS system (https://nixos.org/manual/nixos/stable/). Install htop (https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/system/htop/default.nix). Join the Nix Nerds Matrix channel (https://linuxunplugged.com/matrixinfo). Post a screenshot in the NixOS Challenge GitHub (https://github.com/JupiterBroadcasting/nixos-challenge/). Complete all the above before the end of April. Special Guest: Danielle Foré.

Data Protection Gumbo
136: The New Storage Standard for Data - Scality

Data Protection Gumbo

Play Episode Listen Later Mar 22, 2022 29:42


Paul Speciale, Chief Marketing Officer at Scality discusses several different use cases of file and object storage software, how to build a scalable data platform, and some new and emerging technology on the storage horizon.

Data Protection Gumbo
133: Cloud Simplification vs Cloud Complications - WekaIO

Data Protection Gumbo

Play Episode Listen Later Mar 1, 2022 36:09


Shimon Ben-David, CTO of WekaIO discusses some of the storage challenges posed by modern applications that leverage Cloud today, his views on artificial intelligence, and the best way to deal with unstructured data.

Building the Backend: Data Solutions that Power Leading Organizations
Transform Your Object Storage Into a Git-like Repository With Paul Singman @ LakeFS

Building the Backend: Data Solutions that Power Leading Organizations

Play Episode Listen Later Mar 1, 2022 27:23


In this episode we speak with Paul Singman, Developer Advocate at Treeverse / LakeFS. LakeFS is an open source project that allows you to transform your object storage into a Git-like repository.
Top 3 takeaways:
1. LakeFS enables use cases like debugging, letting you quickly view historical versions of your data at a specific point in time, and running ML experiments over the same set of data with branching.
2. The current data landscape is very fragmented, with many tools available. Over the coming years there will most likely be consolidation toward tools that are more open and integrated.
3. Data quality and observability continue to be key components of successful data lakes, along with having visibility into job runs.
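The Git-like branching idea described here can be sketched in a few lines: a branch is just a mutable pointer to an immutable set of key-to-object mappings, so creating one is a metadata copy, not a data copy. This is a conceptual toy, not lakeFS's actual design or API.

```python
# Toy version of branching over an object store: branches share objects,
# and committing to one branch never disturbs another.

class VersionedStore:
    def __init__(self):
        self.branches = {"main": {}}  # branch name -> {key: data}

    def commit(self, branch: str, key: str, data: bytes) -> None:
        self.branches[branch][key] = data

    def create_branch(self, name: str, source: str) -> None:
        # Metadata-only copy: the underlying objects are shared.
        self.branches[name] = dict(self.branches[source])

repo = VersionedStore()
repo.commit("main", "data/users.parquet", b"v1")
repo.create_branch("experiment", "main")
repo.commit("experiment", "data/users.parquet", b"v2")
print(repo.branches["main"]["data/users.parquet"])  # b'v1' -- main is unaffected
```

This is what makes the debugging and ML-experiment use cases above cheap: an "experiment" branch over petabytes of data costs only the branch metadata until something diverges.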

Cribl: The Stream Life
The State of Logging in the Enterprise

Cribl: The Stream Life

Play Episode Listen Later Feb 18, 2022 24:51


In this episode of The Stream Life podcast, Ed Bailey is joined by Jon Rust from Cribl to discuss his journey from ISP entrepreneur to Splunk admin to Sales Engineer for Cribl.
Links:
Maximize and Regexify Your Lookups with LogStream
How To Replay Events from Object Storage into Your Analytics Systems using LogStream
Using Webhooks in LogStream to Trigger Incidents in the PagerDuty API

GreyBeards on Storage
128: GreyBeards talk containers, K8s, and object storage with AB Periasamy, Co-Founder&CEO MinIO

GreyBeards on Storage

Play Episode Listen Later Jan 27, 2022 46:18


Sponsored by: Once again Keith and I are talking K8s storage, only this time it was object storage. Anand Babu (AB) Periasamy, Co-founder and CEO of MinIO, has been on our show a couple of times now and it's always an insightful discussion. He's got an uncommon perspective on IT today and what needs to …