Podcasts about rdbms

  • 62 PODCASTS
  • 84 EPISODES
  • 32m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Apr 27, 2024 LATEST

POPULARITY

(popularity trend chart, 2017 to 2024)


Best podcasts about rdbms

Latest podcast episodes about rdbms

The PeopleSoft Administrator Podcast
#341 - PeopleTools Pathway

The PeopleSoft Administrator Podcast

Play Episode Listen Later Apr 27, 2024 61:15


This week on the podcast, Kyle and Dan talk about the future pathways to take to make sure you are taking advantage of the PeopleTools platform. The PeopleSoft Administrator Podcast is hosted by Dan Iverson and Kyle Benson.

Show Notes: PeopleTools Pathway
  • General PeopleTools Changes @ 5:45
  • Development Tools @ 12:30
  • Search Capabilities @ 24:00
  • Infrastructure Platforms @ 32:30
  • Lifecycle Management @ 46:15

Reference Links:
  • Oracle Extends Support for PeopleSoft to 2034
  • PeopleSoft Info Portal – PeopleTools Delivered features (CFO tool)
  • New feature overview (Doc ID 2991346.1)
  • Product roadmap (Doc ID 1966243.2)
  • Maintenance Schedule (Doc ID 876292.1)
  • PeopleSoft – Ideas Lab
  • Graham Smith's Data for PeopleTools Releases
  • Critical Patch Updates, Security Alerts and Bulletins
  • PeopleSoft PeopleTools Maintenance Patches and Deployment Packages Released for 2024 - All (Doc ID 2997838.1)
  • E-CERT: Finding a Certified Combination of PeopleTools, PeopleSoft Enterprise Applications, and RDBMS: an Example with Screenshots (Doc ID 1463015.1)
  • PeopleSoft on Oracle Cloud Infrastructure
  • E-UPG: New PeopleTools 8.60 Upgrade Projects - PPLTLS859 and PPLTLS860 (Doc ID 2910205.1)

Voice of the DBA
Navigating the Database Landscape

Voice of the DBA

Play Episode Listen Later Mar 12, 2024 3:25


The title of our keynote session at the Redgate Summit in Atlanta is Navigating the Database Landscape, and I'll be delivering part of the talk, along with Grant Fritchey and Kathi Kellenberger, today, Mar 13. This is based on the State of the Database Landscape survey results, as well as our experience working with customers and implementing DevOps solutions over the last decade. The talk was mostly written by others, but as I rehearsed the session, I found myself wondering how I'd approach my job if I returned to being a DBA or developer. When working in technology today, there are many challenges outside of actually learning about any of the particular products, languages, platforms, etc. We have the politics of working with others, ongoing work, emergency requests outside of channels, random questions asked by others, code reviews, and probably a few other things I'm forgetting, all outside of learning any new skills. While I consider myself a lifelong learner, I know that finding time (and energy) to acquire the basics of any new technology is challenging. Read the rest of Navigating the Database Landscape

Voice of the DBA
Continuity Across Restarts

Voice of the DBA

Play Episode Listen Later Jan 12, 2024 3:25


There are a lot of database platforms, and each tries to convince you theirs is better. As Brent points out in that link, sometimes they just skip comparing themselves to other platforms because it makes them look better. They only look at the platforms they compete well against. For most of us, we often just need basic CRUD operations. I know that most RDBMS platforms would work for us, and sometimes NoSQL ones work as well, though I think that NoSQL isn't necessarily better for many applications (maybe most). You may feel differently, but that's my view. While I use SQL Server, I think the majority of systems I've managed or built could easily run on MySQL, PostgreSQL, or many other platforms. Read the rest of Continuity Across Restarts
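The basic CRUD operations mentioned here look essentially identical on any of these platforms. A minimal sketch using Python's built-in sqlite3 module (the users table is made up for illustration); the same four statements run on SQL Server, MySQL, or PostgreSQL with only minor dialect changes:

```python
import sqlite3

# In-memory database; the same four statements work on most RDBMS platforms.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Create
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
# Read
row = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()
# Update
conn.execute("UPDATE users SET name = ? WHERE id = 1", ("Ada Lovelace",))
updated = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()
# Delete
conn.execute("DELETE FROM users WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(row, updated, remaining)
```

If an application really only needs these four operations, the choice of relational engine is, as the episode argues, mostly interchangeable.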

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

SQL (or RDBMS) has been around for a long time now. NoSQL databases are relatively newer, but they too have been around for what feels like an eon! Despite that, every time a new app, a new service, or a new system is designed, there continues to be a sense of uncertainty around which direction to go (at least, it seems that way). There are a few reasons for that, and there is no single correct answer. So, while it surely depends on a variety of factors, and there is no dearth of documentation online about the differences between the two types of database systems, I still think it is worth a short course. In this hour-long course, we'll take another look at the database options available today and try to determine which ones would make the most sense, and when. Purchase the course in one of two ways:
1. Go to https://getsnowpal.com and purchase it on the Web.
2. On your phone:
   (i) If you are an iPhone user, go to http://ios.snowpal.com and watch the course on the go.
   (ii) If you are an Android user, go to http://android.snowpal.com.
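One core difference between the two families being compared: a relational store normalizes a record into typed rows across tables, while a document store keeps the whole record as one nested document. A rough illustrative sketch in Python (the order record and table names are invented, and a plain dict stands in for a document database such as MongoDB):

```python
import json
import sqlite3

order = {"id": 1, "customer": "Ada", "items": [{"sku": "A1", "qty": 2}]}

# Relational shape: normalized into two tables linked by a foreign key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.execute("CREATE TABLE order_items (order_id INTEGER, sku TEXT, qty INTEGER)")
conn.execute("INSERT INTO orders VALUES (?, ?)", (order["id"], order["customer"]))
for item in order["items"]:
    conn.execute("INSERT INTO order_items VALUES (?, ?, ?)",
                 (order["id"], item["sku"], item["qty"]))

# Document shape: the whole record stored and fetched as one unit.
doc_store = {order["id"]: json.dumps(order)}
fetched = json.loads(doc_store[1])

# Reassembling the relational version requires a join at read time.
joined = conn.execute(
    "SELECT o.customer, i.sku, i.qty FROM orders o "
    "JOIN order_items i ON i.order_id = o.id").fetchall()
print(fetched["customer"], joined)
```

Which shape "makes the most sense" tends to follow the access patterns: ad hoc cross-entity queries favor the normalized rows, whole-record reads and writes favor the document.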

SaaS for Developers
The Promise of Serverless

SaaS for Developers

Play Episode Listen Later Jul 28, 2023 66:30


When developers talk about Serverless, they often focus on FaaS. But the best Serverless experience, by far, is delivered by a data store. S3. Why? Because it "just works" and lets developers focus on their code. Serverless databases help you focus on your queries and workload. They abstract the compute. Which also means - usage based pricing. In this video, Ram Subramanian, Nile's CEO, joins us to discuss his vision of the perfect Serverless database experience. We talk about: - What makes S3 so amazing? - What would the S3 experience look like if we apply it to RDBMS? - Development cycle: Serverless requirements when coding, testing and finally in production. - Performance of Serverless databases, and what will make performance tuning a better experience - Elasticity and scalability. We agreed that "scale to zero" doesn't mean what everyone thinks it means. - Cost of Serverless. Is it actually worth it? - Architectures: Compute-storage separation, disaggregation, sharding and gateways. - Multi-regions concerns And of course: Do Serverless DBs make sense for SaaS developers?
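On the usage-based pricing point: serverless billing charges per request, while provisioned capacity bills for time, whether used or not. A back-of-envelope sketch with entirely made-up prices (not any vendor's actual rates):

```python
# Illustrative only: both prices below are invented for the comparison.
PER_MILLION_REQUESTS = 1.25   # usage-based (serverless) rate, $ per 1M requests
PROVISIONED_PER_HOUR = 0.75   # always-on provisioned capacity, $ per hour

def monthly_cost(requests_per_month: int) -> tuple:
    """Return (serverless_cost, provisioned_cost) for a 30-day month."""
    serverless = requests_per_month / 1_000_000 * PER_MILLION_REQUESTS
    provisioned = PROVISIONED_PER_HOUR * 24 * 30  # billed whether used or not
    return serverless, provisioned

# A spiky, low-volume workload: serverless wins because idle time costs nothing.
low = monthly_cost(2_000_000)
# A sustained high-volume workload: provisioned capacity becomes cheaper.
high = monthly_cost(800_000_000)
print(low, high)
```

The crossover point is what "scale to zero" arguments like the one in this episode usually hinge on: abstracting the compute only pays off while the workload is idle or bursty.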

Der Data Analytics Podcast
PostgreSQL - open-source relationales Datenbankmanagementsystem (RDBMS)

Der Data Analytics Podcast

Play Episode Listen Later Jan 9, 2023 4:20


PostgreSQL is an open-source relational database management system (RDBMS) known for its robustness, performance, and flexibility. It is widely used in enterprise environments and supports a wide range of features, including ACID transactions, full data integrity, and support for a large number of programming languages.
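The ACID transaction guarantee mentioned here is observable from any client: a transaction either commits in full or rolls back completely. A minimal sketch using Python's built-in sqlite3 module (PostgreSQL behaves the same way through any client driver; the accounts table and names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

# Atomicity: a failure mid-transfer rolls back both halves, not just one.
try:
    with conn:  # the with-block is a transaction; an exception rolls it back
        conn.execute("UPDATE accounts SET balance = balance - 100 "
                     "WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 100 "
                     "WHERE name = 'bob'")
        raise RuntimeError("simulated crash mid-transfer")
except RuntimeError:
    pass

# Both updates were undone together: balances are unchanged.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)
```

Without the transaction boundary, the crash would have left alice debited and bob not yet credited, which is exactly the inconsistency ACID prevents.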

Voice of the DBA
Cloud Databases

Voice of the DBA

Play Episode Listen Later Sep 29, 2022 3:15


Most of us are used to a database that lives on a server somewhere. It might be in our data center or a VM that exists somewhere, but it's really an on-premises type of infrastructure. Even if the VM is in AWS or Azure, this is a single system on a server that we control. We can add HA capabilities to this system, but the model is the same as if the database were on our development workstation. Note: it doesn't matter whether this is an RDBMS like SQL Server or PostgreSQL, or a NoSQL-type system such as MongoDB or Neo4j. Read the rest of Cloud Databases

EM360 Podcast
The Unstructured Data Revolution

EM360 Podcast

Play Episode Listen Later Jun 29, 2022 16:31


Unstructured data refers to information that isn't arranged according to a pre-set data model, meaning it can't be stored in a traditional database or RDBMS. With recent numbers showing that up to 90% of the data collected by enterprises is unstructured, how can businesses pivot into managing these huge amounts of information? In this episode of the EM360 Podcast, Editor Matt Harris speaks to Paul Speciale, CMO at Scality, to discuss:
  • What is behind the huge growth in data
  • Managing unstructured data
  • Selecting a storage solution for cloud, on-prem and edge deployments

Voice of the DBA
The General Database Platform

Voice of the DBA

Play Episode Listen Later May 17, 2022 3:50


It's been a decade-plus of the Not-Only-SQL (NoSQL) movement where a large variety of specialized database platforms have been developed and sold. It seems that there are so many different platforms for data stores that you can find one for whatever specialized type of data you are working with. However, is that what people are doing to store data in their applications? I saw this piece on the return to the general-purpose database, postulating that a lot of the NoSQL database platforms have added additional capabilities that make them less specialized and more generalized. I've seen some of this, just as many relational platforms have added features that compete with one of the NoSQL classes of databases. The NoSQL datastores might be adding SQL-like features because some of these platforms are too specialized, and the vendors have decided they need to cover a slightly wider set of use cases. Read the rest of The General Database Platform

Datascape Podcast
Episode 61 - Cockroach Db And Distributed Databases With Daniel Holt

Datascape Podcast

Play Episode Listen Later May 11, 2022 41:06


In this episode, Warner is joined by Daniel Holt, VP of Solutions Engineering at Cockroach Labs, to discuss distributed databases, RDBMS vs. NoSQL vs. NewSQL, and most importantly, all about CockroachDB!

Voice of the DBA
Moving Away from MySQL

Voice of the DBA

Play Episode Listen Later Apr 11, 2022 2:36


I like SQL Server as a database. I think it's very complete, solves most of my problems, and is easy to work with. It costs money, but less than some others. It's also more complete, to me, than some of the open source databases out there. That being said, I think most of the top five or six relational platforms would work for me, and I wouldn't hesitate to use them. I ran across a post from Steinar Gunderson, who worked at Oracle on the MySQL team. It was written on his last day there, and it's a bit of a why-I-left post. I like that he notes he found a better opportunity, but he digs in deeper. Why did he look for a new opportunity? Read the rest of Moving Away From MySQL

Arch-In-Minutes
RDBMS Consistência forte | Você ArcH-Expert

Arch-In-Minutes

Play Episode Listen Later Mar 29, 2022 8:59


Hey ARQ, everything 100% with you? In today's content we'll talk about how to build a highly consistent solution on a relational database, but without the burden this model imposes on us, especially when we're talking about performance. Come be a VIP at ArcH; follow me on my new Telegram channel: https://t.me/pisanidaarch. ArcH is a digital content producer that helps thousands of professionals every month become experts in SYSTEMS ARCHITECTURE. Some of the topics we cover: architectural approaches, design patterns, architecture patterns, and technology with efficiency, agility, and quality, all to contribute to the professional development of Brazil's community of Solution, Software, and Systems Architects. Learn more about ArcH: ▶ https://archoffice.tech

Unruly Software
Episode 203: ACID is BASED

Unruly Software

Play Episode Listen Later Mar 28, 2022 55:30


Questions? Comments? Want your free copy of Postgres? Find out more on our site podcast.unrulysoftware.com (https://podcast.unrulysoftware.com). You can join our discord (https://discord.gg/NGP2nWtFJb) to chat about tech anytime directly with the hosts.

Screaming in the Cloud
Throwing Houlihans at MongoDB with Rick Houlihan

Screaming in the Cloud

Play Episode Listen Later Mar 24, 2022 40:44


About Rick
I lead the developer relations team for strategic accounts at MongoDB. My responsibilities include defining technical standards for the global strategic accounts team and consulting with the largest customers and opportunities for the business. My role spans technology sectors and as part of my engagements I routinely provide guidance on industry best practices, technology transformation, distributed systems implementation, cloud migration, and more. I led the architecture and design effort at Amazon for migrating thousands of relational workloads from RDBMS to NoSQL and built the center of excellence team responsible for defining the best practices and design patterns used today by thousands of Amazon internal service teams and AWS customers. I currently operate as the technical leader for our global strategic account teams to build the market for MongoDB technology by facilitating center of excellence capabilities within our customer organizations through training, evangelism, and direct design consultation activities. 30+ years of software and IT expertise. 9 patents in Cloud Virtualization, Complex Event Processing, Root Cause Analysis, Microprocessor Architecture, and NoSQL Database technology.

Links:
MongoDB: https://www.mongodb.com/
Twitter: https://twitter.com/houlihan_rick

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: The company 0x4447 builds products to increase standardization and security in AWS organizations.
They do this with automated pipelines that use well-structured projects to create secure, easy-to-maintain and fault-tolerant solutions, one of which is their VPN product built on top of the popular OpenVPN project, which has no license restrictions; you are only limited by the network card in the instance. To learn more visit: snark.cloud/deployandgo
Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of “Hello, World” demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking, databases, observability, management, and security. And—let me be clear here—it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free? This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free; that's snark.cloud/oci-free.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. A year or two before the pandemic hit, I went on a magical journey to a mythical place called Australia. I know, I was as shocked as anyone to figure out that this was in fact real.
And while I was there, I gave the opening keynote at a conference called Latency Conf, which is great because there's a heck of a timezone shift, and I imagine that's what it's talking about. The closing keynote was delivered by someone I hadn't really heard of before, and he started talking about single table design with respect to DynamoDB, which, okay, great; let's see what he's got to say. And the talk started off engaging and entertaining and a high-level overview, and then got deeper and deeper and deeper until I felt, “Can I please be excused? My brain is full.” That talk was delivered by Rick Houlihan, who now is the Director of Developer Relations for Strategic Accounts over at MongoDB, and I'm fortunate enough to be able to get him here to more or less break down some of what he was saying back then, catch up with what he's been up to, and more or less suffer my slings and arrows. Rick, thank you for joining me.
Rick: Great. Thanks, Corey. I really appreciate—you brought back some memories, you know, a trip down memory lane there. And actually, interestingly enough, that was the world's introduction to single table design. That was my dry-run rehearsal for re:Invent 2018, and it has since become the most positive—
Corey: This was two weeks before re:Invent, which was just a great thing. I'd been invited to go; why not? I figured I'd see a couple of clients I had out in that direction. And I learned things like Australia is a big place. So, doing a one-week trip including Sydney, Melbourne, and Perth? Don't do that.
Rick: I had no idea that it took so long to fly from one side to the other, right? I mean, that's a long plane [laugh] [crosstalk 00:02:15]—
Corey: Oh, yeah.
And you were working at AWS at the time—
Rick: Absolutely.
Corey: —so I can only assume that they basically stuffed you into a dog kennel and threw you underneath the seating area, given their travel policy?
Rick: Well, you know, I have the—[clear throat] actually at the time, they had just upgraded the policy to allow the intermediate seating, right? So, if you wanted to get the—
Corey: Ohhh—
Rick: I know—
Corey: Big spender. Big spender.
Rick: Yes, yes. I could get a little bit of extra legroom, so I didn't have my knees shoved into the seat back. But it was good.
Corey: So, let's talk about, I guess… we'll call it the elephant in the room. You were at MongoDB, where you were a big proponent of the whole NoSQL side of the world. Then you went to go work at AWS and you carried the good word of DynamoDB far and wide. It made an impression; I built my entire newsletter pipeline production system on top of DynamoDB. It has the same data in three different tables because I'm not good at listening or at computers. But now you're back at Mongo. And it's easy to jump to the conclusion of, “Oh, you're just shilling for whoever it is that happens to sign your paycheck.” And at this point, are you—what's the authenticity story? But I've been paying attention to what you've been saying, and I think that's a bad take because you have been saying the same things all along, since before you were on the Dynamo side of it. I do some research for this show, and you've been advocating for outcomes and the right ways to do things. How do you view it?
Rick: That's basically the story here, right? I've always been a proponent of NoSQL. It was interesting—the knowledge I took from MongoDB evolved as I went to AWS and I delivered, you know, thousands of applications and deployed workloads that I'd never even imagined I would have my hands on before I went there. I mean, honestly, what a great place it was to cut your teeth on data modeling at scale, right?
I mean, there is no greater scale. That's when you learn where things break. And honestly, a lot of the lessons I took from MongoDB, when I applied them at scale at AWS, worked with varying levels of success, and we had to evolve those into the sets of design patterns which I started to propose for DynamoDB customers, and which have been highly effective. I still believe in all those patterns. I would never tell somebody that they need to drop everything and run to MongoDB, but, you know, again, all those patterns apply to MongoDB, too, right? I wouldn't say all of them, but many of them, right?
So, I'm a proponent of NoSQL. And I think we talked before the call a little bit about, you know, if I was out there hawking relational technology right now and saying RDBMS is the future, then everybody who criticizes anything I say would have some validity there. But I'm not saying anything different than I've ever said. MongoDB announced Serverless, if you remember, in July, and that was a big turning point for me, because the API that we offer, the developer experience for MongoDB, is unmatched, and this is what I talk to people about now. And it's the patterns that I've always proposed; I still model data the same way, I don't do it any differently, and I've always said, if you go back to my earlier sessions on NoSQL, it's all the same. It doesn't matter if it's MongoDB, DynamoDB, or any other technology. I've always shown people how to model their data in NoSQL, and I don't care what database you're using; I've actually helped MongoDB customers do their job better over the years as well. So.
Corey: Oh, yeah. And looking back at some of your early talks as well, you passed my test for, “Is this person a shill?” Because in those talks you wound up addressing head-on: When is a relational model the right thing to do?
And then you put the answers up on a slide, and it didn't distill down to, “If you're a fool.”
Rick: [laugh].
Corey: Because there are use cases where if you don't [unintelligible 00:05:48] your access patterns, if you have certain constraints and requirements, then yeah. You have always been an advocate for doing the right thing for the workload. And in my experience, for my use cases, when I looked at MongoDB previously, it was not a fit for me. It was very much a you-run-this-on-an-instance basis; you have to handle all this stuff. Meanwhile, keeping my newsletter production pipeline in triplicate in three different DynamoDB tables, including backups and the rest, the DynamoDB portion has climbed to the princely sum of $1.30 a month, give or take.
Rick: A month. Yes, exactly.
Corey: So, there's no answer for that there. Now that Mongo Serverless is coming out into the world, oh, okay, this starts to be a lot more compelling. It starts to be a lot more flexible.
Rick: I was just going to say, for your use case there, Corey, you're probably looking at a very similar pricing experience now with MongoDB Serverless. Especially when you look at the pricing model, it's very close to the on-demand table model. It actually has discounted tiering above it, which I haven't really broken down yet against a provisioned capacity model, but you know, there's a lot of complexity in DynamoDB pricing. And they're working on this, they'll get better at it as well, but right now you have on-demand, you have provisioned throughput, you have [clear throat] reserved capacity allocations. And, you know, there's a time and place for all of those, but it's just complexity, right?
This is the problem that I've always had with DynamoDB. I just wish that we'd spent more time on improving the developer experience, right, enhancing the API, implementing some of these features that, you know, help.
Let's make single table design a first-class citizen of the DynamoDB API. Right now it's a red—it's a—I don't want to say redheaded stepchild; I have two [laugh] I have two redheaded children and my wife is a redhead, but yeah. [laugh].
Corey: [laugh]. That's—it's—
Rick: That's the way it's treated, right? It's treated like a stepchild. You know, it's like, come on, we're fully funding the solutions within our own umbrella that are competing with ourselves, and at the same time, we're letting the DynamoDB API languish while our competitors are moving ahead. And eventually, it just becomes, you know, okay, guys, I want to work with the best tooling on the market, and that's really what it came down to. As long as DynamoDB was the king of serverless, yes, absolutely; best tooling on the market. And they still are [clear throat] the leader, right? There's no doubt that DynamoDB is ahead in the serverless landscape, and that the MongoDB solution is in its nascency. It's going to be here, it's going to be great; that's part of what I'm here for. And that's, again, getting back to why did you make the move: I want to be part of this, right? That's really what it comes down to.
Corey: One of the things that I know has been my own bias is that when I'm looking at my customer environments to see what's there, I can see DynamoDB because it has its own line item in the bill. MongoDB is generally either buried in marketplace charges, or it's running on a bunch of EC2 instances, or it just shows up as data transfer. So, it's not as top-of-mind for the way that I view things through the lens of, you know, billing. So, that does inform my perception, but I also know that when I'm talking to large-scale companies about what they're doing, when they're going all-in on AWS, a large number of them still choose things like Mongo. When I've asked them why that is, sometimes you get the answer of, “Oh, legacy.
It's what we built on before.” Cool—
Rick: Sure.
Corey: —great. Other times, it's a, “We're not planning to leave, but if we ever wanted to go somewhere else, it's nice to not have to reimagine the entire data architecture and change the integration points start to finish, because migrations are hard enough without that.” And there is validity to the idea of a strategic exodus being possible, even if it's not something you're actively building for all the time, which I generally advise people not to do.
Rick: Yeah. There are a couple of things that have occurred over the last couple of years that have changed the enterprise CIO's and CTO's assessment of risk, right? Risk is the number one decision factor in a CTO's portfolio and a CIO's decision-making process, right? What is the risk? What is the impact of that risk? Do I need to mitigate that risk, or do I accept that risk? Okay?
So, right now, what you've seen is, with Covid, people have realized that, you know, on-prem infrastructure is a risk, right? It used to be an asset; now it's a risk. Those personnel that have to run that on-prem infrastructure—hey, what happens when they're not available? The infrastructure is at risk. Okay.
So, offloading that to cloud providers is the natural solution. Great. So, what happens when you offload to a cloud provider and IAD goes down—or, you know, us-east-1 goes down; we used to call it IAD internally at AWS when I was there because the regions were named by airport codes, but it's us-east-1—how many times has us-east-1 had problems? Do you really want to be the guy that every time us-east-1 goes down, you're in trouble? What happens when people in us-east-1 have trouble? Where do they go?
Corey: Down, generally speaking.
Rick: [crosstalk 00:10:37]—well, if they're well-architected, right, if they're well-architected, what do they do? They go to us-west-2. How much infrastructure does us-west-2 have?
So, if everybody in us-east-1 is well-architected, then they all go to us-west-2. What happens in us-west-2? And I guarantee you—and I've been warning about this at AWS for years—there's a cascade failure coming, and it's going to come because we're well-architecting everybody to fail over from our largest region to our smaller regions. And those smaller regions cannot take the load, and nobody's doing any of that planning, so, you know, sooner or later, what you're going to see is dominoes fall, okay? [clear throat]. And it's not just going to be us-east-1; it's going to be us-east-1 failed, and the rollover caused a cascade failure in us-west-2, which caused a cascade—
Corey: Because everyone's failing over during—
Rick: That's right. That's right.
Corey: —this event the same way. And also—again, not to dunk on them unnecessarily, but when—
Rick: No, I'm not dunking.
Corey: —us-east-1 goes down, a lot of the control plane services freeze up—
Rick: Oh, of course they do.
Corey: —like [unintelligible 00:11:25].
Rick: Exactly. Oh, we have no single point of failure, right? Uh-huh, exactly. There you go, Route 53—and that actually surprised me, that DynamoDB instead of Route 53 is your primary database. So, I actually must have had some impact on you—
Corey: To move one workload off of Dynamo to Route 53 [crosstalk 00:11:39] issue number because I have to practice what I preach.
Rick: That's right. Exactly.
Corey: It was weird; it made the thing slower and a little bit less, uh—
Rick: [laugh]. I love it when [crosstalk 00:11:45]—yeah, yeah—
Corey: —and a little bit [crosstalk 00:11:45] cache-y. But yeah.
Rick: —sure. Okay, I can understand that. [laugh].
Corey: But it made the architecture diagram a little bit more head-scratching, and really, that's what it's all about. Getting a high score.
Rick: Right.
So, if you think about your data, right, I mean, would you rather be running on infrastructure that's tied to a cloud provider that could experience these kinds of regional failures and cascade failures, or would you rather have your data infrastructure go across cloud providers so that when one provider has problems, you can just go ahead and switch the light bulb over to the other one and ramp right back up, right? You know? And honestly, if you're running active-active configurations and that kind of [clear throat] deployment design, you're never going to go down. You're always going—
Corey: The challenge I've had—
Rick: —to be the one that stays up.
Corey: The theory is sound, but the challenge I've had in production with trying these things is that, one, the thing that winds up handling the failover piece often causes more outage than the underlying stuff itself.
Rick: Well, sure. Yeah.
Corey: Two, when you're building something to run a workload in multiple cloud providers, you're forced to use a lot of—
Rick: Lowest common denominator?
Corey: Lowest common denominator stuff. Yeah.
Rick: Yeah, yeah, totally. I hear that all the time.
Corey: Unless you're actively running it in both places, it looks like a DR plan, which doesn't survive the next commit to the codebase. It's the—
Rick: I totally buy that. You're talking about stack duplication, all that kind of—that's an overhead and complexity I don't worry about at the data layer, right?
Corey: Oh, yeah.
Rick: The data layer—
Corey: If you're talking about—
Rick: —[crosstalk 00:12:58]
Corey: —[crosstalk 00:12:58] the data layer, oh, everything you're saying makes perfect sense.
Rick: Makes perfect sense, right? And honestly, you know, let's put it this way: If this is what you want to do—
Corey: What do you mean, identity management and security handover working differently? Oh, that's a different team's problem. Oh, I miss those days.
Rick: Yeah, you know, totally right. It's not ideal.
But you know, I mean, honestly, it's not a deal that somebody wants to manage themselves, moving that data around. The data is the lock-in. The data is the thing that ties you to—
Corey: And the cost of moving it around, in some cases, too.
Rick: That's exactly right. You know, so having infrastructure that spans providers, and spans both on-prem and cloud, potentially—that can span multiple on-prem locations—man, I mean, that's just power. And MongoDB provides that; DynamoDB can't. And that's really one of the biggest limitations that it will always have, right? And we've talked about it, and I still believe in the power of global tables and multi-region deployments and everything; it's all real. But these types of scenarios, I think, are the next generation of failure that the cloud providers are not really prepared for. They haven't experienced it, they don't know what it's even going to look like, and I don't think you want to be tied to a single provider when these things start happening, right, if you have a large amount of infrastructure deployed someplace. It just seems like [clear throat] that's a risk that you're running these days, and you can mitigate that risk somewhat by going with MongoDB Atlas. I agree with all those other considerations. But you know, it's a lot of fun, too, right? There's a lot of fun in that, right?
Because if you think about it, I can deploy technologies on any cloud provider in ways that are cloud-provider agnostic, right? I can use, you know, containerized technologies, Kubernetes; hell, I'm not even afraid to use Lambda functions, and just, you know, put a wrapper around that code and deploy it both as a Lambda or a Cloud Function in GCP. The code's almost the same in many cases, right? What it's doing with the data—you can code this stuff in a way—I used to do it all the time—you abstract the data layer, right? Create a DAL. How about a CAL?
A cloud [laugh] cloud access layer, right, you know? [laugh].

Corey: I wish, on some level, we could go down some of these paths. And someone asked me once a while back, "Well, you seem to have a lot of opinions on this. Do you think you could build a better cloud than AWS?" And my answer—

Rick: Hell yes.

Corey: —took them a bit by surprise: "Absolutely. Step one, I want similar resources, so give me $20 billion to spend"—

Rick: I was going to say, right?

Corey: —"then I'm going to hire the smart people." Not that we're somehow smarter or better or anything else than the people who built AWS originally, but now—

Rick: We have all those lessons learned.

Corey: —we have fifteen years of experience to fall back on.

Rick: Exactly.

Corey: "Oh. I wouldn't make that mistake again."

Rick: Exactly. Don't need to worry about that. Yeah, exactly.

Corey: You can't just turn off a cloud service and relaunch it with a completely different interface and API and the rest.

Rick: People who criticize, you know, services like DynamoDB—and other AWS services—look, any kind of retooling of these services is like rebuilding the engine on the airplane while it's flying.

Corey: Oh, yeah.

Rick: And you have to do it with a level of service assurance that—I mean, come on. DynamoDB provides four nines out of the box, right? Five nines if you turn on global tables. And they're doing this at the same time as they have pipeline releases dropping regularly, right? So, you can imagine what kind of, you know, unit testing goes on there, what kind of canary deployments are happening.

It's just, it's an amazing infrastructure that they maintain, incredibly complex, you know? In some ways, these are lessons that we need to learn in MongoDB if we're going to be successful operating a shared backplane serverless, you know, processing fabric. We have to look at what DynamoDB does right. And we need to build our own infrastructure that mirrors those things, right?
And in some ways, these things are there; in some ways, they're working on them; in some ways, we've got a long way to go.

But you know, I mean, this is the exciting part of that journey for me. Now, in my case, I focus on strategic accounts, right? Strategic accounts are big, you know, they have the potential to be our whale customers, right? These are probably not customers who would be all that interested in serverless, right? They're customers that would be more interested in provisioned infrastructure, because they're the people that I talked to when I was at DynamoDB; I would be talking to customers who were interested in, like, reserved capacity allocations, right? If you're talking about—

Corey: Yeah, I wanted to ask you about that. You're doing developer advocacy—which I get—for strategic accounts.

Rick: Right.

Corey: And I'm trying to wrap my head around—

Rick: Why [crosstalk 00:17:19]—

Corey: [crosstalk 00:17:19] strategic accounts are the big ones, with the potential to spend lots of stuff. Why do they need special developer advocacy?

Rick: [laugh]. Well, yeah, it's funny because, you know, one of the reasons why I started talking to Mark Porter about this, you know, was the fact that, you know, the overlap is really around [clear throat] the engagements that I ran when I was doing the Amazon retail migration, right? When Amazon retail started to move to NoSQL, we deprecated 3,000 Oracle server instances, and we moved a large percentage of those workloads to NoSQL. The vast majority probably were just lift-and-shift into RDS and whatnot because they were too small, too old, not worth upgrading, whatnot, but every single tier—what we call a tier-one service, right, every money-making service—was redesigned and redeployed on DynamoDB, right? So, we're talking about 25,000 developers that we had to ramp.
This was back four years ago; now we have, like, 75,000.

But back then we had 25,000 global developers, we had [clear throat] a technology shift, a fundamental paradigm shift between relational modeling and NoSQL modeling, and the whole entire organization needed to get up to speed, right? So, it was about creating a center of excellence, it was about operating as an office of the CTO within the organization to drive this technology into the DNA of our company. And so that exercise was actually incredibly informative, educational, in that process of executing a technology transformation in a major enterprise. And this is something that we want to reproduce. And it's actually what I did for Dynamo as well, really more than anything.

Yes, I was on Twitter, I was on Twitch, I did a lot of these things that were kind of developer advocate, you know, activities, but my primary job at AWS was working with large strategic customers, enabling their teams, you know, teaching them how to model their data in NoSQL, and helping them cross the chasm, right, from relational. And that is advocacy, right? The way I do it is I use their workloads. [clear throat]. I use the customers', you know, project teams' own workloads, I break down their models, I break down their access patterns—essentially, with a whole day of design reviews, we'll walk through 12 or 15 workloads, and when I leave these guys have an idea: How would I do it if I wanted to use NoSQL, right?

Give them enough breadcrumbs so that they can actually say, "Okay, if I want to take it to the next step, I can do it without calling up and saying, 'Hey, can we get a professional services team in here?'" right? So, it's kind of developer advocacy and it's kind of not, right? We're recognizing that these are whales, these are customers with internal resources that are so huge, they could suck our developer advocacy team in and chew it up, right?
So, what we're trying to do is form a focus team that can hit hard and move the needle inside the accounts. That's what I'm doing. Essentially, it's the same work I did for [clear throat] AWS for DynamoDB. I'm just doing it for, you know—they traded for a new quarterback. Let's put it that way. [laugh].

Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.

Corey: So, one thing that I find appealing about the approach maps to what I do in the world of cloud economics, where I—like, in my own environment, our AWS bill is creeping up again—we have 14 AWS accounts—and that's a little over $900 a month now. Which, yeah, big money, big money.

Rick: [laugh].

Corey: In the context of running a company, that no one notices or cares. And our customers spend hundreds of millions a year, pretty commonly. So, I see the stuff in the big accounts and I see the stuff in the tiny account here. Honestly, the more interesting stuff is generally on the smaller side of the scale, just because you're not going to have a misconfiguration costing a third of your bill when a third of your bill is $80 million a year. So—

Rick: That's correct. If you do, then that's a real problem, right?

Corey: Oh yeah.

Rick: [laugh].

Corey: It's very much two opposite ends of a very broad spectrum. And advice for folks in one of those situations is often disastrous to folks on the other side of that.

Rick: That's right. That's right. I mean, at some scale, managing granularity hurts you, right?
The overhead of trying to keep your costs, you know, it—but at the same time, it's just different, a different measure of cost. There's a different granularity that you're looking at, right? I mean, things below a certain, you know, level stop being important when, you know, the budgets start to get to a certain scale or a certain size, right? Theoretically—

Corey: Yeah, for certain workloads, things that I care about with my dollar-a-month Dynamo spend—if I were to move that to Mongo serverless, great, but my considerations are radically different than a company that is spending millions a month on their database structure.

Rick: That's right. Really, that's what it comes down to.

Corey: Yeah, we don't care about the pennies. We care about: is it going to work? How do we back it up? What's the replication factor?

Rick: And that—but also, it's more than that. For me, from my perspective, it really comes down to the fact that, you know, companies are spending millions of dollars a year in database services. These are companies that are spending ten times that, five times that, in, you know, developer expense, right? Building services, maintaining the code that the services run on.

You know, the biggest problem I had with DynamoDB is the level of code complexity. It's a cut after cut after cut, right? And the way I kind of describe the experience—and other people have described it to me; I didn't come up with this analogy. I had a customer tell me this as they were leaving DynamoDB—"DynamoDB is death by a thousand cuts. You love it, you start using it, you find a little problem, you start fixing it. You start fixing it. You start fixing—you come up with a pattern. Talk to Rick, he'll come up with something. He'll tell you how to do that." Okay?

And you know, how many customers did I do this with?
You know, and honestly, they're 15-minute phone calls for me, but every single one of those 15-minute phone calls turns into eight hours of developer time writing the code, debugging it, deploying it over and over again, making sure it's going the way it's [crosstalk 00:23:02]—

Corey: Have another 15-minute call with Rick, et cetera, et cetera. Yeah.

Rick: Another 15—exactly. And it's like, okay—eventually, they just get tired of it, right? And I actually had a customer—a big customer—tell me flat out, "Yeah, you proved that DynamoDB can support our workload and it'll probably do it cheaper, but I don't have a half-a-dozen Ricks on my team, right? I don't have any Ricks on my team. I can't be getting you in here every single time we have to do a complex data model overhaul, right?"

And this was—granted, it was one of the more complex implementations that I've ever done. In order to make it work, I had to overload the fricking table with multiple access patterns on the partition key, something I'd never done in my life. I made it work, but it was just—honestly, that was an exercise that taught me something. If I have to do this, it's unnatural, okay?

And that's—[laugh] you know what I mean? And honestly, there are API improvements that we could have done to make that less of a problem. It's not like we haven't known since, I don't know, I joined the company that a thousand WCUs per storage partition was pretty small. Okay? We've kind of known that, I don't know, since DynamoDB was invented. As a matter of fact, from what I know, talking to people who were around back then, that was a huge bone of contention back in the day, right? A thousand WCUs, ten gigabytes—there were a lot of the PEs on the team that were going, "No way. No way.
That's way too small." And then there were other people that were like, "Nah, nobody's ever going to need more than that." And you know, a lot of this was based on the analysis of [crosstalk 00:24:28]—

Corey: Oh, nothing ever survives first contact with the—

Rick: Of course.

Corey: —customer, particularly a customer who is not themselves deeply familiar with what's happening under the hood. Like, I had this problem back when I was a traveling trainer for Puppet for a while. It was, "Great. Well, Puppet is obviously a piece of crap because everyone I talked to has problems with it." So, I was one of the early developers behind SaltStack—

Rick: Oh, nice.

Corey: —and, "Ah, this is going to be a thing of beauty and it'll be awesome." And that lasted until the first time I saw what somebody had done with it in the wild. It was, "Oh, okay, that's an [unintelligible 00:25:00] choice."

Rick: Okay, that's how—"Yeah, I never thought about that," right? Happy path. We all love the happy path, right? As we're working with technologies, we figure out how we like to use them, and we all use them that way. Of course, you can solve any problem you want the way that you'd like to solve it. But as soon as someone else takes that clay, they mold a different statue and you go, "Oh, I didn't realize it could look like that." Right, exactly.

Corey: So, here's one for you that I've been—I still struggle with this from time to time, but why would I, if I'm building something out—well, first off, why on earth would I do that? I have people for that who are good at things—but if I'm building something out and it has a database layer, why would someone choose NoSQL over—

Rick: Oh, sure.

Corey: —over SQL?

Rick: [crosstalk 00:25:38] question.

Corey: —and let me be clear here—and I'm coming at this from the perspective of someone who, basically, me a few years ago, who has no real understanding of what databases are.
So, my mental model of a database is Microsoft Excel, where I can fire up a [unintelligible 00:25:51] table of these things—

Rick: Sure. [laugh]. Hey, well then, you know what? Then you should love NoSQL because that's kind of the best analogy of what NoSQL is. It's like a spreadsheet, right? Whereas a relational database is like a bunch of spreadsheets, each with their own types of rows, right? So—[laugh].

Corey: Oh, my mind was blown with relational stuff: [unintelligible 00:26:07] wait, you could have multiple tables? It's, "What do you think relational meant there, buddy?" My map of NoSQL was always key and value, and that was it. And that's all it can be. And sure, for some things, that's what I use, but not everything.

Rick: That's right. So, you know, the bottom line is, when you think about the relational database, it all goes back to, you know, the first paper ever written on the relational model, Edgar Codd—and I can't remember the exact title, but he wrote the data model for distributed systems, something like that. He discussed, you know, the concept of normalization, the power of normalization, why you would want this. And the reason why we wanted this, why he thought this was important—this actually kind of demonstrates how—boy, they used to write killer abstracts to papers, right? It's like the very first sentence: this is why I'm writing this paper. You read the first sentence, you know: "Future users of modern computer systems must have a way to be able to ask questions of the data without knowing how to write code."

I mean, I don't know if those were the exact words, but that was basically what he said; that was why he invented the normalized data model. Because, you know, with the hierarchical management systems at the time, everyone had to know everything about the data in order to be able to get any answers, right?
And he was like, "No, I want to be able to just write a question and have the system answer it." Now, at the time, a lot of people felt like that's great, and they agreed with his normalized model—it was elegant—but they all believed that the CPU overhead at the time was way too high, right? To generate these views of data on the fly? No freaking way. Storage is expensive. But it ain't that expensive, right?

Well, this little thing called Moore's Law, right? Moore's Law balanced his checkbook for, like, 40 years, 50 years; it balanced the relational database checkbook, okay? So, as the CPUs got faster and faster, crunching the data became less and less of a problem, okay? And so we crunched bigger and bigger data sets, and we got very, very happy with this. Up until about 2014.

In 2014, a really interesting thing happened. If you look at the Top 500—which is the supercomputers, the top 500 supercomputing clusters around the world—and you look at their performance increases year-to-year after 2014, it went off a cliff. No longer beating Moore's Law. Ever since, they've been—and per-core performance, you know, CPU, you know, instructions executed per second, everything. It's just flattening. Those curves are flattening. Moore's Law is broken.

Now, you'll get people who argue about it, but the reality is, if it wasn't broken, the Top 500 would still be cruising away. They're not. Okay? So, what this is telling us is that the relational database is losing its horsepower. Okay?

Why is it happening? Because, you know, gate length has an absolute minimum, and it's called zero, right? We can't have a logic gate with negative distance, right? [laugh]. So, you know, these things—but storage, storage, hey, it just keeps on getting cheaper and cheaper, right?

We're going the other way with storage, right? It's gigabytes, it's terabytes, it's petabytes. You know, with CPU, we're going smaller and smaller and smaller, and the fab cost is increasing.
There's just—it's going to take a next-generation CPU technology to get back on track with Moore's Law.

Corey: Well, here's the challenge. Everything you're saying makes perfect sense from where your perspective is. I reiterate, you are working with strategic accounts, which means 'big.' When I'm building something out in the evenings because I want to see if something is possible, performance considerations and that sort of characteristic do not factor into it. When I'm at a very small scale, I care about cost to some extent—sure, whatever—but the far more expensive aspect of it, in the ways that matter—the big expensive piece is—

Rick: We've talked about it.

Corey: —engineering time—

Rick: That's what we just talked about, right?

Corey: —where it's, "What am I familiar with?"

Rick: As a developer, right, why would I use MongoDB over DynamoDB? Because the developer experience [crosstalk 00:29:33]—

Corey: Exactly. Sure, down the road there are performance characteristics, and yeah, at the time I have this super-large, scaled-out, complex workload, yeah, but most workloads will not get to that.

Rick: Will not ever get there. Ever get there. [crosstalk 00:29:45]—

Corey: Yeah, so optimizing for [crosstalk 00:29:45], how's it going to work when I'm Facebook-scale? It's—

Rick: So, first of—no, exactly, Facebook scale is irrelevant here. What I'm talking about is actually a cost ratchet that's going to lever on midsize workloads soon, right? Within the next four to five years, you're going to see mid-level workloads start to suffer from significant performance cost deficiencies compared to NoSQL workloads running on the same. Now—hell, you see it right now, but you don't really experience it, like you said, until you get to scale, right? But in midsize workloads, [clear throat] that's going to start showing up, right?
This cost overhead cannot go away.

Now, the other thing here that you've got to understand is, just because it's new technology doesn't make it harder to use. Just because you don't know how to use something, right, doesn't mean that it's more difficult. And NoSQL databases are not more difficult than the relational database. I can express every single relationship in a NoSQL database that I express in a relational database. If you think about modern OLTP applications, we've done the analysis, ad nauseam: 70% of access patterns are for a single object, a single row of data from a single table; another 20% are for a range of rows from a single table. Okay, that leaves only 10% of your access patterns involving any kind of complex table traversal or entity traversal. Okay?

And most of those are simple one-to-many hierarchies. So, let's put that into perspective here: 99% of the access patterns in an OLTP application can be modeled without denormalization in a single table. Because single-table doesn't require—just because I put all the objects in one place doesn't mean that it's denormalized. Denormalization requires strong redundancies in the stored set. Duplication of data. Okay?

Edgar Codd himself said that the normalized data model does not depend on storage; storage is irrelevant. I could put all the objects in the same document. As long as there's no duplication of data, there's no denormalization. I know, I can see your head going, "Wow," but it's true, right? Because as long as I can clearly express the relationships of the data without strong redundancies, it is a normalized data model.

That's what most people don't understand. NoSQL does not require denormalization.
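Rick's single-table point—many entity types in one table, no duplicated data—can be sketched with composite keys. The item shapes below are illustrative only: the `PK`/`SK` naming is a common DynamoDB convention rather than anything from the episode, and the in-memory prefix scan stands in for a real `Query` with a `begins_with` sort-key condition.

```python
# One table holds both customers and their orders; the sort-key prefix
# encodes the entity type, so a one-to-many read is a single query
# against one partition -- no join, and no duplicated data.
table = [
    {"PK": "CUST#42", "SK": "PROFILE", "name": "Ada"},
    {"PK": "CUST#42", "SK": "ORDER#2021-10-01", "total": 30},
    {"PK": "CUST#42", "SK": "ORDER#2021-11-09", "total": 12},
]


def query(pk: str, sk_prefix: str = "") -> list[dict]:
    """In-memory stand-in for Query(PK = :pk AND begins_with(SK, :prefix))."""
    return [i for i in table if i["PK"] == pk and i["SK"].startswith(sk_prefix)]


# The 70% case: a single item fetched by its full key.
profile = query("CUST#42", "PROFILE")
# The 20% case: a range of rows from the same partition.
orders = query("CUST#42", "ORDER#")
print(len(orders))  # 2
```

Each fact (the profile, each order) is stored exactly once, so by the definition Rick gives, the model stays normalized even though everything lives in one table.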
That's a decision you make, and it usually happens when you have many-to-many relationships; then we need to start duplicating the data.

Corey: In many cases, at least in my own experience—because again, I am bad at computers—I find that the data model is not something that you sit down and consciously plan very often. It's rather something that—

Rick: Oh yeah.

Corey: —happens to you instead. I mean—

Rick: That's right. [laugh].

Corey: —realistically, like, using DynamoDB for this is aspirational. I just checked, and—so I started this newsletter back in March of 2017. I spun up this DynamoDB table that backs it, and I know it's the one that's in production because it has the word 'test' in its name, because of course it does. And I'm looking into it, and it has 8,700 items in it now and it's 3.7 megabytes. It's—

Rick: Sure, oh boy. Nothing, right?

Corey: —not for nothing, this could have been, just as easily and probably less complex for my level of understanding at the time, a CSV file that I—

Rick: Right. Exactly, right.

Corey: —grabbed from a Lambda out of S3, do the thing to it, and then put it back.

Rick: [unintelligible 00:32:45]. Right.

Corey: And then from a performance perspective on my side, it would make no discernible difference.

Rick: That's right, because you're not making high-velocity requests against the small object. It's just a single request every now and then. S3 performance would probably—it might even cost you less to use S3.

Corey: Right. And 30 to 100 of the latest ones are the only things that are ever looked at in any given week; the rest of it is mostly deadstock that could be transitioned out elsewhere.

Rick: Exactly.

Corey: But again, like, now that they have their lower-cost infrequent access storage, then great. It's not item-level; it's table-level, so what's the point?
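For a workload this small, the CSV-in-S3 approach Corey describes really is just "fetch, transform, put back." A sketch of the transform step, which is the only interesting part; the S3 calls are left as comments because the bucket, key, and column layout here are hypothetical, not details from the episode.

```python
import csv
import io


def add_issue(csv_text: str, issue_id: str, title: str) -> str:
    """Append one row to a small CSV held entirely in memory."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    rows.append([issue_id, title])
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    return out.getvalue()


# Inside a Lambda, this would be bracketed by S3 calls, roughly:
#   body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
#   s3.put_object(Bucket=bucket, Key=key, Body=add_issue(body, ...))
updated = add_issue("id,title\r\n", "101", "DynamoDB vs MongoDB")
print(updated.splitlines()[-1])
```

At a few megabytes, the whole file fits comfortably in a Lambda's memory, which is why the two designs are indistinguishable at this scale.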
I can knock that $1.30 a month down to, what, $1.10?

Rick: Oh well, yeah, no, I mean, again, Corey, for those small workloads, you know what? It's like, go with what you know. But the reality is, look, as developers, we should always want to know more, and we should always want to know new things, and we should always be aware of where the industry is headed. And honestly—I'm an old, old-school relational guy, okay? I cut my teeth on—oh, God, I don't even know what version of MS SQL Server it was, but when I was, you know, interviewing at MongoDB, I was talking to Dan Pasette about the old Enterprise Manager, where we did the schema designer and all this, and we were reminiscing about, you know, back in the day, right?

Yeah, you know, the reality of things is that if you don't get tuned into the new tooling, then you're going to get left behind sooner or later. And I know a lot of people who that has happened to over the years. There's a reason why I'm 56 years old and still relevant in tech, okay? [laugh].

Corey: Mainframes, right? I kid.

Rick: Yes, mainframes.

Corey: I kid. You're not that much older than I am, let's be clear here.

Rick: You know what? I worked on them, okay? And some of my peers, they never stopped, right? They just kind of stayed there.

Corey: I'm still waiting for the AWS/400. We don't see them yet, but hope springs eternal.

Rick: I love it. I love that. But no, one of the things that you just said really hit me: the data model isn't something you think about. The data model is something that just happens, right? And you know what, that is a problem, because this is exactly what developers today think. They think they know the relational database, but they don't.

You talk to any DBA out there who's come in after the fact and cleaned up all the crappy SQL that people like me wrote, okay? I mean, honestly, I wrote some stuff in the day that I thought, "This is perfect.
There's no way there could be anything better than this," right? Nice derived table joins insi—and you know what? Then here comes the DBA when the server is running at 90% CPU and 100% memory utilization and page-swapping like crazy, and you're saying we've got to start sharding the dataset.

And you know, my director of engineering at the time said, "No, no, no. What we need is somebody to come in and clean up our SQL." I said, "What do you mean? I wrote that SQL." He's like, "Like I said, we need someone to come in and clean up our SQL."

I said, "Okay, fine." We brought the guy in. 1,500 bucks an hour we paid this guy; I was like, "There's no way that this guy is going to be worth that." A day and a half later, our servers are running at 50% CPU and 20% memory utilization, and we're thinking about, you know, canceling orders for additional hardware. And this was back in the day before cloud.

So, you know, developers think they know what they're doing. [clear throat]. They don't know what they're doing when it comes to the database. And don't think that just because it's a relational database and they can hack it easier that it's better, right? Yeah, there's no substitute for knowing what you're doing; that's what it comes down to.

So, you know, if you're going to use a relational database, then learn it. And honestly, it's a hell of a lot more complicated to learn a relational database and do it well than it is to learn how to model your data in NoSQL. So, if you sit two developers down, and you say, "You learn NoSQL, you learn relational," two months later, this guy is still going to be studying, and this guy's going to have been writing code for seven weeks. Okay? [laugh]. So, you know, that's what it comes down to. You want to go fast? Use NoSQL and you won't have any problems.

Corey: I think that's a good place to leave it. If people want to learn more about how you view these things, where's the best place to find you?

Rick: You know, always hit me up on Twitter, right?
I mean, @houlihan_rick—underbar rick—that's my Twitter handle. And you know, I apologize to folks who have hit me up on Twitter and gotten no response. My Twitter—as you probably have as well—my message request box is about 3,000 deep.

So, you know, every now and then I'll start going in there and I'll dig through, and I'll reply to somebody who actually hit me up three months ago if I get that far down the queue. It is Last In, First Out, right? I try to keep things as current as possible. [laugh].

Corey: [crosstalk 00:36:51]. My DMs are a trash fire. Apologies as well. And we will, of course, put links to it in the [show notes 00:36:55].

Rick: Absolutely.

Corey: Thank you so much for your time. I really do appreciate it. It's always an education talking to you about this stuff.

Rick: I really appreciate being on the show. Thanks a lot. Look forward to seeing where things go.

Corey: Likewise.

Rick: All right.

Corey: Rick Houlihan, Director of Developer Relations, Strategic Accounts at MongoDB. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an upset comment talking about how we didn't go into the proper and purest expression of NoSQL non-relational data, DNS TXT records.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

Voice of the DBA
The Usefulness of Database Features

Voice of the DBA

Play Episode Listen Later Feb 25, 2022 3:06


SQL Server is constantly growing and changing, as are most database platforms. There are lots of platform changes, among them enhancements to the T-SQL language. Microsoft has added window functions, in-memory structures, the ability to execute code in other languages, and more. Some of these features are well built and some need more work. What's always interesting to me is what actually gets built and what doesn't. There was an article recently on evaluating features in an RDBMS, and the article uses the JSON data type in Google's BigQuery as an example. The evaluation is interesting, examining whether the feature actually helps the user, or if it is mostly marketing. In this case, the feature is outside of the "normal" conventions of the platform, but it is useful. Read the rest of The Usefulness of Database Features

Voice of the DBA
The NoSQL Rise and Fall

Voice of the DBA

Play Episode Listen Later Feb 16, 2022 3:12


I'm not sure this blog that seems to talk about the problems of NoSQL databases in general makes a lot of sense. If you read the comments, you certainly see lots of complaints, but I also think the post isn't well written. It lists the problems of RDBMSes as possibly deleting all tables while changing a key or being unable to add a column easily. While I don't know about all RDBMSes, I don't know any that could lose all data with anything less than DROP DATABASE. At the same time, the NoSQL complaints and problems seem to be generally presented, which isn't good. The various NoSQL flavors of databases vary widely and the way you look at a columnar or graph database is much different than a document database. Really, the piece ought to be separated to look at a certain class of NoSQL database compared with RDBMSes. Read the rest of The NoSQL Rise and Fall

Red Hat X Podcast Series
The Cloud Native Database for Modern Transactional Applications Featuring: YugabyteDB

Red Hat X Podcast Series

Play Episode Listen Later Feb 1, 2022 35:34


The rise of microservices, DevOps, and global applications is putting pressure on traditional systems of record. Modern transactional applications need databases that can deliver continuous availability, on-demand scale, and geo-distribution without sacrificing ACID guarantees or RDBMS features. These databases should run where the applications are and enable developers to deliver new capabilities quickly. Join Yugabyte's Tim Faulkes and our host for a conversation about the design of high-performance distributed SQL databases, and real-world use cases driving the adoption of distributed SQL databases.

Percona's HOSS Talks FOSS:  The Open Source Database Podcast
Learning MySQL Book by Sergey Kuzmichev and Vinicius Grippa - Percona Podcast 51

Percona's HOSS Talks FOSS: The Open Source Database Podcast

Play Episode Listen Later Jan 26, 2022 32:26


The HOSS Talks FOSS, Percona's podcast dedicated to open source, databases, and technology, starts the new year 2022 with a special episode to learn and improve knowledge about MySQL. Matt Yonkovit, The Head of Open Source Strategy (HOSS) at Percona, sits down with Vinicius Grippa and Sergey Kuzmichev from the Support Engineer Team at Percona to talk about their book "Learning MySQL: Get a Handle on Your Data". This practical guide provides the insights and tools necessary to take full advantage of this powerful RDBMS. This edition includes new chapters on high availability, load balancing, and using MySQL in the cloud.

Software Engineering Radio - The Podcast for Professional Software Developers
Episode 485: Howard Chu on B+tree Data Structure in Depth

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Nov 9, 2021 62:02


Howard Chu, CTO of Symas Corp and chief architect of the OpenLDAP project, discusses the key features of the B+tree data structure that make it the default choice for efficient and predictable storage of sorted data.

Data on Kubernetes Community
DoK Talks #97- Learn about Developing a Multicluster Operator with K8ssandra Operator // John Sanda

Data on Kubernetes Community

Play Episode Listen Later Oct 29, 2021 62:10


https://go.dok.community/slack https://dok.community/ ABSTRACT OF THE TALK Cassandra is a highly scalable database with an architecture that makes it well suited for multi-region workloads. A Kubernetes cluster often spans multiple zones within a single region. Multi-region Kubernetes clusters are less common, though, due to the challenges that they present. This has led to a growing number of multi-cluster solutions. In this presentation, John Sanda introduces K8ssandra Operator, which is designed from the ground up for multi-cluster deployments. John discusses how to reconcile objects across multiple clusters, how to manage secrets, pitfalls to avoid, and testing strategies. BIO John Sanda is a DataStax engineer working on the K8ssandra project. He is passionate about Cassandra and Kubernetes and loves being involved in open source. Prior to joining DataStax, John worked for a year at The Last Pickle as an Apache Cassandra consultant, and before that he spent a number of years at Red Hat as an engineer. It was during that time that John got involved with Cassandra, when he redesigned a metrics data store and built it with Cassandra in place of an RDBMS. He had his first exposure to Cassandra and Kubernetes together when the metrics storage engine was later used in OpenShift.

TSR - The Server Room
Episode 97 - DbVis Software's DbVisualizer Pro

TSR - The Server Room

Play Episode Listen Later Oct 23, 2021 38:34


DbVisualizer is a feature-rich, intuitive multi-database tool for developers, analysts, and database administrators, providing a single powerful interface across a wide variety of operating systems. With its easy-to-use and clean interface, DbVisualizer has proven to be one of the most cost-effective database tools available, not to mention that it runs on all major operating systems and supports all major RDBMSes. Users only need to learn and master one application. DbVisualizer integrates transparently with the operating system being used.

Screaming in the Cloud
Yugabyte and Database Innovations with Karthik Ranganathan

Screaming in the Cloud

Play Episode Listen Later Sep 21, 2021 38:53


About Karthik
Karthik was one of the original database engineers at Facebook responsible for building distributed databases including Cassandra and HBase. He is an Apache HBase committer, and also an early contributor to Cassandra, before it was open-sourced by Facebook. He is currently the co-founder and CTO of the company behind YugabyteDB, a fully open-source distributed SQL database for building cloud-native and geo-distributed applications.

Links: Yugabyte community Slack channel: https://yugabyte-db.slack.com/ Distributed SQL Summit: https://distributedsql.org Twitter: https://twitter.com/YugaByte

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: You could go ahead and build your own coding and mapping notification system, but it takes time, and it sucks! Alternatively, consider Courier, who is sponsoring this episode. They make it easy. You can call a single send API for all of your notifications and channels. You can control the complexity around routing, retries, and deliverability and simplify your notification sequences with automation rules. Visit courier.com today and get started for free. If you wind up talking to them, tell them I sent you and watch them wince—because everyone does when you bring up my name. That's the glorious part of being me. Once again, you could build your own notification system, but why on god's flat earth would you do that?

Corey: This episode is sponsored in part by “you”—gabyte. Distributed technologies like Kubernetes are great, citation very much needed, because they make it easier to have resilient, scalable systems.
SQL databases haven't kept pace though, certainly not like NoSQL databases have, like Route 53, the world's greatest database. We're still, other than that, using legacy monolithic databases that require ever-growing instances of compute. Sometimes we'll try and bolt them together to make them more resilient and scalable, but let's be honest, it never works out well. Consider YugabyteDB: it's a distributed SQL database that solves basically all of this. It is 100% open source, and there's no asterisk next to the “open” on that one. And it's designed to be resilient and scalable out of the box so you don't have to charge yourself to death. It's compatible with PostgreSQL, or “postgresqueal” as I insist on pronouncing it, so you can use it right away without having to learn a new language and refactor everything. And you can distribute it wherever your applications take you, from across availability zones to other regions or even other cloud providers, should one of those happen to exist. Go to yugabyte.com, that's Y-U-G-A-B-Y-T-E dot com, and try their free beta of Yugabyte Cloud, where they host and manage it for you. Or see what the open-source project looks like—it's effortless distributed SQL for global apps. My thanks to Yu—gabyte for sponsoring this episode.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's promoted episode comes from the place where a lot of my episodes do: I loudly and stridently insist that Route 53—or DNS in general—is the world's greatest database, and then what happens is a whole bunch of people who work at database companies get upset with what I've said. Now, please don't misunderstand me; they're wrong, but I'm thrilled to have them come on and demonstrate that, which is what's happening today. My guest is CTO and co-founder of Yugabyte, Karthik Ranganathan. Thank you so much for spending the time to speak with me today. How are you?

Karthik: I'm doing great. Thanks for having me, Corey.
We'll just go for YugabyteDB being the second-best database. Let's just keep the first [crosstalk 00:01:13]—Corey: Okay. We're all fighting for number two, there. And besides, number two tries harder. It's like that whole branding thing from years past. So, you were one of the original database engineers at Facebook, responsible for building a bunch of nonsense, like Cassandra and HBase. You were an HBase committer, early contributor to Cassandra, even before it was open-sourced.And then you look around and said, “All right, I'm going to go start a company”—roughly around 2016, if memory serves—“And I'm going to go and build a database and bring it to the world.” Let's start at the beginning. Why on God's flat earth do we need another database?Karthik: Yeah, that's the question. That's the million-dollar question isn't it, Corey? So, this is one, fortunately, that we've had to answer so many times from 2016, that I guess we've gotten a little good at it. So, here's the learning that a lot of us had from Facebook: we were the original team, like, all three of us founders, we met at Facebook, and we not only build databases, we also ran them. And let me paint a picture.Back in 2007, the public cloud really wasn't very common, and people were just going into multi-region, multi-datacenter deployments, and Facebook was just starting to take off, to really scale. Now, forward to 2013—I was there through the entire journey—a number of things happened in Facebook: we saw the rise of the equivalent of Kubernetes which was internally built; we saw, for example, microservice—Corey: Yeah, the Tupperware equivalent, there.Karthik: Tupperware, exactly. You know the name. Yeah, exactly. And we saw how we went from two data centers to multiple data centers, and nearby and faraway data centers—zones and regions, what do you know as today—and a number of such technologies come up. 
And I was on the database side, and we saw how existing databases wouldn't work to distribute data across nodes, failover, et cetera, et cetera. So, we had to build a new class of databases, what we now know as NoSQL. Now, back in Facebook, I mean, the typical difference between Facebook and an enterprise at large is Facebook has a few really massive applications. For example, you do a set of interactions, you view profiles, you add friends, you talk with them, et cetera, right? These are supermassive in their usage, but they were very few in their access patterns. At Facebook, we were mostly interested in dealing with scale and availability.

Existing databases couldn't do it, so we built NoSQL. Now, forward a number of years, I can't tell you how many times I've had conversations with other people building applications that will say, “Hey, can I get a secondary index on the SQL database?” Or, “How about that transaction? I only need it a couple of times; I don't need it all the time, but could you, for example, do multi-row transactions?” And the answer was always, “No,” because it was never built for that.

So today, what we're seeing is that transactional data and transactional applications are all going cloud-native, and they all need to deal with scale and availability. And so the existing databases don't quite cut it. So, the simple answer to why we need it is we need a relational database that can run in the cloud to satisfy just three properties: it needs to be highly available, failures or no, upgrades or no, it needs to be available; it needs to scale on demand, so simply add or remove nodes and scale up or down; and it needs to be able to replicate data across zones, across regions, and a variety of different topologies. So availability, scale, and geographic distribution, along with retaining most of the RDBMS features, the SQL features.
That's really the gap we're trying to solve.

Corey: I don't know that I've ever told this story on the podcast, but I want to say it was back in 2009. I flew up to Palo Alto and interviewed at Facebook, and it was a different time, a different era; it turns out that I'm not as good on the whiteboard as I am at running my mouth, so all right, I did not receive an offer, but I think everyone can agree at this point that was for the best. But I saw one of the most impressive things I've ever seen during that interview process. My interview was scheduled for a conference room at what must have been 11 o'clock or something like that, and at 10:59, they're looking at their watch, like, “Hang on, ten seconds.” And then the person I was with reached out to knock on the door to let the person know that their meeting was over, and the door opened.

So, it's very clear that even in large companies, which Facebook very much was at the time, people had synchronized clocks. This seems to be a thing, as I've learned from reading the parts that I could understand of the Google Spanner paper: when you're doing distributed databases, clocks are super important. At places like Facebook, that is, I'm not going to say it's easy, let's be clear here. Nothing is easy, particularly at scale, but Facebook has advantages in that they can mandate how clocks are going to be handled throughout every piece of their infrastructure. You're building an open-source database and you can't guarantee in what environment and on what hardware it's going to run, and, “You must have an atomic clock hooked up,” is not something you're generally allowed to tell people. How do you get around that?

Karthik: That's a great question. Very insightful, cutting right to the chase. So, the reality is, we cannot rely on atomic clocks, and we cannot mandate that our users use them, or, you know, we wouldn't be used very widely across different deployments.
In fact, we also work in on-prem private clouds and hybrid deployments where you really cannot get these atomic clocks. So, the way we do this is we come up with other algorithms to make sure that we're able to get the clocks as synchronized as we can.

So, think about it at a higher level; the reason Google uses atomic clocks is to make sure that they can wait until every other machine is synchronized with them, and the wait time is about seven milliseconds. So, the atomic clock service, or the TrueTime service, says no two machines are farther apart than about seven milliseconds. So, you just wait for seven milliseconds, and you know everybody else has caught up with you. And the reason you need this is you don't want to write some data on one machine, and then go to a machine that has a future or an older time and get inconsistent results. So, just by waiting seven milliseconds, they can ensure that no machine is going to be behind and therefore serve an older version of the data, so every write that was written on the other machine is seen.

Now, the way we do this is we only have NTP, the Network Time Protocol, which does synchronization of time across machines, except it takes 150 to 200 milliseconds. Now, we wouldn't be a very good database if we said, “Look, every operation is going to take 150 milliseconds.” So, within these 150 milliseconds, we actually do the synchronization in software. So, we replaced the notion of an atomic clock with what is called a hybrid logical clock.
So, one part uses NTP and physical time, and another part uses counters and logical time, and we keep exchanging RPCs—which are needed in the course of the database functioning anyway—to make sure we start normalizing time very quickly.

This in fact has some advantages—and disadvantages; everything is a trade-off—but the advantage it has over a TrueTime-style deployment is you don't even have to wait that seven milliseconds in a number of scenarios; you can just respond instantly. So, that means you get even lower latencies in some cases. Of course, the trade-off is there are other cases where you have to do more work, and therefore more latency.

Corey: The idea absolutely makes sense. You started this as an open-source project, and it's thriving. Who's using it and for what purposes?

Karthik: Okay, so one of the fundamental tenets of building this database—I think back to your question of why the world needs another database—is that the hypothesis is not so much that the world needs another database API; that's really what users complain about, right? You create a new API and—even if it's SQL—you tell people, “Look. Here's a new database. It does everything for you,” and it'll take them two years to figure out what the hell it does, and build an app, and then put it in production, and then they'll build a second and a third, and then by the time they hit the tenth app, they find out, “Okay, this database cannot do the following things.” But you're five years in; you're stuck, you can only add another database.

That's really the story of how NoSQL evolved. And it wasn't built as a general-purpose database, right? So, in the meanwhile, databases like Postgres, for example, have been around for so long that they have absorbed such a large ecosystem of usage and people who know how to use Postgres, and so on.
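The scheme Karthik describes above, a physical component from NTP plus a logical counter that gets merged on every RPC, matches the published hybrid logical clock algorithm. As a hedged sketch of that general algorithm (not YugabyteDB's actual implementation; the class and method names are invented for illustration):

```python
class HybridLogicalClock:
    """Sketch of a hybrid logical clock: the physical component l
    tracks the local (NTP-synchronized) clock, and the logical
    counter c breaks ties so causal order is preserved even when
    physical clocks are skewed or stalled."""
    def __init__(self, physical_clock):
        self.now = physical_clock  # callable returning physical time
        self.l = 0                 # highest physical time seen so far
        self.c = 0                 # logical counter

    def send(self):
        """Timestamp a local event or an outgoing message."""
        pt = self.now()
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1            # clock hasn't advanced; count instead
        return (self.l, self.c)

    def recv(self, l_msg, c_msg):
        """Merge the timestamp carried by an incoming message (RPC)."""
        pt = self.now()
        l_new = max(self.l, l_msg, pt)
        if l_new == self.l == l_msg:
            self.c = max(self.c, c_msg) + 1
        elif l_new == self.l:
            self.c += 1
        elif l_new == l_msg:
            self.c = c_msg + 1     # absorb the sender's "future" clock
        else:
            self.c = 0             # local physical time is strictly ahead
        self.l = l_new
        return (self.l, self.c)

# Even with a stuck physical clock, timestamps stay monotonic, and a
# message from a machine with a faster clock pulls this node forward.
clk = HybridLogicalClock(lambda: 100)
t1 = clk.send()         # counter 0 at physical time 100
t2 = clk.send()         # same physical time; counter breaks the tie
t3 = clk.recv(105, 2)   # jumps ahead to the sender's physical time
assert t1 < t2 < t3
```

Timestamps compare as plain tuples, which is what lets reads and writes be ordered consistently without waiting out a TrueTime-style uncertainty window.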
So, we made the decision that we're going to keep the database API compatible with known things, so people really know how to use it from the get-go, and enhance it at a lower level to make it cloud-native. So, what does YugabyteDB do for people?

The upper half is the same as Postgres, with Postgres features—it reuses the code—but the lower half is built to be [shared nothing 00:09:10], scalable, resilient, and geographically distributed. In public cloud managed-database terms, the upper half is built like Amazon Aurora, and the lower half is built like Google Spanner. Now, when you think about workloads that can benefit from this, we're a transactional database that can serve user-facing and real-time applications that need low latency. So, the best way to think about it is, people are building transactional applications on top of, say, a database like Postgres, but the application itself is cloud-native. You'd have to do a lot of work to make this Postgres piece highly available, and scalable, and replicate data, and so on in the cloud.

Well, with YugabyteDB, we've done all that work for you, and it's as open-source as Postgres. So if you're building a cloud-native app on Postgres that's user-facing or transactional, YugabyteDB takes care of making the database layer behave like Postgres while becoming cloud-native.

Corey: Do you find that your users are using the same database instance, for lack of a better term? I know that instance is sort of a nebulous term; we're talking about something that's distributed. But are they having database instances that span multiple cloud providers, or is that something that is more talk than you're actually seeing in the wild?

Karthik: So, I'd probably replace the word ‘instance' with ‘cluster', just for clarity, right?

Corey: Excellent. Okay.

Karthik: So, a cluster has a bunch—

Corey: I concede the point, absolutely.

Karthik: Okay. [laugh]. Okay.
So, we'll still keep Route 53 on top, though, so it's good. [laugh].Corey: At that point, the replication strategy is called a zone transfer, but that's neither here nor there. Please, by all means, continue.Karthik: [laugh]. Okay. So, a cluster database like YugabyteDB has a number of instances. Now, I think the question is, is it theoretical or real? What we're seeing is, it is real, and it is real perhaps in slightly different ways than people imagine it to be.So, I'll explain what I mean by that. Now, there's one notion of being multi-cloud where you can imagine there's like, say, the same cluster that spans multiple different clouds, and you have your data being written in one cloud and being read from another. This is not a common pattern, although we have had one or two deployments that are attempting to do this. Now, a second deployment shifted once over from there is where you have your multiple instances in a single public cloud, and a bunch of other instances in a private cloud. So, it stretches the database across public and private—you would call this a hybrid deployment topology—that is more common.So, one of the unique things about YugabyteDB is we support asynchronous replication of data, just like your RDBMSs do, the traditional RDBMSs. In fact, we're the only one that straddles both synchronous replication of data as well as asynchronous replication of data. We do both. So, once shifted over would be a cluster that's deployed in one of the clouds but an asynchronous replica of the data going to another cloud, and so you can keep your reads and writes—even though they're a little stale, you can serve it from a different cloud. And then once again, you can make it an on-prem private cloud, and another public cloud.And we see all of those deployments, those are massively common. 
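The asynchronous topology Karthik describes, where writes are acknowledged locally and a replica in another cloud lags slightly behind, can be sketched with a toy replication log. This is an illustration of the general pattern only, with invented names, not Yugabyte's replication API:

```python
class Primary:
    """Accepts writes and acknowledges them immediately; changes are
    shipped to replicas asynchronously via an append-only log."""
    def __init__(self):
        self.data = {}
        self.log = []   # replication log, consumed by replicas later

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))   # ack now, replicate later

class AsyncReplica:
    """Serves reads that may be slightly stale until it catches up."""
    def __init__(self):
        self.data = {}
        self.applied = 0   # how far into the primary's log we've applied

    def catch_up(self, primary):
        # Apply whatever log entries have arrived since last time.
        for key, value in primary.log[self.applied:]:
            self.data[key] = value
        self.applied = len(primary.log)

primary, replica = Primary(), AsyncReplica()
primary.write("user:1", "alice")
stale = replica.data.get("user:1")   # replica hasn't caught up yet
replica.catch_up(primary)
fresh = replica.data.get("user:1")   # now reflects the write
```

The trade-off in the transcript is visible here: the replica can serve reads locally (low latency, survives a cross-cloud partition), but between `write` and `catch_up` those reads are stale, whereas synchronous replication would delay the write acknowledgment instead.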
And then the last one over would be the same instance of an app, or perhaps even different applications, some of them running on one public cloud and some of them running on a different public cloud, and you want the same database underneath to have characteristics of scale and failover. Like for example, if you built an app on Spanner, what would you do if you went to Amazon and wanted to run it for a different set of users?Corey: That is part of the reason I tend to avoid the idea of picking a database that does not have at least theoretical exit path because reimagining your entire application's data model in order to migrate is not going to happen, so—Karthik: Exactly.Corey: —come hell or high water, you're stuck with something like that where it lives. So, even though I'm a big proponent as a best practice—and again, there are exceptions where this does not make sense, but as a general piece of guidance—I always suggest, pick a provider—I don't care which one—and go all-in. But that also should be shaded with the nuance of, but also, at least have an eye toward theoretically, if you had to leave, consider that if there's a viable alternative. And in some cases in the early days of Spanner, there really wasn't. So, if you needed that functionality, okay, go ahead and use it, but understand the trade-off you're making.Now, this really comes down to, from my perspective, understand the trade-offs. But the reason I'm interested in your perspective on this is because you are providing an open-source database to people who are actually doing things in the wild. There's not much agenda there, in the same way, among a user community of people reporting what they're doing. So, you have in many ways, one of the least biased perspectives on the entire enterprise.Karthik: Oh, yeah, absolutely. And like I said, I started from the least common to the most common; maybe I should have gone the other way. 
But we absolutely see people that want to run the same application stack in multiple different clouds for a variety of reasons.Corey: Oh, if you're a SaaS vendor, for example, it's, “Oh, we're only in this one cloud,” potential customers who in other clouds say, “Well, if that changes, we'll give you money.” “Oh, money. Did you say ‘other cloud?' I thought you said something completely different. Here you go.” Yeah, you've got to at some point. But the core of what you do, beyond what it takes to get that application present somewhere else, you usually keep in your primary cloud provider.Karthik: Exactly. Yep, exactly. Crazy things sometimes dictate or have to dictate architectural decisions. For example, you're seeing the rise of compliance. Different countries have different regulatory reasons to say, “Keep my data local,” or, “Keep some subset of data are local.”And you simply may not find the right cloud providers present in those countries; you may be a PaaS or an API provider that's helping other people build applications, and the applications that the API provider's customers are running could be across different clouds. And so they would want the data local, otherwise, the transfer costs would be really high. So, a number of reasons dictate—or like a large company may acquire another company that was operating in yet another cloud; everything else is great, but they're in another cloud; they're not going to say, “No because you're operating on another cloud.” It still does what they want, but they still need to be able to have a common base of expertise for their app builders, and so on. So, a number of things dictate why people started looking at cross-cloud databases with common performance and operational characteristics and security characteristics, but don't compromise on the feature set, right?That's starting to become super important, from our perspective. 
I think what's most important is the ability to run the database with ease while not compromising on your developer agility or the ability to build your application. That's the most important thing.Corey: When you founded the company back in 2016, you are VC-backed, so I imagine your investor pitch meetings must have been something a little bit surreal. They ask hard questions such as, “Why do you think that in 2016, starting a company to go and sell databases to people is a viable business model?” At which point you obviously corrected them and said, “Oh, you misunderstand. We're building an open-source database. We're not charging for it; we're giving it away.”And they apparently said, “Oh, that's more like it.” And then invested, as of the time of this recording, over $100 million in your company. Let me to be the first to say there are aspects of money that I don't fully understand and this is one of those. But what is the plan here? How do you wind up building a business case around effectively giving something away for free?And I want to be clear here, Yugabyte is open-source, and I don't have an asterisk next to that. It is not one of those ‘source available' licenses, or ‘anyone can do anything they want with it except Amazon' or ‘you're not allowed to host it and offer it as a paid service to other people.' So, how do you have a business, I guess is really my question here?Karthik: You're right, Corey. We're 100% open-source under Apache 2.0—I mean the database. So, our theory on day one—I mean, of course, this was a hard question and people did ask us this, and then I'll take you guys back to 2016. It was unclear, even as of 2016, if open-source companies were going to succeed. It was just unclear.And people were like, “Hey, look at Snowflake; it's a completely managed service. They're not open-source; they're doing a great job. Do you really need open-source to succeed?” There were a lot of such questions. 
And every company, every project, every space has to follow its own path, just applying learnings.Like for example, Red Hat was open-source and that really succeeded, but there's a number of others that may or may not have succeeded. So, our plan back then was to tread the waters carefully in the sense we really had to make sure open-source was the business model we wanted to go for. So, under the advisement from our VCs, we said we'd take it slowly; we want to open-source on day one. We've talked to a number of our users and customers and make sure that is indeed the path we've wanted to go. The conversations pretty clearly told us people wanted an open database that was very easy for them to understand because if they are trusting their crown jewels, their most critical data, their systems of record—this is what the business depends on—into a database, they sure as hell want to have some control over it and some transparency as to what goes on, what's planned, what's on the roadmap. “Look, if you don't have time, I will hire my people to go build for it.” They want it to be able to invest in the database.So, open-source was absolutely non-negotiable for us. We tried the traditional technique for a couple of years of keeping a small portion of the features of the database itself closed, so it's what you'd call ‘open core.' But on day one, we were pretty clear that the world was headed towards DBaaS—Database as a Service—and make it really easy to consume.Corey: At least the bad patterns as well, like, “Oh, if you want security, that's a paid feature.”Karthik: Exactly.Corey: No. That is not optional. And the list then of what you can wind up adding as paid versus not gets murky, and you're effectively fighting your community when they try and merge some of those features in and it just turns into a mess.Karthik: Exactly. So, it did for us for a couple of years, and then we said, “Look, we're not doing this nonsense. 
We're just going to make everything open and just make it simple.” Because our promise to the users was, we're building everything that looks like Postgres, so it's as valuable as Postgres, and it'll work in the cloud. And people said, “Look, Postgres is completely open and you guys are keeping a few features not open. What gives?”And so after that, we had to concede the point and just do that. But one of the other founding pieces of a company, the business side, was that DBaaS and ability to consume the database is actually far more critical than whether the database itself is open-source or not. I would compare this to, for example, MySQL and Postgres being completely open-source, but you know, Amazon's Aurora being actually a big business, and similarly, it happens all over the place. So, it is really the ability to consume and run business-critical workloads that seem to be more important for our customers and enterprises that paid us. So, the day-one thesis was, look, the world is headed towards DBaaS.We saw that already happen with inside Facebook; everybody was automated operations, simplified operations, and so on. But the reality is, we're a startup, we're a new database, no one's going to trust everything to us: the database, the operations, the data, “Hey, why don't we put it on this tiny company. And oh, it's just my most business-critical data, so what could go wrong?” So, we said we're going to build a version of our DBaaS that is in software. So, we call this Yugabyte Platform, and it actually understands public clouds: it can spin up machines, it can completely orchestrate software installs, rolling upgrades, turnkey encryption, alerting, the whole nine yards.That's a completely different offering from the database. It's not the database, it's just on top of the database and helps you run your own private cloud. 
So, effectively, if you install it on your Amazon account or your Google account, it will convert it into what looks like a DynamoDB, or a Spanner, or what have you, with YugabyteDB as the database inside. So, that is our commercial product; that's source-available, and that's what we charge for. The database itself is completely open.

Again, the other piece of the thinking is, if we ever charge too much, our customers have the option to say, “Look, I don't want your DBaaS thing; I'm going to the open-source database and we're fine with that.” So, we really want to charge for value. And obviously, we have a completely managed version of our database as well. So, we reuse this platform for our managed version, so you can kind of think of it as portability, not just of the database but also of the control plane, the DBaaS plane. They can run it themselves, we can run it for them, or they could take it to a different cloud, and so on and so forth.

Corey: I like that monetization model a lot better than a couple of others. I mean, let's be clear here, you spent a lot of time developing some of these concepts for the industry when you were at Facebook. And the other monetization models are kind of terrifying, like, “Okay. We're going to just monetize the data you store in the open-source database.” Only slightly less terrifying would be the Google approach of, “Ah, every time you wind up running a SQL query, we're going to insert ads.”

So, I like the model of being able to offer features that only folks who already have expensive problems, with money to burn on solving those problems, will gravitate towards. You're not disadvantaging the community or the small startup who wants it but can't afford it. I like that model.

Karthik: Actually, the funny thing is, we are seeing a lot of startups also consume our product. And the reason is because we only charge for the value we bring.
Typically, the problems that a startup faces are actually much simpler than the complex requirements of an enterprise at scale. They are different. So, the value is also proportional to what they want and how much they want to consume, and that takes care of itself.

So, for us, we see that startups, equally so as enterprises, have only a limited amount of bandwidth. They don't really want to spend time on operationalizing the database, especially if they have an out to say, “Look, if tomorrow this gets expensive, I can actually put in the time and money to move out and go run this myself. Why don't I just get started, because the budget seems fine, and I couldn't have done it better myself anyway, because I'd have to put people on it and that's more expensive at this point.” So, it doesn't change the fundamentals of the model; I just want to point out, both sides are actually gravitating to this model.

Corey: This episode is sponsored in part by our friends at Jellyfish. So, you're sitting in your office chair, bleary-eyed, parked in front of a PowerPoint and—oh my sweet feathery Jesus, it's the night before the board meeting, because of course it is! As you slot that crappy screenshot of traffic-light-colored Excel tables into your deck, or sift through endless spreadsheets looking for just the right data set, have you ever wondered, why is it that sales and marketing get all these shiny, awesome analytics and insight tools? Whereas engineering basically gets left with the dregs. Well, the founders of Jellyfish certainly did. That's why they created the Jellyfish Engineering Management Platform, but don't you dare call it JEMP! Designed to make it simple to analyze your engineering organization, Jellyfish ingests signals from your tech stack, including JIRA, Git, and collaborative tools. Yes, it's depressing to think of those things as your tech stack, but this is 2021.
They use that to create a model that accurately reflects just how the breakdown of engineering work aligns with your wider business objectives. In other words, it translates from code into spreadsheet. When you have to explain what you're doing from an engineering perspective to people whose primary IDE is Microsoft PowerPoint, consider Jellyfish. That's Jellyfish.co, and tell them Corey sent you! Watch for the wince; that's my favorite part.

Corey: A number of different surveys have come out that say overwhelmingly companies prefer open-source databases, and this is waved around as a banner of victory by a lot of—well, let's be honest—open-source database companies. I posit that is in fact crap and also bad data, because what the open-source purists—of which I admit, I used to be one, and now I solve business problems instead—believe is that people are talking about freedom, and choice, and the rest. In practice, in my experience, what people are really distilling that down to is they don't want a commercial database. And it's not even that they're not willing to pay money for it, but they don't want to have a per-core licensing challenge, or even have to track licensing of where it is installed and how, and wind up having to cut checks for folks. For example, I'm going to dunk on someone because why not?

Azure for a while has had this campaign that it is five times cheaper to run some Microsoft SQL workloads in Azure than it is on AWS, as if this was some magic engineering feat of strength or something. It's absolutely not; it's that it is really expensive licensing-wise to run it on things that aren't Azure. And that doesn't make customers feel good. That's the thing they want to get away from, whatever open-source license it is, and in many cases, when the source-available stuff starts trending towards, "Oh, you're going to pay us or you're not going to run it at all," that scares the living hell out of people; they don't actually care about it being open.
So, at the risk of alienating, I'm sure, some of the more vocal parts of your constituency, where do you fall on that?

Karthik: We are completely open, but for a few reasons, right? Like, multiple different reasons. On the debate of whether it is purely open or just completely permissive, I tend to think people care about the openness more so than just the ability to consume at will without worrying about the license, but for a few different reasons, and it depends on which segment of the market you look at. If you're talking about small and medium businesses and startups, you're absolutely right; it doesn't matter. But if you're looking at larger companies, they actually care that, for example, if they want a feature, they are able to control their destiny, because you don't want to be half-wedded to a database that cannot solve everything, especially when the time pressure comes or you need to do something.

So, you want to be able to control or to influence the roadmap of the project. You want to know how the product is built—the good and the bad—and you want a lot of people testing the product and their feedback to come out in the open, so you at least know what's wrong. Many times people feel like, "Hey, my product doesn't work in these areas," is actually a bad thing. It's actually a good thing, because at least those people won't try it and [laugh] they'll be safe. Customer satisfaction is more important than just whatever image it is that you want to project about the product.

At least that's what I've learned in all these years working with databases. But there's a number of reasons why open-source is actually good.
There's also a very subtle reason that people may not understand, which is that legal teams—engineering teams that want to build products don't want to get caught up in a legal review that takes many months just to make sure that, look, this may be a unique version of a license, but it's not a license the legal team has seen before, and there's going to be a back-and-forth for many months, and it's just going to derail their product and their timelines, not because the database didn't do its job or because the team wasn't ready, but because the company doesn't know what risk it'll face in the future. There's a number of these aspects where open-source starts to matter for real. I'm not a purist, I would say.

I'm a pragmatist, and I have always been, but I would say that, you know, I might be sounding like a purist, but there are a number of reasons why a true open source is actually useful, right? And at the end of the day, we have already established, at least at Yugabyte, we're pretty clear about that: the value is in the consumption and not in the tech. Because if you want to run a tier-two workload or a hobbyist app at home, would you want to pay for a database? Probably not. I just want to do something for a while and then shut it down and go do my thing. I don't care if the database is commercial or open-source. In that case, being open-source doesn't really take away. But if you're a large company betting on it, it does. So.

Corey: Oh, it goes beyond that, because it's not even, in the large company story, about whether it costs money, because regardless, I assure you, open-source is not free; the most expensive thing that we see in all of our customer accounts—again, our consultancy fixes AWS bills, an expensive problem that hits everyone—the environment in AWS is always less expensive than the people who are working on the environment.
Payroll is an expense that dwarfs the AWS bill for anyone that is not a tiny startup that is still not paying a market-rate salary to its founders. It doesn't work that way. And the idea, for those folks, is not about the money; it's about the predictability. And if there's a 5x price hike from their database manager that suddenly and completely disrupts their unit economic model, they're in trouble. That's the value of open-source, in that it can go anywhere. It's a form of not being locked into any vendor: not where it's hosted, and not, now, any one company that has put it out there into the world.

Karthik: Yeah, and the source-available license, we considered that also. The reason to vote against that was you can get into scenarios where the company gets competitive with its open-source side, where the open-source side wants a couple of other features to really make it work for their own use case (you know, case in point is the startup), but the company wants to hold those features for the commercial side, and now the startup has that 5x price jump anyway. So, at this point, it comes to a head, where the company—the startup—is being charged not for value, but because of the monetization model or the business model. So, we said, "You know what? The best way to do this is to truly compete against open-source. If someone wants to operationalize the database, great. But we've already done it for you." If you think that you can operationalize it at a lower cost than what we've done, great. That's fine.

Corey: I have to ask, there has to have been a question somewhere along the way, during the investment process, of: what if AWS moves into your market? And I can already say part of the problem with that line of reasoning is, okay, let's assume that AWS turns Yugabyte into a managed database offering.
First, they're not going to be able to articulate for crap why you should use that over anything else, because they tend to mumble when it comes time to explain what it is that they do. But it has to be perceived as a competitive threat. How do you think about that?

Karthik: Yeah, this absolutely came up quite a bit. And like I said, in 2016 this wasn't news; this is something that was happening in the world already. So, I'll give you a couple of different points of view on this. The reason why AWS got so successful in building a cloud is not because they wanted to get into the database space; they simply wanted their cloud to be super successful, and that required value-added services like these databases. Now, every time a new technology shift happens, it gives some set of people an unfair advantage.

In this case, database vendors probably didn't recognize how important the cloud was and how important it was to build a first-class experience on the cloud on day one, as the cloud came up, because it wasn't proven, and they had twenty other things to do, and rightfully so. Now, AWS comes up, and they're trying to prove a point that the cloud is really useful and absolutely valuable for their customers, and so they start putting in value-added services, and now suddenly you're in this open-source battle. At least that's how I would view how it developed. With Yugabyte, obviously, the cloud's already here; we know that on day one, so we're putting out our managed service to be as good as AWS or better. The database has its value, but the managed service has its own value, and so we want to make sure we provide at least as much value as AWS, but on any cloud, anywhere.

So, that's the other part. And we also talked about the mobility of the DBaaS itself, moving it to your private account and running the same thing, as well as for public.
So, these are some of the things that we have built that we believe make us super valuable.

Corey: It's a better approach than a lot of your predecessor companies, who decided, "Oh, well, we built the thing; obviously, we're going to be the best at running it. The end." Because they dramatically sold AWS's operational excellence short. And it turns out, they're very good at running things at scale. So, that's a challenging thing to beat them on.

And even if you're able to, it's hard to differentiate among the differences, because at that caliber of operational rigor, you can only tell in the very niche cases; it's a hard thing to differentiate on. I like your approach a lot better. Before we go, I have one last question for you, and normally it's one of those positive, uplifting ones of what workloads are best for Yugabyte, but I think that's boring; let's be more cynical and negative. What workloads would run like absolute crap on YugabyteDB?

Karthik: [laugh]. Okay, we do have a thing for this, because we don't want to take on workloads and, you know, have everybody have a bad experience. So, we're a transactional database built for user-facing applications, real-time, and so on, right? We're not good at warehousing and analytic workloads. So, for example, if you were using a Snowflake or a Redshift, those workloads are not going to work very well on top of Yugabyte.

Now, we do work with other external systems, like Spark and Presto, which are real-time analytic systems, but they translate the queries that the end-users have into a more operational type of query pattern. However, if you're using it straight up for analytics, we're not a good bet. Similarly, there are cases where people want a very high number of IOPS by using a cache or even a persistent cache. Amazon just came out with a [number of 00:31:04] persistent cache that does very high throughput and low-latency serving.
We're not good at that. We can do reasonably low-latency serving and reasonably high IOPS at scale, but we're not the use case where you want to hit that same lookup over and over and over, millions of times in a second; that's not the use case for us. The third thing I'd say is, we're a system of record, so people care about the data they put in; they absolutely don't want to lose it, and they want to show that it's transactional. So, if there's a workload where there's a lot of data and you're okay if you lose some of it, and it's just some sensor data, and your reasoning is, "Okay, if I lose a few data points, it's fine," I mean, you could still use us, but at that point you'd really have to be a fanboy or something for Yugabyte. I mean, there are other databases that probably do it better.

Corey: Yeah, that's the problem: whenever someone says of a database—or any tool that they've built—"this is great," and you ask, "What workloads is it not a fit for?" and their answer is, "Oh, nothing. It's perfect for everything."

Yeah, I want to believe you, but my inner bullshit sense is tingling on that one, because nothing's fit for all purposes; it doesn't work that way. Honestly, this is going to be, I guess, heresy in the engineering world, but even computers aren't always the right answer for things. Who knew?

Karthik: As a founder, I struggled with this answer a lot, initially. I think the problem is, when you're thinking about a problem space, that's all you're thinking about; you don't know what other problem spaces exist, and when you are asked the question, "What workloads is it a fit for?" At least I used to say, initially, "Everything," because I'm only thinking about that problem space as the world, and it's fit for everything in that problem space, except I don't know how to articulate the problem space—

Corey: Right—

Karthik: —[crosstalk 00:32:33].
[laugh].

Corey: —and at some point, too, you get so locked into one particular way of thinking about the world that when people ask about other cases, it's like, "Oh, that wouldn't count." And then your follow-up question is, "Wait, what's a bank?" And it becomes a different story. It's, how do you wind up reasoning about these things? I want to thank you for taking all the time you have today to speak with me. If people want to learn more about Yugabyte—either the company or the DB—how can they do that?

Karthik: Yeah, thank you as well for having me. I think to learn about Yugabyte, just come join our community Slack channel. There are a lot of people there; like, over 3,000 people, all asking interesting questions. There's a lot of interesting chatter on there, so that's one way.

We have an industry-wide event; it's called the Distributed SQL Summit. It's coming up September 22nd and 23rd; it's a two-day event. That would be a great place to actually learn from practitioners, and people building applications, and people in the general space and its adjacencies. And it's not necessarily just about Yugabyte; it's generally about distributed SQL databases, hence it's called the Distributed SQL Summit. And then you can ask us on Twitter or any of the usual social channels as well. We love interaction, so we are a pretty open and transparent company. We love to talk to you guys.

Corey: Well, thank you so much for taking the time to speak with me. We'll, of course, throw links to that into the [show notes 00:33:43]. Thank you again.

Karthik: Awesome. Thanks a lot for having me. It was really fun. Thank you.

Corey: Likewise. Karthik Ranganathan, CTO and co-founder of YugabyteDB. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment, halfway through realizing that I'm not charging you anything for this podcast and converting the angry comment into a term sheet for a $100 million investment.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

Software Lifecycle Stories
Software Match Makers with Mohan Panchapikesan

Aug 20, 2021 · 47:07


In this conversation, Mohan Panchapikesan, CEO and Director at Medexpert Software Solutions, shares his stories:

- Mohan started his career after completing his engineering at Anna University and IIT Kharagpur, and talks about being a self-made person who remains a small-town boy from Trichy
- Mohan shares being inherently curious, and how his firm looks at not only hospital management software but also the ecosystem that drives it, including insurance, pharma, equipment sourcing, and the entire supply chain
- Mohan talks about the growth and fixed mindsets and their application as a CEO: being in a position of authority, understanding people's fears and ambiguity enables bringing cohesion towards the goals of the firm through motivation
- Mohan talks about Elon Musk's motivation for moving to Mars, due to the explosion of population, and taking that as an opportunity
- He refers to the Ford, Starbucks, McDonald's, and Microsoft stories to talk about bigger being better, and democratising everything you do
- Talking about evolution, democratising the reach of services between the 1990s and the 2020s has expanded the voice of individuals; he takes the example of this podcast existing because of the forces that are in play
- Staying curious is the only way to sustain your learning
- "Perform or perish" in the corporate world has seeped into the personal world as "adapt or die"
- Discoverability is the key in the age of a plethora of choices, and technologists become the matchmakers of the world; the best matchmakers are those that have the horoscopes of all the open-source software out there
- From interoperability to intelligently making those interconnections
- Intelligence in integration and the ecosystem is evolving; unless we as mankind evolve with it, we will become artificial or redundant in the ecosystem
- One has to be attuned to what is happening around us, socially or technology-wise

Mohan started his career in software at SA Software, working as an RDBMS specialist. He later honed his data and software skills working at Walmart as a program manager. Mohan worked at Cognizant in various leadership roles, scaling people, technology, and solutions to provide the right type of integration to clients. Mohan became an entrepreneur serendipitously, through a mutual friend. He's now CEO of Medexpert Software Solutions, which is focused on reducing inefficiencies in the medical ecosystem and driving innovation.

Mohan can be reached at https://www.linkedin.com/in/mpjaya/

SecTools Podcast Series
SecTools Podcast E33 With Joxean Koret

Aug 17, 2021 · 30:40


Joxean Koret has been working for the past 15 years in many different computing areas. He started as a database software developer and DBA for a number of different RDBMSs. Eventually he turned towards reverse engineering and applied these DB insights to discover dozens of vulnerabilities in major database products, especially Oracle. He has also worked in areas like malware analysis, anti-malware software development, and developing IDA Pro at Hex-Rays. He is currently a senior security engineer. Joxean is the author and maintainer of the Diaphora and Pigaios projects, which focus on diffing techniques. For more SecTools podcast episodes, visit https://infoseccampus.com

Voice of the DBA
Where Does NoSQL work well?

Jun 28, 2021 · 2:42


It seems not too long ago that NoSQL was the hottest thing on the market. MongoDB was leading the charge, with tremendous growth as a company and many developers looking to adopt their platform. I remember years ago going to visit a number of customers in New York City, where Mongo had just grown their offices. Every company was considering migrating their RDBMS to MongoDB, trying to calculate the cost and return. Now, I certainly hear about companies adopting databases in the NoSQL family, but not always (or even often) to replace an RDBMS. Instead, it's for a new application or alongside an existing RDBMS. While there are applications that are well suited for a non-RDBMS data store, there are plenty that perform worse on those platforms. This might be because the platform doesn't handle their workload, or because the developers aren't writing good code, but I know I'm not worried that RDBMS usage is going away, or even declining. Read the rest of Where Does NoSQL work well?
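The trade-off the episode describes, a document store alongside or instead of a relational model, can be sketched in a few lines of plain Python. This is a toy illustration with invented names and data, not any particular database's API: the document style embeds (and repeats) related data, while the relational style normalizes it behind a foreign key.

```python
# Document style (NoSQL-ish): each order embeds the full customer record,
# so customer data is repeated in every document.
orders_docs = [
    {"id": 1, "item": "keyboard", "customer": {"id": 7, "name": "Ada", "city": "London"}},
    {"id": 2, "item": "mouse",    "customer": {"id": 7, "name": "Ada", "city": "London"}},
]

# Relational style: customers live in one table; orders reference them by id.
customers = {7: {"name": "Ada", "city": "London"}}
orders = [
    {"id": 1, "item": "keyboard", "customer_id": 7},
    {"id": 2, "item": "mouse",    "customer_id": 7},
]

# Updating the city is one write relationally, but touches every document
# in the embedded model.
customers[7]["city"] = "Cambridge"
for doc in orders_docs:
    if doc["customer"]["id"] == 7:
        doc["customer"]["city"] = "Cambridge"

# A "join" in the relational model is a lookup through the foreign key.
joined = [(o["item"], customers[o["customer_id"]]["name"]) for o in orders]
print(joined)  # [('keyboard', 'Ada'), ('mouse', 'Ada')]
```

Neither shape is wrong; the embedded form can make single-document reads cheap, while the normalized form keeps uniform data consistent with one write — which is the workload-fit question the piece raises.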

YoungCTO.Tech
IT Career Talk: Part Leader Kenneth Samonte

May 9, 2021 · 25:12


Guest Engineer Kenneth Samonte, with YoungCTO's Rafi Quisumbing. Ken is an Amazon AWS Certified Solutions Architect – Professional, AWS Certified DevOps Professional, Certified Google Cloud Platform (GCP) Engineer, and Red Hat Certified System Administrator (RHCSA). His career started as a full-time on-premises Linux/UNIX system administrator. Now, he's working as a Cloud Architect for Samsung Research and Development Philippines. LINKEDIN: https://www.linkedin.com/in/kenneth-s... Ken is also a VMware Certified Professional, is IBM AIX Administration certified, and has an ITIL v3 certification. He is a registered Electronics Engineer and a Cisco Certified Network Associate (CCNA). Ken has also written a book dedicated to AWS DevOps professionals who want to get AWS certified.

Core Technologies:
- Cloud: Amazon Web Services, Google Cloud Platform
- Infrastructure: Terraform, Ansible, AWS, GCP, Docker, Kubernetes, VMware, Linux, OpenStack, Ceph, resilience, and disaster recovery
- CI/CD: Spinnaker, Jenkins, Packer, CircleCI
- Monitoring: Stackdriver, SumoLogic, DataDog, New Relic, Elasticsearch, Logstash, Kibana, Grafana, Prometheus, Zabbix
- Programming: Node.js, GraphQL, Java, RDBMS, NoSQL, MySQL, Redis, Memcached, SOAP, microservices
- Miscellaneous: WordPress, Zimbra, iRedMail, JIRA, Red Hat, Ubuntu

If you want to be a guest here, please reach out to me anywhere. Even just leaving a comment is fine. YoungCTO Shirt: https://store.awsug.ph/shop/product/1...

Network Automation Hangout
002 — NetBox as a Source of Truth, problems with Open Source, Python type annotations

Apr 26, 2021 · 65:29


Panelists: Jeremy Stretch (@jstretch85), Roman Dodin (@ntdvps), John McGovern (@IPvZero), Carl Montanari (@carlrmontanari), and Dmitry Figol (@dmfigol)

Topics:
- NetBox as a Source of Truth: roadmap, scope, RDBMS/git
- Problems with Open Source: expectations, licenses, contributions, sponsorships
- Network Collective gNMI episode: https://networkcollective.com/2021/04/gnmi/
- Python type annotation problem in 3.10 (PEP 563/649): https://github.com/samuelcolvin/pydantic/issues/2678

Recorded live on 2021-04-22. Weekly recordings with the community on Thursdays at 6 PM CET / 12 PM ET / 9 AM PT on dogehouse.tv

DBAle
DBAle 31: Monitoring matters for modern data management

Apr 21, 2021 · 54:29


Is it a beer, is it a muffin or is it a Panda Pop? Who knows but at 9% strength, producer Louise joins our Chris duo as plan B, to monitor proceedings. Very fitting as our discussion focuses on monitoring for the modern data age. We talk busyness, hybrid estates, tooling, Multi-RDBMS, and a surprising amount about car mechanics. In The News we debate scrape or breach, and the potential maximum fines for the latest Facebook scandal. So, grab yourself a beer and tune in – cheers.

Start Over Coder
052: Relational Databases Intro

Dec 27, 2020 · 14:11


Working with data is one of the most important aspects of development. This week I got an intro to relational databases, and here's what I learned. As we all know, a database is where we store data…what makes it relational is the method we use to store it: information is stored in tables, and then we relate those tables to each other by referencing unique id numbers from one table to the next. To interact with the data, we use a relational database management system (RDBMS) like MySQL, PostgreSQL, MariaDB, etc. Another option for storing data is to use a non-relational database like MongoDB or Neo4j, which takes a less rigid approach to how and where data can be stored. These are also sometimes called NoSQL (not only SQL) databases, and they provide flexibility when the data you're working with varies a lot in structure and content. However, when the information is uniform, relational databases can be very efficient because there is not a lot of repetition, and you can easily access exactly the information you need with SQL, when you need it. With an RDBMS like MySQL, you can write queries to do everything from basic CRUD commands (create - read - update - delete/destroy) to refining searches and aggregating information. Using these queries and commands not only allows you to determine what information is displayed in your application, but it can also help with marketing decisions, business development, advertising, and much more. One correction—when I gave the Instagram example, I said that you would have a table for users, and then a table for “that user's photos” and so on. I meant to say you would have a table for ALL photos—it would not be a table for each user. You then link a user id to the photos table to show which user that photo belongs to. Sorry I misspoke!
Show Links:
- Most popular baby names by US state
- The Ultimate MySQL Bootcamp
- W3 MySQL Exercises
- SQLZoo
- US Government Data
- UK Government Data
- Cloud9 web environment to practice

This episode was originally published 16 January, 2018.
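The Instagram correction above — one photos table for ALL users, linked back by a user id — can be sketched with Python's built-in sqlite3 module. Table and column names here are invented for illustration, but the pattern (foreign key, JOIN, and the CRUD commands the episode mentions) is the standard relational one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table for ALL users and one table for ALL photos; each photo row
# points back at its owner through the user_id foreign key.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE photos (
                   id INTEGER PRIMARY KEY,
                   user_id INTEGER REFERENCES users(id),
                   caption TEXT)""")

# CRUD: Create
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
user_id = cur.lastrowid
cur.execute("INSERT INTO photos (user_id, caption) VALUES (?, ?)", (user_id, "sunset"))
cur.execute("INSERT INTO photos (user_id, caption) VALUES (?, ?)", (user_id, "coffee"))

# CRUD: Read — JOIN the two tables to pair each photo with its owner's name
rows = cur.execute("""SELECT users.name, photos.caption
                      FROM photos JOIN users ON photos.user_id = users.id""").fetchall()
print(rows)  # [('alice', 'sunset'), ('alice', 'coffee')]

# CRUD: Update and Delete round out the four commands
cur.execute("UPDATE photos SET caption = 'latte' WHERE caption = 'coffee'")
cur.execute("DELETE FROM photos WHERE caption = 'sunset'")
conn.commit()
```

The same SQL would run, with minor dialect differences, on MySQL, PostgreSQL, or MariaDB; sqlite3 just makes it easy to try in a REPL with no server to install.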

Trino Community Broadcast
7: Cost Based Optimizer, Decorrelate subqueries, and does Presto make my RDBMS faster?

Dec 21, 2020 · 71:55


Table of Contents:
- Intro Song: 00:00
- Intro: 00:20
- News: 3:27
- Concept of the week: Cost Based Optimizer 16:48
- PR of the week: PR 1415 Decorrelate subqueries with Limit or TopN 43:09
- PR Demo: EXPLAIN Decorrelate subqueries with Limit or TopN 53:36
- Question of the week: Will running Presto on my relational database make processing faster? 1:02:24

Show Notes: https://trino.io/episodes/7.html
Show Page: https://trino.io/broadcast/

Arquitetando
Arquitete FACILMENTE com CACHES e REDIS | Redis o que é | Redis para que serve

Aug 5, 2020 · 17:42


Designing extremely high-performance architectural solutions generally requires the use of caches. In today's content we talk about caches, in particular Redis, also covering the differences between relational and NoSQL databases. We also illustrate some architectural suggestions that make use of caches.

-----------------

Redis is an open-source (BSD-licensed) in-memory data structure store, used as a database, cache, and message broker. Besides being easy to use, it supports several types of structures that let a developer meet the vast majority of data needs a problem may require. It stores information in a key-value style and supports complex value types, which makes the technology usable in many kinds of cases. In addition, Redis has strategies for keeping data both in memory and on disk, guaranteeing fast responses and data persistence.

The most commonly used value types are: String, List, HashMap, and Set. Note that the ability to use a HashMap means that practically any serializable data object can be stored in Redis. The other supported data types are equally useful and can be used in more specific or simpler cases.

Every value is accompanied by a key, which is used to retrieve the stored values and makes it possible to configure expiration rules, so that Redis works as a kind of cache for that data. Redis does not have the concept of schemas like other databases, so it is necessary to define keys in a way that allows a logical separation of each of the stored data types.

Come be a VIP at ArcH. Follow me on my new Telegram channel: https://t.me/pisanidaarch

---

Cross-technology content; it can be applied to Java, Rust, .NET, C#, PHP, Node.js, JavaScript, Go, etc. ArcH is a digital content producer that helps thousands of professionals every month become experts in systems architecture. Some of the topics we cover: architectural approaches, design patterns, architecture patterns, and technology with efficiency, agility, and quality, all to contribute to the professional development of the community of Solution, Software, and Systems Architects in Brazil.

Learn more about ArcH:
▶ https://archoffice.tech

---

CONTACT:
▶ WhatsApp: (11) 9.9696-8533
▶ E-mail: pisani@archoffice.tech
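The key-value-with-expiration pattern described above (what Redis exposes as `SET key value EX seconds`) can be sketched without a Redis server. The class below is an illustrative, dependency-free stand-in, not the Redis client API; names like `TTLStore` and the session key are invented for the example:

```python
import time

class TTLStore:
    """Tiny in-memory key-value store with per-key expiry,
    mimicking Redis's SET ... EX behaviour (illustration only)."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ex=None):
        # ex is the time-to-live in seconds; None means the key never expires
        expires_at = time.monotonic() + ex if ex is not None else None
        self._data[key] = (value, expires_at)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._data[key]  # lazily evict the expired key on read
            return None
        return value

store = TTLStore()
store.set("session:42", "ada", ex=0.05)  # expires in 50 ms
print(store.get("session:42"))           # 'ada'
time.sleep(0.06)
print(store.get("session:42"))           # None — the key has expired
```

With a real Redis and a client library, the equivalent calls would set the value with an expiry and read it back; the point here is only the cache semantics: a hit while the TTL is alive, a miss (forcing a trip to the slower backing database) once it lapses.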

Arch-In-Minutes
Arquitete FACILMENTE com CACHES e REDIS | Redis o que é | Redis para que serve

Aug 5, 2020 · 17:42



Bit Jet Kit
It's Just SQL Server on Azure, Right?

Jul 19, 2020 · 5:11


On this episode, players can learn about the leading-edge Microsoft AI Server technology, Azure, used for Microsoft's enterprise Relational Database Management System, SQL Server. However, the social issues surrounding big business are specifically law officers enforcing bureaucracy. This is unsustainable, so I recommend the big RDBMS's nemesis, Android Pie's SQLite database. Further, before Android 10 is selected, Android Pie should be selected for its predictive capabilities, so as to protect against a Minority Report society. Image by earvine95 on Pixabay. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/bitjetkit/support

DBA Genesis Audio Experience
Is Oracle RDBMS losing edge over NoSQL | #dailyDBA 37

DBA Genesis Audio Experience

Play Episode Listen Later Mar 13, 2020 17:06


#dbaChallenge: How to handle interview rejection? - Questions Picked-up For This Episode: ============================ How can I set up the primary database so that if I delete rows on primary, the rows must not be deleted on standby? How can I generate a report of every SQL statement executed by every session? Is Oracle RDBMS losing edge over NoSQL? How can I reduce the Concurrency wait event? Can we create a PDB from an existing PDB via DBLINK? Bonus Question: How to handle interview rejection? - #dailyDBA #dbaGenesis - Your comments encourage us to produce quality content, please take a second and say ‘Hi’ in the comments and let me and my team know what you thought of the video … p.s. It would mean the world to me if you hit the subscribe button ;) - Link to full course: https://dbagenesis.com/p/oracle-virtualbox-administration Link to all DBA courses: https://dbagenesis.com/courses Link to real-time projects: https://dbagenesis.com/p/projects Link to support articles: https://support.dbagenesis.com - DBA Genesis provides all you need to build and manage effective Oracle technology learning. We designed DBA Genesis as a simple to use yet powerful online Oracle learning system for students. Each of our courses is taught by an expert instructor, and every course is available with a challenging project to push you out of your comfort zone!! DBA Genesis is currently the fastest & most engaging learning platform for DBAs across the globe. Take your database administration skills to the next level by enrolling in your first course. - Facebook: https://www.facebook.com/dbagenesis/ Instagram: https://www.instagram.com/dbagenesis/ Twitter: https://twitter.com/DbaGenesis Website: https://dbagenesis.com/ Contact us: support@dbagenesis.com - Start your DBA Journey Today !! Become an exclusive DBA Genesis member: https://www.youtube.com/channel/UCUHHRiLeH7sO46GJBRGweag/join

Voice over Work
SQL Quick Start Guide, Chapter by Chapter

Voice over Work

Play Episode Listen Later Feb 5, 2020 20:09


The ultimate beginner's guide to learning SQL - from retrieving data to creating databases! Structured query language, or SQL (pronounced "sequel" by many), is the most widely used programming language in database management and is the standard language for relational database management systems (RDBMS). SQL programming allows users to return, analyze, create, manage, and delete data within a database - all within a few commands. With more industries and organizations looking to the power of data, the need for an efficient, scalable solution for data management is required. More often than not, organizations implement a relational database management system in one form or another. These systems create long-term data "warehouses" that can be easily accessed to return and analyze results, such as, "Show me all of the clients from Canada that have purchased more than $20,000 in the last three years." This "query", which would have taken an extensive amount of hands-on research to complete prior to the use of a database, can now be determined in seconds by executing a simple "SELECT SQL" statement on a database. SQL can seem daunting to those with little to zero programming knowledge and can even pose a challenge to those who have experience with other languages. Most resources jump right into the technical jargon and are not suited for someone to really grasp how SQL actually works. That's why we created this book. Our goal here is simple: to show you exactly everything you need to know to utilize SQL in whatever capacity you may need in simple, easy-to-follow concepts. Our book provides multiple step-by-step examples of how to master these SQL concepts to ensure you know what you're doing and why you're doing it every step of the way. PLEASE NOTE: When you purchase this title, the accompanying reference material will be available in your Library section along with the audio. 
©2015 ClydeBank Media LLC (P)2015 ClydeBank Media LLC #sql #databasedatamanagement #programming #databases #russellnewton #newtonmediagroup --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/voiceoverwork/message
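The example query quoted in the description ("clients from Canada that have purchased more than $20,000 in the last three years") can be sketched against SQLite's in-memory engine; the table layout, column names, and sample rows are illustrative assumptions:

```python
import sqlite3

# Build a tiny illustrative clients table in memory.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE clients (
        name TEXT, country TEXT,
        total_purchases REAL, last_purchase_year INTEGER
    )
""")
conn.executemany(
    "INSERT INTO clients VALUES (?, ?, ?, ?)",
    [("Acme", "Canada", 35000, 2024),
     ("Bolt", "Canada", 12000, 2023),
     ("Cora", "France", 50000, 2024)],
)

# The SELECT statement that answers the question in seconds.
rows = conn.execute("""
    SELECT name FROM clients
    WHERE country = 'Canada'
      AND total_purchases > 20000
      AND last_purchase_year >= 2022
""").fetchall()
print(rows)  # [('Acme',)]
```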

Streaming Audio: a Confluent podcast about Apache Kafka
Distributed Systems Engineering with Apache Kafka ft. Jun Rao

Streaming Audio: a Confluent podcast about Apache Kafka

Play Episode Listen Later Feb 5, 2020 54:59


Jun Rao (Co-founder, Confluent) explains what relational databases and distributed databases are, how they work, and major differences between the two. He also delves into important lessons he’s learned along the way through the transition from the relational world to the distributed world. To be successful at a place like Confluent, he outlines three fundamental traits that a distributed systems engineer must possess, emphasizing the importance of curiosity and knowledge, care in code development, and being open-minded and collaborative. You may even find that sometimes, the people with the best answers to your problems aren't even at your company! Originally from China, Jun moved to the U.S. for his Ph.D. and eventually landed in IBM research labs. He worked there for over 10 years before moving to LinkedIn, where Apache Kafka® was initially being developed and implemented. EPISODE LINKS
Get 30% off Kafka Summit London registration with the code KSL20Audio
Join the Confluent Community Slack

Voice over Work
Everything You Need to Know to Utilize SQL

Voice over Work

Play Episode Listen Later Feb 2, 2020 5:06


The ultimate beginner's guide to learning SQL - from retrieving data to creating databases! Structured query language, or SQL (pronounced "sequel" by many), is the most widely used programming language in database management and is the standard language for relational database management systems (RDBMS). SQL programming allows users to return, analyze, create, manage, and delete data within a database - all within a few commands. With more industries and organizations looking to the power of data, the need for an efficient, scalable solution for data management is required. More often than not, organizations implement a relational database management system in one form or another. These systems create long-term data "warehouses" that can be easily accessed to return and analyze results, such as, "Show me all of the clients from Canada that have purchased more than $20,000 in the last three years." This "query", which would have taken an extensive amount of hands-on research to complete prior to the use of a database, can now be determined in seconds by executing a simple "SELECT SQL" statement on a database. SQL can seem daunting to those with little to zero programming knowledge and can even pose a challenge to those who have experience with other languages. Most resources jump right into the technical jargon and are not suited for someone to really grasp how SQL actually works. That's why we created this book. Our goal here is simple: to show you exactly everything you need to know to utilize SQL in whatever capacity you may need in simple, easy-to-follow concepts. Our book provides multiple step-by-step examples of how to master these SQL concepts to ensure you know what you're doing and why you're doing it every step of the way. PLEASE NOTE: When you purchase this title, the accompanying reference material will be available in your Library section along with the audio. 
©2015 ClydeBank Media LLC (P)2015 ClydeBank Media LLC Blog #sql #databasedatamanagement #programming #databases #russellnewton #newtonmediagroup --- Send in a voice message: https://anchor.fm/voiceoverwork/message

DevOps Chat
Software Architecture for Cloud Native, .NET Core, & Open Source, Donald Lutz

DevOps Chat

Play Episode Listen Later Nov 19, 2019 23:16


In this episode of DevOps Chats we talk with Donald Lutz, Principal Software Architect, specializing in systems integration and creating large, scalable cloud applications. Occasionally DevOps Chats is fortunate to spotlight DevOps and cloud native developers doing trailblazing work in contemporary software architectures. Donald fits that bill to a "T," as an entrepreneur and employee at startups like Faction, BoldTech Systems, and his own company Technetronic Solutions, and established companies including ViaWest. Our discussion focuses on creating cloud native applications in startups and large enterprise IT. Donald's currently working with one of the world's largest financial institutions to move from legacy applications directly to cloud native apps, bypassing any interim lift-and-shift moves. His work spans many Microsoft technologies, including .NET Core, the Dapper micro-ORM for RDBMS mapping, Service Fabric, and Azure, plus open source Kubernetes, Terraform, and Puppet (including Puppet Enterprise). During our discussion, we cover the challenges of architecting and scaling very large cloud native applications, implementing DevOps in less mature software organizations, how established software patterns benefit DevOps and cloud app developers, and the importance of giving back by hosting meetups to share knowledge and mentor others. Donald also gives back as an active leader and mentor of FIRST Robotics Team 1410, which he has worked with since 2005. Join in on our conversation as we explore the complex and sophisticated inner workings of a cloud native software architect.

Arch-In-Minutes
What an RDBMS is, in one minute

Arch-In-Minutes

Play Episode Listen Later Nov 18, 2019 1:06


In this short video we explain what an RDBMS is, in one minute. Watch the full video: https://youtu.be/CWAbwN8eOng--ArcHOffice is an educational content producer whose goal is to explore the world of IT architecture and help architects use architectural approaches, design patterns, architecture patterns, and technology with efficiency, agility, and quality. Learn more about ArcH:▶ https://archoffice.tech

Streaming Audio: a Confluent podcast about Apache Kafka
Data Modeling for Apache Kafka – Streams, Topics & More with Dani Traphagen

Streaming Audio: a Confluent podcast about Apache Kafka

Play Episode Listen Later Oct 7, 2019 40:25


Helping users be successful when it comes to using Apache Kafka® is a large part of Dani Traphagen’s role as a senior systems engineer at Confluent. Whether she’s advising companies on implementing parts of Kafka or rebuilding their systems entirely from the ground up, Dani is passionate about event-driven architecture and the way streaming data provides real-time insights on business activity. She explains the concepts of a stream, topic, key, and stream-table duality, and how each of these pieces relates to the others. When it comes to data modeling, Dani covers important business requirements, including the need for a domain model, practicing domain-driven design principles, and bounded context. She also discusses the attributes of data modeling: time, source, key, header, metadata, and payload, in addition to exploring the significance of data governance and lineage and performing joins.
EPISODE LINKS
Convert from table to stream and stream to table
Distributed, Real-Time Joins and Aggregations on User Activity Events Using Kafka Streams
KSQL in Action: Real-Time Streaming ETL from Oracle Transactional Data
KSQL in Action: Enriching CSV Events with Data from RDBMS into AWS
Journey to Event Driven – Part 4: Four Pillars of Event Streaming Microservices
Join the Confluent Community Slack
Fully managed Apache Kafka as a service! Try free.
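The stream-table duality mentioned above can be sketched in a few lines of plain Python (not the Kafka Streams API): a table is the latest value per key folded from a stream of events, and a stream is the changelog that can rebuild the table.

```python
# Pure-Python sketch of stream-table duality (not the Kafka Streams API):
# a table is the latest value per key folded from a stream of events,
# and a stream is the changelog that can rebuild the table.

stream = [("alice", 1), ("bob", 5), ("alice", 3)]  # (key, value) events

def to_table(events):
    table = {}
    for key, value in events:  # later events overwrite earlier ones
        table[key] = value
    return table

def to_stream(table):
    return list(table.items())  # the table's changelog

table = to_table(stream)
print(table)                                 # {'alice': 3, 'bob': 5}
print(to_table(to_stream(table)) == table)   # True: the round trip holds
```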

The InfoQ Podcast
Event Sourcing: Bernd Rücker on Architecting for Scale

The InfoQ Podcast

Play Episode Listen Later Sep 13, 2019 25:07


Today on the podcast, Bernd Rücker of Camunda talks about event sourcing. In particular, Wes and Bernd discuss thoughts around scalability, events, commands, consensus, and the orchestration engines Camunda implemented. This podcast is a primer on considerations between an RDBMS and event-driven systems. Why listen to this podcast: - An event-driven system is a more modern approach to building highly scalable systems. - An RDBMS can limit throughput and scalability; Camunda was able to achieve higher levels of scale by implementing an event-driven system. - Commands and events are often confused. Commands are actions that request something to happen. Events describe something that happened. Mixing the two up leads to muddled designs in event-driven systems.
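The command/event distinction can be sketched as follows (hypothetical names, not Camunda's API): a command requests a change and may be rejected, while an event records a change that already happened.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical names, not Camunda's API: a sketch of the distinction
# between a command (requests a change, may be rejected) and an
# event (records a change that already happened).

@dataclass(frozen=True)
class ReserveSeat:        # command
    booking_id: str
    seat: str

@dataclass(frozen=True)
class SeatReserved:       # event
    booking_id: str
    seat: str
    at: str

def handle(command, reserved_seats):
    """Decide on a command; emit an event only when it succeeds."""
    if command.seat in reserved_seats:
        return None  # rejected: no event is ever produced
    reserved_seats.add(command.seat)
    return SeatReserved(command.booking_id, command.seat,
                        datetime.now(timezone.utc).isoformat())

reserved = set()
print(handle(ReserveSeat("b1", "12A"), reserved))  # SeatReserved(...)
print(handle(ReserveSeat("b2", "12A"), reserved))  # None: seat taken
```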

Roaring Elephant
Episode 146 – Roaring News

Roaring Elephant

Play Episode Listen Later Jun 25, 2019 36:21


A new function is being called into being by Forrester called the "Data Hunter", which sounded interesting enough to us to spend some time on. Then we cover a nice guest blog on the Cloudera site, and we finish off with some rambling on the changes in the HPC world. Enjoy! Loincloths and spears at the ready: the Data Hunter is born! Dave found a small article on the Forrester site that points to a paid webinar about Data Hunting. Now, we did not pony up the $300 they charge for the webinar, but we found the concept quite compelling and looked at the three "audience questions" that were included in the article. The "Small File Problem" and a little "You're Doing it Wrong"...? This guest blog on the Cloudera web site actually has some practical information that can be useful when you need to consolidate your incremental upload files to reduce the number of files your Hive queries need to traverse. The additional complexity here was that this had to happen on a live production environment without service interruption, keeping all data available and sane. We do however need to remark that the author of this article was making life quite difficult for himself, since his "data estate" really does not seem to justify the use of any kind of Big Data technologies. We fully agree with his own summary, where he states that using a standard RDBMS would most likely be a better solution... Should "HPC" now be spelled "HPE"? With the Enterprise branch of HP gobbling up Cray, after doing the same with whatever remained of Silicon Graphics back in 2016, they now represent a large percentage of what could be considered "traditional HPC". Of course, IBM is still in there too, but not many of the old supercomputer firms are still around. And the whole HPC world is undergoing a major redesign towards GPUs (and to a lesser extent FPGAs), so it does make sense that the ecosystem is changing. And that's all we have for this episode. See you next week! 
Don't forget to subscribe to our YouTube channel and consider becoming a Patreon and support your favorite podcast! Please use the Contact Form on this blog or our twitter feed to send us your questions, or to suggest future episode topics you would like us to cover.

Informatics Overload
Techniques for validation and testing of RDBMS functions

Informatics Overload

Play Episode Listen Later Feb 3, 2019 7:14


In this episode I discuss Unit 3 Outcome 1 Key Knowledge points 11, 12, and 13; U3O1 KK 11 – functions and techniques within an RDBMS to efficiently and effectively validate and manipulate data U3O1 KK 12 - functions and techniques to retrieve required information through searching, sorting, filtering and querying data sets U3O1 KK 13 - methods and techniques for testing that solutions perform as intended

Arumugam's Podcast
NOSQL Vs RDBMS (Open Source Series) - Episode 3 (Tamil)

Arumugam's Podcast

Play Episode Listen Later Jan 15, 2019 5:51


NOSQL Vs RDBMS (Open Source Series) - Episode 3 (Tamil) by Arumugam

Comunidad CODE
A journey through CosmosDB

Comunidad CODE

Play Episode Listen Later Jan 13, 2019 55:55


Speaker: Leonardo Micheloni. Can you imagine a database that makes it possible for the data to be wherever the client is? One that, moreover, was designed from the start for distributed, high-scale environments? And what if this database also supports different models and different APIs? All this and much more is what CosmosDB gives us. CosmosDB is a NoSQL database designed for high scale and flexibility, allowing us to select our preferred API for interacting with the data and the consistency level we need, and to change it on any query. In this talk we review the main features of CosmosDB and the possibilities it offers, in which cases this kind of service is better than an RDBMS and in which it is not, and finally we run a demo showing compatibility with the MongoDB API. https://comunidadcode.com/2018-05-17-cosmosdb/

Informatics Overload
Naming Conventions

Informatics Overload

Play Episode Listen Later Dec 29, 2018 1:39


In this episode I discuss Unit 3 Outcome 1 Key Knowledge point 6, naming conventions to support efficient use and maintenance of an RDBMS

Informatics Overload
Flat file VS RDBMS

Informatics Overload

Play Episode Listen Later Dec 29, 2018 5:26


In this episode I discuss Unit 3 Outcome 1 Key Knowledge points 5 AND 7, purposes and structure of an RDBMS, including comparison with flat file databases, AND a methodology for creating an RDBMS structure: identifying entities, defining tables and fields to represent entities; defining relationships by identifying primary key and foreign key fields; defining data types and field sizes; normalisation to third level.

SDCast
SDCast #92: guest Ilya Kosmodemyansky, co-founder and CEO of Data Egret

SDCast

Play Episode Listen Later Nov 6, 2018 89:46


Welcome to episode 92 of SDCast! My guest is Ilya Kosmodemyansky, co-founder and CEO of Data Egret. Ilya is an active member of the PostgreSQL community; he regularly speaks at conferences, both with quite deep technical talks on PostgreSQL internals and with talks on soft skills and on gaining experience and knowledge in the database field. In this episode we discuss the role of databases in IT systems in general, and how software development and the requirements placed on databases have changed with growing load, greater computing power, the evolution of development processes, and the emergence of architectures such as microservices. We debate the influence that newer (and not-so-new) trends such as NoSQL and NewSQL have had on classic RDBMSs. We discuss how various databases, and PostgreSQL in particular, are adapting to new requirements for reliability, scalability, and fault tolerance. We look at PostgreSQL in retrospect, recalling what it was like 10-15 years ago, when MySQL was riding high, and where PostgreSQL stands today, how its development process has changed, and what is happening in the project now. We also devote time to the topic of knowledge: how to become a database expert, what foundational knowledge is needed, and how and where to gain experience. Ilya tells the story of his path into IT and shares practical tips and recommendations.

DataSnax Podcast
Active Everywhere Built For Hybrid Cloud | DataStax Enterprise

DataSnax Podcast

Play Episode Listen Later Oct 29, 2018 11:54


Relational database management systems (RDBMS) have served many enterprises for over 4 decades.  Times are certainly changing and the way data is accessed and collected is experiencing exponential growth.  In this podcast, Amit Chaudhry, VP of Product Marketing at DataStax, talks about the importance for having an Active Everywhere database architecture that only DataStax Enterprise can deliver.

Plumbers of Data Science
#039 Is ETL Dead For Data Science and Big Data?

Plumbers of Data Science

Play Episode Listen Later Oct 3, 2018 28:47


Is ETL dead in Data Science and Big Data? In today's podcast I share with you my views on your questions regarding ETL (extract, transform, load):
Data lakes & data warehouses: where is the difference?
Is ETL still practiced, or did pre-processing & cleansing replace it?
What would replace ETL in data engineering?
How to become a data engineer? (check out my Facebook note)
How to gain experience training at home?
Real-time analytics with RDBMS or HDFS?
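A minimal sketch of the extract-transform-load pattern under discussion (the data and "warehouse" are illustrative; real pipelines pull from files, queues, or databases and load into a warehouse or lake):

```python
# Minimal sketch of the extract-transform-load pattern.
# The data and "warehouse" are illustrative stand-ins.

raw_rows = [
    {"name": " Alice ", "amount": "120.50"},
    {"name": "Bob",     "amount": "80.00"},
    {"name": "",        "amount": "15.25"},  # dirty row, dropped below
]

def extract():
    return raw_rows  # stand-in for reading from a source system

def transform(rows):
    # Cleansing/pre-processing: drop nameless rows, normalise types.
    return [
        {"name": r["name"].strip(), "amount": float(r["amount"])}
        for r in rows if r["name"].strip()
    ]

def load(rows, warehouse):
    warehouse.extend(rows)  # stand-in for the warehouse write

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)  # two clean rows: Alice 120.5 and Bob 80.0
```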

The Polyglot Developer Podcast
TPDP022: NoSQL Databases and the Flexibility of a Non-Relational Model

The Polyglot Developer Podcast

Play Episode Listen Later Oct 2, 2018 45:18


In this episode I'm joined by Matt Groves, Senior Developer Advocate at the NoSQL database company, Couchbase. The focus of this episode is to become familiar with NoSQL and where it makes sense to use it in your projects, both new and old. Matt and I explore numerous NoSQL database technologies, including graph, document, key-value, and columnar stores, and look at the possible advantages they bring over the RDBMS alternative. I know Matt Groves from my time working with him at Couchbase. While Couchbase will be mentioned in the episode, it is by no means the focus. A brief writeup of this episode can be found via https://www.thepolyglotdeveloper.com/2018/10/tpdp-e22-nosql-databases-flexibility-non-relational-model/
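The flexibility of the document model can be sketched with SQLite standing in for a document store (the records are illustrative; a document database such as Couchbase or MongoDB stores JSON natively):

```python
import json
import sqlite3

# Sketch of the document model's flexibility: records with different
# shapes coexist in one collection. SQLite storing JSON text stands in
# for a document database here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")

# Two documents with different fields: no ALTER TABLE required.
for doc in ({"name": "Ada", "languages": ["sql", "go"]},
            {"name": "Lin", "city": "Oslo"}):
    conn.execute("INSERT INTO docs (body) VALUES (?)", (json.dumps(doc),))

names = [json.loads(body)["name"]
         for (body,) in conn.execute("SELECT body FROM docs ORDER BY id")]
print(names)  # ['Ada', 'Lin']
```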

Develpreneur: Become a Better Developer and Entrepreneur
Databases Overview - Laying The Groundwork

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Apr 23, 2018 25:58


In this episode, we continue the series of overview discussions.  This time around we cover a databases overview.  We looked at SDLC in the prior installment of this series and will discuss databases in a two-parter.  This first part includes definitions and high-level summaries of database engines, types, and terms. Databases Overview - The Engines To the untrained observer, a database is a place to store data.  This definition works well if you do not need to go any deeper.  Unfortunately, we do, and one of the critical distinctions among databases is how they store that data.  This is not made any easier by the abbreviations used in talking about them.  One of the more significant parts of this episode covers RDBMS, OODBMS, ISAM, and other major engines, as well as what makes them different.  This is an essential first step in getting us comfortable with the world of data storage. The Moving Parts The other primary focus is for those who want to understand careers in database technology or advance theirs.  We go over some of the critical facets of working in a database as an administrator or developer.  This coverage includes things like SQL, stored procedures, functions, and triggers.  But that's not all.  There are tiers of sorts of database skills, and we look at how these pieces sort out.  Pun intended.  This does not go as far as being a tutorial in these areas, but it should make you much more comfortable in your understanding of them.
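Of the moving parts listed above, a trigger is easy to demonstrate with SQLite (which supports triggers, though not stored procedures); the table and column names are illustrative:

```python
import sqlite3

# A trigger fires automatically in response to table changes —
# here, logging every balance update to an audit table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE audit (account_id INTEGER, old_balance REAL, new_balance REAL);

    -- fires automatically whenever a balance changes
    CREATE TRIGGER log_balance_change
    AFTER UPDATE OF balance ON accounts
    BEGIN
        INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
""")
conn.execute("INSERT INTO accounts VALUES (1, 100.0)")
conn.execute("UPDATE accounts SET balance = 75.0 WHERE id = 1")
print(conn.execute("SELECT * FROM audit").fetchall())  # [(1, 100.0, 75.0)]
```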

Secrets of Data Analytics Leaders
James Serra: Myths of Modern Data Management

Secrets of Data Analytics Leaders

Play Episode Listen Later Apr 1, 2018 31:12


In this podcast, Wayne Eckerson and James Serra discuss myths of modern data management. Some of the myths discussed include 'all you need is a data lake', 'the data warehouse is dead', 'we don’t need OLAP cubes anymore', 'cloud is too expensive and latency is too slow', and 'you should always use a NoSQL product over an RDBMS.' Serra is a big data and data warehousing solutions architect at Microsoft with over thirty years of IT experience. He is a popular blogger and speaker and has presented at dozens of Microsoft PASS and other events. Prior to Microsoft, Serra was an independent data warehousing and business intelligence architect and developer.

Changelog Master Feed
CockroachDB and distributed databases in Go (Go Time #73)

Changelog Master Feed

Play Episode Listen Later Mar 23, 2018 64:28 Transcription Available


Andrei Matei joined the show and talked with us about CockroachDB (and why it’s easier to use than any RDBMS), distributed databases with Go, tracing, and other interesting projects and news.

Go Time
CockroachDB and distributed databases in Go

Go Time

Play Episode Listen Later Mar 23, 2018 64:28 Transcription Available


Andrei Matei joined the show and talked with us about CockroachDB (and why it’s easier to use than any RDBMS), distributed databases with Go, tracing, and other interesting projects and news.

IGeometry
Episode 21 - RDBMS

IGeometry

Play Episode Listen Later Jan 25, 2018 12:30


We discuss relational databases. Their properties and scalability. --- Send in a voice message: https://anchor.fm/hnasr/message

AWS re:Invent 2017
DAT320: Moving a Galaxy into the Cloud: Best Practices from Samsung on Migrating to Amazon DynamoDB

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 42:25


In this session, we introduce you to the best practices for migrating databases, such as traditional RDBMS or other NoSQL databases to Amazon DynamoDB. We discuss DynamoDB key concepts, evaluation criteria, data modeling in DynamoDB, how to move data into DynamoDB, and data migration key considerations. We share a case study of Samsung Electronics, which migrated their Cassandra cluster to DynamoDB for their Samsung Cloud workload.

All Ruby Podcasts by Devchat.tv
RR 314 DynamoDB on Rails with Chandan Jhunjhunwal

All Ruby Podcasts by Devchat.tv

Play Episode Listen Later Jun 13, 2017 46:47


RR 314 DynamoDB on Rails with Chandan Jhunjhunwal Today's Ruby Rogues podcast features DynamoDB on Rails with Chandan Jhunjhunwal. DynamoDB is a NoSQL database that helps your team avoid managing infrastructure issues like setup, costing, and maintenance. Take some time to listen and learn more about DynamoDB! [00:02:18] – Introduction to Chandan Jhunjhunwal Chandan Jhunjhunwal is the owner of Faodail Technology, which is currently helping many startups with their web and mobile applications. He started at IBM, designing and building scalable mobile and web applications. He mainly worked on C++ and DB2 and, later on, worked primarily on Ruby on Rails. Questions for Chandan [00:04:05] – Introduction to DynamoDB on Rails I would say that the majority of developers work in PostgreSQL, MySQL or another relational database. On the other hand, Ruby on Rails is picked by many startups or founders for actually implementing their ideas and bringing them to scalable products. I would say that more than 80% of developers are mostly working on RDBMS databases. For the remaining 20%, their applications need to capture large amounts of data, so they go with NoSQL. In NoSQL, there are plenty of options like MongoDB, Cassandra, or DynamoDB. When using AWS, there's no provided MongoDB. With Cassandra, it requires a lot of infrastructure setup and cost, and you'll have to have a team maintaining it on a day-to-day basis. DynamoDB takes all of that pain away from your team, and you no longer have to focus on managing the infrastructure. [00:07:35] – Is it a good idea to start with a regular SQL database and then switch to a NoSQL database, or is it better to start with a NoSQL database from day one? It depends on a couple of factors. Many applications start with an RDBMS because they just want to get started, and probably switch later to something like NoSQL. First, you have to watch the incoming data and their capacity. 
Second is familiarity, because most developers are more familiar with RDBMS and SQL queries. For example, you have a feed application, or a messaging application, where you know that there will be a lot of chat happening and you'd expect to take on a huge number of users. You could accommodate that in an RDBMS, but I would probably not recommend it. [00:09:30] – Can I use DynamoDB as a caching mechanism or cache store? I would not say a replacement, exactly. On those segments where I could see that there's a lot of activity happening, I plugged in DynamoDB. The remaining part of the application was handled by the RDBMS. In many applications, what I've seen is that they have used a combination of the two. [00:13:05] – How do you decide if you actually want to use DynamoDB for all the data in your system? The place where we say that this application is going to be picked from day one is where the amount of incoming data will keep increasing. It also depends on whether the development team you have is familiar with DynamoDB, or any other NoSQL databases. [00:14:50] – Is DynamoDB a document store, or does it have columns? You can say key-value pairs or document stores. The terminologies are just different, as is the way you design the database. In DynamoDB, you have something like a hash key and a range key. [00:22:10] – Why don't we store images in the database? I would say that there are better places to store them, which are faster and cheaper. There is better storage, like a CDN or S3. Another good reason is that if you want to fetch a properly sized image based on the user's device screen, resizing and all of that stuff inside the database could be cumbersome. You'd end up adding different columns for storing those different sizes of images. [00:24:40] – Is there a potentially good reason for a NoSQL database as your default go-to data store? If you have some data which is completely unstructured, trying to store it in an RDBMS will be a pain. 
If we talk about the kind of media which gets generated in our day-to-day lives, trying to model it in a relational database will be pretty painful, and eventually there will be a time when you don't know how to create correlations. [00:28:30] – Horizontally scalable versus vertically scalable In vertically scalable, when someone posts, we keep adding that to the same table. As we add data to the table, the database size increases (the number of rows increases). But in horizontally scalable, we keep different boxes connected via Hadoop or Elastic MapReduce, which will process the added data. [00:30:20] – What does it take to hook up a DynamoDB instance to a Rails app? We could integrate DynamoDB by using the SDK provided by AWS. I provided steps which I've outlined in the blog - how to create different kinds of tables, how to create those indexes, how to create the throughput, etc. We could configure the AWS SDK, add the required credentials, then we could create different kinds of tables. [00:33:00] – In terms of scaling, what is the limit for something like PostgreSQL or MySQL, versus DynamoDB? There's no scalability limit in DynamoDB, or any other NoSQL solution. Picks
David Kimura: CorgUI
Jason Swett: Database Design for Mere Mortals
Charles Max Wood: VMWare Workstation, GoCD, Ruby Rogues Parley, Ruby Dev Summit
Chandan Jhunjhunwal: Twitter @ChandanJ, chandan@faodailtechnology.com
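The hash-key/range-key addressing Chandan describes can be sketched in plain Python (a stand-in for the concept, not the AWS SDK):

```python
from collections import defaultdict

# Pure-Python sketch of DynamoDB's hash-key + range-key addressing.
# The hash key picks a partition; within it, items are ordered by
# the range key. Key and item values below are illustrative.
table = defaultdict(dict)  # hash_key -> {range_key: item}

def put_item(hash_key, range_key, item):
    table[hash_key][range_key] = item

def query(hash_key):
    """All items under one hash key, sorted by range key."""
    partition = table[hash_key]
    return [partition[k] for k in sorted(partition)]

put_item("user#1", "2017-06-01T09:00", {"msg": "hello"})
put_item("user#1", "2017-06-03T10:30", {"msg": "again"})
put_item("user#2", "2017-06-02T12:00", {"msg": "hi"})
print(query("user#1"))  # both user#1 messages, in timestamp order
```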

Devchat.tv Master Feed
RR 314 DynamoDB on Rails with Chandan Jhunjhunwal

Devchat.tv Master Feed

Play Episode Listen Later Jun 13, 2017 46:47


RR 314 DynamoDB on Rails with Chandan Jhunjhunwal Today's Ruby Rogues podcast features DynamoDB on Rails with Chandan Jhunjhunwal. DynamoDB is a NoSQL database that spares your team infrastructure headaches like setup, cost, and maintenance. Take some time to listen and learn more about DynamoDB!
[00:02:18] – Introduction to Chandan Jhunjhunwal Chandan Jhunjhunwal is the owner of Faodail Technology, which currently helps many startups with their web and mobile applications. He started at IBM, designing and building scalable mobile and web applications. He mainly worked on C++ and DB2, and later worked primarily on Ruby on Rails.
Questions for Chandan
[00:04:05] – Introduction to DynamoDB on Rails I would say that the majority of developers work with PostgreSQL, MySQL, or another relational database. On the other hand, Ruby on Rails is picked up by many startups and founders for actually implementing their ideas and bringing them to scalable products. I would say that more than 80% of developers mostly work on RDBMS databases. For the remaining 20%, their applications need to capture large amounts of data, so they go with NoSQL. In NoSQL, there are plenty of options like MongoDB, Cassandra, or DynamoDB. When using AWS, there's no managed MongoDB. Cassandra requires a lot of infrastructure setup and cost, and you have to have a team maintaining it day to day. DynamoDB takes all that pain away from your team, and you no longer have to focus on managing the infrastructure.
[00:07:35] – Is it a good idea to start with a regular SQL database and then switch to a NoSQL database, or is it better to start with a NoSQL database from day one? It depends on a couple of factors. Many applications start with an RDBMS because they just want to get some traction, and probably switch to something like NoSQL later. First, you have to watch the incoming data and its volume. Second is familiarity, because most developers are more familiar with RDBMS and SQL queries. For example, say you have a feed application or a messaging application, where you know there will be a lot of chat happening and you expect to attract a huge number of users. You could accommodate that in an RDBMS, but I would probably not recommend it.
[00:09:30] – Can I use DynamoDB as a caching mechanism or cache store? I would not say a replacement, exactly. On those segments where I could see a lot of activity happening, I plugged in DynamoDB. The remaining part of the application was handled by the RDBMS. In many applications, what I've seen is that they use a combination of the two.
[00:13:05] – How do you decide if you actually want to use DynamoDB for all the data in your system? The case where we pick it from day one is when we know the volume of incoming data will keep increasing. It also depends on whether your development team is familiar with DynamoDB or other NoSQL databases.
[00:14:50] – Is DynamoDB a document store, or does it have columns? You can call it a key-value store or a document store. The terminologies are just different, as is the way you design the database. In DynamoDB, you have something like a hash key and a range key.
[00:22:10] – Why don't we store images in the database? I would say that there are better places to store them that are faster and cheaper, such as a CDN or S3. Another good reason is that if you want to fetch a properly sized image based on the user's device screen, resizing and all of that inside the database could be cumbersome. You would keep adding different columns to store the different sizes of images.
[00:24:40] – Is there a potentially good reason to choose a NoSQL database as your default go-to data store? If you have data which is completely unstructured, trying to store it in an RDBMS will be a pain. If we talk about the kind of media generated in our day-to-day lives, modeling it in a relational database would be pretty painful, and eventually there comes a time when you don't know how to create the correlations.
[00:28:30] – Horizontally scalable versus vertically scalable In a vertically scaled system, when someone posts, we keep adding to the same table; as we add data, the database size (the number of rows) increases. In a horizontally scaled system, we keep different boxes connected, for example via Hadoop or Elastic MapReduce, which process the added data.
[00:30:20] – What does it take to hook up a DynamoDB instance to a Rails app? We can integrate DynamoDB by using the SDK provided by AWS. I outlined the steps in the blog: how to create the different kinds of tables, how to create the indexes, how to set the throughput, etc. We configure the AWS SDK, add the required credentials, and then we can create the tables.
[00:33:00] – In terms of scaling, what is the limit for something like PostgreSQL or MySQL, versus DynamoDB? There's no scalability limit in DynamoDB, or in other NoSQL solutions.
Picks David Kimura: CorgUI Jason Swett: Database Design for Mere Mortals Charles Max Wood: VMWare Workstation, GoCD, Ruby Rogues Parley, Ruby Dev Summit Chandan Jhunjhunwal: Twitter @ChandanJ chandan@faodailtechnology.com
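Chandan's hash-key/range-key description at [00:14:50] can be illustrated with a small in-memory stand-in (plain Python rather than the AWS SDK; the chat table, conversation ids, and field names are made up for illustration):

```python
from collections import defaultdict

# A DynamoDB table addresses every item by a composite primary key:
# a hash (partition) key that picks the partition, and a range (sort)
# key that orders items within it. This toy class mimics that model.
class ToyDynamoTable:
    def __init__(self):
        self._partitions = defaultdict(dict)  # hash_key -> {range_key: item}

    def put_item(self, hash_key, range_key, item):
        self._partitions[hash_key][range_key] = item

    def query(self, hash_key):
        # A query targets one partition and returns items sorted by the
        # range key, which is why chat messages keyed by
        # (conversation_id, timestamp) come back in order.
        partition = self._partitions[hash_key]
        return [partition[k] for k in sorted(partition)]

# Hypothetical chat data: conversation id as hash key,
# message timestamp as range key.
chat = ToyDynamoTable()
chat.put_item("conv-42", "2017-06-13T10:02", {"from": "alice", "text": "hi"})
chat.put_item("conv-42", "2017-06-13T10:01", {"from": "bob", "text": "hello"})
chat.put_item("conv-99", "2017-06-13T10:03", {"from": "eve", "text": "hey"})

print([m["from"] for m in chat.query("conv-42")])  # -> ['bob', 'alice']
```

This is also why the chat/feed example at [00:07:35] fits DynamoDB well: every read and write lands on one partition picked by the hash key, so partitions can be spread across boxes.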


The NoSQL Database Podcast
NDP016: The MEAN Stack for Application Development

The NoSQL Database Podcast

Play Episode Listen Later Mar 23, 2017 29:26


In this episode I'm joined by Jonathan Casarrubias, owner of the website mean.expert, where we discuss the MongoDB, Express Framework, Angular, and Node (MEAN) development stack. The MEAN stack is a very popular development stack, not only because it is quick to work with, but because development in it is very easy.  After all, each technology in the stack has a JavaScript foundation. Learn how to get started and why it's more convenient than alternative RDBMS-based stacks.

AWS re:Invent 2016
DAT318: Migrating from RDBMS to NoSQL: How PlayStation Network Moved from MySQL to Amazon DynamoDB

AWS re:Invent 2016

Play Episode Listen Later Dec 24, 2016 44:00


In this session, you will learn the key differences between a relational database management system (RDBMS) and non-relational (NoSQL) databases like Amazon DynamoDB. You will learn about suitable and unsuitable use cases for NoSQL databases. You'll learn strategies for migrating from an RDBMS to DynamoDB through a 5-phase, iterative approach. See how Sony migrated an on-premises MySQL database to the cloud with Amazon DynamoDB, and see the results of this migration.
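The abstract doesn't enumerate the five phases here; purely as a hedged illustration, one common iterative shape for such a migration is dual-writing new data while backfilling old rows, sketched with in-memory stand-ins (the stores and record layout below are assumptions, not Sony's actual implementation):

```python
# Minimal sketch of an iterative RDBMS-to-NoSQL migration shape
# (dual-write plus backfill), using dictionaries as stand-ins.
mysql_store = {}    # stands in for the legacy relational store
dynamo_store = {}   # stands in for the new NoSQL store

def write_user(user_id, profile):
    # Dual-write phase: every new write lands in both stores, so the
    # new store stays current while old data is still being copied.
    mysql_store[user_id] = profile
    dynamo_store[user_id] = profile

def backfill():
    # Backfill phase: copy rows that existed before dual-writes began,
    # without clobbering anything already dual-written.
    for user_id, profile in mysql_store.items():
        dynamo_store.setdefault(user_id, profile)

mysql_store["u1"] = {"name": "pre-existing row"}   # legacy data
write_user("u2", {"name": "new row"})              # dual-written
backfill()

print(sorted(dynamo_store))  # -> ['u1', 'u2']
```

Once reads are verified against the new store, the legacy path can be retired in a later phase.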

.NET Rocks!
Distributed Caching with Iqbal Khan

.NET Rocks!

Play Episode Listen Later Oct 12, 2016 50:27


What role does distributed caching play in applications today? Carl and Richard sit down with Iqbal Khan to talk about NCache, an open source product built to do distributed caching in the .NET world. The conversation starts out with the traditional role of a distributed cache: state storage for large, scaling websites. It's never as simple as it sounds! From there, Iqbal dives into comparing caching to NoSQL stores and RDBMS - they can all have a role in your application. The discussion then turns to more complex challenges around using distributed caches for map-reduce problems, and so on. Caching can do a lot!Support this podcast at — https://redcircle.com/net-rocks/donations
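The "state storage for large websites" role Iqbal describes usually follows the cache-aside pattern; here is a minimal single-process sketch (plain Python dictionaries standing in for NCache and the backing database, which are assumptions for illustration, not the NCache API):

```python
database = {"session:1": {"user": "carl"}}  # stands in for the RDBMS
cache = {}                                  # stands in for the distributed cache

def get_session(key):
    # Cache-aside: try the cache first, fall back to the database on a
    # miss, then populate the cache so the next read skips the slow path.
    if key in cache:
        return cache[key], "hit"
    value = database[key]   # the slow path
    cache[key] = value
    return value, "miss"

print(get_session("session:1")[1])  # -> miss
print(get_session("session:1")[1])  # -> hit
```

In a real distributed cache the `cache` dictionary would live on separate cache servers shared by all web nodes, which is what makes it useful for session state behind a load balancer.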


Vendor Media from Oracle OpenWorld
MongoDB: RDBMS to MongoDB Migration Guide

Vendor Media from Oracle OpenWorld

Play Episode Listen Later Sep 12, 2016


The relational database has been the foundation of enterprise data management for over thirty years. But the way we build and run applications today, coupled with growth in data sources and user loads, is pushing relational databases beyond their limits. To address these challenges, organizations such as Verizon, Cisco, eHarmony and Under Armour have migrated […]

The NoSQL Database Podcast
NDP008: Oracle NoSQL and How it Integrates with RDBMS

The NoSQL Database Podcast

Play Episode Listen Later Aug 8, 2016 28:59


In this episode I am joined by Ashok Joshi from Oracle, where we discuss Oracle's NoSQL database platform. Oracle's relational database platform is huge and has many users, but many are unfamiliar with Oracle's NoSQL offerings.  Based on Berkeley DB, Oracle's NoSQL database is a scalable solution that pairs nicely with its relational database platform. We discuss topics ranging from querying with Oracle NoSQL to how you can get the RDBMS to retrieve data from it. If you have any questions for the speakers, send them to advocates@couchbase.com.

RunAs Radio
SQL Q and A at SQLIntersection Spring 2016

RunAs Radio

Play Episode Listen Later May 25, 2016 56:38


Time for another SQL Q and A! Richard moderates an hour-long discussion at SQLintersection in Orlando with panelists Paul Randal, Kim Tripp and Brent Ozar as they tackle the questions and concerns of the attendees. A number of other SQL luminaries (including Microsoft SQL team members) chime in on topics ranging from data types to data recovery, NoSQL vs RDBMS, query performance strategies, and more! Why should you upgrade to SQL 2016? How long does CheckDB take to run on SQL 2005? What is the longest answer to a question that Kim Tripp can give? All this and more in this show full of great answers and debates!

The NoSQL Database Podcast
NDP003: Switching from a Relational Database to NoSQL

The NoSQL Database Podcast

Play Episode Listen Later Mar 31, 2016 29:56


In this episode I am joined by a company that was once using an Oracle RDBMS and has since switched entirely to using NoSQL. I am joined by Tom Coates, who is a Senior Principal Architect at Nuance. I interview Tom to find out more about how they were using Oracle at Nuance and what the driving factors were in switching completely to a new and entirely different database platform. In this episode you'll learn more about the type of data being stored, the complete transition process, and any difficulties that came up along the way. If you have questions regarding today's episode or for the guest speaker, send an email to advocates@couchbase.com. You'll either receive a response via email or have your question answered in a future episode. You can learn more about Nuance by visiting http://www.nuance.com

dotNETpodcast
Relationships - Dino Esposito

dotNETpodcast

Play Episode Listen Later Jan 22, 2016 3:28


Are you for or against relationships? Not the ones between people, of course; here we're talking about databases... Is a "classic" relational DB better, or a "NoSQL" DB? Dino Esposito explains his point of view while he waits, between an RDBMS and a NoSQL store, for the advent of event-based DBs.

SDCast
SDCast #36: guest Dmitry Pavlov, data warehouse administrator at Tinkoff Bank

SDCast

Play Episode Listen Later Jan 21, 2016


I'm glad to present the first episode of 2016, number 36. My guest is Dmitry Pavlov, data warehouse administrator at Tinkoff Bank. In this episode we talk about data warehouses: how they differ from plain databases, what distinctive capabilities they have, and what workloads they are designed for compared to an RDBMS.

The NoSQL Database Podcast
NDP001: NoSQL in the Perspective of Industry Leaders

The NoSQL Database Podcast

Play Episode Listen Later Jan 15, 2016 57:55


In this episode I am joined by three experts in the NoSQL database space. I am joined by Perry Krug from Couchbase, Tim Berglund from DataStax, and Srini Penchikala from InfoQ. The four of us give an introduction in regards to what NoSQL is, where it came from, and how it relates to the older relational databases that exist as of today. Have you ever wondered why companies use NoSQL in their technology stack? We'll answer these kind of things in the episode. If you have questions for any of the guest speakers in today's episode or about the episode itself, send an email to advocates@couchbase.com. You'll either receive a response via email or have your question answered in a future episode.

Les Cast Codeurs Podcast
LCC 111 - Interview on Microsoft Azure with Patrick Chanezon and Benjamin Guinebertiere

Les Cast Codeurs Podcast

Play Episode Listen Later Oct 26, 2014 117:09


Emmanuel talks with Benjamin and Patrick about the cloud and about developing applications in that paradigm. They discuss the various services and layers of Microsoft Azure as well as its ecosystem. Recorded October 13, 2014 Episode download LesCastCodeurs-Episode–111.mp3 Microsoft Azure interview Azure France on Twitter Your life, your work Benjamin Guinebertiere @bengui Microsoft Patrick Chanezon @chanezon Cloud and future What does the cloud mean to you? Portrait of a developer in the style of The Artist (Parleys video, French) Why is the cloud inevitable? Microsoft Cloud OS Cloud only or a hybrid mix? OVH and Microsoft Azure Windows Azure Pack Security, private data and state actors. Lockheed Martin Azure infrastructure What is Azure? IaaS, PaaS, SaaS, SkyNet? Microsoft Azure Patrick's countdown Office 365 Java on Azure Microsoft Azure Cloud Services Web roles and worker roles Azure Websites Docker Kubernetes Fig CoreOS LXC Atomic OpenShift Operating systems Which OSes can it run? Solutions like EC2? Docker support? Azure and Kubernetes Microsoft Drawbridge Storage and databases What are the options for "file" storage? Azure Blob Storage Azure Table Storage Azure Queues Azure File Service What are the database options? RDBMS databases and scalability Discussing the NoSQL options Azure SQL Database Azure Redis Cache Azure Search MongoDB Azure Document DB HBase HortonWorks Languages and application stacks Which platforms are available? How do you support those platforms? Updates Clustering etc Azure Websites Kudu console Dropbox Team Foundation Server (TFS) Wildfly JGroups Interacting with Azure DevOps REST API? Command line? Point-and-click console? IDE Azure SDK PowerShell Azure CLI Azure Webjobs Azure Automation How do you manage those tens of templates and hundreds of machines Azure Resource Manager Chef Puppet Do you have to develop while connected? What do you do on the TGV? 
Ecosystem Is Amazon bit by bit eating the ecosystem being built on top of it? Microsoft? Azure Express Route Data Gravity Pricing Pricing Other services Azure Analytics and Big Data Other? Azure Machine Learning Azure HDInsight API management Azure ISS (Intelligent System Service) BizTalk Visual Studio Online Monaco Security Encryption of data in motion and at rest VPN to on-premise IT Azure Rights Management Azure Trust Center Azure Express Route Future I have no money, can I still try it? Links to go further Free courses on Azure in French: Everything you need to deploy your web application in the Microsoft Azure cloud Introduction to Machine Learning for beginners Others (French/English): http://www.microsoftvirtualacademy.com/product-training/microsoft-azure-topic-page-fr Would you like support on Azure for your cloud project? Pépinière Microsoft Azure Java on Azure Managing complexity in giant systems Devops, the Microsoft way The MS OpenTech blog Docker on Azure Kubernetes on Azure Contact us Contact us via twitter http://twitter.com/lescastcodeurs on the Google group http://groups.google.com/group/lescastcodeurs or on the website http://lescastcodeurs.com/ Flattr us (donations) at http://lescastcodeurs.com/ Want to know more about sponsoring? sponsors@lescastcodeurs.com

Voice of the DBA
No Handwaving Away the DBA

Voice of the DBA

Play Episode Listen Later May 8, 2014 2:49


There's a great quote I read, at the end of this article. It says: "...if you think that switching to NoSQL will just let you hand-wave away all of the challenges of running a database, you are terribly misguided." The context is that all too often people looking to move away from some of the hassles of working with RDBMS platforms, which includes working with the DBA, haven't completely thought through the issues. I do think NoSQL has a place in the world. There are domains of problems that I'm sure Riak, MongoDB, and others, solve in a more efficient way than SQL Server, Oracle, MySQL, and other relational systems. I'm not sure what they are, and to some extent, I haven't seen good guidance on where particular platforms excel. Most of the articles and pieces on choosing NoSQL seem to be trying to sell me "why a particular platform can replace my other one", and telling me to add in things like transactions, but not explaining the drawbacks. Read the rest of "No Handwaving Away the DBA" at SQLServerCentral.

How to Program with Java Podcast
Database Terminology - Relationships, Joins and Keys

How to Program with Java Podcast

Play Episode Listen Later Dec 4, 2013


Terminology It's the foundation when learning any new concept.  In this episode of the "How to Program with Java Podcast" we will be talking about some new database terminology. One of the most important aspects of modern databases is the fact that they allow you to define relationships. Relationships between tables allow you to break data up into its individual "areas of interest".  But when you break the data up, you'll need to know how to put it back together.  This is accomplished using relationships, keys and joins. There's plenty to learn about these concepts, and we will start by scratching the surface in this episode.   Exciting Announcement As you'll hear in the first few minutes of this episode, I've recently had an epiphany!   I realized that there are no great communities dedicated to programmers. So I took it upon myself to create the very first community dedicated to programmers and the pursuit of knowledge and advancement of our common goals (to excel as programmers).  You'll learn lots about this community in the episode, so I won't go into details here, but if you're interested in checking it out, please visit: http://coderscampus.com  
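The relationships, keys, and joins vocabulary from this episode can be seen end to end with Python's built-in sqlite3 module (the author/book schema below is a made-up example, not from the show):

```python
import sqlite3

# Two tables split by "area of interest", tied back together by a key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (
        id INTEGER PRIMARY KEY,
        title TEXT,
        author_id INTEGER REFERENCES author(id)  -- the foreign key
    );
    INSERT INTO author VALUES (1, 'Hernandez');
    INSERT INTO book VALUES (10, 'Database Design for Mere Mortals', 1);
""")

# The JOIN uses the primary-key/foreign-key relationship to reassemble
# the data that was split across the two tables.
row = conn.execute("""
    SELECT author.name, book.title
    FROM book JOIN author ON book.author_id = author.id
""").fetchone()
print(row)  # -> ('Hernandez', 'Database Design for Mere Mortals')
```

Here `author.id` is a primary key, `book.author_id` is the foreign key defining the relationship, and the JOIN is what puts the broken-up data back together.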

IBM developerWorks podcasts
Sqoop: Big data conduit between NoSQL and RDBMS

IBM developerWorks podcasts

Play Episode Listen Later Aug 28, 2013 5:52


Visit This Week on developerWorks at: http://ibm.com/developerworks/thisweek Links to articles mentioned on this episode are at: http://ibm.co/17r5h5G

Software Engineering Radio - The Podcast for Professional Software Developers

Recording Venue: Skype Guest: Michael Hunger Michael Hunger of Neo Technology, and a developer on the Neo4J database, joins Robert to discuss graph databases. Graph databases fall within the larger category of NoSQL databases but they are not primarily a solution to problems of scale. They differentiate themselves from RDBMS in offering a data model built […]
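A graph data model stores relationships directly on the nodes rather than reconstructing them with joins; this toy adjacency-list traversal (plain Python, not the Neo4j API, with made-up names) illustrates the difference:

```python
# In a graph model, edges live on the nodes, so a "friends of friends"
# question is a traversal rather than a relational self-join.
edges = {
    "alice": ["bob"],
    "bob": ["carol", "dave"],
    "carol": [],
    "dave": [],
}

def friends_of_friends(person):
    direct = set(edges[person])
    result = set()
    for friend in direct:
        result.update(edges[friend])
    # Exclude direct friends and the person themselves.
    return sorted(result - direct - {person})

print(friends_of_friends("alice"))  # -> ['carol', 'dave']
```

A graph database generalizes this: typed, property-carrying relationships are first-class, and multi-hop traversals stay cheap regardless of total data size.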

TWiT Throwback (Video HI)
FLOSS Weekly 224: PostgreSQL and EnterpriseDB

TWiT Throwback (Video HI)

Play Episode Listen Later Aug 29, 2012 58:32


PostgreSQL is the #1 enterprise-class open source database with a feature set comparable to the major proprietary RDBMS vendors and a customer list that spans every industry. Hosts: Randal Schwartz and Dan Lynch Guest: Ed Boyajian Download or subscribe to this show at https://twit.tv/shows/floss-weekly. We invite you to read, add to, and amend our show notes. Here's what's coming up for FLOSS in the future. Think your open source project should be on FLOSS Weekly? Email Randal at merlyn@stonehenge.com. Thanks to CacheFly for providing the bandwidth for this podcast, and Lullabot's Jeff Robbins, web designer and musician, for our theme music.

TWiT Throwback (MP3)
FLOSS Weekly 224: PostgreSQL and EnterpriseDB

TWiT Throwback (MP3)

Play Episode Listen Later Aug 29, 2012 58:32


PostgreSQL is the #1 enterprise-class open source database with a feature set comparable to the major proprietary RDBMS vendors and a customer list that spans every industry. Hosts: Randal Schwartz and Dan Lynch Guest: Ed Boyajian Download or subscribe to this show at https://twit.tv/shows/floss-weekly. We invite you to read, add to, and amend our show notes. Here's what's coming up for FLOSS in the future. Think your open source project should be on FLOSS Weekly? Email Randal at merlyn@stonehenge.com. Thanks to CacheFly for providing the bandwidth for this podcast, and Lullabot's Jeff Robbins, web designer and musician, for our theme music.

Star Integration Server
9 - SIS Cell-level Security (RDBMS Row-level Security)

Star Integration Server

Play Episode Listen Later Sep 29, 2009 0:48

