In this episode, Lois Houston and Nikita Abraham continue their deep dive into Oracle GoldenGate 23ai, focusing on its evolution and the extensive features it offers. They are joined once again by Nick Wagner, who provides valuable insights into the product's journey. Nick talks about the various iterations of Oracle GoldenGate, highlighting the significant advancements from version 12c to the latest 23ai release. The discussion then shifts to the extensive new features in 23ai, including AI-related capabilities, UI enhancements, and database function integration. Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we introduced Oracle GoldenGate and its capabilities, and also spoke about GoldenGate 23ai. In today's episode, we'll talk about the various iterations of Oracle GoldenGate since its inception. And we'll also take a look at some new features and the Oracle GoldenGate product family. 00:57 Lois: And we have Nick Wagner back with us. Nick is a Senior Director of Product Management for GoldenGate at Oracle. Hi Nick! I think the last time we had an Oracle University course was when Oracle GoldenGate 12c was out. I'm sure there have been a lot of advancements since then. Can you walk us through those? Nick: GoldenGate 12.3 introduced the microservices architecture. GoldenGate 18c introduced support for Oracle Autonomous Data Warehouse and Autonomous Transaction Processing Databases. In GoldenGate 19c, we added the ability to do cross-endian remote capture for Oracle, making it easier to set up the GoldenGate OCI service to capture from environments like Solaris, SPARC, and HP-UX and replicate into the Cloud. Also, GoldenGate 19c introduced a simpler process for upgrades and installation of GoldenGate where we released something called a unified build. This means that when you install GoldenGate for a particular database, you don't need to worry about the database version when you install GoldenGate. Prior to this, you would have to install a version-specific and database-specific version of GoldenGate. So this really simplified that whole process. GoldenGate 23ai, which is where we are now, is really a huge release. 02:16 Nikita: Yeah, we covered some of the distributed AI features and high availability environments in our last episode. But can you give us an overview of everything that's in the 23ai release? I know there's a lot to get into but maybe you could highlight just the major ones? Nick: Within the AI and streaming environments, we've got interoperability for database vector types, heterogeneous capture and apply as well.
Again, this is not just replication between Oracle-to-Oracle vector or Postgres to Postgres vector, it is heterogeneous just like the rest of GoldenGate. The entire UI has been redesigned and optimized for high speed. And so we have a lot of customers that have dozens and dozens of extracts and replicats and processes running and it was taking a long time for the UI to refresh those and to show what's going on within those systems. So the UI has been optimized to be able to handle those environments much better. We now have the ability to call database functions directly from COLMAP. And so when you do transformation with GoldenGate, we have about 50 or 60 built-in transformation routines for string conversion, arithmetic operations, date manipulation. But we never had the ability to directly call a database function. 03:28 Lois: And now we do? Nick: So now you can actually call that database function, database stored procedure, database package, return a value and that can be used for transformation within GoldenGate. We have integration with identity providers, being able to use token-based authentication and integrate with things like Azure Active Directory and your other single sign-on for the GoldenGate product itself. Within Oracle 23ai, there are a number of new features. One of those cool features is something called lock-free reservation columns. So this allows you to have a row, a single row within a table and you can identify a column within that row that's like an inventory column. And you can have multiple different users and multiple different transactions all updating that column within that same exact row at the same time. So you no longer have row-level locking for these reservation columns. And it allows you to do things like shopping carts very easily. If I have 500 widgets to sell, I'm going to let any number of transactions come in and subtract from that inventory column. And then once it gets below a certain point, then I'll start enforcing that row-level locking. 04:43 Lois: That's really cool… Nick: The one key thing that I wanted to mention here is that because of the way that the lock-free reservations work, you can have multiple transactions open on the same row. This is only supported for Oracle to Oracle. You need to have that same lock-free reservation data type and availability on that target system if GoldenGate is going to replicate into it. 05:05 Nikita: Are there any new features related to the diagnosability and observability of GoldenGate? Nick: We've improved the AWR reports in Oracle 23ai. There are now seven sections that are specific to Oracle GoldenGate to allow you to really go in and see exactly what the GoldenGate processes are doing and how they're behaving inside the database itself. And there's a Replication Performance Advisor package inside that database, and that's been integrated into the Web UI as well. So now you can actually get information out of the replication advisor package in Oracle directly from the UI without having to log into the database and try to run any database procedures to get it. We've also added the ability to support a per-PDB Extract. So in the past, when GoldenGate would run on a multitenant database in Oracle, all the redo data from any pluggable database gets sent to that one redo stream. And so you would have to configure GoldenGate at the container or root level and it would be able to access anything at any PDB.
Now, there's better security and better performance by doing what we call per-PDB Extract. And this means that for a single pluggable database, I can have an extract that runs at that database level that's going to capture information just from that pluggable database. 06:22 Lois: And what about non-Oracle environments, Nick? Nick: We've also enhanced the non-Oracle environments as well. For example, in Postgres, we've added support for precise instantiation using Postgres snapshots. This eliminates the need to handle collisions when you're doing Postgres to Postgres replication and initial instantiation. On the GoldenGate for big data side, we've renamed that product, more aptly, to Distributed Applications and Analytics, which is really what it does, and we've added a whole bunch of new features here too. The ability to move data into Databricks, doing Google Pub/Sub delivery. We now have support for XAG within the GoldenGate for distributed applications and analytics. What that means is that now you can follow all of our MAA best practices for GoldenGate for Oracle, but it also works for the DAA product as well, meaning that if it's running on one node of a cluster and that node fails, it'll restart itself on another node in the cluster. We've also added the ability to deliver data to Redis, Google BigQuery, stage and merge functionality for better performance into the BigQuery product. And then we've added a completely new feature, and this is something called streaming data and apps and we're calling it AsyncAPI and CloudEvent data streaming. It's a long name, but what that means is that we now have the ability to publish changes from a GoldenGate trail file out to end users. And so through the Web UI or through the REST API, you can now come into GoldenGate and through the distributed applications and analytics product, actually set up a subscription to a GoldenGate trail file. And so this allows us to push data into messaging environments, or you can simply subscribe to changes and it doesn't have to be the whole trail file, it can just be a subset. You can specify exactly which tables and you can put filters on that. You can also set up your topologies as well. So, it's a really cool feature that we've added here. 08:26 Nikita: Ok, you've given us a lot of updates about what GoldenGate can support. But can we also get some specifics? Nick: So as far as what we have, on the Oracle Database side, there's a ton of different Oracle databases we support, including the Autonomous Databases and all the different flavors of them, your Oracle Database Appliance, your Base Database Service within OCI, of course your Standard and Enterprise Edition, as well as all the different flavors of Exadata, are all supported with GoldenGate. This is all for capture and delivery. And this is all versions as well. GoldenGate supports Oracle 23ai and below. We also have a ton of non-Oracle databases in different Cloud stores. On the non-Oracle side, we support everything from application-specific databases like FairCom DB, all the way to more advanced applications like Snowflake, which has a vast user base. We also support a lot of different cloud stores and these again, are non-Oracle, non-relational systems, or they can be relational databases. We also support a lot of big data platforms and this is part of the distributed applications and analytics side of things where you have the ability to replicate to different Apache environments, different Cloudera environments.
We also support a number of open-source systems, including things like Apache Cassandra, MySQL Community Edition, a lot of different Postgres open source databases along with MariaDB. And then we have a bunch of streaming event products, NoSQL data stores, and even Oracle applications that we support. So there's absolutely a ton of different environments that GoldenGate supports. There are additional Oracle databases that we support and this includes the Oracle Metadata Service, as well as Oracle MySQL, including MySQL HeatWave. Oracle also has Oracle NoSQL, Spatial and Graph, and TimesTen products, which again are all supported by GoldenGate. 10:23 Lois: Wow, that's a lot of information! Nick: One of the things that we didn't really cover was the different SaaS applications, which we've got like Cerner, Fusion Cloud, Hospitality, Retail, MICROS, Oracle Transportation, JD Edwards, Siebel, and on and on and on. And again, because of the nature of GoldenGate, it's heterogeneous. Any source can talk to any target. And so it doesn't have to be, oh, I'm pulling from Oracle Fusion Cloud, that means I have to go to an Oracle Database on the target, not necessarily. 10:51 Lois: So, there's really a massive amount of flexibility built into the system. 11:00 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 11:26 Nikita: Welcome back! Now that we've gone through the base product, what other features or products are in the GoldenGate family itself, Nick? Nick: So we have quite a few. We've kind of touched already on GoldenGate for Oracle databases and non-Oracle databases. We also have something called GoldenGate for Mainframe, which right now is covered under the GoldenGate for non-Oracle, but there is a licensing difference there. So that's something to be aware of. We also have the OCI GoldenGate product. We are announcing and we have announced that OCI GoldenGate will also be made available as part of the Oracle Database@Azure and Oracle Database@Google Cloud partnerships. And then you'll be able to use that vendor's cloud credits to actually pay for the OCI GoldenGate product. One of the cool things about this is it will have full feature parity with OCI GoldenGate running in OCI. So all the same features, all the same sources and targets, all the same topologies, and you'll be able to migrate data in and out of those clouds at will, just like you do with OCI GoldenGate today running in OCI. We have Oracle GoldenGate Free. This is a completely free edition of GoldenGate to use. It is limited in the number of platforms that it supports as far as sources and targets and the size of the database. 12:45 Lois: But it's a great way for developers to really experience GoldenGate without worrying about a license, right? What's next, Nick? Nick: We have GoldenGate for Distributed Applications and Analytics, which was formerly called GoldenGate for big data, and that allows us to do all the streaming. That's also where the GoldenGate AsyncAPI integration is done. So in order to publish the GoldenGate trail files or allow people to subscribe to them, it would be covered under the Oracle GoldenGate Distributed Applications and Analytics license.
We also have OCI GoldenGate Marketplace, which allows you to run essentially the on-premises version of GoldenGate but within OCI. So a little bit more flexibility there. It also has a hub architecture. So if you need that 99.99% availability, you can get it within the OCI Marketplace environment. We have GoldenGate for Oracle Enterprise Manager Cloud Control, which used to be called Oracle Enterprise Manager. And this allows you to use Enterprise Manager Cloud Control to get all the statistics and details about GoldenGate. So all the reporting information, all the analytics, all the statistics, how fast GoldenGate is replicating, what's the lag, what's the performance of each of the processes, how much data am I sending across a network. All that's available within the plug-in. We also have Oracle GoldenGate Veridata. This is a nice utility and tool that allows you to compare two databases, whether or not GoldenGate is running between them, and actually tell you, hey, these two systems are out of sync. And if they are out of sync, it actually allows you to repair the data too. 14:25 Nikita: That's really valuable… Nick: And it does this comparison without locking the source or the target tables. The other really cool thing about Veridata is it does this while there's data in flight. So let's say that the GoldenGate lag is 15 or 20 seconds and I want to compare this table that has 10 million rows in it. The Veridata product will go out, run its comparison once. Once that comparison is done the first time, it's then going to have a list of rows that are potentially out of sync. Well, some of those rows could have been moved over or could have been modified during that 10 to 15 second window. And so the next time you run Veridata, it's actually going to go through. It's going to check just those rows that were potentially out of sync to see if they're really out of sync or not. And if it comes back and says, hey, out of those potential rows, there's two out of sync, it'll actually produce a script that allows you to resynchronize those systems and repair them. So it's a very cool product. 15:19 Nikita: What about GoldenGate Stream Analytics? I know you mentioned it in the last episode, but in the context of this discussion, can you tell us a little more about it? Nick: This is the ability to essentially stream data from a GoldenGate trail file and do real-time analytics on it, as well as things like geofencing or real-time time-series analysis. 15:40 Lois: Could you give us an example of this? Nick: If I'm working in tracking stock market information and stocks, it's not really that important how much or how far down a stock goes. What's really important is how quickly did that stock rise or how quickly did that stock fall. And that's something that the GoldenGate Stream Analytics product can do. Another thing that it's very valuable for is the geofencing. I can have an application on my phone and I can track where the user is based on that application and all that information goes into a database. I can then use the geofencing tool to say that, hey, if one of those users on that app gets within a certain distance of one of my brick-and-mortar stores, I can actually send them a push notification to say, hey, come on in and you can order your favorite drink just by clicking Yes, and we'll have it ready for you. And so there's a lot of things that you can do there to help upsell your customers and to get more revenue just through GoldenGate itself.
And then we also have a GoldenGate Migration Utility, which allows customers to migrate from the classic architecture into the microservices architecture. 16:44 Nikita: Thanks Nick for that comprehensive overview. Lois: In our next episode, we'll have Nick back with us to talk about commonly used terminology and the GoldenGate architecture. And if you want to learn more about what we discussed today, visit mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 17:10 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
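For readers who want to see the lock-free reservation columns Nick describes in SQL form, here is a minimal sketch. It assumes Oracle Database 23ai's RESERVABLE column keyword and uses a hypothetical inventory table and constraint name; treat it as an illustration, not a definitive recipe.

    -- Hypothetical table; RESERVABLE marks on_hand as a lock-free reservation column
    CREATE TABLE inventory (
      item_id   NUMBER PRIMARY KEY,
      item_name VARCHAR2(100),
      on_hand   NUMBER RESERVABLE CONSTRAINT on_hand_nonneg CHECK (on_hand >= 0)
    );

    -- Concurrent transactions can each subtract from on_hand on the same row
    -- without blocking one another, as long as the CHECK constraint still holds
    UPDATE inventory SET on_hand = on_hand - 1 WHERE item_id = 42;
    COMMIT;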
Guy and Eitan talk about a few interesting SQL Server features and some interesting and useful scripts, try to remember what came along with the WINDOW clause, and discuss how to do an online index rebuild when it can't be done online. Relevant links: Managed Instance link overview - Azure SQL Managed Instance | Microsoft Learn Collect Database Files Information.sql at MadeiraData/microsoft-dbas-club WINDOW clause (Transact-SQL) - SQL Server | Microsoft Learn Online Index Operations without Enterprise Edition.sql at MadeiraData/microsoft-dbas-club Guidelines for online index operations - SQL Server | Microsoft Learn
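For reference, the WINDOW clause the hosts try to place arrived in SQL Server 2022; it lets you name a window specification once and reuse it across several window functions. A minimal T-SQL sketch with hypothetical table and column names:

    SELECT sale_id,
           SUM(amount) OVER w AS running_total,
           AVG(amount) OVER w AS running_avg
    FROM dbo.sales
    WINDOW w AS (PARTITION BY customer_id ORDER BY sale_date);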
The Enterprise Edition of From The Holodeck will analyze Star Trek: Enterprise, exploring its narrative structure, philosophical themes, and place within Star Trek canon. This in-depth discussion will go beyond episode-by-episode recaps, allowing for a comprehensive examination of the show’s evolving ideas—including the ideological foundations of the Federation, the interplay between idealism and pragmatism in...
Lois Houston and Nikita Abraham continue their conversation with MySQL expert Perside Foster, with a closer look at MySQL Enterprise Backup. They cover essential features like incremental backups for quick recovery, encryption for data security, and monitoring with MySQL Enterprise Monitor—all to help you manage backups smoothly and securely. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! Last week was the first of a two-part episode covering the different types of backups and why they're important. Today, we'll look at how we can use MySQL Enterprise Backup for efficient and consistent backups. 00:52 Nikita: And of course, we've got Perside Foster with us again to walk us through all the details. Perside, could you give us an overview of MySQL Enterprise Backup? Perside: MySQL Enterprise Backup is a form of physical backup at its core, so it's much faster for large data sets than logical backups, such as the most commonly used mysqldump. Because it backs up the data files, it's non-locking and enables either complete system backup or partial backup, focusing only on specific databases. 01:29 Lois: And what are the benefits of using MySQL Enterprise Backup? Perside: You can back up to local storage or directly to common cloud storage types. You can perform incremental backups, which can speed up your backup process greatly. Incremental backups enable point-in-time recovery. It's useful when you need to restore to a point in time before some application or human error occurred. Backups can be compressed to save archival storage requirements and encrypted for regulatory compliance and offline data security. 02:09 Nikita: So we know MySQL Enterprise Backup is an impressive tool, but could you talk more about some of the main features it supports for creating and managing backups? Specifically, which tools are integrated within MySQL Enterprise to support different backup scenarios? Perside: MySQL Enterprise Backup supports SBT, implemented by many common tape storage systems. MySQL Enterprise Backup supports optimistic backup. This process deals with busy tables separately from the rest of the database. It can record changes that happen in the database during the backup for consistency. In a large data set, this can make a huge difference in performance. MySQL Enterprise Backup runs on all supported platforms. It's available when you have a MySQL Enterprise Edition license. It comes with Enterprise Edition, but it's also available as a separate package. You can get the most recent version from eDelivery, where you can also get a trial version. If you need a previous release, you can get that from My Oracle Support.
It's also available in all versions of MySQL, whether you run a Long-Term Support version or an Innovation Release. For LTS releases, MySQL Enterprise Backup supports MySQL instances of the same LTS release. For Innovation releases, it supports the previous LTS release and any subsequent Innovation version within the same LTS family. 04:03 Nikita: How does MySQL Enterprise Monitor manage and track backup processes? Perside: MySQL Enterprise Monitor has a dashboard for monitoring MySQL Enterprise Backup. The dashboard monitors the health of the backup process and usage throughout the entire Enterprise fleet, not just a single server. It supports drilling down into specific sub-operations within a backup job. You can see information about full backups, partial backups, and incremental backups. You can configure alerts that will notify you in the event of delays, failures, or backups that have not been performed within some configured time period. 04:53 Lois: Ok…let's get into the mechanics. I understand that MySQL Enterprise Backup uses binary logs as part of its backup process. Can you explain how these logs fit into the bigger picture of maintaining database integrity? Perside: MySQL Enterprise Backup is a utility designed specifically for backing up MySQL systems in the most efficient and flexible way. At its simplest, it performs a physical backup of the data files, so it is fast. However, it also records the changes that were made during the time it took to do the backup. So, the result is that you get a consistent backup of the data at the time the backup completed. This backup is not tied to the host system and can be moved to other hosts. It can be used for archiving and is fully supported as part of the MySQL Enterprise Edition. It is, however, tied to the specific version of MySQL from which the backup was taken. So, you cannot use it for upgrades where the destination server is an upgrade from the source. For example, if you take a backup from MySQL 5.7, you can't directly restore it to MySQL 8.0. As part of MySQL Enterprise Edition, it's not included in the freely available Community Edition. 06:29 Lois: Perside, how do MySQL's binary logs track changes over time? And why is this so critical for backups? Perside: The binary logs record changes to the database. These changes are recorded in a sequential set of files numbered incrementally. MySQL logs changes either in statement-based form, where each log entry records the statement that gives rise to the change, or in row-based form, where the actual changed row data is recorded. If you select mixed format, then MySQL records statements for most operations and records rows for changes where the statement might result in a different row value each time it's run, for example, where there's a generated value like auto-increment. The current log file grows as changes are recorded. When it reaches its maximum configured size, that log file is closed, and the next sequential file is created for new logs. You can also make this happen on demand by using the FLUSH BINARY LOGS command. This does not delete any existing log files. 07:59 Nikita: But what happens if you want to delete the log files? Perside: If you want to delete all log files, you can do so manually with the PURGE BINARY LOGS command, either specifying a file or a date and time. 08:14 Lois: When it comes to tracking transactions, MySQL provides a couple of methods, right? Can you explain the differences between Global Transaction Identifiers and the traditional log file sequence?
Perside: Log file positioning uses one of two formats: either legacy, where you specify transactions with a log file and a sequence number, or global transaction identifiers, or GTIDs, where each transaction is identified with a universally unique identifier, or UUID. When you apply a transaction to the source server, that is when the GTID is attached to the transaction. This makes it particularly useful in replication topologies so that each transaction is uniquely identified by both its server ID and the transaction sequence number. When such a transaction is replicated to other hosts, the transaction retains its original GTID so that you can track when that transaction has propagated to the replicas and has been applied. The global transaction identifier is unique across the entire network. 09:49 Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like LLMs, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com and find out more. 10:19 Nikita: Welcome back! Let's move on to replication. How does MySQL's legacy log format handle transactions, and what does that mean for replication timing across different servers? Perside: Legacy format binary logs are non-transactional. This means that a transaction made up of multiple modifications is logged as a sequence of changes. It's possible that different hosts in a replication network apply those changes at different times. Each server that uses legacy binary logging maintains the current applied log position as coordinates, based on a combination of the binary log file and the position within that log file. 11:11 Nikita: Troubleshooting with legacy logs can be quite complex, right? So, how does the lack of unique transaction IDs make it more difficult to address replication issues? Perside: Because each server has its own log with its own transactions, these modifications could have entirely different coordinates, making it challenging to find the specific modification point if you need to do any deep dive troubleshooting, for example, if one replica failed partway through applying a transaction and you need to partially roll it back manually. On the other hand, when you enable GTIDs, the transaction applied on the source host has that globally unique identifier attached to the whole transaction as a sequence of unique IDs. When the second or subsequent servers apply those transactions, they have exactly the same identifier, making it both transaction-safe for MySQL and also easier to troubleshoot if you need to. 12:26 Lois: How can you use binary logs to perform a point-in-time recovery in MySQL? Perside: First, you restore the last full backup. Once you've restarted the restored server, find the current log position of that backup, either its GTID or log sequence number. The SHOW BINARY LOG STATUS command shows this information. Then you can use the MySQL binlog utility to replay events from the binary log files, specifying the start and stop position containing the range of log operations that you wish to apply. You can pipe the output of MySQL binlog to the MySQL client if you want to execute the changes immediately, or you can redirect the output to a script file if you want to examine and perhaps edit the changes. 13:29 Nikita: And how do you save binary logs?
Perside: You can save binary logs to use in disaster recovery, for point-in-time restores, or for incremental backups. One way to do this is to flush the logs so that the log file closes and is ready for copying. And then copy it to a different server to protect against hardware media failures. You can also use the MySQL binlog utility to create a copy of a set of binary log files in the same format, but to a different file or set of files. This can be useful if you want to run MySQL binlog continuously, copying from the source server binary log to a new location, perhaps in network storage. If you do this, remember that MySQL binlog does not run as a service or daemon, so you'll need to monitor it to make sure it's running continually. 14:39 Lois: Can you take us through how the MySQL Enterprise Backup process works? What does it do when performing a backup? Perside: First, it performs a physical file copy of necessary data and log files. This can be done while the server is fully operational, and it has minimal impact on performance. Once this initial copy is taken, it applies a low-impact backup lock on the instance. If you have any tables that are not using InnoDB, the backup cannot guarantee transaction-safe consistency for those tables. It applies a read lock to those tables so that it can guarantee consistency. Then it briefly locks all logging activity to take a consistent view of the current coordinates of various logs. It releases the read lock on non-transactional tables. Using the log coordinates that were taken earlier in the process, it gathers all logs for transactions that have occurred since then. Bear in mind that the backup process takes place while the system is active. So, for a consistent backup, it must record not only the data files, but all changes that occurred during the backup. Then it releases the backup lock. The last piece of information recorded is any metadata for the backup itself, including its timing and contents in the final redo log. That completes the backup operation. 16:30 Nikita: And where are the files stored? Perside: The files contained in the backup are saved to the backup location, which can be on the local system or in network storage. The files contained in the backup location include files from the MySQL data directory. Some raw files include the InnoDB tablespace, plus any InnoDB file-per-table tablespace files, and InnoDB log files. Other files might include data files belonging to other storage engines, perhaps MyISAM files. The various log files and instance configuration files are also retained. 17:20 Lois: What steps do you follow to restore a MySQL Enterprise Backup, and how do you guarantee consistency, especially when dealing with incremental backups? Perside: To restore from a backup using MySQL Enterprise Backup, you must first remove any previous files from the data directory. The restore process will fail if you attempt to restore over an existing system or backup. Then you restore the database with appropriate options. If you only restore a single backup, you can use copy-back-and-apply-log to ensure that the restored system is in a consistent state. If you performed a full backup and subsequent incremental backups, you might need to restore multiple times using copy-back, and then use copy-back-and-apply-log only for the final consistent restore operation. The restored server might be on the same host or might be a different host with a different configuration.
This means that you might have to change some configuration on the restored server, including the operating system ownership of the restored data directory and various MySQL configuration files. If you want to retain the MySQL configuration files from the source server to reproduce on a new server, you should copy those files separately. MySQL Enterprise Backup focuses on the data rather than the server configuration. It does, however, produce configuration files appropriate for the backup. These are similar to the MySQL configuration files, but only contain options relevant for the backup process itself, variables that have been changed to non-default values, and all global variable values. These files must be renamed and possibly edited before they are suitable to become configuration files in the newly restored server. For example, the mysqld-auto.cnf file contains a JSON-formatted set of persisted variables. The backup process stores this as the newly named backup mysqld-auto.cnf. If you want to use it in the restored server, you must rename it and place it in the appropriate location so that the restored server can read it. This also applies in part to the auto.cnf file, which contains identifying information for the server. If you are replacing the original server or restoring on the same host, then you can keep the original values. However, this information must be unique within a network. So, if you are restoring this backup to create a replica in a replication topology, you must not include that file and instead start MySQL without it so that it creates its own unique identifying information. 21:14 Nikita: Let's discuss securing and optimizing backups. How does MySQL Enterprise Backup handle encryption and compression, and what are the critical considerations for each? Perside: You can encrypt backups so that they are secure while moving them around or archiving them. The encrypt option performs the encryption. And you can specify the encryption key either on the command line as a string or as a key file that has been generated with some cryptographic algorithm. Encryption only applies to image files, not to backup directories. You can also compress backups with different levels of compression, with higher levels requiring more CPU, but resulting in greater savings in storage. Compression only works with InnoDB data files. If your organization has media management software for performing backups, perhaps to a tape array, then you can use the SBT interface supported in MySQL Enterprise Backup. 22:34 Lois: Before we wrap up, could you share how MySQL Enterprise Backup facilitates the management of backups across a multi-server environment? Perside: As an enterprise solution, it's easy to run MySQL Enterprise Backup in a multi-server environment. We've already mentioned backing up to cloud storage, but you can, of course, back up to a directory or image on network storage that can be mounted locally, perhaps with NFS or some other file system. The "with time" option enables multiple backups within the same backup directory, where each backup is in its own subdirectory named with the timestamp. This is especially useful when you want to run the same backup script repeatedly. 23:32 Lois: Thank you for that detailed overview, Perside. This wraps up our discussion of the various backup types, their pros and cons, and how to select the right option for your needs.
In our next session, we'll explore the different MySQL monitoring strategies and look at the features as well as benefits of HeatWave. Nikita: And if you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and take a look at the MySQL 8.4 Essentials course. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 24:06 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
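For reference, the binary log commands Perside walks through look like this in SQL. A minimal sketch; the log file name is hypothetical, and SHOW BINARY LOG STATUS is the MySQL 8.4 form (older releases use SHOW MASTER STATUS):

    -- Close the current binary log file and start the next one (deletes nothing)
    FLUSH BINARY LOGS;

    -- Show the current log coordinates to record alongside a backup
    SHOW BINARY LOG STATUS;

    -- Delete older log files, either up to a named file or by age
    PURGE BINARY LOGS TO 'binlog.000042';
    PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;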
Picking up from Part 1, hosts Lois Houston and Nikita Abraham continue their deep dive into MySQL security with MySQL Solution Engineer Ravish Patel. In this episode, they focus on user authentication techniques and tools such as MySQL Enterprise Audit and MySQL Enterprise Firewall. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Last week, we began exploring MySQL security, covering regulatory compliance and common security threats. Nikita: This week, we're continuing the conversation by digging deeper into MySQL's user authentication methods and taking a closer look at some powerful security tools in the MySQL Enterprise suite. 00:57 Lois: And we're joined once again by Ravish Patel, a MySQL Solution Engineer here at Oracle. Welcome, Ravish! How does user authentication work in MySQL? Ravish: MySQL authenticates users by storing account details in a system database. These accounts are authenticated with three elements, username and hostname commonly separated with an @ sign, along with a password. The account identifier has the username and host. The host identifier specifies where the user connects from. It specifies either a DNS hostname or an IP address. You can use a wildcard as part of the hostname or IP address if you want to allow this username to connect from a range of hosts. If the host value is just the percent sign wildcard, then that username can connect from any host. Similarly, if you create the user account with an empty host, then the user can connect from any host. 01:55 Lois: Ravish, can MySQL Enterprise Edition integrate with an organization's existing accounts? Ravish: MySQL Enterprise authentication integrates with existing authentication mechanisms in your infrastructure. This enables centralized account management, policies, and authentication based on group membership and assigned corporate roles, and MySQL supports a wide range of authentication plugins. If your organization uses Linux, you might already be familiar with PAM, also known as Pluggable Authentication Module. This is a standard interface in Linux and can be used to authenticate to MySQL. Kerberos is another widely used standard for granting authorization using a centralized service. The FIDO Alliance, short for Fast Identity Online, promotes an interface for passwordless authentication. This includes methods for authenticating with biometrics or USB security tokens. And MySQL even supports logging into centralized authentication services that use LDAP, including having a dedicated plugin to connect to Windows domains. 03:05 Nikita: So, once users are authenticated, how does MySQL handle user authorization?
Ravish: The MySQL privilege system uses the GRANT keyword. This grants some privilege X on some object Y to some user Z, and optionally gives you permission to grant the same privilege to others. These can be global administrative privileges that enable users to perform tasks at the server level, or they can be database-specific privileges that allow users to modify the structure or data within a database. 03:39 Lois: What about database privileges? Ravish: Database privileges can be fine-grained from the largest to the smallest. At the database level, you can permit users to create, alter, and delete whole databases. The same privileges apply at the table, view, index, and stored procedure levels. And in addition, you can control who can execute stored procedures and whether they do so with their own identity or with the privileges of the procedure's owner. For tables, you can control who can select, insert, update, and delete rows in those tables. You can even specify, at the column level, who can select, insert, and update data in those columns. Now, any privilege system carries with it the risk that you might forget an important password and lock yourself out. In MySQL, if you forget the password to the root account and don't have any other admin-level accounts, you will not be able to administer the MySQL server. 04:39 Nikita: Is there a way around this? Ravish: There is a way around this as long as you have physical access to the server that runs the MySQL process. If you launch the MySQL process with the --skip-grant-tables option, then MySQL will not load the privilege tables from the system database when it starts. This is clearly a dangerous thing to do, so MySQL also implicitly disables network access when you use that option to prevent users from connecting over the network. When you use this option, any client connection to MySQL succeeds and has root privileges. This means you should control who has shell access to the server during this time, and you should restart the server or re-enable the privilege system with the FLUSH PRIVILEGES command as soon as you have changed the root password. The privileges we have already discussed are built into MySQL and are always available. MySQL also makes use of dynamic privileges, which are privileges that are enabled at runtime and which can be granted once they are enabled. In addition, plugins and components can define privileges that relate to features of those plugins. For example, the Enterprise Firewall plugin defines the FIREWALL_ADMIN privilege, and the AUDIT_ADMIN privilege is defined by the Enterprise Audit plugin. 06:04 Are you working towards an Oracle Certification this year? Join us at one of our certification prep live events in the Oracle University Learning Community. Get insider tips from seasoned experts and learn from others who have already taken their certifications. Go to community.oracle.com/ou to jump-start your journey towards certification today! 06:28 Nikita: Welcome back! Ravish, I want to move on to MySQL Enterprise security tools. Could you start with MySQL Enterprise Audit? Ravish: MySQL Enterprise Audit is an extension available in Enterprise Edition that makes it easier to comply with regulations that require observability and control over who does what in your database servers. It provides visibility of connections, authentication, and individual operations. This is a necessary part of compliance with various regulations, including GDPR, NIS2, HIPAA, and so on.
You can control who has access to the audited events so that the audits themselves are protected. As well as configuring what you audit, you can also configure rotation policies so that unmonitored audit logs don't fill up your storage space. The configuration can be performed while the server is running with minimal effect on production applications. You don't need to restart the server to enable or disable auditing or to change the filtering options. You can output the audit logs in either XML or JSON format, depending on how you want to perform further searching and processing. If you need it, you can compress the logs to save space, and you can encrypt the logs to provide added protection of audited identities and data modifications. The extension is available either as a component or, if you prefer, as the legacy plugin. 07:53 Lois: But how does it all work? Ravish: Well, first, as a DBA, you'll enable the audit plugin and attach it to your running server. You can then configure filters to audit your connections and queries and record who does what, when they do it, and so on. Then once the system is up and running, it audits whenever a user authenticates, accesses data, or even when they perform schema changes. The logs are recorded in whatever format you have configured. You can then monitor the audited events at will with MySQL tools such as Workbench or with any software that can view and manipulate XML or JSON files. You can even configure Enterprise Audit to export the logs to an external Audit Vault, enabling collection and archiving of audit information from all over your enterprise. In general, you won't audit every action on every server. You can configure filters to control what specific information ends up in the logs. 08:50 Nikita: Why is this sort of filtering necessary, Ravish? Ravish: As a DBA, this enables you to create a custom-designed audit process to monitor things that you're really interested in. Rules can be general or very fine-grained, which enables you to reduce the overall log size, reduce the performance impact on the database server and underlying storage, and make it easier to process the log file once you've gathered data. Filters are configured with the easy-to-use JSON file format. 09:18 Nikita: So what information is audited? Ravish: You can see who did what, when they did it, what commands they used, and whether they succeeded. You can also see where they connected from, which can be useful when identifying man-in-the-middle attacks or stolen credentials. The log also records any available client information, including software versions and information about the operating system and much more. 09:42 Lois: Can you tell us about MySQL Enterprise Firewall, which I understand is a specific tool to learn and protect the SQL statements that MySQL executes? Ravish: MySQL Enterprise Firewall can be enabled on MySQL Enterprise Edition with a plugin. It uses an allow list to set policies for acceptable queries. You can apply this allow list to either specific accounts or groups. Queries are protected in real time. Every query that executes is verified per server and checked to make sure that it conforms to query structures that are defined in the allow list. This makes it very useful to block SQL injection attacks. Only transactions that match well-formed queries in the allow list are permitted. So any attempt to inject other types of SQL statements is blocked.
Not only does it block such statements, but it also sends an alert to the MySQL error log in real time. This gives you visibility on any security gaps in your applications. The Enterprise Firewall has a learning mode during which you can train the firewall to identify the correct sort of query. This makes it easy to create the allow list based on a known good workload that you can create during development before your application goes live. 10:59 Lois: Does MySQL Enterprise Firewall operate seamlessly and transparently with applications? Ravish: Your application simply submits queries as normal and the firewall monitors incoming queries with no application changes required. When you use the Enterprise Firewall, you don't need to change your application. It can submit statements as normal to the MySQL server. This adds an extra layer of protection in your applications without requiring any additional application code so that you can protect against malicious SQL injection attacks. This not only applies to your application, but also to any client that a configured user runs. 11:37 Nikita: How does this firewall system work? Ravish: When the application submits a SQL statement, the firewall verifies that the statement is in a form that matches the policy defined in the allow list before it passes to the server for execution. It blocks any statement that is in a form that's outside of policy. In many cases, a badly formed query can only be executed if there is some bug in the application's data validation. You can use the firewall's detection and alerting features to learn when it blocks such a query, which will help you quickly detect such bugs, even when the firewall continues to block the malicious queries. 12:14 Lois: Can you take us through some of the encryption and masking features available in MySQL Enterprise Edition? Ravish: Transparent data encryption is a great way to protect against physical security disclosure. If someone gains access to the database files on the file system through a vulnerability of the operating system, or even if you've had a laptop stolen, your data will still be protected. This is called Data at Rest Encryption. It protects not only the data rows in tablespaces, but also other locations that store some version of the data, such as undo logs, redo logs, binary logs, and relay logs. It is strong encryption using the AES-256 algorithm. Once we enable transparent data encryption, it is, of course, transparent to the client software, applications, and users. Applications continue to submit SQL statements, and the encryption and decryption happen in flight. The application code does not need to change. All data types, table structure, and database names remain the same. It's even transparent to the DBAs. The same data types, table structure, and so on is still how the DBA interacts with the system while creating indexes, views, and procedures. In fact, DBAs don't even need to be in possession of any encryption keys to perform their admin tasks. It is entirely transparent. 13:32 Nikita: What kind of management is required for encryption? Ravish: There is, of course, some key management required at the outset. You must keep the keys safe and put policies in place so that you store and rotate keys effectively, and ensure that you can recover those keys in the event of some disaster. This key management integrates with common standards, including KMIP and KMS. 13:53 Lois: Before we close, I want to ask you about the role of data masking in MySQL.
Ravish: Data masking is when we replace some part of the private information with a placeholder. You can mask portions of a string based on the string position using the letter X or some other character. You can also create a table that contains a dictionary of suitable replacement words and use that dictionary to mask values in your data. There are specific functions that work with known formats of data, for example, social security numbers as used in the United States, national insurance numbers from the United Kingdom, and Canadian social insurance numbers. You can also mask various account numbers, such as primary account numbers like credit cards or IBAN numbers as used in the European banking system. There are also functions to generate random values, which can be useful in test databases. This might be a random number within some range, or an email address, or a compliant credit card number, or social security number. You can also create random information using the dictionary table that contains suitable example values. 14:58 Nikita: Thank you, Ravish, for taking us through MySQL security. We really cannot overstate the importance of this, especially in today's data-driven world. Lois: That's right, Niki. Cyber threats are increasingly sophisticated these days. You really have to be on your toes when it comes to security. If you're interested in learning more about this, the MySQL 8.4 Essentials course on mylearn.oracle.com is a great next step. Nikita: We'd also love to hear your thoughts on our podcast so please feel free to share your comments, suggestions, or questions by emailing us at ou-podcast_ww@oracle.com. That's ou-podcast_ww@oracle.com. In our next episode, we'll journey into the world of MySQL backups. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 15:51 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
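For reference, the username@host accounts and layered GRANT statements Ravish describes look like this in SQL. A minimal sketch with hypothetical account, schema, and column names:

    -- Account that may connect only from the 192.0.2.x subnet
    CREATE USER 'app_user'@'192.0.2.%' IDENTIFIED BY 'example-password';

    -- Database-level privileges on every table in the sales schema
    GRANT SELECT, INSERT, UPDATE ON sales.* TO 'app_user'@'192.0.2.%';

    -- Column-level privilege: this account may read only these two columns
    GRANT SELECT (name, email) ON sales.customers TO 'app_user'@'192.0.2.%';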
Explore the essentials of MySQL database design with Lois Houston and Nikita Abraham, who team up with MySQL expert Perside Foster to discuss key storage concepts, transaction support in InnoDB, and ACID compliance. You'll also get tips on choosing the right data types, optimizing queries with indexing, and boosting performance with partitioning. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! Last week, we looked at installing MySQL and in today's episode, we're going to focus on MySQL database design. Lois: That's right, Niki. Database design is the backbone of any MySQL environment. In this episode, we'll walk you through how to structure your data to ensure smooth performance and scalability right from the start. 00:58 Nikita: And to help us with this, we have Perside Foster joining us again. Perside is a MySQL Principal Solution Engineer at Oracle. Hi Perside, let's start with how MySQL handles data storage on the file system. Can you walk us through the architecture? Perside: In the MySQL architecture, the storage engine layer is part of the server process. Logically speaking, it comes between the parts of the server responsible for inputting, parsing, and optimizing SQL and the underlying file systems. The standard storage engine in MySQL is called InnoDB. But other storage engines are also available. InnoDB supports many of the features that are required by a production database system. Other storage engines have different sets of features. For example, MyISAM is a basic fast storage engine but has fewer reliability features. NDB Cluster is a scalable distributed storage engine. It runs on multiple nodes and uses additional software to manage the cluster. 02:21 Lois: Hi Perside! Going back to InnoDB, what kind of features does InnoDB offer? Perside: The storage engine supports many concurrent users. It also keeps their changes separate from each other. One way it achieves this is by supporting transactions. Transactions allow users to make changes that can be rolled back if necessary and prevent other users from seeing those changes until they are committed or saved persistently. The storage engine also enables referential integrity. This is to make sure that data in a dependent table refers only to valid source data. For example, you cannot insert an order for a customer that does not exist. It stores row data on disk in a B-tree structure and uses fast algorithms to insert rows in the correct place. This is done so that the data can be retrieved quickly. It uses a similar method to store indexes. This allows you to run queries based on a sort order that is different from the row's natural order.
InnoDB has its own buffer pool. This is a memory cache that stores recently accessed data. And as a result, queries on active data are much faster than queries that read from the disk. InnoDB also has performance features such as multithreading and bulk insert optimization. 04:13 Lois: So, would you say InnoDB is generally the best option? Perside: When you install MySQL, the standard storage engine is InnoDB. This is generally the best choice for production workloads that need both reliability and high performance. It supports transaction syntax, such as commit and rollback, and is fully ACID compliant. 04:41 Nikita: To clarify, ACID stands for Atomicity, Consistency, Isolation, and Durability. But could you explain what that means for anyone who might be new to the term? Perside: The A in ACID stands for atomic. This means your transaction can contain multiple statements, but the transaction as a whole is treated as one change that succeeds or fails. Consistent means that transactions move the system from one consistent state to another. Isolated means that changes made during a transaction are isolated from other users until that transaction completes. And durable means that the server ensures that the transaction is persisted or written to disk once it completes. 05:38 Lois: Thanks for breaking that down for us, Perside. Could you tell us about the data encryption and security features supported by InnoDB? Perside: InnoDB supports data encryption, which keeps your data secure on the disk. It also supports compression, which saves space at the cost of some extra CPU usage. You can configure an InnoDB cluster of multiple MySQL server nodes across multiple hosts to enable high availability. Transaction support is a key part of any reliable database, particularly when multiple concurrent users can change data. By default, each statement commits automatically so that you don't have to type commit every time you update a row. You can open a transaction with the statement START TRANSACTION or BEGIN, which is synonymous. 06:42 Nikita: Perside, what exactly do the terms "schema" and "database" mean in the context of MySQL, and how do they relate to the storage structure of tables and system-level information? Perside: Schema and database both refer to collections of tables and other objects. In some platforms, a schema might contain databases. In MySQL, the word schema is a synonym for database. In InnoDB and some other storage engines, each database maps to a directory on the file system, typically in the data directory. Each table has its row data stored in a file. In InnoDB, this file is the InnoDB tablespace, although you can choose to store tables in other tablespaces. MySQL uses some databases to store or present system-level information. The MySQL and information schema databases are used to store and present structural information about the server, including authentication settings and table metadata. You can query performance metrics from the performance schema and sys databases. If you have configured a highly available InnoDB cluster, you can examine its configuration from the MySQL InnoDB cluster metadata database. 08:21 Lois: What kind of data types does MySQL support? Perside: MySQL supports a number of data types with special characteristics. BLOB stands for Binary Large Object. Columns that specify this type can contain large chunks of binary data. For example, JPG pictures or MP3 audio files.
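For instance, a hypothetical table with a BLOB column might be declared like this (a sketch, with made-up names):

CREATE TABLE media (
  id    INT PRIMARY KEY,
  title VARCHAR(200),
  photo BLOB    -- binary data such as a JPG picture or an MP3 clip
) ENGINE=InnoDB;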
You can further specify the amount of storage required by specifying the subtype-- for example, TINYBLOB or LONGBLOB. Similarly, you can store large amounts of text data in TEXT, TINYTEXT, and so on. These types, BLOB and TEXT, share the same characteristic, that they are not stored in the same location as other data from the same row. This is to improve performance because many queries against the table do not query BLOB or TEXT data contained within the table. MySQL supports geographic or spatial data and queries on that data. These include ways to represent points, lines, polygons, and collections of such elements. The JSON data type enables you to use MySQL as a document store. A column of this type can contain complete JSON documents in each row. And MySQL has several functions that enable querying and searching for values within such documents. 10:11 Adopting a multicloud strategy is a big step towards future-proofing your business and we're here to help you navigate this complex landscape. With our suite of courses, you'll gain insights into network connectivity, security protocols, and the considerations of working across different cloud platforms. Start your journey to multicloud today by visiting mylearn.oracle.com. 10:38 Nikita: Welcome back. Perside, how do indexes improve the performance of MySQL queries? Perside: Indexes make it easier for MySQL to find specific rows. This doesn't just speed up queries, but also ensures that newly inserted rows are placed in the best position in the data file so that future queries will find them quickly. 11:03 Nikita: And how do these indexes work exactly? Perside: Indexes work by storing the row data or a subset of the row data in some defined order. An index can be ordered on some non-unique value, such as a person's name. Or you can create an index on some value that must be unique within the table, such as an ID. The primary index, sometimes called a clustered index, is the complete table data stored on a unique value called a Primary Key. 11:38 Lois: Ok. And what types of indices are supported by InnoDB? Perside: InnoDB supports multiple index types. Row data in most secondary indexes is stored in a BTREE structure. This stores data in specific buckets based on the index key using fixed-size data pages. HASH indexes are supported by some storage engines, including the memory storage engine. InnoDB has an adaptive HASH feature, which kicks in automatically for small tables and workloads that benefit from them. Spatial data can be indexed using the RTREE structure. 12:25 Nikita: What are some best practices we should follow when working with indexes in MySQL? Perside: First, you should create a Primary Key for each table. This value is unique for each row and is used to order the row data. InnoDB doesn't require that tables have an explicit Primary Key, but if you don't set one, it creates a hidden Primary Key. Each secondary index is a portion of the data ordered by some other column. And internally, each index entry uses the Primary Key as a lookup back to the rest of the row. If your Primary Key is large or complex, this increases the storage requirement of each index. And every time you modify a row, MySQL must update every affected index in the background. The more indexes you have on a table, the slower every insert operation will be. This means that you should only create indexes that improve query performance for your specific workload. The sys schema and MySQL Enterprise Monitor have features to identify indexes that are unused.
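As a quick sketch of those points, here is how you might add a non-unique secondary index on a hypothetical people table, and then use the sys schema to look for indexes that have never been used. sys.schema_unused_indexes is a standard sys-schema view; its results reset when the server restarts.

CREATE INDEX idx_last_name ON people (last_name);  -- non-unique secondary index
-- Identify indexes that have not been used since the server started:
SELECT * FROM sys.schema_unused_indexes;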
Use prefix and compound keys to reduce indexes. A prefix key contains only the first part of a string. This can be particularly useful when you have large amounts of text in an index key and want to index based on the first few characters. A compound key contains multiple columns, for example, last name and first name. This also speeds up queries where you're looking for only those values because the secondary index can fulfill the query without requiring a lookup back to the primary index. 14:35 Lois: Before we let you go, can you explain what table partitioning is? Perside: Table partitioning is enabled by using a plugin. When you partition a table, you divide its content according to certain rules. You might store portions of the table based on the range of values in a column. For example, storing all sales for 2024 in a single partition. A partition based on a list enables you to store rows with specific values in the partition column. When you partition by hash or key, you distribute rows somewhat evenly between partitions. This means that you can distribute a large table across multiple disks, or you can place more frequently accessed data on faster storage. EXPLAIN works with partitioning. Simply prefix any query that uses partitioned data with EXPLAIN, and the output shows information about how the optimizer will use the partitions. Partitioning is one of the features that is only fully supported in Enterprise Edition. 15:57 Lois: Perside, thank you so much for joining us today. In our next episode, we'll dive deep into MySQL security. Nikita: And if you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the MySQL 8.4 Essentials course. Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 16:18 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
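Before moving on, here is what the range partitioning and EXPLAIN usage from that last answer might look like in practice. A minimal sketch with a hypothetical sales table:

CREATE TABLE sales (
  id      INT NOT NULL,
  sold_on DATE NOT NULL
)
PARTITION BY RANGE (YEAR(sold_on)) (
  PARTITION p2023 VALUES LESS THAN (2024),
  PARTITION p2024 VALUES LESS THAN (2025),  -- all sales for 2024 in a single partition
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- Prefixing the query with EXPLAIN shows which partitions the optimizer will read:
EXPLAIN SELECT COUNT(*) FROM sales
WHERE sold_on BETWEEN '2024-01-01' AND '2024-12-31';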
In this episode, Lois Houston and Nikita Abraham discuss the basics of MySQL installation with MySQL expert Perside Foster. Perside covers every key step, from preparing your environment and selecting the right software, to installing MySQL, setting up secure initial user accounts, configuring the system, and managing updates efficiently. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Welcome back to another episode of the Oracle University Podcast. I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs. Lois: Hi everyone! In our last episode, we spoke about the Oracle MySQL ecosystem and its various components. We also discussed licensing, security, and some key tools. What's on the agenda for today, Niki? 00:52 Nikita: Well Lois, today, we're going beyond tools and features to talk about installing MySQL. Whether you're setting up MySQL for the first time or looking to understand its internal structure a little better, this episode will be a valuable guide. Lois: And we're lucky to have Perside Foster back with us. Perside is a MySQL Principal Solution Engineer at Oracle. Hi Perside! Say I wanted to get started and install MySQL. What factors should I keep in mind before I do that? 01:23 Perside: The first thing to consider is the environment for the database server. MySQL is supported on many different Linux distributions. You can also run it on Windows or Apple macOS. You can run MySQL on a variety of host platforms. You can use dedicated servers in a server room or virtual machines in a data center. Developers might prefer to deploy on Docker or Kubernetes containers. And don't forget, you can deploy HeatWave, the MySQL cloud version, in many different clouds. MySQL has great multithreading capability. It also has support for Non-Uniform Memory Access or NUMA. This is particularly important if you run large systems with hundreds of concurrent connections. MySQL's storage engine, InnoDB, makes effective use of your available memory. It stores your active data in a buffer pool. This greatly improves access time compared to reading straight from disk. Of course, SSDs and other solid state media are much faster than hard disks. But don't forget, MySQL can make full use of that performance benefit too. Redundancy is very important for the MySQL server. Hardware with redundant power supply, storage media, and network connections can make all the difference to your uptime. Without redundancy, a single point of failure will bring down the server if it fails. 03:26 Nikita: Got it. Perside, from where can I download the different editions of MySQL? Perside: Our most popular software is the MySQL Community Edition. It is available at no cost from mysql.com for many platforms. This version is why MySQL is the most popular database for web applications.
And it is also open source. MySQL Enterprise Edition is the commercial edition. It is fully supported by Oracle. You can get it from support.oracle.com as an Oracle customer. If you want to try out the enterprise features but are not yet a customer, you can get the latest version of MySQL as a trial edition from edelivery.oracle.com. Because MySQL is open source, you can get the source code from either mysql.com or GitHub. Most people don't need the source. But any developer who wants to modify the code or even contribute back to the project is welcome to do so. 04:43 Lois: Perside, can you walk us through MySQL's release model? Perside: This is divided into LTS and Innovation releases, each with a different target audience. LTS stands for long-term support. MySQL 8.4 is an LTS release and will be supported for several years. LTS releases are feature-stable. When you install an LTS release, you can apply future bug fixes and security patches without changing any behavior in the product. The bug fixes and security patches are designed to be backward compatible. This means you can upgrade easily from previous releases. LTS releases come every two years. This allows you to maintain a stable system without having to change your underlying application too frequently. You will not be forced to upgrade after two years. You can continue to enjoy support for an LTS release for up to eight years. Along with LTS releases, we also have Innovation releases. These contain the latest leading-edge features that are developed even in the middle of an LTS cycle. You can upgrade from LTS to Innovation and back again, depending on which features you require in your application. Innovation releases have a much more rapid cadence. You can get the latest features every quarter. This means Innovation releases are supported only for their specific release. So, if you're on the Innovation track, you must upgrade more frequently. All editions of MySQL are shipped as both LTS and Innovation releases. This includes the self-managed editions and also HeatWave in the cloud. You can treat both LTS and Innovation releases as production-ready. This means they are generally available releases. Innovation does not mean beta quality software. You get the same quality support from Oracle whether you're using LTS or Innovation software. The MySQL client software and other tools will operate with both LTS and Innovation releases. 07:43 Nikita: What are connectors in the context of MySQL? Perside: Connectors are the language-specific software components that connect your application to MySQL. You should use the latest version of connectors. Connectors are also production-ready, generally available software. They will work with any version of MySQL that is supported at the time of the connector's release. 08:12 Nikita: How does MySQL integrate with Docker and other container platforms? Perside: You might already be familiar with the Docker Store. It is used for getting containerized images of software. As an Oracle customer, you might be familiar with My Oracle Support. It provides support, updates, and patches for all supported Oracle software. MySQL works well with virtualization and container platforms, including Docker. You can get images of the Community Edition on Docker Hub. If you are an Enterprise Edition customer, you can get images from the Docker Store for MySQL, from My Oracle Support, or from the Oracle Container Registry. 09:04 Lois: What resources are available for someone who wants to know more about MySQL?
Perside: MySQL has detailed documentation. You should familiarize yourself with the documentation as you prepare to install MySQL. The reference manual for both Community and Enterprise editions is available at the Developer Zone at dev.mysql.com. Oracle customers also have access to the knowledge base at support.oracle.com. It contains support information on use cases and reference architectures. The product team regularly posts announcements and technical articles to several blogs. These blogs often contain pre-release announcements of upcoming features to help you prepare for your next project. Also, you'll find deep dives into technical topics and complex problems that MySQL solves. This includes some problems specific to highly available architecture. We also feature individual blogs from high-profile members of our team. These include the MySQL Community evangelist lefred. He posts about upcoming events and interesting features. Also, Dimitri Kravchuk offers blogs that provide deep dives into performance. 10:53 Nikita: Ok, now that I have all this information and am prepped and ready, how do I actually install MySQL on my operating system? What's the process like? Perside: You can install MySQL on various operating systems, depending on your needs. These might include several distributions of Linux or UNIX, Windows, macOS, Oracle Linux based on the Unbreakable Enterprise Kernel, Solaris, and FreeBSD. As always, the MySQL documentation provides full details on supported operating systems. It also provides the specific installation steps for each of the operating systems. Plus, it tells you how to perform the initial configuration and further administrative steps. If you're installing on Windows, you have a couple of options. First, the MySQL Installer utility is the easiest way to install MySQL. It installs MySQL and performs the initial configuration based on options that you choose at installation time. It includes not only the MySQL server, but also the most important connectors, the MySQL Shell client, the MySQL Workbench client with its user interface, and common utilities for troubleshooting and administration. It also installs several sample databases, models, and documentation. It's the easiest way to install MySQL because it uses an installation wizard. It lets you select your installation target location, what components to install, and other options. 12:47 Lois: But what if I want to have more control? Perside: For more control over your installation, you can install MySQL from the binary zip archive. This does not include sample or supporting tools and connectors, but only contains the application's binaries, which you can install anywhere you want. This means that the initial configuration is not performed by selecting an option through a wizard. Instead, you must configure the Windows service and MySQL configuration file yourself. Linux installation is more varied. This is because of the different distributions and also because of their flexibility. On many distributions of Linux, you can use the package manager native to that distribution. For example, you can use the yum package manager in Oracle Linux to install RPM files. You can also use a binary archive to install only the files. The method you choose depends on several factors: how much you know about MySQL files and configuration, and about the operating system on which you're going to do the installation;
any applicable standards or operating procedures within your own company's IT infrastructure; how much control you need over this installation; and how flexible a method you need. For example, the RPM package for Oracle Linux installs the files in specific locations and creates a specific service and MySQL user account. 14:54 Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today! 15:18 Nikita: Welcome back! Is there a way for me to extend the functionality of MySQL beyond its default capabilities? Perside: Much of MySQL's behavior is standard and always exists when you install the server. However, you can configure some additional behaviors by extending MySQL with plugins or components. Plugins operate closely with the server and by calling APIs exposed by the server, they add features by providing extra functions or variables. Not only do they add variables, they can also interact with the server's global variables and functions. That makes them work as if they are dynamically loadable parts of the server itself. Components also extend functionality, but they are separate from the server and extend its functionality through a service-based architecture. You can also extend MySQL in other ways-- by creating stored procedures, triggers, and functions with standard SQL and MySQL extensions to that language, or by creating external dynamically loaded user-defined functions. 16:49 Lois: Perside, can we talk about the initial user accounts? Perside: A MySQL account identifier is more than just a username and password. It consists of three elements, two that identify the account, and one that is used for authentication. The three elements are the username, which is used to log in from the client; the hostname, which identifies a computer or set of computers; and the password, which must be provided to gain access to MySQL. The hostname is a part of the account identifier that controls where the user can log in. It is typically a DNS computer name or an IP address. You can use a wildcard, the percent sign, to allow the named user to log in from any connected host, or you can use the wildcard as part of an IP address to allow login from a limited range of IP addresses. 17:58 Nikita: So, what happens when I install MySQL on my computer? Perside: When you first install MySQL on your computer, it installs several system accounts. The only user account that you can log in to is the administrative account. That's called the root account. Depending on the installation method that you use, you'll either see the initial root password on the console as you install the server, or you can read it from the log file. For security reasons, the password of a new account, such as the root account, must be changed. MySQL prevents you from executing any other operation with that account until you have changed the password. 18:46 Lois: What are the system requirements for installing and running MySQL? Perside: The MySQL service must run as a system-level user. Each operating system has its own method for creating such a user. All operating systems follow the same general principles.
However, when using the MySQL installer on Windows or the RPM package installation on Oracle Linux, each installation process creates and configures the system-level user. 19:22 Lois: Perside, since MySQL is always evolving, how do I upgrade it when newer versions become available? Perside: When you upgrade MySQL, you have to bring the server down so that the upgrade process can replace all of the relevant binary executable files. And if necessary, update the data and configuration to suit the new software. The safest thing to do is to back up your whole MySQL environment. This includes not only your data and files, such as binaries and configuration files, but also logical elements, including triggers, stored procedures, user configuration, and anything else that's required to rebuild your system. The upgrade process gives you two main options. An in-place upgrade uses your existing data directory. After you shut down your MySQL server process, you either replace the package or binaries with new versions, or you install the new binary executables in a new location and point your symbolic links to this new location. The server process detects that the data directory belongs to an earlier version and performs any required upgrade checks. 20:46 Lois: Thank you, Perside, for taking us through the practical aspects of using MySQL. If you want to learn about the MySQL architecture, visit mylearn.oracle.com and search for the MySQL 8.4 Essentials course. Nikita: Before you go, we wanted to take a minute to thank you for taking the Oracle University Podcast survey that we put out at the end of last year. Your insights were invaluable and will help shape our future episodes. Lois: And if you missed taking the survey but have feedback to share, you can write to us at ou-podcast_ww@oracle.com. That's ou-podcast_ww@oracle.com. We'd love to hear from you. Join us next week for a discussion on MySQL database design. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 21:45 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
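To close out the installation discussion with something concrete, here is a sketch of the account statements Perside touched on. The usernames, password values, and address range are hypothetical; the % wildcard in the hostname element works as she describes.

-- Change the initial root password (required before any other operation):
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewStrongPassword!3';

-- An account that can log in from any connected host:
CREATE USER 'app'@'%' IDENTIFIED BY 'StrongPassword!1';

-- An account restricted to one subnet via a wildcard in the IP address:
CREATE USER 'report'@'192.168.1.%' IDENTIFIED BY 'AnotherPassword!2';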
In this episode, Lois Houston and Nikita Abraham are joined by MySQL Developer Advocate Scott Stroz to talk about MySQL Document Store, a NoSQL solution built on top of MySQL. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community MySQL: https://dev.mysql.com/doc/ Oracle MySQL Blog: https://blogs.oracle.com/mysql/ LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00;00;00;00 - 00;00;38;19 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Product Innovation and Go to Market Programs with Oracle University. And with me is Nikita Abraham, Principal Technical Editor. 00;00;38;22 - 00;00;59;15 Hi, everyone! For the last two weeks, we've been talking about MySQL and NoSQL. And in today's special episode, we're going to dive a little deeper and focus on MySQL Document Store with MySQL Developer Advocate Scott Stroz. Hi, Scott! Thanks for being here today. Why don't you tell us a little more about yourself? Hi, Niki. Hi, Lois. 00;00;59;19 - 00;01;16;10 I'm happy to be here with you guys. Like you said, I'm a developer advocate for MySQL. I've been a software developer for over 20 years. In that time frame, MySQL is the only thing in my development stack that hasn't changed. I used MySQL in my first job as a web developer, and I still use it today. 00;01;16;12 - 00;01;41;26 And for those who may not know, the best way to describe what a developer advocate does is our job is to make developers better at their job. Scott, we discussed NoSQL last week, but for anyone who missed that episode, can you give us a high-level explanation of what NoSQL means? Before I can explain NoSQL, we should probably go over what we mean by a relational database. 00;01;41;27 - 00;02;06;10 In a relational database, data is stored in tables. Each table consists of multiple columns, and each column holds a specific data type - a string, a number, a date, etc. In many cases, the data in one table relates to data in another table. This is where the relational part comes from, and data is stored in rows or records. In a relational database, data is often very structured. 00;02;06;12 - 00;02;31;29 SQL or structured query language is used to retrieve, update, add, or delete rows from the database. A NoSQL database, at its most basic level, is a storage mechanism that does not use the table structure I just mentioned. Data is often stored as JSON documents, as a blob of text. Our audience may find it interesting that NoSQL does not necessarily mean there is no SQL used at all. 00;02;32;01 - 00;02;58;25 In some cases, NoSQL actually stands for not only SQL. Interesting. So, what are JSON documents? JSON is an acronym for JavaScript Object Notation and it is a textual representation of a data structure. JSON objects are wrapped in curly braces and consist of key-value pairs. The values can be simple, such as strings or numbers, or they can also be other JSON objects or arrays.
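Since the conversation turns on JSON structure, here is a small, hypothetical document, wrapped in MySQL's JSON_PRETTY function so the example stays runnable as SQL:

SELECT JSON_PRETTY('{
  "name": "Ada",
  "address": { "city": "Boston" },
  "hobbies": ["chess", "hiking"]
}') AS doc;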
00;02;58;28 - 00;03;21;09 JSON arrays are wrapped in brackets and consist of comma-separated values that can be simple values again, such as numbers or strings. But they can also be other JSON objects or other arrays. This means that data in JSON objects can be nested with many levels. The best thing about JSON is that it's ubiquitous and can be used in almost any programming language. 00;03;21;11 - 00;03;41;21 I say almost any because I've not used every programming language. So, I'm covering myself just in case there's one out there that doesn't have JSON support. That's pretty good. Okay. It's easy to pick up on how to read it as well. When I first started using JSON, it was like trying to read The Matrix. But now I can read JSON just as easily as I can read a book. 00;03;41;22 - 00;04;03;08 Why would a developer choose to use a NoSQL solution? Can you give us a few examples of that? That is a great question, Niki. When starting out a new project, when a data structure doesn't exist, it may make sense to use a NoSQL solution. In other words, if the schema changes frequently, it may make sense not to have a schema. 00;04;03;10 - 00;04;22;25 Then, once the schema has matured, the data can be parsed out into a relational database model. I come from the school of thought that all data should be in tables and columns with the proper relationships defined and be very structured. But here's the thing that took me a while to accept. Not all data is structured and not all data needs to be related to other data. 00;04;23;00 - 00;04;49;12 Things like application configuration or user preferences most likely don't need to be stored in a relational database and may work best being stored as JSON. One of the biggest uses of storing JSON is ingesting data from third-party sources. Many applications use external APIs to retrieve data. In those cases, we have no control over the schema that's used for that data. 00;04;49;15 - 00;05;08;28 And trying to account for changes in the schema that will inevitably come is going to be a difficult task. So, storing that data in JSON makes a lot more sense. That makes sense. And then you can handle the JSON as you need to. Okay, let's get to our main topic of discussion for today. What is MySQL Document Store? 00;05;09;00 - 00;05;35;09 MySQL Document Store is a NoSQL implementation built on top of MySQL. JSON documents are stored in a MySQL database table using the InnoDB storage engine. CRUD operations - create, retrieve, update, and delete - are abstracted from the developer through an easy-to-use API. Application developers, whether it's web applications, mobile applications, or native operating system applications, communicate with MySQL Document Store over the X-protocol, which uses port 33060 instead of the standard port 3306. 00;05;35;11 - 00;06;00;10 The nomenclature of NoSQL databases differs from relational databases, right? Can you explain some of the basic terms that are used? Developers who come from a relational database background may initially be confused by some of the terms used to describe the structure where the documents are stored. 00;06;00;12 - 00;06;23;04 I know I was. We use three main terms to describe the structure of MySQL document store – schema, collection, and document. In relational database parlance, a schema would be akin to a database. A collection would be the same as a table, and a document, the actual JSON that we're storing, would be like a row in that table.
00;06;23;07 - 00;06;56;07 So, what happens under the covers when using MySQL Document Store? So, any time we use the document store API, the commands are turned into SQL commands that are then executed on the database server. When developers use the MySQL Document Store API to create a new schema, behind the scenes, MySQL creates a new database, which is the same as running a SQL query to create a new database. When a new collection is created, MySQL creates a new table in the database using a create table query, and it adds three columns to that table. 00;06;56;09 - 00;07;24;09 The first is _id. This column serves as the primary key. When a document is saved to the database and a key named _id is not provided, MySQL autogenerates the id, saves it to this column, and then also injects it into the JSON document. The next column is doc. This column stores the JSON documents using the JSON data type. And then the last column is _json_schema. 00;07;24;12 - 00;07;57;09 And it's used to validate the schema of documents that are added to the collection. CRUD operations follow the same process. For instance, when we make a call to the API to retrieve documents, on the backend, that command is converted into a SELECT statement using native JSON functions to return the document. If developers want to see what commands are executed when using MySQL Document Store, they can enable the general log setting and then view the log after executing API commands.
00;10;00;15 - 00;10;24;28 You may need to make some changes to the network infrastructure to allow traffic over port 33060, but for a network administrator, that should be easy to accomplish. MySQL Document Store is also available in all editions. It's available in Enterprise Edition and the Community Edition as well. And I should note that Oracle Cloud Infrastructure is currently the only cloud provider supporting MySQL Document Store for their MySQL cloud implementations. 00;10;25;00 - 00;10;48;27 Scott, what programming languages are supported for use with MySQL Document Store? There are quite a few languages that are supported. We have connectors or SDKs, as some people call them, for Java, which also works with other Java-based languages, such as Groovy and Kotlin. We also have connectors for C++, Python, PHP, .Net, Node.js and MySQL Shell. 00;10;49;00 - 00;11;14;18 Our listeners have probably heard of most of these with the exception of MySQL Shell. What is that? MySQL Shell is a command line interface that allows us to connect to and manage MySQL database instances. We can use it to create document store schemas and collections easily, but it can do so much more. We can use it to manage and configure MySQL instances, including creating and configuring server replication and clustering. 00;11;14;20 - 00;11;39;15 It even offers a sandbox feature where we can quickly spin up MySQL instances for testing replication and clustering configurations without the need to stand up full MySQL server instances. There are three modes in MySQL Shell. By default, MySQL Shell starts in JavaScript mode where the commands we use follow JavaScript syntax. There is a Python mode where the commands we use follow Python syntax. 00;11;39;17 - 00;12;05;17 And finally, there is SQL mode where we can run standard SQL queries. SQL mode functions very much like the older MySQL command line client. And what are the advantages of using MySQL Document Store? I think the best feature of MySQL Document Store is that because the documents are stored in a database table using the JSON data type, we can use native SQL to run complex queries for reports and analytics. 00;12;05;19 - 00;12;27;13 MySQL has quite a few native functions for working with JSON, which can help extract data from a document store easier than in other solutions. Another big advantage is that MySQL Document Store is fully ACID compliant because the JSON documents are stored using the InnoDB storage engine. What does it mean for a database to be ACID compliant? 00;12;27;15 - 00;12;55;27 In databases, data is updated, added, deleted, etc. in transactions or steps. Sometimes, these transactions are a single query. Other times they may be multiple queries run in succession. The acronym ACID, which stands for atomicity, consistency, isolation, and durability, ensures that these transactions are processed in a reliable manner. Atomicity guarantees that each transaction is treated as a single unit. 00;12;55;29 - 00;13;30;24 If one part of the transaction fails, the entire transaction fails. Consistency ensures that every part of the transaction follows all database constraints. If the data in any part of the transaction violates these constraints, the entire transaction fails. Isolation means that transactions are run in isolation so that they do not interfere with each other. And finally, durability means that once a transaction is committed, meaning all parts of the transaction have succeeded, the data is written to the database.
A database is considered ACID compliant when it adheres to all of these. 00;13;30;26 - 00;13;55;16 Before we let you go, if people want more information about MySQL Document Store, where can they find it? I think the best place to get more information is from the documentation on the MySQL site at dev.mysql.com/doc. There are also quite a few posts about MySQL Document Store on the MySQL blog at blogs.oracle.com/mysql. 00;13;55;19 - 00;14;15;06 Wonderful! Thank you so much, Scott, for taking the time to be with us today. Oh, thanks for having me. Well, folks, that brings us to the end of this episode. We hope you've learned something new and that you'll join us next week for a discussion on Oracle Cloud Infrastructure's maximum security architecture. Until then, this is Lois Houston and Nikita Abraham signing off. 00;14;15;09 - 00;16;57;25 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
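To make this episode's under-the-covers behavior and schema validation concrete, here is a rough SQL sketch. The exact DDL that Document Store generates differs in its details, and the schema, collection, and constraint names here are hypothetical:

-- Creating a schema through the API is roughly:
CREATE DATABASE myschema;

-- A collection is, approximately, a table with doc and _id columns
-- (the generated table also carries a _json_schema column for validation):
CREATE TABLE myschema.users (
  doc JSON,
  _id VARBINARY(32) GENERATED ALWAYS AS
      (JSON_UNQUOTE(JSON_EXTRACT(doc, '$._id'))) STORED NOT NULL,
  PRIMARY KEY (_id)
);

-- A retrieval call becomes a SELECT built on native JSON functions:
SELECT doc->>'$.name' FROM myschema.users WHERE doc->>'$.city' = 'Boston';

-- Schema validation corresponds to a check constraint; JSON_SCHEMA_VALID
-- is available in MySQL 8.0.17 and later:
ALTER TABLE myschema.users ADD CONSTRAINT users_doc_valid CHECK (
  JSON_SCHEMA_VALID('{"type": "object", "required": ["name"]}', doc)
);

-- To see the SQL that API calls generate, enable the general log:
SET GLOBAL general_log = 'ON';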
In this episode, Lois Houston and Nikita Abraham are joined by Autumn Black to discuss MySQL Database, a fully-managed database service powered by the integrated HeatWave in-memory query accelerator. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Deepak Modi, Ranbir Singh, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00;00;00;00 - 00;00;39;08 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. Hello and welcome to the Oracle University Podcast. You're listening to our second season, Oracle Database Made Easy. I'm Lois Houston, Director of Product Innovation and Go to Market Programs with Oracle University. 00;00;39;10 - 00;01;08;03 And with me is Nikita Abraham, Principal Technical Editor. Hi, everyone. In our last episode, we had a really fascinating conversation about Oracle Machine Learning with Cloud Engineer Nick Commisso. Do remember to catch that episode if you missed it. Today, we have with us Autumn Black, who's an Oracle Database Specialist. Autumn is going to take us through MySQL, the free version and the Enterprise Edition, and MySQL Database Service. 00;01;08;05 - 00;01;39;16 We're also going to ask her about HeatWave. So let's get started. Hi, Autumn. So tell me, why is MySQL such a popular choice for developers? MySQL is the number one open-source database and the second most popular database overall after the Oracle Database. According to a Stack Overflow survey, MySQL has been for a long time, and remains, the number one choice for developers, primarily because of its ease of use, reliability, and performance. 00;01;39;17 - 00;02;08;22 And it's also big with companies? MySQL is used by the world's most innovative companies. This includes Twitter, Facebook, Netflix, and Uber. It is also used by students and small companies. There are different versions of MySQL, right? What are the main differences between them when it comes to security, data recovery, and support? MySQL comes in two flavors: free version or paid version. 00;02;08;24 - 00;02;45;05 MySQL Community, the free version, contains the basic components for handling data storage. Just download it, install it, and you're ready to go. But remember, free has costs. That stored data is not exactly secure and data recovery is not easy and sometimes impossible. And there is no such thing as free MySQL Community support. This is why MySQL Enterprise Edition was created, to provide all of those missing important pieces: high availability, security, and Oracle support from the people who build MySQL. 00;02;45;10 - 00;03;09;24 You said MySQL is open source and can be easily downloaded and run. Does it run on-premises or in the cloud? MySQL runs on a local computer, company's data center, or in the cloud. Autumn, can we talk more about MySQL in the cloud? Today, MySQL can be found in Amazon RDS and Aurora, Google Cloud SQL, and Microsoft Azure Database for MySQL. 00;03;09;27 - 00;03;35;23 They all offer a cloud-managed version of MySQL Community Edition with all of its limitations.
These MySQL cloud services are expensive and it's not easy to move data away from their cloud. And most important of all, they do not include the MySQL Enterprise Edition advanced features and tools. And they are not supported by the Oracle MySQL experts. 00;03;35;25 - 00;04;07;03 So why is MySQL Database Service in Oracle Cloud Infrastructure better than other MySQL cloud offerings? How does it help data admins and developers? MySQL Database Service in Oracle Cloud Infrastructure is the only MySQL database service built on MySQL Enterprise Edition and 100% built, managed, and supported by the MySQL team. Let's focus on the three major categories that make MySQL Database Service better than the other MySQL cloud offerings: ease of use, security, and enterprise readiness. 00;04;07;03 - 00;04;44;24 MySQL DBAs tend to be overloaded with mundane database administration tasks. They're responsible for many databases, their performance, security, availability, and more. It is difficult for them to focus on innovation and on addressing the demands of lines of business. MySQL is fully managed on OCI. MySQL Database Service automates all those time-consuming tasks so they can improve productivity and focus on higher value tasks. 00;04;44;26 - 00;05;07;13 Developers can quickly get all the latest features directly from the MySQL team to deliver new modern apps. They don't get that on other clouds that rely on outdated or forked versions of MySQL. Developers can use the MySQL Document Store to mix and match SQL and NoSQL content in the same database as well as the same application. 00;05;07;19 - 00;05;30;26 Yes. And we're going to talk about MySQL Document Store in a lot more detail in two weeks, so don't forget to tune in to that episode. Coming back to this, you spoke about how MySQL Database Service or MDS on OCI is easy to use. What about its security? MDS security first means it is built on Gen 2 cloud infrastructure. 00;05;30;28 - 00;05;57;13 Data is encrypted for privacy. Data is on OCI block volumes. So what does this Gen 2 cloud infrastructure offer? Is it more secure? Oracle Cloud is secure by design and architected very differently from the Gen 1 clouds of our competitors. Gen 2 provides maximum isolation and protection. That means Oracle cannot see customer data and users cannot access our cloud control code. 00;05;57;15 - 00;06;27;09 Gen 2 architecture allows us to offer superior performance on our compute objects. Finally, Oracle Cloud is open. Customers can run Oracle software, third-party options, open source, whatever you choose without modifications, trade-offs, or lock-ins. Just to dive a little deeper into this, what kind of security features does MySQL Database Service offer to protect data? Data security has become a top priority for all organizations. 00;06;27;12 - 00;06;55;17 MySQL Database Service can help you protect your data against external attacks, as well as internal malicious users with a range of advanced security features. Those advanced security features can also help you meet industry and regulatory compliance requirements, including GDPR, PCI, and HIPAA. When a security vulnerability is discovered, you'll get the fix directly from the MySQL team, from the team that actually develops MySQL. 00;06;55;19 - 00;07;22;16 I want to talk about MySQL Enterprise Edition that you brought up earlier. Can you tell us a little more about it?
MySQL Database Service is the only public cloud service built on MySQL Enterprise Edition, which includes 24/7 support from the team that actually builds MySQL, at no additional cost. All of the other cloud vendors are using the Community Edition of MySQL, so they lack the Enterprise Edition features and tools. 00;07;22;22 - 00;07;53;24 What are some of the default features that are available in MySQL Database Service? MySQL Enterprise scalability, also known as the thread pool plugin, data-at-rest encryption, native backup, and OCI built-in native monitoring. You can also install MySQL Enterprise Monitor to monitor MySQL Database Service remotely. MySQL works well with your existing Oracle investments like Oracle Data Integrator, Oracle Analytics Cloud, Oracle GoldenGate, and more. 00;07;53;27 - 00;08;17;20 MySQL Database Service customers can easily use Docker and Kubernetes for DevOps operations. So how much of this is managed by the MySQL team and how much is the responsibility of the user? MySQL Database Service is a fully managed database service. A MySQL Database Service user is responsible for logical schema modeling, query design and optimization, and defining data access and retention policies. 00;08;17;22 - 00;08;44;26 The MySQL team is responsible for providing automation for operating system installation, database and OS patching, including security patches, backup, and recovery. The system backs up the data for you, but in an emergency, you can restore it to a new instance with a click. Monitoring and log handling. Security with advanced options available in MySQL Enterprise Edition. 00;08;44;28 - 00;09;01;18 And of course, maintaining the data center for you. To use MDS, users must have an OCI tenancy and a compartment, and belong to a group with the required policies. 00;09;01;21 - 00;09;28;28 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all of which is available free to subscribers. So get going. Pick a course of your choice, get certified, join the Oracle University Learning Community, and network with your peers. If you're already an Oracle MyLearn user, go to MyLearn to begin your journey. 00;09;29;03 - 00;09;40;24 If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. 00;09;40;27 - 00;10;05;20 Welcome back! Autumn, tell us about the system architecture of MySQL Database Service. A database system is a logical container for the MySQL instance. It provides an interface enabling management of tasks, such as provisioning, backup and restore, monitoring, and so on. It also provides a read and write endpoint, enabling you to connect to the MySQL instance using the standard protocols. 00;10;05;28 - 00;10;31;27 And what components does a MySQL Database Service DB system consist of? A compute instance, an Oracle Linux operating system, the latest version of MySQL server Enterprise Edition, a virtual network interface card, VNIC, that attaches the DB system to a subnet of the virtual cloud network, and network-attached high-performance block storage. Is there a way to monitor how the MySQL Database Service is performing? 00;10;31;29 - 00;10;59;29 You can monitor the health, capacity, and performance of your Oracle Cloud Infrastructure MySQL Database Service resources by using metrics, alarms, and notifications.
The MySQL Database Service metrics enable you to measure useful quantitative data about your MySQL databases, such as current connection information, statement activity, and latency; host CPU, memory, and disk I/O utilization; and so on. 00;11;00;03 - 00;11;23;15 You can use metrics data to diagnose and troubleshoot problems with MySQL databases. What should I keep in mind about managing the MySQL database? A stopped MySQL Database Service system stops billing for OCPUs, but you also cannot connect to the DB system. During MDS automatic update, the operating system is upgraded along with patching of the MySQL server. 00;11;23;17 - 00;11;49;15 Metrics are used to measure useful data about the MySQL Database Service system. You turn on automatic backups by updating the MDS configuration to enable them. MDS backups can be removed by using the details pages in OCI and clicking Delete. Thanks for that detailed explanation on MySQL, Autumn. Can you also touch upon MySQL HeatWave? Why would you use it over traditional methods of running analytics on MySQL data? 00;11;49;18 - 00;12;18;01 Many organizations choose MySQL to store their valuable enterprise data. MySQL is optimized for Online Transaction Processing, OLTP, but it is not designed for Online Analytic Processing, OLAP. As a result, organizations that need to efficiently run analytics on data stored in a MySQL database move their data to another database, such as Amazon Redshift, to run analytic applications. 00;12;18;04 - 00;12;41;22 MySQL HeatWave is designed to enable customers to run analytics on data that is stored in a MySQL database without moving data to another database. What are the key features and components of HeatWave? HeatWave is built on an innovative in-memory analytics engine that is architected for scalability and performance, and is optimized for Oracle Cloud Infrastructure, OCI. 00;12;41;24 - 00;13;05;29 It is enabled when you add a HeatWave cluster to a MySQL database system. A HeatWave cluster comprises a MySQL DB system node and two or more HeatWave nodes. The MySQL DB system node includes a plugin that is responsible for cluster management, loading data into the HeatWave cluster, query scheduling, and returning query results to the MySQL database system. 00;13;06;02 - 00;13;29;15 The HeatWave nodes store data in memory and process analytics queries. Each HeatWave node contains an instance of the HeatWave engine. The number of HeatWave nodes required depends on the size of your data and the amount of compression that is achieved when loading the data into the HeatWave cluster. Various aspects of HeatWave use machine-learning-driven automation that helps to reduce database administrative costs. 00;13;29;18 - 00;13;52;11 Thanks, Autumn, for joining us today. We're looking forward to having you again next week to talk to us about Oracle NoSQL Database Cloud Service. To learn more about MySQL Database Service, head over to mylearn.oracle.com and look for the Oracle Cloud Data Management Foundations Workshop. Until next time, this is Nikita Abraham and Lois Houston signing off. 00;13;52;14 - 00;16;33;05 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
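As a footnote to Autumn's HeatWave overview: once a HeatWave cluster has been added to a DB system, individual tables are loaded into the cluster's memory with statements like the following. This is a sketch; RAPID is the name HeatWave uses for its secondary engine, and the table name is hypothetical.

ALTER TABLE orders SECONDARY_ENGINE = RAPID;  -- mark the table for HeatWave
ALTER TABLE orders SECONDARY_LOAD;            -- load it into HeatWave node memory
-- Eligible analytics queries against orders can now be offloaded to HeatWave.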
In this episode, hosts Lois Houston and Nikita Abraham are joined once again by Alex Bouchereau to discuss how you can use Oracle Database Cloud Service to deploy Oracle Databases in the cloud. They also talk through the fundamentals of Oracle Cloud Infrastructure database system instances, including bare metal and virtual machine shapes and their storage architecture. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Deepak Modi, Ranbir Singh, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: http://traffic.libsyn.com/oracleuniversitypodcast/Oracle_University_Podcast_S02_EP04.mp3 00;00;00;00 - 00;00;38;21 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Product Innovation and Go to Market Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. 00;00;38;22 - 00;01;07;29 Hi, everyone. In our last episode, we discussed Oracle Exadata Cloud Service with our Oracle Database Specialist Alex Bouchereau. If you missed that episode, please remember to go back and give it a listen. We're so excited to have Alex with us again and today she's going to talk about the fundamentals of OCI database system instances on Oracle Cloud Infrastructure. We'll also ask her about the DB system for bare metal and virtual machine shapes. 00;01;08;02 - 00;01;40;22 Hi, Alex. Can you tell us about the Oracle Database Cloud Service? Oracle Database Cloud Service provides you the ability to deploy Oracle databases to the cloud. You can deploy Enterprise Edition or Standard Edition 2 and any database version from 11.2 and later. You have the option to deploy using virtual machine or bare metal shapes. Database Cloud Service VM and BM DB systems are deployed in Oracle Cloud Infrastructure regions around the world. 00;01;40;24 - 00;02;12;01 With Database Cloud Service, you manage the database instance, including provisioning, patching, backup, and disaster recovery using OCI cloud automation tools, such as the OCI console, CLI, and API. There is also an SDK that supports a number of different languages. Using VM shapes, you can deploy real application clusters and scale your storage requirements. Walk us through the various Oracle database editions. You know, from Standard Edition 2 to Enterprise Edition Extreme Performance. 00;02;12;03 - 00;02;37;15 What are the key differences in terms of features, add-ons, and licensing options? Beginning with Standard Edition 2, features included with the service are Multitenant Pluggable Database and Tablespace Encryption. Moving to Enterprise Edition, you add additional database features, such as Data Guard and the EM packs for data masking and subsetting, tuning, and diagnostics. 00;02;37;17 - 00;03;14;06 With Enterprise Edition High Performance, in addition to the base Enterprise Edition, you add Management and Lifecycle Management packs as well as Advanced Security, Database Vault, and Label Security.
And finally, Enterprise Edition Extreme Performance has all the previously discussed features, plus Active Data Guard, Oracle RAC, and Database In-Memory. Note that all packages include Oracle Database Transparent Data Encryption, TDE. You have an option to bring your own license, BYOL, which means you can use your organization's existing Oracle Database software licenses. 00;03;14;12 - 00;03;44;27 What's the difference between bare metal instances and virtual machines in the context of Oracle Cloud Infrastructure? OCI is the only public cloud that supports BM and VMs using the same set of APIs, hardware, firmware, software stack, and networking infrastructure. Bare metal instances are single tenant, and you have full control over the resources provisioned within the service. Virtual machines follow a multitenant model and share servers that are not overprovisioned. 00;03;45;00 - 00;04;11;00 So how is the customer billed for these services? For databases using bare metal and virtual machine infrastructure, Oracle Cloud Infrastructure uses per-second billing. This means that OCPU and storage usage is billed by the second, with a minimum usage period of one minute for virtual machine DB systems and one hour for bare metal DB systems. 00;04;11;00 - 00;04;39;29 For bare metal servers, there is an infrastructure charge along with the billing for OCPUs. Alex, can DBCS be customized? I mean, are there different kinds of DBCS offerings to fulfill the various needs of our customers? The maximum number of OCPUs, RAM, and storage for your database depends on the shape you choose. There are two types of DB systems on virtual machines. A single-node or one-node virtual machine DB system consists of one virtual machine.
Learn from experts, network with peers, and find out about the latest innovations when Oracle CloudWorld returns to Las Vegas from September 18 through 21. CloudWorld is the best place to learn about Oracle solutions from the people who build and use them. 00;06;49;12 - 00;07;17;22 In addition to your attendance at CloudWorld, your ticket gives you access to Oracle MyLearn and all of the cloud learning subscription content as well as three free certification exam credits. This is valid from the week you register through 60 days after the conference. So what are you waiting for? Register today. Learn more about Oracle CloudWorld at www.oracle.com/cloudworld. 00;07;17;24 - 00;07;46;08 Welcome back. Alex, can you explain how Automatic Storage Management interfaces with disks in OCI? ASM directly interfaces with the disks. When you provision a bare metal system, you will indicate the data storage percentage assigned to data storage, which is user data and database files. Your choice is 40% or 80%. The remaining percentage is assigned to RECO storage consisting of database redo logs, archive logs, and recovery manager backups. 00;07;46;08 - 00;08;20;14 Storage is continuously monitored for any faults. Any disk that fails will be taken offline. Whenever the shapes list a maximum amount of usable space and data in RECO, these reservations for rebalancing are already taken into account. The root user has complete control over the storage subsystem so customization and tuning are possible. But the service already optimally configures these by default according to best practices. 00;08;20;18 - 00;08;58;20 So what are the key benefits of running databases on virtual machines? Database Service on VMs is a Database Service offering that enables customers to build, scale, and manage full-featured Oracle databases on virtual machines. The key benefits of running databases on VMs are cost-effectiveness, ease of getting started, durable and scalable storage, and the ability to run real application clusters, RAC, to improve availability. Database Cloud Service on VMs is built on the same high performance, highly available cloud infrastructure used by all Oracle Cloud Infrastructure services. 00;08;58;22 - 00;09;23;21 RAC databases will run on a single availability domain (AD) while ensuring each node is on a separate physical RAC, ensuring high availability. When you launch a virtual machine DB system, you select the Oracle Database edition and version that applies to the database on that DB system. The selected edition cannot be changed. Depending on your selected Oracle database edition and version, your DB system can support multiple pluggable databases, PDBs. 00;09;23;28 - 00;09;59;17 Are there various kinds of compute shapes available with DBCS? Virtual machine database systems offer compute shapes with 1 to 24 cores to support customers with small to large database processing requirements. A virtual machine DB system database uses Oracle Cloud Infrastructure block storage instead of local storage. You specify a storage size when you launch the DB system, and you can scale up the storage as needed at any time. 00;09;59;17 - 00;10;27;26 On-demand up and down scaling of database OCPUs use without interrupting database operations on two-node Oracle Real Application Cluster deployments allows customers to meet application needs while minimizing costs. To change to the number of CPU cores on an existing virtual machine DB system, you will change the shape of that DB system. 
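As a rough illustration of the scale-by-changing-the-shape workflow Alex just described, here is a hedged sketch using the OCI Python SDK. The client and model names follow the SDK's database service (`update_db_system` with `UpdateDbSystemDetails`), but the OCID, shape name, and storage size are placeholders, and the exact fields should be confirmed against the SDK documentation rather than taken as definitive.

```python
# Hedged sketch: scale a VM DB system by changing its shape and growing
# its block storage. Assumes the `oci` package and a valid ~/.oci/config.
import oci

config = oci.config.from_file()            # default OCI config file
db_client = oci.database.DatabaseClient(config)

details = oci.database.models.UpdateDbSystemDetails(
    shape="VM.Standard2.8",                # hypothetical larger VM shape = more cores
    data_storage_size_in_gbs=512,          # scale block storage up online
)
response = db_client.update_db_system(
    db_system_id="ocid1.dbsystem.oc1..example",   # placeholder OCID
    update_db_system_details=details,
)
print(response.data.lifecycle_state)       # e.g., UPDATING while the change applies
```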
Can you tell us about the one-node virtual machine DB systems in OCI? 00;10;27;28 - 00;11;01;22 For one-node virtual machine DB systems, Oracle Cloud Infrastructure provides a fast provisioning option that allows you to create a DB system using Logical Volume Manager, LVM, as your storage management software. Fast provisioning for an Oracle Database with Logical Volume Manager increases developer productivity. Can we also choose to use Automatic Storage Management with DBCS? You have a choice to select ASM storage management. You will select Oracle Grid Infrastructure when you provision your DB system to use Oracle Automatic Storage Management. 00;11;01;25 - 00;11;32;03 This is required for RAC deployments. Virtual machine DB system deployments with ASM support database versions 11.2 and above, whereas virtual machine DB system deployments with LVM support database versions 12.2 and above. When you provision your virtual machine DB system, you will identify the amount of available storage. This is the amount of block storage, in gigabytes, to allocate to the virtual machine DB system. 00;11;32;05 - 00;11;56;05 Available storage can be scaled up or down as needed after provisioning your DB system. You will also indicate total storage, the total block storage, in gigabytes, used by the virtual machine DB system. The amount of available storage you select determines this value. Oracle charges for the total storage used. What our customers really want to know is how their data is kept secure. Lay it out for us, Alex. 00;11;56;11 - 00;12;26;29 Identity management, network controls, and encryption are all built into Database Cloud Service virtual machine and bare metal DB systems to provide end-to-end security. Oracle Data Safe is an integrated security service available at no additional cost. With it, you can do a few things. Identify configuration drift through overall security assessments. This helps you identify gaps and then fix the issues. Flag risky users or behavior, and see if they're all well controlled. 00;12;27;02 - 00;12;54;14 Audit user activity. Track risky actions. Raise alerts and streamline compliance checks. Discover sensitive data. Know where it is and how much you have. Mask sensitive data. Three out of four DB instances are copies for dev/test. This lets you remove that risk. To help meet security compliance standards, you can optionally enable FIPS, STIG, and SELinux options. 00;12;54;20 - 00;13;15;04 Thank you, Alex, for guiding us through all of this. To learn more about Oracle Database Cloud Service, please visit mylearn.oracle.com and take a look at our free Oracle Cloud Data Management Foundations Workshop. You'll also find quizzes that you can take to test your understanding of these concepts. That brings us to the end of this episode. 00;13;15;06 - 00;13;39;11 We're very excited about our episode next week in which we'll be talking with Rohit Rahi, Vice President of OCI Global Delivery, and Bill Lawson, Senior Director of Cloud Applications Product Management for Oracle University. We'll be talking about the free Oracle Cloud Infrastructure and Cloud Apps training and certifications that are available for a limited time. So you really won't want to miss that episode! 00;13;39;13 - 00;16;23;28 Until next time, this is Lois Houston and Nikita Abraham signing off. That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes.
We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
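One detail from this episode worth pinning down is the billing rule Alex describes: per-second metering with a one-minute minimum for VM DB systems and a one-hour minimum for bare metal. A small back-of-the-envelope sketch makes the difference visible; the hourly rate below is a made-up placeholder, not an Oracle price.

```python
# Illustrative sketch of per-second billing with minimum usage periods,
# as described in the episode. The rate is hypothetical.
def ocpu_charge(seconds_used: int, hourly_rate: float, bare_metal: bool) -> float:
    minimum = 3600 if bare_metal else 60      # minimum billable seconds
    billable = max(seconds_used, minimum)
    return billable * (hourly_rate / 3600)

# A 10-second test run on a VM DB system still bills one minute:
print(round(ocpu_charge(10, hourly_rate=0.50, bare_metal=False), 4))  # 0.0083
# The same 10 seconds on bare metal bills a full hour:
print(round(ocpu_charge(10, hourly_rate=0.50, bare_metal=True), 4))   # 0.5
```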
The Mini Quiz is a play-along-at-home version of the Treksperts Quiz. No contestants, no scorekeeper, just you and your friends battling it out for the title of Ultimate Trekspert. This edition of The Mini Quiz is focused purely on Star Trek: Enterprise. The Mini Quiz is normally offered as a BQN Patron Exclusive. However, as a special treat, we are offering it to all our listeners as a taste of the special content you can get by becoming a Patron of the BQN at patreon.com/bqn BQN Podcasts are brought to you by listeners like you. Special thanks to these patrons on Patreon whose generous contributions help produce the podcast! Tim Cooper, Mahendran Radhakrishnan, David Willett, Peter Hong, Tom Van Scotter, Vera Bible, Jim McMahon, Justin Oser, Greg Molumby, Thad Hait, Chrissie De Clerck-Szilagyi, Joe Mignone, Carl Wonders, Matt Harker. You can become a part of the Hive Mind Collective here: https://www.Patreon.com/BQN We'd love to add your uniqueness to our own! Under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. STAR TREK and all related marks, logos and characters are owned by CBS Studios Inc. "BQN" is not endorsed or sponsored by or affiliated with CBS/Paramount Pictures or the STAR TREK franchise.
Enterprise voice AI has been overshadowed for years by the tech giants' consumer-facing activities. That meant consumer applications often drowned out what was happening in the enterprise. At the same time, most enterprises were moving slowly. That has changed over the past two years. Enterprise adoption of voice and conversational AI solutions is growing steadily and expanding into new use cases. Today we will talk about the contact center, restaurants, automotive, and media sectors. We also go into some detail about large language models and how enterprises are thinking about ChatGPT, omnichannel, and more. Susan Westwater is the founder of Pragmatic Digital and Strategy Director at Vixen Labs. She is also the author of the book "Voice Strategy: Creating Useful and Usable Voice Experiences." Jason Fields is the chief strategy officer at Voicify, a leading platform for voice experience creation. He was formerly a senior vice president at Rightpoint and an adjunct professor at Emerson College. Braden Ream is the CEO and co-founder of Voiceflow, the leading conversational AI design collaboration platform. Braden was also recently named to the Forbes 30 Under 30 list. Susan, Braden, and Jason are each working on the front lines with enterprises, so you will get some fresh and practical perspectives. Enjoy!
About Perry: Perry Krug currently leads the Shared Services team, which is focused on building tools and managing infrastructure and data to increase the productivity of Couchbase's Sales and Field organisations. Perry has been with Couchbase for over 12 years and has served in many customer-facing technical roles, helping hundreds of customers understand, deploy, and maintain Couchbase's NoSQL database technology. He has been working with high performance caching and database systems for over 15 years. Links Referenced: Couchbase: https://www.couchbase.com/ Perry's LinkedIn: https://www.linkedin.com/in/perrykrug/ Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more. Corey: InfluxDB is the smart data platform for time series. It's built from the ground up to handle the massive volumes and countless sources of time-stamped data produced by sensors, applications, and systems. You probably think of these as logs. InfluxDB is programmable and performant, has a common API across the platform, and handles high granularity data, at scale and with high availability. Use InfluxDB to build real-time applications for analytics, IoT, and cloud-native services, all in less time and with less code. So go ahead, turn your apps up to 11 and start your journey to Awesome for free at InfluxData.com/screaminginthecloud Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's episode is a promoted guest episode brought to us by our friends at Couchbase. Now, I want to start off by saying that this week is AWS re:Invent. And there is Last Week in AWS swag available at their booth. More on that to come throughout the next half hour or so of conversation. But let's get right into it. My guest today is Perry Krug, Director of Shared Services over at Couchbase. Perry, thanks for joining me. Perry: Hey, Corey, thank you so much for having me. It's a pleasure. Corey: So, we're recording this before re:Invent, so the fact that we both have, you know, personality and haven't lost our voices yet should probably be a bit of a giveaway on this. But I want to start at the very beginning because unlike people who are academically successful, I tend to suck at doing the homework, across the board. Couchbase has been around for a long time. We've seen the company do a bunch of different things, most importantly and notably, sponsoring my ridiculous nonsense, for which I thank you. But let's start at the beginning.
What is Couchbase? Perry: Yeah, you're very welcome, Corey. And again, it's a pleasure to be here. So, Couchbase is an enterprise database company at the very top level. We make database software and we distribute that to our customers. We have two flavors, two ways of getting your hands on it. One is the kind of legacy, what we call self-managed, where you the user, the customer, downloads the software, installs it themselves, sets it up, manages the cluster, monitoring, scaling, all of that. And that's, you know, a big part of our business. Over the last few years we've identified, and certainly others in the industry have as well, the desire for users to access database and other technology in a hosted, Software-as-a-Service, pay-as-you-go, cloud-native, buzzword, et cetera, et cetera, vehicle. And so, we've released Couchbase Capella, which is our fully managed, fully hosted database-as-a-service, running in—currently—Amazon and Google, soon to be Azure as well. And it wraps and extends our core Couchbase Server product into, as I mentioned, a hosted and managed platform that our users can now come to and consume as developers and build their applications, while leaving all of the operational and administration work—monitoring, managing failover, expansion, all of that—to us as the experts. Corey: So, you folks are a non-relational database, NoSQL in the common parlance, which is odd because they call it NoSQL, yet they keep making more of them, so I feel like that's sort of the Hollywood model where, okay, that was so good, we're going to do it again. Where did NoSQL come from? Because back when I was learning databases, when dinosaurs roamed the earth, it was all about relational models, like we're going to use a relational database because when the only tool you have is an axe, every problem looks like hours of fun. What gave rise to this, I guess, Cambrian explosion that we've seen of NoSQL options that proliferate o'er the land? Perry: Yeah, a really, really good question, and I like the axe-throwing metaphor. So sure, 20, 30, now 40 years ago, as digital applications needed a place to store their data, the world invented relational databases. And those were used, and continue to be used, very well for what they were designed for: data that follows a very strict structure, that doesn't need to be served at significant scale, does not need to be replicated geographically, and does not need to handle data coming in from different sources with those sources changing their formats all the time. And I'm probably as old as you are and been around when the dinosaurs were there. We remember this term called ‘Web 2.0.' Kids, you're going to have to go look that up in the dictionary or TikTok it or something. But Web 2.0 really was the turning point when websites became web applications. And suddenly, there was the introduction of MySpace and Facebook and Amazon and Google and LinkedIn, and a number of others, and they realized that relational databases were not going to meet their needs, whether it be performance, whether it be flexibility, whether it be changing of data models, whether it be introducing new features at a rapid pace. They tried; they stretched them, they added a bunch of different databases together, and it really was not going to be a viable solution.
So, 10, maybe 15 years ago now, you started to see the rise of these tech giants—although we didn't call them tech giants back then, but they were the precursors to today's—inventing their own new databases. So, Amazon had theirs, Google has theirs, LinkedIn, and a number of others. These companies had reached a level of scale, had reached a level of user base, had reached a level of data requirement, had reached a level of expectation with their customers. These customers, us, the users, us consumers, we expect things to be fast, we expect them to be always available. We expect Facebook to give us our news feed in milliseconds. We expect Google to give us our website or our search results immediately, with more and more information coming along with them. And so, it was these companies that hit those requirements first. The only solution for them was to start from scratch and write their own databases. Fast forward five, six, seven years, and we as consumers turned around and said, “Look, I really liked the way Facebook does things. I really like the way Google does things. I really like the way Amazon does things. Bank of America, can you do the same? IRS, can you do the same? Health care vendor number one, two, three, and four, government body, can you all give me the same experience? I want my taxi to tell me exactly where it's going to take me from one place to another, I want it to give me a receipt immediately after I finish my ride. Actually, I want to be able to change my payment method after I paid for that ride because I used the wrong one.” All of these are expectations that we as consumers have taken from the tech giants—Apple, LinkedIn, Facebook—and turned around to nearly every other service that we interact with on a daily basis. And all of a sudden, the requirements that Facebook had, that Google had, that no other company had, you know, outside of the top five, suddenly were needed by every single industry, nearly every single company, in order to be competitive in their markets. Corey: And there's no way to scale relational to get to a point where it can wind up handling those types of workloads efficiently? Perry: Correct, correct. And it's not just that the technology cannot do it—everything is technically feasible—but the cost, both financially and time-to-market-wise, in order to do that in a relational database was untenable. It either cost too much money, or it cost too much developer time, or it cost too much of everybody's time to try to shoehorn something into it. And then you have the rise of cloud and containers, which relational databases, you know, never even had the inkling of a thought that they might need to be able to handle someday. And so, these requirements that consumers have placed on everything else that they interact with really led to the rise of NoSQL as a commodity, or as a database for the masses. LinkedIn is not in the business of developing a database and then selling it to everybody else to use as a database, right? They built it for themselves, they made their service better. And so, what you see is some of those founding fathers created databases, but then had no desire to sell them to others.
And then after that followed the rise of companies like Couchbase and a number of others who said, "Look, we think we can provide those capabilities, we think we can meet those requirements for everybody." And thereby rose the plethora of NoSQL databases, because everybody had a little bit different of an approach to it. If you ask ten people what NoSQL is about, you're going to get eleven or twelve different answers. But you can kind of distill that into two categories. One is performance and operations. So, I need it to be faster, I need it to be scalable, I need it to be replicated geographically. And that's what NoSQL is to me. And that's the right answer. And so, you have things like Cassandra and Redis that are meant to be fast and scalable and replicated. You ask another group and they're going to tell you, "No, no, no. NoSQL needs to be flexible. I need to get rid of the rigid database schemas, I need to bring JSON or other data formats in and munge all this data together and create something cool and new out of it." And thereby you have the rise of things like MongoDB, who focused nearly exclusively on the developer experience of working with data. And for a long time, those two were in opposite camps, where you have the databases that did performance and the databases that did flexibility. I'm not here to say that Couchbase is the ultimate kitchen sink for everything, but we've certainly tried to approach both of those challenges together so that you can have something that scales and performs and can be flexible enough in data model. And everybody else is trying to do the same thing, right? But all these databases are competing for that same nirvana of the best of both worlds. Corey: And it almost feels like there's a convergence play in place where everything now is trying to go away from the idea of, "Oh, yeah, we started off as a purpose-built database, but you can use this for everything." And I don't necessarily know that that is going to be the path that a lot of companies want to go down. Where do you view Couchbase as, I guess, falling down? In other words, what workloads is Couchbase inappropriate for? Perry: Yeah, that's a good question. And my [crosstalk 00:10:35]— Corey: Anyone who can't answer that one is a zealot, and that's one of those okay, let's be very careful and not take our eyes off you for one second, while smiling and backing away slowly. Perry: Let's cut to commercial. No, I mean, there certainly are workloads that, you know, in the past we've not been good for, that we've made improvements to address. There are workloads that we do not address well today that we will try to address in the future, and there are workloads that we may never see as fitting in our wheelhouse. The biggest category that comes to mind is that Couchbase is not an archival database. We are not meant to have data put in us that you don't care about, that you just need to keep around but don't ever need to access. And there are systems that do that well, and they do that at a solid total cost of ownership. Couchbase is meant for operational data. It's meant for data that needs to be interacted with, read and/or written, at scale and at a reasonable performance to serve a user-facing or system-facing application. And we call ourselves a general-purpose database. Mongo and others call themselves that as well.
Oracle calls itself a general-purpose database, and yet not everybody uses Oracle for everything. So, there are reasons that you— Corey: Who could afford that? Perry: Who could? Exactly. It comes down to cost, ultimately. So, I'm not here to say that Couchbase does everything. We like to think, and we're trying to target and strive towards, an 80%, right? If we can do 80% of an application or an organization's workloads, there is certainly room for 20% of other workloads, other applications, other requirements that can be met or need to be met by purpose-built databases. But if you rewind four or five years, there was this big push towards polyglot persistence. It's a buzzword that came and has kind of gone out of fashion, but it presented the idea that everybody is going to use 15 different databases, everybody is going to pick the right one for exactly the workload, and they're going to somehow stitch them all together. And that really hasn't come to fruition either. So, I think there's some balance, where it's not one to rule them all, but it's also not 15 for every company. Some organizations just have a set of requirements that they want to be met, and our database can do that. Corey: Let's continue our tour of the competitive landscape here now that we've handled the relational side of the world. The best database, as anyone who's listened to this show knows, is of course Amazon's Route 53 TXT records stuffed into DNS, especially in the NoSQL land. Clearly, you're all fighting for second place after that. How do you stack up against the idea of legitimately using that approach? And for those who are not in on the joke, please don't do this. It is not the right answer. But I'm curious to get your take as to why DNS TXT records are an inappropriate NoSQL option. Perry: Well, it's a joke, right? And let's be clear about that. But— Corey: I have to say that because otherwise, someone tries it in production. I've gotten that wrong a few times, historically, so now I put a disclaimer in because yeah, it's only funny so long as people are in on the joke. If not, and I lead someone down the primrose path to disaster, I feel bad. So, let's be very clear. We're kidding. Perry: And I'm laughing. I'm laughing here behind the camera. I am. I am. Corey: Yeah. Perry: So, the element of truth that I think Couchbase is in a position, or I'm in a position, to talk about is: 12 years ago, when Couchbase started, we were a key-value database, and that's where we saw the best part of the market in those days, and where we were able to achieve the best scale and replication and performance. We fairly quickly realized that simple key-value, though extremely valuable and easy to manage, was not broad enough in meeting requirements. And that's where we set our sights on and identified the larger, kind of, document database group, which is really just a derivative of key-value, where still everything is a key and a value; it's just that now the value is a document that you can reason about, that you can create an index on, that you can query, that you can run full-text search on, and you can do much more with the data. So, at our core, we are still a key-value database. When that value is JSON, we become a document database. And so, if Route 53 decided that they wanted to enter into the document database market, they would need to add things that allowed you to introspect and ask questions of the data within that text, which you can't, right? Corey: Well, not with that attitude.
But yeah, I agree with you. Perry: [laugh]. Corey: Moving up the stack, let's talk about a much more fearsome competitor, one I'm certain you see in an awful lot of deals that you wind up closing: specifically, your own open-source product. You historically have wound up selling software into environments in what, I believe, you referred to as your legacy offering, where it's the self-hosted version of your commercial software. And now of course, you also have Capella, your cloud-hosted version. But open source looks surprisingly compelling for an awful lot of use cases and an awful lot of folks. What's the distinction? Perry: Sure. Just to correct the distinction a little bit: we have Couchbase Server, which we provide as what we call self-managed, where you can download it and install it yourself. Now, you could do that with the open-source version or you could do that with our Enterprise Edition. What we've then done is wrapped that Enterprise Edition in a hosted model, and that's Capella. So, the open-source version is something we've long been supporters of; it's been a core part of our go-to-market for the last 12 or 13 years or so, and we still see it as a strong offering for organizations that don't need the added features, the added capabilities, and don't need the support of the experts that wrote the software behind them. Certainly, we contribute to and support our community through our forums and Discord and other channels, but that's a very big difference from two o'clock in the morning when something's not working and I need a ticket to track. We don't do that for our Community Edition. So, we see lots of users downloading that, picking it up, building it into their applications, especially applications that are in their infancy, or with organizations that simply can't afford the added cost and therefore don't get the added benefit. We're not here to gouge and carve out every dollar that we can, but if you need the benefit that we can provide, we think there's value in that, and that's how we're trying to run a business. Corey: Oh, absolutely. It doesn't work when you're trying to wind up charging a license fee for something that someone is doing as a spare-time project for funsies, just to learn the technology. It's like, and then you show up. It's like, "That'll be $700. Surprise." Yeah, that's sort of the AWS billing model approach, where—it's not a viable onramp for most folks. So, the open-source direction down there makes sense. Counterpoint. If you're running a bank on top of it, "Well, we're running it ourselves and really hoping for the best. I mean, we have access to the code and all." Great, but there are times you absolutely want some of the best minds in the world, with respect to that particular product, able to help troubleshoot so the ATMs start working again before people riot in the streets. Perry: Yeah, yeah. And ultimately, it's a question of core competencies. Are you an organization that wants to be in the database development market? Great, by all means, we'd love to support you in that.
If you want to focus on doing what you do best, be it a bank or an e-commerce website, you worry about your application, you let us worry about the database, and everybody gets along very well. Corey: There's definitely something to be said for outsourcing some of the pain, some of the challenge around an awful lot of it. Perry: There's a natural progression to the cloud for that with Software-as-a-Service, database-as-a-service, where you're now outsourcing even more by running on our hosting platform. No longer do you have to download the binary and install it yourself, no longer do you have to set up the cluster and watch it in case it has a blip or a statistic goes up too far. We're taking care of that for you. So yes, you're paying for that service, but you're getting the value of not having to be a database manager, let alone a database developer, for them. Corey: Love how serverless helps you scale big and ship fast, but hate debugging your serverless apps? With Lumigo's serverless observability, it's fast and easy (and maybe a little fun, too). End-to-end distributed tracing gives developers full clarity into their most complex serverless and containerized applications, connecting every service from AWS Lambda and Amazon ECS to DynamoDB, API Gateways, Step Functions and more. Try Lumigo free and debug 3x faster, reduce error rate and speed up development. Visit snark.cloud/lumigo. That's snark.cloud/L-U-M-I-G-O. Corey: What is the point of distinction between Couchbase Server and Couchbase Capella? To be clear, your self-hosted versus managed cloud offerings. When is one appropriate versus the other? Perry: Well, I'm supposed to say that Capella is always the appropriate choice, but there are currently a number of situations where Capella is not available in particular regions or cloud providers, and so downloading and running the software yourself, certainly in your own—yes, there are people who still run their own data centers. I know it's taboo and we don't like to talk about that, but there are people who have on-premise. And so, Couchbase Capella is not available for them. But Couchbase Server is the original Couchbase database and it is the core of Couchbase Capella. So, "wrapping" is not giving it enough credit; we use Couchbase Server to power Couchbase Capella. And so, there's an enormous amount of value added around the core database, but ultimately, it's what's behind the scenes of Couchbase Capella. Which I think is a nice benefit, in that when an application is connecting to either one, it gets the same experience. You can point an application at one versus the other, and because it's the same database running behind the scenes, the behavior, the data model, the query language, the APIs are all the same, so it adds a nice level of flexibility for customers that are either moving from one to another or have to have some sort of hybrid approach, which we see in the market today. Corey: Let's talk economics for a second. I can see scenarios, especially where you have a high-volume environment where you're sending tremendous amounts of data back and forth, where as soon as it crosses an availability zone boundary or a region boundary, or God forbid, goes out to the internet via standard egress fees over in AWS-land, there's a radically different economic modeling that comes into play, as opposed to having something in the same availability zone, in the same subnet, where all traffic back and forth is free.
Do you see that in your customer base, that that is a model that is driving people towards self-hosting? Perry: No. And I'd say no because Capella allows you to peer and run your application in the same availability zone as the database. And so, as long as that's an option for you—that we have, you know, our offering in the right region, in the right AZ, and you can put your application there—then that's not an issue. We did have a customer not too long ago that didn't set that up correctly; they thought they did, and we noticed some high data transfer charges. Again, the benefit of running a hosted service: we detected that for them and were able to turn around and say, "Hmm, you might want to change this to over there so that we all save some money in doing so." If we were not there watching it, they might not have noticed that themselves if they were running it self-managed; they might not have known what to do about it. And so, there's a benefit to working with us and using that hosted platform, in that we can keep an eye out. And we can apply all of our learning and best practices and bug fixes, and we give that to everybody, rather than each person having to stumble across those hurdles themselves. Corey: That's one of those fun, weird corner-case trivia things about AWS data transfer. When you're transferring data within the same region between availability zones, it costs a penny on the sending side and a penny on the receiving side. Everything else is one side or the other that winds up getting the charge. And what makes this especially fun is that when it shows up on your bill, if you transfer a petabyte, it shows as cross-AZ data transfer: two petabytes. Perry: Two. Yeah. Corey: So, it double-counts so they can bill for it appropriately, but it leads to some really weird hunting it down, like, "Okay, well, we found half of it, but where's the other half hiding?" It's always obnoxious to trace this stuff down. The fact that you see it on your bill, well, that's testament to the fact that yeah, they're using the service. Good for them and good for you. Being able to track it down on a per-customer basis does speak to your level of insight into what exactly is going on in your environment and where. As someone who does this for a living, let me confirm that is absolutely non-trivial. Perry: No, definitely not trivial. And you know, we've learned over the last four or five years, we've learned an enormous amount about how cloud providers work, how AWS works, but guess what, Google does it completely differently. And Azure does it— Corey: Yep. Perry: —completely differently. And so, on the surface level, they're all just cloud providers and they give you a VM, and you put some stuff on it, but integrating with the APIs, integrating with the different systems and naming of things, and then understanding the intricacies of the ins and outs, and, yeah, these cloud providers have their own bugs as well. And so, sometimes you stumble across those for them. And it's been a significant learning exercise that I think we're all better off for, having Couchbase go through it for you. Corey: Let's get this a little bit more germane for this week, for those of you who are listening to this during re:Invent.
You folks are clearly here at the show—it's funny to talk about 'here,' even though when we're recording this, it is not near here; we're actually home and enjoying ourselves, but welcome to temporal dislocation; here we are—here at the show, you folks are—among other things—being kind enough to pass out the Last Week in AWS swag from your booth, which, thank you. So, that is obviously the primary reason that you're at the show. What are the other reasons? What are the secondary reasons that you decided to come here? Perry: Yeah [laugh]. Well, I guess I have to think about this now since you already called out the primary reason. Corey: Exactly. Wait, we can have more than one reason for things? My God. Perry: Can we? Can we? AWS has long been a huge partner of ours, even before Capella itself was released. I remember sometime, you know, five years or so ago, some 30% of our customers were running Couchbase inside of AWS, and some of our largest were some of your largest at times, like Viber, the messaging platform. And so, we've always had a very strong relationship with AWS, and the better we can present ourselves to your customers, and the more our customers can feel that we are jointly supporting them, the better. And so, you know, coming to re:Invent is a testament to that long-standing and very solid partnership, and it's also meant to get more exposure for us, to make it clear that Couchbase runs very well on AWS. Corey: It's one of those areas where when someone says, "Oh yeah, this is a great service offering, but it doesn't run super well on AWS," it's like, "Okay, so are you bad at computers, or is what you have built so broken and Byzantine that it has to live somewhere else?" Or occasionally, the use case is absolutely not supported by AWS. Not to beat them up some more on their egress fees, but I'm absolutely about to: if you're building a video streaming site, you don't want it living in AWS. It won't run super well there. Well, it'll run well, it'll just run extortionately expensively, and that means that it's a non-starter. Perry: Yeah, why do you think Netflix raises their fees? Corey: Netflix, to their credit, has been really rather public about this, where they do all of their egress via their Open Connect, custom-built CDN appliances that they drop all over the place. They don't stream a single byte from AWS, and we know this from the outside because they are clearly still solvent. Perry: [laugh]. Corey: I do the math on that. So, if I had been streaming at on-demand prices for one month with my Netflix usage, I would have wound up spending four times my subscription fee just in their raw costs for data transfer. And I have it on good authority that data transfer is not the only bill in the entire company; they also have to pay people and content and the analytics engine and whatnot. And it's kind of a weird, strange world. Perry: Real estate. Corey: Yeah. Because it's one of those strange stories, because they are absolutely a showcase customer for AWS. They've been a marquee customer trotted out year after year to talk about what they're doing. But if you attempt to replicate their business purely on top of AWS, it will not work. Full stop. The economics preclude that happening. What is your philosophy these days on what historically has felt like an existential threat to most vendors that I've spoken to in a variety of ways: what if Amazon decides to enter your market? I'd ask you the same thing.
Do you have fears that they're going to wind up effectively taking your open-source offering and turning it into Amazon Basics Couchbase, for lack of a better term? Is that something that is on your threat radar, or is that not really something you concern yourselves about? Perry: So, I mean, there's no arguing, there's no illusion that Amazon and Google and Microsoft are significant competitors in the database space, along with Oracle and IBM and Mongo and a handful of others. Corey: Anything's a database if you hold it wrong. Perry: This is true. This specific point of open source is something that we have addressed in the same way that others have addressed it. And that's by choosing and changing our license model so that it precludes cloud providers from using the open-source database to produce their own service on the back of it. Let me be clear, it does not impact our existing open-source users or anybody that wants to use the Community Edition or download the software, the source code, and build it themselves. It's only targeted at Amazon, because they have a track record of doing that to things like Elastic and Redis and Mongo, all of whom have made moves similar to Couchbase's to prevent that through the licensing of the open-source code. Corey: So, one of the things I do see at re:Invent every year is—and I believe wholeheartedly this comes historically from a lot of AWS's requirements for vendors on the show floor that have become public through a variety of different ways—where for a long time you were not allowed to mention multi-cloud or reference the fact that you work on any other cloud provider there. So, there's been a theme of: this is why, for whatever it is we sell or claim to sell or hope one day to sell, AWS is the absolute best place for you to run it, full stop. And in some cases, that's absolutely true, because people build primarily for a certain cloud provider and then, when they find customers in other places, they learn to run it over there, too. If I'm approaching this from the perspective of I have a database problem—because given my philosophy on databases, it's hard to imagine I don't have database problems—then is my experience going to be better or even materially different between any of the cloud providers if I become a Couchbase Capella customer?
If people want to learn more and if, for some unfathomable reason, they're not at re:Invent, probably because they make good life choices, where can they go to find you? Perry: couchbase.com. That'll be the best place to land on. That takes you to our documentation, our resources, our getting-help and contact pages, and directly into Capella if you want to sign in or log in. I would go there. Corey: And we will, of course, put links to that in the show notes. Thank you so much for your time. I really appreciate it. Perry: Corey, it's been a pleasure. Thank you for your questions and banter, and I really appreciate the opportunity to come and share some time with you. Corey: We'll have to have you back in the near future. Perry Krug, Director of Shared Services at Couchbase. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry and insulting comment berating me for being nowhere near musical enough when referencing [singing] Couchbase Capella. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.
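For readers who want to see the key-value-plus-JSON model Perry describes in code, here is a minimal sketch using the Couchbase Python SDK. It assumes a local cluster, a bucket named `demo`, matching credentials, and an existing primary index for the query; all of these are placeholders rather than anything from the episode, and the API details should be checked against the current SDK docs.

```python
# Hedged sketch: key-value operations plus a SQL++ (N1QL) query over the
# same JSON documents. Cluster address, bucket, and credentials are
# hypothetical.
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)
collection = cluster.bucket("demo").default_collection()

# Key-value at the core: every document is a key and a value...
collection.upsert("user::1001", {"name": "Perry", "type": "user", "visits": 12})

# ...and because the value is JSON, it can be fetched and reasoned about.
print(collection.get("user::1001").content_as[dict])

# The same documents can be queried declaratively (requires a primary index).
for row in cluster.query("SELECT d.name, d.visits FROM demo AS d WHERE d.type = 'user'"):
    print(row)
```

The design point the transcript makes shows up directly here: the `upsert`/`get` calls are plain key-value operations, while the query treats the identical data as documents, with no separate system or data copy involved.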
Saturn's Tarifwelt currently features two top offers that Samsung fans should take a closer look at. You get the Galaxy S22 Enterprise Edition together with a contract on the Vodafone or o2 network at an absolute rock-bottom price. We have worked out all of the costs and explain why you should grab the deal.
A daily look at the relevant information security news from overnight - 03 June, 2022. Episode 237 - 03 June 2022.
Windows 0patch - https://www.bleepingcomputer.com/news/security/windows-msdt-zero-day-vulnerability-gets-free-unofficial-patch/
UNISOC DoS - https://www.securityweek.com/millions-budget-smartphones-unisoc-chips-vulnerable-remote-dos-attacks
Asian app attack - https://www.bleepingcomputer.com/news/security/chinese-luoyu-hackers-deploy-cyber-espionage-malware-via-app-updates/
Atlassian critical - https://www.securityweek.com/atlassian-confluence-servers-hacked-zero-day-vulnerability
GitLab patch - https://www.bleepingcomputer.com/news/security/gitlab-security-update-fixes-critical-account-take-over-flaw/
Hi, I'm Paul Torgersen. It's Friday, June 3rd, 2022, and this is a look at the information security news from overnight. From BleepingComputer.com: While Microsoft has still not released a patch for the Windows critical vulnerability known as Follina, our friends at 0patch have. Instead of just disabling the MSDT URL protocol handler, which is the Microsoft-suggested mitigation for the issue, 0patch has added sanitization of the user-provided path to avoid rendering the Windows diagnostic wizardry inoperable across the operating system for all applications. Details in the article. From SecurityWeek.com: Millions of budget smartphones that use UNISOC chipsets could have a critical vulnerability that leads to a denial-of-service attack. UNISOC has about 11% of the smartphone chip market, with the majority of these chips sold in Asia and Africa. The company has already issued the appropriate patch. Google will also address this flaw in an upcoming Android patch. From BleepingComputer.com: Chinese hacking group LuoYu is infecting victims with the WinDealer information stealer by switching legitimate app updates with a man-on-the-side attack. They are currently targeting popular Asian apps such as QQ, WeChat, and WangWang. Details in the article. From SecurityWeek.com: Atlassian Confluence Servers and Data Centers are affected by a critical vulnerability that can be leveraged for remote code execution and is being actively exploited in the wild. All supported versions of Confluence Server and Data Center are affected. Until a patch becomes available, users have been advised to prevent access to their Confluence servers from the internet, or simply disable these instances. The company hopes to have a patch ready by the end of today. And last today, from BleepingComputer.com: GitLab has released a critical security update for multiple versions of its Community and Enterprise Edition products to address eight vulnerabilities, one of which could lead to account takeover. That 9.9-severity vulnerability affects all GitLab versions 11.10 through 14.9.4, 14.10 through 14.10.3, and version 15.0. Get your patch on, kids. That's all for me today. Have a great rest of your day. Like and subscribe. And until next time, be safe out there.
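On the Follina item: the widely reported stopgap before an official fix was to unregister the ms-msdt URL protocol handler. Here is a hedged, Windows-only sketch of how you might check for that handler from Python's standard-library winreg module; it only detects the handler's presence, it does not change anything, and it is an illustration rather than vendor guidance.

```python
# Hedged sketch (Windows only): detect whether the ms-msdt URL protocol
# handler is registered under HKEY_CLASSES_ROOT. Deleting that key was
# the widely reported interim mitigation for Follina; 0patch's micropatch
# instead sanitizes the path so the handler keeps working.
import winreg

def msdt_handler_registered() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, "ms-msdt"):
            return True
    except FileNotFoundError:
        return False

if msdt_handler_registered():
    print("ms-msdt handler present: system may be exposed to Follina-style lures")
else:
    print("ms-msdt handler not registered")
```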
This week we discuss the voice year in review for 2021. All of the guests in this episode are based in Europe but operate globally, so you will get both perspectives. Also, the conversation is heavily skewed toward enterprise voice applications, because that drove so much of the important news in 2021. We start with an in-depth discussion of acquisitions and funding, which were bigger in 2021 than in any previous year. Guests include: - Otto Soderlund, CEO and co-founder of Speechly, which is pioneering a new full-duplex and super-fast voice interface for the web and mobile. - Dominik Meissner, co-founder of 169 labs. The company has worked with Sony, Ikea, Lotto, Mercedes Benz, MINI, and Diageo. - Maarten Lens-Fitzgerald, creator of Open Voice, one of the most popular newsletters in the industry, and founder of the Project Zilver consortium, which seeks to make the lives of older adults better through the use of voice assistants.
EP313 - Is it worth investing in Oracle Enterprise Edition? Join our Telegram channel to receive exclusive content about Oracle Database: https://t.me/joinchat/AAAAAEb7ufK-90djaVuR4Q
EP305 - When should you use Oracle Standard or Enterprise Edition? Join our Telegram channel to receive exclusive content about Oracle Database: https://t.me/joinchat/AAAAAEb7ufK-90djaVuR4Q
EP261 - Is Flashback only for Enterprise Edition? Join our Telegram channel to receive exclusive content about Oracle Database: https://t.me/joinchat/AAAAAEb7ufK-90djaVuR4Q
GoodData launches the enterprise edition of its headless Business Intelligence service, called GoodData CN Production. The platform is based on a microservices design that is accessed through application programming interfaces. With GoodData CN Production, enterprises can deliver scalable, real-time data to everyone inside the company and to their customers. Outreach, a sales engagement platform, has secured $200 million in a Series G fundraising round. The company's valuation has increased to $4.4 billion due to this financing, making it a unicorn. The funds will be used to grow Outreach's sales and marketing teams and to invest in creating, acquiring, and delivering new innovative technologies. In the previous year, Outreach grew to over 800 employees worldwide and launched two new global offices in London and Prague.
This is the second part of my talk with my colleague Bert Laverman. We continued our conversation about Domain-Driven Design. We discussed how Bert found out about the concept, and we delved deeper into some aspects of DDD-centric systems, such as Ubiquitous Language and Bounded Context. Bert also highlighted the benefits of using an agile development methodology when designing a system that requires lots of detailed planning and decision making, and of getting it all done in a timely manner. We discussed why software architecture is so important, and lastly, but very importantly, we talked about Axon Server Standard Edition and Enterprise Edition. He also explained the benefits of using Docker and Kubernetes when setting up and using Axon Server. You can reach Bert on Twitter @BertLaverman and on LinkedIn. Connect with me on Twitter @SaraTorrey and on LinkedIn. For more information about us, visit axoniq.io
Mowies Expands On-Demand Platform with Harmonic's VOS Cloud Streaming SaaS. Mowies' expanded offering allows consumers to purchase on-demand content directly from filmmakers, musicians, and storytellers. The VOS360 SaaS provides Mowies with an easy-to-use, end-to-end solution for media processing and delivery, optimizing bandwidth usage and enabling an exceptional viewing experience. Techsembly, a London-based multi-store, multi-vendor e-commerce platform for global scaling, raises £1 million from SuperSeed Ventures and a number of undisclosed private investors. The funding is expected to be used to assemble a cross-marketing team that will initially target European and Asian markets. Uber Offers SaaS to Three New Public Transit Agencies. Denver's Regional Transportation District will begin employing Uber's management software on its fleet of wheelchair-accessible vehicles this week, with Cecil Transit and Porterville Transit following in the weeks to come. These agencies will pay a fee to use Uber's technology. Uber made its first transit deal in June of last year, offering residents of Marin County, population 250,000, a chance to use Uber's app to book minibus rides. The deal allowed the county transit agency to charge riders $4 per mile, with a dollar discount per mile for travelers with disabilities or mobility issues. Uber earns a flat fee for managing the service, capped at $80,000. Aras Announces Expansion of Subscription Offerings with Enterprise SaaS. Enterprise Edition delivers full customization capabilities in an all-inclusive SaaS format, with DevOps processes specifically designed to meet the mission-critical demands of the most complex scenarios. The new subscription offer from Aras provides unmatched cloud capabilities for the largest enterprise deployments. Aiven, a provider of open-source software for managing cloud data infrastructure, raises $100 million from investors led by venture capital fund Atomico and will use the proceeds to expand internationally. Helsinki-based Aiven stated that the Series C funding round valued the business at more than $800 million. It was also backed by new investors Salesforce Ventures and World Innovation Lab.
This week, SVT Nyheter revealed that Vårdinnovation's communications chief Sara Johansson did not exist. "Sara" nevertheless seemed to be a real person, not least given that she had over 500 contacts on LinkedIn. In this week's podcast episode, Tess and Nikka talk about SVT Nyheter's exposé, about fabricated personas, and about social credibility. Correction! The name suffix Nikka mentions for Samsung's phones is "Enterprise Edition", not "Business Edition". Business Edition is not a name suffix that Samsung uses. Full show notes are available at https://go.nikkasystems.com/podd113.
The Wikimedia Foundation has decided to launch Wikipedia Enterprise, through which it will charge a fee to companies that utilize Wikipedia content. In other news, Google said it will cut the service fee it charges Android app developers on the Play Store to 15% for the first $1 million of revenue. Which means slightly cheaper apps now.
This is a conversation about the Magento Commerce e-commerce platform, in which Gustavo Esteves, CEO of Métricas Boss and director of Ecommerce Pro, together with Lucas Souza, partner at BeL Partner, discuss the topic with the distinguished participation of Núbia Mota, Growth Marketing, Latin America at Adobe. In this conversation we talk about: - Introduction of the guest. - What is the Magento platform and what are its features? Which companies use Magento today? - Magento comes in several versions: Enterprise Edition (the paid version with specialized support), Magento Go (the cloud application), and Community Edition (the free and most widely used version). What are the differences? - What type of business is each version suited for? - The fact that Magento is open source is simply sensational, because it lets us build our site around our business, its needs, and the UX our users require. How can and should companies that use Magento truly take advantage of open source? - Speaking of an in-house team: should I internalize the evolution and management of my Magento, or hire a partner? How do I decide? - Final tip. Site > www.batepaposobreecommerce.com.br Site Ecommerce Pro > www.ecommercepro.com.br Instagram Ecommerce Pro > @ecommerceprofissional #e-commerce
Jaron Lanier is a musician, computer scientist, and renowned technology philosopher who has enjoyed a colourful career in art and technology. He is the author of "Ten Arguments for Deleting Your Social Media Accounts Right Now." One of the forefathers of virtual reality, he is also the inventor of the Nintendo Power Glove and Microsoft Teams' Together Mode. He was recently featured in Netflix's The Social Dilemma, exploring the dangerous impacts of social media on our daily lives. On today's episode of Innovation Heroes, Peter and Jaron explore how social media's dark side is creeping into the enterprise, the future of virtual reality, and Jaron's work with Microsoft throughout the COVID-19 pandemic, including Microsoft's Together Mode. You can learn more about Jaron at http://www.jaronlanier.com/
Leading global tech analysts Patrick Moorhead (Moor Insights & Strategy) and Daniel Newman (Futurum Research) are front and center on The Six Five, analyzing the tech industry's biggest news each and every week and also conducting regular interviews with tech industry "insiders". The Six Five represents six (6) handpicked topics that will be covered for five (5) minutes each. Welcome to this week's edition of "The Six Five." I'm Patrick Moorhead with Moor Insights & Strategy, co-host, joined by Daniel Newman with Futurum Research. On this week's show we will be talking about: State of Smartphone 5G https://futurumresearch.com/snapdragon-summit-qualcomm-exhibits-explosion-of-5g/ Snapdragon 865 and 765 https://futurumresearch.com/qualcomms-snapdragon-summit-day-2-first-thoughts-about-the-snapdragon-865/ 3D Sonic Max https://twitter.com/PatrickMoorhead/status/1202662566557712384?s=20 AI Improvements https://twitter.com/PatrickMoorhead/status/1202449323088199680?s=20 XR2 5G https://futurumresearch.com/snapdragon-summit-xr-and-compute-take-center-stage/ https://www.forbes.com/sites/moorinsights/2019/12/05/qualcomms-xr2-5g-brings-together-the-best-in-immersion-and-connectivity/#75d970644fa5 https://twitter.com/PatrickMoorhead/status/1202686414749491200?s=20 ACPC expansion with 8c, 7c, 8cx Enterprise Edition https://futurumresearch.com/snapdragon-summit-xr-and-compute-take-center-stage/ https://twitter.com/PatrickMoorhead/status/1202746026949369856?s=20 Alright... let's get started. Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.
Java is owned by Oracle. Oracle has assigned the Eclipse Foundation to handle Java Enterprise Edition, but the foundation is not allowed to use the name Java (which Oracle owns). So Java EE has been renamed Jakarta Enterprise Edition. But this is not just a name change; it can cause a whole range of challenges. Listen to Pontus Ullgren, Johan Carlsson, and Tomas Ljunggren elaborate on the subject.
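To make the scale of those challenges concrete, here is a minimal, hypothetical sketch of the kind of breaking change the rename eventually led to: from Jakarta EE 9 onward, the javax.* packages became jakarta.*, so even a trivial servlet has to have its imports rewritten and be recompiled. The servlet itself is an illustrative assumption, not something from the episode.

```java
// Before (Java EE): the javax.* namespace, which stays under Oracle's control.
// import javax.servlet.http.HttpServlet;

// After (Jakarta EE 9 and later): the same APIs, republished under jakarta.*.
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;

@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Identical behaviour to the javax version; only the package names moved.
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from Jakarta EE");
    }
}
```

Every library, framework, and application server in a stack has to make the same move before code like this runs, which is why the hosts describe the rename as more than cosmetic.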
In this episode of AWS TechChat, TechChat turns 50 and Shane and Pete come at you with a raft of short, sharp, and important updates that occurred in June 2019. They started the show introducing you to a new service that has gone GA - AWS IoT Events. AWS IoT Events is a new, fully managed IoT service that makes it easy to detect and respond to events from IoT sensors and applications without the heavy lifting of building traditional IoT applications, and it brings a managed complex event detection service to our already buff IoT suite. Sticking with the IoT theme, we quickly announced that BLE (Bluetooth Low Energy) support has landed in Amazon FreeRTOS, and a new MQTT library is now generally available in Amazon FreeRTOS 201906.00. With this update, you can now securely connect Amazon FreeRTOS devices using BLE to AWS IoT via Android and iOS devices, and use the new MQTT library to create applications that are independent of the connectivity protocol. We then spoke about two new features for Microsoft SQL Server. Always On Availability Groups have made their way to SQL Server 2017 Enterprise Edition, and we let you know you can now restore a multi-file native SQL Server backup from Amazon S3 to an Amazon RDS SQL Server database instance. Amazon Elastic Container Service (ECS) for Windows containers has gone GA, so prepare your BSODs - only kidding. We now support Amazon ECS clusters running Windows Server 2019 containers, so if Windows containers are your thing, take a look at Amazon ECS. Lastly, with Amazon API Gateway custom domains, you can now enforce a minimum Transport Layer Security (TLS) version and cipher suites through a security policy, allowing you to further improve security for your customers. Resources: • AWS IoT Events https://aws.amazon.com/iot-events/ • Amazon FreeRTOS Bluetooth Low Energy https://aws.amazon.com/blogs/iot/perform-ota-firmware-updates-on-espressif-esp32-devices-using-amazon-freertos-bluetooth-low-energy-mqtt-proxy/ • Amazon RDS SQL Server https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-rds-for-sql-server-now-supports-always-on-availability-groups-for-sql-server-2017/ • Amazon Elastic Container Service (ECS) for Windows containers https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_Windows.html • Amazon API Gateway – TLS https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-api-gateway-adds-configurable-transport-layer-security-version-custom-domains/
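As a sketch of how that RDS native restore looks from client code: RDS for SQL Server exposes backup and restore through stored procedures in msdb rather than through filesystem access. The snippet below calls the documented msdb.dbo.rds_restore_database procedure over JDBC; the endpoint, credentials, database name, and S3 bucket are hypothetical placeholders, and the instance needs the native backup/restore option group configured first.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RdsNativeRestore {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint for an RDS SQL Server instance
        // (requires the Microsoft JDBC driver on the classpath).
        String url = "jdbc:sqlserver://my-instance.xxxxxxxx.us-east-1.rds.amazonaws.com:1433";
        try (Connection conn = DriverManager.getConnection(url, "admin", System.getenv("DB_PASSWORD"));
             Statement stmt = conn.createStatement()) {
            // Kick off the restore of a .bak file sitting in S3; RDS runs it
            // asynchronously, and progress can be polled via msdb.dbo.rds_task_status.
            stmt.execute(
                "exec msdb.dbo.rds_restore_database "
                + "@restore_db_name='mydb', "
                + "@s3_arn_to_restore_from='arn:aws:s3:::my-backup-bucket/mydb.bak'");
        }
    }
}
```

The June 2019 change the hosts mention extends this same workflow to native backups split across multiple files, which previously had to be restored as a single .bak file.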
How well do distributed teams and agility fit together? The question is actually quite easy to answer: unfortunately, they don't fit together very well! Distributed teams simply bring a lot of disadvantages. Working in a distributed team means, for example, that some colleagues work from home while others are in the office, so the whole team is not in one place. That leads to the familiar formation of cliques and does not exactly improve teamwork. If there is no other way to organize the collaboration, at least try to work within a similar time zone. Below you will find some helpful tips for working in distributed teams. Tool-supported agile work in distributed teams: If you still have to work in an agile way with your distributed team, you can of course support it with various tools. Forming pairs works very well here, so you can keep working with analog boards; the full explanation is in the complete episode. Analog boards are always the better tool. If you do work with online tools, I strongly recommend configuring them so that they feel like an analog tool. In-person meetings are very important even in distributed teams: Arrange regular meetings (at least every 8 to 12 weeks) of the teams at different locations so that the team can grow together. Ideally, print out everything you want to work on instead of just looking at a projector or a laptop together. Run retrospectives: Retrospectives are very important in distributed teams as well. Dedicated tools are a perfect fit here, and I can highly recommend the tool "Retrium". Ensure 100% team membership: All colleagues should work 100% on the team and not be busy with other projects on the side. Only then is optimal, goal-oriented collaboration guaranteed. Use reliable video conferencing tools: Especially in distributed teams, it helps to be able to see each other during a video call, so make sure your team meetings have reliable equipment. Retrium: Here is the information about Retrium that I received from Niki Kohari: Thanks for sharing information about Retrium with your listeners. Over the last year or so, we've lowered the cost of Retrium, streamlined the upgrade process, and removed discounting from our Team Edition product. This means there's no easy coupon code that people can use when they upgrade to get a discount on our service. We can give people extra time on a free trial, from 30 days to 60 days, but they'd need to reach out to us at help@retrium.com for a trial extension and mention the episode. We can also give 10% off Enterprise Edition, which starts at a 5-team commitment. The cost of Enterprise starts at $3,540 for 5 team rooms on an annual contract, so this is a good fit for larger organizations.
Cloud Foundry is an open source platform-as-a-service (PaaS) on IBM Cloud that enables you to deploy and scale apps without managing servers. With the launch of IBM Cloud Foundry Enterprise Environment, IBM is bringing isolated Cloud Foundry environments to Kubernetes clusters, targeting enterprise developers in regulated industries. In this episode, Carl Swanson from IBM Cloud joins us on the show to discuss why IBM made this move and which industries could benefit from this change.
Docker is making bold moves with Enterprise Edition. Can it be the One Ring to rule them all? We'll ask Kubernetes. https://www.theregister.co.uk/2018/06/13/docker_pitches_kit/ We also discuss: The images for the new Fallout look incredible. https://www.gamespot.com/articles/e3-2018-fallout-76-beta-confirmed-get-in-by-pre-or/1100-6459656/ Mars Opportunity needs solar panel wipers. https://www.theregister.co.uk/2018/06/14/opportunity_mars_sandstorm/ Those crazy Russians and their "security software". https://www.theregister.co.uk/2018/06/13/eu_kaspersky_cyber_defence_motion/ China will chip all your cars. […]
In this first episode your hosts introduce the show, give an update on the news of the week, and interview Kubernetes community manager Paris Pittman. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod News of the week Introducing Heptio Gimbal: Bridging cloud native and traditional infrastructure Gimbal coverage at VentureBeat Docker enables Kubernetes support in Enterprise Edition 2.0 Kubernetes best practices: How and why to build small container images EKS certified Kubernetes Links from the interview Kubernetes Community on GitHub List of SIGs and Working Groups Community calendar Kubernetes Slack Stack Overflow Paris Pittman
Jeremy Blum is working at Shaper, reinventing handheld power tools starting with the revolutionary CNC router, Shaper Origin. Join Altium's Judy Warner and Jeremy for a conversation on making tools for making things. Show Highlights: Shaper is a human-in-the-loop company. CNC, or computer numerical control, varies in implementation; CNC machines can range from desktop size to warehouse size. Shaper Origin was created to be an affordable, portable handheld tool; it works using computer vision and real-time motor control. A precisely calibrated and sophisticated industrial robot that we are selling as a consumer device. "We're both making tools to help other people make things." Links and Resources: Exploring Arduino: Tools and Techniques for Engineering Wizardry Jeremyblum.com Jeremy Blum's YouTube Channel Shaper and The Shaper Origin How to make a chessboard with Shaper Origin Shaper Projects - Forum Shaper Instagram Hi everyone, welcome back to Altium's OnTrack Podcast. This is Judy Warner, and today we have a special guest that I'm eager for you to meet, but before we get going I'd like to make sure that you subscribe to our podcast and favorite us on your favorite RSS feed. You can follow me personally on LinkedIn, which I would love, or on Twitter @AltiumJudy, and Altium is also on Facebook, Twitter, and LinkedIn. So today we have a young rockstar of engineering with us, named Jeremy Blum, and I'm gonna read a little bit, so forgive me while my eyes leave the screen for a moment. On his website it says this: my passion is using engineering to improve people's lives and giving people the tools they need to do the same. Jeremy is currently the head of Electrical Engineering at Shaper, where they're using computer vision to reinvent the way people use handheld power tools. Prior to joining Shaper he was a lead electrical architect for confidential products at Google X, including Google Glass. He has his master's and bachelor's degrees in electrical and computer engineering from Cornell University. He did a lot of amazing things at Cornell and he has an insatiable passion for building things: prosthetic hands, fiber optic LED lighting systems, 3D printers and scanners, and on and on. Some of his work has been featured not only at international conferences and in peer-reviewed journals, but also in popular media outlets such as the Discovery Channel, the Wall Street Journal, and Popular Science magazine. He was also named to Forbes magazine's 30 Under 30 in recognition of his work that has helped America make things and get stuff done. He also holds several patents and he loves to teach. He has a YouTube channel with lots of video tutorials, and - well, I guess I don't know if it's your latest - he's written a book called Exploring Arduino. He spends most of his time investing his talents and time at Shaper. So, Jeremy, welcome. That was a mouthful, so thanks, really. Yeah. So you're awfully young to have such a pedigree, so I want to ask you first: after watching a few of your YouTube videos, since they go way back, I noticed a couple of t-shirts you had worn that said you were a geek. But my first, kind of tongue-in-cheek, question is: are you a geek or are you a nerd? I think geek is the cool term now, so I go with geek - right, if you consider a nerd to be the person who is really into reading super in-depth topics and getting really in the weeds and stuff, I'm kind of guilty of all of the above. I would concur with that, and I think geek is... geek is like a cool term now.
It used to be the uncool term, but now, you know, nerds are like Sheldon Cooper - physics, chemistry, right? Like, smart guy. So tell us a little bit about what your earliest memories were of sort of making and building things. I mean, most people like you remember some pretty vivid memories of taking apart home appliances or putting stuff together. Mm-hmm, yeah. So, you know, it makes a perfect circle of my life, if you will, in that some of my earliest memories of building things are of woodworking, and now I work at a company making woodworking tools. My dad and his dad are fairly experienced woodworkers, so some of the early memories I have are of spending a lot of time in the garage at home building stuff with him. I went to summer camp where they gave you all these outdoor activities - things like water skiing and hiking and all this stuff - but I ended up spending as much time as they would allow me to in the woodworking bunk, which was basically this little 8-foot-by-8-foot room with a couple of tools that, you know, I feel like they wouldn't let kids use nowadays. But at least when I was 12, they let us use jigsaws and band saws and things like that, and I feel the liability issue is different now; it's changed in the last decade or two. But when I was ten or twelve that's what I was doing: I was building furniture that was admittedly not great, but I think pretty good for, like, 11 years old, right? And so those are some of my earliest memories. Before that it's the same thing you'll hear from most engineers, which is Legos and K'NEX and things along those lines, and taking apart stuff around my parents' house and family's. After I did that a couple of times I knew better about keeping track of which screws go where, so we could put it back together. But I've been taking things apart and building stuff for as long as I can remember. It started more on the mechanical side of things and then transitioned towards the electrical, electromechanical, and robotic side of things as I've gotten older. I saw a comment you wrote on your personal website that said you felt that engineering was sort of along the same lines as artistry. What did you mean by that exactly? I think people use art to express themselves and to leave some kind of impact on the world around them - convey an idea, convey a concept - and I don't really think engineering is any different than that. It's a lot more based in equations and math and physics, but at the end of the day, you're still kind of taking some input, some desire that you have, and trying to generate an output that'll have some kind of impact on people, right? So yeah, whether you're building a spaceship or a car or some little doodad or trinket, the goal, at the end of the day, is you want to be able to give it to someone or show it to someone, and have some kind of impact on them and affect the way they think. In some ways similar to art, in my opinion. It's funny that they've kind of come out with this new acronym, right - there was STEM, you know, and now it's STEAM, right, because they've added art. I think the thinking is starting to change. I was raised by an artist and so I've always had that bug, but, you know, you and I were just geeking out about the Falcon Heavy launching. Like, oh, I'm just as thrilled, being in the work I am with technology, as I am with art, so I think there really is a connection there.
So that does a really good job of bridging the divide. STEM felt like this other thing: you were either an artist or you were in a STEM field. Yeah, that's not the case; there's so much overlap. I work with designers on everything, every day, and, right, I couldn't do my job without them. Well, I can tell you that for me, I'm a little bit more of a creative person. I write a lot and express myself through writing and spoken communication, but I was horrible at math - yet I loved making things and I took things apart. But because I didn't have that aptitude for math, or it wasn't encouraged, it made me feel like I was in no-man's land, so for me personally I'm with you. I like that it's sort of crossing over, because I'm like, oh, you know, I am sort of in this, you know? I'm like, why have I spent the last 30 years, you know, in circuit boards, and, you know, how did I get here? But I think it is that connection and that curiosity, right? So we will get to talking about Shaper, which is really what I wanted to talk to you mostly about today, but I wanted to also mention that when you were very young you started this YouTube channel, and you've recorded an awful lot of educational videos and really helpful stuff for makers, hackers, and different people who have sort of used that as a jumping-off place and inspiration. So what inspired you to do that in the first place? In the first place, well... so, I think the very first video I published was me building a computer, and it was my first experience building a computer. I didn't really know what I was doing and I just thought it was really cool and I wanted to make a video of it, and my friends and I were already hobbyist videographers. We really liked making short movies, you know, like kids make short movies - action movies, things like that. But we had already developed that into a business and we got pretty good at it. When I was 12 or 13, a friend of mine from high school and I started our first business, which was doing video production, and it was pretty successful - for, like, weddings and birthday parties, video montages, things like that. Now you can, like, press one button on your MacBook and it'll do it for you, but we lived in this golden age where people hadn't figured that out yet, so we could do it as 12-year-olds and charge money for it. And by the way, the 12-year-olds were the sharpest people in the room. I mean, I still ask my 20-something kids when my phone's not working, you know - you guys really had an affinity for it, and like you said, there's more than one of you. But it was really the rise of technology, and you were kind of right in the middle of that, I suppose? Yeah, the timing was really good. YouTube had kind of just come into existence when I was building that first computer, and we had experience making videos, but they were mostly, you know, pictures of other people for their weddings and things like that, and so we wanted to do something we were passionate about. So, a friend and I, we were going to build this computer anyway; we filmed it, we set it to music, people seemed really into it and then started asking us lots of questions, like, oh, how did you pick this, how did you do that? And it just kind of grew from there. People seemed to really like it and were appreciative and positive - you hear about all the terrible things that people say in YouTube comments, right, but it was all positive. People were appreciating it, and so it snowballed from there.
That's so nice - there's your creativity again, right? The crossover of your creativity and your scientific self. So, do you still keep that website up, or are you pretty much tied up with the startup work? I've been pretty damn busy with Shaper, you know; if I could, I'd love to. The reality is my YouTube stuff is a one-man show. I do all the writing, and a lot of those videos involve me developing projects and all the documentation around the open source stuff, you know. A 10 or 15-minute video takes me, you know, many dozens of hours or more to produce, because I'm such a perfectionist. Like, if I was less of a perfectionist and could make just slightly crappier, but still informative, videos, I would do it. But I have trouble putting out a thing that doesn't have high production value, so I'd love to make more videos and I hope to get back to doing more. But I will admit the frequency of my postings has dropped off considerably, especially in the last two and a half years or so that I've been at Shaper. I've been so focused here. Yeah, okay. Well, it looks like you still have like a hundred and sixty thousand subscribers and a whole bunch more, so it looks like people are still engaging with your stuff even if you're not keeping up with it, so yeah, kudos to you. So you did your Bachelor's and Master's at Cornell, and what was your first job when you got out of school? I had a bunch of jobs while I was in school, and I was also working on my own stuff overlapping with a lot of that. My first real job - real full-time job, I guess - was at Google, after I finished my Master's degree. But before that I had a startup that I was working on for a while that ended up petering out, and before that I worked at MakerBot, the 3D printing company, for a few years - first as an intern, and then I consulted for them remotely while I was in school, working on their electrical stuff. But my first full-time job was at Google X. Yeah, so... um, what kind of things did you work on at Google that you wouldn't have to kill us to tell us? It's pretty out there now, so I can talk about most of it - not all of it, but most of it. The majority of my time there was spent working on the electrical engineering and system architecture for what became Google Glass Enterprise Edition. So there was the first version of Google Glass that came out to consumers, and I won't spend the time going into all the social implications of that product and things that were done poorly or things that were done well. My focus was on what became the Enterprise Edition of the product: totally new hardware, and it's used by a variety of very large companies and medical practices around the country now as an augmented reality platform for basically helping with productivity. So that's a piece of hardware that's basically an upgraded version of the Google Glass most people saw - um, all-new processor and everything inside; everything's new and different. It's used by big companies - I think Boeing uses it on their assembly line to do cable assemblies. Things that they used to look at a manual for, they now see heads-up, and they get feedback as they're doing it, and they, like, cut their time in half. There's a bunch of startups building software on top of the enterprise version of Glass now for medical applications, so doctors use it in their patient interactions to pull up medical records and things like that in real time and make them more efficient and more productive.
There are a bunch of other applications as well, and those kinds of applications are the things that originally got me excited about going to work on that project. Don't get me wrong, I have several pairs of Glass. I don't really wear them anymore, but there was a period, right, where I wore them almost every day and used them for all the mundane things they showed in the original marketing videos, like sending text messages and getting turn-by-turn navigation and stuff, and they were cool for that. But the applications they ended up developing for the Enterprise Edition are, I think, where an augmented reality platform really shines, so I'm glad I was part of the team that worked on that. That sounds like a pretty exciting project and company to work for, you know, pretty much fresh out of a Master's program, so that's exciting. So how did you go from Google to Shaper? And then let's start talking about Shaper, because I've seen some of the videos and I want to run out and get one and go build some stuff. So tell us about that transition and then start filling our listeners in about this incredible new handheld CNC you guys have developed. Yeah, happily. So when I was at Google, one of my tasks, in addition to kind of leading the system architecture for some of the products we were working on, was to always be scouting for external technologies that might be relevant partners for things we were doing in the computer vision space, things like that. So I went to a lot of conferences for this, and my colleague from Google, Joe - Joe is now the CEO at Shaper - Joe and I went to this conference called Solid Con in San Francisco, where we met Alec, who is one of the two co-founders of what is now Shaper; at the time it was still called something else. And so Joe and I walked out of this conference, and everything in there was like, eh - but man... did you see that one booth with the auto-correcting hand tool thing? That thing was so cool. And so one thing leads to another: Joe ends up leaving Google and going to what is now Shaper to become the CEO, joining the co-founders, and shortly after Joe left, I also left to come over to Shaper, to lead the electrical engineering efforts here and to basically be responsible for taking a prototype product that was not manufacturable at all and making it into a manufacturable piece of hardware that we could mass produce and sell to people at a reasonable price - one that would perform well and be reliable and do all the things that you want a power tool to do, right? So let me now explain what Shaper is. Shaper is a handheld robotics company - we like to call ourselves a human-in-the-loop robotics company - and our first product, Shaper Origin, just started shipping to customers about three or four months ago. Uh-huh. And the Shaper Origin is a handheld CNC. For listeners who aren't familiar with CNC, or computer numerical control: CNC machines are what you can imagine a company like IKEA might use to mass-produce furniture. They can range anywhere from desktop size, for several thousand dollars, to the size of a warehouse, basically, you know, going up to millions of dollars. The way a CNC machine works is, you say, I want to cut X design, and you throw in some sheet material.
Normally you pre-program all the paths that a cutting bit is going to take, and it moves around and cuts out this material. So CNC machining is basically the opposite of 3D printing: 3D printing is an additive manufacturing technology, while CNC machining is a subtractive technology - you start with the material and you cut stuff away until you have the solid item that you want in the right shape. So what Shaper Origin does is take that concept and basically shrink it down to a portable handheld power tool. We're trying to bridge the gap between the capabilities of what you can do with hand tools and the capabilities of what you can do with, you know, multi-tens-of-thousands-of-dollars CNC machines - something in the middle that gives a lot of versatile capability. So the way Origin works is, it's computer vision based. We use computer vision and real-time motor control. First you scan your workpiece: we have this stuff called Shaper Tape that you put down on whatever it is you're going to cut. It can be a piece of sheet material, like you'd cut on a CNC machine, or it can be an already fully-built table where you want to make a particular feature - say you want to put a mother-of-pearl inlay in it or something. You put the tape down wherever that cut is, you scan it in, and the tool now knows, based on tracking vision off of those markers, exactly where it is in 3D space. You load in a design file - the tool is connected to Wi-Fi, or there's a USB port - so your design workflow is totally CAD agnostic: you can make it in any 2D or 3D CAD program you want. Yeah, so like AutoCAD - what kind of tools? Anything in the 3D realm. The most common tool our users use is Fusion 360, but in the 2D world the most common is Inkscape or Adobe Illustrator - really anything that can generate an SVG or vector file format. You can do some color coding of that design in advance to instruct the tool how you want it cut, or you can do it all on the tool, but basically the gist is: you make your design in advance and get it to the tool - it flies magically over your Wi-Fi; you put the design in your account online and it just appears on the tool. You pull up the design, you move the tool around your workpiece like a cursor, click to lock the design to the workpiece, and now that design is virtually locked to wherever you chose on your workpiece. At this point it's like a video game - the screen is a five-inch capacitive multi-touch screen. You move the tool around the workpiece, and the tool is basically autocorrect for your hands. There's a corrective region on the tool, and there are motors that compensate for your movements in real time, so you do the rough cuts and the machine does the precision. Oh, see, when I was watching it, you know, you could see the image on the screen, like you were following a pattern - say you were cutting out a rectangle - and I was trying to get my head around, what if you slipped? To me the human was the problem in that equation, so after you do the rough cut, how does it literally handle it? Okay, it all happens in real time, automatically. Oh, okay, I see. Let's say you want to cut a perfect square: you cut mostly a perfect square, but you're human, so you're moving back and forth, you're making micro adjustments as you go, you're shaking back and forth, whatever. It doesn't matter - any movement that you make that's contrary to the design you've placed on the tool virtually is compensated for in real time.
So if you want to cut a perfectly straight line and you're moving in a zigzag, you get a perfectly straight line, because the tool is moving opposite to your zigzag direction in real time, internally. What you end up with is a cut that looks like it came off of a $10,000 or $100,000 CNC machine that might take up an entire room, and so it opens up a tremendous amount of possibility. With CNC machines you work with sheet goods. With our tool you can work on any existing work surface you have: you can inlay designs into your existing wooden pieces, you can go to your kitchen countertop installation and cut out the opening for your sink without any jigs or fixtures, without really thinking about it at all in advance. And most of these things you can even do without ever involving a computer in the process, because we have on-tool design capabilities as well, so you can design things directly on the tool, lock them to your workpiece, and then automatically cut. Yeah. So what kind of materials - obviously wood, say plastics - what kinds of other materials can it work on? Yes, so at the heart of our engine is a 720-watt trim router motor, and so you can really cut anything that you would cut with that. We do hardwood, softwood, plastics, composites, soft metals like aluminum and brass. We've done our fair share of jewellery - dog tags, things like that - with our tool. We've done composite Corian countertop material, soapstone; really, anything that you can imagine cutting with a traditional router you can cut with our tool. We're not changing the physics of cutting, we're just making it a lot easier and guaranteeing you get a high-quality cut with a handheld tool instantly, regardless of experience level. Which is so great. Like, I've always thought it would be so great to work with wood, but I know the learning curve I'd go through would totally discourage me. But when I saw that - there was, I think, a recent video that came out of basically creating an outdoor chessboard on what looked like a tree stump, right? The kind you'd find in the park, I think that's what it was anyway - cutting out each square and then inlaying it with the darker wood, and it was stunning. And I was watching the screen and just this interactive thing, and I was like: I think I could do that with a little training. Yeah, you totally could. So no computer was ever involved in doing that; they literally got a slab of wood and said they wanted to make a chessboard with the tool. Because the tool knows exactly where it is and the scale of everything, you can lock a grid to whatever you're gonna do. Say I want a one-by-one-inch square grid, and I want to cut out every other square to put an inlay in for my chessboard - and you're done. And the learning curve on it - you know, you don't have to take my word for it. If you go to the Shaper Tools Instagram or anything like that, or if you search Shaper-made on Instagram, there are just a lot of people getting them and unboxing them, and you can watch this pretty funny progression. Someone just posted a video on Instagram yesterday, and they're like, oh my god, I got Shaper Origin, and 30 seconds in they'd be like, oh my god, I'm opening the box. 30 seconds later: I'm doing my first cut with Shaper Origin. Again, 30 seconds later: I already figured it out, this thing is so easy, like, I can't believe how easy it was for me to figure out how to do this! And that's part of what we're trying to accomplish.
Well, I love that. I can't wait to see - and I'm sure you guys are excited to see too, because I'm sure your imaginations have run wild with what you can do with it - but once you start shipping it and getting it in people's hands, what they're going to come up with is gonna be phenomenal. Right, oh yeah, absolutely, and we've been beta testing various versions of the prototype of this product for going on two years now - more than two years, I think - so we have a lot of data about how people use and abuse this tool, and we've tried to design all of those workflows and eventualities into it. But people continue to surprise us every day, and it's great - they love it, it's so fun. Well, you know, like I was just saying, for me personally it's, oh, that sounds fun, but all that work, ugh - the barrier would be the learning curve. The other barrier is that good lumber and tools are a bit expensive, so you don't want to screw them up, right? So I would think that having something sort of foolproof, where once you've learned it you feel really confident going to buy a piece of really beautiful wood, you know - I think that would take away that sort of barrier to entry for people to try it. Yeah, I mean, to give you context, I think one of my co-workers is doing a test today where he's over at some guy's house doing straight-up inlays into, like, a wooden floor. Right, so that's brave. Yeah, there's just no room for mistakes there. And this is not the first time we've done that - in fact, there are a bunch of patches on the floor in our office here. We're in a pretty old building in the Mission District in San Francisco, and the wooden floors here have seen better days. We ripped out the carpet when we moved into this office and some of the wooden floors were not in good shape. And, you know, we found all the holes, we did live inlays for them and patched them up, and that's great. That's so cool. So you just said you just started shipping - how long ago? So the first units started shipping out in October. October, okay. So what's the feedback you're getting from these early shipments? I mean, it's great - people are getting them and using them and doing all kinds of amazing projects, and they're posting them online. We have a forum on our website with a lot of people posting projects, but lots of people also post on Instagram and Twitter and Facebook, and I've been sharing what they're doing. It's been a very nerve-wracking and exciting experience to see people finally getting this tool, which we've been working on really hard for several years, in their hands - and the anticipation. We started doing pre-orders for the tool at the end of 2016, so a lot of these people have been waiting well over a year to receive this product that they put their faith in. We told them, we're gonna make this, here's the pedigree of the team, we know what we're doing, we know how to make stuff. But still, as with any pre-order - we didn't use Kickstarter or anything similar to that, you know - there's an amount of risk, and you don't want people not receiving it. You want them getting it and saying, WOW, the build quality is excellent and this thing works exactly the way you said it would, and it's intuitive. It's very heartening to see that. That's so great. So what's next, now that you're shipping Origin? I'm sure you have some things in your back pocket, perhaps, you know, future dreams for your next iterations or other types of tools? Don't tell your secrets - I'm not asking for your secrets, no IP spoiling.
As for the future - the key focus right now is that we're still catching up to pre-orders, right? So we've been ramping up our manufacturing capability and are just now kind of getting up to full speed, you know, the maximum output that we can expect from our factory. It's still gonna be a little while before we finish filling those pre-orders, especially since more and more orders come in every day, and so we have to keep adding more and more to what we're going to be manufacturing. That's the key focus right now: making sure that it goes smoothly and that we're generating a high-quality product that meets a lot of very exacting quality standards. I mean, we really are building a precisely calibrated and sophisticated - basically, industrial - robot that we are selling as a consumer electronic device. And so making sure that every one that comes off the line is identical and works the same way, and that we know exactly how it's going to behave and that it's gonna work with the accuracy and precision that we've promised - that's a big focus right now, that's what we're focused on, and there will be more things in the future for us too. Now, one reason we became acquainted with you, Jeremy - as we know, with every startup you have to wear multiple hats, and it sounds like you're wearing multiple hats, but one of the hats you've been wearing is as an electrical engineer. Electrical engineers and PCB designers will be the bulk of our listeners here. So can you give us a little insight into your electrical design? I know that you've been using Altium, and I know you've used some other tools in your lifetime, but I know you're using Altium now. So tell us a little bit about what's under the hood and what it took to get Origin running from an electrical and electronics standpoint. Absolutely. I mean, I'll leave out some of the exact details, but yeah, we use Altium for all the design of the rigid and flex PCBs inside the product, of which there are many. Like I mentioned, since it is effectively an industrial robot, the electromechanical requirements of the tool, and where the physical electronics are for different things, are complicated. So it's a large multi-board system: there's the camera that's used for scanning the workpiece, there's the capacitive multi-touch display, there's the application processor and the microcontroller that's responsible for all the motor controls, and there are the boards that are out controlling the correction motors and the z-axis motor that moves the actual cutting bit up and down. There's a sensor in the base that's used to do Z touch-off so that the bit knows where it is in space - so there are a lot of boards in there, boards and flexes, all designed with Altium. One of the things - as you mentioned, I've used many CAD packages in the past, basically any major one that you can think of - and one of the first things that I was tasked with when I came to join Shaper was that there was no real, formal electrical design before I was here. There were prototypes and one-off boards that were used to simulate and test certain aspects of the design, but the whole architecture has been done new over the last two or three years since I've been here, to make the product manufacturable.
So the first thing I was tasked with was deciding on a CAD package. In the past that had mostly been dictated to me for one reason or another - like, here's the CAD package you have to use - and I'll say something that was difficult for me at Google: I was working in Altium for some of our work, and then, for reasons that I won't go into, I had to use a competitor product to Altium's, and I did not enjoy the experience. So it was a pretty easy choice when I came here. But one of the driving factors was that we're a very lean team here - as you said, I'm responsible for a lot more than just the electrical engineering and the PCB design - and so the 3D capabilities of Altium have been super important. We have really tight tolerances inside of this product and we have to meet a lot of requirements, and we want things put together perfectly to make sure that it's performing the way we expect it to. So a lot of that ends up playing back down to the PCBs and understanding exactly how much space things are going to take up and exactly where every component is, and how big it is, and which things are touching off to thermal pads or which things are gonna have a certain amount of clearance. There's AC power coming in from the wall; we have to meet certain clearance requirements and make sure those distances are correct and acceptable, not just in 2D but in 3D, to ship this product. So the 3D capabilities of Altium have been incredible. Actually, with the new release of Altium Designer 18, I just started playing with the multi-board assembly feature, which has already been very useful for some of our rigid and flex assemblies that go together. Seeing how those fit together has been super useful and a really important part of my job, and it makes it faster to go from an idea to something that fits. We can build it into the mechanical CAD model of the tool and know that everything is going to assemble together the way that we expect it to and designed it to. Well, looking at Shaper Origin, for all the capability that it has, you know, it really doesn't appear to be terribly larger than a typical hand router, yet, like you're saying, it has so much more capability - so there's got to be a lot crammed inside, including that display and the sensors and all of this. It kind of reminds me actually, coming from the board industry, of drones, where you have to fold everything up and fit it mechanically. Weight and power - that's the mantra, you know, in military work: weight and power, to keep things in that really tight but super functional footprint and be able to withstand the thermal stresses of being in a small package, and the vibration, and all these other things. So I'm sure it was not an easy design? Yeah, we're a startup - we're a lean, small team of engineers here doing this - so we've built a lot of our own testing equipment and fixtures to stress test this machine. It's going to be working in dusty environments, and we want to know: is this machine going to be dust-proof? So we used Shaper Origin to build a dust chamber for itself, stuck an air compressor to it... there are videos of this online: plastic, talcum powder, and sawdust, you know, running it for days straight with all the motors cycling. We did that to analyze different configurations of our z-axis assembly and how the PCBs are sealed and things like that to protect them. We've done all that; the same thing goes for our spindle motor, building systems to test the functionality and lifetime of that.
All things that we've developed in-house to make sure that the design works the way we expect it to, is gonna stand the test of time, and will be a reliable tool that you can use in your shop for many years. Well, it sounds like wonderfully brutal work you've done in the last couple of years - really rigorous from an engineering standpoint. Now, are you manufacturing it - you know, short term and long term, and I don't mean to say that in a loaded way - but are you manufacturing it now locally and in-house, or are you contracting out? I mean, how will you handle the manufacturing challenges over time? Well, for Origin today, like basically any product that you buy, to say the whole product is quote-unquote 'made anywhere' is sort of deceptive, because parts come from all over the world. This is true. The motor comes from Germany, extrusions and plastic parts come from another country, and the final assembly currently happens in the United States. I won't go into more detail than that, but yeah, you know, that's what you'll see on most products now if you really dig into it: it's made everywhere! They really are - that's a really good answer, and it really is a global economy, and that's never more true than in electronics, because we source our parts from all over. So yeah, and when you're in scrappy startup mode especially, right? You have to be able to get it to market affordably and reliably and do all these things. Well, thank you so much, Jeremy. I have really enjoyed this conversation, especially the part about the electronics, and we love supporting startups here, and I hope soon we'll come up and shoot a video at your plant and be able to show our audience some of the neat things you've been able to do with our tool to make your tool. We love that we support companies like Shaper and engineers and designers like you - it's what wakes us up in the morning - so thank you so much for telling the story. Yeah, my pleasure. You know, Shaper and Altium have a lot in common: we're both making tools to help other people make things. Exactly, that's what we do, we wake up in the morning and... the story of Altium started, from its very beginnings, with a couple of university students who were teaching, and, you know, they thought the young, hungry, startup-minded population should have tools, and at that time they weren't available. There were only big, expensive computers and large engineering stations, and they weren't accessible, and so our two scrappy founders said, to heck with that, we need tools. And so they started making them on a PC and haven't looked back since. So that spirit still kind of lives in our building, and I think that's what people still relate to. So that spirit - we do share that in common. So, my last question is - and I hate to ask this of you, because you work so much I'm sure you don't have any spare time - but I have noticed, because of the overlap of arts and technology, that a lot of the design engineers I know usually have really incredible hobbies, like woodworking or painting or music or, you know, some kind of really interesting hobby life. And so I always like to ask at the end of this program - I call it design after-hours - I want to ask you what you do after hours, but I'm worried you don't have any hours after hours. I do still try to make time to work on personal projects, and I have the great fortune of working in a fully-featured wood shop, so that is definitely some of what I do.
A lot of things in my apartment are made in the wood shop here, mostly using Shaper Origin - various pieces of furniture in my apartment, game boards, a Chinese checkers set, and organizers for my kitchen. Simple things like that - just fun projects. And when I'm not doing that stuff - I'm very fortunate that I really like the things I work on, so my hobbies are the same kinds of things - when I'm not working on the electronics for Shaper Origin, I do other software and electronics projects, and I document them on my website and on GitHub and in other places. The thing that's been really interesting for me has been home automation, and this has kind of been an ongoing project for me for the last couple of years: building a natural language, voice-controlled home automation system, something along the lines of how Amazon Echo works. The first version that I built - just to get the record straight - was working long before Amazon Alexa came out. I will concede and say that the current products manufactured by Amazon and Google are way better than anything I've made, but it was still a fun learning process, and it is a system I use in my house every day. It's a natural language voice control system that I have hooked into my lights and my music - I had it hooked into the shades at my old apartment, but not anymore - and I have it tied to my phone. My number one most used feature is... because it's still, like, kind of weird to talk to an imaginary computer thing to ask it to do stuff, and so I still don't feel super great about that, but I do do it, because it's an interesting engineering problem too. It was interesting to learn how to write that natural language processing software, and I've been doing a bunch of that. My most used feature is still just, like, at night when I'm ready to go to bed and I have my phone, I just, like, give it a shake and it turns off all my lights. Oh! So simple. But I want one - I haven't got one. I have two Echos in my house and still I'm not using them for half the stuff that they could do, but they are super fun. Well, again, thank you, Jeremy. It's been great to have you and so great to learn about Shaper. Give us your website real quick and then we'll share some links below with our audience. Yeah, if you want to learn more about Shaper, go to https://shapertools.com/ or just search Shaper Origin on Google, and there are tons of YouTube videos - lots to see of what the tool is capable of - and if you want to know more about what I'm doing, I'm at jeremyblum.com. Very good. Jeremy, thanks so much - you're really fun to watch, and we'll keep our eyes on you and keep our eyes on Shaper, and we look forward to talking to you soon. Thank you, this has been Judy Warner with the OnTrack Podcast. Thank you for joining us for our talk with Jeremy Blum. Please remember to subscribe, and we'll see you next time - and remember to always stay OnTrack.
In this episode, Marcello Sukhdeo talks about how data collected by a Fitbit could be used in a murder case, a new website to combat fake news being launched by Wikipedia founder Jimmy Wales, and a new feature on Google Maps to mark your parking spot. Data from Fitbit to solve a case? A man who is a suspect in his wife's murder may be found guilty due to data collected by his wife's Fitbit. Richard Dabate of Connecticut is accused of killing his wife at their home in December 2015. His statement to the police is totally different from the data collected by his wife's Fitbit. Dabate told police that a masked assailant came into their home at around 9 AM on December 23rd, 2015 and subdued him with "pressure points" before shooting his wife with a gun he owned. He said that the man killed his wife as she was returning through their garage from a workout. But according to Fitbit's data, his wife was moving around for more than an hour after her husband said the murder took place. This goes to show that we have become so entangled in data that our digital fingerprints are all over. I think we will see more and more cases where data will be used to help solve criminal cases in the future. Wikitribune: With all the fake news around, Wikipedia founder Jimmy Wales is launching a new site to fight fake news - it's called Wikitribune. "This will be the first time that professional journalists and citizen journalists will work side-by-side as equals writing stories as they happen, editing them live as they develop and at all times backed by a community checking and re-checking all facts." Facebook's tips for spotting fake news: Be skeptical of headlines. Look closely at the URL. Check the source. Watch for unusual formatting. Check the photos. Check the dates. Check the evidence. Look at other reports. Is the story a joke? Some stories are intentionally false. Google Maps: Google has officially rolled out a new feature to Google Maps on iOS and Android that will help you find your parked car. Google already offers automatic parking location detection in some cases, but now it's easier to control it manually, with notes and images about exactly where you left your car. Other news: Spotify appears to be working on its first wearable device, developing a "category defining product akin to Pebble Watch, Amazon Echo, and Snap Spectacles," based on a job listing discovered on their website. It's possible the company is working on a set of headphones or something like Apple's AirPods. Other hints include internet connectivity and something developed by Spotify itself, rather than an integration with existing hardware. Microsoft officials said the company will be ready to integrate Dynamics 365 for Sales with LinkedIn's Sales Navigator as of July 2017. This integration will help users who have both Dynamics 365 for Sales and LinkedIn get contextual recommendations and tailored content, as well as account and lead updates. Microsoft is making a promotional bundle available that includes Dynamics 365 for Sales, Enterprise Edition, and LinkedIn Sales Navigator Team for $135 per user per month. Twitter has added nine million new users in Q1, which brings the company's total to 328 million. That's the biggest quarter-over-quarter jump for Twitter since early 2015. After the announcement, Twitter's stock went up more than 11 percent in early pre-market trading. Twitter also says that daily active users are up 14 percent over the same quarter last year.
With Java EE 7, it is quite easy to build multi-tier enterprise applications. The fifty-fourth episode of the Anwendungsentwickler-Podcast is a learning-objective quiz on the Enterprise Edition of Java. Contents: Which versions of Java are there? ME, SE, EE. What is Java EE? The Enterprise Edition, an umbrella specification for several sub-specifications that enables the development of multi-tier... The post Java EE 7 (Lernzielkontrolle) – Anwendungsentwickler-Podcast #54 appeared first on the IT-Berufe-Podcast.
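To make the "multi-tier" idea concrete, here is a minimal sketch of a Java EE 7 component; it is illustrative only and not taken from the episode. It combines a stateless session bean (business tier) with a JAX-RS resource (web tier); the class names and paths are invented for the example.

import javax.ejb.Stateless;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Application;
import javax.ws.rs.core.MediaType;

// Business tier and web tier in one class: the container pools the bean,
// manages its lifecycle and transactions, and exposes it over HTTP via JAX-RS.
@Stateless
@Path("/greetings")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greet() {
        return "Hello from Java EE 7";
    }
}

// Activates JAX-RS under /api without any web.xml configuration.
@ApplicationPath("/api")
class RestApplication extends Application { }

Deployed to a Java EE 7 server such as GlassFish 4, a GET request to /api/greetings returns the string; the point the quiz material makes is that the container supplies all of this cross-tier plumbing for you.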
At Devcon, I took the opportunity to meet several people - stay tuned for several more episodes during the rest of this year. For this episode, I spoke with Zsigmond Rab. Zsigmond is Lead Engineer, Technical Support & Trainer at Liferay Hungary. This is a short, informal, tongue-in-cheek talk about support-related issues. (More notes and links are available in the HTML version of this paragraph and in the blog post linked to the episode.)
Thanks for downloading this episode, and thanks again for the feedback on the first episode. The sound quality was not great and we had a few bloopers; I hope this recording is a little better. If you would like to contribute or have any Oracle licence questions, please contact me via the contact page at Madora.co.uk or email me at Kay.williams@madora.co.uk.

Does 12c Multi-tenant offer a real opportunity for cost savings through consolidation? For those not familiar with Oracle 12c, which was released in June last year, it offers a host of new features. The 'c' in 12c stands for Cloud, and the release is focused firmly on providing a backbone for the Oracle Cloud strategy. With all the announcements around cloud, DBaaS, PaaS and the like, I thought it worthy of giving an introduction to Oracle 12c for the non-technical - well, the not too technical. Some four years in development, this was a major release with a fundamentally new architecture. Oracle Database 12c provides a bunch of extras covering database consolidation, query optimisation, performance tuning, high availability, partitioning, and backup and recovery. Tom Kyte does a nice job of explaining the new features on YouTube for those who are a bit more technically minded: https://www.youtube.com/watch?v=ekTTXoHBmWw&feature=youtu.be

It is the database consolidation capability via pluggable databases that really is the game changer for consolidation. Officially known as Multi-tenant, this is a paid-for option that could have a massive impact on the cost of building and running a large database estate. So what is the Multi-tenant option? In simple terms, I like to think of the Multi-tenant option as an egg box, which is the container database, holding many eggs, which are the pluggable databases. The egg box provides the ability to share its memory and background processes across the many eggs it holds: the egg box holds the metadata and the eggs hold the user data. Traditional database architecture requires each database instance to have a discrete set of memory and storage allocated to it, which sometimes results in poorly utilised resources. As your database estate grows, you can find yourself buying more memory and storage on a regular basis. This also has an impact on management, as more effort is required to maintain the hardware infrastructure as well as the growing number of database instances. Add the non-production environments and the disaster recovery and backup options, and the physical servers can soon sprawl. One solution pursued by a great number of organisations for physical server sprawl was virtualisation software. Although this brings real benefits in physical server reduction, it still requires each Oracle database instance to have its own resources set aside. Multi-tenant databases offer an additional level of consolidation by sharing the database resources among the pluggable databases. You can create up to 253 pluggable databases, all managed as a single instance via the container database. That is one giant egg box! In production environments it is not uncommon to see 20 different instances, all requiring routine maintenance from upgrades, patches, tuning and backup, and configured for RAC and Data Guard. With Oracle 12c Multi-tenant, all 20 instances can be treated as one. A DBA can easily plug or unplug an existing database to or from a container database, with no need to change any code in the application; a rough sketch of that plug/unplug cycle follows below.
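As a hedged illustration of that cycle, here is a minimal Java/JDBC sketch that moves a pluggable database between two containers. The host names, passwords, PDB name, and manifest path are all placeholders, not values from the episode; in real life a DBA would more likely run the same SQL from SQL*Plus.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PdbMove {
    public static void main(String[] args) throws Exception {
        // 1. On the source container database, close and unplug the PDB.
        try (Connection cdb1 = DriverManager.getConnection(
                "jdbc:oracle:thin:@//srchost:1521/CDB1", "sys as sysdba", "password");
             Statement st = cdb1.createStatement()) {
            st.execute("ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE");
            st.execute("ALTER PLUGGABLE DATABASE salespdb UNPLUG INTO '/tmp/salespdb.xml'");
            st.execute("DROP PLUGGABLE DATABASE salespdb KEEP DATAFILES");
        }
        // 2. On the target container database, plug it back in and open it.
        try (Connection cdb2 = DriverManager.getConnection(
                "jdbc:oracle:thin:@//tgthost:1521/CDB2", "sys as sysdba", "password");
             Statement st = cdb2.createStatement()) {
            st.execute("CREATE PLUGGABLE DATABASE salespdb USING '/tmp/salespdb.xml' NOCOPY");
            st.execute("ALTER PLUGGABLE DATABASE salespdb OPEN");
        }
    }
}

The XML manifest written by UNPLUG describes the PDB's data files, so the target container can adopt them in place with NOCOPY, which is what makes the "plugged in within seconds" claim plausible.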
When a user connects to a plugged-in database, the environment looks exactly as if the user had connected to a traditional database. The Multi-tenant option helps provide the following:

High consolidation density: many databases share memory and background processes, significantly reducing the hardware required.

Provisioning: a database can be unplugged from one environment and plugged into another in seconds using SQL commands. Spinning up and cloning new instances for test and dev is now a breeze.

Patching and upgrades: patch a new container, then unplug your databases from the unpatched container and plug them into the patched one; job done.

Manage many databases as one: do routine tasks like backup on the container database, not on individual instances. That is an order-of-magnitude saving when you consider a container can hold 253 databases!

Resource management: the Oracle Resource Manager feature works at the pluggable database level, letting you manage resource competition among the databases in your environment.

In the words of Oracle senior vice president Andy Mendelsohn, pluggable databases represent "a fundamental rearchitecture of the [Oracle] database. Now if I'm an administrator, I have one database overall to administer."

Can 12c really deliver substantial cost savings? There is no denying there are cost benefits to be gained through increased agility via rapid cloning, reduced management overheads, and a smaller hardware infrastructure. It was a disappointment to a number of customers that Multi-tenant became a paid option; that cost should be offset by the lower demand for compute, storage hardware and people. The elephant in the room, though, is the impact on licence costs. Seeing a real benefit in reduced licence fees will be a challenge; we all know how Oracle likes to safeguard its support revenues. Although a significant reduction in database cores could result, this will need to be offset against the purchase of the 12c Multi-tenant option, which is some 37% of the list price of Enterprise Edition. Even so, a recalculation of the support fees after dropping a number of database cores should make a contribution. I don't know how many references Oracle has of customers using the 12c Multi-tenant option in anger; I guess not too many. I am sure there is an opportunity for early adopters to make a good case and negotiate a great deal. Some serious analysis and modelling will be required to work through the cost savings on existing deployments. Green-field and new projects will be great candidates for Oracle 12c, and you would be almost crazy not to use it. So what do you think? Have you started looking at 12c in terms of consolidation? Does it stack up?
With SQL Server 2014 released, many DBAs will be tempted to upgrade. However, the licensing costs and debatable improvements in the product will temper the DBA's enthusiasm with the reality of the ROI seen by management. While reading about the licensing changes, I also saw this note from Tom LaRock, where he wrote about the features most of us aren't using. It made me think about upgrades, and perhaps the strategy Microsoft is employing. As Tom mentioned, the features not being used are Enterprise Edition features. This prevents many of us from upgrading to use them, because Enterprise Edition is so expensive. Actually, even Standard Edition is expensive these days, given the per-core licensing, and I suspect lots of companies with SQL Server 2008, 2005, or even 2000 are debating whether or not the upgrade is really worth the cost. Read the rest of "The Subtle Push to the Cloud" at SQLServerCentral.
Kevin Schmidt, Director of Application Platform Product Management at Sun, talks with host Chhandomay Mandal about Java Platform Enterprise Edition 6 and GlassFish Enterprise Server v3.
Matt Hogstrom, architect for Geronimo-based IBM WebSphere Application Server Community Edition and a Geronimo committer, talks about the strengths of Apache Geronimo and what's new in V2.0, including support for Java Platform, Enterprise Edition 5. He talks about the idea behind WebSphere Community Edition, IBM's free application server based on the Geronimo code. Getting involved with the Geronimo community and what's ahead for Geronimo are also discussed.