Horizontal partition of data in a database or search engine
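The idea of horizontal partitioning can be made concrete with a minimal Python sketch, assuming a simple hash-modulo routing scheme; the shard names and the function are invented for illustration, not any particular product's design:

```python
# Toy horizontal partitioning (sharding): each row is routed to one of
# several shards by hashing its shard key, so every shard holds a
# disjoint subset of the rows.
import hashlib

SHARDS = ["shard_a", "shard_b", "shard_c"]

def shard_for(key: str) -> str:
    """Pick a shard deterministically from the shard key."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every row with the same key always lands on the same shard.
assert shard_for("customer:42") == shard_for("customer:42")
```

Because routing depends only on the key, any node can compute where a row lives without consulting a central coordinator.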
Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies
As society evolves, so do its values and principles, but is that desirable for technology that seeks to build the most reliable, trustless and censorship resistant global settlement layer? Ever since the rise in popularity of Solana with its inflow of retail capital in a memecoin gold rush, Ethereum became even more criticized for sticking true to its core values despite the completely divergent demands of market participants. Moreover, the rollup centric scaling roadmap seemed to further silo attention and cause liquidity fragmentation, driving away value from Ethereum mainnet. As institutional demand is expected to grow with the introduction of staking ETFs, a new executive leadership took charge of the Ethereum Foundation to help steer the protocol's narrative at the intersection between community demands and Ethereum's ethos.

Topics covered in this episode:
- Hsiao-Wei's & Tomasz' backgrounds
- How they became co-executive directors of Ethereum Foundation
- Ethereum Foundation's role moving forward
- Decision making in Ethereum Foundation
- Shaping Ethereum's narrative
- The layer 2 landscape
- Scaling Ethereum as the global settlement layer
- Sharding
- Rollup scalability & their trade-offs
- What values drive adoption
- L2 interoperability
- Future goals

Episode links:
- Hsiao-Wei Wang on X
- Tomasz Stanczak on X
- Ethereum Foundation on X
- Ethereum on X

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Friederike Ernst.
In this episode, Shardeum Founder Nischal Shetty chats with Keli Callaghan about a new smart contract platform that scales linearly using dynamic state sharding. Nischal shares his journey into the crypto space, the evolution of the crypto market in India, and the importance of community engagement and education in driving adoption. He discusses the unique features of Shardeum, including its user-friendly transaction model and plans for future growth, emphasizing the role of community and node operators in shaping the platform's success. With Shardeum's mainnet launch coming up in early May, tokenomics, node operators, and the Indian market opportunity are all covered topics. Links: X: https://x.com/shardeum Web: https://shardeum.org/ Whitepaper: https://shardeum.org/Shardeum_Whitepaper.pdf Nischal on X: https://x.com/NischalShetty
Star Wars Galaxies, Ultima Online, and now Stars Reach. Raph Koster has certainly made his mark on MMOs that are designed as systems-led experiences that feel like breathing worlds. This interview goes over Raph's history of game development, and why Stars Reach is the culmination of decades of experience, new exciting technology, and an organic sandbox players can deeply enjoy with each other. If that sounds exciting, consider checking out the Stars Reach Kickstarter: https://www.kickstarter.com/projects/starsreach/stars-reach DL Gaming: A PC Gamecast adds games both new and old to your Steam backlog one slightly inappropriate episode at a time.
An airhacks.fm conversation with Gerald Venzl (@GeraldVenzl) about: discussion on prepared statements and their benefits in Oracle databases, explanation of hard parsing vs soft parsing in database queries, overview of connection pooling and its importance in database performance, introduction to Oracle's Database Resident Connection Pool (DRCP), exploration of Oracle's support for serverless workloads, discussion on PL/SQL and JavaScript support in Oracle databases, brief mention of ADA programming language and its influence on PL/SQL, introduction to GraalVM and its role in Oracle databases, comparison of performance between PL/SQL and JavaScript in Oracle, mention of Oracle database support for ARM architecture including M1 Macs and Raspberry Pi 5, explanation of database sharding vs partitioning, discussion on the benefits of stored procedures for data-intensive operations Gerald Venzl on twitter: @GeraldVenzl
News includes the release of community-maintained prebuilt macOS builds for OTP by the Erlef, advancements in Elixir NX with the ability to "shard" functions, and exciting updates in Phoenix LiveView as it approaches its 1.0 milestone. We also cover Gleam's upcoming release, José Valim's success story with the Elixir type system, and information about the upcoming Elixir is Weird conference. Join us as we dive deeper into these stories and more! Show Notes online - http://podcast.thinkingelixir.com/229 (http://podcast.thinkingelixir.com/229) Elixir Community News https://elixirforum.com/t/new-community-maintained-otp-builds-for-macos/67338 (https://elixirforum.com/t/new-community-maintained-otp-builds-for-macos/67338?utm_source=thinkingelixir&utm_medium=shownotes) – The Erlef has released community-maintained prebuilt macOS builds for OTP, eliminating the need to install additional dependencies. https://github.com/michallepicki/asdf-erlang-prebuilt-macos (https://github.com/michallepicki/asdf-erlang-prebuilt-macos?utm_source=thinkingelixir&utm_medium=shownotes) – The release includes guidance for using these prebuilt builds with asdf as an alternate Erlang plugin. https://dockyard.com/blog/2024/11/06/2024/nx-sharding-update-part-1 (https://dockyard.com/blog/2024/11/06/2024/nx-sharding-update-part-1?utm_source=thinkingelixir&utm_medium=shownotes) – Elixir NX is gaining the ability to 'shard' Nx functions, allowing code to be processed in parallel for increased efficiency. https://bsky.app/profile/akoutmos.bsky.social/post/3laondxqnnc2w (https://bsky.app/profile/akoutmos.bsky.social/post/3laondxqnnc2w?utm_source=thinkingelixir&utm_medium=shownotes) – Peter Ulrich and Alex Koutmos released a paid library called Phx2Ban, a Fail2Ban alternative for the Phoenix framework. Phoenix LiveView is nearing its 1.0 milestone, with interesting PRs being discussed. 
https://github.com/phoenixframework/phoenix_live_view/pull/3482 (https://github.com/phoenixframework/phoenix_live_view/pull/3482?utm_source=thinkingelixir&utm_medium=shownotes) – A PR to keep assigns between live navigation in Phoenix LiveView, enhancing performance by avoiding unnecessary reloads. https://github.com/phoenixframework/phoenix_live_view/pull/3498 (https://github.com/phoenixframework/phoenix_live_view/pull/3498?utm_source=thinkingelixir&utm_medium=shownotes) – A PR to reserve curly brackets for HEEX syntax in Phoenix LiveView, which aims to standardize interpolation syntax. https://github.com/phoenixframework/phoenix_live_view/pull/3478 (https://github.com/phoenixframework/phoenix_live_view/pull/3478?utm_source=thinkingelixir&utm_medium=shownotes) – A PR proposing the concept of 'phx-portal' to allow content rendering outside its normal spot in LiveView. https://x.com/gleamlang/status/1855604711606358394 (https://x.com/gleamlang/status/1855604711606358394?utm_source=thinkingelixir&utm_medium=shownotes) – Gleam is preparing for a new release, with V1.6.0 RC-1 now available. https://github.com/gleam-lang/gleam/releases/tag/v1.6.0-rc1 (https://github.com/gleam-lang/gleam/releases/tag/v1.6.0-rc1?utm_source=thinkingelixir&utm_medium=shownotes) – The release notes for Gleam v1.6.0 RC-1 can be found here. https://github.com/gleam-lang/gleam/blob/v1.6.0-rc1/CHANGELOG.md (https://github.com/gleam-lang/gleam/blob/v1.6.0-rc1/CHANGELOG.md?utm_source=thinkingelixir&utm_medium=shownotes) – The changelog for Gleam v1.6.0 RC-1 is available for review. 
https://github.com/elixir-ecto/postgrex/commit/3308f277f455ec64f2d0d7be6263f77f295b1325#diff-0da854f0c1cda9486d776c72ecda6a2e595a7667b72688669bbd80d6b80f0f96R1210 (https://github.com/elixir-ecto/postgrex/commit/3308f277f455ec64f2d0d7be6263f77f295b1325#diff-0da854f0c1cda9486d776c72ecda6a2e595a7667b72688669bbd80d6b80f0f96R1210?utm_source=thinkingelixir&utm_medium=shownotes) – The Elixir type system identified dead code in Postgrex, showing its progress and usefulness. https://github.com/phoenixframework/phoenix_live_view/commit/6c6e2aaf6a01957cc6bb8a27d2513bff273e8ca2 (https://github.com/phoenixframework/phoenix_live_view/commit/6c6e2aaf6a01957cc6bb8a27d2513bff273e8ca2?utm_source=thinkingelixir&utm_medium=shownotes) – The type system also identified dead code in Phoenix LiveView. https://x.com/josevalim/status/1856288364665639005 (https://x.com/josevalim/status/1856288364665639005?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim shared the success of the Elixir type system in identifying dead code. Elixir is Weird conference has a Call for Talks for their event on April 17, 2025, in Providence, RI, USA. https://bsky.app/profile/elixirisweird.bsky.social/post/3lapjx4lw4k2a (https://bsky.app/profile/elixirisweird.bsky.social/post/3lapjx4lw4k2a?utm_source=thinkingelixir&utm_medium=shownotes) – Details about the Elixir is Weird conference and the Call for Talks can be found here. https://x.com/sasajuric/status/1856261149320192317 (https://x.com/sasajuric/status/1856261149320192317?utm_source=thinkingelixir&utm_medium=shownotes) – Saša Jurić is considering a live coding presentation style for his Alchemy Conf talk. https://alchemyconf.com/ (https://alchemyconf.com/?utm_source=thinkingelixir&utm_medium=shownotes) – More information about Alchemy Conf, taking place from March 31 to April 3, can be found on their website. Discussion about Bluesky uptick and Elixir community members moving there. Do you have some Elixir news to share? 
Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Find us online - Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com) - Message the show - X (https://x.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen on X - @brainlid (https://x.com/brainlid) - Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel on X - @bernheisel (https://x.com/bernheisel) - David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
Want to learn more Postgres? Get on the waiting list for the full course: https://masteringpostgres.com. In this interview, I dive deep with Craig Kerstiens from Crunchy Data into the world of Postgres, covering its rise to prominence, scaling at Heroku, and the power of Postgres extensions. Craig also shares insights on database sharding, the future of Postgres, and why developers love working with it. Follow Craig: Twitter: https://twitter.com/craigkerstiens Crunchy Data Blog: https://www.crunchydata.com/blog Follow Aaron: Twitter: https://twitter.com/aarondfrancis LinkedIn: https://www.linkedin.com/in/aarondfrancis Website: https://aaronfrancis.com - find articles, podcasts, courses, and more. Chapters: 00:00 - Introduction: Welcome to Database School 00:20 - Guest Introduction: Craig Kerstiens and Crunchy Data 01:40 - Craig's Journey from Heroku to Crunchy Data 02:55 - Scaling Postgres at Heroku 04:50 - Mastering Postgres Course Announcement 05:30 - The Importance of Postgres at Heroku 07:50 - The Value of Live SQL with Data Clips 09:25 - Data Clips for Business Intelligence and Real-Time Analytics 11:05 - Heroku's Unique Company Culture and How Data Clips Came to Be 12:30 - Postgres Extensions and Marketplace 14:00 - Citus: Scaling Postgres for Multi-Tenant Applications 15:40 - The Challenges of Sharding in Databases 18:00 - Managing Large Databases and Sharding Keys with Citus 24:00 - The Evolution of Postgres and Its Growing Popularity 31:00 - Postgres for Developers and the Importance of Extensions 35:00 - Extensions as Proving Grounds for Core Postgres Features 37:50 - Building an Extension Marketplace for Postgres 41:00 - Postgres as a Data Platform and Developer Flexibility 46:00 - Why Developers Love Postgres: Stability, Extensions, and Ownership 51:00 - DuckDB: A Fascinating New Database Approach 53:30 - Crunchy Data: What They Offer and Why It Matters 58:30 - Expanding Postgres with DuckDB for Data Warehousing 01:00:00 - Wrapping Up: Where to Find Craig and Crunchy Data
In this episode, we dive deep into the latest in scalability with Avi Zurlo from the Nil Foundation. We unpack sharding, its application in blockchain, and how it differs from traditional rollup solutions. Avi breaks down the complexities of ZK sharding, discusses the challenges of interoperability in a sharded system, and compares different approaches to scaling blockchains. The conversation also touches on the future of global shared state machines and the potential impact of sharding on developer experiences and application design. - - Follow Avi: https://x.com/avizurlo Follow Rex: https://x.com/LogarithmicRex Follow nosleepjon: https://x.com/nosleepjon Follow Expansion: https://x.com/ExpansionPod_ Subscribe on YouTube: https://www.youtube.com/@expansionpod Subscribe on Apple: http://apple.co/4bGKYYM Subscribe on Spotify: http://spoti.fi/3Vaubq1 Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ -- Join us at Permissionless III Oct 9-11. Use code: EXPANSION10 for a 10% discount: https://blockworks.co/event/permissionless-iii -- Timestamps: (00:00) Introduction (01:24) What is Sharding? (11:59) Evolution of Rollup Roadmap (24:58) Permissionless III Ad (26:30) VMs & Scalability (31:58) Is a Globally Shared State Machine Real? (41:53) Fee Market Design of Sharded Systems (44:34) DA vs Sharding - - Disclaimer Expansion was kickstarted by a grant from the Celestia Foundation. Nothing said on Expansion is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Rex, Jon, and our guests may hold positions in the companies, funds, or projects discussed.
Join hosts Lois Houston and Nikita Abraham in Part 2 of the discussion on database sharding with Ron Soltani, a Senior Principal Database & Security Instructor. They talk about sharding native replication, directory-based sharding, and coordinated backup and restore for sharded databases, explaining how these features work and their benefits. Additionally, they explore the automatic bulk data move on sharding keys and the ability to split and move partition sets, highlighting the flexibility and efficiency they bring to data management. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! In our last episode, we dove into database sharding and Oracle Database Sharding in particular. If you haven't listened to it yet, I'd suggest you go back and do so before you listen to this episode because it will give you a lot of context. 00:53 Lois: Right, Niki. Today, we will discuss all the 23ai new features related to database sharding. We will cover sharding native replication, directory-based sharding, coordinated backup and restore for sharded databases, and a few more. 
Nikita: And we're so happy to have Ron Soltani back on the podcast. If you don't already know him, Ron is a Senior Principal Database & Security Instructor with Oracle University. Hi Ron! Let's talk about sharding native replication, which is Raft-based, meaning it is reliable and fault tolerant, usually providing sub-second failover with zero data loss. Tell us more about it, please. 01:33 Ron: This is completely transparent replication built into Oracle sharding that duplicates data across the different shards. So data are generally put into chunks. And then the chunks are replicated between either three or five different shards, depending on how much fault tolerance is required. This is completely provided by the Oracle sharding database, and does not require the use of any other component like GoldenGate or Data Guard. So if you remember when we talked about the architecture, we said that each shard, each database, can have a standby, whether through GoldenGate or through Data Guard, and in that way support high availability. With sharding native replication, you don't rely on that secondary database. Instead, the shards back each other up by holding replicas, globally managing those replicas, making sure everything is preserved, and handling all of the failover operations. Now, this is a logical replication, generally consensus-based, with the different components all aware of each other. They know which component is healthy, depending on the load and on any failures. The sharded database behind the scenes decides which replica actually serves the data to the client. That can provide sub-second failovers with zero data loss. 03:15 Lois: And what are the benefits of this? Ron: The major benefit of sharding native replication is that it is completely transparent to the application or any of the structures. 
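The replication scheme Ron describes, with chunks duplicated across three or five shards and a consensus-style rule governing durability, can be sketched as a toy model in Python; all names and the ring-placement rule are invented for illustration, not Oracle's implementation:

```python
# Toy model of chunk replication with a replication factor (rf) of 3 or 5.
# Replicas are placed on consecutive shards in ring order, and a write
# counts as durable once a majority of its replicas acknowledge it.
def place_replicas(chunk_id: int, shards: list[str], rf: int = 3) -> list[str]:
    """Assign rf consecutive shards (in ring order) to hold copies of a chunk."""
    start = chunk_id % len(shards)
    ring = shards[start:] + shards[:start]
    return ring[:rf]

def write_acknowledged(acks: int, rf: int = 3) -> bool:
    """Consensus-style rule: a write commits once a majority of replicas ack it."""
    return acks >= rf // 2 + 1

shards = ["s1", "s2", "s3", "s4", "s5"]
replicas = place_replicas(7, shards, rf=3)  # chunk 7 is held by three shards
```

With rf=3 the system tolerates one replica failure; with rf=5 it tolerates two, matching the "two server failures" figure mentioned in the episode.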
You just identify that you want to use this replication and specify the replication factor. The rest is managed by the Oracle sharded database behind the scenes. It supports fast failover with zero data loss, usually sub-second failovers. And depending on the number of replicas, it can even tolerate multiple failures, like two server failures. When loads are submitted, they are also load-balanced across all of these shards based on where the data and its replicas are located. So this way, it can also give you somewhat better utilization of the hardware and easier load administration. Generally, it's designed to help you keep your regular SQL-based databases without having to resort to NoSQL environments or other databases. 04:33 Nikita: So next is directory-based sharding. Can you tell us what directory-based sharding is, Ron? Ron: Directory-based sharding basically allows the user to define the values that are used and combined for the different partitions, giving better control over the location of the data: which partition, which shard. So this allows you to set up a good configuration. Now, many times we may have a key that does not have enough distinct values for hash partitioning to distribute the data evenly. Sometimes we may not even know what keys are going to come in the future, and these need to be accommodated later; you really don't want to have to reorganize all of the data around new hash functions. So directory-based sharding is used when data cannot be managed and distributed using hash partitioning, or when we need full control over exactly where data exists. 05:36 Lois: Can you give us a practical example of how this works? Ron: So let's say our company is very small in three different countries. I can combine those three countries into one single shard, and then have three other big countries, each one sitting in its own individual shard. All of this is done through directory-based sharding.
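Ron's country example can be sketched as a plain lookup table in Python; this is a toy illustration with invented country codes and shard names, not Oracle's directory implementation:

```python
# Toy directory-based sharding: an explicit table maps key values to
# shards, so placement is a deliberate decision rather than a function
# of a hash. Small countries share a shard; big ones get their own.
directory = {
    "LU": "shard_small",  # three small countries grouped into one shard
    "MT": "shard_small",
    "IS": "shard_small",
    "US": "shard_us",     # big countries each sit in their own shard
    "IN": "shard_in",
    "BR": "shard_br",
}

def shard_for_country(code: str) -> str:
    """Look up the shard for a key value in the directory."""
    return directory[code]

def move_key(code: str, new_shard: str) -> None:
    """Reassigning a key is a directory update plus a data move."""
    directory[code] = new_shard
```

Unlike hashing, new or future key values can be assigned to shards explicitly as they appear, without reorganizing existing data.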
However, what is good about this is that the directory is a table, created behind the scenes, stored in the catalog, and available to the client, where it is cached and used for connection mapping and data access. So it can give you a lot of very high-level benefits. 06:24 Nikita: Speaking of benefits, what are the key advantages of using directory-based sharding? Ron: The first benefit is that it allows you to group data together based on whatever values you want, depending on where you want to place it across the shards. So all of that is much better and more easily controlled by us or by the designers. Second, when there are not enough distinct values available, hash-based partitioning would result in an uneven distribution of the data. In that case, we may be able to use this directory for better distribution, since we understand the data structure better than a hash function does. Third, you have a specification where you can create future partitions depending on how large they're going to be. Maybe you create them within an existing shard and later put them in another shard. So having all of those controls becomes essential for managing this specific type of data. If a key value grows too big, for example, you can split it; or you can take multiple key values and combine them; or move data from one location to another. All of these operations are maintained automatically behind the scenes once we provide the changes, and the directory and the sharded database manage all of the data structure and movement. And finally, large chunks of data can then be moved from one location to another. 
This is part of the automatic chunk data move, utilized within directory-based sharding to give us control over how we move and manage the data as the load or the size of the data changes. 08:50 Lois: Ron, what is the purpose of the coordinated backup and restore system in Oracle Database Sharding? Ron: So, basically, when we talk about a coordinated backup and restore, remember that in a sharded database I have different databases. Each database is a shard. When you take a backup, each database creates its own backup. So to have consistent data across all of the shards for the whole schema, it is extremely important for these databases to be coordinated when the backup is taken and when the restore is being done. That way you have consistency of the data maintained across all of the shards. 09:28 Nikita: So, how does this coordination actually happen? Ron: You don't submit this through RMAN directly. You submit this through the global management tool that is used for the sharded database. And it's the global management tool that actually submits your request to each database, while maintaining consistency of when the actual backup is taken, at what SCN. That SCN coordination across all of the shards is then maintained for the backup, so you can create a consistent backup or restore to a consistent point in time across the sharded database. Now, this system was enhanced in 23ai to support multiple destinations. So you can now send your backup to an object store. You can send it to ZDLRA. You can send it to Amazon S3. So multiple locations can now be defined where you can send these backups. You can also use multiple recovery catalogs. Let's say I have data that is located in different countries, and we have a requirement that data for each country must stay in that country. Then I need to also use a separate catalog to maintain that partition. 
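The SCN coordination described here can be sketched as a toy model: a coordinator picks one SCN and every shard backs up as of that same SCN, so the set of per-shard backups is consistent as a whole. All class and method names below are invented for illustration, not Oracle's global management tooling:

```python
# Toy coordinated backup: each shard produces its own backup, but all
# backups are pinned to one shared SCN chosen by the coordinator, so a
# restore from the set is consistent across the whole sharded database.
from dataclasses import dataclass

@dataclass
class Shard:
    name: str
    current_scn: int  # the system change number this shard has reached

    def backup_as_of(self, scn: int) -> dict:
        return {"shard": self.name, "scn": scn}

@dataclass
class BackupCoordinator:
    shards: list

    def coordinated_backup(self) -> list:
        # Use the lowest SCN every shard has reached, so all can honor it.
        target_scn = min(s.current_scn for s in self.shards)
        return [s.backup_as_of(target_scn) for s in self.shards]

shards = [Shard("s1", 1005), Shard("s2", 1003), Shard("s3", 1007)]
backups = BackupCoordinator(shards).coordinated_backup()
# Every backup in the set carries the same SCN, so a restore of all
# shards from this set lands at one consistent point in time.
```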
So now I can use multiple catalogs and define which catalog maintains which partition to satisfy those types of requirements, or any data administration requirement when it comes to backup and recovery. In addition, you can also now specify different types of encryption to be used; you can have a different encryption algorithm for each of the databases that you're backing up, identified and then set up for each one of those components. So these advancements now allow you to manage this coordinated backup and restore with all of the various specific configurations that may be required by the data organization. The encryption, as I mentioned, can also be done across that with different algorithms, and you can define different components. Finally, there is much better error handling and response available through this global system. Since things have been synchronized, you get much better information for diagnosing any issues. 12:15 Want to get the inside scoop on Oracle University? Head over to the Oracle University Learning Community. Attend exclusive events. Read up on the latest news. Get first-hand access to new products. Read the OU Learning Blog. Participate in challenges. And stay up-to-date with upcoming certification opportunities. Visit www.mylearn.oracle.com to get started. 12:41 Nikita: Welcome back! Continuing with the updates… next up is the automatic bulk data move on sharding keys. Ron, can you explain how this works and why it's significant? Ron: And by the way, this doesn't have to be bulk data. It could be just an individual row, or it could be bulk data, a huge piece of data that is going to be moved. Now, in the past, when the shard key of an existing record was going to be updated, we basically had to remove that row from the table, moving it to a temporary table or to another location. 
Basically, you're deleting the row, then changing the value and reinserting the row so it lands in the proper location. That causes a lot of work and requires writing specific code to manage those types of situations. And of course, if there is a lot of data, you're now moving that bulk data twice. 13:45 Lois: Yeah… you're moving it to one location and then moving it back in. That's a lot of double work, not to mention that it all needs to be managed manually, right? So, how has this process been improved? Ron: So now, basically, you can just update the value of the partition key, and the data will automatically move to the new location. This gives you complete flexibility of the shard key values. It is also completely transparent, and again, completely managed behind the scenes. All you do is identify what is going to be changed; then the database will handle the actual data location and movement behind the scenes. 14:31 Lois: And what are some of the specific benefits of this feature? Ron: Basically, it allows you to be flexible, to be able to update the shard key without having to worry about which location this value has to exist in, whether you have to delete it and reinsert it, and all of those different operations. This is done automatically by the Oracle database, but it does require you to enable row movement at the table level. So for tables that are expected to have partition key updates at unpredictable times, perhaps made by the clients directly, we may need to enable row movement at the table level and leave it enabled. It does have a tiny bit of overhead when enabled, as it maintains some metadata about row locations behind the scenes. 
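The before-and-after that Ron and Lois describe, a manual delete-and-reinsert versus a single key update that moves the row on its own, can be sketched as a toy model in Python; the structures and function names are invented, not Oracle's row movement mechanism:

```python
# Toy model of automatic row movement: updating a row's shard key
# transparently removes the row from its old shard and re-inserts it on
# the shard the new key hashes to, in one operation.
import hashlib

SHARDS = {name: {} for name in ("shard_a", "shard_b", "shard_c")}

def shard_of(key: str) -> str:
    """Hash-route a shard key to its owning shard."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return list(SHARDS)[h % len(SHARDS)]

def insert(key: str, row: dict) -> None:
    SHARDS[shard_of(key)][key] = row

def update_shard_key(old_key: str, new_key: str) -> None:
    """With row movement enabled: one update; the move happens inside."""
    row = SHARDS[shard_of(old_key)].pop(old_key)  # leaves the old shard
    SHARDS[shard_of(new_key)][new_key] = row      # lands where it belongs

insert("cust:1", {"name": "Ada"})
update_shard_key("cust:1", "cust:9")  # row migrates without manual steps
```

The caller never performs the delete and reinsert; the routing layer does both halves atomically from its point of view, which is the convenience the feature provides.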
But for cases where, let's say, I know when the shard key is going to be changed, and we can use a stored procedure or something similar for that particular change, then when the shard key is updated, the data will automatically move to the new location based on that shard key operation. So we don't need to move the data manually in and out or to different locations. 16:03 Nikita: In our final segment, I want to bring up the update on splitting and moving a partition set, or basically subpartitioning tables and then being able to move all of the data associated with that in a bulk data move to a new location. Ron, can you explain how this process works? Ron: This gives us a lot of flexibility for data management based on future requirements, the size of the data, key changes, or key management requirements. Generally, when we use composite sharding, remember, this is a combination of user-defined partitioning plus system partitioning put together. That gives a bit more control over how the data is distributed evenly across the shards. So sometimes, based on this type of configuration, we may actually need to split a partition, and that can cause the shard key values to be assigned to a new shard space based on the partitioning reconfiguration. This needs to be automatically managed. So when you split partitions or partition sets, the data, based on your configuration and your identification, can automatically move between those shard spaces. 17:32 Lois: What are some of the key advantages of this for clients? Ron: This provides a huge benefit to clients because it gives them the flexibility to better manage their configuration, expanding both the configuration servers and the structures for better management of the data and the load. Data is completely online during all of this data move. 
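The partition-set split and bulk move can be sketched as a routing-table update: chunks at or above a split boundary are reassigned from one shard to a new shard space, and the data follows the new assignment. This is a toy model with invented names, not Oracle's chunk management:

```python
# Toy partition-set split: chunks whose id is at or above `boundary`
# move from `shard` to `new_shard`; everything else stays put.
def split_and_move(routing: dict, shard: str, boundary: int, new_shard: str) -> dict:
    """Return a new routing map after splitting `shard` at `boundary`."""
    return {
        chunk: (new_shard if owner == shard and chunk >= boundary else owner)
        for chunk, owner in routing.items()
    }

# chunk id -> owning shard, before the split
routing = {0: "s1", 1: "s1", 2: "s1", 3: "s2"}
routing = split_and_move(routing, shard="s1", boundary=2, new_shard="s3")
# Only chunk 2 moves: s1 kept chunks 0 and 1, s3 now owns chunk 2.
```

In a real system the routing change and the physical chunk transfer are coordinated so reads keep working throughout, which is the "completely online" property described above.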
Since this is being done behind the scenes by the database, it does not impact the availability of the data for anyone who is actually using it. Data is generally moved using transportable tablespaces, in big chunks. So it's almost like copying portions of the files. If you remember, in Oracle Database we could take a backup of big files as image copies, in pieces. This is similar: chunks of data can be moved and then transported, where possible, depending on the organization of the data itself for those particular partitions. 18:48 Lois: So, what does it look like in practice? Ron: Well, clients can now rearrange their data structure based on adjustments to the partitioning that already exists within the sharded database. The bulk data move then triggers automatically once the customer executes the statement to restructure the partitioning. And all of the clients are still accessing data; all of the data operations are completely maintained behind the scenes. 19:28 Nikita: Thank you for joining us today, Ron. If you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 19:51 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In this two-part episode, hosts Lois Houston and Nikita Abraham are joined by Ron Soltani, a Senior Principal Database & Security Instructor, to discuss the ins and outs of database sharding. In Part 1, they delve into the fundamentals of database sharding, including what it is and how it works, specifically looking at Oracle Database Sharding and its benefits. They also explore the architecture of a sharded database, examining components such as shards, shard catalogs, and shard directors. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! The last two weeks of the podcast have been dedicated to all things database security. We discussed why it's so important and looked at all the new features related to database security that have been released in Oracle Database 23ai, previously known as 23c. 00:55 Nikita: Today's episode is also going to be the first of two parts, and we're going to explore database sharding with Ron Soltani. Ron is a Senior Principal Database & Security Instructor with Oracle University. 
We'll ask Ron about what database sharding is and then talk specifically about Oracle Database Sharding. We'll look at the benefits of it and also discuss the architecture. Lois: All this will help us to prepare for next week's episode when we dive into each 23ai new feature related to Oracle Database Sharding. So, let's get to it. Hi Ron! What's database sharding? 01:32 Ron: This is basically an architecture that allows you to divide data for better computing and scaling across multiple environments instead of having a single system performing the work. So this enables hyperscale computing and other technologies that allow you to distribute your queries and all other requests across these multiple components to get a very fast response. Now, each of these distributed segments is a database called a shard, and this allows you to have some geographical location component while you are not really sharing any of the servers or the components. So it gives you separation and data management for each of the shards separately. However, to the application, the sharded database is totally invisible. As far as the application is concerned, it connects to a global service and submits its statements; everything else is then managed by the sharded database underneath. With sharded tables, the data basically gets distributed across the shards. Normally, this is done through horizontal partitioning. And then the data, depending on the partitioning scheme, will be distributed across, say, server A, server B, server C, which are independent servers running independent databases. 03:18 Nikita: And what about Oracle Database Sharding specifically?
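The horizontal partitioning Ron describes — independent servers, each holding a subset of the rows determined by a sharding key — can be sketched in a few lines of Python. This is a hypothetical illustration, not Oracle's implementation; the server names and hash choice are invented for the example:

```python
import hashlib

# Three independent shard servers, each running its own database.
SHARDS = ["server_a", "server_b", "server_c"]

def shard_for(sharding_key: str) -> str:
    """Hash the sharding key and map it to one of the shards."""
    digest = hashlib.sha256(sharding_key.encode()).digest()
    return SHARDS[int.from_bytes(digest[:8], "big") % len(SHARDS)]

# The application never needs to know this mapping: it connects to a
# global service, and routing by key happens behind the scenes.
assert shard_for("customer-42") == shard_for("customer-42")  # deterministic
```

Because the mapping is a pure function of the key, any router that knows the topology can find a row without consulting the other servers.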
Ron: Oracle Database Sharding allows you to automate how the data is distributed and replicated, and it maintains a directory that defines the complete sharding scheme, while everything is distributed across many servers with no sharing of hardware or software. It gives you very good scaling, based on this partitioning across all of these independent servers. And based on the subset and the discrete data configuration, you can go ahead and distribute this data across these components, where each shard is an independent data location or data component, a subset of data that can be used either individually on its own or globally across all of the shards together. And as we said, to the application Oracle Database Sharding also looks like a single component. 04:35 Lois: Ron, what are some of the benefits of Oracle Database Sharding? Ron: With Oracle Database, you basically have linear scaling capability across as many shards as you like. And all of the different database configurations are supported with this. So you can have RAC databases across the shards, Oracle Data Guard, GoldenGate. So all of the different components are still used to give you the high availability and every other kind of functionality that we are generally used to having with a single database. It provides you with fault tolerance. So each component could be down, or could have its own replicated data; it doesn't affect the other locations or the availability of the data in those other locations. And finally, depending on data sovereignty and configuration, you could actually distribute data geographically across the different locations based on requirements, and also on data access, to provide higher speed for local data management. 05:46 Lois: I'd like to understand more about the architecture of Oracle Database Sharding. Ron, can you first give us a broad overview of how Oracle Database Sharding is structured?
Ron: When it comes to the Oracle Database Sharding architecture, the components include, first, your shards. Each shard is an independent Oracle Database; you decide on a partition key and the partitioning, and that determines how the actual data is divided across those shards. 06:18 Nikita: So, these shards are like separate pieces of the database puzzle…Ok. What's next in the architecture? Ron: Then you have the shard catalog. The shard catalog is a catalog of your sharding configuration; it is aware of all of the components in the sharded database, and the master copy of any replicated object exists in the shard catalog to be maintained from there. It also manages global queries, acting as a proxy. So queries can be distributed across multiple shards, the data from the shards is returned to the catalog to be grouped together, and the result is then sent back to the client. Now, this shard catalog is basically another Oracle Database that is created independently of the shards that contain the actual data, and its job is to maintain this catalog functionality. 07:19 Nikita: Got it. And what about the shard director? Ron: The shard director is a form of global service manager. It understands the sharding by being able to access the catalog, so it knows where everything exists. The client connection pool will hit the shard director. The communication is then either distributed to the shard catalog to be proxied or, if the key is available, the director can send the query directly to the shard where the data exists, based on the key. The shard can then respond to the client directly. So the connection pool and the components for global administration are generally managed by the shard director. 08:11 Nikita: Can we dive into each of these components in a little more detail? Let's go backwards and start with the shard director. Ron: The shard director, as we said, is like a global service manager.
It acts as a regional listener: all of the connection requests come to the shard director and are then distributed from there, depending on the type of connection being used. Now, the director understands the topology; it maintains a complete mapping of the data against the shards. And if the request specifies the shard key, it can route the connection request directly to the appropriate shard where the data resides, for a direct response. 09:03 Lois: And what can you tell us about the shard catalog? Ron: The shard catalog is another Oracle Database created for the special purpose of holding the topology of the sharded database, and it has all of the centralized metadata about your sharded database. It also acts as a proxy. So, if a client request comes in without providing a shard key, the request goes to the catalog, and it can be distributed to all of the shards. The shards that actually have the data can respond, and the data is then combined and sent back to the client. It also holds the master copy of all the duplicated tables that are created in the sharded database. 09:56 Lois: Ok. I've got it. Now, let's talk more about the shards themselves. Ron: Each shard is basically a database, and data is horizontally partitioned to be placed on each of these shards. So, this physical database is called the shard. And depending on the topology of your sharding, there could be user-defined sharding, for example, where multiple keys are in a single shard, or there could be system sharding, where data is distributed across the shards based on a hash value. Now, this is completely transparent to the application. So, as far as the application is concerned, this is a single database, and everything it does generally just operates as a single database interaction.
However, when it comes to the administrators, each shard is a separate database. Each shard can be managed independently and can have its own standby and other components that are then set up for high availability and management of the data operations. 11:21 Do you have an idea for a new course or learning opportunity? We'd love to hear it! Visit the Oracle University Learning Community and share your thoughts with us on the Idea Incubator. Your suggestion could find a place in future development projects! Visit mylearn.oracle.com to get started. 11:41 Nikita: Welcome back! Let's move on to global services and the various sharding methods. Ron, can you explain what global services are and how they function in a sharded database? Ron: A global service is generally the service that the application uses to connect to the sharded database. This is provided and supported through the shard director. So clients are routed using this global service. 12:11 Lois: What are the different sharding methods that are available? Ron: When it comes to the sharding methods, originally we started with system sharding, which is hash partitioning: basically, data is distributed evenly across the shards. Then we needed to allow for user-defined sharding, because sometimes it's not just about distributing the data evenly; it's also about controlling where the data goes, to be able to control individual query execution based on the keys, and even for data sovereignty and placement of the data itself. And then there is composite sharding, which gives you a combination of user-defined sharding and system hash sharding, to better distribute your data across the shards. And finally, sub-partitioning: all types of sub-partitioning are supported, to provide a better structure of the data depending on the application schema design.
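The three placement methods Ron just listed — system (hash), user-defined, and composite — can be sketched as plain Python functions. The country codes and shard names below are made up for illustration, and none of this is Oracle syntax:

```python
import hashlib

def system_shard(key: str, n_shards: int) -> int:
    """System sharding: a hash spreads keys evenly across shards."""
    h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return h % n_shards

# User-defined sharding: the administrator controls placement explicitly,
# e.g. to keep rows in a particular jurisdiction (data sovereignty).
USER_DEFINED_MAP = {"CA": "shard_canada", "DE": "shard_eu", "US": "shard_us"}

def user_defined_shard(country: str) -> str:
    return USER_DEFINED_MAP[country]

def composite_shard(country: str, key: str, shards_per_group: int = 4):
    """Composite sharding: user-defined choice of shard group first,
    then hash distribution within that group."""
    return (user_defined_shard(country), system_shard(key, shards_per_group))
```

For example, `composite_shard("CA", "customer-42")` pins the row to the Canadian group while still balancing load evenly inside that group.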
13:16 Nikita: Ron, how do clients typically connect to a sharded database? Ron: When it comes to the client connections, all the client connections are generally routed to the director and then managed from there. So there are multiple ways that clients can connect. One is direct routing. With direct routing, the client provides the shard key in the request; therefore, the director, knowing the topology, can route the client directly to the shard that has the data. Proxy routing is done by the catalog. This is used when a shard key is not provided, or when data is requested from many shards. The request is sent to the catalog, and the catalog database distributes the query to the shards, collects the results, and combines them, sending the result back to the client while acting as a proxy in the middle. And then there is middle-tier routing. This is when you expose the middle tier to the structure of your sharding, so when the middle tier sends the request, the request already identifies which shard the data is on, and the data is routed properly. But that requires exposing the sharding structures to the middle tier. 14:40 Lois: Let's dive a bit deeper into direct routing. What are the advantages of using this method? Ron: With client request routing, as we talked about, direct routing allows the applications to get very quick data access when they know the key that is used for the distribution of the data. That key is used to access the data from the shard. This gives you a direct connection to a shard from the shard director. And once the connection is established, queries can get data directly from the shard with the key that is supplied. So the shard responds with the data for that particular subset of data. Now, with direct routing, again, you get some advantages.
The advantage is much better performance for fetching a subset of the data, because you don't have to wait for every shard to respond to a particular query. If you want to distribute data geographically, or based on a specific key, all of that is perfectly supported, and it allows you to send your query to the actual location where the data exists. So, for example, data that is in Canada can then be locally accessed in Canada through this direct access. And of course it handles the management of your client connections and the load balancing of those connections, and it supports all types of queries and application requests. 16:18 Nikita: And what about routing by proxy? Ron: Proxy routing is used when queries do not supply the actual sharding key, which identifies the shard where the data resides, or when the routing cannot be properly identified. Then the shard director sends the request to the catalog, which performs the work as the proxy. The proxy then sends the request to all of the shards. If any shards can be eliminated, they will be, but generally all of the shards that could hold any portion of the data will get the request. The results are then sent back to the proxy, and the proxy coordinates the data going back and forth with the client. The shard director basically hands this type of data access over to the catalog to act as the proxy, and the shard director is no longer part of the connection management, since everything is then handled by the shard catalog itself. 17:37 Lois: Can you explain middle tier routing, Ron? Ron: This generally allows you to use the middle tier to decide which shard your data request is routed to. This is a type of routing that can be used where the data geographically has some sovereignty, or where the application is aware of the structure. So the middle tier is exposed to the sharded database topology.
It understands exactly what these components are, so based on the shard key in the specific request, the middle tier can route the application to the appropriate location for the connection. And then either one shard or a subset of the shards will maintain those connections for the data access going back and forth, since the topology is now being managed by the middle tier. Of course, all of the work that is done here is still registered in the catalog, so the catalog is fully aware of any operations that are going on, whether the connection is made through middle-tier routing or through direct routing. 18:54 Nikita: Ron, can you tell us how query execution and DDL operations work in a sharded database? Ron: When it comes to query execution by the application, there are no changes and no requirement to identify specifically how the data is distributed. All of that is maintained behind the scenes based on your sharding topology. For DDL, most of your tables and most of the structures work exactly the same way as they did before. There are some general structures associated with the sharded database that we create and set up initially, with the mappings. Once the mappings are configured, the rest of the components are created just like in a regular database. 19:43 Lois: Ok. What about the deployment process? Is it complicated to set up a sharded database? Ron: The deployment of a sharded database is fully automated using Terraform, Kubernetes, and scripts that are put together. Basically, you provide some of your configuration information and the structure of your topology through an input file, like a parameter file, and then you execute the scripts and it builds everything else based on the structure that you have provided. 20:19 Nikita: What if someone wants to migrate from a non-sharded database to a sharded database? Is there support for that?
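The proxy (scatter-gather) routing discussed above can be sketched as follows. Shards are modeled here as plain in-memory dictionaries rather than real databases, so this only illustrates the fan-out and combine flow the catalog performs:

```python
from concurrent.futures import ThreadPoolExecutor

# Each shard holds only its own horizontal slice of the table.
SHARDS = [
    {"customer-1": "Ava", "customer-4": "Raj"},
    {"customer-2": "Lin"},
    {"customer-3": "Omar", "customer-5": "Zoe"},
]

def query_shard(shard, predicate):
    """A shard scans just its local subset of the data."""
    return [value for key, value in shard.items() if predicate(key)]

def proxy_query(predicate):
    """No shard key supplied: scatter to every shard, gather, combine."""
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda shard: query_shard(shard, predicate), SHARDS)
    combined = []
    for part in partials:
        combined.extend(part)
    return combined

# A query without a shard key has to touch every shard.
print(sorted(proxy_query(lambda key: key.startswith("customer"))))
```

Note that the client sees one combined answer, exactly as if a single database had run the query.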
Ron: If you are going to migrate from a regular database to a sharded database, there are two components that are fully shard-aware. First, you have the Shard Advisor. This can look at your current structure, the schema, how the data is distributed, the workload, and how the data is used, and give you recommendations on what type of sharding would work best for that workload. And then, Data Pump is fully aware of the sharding components. Normally, we use Data Pump to load into each of the databases individually. So instead of one job having to read all the data and move it across many shards, data can be loaded individually into each shard using Data Pump, for much faster operations. 21:18 Lois: Ron, thank you for joining us today. Now that we have a good understanding of Oracle Database Sharding, we'll talk about the new 23ai features related to this topic next week. Nikita: And if you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 21:45 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
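The per-shard loading point Ron makes about Data Pump — one import job per shard instead of one job feeding everything through a single path — can be sketched like this. `load_rows` is a hypothetical stand-in for a real per-shard import job, and the hash routing is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

N_SHARDS = 3

def owning_shard(key: str) -> int:
    """The same kind of hash routing the sharded database would use."""
    h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return h % N_SHARDS

def load_rows(shard_id: int, shard_rows: list) -> int:
    # A real loader would run an import against this one shard only.
    return len(shard_rows)

rows = [f"customer-{i}" for i in range(1000)]
# Split the export into per-shard slices, then load the slices in parallel.
slices = {s: [r for r in rows if owning_shard(r) == s] for s in range(N_SHARDS)}
with ThreadPoolExecutor() as pool:
    loaded = list(pool.map(lambda s: load_rows(s, slices[s]), range(N_SHARDS)))

assert sum(loaded) == len(rows)  # every row lands on exactly one shard
```

Because each slice is loaded independently, the total wall-clock time approaches that of the single largest slice rather than of the whole dataset.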
Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies
In 2017, Vitalik Buterin defined the ‘Scalability Trilemma', which consists of three attributes that every blockchain has to balance depending on its intended use cases: decentralisation, scalability and security. While Ethereum sacrificed scalability in favour of security and decentralisation, others prioritised throughput over the other two. However, a solution was proposed, inspired by Web2 computer science: sharding. While Ethereum's ossification and significant progress in zero knowledge research led to a shift in Ethereum's roadmap away from execution sharding towards L2 rollups, other L1s were designed from the get-go as sharded blockchains. One such example is Elrond, which implemented beacon chain PoS consensus alongside a sharded execution layer. Their recent rebranding to MultiversX alludes to a multi-chain, interoperable ecosystem, in which sovereign chains can communicate in a similar manner to cross-shard transaction routing.Topics covered in this episode:Lucian's background and founding Elrond (MultiversX)Elrond's validator & shard architectureCross-shard composabilityVMs, smart contracts and transaction routingSelf sovereignty and modularityMultiversX visionRoadmapEpisode links:Lucian Mincu on TwitterMultiversX on TwitterxPortal on TwitterSponsors:Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay— the world's first Decentralized Payment Network. Get started today at - gnosis.ioChorus1: Chorus1 is one of the largest node operators worldwide, supporting more than 100,000 delegators, across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional grade security at - chorus.oneThis episode is hosted by Felix Lutsch.
Justin Bons is one of the most popular Bitcoin critics on Twitter. As a fan of new technologies, he believes that sharding and other scaling techniques will save the trilemma and bring uncompromising decentralization. Is it really so?
Sovereignty or interoperability. Your choice, anon. The recent zeitgeist among crypto builders has been the idea of achieving sovereignty while also keeping interoperability. Today, we're diving into this with Avi Zurlo, Chief Product Officer of =nil; Foundation. For this conversation, we had a focus on sharding and how =nil; Foundation is approaching this choice. Since its inception, =nil; Foundation has been a key player in low-level cryptographic infrastructure, focusing heavily on ZK technology. With innovative products like ZK LLVM and a ZK proof marketplace, =nil; is all in on building a scalable ZK rollup, based on zkSharding. In this pod, we explore the important distinctions between sharding and rollup-centric scaling. We also discuss horizontal scaling, distributing workloads across multiple machines, and scaling without sacrificing decentralization (and of course, interoperability). We also look ahead to the roadmap which promises significant scalability, and a long-term oriented approach with the goal to be resilient for decades to come. Somewhat technical, and quite insightful. Hope you enjoy! Website: https://therollup.co/ Spotify: https://open.spotify.com/show/1P6ZeYd.. Podcast: https://therollup.co/category/podcast Follow us on X: https://www.x.com/therollupco Follow Rob on X: https://www.x.com/robbie_rollup Follow Andy on X: https://www.x.com/ayyyeandy Join our TG group: https://t.me/+8ARkR_YZixE5YjBh The Rollup Disclosures: https://therollup.co/the-rollup-discl
This interview features Illia Polosukhin, co-founder of NEAR Protocol, who discusses his journey from working at Google on groundbreaking AI research to founding a revolutionary blockchain platform. The conversation covers key insights into blockchain technology, the significance of the "Attention is All You Need" research paper, and the future vision for NEAR Protocol in transforming internet infrastructure. Enjoy!
Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies
The initial scaling roadmap for Ethereum featured execution layer sharding. However, the rapid advancements of layer 2 scaling solutions in general, and zero knowledge proofs in particular, caused a restructuring of the original plan. The reason was that rollups would have required fewer changes to Ethereum's base layer, and hence lower risks. On the other hand, Near Protocol was designed from the ground up as a sharded system, capable of handling billions of transactions simultaneously, without sacrificing decentralisation or security.Topics covered in this episode:Illia's & Alex's backgroundsSharding and blockchain scalabilityChallenges of building a sharded blockchainNear namespaceShard security & validator samplingStateless validation and data availabilityState witnesses and chunk producersBlock productionShards vs. RollupsHow Near could further improveLeaderless chunk productionChain abstractionEpisode links:Illia Polosukhin on TwitterAlex Skidanov on TwitterNear Protocol on TwitterSponsors:Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay— the world's first Decentralized Payment Network. Get started today at - gnosis.ioChorus1: Chorus1 is one of the largest node operators worldwide, supporting more than 100,000 delegators, across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional grade security at - chorus.oneThis episode is hosted by Meher Roy.
Summary In this episode, Alan Orwick, the CEO and founder of Quai Network, discusses the background and development of Quai, a blockchain platform. He talks about his journey in the crypto space, including his involvement in the Texas Blockchain organization at the University of Texas. Alan explains how Quai solves the blockchain trilemma using a sharding approach and merged mining. He also discusses the future of Quai, including its plans for ecosystem growth and partnerships with other blockchains. Quai is the only scalable Proof of Work blockchain with EVM-compatible smart contracts. It aims to tap into the compute economy and revolutionize Proof of Work. The project has a global community, with active participation from miners in Vietnam, Europe, and Texas. Quai is focused on making mining more accessible, particularly with GPUs, and aims to utilize renewable energy sources for mining. The team is committed to building a decentralized and freely accessible system that provides an alternative to government-controlled currencies. Chapters 00:00 Introduction to Quai Network and Alan Orwick 11:04 Solving the Blockchain Trilemma with Sharding and Merged Mining 28:39 The Future of Quai: Ecosystem Growth and Partnerships 35:16 Exploring Proof of Work and Its Ethos 41:36 Defending Proof of Work and Its Environmental Impact 53:00 More about Quai Connect with Alan and Quai: X (Twitter): @0xalank | @QuaiNetwork Farcaster: @0xalank | @quai YouTube: https://www.youtube.com/@QuaiNetwork Discord: https://discord.gg/quai Website: https://qu.ai To learn more about ATX DAO: Check out the ATX DAO website Follow @ATXDAO on X (Twitter) Connect with us on LinkedIn Join the community in the ATX DAO Discord Connect with us on X (Twitter): Mason: @512mace Nick: @nickcasares Luke: @Luke152 Ash: @ashinthewild Support the Podcast: If you enjoyed this episode, please leave us a review and share it with your network. 
Subscribe for more insights, interviews, and deep dives into the world of Web 3. Tools & Resources We Love Podcast Recording & Editing - Riverside FM: We use Riverside FM to record and edit our episodes. If you're interested in getting into podcasting or just recording remote videos, be sure to check them out! Keywords Quai Network, blockchain, crypto, Texas Blockchain, proof of work, sharding, merged mining, ecosystem growth, partnerships, Quai, scalable, Proof of Work, EVM, smart contracts, compute economy, revolutionize, global community, mining, renewable energy, decentralized, alternative currency
In this episode, we delve into the intricate world of AI inference with the cofounders of Fireworks AI. Discover the strategies behind optimizing AI performance, the importance of balancing latency and throughput, and the nuances of different AI architectures from GPT-3 to Stable Diffusion. Learn about their partnership with Stability AI, their unique focus on reducing total cost of ownership, and their vision for a seamless developer experience. RECOMMENDED PODCAST: How Do You Use ChatGPT with Dan Shipper. Dan Shipper talks to programmers, writers, founders, academics, tech executives, and others to walk through all of their ChatGPT use cases (including Nathan!). They even use ChatGPT together, live on the show. Listen to How Do You Use ChatGPT? from Dan Shipper and the team at Every, wherever you get your podcasts: https://link.chtbl.com/hdyuchatgpt SPONSORS: Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Plumb is a no-code AI app builder designed for product teams who care about quality and speed. What is taking you weeks to hand-code today can be done confidently in hours. Check out https://bit.ly/PlumbTCR for early access.
Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention “Turpentine” to skip the waitlist. CHAPTERS : (00:00:00) Introduction (00:08:34) Compute Stack (00:19:23) Fireworks Product Philosophy (00:24:11) Sponsors : Brave / Omneky (00:25:40) Fine-tuning Strategy (00:38:40) Sponsors : Plumb / Squad (00:40:33) NVIDIA Stack Overview (00:47:14) TensorFlow Triton Service (00:55:25) Reduced Precision Advantages (01:03:57) Different Deployment Scenarios (01:08:27) Seeking Intuition on Sharding (01:19:28) Announcing Stability AI Partnership (01:32:00) Closing Remarks
This week, Sal and Zen Llama host another podcast in the "Technical" series - this time talking with Piers Ridyard who is CEO of RDX Works. The episode goes into depth around what makes Radix such a highly novel design, why sharding is the inevitable solution for horizontally scaling blockchains, and why the notion of blocks needs to be revamped to allow for full atomic composability between shards. Find out why Radix may well be the final nail in the coffin of the scalability trilemma. - - Time Stamps (0:00) - Overview on Radix (3:47) - How Radix reduces risk of exploits (17:25) - Comparing Radix to Solana and Move (28:35) - Radix's novel approach to sharding (43:30) - How do nodes sync up the full state (45:09) - What are trade offs from a DevEx perspective? (48:46) - How to onboard devs to a new programming language (1:00:25) - How to get involved - - Podcast Resources Follow Sal: https://twitter.com/salxyz Follow Dave: https://twitter.com/SolBeachBum Follow Zen : https://twitter.com/ZenLlama Follow Unlayered: https://twitter.com/UnlayeredPod Subscribe on Spotify, Apple, or Google: https://unlayered.io/ Subscribe on YouTube: https://www.youtube.com/@UnlayeredPod - - Episode Resources Follow Piers : https://Twitter.com/PiersRidyard Follow Radix : https://Twitter.com/RadixDLT Radix Website : https://RadixDLT.com
Send us a Text Message. Unlock the full potential of your database management with our deep dive into scalability strategies that can revolutionize how you handle data growth and system performance. Jessica Ho, a sharp-minded listener, brought forth questions that led us to explore the intricate dance of read-through versus cache-aside caching. We break down when to use each technique for the utmost data consistency across varying applications. Not stopping there, we also shed light on Redis and its dual capabilities as a powerhouse in-memory data structure store, adept at enhancing your caching solution both locally and remotely. As we navigate through the labyrinth of database replication, you'll gain an understanding of the follower-leader model and its pivotal role in read-heavy applications—think TikTok or URL shorteners. We don't shy away from discussing the risks of single points of failure and the solutions like automatic failover that keep databases humming along. The conversation gets even more exciting as we delve into the advanced territory of multi-leader replication, a strategy that ups the ante on fault tolerance and caters to a global user base, reducing latency and the dread of write losses. The episode wraps with an exploration of the various sharding methods, each with its own set of benefits and hurdles. Whether it's key-based sharding, range-based sharding, directory-based sharding, or geo-based sharding, we help you navigate these techniques to find the best fit for your specific needs. And as a bonus, we tease what's on the horizon for our tech-savvy listeners: the enthralling world of messaging queues. Be sure to hit subscribe for this and other forthcoming topics that will arm you with the know-how to stay ahead in the tech game.
Join us on this journey, and let's conquer the scalability challenge together! Support the Show. Dedicated to the memory of Crystal Rose. Email me at LearnSystemDesignPod@gmail.com. Join the free Discord. Consider supporting us on Patreon. Special thanks to Aimless Orbiter for the wonderful music. Please consider giving us a rating on iTunes or wherever you listen to new episodes.
Lords: * Cort * https://a.co/d/iRrEZcy * Elena Topics: * My due date is literally tomorrow * Dogme 95 for web development * Visits from the neighborhood cats * Potato, by Jane Kenyon * https://poets.org/poem/potato-0 * Cooperative board games are hard to design * Space themed coop trick-taking card game: https://boardgamegeek.com/boardgame/284083/crew-quest-planet-nine * Building a conlang generator from the phonology up * https://en.wikipedia.org/wiki/Sonorityhierarchy * Linear algebra cursed conlang: https://www.youtube.com/watch?v=ze5ie_ryTk Microtopics: * The Be Real App. * Posting your mortifying skin condition for all the internet to see. * Being born. * The Dance Dance Revolution song "20,November," by Earth Wind and Fire. * PiCoSteveMo. * Tossing around hastily drawn concept art with your team. * Being born, again. * Having a kid for someone else. * Eating cigarettes off of the sidewalk. * A grab bag of thousands of possible pregnancy symptoms. * Literacy as a symptom of pregnancy. * A visceral reminder that you are part of a long chain of humans. * Which came first, humans or birth? * The comfort of the humans who are still around having individual experiences even after you die. * Tips n Tricks for dealing with fear of death. * Inviting dead people onto the show. * Asking for more pro-death art so you can feel better about death. * Pro-life, in the literal sense. * Flowers and mushrooms growing up through the bones. * Returning to the universe to nurture it. * Dumb Ways to Die. * Sum: 40 Tales from the Afterlives. * Thought experiments about something weird that could happen. * The Egg by Andy Weir. * Covering birth and death in the same topic. * Looking at photos of yourself from five years ago and thinking "oh shit!" * I am choosing to no longer have conscious experience, mom. You wouldn't understand, mom. * Swedish with a mouthful of potatoes. * Dogme 95. * Enpoopification. * A protocol for exchanging information on a computer. 
* Rewinding to a kinder, simpler web. * Avoiding all this gestures at the world * New rule: no web servers more powerful than a Raspberry Pi. * The cool thing that was on the web in the mid-90s. * Making art and putting it on the internet and getting a fan base. * The teenage gamer comic series making a comic about prostate exams. * Sharding the internet. * El Goonish Shive. * Anime hammers that you do when someone is being a pervert. * Coming to personal revelations regarding your neurodivergence or gender situation. * How to be a successful artist. * Not knowing if your favorite webcomic had ads because you use an adblocker. * Working at your parents' animation studio as an inbetweener. * Merging your cats into one cat. * Neighborhood coyotes. * Cats beyond the reach of fear. * Window-peering Jim: he's just checking out your remodel. * Putting a GoPro on your neighbor's cat and livestreaming the inside of their house. * Cats with amazing life stories that they'll never tell you. * The consort of coffee grounds. * Making shepherd's pie for an entire hamlet. * A possibly accidental double line break. * A line break corresponding to a conceptual boundary. * The Story of Mel: a Real Programmer. * Adding left angle brackets to the start of every line until word wrap makes it a poem. * Blackout poetry. * Pumping gas as an element of Cottagecore. * A hamlet is just a city in New Jersey. * The fireworks on your forehead game. * A game where everyone stops talking. * The Yelling Game. * A dedicated period of yelling. * Space-themed trick taking games. * The Spaceteam card game. * Coming up with a set of place names that sound like they're from the same culture. * Assigning syllable groups to a morpheme. * Asking Claude. * Interpolating the obvious things. * The Sonority Hierarchy. * A gradient from less vowel-like to more vowel-like. * Cursed Conlangs. * Generating syllables and mushing them together.
What is data sharding, and why do you need it? Carl and Richard talk to Oren Eini about his latest work on RavenDB, including the new data sharding feature. Oren talks about the power of sharding a database across multiple servers to improve performance on massive data sets. While a sharded database is typically in a single data center, it is possible to distribute the shards across multiple locations. The conversation explores the advantages and disadvantages of the different approaches, including that you might not need it today, but it's great to know it's there when you do!
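The core idea in the episode above — splitting one logical database across multiple servers — comes down to deterministic key-to-shard routing. Here is a minimal sketch in Python (illustrative only; RavenDB's actual sharding is configured inside the database, not in application code, and the function names here are made up):

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a key to a shard deterministically via a stable hash.

    Python's built-in hash() is salted per process, so a stable
    digest (MD5 here) is used instead.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# All operations on the same key land on the same shard.
assert shard_for("order:12345", 4) == shard_for("order:12345", 4)

# Keys spread across shards, so a point lookup only touches one server.
shards = {shard_for(f"user:{i}", 4) for i in range(1000)}
assert shards == {0, 1, 2, 3}
```

Because the hash is stable, every client that knows `num_shards` routes a key to the same server without coordination; the hard part, as the episode notes, is what happens when the shard count has to change.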
Hey Everyone, In this video I talk to Franck Pachot about the internals of YugabyteDB. Franck has joined the show previously to talk about general database internals, and it's again a pleasure to host him and talk about Distributed SQL, YugabyteDB, ACID properties, PostgreSQL compatibility, etc. Chapters: 00:00 Introduction 01:26 What does Cloud Native mean? 02:57 What is Distributed SQL? 03:47 Is Distributed SQL also based on sharding? 05:44 What problem does Distributed SQL solve? 07:32 Writes: Behind the scenes. 10:59 Reads: Behind the scenes. 17:01 B-Trees vs LSM: How is the data written to disk? 25:02 Why RocksDB? 29:52 How is data stored? Key-value? 33:56 Transactions: Complexity, SQL vs NoSQL 42:51 MVCC in YugabyteDB: How does it work? 45:08 Default transaction isolation level in YugabyteDB 51:57 Fault tolerance & high availability in Yugabyte 56:48 Thoughts on Postgres compatibility and the future of Distributed SQL 01:03:53 Use cases not suitable for YugabyteDB Previous videos: Database Internals: Part1: https://youtu.be/DiLA0Ri6RfY?si=ToGv9NwjdyDE4LHO Part2: https://youtu.be/IW4cpnpVg7E?si=ep2Yb-j_eaWxvRwc Geo Distributed Applications: https://youtu.be/JQfnMp0OeTA?si=Rf2Y36-gnpQl18yj Postgres Compatibility: https://youtu.be/2dtu_Ki9TQY?si=rcUk4tiBmlsFPYzY I hope you liked this episode, please hit the like button and subscribe to the channel for more.
Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Franck's Twitter and Linkedin: https://twitter.com/FranckPachot and https://www.linkedin.com/in/franckpachot/ Connect and follow here: https://twitter.com/thegeeknarrator and https://www.linkedin.com/in/kaivalyaapte/ Keep learning and growing. Cheers, The GeekNarrator
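The MVCC discussion in the episode above (42:51) can be illustrated with a toy version chain: each row keeps multiple timestamped values, and a reader sees the newest version committed at or before its read timestamp. This is a sketch of the general technique, not YugabyteDB's implementation (which stores timestamped versions as RocksDB keys); all names here are made up:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Version:
    value: str
    commit_ts: int  # logical timestamp at which this version committed

def mvcc_read(versions: list[Version], read_ts: int) -> Optional[str]:
    """Return the newest committed version visible at read_ts.

    Versions are kept newest-first, as in an LSM-backed store where
    each row key carries a chain of timestamped values.
    """
    for v in versions:
        if v.commit_ts <= read_ts:
            return v.value
    return None

chain = [Version("v3", 30), Version("v2", 20), Version("v1", 10)]
assert mvcc_read(chain, 25) == "v2"   # a snapshot at ts=25 sees v2
assert mvcc_read(chain, 5) is None    # before the first commit, nothing visible
```

Writers append new versions instead of overwriting, which is why readers never block writers: each transaction simply reads at its own snapshot timestamp.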
Anatoly Yakovenko is the Founder of Solana. Anatoly's Twitter: @aeyakovenko Solana's Twitter: https://twitter.com/solana Solana's Website: https://solana.com Logan Jastremski's Twitter: @LoganJastremski Frictionless's Twitter: @_Frictionless_ Frictionless's Website: https://frictionless.fund/ ___ Timecodes: 0:00 - Intro 2:55 - All to All Consensus 6:30 - Synchronizing the World's Information 15:30 - Finance is a regressive tax 17:01 - Sharding & L2s do not solve All to All Consensus 20:30 - Amortizing cost between nodes 25:50 - Raw data costs 31:00 - Consensus Design of Solana vs. Others 34:30 - Channel Capacity 44:00 - Node Count and Decentralization 49:20 - Network optimizations 55:25 - Sharding & L2s will never be cheaper than Solana 1:02:15 - Single Threaded VM problems 1:04:45 - Modular vs. Integrated Blockchains 1:08:25 - Infinity scaling is not required 1:12:30 - Asynchronous Execution on Solana 1:18:30 - Closing Thoughts
To get the show notes as well as get notified of new episodes, visit: https://www.scalingpostgres.com/episodes/277-postgres-releases-postgresql-survey-partitioning-sharding-bulk-loading/ In this episode of Scaling Postgres, we discuss new Postgres releases, taking the 2023 State of PostgreSQL survey, partitioning vs. sharding and the fastest way to do bulk loads.
Nikolay and Michael discuss sharding Postgres — what it means, why and when it's needed, and the available options right now. Here are some links to things they mentioned:
- PGSQL Phriday monthly blogging event: https://www.pgsqlphriday.com/
- Did "sharding" come from Ultima Online? https://news.ycombinator.com/item?id=23438399
- Our episode on partitioning: https://postgres.fm/episodes/partitioning
- Vitess: https://vitess.io/
- Citus: https://www.citusdata.com/
- Lessons learned from sharding Postgres (Notion 2021): https://www.notion.so/blog/sharding-postgres-at-notion
- The Great Re-shard (Notion 2023): https://www.notion.so/blog/the-great-re-shard
- The growing pains of database architecture (Figma 2023): https://www.figma.com/blog/how-figma-scaled-to-multiple-databases/
- Timescale multi-node: https://docs.timescale.com/self-hosted/latest/multinode-timescaledb/about-multinode/
- PgCat: https://github.com/postgresml/pgcat
- SPQR: https://github.com/pg-sharding/spqr
- PL/Proxy: https://plproxy.github.io/
- Sharding GitLab by top-level namespace: https://about.gitlab.com/handbook/engineering/development/enablement/data_stores/database/doc/root-namespace-sharding.html
- Loose foreign keys (GitLab): https://docs.gitlab.com/ee/development/database/loose_foreign_keys.html
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc! Postgres FM is brought to you by Nikolay Samokhvalov, founder of Postgres.ai, and Michael Christofides, founder of pgMustard, with special thanks to Jessie Draws for the amazing artwork.
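The Notion posts linked above describe a common tenant-based pattern: pick a large, fixed number of logical shards up front, then map contiguous blocks of them onto a smaller, growable set of physical databases. A sketch of that pattern in Python (illustrative only; the constants and helper names are made up, not Notion's actual scheme):

```python
import hashlib

LOGICAL_SHARDS = 32      # fixed up front; tenants hash to these forever
PHYSICAL_DATABASES = 4   # grows over time by moving whole logical shards

def logical_shard(tenant_id: str) -> int:
    """Stable hash of the tenant/workspace ID into a logical shard."""
    digest = hashlib.sha256(tenant_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % LOGICAL_SHARDS

def physical_database(tenant_id: str) -> int:
    """Contiguous block of logical shards per physical database, so
    scaling out moves whole blocks instead of re-hashing every row."""
    return logical_shard(tenant_id) * PHYSICAL_DATABASES // LOGICAL_SHARDS

# Every tenant lands on exactly one logical shard and one database.
assert 0 <= logical_shard("workspace-42") < LOGICAL_SHARDS
assert all(0 <= physical_database(f"t{i}") < PHYSICAL_DATABASES
           for i in range(100))
```

Because tenants never move between logical shards, a re-shard (like Notion's "Great Re-shard") only changes the logical-to-physical mapping, which can be done by copying whole shards.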
Join Lois Houston and Nikita Abraham, along with Alex Bouchereau, as they talk about Oracle Maximum Availability Architecture, which provides architecture, configuration, and lifecycle best practices for Oracle Databases. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Ranbir Singh, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00;00;00;00 - 00;00;39;11 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and I'm joined by Lois Houston, Director of Product Innovation and Go to Market Programs. 00;00;39;18 - 00;01;12;09 Hi, everyone. Last week, we discussed Oracle's Maximum Security Architecture, and today, we're moving on to Oracle Cloud Infrastructure's Maximum Availability Architecture. To take us through this, we're once again joined by Oracle Database Specialist Alex Bouchereau. Welcome, Alex. We're so happy you're becoming a regular on our podcast. So, to start, what is OCI Maximum Availability Architecture? Now, before we actually jump into the specifics, it's important to understand the problem we're trying to address. 00;01;12;11 - 00;01;38;01 And that is database downtime and data protection. We don't want any downtime or data loss, and the impact of both of these types of occurrences can be significant. Now, downtime costs an average of $350K per hour, and companies average 87 hours of downtime per year, which is pretty significant. So, it's a very, very common occurrence.
It's $10 million for a single outage, depending on how critical the application is. 00;01;38;03 - 00;02;02;28 And 91% of companies have experienced unplanned data center outages, which means this occurs fairly often. So, what can we do about this? How do we address the problem of data loss? It's important to understand a different terminology first. So, we'll start with high availability. High availability provides redundant components to go ahead and ensure your service is uninterrupted in case of a type of hardware failure. 00;02;03;01 - 00;02;24;24 So, if one server goes down, the other servers will be up. Ideally, you'll have a cluster to go ahead and provide that level of redundancy. And then we talk about scalability. Depending upon the workload, you want to ensure that you still have your performance. So, as your application becomes more popular and more end users go ahead and join it, the workload increases. 00;02;24;26 - 00;02;42;28 So, you want to ensure that the performance is not impacted at all. So, if we want to go ahead and minimize the time of our planned maintenance, which happens more often and a lot more often than unplanned outages, we need to do so in a rolling fashion. And that's where rolling upgrades, rolling patches, and all these types of features come into play. 00;02;42;29 - 00;03;10;20 Okay, so just to recap, the key terms you spoke about were high availability, which is if one server goes down, others will be up, scalability, which is even if the workload increases, performance isn't impacted, and rolling updates, which is managing planned updates seamlessly with no downtime. Great. What's next? Disaster recovery. So, we move from high availability to disaster recovery, protecting us from a complete site outage. 00;03;10;27 - 00;03;35;02 So, if the site goes down entirely, we want to have a redundant site to be able to failover to. That's where disaster recovery comes into play. And then how do we measure downtime and data loss? 
So, we do so with Recovery Point Objectives, or RPOs, measuring data loss and Recovery Time Objectives, or RTOs, measuring our downtime. 00;03;35;05 - 00;04;00;22 Alex, when you say measure downtime, how do we actually do that? Well, we use a technique called chaos engineering. Essentially, it's an art form at the end of the day because it's constantly evolving and changing over time. We're proactively breaking things in the system and we're testing how our failover, how our resiliency, and how our switchovers, and how everything goes ahead and works under the covers with all our different features. 00;04;00;23 - 00;04;21;28 A lot of components can suffer an outage, right? We have networks and servers, storage, and all these different components can fail. But also human error. Someone can delete a table. You could delete a bunch of rows. So, they can make a mistake on the system as well. That occurs very often. Data corruption and then, of course, power failures. 00;04;22;00 - 00;04;45;03 Godzilla could attack and take out the entire data center. Godzilla! Ha! And you want to be able to go ahead and have a disaster recovery in place. And then there's all kinds of maintenance activities that happen with application updates. You might want to reorganize the data without changing the application and the small, little optimizations. And these can all happen in isolation and or in combination with each other. 00;04;45;05 - 00;05;19;03 And so chaos engineers take all this into consideration and build out the use cases to go ahead and test the system. Do we have some best practices in place for this, then? Oracle Maximum Availability Architecture, MAA, is Oracle's best practice blueprint based on proven Oracle high availability technologies, end-to-end validation, expert recommendations, and customer experiences. The key goal of MAA is to achieve optimal high availability, data protection, and disaster recovery for Oracle customers at the lowest cost and complexity. 
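The RPO/RTO definitions above come down to arithmetic on three timestamps: the last recoverable copy of the data, the moment of failure, and the moment service is restored. A small worked illustration (the timestamps are made up):

```python
from datetime import datetime

last_backup  = datetime(2024, 1, 1, 2, 0)   # last consistent copy of the data
failure      = datetime(2024, 1, 1, 2, 45)  # outage begins
service_back = datetime(2024, 1, 1, 3, 15)  # service restored

rpo = failure - last_backup   # data written in this window is lost
rto = service_back - failure  # how long users were down

assert rpo.total_seconds() / 60 == 45.0  # 45 minutes of potential data loss
assert rto.total_seconds() / 60 == 30.0  # 30 minutes of downtime
```

Tightening RPO means replicating or backing up more frequently (down to synchronous replication for zero data loss); tightening RTO means faster failover, which is exactly what the MAA reference architectures trade off against cost.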
00;05;19;05 - 00;05;54;07 MAA consists of reference architectures for various buckets of HA service-level agreements, configuration practices, and HA lifecycle operational best practices, and are applicable for non-engineered systems, engineered systems, non-cloud, and cloud deployments. Availability of data and applications is an important element of every IT strategy. At Oracle, we've used our decades of enterprise experience to develop an all-encompassing framework that we can all call Oracle MAA, for Maximum Availability Architecture. 00;05;54;07 - 00;06;20;21 And how was Oracle's Maximum Availability Architecture developed? Oracle MAA starts with customer insights and expert recommendations. These have been collected from our huge pool of customers and community of database architects, software engineers, and database strategists. Over the years, this has helped the Oracle MAA development team gain a deep and complete understanding of various kinds of events that can affect availability. 00;06;20;24 - 00;06;48;11 Through this, they have developed an array of availability reference architectures. These reference architectures acknowledge not all data or applications require the same protection and that there are real tradeoffs in terms of cost and effort that should be considered. Whatever your availability goals may be for a database or related applications, Oracle has the product functionality and guidance to ensure you can make the right decision with full knowledge of the tradeoffs in terms of downtime, data loss, and costs. 00;06;48;11 - 00;07;04;01 These reference architectures use a wide array of our HA features, configurations, and operational practices. 00;07;04;03 - 00;07;29;04 Want to get the inside scoop on Oracle University? Head on over to the all-new Oracle University Learning Community. Attend exclusive events. Read up on the latest news. Get firsthand access to new products and stay up-to-date with upcoming certification opportunities. 
If you're already an Oracle MyLearn user, go to mylearn.oracle.com to join the community. You will need to log in first. If you've not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. 00;07;29;04 - 00;07;57;19 Join the community today. Welcome back. Alex, you were telling us about how Oracle MAA or Maximum Availability Architecture has reference architectures that use a series of high availability features and configurations. But, how do these help our customers? They help our end customers achieve primarily four goals. 00;07;57;22 - 00;08;29;29 Number one, data protection, reducing data loss through flashback and absolute data protection through zero data loss recovery appliance. Number two, active replication, which allows customers to connect their applications to replicated sites in an active-active HA solution through Active Data Guard and GoldenGate. Number three, scale out, which allows customers the ability to scale compute nodes linearly through RAC, ASM, and Sharding. 00;08;30;01 - 00;08;58;19 Four, continuous availability. This allows transparent failovers of services across sites distributed locally or remote, through AC and GDS. These features and solutions allow customers to mitigate not only planned events, such as software upgrades, data schema changes, and patching, but also unplanned events, such as hardware failures and software crashes due to bugs. Finally, customers have various deployment choices on which we can deploy these HA solutions. 00;08;58;22 - 00;09;25;02 The insights, recommendations, reference architectures, features, configurations, best practices, and deployment choices combine to form a holistic blueprint, which allows customers to successfully achieve their high availability goals. What are the different technologies that come into play here? Well, we'll start with RAC. 
So, RAC is a clustering technology spread through different nodes across the different servers, so you don't have a single point of failure. 00;09;25;05 - 00;09;46;13 From a scalability standpoint and performance standpoint, you get a lot of benefit associated with that. You constantly add a new node whenever you want to without experiencing any downtime. So, you have that flexibility at this point. And if any type of outage occurs, all the committed transactions are going to be protected and we'll go ahead and we'll move that session over to a new service. 00;09;46;15 - 00;10;07;27 So, from that point, we want to go ahead and also protect our in-flight transactions. So, when it comes to in-flight transactions, how are we going to protect those in addition to the RAC nodes? Well, we can go ahead and do so with another piece of technology that's built into RAC, and that's the Transparent Application Continuity feature. So, this feature is going to expand the capabilities of RAC. 00;10;08;03 - 00;10;28;18 It's a feature of RAC to go ahead and protect our in-flight transactions so our application doesn't experience those transactions failing and coming back up to the layer, or even up to the end users. We want to capture those. We want to replay them. So that's what application continuity does. It allows us to go in and do that. 00;10;28;21 - 00;10;51;03 It supports a whole bunch of different technologies, from Java, .NET, PHP. You don't have to make any changes to the application. All you have to do is use the correct driver and have the connection string appropriately configured and everything else is happening in the database. What about for disaster recovery? Active Data Guard is the Oracle solution for disaster recovery. 00;10;51;05 - 00;11;29;08 It eliminates a single point of failure by providing one or more synchronized physical replicas of the production database. 
It uses Oracle-aware replication to efficiently use network bandwidth and provide unique levels of data protection. It provides data availability with fast, manual, or automatic failover to standby should a primary fail and fast switch over to a standby for the purpose of minimizing planned downtime as well. An Active Data Guard standby is open, read only, while it is being synchronized, providing advanced features for data protection, availability, and protection offload. 00;11;29;08 - 00;11;50;23 We have different database services, right? We have our Oracle Database Cloud servers, we have Exadata Cloud servers, and we have Autonomous Database. Do they all have varying technologies built into them? All of them are Database Aware architecture at the end of the day. And the Oracle Database Cloud Service, you have the choice of single instance, or you can go ahead and choose between RAC as well. 00;11;50;25 - 00;12;23;25 You can use quick migration via Zero Downtime Migration, or ZDM for short. We have automated backups built in, and you can set up cross-regional or cross-availability-domain DR with Active Data Guard through our control plane. And we build on that with Exadata Cloud Service by going ahead and changing the foundation to Exadata, with all the rich benefits of performance, scalability, and optimizations for the Oracle Database, and all the different HA and DR technologies that run within it, to the cloud. 00;12;23;27 - 00;12;50;22 Very easy to go ahead and move from Exadata on-premise to Exadata Cloud Service. And you have choices. You can do the public cloud, or you can do Cloud@Customer or ExaCC, as we call it, to go ahead and run Exadata within your own data center--Exadata Cloud Service and your own data center. And building on top of that, we have Autonomous, which also builds on top of that Exadata infrastructure.
And is all of this managed by Oracle? Now, at this point, everything's managed by Oracle and things like Data Guard can be configured. We call it Autonomous Data Guard in the Autonomous Database. With just two clicks, you can set up cross-regional or cross-availability-domain DR. And then everything is built, of course, on a highly available multitenant RAC infrastructure. 00;13;19;15 - 00;13;48;02 So, it's using all other technologies and optimizations that we've been talking about. Thanks, Alex, for listing out the different offerings we have. I think we can wind up for today. Any final thoughts? So high availability, disaster recovery, absolute requirements. Everybody should have it. Everybody should think of it ahead of time. We have different blueprints, different tiers of our MAA architecture that map to different RTO and RPO requirements depending upon your needs. 00;13;48;04 - 00;14;12;01 And those may change over time. And finally, the business continuity we can provide with MAA is for both planned maintenance and unplanned outage events. So, it's for both. And that's a critical part to this as well. Thank you, Alex, for spending this time with us. That's it for this episode. Next week, we'll talk about managing Oracle Database with REST APIs, and ADB built-in tools. 00;14;12;04 - 00;16;57;28 Until then, this is Nikita Abraham and Lois Houston signing off. That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
As your system scales, eventually you'll find that your database is the source of all your problems. Apps are often a lot easier to scale than databases, but eventually you have to deal with the database itself. The post Database Sharding appeared first on Complete Developer Podcast.
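One reason the database becomes "the source of all your problems" at scale is re-sharding: with naive `hash(key) % n` routing, adding one server remaps most keys. Consistent hashing is a common mitigation, sketched below in Python (illustrative only; the class and parameter names are made up):

```python
import bisect
import hashlib

def _h(s: str) -> int:
    """Stable integer hash for placing points on the ring."""
    return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

class ConsistentHashRing:
    """Each node owns many points on a ring; adding a node moves
    only roughly 1/N of the keys instead of almost all of them."""

    def __init__(self, nodes, vnodes=100):
        self._ring = sorted((_h(f"{n}#{i}"), n)
                            for n in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def node_for(self, key: str) -> str:
        # First ring point clockwise of the key's hash owns the key.
        i = bisect.bisect(self._keys, _h(key)) % len(self._ring)
        return self._ring[i][1]

before = ConsistentHashRing(["db1", "db2", "db3"])
after = ConsistentHashRing(["db1", "db2", "db3", "db4"])

keys = [f"user:{i}" for i in range(1000)]
moved = sum(before.node_for(k) != after.node_for(k) for k in keys)
# Roughly a quarter of keys move to db4; naive hash % n would move ~75%.
assert 0 < moved < 500
```

The existing nodes' ring points don't change when `db4` is added, so only the keys that now fall on `db4`'s points relocate; everything else stays put.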
Software Engineering Radio - The Podcast for Professional Software Developers
Sugu Sougoumarane discusses how to face the challenges of horizontally scaling MySQL databases through the Vitess distribution engine and Planetscale, a service built on top of Vitess. The journey began with the growing pains of scale at YouTube around the time of Google's acquisition of the video service. This episode explores ideas about topology management, sharding, Paxos, connection pooling, and how Vitess handles large transactions while abstracting complexity from the application layer.
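Vitess's sharding model, discussed above, routes each row by mapping its sharding key to a keyspace ID and assigning every shard a half-open range of that ID space. A heavily simplified sketch (real Vitess uses 8-byte keyspace IDs and pluggable "vindexes"; the one-byte version and function names here are illustrative only):

```python
import hashlib

# Vitess-style key ranges: "-80" covers keyspace IDs [0x00, 0x80),
# "80-" covers [0x80, 0x100). Together they span the whole keyspace.
SHARDS = [("-80", 0x00, 0x80), ("80-", 0x80, 0x100)]

def keyspace_id(user_id: int) -> int:
    """First byte of a stable hash of the sharding key (simplified)."""
    return hashlib.sha256(str(user_id).encode("utf-8")).digest()[0]

def shard_for(user_id: int) -> str:
    kid = keyspace_id(user_id)
    for name, lo, hi in SHARDS:
        if lo <= kid < hi:
            return name
    raise AssertionError("key ranges must cover the whole keyspace")

# Every row deterministically belongs to exactly one shard.
assert all(shard_for(i) in ("-80", "80-") for i in range(100))
```

The appeal of range-based routing is that a split is local: dividing "80-" into "80-c0" and "c0-" only moves rows in that range, which is how Vitess reshards without touching the rest of the keyspace.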
In the ever-evolving world of blockchain and cryptocurrency, it can take time to keep up with the latest developments, technologies, and protocols. With so much jargon and technical language, it's easy to feel intimidated and overwhelmed, even for those well-versed in the space. However, on this episode of Tech Talks Daily, we have an exceptional guest who has a talent for simplifying complex topics and making them accessible to all. Meet Adam Simmons, the Chief Strategy Officer of Radix, a DeFi protocol that aims to lower barriers and bring decentralized finance to the mainstream. In this episode, we dive deep into what sets Radix apart from other DeFi platforms, how to scale blockchain, the future of DeFi, and how crypto can benefit the unbanked in underdeveloped economies. Adam's expertise and patient approach make him the perfect guide to explore the intricacies of DeFi and its potential impact on the world. Radix's public ledger and token ($XRD) with a market cap of over $1bn is the only decentralized network that allows for fast building, rewards everyone who contributes to making it better, and scales without friction. During the conversation, Adam discusses the various challenges that DeFi faces in gaining mainstream adoption, such as high fees and slow transaction times. In addition, he highlights the importance of creating a user-friendly and seamless experience for newcomers to the space and predicts that DeFi will have more liquidity than any other single exchange within the next ten years. Adam shares his insights about the future of DeFi and its impact on unbanked and underdeveloped economies. By using crypto and blockchain technology, individuals and businesses in these regions can access financial services that were previously unavailable to them. This could lead to a more inclusive and equitable financial system.
Adam Simmons' passion for DeFi, and his ability to communicate its potential impact clearly and concisely make this episode a must-listen for anyone interested in the future of finance and technology. Sponsored VPN Offer https://www.piavpn.com/techtalksdaily
Sandi, the CEO of Calimero, is leading the groundbreaking project to introduce private sharding to NEAR protocol. Private sharding is a revolutionary concept that allows public sidechains to operate on a protocol that stores data privately, is semi-decentralized, and can be fully run on a sidechain or integrated into the mainnet. This innovative approach paves the way for zero gas fees by creating app chains for specific use cases and expands the possibilities of usage. Private shards offer tremendous potential, from building private DAOs to providing fine-grained permissions for public proposals visible to everyone. Private shards utilize the same development kit as the mainnet, with added tools such as a console for dimension management. The podcast covers many other topics related to the blockchain industry, including the future of digital currencies, the importance of security and compliance, and the steps required to build a sustainable Web 3.0 business. Calimero, a Web 3.0 infrastructure provider, is taking important steps to launch their project on testnet and Mainnet after additional checks, and to collaborate with Aurora to support EVM deployments. The team is attending conferences to engage with potential customers and generate revenue and emphasizes the need for a strong team of engineering, marketing, and sales experts to build a sustainable business model in the Web 3.0 industry. Overall, this podcast provides valuable insights into both the technical and business aspects of the blockchain industry, highlighting the exciting potential for decentralized organizations and infrastructure projects in the future. 
Calimero https://twitter.com/CalimeroNetwork Sandi https://twitter.com/chefsale Ready Layer One Podcast https://readylayeronepodcast.com/ twitter.com/ready_layer_one Joe https://twitter.com/joespano_ Jared https://twitter.com/jarednotjerry1 NEAR near.org/ Aurora (NEAR EVM) https://aurora.dev/ NO FINANCIAL ADVICE– The Podcast is provided for educational, informational, and entertainment purposes only, without any express or implied warranty of any kind, including warranties of accuracy, completeness, or fitness for any particular purpose. The information contained in or provided from or through this podcast is not intended to be and does not constitute financial advice, investment advice, trading advice, or any other advice. You should not make any decision, financial, investment, trading, or otherwise, based on any of the information presented on this website without undertaking independent due diligence and consultation with a professional broker or financial advisor. You understand that you are using any and all information available on or through this podcast at your own risk. RISK STATEMENT– The trading of Bitcoin and alternative cryptocurrencies involves potential risks. Trading may not be suitable for all people. Anyone wishing to invest should seek his or her own independent financial or professional advice.
Bill provides updates on the current state of the economy and the upcoming Ethereum upgrades. Some of the key highlights of the session are below: Macro Update - According to the latest release of the S&P Flash US PMI Index, business activity has declined significantly over the past two years. The PMI has fallen to its lowest level since May 2020, and new business at service providers has shrunk significantly since the early months of the COVID-19 pandemic. Job growth has slowed, manufacturers and service businesses are being cautious with hiring, and we are also seeing significant layoffs in the tech sector. The Federal Reserve has raised rates by 50 basis points, but more rate hikes are expected. ETH Update - Ethereum developers have announced plans to upgrade the network with the Shanghai update in March 2023, which will allow for the withdrawal of staked ETH. There is concern among some Ethereum users that enabling withdrawals may lead to a large number of stakers withdrawing their deposited ETH, which could negatively affect the altcoin's price. The next planned upgrade, EIP-4844, expected in November 2023, will include changes that pave the way for sharding and would significantly reduce the cost of L2 transactions, making it easier to further scale the Ethereum network.
Dive into STEP Finance and learn what makes it so great for Solana users. Trade BTC, ETH, AVAX and other top cryptocurrencies with up to 30x leverage directly from your wallet (& save on fees using our link HERE). Spotify | YouTube | iTunes DeFi Slate Fam: There's been a few reasonably sized debates on Twitter about Ethereum's scaling roadmap, including proto-danksharding. Some dot eth's seem to like it while others think it's foolish. While you may or may not have an opinion, there is something you should know as a crypto user. Shardeum is taking sharding to a whole 'nother level with their notable tech stack that includes high levels of security, unprecedented TPS, and a profitable node hosting system that incentivizes more and more nodes as the demand for throughput increases. This episode is intense, it's epic, and it goes deep into what we believe is 100% the future of scaling to billions of users. Welcome to the future. Cheers, Andy
Welcome to Who Sharded? Which is a weekly news update show all about the NEAR and Aurora ecosystems. And we get our news from NEARWEEK's daily newsletter. It's def worth reading! Ready Layer One podcast https://readylayeronepodcast.com/ NEARWEEK https://nearweek.com/ This week's stories: 20 million wallets Nightshade phase one https://twitter.com/NEARWEEK/status/1572303921233940481?s $NEAR: Positioned to Become the Leading Orderbook Chain https://medium.com/@ProximityFi/near-positioned-to-become-the-leading-orderbook-chain-b428c851bd6e The 'nearweek.pool.near' is deployed. Its purpose is to fund NEARWEEK's compliant DAO payments which powers the weekly newsletter and in time all content production. Come stake with us if you like what we do! freeCodeCamp.org receives a grant from NEAR Foundation The grant will be used to develop interactive Web3 courses around NEAR protocol and our ecosystem. The curriculum will also contain 10+ interactive practice projects to help you learn: how to build and deploy smart contracts with JavaScript, client side dApps and a range of real-world applications. The first of these are now in open beta. Go give it a try! Phase 1 of sharding is now released to mainnet. Upgrade is expected to be done by Sep 28 once enough validators adopt the new version. LFG!! Joe https://twitter.com/joespano_ Jared https://twitter.com/jarednotjerry1 NEAR near.org/ Aurora https://aurora.dev/ NO FINANCIAL ADVICE– The Podcast is provided for educational, informational, and entertainment purposes only, without any express or implied warranty of any kind, including warranties of accuracy, completeness, or fitness for any particular purpose. The information contained in or provided from or through this podcast is not intended to be and does not constitute financial advice, investment advice, trading advice, or any other advice.
You should not make any decision, financial, investment, trading or otherwise, based on any of the information presented on this website without undertaking independent due diligence and consulting a professional broker or financial advisor. You understand that you are using any and all information available on or through this podcast at your own risk. RISK STATEMENT – The trading of Bitcoin and alternative cryptocurrencies involves potential risks. Trading may not be suitable for all people. Anyone wishing to invest should seek his or her own independent financial or professional advice.
In this episode of Scaling Postgres, we discuss using Rust for Postgres extensions, performance comparisons of TimescaleDB vs. Postgres, uninterrupted writes when sharding in Citus and the Postgres data flow. Subscribe at https://www.scalingpostgres.com to get notified of new episodes. Links for this episode: https://postgresml.org/blog/postgresml-is-moving-to-rust-for-our-2.0-release/ https://www.timescale.com/blog/postgresql-timescaledb-1000x-faster-queries-90-data-compression-and-much-more/ https://www.citusdata.com/blog/2022/09/19/citus-11-1-shards-postgres-tables-without-interruption/ https://www.crunchydata.com/blog/postgres-data-flow https://www.cybertec-postgresql.com/en/postgresql-sequences-vs-invoice-numbers/ https://postgrespro.com/blog/pgsql/5969741 https://www.crunchydata.com/blog/fun-with-postgres-functions https://www.depesz.com/2022/09/18/what-is-lateral-what-is-it-for-and-how-can-one-use-it/ https://www.postgresql.fastware.com/blog/column-lists-in-logical-replication-publications https://blog.dalibo.com/2022/09/19/psycopg-pipeline-mode.html https://pganalyze.com/blog/5mins-postgres-python-psycopg-3-1-pipeline-mode-query-performance https://ideia.me/using-the-timescale-gem-with-ruby https://b-peng.blogspot.com/2022/09/configuring-vip-route-table.html https://postgres.fm/episodes/index-maintenance https://postgresql.life/post/peter_smith/ https://www.rubberduckdevshow.com/episodes/59-rails-postgres-scaling-with-andrew-atkinson/
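The Citus item above is about hash-distributing a Postgres table across shards without interrupting writes. As a rough sketch of the underlying idea only (this is not Citus's actual hash partitioning, which uses Postgres hash functions internally; the function name and shard count here are invented for illustration), a distribution key can be mapped deterministically to one of a fixed set of shards:

```python
import hashlib

SHARD_COUNT = 32  # hypothetical default; the real shard count is a table property

def shard_for(distribution_key: str, shard_count: int = SHARD_COUNT) -> int:
    """Map a distribution key to a shard by hashing.

    A stand-in for real hash partitioning: SHA-256 here, where Citus
    would use Postgres's own hash functions.
    """
    digest = hashlib.sha256(distribution_key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % shard_count

# Rows sharing a distribution key always land on the same shard,
# so a query filtered on that key touches exactly one worker node.
same_shard = shard_for("device-42") == shard_for("device-42")
```

Because routing is deterministic, single-key queries can be sent to one shard instead of fanning out to all of them, which is what makes the distribution key choice so important.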
Welcome to the NFT Ninjas Podcast, the daily podcast that brings you the latest in NFT news, how-to guides and Ninja-level inspiration. In this episode, we talk about sharding: what is it and why should we care? Brought to you by the dynamic ninjas themselves: Katie Brinkley, a former broadcaster, social media strategy guru, and NFT addict, and Vince Warnock, best-selling author, award-winning marketer and the founder of the Kanji Club NFTs, ATG Publishing and more. Join them as they guide you on your journey to become a blackbelt in NFTs. NFT Ninjas Podcast = https://christmasninjanft.com/podcast Christmas Ninja NFTS - https://christmasninjanft.com/ Free 5-Day NFT Challenge - https://christmasninjanft.com/free-nft-challenge/ Nominate someone that is worthy - https://christmasninjanft.com/worthy-nominations/
Episode 3 of Between 2 Layers! SUBSCRIBE on YouTube: https://www.youtube.com/channel/UCR_WlrGou7hm0ACXYpldcuA SUBSCRIBE on SPOTIFY: https://open.spotify.com/show/6j9P9RsV9gBz9mLhet0QRe?si=3b93146fbac04b42 SUBSCRIBE on APPLE: https://podcasts.apple.com/us/podcast/between-2-layers/id1637105783 Timestamps: 00:00 - Intro 00:37 - Why is Asia At Forefront of Web3 Gaming? 05:45 - How to Create a Successful Web3 Economy 08:09 - IP Protection in the New World Order 15:31 - Gaming Super Apps 16:46 - The Ethereum Merge & Carbon Neutrality 24:20 - The Impact of Sharding 26:13 - What Will Happen to the Price of Ethereum? 32:21 - The ZK Wars are Heating Up 37:02 - Growing to a Billion Users 41:57 - Partnership with GameStop
Welcome to Who SHARDED?, a weekly news #cryptopodcast show all about the #nearprotocol and #aurora ecosystems. Today, we talk about the Stader exploit, Joe's DeFi investing manifesto, the next phase of sharding, crypto wallets, NFTs, marketing in a decentralised world, and NEARCON. Ready Layer One podcast https://readylayeronepodcast.com/ NEARWEEK https://nearweek.com/ Timestamps [00:53] The Stader exploit: Stader.Near's staking contract was hacked; the exploiter managed to mint 20M+ NEARx and swap it for $NEAR through liquidity pools on Jumbo and Ref Finance. [03:49] What you need to know about this exploit. [06:49] Web3 decentralised finance. [11:59] What is Joe's DeFi investing manifesto? [17:13] 30+ wallets on NEAR; MyNearWallet x HAPI integration [19:25] What is the wallet selector? [22:10] Auditing NEAR wallets [26:30] Good Twitter Spaces to check out [28:06] NFTs [31:04] NEARCON 2022 [31:41] Nightshade Sharding github.com/near/nearcore/… [34:27] Marketing in the decentralised world. Notable quotes: “Never use anything that is Beta or Just Launched.” “The technology is only as good as the person who wrote it.” The NEARWEEK newsletter has reached edition #70! https://nearweek.com/newsletter/edition-70 AURORA Aurora Weekly Update: Axelar Launch, Fractal, NEARCON & more! Watch it here Or Read here Ready Layer One Podcast https://readylayeronepodcast.com/ Joe https://twitter.com/joespano_ Jared https://twitter.com/jarednotjerry1 NEAR near.org/ Aurora https://aurora.dev/
Spencer was one of the original creators of open-source, cross-platform image editing software GIMP (GNU Image Manipulation Program), authored while he was still in college. He went on to spend a decade at Google, plus two years as CTO of Viewfinder, later acquired by Square. In 2014, he cofounded Cockroach Labs to back his creation CockroachDB, a cloud-native distributed SQL database. Database sharding is essential for CockroachDB: “a critical part of how Cockroach achieves virtually everything,” says Spencer. Read up on how sharding a database can make it faster. Like many engineers who find themselves in the C-suite, Spencer went from full-time programmer to full-time CEO. He says it's been a “relatively gentle” evolution, but he can always go back. Like lots of you out there, Spencer started programming on a TI-99/4, the world's first 16-bit home computer. Connect with Spencer on LinkedIn or learn more about him. Today's Lifeboat badge goes to user Hughes M. for their answer to the question Multiple keys pointing to a single value in Redis (Cache) with Java.
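Unlike hash-based schemes, CockroachDB shards by splitting the sorted keyspace into contiguous ranges. As a toy sketch of that routing idea only (the split points and range indices here are invented; CockroachDB creates, splits and rebalances ranges automatically as data grows), a key can be routed to its range by binary search over the split points:

```python
import bisect

# Invented split points dividing the keyspace into contiguous ranges:
# (-inf, "g"), ["g", "n"), ["n", "t"), ["t", +inf)
SPLITS = ["g", "n", "t"]

def range_for(key: str) -> int:
    """Route a key to its range index by binary search over the split points."""
    return bisect.bisect_right(SPLITS, key)

# Keys that sort near each other land in the same range,
# which keeps ordered scans over a key prefix cheap.
```

Range-based sharding trades the uniform spread of hashing for locality: adjacent keys stay together, so ordered scans hit few shards, at the cost of having to split and move hot ranges.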
Illia Polosukhin stops by to talk about NEAR Protocol - a climate-neutral, high-speed, low-fee, Layer 1 blockchain. Illia shares what makes NEAR one of the most exciting projects in crypto - a great user experience, support for JavaScript, and much more. He discusses what problems NEAR solves, how they got started, what's coming next, and how NEAR can scale to 1 billion users and beyond. JOIN THE FREE WOLF DEN NEWSLETTER
In this week's episode, Anna (https://twitter.com/annarrose) talks with Illia (https://twitter.com/ilblackdragon) from Near (https://twitter.com/NEARProtocol) and Alex (https://twitter.com/alexauroradev) from Aurora (https://twitter.com/auroraisnear) about the work that led them to the Near ecosystem and the connection between the Aurora project and Near. They also cover what's new at Near since the last time Illia and Anna spoke, Aurora+, the Rainbow Bridge, and future plans for Near's sharding system. Here are some links for this episode: * Ep 91: Near Protocol's focus on UX (https://zeroknowledge.fm/91-2/) * Ep 157: Illia Polosukhin on the development and launch of NEAR protocol (https://zeroknowledge.fm/157-2/) * Cross Chain Series - Interview with Anna and Aurora (https://www.youtube.com/watch?v=jFudk4Hr5IQ) * SputnikVM (https://github.com/aurora-is-near/sputnikvm) * Near Bridge (https://near.org/bridge/) * Near Joins Multichain to Bring More Cross Chain (https://near.org/blog/near-joins-multichain-to-bring-more-cross-chain-ca) * Electron Labs (https://electronlabs.org/) Check out the Zero Knowledge Podcast Linktree (https://linktr.ee/zeroknowledge) to stay up-to-date on all the ZK-focused channels and events in our ecosystem. Today's episode is sponsored by Anoma (https://anoma.net/). Anoma is a suite of protocols that enable self-sovereign coordination. Their unique architecture facilitates the simplest forms of economic coordination, such as two parties transferring an asset to each other, as well as more sophisticated ones, like an asset-agnostic bartering system involving multiple parties without direct “coincidence of wants.” Anoma's first fractal instance, Namada, is planned for later in 2022, and it focuses on enabling shielded transfers for any assets with few-second transaction latency and near-zero fees. Visit Anoma (https://anoma.net/) to learn more! If you like what we do: * Find all our links here! 
@ZeroKnowledge | Linktree (https://linktr.ee/zeroknowledge) * Subscribe to our podcast newsletter (https://zeroknowledge.substack.com) * Follow us on Twitter @zeroknowledgefm (https://twitter.com/zeroknowledgefm) * Join us on Telegram (https://zeroknowledge.fm/telegram) * Catch us on Youtube (https://zeroknowledge.fm/) * Head to the ZK Community Forum (https://community.zeroknowledge.fm/) * Support our Gitcoin Grant (https://zeroknowledge.fm/gitcoin-grant-329-zkp-2)
How do you prevent mortals from achieving godhood? Not by fighting, but by offering a deal. This week, we explore the world of the demi-divine, the stitched fabric between the mortal and immortal. What are demigods, exactly? Why do they exist? What roles might they play? What happens to the mortal who ascends? And does anything ever travel... the opposite direction?———Curious about something you heard in this episode? Chances are you can find out more about it in the Record of the Lorekeeper!Website: thelorekeepers.comEmail: lorekeeperspodcast@gmail.comTwitter: @thelorekeepersLorekeepers art by Sam Wade.Instagram: @bysamwade"Land of Heroes" theme by Josh Silker.
This might be the dankest livestream we've done to date. Vitalik Buterin, Dankrad Feist, and Protolambda join Tim Beiko to discuss all things Danksharding and Proto-Danksharding. Sharding what? Don't worry. By the end of this conversation, you'll know how sharding has evolved over time, why danksharding is possible, how proto-danksharding will get deployed, and so much more. ------
1st Week of May, 2022 ------
Naval's Podcast and Periscope Sessions Introduction 0:00 Haseeb's background 0:22 Vitalik's background 2:43 A blockchain you can build any app on top of 7:02 Eth trades efficiency for transparency 10:18 Like plain text, Eth is simple and efficient 12:41 Only high-value transactions can afford the blockchain 13:08 Doing away with ‘trusted' third parties 14:09 Trading performance for security 14:43 ‘Impregnable castles made of math' 16:23 Ethereum's limitations are latency and privacy 16:56 There are ways to get back your privacy 19:32 Can Eth provide a high level of decentralization and a high level of scaling at the same time? 20:39 Sharding leads to more centralization 21:49 Verifiability at the expense of scaling 24:00 How much decentralization is the right amount? 25:11 What happens when subsidies to join nodes disappear? 27:07 Stateless clients make it possible to verify the chain with very little on your hard drive 28:18 Staking culture is difficult to cultivate 28:52 New blockchain players tend to go for minimum viable decentralization 29:54 People don't value privacy until somebody goes to jail over it 30:32 Eth is ‘simple at the base' 31:12 Social recovery wallets make it easier to be your own bank 31:44 Block space is getting expensive 32:41 There's not a lot of innovation on Bitcoin, by design 33:35 Blockchain's ‘free-rider effect' 34:57 Innovation is slowest at Layer 1 35:34 Layer 2 moves faster because it's permissionless 36:59 What if we froze Layer 1 today? 37:25 Data for computation trade-off 38:57 Benchmarking blockchains apples-to-apples 40:02 Enshrining decentralization 41:43 Tensions between scaling and preserving value 42:27 There will be multiple stores of value 43:54 — Transcript http://nav.al/vitalik
On today's episode of Empire, Jason Yanowitz and Santiago Roel Santos are joined by Illia Polosukhin, co-founder of Near. Near is one of the fastest-growing layer-one blockchains and uniquely focuses on the user and developer experience. Illia believes Near's innovative culture, simplicity and dynamic scalability will onboard the next 1B users to crypto. Don't miss this discussion on Near's roadmap, scalability solutions, the layer-one landscape, Ukraine and more. This podcast was recorded the day before NEAR announced their latest $350M raise led by Tiger Capital; just 3 months after raising $150M from Three Arrows Capital. -- BitMEX is back and better than ever. Sign up for BitMEX's new Spot Exchange for a chance to win a piece of $500,000 in BTC (new users). Learn more about them at https://www.bitmex.com/ -- Empire is brought to you by Blockworks, a financial media brand delivering breaking news and premium insights about digital assets to millions of investors. For more content like Empire, visit http://blockworks.co/podcasts Follow me and Santiago on Twitter, @JasonYanowitz and @santiagoroel; let us know what you thought of the show! Subscribe to the podcast on Spotify: https://open.spotify.com/show/4UTePv1CR3APdKOiosR3Iq?si=41bc452345b649f9 Subscribe to the podcast on Apple: https://apple.co/3srZf7M -- (00:00) Introduction (01:00) Near's Inspiration (06:35) Ethereum's Scaling Challenge (09:48) Scalability Tradeoff (15:00) Centralization (16:51) Enabling Innovation (24:03) Blockchains As Cities (27:18) Bitmex Ad (28:34) Sharding (32:35) Coordination Problem (37:27) Rollup Complexity (40:13) Solana and Avalanche (43:41) Composability (46:33) Low Switching Costs (48:38) Multichain and Bridges (52:59) Open Source Competition (56:03) Ukraine
Zilliqa (ZIL) is a cryptocurrency that aims to use sharding to solve the scalability issues facing several blockchains. Sharding, in simple words, is the process of dividing the mining network into smaller pieces, or shards, that process transactions in parallel. Continue reading on procommun.com
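To make the definition above concrete, here is a toy sketch of that parallelism (all names, the shard count, and the shard-assignment rule are invented for illustration; real networks derive a shard from address bytes and validate with consensus, not a thread pool): transactions are grouped by the sender's shard and each shard's batch is processed independently.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

NUM_SHARDS = 4  # illustrative; real shard counts vary with network size

def shard_of(sender: str) -> int:
    # Toy deterministic assignment of an account to a shard.
    return sum(sender.encode()) % NUM_SHARDS

def process(txs):
    # Stand-in for a shard validating its own slice of transactions.
    return [f"validated:{sender}:{amount}" for sender, amount in txs]

def run_sharded(txs):
    # Group each transaction by the shard that owns its sender.
    by_shard = defaultdict(list)
    for sender, amount in txs:
        by_shard[shard_of(sender)].append((sender, amount))
    # Shards work in parallel, so throughput grows with shard count.
    with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
        results = list(pool.map(process, by_shard.values()))
    return [r for batch in results for r in batch]
```

The scaling claim is exactly this: since each shard only validates its own slice, adding shards adds transaction-processing capacity rather than making every node re-process everything.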
In episode 1211, Jack and Miles are joined by writer and host of How To Citizen, Baratunde Thurston, to discuss… Mike Braun's Klan hood slipped off in an interview?, Mo Brooks WORE BODY ARMOR on Jan 6 and THIS is the thanks he gets?, The TikTok to Hollywood pipeline and more! Mike Braun's Klan hood slipped off in an interview? Mo Brooks WORE BODY ARMOR on Jan 6 and THIS is the thanks he gets? The TikTok to Hollywood pipeline LISTEN: Wood Trees by Begin See omnystudio.com/listener for privacy information.
NEAR Protocol is the topic of discussion in this week's coin review podcast. The Crypto Masters, Brian and Ross, talk about the various aspects of NEAR including dynamic sharding, their consensus mechanism Doomslug, and more! NEAR is surely King of the Buzzwords! NEAR Protocol Website: https://near.org/ NEAR Protocol Whitepaper: https://near.org/papers/the-official-near-white-paper/ NEAR/ETH Rainbow Bridge: https://near.org/bridge/ NEAR Protocol Youtube Channel: https://www.youtube.com/channel/UCuKdIYVN8iE3fv8alyk1aMw What is Sharding?: https://www.investopedia.com/terms/s/sharding.asp ======================================== *** The Crypto Masters Website *** ======================================== **** Your one-stop shop for crypto! **** * Coin prices and charts * Profit Calculators * Podcast Episodes * Marketcap Comparison Tool ======================================== thecryptomasters.com ---------------------------------------- Socials and Contact *** ---------------------------------------- Socials: facebook: @TheCryptocurrencyMasters twitter: @theCryptoMS1 instagram: @the_crypto_masters Contact: email: info@thecryptomasters.com Attributions: Intro/Outro Music: https://soundcloud.com/swamigeezy Intro/Outro Visuals: https://www.instagram.com/danimal_sound/
This week, Anna (https://twitter.com/annarrose) catches up on what has been happening in the Kusama network ecosystem with Will Pankiewicz, Raul Romanutti (https://twitter.com/nachortti), and Bruno Skvorc (https://twitter.com/bitfalls). First up, Will, the Master of Validators at Parity, takes us through the fundamentals of Kusama, how it differs from Polkadot and some future predictions for the ecosystem and its parachains. Next, Raul, Community Manager and council member, breaks down how Kusama's governance and council function. Last but not least, we learn about RMRK (https://www.rmrk.app/), founded by Bruno, an NFT project on Kusama which started as a hack to support basic NFTs without smart contracts and grew into a full-blown NFT ecosystem on Kusama. Here are some links for this episode: * Episode 46: Gavin Wood on Polkadot, Sharding and Substrate (https://zeroknowledge.fm/46-2/) * Episode 83: From Warp Sync to SPREE with Polkadot's Rob Habermeier (https://zeroknowledge.fm/83-2/) * Episode 104: Exploring Kusama, the Canary Chaos Network (https://zeroknowledge.fm/104-2/) * Episode 171: Mapping the Polkadot Ecosystem (https://zeroknowledge.fm/171-2/) * Episode 181: Aping into DeFi and NFTs with Andy Chorlian and Tarun Chitra (https://zeroknowledge.fm/181-2/) * RMRK (https://www.rmrk.app/) * Clusters: how trusted & trust-minimized bridges shape the multi-chain landscape (https://blog.celestia.org/clusters/) ZK Hack kicked off on October 26th and goes until December 7th — a multi-round online event with workshops and puzzle-solving competitions. Put together by the Zero Knowledge Podcast (https://www.zeroknowledge.fm/) and the ZKValidator (https://zkvalidator.com/) and supported by all of our fantastic sponsors. Even if you missed the first session, you can still join the next ones; head to the website to check out the schedule of workshops and puzzles: https://www.zkhack.dev/ Thank you to this week's sponsor. 
Centrifuge (https://centrifuge.io/) - A real-world DeFi project. Centrifuge puts real-world assets on the blockchain, allowing issuers to get liquidity on their assets and investors to make a safe, stable yield in the volatile crypto world. They are hiring! Head to https://centrifuge.io/careers to learn more. If you like what we do: Subscribe to our podcast newsletter (https://zeroknowledge.substack.com/) to not miss any event! Follow us on Twitter - @zeroknowledgefm (https://twitter.com/zeroknowledgefm) Join us on Telegram (https://t.me/joinchat/TORo7aknkYNLHmCM) Catch us on Youtube (https://www.youtube.com/channel/UCYWsYz5cKw4wZ9Mpe4kuM_g) Read up on the r/ZKPodcast subreddit (https://www.reddit.com/r/zkpodcast) Give us feedback! - https://forms.gle/iKMSrVtcAn6BByH6A Support our Gitcoin Grant (https://gitcoin.co/grants/329/zero-knowledge-podcast-2) Support us on the ZKPatreon (https://www.patreon.com/zeroknowledge) Donate through coinbase.commerce (https://commerce.coinbase.com/checkout/f1e56274-c92b-4a99-802f-50727d651b38)