Podcasts about AWS S3

  • 104 podcasts
  • 148 episodes
  • 37m average duration
  • 1 episode every other week
  • Latest episode: Feb 10, 2025

POPULARITY

[Popularity by year, 2017–2024]


Best podcasts about AWS S3

Latest podcast episodes about AWS S3

CISSP Cyber Training Podcast - CISSP Training Program
CCT 218: Design and validate assessment, test, and audit strategies for the CISSP (Domain 6.1)

CISSP Cyber Training Podcast - CISSP Training Program

Play Episode Listen Later Feb 10, 2025 34:43 Transcription Available


Send us a text

Unlock the secrets to safeguarding your cloud storage from becoming a cyber attack vector in our latest episode of the CISSP Cyber Training Podcast with Shon Gerber. Discover how neglected AWS S3 buckets can pose significant threats akin to the notorious SolarWinds attack. Shon breaks down the importance of auditing and access controls while providing strategic guidance aligned with Domain 6.1 of the CISSP to fortify your knowledge for the exam. This episode promises to equip you with the essential tools to protect your cloud infrastructure and maintain robust security practices.

Transitioning to security testing, we explore various methodologies and the vital role they play in incident readiness and data integrity. From vulnerability assessments to penetration testing and the collaborative efforts of red, blue, and purple teams, Shon sheds light on the automation of these processes to enhance efficacy. We also demystify SOC 1 and SOC 2 reports and discuss their criticality in vendor risk management and regulatory compliance. With insights into audit standards like ISO 27001 and PCI DSS, this episode is your comprehensive guide to understanding and applying security measures across diverse sectors.

Gain exclusive access to 360 FREE CISSP Practice Questions delivered directly to your inbox! Sign up at FreeCISSPQuestions.com and receive 30 expertly crafted practice questions every 15 days for the next 6 months—completely free! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!
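For readers who want to turn the auditing advice above into something concrete, here is a minimal, hypothetical Python sketch (boto3 with default credentials assumed) that lists your S3 buckets and flags any without a fully enabled Public Access Block, one small piece of the kind of audit discussed in the episode:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        blocked = all(cfg.values())  # all four block-public-access settings enabled
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            blocked = False  # no configuration at all counts as not blocked
        else:
            raise
    print(f"{name}: {'ok' if blocked else 'REVIEW: public access not fully blocked'}")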

Paul's Security Weekly
Deepseek, AMD, and Forgotten Buckets - PSW #860

Paul's Security Weekly

Play Episode Listen Later Feb 6, 2025 126:54


Deepseek troubles, AI models explained, AMD CPU microcode signature validation, what happens when you leave an AWS S3 bucket laying around, 3D printing tips, and the malware that never was on Ethernet to USB adapters. Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-860
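One way to act on the "forgotten buckets" theme is to check whether bucket names still referenced in your code, installers, or docs actually exist and belong to you. A hedged boto3 sketch (the bucket names below are placeholders):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
referenced = ["my-old-installer-bucket", "legacy-update-feed"]  # names pulled from configs/docs

for name in referenced:
    try:
        s3.head_bucket(Bucket=name)
        print(f"{name}: exists and is accessible with these credentials")
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "404":
            print(f"{name}: does not exist, so anyone could register it and serve content in its place")
        elif code == "403":
            print(f"{name}: exists but is owned by someone else or access is denied")
        else:
            raise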

Paul's Security Weekly TV
Deepseek, AMD, and Forgotten Buckets - PSW #860

Paul's Security Weekly TV

Play Episode Listen Later Feb 6, 2025 126:54


Deepseek troubles, AI models explained, AMD CPU microcode signature validation, what happens when you leave an AWS S3 bucket laying around, 3D printing tips, and the malware that never was on Ethernet to USB adapters. Show Notes: https://securityweekly.com/psw-860

Paul's Security Weekly (Podcast-Only)
Deepseek, AMD, and Forgotten Buckets - PSW #860

Paul's Security Weekly (Podcast-Only)

Play Episode Listen Later Feb 6, 2025 126:54


Deepseek troubles, AI models explained, AMD CPU microcode signature validation, what happens when you leave an AWS S3 bucket laying around, 3D printing tips, and the malware that never was on Ethernet to USB adapters. Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-860

Paul's Security Weekly (Video-Only)
Deepseek, AMD, and Forgotten Buckets - PSW #860

Paul's Security Weekly (Video-Only)

Play Episode Listen Later Feb 6, 2025 126:54


Deepseek troubles, AI models explained, AMD CPU microcode signature validation, what happens when you leave an AWS S3 bucket laying around, 3D printing tips, and the malware that never was on Ethernet to USB adapters. Show Notes: https://securityweekly.com/psw-860

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Episode Summary: This episode covers brute-force attacks on the password reset functionality of Hikvision devices, a macOS SIP bypass vulnerability, Linux rootkit malware, and a novel ransomware campaign targeting AWS S3 buckets.

Topics Covered:
  • Hikvision Password Reset Brute Forcing (https://isc.sans.edu/diary/Hikvision%20Password%20Reset%20Brute%20Forcing/31586) - Hikvision devices are being targeted using old brute-force attacks exploiting predictable password reset codes.
  • Analyzing CVE-2024-44243: A macOS System Integrity Protection Bypass (https://www.microsoft.com/en-us/security/blog/2025/01/13/analyzing-cve-2024-44243-a-macos-system-integrity-protection-bypass-through-kernel-extensions/) - Microsoft details a macOS vulnerability allowing attackers to bypass SIP using kernel extensions.
  • Rootkit Malware Controls Linux Systems Remotely (https://cybersecuritynews.com/rootkit-malware-controls-linux-systems-remotely/) - A sophisticated rootkit targeting Linux systems uses zero-day vulnerabilities for remote control.
  • Abusing AWS Native Services: Ransomware Encrypting S3 Buckets with SSE-C (https://www.halcyon.ai/blog/abusing-aws-native-services-ransomware-encrypting-s3-buckets-with-sse-c) - Attackers are using AWS's SSE-C encryption to lock S3 buckets during ransomware campaigns. We cover how the attack works and how to protect your AWS environment.
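To make the SSE-C angle concrete, here is a small illustrative boto3 sketch (the bucket and key names are placeholders) showing why the technique is so damaging: the caller supplies the encryption key, S3 stores only a hash of it, so an object encrypted with an attacker-held key cannot be recovered without that key.

import os
import boto3

s3 = boto3.client("s3")
customer_key = os.urandom(32)  # 256-bit customer-provided key; attacker-controlled in the campaign

# Write an object encrypted with the caller-supplied key (SSE-C).
s3.put_object(
    Bucket="example-bucket",
    Key="report.csv",
    Body=b"sensitive data",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# Reading it back requires presenting the same key; without it, the GET is refused.
obj = s3.get_object(
    Bucket="example-bucket",
    Key="report.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
print(obj["Body"].read())

Commonly suggested defenses include denying SSE-C where it is not needed (for example, with bucket policy conditions) and alerting on bulk re-encryption or copy activity.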

Irish Tech News Audio Articles
Martech New Years Resolutions: Don't forget link management as part of your stack

Irish Tech News Audio Articles

Play Episode Listen Later Dec 31, 2024 5:05


Marketing departments today operate like small companies, requiring specialised skills in technical operations, creative development, and strategic communication. Each discipline relies on multiple software solutions, making technology selection a critical challenge for marketing leaders.

The marketing technology sector has grown significantly over the past decade. While this expansion initially drove companies toward bundled solutions, these all-in-one platforms often sacrifice specialised functionality for comprehensive coverage. Modern marketing teams are discovering that integrating best-in-class tools creates a more adaptable and effective technology stack. This approach enables companies to maintain visibility across the customer journey while providing teams with robust features for specific needs.

Feather Hickox, Vice President of Marketing at Rebrandly, said, "The future of marketing tech stacks is going to be tools that are largely API-based, can interact with each other easily, and that do critical functions really well."

Link management plays a pivotal role in an integrated digital ecosystem. As the gateway to online experiences and conversions, links serve as essential touchpoints throughout the customer journey. Advanced platforms exemplify this specialised approach, providing extensive integrations across SMS messaging, CMS platforms, and other key marketing functions.

Here are a few best practices for effective high-volume link management:

1. Centralise Link Management: Relying on multiple tools, such as an in-app social media link shortener and a separate shortener for campaign links, can cause inconsistencies. Instead, adopt a single, robust tool for both link creation and tracking. This ensures uniformity across platforms, enhances security, and simplifies maintenance and updates.

2. Opt for a Standardised API: Select an API that seamlessly integrates with your applications to streamline processes and ensure compatibility.

The widespread utility of links raises several challenges when transforming link data into actionable insights:

1. Volume: Enterprise-level link interactions generate an immense amount of data, with millions or even billions of clicks per month. Managing and analysing this data in real time is not just beneficial - it's critical for staying competitive.

2. Data Fragmentation: Link data is often dispersed across multiple platforms and tools, leading to inconsistent tracking parameters and methodologies. This fragmentation makes it difficult to gain a comprehensive view of the customer journey or accurately attribute conversions to specific campaigns or audiences.

3. Analysis Paralysis: The sheer volume of link data can be overwhelming. Without proper context and organisation, it becomes challenging to distill this information into actionable insights that drive decision-making.

To unlock the full potential of link data, consider adopting the following solutions for your link management in 2025:

1. Unified Tracking: Implementing a unified tracking system helps eliminate data silos, creating a single source of truth for link performance. This streamlines data collection and enables precise cross-channel analysis and attribution.

2. Real-Time Processing: In today's fast-paced digital environment, real-time data processing can be transformative. With real-time analytics, businesses can quickly identify and leverage emerging trends, make on-the-fly campaign adjustments, and address issues proactively.

3. Contextualised Data: Raw data alone has limited value. By correlating link performance with specific business outcomes and segmenting data meaningfully, organisations can extract actionable insights to inform decision-making.

Rebrandly Clickstream for AWS simplifies the collection and storage of raw click traffic data from branded short links. This data is seamlessly delivered to your AWS S3 account, providing near real-time access. The company's robust API and developer resources, including the ...
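As a purely illustrative sketch of what working with raw click data delivered to S3 might look like, the Python snippet below (boto3; the bucket, prefix, and "shortLink" field are assumptions, not Rebrandly's actual schema) counts clicks per short link from JSON Lines objects:

import json
from collections import Counter
import boto3

s3 = boto3.client("s3")
counts = Counter()

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-clickstream", Prefix="clicks/2025/"):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket="example-clickstream", Key=obj["Key"])["Body"].read()
        for line in body.decode("utf-8").splitlines():
            if line.strip():
                counts[json.loads(line).get("shortLink", "unknown")] += 1

for link, clicks in counts.most_common(10):
    print(link, clicks)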

Learn System Design
Mastering System Design Interviews: Building Scalable Web Crawlers

Learn System Design

Play Episode Listen Later Dec 17, 2024 32:14 Transcription Available


Send us a text

Web Crawler Designs

Can a simple idea like building a web crawler teach you the intricacies of system design? Join me, Ben Kitchell, as we uncover this fascinating intersection. Returning from a brief pause, I'm eager to guide you through the essential building blocks of a web crawler, from queuing seed URLs to parsing new links autonomously. These basic functionalities are your gateway to creating a minimum viable product or acing that system design interview. You'll gain insights into potential extensions like scheduled crawling and page prioritization, ensuring a strong foundation for tackling real-world challenges.

Managing a billion URLs a month is no small feat, and scaling such a system requires meticulous planning. We'll break down the daunting numbers into digestible pieces, exploring how to efficiently store six petabytes of data annually. By examining different database models, you'll learn how to handle URLs, track visit timestamps, and keep data searchable. The focus is on creating a robust system that not only scales but does so in a way that meets evolving demands without compromising on performance.

Navigating the complexities of designing a web crawler means making critical decisions about data storage and system architecture. We'll weigh the benefits of using cloud storage solutions like AWS S3 and Azure Blob Storage against maintaining dedicated servers. Discover the role of REST APIs in seamless user and service interactions, and explore search functionalities using Cassandra, Amazon Athena, or Google's BigQuery. Flexibility and foresight are key as we build systems that adapt to future needs. Thank you for your continued support—let's keep learning and growing on this exciting system design journey together.

Support the show
Dedicated to the memory of Crystal Rose.
Email me at LearnSystemDesignPod@gmail.com
Join the free Discord
Consider supporting us on Patreon
Special thanks to Aimless Orbiter for the wonderful music.
Please consider giving us a rating on iTunes or wherever you listen to new episodes.
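As a companion to the episode, here is a deliberately tiny, single-threaded sketch of the crawl loop described above (queue seed URLs, fetch a page, parse out links, re-queue new ones) using only the Python standard library. A real crawler would add politeness (robots.txt, rate limits), retries, large-scale deduplication, and durable storage such as the S3 or Blob Storage options discussed:

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=50):
    queue, seen, fetched = deque(seeds), set(seeds), 0
    while queue and fetched < max_pages:
        url = queue.popleft()
        fetched += 1
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip unreachable pages in this sketch
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen

# Example: crawl(["https://example.com"])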

S7aba Podcast
S4E14 - AWS S3: A low level design look

S7aba Podcast

Play Episode Listen Later Dec 12, 2024 54:12


AWS S3: A low level design look

The Datanation Podcast - Podcast for Data Engineers, Analysts and Scientists
63 – Reinvent, AWS S3 Table Buckets and Apache Iceberg

The Datanation Podcast - Podcast for Data Engineers, Analysts and Scientists

Play Episode Listen Later Dec 6, 2024


Alex Merced discusses his experience at AWS re:Invent. Follow Alex at AlexMered.com/data

Datacenter Technical Deep Dives
Terraform: Stand up test environments freaky fast!

Datacenter Technical Deep Dives

Play Episode Listen Later Sep 24, 2024


In this episode of the vBrownBag, Shala demonstrates how & why she uses HashiCorp Terraform (for her day job!) to stand up proof-of-concept tests on AWS far faster than what is possible in the console. 00:00 Intro 1:37 Shala walks us through her GitLab repo

Datacenter Technical Deep Dives
Deep Dive: Automating the vBrownBag with AWS Serverless

Datacenter Technical Deep Dives

Play Episode Listen Later Sep 19, 2024


In this episode of the vBrownBag, Damian does a deeper dive into the Meatgrinder, showing how the different AWS services interact, how the process logs to CloudWatch, and more! 00:00 Intro 1:20 The AWS Services that power the Meatgrinder
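A quick, hypothetical illustration of the logging piece: anything a Lambda function writes to standard output is captured in its CloudWatch log group, so structured JSON log lines (the field names below are made up) are easy to search later:

import json

def handler(event, context):
    # print() output from a Lambda function lands in CloudWatch Logs automatically.
    print(json.dumps({
        "level": "INFO",
        "message": "processing started",
        "requestId": context.aws_request_id,
        "records": len(event.get("Records", [])),
    }))
    return {"ok": True}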

Datacenter Technical Deep Dives
What's in the Bag? Automating the vBrownBag with AWS Serverless

Datacenter Technical Deep Dives

Play Episode Listen Later Sep 14, 2024


In this episode of the vBrownBag, a host is a guest! Damian does a deep dive into the vBrownBag Meatgrinder, an event-driven automation solution built with AWS Serverless that powers the show behind the scenes. Meatgrinder uses AWS S3, EventBridge, Step Functions, Lambda, and CloudWatch and handles post-production automation of vBrownBag content. We'll talk about the design decisions made while architecting the solution, and lessons learned along the way. 00:00 Intro and so much banter! 10:14 We actually start talking about the topic
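As a rough illustration of the event-driven pattern described (not the actual Meatgrinder code; the state machine ARN and event shape are assumptions based on standard S3 notifications), a Lambda handler that starts a Step Functions execution for each newly uploaded object might look like this:

import json
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:PostProduction"  # placeholder

def handler(event, context):
    # Each record in an S3 "object created" notification describes one new object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"statusCode": 200}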

LINUX Unplugged
570: RegreSSHion Strikes

LINUX Unplugged

Play Episode Listen Later Jul 8, 2024 47:06


We dig into the RegreSSHion bug, debate its real threat, and explore clever tools to build a tasty fried onion around your system.

Sponsored By:
  • Core Contributor Membership: Take $1 a month off your membership for a lifetime!
  • Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices!
  • 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.

Support LINUX Unplugged

Critical Thinking - Bug Bounty Podcast
Episode 75: *Rerun* of The OG Bug Bounty King - Frans Rosen

Critical Thinking - Bug Bounty Podcast

Play Episode Listen Later Jun 13, 2024 164:52


Episode 75: In this episode of Critical Thinking - Bug Bounty Podcast, Justin and Joel are sick, so instead of a new full episode, we're going back 30 episodes to review.

Follow us on twitter at: @ctbbpodcast
We're new to this podcasting thing, so feel free to send us any feedback here: info@criticalthinkingpodcast.io
Shoutout to YTCracker for the awesome intro music!

------ Links ------
Follow your hosts Rhynorater & Teknogeek on twitter:
https://twitter.com/0xteknogeek
https://twitter.com/rhynorater

------ Ways to Support CTBBPodcast ------
Hop on the CTBB Discord at https://ctbb.show/discord!

Today's Guest: https://twitter.com/fransrosen
  • Detectify
  • Discovering s3 subdomain takeovers: https://labs.detectify.com/writeups/hostile-subdomain-takeover-using-heroku-github-desk-more/
  • bucket-disclose.sh: https://gist.github.com/fransr/a155e5bd7ab11c93923ec8ce788e3368
  • A deep dive into AWS S3 access controls
  • Attacking Modern Web Technologies
  • Live Hacking like a MVH
  • Account hijacking using Dirty Dancing in sign-in OAuth flows

Timestamps:
(00:00:00) Introduction
(00:11:41) Frans Rosen's Bug Bounty Journey and Detectify
(00:20:21) Pseudo-code, typing, and thinking like a dev
(00:27:11) Hunter Methodologies and automationists
(00:42:31) Time on targets, Iteration vs. Ideation
(00:58:01) S3 subdomain takeovers
(01:11:53) Blog posting and hosting motivations
(01:20:21) Detectify and entrepreneurial endeavors
(01:36:41) Attacking Modern Web Technologies
(01:52:51) postMessage and MessagePort
(02:05:00) Live Hacking and Collaboration
(02:20:41) Account Hijacking and OAuth Flows
(02:35:39) Hacking + Parenthood
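For context on the S3 subdomain-takeover discussion, here is a minimal, hypothetical check in Python: a host whose DNS still points at S3 but returns S3's "NoSuchBucket" error is a candidate for takeover, because anyone could create a bucket with that name. Only run this against assets you are authorized to test.

from urllib.request import urlopen
from urllib.error import HTTPError

def looks_takeoverable(host):
    try:
        body = urlopen(f"http://{host}", timeout=10).read()
    except HTTPError as err:
        body = err.read()  # S3 error pages come back with a non-2xx status
    except OSError:
        return False  # DNS or connection failure; nothing to conclude here
    return b"NoSuchBucket" in body

# Example (placeholder host): print(looks_takeoverable("assets.example.com"))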

Tech AI Radio
Unexpected pitfalls of AWS S3 and security considerations (original title: AWS S3の意外な落とし穴とセキュリティ考察)

Tech AI Radio

Play Episode Listen Later May 31, 2024


DOU Podcast
IT services exports are growing | Problems with Telegram | Layoffs at Google - DOU News #145 (original title: Експорт IT послуг зростає | Проблеми з Telegram | Звільнення в Google — DOU News #145)

DOU Podcast

Play Episode Listen Later May 6, 2024 25:09


The Daily Decrypt - Cyber News and Discussions
CyberSecurity News: Expensive AWS S3 Bucket, No MFA for Change Healthcare, Wpeeper Android Malware uses WordPress

The Daily Decrypt - Cyber News and Discussions

Play Episode Listen Later May 2, 2024


In today's episode, we discuss how a developer nearly faced a $1,300 bill due to a poorly named AWS S3 storage bucket, attracting unauthorized access (https://arstechnica.com/information-technology/2024/04/aws-s3-storage-bucket-with-unlucky-name-nearly-cost-developer-1300/). We also delve into the repercussions faced by Change Healthcare after a ransomware attack due to compromised credentials and lack of MFA (https://www.cybersecuritydive.com/news/change-healthcare-compromised-credentials-no-mfa/714792/). Lastly, we explore a new Android malware named Wpeeper that utilizes compromised WordPress sites to conceal C2 servers, posing a threat to unsuspecting users (https://thehackernews.com/2024/05/android-malware-wpeeper-uses.html).

00:00 Intro 00:55 Change Health Care 04:10 The High Cost of a Naming Mistake: A Developer's AWS Nightmare 07:54 Emerging Threats: The Rise of WPeeper Malware

AWS, S3, Storage Bucket, Unauthorized Access, Change Healthcare, AlphV, ransomware, cybersecurity, Wpeeper, malware, WordPress, command-and-control

Search phrases: 1. Ransomware group AlphV 2. Change Healthcare 3. Compromised credentials 4. Multifactor authentication 5. Ransomware consequences Change Healthcare 6. Cybersecurity breach consequences 7. Security measures for cybersecurity breach prevention 8. Wpeeper malware 9. Android device security protection 10. Compromised WordPress sites protection

Change Healthcare's CEO just testified in front of the House Subcommittee that the service they used to deploy remote desktop services did not require multi factor authentication, which led to one of the most impactful ransomware attacks in recent history. In other news, a very unlucky developer in his personal time accidentally incurred over $1,300 worth of charges on his AWS account overnight. What was this developer doing and how did it lead to such high charges in such a short amount of time? Wpeeper malware is utilizing compromised WordPress sites to hide its C2 servers, posing a significant threat to Android devices, with the potential to escalate further if undetected. How can users protect their Android devices from falling victim to this malware? You're listening to The Daily Decrypt.

The CEO of Change Healthcare, which is a subsidiary of UnitedHealthcare that was breached (it's been all over the news), revealed in written testimony that Change Healthcare was compromised by a ransomware group accessing their systems with stolen credentials. Which we all knew, but the ransomware group used these compromised credentials to remotely access a Citrix portal, which is an application used to enable remote access to desktops. And this portal did not require multi factor authentication. I don't know much about Change Healthcare's inner infrastructure, but any portal that allows remote access to other desktops should be locked down pretty hard. And the fact that just a simple username and password can grant access to all of these different desktops is pretty terrible, and means that this attack could have likely been avoided had they enabled multi factor authentication. So if you're brand new to cybersecurity and you're listening to this podcast for the first time, you need to know that there are a few very easy things you can do to improve your posture online. Don't reuse passwords. Step one, one of the easiest ways to do that is to use a password manager and have them generate your passwords for you.

Number two, enable multi factor authentication. That way, if someone does come into your username and password combination, they still have to get through some sort of device based authentication, like a ping on your cell phone or something like that, to allow them to log into your account. Now, in the case of United and Change Healthcare, one thing that they also could have done to help mitigate their negligence in not enabling multi factor authentication would be to have frequent dark web scans for any password in the system or any username in the system. And this can all be automated. If a password that is being used to access any system in your network is found on the dark web, immediately revoke that password and require that user to create a new one. But that is slightly more complicated than just requiring multi factor authentication. So, probably start there. But the attackers who carried out this ransomware were able to use credentials they found on the dark web to infiltrate the networks, gain access to remote desktops, and launch their ransomware within 9 days of their entry. So, that's pretty fast. A few years ago, that would have taken dozens of days, if not hundreds of days. The dwell time for attackers was pretty high back then. But now, single digits. That doesn't leave much time for defenders to find this type of attack. But the CEO acknowledged this negligence and shared his deep condolences for all of the patrons of Change Healthcare. The pharmacists, the doctors, a lot of work had to be put on hold. And it's very possible that people died as a result of this breach, having to be transferred to different hospitals, etc. This is a pretty tragic thing, so if you're in the healthcare industry, if you're in a position of power, make sure that all your internal systems, and especially external, but definitely internal as well, have multi factor authentication enabled. And if you want to go the extra mile, create some sort of automatic tool, which probably exists online for free, that will check the dark web on a recurring basis for any passwords in your system.

A cloud developer was setting up a proof of concept for a client, and it involved creating an empty storage bucket in AWS. The project was a document indexing system. And so this developer uploaded a couple of documents and then began working in other areas of the project. Then after two days of work, went back and checked the billing costs and found $1,300 worth of charges. Now, if you're not familiar with AWS and their pricing, S3 storage buckets are really cheap. The Daily Decrypt is actually hosted in an S3 storage bucket and I pay less than $10 a month for all hosting. And I'm uploading audio, which is a lot larger than documents. Okay. So this bucket should have cost less than $5 a month, but after two days, there were $1,300 in charges, so I really appreciate the developer sharing this story because it's an interesting case study. What happened? Well, the developer accidentally named the bucket the same thing that an open source software package uses as a placeholder in their code. So what does that mean? Some other company, let's say it's Home Depot, alright? That came up in a previous reel. Home Depot has some software that backs up their files to Amazon S3 buckets on a recurring basis.

Home Depot also has a non-production version of that code that has placeholders for those S3 bucket names, such as placeholder bucket 1231 or something like that, so that when it comes time to upload their files, they replace that placeholder with the actual name of their bucket. But that sample code is running, and it's not doing anything because it's attempting to back up their files to a bucket that doesn't exist. Well, this developer lucked out and created an S3 bucket with that exact name of that placeholder, and this script now all of a sudden is trying to send all of Home Depot's backup files to this bucket. And news to me, but AWS charges a fee, something like 0.0005 cents per request. And an automated system can generate thousands of requests per second, like it can go very fast. So just in two days, that 0.0005 cents per request turned into $1,300. Now these are unexpected charges. Amazon agrees he shouldn't have to pay for this, but it just goes to show how careful you have to be when naming your S3 buckets, especially if they're going to allow for public users to place files in them. But another really important aspect of this story that I find fascinating is that the developer, once he realized what was happening, decided to open up his bucket and allow for files to be placed there. And within 30 seconds, there were over 10 gigabytes of files placed in this bucket. And these files belonged to another company. One that's pretty reputable, so probably along the same lines as Home Depot. Now this developer won't disclose that because these files are currently being backed up and there's a huge risk for data leak, but this developer now has the source code for all kinds of files that belong to a pretty big company. So as a developer, make sure you name your AWS buckets something pretty unique and maybe even add in a little suffix of random characters after anything you name. And as developers for companies, make sure you're not having automated scripts upload to bucket names that don't exist, because maybe someday they will exist and all those files will go to that bucket. The developer did reach out to the company that was affected by this and has received no response. But we're all hoping that the company responds and fixes their practice and hopefully shells out some money to this developer because that's a pretty big bug and they deserve compensation.

And finally, cybersecurity researchers have identified a new Android malware named Wpeeper that utilizes compromised WordPress sites to hide its command and control servers. And if you've been listening to this podcast for a while or keeping up to date on cybersecurity news, you'll know that there's a lot of opportunity within the WordPress framework to compromise WordPress sites. And it would be a great place to host a command and control server. Wpeeper is a binary that employs the HTTPS protocol for secure C2 communications and functions as a backdoor. The malware disguises itself within a repackaged version of the Uptodown app store for Android, aiming to evade detection and deceive users into installing the malicious payload. Wpeeper utilizes a complex C2 architecture that involves using infected WordPress sites as intermediaries to obfuscate its actual C2 servers, with as many as 45 C2 servers identified in the infrastructure. The malware's capabilities involve collecting device information, updating C2 servers, downloading additional payloads, and self-deleting.

And to safeguard against similar malware attacks, users are advised to download apps only from reputable sources, carefully review app permissions, and just be careful what you click on. Stay vigilant out there against suspicious activities that may be taking place on your phone. You might notice a performance lag. You might notice weird browsers opening up. And if you do, you might just want to restart your device, reset it. And if you do get curious and install a scanning tool, antivirus, anti-malware, et cetera, make sure you do it from a reputable source. This has been the Daily Decrypt. If you found your key to unlocking the digital domain, show your support with a rating on Spotify or Apple Podcasts. It truly helps us stand at the frontier of cyber news. Don't forget to connect on Instagram or catch our episodes on YouTube. Until next time, keep your data safe and your curiosity alive.
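The bucket-naming advice in the story above is easy to apply in code. A small, hypothetical helper (boto3; the prefix is whatever you choose) that appends a random suffix so your bucket name cannot collide with a placeholder hard-coded in someone else's scripts:

import secrets
import boto3

def create_unique_bucket(prefix, region="us-east-1"):
    # e.g. "docindex-poc-a1b2c3d4e5f6"
    name = f"{prefix}-{secrets.token_hex(6)}"
    s3 = boto3.client("s3", region_name=region)
    if region == "us-east-1":
        s3.create_bucket(Bucket=name)
    else:
        s3.create_bucket(
            Bucket=name,
            CreateBucketConfiguration={"LocationConstraint": region},
        )
    return name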

GreyBeards on Storage
161: Greybeards talk AWS S3 storage with Andy Warfield, VP Distinguished Engineer, Amazon

GreyBeards on Storage

Play Episode Listen Later Jan 19, 2024 47:52


We talked with Andy Warfield (@AndyWarfield), VP Distinguished Engineer, Amazon, about 10 years ago, when at Coho Data (see our 005: Greybeards talk scale out storage … podcast). Andy has been a good friend for a long time and he's been with Amazon S3 for over 5 years now. Since the recent S3 announcements at …

Hacker Public Radio
HPR4035: Processing podcasts with sox

Hacker Public Radio

Play Episode Listen Later Jan 19, 2024


Processing Podcasts

Ahuka's recent episodes about pre-processing podcasts with Audacity reminded me that I have been wanting to do an episode about pre-processing podcasts with sox. I no longer need to use sox to change the podcast tempo since now I use AntennaPod on my phone.

When I started listening to podcasts, the only playback options were either a PC or an mp3 player. I started out just downloading the podcasts to my PC from the podcast's web page. My first podcast automation was using bashpodder. bashpodder was simple to set up and run via cron. It would:
- read a file to get a list of RSS feeds
- track previous downloads in a log
- download new episodes
https://lincgeek.org/bashpodder/

A few of the podcasts I listened to were panels of a few hosts that were recorded live and released later as a podcast. Some of those shows were unedited and had some dead air that I wanted to remove. It took me a few tries, but I eventually figured out how to truncate silence with sox. Many of the podcast players I used did not have the ability to alter the playback speed. So I also figured out how to change the tempo using sox.

I stuck to using dedicated mp3 players for several years. Before the Sansa Clips came out, my favorite was the Sansa e200 series: https://en.wikipedia.org/wiki/Sansa_e200_series They could run the alternative firmware, Rockbox: https://www.rockbox.org/ I remember wasting hours playing Frozen Bubble on my mp3 player. The Sansa Clips were a big innovation. Small, light, and cheap. They were my preferred player until I eventually switched to phones.

I had a workflow set up:
- cron bashpodder
- script to process with sox
- script to reload podcast: mount, move from player to archive, move new files to player, unmount

I did a HPR episode a few months ago about my first tech job. When I started there, I was given an iPhone. It was my first smart phone. While there, I had started taking walks on my lunch break, and I would get to listen to podcasts while out. There were a few times where I would run out of episodes to listen to. So I decided to add some podcasts to my work iPhone. For most of the time I worked there, I would take my Sansa with me and listen to everything on it. Then if I ran out, I had my phone with me, so I would listen to podcasts on it. This process meant I had 2 sets of podcasts: one provided by mashpodder, one by the iPhone app.

I kept this practice of having 2 podcast sources for a few years, but I eventually stopped using the Sansa. Phones were getting better, and the Sansa devices were getting harder to find. I wanted to start listening to my bashpodder podcasts on my phone. I looked for a few file transfer solutions, but eventually settled on making my own RSS feed of files I had downloaded. I found a python script that would take a directory listing of mp3s and build a RSS feed. Now I had a cron job that would:
- download
- process with sox
- create the RSS feed
- rsync the RSS XML file and podcast files to a VPS
https://genrss.readthedocs.io/en/latest/

I used a VPS so I could download new episodes to my phone from anywhere. After a while, I experimented with using AWS S3 to host the files. I stopped using S3 when the free tier ran out, and I started getting charged for storage and bandwidth. Eventually, when I started working at home, I no longer needed the RSS feed to be available from anywhere. So I just started using an HTTP server in my home lab to host my RSS feed and files. I can update my phone with the files I download and process as long as I am on my home network.

Also, one other change I made at some point was switching from bashpodder to mashpodder. There were a few podcasts that bashpodder was not able to parse. Today, I listen to podcasts via AntennaPod. Most of the podcasts I searched for and subscribed to via the app. There are still a few podcasts that I get via mashpodder and pre-process with sox. Since the phone app is good at altering the tempo (I like 2x), I no longer have to use sox for speeding up. But I still use sox for leveling the audio and truncating silence. My tendency is to have the podcasts that are produced by studios/companies via the app and podcasts produced by enthusiasts via mashpodder/sox.

set -euo pipefail
IFS=$'\n\t'
SOX="/usr/local/bin/sox"
cd /mashpodder/podcasts/files
if [ -z "$(ls -A)" ]; then
    echo "Empty"
    exit 0
else
    echo "Not Empty"
fi
for i in *
do
    # Level the audio, down-mix to mono, truncate silence, and print stats,
    # then move the original into the archive
    $SOX -v 0.5 "$i" "/mashpodder/podcasts/faster/$i.mp3" compand 0.3,1 6:-70,-60,-20 -5 -90 remix - silence 1 0.1 1% -1 0.1 1% stat
    mv -v "$i" ../archive/
done
# Delete old files from the archive
find /mashpodder/podcasts/archive/ -name "*mp3" -mtime +30 -delete
# Generate an RSS feed of the faster directory
cd /mashpodder/podcasts && python2.7 ../genRSS/genRSS.py -v -e mp3 -i 'faster/faster.gif' -t Faster -p "Faster Podcasts" -d faster -H http://address.of.web.host --sort-creation -o faster/faster.xml

Oracle University Podcast
Autonomous Database Tools

Oracle University Podcast

Play Episode Listen Later Jan 16, 2024 36:04


In this episode, hosts Lois Houston and Nikita Abraham speak with Oracle Database experts about the various tools you can use with Autonomous Database, including Oracle Application Express (APEX), Oracle Machine Learning, and more.   Oracle MyLearn: https://mylearn.oracle.com/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Tamal Chatterjee, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! We spent the last two episodes exploring Oracle Autonomous Database's deployment options: Serverless and Dedicated. Today, it's tool time! Lois: That's right, Niki. We'll be chatting with some of our Database experts on the tools that you can use with the Autonomous Database. We're going to hear from Patrick Wheeler, Kay Malcolm, Sangeetha Kuppuswamy, and Thea Lazarova. Nikita: First up, we have Patrick, to take us through two important tools. Patrick, let's start with Oracle Application Express. What is it and how does it help developers? 01:15 Patrick: Oracle Application Express, also known as APEX-- or perhaps APEX, we're flexible like that-- is a low-code development platform that enables you to build scalable, secure, enterprise apps with world-class features that can be deployed anywhere. Using APEX, developers can quickly develop and deploy compelling apps that solve real problems and provide immediate value. You don't need to be an expert in a vast array of technologies to deliver sophisticated solutions. Focus on solving the problem, and let APEX take care of the rest. 01:52 Lois: I love that it's so easy to use. OK, so how does Oracle APEX integrate with Oracle Database? What are the benefits of using APEX on Autonomous Database? Patrick: Oracle APEX is a fully supported, no-cost feature of Oracle Database. If you have Oracle Database, you already have Oracle APEX. You can access APEX from database actions. Oracle APEX on Autonomous Database provides a preconfigured, fully managed, and secure environment to both develop and deploy world-class applications. Oracle takes care of configuration, tuning, backups, patching, encryption, scaling, and more, leaving you free to focus on solving your business problems. APEX enables your organization to be more agile and develop solutions faster for less cost and with greater consistency. You can adapt to changing requirements with ease, and you can empower professional developers, citizen developers, and everyone else. 02:56 Nikita: So you really don't need to have a lot of specializations or be an expert to use APEX. That's so cool! Now, what are the steps involved in creating an application using APEX?  Patrick: You will be prompted to log in as the administrator at first. Then, you may create workspaces for your respective users and log in with those associated credentials. 
Application Express provides you with an easy-to-use, browser-based environment to load data, manage database objects, develop REST interfaces, and build applications which look and run great on both desktop and mobile devices. You can use APEX to develop a wide variety of solutions, import spreadsheets, and develop a single source of truth in minutes. Create compelling data visualizations against your existing data, deploy productivity apps to elegantly solve a business need, or build your next mission-critical data management application. There are no limits on the number of developers or end users for your applications. 04:01 Lois: Patrick, how does APEX use SQL? What role does SQL play in the development of APEX applications?  Patrick: APEX embraces SQL. Anything you can express with SQL can be easily employed in an APEX application. Application Express also enables low-code development, providing developers with powerful data management and data visualization components that deliver modern, responsive end user experiences out-of-the-box. Instead of writing code by hand, you're able to use intelligent wizards to guide you through the rapid creation of applications and components. Creating a new application from APEX App Builder is as easy as one, two, three. One, in App Builder, select a project name and appearance. Two, add pages and features to the app. Three, finalize settings, and click Create. 05:00 Nikita: OK. So, the other tool I want to ask you about is Oracle Machine Learning. What can you tell us about it, Patrick? Patrick: Oracle Machine Learning, or OML, is available with Autonomous Database. A new capability that we've introduced with Oracle Machine Learning is called Automatic Machine Learning, or AutoML. Its goal is to increase data scientist productivity while reducing overall compute time. In addition, AutoML enables non-experts to leverage machine learning by not requiring deep understanding of the algorithms and their settings. 05:37 Lois: And what are the key functions of AutoML? Patrick: AutoML consists of three main functions: Algorithm Selection, Feature Selection, and Model Tuning. With Automatic Algorithm Selection, the goal is to identify the in-database algorithms that are likely to achieve the highest model quality. Using metalearning, AutoML leverages machine learning itself to help find the best algorithm faster than with exhaustive search. With Automatic Feature Selection, the goal is to denoise data by eliminating features that don't add value to the model. By identifying the most predicted features and eliminating noise, model accuracy can often be significantly improved with a side benefit of faster model building and scoring. Automatic Model Tuning tunes algorithm hyperparameters, those parameters that determine the behavior of the algorithm, on the provided data. Auto Model Tuning can significantly improve model accuracy while avoiding manual or exhaustive search techniques, which can be costly both in terms of time and compute resources. 06:44 Lois: How does Oracle Machine Learning leverage the capabilities of Autonomous Database? Patrick: With Oracle Machine Learning, the full power of the database is accessible with the tremendous performance of parallel processing available, whether the machine learning algorithm is accessed via native database SQL or with OML4Py through Python or R.  07:07 Nikita: Patrick, talk to us about the Data Insights feature. How does it help analysts uncover hidden patterns and anomalies? 
Patrick: A feature I wanted to call the electromagnet, but they didn't let me. An analyst's job can often feel like looking for a needle in a haystack. So throw the switch and all that metallic stuff is going to slam up onto that electromagnet. Sure, there are going to be rusty old nails and screws and nuts and bolts, but there are going to be a few needles as well. It's far easier to pick the needles out of these few bits of metal than go rummaging around in a pile of hay, especially if you have allergies. That's more or less how our Insights tool works. Load your data, kick off a query, and grab a cup of coffee. Autonomous Database does all the hard work, scouring through this data looking for hidden patterns, anomalies, and outliers. Essentially, we run some analytic queries that predict expected values. And where the actual values differ significantly from expectation, the tool presents them here. Some of these might be uninteresting or obvious, but some are worthy of further investigation. You get this dashboard of various exceptional data patterns. Drill down on a specific gauge in this dashboard and significant deviations between actual and expected values are highlighted. 08:28 Lois: What a useful feature! Thank you, Patrick. Now, let's discuss some terms and concepts that are applicable to the Autonomous JSON Database with Kay. Hi Kay, what's the main focus of the Autonomous JSON Database? How does it support developers in building NoSQL-style applications? Kay: Autonomous Database supports the JavaScript Object Notation, also known as JSON, natively in the database. It supports applications that use the SODA API to store and retrieve JSON data or SQL queries to store and retrieve data stored in JSON-formatted data.  Oracle AJD is Oracle ATP, Autonomous Transaction Processing, but it's designed for developing NoSQL-style applications that use JSON documents. You can promote an AJD service to ATP. 09:22 Nikita: What makes the development of NoSQL-style, document-centric applications flexible on AJD?  Kay: Development of these NoSQL-style, document-centric applications is particularly flexible because the applications use schemaless data. This lets you quickly react to changing application requirements. There's no need to normalize the data into relational tables and no impediment to changing the data structure or organization at any time, in any way. A JSON document has its own internal structure, but no relation is imposed on separate JSON documents. Nikita: What does AJD do for developers? How does it actually help them? Kay: So Autonomous JSON Database, or AJD, is designed for you, the developer, to allow you to use simple document APIs and develop applications without having to know anything about SQL. That's a win. But at the same time, it does give you the ability to create highly complex SQL-based queries for reporting and analysis purposes. It has built-in binary JSON storage type, which is extremely efficient for searching and for updating. It also provides advanced indexing capabilities on the actual JSON data. It's built on Autonomous Database, so that gives you all of the self-driving capabilities we've been talking about, but you don't need a DBA to look after your database for you. You can do it all yourself. 11:00 Lois: For listeners who may not be familiar with JSON, can you tell us briefly what it is?  Kay: So I mentioned this earlier, but it's worth mentioning again. JSON stands for JavaScript Object Notation. 
It was originally developed as a human readable way of providing information to interchange between different programs. So a JSON document is a set of fields. Each of these fields has a value, and those values can be of various data types. We can have simple strings, we can have integers, we can even have real numbers. We can have Booleans that are true or false. We can have date strings, and we can even have the special value null. Additionally, values can be objects, and objects are effectively whole JSON documents embedded inside a document. And of course, there's no limit on the nesting. You can nest as far as you like. Finally, we can have a raise, and a raise can have a list of scalar data types or a list of objects. 12:13 Nikita: Kay, how does the concept of schema apply to JSON databases? Kay: Now, JSON documents are stored in something that we call collections. Each document may have its own schema, its own layout, to the JSON. So does this mean that JSON document databases are schemaless? Hmmm. Well, yes. But there's nothing to fear because you can always use a check constraint to enforce a schema constraint that you wish to introduce to your JSON data. Lois: Kay, what about indexing capabilities on JSON collections? Kay: You can create indexes on a JSON collection, and those indexes can be of various types, including our flexible search index, which indexes the entire content of the document within the JSON collection, without having to know anything in advance about the schema of those documents.  Lois: Thanks Kay! 13:18 AI is being used in nearly every industry—healthcare, manufacturing, retail, customer service, transportation, agriculture, you name it! And, it's only going to get more prevalent and transformational in the future. So it's no wonder that AI skills are the most sought after by employers.  We're happy to announce a new OCI AI Foundations certification and course that is available—for FREE! Want to learn about AI? Then this is the best place to start! So, get going! Head over to mylearn.oracle.com to find out more.  13:54 Nikita: Welcome back! Sangeetha, I want to bring you in to talk about Oracle Text. Now I know that Oracle Database is not only a relational store but also a document store. And you can load text and JSON assets along with your relational assets in a single database.  When I think about Oracle and databases, SQL development is what immediately comes to mind. So, can you talk a bit about the power of SQL as well as its challenges, especially in schema changes? Sangeetha: Traditionally, Oracle has been all about SQL development. And with SQL development, it's an incredibly powerful language. But it does take some advanced knowledge to make the best of it. So SQL requires you to define your schema up front. And making changes to that schema could be a little tricky and sometimes highly bureaucratic task. In contrast, JSON allows you to develop your schema as you go--the schemaless, perhaps schema-later model. By imposing less rigid requirements on the developer, it allows you to be more fluid and Agile development style. 15:09 Lois: How does Oracle Text use SQL to index, search, and analyze text and documents that are stored in the Oracle Database? Sangeetha: Oracle Text can perform linguistic analyses on documents as well as search text using a variety of strategies, including keyword searching, context queries, Boolean operations, pattern matching, mixed thematic queries, like HTML/XML session searching, and so on. 
It can also render search results in various formats, including unformatted text, HTML with term highlighting, and original document format. Oracle Text supports multiple languages and uses advanced relevance-ranking technology to improve search quality. Oracle Text also offers advantage features like classification, clustering, and support for information visualization metaphors. Oracle Text is now enabled automatically in Autonomous Database. It provides full-text search capabilities over text, XML, JSON content. It also could extend current applications to make better use of textual fields. It builds new applications specifically targeted at document searching. Now, all of the power of Oracle Database and a familiar development environment, rock-solid autonomous database infrastructure for your text apps, we can deal with text in many different places and many different types of text. So it is not just in the database. We can deal with data that's outside of the database as well. 17:03 Nikita: How does it handle text in various places and formats, both inside and outside the database? Sangeetha: So in the database, we can be looking a varchar2 column or LOB column or binary LOB columns if we are talking about binary documents such as PDF or Word. Outside of the database, we might have a document on the file system or out on the web with URLs pointing out to the document. If they are on the file system, then we would have a file name stored in the database table. And if they are on the web, then we should have a URL or a partial URL stored in the database. And we can then fetch the data from the locations and index it in the term documents format. We recognize many different document formats and extract the text from them automatically. So the basic forms we can deal with-- plain text, HTML, JSON, XML, and then formatted documents like Word docs, PDF documents, PowerPoint documents, and also so many different types of documents. All of those are automatically handled by the system and then processed into the format indexing. And we are not restricted by the English either here. There are various stages in the index pipeline. A document starts one, and it's taken through the different stages so until it finally reaches the index. 18:44 Lois: You mentioned the indexing pipeline. Can you take us through it? Sangeetha: So it starts with a data store. That's responsible for actually reaching the document. So once we fetch the document from the data store, we pass it on to the filter. And now the filter is responsible for processing binary documents into indexable text. So if you have a PDF, let's say a PDF document, that will go through the filter. And that will extract any images and return it into the stream of HTML text ready for indexing. Then we pass it on to the sectioner, which is responsible for identifying things like paragraphs and sentences. The output from the section is fed onto the lexer. The lexer is responsible for dividing the text into indexable words. The output of the lexer is fed into the index engine, which is responsible for laying out to the indexes on the disk. Storage, word list, and stop list are some additional inputs there. So storage tells exactly how to lay out the index on disk. Word list which has special preferences like desegmentation. And then stop is a list word that we don't want to index. So each of these stages and inputs can be customized. 
Oracle has something known as the extensibility framework, which originally was designed to allow people to extend capabilities of these products by adding new domain indexes. And this is what we've used to implement Oracle Text. So when kernel sees this phrase INDEXTYPE ctxsys.context, it knows to handle all of the hard work creating the index. 20:48 Nikita: Other than text indexing, Oracle Text offers additional operations, right? Can you share some examples of these operations? Sangeetha: So beyond the text index, other operations that we can do with the Oracle Text, some of which are search related. And some examples of that are these highlighting markups and snippets. Highlighting and markup are very similar. They are ways of fetching these results back with the search. And then it's marked up with highlighting within the document text. Snippet is very similar, but it's only bringing back the relevant chunks from the document that we are searching for. So rather than getting the whole document back to you, just get a few lines showing this in a context and the theme and extraction. So Oracle Text is capable of figuring out what a text is all about. We have a very large knowledge base of the English language, which will allow you to understand the concepts and the themes in the document. Then there's entity extraction, which is the ability to find out people, places, dates, times, zip codes, et cetera in the text. So this can be customized with your own user dictionary and your own user rules. 22:14 Lois: Moving on to advanced functionalities, how does Oracle Text utilize machine learning algorithms for document classification? And what are the key types of classifications? Sangeetha: The text analytics uses machine learning algorithms for document classification. We can process a large set of data documents in a very efficient manner using Oracle's own machine learning algorithms. So you can look at that as basically three different headings. First of all, there's classification. And that comes in two different types-- supervised and unsupervised. The supervised classification which means in this classification that it provides the training set, a set of documents that have already defined particular characteristics that you're looking for. And then there's unsupervised classification, which allows your system itself to figure out which documents are similar to each other. It does that by looking at features within the documents. And each of those features are represented as a dimension in a massively high dimensional feature space in documents, which are clustered together according to that nearest and nearness in the dimension in the feature space. Again, with the named entity recognition, we've already talked about that a little bit. And then finally, there is a sentiment analysis, the ability to identify whether the document is positive or negative within a given particular aspect. 23:56 Nikita: Now, for those who are already Oracle database users, how easy is it to enable text searching within applications using Oracle Text? Sangeetha: If you're already an Oracle database user, enabling text searching within your applications is quite straightforward. Oracle Text uses the same SQL language as the database. And it integrates seamlessly with your existing SQL. Oracle Text can be used from any programming language which has SQL interface, meaning just about all of them.  24:32 Lois: OK from Oracle Text, I'd like to move on to Oracle Spatial Studio. 
Can you tell us more about this tool? Sangeetha: Spatial Studio is a no-code, self-service application that makes it easy to access the sorts of spatial features that we've been looking at, in particular, in order to get that data prepared to use with spatial, visualizing results in maps and tables, and also doing the analysis and sharing results. Spatial Studios is encoded at no extra cost with Autonomous Database. The studio web application itself has no additional cost and it runs on the server. 25:13 Nikita: Let's talk a little more about the cost. How does the deployment of Spatial Studio work, in terms of the server it runs on?  Sangeetha: So, the server that it runs on, if it's running in the Cloud, that computing node, it would have some cost associated with it. It can also run on a free tier with a very small shape, just for evaluation and testing.  Spatial Studio is also available on the Oracle Cloud Marketplace. And there are a couple of self-paced workshops that you can access for installing and using Spatial Studio. 25:47 Lois: And how do developers access and work with Oracle Autonomous Database using Spatial Studio? Sangeetha: Oracle Spatial Studio allows you to access data in Oracle Database, including Oracle Autonomous Database. You can create connections to Oracle Autonomous Databases, and then you work with the data that's in the database. You can also see Spatial Studio to load data to Oracle Database, including Oracle Autonomous Database. So, you can load these spreadsheets in common spatial formats. And once you've loaded your data or accessed data that already exists in your Autonomous Database, if that data does not already include native geometrics, Oracle native geometric type, then you can prepare the data if it has addresses or if it has latitude and longitude coordinates as a part of the data. 26:43 Nikita: What about visualizing and analyzing spatial data using Spatial Studio? Sangeetha: Once you have the data prepared, you can easily drag and drop and start to visualize your data, style it, and look at it in different ways. And then, most importantly, you can start to ask spatial questions, do all kinds of spatial analysis, like we've talked about earlier. While Spatial Studio provides a GUI that allows you to perform those same kinds of spatial analysis. And then the results can be dropped on the map and visualized so that you can actually see the results of spatial questions that you're asking. When you've done some work, you can save your work in a project that you can return to later, and you can also publish and share the work you've done. 27:34 Lois: Thank you, Sangeetha. For the final part of our conversation today, we'll talk with Thea. Thea, thanks so much for joining us. Let's get the basics out of the way. How can data be loaded directly into Autonomous Database? Thea: Data can be loaded directly to ADB through applications such as SQL Developer, which can read data files, such as txt and xls, and load directly into tables in ADB. 27:59 Nikita: I see. And is there a better method to load data into ADB? Thea: A more efficient and preferred method for loading data into ADB is to stage the data cloud object store, preferably Oracle's, but also supported our Amazon S3 and Azure Blob Storage. Any file type can be staged in object store. Once the data is in object store, Autonomous Database can access a directly. Tools can be used to facilitate the data movement between object store and the database. 
28:27 Lois: Are there specific steps or considerations when migrating a physical database to Autonomous? Thea: A physical database cannot simply be migrated to Autonomous, because the database must be converted to a pluggable database, upgraded to 19c, and encrypted. Additionally, any changes to Oracle-shipped stored procedures or views must be found and reverted, all uses of container database admin privileges must be removed, and all legacy features that are not supported must be removed, such as legacy LOBs. Data Pump (expdp/impdp) must be used for migrating database versions 10.1 and above to Autonomous Database, as it addresses the issues just mentioned. For online migrations, GoldenGate must be used to keep the old and new databases in sync. 29:15 Nikita: When you're choosing the method for migration and loading, what are the factors to keep in mind? Thea: It's important to segregate the methods by functionality and limitations of use against Autonomous Database. The considerations are as follows. Number one, how large is the database to be imported? Number two, what is the input file format? Number three, does the method support non-Oracle database sources? And number four, does the method support using Oracle and/or third-party object store? 29:45 Lois: Now, let's move on to the tools that are available. What does the DBMS_CLOUD functionality do? Thea: The Oracle Autonomous Database has built-in functionality called DBMS_CLOUD, specifically designed so the database can move data back and forth with external sources through a secure and transparent process. DBMS_CLOUD allows data movement from the Oracle object store: data from any application or data source exported to text-- .csv or JSON-- or output from third-party data integration tools. DBMS_CLOUD can also access data stored on Object Storage from other clouds, such as AWS S3 and Azure Blob Storage. DBMS_CLOUD does not impose any volume limit, so it's the preferred method to use. SQL*Loader can be used for loading data located on local client file systems into Autonomous Database; there are limits around OS and client machines when using SQL*Loader. 30:49 Nikita: So then, when should I use Data Pump and SQL Developer for migration? Thea: Data Pump is the best way to migrate a full or partial database into ADB, including databases from previous versions. Because Data Pump will perform the upgrade as part of the export/import process, this is the simplest way to get to ADB from any existing Oracle Database implementation. SQL Developer provides a GUI front end for Data Pump that can automate the whole export and import process from an existing database to ADB. SQL Developer also includes an import wizard that can be used to import data from several file types into ADB. A very common use of this wizard is for importing Excel files into ADW. Once a credential is created, it can be used to access a file as an external table or to ingest data from the file into a database table. DBMS_CLOUD makes it much easier to use external tables, and the ORGANIZATION EXTERNAL clause needed in other versions of the Oracle Database is not required. 
Flat files stored on object store can also be used as Oracle Database external tables, so they can be queried directly from the database as part of a normal DML operation. Object store is separate from the storage allocated to the Autonomous Database for database objects, such as tables and indexes. That storage is part of the Exadata system Autonomous Database runs on, and it is automatically allocated and managed; users do not have direct access to it. 32:50 Nikita: I know that one of the main considerations when loading and updating ADB is the network latency between the data source and the ADB. Can you tell us more about this? Thea: Many ways to measure this latency exist. One is the website cloudharmony.com, which provides many real-time metrics for connectivity between the client and Oracle Cloud Services. It's important to run these tests when determining which Oracle Cloud service location will provide the best connectivity. The Oracle Cloud Dashboard has an integrated tool that will provide real-time and historic latency information between your existing location and any specified Oracle data center. When migrating data to Autonomous Database, table statistics are gathered automatically during direct-path load operations. If direct-path load operations are not used, such as with SQL Developer loads, the user can gather statistics manually as needed. 33:44 Lois: And finally, what can you tell us about the Data Migration Service? Thea: Database Migration Service is a fully managed service for migrating databases to ADB. It provides logical online and offline migration with minimal downtime and validates the environment before migration. We have a requirement that the source database is on Linux, and it would be interesting to see whether we will have other use cases that need other, non-Linux operating systems. This requirement is because we are using SSH to directly execute commands on the source database, and for this we are certified on Linux only. Targets in the first release are Autonomous Databases, ATP or ADW, both serverless and dedicated. For the agent environment, we also require a Linux operating system. In general, we're targeting a number of different use cases-- migrating from on-premises, third-party clouds, Oracle legacy clouds such as Oracle Classic, or even migrating within OCI, and doing that with or without a direct connection. If you don't have a direct connection, behind a firewall, we support offline migration. If you have a direct connection, we support both offline and online migration. For more information on the migration approaches available for your particular situation, check out the Oracle Cloud Migration Advisor. 35:06 Nikita: I think we can wind up our episode with that. Thanks to all our experts for giving us their insights.  Lois: To learn more about the topics we've discussed today, visit mylearn.oracle.com and search for the Oracle Autonomous Database Administration Workshop. Remember, all of the training is free, so dive right in! Join us next week for another episode of the Oracle University Podcast. Until then, Lois Houston… Nikita: And Nikita Abraham, signing off! 35:35 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
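A similarly hedged sketch of the external-table pattern mentioned above, reusing the hypothetical credential and bucket from the earlier sketch; the column list and URI are illustrative only:

```python
# Hypothetical sketch: expose a flat file left in object storage as a
# queryable external table, without loading it. Names are placeholders.
import oracledb

conn = oracledb.connect(user="admin", password="...", dsn="mydb_high")
cur = conn.cursor()

# Define an external table over the staged CSV (credential created earlier).
cur.execute("""
    BEGIN
      DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
        table_name      => 'ORDERS_EXT',
        credential_name => 'S3_CRED',
        file_uri_list   => 'https://staging-bucket.s3.amazonaws.com/orders.csv',
        format          => '{"type" : "csv", "skipheaders" : "1"}',
        column_list     => 'order_id NUMBER, customer VARCHAR2(100), total NUMBER');
    END;""")

# Query it like any other table with normal SQL.
for row in cur.execute("SELECT COUNT(*) FROM orders_ext"):
    print(row)
```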

Critical Thinking - Bug Bounty Podcast
Episode 45: The OG Bug Bounty King - Frans Rosen

Critical Thinking - Bug Bounty Podcast

Play Episode Listen Later Nov 16, 2023 156:35


Episode 45: In this episode of Critical Thinking - Bug Bounty Podcast, we're thrilled to welcome Frans Rosén, an OG bug bounty hunter and co-founder of Detectify. We kick off with Frans sharing his journey through bug bounty and security startups, before diving headfirst into a host of his blog posts. We also cover the value of pseudo-code for bug exploitation, understanding developer terminology, the challenges of collaboration and delegating tasks, and balancing hacking with parenting. If you're interested in bug bounty or entrepreneurship, you won't want to miss it!
Follow us on twitter at: @ctbbpodcast
We're new to this podcasting thing, so feel free to send us any feedback here: info@criticalthinkingpodcast.io
Shoutout to YTCracker for the awesome intro music!
------ Links ------
Follow your hosts Rhynorater & Teknogeek on twitter: https://twitter.com/0xteknogeek https://twitter.com/rhynorater
------ Ways to Support CTBBPodcast ------
Sign up for Caido using the referral code CTBBPODCAST for a 10% discount.
Join our Discord!
Today's Guest: https://twitter.com/fransrosen
Detectify
Discovering s3 subdomain takeovers
Bucket Disclose
A deep dive into AWS S3 access controls
Attacking Modern Web Technologies
Live Hacking like a MVH
Account hijacking using Dirty Dancing in sign-in OAuth flows
Timestamps:
(00:00:00) Introduction
(00:04:50) Frans Rosén's Bug Bounty Journey and the creation of Detectify
(00:13:30) Benefits of pseudo-code, typing, and thinking like a developer
(00:20:20) Hunter Methodologies
(00:35:40) Time on targets, Iteration vs. Ideation, and tips for standing out
(00:51:10) S3 subdomain takeovers
(01:05:02) Blog posting and hosting motivations
(01:13:30) Detectify and entrepreneurial endeavors
(01:29:50) Attacking Modern Web Technologies
(01:46:00) postMessage and MessagePort
(01:58:09) Live Hacking and Collaboration
(02:13:50) Account Hijacking and OAuth Flows
(02:28:48) Hacking/Parenting

Hacker News Recap
October 15th, 2023 | Finland to vote against the EU mass surveillance and encryption ban directive

Hacker News Recap

Play Episode Listen Later Oct 16, 2023 17:49


This is a recap of the top 10 posts on Hacker News on October 15th, 2023. This podcast was generated by wondercraft.ai
(00:37): Finland to vote against the EU mass surveillance and encryption ban directive. Original post: https://news.ycombinator.com/item?id=37891886&utm_source=wondercraft_ai
(02:26): "Hacker News" for retro computing and gaming. Original post: https://news.ycombinator.com/item?id=37888144&utm_source=wondercraft_ai
(04:12): Mastercard Should Stop Selling Our Data. Original post: https://news.ycombinator.com/item?id=37892684&utm_source=wondercraft_ai
(06:04): Google has sent internet into 'spiral of decline', claims DeepMind co-founder. Original post: https://news.ycombinator.com/item?id=37887562&utm_source=wondercraft_ai
(07:48): Signtime.apple: One-on-one sign language interpreting by Apple. Original post: https://news.ycombinator.com/item?id=37890176&utm_source=wondercraft_ai
(09:28): SSH-audit: SSH server and client security auditing. Original post: https://news.ycombinator.com/item?id=37892028&utm_source=wondercraft_ai
(10:56): Cloudflare Sippy: Incrementally Migrate Data from AWS S3 to Reduce Egress Fees. Original post: https://news.ycombinator.com/item?id=37888135&utm_source=wondercraft_ai
(12:45): Omnivore – free, open source, read-it-later App. Original post: https://news.ycombinator.com/item?id=37890742&utm_source=wondercraft_ai
(14:19): Mark Twain at Stormfield (1909) [video]. Original post: https://news.ycombinator.com/item?id=37890369&utm_source=wondercraft_ai
(15:47): BeagleV-Ahead open-source RISC-V single board computer. Original post: https://news.ycombinator.com/item?id=37887341&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

cloudonaut
#080 Self-hosted GitHub Runners on AWS + S3 Object Lambda + AWS Community Day Germany

cloudonaut

Play Episode Listen Later Sep 18, 2023 31:20


https://podcast.cloudonaut.io/80-self-hosted-github-runners-on-aws-s3-object-lambda-aws-community-day-germany
Andreas and Michael Wittig have been building on AWS since 2009. Follow their journey of developing products like bucketAV, marbot, and HyperEnv, and learn from practice.
Topics: AWS Community Day Germany/DACH, Self-hosted GitHub runners on AWS, S3 Object Lambda
Links: Self-hosted GitHub runners on AWS, HyperEnv for GitHub Actions, Unboxing S3 Object Lambda (2021), S3 Object Lambda used to implement scan on download for bucketAV
Subscribe: Podcast feed, YouTube channel, Newsletter
Projects: bucketAV - Antivirus protection for Amazon S3; marbot - AWS Monitoring made simple!; HyperEnv for GitHub Actions - Deploy self-hosted GitHub runners on AWS with ease!; attachmentAV - Antivirus for Atlassian Jira and Confluence
Contact and Feedback: hello@cloudonaut.io, Mastodon (Andreas), Mastodon (Michael), LinkedIn (Andreas), LinkedIn (Michael)
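For context on the S3 Object Lambda topic above, here is a rough, hypothetical sketch of a scan-on-download style handler; the scanning logic and names are placeholders, and only the general call shape follows the boto3 API:

```python
# Hypothetical sketch of an S3 Object Lambda handler: the function intercepts
# GET requests through an Object Lambda access point, fetches the original
# object, and returns a possibly transformed or blocked response.
import boto3
import urllib.request

s3 = boto3.client("s3")

def looks_infected(data: bytes) -> bool:
    # Stand-in check; a real implementation would call an antivirus engine.
    return b"EICAR" in data

def handler(event, context):
    ctx = event["getObjectContext"]

    # Presigned URL for the original object behind the access point.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read()

    # Placeholder decision: pass the object through or block it.
    body = b"blocked by scanner" if looks_infected(original) else original

    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=body,
    )
    return {"statusCode": 200}
```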

AWS Bites
95. Mounting S3 as a Filesystem

AWS Bites

Play Episode Listen Later Sep 14, 2023 15:00


Saddle up for a cloud adventure like no other in this episode of AWS Bites, where Eoin and Luciano explore the untamed world of AWS S3 Mountpoint. Just like a trusty steed on the digital prairie, Mountpoint gallops into action to solve complex use cases, making it a valuable asset for managing massive data, achieving high throughput, and effortlessly fetching information from the AWS S3 wilderness. Dive deep into the inner workings of Mountpoint, a Rust-powered Linux-exclusive application that harnesses the Linux FUSE subsystem to provide optimal S3 performance. While exploring alternatives like s3fs-fuse and goofys, discover the benefits of sticking to native AWS tools for certain scenarios. Uncover Mountpoint's performance prowess, thanks to its integration with AWS Common Runtime libraries, and learn when to hop on this cloud cowboy or opt for a more native approach. Wrapping up, don't forget to check out AWS Storage's blog post for an even deeper dive into Mountpoint's capabilities. Whether you're a seasoned cloud wrangler or a newcomer to the digital rodeo, this video will equip you with the knowledge to navigate the AWS S3 Mountpoint frontier confidently.
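A rough sketch of the basic Mountpoint workflow the episode describes, assuming a hypothetical bucket and mount directory; consult the Mountpoint documentation for the exact flags your version supports:

```python
# Hypothetical sketch: mount a bucket with Mountpoint's mount-s3 binary and
# read objects as ordinary files. Bucket name and paths are illustrative.
import subprocess
from pathlib import Path

mount_dir = Path("/mnt/my-bucket")
mount_dir.mkdir(parents=True, exist_ok=True)

# Basic invocation is `mount-s3 <bucket> <directory>`; credentials come from
# the usual AWS credential chain (env vars, ~/.aws/credentials, instance role).
subprocess.run(["mount-s3", "my-bucket", str(mount_dir)], check=True)

# Objects now appear as files under the mount point.
for path in sorted(mount_dir.glob("logs/*.json"))[:5]:
    print(path.name, path.stat().st_size)
```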

InfosecTrain
What is AWS S3 Object Lock? | How to use Amazon S3 Object Lock?

InfosecTrain

Play Episode Listen Later Aug 18, 2023 2:56



InfosecTrain
What is AWS S3 Glacier?

InfosecTrain

Play Episode Listen Later Jun 27, 2023 4:24


What is AWS S3 Glacier? AWS S3 Glacier is a low-cost, secure, and durable archival service that Amazon Web Services (AWS) provides. It is designed for long-term data archiving and backup. S3 Glacier offers durable storage with features like data redundancy, encryption, and data integrity checks. How does AWS S3 Glacier work? AWS S3 Glacier is designed to provide secure and durable long-term data archiving and backup storage. Here's how AWS S3 Glacier works: View More: What is AWS S3 Glacier?
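A brief, hedged example of the archive-and-restore flow with boto3; the bucket and key names are made up, and retrieval tiers and timings vary:

```python
# Hypothetical sketch: write an object straight into an archival storage
# class, then later request a temporary restored copy before reading it.
import boto3

s3 = boto3.client("s3")

with open("2023-06.tar.gz", "rb") as data:
    s3.put_object(
        Bucket="archive-bucket",
        Key="backups/2023-06.tar.gz",
        Body=data,
        StorageClass="GLACIER",  # S3 Glacier Flexible Retrieval
    )

# Archived objects are not immediately readable; request a restore and poll
# head_object until the temporary copy is available.
s3.restore_object(
    Bucket="archive-bucket",
    Key="backups/2023-06.tar.gz",
    RestoreRequest={"Days": 2, "GlacierJobParameters": {"Tier": "Standard"}},
)
```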

Cloud Security Podcast
AWS ReInforce 2023 Recap & Highlights

Cloud Security Podcast

Play Episode Listen Later Jun 23, 2023 55:25


Cloud Security Podcast - AWS ReInforce 2023 or AWS Re:inforce 2023 highlights in a recap from the 2-day affair for all things AWS Cloud Security! We were lucky enough to be there. This is a recap of the major announcements and highlights from the major themes around the event.
Episode YouTube Video - https://www.youtube.com/watch?v=UhVBvnmmfnQ
Cloud Security Podcast Website - www.cloudsecuritypodcast.tv
FREE CLOUD Security BOOTCAMP - www.cloudsecuritybootcamp.com
Host Twitter: Ashish Rajan (@hashishrajan)
Podcast Twitter - @CloudSecPod @CloudSecureNews
If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security News
- Cloud Security BootCamp
Timeline:
(00:00) Introduction
(02:20) What is AWS re:inforce?
(04:33) Neha Rungta explains Verified Access
(05:38) Neha Rungta explains Verified Permissions
(07:53) What verified permissions means for you!
(09:35) Amazon EC2 Connect Endpoint
(11:08) Amazon GuardDuty Updates
(12:42) Amazon Inspector Code Scan for Lambda function
(14:26) Amazon Inspector SBOM Export
(17:35) Amazon Code Whisperer
(18:00) Amazon Code Guru
(20:15) Finding groups in Amazon Detective
(22:25) Dual Layer Encryption for AWS S3
(23:18) AWS Global Partner Security Initiative
(26:12) Key Themes from AWS re:inforce
(26:45) Shared Responsibility Model
(27:56) Cloud Security Newsletter
(30:04) Generative AI
(31:29) Amazon Bedrock
(34:04) Shift from ransomware to wiperware
(35:29) Nancy Wang explains AWS Backup Vault Lock
(37:18) Nancy explains double encryption with S3 Bucket
(38:41) Nancy explains how vault helps with data loss
(40:20) AWS Backup Vault Lock
(41:55) Zero Trust and Identity
(45:03) DevSecOps
(46:47) How GenAI will impact cloud security roles?
(49:32) Amazon Security Lake
(52:26) Quantum Computing
See you at the next episode!

The Cloudcast
The Economics & Beyond of Object Storage

The Cloudcast

Play Episode Listen Later Jun 7, 2023 36:14


Jon Toor (CMO @CloudianStorage) talks about the history and evolution of object storage, the rise of enterprise-class object storage, and the changing economics of cloud storage.
SHOW: 725
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT - "CLOUDCAST BASICS"
SHOW SPONSORS:
CloudZero – Cloud Cost Visibility and Savings. CloudZero provides immediate and ongoing savings with 100% visibility into your total cloud spend.
Datadog Application Monitoring: Modern Application Performance Monitoring. Get started monitoring service dependencies to eliminate latency and errors and enhance your users' app experience with a free 14-day Datadog trial. Listeners of The Cloudcast will also receive a free Datadog T-shirt.
SHOW NOTES:
Cloudian website
Topic 1 - Welcome to the show, Jon. Tell us a little bit about your background. Your career and Cloudian parallel in many ways.
Topic 2 - Cloudian has been around since before object storage was cool. We first heard about Cloudian back in the OpenStack and early AWS S3 days. Object storage has come a long way. Can you help everyone frame where we were and where we are today?
Topic 3 - We've seen the rise of enterprise-class, S3-compatible object storage for use cases like hybrid cloud, data sovereignty, and more recently analytics such as data lakehouses. Where are you seeing implementations these days as we've moved beyond basic, simple storage behind cloud backends?
Topic 4 - With the recent changes to the world economy, how much does economics come into conversations around the design of solutions? There's often a healthy tension between what is technically possible and what is economically feasible. How does that design conversation play out lately?
Topic 5 - We used to talk about "Data Gravity" all the time. The concept, for those unfamiliar, is that data has a certain weight, attracts more data to existing sources, and becomes hard to move over time. We haven't talked about it as much in recent years, and we are seeing the rise of hybrid and multicloud solutions, but folks often don't think about access to the data. Where are folks building large data sets? What are they using them for? Are they ever moving them?
Topic 6 - Last question: Cloudian is well known for its partnerships, alliances, and solutions. You partner with hardware companies, software companies, backup companies, public clouds, etc. It's quite a mix. Has this been a factor in Cloudian's longevity? Tell everyone a little bit about how this came to be and how important you see this for the future.
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet

The Datanation Podcast - Podcast for Data Engineers, Analysts and Scientists
BONUS: What is Object Storage like AWS S3, Minio and more!

The Datanation Podcast - Podcast for Data Engineers, Analysts and Scientists

Play Episode Listen Later Apr 12, 2023


Alex Merced discusses what Object Storage is and the history of file systems. Join the community at datanation.click

Screaming in the Cloud
Making Open-Source Multi-Cloud Truly Free with AB Periasamy

Screaming in the Cloud

Play Episode Listen Later Mar 28, 2023 40:04


AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create a truly free open-source software, and how his partnership with Amazon has been beneficial. About ABAB Periasamy is the co-founder and CEO of MinIO, an open source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART).AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of the commodity cluster computing to supercomputing class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” code, which, at the time was the second fastest in the world.  AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.Links Referenced: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy LinkedIn: https://www.linkedin.com/in/abperiasamy/ Email: mailto:ab@min.io TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsor ing my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those. 
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.AB: Yes, it's wonderful to be here again, Corey.Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use.And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not.One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition.The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? 
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard.And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke.There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, they need works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model.And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the region and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility.Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs will send us in certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problem are our problems too. We have to carry that baggage.But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove.So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right? 
POSIX is a rich set of API, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system.So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, then trying to create an API Bible.Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity.There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it's it just feels like there's an economic story, if nothing else, just from a governance control and make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website.AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—is good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly.And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right? The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB.And then when nobody expected—everybody has forgotten that there was a code a certain place—suddenly application start failing. And when it fails, it doesn't—even though the S3 API responds back saying that insufficient space, but then the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually object storage ran out of space, the lost time and it's a downtime. So, as long as they have proper observability—because I mean, I've will also asked observability, that it can alert you that you are only going to run out of space soon. If you have those system in place, then go for quota. 
If not, I would agree with the S3 API standard that is not about cost. It's about operational, unexpected accidents.Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it and whatnot, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well it got full so oops-a-doozy.On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect.AB: Actually, that is the right way to do. That's what I would recommend customers to do. Even though there is hard quota, I will tell, don't use it, but use soft quota. And the soft quota, instead of even soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually the month end bills, it shows up.On MinIO, when it's deployed on these large data centers, that it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability.j, the way to do it is—actually [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? That IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team.And you measure, instead of setting a hard limit, you actually charge them that based on the usage of your bucket, you're going to pay for it. And this is a observability problem. And you can call it soft quotas, but it hasn't been to trigger an alert in observability. It's observability problem. But it actually is interesting to hear that as soft quotas, which makes a lot of sense.Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, “Oh, yeah, you're going to run into a quota storage problem.” Yeah, we all find that out because the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem?AB: Yeah. So, when we started, right, our idea was that world is going to produce incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right?That was the reason for us to play this game. And we saw that every one of these cloud players were incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck. 
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud player's game was bring all the world's data into the cloud.And that actually requires enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to Amazon S3 API instead of introducing it another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015, we started, it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right?Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, you serious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back.AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabyte, feels like 100 terabyte. For even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it?Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like AWS's S3 compatible object store. We took a very different path. But now, when I say the same story that when what we started with day one, it is no longer laughable, right?People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside a shadow IT and eventually businesses realize the bulk of their business-critical data is sitting on MinIO and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there is also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud or on the same software can run on their colos like Equinix, or like bunch of, like, Digital Reality, anywhere.And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on Raspberry Pi. 
It's now—whatever we started with is now has become reality; the timing is perfect for us.Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAproxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere and the reality that I keep running into, which is we tried to do that but we implicitly without realizing it built in a lot of assumptions that everything would look just like this environment that we started off in.AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally a HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you by API, you can build a globally unified data infrastructure, some buckets here, some buckets there.That's actually not the problem. The problem comes when you have multiple clouds. Different teams, like, part M&A, the part—like they—even if you don't do M&A, different teams, no two data engineer will would agree on the same software stack. Then where they will all end up with different cloud players and some is still running on old legacy environment.When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employer, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected?You want unified identity, you want unified access control policies. Where are the encryption key store? And then the load balancer itself, the load, its—load balancer is not the problem. But then unless you adopt S3 API as your standard, the definition of what a bucket is different from Microsoft to Google to Amazon.Corey: Yeah, the idea of an of the PUTS and retrieving of actual data is one thing, but then you have how do you manage it the control plane layer of the object store and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball edge devices to move some data into S3 on a lark. 
And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying.I would give a lot for just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO because the Snowball appliance also looks like S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO to make it look like one unit because MinIO can erase your code objects across multiple Snowball appliances. And the MC tool, unlike AWS CLI, which is really meant for developers, like low-level calls, MC gives you unique [scoring 00:21:08] tools, like lscp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data, is a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively.AB: Yeah. In fact, Western Digital and a few other players, too, now the Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important and more and more customers who need it.Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different “monitoring tools” that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented it to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again. 
So, it started to increasingly feel, in a lot of ways, like Cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things.I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up instead of where do you save this file to having the cloud abstraction, which hopefully, you'll never have to deal with an S3-style endpoint, but that can underpin an awful lot of things. It feels like it's coming back and that's cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?AB: I actually, fundamentally believe in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, on your data center, you used to run your own Exchange servers, like, those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined and all of them will look alike.But what I find from the customer's journey, the Old World and the New World is incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their application. But this time, you have—it as a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more and I would rather not do it.Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make it available ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications, but it's still a bad idea to just move legacy applications there. But what I'm finding is that the cost, if you still run your infrastructure with enterprise IT mindset, you're out of luck. It's going to be super expensive and you're going to be left out modern infrastructure, because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen.And that's why cloud, in the long run, everyone will look like AWS and we always said that and it's now being becoming true. Like, Kubernetes and MinIO basically is leveling the ground everywhere. It's giving ECS and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find the challenging part is the cultural mindset. If they still have the old cultural mindset and if they want to adopt cloud, it's not going to work.You have to change the DNA, the culture, the mindset, everything. The best way to do it is go to the cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask economics question, the unit economics. Then you will find the answers yourself.Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective. 
And well, we should go and refactor this because, I don't know, a couple of folks on a podcast said we should isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business, they have to rethink a lot of the architectural things that have sprung up around it.I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.AB: What I am finding is that if you are running it the enterprise IT style, you are the one telling the application developers, here you go, you have this many VMs and then you have, like, a VMware license and, like, Jboss, like WebLogic, and like a SQL Server license, now you go build your application, you won't be able to do it. Because application developers talk about Kafka and Redis and like Kubernetes, they don't speak the same language. And that's when these developers go to the cloud and then finish their application, take it live from zero lines of code before it can procure infrastructure and provision it to these guys. The change that has to happen is how can you give what the developers want now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it.But then you go to the cloud and then at scale, some parts of it, you want to move for—now you really know why you want to move. For economic reasons, like, particularly the data-intensive workloads becomes very expensive. And at that part, they go to a colo, but leave the applications on the cloud. So, it's the multi-cloud model, I think, is inevitable. The expensive pieces that where you can—if you are looking at yourself as hyperscaler and if your data is growing, if your business focus is data-centric business, parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on productivity, stick to the cloud and you're still better off.Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money.” It's, “No you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go for therefore is for a capability story when it's right for you.That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be its cloud or it's trash. No, I'm a big fan of doing things that are sensible and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “And I've decided cloud is not for me.” It's, “Ehh, you sure about that?”That sounds like you are smack-dab in the middle of the cloud use case. 
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say that they will.AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to colo because they have the—exactly know. They have the containers and Kubernetes microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to cloud, rewrite your application.Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there's basically no companies out there worth mentioning that are, “Yep, we've decided the cloud is terrible, we're taking everything out and we are going to data centers. The end.” In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times during proof of concepts, and other times, as things have hit a certain point of scale, we're in an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.AB: Absolutely. Actually, what we are finding with the application side, like, parts of their overall ecosystem, right, within the company, they run on the cloud, but the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes and their plan is to go at exascale. And they are actually doing repatriation because for them, their customers, it's consumer-facing and it's extremely price sensitive, but when you're a consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business.Particularly last two years, the cost part became an important element in their infrastructure, they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network and put it on colo or even lease these boxes, they know what their demand is. Even at ten petabytes, the economics starts impacting. If you're processing it, the data side, we have several customers now moving to colo from cloud and this is the range we are talking about.They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud, but I think for them, they're also not anti-cloud. They don't want to leave the cloud. The completely leaving the cloud, it's a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, object store, particularly if it goes to a colo.Now, your applications from all the clouds can access this centralized—centralized, meaning that one object store you run on colo and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud, some of them, surprisingly, that they have global customer base. And not all of them are cloud. Sometimes like some applications itself, if you ask what type of edge devices they are running, edge data centers, they said, it's a mix of everything. 
What really matters is not the infrastructure. Infrastructure in the end is CPU, network, and drive. It's a commodity. It's really the software stack: you want to make sure that it's containerized and easy to deploy and roll out updates. You have to learn the Facebook-Google style of running a SaaS business. That change is coming.
Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.
AB: Right. Like, the competition is always great for customers. They get to benefit from it. So, the decentralization is a path to bringing—like, commoditizing the infrastructure. I think the bigger picture for me, what I'm particularly happy about is, for a long time we carried industry baggage in the infrastructure space.
If no one wants to change, no one wants to rewrite applications. As part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much baggage. All of that is getting thrown out. Like, the cloud players helped the customers start with a clean slate. I think to me, that's the biggest advantage. And now that we have a clean slate, we can go on a whole new evolution of the stack, keeping it simpler, and everyone can benefit from this change.
Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And I guess how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”
AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them to be conflicting. If I run as a charity, right, like, I take donations. If you love the product, here is the donation box, then that doesn't work at all, right?
I shouldn't take investor money and I shouldn't have a team, because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor, why would—if I'm a customer, same software, equal in functionality, if it's proprietary, I would actually prefer open-source and pay even more.
But why, really, are customers paying me now, and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community and we have strong views on what open-source means, right. That's why we call it free software. And free here means freedom, right? Free does not mean gratis, that is, free of cost. It's actually about freedom and I deeply care about it.
For me it's a philosophy and it's a way of life.
That's why I don't believe in open core and other models that holding—giving crippleware is not open-source, right? I give you some freedom but not all, right, like, it's it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then added commercial support on top.We built the product, we believed in open-source, we still believe and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications that—like the AGPL license on the derivative works, they have to be compatible with AGPL because we are the creator. If you cannot open-source, you open-source your application derivative works, you can buy a commercial license from us. We are the creator, we can give you a dual license. That's how the business model works.That way, the open-source community completely benefits. And it's about the software freedom. There are customers, for them, open-source is good thing and they want to pay because it's open-source. There are some customers that they want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way I actually find open-source to be incredibly beneficial.Open-source gave us that trust, like, more than adoption rate. It's not like free to download and use. More than that, the customers that matter, the community that matters because they can see the code and they can see everything we did, it's not because I said so, marketing and sales, you believe them, whatever they say. You download the product, experience it and fall in love with it, and then when it becomes an important part of your business, that's when they engage with us because they talk about license compatibility and data loss or a data breach, all that becomes important. Open-source isn't—I don't see that to be conflicting for business. It actually is incredibly helpful. And customers see that value in the end.Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?AB: I was on Twitter and now I think I'm spending more time on, maybe, LinkedIn. I think if they—they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io and I'm always interested in talking to our user base.Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.AB: It's wonderful to be here.Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. 
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
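AB's multi-cloud point above comes down to the S3 API being portable: if the data lives in an S3-compatible object store in a colo (MinIO being one such store), applications running in any cloud can reach it with the same client code they would use against AWS; only the endpoint and credentials change. A minimal sketch with boto3 — the endpoint URL, bucket, key, and credentials below are hypothetical placeholders, not anything from the episode:

```python
import boto3

# The same S3 client code targets AWS or any S3-compatible object store
# (for example, a MinIO cluster running in a colo). Only the endpoint and
# credentials change. Every name below is a made-up placeholder.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-colo.net",  # omit this line for AWS S3 itself
    aws_access_key_id="EXAMPLE_ACCESS_KEY",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

s3.put_object(Bucket="data-lake", Key="events/2023/02/batch-0001.json", Body=b"{}")
obj = s3.get_object(Bucket="data-lake", Key="events/2023/02/batch-0001.json")
print(obj["Body"].read())
```

Because the wire protocol is the same either way, moving the data tier between a cloud region and a colo is mostly a question of where the bucket lives and what the unit economics look like, which is exactly the trade-off discussed above.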

The Cloudcast
Cost-Efficient Scale-Out Cloud Storage

The Cloudcast

Play Episode Listen Later Mar 22, 2023 32:40


Gleb Budman (@GlebBudman, CEO/Co-Founder of @Backblaze) talks about the evolution of cloud storage, the shift from on-prem to cloud, best practices and the rise of ransomware.
SHOW: 704
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT - "CLOUDCAST BASICS"
SHOW SPONSORS:
Datadog Synthetic Monitoring: Frontend and Backend Modern Monitoring. Ensure frontend issues don't impair user experience by detecting user-facing issues with API and browser tests with a free 14 day Datadog trial. Listeners of The Cloudcast will also receive a free Datadog T-shirt.
Solve your IAM mess with Strata's Identity Orchestration platform. Have an identity challenge you thought was too big, too complicated, or too expensive to fix? Let us solve it for you! Visit strata.io/cloudcast to share your toughest IAM challenge and receive a set of AirPods Pro.
Make Cloud Native Ubiquitous with Cloud Native Computing Foundation (CNCF). Join the foundation of doers: CNCF is the open source, vendor-neutral hub of cloud native computing, hosting projects like Kubernetes and Prometheus to make cloud native universal and sustainable.
KubeConEU Virtual Event Registration Code: Please use the code KCEUVCCP, while supplies last.
SHOW NOTES:
Backblaze (homepage)
B2 Cloud Storage (1/5th the price of AWS S3)
Backblaze Blog
Questions for Gleb?
Topic 1 - Welcome to the show. You started Backblaze in 2007, just a year after AWS S3 launched. What made you decide to start a storage company when EMC, HP and NetApp dominated with big enterprise boxes, and S3 seemed like a weird new thing for Amazon sellers?
Topic 2 - Over the last couple of years, it feels like there has been a shift in how companies think about "the cloud". We're seeing more specialty clouds. How do you see this trend playing out in the market?
Topic 3 - You've been through multiple stages of how the cloud has evolved. Where do you see us now in terms of cloud evolution, and what are some of the things you see coming on the horizon?
Topic 4 - Backblaze is well known for disrupting not just the cost of cloud storage, but also how storage systems are built. Given today's economic climate, are you seeing more companies demand more flexibility and/or efficiency in how they store data?
Topic 5 - We continue to see ransomware attacks across all industries. Is this leading companies to rethink their backup and disaster-recovery strategies? (A sketch of one common mitigation pattern follows below.)
Topic 6 - From a storage perspective, do you see bottlenecks emerging as this appetite for more and more data eventually runs into problems?
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet
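On the ransomware question in Topic 5, one concrete pattern that S3 and S3-compatible backup targets support is object immutability (Object Lock), so a backup written today cannot be altered or deleted until its retention period expires. A hedged boto3 sketch of the idea; the bucket name, key, and 30-day window are illustrative assumptions, and a given provider's S3-compatible endpoint may expose the feature slightly differently:

```python
import boto3
from datetime import datetime, timedelta, timezone

# Point endpoint_url at an S3-compatible provider if the backups do not live on AWS.
s3 = boto3.client("s3")

# Object Lock has to be enabled when the bucket is created (this also enables versioning).
s3.create_bucket(Bucket="immutable-backups", ObjectLockEnabledForBucket=True)

# Write a backup that cannot be overwritten or deleted for 30 days, which is what
# makes it useful against ransomware that goes after the backups first.
with open("backup-2023-03-22.dump", "rb") as backup:
    s3.put_object(
        Bucket="immutable-backups",
        Key="db/backup-2023-03-22.dump",
        Body=backup,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```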

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for February 21st, 2023 - Episode 185

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Feb 21, 2023 32:21


2023-02-21 Weekly News - Episode 185Watch the video version on YouTube at https://youtube.com/live/pzrKwZI8W9g?feature=share Hosts:  Eric Peterson - Senior Developer at Ortus Solutions Grant Copley - Senior Developer at Ortus Solutions Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there. A few ways  to say thanks back to Ortus Solutions: Like and subscribe to our videos on YouTube.  Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github  Subscribe to our Podcast on your Podcast Apps and leave us a review Sign up for a free or paid account on CFCasts, which is releasing new content every week BOXLife store: https://www.ortussolutions.com/about-us/shop Buy Ortus's Books 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips) Learn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes   Patreon Support ( SENSIBLE )Goal 1 - We have 42 patreons providing 100% of the funding for our Modernize or Die Podcasts via our Patreon site: https://www.patreon.com/ortussolutions.Goal 2 - We are 37% of the way to fully fund the hosting of ForgeBox.ioNews and AnnouncementsICYMI - Authentication Bypass Vulnerability in Mura CMS and Masa CMS – Preliminary Security AdvisoryMultiple versions of Mura CMS and Masa CMS contain an authentication bypass vulnerability that can allow an  unauthenticated attacker to login as any Site Member or System User.This is a preliminary security advisory, and is being shared so that impacted organizations can update and patch as needed.  Additional technical details will be released on March 6, 2023.https://coldfusion.adobe.com/2023/01/muracms/ICYMI - State of the CF Union 2023 ReleasedHelp us find out the state of the CF Union – what versions of CFML Engine do people use, what frameworks, tools etc.https://teratech.com/state-of-the-cf-union-2023-surveyColdFusion Summit East 2023 MVC Training WorkshopWe are excited to announce a training workshop before the ColdFusion Summit East in Washington, D.C., on April 4th, 2023. Luis Majano, the creator of The ColdBox Platform, will be leading this workshop, bringing you a deep dive 1-day workshop: ColdFusion MVC for Dummies.The workshop will combine a variety of theories, hands-on coding, and best practices to give you all the tools needed to leave the workshop ready to build MVC-powered apps when you return to your office.https://www.ortussolutions.com/blog/coldfusion-summit-east-2023-mvc-training-workshopNew Releases and UpdatesCBSecurity 3.1 ReleasedWe are happy to announce our first minor release for CBSecurity v3.1.0. 
This release includes a major upgrade of our cbcsrf library, but more importantly a way to generate secure and random passwords using our new createPassword() method in our CBSecurity object.https://www.ortussolutions.com/blog/cbsecurity-31-releasedWebinar / Meetups and WorkshopsOrtus Event Calendar for Googlehttps://calendar.google.com/calendar/u/0?cid=Y181NjJhMWVmNjFjNGIxZTJlNmQ4OGVkNzg0NTcyOGQ1Njg5N2RkNGJiNjhjMTQwZjc3Mzc2ODk1MmIyOTQyMWVkQGdyb3VwLmNhbGVuZGFyLmdvb2dsZS5jb20CFCasts Content Updateshttps://www.cfcasts.comRecent Releases Mastering CommandBox 5 - 3 new videos - https://cfcasts.com/series/mastering-commandbox-5 CFConfig env var overrides - https://cfcasts.com/series/mastering-commandbox-5/videos/cfconfig-env-var-overrides HTTP2 support - https://cfcasts.com/series/mastering-commandbox-5/videos/http2-support AJP Secret - https://cfcasts.com/series/mastering-commandbox-5/videos/ajp-secret 2023 ForgeBox Module of the Week Series - 1 new Video https://cfcasts.com/series/2023-forgebox-modules-of-the-week  2023 VS Code Hint tip and Trick of the Week Series - 1 new Video https://cfcasts.com/series/2023-vs-code-hint-tip-and-trick-of-the-week  Coming Soon Brad with more CommandBox Videos - 24!!! More ForgeBox and VS Code Podcast snippet videos CBWire Series from Grant - Fill out the Poll here https://community.ortussolutions.com/t/poll-cbwire-cfcasts-com-series/9513  ColdBox Elixir from Eric Getting Started with ContentBox from Daniel Garcia Conferences and TrainingGithub GalaxyMarch 28th, 2023Save the date for our global enterprise event focused on improving efficiency, security, and developer productivity.GitHub Galaxy—formerly known as GitHub InFocus—is new and reimagined.Virtual registration is right around the corner.VIP summits: Join us in-person for a VIP summit near you, with breakout sessions, networking, and more for enterprise leaders.https://galaxy.github.com/Dev NexusApril 4-6th, 2023 in AtlantaGeorgia World Congress Center285 Andrew Young International Blvd NWAtlanta, GA 30313Kubernetes, Java, Software architecture, Kotlin, Performance Tuninghttps://devnexus.com/CFSummit EastThursday, April 6, 20238:00am - 4:00pmMarriott Marquis Washington, DCComplimentary; breakfast and lunch will be providedhttps://carahevents.carahsoft.com/Event/Details/341389-adobehttps://carahevents.carahsoft.com/Event/Details/344168-adobeVueJS LiveMAY 12 & 15, 2023ONLINE + LONDON, UKCODE / CREATE / COMMUNICATE35 SPEAKERS, 10 WORKSHOPS10000+ JOINING ONLINE GLOBALLY300 LUCKIES MEETING IN LONDONhttps://vuejslive.com/Into the Box 2023 - 10th EditionMay 17-19, 2023The conference will be held in The Woodlands (Houston), TexasThis year we will continue the tradition of training and offering a pre-conference hands-on training day on May 17th and our live Mariachi Band Party! However, we are back to our Spring schedule and beautiful weather in The Woodlands! Also, this 2023 will mark our 10 year anniversary. So we might have two live bands and much more!!!CLOSED -  call for speakers for the Into The Box Conference for 2023 is open until Jan 31stSessions announced Soon.https://www.intothebox.org/blog/into-the-box-2023-call-for-speakershttps://itb2023.eventbrite.com/VueConf.usNEW ORLEANS, LA • MAY 24-26, 2023Jazz. Code. 
Vue.Workshop day: May 24Main Conference: May 25-26https://vueconf.us/CFCamp is backJune 22-23rd, 2023Marriott Hotel Munich Airport, FreisingCall for Speakers is now open!https://www.papercall.io/cfcamp2023https://www.cfcamp.org/More conferencesNeed more conferences, this site has a huge list of conferences for almost any language/community.https://confs.tech/https://github.com/scraly/developers-conferences-agendaBlogs, Tweets, and Videos of the Week2/14/23 - Tweet - Luis Majano - ColdBox 7 WireBox Module InceptionGet ready for ColdBox 7 Hierarchical Injectors for Modules. Each module can have an injector and dependency resolution. Mix and match module versions in complete hierarchical isolation! No other MVC framework offers these capabilities in ANY language except CFML.https://twitter.com/lmajano/status/16256223077409996802/15/23 - Discourse - Zac Spitzer - Lucee 5.3.10.120 Stable ReleaseUpdates on some awesome deploying Lucee tricks. Lucee has adopted the configImport approach which @bdw429s pioneered and it is now supported it natively, via several methods There are some additional methods for deploying extensions / updates via the /deploy folder on startup For warming up images for fast deployment, there LUCEE_ENABLE_WARMUP env var And there are also startup listeners which can be used to programmatically configure your Lucee server using good old CFML Don't forget Lucee is open source, so anything you can do via the Lucee admin, can be done in cfml! Lastly, for running CI with Lucee, we have developed the script-runner, which can be used to run test cases in CI, it's headless, so there's no http server, but it's super quick. (Pretty much all the Lucee repos use script-runner to run their tests using Github Actions) https://dev.lucee.org/t/how-do-we-make-automating-builds-and-deployments-of-lucee-applications-rock/194/132/19/23 - Blog - Ben Nadel - Updating Permanent Elements On Page Navigation In Hotwire Turbo And Lucee CFMLIn a Hotwire Turbo application, when you add the data-turbo-permanent attribute to an element (accompanied by an id attribute), this element will be cached and then replaced into subsequent pages that contain an element with the same id. Element permanence is awesome when you want to, for example, lazy-load a Turbo-Frame once and then have it persist across pages. But, it means that updating the content of said element gets tricky. 
I wanted to explore this idea in the context of "Toast Messages" in Lucee CFML.https://www.bennadel.com/blog/4410-updating-permanent-elements-on-page-navigation-in-hotwire-turbo-and-lucee-cfml.htmCFML JobsSeveral positions available on https://www.getcfmljobs.com/Listing over 51 ColdFusion positions from 31 companies across 24 locations in 5 Countries.1 new job listed this weekColdfusion Consultant/Developer - OUTSIDE IR35 at London, UK - https://www.getcfmljobs.com/jobs/index.cfm/united-kingdom/Coldfusion-ConsultantDeveloper-OUTSIDE-IR35-at-London/11555Adobe posted of a job posting at Sees Candies for a CFML Developerhttps://seescandiescareers.mua.hrdepartment.com/hr/ats/Posting/view/9577Other Job LinksThere is a jobs channel in the CFML slack team, and in the Box team slack now tooForgeBox Module of the Weekaws-cfml aws-cfml is a CFML library for interacting with AWS APIs.It requires Lucee 4.5+ or ColdFusion 11+.It currently supports the following APIs: cognitoIdentity dynamodb ec2 ec2 auto-scaling groups elasticsearch elastictranscoder polly rekognition s3 secretsmanager sns ssm sqs translate Note: It has full support for the AWS S3 and AWS DynamoDB REST APIs. Other services are supported to varying degrees - if you are using a service and add a method you need, please consider contributing it back to this project.Soon to support the Amazon Connect service!https://forgebox.io/view/aws-cfmlVS Code Hint Tips and Tricks of the WeekSQLTools, Database management for VS Code Beautifier and formatter for SQL code Query runner, history and bookmarks Connection explorer Generator for INSERT queries Pluggable driver architecture Official Drivers: CockroachDB MariaDB Microsoft SQL Server MySQL PostgresSQL SQLLite https://vscode-sqltools.mteixeira.dev/en/home/Thank you to all of our Patreon SupportersThese individuals are personally supporting our open source initiatives to ensure the great toolings like CommandBox, ForgeBox, ColdBox,  ContentBox, TestBox and all the other boxes keep getting the continuous development they need, and funds the cloud infrastructure at our community relies on like ForgeBox for our Package Management with CommandBox.You can support us on Patreon here https://www.patreon.com/ortussolutionsDon't forget, we have Annual Memberships, pay for the year and save 10% - great for businesses. Bronze Packages and up, now get a ForgeBox Pro and CFCasts subscriptions as a perk for their Patreon Subscription. All Patreon supporters have a Profile badge on the Community Website All Patreon supporters have their own Private Forum access on the Community Website All Patreon supporters have their own Private Channel access BoxTeam Slack https://community.ortussolutions.com/Top Patreons ( SENSIBLE ) John Wilson - Synaptrix Tomorrows Guides Jordan Clark Gary Knight Mario Rodrigues Giancarlo Gomez  David Belanger  Dan Card Jeffry McGee - Sunstar Media Dean Maunder Nolan Erck  Abdul Raheen And many more PatreonsYou can see an up to date list of all sponsors on Ortus Solutions' Websitehttps://ortussolutions.com/about-us/sponsorsThanks everyone!!! ★ Support this podcast on Patreon ★

Web3 101
S1E20 | DWeb's Utopian Consensus and Business Dilemmas

Web3 101

Play Episode Listen Later Feb 10, 2023 46:48


Compared with the blockchain-based Lens, the well-known decentralized products/protocols such as Nostr, Bluesky (AT Protocol), and Mastodon (ActivityPub) sit under a larger concept and organization, DWeb, in which everyone can run their own node, become part of the network, and take part in its operation and governance, with Decentralize & Offline first at the core. DWeb is the long-term vision and blockchain is treated as one tool for getting there; the two overlap, yet are very different. In this episode we once again team up with Web3 Revolution (https://linktr.ee/w3revolution) to retell, as faithfully as possible, the story of a small group holding a utopian consensus: how, outside of blockchain, they think about putting people back into an important position in the network, and how, in pursuit of decentralization, they face compromises on efficiency and hard business problems. Host | Awaei, Twitter: @web3awaei (https://twitter.com/web3awaei) Guest | Hana, host of Web3 Revolution, Twitter: @afrawang.eth (https://twitter.com/afrazhaowang) Guest | Yisi Liu, Co-founder & CTO of Mask Network, initiator of DWeb Shanghai, Twitter: @yisiliu (https://twitter.com/TheYisiLiu) [What you will hear] 02:28 What is DWeb? Its relationship to blockchain 04:01 A small group with a utopian consensus: the origin story of DWeb 10:03 Decentralize & Offline first, the core ideas of DWeb 11:34 What is a community network, and how is it different from a LAN? 17:11 Mapeo & Althea, two products full of the DWeb spirit 25:25 JOMO! What attending DWeb Camp is really like 30:00 How to put people back in the position of network participants, rather than the product 32:36 How does DWeb make money? Donations, consulting, and daunting challenges 35:35 DWeb vs. Ethereum, two faces of open-source communities 37:35 How a very poor non-profit is interpreted differently in different regions 42:27 DWeb is a goal, while blockchain is one of the tools 45:56 DWeb spirit quotes from Brewster Kahle [Products and people mentioned] DWeb (https://getdweb.net): the Decentralized Web, and also the name of the organization DWeb Camp 2022 (https://dwebcamp.org/) DWeb Shanghai (https://www.meetup.com/dweb-shanghai/?cookie-check=8XpjsYuBPdMfLNr) GunDB (https://gun.eco/): an open-source decentralized graph database AWS S3: Amazon's most important cloud service product, an object storage service Matrix (https://matrix.org/): an end-to-end encrypted, decentralized instant messaging protocol Mesh network: a network architecture that can be dynamically extended, enabling transmission between wireless devices Internet Archive (https://archive.org/) V2EX (https://www.v2ex.com/): an online forum launched in 2006 Mapeo (https://www.digital-democracy.org/mapeo) Althea (https://www.althea.net/) Community networks: people's open network (https://peoplesopen.net) and toronto mesh (https://tomesh.net/) Joachim Lohkamp (https://twitter.com/joachimlohkamp): initiator of DWeb Tim Berners-Lee: inventor of the World Wide Web Vint Cerf: co-inventor of TCP/IP Brewster Lurton Kahle: founder of the Internet Archive Wendy Hanamura: Director of Partnerships at the Internet Archive Gavin Wood: founder of the Polkadot blockchain and originator of the Web3 concept Juan Benet: founder of IPFS Muneeb Ali: co-founder of Stacks FOMO: fear of missing out JOMO: joy of missing out sweet spot pretentious [Related reading] Decentralized Web: building a DWeb that is permissionless from the bottom layer to the front end (https://mirror.xyz/vicoindao.eth/8ikNtuZhTk1-QtBOCYKOSonaaDu9KBWVwArSwaBr_30) DWeb Camp: stepping out of the mental drain of FOMO (https://mirror.xyz/0x30bF18409211FB048b8Abf44c27052c93cF329F2/etNvCjZGCQobM9zh4cDV-5XmEi8T24qWCyrdfTxiiFo) The relationship between DWeb and Web3, and new Web3 tracks to watch (https://mirror.xyz/0x30bF18409211FB048b8Abf44c27052c93cF329F2/C7wio-Kz_x1s90NNfuDuMYLpJRufMQu4MMwsi-HLCJc) To understand Web3, you must first understand DWeb (https://www.panewslab.com/zh_hk/articledetails/toqa6u91.html) How should the "new social" of the Web3 era build its technology stack? (https://www.chaincatcher.com/article/2069709) [BGM] Mumbai — Ooyy [Post-production] Amei [Find us here] Listeners in China: Apple Podcasts | Xiaoyuzhou Overseas listeners: Apple Podcasts | Google Podcasts | Amazon Music | Spotify Twitter: @Web3_101 (https://twitter.com/Web3_101) [Guests' remarks represent only themselves; this episode does not constitute any investment advice]

Cloud Security Podcast
AWS Goat - Cloud Penetration Testing

Cloud Security Podcast

Play Episode Listen Later Jan 24, 2023 53:33


Cloud Security Podcast - This month we are talking about "Breaking the AWS Cloud" and next up in this series, we spoke to Nishant Sharma (Nishant's Linkedin), Director, Lab Platform, INE. If you have tried pentesting in AWS Cloud or want to start today with AWS Goat, then this episode with Nishant, the person behind AWS Goat, will help you understand how you can upskill and maybe even show others how to be better at pentesting AWS Cloud. Episode ShowNotes, Links and Transcript on Cloud Security Podcast: www.cloudsecuritypodcast.tv Host Twitter: Ashish Rajan (@hashishrajan) Guest Twitter: Nishant Sharma (Nishant's Linkedin) Podcast Twitter - @CloudSecPod @CloudSecureNews If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels: - Cloud Security News - Cloud Security Academy Spotify TimeStamp for Interview Questions (00:00) Introduction (03:51) snyk.io/csp (04:51) What is Cloud Pentesting? (06:19) Cloud pentesting vs Web App & Network (08:37) What is AWS Goat? (13:12) Do you need permission from AWS to do pentesting? (14:03) Pentesting an application vs pentesting AWS S3 (15:40) What is AWS Goat testing? (18:14) Cloud penetration testing tools (19:59) How useful is the metadata of a cloud instance? (22:24) AWS Pentesting and OWASP Top 10 (25:31) How to build internal training for Cloud Security? (29:43) Keep building knowledge on AWS Goat (30:33) Using CloudShell for AWS pentesting (34:09) ChatGPT for cloud pentesting (36:28) Vulnerable serverless application (39:40) Pentesting Amazon ECS (43:01) How do you protect against ECS misconfigurations? (47:38) What is the future plan for AWS Goat? (50:28) Fun Questions See you at the next episode!
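On the question at 19:59 about how useful the metadata of a cloud instance is: for anyone practicing on AWS Goat or a similar lab, the instance metadata service is usually the first thing to check after landing code execution on an EC2 host, because it can hand out the temporary credentials of the instance's IAM role. A rough sketch of an IMDSv2 query; the role name printed at the end depends entirely on the target environment:

```python
import requests

IMDS = "http://169.254.169.254"

# IMDSv2 requires fetching a session token first (which is also why enforcing
# IMDSv2 and limiting hop counts matters defensively).
token = requests.put(
    f"{IMDS}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
).text
headers = {"X-aws-ec2-metadata-token": token}

# List the IAM role attached to the instance, then pull its temporary credentials.
role = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/", headers=headers
).text
creds = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/{role}", headers=headers
).json()
print(role, creds["AccessKeyId"])  # with these, an attacker can call AWS APIs, e.g. list S3 buckets
```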

Data Engineering Podcast
An Exploration Of Tobias' Experience In Building A Data Lakehouse From Scratch

Data Engineering Podcast

Play Episode Listen Later Dec 26, 2022 71:59


Summary Five years of hosting the Data Engineering Podcast has provided Tobias Macey with a wealth of insight into the work of building and operating data systems at a variety of scales and for myriad purposes. In order to condense that acquired knowledge into a format that is useful to everyone Scott Hirleman turns the tables in this episode and asks Tobias about the tactical and strategic aspects of his experiences applying those lessons to the work of building a data platform from scratch. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode (https://www.dataengineeringpodcast.com/linode) today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don't forget to thank them for their continued support of this show! Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan's active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan (https://www.dataengineeringpodcast.com/atlan) today to learn more about how Atlan's active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos. Struggling with broken pipelines? Stale dashboards? Missing data? If this resonates with you, you're not alone. Data engineers struggling with unreliable data need look no further than Monte Carlo, the leading end-to-end Data Observability Platform! Trusted by the data teams at Fox, JetBlue, and PagerDuty, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo monitors and alerts for data issues across your data warehouses, data lakes, dbt models, Airflow jobs, and business intelligence tools, reducing time to detection and resolution from weeks to just minutes. Monte Carlo also gives you a holistic picture of data health with automatic, end-to-end lineage from ingestion to the BI layer directly out of the box. Start trusting your data with Monte Carlo today! Visit dataengineeringpodcast.com/montecarlo (http://www.dataengineeringpodcast.com/montecarlo) to learn more. Your host is Tobias Macey and today I'm being interviewed by Scott Hirleman about my work on the podcasts and my experience building a data platform Interview Introduction How did you get involved in the area of data management? 
Data platform building journey
Why are you building, who are the users/use cases
How to focus on doing what matters over cool tools
How to build a good UX
Anything surprising or did you discover anything you didn't expect at the start
How to build so it's modular and can be improved in the future
General build vs buy and vendor selection process
Obviously have a good BS detector - how can others build theirs
So many tools, where do you start - capability need, vendor suite offering, etc.
Anything surprising in doing much of this at once
How do you think about TCO in build versus buy
Any advice
Guest call out
Be brave, believe you are good enough to be on the show
Look at past episodes and don't pitch the same as what's been on recently
And vendors, be smart, work with your customers to come up with a good pitch for them as guests...
Tobias' advice and learnings from building out a data platform:
Advice: when considering a tool, start from what you are actually trying to do. Yes, everyone has tools they want to use because they are cool (or some resume-driven development). Once you have a potential tool, is the capability you want to use an unloved feature or a main part of the product? If it's a feature, will they give it the care and attention it needs?
Advice: lean heavily on open source. You can fix things yourself and better direct the community's work than just filing a ticket and hoping with a vendor.
Learning: there are likely going to be some painful pieces missing, especially around metadata, as you build out your platform.
Advice: build in a modular way and think of what is my escape hatch? Yes, you have to lock yourself in a bit but build with the possibility of a vendor or a tool going away - whether that is your choice (e.g. too expensive) or it literally disappears (anyone remember FoundationDB?).
Learning: be prepared for tools to connect with each other but the connection to not be as robust as you want. Again, be prepared to have metadata challenges especially.
Advice: build your foundation to be strong. This will limit pain as things evolve and change. You can't build a large building on a bad foundation - or at least it's a BAD idea...
Advice: spend the time to work with your data consumers to figure out what questions they want to answer. Then abstract that to build to general challenges instead of point solutions.
Learning: it's easy to put data in S3 but it can be painfully difficult to query it. There's a missing piece as to how to store it for easy querying, not just the metadata issues (see the sketch after the links below).
Advice: it's okay to pay a vendor to lessen pain. But becoming wholly reliant on them can put you in a bad spot.
Advice: look to create paved path / easy path approaches. If someone wants to follow the preset path, it's easy for them. If they want to go their own way, more power to them, but it's not the data platform team's problem if it isn't working well.
Learning: there will be places you didn't expect to bend - again, that metadata layer for Tobias - to get things done sooner. It's okay to not have the end platform built at launch; move forward and get something going.
Advice: "one of the perennial problems in technology is the bias towards speed and action without necessarily understanding the destination." Really consider the path and whether you are creating a scalable and maintainable solution instead of pushing for speed to deliver something.
Advice: consider building a buffer layer between upstream sources so if there are changes, it doesn't automatically break things downstream.
Tobias' data platform components: data lakehouse paradigm, Airbyte for data integration (chosen over Meltano), Trino/Starburst Galaxy for distributed querying, AWS S3 for the storage layer, AWS Glue for very basic metadata cataloguing, Dagster as the crucial orchestration layer, dbt Contact Info LinkedIn (https://www.linkedin.com/in/scotthirleman/) Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com)) with your story. To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers Links Data Mesh Community (https://datameshlearning.com/community/) Podcast (https://www.linkedin.com/company/80887002/admin/) OSI Model (https://en.wikipedia.org/wiki/OSI_model) Schemata (https://schemata.app/) Podcast Episode (https://www.dataengineeringpodcast.com/schemata-schema-compatibility-utility-episode-324/) Atlan (https://atlan.com/) Podcast Episode (https://www.dataengineeringpodcast.com/atlan-data-team-collaboration-episode-179/) OpenMetadata (https://open-metadata.org/) Podcast Episode (https://www.dataengineeringpodcast.com/openmetadata-universal-metadata-layer-episode-237/) Chris Riccomini (https://daappod.com/data-mesh-radio/devops-for-data-mesh-chris-riccomini/) The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
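Tying together two items above, S3 as the storage layer and AWS Glue for basic metadata cataloguing, and the learning that data is easy to put into S3 but hard to query: the missing piece is usually a catalog entry describing where the files live and what columns they contain, which engines like Trino/Starburst or Athena then read. A minimal sketch of registering such a table with boto3; the database, table, column, and bucket names are hypothetical:

```python
import boto3

glue = boto3.client("glue")

# Register a Parquet dataset that already sits in S3 so SQL engines
# (Athena, Trino/Starburst, etc.) can discover and query it.
glue.create_database(DatabaseInput={"Name": "lakehouse"})
glue.create_table(
    DatabaseName="lakehouse",
    TableInput={
        "Name": "events",
        "TableType": "EXTERNAL_TABLE",
        "StorageDescriptor": {
            "Columns": [
                {"Name": "event_id", "Type": "string"},
                {"Name": "occurred_at", "Type": "timestamp"},
            ],
            "Location": "s3://example-data-lake/events/",
            "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
            },
        },
    },
)
```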

Cyber Security Today
Cyber Security Today, Dec. 23, 2022 - A new attack vector against Exchange and more unprotected data found on AWS S3 buckets

Cyber Security Today

Play Episode Listen Later Dec 23, 2022 5:55


This episode reports on protecting Exchange Servers and Exchange Online, a report on the FIN7 ransomware gang and more bad Android apps

Segurança Legal
Episode #319 – Café Segurança Legal

Segurança Legal

Play Episode Listen Later Jul 11, 2022 47:49


In this episode: Serpro's agreement with an American company, the EDPB releases guidelines for calculating GDPR fines, Banco Safra fined 2.4 million, and an AWS S3 configuration flaw exposes 3TB of sensitive data. Continue reading
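The S3 exposure mentioned in that last item is almost always a permissions misconfiguration rather than a software flaw, and the blunt, effective mitigation is S3 Block Public Access at the bucket (or account) level. A short boto3 sketch; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings for one bucket. The same
# configuration can be applied account-wide through the S3 Control API.
s3.put_public_access_block(
    Bucket="example-sensitive-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

print(s3.get_public_access_block(Bucket="example-sensitive-bucket"))
```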

The Laravel Podcast
Spatie's Laravel-Backup, with Freek Van der Herten

The Laravel Podcast

Play Episode Listen Later Jun 10, 2022 27:31


Freek Van der Herten's Twitter - https://twitter.com/freekmurze
Freek Van der Herten's Blog - https://freek.dev
Spatie - https://spatie.be
Spatie Twitter - https://twitter.com/spatie_be?lang=en
Oh Dear - https://ohdear.app
Laravel-Backup GitHub - https://github.com/spatie/laravel-backup
Laravel-Backup Introduction - https://spatie.be/docs/laravel-backup/v8/introduction
Vapor - https://vapor.laravel.com/
AWS S3 - https://aws.amazon.com/s3/
Forge - https://forge.laravel.com/
Zend Framework - https://framework.zend.com/
DigitalOcean - https://www.digitalocean.com/
Composer - https://getcomposer.org/
Grandfather-father-son scheme - https://en.wikipedia.org/wiki/Backup_rotation_scheme#:~:text=Grandfather%2Dfather%2Dson%20backup%20is,a%20FIFO%20system%20as%20above.
DB-Dumper GitHub - https://github.com/spatie/db-dumper
DB-Snapshots GitHub - https://github.com/spatie/laravel-db-snapshots
Laravel Backup Server - https://spatie.be/products/laravel-backup-server

Changelog Master Feed
Kaizen! We are flying ✈️ (Ship It! #50)

Changelog Master Feed

Play Episode Listen Later Apr 27, 2022 67:40 Transcription Available


This is our 5th Kaizen where we talk about the next improvement to changelog.com: we are now running on fly.io and our PostgreSQL is managed. This is a migration that many were curious about, including Simmy de Klerk, the person that requested this episode. After migrating all our media files to AWS S3 (check episode 40), we thought that this part was going to be easy. Plan met reality. Pull request 407 has all the details. We want to emphasise the type of partner relationships that we seek at Changelog & why they are important to us, as well as to our listeners. Honeycomb & Fly embody the principles that we care about, and Gerhard thinks that we are currently missing a Kubernetes partner.

Ship It! DevOps, Infra, Cloud Native
Kaizen! We are flying ✈️

Ship It! DevOps, Infra, Cloud Native

Play Episode Listen Later Apr 27, 2022 67:40 Transcription Available


This is our 5th Kaizen where we talk about the next improvement to changelog.com: we are now running on fly.io and our PostgreSQL is managed. This is a migration that many were curious about, including Simmy de Klerk, the person that requested this episode. After migrating all our media files to AWS S3 (check episode 40), we thought that this part was going to be easy. Plan met reality. Pull request 407 has all the details. We want to emphasise the type of partner relationships that we seek at Changelog & why they are important to us, as well as to our listeners. Honeycomb & Fly embody the principles that we care about, and Gerhard thinks that we are currently missing a Kubernetes partner.

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News for April 26th, 2022 - Episode 145

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Apr 26, 2022 72:54


2022-04-26 Weekly News - Episode 145Watch the video version on YouTube at https://youtu.be/c7n9_RJZLZY Hosts: Gavin Pickin - Senior Developer at Ortus SolutionsDaniel Garcia - Senior Developer at Ortus SolutionsThanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-en out there. A few ways  to say thanks back to Ortus Solutions:Like and subscribe to our videos on YouTube. Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github Subscribe to our Podcast on your Podcast Apps and leave us a reviewSign up for a free or paid account on CFCasts, which is releasing new content every weekBuy Ortus's Book - 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips) Patreon SupportWe have 35 patreons providing 92% of the funding for our Modernize or Die Podcasts via our Patreon site: https://www.patreon.com/ortussolutions. News and EventsNew Into the Box Dates Announced - Almost 100% finalizedOrtus Solutions is happy to announce we have new finalized dates for Into the Box 2022 and the venue. Into the Box 2022 will be hosted in Houston Texas, Tuesday September 6th through Thursday September 8th, 2022. The conference will be at a new venue, the Houston CityPlace Marriott at Springwoods Village.Adobe semi officially announced their dates (still un-official at the time of writing this post) and they were close, back to back weeks at the end of September/October. We felt like the ColdFusion community deserves more in person conferences, ColdFusion Community members need the opportunity to speak and or attend more in person coldfusion conferences. If we left the conferences back to back with only a travel day/weekend in between, it would have been hard for many if not most coldfusion community members to attend both.By changing the dates, it might still be hard or impossible for a lot of speakers, sponsors, and community members, but now those percentages have increased, and both conferences will be more successful, and that will help the community be more successful... and at the end of the day, we all win if ColdFusion wins.Since we moved dates for ITB 2022 - We're extending the Call for Speaker Deadline - April 30, 2022Since we had to make changes to the schedule, we wanted to make sure every community member had the opportunity to submit their proposal.Into the Box will be live in Houston in September 2022.https://forms.gle/HR1vQf2T5rs8yCZo9https://intothebox.orgAdobe Announced Adobe Developer Week 2022July 18-22, 2022Online - Virtual - FreeThe Adobe ColdFusion Developer Week is back - bigger and better than ever! This year, our experts are gearing up to host a series of webinars on all things ColdFusion. 
This is your chance to learn with them, get your questions answered, and build cloud-native applications with ease. Note: Speakers listed are currently 2021 speakers - check back for updates. https://adobe-coldfusion-devweek-2022.attendease.com/registration/form Lucee 5.3.9.131-Snapshot Installers released - Stable release coming today! So we solved the last blocker for the 5.3.9 release, stable release tomorrow! Here are the preview installers; they bundle Apache Tomcat/9.0.62, Java 11.0.15 (Eclipse Adoptium) 64bit, and BonCode 1.0.42. Notes: Java 17 is still not fully working, but Lucee will start instead of crashing on startup. Users with M1 Macs should now be able to use a native ARM JVM. https://dev.lucee.org/t/preview-5-3-9-131-snapshot-installers/10012 New Beta for the S3 Lucee Extension 2.0.0.71 (awslib) We had been using the older jets3t library, but it's no longer maintained and was causing a range of minor problems, which led us to decide to switch over to the AWS S3 Java library. Those problems being: large multipart uploads failing sometimes, and occasional OSGI issues with the jets3t properties file. Basically, as an end user, there is no functional difference between the 0.9.154 and 2.0.0.71 versions; in our testing the new version is a bit faster, especially with file deletion. https://dev.lucee.org/t/s3-extension-2-0-0-71-beta-awslib/10014 CFBreak is Back A once-weekly email newsletter for the ColdFusion / CFML community. Hi, this is Pete Freitag, you're receiving this email because you signed up for my CFML / ColdFusion monthly newsletter CFML News here https://tinyletter.com/cfml a few years ago. I haven't posted to the newsletter since 2020, so I decided it is time for a refresh, and a rebrand of the newsletter. https://www.cfbreak.com/ CFWheels has joined Open Source Collective CFWheels has joined the Open Source Collective allowing us to raise, manage, and spend money transparently. https://cfwheels.org/blog/cfwheels-joins-open-source-collective/ Hot deal on Adobe ColdFusion from Fusion Reactor - Pricing good until April 30th Adobe ColdFusion Hot Sale. Upgrades to Adobe ColdFusion are now available at an exclusive rate. Upgrade to ColdFusion 21 if you have CF9, 10, 11, or 2016 and get the following deal: 25% discount compared to the full price of CF21. This offer is only available to FusionReactor customers for STD and ENT editions of ColdFusion. If you're not already a customer, then by adding FusionReactor in, you still have a significant saving. FusionReactor prices start from $19 per month, see our APM pricing page. https://www.fusion-reactor.com/blog/news/coldfusion-hot-sale/ ICYMI - Mid-Michigan CFUG - John Farrar is presenting on 13 ways to modernize with Vue 3 4/19/2022 - 7 pm eastern time. Learn everything that is new and how to transition to Vue 3. Meeting URL: https://bit.ly/3rwOxvq Recording Available: https://www.youtube.com/watch?v=V6nMoMO5o1o Online ColdFusion Meetup - "Updating the Java underlying ColdFusion", with Charlie Arehart Thursday, April 28, 2022, 9:00 AM to 10:00 AM PDT With Java updates happening about quarterly (and one just last week), it's important that ColdFusion administrators and/or developers keep up to date on the Java version which underlies their CF (or Lucee) deployments.
While the simplest question may seem to be "how do I do such an update, effectively" (and it really can be quite simple), there's a good bit more to updating the Java (aka jvm, jdk, jre) which underlies your CFML engine.In this session, veteran troubleshooter Charlie Arehart will share his experience helping people deal with this topic for many years, including:Considering, planning the jvm update (what jvm do you have, what can you update to, why should you?)Performing the jvm update (where to get it, how to install it, how to configure CF to use it)Avoiding various potential gotchas when updating the JVMHow to be made aware of new JVM versionsWhether you use CF or Lucee, deployed traditionally or via Commandbox (or even containers), most of the discussion will apply to you.https://www.meetup.com/coldfusionmeetup/events/285508327/?response=3Ortus Webinar - April - cbSecurity: Passwords, Tokens, and JWTs with Eric PetersonApril 29th 202211:00 AM Central Time (US and Canada)Learn how to integrate cbSecurity into your application whether you are using passwords, API tokens, JWTs, or a combination of all three!More Webinars: https://www.ortussolutions.com/events/webinars Hawaii ColdFusion Meetup Group - Using ColdFusion ORMs with Nick KwiatkowskiFriday, April 29, 20224:00 PM to 5:00 PM PDTThe ColdFusion language introduced the concept of ORM (Object Relation Mappings) to allow developers to be able to do database work without having to write database-dependent SQL.Nick Kwiatkowski is an adjunct professor at Michigan State University, a member of the Mid-Michigan CFUG, and Apache Foundation Member. His day job also includes managing the telecommunications platforms at MSU as well as managing a variety of applications on campus. He has been a ColdFusion developer for nearly 25 years and an instructor for 15.https://www.meetup.com/hawaii-coldfusion-meetup-group/events/285109975/ Online ColdFusion Meetup - “Code Reuse in ColdFusion - Is Spaghetti Code still Spaghetti if it is DRY?” with Gavin PickinThursday, May 12 20229:00 AM to 10:00 AM PDTFind out the difference between DRY code and WET code, and what one is better, and more importantly, WHY.We write code once, but we read it over and over again, maintaining our code is 90% of the job... code reuse is our friend. You are already Re-using code, even if you didn't know you were.We'll learn about the different types of Code Reuse in ColdFusion, and the pros and cons of each.www.meetup.com/coldfusionmeetup/events/285524970/ Adobe WorkshopsJoin the Adobe ColdFusion Workshop to learn how you and your agency can leverage ColdFusion to create amazing web content. This one-day training will cover all facets of Adobe ColdFusion that developers need to build applications that can run across multiple cloud providers or on-premiseICYMI - THURSDAY, APRIL 21, 202210:00 AM PDTAdobe ColdFusion TruthsMark Takatahttps://adobe-coldfusion-truths.meetus.adobeevents.com/TODAY - TUESDAY, APRIL 26, 20229:00 AM CETAdobe ColdFusion WorkshopDamien Bruyndonckx (Brew-en-dohnx) https://adobe-workshop-coldfusion.meetus.adobeevents.com/FREE :)Full list - https://meetus.adobeevents.com/coldfusion/ CFCasts Content Updateshttps://www.cfcasts.comJust ReleasedGavin Pickin - Publish Your First ForgeBox PackageMinimum Requirements for a Package https://www.cfcasts.com/series/publish-your-first-forgebox-package/videos/minimum-requirements-for-a-package What happens if your slug for your package isn't unique?  
https://www.cfcasts.com/series/publish-your-first-forgebox-package/videos/what-happens-if-your-slug-for-your-package-isn't-unique Coming SoonMore… Gavin Pickin - Publish Your First ForgeBox PackageConferences and TrainingDockerConMay 10, 2022Free Online Virtual ConferenceDockerCon will be a free, immersive online experience complete with Docker product demos , breakout sessions, deep technical sessions from Docker and our partners, Docker experts, Docker Captains, our community and luminaries from across the industry and much more. Don't miss your chance to gather and connect with colleagues from around the world at the largest developer conference of the year. Sign up to pre-register for DockerCon 2022!https://www.docker.com/dockercon/ US VueJS ConfFORT LAUDERDALE, FL • JUNE 8-10, 2022Beach. Code. Vue.Workshop day: June 8Main Conference: June 9-10https://us.vuejs.org/Adobe Developer Week 2022July 18-22, 2022Online - Virtual - FreeThe Adobe ColdFusion Developer Week is back - bigger and better than ever! This year, our experts are gearing up to host a series of webinars on all things ColdFusion. This is your chance to learn with them, get your questions answered, and build cloud-native applications with ease.Note: Speakers listed are 2021 speakers currently - check back for updateshttps://adobe-coldfusion-devweek-2022.attendease.com/registration/form THAT ConferenceHowdy. We're a full-stack, tech-obsessed community of fun, code-loving humans who share and learn together.We geek-out in Texas and Wisconsin once a year but we host digital events all the time.For a limited time all monthly THAT Online events are free and do not require a ticket to participate.Read more at: https://that.us/events/thatus/2022-5/ on THAT.There have webinars too https://that.us/activities/WISCONSIN DELLS, WI / JULY 25TH - 28TH, 2022A four-day summer camp for developers passionate about learning all things mobile, web, cloud, and technology.https://that.us/events/wi/2022/ Our very own Daniel Garcia is speaking there https://that.us/activities/sb6dRP8ZNIBIKngxswIt CF SummitIn person at Las Vegas, NV in October 2022!Official-”ish” dates:Oct 3rd & 4th - CFSummit ConferenceOct 5th - Adobe Certified Professional: Adobe ColdFusion Certification Classes & Testshttps://twitter.com/MarkTakata/status/1511210472518787073VueJS Forge June 29-30thOrganized by Vue School_The largest hands-on Vue.js EventTeam up with 1000s of fellow Vue.js devs from around the globe to build a real-world application in just 2 days in this FREE hackathon-style event.Make connections. Build together. 
Learn together.Sign up as an Individual or signup as a company (by booking a call)https://vuejsforge.com/Into The Box 2022Solid Dates - September 6, 7 and 8, 2022One day workshops before the two day conference!Early bird pricing available until April 30, 2022Call for Speakers - Extended until April 30, 2022https://forms.gle/HR1vQf2T5rs8yCZo9Conference Website:https://intothebox.orgInto the Box Latam 2022Tentative dates - Dec 1-2CFCampStill waiting as well.More conferencesNeed more conferences, this site has a huge list of conferences for almost any language/community.https://confs.tech/Blogs, Tweets, and Videos of the WeekLooking for more content, check out the other ColdFusion related Podcasts​Working Code Podcast https://workingcode.dev/ ​CF Alive https://teratech.com/podcast/ April 25, 2022 - Blog - Mark Takata - Adobe - Turning on NULL support in ColdFusion 2018+While playing around with booleans, I ended up running into some fun stuff(tm) having to do with NULL. As you might be aware, as of Adobe ColdFusion 2018, the framework has supported NULL values, but what you might not be aware of is that you can turn them on and off either globally (via the Administrator) or on a per-application level.https://coldfusion.adobe.com/2022/04/turning-on-null-support-in-coldfusion-2018/ April 26, 2022 - Blog - Ben Nadel - Considering The Separation Of Concerns When Invoking A Remote API In ColdFusionWhen dealing with a local database in ColdFusion, the levels of abstraction and the separations of concern feel somewhat second nature. Yes, I've wrestled with some irrational guilt over returning Query objects from my DAL (Data Access Layer); but, on balance, I love the query object's simplicity and power; and, returning it from the DAL makes life easy. Lately, however, I've had to start consuming some remote APIs (microservices). And, when it comes to making HTTP calls, the separation of concerns is less clear in my head - it seems that so much more can go wrong when consuming a remote API.https://www.bennadel.com/blog/4254-considering-the-separation-of-concerns-when-invoking-a-remote-api-in-coldfusion.htmBen is essentially setting up a gateway to abstract getting the data so he can standardize what the service is receiving, so it shouldn't matter where the data is coming from.April 22, 2022 - Blog - Ben Nadel - ArraySlice() Has An Exponential Performance Overhead In Lucee CFML 5.3.8.201The other day, I tweeted about Lucee CFML struggling with a massive array. I had created a data-processing algorithm that was taking an array of user-generated data and splitting it up into chunks of 100 so that I could then gather some aggregates on that data in the database. Everything was running fine until I hit a user that had 2.1 million entries in this array. This was an unexpected volume of data, and it crushed the CFML server. 2.1M is a lot of data to my "human brain"; but, it's not a lot of data for a computer. As such, I started to investigate the root performance bottleneck; and, I discovered that the arraySlice() function in Lucee CFML 5.3.8.201 has a performance overhead that appears to increase exponentially with the size of the array.https://www.bennadel.com/blog/4253-arrayslice-has-an-exponential-performance-overhead-in-lucee-cfml-5-3-8-201.htm @bdw429s just left a comment on the blog-post about .subList() as well. It looks crazy-fast! 
This seems like the fastest possible implementation.April 22, 2022 - Blog - Charlie Arehart - Updated - Solving problems calling out of CF via https, by updating JVMIf you're getting errors in calling out to https urls from CF, especially if it was working and now is not, you may NOT need to import a certificate, nor modify any jvm args. You may simply need to update the JVM that CF uses, as discussed in this post.https://coldfusion.adobe.com/2019/06/error-calling-cf-via-https-solved-updating-jvm/ 4/22/2022- Tweet - Brad Wood - Ortus Solutions - It sucks that CF engines still don't allow for CFCs to extend Java classesIt sucks that CF engines still don't allow for CFCs to extend Java classes.  That prevents me from integrating with Java libraries like this one who don't allow interface implementations, but require abstract base class extension.  https://github.com/bkiers/Liqp/issues/226 4/22/2022 - Tweet - Brad Wood - Ortus Solutions - native Java threading can't access application/session/request scopesOne of the missing pieces for CF devs using native Java threading is the inability of your code to access your application/session/request scopes.  ColdBox works around this but we really need out of the box CF engine support! https://luceeserver.atlassian.net/browse/LDEV-3960 https://twitter.com/bdw429s/status/1517584339235745795https://twitter.com/bdw429s4/19/2022 - Blog - Charlie Arehart - New updates released for Java 8, 11, 17, and 18 as of Apr 2022New JVM updates have been released today (Apr 19, 2022) for the current long-term support (LTS) releases of Oracle Java, 8, 11, and 17, as well as the new interim update 18. (Note that prior to Java 9, releases of Java were known technically as 1.x, to 8 is referred to in resources below as 1.8.)The new updates are 1.8.0_331, (aka 8u331), 11.0.15, 17.0.3, and 18.0.1 respectively). And as is generally the case with these Java updates, most of them have the same changes and fixes.For more on them, including changes as well as the security and bug fixes they each contain, see the Oracle resources I list below, as well as some additional info I offer for if you may be skipping to this from a JVM update from before Apr 2021. 
I also offer info for Adobe ColdFusion users on where to find the updated Java versions, what JVM versions Adobe CF supports, and more.
https://www.carehart.org/blog/client/index.cfm/2022/4/19/java_updates_Apr_2022

CFML Jobs
Several positions available on https://www.getcfmljobs.com/
Listing over 75 ColdFusion positions from 43 companies across 36 locations in 5 countries.
2 new jobs listed:
Full-Time - Mid/Senior CFML Developer at Cleveland, OH - United States - Apr 22
https://www.getcfmljobs.com/viewjob.cfm?jobid=11462
Full-Time - Senior ColdFusion/Lucee Engineer (Remote) at Remote - United States - Apr 19
https://www.getcfmljobs.com/viewjob.cfm?jobid=11461

Other Job Links
Ortus Solutions
https://www.ortussolutions.com/about-us/careers
Consortium Inc
https://www.dice.com/jobs/detail/-/10183574/7322396
There is a jobs channel in the Box team Slack now too.

ForgeBox Module of the Week
CBMailServices PreMail Filter
This is a tool that fires on the PreMail interception point, allowing you to filter emails being sent from your application using CBMailServices.
It supports multiple environments, so you can turn on the filter for just one environment or several, and you can choose to override the global settings with settings for a single environment - whether that is allowed email addresses or required email addresses. (See the configuration sketch after these show notes.)
https://www.forgebox.io/view/cbmailservices-premail-filter

VS Code Hint, Tips, and Tricks of the Week
Depot Data Editor by Afterschool Studio
Structured data editor for VS Code - edit JSON data directly inside of code with a spreadsheet-like interface. Can be used to replace the need for .csv or XML files.
Extension: https://marketplace.visualstudio.com/items?itemName=afterschool.depot
Bonus VS Code Livestream Recording - JSON Data in VS Code with Depot Extension
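For the ForgeBox module of the week above, here is a rough sketch of what per-environment configuration could look like in a ColdBox config/ColdBox.cfc. The module settings key and the setting names (enabled, allowedEmails, requiredEmails) are guesses for illustration only, as is the staging hostname pattern - check the module's ForgeBox page and readme for the real keys and override behavior.

component {

	function configure() {
		// Hypothetical environment detection: the "staging" method below runs
		// when the hostname matches this (made-up) pattern.
		environments = {
			"staging" : "stage.example.com"
		};

		// Hypothetical global defaults for the PreMail filter module.
		moduleSettings = {
			"cbmailservices-premail-filter" : {
				"enabled"        : false,
				"allowedEmails"  : [],
				"requiredEmails" : []
			}
		};
	}

	// ColdBox calls the method matching the detected environment after configure(),
	// so one environment can turn the filter on and override the global settings.
	function staging() {
		moduleSettings[ "cbmailservices-premail-filter" ] = {
			"enabled"       : true,
			"allowedEmails" : [ "qa@example.com" ]
		};
	}

}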

Changelog Master Feed
Kaizen! New beginnings (Ship It! #40)

Changelog Master Feed

Play Episode Listen Later Feb 16, 2022 76:17 Transcription Available


We finally did it! All our static files are served from AWS S3. This is the most significant improvement to our app's architecture in years, and now we have unlocked the next level: multi-cloud. We talk about that at length, and how it fits in our 2022 setup. The TL;DR is that changelog.com will fly, both literally and figuratively. We also address Steve's comment that he left on our previous Kaizen episode - thanks Steve! Towards the end, we talk about Gerhard's new beginnings at Dagger, where he gets to work with a world-class team and build the next-gen CI/CD. That's right, Gerhard is now walking the Ship It talk all day, every day. If you want to watch him code live, you can do so every Thursday, in our weekly community session. Kaizen!

Ship It! DevOps, Infra, Cloud Native
Kaizen! New beginnings

Ship It! DevOps, Infra, Cloud Native

Play Episode Listen Later Feb 16, 2022 76:17 Transcription Available


We finally did it! All our static files are served from AWS S3. This is the most significant improvement to our app's architecture in years, and now we have unlocked the next level: multi-cloud. We talk about that at length, and how it fits in our 2022 setup. The TL;DR is that changelog.com will fly, both literally and figuratively. We also address Steve's comment that he left on our previous Kaizen episode - thanks Steve! Towards the end, we talk about Gerhard's new beginnings at Dagger, where he gets to work with a world-class team and build the next-gen CI/CD. That's right, Gerhard is now walking the Ship It talk all day, every day. If you want to watch him code live, you can do so every Thursday, in our weekly community session. Kaizen!

Working Code
030: Carol's Consult Catch-Up Conversation

Working Code

Play Episode Listen Later Jul 7, 2021 48:34


Ten weeks ago, in Episode 20, Carol described a problem at work in which her customers were using Support Tickets as a means to look up information in lieu of logging into the customer dashboard. This email-based workflow has been putting a large burden on the Support staff, and Carol wanted to brainstorm ways in which she could improve the overall situation and the efficiency of her team. Today, we circle back with Carol to see how it's going. Which is to say, to see just how hard Carol is crushing it! It's amazing to see how much Carol has accomplished in just a few months. Topics include natural language processing, AWS SAM, AWS Lambda, AWS S3, AWS SNS, AWS EventBridge, AWS CloudWatch, AWS Parameter Store, Sumo Logic, and much more! It's kind of mind-boggling to see it all coming together so quickly. Notes & Links: AWS Lambda, AWS S3, AWS SNS, AWS EventBridge, AWS CloudWatch, AWS Parameter Store, AWS SAM, Sumo Logic, Jest, Kent C. Dodds: Testing JavaScript, Netlify, JWT. Follow the show! Our website is workingcode.dev and we're @WorkingCodePod on Twitter and Instagram. Or, leave us a message at (512) 253-2633 (that's 512-253-CODE). New episodes drop weekly on Wednesday. And, if you're feeling the love, support us on Patreon.

44BITS 팟캐스트 - 클라우드, 개발, 가젯
Developer salary increases, AWS S3's 15th anniversary, Go developer survey, git plan

44BITS 팟캐스트 - 클라우드, 개발, 가젯

Play Episode Listen Later Mar 20, 2021 58:09


In the 114th log of the 44bits podcast, we talked about developer salary increases, the 15th anniversary of AWS S3, the Go developer survey, and git plan. git plan: git-plan - GitHub. AWS S3's 15th anniversary: the birth of Amazon S3 1…

iSmart Podcast
The Real Richard Hendricks from the HBO sitcom Silicon Valley with Al Wegener, Founder and CEO at Anacode

iSmart Podcast

Play Episode Listen Later Sep 11, 2020 48:43


Anacode reduces AWS S3 storage costs and speeds up storage via massively parallel lossless compression. The ONLY storage service on AWS using lossless compression to make YOUR storage reads 2x faster for a 35% lower monthly price per TB. Available on AWS via monthly subscription. Founded Samplify Systems, a venture-backed high-speed compression start-up; named on 50+ granted Samplify patents. Developed the real-time compression technology, wrote the compress/decompress software in C, managed the development of the FPGA hardware prototype, raised a $300k seed round from Charles River Ventures, and attracted a world-class engineering, sales, and marketing team to bring the Samplify vision to market. Raised $23M from Charles River Ventures, Formative Ventures, IDT, and Schlumberger. Visited 50+ customers in the US, Europe, and Asia, selling the benefits of real-time compression for medical imaging (CT, ultrasound, MRI), seismic (wireline, RTM), wireless (CPRI, LTE, remote radio heads, WiMax), and data converter (A/D, D/A) applications. Active in recruiting and hiring a talented staff of 18+ employees. Quarterly Technical Advisory Board (Stanford, Xilinx, and Graychip/TI members). Support this podcast

The Data Life Podcast
23: Let's Talk AWS SageMaker for ML Model Deployment

The Data Life Podcast

Play Episode Listen Later Jun 17, 2020 19:46


In this episode, we talk about Amazon SageMaker and how it can help with ML model development, including model building, training, and deployment. We cover three advantages in each of these three areas, with points such as: 1. Hosting ML endpoints to deploy models to thousands or millions of users. 2. Saving costs for model training using SageMaker. 3. Using CloudWatch logs with SageMaker endpoints to debug ML models. 4. Using preconfigured environments or models provided by AWS. 5. Automatically saving model artifacts to AWS S3 as you train in SageMaker. 6. Using version control for SageMaker notebooks with GitHub. And more… Please rate, subscribe, and share this episode with anyone who might find SageMaker useful in their work. I feel that SageMaker is a great tool and want to share it with data scientists. For comments, feedback, or questions, or if you think I have missed something in the episode, please reach out to me on LinkedIn: https://www.linkedin.com/in/sanketgupta107/ --- Send in a voice message: https://anchor.fm/the-data-life-podcast/message Support this podcast: https://anchor.fm/the-data-life-podcast/support