Podcasts about SFTP

  • 83 podcasts
  • 128 episodes
  • 41m avg duration
  • 1 monthly new episode
  • Latest: May 20, 2025

POPULARITY (chart spanning 2017-2024)


Latest podcast episodes about SFTP

Strength for Today's Pastor
174- Correcting Our Thinking on Forgiveness- with Dr. Bruce Hebel

May 20, 2025 · 54:43 · Transcription Available


Comments? Questions? Send us a message!

Welcome to Podcast 174. Today I'm with Dr. Bruce Hebel, author of the incredible, life-altering book Forgiving Forward: Experience the Freedom of the Gospel through the Power of Forgiveness.

All close followers of Jesus Christ know that we should forgive others, but there are things about forgiveness that are about as clear as mud to us. In this episode of SFTP, we talked with Bruce about the common misconceptions that are out there about what forgiveness really means, and how to actually do it.

Among the misconceptions are the following:
- Forgiveness is not a process (it's a decision).
- To forgive someone, there's no need to go to that person and tell him/her.
- Forgiveness is transacted with the Father. See Mark 11:25-26.
- Forgiveness is not the same thing as reconciliation.
- Forgiveness is not salvific; but forgiveness of the wounds of others determines how the Father acts toward us in this life (Matthew 6:12, 14-15).

And more... Feedback and/or questions welcomed!

For Poimen Ministries, its staff, ministries, and focus, go to poimenministries.com. To contact Poimen Ministries, email us at strongerpastors@gmail.com. May the Lord revive His work in the midst of these years!

Cyber Bites
Cyber Bites - 2nd May 2025

May 1, 2025 · 13:45


We hit a milestone today, as this is our 50th podcast episode! A big thank you to you, our listeners, for your continued support!

* Kali Linux Users Face Update Issues After Repository Signing Key Loss
* CISOs Advised to Secure Personal Protections Against Scapegoating and Whistleblowing Risks
* WhatsApp Launches Advanced Chat Privacy to Safeguard Sensitive Conversations
* Samsung Confirms Security Vulnerability in Galaxy Devices That Could Expose Passwords
* Former Disney Menu Manager Sentenced to 3 Years for Malicious System Attacks

Kali Linux Users Face Update Issues After Repository Signing Key Loss
https://www.kali.org/blog/new-kali-archive-signing-key/

Offensive Security has announced that Kali Linux users will need to manually install a new repository signing key following the loss of the previous key. Without this update, users will experience system update failures.

The company recently lost access to the old repository signing key (ED444FF07D8D0BF6) and had to create a new one (ED65462EC8D5E4C5), which has been signed by Kali Linux developers using signatures on the Ubuntu OpenPGP key server. OffSec emphasized that the key wasn't compromised, so the old one remains in the keyring.

Users attempting to update their systems with the old key will encounter error messages stating "Missing key 827C8569F2518CC677FECA1AED65462EC8D5E4C5, which is needed to verify signature."

To address this issue, the Kali Linux repository was frozen on February 18th. "In the coming day(s), pretty much every Kali system out there will fail to update," OffSec warned. "This is not only you, this is for everyone, and this is entirely our fault."

To avoid update failures, users are advised to manually download and install the new repository signing key by running the command:

sudo wget https://archive.kali.org/archive-keyring.gpg -O /usr/share/keyrings/kali-archive-keyring.gpg

For users unwilling to manually update the keyring, OffSec recommends reinstalling Kali using images that include the updated keyring.

This isn't the first time Kali Linux users have faced such issues. A similar incident occurred in February 2018, when developers allowed the GPG key to expire, also requiring manual updates from users.

CISOs Advised to Secure Personal Protections Against Scapegoating and Whistleblowing Risks
https://path.rsaconference.com/flow/rsac/us25/FullAgenda/page/catalog/session/1727392520218001o5wv
https://www.theregister.com/2025/04/28/ciso_rsa_whistleblowing/

Chief information security officers should negotiate personal liability insurance and golden parachute agreements when starting new roles to protect themselves in case of organizational conflicts, according to a panel of security experts at the RSA Conference.

During a session on CISO whistleblowing, experienced security leaders shared cautionary tales and strategic advice for navigating the increasingly precarious position that has earned the role the nickname "chief scapegoat officer" in some organizations.

Dd Budiharto, former CISO at Marathon Oil and Phillips 66, revealed she was once fired for refusing to approve fraudulent invoices for work that wasn't delivered. "I'm proud to say I've been fired for not being willing to compromise my integrity," she stated. Despite losing her position, Budiharto chose not to pursue legal action against her former employer, a decision the panel unanimously supported as wise to avoid industry blacklisting.

Andrew Wilder, CISO of veterinary network Vetcor, emphasized that security executives should insist on two critical insurance policies before accepting new positions: directors and officers insurance (D&O) and personal legal liability insurance (PLLI). "You want to have personal legal liability insurance that covers you, not while you are an officer of an organization, but after you leave the organization as well," Wilder advised.

Wilder referenced the case of former Uber CISO Joe Sullivan, noting that Sullivan's Uber-provided PLLI covered PR costs during his legal proceedings following a data breach cover-up. He also stressed the importance of negotiating severance packages to ensure whistleblowing decisions can be made on ethical rather than financial grounds.

The panelists agreed that thorough documentation is essential for CISOs. Herman Brown, CIO for San Francisco's District Attorney's Office, recommended documenting all conversations and decisions. "Email is a great form of documentation that doesn't just stand for 'electronic mail,' it also stands for 'evidential mail,'" he noted.

Security leaders were warned to be particularly careful about going to the press with complaints, which the panel suggested could result in even worse professional consequences than legal action. Similarly, Budiharto cautioned against trusting internal human resources departments or ethics panels, reminding attendees that HR ultimately works to protect the company, not individual employees.

The panel underscored that proper governance, documentation, and clear communication with leadership about shared security responsibilities are essential practices for CISOs navigating the complex political and ethical challenges of their role.

WhatsApp Launches Advanced Chat Privacy to Safeguard Sensitive Conversations
https://blog.whatsapp.com/introducing-advanced-chat-privacy

WhatsApp has rolled out a new "Advanced Chat Privacy" feature designed to provide users with enhanced protection for sensitive information shared in both private and group conversations.

The new privacy option, accessible by tapping on a chat name, aims to prevent the unauthorized extraction of media and conversation content. "Today we're introducing our latest layer for privacy called 'Advanced Chat Privacy.' This new setting available in both chats and groups helps prevent others from taking content outside of WhatsApp for when you may want extra privacy," WhatsApp announced in its release.

When enabled, the feature blocks other users from exporting chat histories, automatically downloading media to their devices, and using messages for AI features. According to WhatsApp, this ensures "everyone in the chat has greater confidence that no one can take what is being said outside the chat."

The company noted that this initial version is now available to all users who have updated to the latest version of the app, with plans to strengthen the feature with additional protections in the future. However, WhatsApp acknowledges that certain vulnerabilities remain, such as the possibility of someone photographing a conversation screen even when screenshots are blocked.

This latest privacy enhancement continues WhatsApp's long-standing commitment to user security, which began nearly seven years ago with the introduction of end-to-end encryption. The platform has steadily expanded its privacy capabilities since then, implementing end-to-end encrypted chat backups for iOS and Android in October 2021, followed by default disappearing messages for new chats in December of the same year.

More recent security updates include chat locking with password or fingerprint protection, a Secret Code feature to hide locked chats, and location hiding during calls by routing connections through WhatsApp's servers. Since October 2024, the platform has also encrypted contact databases for privacy-preserving synchronization.

Meta reported in early 2020 that WhatsApp serves more than two billion users across over 180 countries, making these privacy enhancements significant for a substantial portion of the global messaging community.

Samsung Confirms Security Vulnerability in Galaxy Devices That Could Expose Passwords
https://us.community.samsung.com/t5/Suggestions/Implement-Auto-Delete-Clipboard-History-to-Prevent-Sensitive/m-p/3200743

Samsung has acknowledged a significant security flaw in its Galaxy devices that potentially exposes user passwords and other sensitive information stored in the clipboard.

The issue was brought to light by a user identified as "OicitrapDraz," who posted concerns on Samsung's community forum on April 14. "I copy passwords from my password manager all the time," the user wrote. "How is it that Samsung's clipboard saves everything in plain text with no expiration? That's a huge security issue."

In response, Samsung confirmed the vulnerability, stating: "We understand your concerns regarding clipboard behavior and how it may affect sensitive content. Clipboard history in One UI is managed at the system level." The company added that the user's "suggestion for more control over clipboard data—such as auto-clear or exclusion options—has been noted and shared with the appropriate team for consideration."

One UI is Samsung's customized version of Android that runs on Galaxy smartphones and tablets. The security flaw means that sensitive information copied to the clipboard remains accessible in plain text without any automatic expiration or encryption.

As a temporary solution, Samsung recommended that users "manually clear clipboard history when needed and use secure input methods for sensitive information." This stopgap measure puts the burden of security on users rather than providing a system-level fix.

Security experts are particularly concerned now that this vulnerability has been publicly acknowledged, as it creates a potential "clipboard wormhole" that attackers could exploit to access passwords and other confidential information on affected devices. Users of Samsung Galaxy devices are advised to exercise extreme caution when copying sensitive information until a more comprehensive solution is implemented.

Former Disney Menu Manager Sentenced to 3 Years for Malicious System Attacks
https://www.theregister.com/2025/04/29/former_disney_employee_jailed/

A former Disney employee has received a 36-month prison sentence and been ordered to pay nearly $688,000 in fines after pleading guilty to sabotaging the entertainment giant's restaurant menu systems following his termination.

Michael Scheuer, a Winter Garden, Florida resident who previously served as Disney's menu production manager, was arrested in October and charged with violating the Computer Fraud and Abuse Act (CFAA) and committing aggravated identity theft. He accepted a plea agreement in January, with sentencing finalized last week in federal court in Orlando.

According to court documents, Scheuer's June 13, 2024 termination from Disney for misconduct was described as "contentious and not amicable." In July, he retaliated by gaining unauthorized access to Disney's Menu Creator application, hosted by a third-party vendor in Minnesota, and implementing various destructive changes.

The attacks included replacing Disney's themed fonts with Wingdings, rendering menus unreadable, and altering menu images and background files to display as blank white pages. These changes propagated throughout the database, making the Menu Creator system inoperable for one to two weeks. The damage was so severe that Disney has since abandoned the application entirely.

Particularly concerning were Scheuer's alterations to allergen information, falsely indicating certain menu items were safe for people with specific allergies—changes that "could have had fatal consequences depending on the type and severity of a customer's allergy," according to the plea agreement. He also modified wine region labels to reference locations of mass shootings, added swastika graphics, and altered QR codes to direct customers to a website promoting a boycott of Israel.

Scheuer employed multiple methods to conduct his attacks, including using an administrative account via a Mullvad VPN, exploiting a URL-based contractor access mechanism, and targeting the SFTP servers that stored menu files. He also conducted denial-of-service attacks that made over 100,000 incorrect login attempts, locking out fourteen Disney employees from their enterprise accounts.

The FBI executed a search warrant at Scheuer's residence on September 23, 2024, at which point the attacks immediately ceased. Agents discovered virtual machines used for the attacks and a "doxxing file" containing personal information on five Disney employees and a family member of one worker.

Following his prison term, Scheuer will undergo three years of supervised release with various conditions, including a prohibition on contacting Disney or any of the individual victims.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit edwinkwan.substack.com

AWS Morning Brief
The Art of Amazon Q Developer

Apr 28, 2025 · 4:31


AWS Morning Brief for the week of April 28th, with Corey Quinn.

Links:
- Amazon CloudWatch agent now supports Red Hat OpenShift Service on AWS (ROSA)
- Amazon Cognito now supports refresh token rotation
- Amazon Q Developer releases state-of-the-art agent for feature development
- AWS Account Management now supports IAM-based account name updates
- AWS CodeBuild adds support for specifying EC2 instance type and configurable storage size
- AWS Console Mobile Application adds support for Amazon Lightsail
- AWS STS global endpoint now serves your requests locally in Regions enabled by default
- AWS Transfer Family introduces Terraform module for deploying SFTP server endpoints
- How Smartsheet reduced latency and optimized costs in their serverless architecture
- In the works – New Availability Zone in Maryland for US East (Northern Virginia) Region
- CVE-2025-3857 – Infinite loop condition in Amazon.IonDotnet
- I annotated Amazon CEO Andy Jassy's 2024 Letter to Shareholders

Strength for Today's Pastor
169 Pastoring in Sin City (Las Vegas)

Jan 28, 2025 · 46:48 · Transcription Available


Comments? Questions? Send us a message!

Episode 169 of SFTP tells the story of what it's like to minister in Las Vegas, Nevada, aka "Sin City."

It CAN be done! Where sin abounds, grace much more abounds. We all know that, but it's for real in the Las Vegas valley.

Hear from Pastor John Knapp about some of the ways the Holy Spirit has led their church, Calvary Chapel Green Valley, to be fruitful and growing in Christ.

The transcript is also part of this episode...

For Poimen Ministries, its staff, ministries, and focus, go to poimenministries.com. To contact Poimen Ministries, email us at strongerpastors@gmail.com. May the Lord revive His work in the midst of these years!

TubbTalk - The Podcast for IT Consultants
[170] How to Master Email and File Migrations for MSPs

Dec 8, 2024 · 46:43


In this interview, Richard Tubb speaks to Colin Hogg, the vice president of sales for Couchdrop. They have an amazing migration solution for MSPs called Movebot, which makes it much easier to do data migrations for your clients.

Colin explains the Movebot migration solution, how the organisation manages rapid technology developments, and how they quickly respond to and fix issues. He also talks about how they handle the most common migrations for MSPs, and their real-time monitoring.

Richard asks Colin how Movebot supports their MSP clients when things go wrong, and the advice he'd give on how to do email migrations and SharePoint migrations more effectively. Colin shares how they keep up to date with all the different platforms and how they work, as well as which products and platforms Movebot currently supports. He also touches on Couchdrop as a company, and gives his advice to MSPs who want to DIY their data migrations.

Mentioned in This Episode:
- Podcast interview with Michael Lawson
- Video walkthrough demo of Movebot
- Free trial of Movebot
- Movebot's parent company and SFTP solution: Couchdrop
- Multi-tier storage: Apache Ignite
- Microsoft collaboration tool: SharePoint

Black Hills Information Security
2024-11-25 - Discordgate

Nov 27, 2024 · 66:22


00:00:00 - PreShow Banter™ — Discordgate
00:09:24 - BHIS - Talkin' Bout [infosec] News 2024-11-25
00:10:46 - Story # 1: DOJ says Google must sell Chrome to crack open its search monopoly
00:12:08 - Story # 1b: DOJ's staggering proposal would hurt consumers and America's global technological leadership
00:19:16 - Story # 2: The Nearest Neighbor Attack: How A Russian APT Weaponized Nearby Wi-Fi Networks for Covert Access
00:24:37 - Story # 3: Palo Alto Networks tackles firewall-busting zero-days with critical patches
00:25:46 - Discordgate Follow Up
00:26:26 - Story # 4: Enhancing Cyber Resilience: Insights from CISA Red Team Assessment of a US Critical Infrastructure Sector Organization
00:31:08 - Story # 5: Fintech giant Finastra investigates data breach after SFTP hack
00:34:01 - Story # 6: CFPB Finalizes Rule on Federal Oversight of Popular Digital Payment Apps to Protect Personal Data, Reduce Fraud, and Stop Illegal "Debanking"
00:38:49 - Story # 7: T-Mobile finally managed to thwart a data breach before it occurred
00:40:22 - Story # 8: D-Link urges users to retire VPN routers impacted by unfixed RCE flaw
00:43:07 - Story # 9: US seizes PopeyeTools cybercrime marketplace, charges administrators
00:46:19 - Story # 10: Razzlekhan, crypto's most embarrassing rapper, is going to prison
00:48:31 - Story # 10b: Netflix has a perfectly timed Razzlekhan doc coming out in December
00:50:10 - Story # 11: Microsoft Defender Is Not Enough Anymore—This Malware Gets Around It
00:55:11 - Story # 12: Microsoft president asks Trump to "push harder" against Russian hacks
00:57:02 - Story # 13: Hackers Breach Andrew Tate's Online 'University,' Exposing 800,000 Users
01:00:36 - Story # 14: 7-Zip affected by dangerous vulnerability: users must update the app manually
01:01:31 - Story # 15: Microsoft disrupts ONNX phishing-as-a-service infrastructure
01:03:07 - Story # 16: US charges five linked to Scattered Spider cybercrime gang
01:04:25 - Plug: Secure Code Summit 2024

Strength for Today's Pastor
166- The Power of the Gospel in Forgiveness - Matthew 18.21-35 (with Bill Holdridge)

Nov 18, 2024 · 54:50 · Transcription Available


Comments? Questions? Send us a message!

Podcast 166 is one that I've wanted to do for some time, but needed to wait for the right time. Now is the right time.

Pastor, do you want to experience the power of the gospel in forgiveness? Do you want this for your fellowship? I know you do.

In this episode of SFTP, discover:
- The forgiveness mandate, directly from the Lord Jesus.
- The blessing of knowing we're forgiven.
- The blessings that result from forgiving others.
- The surprising meaning of the parable of the unforgiving servant in Matthew 18:21-35.
- The torture that results from refusing to forgive others.
- The protocols (how to do it) of forgiveness.
- Tried and tested resources to go to for help.

Listen in, and then share, share, share.

For Poimen Ministries, its staff, ministries, and focus, go to poimenministries.com. To contact Poimen Ministries, email us at strongerpastors@gmail.com. May the Lord revive His work in the midst of these years!

Sixteen:Nine
Lisa Schneider & Travis McMahand, Videotel

Oct 17, 2024 · 33:23


The 16:9 PODCAST IS SPONSORED BY SCREENFEED – DIGITAL SIGNAGE CONTENT

There are not a lot of companies that have been involved in what we now call digital signage for 44 years, but Videotel has been selling technology that puts marketing information on screens since 1980. The company started with VCRs (younger readers may have to Google that) and then started designing, manufacturing and selling DVD players that, unlike consumer devices, would happily play out a set of repeating video files for weeks, months and years. Back in the days before fast internet connections, cloud computing and small form factor PCs, that's how a lot of what we now know as digital signage was done.

About 14 years ago, the San Diego-area company added dedicated, solid-state digital signage media players, and that product line has steadily grown to include networked and interactive versions. The company also now has interactive accessories for stuff like lift and learn, and directional speakers that help drive experiences in everything from retail to museums.

I had a good conversation with Lisa Schneider, who runs sales and marketing, and Travis McMahand, Videotel's CTO. We get into the company's roots, the evolution to solid-state media players, and how Videotel successfully competes with $400-and-higher players, when at least part of the buyer market seems driven mostly by finding devices that are less than $100. Subscribe from wherever you pick up new podcasts.

TRANSCRIPT

Thank you for joining me. Can you introduce yourselves and tell me what Videotel is all about?

Lisa Schneider: Yes, absolutely. Hello, Dave. Thank you for having both Travis and me today. We appreciate it. My name is Lisa Schneider and I am the executive vice president for sales and marketing for our company, Videotel Digital. We were founded in 1980. Gosh, it's been almost 44 years now, back when we were manufacturing top-loading VCRs. That went into industrial-grade DVD players, and now, in the last 14 years, we are manufacturing digital signage media players. We have interactive solutions that include various sensors like motion sensors, proximity sensors, and weight sensors. We've got mechanical LED push buttons and touchless IR buttons and RFID tags, and things like that that create interactive displays. We also provide directional audio speakers. We have various form factors for all types of projects, and then we also have Travis on the line with us. I'll let Travis introduce himself.

Travis McMahand: Oh, hi, I'm Travis McMahand. I am the CTO of Videotel Digital.

Where's the company based? Is it in San Diego?

Lisa Schneider: Yes, it's San Diego. It's actually Chula Vista, borderline San Diego. So in California.

San Diego area.

Lisa Schneider: Yes, San Diego area. Beautiful San Diego.

So I've been aware of your company forever, going all the way back to the days when you were doing industrial-grade, commercially oriented DVD players. In the early days of digital signage, before things were networked, that's what people were using, and if you used a regular DVD player or even a VCR or something like that, the thing was really not set up to play over and over again if you were using a consumer-grade device. So the whole idea was that you developed commercial-grade versions that were rated to last for days, weeks, months, years. Is that accurate?

Lisa Schneider: Yes, that is accurate, and that was our flagship product back in the day.
That was because we made a truly industrial-grade player: it would auto power on and auto seamlessly loop and repeat without any manual interaction, even without a remote. So it was a looping player.

We actually still have three different types of industrial-grade DVD players that we offer. They're really popular in healthcare facilities because they are specifically UL-approved medical DVD players, and they are still out there and we are still producing them.

The attraction at the time was just the absence of really networked media players, unless you were quite sophisticated and were using big box PCs and everything else. I assume that market, with the exception of what you're saying about hospitals, has largely gone away?

Lisa Schneider: It hasn't. For example, sometimes with waiting rooms, people are still using DVDs for movies, for entertainment purposes, not just in healthcare. Sometimes there are still people who have self-burned content for museums. It's just simple for them to throw the disc in and then walk away, and it just continuously loops. So they're still out there. It's not completely gone away, and we are one of the only ones left, though, that is still really providing the industrial-grade DVD players.

You said about 14 years ago, you got into digital signage media players that were not based on DVDs; they were based on hard drives or solid-state storage.

Lisa Schneider: Yes, we started with solid-state media players that were just simply looping off of an SD card or USB, no network connection, none of the fancy stuff, and that was really the migration from the DVD, because people didn't want to use DVDs anymore. They just wanted to upload their content, do the same thing, load them, and go. So we probably still have a few versions of just solid-state players. That's how we entered the market. But one of the really cool things we did was we made one of them interactive, which is where we come into the interactive solutions, which we can talk about too.

The primary products that you have now are network-connected, right?

Lisa Schneider: We have both. We still have solid-state digital signage players, for those simple needs, and then we do have networked players as well.

I'm thinking there's an awful lot of cases like retail marketing for brands, for product launches and things like that, where, yes, you could use a networked digital signage player, but it's loading up a set of files at the start and that's really all it's ever going to use, right?

Lisa Schneider: Yes, that's a lot of the use cases, where they just want to upload the content and let it go, but there are obviously use cases where the content is ever-changing and they can push out content on our remote players, network players, via quick push.

Do you have device management? Will you know what's going on with these devices, as they're out in a big box or whatever?

Lisa Schneider: Yes. One of our new players (we actually just did a press release on it, for our VP92 4K network player) will allow customers to use our free embedded software on the player to push out the content remotely, and they can see what is being played in the various locations, wherever the box is deployed. And then if it's more than a single unit or hundreds of units, if it's up in the thousands, then we recommend our cloud-based CMS software, where they can do all the management within the software itself.
So you have your own software, but I'm assuming you're not selling yourself as a software company?

Lisa Schneider: No, we are not selling ourselves as a software company. We have hardware and then we have various software options. But it is embedded in the players to make it extremely simple to use.

And it's tuned specifically for your devices.

Lisa Schneider: Correct. Yes.

Can a third-party CMS company, a CMS software company, use your boxes?

Lisa Schneider: Travis, that might be a good question for you to answer.

Travis McMahand: That's a possibility. We design our players to be simple and reliable. We don't make it difficult to set up a program, so in doing that, we've hidden or disabled certain features within the operating system. But we can still work with companies if they have a specific application or service that they want to use. We can definitely work with companies to try to make that happen.

Okay, so I guess a scenario would be something like a retailer or even a brand that has networks in stores and is using a CMS for the big displays for retail marketing, and they say, we would like to use your stuff for the interactive or whatever; can we use the same platform to manage both of them?

Travis McMahand: Yeah, that's a possibility. It goes on a case-by-case basis.

It's not something you're actively marketing, but technically you could do it if it makes sense for both sides, right?

Lisa Schneider: Absolutely, Dave, that's what I'd like to interject. We're open to those conversations with anybody who is interested, especially if it is a larger project; that is something I would entertain.

The hardware sector has been a tough one for a lot of the companies that have media players, with maybe the notable exception of BrightSign, which has a very big footprint everywhere. The PC guys in particular have struggled in recent years to stay relevant, and a lot of that seems to be driven by a race to the bottom to see how low we can go in terms of cost for a media player, and we now have Amazon with a custom build, or kind of a stripped-out build, of its Fire Sticks that are $99.

Has it been a challenge to compete with that stuff, or do you operate differently or have a different market?

Lisa Schneider: That's a great question, and yes, I did see that new Amazon $99 Fire Stick. But we're very unique in a sense where, sure, you can purchase a $99 player, but is it really industrial grade? Is it going to seamlessly auto-loop? Is it reliable? Can you connect interactive devices to it? Can you grow within it?

So it is a challenge in some aspects. But it seems that the customers that choose that route end up circling back and they want more. It wasn't enough. The price was good, but then they realized, if we invest a little bit more, we're getting all of these things that we can grow into.

And your media playout boxes look like they're industrial grade, ruggedized. They've got what looks like heat sinks or things like that built into them. Are these designs done in Chula Vista and then you have a contract manufacturer over in Asia?

Lisa Schneider: Yes, spot on. We design our own players in-house, and then we have them made, and we have a stringent process and a bunch of design engineers that are just constantly trying to break things, and they do a really good job of it. We are very careful as to what we put out in the market, and that is what also makes us unique.
And that allows you to sell at a higher price point, because you have people looking at it and going, okay, I get why I'm paying more for this.

Lisa Schneider: Yeah, absolutely. We're not just relabeling some cheapo boxes. This is something that is designed in-house, and these are the features, and we are able to get a little bit more because we are using premium parts during the manufacturing process, and for other reasons as well, not just the hardware.

Travis, is it your own operating system or are you running on some kind of flavor of Linux?

Travis McMahand: The base operating system is Android. We use Android and just build off of that, build our apps within it, and set up all of the settings exactly the way we want.

Has that been that from the start or did you evolve into Android?

Travis McMahand: Oh, we evolved into Android. We originally started when we first were doing players that were playing content just from USB and SD cards. It was really simple at that point and we were doing our own embedded operating system within the player. But then, when we wanted to start adding more features and opening up the platform to more types of files, we found that using an Android-based operating system was much easier and more reliable to get going.

When I look at your customer list, it's impressive. You've got a whole bunch of interesting customers; a lot of them appear to be attractions, museums, that sort of thing, some brands, and so on. How would you describe, Lisa, your core customer base?

Lisa Schneider: I would say our core customer base is going to be in the retail space, museums, trade shows and events, and probably hotels, if I had to go down the list, but we have just about everybody, from food and beverage to restaurants (people use our players for menu boards), a lot of kiosks. We do healthcare, dentist offices, med spas; we're into transit, on buses and in bus stations. The list goes on, but I would say those would probably be our top.

Because you have these ruggedized boxes, is it advantageous that you are in the San Diego area as opposed to on the other side of the Pacific? There are companies that also sell small form factor, ruggedized media playout boxes that are in Taiwan or China, but the challenge you sometimes have is the time zones and language.

Lisa Schneider: Yeah, we make it known that we are in the States. Obviously we wanted to serve the US, but we can maintain and manage the international business as well. It just evolved in that way.

You sell direct or do you have channels and distribution?

Lisa Schneider: Great question. We do both. We sell directly more to the small business owners, and then most of our business is through integrators and the reseller channel.

Why do they come your way? We covered it, but I'm still curious about the pitch when you get asked: why you?

Lisa Schneider: Yeah, why Videotel? We've been around for 44 years, and we have experience. One of the big things is we really do listen to our customers. The word gets out: we offer free customer support and free technical support. You get a live person on the phone. It's funny how our customers say, gosh, thank you for calling me back; it's "does so-and-so not call you back?" So they love us for those reasons, and that we do offer industrial-grade equipment. It lasts, and thank goodness the rumor is out there that we make solid products.
I'm also curious about how you get the manufacturing done. Do you have somebody located in China or wherever it is?

Lisa Schneider: Yes, we do. We have a team of people in China that we've had relationships with for, gosh, I think 15 years now, so it's a very solid relationship. It makes it so much easier to communicate, and they're the interface to all of the manufacturing that goes on. So that's our front line there, and it makes it a lot easier for us.

The projects that are done with your gear: do they tend to be small quantities and a lot of different customers, or do you have accounts where they may have hundreds or even thousands of your units because they're all over the place?

Lisa Schneider: I would say probably 70% of our business is the small to mid-sized business, and then the other 30% is where you're going to find the larger enterprise projects, upward of a thousand units.

As a biz dev person, when you have a whole cavalcade of smaller customers, that gets to be a challenge to manage, right?

Lisa Schneider: It can be, but what makes life a lot easier for us as a team here is that our products are so easy to use that they are plug-and-play. They can figure it out. We have a whole learning program, a bunch of videos on our website, examples, and things. We have a chat, so it flows nicely. It really does. It hasn't been too much of a burden, because I think we just make things that are simple. That's what we pride ourselves on.

So I go directly to you: I'm a museum, and I need three of these things for looping videos. I order them, and you ship them to me. What do I need to do to get them running?

Lisa Schneider: You simply connect it via HDMI to any of the screens that you're using, and then you upload the content onto the USB or SD, and then you insert it and it will play. Or, if you're using the network capabilities, same thing: you plug it in, you get your network connection, and then, if you're using the free embedded software, it's a matter of just opening up the software on the player and choosing the source that you want to use, let's say USB or SD. If you want to use SFTP or LAN, you can point your content to a URL, but it's basic clicking of a button and uploading your file. It's just that simple.

So if you want to get fancy, you can, and use file transfers and everything else, but if IT is not your role and you just need to get something running, you don't need to bring somebody in to help you.

Lisa Schneider: That is exactly right. In fact, I always tell people: you do not need a computer science or an IT degree to run our stuff. So you get it, take it out of the box, plug and play, and you're really ready to go.

If it's networked, are you getting questions about security these days?

Lisa Schneider: Yes, I think we have. Not too many, but Travis would be better suited to answer that. He handles that side of it.

Travis McMahand: Yeah, for security, we really try to lock down the player to make it so that it really doesn't have any incoming ports. So you can't log directly into the player over a network. The only settings that you can do on the player itself are done with either the infrared remote control that comes with the player, or you can connect a keyboard and mouse to our network players to access the menu that way.
But for the sources, say you're using a shared LAN folder or an SFTP site: you're putting your content on one of those sources, and then you're just pointing the player to that source and telling it, okay, go check for new content or check for changed content every hour or every day or whatever. So the player itself goes out and checks for the content. There's no inbound traffic to the player that the player doesn't initiate.

Gotcha, and I gather from the IT and IS crowd lately that whereas several years ago they were pretty jumpy about using Android, they're now pretty comfy with it.

Travis McMahand: Yeah, I haven't heard too many complaints about it being an Android player. Once we explain the security that we have built into the player, we really haven't had any pushback on that.

Let's talk about interactive. Why did you go down that path? Was it customer demand?

Lisa Schneider: Yes. You know what? It actually was. It started with a museum in San Diego, and they came to us and they said, this is what we're looking for; we want to trigger content and this is how we want to do it. We thought, okay, we'll give it a shot, and then it just bloomed from there. Once we came up with a way of connecting and how we're going to create the interaction, then we started evolving from there with all the different types of interactive sources.

You also have interactive devices that are paired with your playout boxes. Was that a learned decision that you're best to develop your own, as opposed to trying to integrate other stuff that may be used for other purposes and could maybe be hacked to work with this?

Lisa Schneider: Yeah, I think the reason we did it in-house was because we wanted to control it, and also really because everything that was out there was so complex. We're like, okay, we need to bring this down 10 notches. We need to make this super simple for our customers. So that's why we developed it in-house.

Yeah, in my dark past, I worked with a company that did digital signage solutions and had its own media players, and I remember there was an ask for interactive, and we had to source big-ass buttons from a company that made buttons for slot machines, and then we had a guy who worked out of, I think, a motorhome somewhere in California and designed controller boards. So we had a custom controller board for this thing and everything else, and it worked, but good God, that was a process, as opposed to just saying, "Yeah, I want this playout box, and I need two of these, and two of those interactive things."

Lisa Schneider: Oh, that's pretty much how it went down with us too. And our team really loves the challenge. We love doing custom work. I know Travis is working on a bunch of stuff right now with different projects. We have one that is using our interactive IP push, as we call it, for an automotive parts department, actually (I can't say who just yet), and that customer just reported back that with the solution we created, this was the first time in the history of their company they had the highest productivity. Again, it comes from the customers. That's what I mean when I say we listen to our customers: they help you grow.

When I look at the different accessories that you have, there are motion sensors, there are triggers and buttons and things like that. What typically are your customers using?
Lisa Schneider: I would say the most popular would be the LED push buttons and/or the IR touchless buttons, for museums and also in trade show booths. Then our Sense solution, our motion sensor, detects human distance from a display, so you can lure somebody in and it will trigger the content once they're within a certain range. Those are probably our two most popular solutions. We also have a wave-to-play sensor, where you just wave your hand over an area and it will trigger content. That's another popular one too.

I've always wondered about some of those things, because if there's any sort of a learning curve (you've got to get somebody to wave to change a file or whatever that is), how hard is it for them to figure that out? Even though it sounds simple.

Lisa Schneider: Yeah, we make suggestions. God, the stuff that some of these integrators come up with is so cool, but they do make it really simple, using decal stickers, or it's on plywood, or it says "wave your hand here," or it's within the content itself. So they come up to the screen; it's pretty self-explanatory. We're not one of those where it's going to tell you your age, your hair color and eye color and anything to that degree. This is more simple. We don't find too many people standing there lost looking at it, thank goodness.

Are many using the lift-and-learn capabilities?

Lisa Schneider: Yes, that's one of our newer solutions that we just came out with; really cool: lifting a product off of a display and then having it trigger content. Retailers are really taking to that right now, as well as the museums, where you have to physically hold something, lift it.

Yeah, it's interesting, because that's an idea, a concept, and a capability that has been around for 10-15 years. So when I've seen companies saying, look, we can do lift and learn, I'm thinking, okay, you didn't exactly discover fire. It's been around, but it really didn't have much marketplace adoption until recently, it seems.

Lisa Schneider: It is, but the lift-and-learns that were out there before were, again, so complex, and we take it down a notch: it's just a matter of making a harness; they plug it into the back of the player, plug it into the TV, and you name your files a certain way and they're good to go. So yeah, we're not reinventing anything, but we have made it so simple to use that you don't need a degree to figure it out.

When you talk to prospects and they ask about your company and everything, is there a reference customer or a project that you tend to trot out and say, here's a great example of what we do?

Lisa Schneider: Yes, there are so many. We're lucky enough to have a whole plethora of case studies on our website from customers that have participated. So if it's a museum, I say, okay, I have a link to a laundry list of museums, and here's what we did for them. They shared their photos and how they did it. So yes, we absolutely can do that.

All right, this has been great. It's terrific to finally have a chat with you guys and find out more about your company. Thanks for spending some time with me.

Lisa Schneider: Yes, absolutely. Thank you for the opportunity. It's always so good to chat with you.

Travis McMahand: Yeah. Thank you.
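A side note on the pull model Travis describes above: the player polls a shared folder or SFTP site on a schedule and initiates every connection itself, so no inbound ports are needed. A minimal sketch of that pattern, assuming Python with the paramiko library (the host, credentials, and paths are hypothetical placeholders; this illustrates the general technique, not Videotel's actual firmware):

```python
import time
import paramiko

# Hypothetical connection details; a real player would read these from config.
HOST, USER, KEYFILE = "content.example.com", "player01", "/etc/player/id_rsa"
REMOTE_DIR, LOCAL_DIR = "/content", "/var/media"
POLL_SECONDS = 3600  # "check for changed content every hour"

seen = {}  # remote filename -> last-seen modification time

def poll_once():
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin host keys in production
    ssh.connect(HOST, username=USER, key_filename=KEYFILE)     # outbound connection only
    sftp = ssh.open_sftp()
    try:
        for entry in sftp.listdir_attr(REMOTE_DIR):
            if seen.get(entry.filename) != entry.st_mtime:     # new or changed file
                sftp.get(f"{REMOTE_DIR}/{entry.filename}",
                         f"{LOCAL_DIR}/{entry.filename}")
                seen[entry.filename] = entry.st_mtime
    finally:
        sftp.close()
        ssh.close()

while True:
    poll_once()               # the player initiates every transfer
    time.sleep(POLL_SECONDS)  # nothing ever connects inbound to the player
```

Because the device only ever dials out, a firewall can drop all inbound traffic to it, which is the security property Travis highlights.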

The Guy R Cook Report - Got a Minute?

Got a Minute? Check out today's episode: Do you use SSH or SFTP for data transfers?

I help goal-oriented business owners that run established companies to leverage the power of the internet. Contact Guy R Cook @ https://guyrcook.com. In the meantime, go ahead and follow me on X: @guyrcookreport.

Follow Practical Digital Strategies on Podbean (iPhone and Android app): https://bit.ly/3m6TJDV

Thanks for listening, viewing or reading the show notes for this episode. This episode of Practical Digital Strategies is on YouTube too. Have a great new year, and hopefully your efforts to Entertain, Educate, Convince or Inspire are in play.

Stay connected: subscribe to the YouTube channel @PracticalDigitalStrategies for more essential tips and strategies tailored for content creators. You can also find me on X and Facebook for continuous updates and community support. Engage with us by leaving your thoughts in the comments below!

vDomainHosting, Inc., 3110 S Neel Place, Kennewick, WA. 509-200-1429
#practicaldigitalstrategies
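The episode's question, SSH or SFTP for data transfers, is less either/or than it sounds, since SFTP runs over an SSH connection. For illustration only, a minimal sketch of a scripted SFTP upload, assuming Python with the paramiko library (the host, username, key path, and file paths are hypothetical placeholders):

```python
import os
import paramiko

# Hypothetical server and credentials for illustration only.
ssh = paramiko.SSHClient()
ssh.load_system_host_keys()  # trust hosts already present in known_hosts
ssh.connect("files.example.com", username="backup",
            key_filename=os.path.expanduser("~/.ssh/id_ed25519"))

sftp = ssh.open_sftp()
sftp.put("report.csv", "/uploads/report.csv")  # upload over the encrypted channel
sftp.close()
ssh.close()
```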

AWS Morning Brief
FTP is Eternal at Enterprises

Sep 23, 2024 · 4:20


AWS Morning Brief for the week of September 23, with Corey Quinn.

Links:
- AWS Transfer Family increases throughput and file sizes supported by SFTP connectors
- AWS WAF Bot Control Managed Rule expands bot detection capabilities
- AWS named as a Leader in the 2024 Gartner Magic Quadrant for Desktop as a Service (DaaS)
- Announcing General Availability of the AWS SDK for Swift
- Reinventing the Amazon Q Developer agent for software development
- Support for AWS DeepComposer ending soon
- Unlock AWS Cost and Usage insights with generative AI powered by Amazon Bedrock
- AWS Welcomes the OpenSearch Software Foundation
- The Rise of Chatbots: Revolutionizing Customer Engagement

Automators
163: Text Automation Workflows 2024

Sep 6, 2024 · 58:38


Fri, 06 Sep 2024 19:45:00 GMT · http://relay.fm/automators/163

David Sparks and Rosemary Orchard share some of their favorite tricks for creating, editing, and working with text.

This episode of Automators is sponsored by:
- LinkedIn Jobs: Find the qualified candidates you want to talk to, faster. Post your job for free today.

Links and Show Notes:
- Credits: The Automators - Rosemary Orchard and David Sparks; The Editor - Jim Metzendorf; The Fixer - Kerry Provanzano
- Get Automators Max: a longer, ad-free version of the show
- Submit Feedback
- Relay for St. Jude - St. Jude Children's Research Hospital
- Drafts | Where Text Starts
- Just Press Record
- Whisper Memos
- Mail Drop | Drafts User Guide
- Workspaces | Drafts User Guide
- Flags & Tagging | Drafts User Guide
- Actions | Drafts User Guide
- Bare Bones Software | BBEdit 15
- Textastic - Text, Code, and Markup Editor with Syntax Highlighting - FTP, SFTP, SSH, Dropbox, Google Drive - for iPad
- AI Writer - Free Online AI Text Generator
- Grammarly: Free AI Writing Assistance
- TextSoap - Automate Your Text Cleanup
- Mind Map & Brainstorm Ideas - MindNode
- Obsidian - Sharpen your thinking
- DEVONtechnologies | DEVONthink, professional document and information management for the Mac and iOS
- Cheatsheet: Sticky Note Widget on the App Store
- Cheatsheet Notes

Relay FM Master Feed
Automators 163: Text Automation Workflows 2024

Sep 6, 2024 · 58:38


Fri, 06 Sep 2024 19:45:00 GMT · http://relay.fm/automators/163

David Sparks and Rosemary Orchard share some of their favorite tricks for creating, editing, and working with text.

This episode of Automators is sponsored by:
- LinkedIn Jobs: Find the qualified candidates you want to talk to, faster. Post your job for free today.

Links and Show Notes:
- Credits: The Automators - Rosemary Orchard and David Sparks; The Editor - Jim Metzendorf; The Fixer - Kerry Provanzano
- Get Automators Max: a longer, ad-free version of the show
- Submit Feedback
- Relay for St. Jude - St. Jude Children's Research Hospital
- Drafts | Where Text Starts
- Just Press Record
- Whisper Memos
- Mail Drop | Drafts User Guide
- Workspaces | Drafts User Guide
- Flags & Tagging | Drafts User Guide
- Actions | Drafts User Guide
- Bare Bones Software | BBEdit 15
- Textastic - Text, Code, and Markup Editor with Syntax Highlighting - FTP, SFTP, SSH, Dropbox, Google Drive - for iPad
- AI Writer - Free Online AI Text Generator
- Grammarly: Free AI Writing Assistance
- TextSoap - Automate Your Text Cleanup
- Mind Map & Brainstorm Ideas - MindNode
- Obsidian - Sharpen your thinking
- DEVONtechnologies | DEVONthink, professional document and information management for the Mac and iOS
- Cheatsheet: Sticky Note Widget on the App Store
- Cheatsheet Notes

DataTalks.Club
DataOps, Observability, and The Cure for Data Team Blues - Christopher Bergh

Aug 15, 2024 · 53:47


Hi everyone, welcome to our event. This event is brought to you by DataTalks.Club, which is a community of people who love data, and we have weekly events; today's is one of them. I guess we are also a community of people who like to wake up early if you're from the States, right, Christopher? Or maybe not so much, because this is the time we usually have our events; for our guests and presenters from the States we usually do it in the evening, Berlin time, but unfortunately that slipped my mind. Anyway, we have a lot of events. You can check them at the link in the description; I don't think there are a lot of them right now on that link, but we will be adding more and more. I think we have five or six interviews scheduled, so keep an eye on that. Do not forget to subscribe to our YouTube channel; this way you will get notified about all our future streams, which will be as awesome as the one today. And, of course, very important: do not forget to join our community, where you can hang out with other data enthusiasts. During today's interview you can ask any question; there's a pinned link in the live chat, so click on that link, ask your question, and we will be covering these questions during the interview. Now I will stop sharing my screen. There is a message here from Christopher to anyone who's watching this right now, saying hello everyone.

Can I call you Chris, or...? Okay, I should look on YouTube then. But anyway, you'll need to focus on answering questions and I'll be keeping an eye on all the questions. So if you're ready, we can start.

I'm ready.

And you prefer Christopher, not Chris, right?

Chris is fine. It's a bit shorter.

Okay. So this week we'll talk about DataOps again; maybe it's a tradition that we talk about DataOps once per year, though we actually skipped one year, because we haven't had Chris for some time. So today we have a very special guest, Christopher. Christopher is the co-founder, CEO, and head chef at DataKitchen, with 25 years of experience (maybe this is outdated, because probably now you have more, or maybe you stopped counting, I don't know), with tons of years of experience in analytics and software engineering. Christopher is known as the co-author of the DataOps Cookbook and the DataOps Manifesto, and it's not the first time we've had Christopher here on the podcast: we interviewed him two years ago, also about DataOps. This one will be about DataOps too, so we'll catch up and see what actually changed in these two years. Welcome to the interview.

Well, thank you for having me. I'm happy to be here and talking all things related to DataOps, and why bother with DataOps, and happy to talk about the company or what's changed. Excited.

Yeah, so let's dive in. The questions for today's interview were prepared by Johanna Berer, as always; thanks, Johanna, for your help. Before we start with our main topic for today, DataOps, let's start with your background. Can you tell us about your career journey so far? For those who have not listened to the previous podcast, maybe you can talk about yourself, and for those who did listen, maybe give a summary of what has changed in the last two years.

Will do. So my name is Chris, and I guess I'm sort of an engineer. I spent about the first 15 years of my career in software, working on and building some AI systems and some non-AI systems at the US's NASA and MIT Lincoln Lab, then some startups, and then Microsoft. Then, about 2005, I got the data bug. My kids were small, and I thought, oh, this data thing would be easy and I'd be able to go home for dinner at 5 and life would be fine.

You started your own company, right?

Yes, and it didn't work out that way. What was interesting is that, for me, the problem wasn't doing the data: we had smart people who did data science and data engineering, the act of creating things. It was the systems around the data that were hard. It was really hard to not have errors in production. I had a BlackBerry at the time and a long drive to work, and I would not look at it all morning; I'd sit in the parking lot, take a deep breath, look at my BlackBerry and go, uh oh, is there going to be any problems today? If there wasn't, I'd walk in very happy, and if there was, I'd have to brace myself. And the second problem is that the team I worked for just couldn't go fast enough. The customers were super demanding; they didn't care, they always thought things should be faster, and we were always behind. So how do you live in that world where things are breaking left and right, you're terrified of making errors, and second, you just can't go fast enough?

And it's the pre-Hadoop era, right? Before all this big data tech.

Yeah, before this. We were using SQL Server, and we actually had smart people, so we built an engine in SQL Server that made SQL Server a columnar database; we built a columnar database inside of SQL Server in order to make certain things fast. And it was not bad; the principles are the same. Before Hadoop, it's still a database: there are still indexes, there are still queries, things like that. At the time you would use OLAP engines; we didn't use those, but those reports, the models, it's not that different. We had a rack of servers instead of the cloud.

What I took from that was that it's just hard to run a team of people who do data and analytics. I took it from a manager's perspective: I started to read Deming and think about the work that we do as a factory, a factory that produces insight and not automobiles. So how do you run that factory so it produces things that are of good quality? And then second, since I had come from software, I've been very influenced by the DevOps movement: how you automate deployment, how you run in an agile way, how you change things quickly and how you innovate. Those two things, running a really good, solid production line that has very low errors, and then changing that production line very, very often, are kind of opposite, right? So how do you, as a manager, and how do you technically, approach that? And then 10 years ago we started DataKitchen. We've always been a profitable company, so we started off with some customers and started building some software, and we realized that we couldn't work any other way, and that the way we work wasn't understood by a lot of people, so we had to write a book and a manifesto to share our methods. So we've been in business now a little over 10 years.

Oh, that's cool. So let's talk about DataOps. You mentioned DevOps and how you were inspired by that. By the way, do you remember roughly when DevOps started to appear? When did people start calling these principles, and the tools around them, DevOps?

Well, first of all, I had a boss in 1990 at NASA who had this idea: build a little, test a little, learn a lot. That was his mantra, and it made a lot of sense. Then the Agile Software Manifesto came out, which is very similar, in 2001. And then the first real DevOps was a guy at Twitter who started to do automated deployment, you know, push a button, and that was about 2009-ish, and the first DevOps meetup, I think, was around then. So it's been 15 years, I guess.

I started my career in 2010, and my first job was as a Java developer. I remember, for some things, we would just SFTP to the machine, put the jar archive there, and then keep our fingers crossed that it doesn't break. I wouldn't call it deployment, really.

You were deploying; you had a deploy process, I'd put it that way, right? And it was documented, too: it was like, put the jar on production, cross your fingers.

I think there was a page on some internal wiki, with passwords, that described what you should do.

Yeah, and I think what's interesting is why that changed. We laugh at it now, but why didn't you invest in automating deployment, or a whole bunch of automated regression tests that would run? Because I think in software now it would be rare that people wouldn't use CI/CD, that they wouldn't have some automated functional or regression tests; that would be the exception, whereas it was the norm at the beginning of your career. So that's what's interesting. And I think, if we talk about what's changed in the last two or three years, it is getting more standard. There are a lot more companies talking about DataOps or data observability, a lot more people are using git in data and analytics than ever before, I think thanks to dbt, and there are a lot of tools that are getting more code-centric, that aren't treating their configuration like a black box. There are several BI tools that tout the fact that they're git-centric, that they're testable, and that they have APIs. So things like that. I think people... maybe let's take a
step back and just do a 11:57 quick summary of what data Ops data Ops is and then we can talk about like what changed in the last two years sure so I 12:06 guess it starts with a problem and that it's it sort of 12:11 admits some dark things about data and analytics and that we're not really successful and we're not really happy um 12:19 and if you look at the statistics on sort of projects and problems and even 12:25 the psychology like I think about a year or two we did a survey of 12:31 data Engineers 700 data engineers and 78% of them wanted their job to come with a therapist and 50% were thinking 12:38 of leaving the career altogether and so why why is everyone sort of unhappy well I I I think what happens is 12:46 teams either fall into two buckets they're sort of heroic teams who 12:52 are doing their they're working night and day they're trying really hard for their customer um and then they get 13:01 burnt out and then they quit honestly and then the second team have wrapped 13:06 their projects up in so much process and proceduralism and steps that doing 13:12 anything is sort of so slow and boring that they again leave in frustration um 13:18 or or live in cynicism and and that like the only outcome is quit and 13:24 start uh woodworking yeah the only outcome really is quit and start working 13:29 and um as a as a manager I always hated that right because when when your team 13:35 is either full of heroes or proceduralism you always have people who have the whole system in their head 13:42 they're certainly key people and then when they leave they take all that knowledge with them and then that 13:48 creates a bottleneck and so both of which are aren aren't and I think the 13:53 main idea of data Ops is there's a balance between fear and herois 14:00 that you can live you don't you know you don't have to be fearful 95% of the time maybe one or two% it's good to be 14:06 fearful and you don't have to be a hero again maybe one or two per it's good to be a hero but there's a balance um and 14:13 and in that balance you actually are much more prod
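The automated regression tests Chris points to are the heart of the DataOps production line. As a rough illustration only (this is not DataKitchen's product; the file name and column names are invented), a batch-gating data test in Python could look like this:

import pandas as pd

# Hypothetical quality gate: run before a batch is published to dashboards.
# "orders.csv", "order_id", "amount", and "customer_id" are made-up examples.
def check_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality failures; an empty list means the batch passes."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if (df["amount"] < 0).any():
        failures.append("negative amounts")
    if df["customer_id"].isna().any():
        failures.append("missing customer_id")
    return failures

if __name__ == "__main__":
    batch = pd.read_csv("orders.csv")
    problems = check_orders(batch)
    if problems:
        raise SystemExit("refusing to publish: " + "; ".join(problems))
    print("batch passed all checks")

Wired into CI, or run against every production batch, a gate like this is what lets a team change the pipeline often without living in the fear Chris describes.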

Pierwsze kroki w IT
Wprowadzenie do sieci komputerowych

Pierwsze kroki w IT

Play Episode Listen Later Jul 4, 2024 67:12


Marcel Guzenda, IT consultant, online creator, and entrepreneur, talks about how computer networks work. We discuss both theoretical and practical aspects, covering topics related to devices, protocols, and software. The full episode description, recommended materials and links, and a transcript can be found at: https://devmentor.pl/b/ || devmentor.pl/rozmowa ⬅ Want to switch careers into IT and learn the approaches that helped others land a job? I'm an experienced developer and programming mentor; I'm happy to answer your questions about learning to program and the IT world. Book a free, no-obligation call! ~ Mateusz Bogolubow, creator of the Pierwsze kroki w IT podcast || devmentor.pl/podcast ⬅ The official podcast website

DekNet
Seedbox 3

DekNet

Play Episode Listen Later Jun 16, 2024 29:58


TECHNOLOGY and FREEDOM. PODCAST: https://www.spreaker.com/user/dekkar | twitter.com/D3kkaR | twitter.com/Dek_Net | mastodon.social/@DekkaR | Crypto.com referral code: https://crypto.com/app/hhsww88jd4 | #Bitcoin BTC: dekkar$paystring.crypt

Automators
154: Automating on the iPad

Automators

Play Episode Listen Later May 17, 2024 40:15


Episode link: http://relay.fm/automators/154. David Sparks and Rosemary Orchard. After Apple's latest announcement, Rosemary and David review their iPad usage and how they automate on it.
Links and Show Notes:
Get Automators Max: a longer, ad-free version of the show
Submit Feedback
a-Shell
Textastic - Text, Code, and Markup Editor with Syntax Highlighting - FTP, SFTP, SSH, Dropbox, Google Drive - for iPad
Actions For Obsidian
Pyto - Python 3 on the App Store
Rubyist — Powerful iOS a

TubbTalk - The Podcast for IT Consultants
TubbTalk 146: Growth and Success Advice From The Wizard of MSP Data Migrations

TubbTalk - The Podcast for IT Consultants

Play Episode Listen Later Feb 25, 2024 57:56


In this episode, Richard Tubb speaks to Michael Lawson, who has over 15 years of experience in working with MSPs. A seasoned startup founder, he's recently built an exciting new platform for MSPs. Movebot is designed to make data migration easier, particularly at scale. Michael tells Richard how he came up with the idea for Movebot, the tools it integrates with, and the tech requirements for MSPs to run it effectively for themselves and their clients. Richard asks Michael to explain the types of data Movebot migrates and how to use it for data merges, as well as giving some examples of how real-life MSPs use the tool in their businesses. Michael also talks through how Movebot transfers live data and how Movebot helps its global MSP clients stay data compliant and up to date with legislation changes in their home countries. Richard asks Michael what Movebot's tech support and outsourcing offerings look like, the pricing structure, and his plans for Couchdrop and Movebot in 2024 and beyond.
Mentioned in This Episode:
Movebot's parent company and SFTP solution: Couchdrop
Cloud storage solution: Dropbox
Multi-tier storage: Apache Ignite
Microsoft collaboration tool: SharePoint
Microsoft's online communication suite: Office 365
Google's online communication suite: Workspace
Cloud-based client portal: Huddle
Project management tool: BIM 360
Amazon cloud storage: S3
Cloud storage: Wasabi
Cloud storage and backup tool: Backblaze
UK data protection legislation: GDPR
US health legislation: HIPAA
Californian consumer legislation: CCPA
Privacy experts: Keepabl
Online collaboration tool: Slack
MSP peer group: IT Nation
Online chat forums: Discord
Movebot's Discord channel

Strength for Today's Pastor
156- A Way to Vitalize (and Utilize) Our Youth- with Gary Malkus Jr.

Strength for Today's Pastor

Play Episode Listen Later Jan 2, 2024 44:45


It's a huge problem. And because of it, we're losing many of our best youth to the world, the flesh, and the devil. Too often, the young people in our churches run off to college or university and subsequently abandon the faith. Also, in our churches there often is a big chasm between what happens in youth or college groups and what is going on in adult church. The younger generation isn't learning how to connect or get involved in service to Christ within the body. Many recognize these problems, and some pastors are actually doing something radical and visionary to rectify it. One such pastor is Gary Malkus, Jr. Gary is the senior pastor of Calvary Chapel in Victorville, California. Gary's story was told in SFTP episode 155. In episode 156, Gary's vision to reach the next generation is told. What he has to say will get pastors and youth leaders thinking, and some will be moved to do something about their thoughts. For the sake of God's kingdom... --- Support this podcast: https://podcasters.spotify.com/pod/show/bill-holdridge/support

Field Sales Leadership Guide
How to integrate your tech stack, Ep #25

Field Sales Leadership Guide

Play Episode Listen Later Dec 13, 2023 27:58


Welcome to another insightful episode of the Field Sales Leadership Guide Podcast! In this episode, Mary Keough is joined by Justin Lu, Head of Customer Success at Map My Customers, as they delve into the crucial topic of integrating technology within your company. With co-host JT unavailable, Justin steps in to share his expertise on how Map My Customers can seamlessly integrate with various technologies, especially focusing on the challenges faced by field sales teams. The conversation kicks off by highlighting the shift in the importance of integrations as technology becomes more ubiquitous. Instead of merely checking boxes for features, the focus is now on how technology aligns and works with existing systems in your company. Justin and Mary explore the impact of these integrations on outside sales teams and how they enhance the efficiency of processes. They dive into four key ways companies can integrate their tech stack effectively (a sketch of the third approach follows below):
1. Native Integration (Gold Standard): Justin explains the advantages of native integration, emphasizing how it streamlines the process by having the software vendor do the work. Map My Customers offers native integrations with popular CRMs like Salesforce, Zoho, Dynamics, and HubSpot.
2. Open API (Silver Standard): Justin discusses the flexibility of using an Open API, which allows integration with virtually any system that supports API connections. While it provides robust technological solutions, it may require programming knowledge and resources to set up.
3. SFTP or CSV Upload (Bronze Standard): This method involves manually uploading data or automating the process through the Secure File Transfer Protocol. Justin shares that it's a stable solution with a daily frequency, providing a middle ground between native integration and simpler methods like CSV uploads.
4. Zapier (Convenient Workaround): Justin introduces Zapier, a tool that enables API connections with minimal programming knowledge. While convenient for certain tasks, it may have limitations and could be less suitable for complex integrations with larger systems.
The discussion emphasizes the importance of getting buy-in from the start, ensuring ease of use for sales reps, and the crucial role of managers in enforcing CRM usage. Justin concludes by highlighting the value of a well-integrated tech stack in driving sales efficiency and revenue growth. Tune in to gain valuable insights into navigating the integration landscape and optimizing technology for your field sales team's success. Don't miss out on practical tips and strategies shared by Justin and Mary in this information-packed episode!
LISTEN IF YOU ARE INTERESTED IN…
Double data entry is the death knell of CRM adoption [4:30]
Tech stack integrations don't have to be complicated. [13:30]
Integrating data silos helps the whole company [24:00]
Connect with the guest: Justin Lu
Connect with the hosts: Mary Keough, JT Rimbley
Connect With Map My Customers: On Twitter | On Facebook | On LinkedIn
Subscribe to FIELD SALES LEADERSHIP GUIDE
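To make the bronze-standard route concrete, here is a minimal sketch of an automated nightly CSV push over SFTP, using Python with the paramiko library; the host, credentials, and file paths are placeholders for illustration, not Map My Customers specifics:

import paramiko

HOST = "sftp.example.com"        # hypothetical vendor drop host
USERNAME = "integration-user"    # placeholder credentials
PASSWORD = "change-me"           # key-based auth is preferable in practice
LOCAL_CSV = "crm_accounts.csv"   # export produced by the CRM
REMOTE_CSV = "/uploads/crm_accounts.csv"

def push_csv() -> None:
    """Upload the daily CRM export to the vendor's SFTP drop folder."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USERNAME, password=PASSWORD)
    try:
        sftp = client.open_sftp()
        try:
            sftp.put(LOCAL_CSV, REMOTE_CSV)  # overwrites yesterday's file
        finally:
            sftp.close()
    finally:
        client.close()

if __name__ == "__main__":
    push_csv()

Scheduled once a day with cron or Task Scheduler, this reproduces the daily-frequency, stable-but-not-real-time behavior Justin describes for the SFTP approach.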

viewSource
Modern Deployment for Laravel and WordPress

viewSource

Play Episode Listen Later Nov 27, 2023 41:35


It's the fourth and final episode of our series exploring Laravel. Brian takes us through the deployment process using Laravel Forge and AWS. Aurooba discusses "modern" WordPress development and how WordPress solutions like SpinupWP compare to tools like Netlify and Forge. A full transcript of the episode is available on the website. Watch the video podcast on YouTube and subscribe to our channel and newsletter to hear about episodes (and more) first!
Suggest an episode - https://suggest.viewsource.fm/
All the code - https://github.com/viewSourcePodcast/suggest-episode
Tailcolor (Tailwind Color Generator) - https://tailcolor.com/
Laravel Forge - https://forge.laravel.com/
SpinupWP - https://spinupwp.com/
Brian's website – https://www.briancoords.com
Aurooba's website – https://aurooba.com
(00:00) - S02E04 - Laravel pt 4
(00:07) - Our Completed Laravel App
(02:34) - Tailwind and Colors
(04:56) - AlpineJS and Package Bloat
(07:57) - Single Page Apps on Laravel
(09:43) - Brian's Three Open Terminals
(11:52) - Scaffolds and CLIs in WordPress
(15:03) - Handling Build Assets in your Deployment
(18:36) - Deployment - Forge (and SpinupWP)
(24:25) - Connecting AWS to Forge
(27:44) - Automated Git Deployments
(31:20) - Git vs SFTP in Managed WordPress Hosting
(34:33) - Other cool things like queues
(37:14) - Final Thoughts

DekNet
Seguridad suficiente y transparente

DekNet

Play Episode Listen Later Sep 4, 2023 31:56


Thoughtstuff - Tom Morgan on Microsoft Teams, Skype for Business and Office 365 Development

Audio version of video on YouTube.
New Teams Toolkit for Visual Studio release with exciting features for .NET developers
Azure PowerShell Functions - connect SFTP and Storage Account
Azure SDK Release (August 2023)
Microsoft Teams: Watermark support for recording playback (Premium)
Subscribe to all my videos at: https://thoughtstuff.co.uk/video
Podcast: https://thoughtstuff.co.uk/itunes, https://thoughtstuff.co.uk/spotify or https://thoughtstuff.co.uk/podcast
Blog: https://blog.thoughtstuff.co.uk

Strength for Today's Pastor
147- "I Wrote a Book!" (with Pastor Paul LeBoutillier)

Strength for Today's Pastor

Play Episode Listen Later Aug 15, 2023 33:30


Podcast number 147 features Pastor Paul LeBoutillier, who recently, and very excitedly, announced that he has written and published his first book. This podcast is aimed at encouraging pastors who have something to say, to say it in written form! In this episode, Paul will share with us about how the book came to be, some of the important components of writing a book, and encouragement to do so. Paul is the founding pastor of Calvary Chapel Ontario, Oregon, which began in December of 1990. He is known as a practical, filled-with-wisdom Bible teacher who is dedicated to teaching the Bible. Paul doesn't just teach from the Bible, he teaches the Bible itself, working hard to give the people the whole counsel of God contained in the 66 books of the Bible … book by book, chapter by chapter. Paul was with us twice in the early days of this podcast, in episodes 23 (“What About Pastoral Counseling”) and 27 (“Maintaining Unity Among Church Leadership”). We recommend going back into the SFTP archives and giving them a listen. You'll be strengthened, for sure.

Actualizing Success
Modern Approach: TMS Vendor Evaluations

Actualizing Success

Play Episode Listen Later Aug 14, 2023 19:22


On this latest episode of Actualizing Success, Actualize Consulting's COO Kerry Wekelo, Senior Consultant Danecia Stewart, and Managing Director Priscila Nagalli examine the significant industry shifts impacting Treasury Management System vendor evaluations. Our experts delve into the latest trends and a modern approach that includes identifying metrics, functionality, integrating platforms, and utilizing new technologies that are reshaping this dynamic sector. At Actualize Consulting, we are committed to providing unparalleled value to our clients by delivering cost and time savings, leveraging our diverse team's TMS expertise, and assisting with identifying genuine needs and negotiating with vendors.
Listen to learn more about:
- Market support functionality
- Growth of different companies
- Integration across platforms
- APIs vs SFTP vs Swift
About Priscila Nagalli: Priscila Nagalli is a Managing Director at Actualize Consulting with over 25 years' experience in international cash, banking, payment, investments, compliance, liquidity, and currency risk management, as well as leading complex global transformation projects. She is an economist, CFA, and CTP charterholder. She is fluent in English, Portuguese, and Spanish. Email: pnagalli@actualizeconsulting.com
About Danecia Stewart: Danecia is a Senior Consultant at Actualize Consulting with over 15 years of experience in liquidity management, budgeting and forecasting, mergers and acquisitions, due diligence, corporate finance, commodity finance, relationship management, financial analysis, policies/procedures, and global trade finance. She earned her Bachelor in Business Administration from the University of Houston. Email: dstewart@actualizeconsulting.com
About Kerry Wekelo: Kerry is Chief Operating Officer at Actualize Consulting. Her book and program, Culture Infusion: 9 Principles for Creating and Maintaining a Thriving Organizational Culture, and latest book, Gratitude Infusion, are the impetus behind Actualize Consulting being named Top Company Culture by Entrepreneur Magazine, a Top Workplace by The Washington Post, and Great Place to Work-Certified. In her leadership, Kerry blends her experiences as a consultant, executive coach, award-winning author, mindfulness expert, and entrepreneur. Kerry has been featured on ABC, NBC, NPR, The New York Times, Thrive Global, SHRM, Inc., and Forbes. Email: kelam@actualizeconsulting.com
Thanks for listening to this episode of the Actualizing Success Podcast! We hope you enjoyed the discussion and come back for more. In the meantime, don't forget to rate this episode and leave a review to let us know how you like it. If you have any questions, please contact Paul Baram.
More Info: Website: www.actualizeconsulting.com
If you have any questions or comments, we'd love to hear from you. You can contact us at podcast@actualizeconsulting.com.

The Cloud Pod
221: The Biggest Innovator in SFTP in 30 Years? Amazon Web Services!

The Cloud Pod

Play Episode Listen Later Aug 7, 2023 53:37


Welcome to episode 221 of The Cloud Pod podcast - where the forecast is always cloudy! This week your hosts, Justin, Jonathan, Ryan, and Matthew look at some of the announcements from AWS Summit, as well as try to predict the future - probably incorrectly - about what's in store at Next 2023. Plus, we talk more about the Storm attack, SFTP connectors (and no, that isn't how you get to the Moscone Center for Next), Llama 2, Google Cloud Deploy, and more!
Titles we almost went with this week:
Now You Too Can Get Ignored by Google Support via Mobile App
The Tech Sector Apparently Believes Multi-Cloud is Great… We Hate You All.
The Cloud Pod Now Wants All Your HIPAA Data
The Meta Llama is Spreading Everywhere
The Cloud Pod Recursively Deploys Deploy
A big thanks to this week's sponsor: Foghorn Consulting provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.

Security Squawk
Unmasking the Cl0p Ransomware Gang: A Journey into Cybersecurity Realities

Security Squawk

Play Episode Listen Later Jul 24, 2023 42:41


Welcome to the Security Squawk Podcast, where we delve into the world of cybersecurity and the ever-evolving threats faced by businesses worldwide. In this episode, we uncover the shocking exploits of the notorious "Cl0p" ransomware gang, which has been wreaking havoc by exploiting a zero-day vulnerability in the popular "MOVEit" application. We shed light on the ruthless tactics employed by this gang, which have resulted in the extortion of millions of dollars from their unfortunate victims. With estimates soaring up to $100 million in just a few months, the scale of their malicious activities is truly alarming. Discover the chilling impact of their attacks, as companies are forced to make a harrowing choice - pay the ransom or risk having sensitive data exposed to the world. But that's not all; the "Cl0p" gang has recently taken an even more dangerous turn by publishing stolen data on regular websites, making it readily accessible to anyone and potentially causing further chaos. We bring valuable insights into the prevailing cybersecurity landscape and expose the flawed approach many businesses take to safeguard themselves. We explore the common practice of relying solely on cyber insurance and doing the bare minimum to meet insurance requirements. While this might seem like a convenient solution, it falls woefully short in preventing cyberattacks and data breaches. We also delve into the significance of public disclosure, legislation, and enforcement in driving positive change within the cybersecurity industry. The use of AI and automation is not without its challenges, as we discuss the critical need for human involvement and verification to ensure the efficacy of these tools. In the face of ever-evolving cyber threats, our conversation ultimately underscores the importance of adopting a holistic and comprehensive approach to cybersecurity. It goes beyond just technology, touching upon the internal processes and responsibilities that must be embraced by businesses and individuals alike. Tune in to the Security Squawk Podcast and equip yourself with the knowledge and understanding needed to navigate the treacherous waters of cybersecurity, and join us in advocating for a safer digital world. Together, we can fortify our defenses and stand strong against cyber criminals.

The Treasury Update Podcast
The Power of Connectivity and API Technology in Treasury Management

The Treasury Update Podcast

Play Episode Listen Later Jul 10, 2023 14:27


In today's episode, Host Craig Jeffery of Strategic Treasurer talks with German Karaivanov of GTreasury as they dive into the world of connectivity and API technology.  Explore the core technological and practical differences between file-based/SFTP connectivity methods and API-based approaches, and gain insight into why these distinctions are crucial for companies, particularly treasury groups. 
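As a rough illustration of the distinction the episode discusses: with file-based connectivity you wait for the bank or TMS to drop a batch file on an SFTP server and parse it after the fact, while an API lets you pull positions on demand. A minimal sketch in Python, with a hypothetical endpoint, token, and response shape (no specific vendor's API implied):

import requests

API_URL = "https://api.example-bank.com/v1/accounts/123/balances"  # made-up endpoint
TOKEN = "replace-me"                                               # placeholder token

# API approach: an on-demand HTTPS call returns current balances as JSON.
resp = requests.get(API_URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
resp.raise_for_status()
for balance in resp.json().get("balances", []):  # assumed response shape
    print(balance["currency"], balance["amount"])

The practical difference is latency and shape: the API call returns current JSON whenever you ask, whereas the SFTP file arrives on the sender's schedule and in whatever batch format was agreed.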

The Cloud Pod
217: The Cloud Pod Whispers Its Secrets to Azure Open AI

The Cloud Pod

Play Episode Listen Later Jul 8, 2023 39:53


Welcome to the newest episode of The Cloud Pod podcast - where the forecast is always cloudy! Today your hosts Justin, Jonathan, and Matt discuss all things cloud and AI, as well as some really interesting forays into quantum computing, changes to Google Domains, Google accusing Microsoft of cloud monopoly shenanigans, and the fact that Azure wants all your industry secrets. Also, FinOps and all the logs you could hope for. Are your secrets safe? Better tune in and find out!
Titles we almost went with this week:
The Cloud Pod Adds Domains to the Killed by Google List
The Cloud Pod Whispers Its Secrets to Azure OpenAI
The Cloud Pod Accuses the Cloud of Being a Monopoly
The Cloud Pod Does Not Pass Go and Does Not Collect $200
A big thanks to this week's sponsor: Foghorn Consulting provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.

The Cloud Pod
216: The Cloud Pod is Feeling Elevated Enough to Record the Podcast

The Cloud Pod

Play Episode Listen Later Jun 30, 2023 30:53


Welcome to the newest episode of The Cloud Pod podcast - where the forecast is always cloudy! Today your hosts are Jonathan and Matt as we discuss all things cloud and AI, including Temporary Elevated Access Management (or TEAM, since we REALLY like acronyms today), FTP servers, SQL servers and all the other servers, as well as pipelines, whether or not the government should regulate AI (spoiler alert: the AI companies don't think so), and some updates to security at Amazon and Google.
Titles we almost went with this week:
The Cloud Pod's FTP server now with post-quantum keys support
The Cloud Pod can now TEAM into your account, but only temporarily
The Cloud Pod dusts off their old floppy drive
The Cloud Pod dusts off their old SQL Server disks
The Cloud Pod is feeling temporarily elevated to do a podcast
The Cloud Pod promises that AI will not take over the world
The Cloud Pod duels with keys
The Cloud Pod is feeling temporarily elevated
A big thanks to this week's sponsor: Foghorn Consulting provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.

Enrollment Insights Podcast
Dive Deeper into SFTP for Student Imports [Slate Stage]

Enrollment Insights Podcast

Play Episode Listen Later May 22, 2023 36:02


This is a recording of a Niche webinar performed on the 2023 Slate Stage in which we dive deeper into what SFTP is and why admissions offices should be using it for importing students. Join Brian Clark, Damien Snook, and Will Patch to explore this topic and answer audience questions. In the Enrollment Insights Podcast, you'll hear about novel solutions to problems, ways to make processes better for students, and the questions that spark internal reflection and end up changing entire processes.

Screaming in the Cloud
Making Open-Source Multi-Cloud Truly Free with AB Periasamy

Screaming in the Cloud

Play Episode Listen Later Mar 28, 2023 40:04


AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create truly free open-source software, and how his partnership with Amazon has been beneficial.
About AB: AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM), and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" code, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open-source software, articulating the difference between the philosophy and the business model. An active contributor to a number of open-source projects, he is a board member of India's Free Software Foundation.
Links Referenced: MinIO: https://min.io/ | Twitter: https://twitter.com/abperiasamy | LinkedIn: https://www.linkedin.com/in/abperiasamy/ | Email: mailto:ab@min.io
Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those.
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.AB: Yes, it's wonderful to be here again, Corey.Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use.And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not.One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition.The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? 
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard.And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke.There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, they need works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model.And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the region and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility.Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs will send us in certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problem are our problems too. We have to carry that baggage.But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove.So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right? 
POSIX is a rich set of API, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system. So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible. Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while, or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity. There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance and control standpoint; make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website. AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—it is good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to, because the Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly. And eventually we [unintelligible 00:09:54] it and we gave in. But there is one issue to be aware of, right? The problem with quota is that you, as an object storage administrator, set a quota: let's say for this bucket, this application, I don't see more than 20TB; I'm going to set a 100TB quota. And then you forget it. And then you think in six months they will reach 20TB. The reality is, in six months they reach 100TB. And then, when nobody expected it—everybody has forgotten that there was a quota in a certain place—suddenly applications start failing. And when it fails, even though the S3 API responds back saying insufficient space, the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually the object storage that ran out of space, that's lost time and it's downtime. So, as long as they have proper observability—because, I mean, I would also ask of observability that it can alert you that you are going to run out of space soon. If you have those systems in place, then go for quota.
If not, I would agree with the S3 API standard that it is not about cost. It's about operational, unexpected accidents. Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well, it got full, so oops-a-doozy. On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect. AB: Actually, that is the right way to do it. That's what I would recommend customers do. Even though there is hard quota, I will tell them, don't use it; use soft quota. And the soft quota, instead of even soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually the month-end bills, it shows up. On MinIO, when it's deployed on these large data centers, where it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? IT doesn't run for free, right? IT has to have a budget, and it has to be sponsored by the applications team. And you measure, and instead of setting a hard limit, you actually charge them: based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it has to then trigger an alert in observability. It's an observability problem. But it actually is interesting to hear that as soft quotas, which makes a lot of sense. Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, "Oh, yeah, you're going to run into a quota storage problem." Yeah, we all find that out because the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low-level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem? AB: Yeah. So, when we started, right, our idea was that the world is going to produce an incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right? That was the reason for us to play this game. And we saw that every one of these cloud players were incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck.
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud players' game was bring all the world's data into the cloud. And that actually requires an enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right, and that's the right way to build large-scale infrastructure. If we stick to the Amazon S3 API instead of introducing another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015, we started, it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right? Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, usurious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload where people should not consider the big three cloud providers as the place where that data should live, because you're never getting it back. AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabyte, feels like 100 terabyte. For even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it? Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like an AWS S3-compatible object store. We took a very different path. But now, when I say the same story that we started with on day one, it is no longer laughable, right? People believe that yes, MinIO is there, because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside a shadow IT, and eventually businesses realize the bulk of their business-critical data is sitting on MinIO, and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there are also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud, or the same software can run on their colos like Equinix, or, like, a bunch of, like, Digital Realty, anywhere. And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on a Raspberry Pi.
It's now—whatever we started with is now has become reality; the timing is perfect for us.Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAproxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere and the reality that I keep running into, which is we tried to do that but we implicitly without realizing it built in a lot of assumptions that everything would look just like this environment that we started off in.AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally a HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you by API, you can build a globally unified data infrastructure, some buckets here, some buckets there.That's actually not the problem. The problem comes when you have multiple clouds. Different teams, like, part M&A, the part—like they—even if you don't do M&A, different teams, no two data engineer will would agree on the same software stack. Then where they will all end up with different cloud players and some is still running on old legacy environment.When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employer, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected?You want unified identity, you want unified access control policies. Where are the encryption key store? And then the load balancer itself, the load, its—load balancer is not the problem. But then unless you adopt S3 API as your standard, the definition of what a bucket is different from Microsoft to Google to Amazon.Corey: Yeah, the idea of an of the PUTS and retrieving of actual data is one thing, but then you have how do you manage it the control plane layer of the object store and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball edge devices to move some data into S3 on a lark. 
And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying. I would give a lot just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in? AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO, because the Snowball appliance also looks like an S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO on to make it look like one unit, because MinIO can erasure-code objects across multiple Snowball appliances. And the MC tool, unlike the AWS CLI, which is really meant for developers, like low-level calls, MC gives you unique [scoring 00:21:08] tools, like ls, cp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it. Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data: a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively. AB: Yeah. In fact, Western Digital and a few other players, too—now, Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important, and more and more customers need it. Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different "monitoring tools" that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud. Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented it to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again.
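Corey's per-invocation complaint is easy to picture. Pointing a stock S3 client at a non-AWS, S3-compatible endpoint is typically a one-line change; here is a minimal Python sketch with boto3, where the address and credentials are placeholders for a MinIO server or Snowball Edge on the local network:

import boto3

# Placeholder endpoint and credentials for an S3-compatible service.
s3 = boto3.client(
    "s3",
    endpoint_url="http://192.0.2.10:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

# The calls themselves are plain S3; the same PUT/GET/LIST works against
# AWS, MinIO, or a Snowball Edge, because every request is self-contained.
s3.upload_file("report.csv", "my-bucket", "reports/report.csv")
objects = s3.list_objects_v2(Bucket="my-bucket").get("Contents", [])
print([o["Key"] for o in objects])

With the AWS CLI, the equivalent is passing the global --endpoint-url flag on every call (for example, aws s3 ls s3://my-bucket --endpoint-url http://192.0.2.10:9000), which is exactly the per-invocation repetition Corey is complaining about.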
So, it started to increasingly feel, in a lot of ways, like Cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things.I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up instead of where do you save this file to having the cloud abstraction, which hopefully, you'll never have to deal with an S3-style endpoint, but that can underpin an awful lot of things. It feels like it's coming back and that's cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?AB: I actually, fundamentally believe in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, on your data center, you used to run your own Exchange servers, like, those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined and all of them will look alike.But what I find from the customer's journey, the Old World and the New World is incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their application. But this time, you have—it as a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more and I would rather not do it.Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make it available ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications, but it's still a bad idea to just move legacy applications there. But what I'm finding is that the cost, if you still run your infrastructure with enterprise IT mindset, you're out of luck. It's going to be super expensive and you're going to be left out modern infrastructure, because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen.And that's why cloud, in the long run, everyone will look like AWS and we always said that and it's now being becoming true. Like, Kubernetes and MinIO basically is leveling the ground everywhere. It's giving ECS and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find the challenging part is the cultural mindset. If they still have the old cultural mindset and if they want to adopt cloud, it's not going to work.You have to change the DNA, the culture, the mindset, everything. The best way to do it is go to the cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask economics question, the unit economics. Then you will find the answers yourself.Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective. 
And, well, “we should go and refactor this because, I don't know, a couple of folks on a podcast said we should” isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business, they have to rethink a lot of the architectural things that have sprung up around it.

I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.

AB: What I am finding is that if you are running it the enterprise IT style, you are the one telling the application developers: here you go, you have this many VMs, and then you have, like, a VMware license and, like, JBoss, like WebLogic, and like a SQL Server license; now you go build your application. You won't be able to do it. Because application developers talk about Kafka and Redis and, like, Kubernetes; they don't speak the same language. And that's when these developers go to the cloud and finish their application, take it live, going from zero lines of code before IT can procure infrastructure and provision it for these guys. The change that has to happen is: how can you give the developers what they want? Now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it.

But then you go to the cloud, and at scale, some parts of it you want to move for—now you really know why you want to move. For economic reasons; like, particularly the data-intensive workloads become very expensive. And at that point, they go to a colo, but leave the applications on the cloud. So, the multi-cloud model, I think, is inevitable. The expensive pieces, where you can—if you are looking at yourself as a hyperscaler, and if your data is growing, if your business focus is a data-centric business, then parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on is productivity, stick to the cloud and you're still better off.

Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money,” it's, “No, you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go, therefore, is for a capability story, when it's right for you.

That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be “it's cloud or it's trash.” No, I'm a big fan of doing things that are sensible, and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “and I've decided cloud is not for me,” it's, “Ehh, you sure about that?”

That sounds like you are smack-dab in the middle of the cloud use case.
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say that they will.

AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to colo, because they exactly know—they have the containers and Kubernetes and microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to cloud, rewrite your application.

Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there are basically no companies out there worth mentioning that are, “Yep, we've decided the cloud is terrible, we're taking everything out, and we are going to data centers. The end.” In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times during proof of concepts, and other times, as things have hit a certain point of scale, where an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud, and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.

AB: Absolutely. Actually, what we are finding with the application side: like, parts of their overall ecosystem, right, within the company, they run on the cloud, but on the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes, and their plan is to go to exascale. And they are actually doing repatriation because, for them and their customers, it's consumer-facing and it's extremely price sensitive, and when you're consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business.

Particularly in the last two years, the cost part became an important element of their infrastructure; they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network, and put it in a colo or even lease these boxes; they know what their demand is. Even at ten petabytes, the economics start to matter. If you're processing it, the data side, we have several customers now moving to colo from cloud, and this is the range we are talking about.

They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud, but I think for them, they're also not anti-cloud. They don't want to leave the cloud. Completely leaving the cloud is a different story; that's not the case. Applications stay there. Data lakes, data infrastructure, object store, particularly, go to a colo.

Now, your applications from all the clouds can access this centralized—centralized, meaning that one object store you run in a colo, and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud; some of them, surprisingly, have a global customer base. And not all of them are cloud. For some applications, if you ask what type of edge devices or edge data centers they are running, they said it's a mix of everything.
What really matters is not the infrastructure. Infrastructure, in the end, is CPU, network, and drive. It's a commodity. It's really the software stack: you want to make sure that it's containerized and easy to deploy and roll out updates; you have to learn the Facebook-Google style of running a SaaS business. That change is coming.

Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.

AB: Right. Like, the competition is always great for customers. They get to benefit from it. So, the decentralization is a path to bringing—like, commoditizing the infrastructure. I think the bigger picture for me, what I'm particularly happy about is, for a long time we carried industry baggage in the infrastructure space.

No one wants to change, no one wants to rewrite applications, and as part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much baggage. All of that is getting thrown out. Like, the cloud players helped the customers start with a clean slate. I think, to me, that's the biggest advantage. And now that we have a clean slate, we can go on a whole new evolution of the stack, keeping it simpler, and everyone can benefit from this change.

Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And I guess, how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”

AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right: the open-source side as well as the business. But I don't see them as conflicting. If I run as a charity, right, like, I take donations (“if you love the product, here is the donation box”), then that doesn't work at all, right?

I shouldn't take investor money and I shouldn't have a team, because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor, why would—if I'm a customer, and the software is equal in functionality, if it's proprietary, I would actually prefer open-source and pay even more.

But why, really, are customers paying me now, and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community and we have strong views on what open-source means, right. That's why we call it free software. And free here means freedom, right? Free does not mean gratis, free of cost. It's actually about freedom, and I deeply care about it.

For me it's a philosophy and it's a way of life.
That's why I don't believe in open core and other models like that: holding back, giving crippleware, is not open-source, right? I give you some freedom but not all, right? Like, it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then add commercial support on top.

We built the product, we believed in open-source, we still believe, and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications, the AGPL license applies to the derivative works; they have to be compatible with AGPL because we are the creator. If you cannot open-source your application and derivative works, you can buy a commercial license from us. We are the creator; we can give you a dual license. That's how the business model works.

That way, the open-source community completely benefits. And it's about the software freedom. There are customers for whom open-source is a good thing, and they want to pay because it's open-source. There are some customers who want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way, I actually find open-source to be incredibly beneficial.

Open-source gave us trust, like, more than adoption rate. It's not just free to download and use. More than that, the customers that matter, the community that matters, can see the code and everything we did; it's not “because I said so,” with marketing and sales saying things and you believing whatever they say. You download the product, experience it, and fall in love with it, and then when it becomes an important part of your business, that's when they engage with us, because license compatibility and data loss or a data breach, all of that becomes important. I don't see open-source as conflicting for business. It actually is incredibly helpful. And customers see that value in the end.

Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?

AB: I was on Twitter, and now I think I'm spending more time on, maybe, LinkedIn. I think they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io; I'm always interested in talking to our user base.

Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.

AB: It's wonderful to be here.

Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
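(Editor's sketch: the per-profile endpoint Corey wishes for above is easy to script around even when a tool won't read it from config. Below is a minimal Python example with boto3 pointing standard S3 tooling at a hypothetical MinIO or Snowball-style endpoint; the URL, bucket, and credentials are placeholders, not anything from the episode. Worth noting, hedged because versions vary: newer AWS CLI and SDK releases have since added an endpoint_url setting that can live in a profile in ~/.aws/config, which addresses this exact complaint.)

import boto3

# Point standard S3 tooling at a local, S3-compatible endpoint
# (MinIO, a Snowball Edge, and so on). All values are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.internal:9000",   # hypothetical local endpoint
    aws_access_key_id="minioadmin",              # placeholder credentials
    aws_secret_access_key="minioadmin",
)

# From here on it is the plain S3 API: list buckets, upload an object.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
s3.upload_file("backup.tar.gz", "backups", "2023/backup.tar.gz")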

Oxide and Friends
On Silicon Valley Bank with Eric Vishria

Oxide and Friends

Play Episode Listen Later Mar 17, 2023 59:24


Eric Vishria of Benchmark and Oxide CEO, Steve Tuck, join Bryan and Adam to talk about Silicon Valley Bank, its role in the startup ecosystem, and the short- and long-term effects of its collapse.

We've been hosting a live show weekly on Mondays at 5p for about an hour, and recording them all; here is the recording from March 17th, 2023.

In addition to Bryan Cantrill and Adam Leventhal, we were joined by special guests Eric Vishria and Steve Tuck.

(Did we miss your name and/or get it wrong? Drop a PR!)

Curated chat log from the show:
davidf: Sharing this here because I loved every bit of it: My Startup Banking Story by Mitchell Hashimoto
ewen: 'The teller looks at the paper, then looks at me, then looks back at the paper, then asks "Are you the HashiCorp guy?"'

The Drunken Peasants Podcast
Tommy C & Matt Pitt Pay Us a Visit | 1161

The Drunken Peasants Podcast

Play Episode Listen Later Mar 11, 2023 190:23


Join Ben & Billy for episode 1161! Tonight, we'll be joined by Tommy C and Matt Pitt from SFTP!

Support the show and get 20% off your order by visiting https://bit.ly/3SkPjHq and using your code PEASANTS at checkout!

New PC Fundraiser: https://drunkenpeasants.betterworld.org/campaigns/help-drunken-peasants-get-new-co

Support our audio feed to get EXTRA content:
https://podcasts.apple.com/us/podcast/the-drunken-peasants-podcast/id1013248653
https://open.spotify.com/show/6eulbMV0APnJ5yNR8Jc3IM
https://bit.ly/SticherDrunkenPeasants

Streamlabs Link: https://streamlabs.com/drunkenpeasants/tip

*Google Calendar* https://calendar.google.com/calendar/embed?src=sund2qrenq20a2d5802cpp9i6k%40group.calendar.google.com&ctz=America%2FLos_Angeles
*iCal* https://calendar.google.com/calendar/ical/sund2qrenq20a2d5802cpp9i6k%40group.calendar.google.com/public/basic.ics
Integrate into your Calendar: http://bit.ly/DPTAPCalendar

SUPPORT US:
https://patreon.com/DP
https://bit.ly/BraveAppDP
https://bit.ly/BenBillyMerch
https://streamlabs.com/drunkenpeasants
https://youtube.com/DrunkenPeasants/join
https://subscribestar.com/DrunkenPeasants

PODSURVEY: https://podsurvey.com/peasants

SOCIAL MEDIA:
https://discord.gg/2fnWTbE
https://fb.com/DrunkenPeasants
https://twitch.tv/DrunkenPeasants
https://twitter.com/DrunkenPeasants
https://podcasts.apple.com/us/podcast/the-drunken-peasants-podcast/id1013248653
https://open.spotify.com/show/6eulbMV0APnJ5yNR8Jc3IM
https://bit.ly/SticherDrunkenPeasants
https://bit.ly/DPUnderground
http://bit.ly/DPTAPCalendar

BEN: https://bit.ly/BenpaiYT

BILLY THE FRIDGE:
https://youtube.com/Overweight
https://twitter.com/BillyTheFridge
https://instagram.com/BillyTheFridge

PO BOX:
The Drunken Peasants
1100 Bellevue Way NE
Ste 8A # 422
Bellevue, WA 98004
Be sure to put the name on the package you send as "The Drunken Peasants". If you would like to send something to a certain peasant, include a note inside the package with what goes to who.

SPECIAL THANKS:
https://twitter.com/GFIX_
https://twitter.com/SYNJE_Grafx
https://twitter.com/MarshalManson
https://berserkyd.bandcamp.com
https://youtube.com/channel/UC9BV1g_9Iq67_yCyj5AX_4Q

DISCLAIMER: The views and opinions expressed on our show by hosts, guests, or viewers, are their own and do not necessarily reflect those of Drunken Peasants.

Hacker Public Radio
HPR3795: 2022-2023 New Years Show Episode 1

Hacker Public Radio

Play Episode Listen Later Feb 17, 2023


Episode #1
Welcome to the 11th Annual Hacker Public Radio show. It is December the 31st 2022 and the time is 10 hundred hours UTC. We start the show by sending Greetings to Christmas Island/Kiribati and Samoa Kiritimati, Apia. Chatting with Honkey, Mordancy, Joe, Ken, and others. Discussed: pi hole, podman, RPIs, Pfsense, and netminers new micro pc. Introduction by Ken and Honkey.

History: The New Years Celebrations. Civilizations around the world have been celebrating the start of each new year for at least four millennia. Today, most New Year’s festivities begin on December 31 (New Year’s Eve), the last day of the Gregorian calendar, and continue into the early hours of January 1 (New Year’s Day).
HPR: So you want to do a podcast?
Wikihow: How to make a good podcast.
Death Wish Coffee: We lead with an alternative point of view, providing bold, smooth cups of coffee to our people. We find fresh ways to enjoy coffee, and we foster community along the way. Disrupting the status quo interests us, so we create edgy, sarcastic content. We live to rebel against blah beans—and a boring, lackluster life.
Thailand Elephant Sanctuary
VLC commandline: List of commands and arguments.
VLC commandline: Documentation.
VLC commandline: Audio streaming from the commandline.
pavucontrol: PulseAudio Volume Control.
Hearse Club
youtube: MotorWeek Over the Edge: Hearse Convention.
xiph: The Ogg container format. Ogg is a multimedia container format, and the native file and stream format for the Xiph.org multimedia codecs. As with all Xiph.org technology, it is an open format free for anyone to use.
Library of Congress: .ogg file format.
Wikipedia: .mp3 file format.
xiph: .flac file format. FLAC stands for Free Lossless Audio Codec, an audio format similar to MP3, but lossless, meaning that audio is compressed in FLAC without any loss in quality. This is similar to how Zip works, except with FLAC you will get much better compression because it is designed specifically for audio, and you can play back compressed FLAC files in your favorite player (or your car or home stereo, see supported devices) just like you would an MP3 file.
Wikipedia: .flac file format.
elephantguide: How Much Can An Elephant Lift?
Royal Thai Embassy: Thailand’s wild tiger population shows impressive growth.
bangkokpost: Thailand has highest number of wild tigers in Southeast Asia.
mumble: Mumble is a free, open source, low latency, high quality voice chat application.
atpinc: What is M.2? Keys and Sockets Explained.
armbian: Linux for ARM development boards.
pine64: ROCK64 is a credit card sized Single Board Computer.
docker: realies/nicotine.
kubuntu: Kubuntu is a free, complete, and open-source alternative to Microsoft Windows and Mac OS X which contains everything you need to work, play, or share. Check out the Feature Tour if you would like to learn more!
podman: Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System.
docker: A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
Containers and VMs Together?
cockpit: Cockpit is a web-based graphical interface for servers, intended for everyone. Manage virtual machines in Cockpit.
etherpad: big boy show notes.
redhat: Transitioning from Docker to Podman.
lugcast: We are an open Podcast/LUG that meets every first and third Friday of every month using mumble.
logitech: G435 Ultra-light Wireless Bluetooth Gaming Headset.
fit philosophy: Junk volume. "Junk volume" refers to exercise that doesn't improve strength or build muscle, wasting your time and energy.
Leg day workout
jitsi: Jitsi Free & Open Source Video Conferencing Projects.
mintCast: The podcast by the Linux Mint community for all users of Linux.
The Linux Link Tech Show: The Linux Link Tech Show is one of the longest running Linux podcasts in the world.
PETG 3D Printing Filament.
MIM-104 Patriot: military-today: The Patriot is a long-range air defense missile system.
samsclub:
rancher:
suse rancher:
raspberrypi: single board computers.
pfsense: pfSense is a firewall/router computer software distribution based on FreeBSD.
snort: Snort is the foremost Open Source Intrusion Prevention System (IPS) in the world. Snort IPS uses a series of rules that help define malicious network activity and uses those rules to find packets that match against them and generates alerts for users. Snort can be deployed inline to stop these packets, as well. Snort has three primary uses: As a packet sniffer like tcpdump, as a packet logger — which is useful for network traffic debugging, or it can be used as a full-blown network intrusion prevention system. Snort can be downloaded and configured for personal and business use alike.
pi-hole: In addition to blocking advertisements, Pi-hole has an informative Web interface that shows stats on all the domains being queried on your network.
nlnetlabs: Unbound. Unbound is a validating, recursive, caching DNS resolver. It is designed to be fast and lean and incorporates modern features based on open standards.
DHCP server
dietpi: DietPi is an extremely lightweight Debian OS, highly optimised for minimal CPU and RAM resource usage, ensuring your SBC always runs at its maximum potential.
servethehome: Project Tiny Mini Micro, cool 1 liter pc builds.
filezilla: The FileZilla Client supports FTP, FTP over TLS (FTPS), and SFTP.
redhat: Configure a Network Team Using the Text User Interface, nmtui.
howtogeek: Manage Linux Wi-Fi Networks With Nmtui.
travelcodex: The Southwest Airlines Meltdown.
gpd
kickstarter: Arduboy, the game system the size of a credit card.
pine64: Pinetab 2.
orangepi: Orange Pi 800, Mini PC in a keyboard.
southeastlinuxfest: The SouthEast LinuxFest is a community event for anyone who wants to learn more about Linux and Open Source Software.
fosdem: FOSDEM is a free event for software developers to meet, share ideas and collaborate.
stallman: Richard Stallman's Personal Site.
freedos: FreeDOS is a complete, free, DOS-compatible operating system. While we provide some utilities, you should be able to run any program intended for MS-DOS.
reactos: Imagine running your favorite Windows applications and drivers in an open-source environment you can trust.
wikipedia: Windows 3.0.
winehq: a compatibility layer capable of running Windows applications on several POSIX-compliant operating systems, such as Linux, macOS, & BSD.
codeweavers:
playonlinux: PlayOnLinux is a piece of software which allows you to easily install and use numerous games and apps designed to run with Microsoft® Windows®.
protondb: Proton is a new tool released by Valve Software that has been integrated with Steam Play to make playing Windows games on Linux as simple as hitting the Play button within Steam.
libreoffice: LibreOffice is a free and powerful office suite.
linuxmint: Linux Mint is a community-driven Linux distribution based on Ubuntu, bundled with a variety of free and open-source applications.
xfce: Xfce or XFCE is a free and open-source desktop environment for Linux and other Unix-like operating systems.
crunchbang: CrunchBang was a Debian GNU/Linux based distribution offering a great blend of speed, style and substance.
openbox:
gnome:
mozilla: firefox
google chrome
AMD
autism
toastmasters: Toastmasters International is a nonprofit educational organization that teaches public speaking and leadership skills through a worldwide network of clubs.
openssl
Asperger syndrome
STEM
BASIC: BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963.
IRC: IRC is short for Internet Relay Chat. It is a popular chat service still in use today.
second life
walmart
aldi
morrisons
boots
walgreens
zulu clock

Thanks To:
Mumble Server: Delwin
HPR Site/VPS: Joshua Knapp - AnHonestHost.com
Streams: Honkeymagoo
EtherPad: HonkeyMagoo
Shownotes by: Sgoti and hplovecraft

XenTegra - IGEL Weekly
IGEL Weekly: How to deploy IGEL OS firmware and custom partitions via Azure sftp

XenTegra - IGEL Weekly

Play Episode Listen Later Jan 24, 2023 48:53 Transcription Available


Written by Edwin ten Haaf, IGEL Community Member

More and more of our IGEL customers want to facilitate work from anywhere. In the office, they were familiar with using #IGELOS-driven devices. By providing end users with notebooks running IGEL OS or UD Pocket (IGEL on a stick), users can safely and easily connect to their virtual workplace. The management backend (UMS) manages these devices on the local network. Remote devices connect to the ICG, the management backend connects to the ICG, and the devices can be managed as if they were local. If your ICG is installed and configured well, you can now manage your devices outside the office: deploy and update profiles, and support users with shadow functionality. One important part that has to be configured separately is the distribution of IGEL OS firmware and Custom Partitions (additional software running on IGEL OS, like MS Teams and Zoom). For this, you need to point your devices to a remote https/sftp location. Please read here how.

Host: Andy Whiteside
Co-host: Sebastien Perusat
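(Editor's sketch: the episode doesn't publish the exact UMS settings, but the remote location it describes is just an SFTP endpoint, so it can be smoke-tested with a few lines of Python and paramiko, a third-party SSH library. The host name, the account.container.user login format that Azure Blob Storage's SFTP support uses, and the paths below are all assumptions to adapt, not values from the show.)

import paramiko

HOST = "mystorageacct.blob.core.windows.net"   # assumed Azure Blob SFTP endpoint
USER = "mystorageacct.igel.firmwareuser"       # assumed account.container.user format
PASSWORD = "..."                               # the SFTP local user's password

# Open an SFTP session the same way a device (or an admin) would.
transport = paramiko.Transport((HOST, 22))
transport.connect(username=USER, password=PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)

# List what devices will see when they fetch firmware and custom partitions.
for name in sftp.listdir("firmware"):
    print(name)

sftp.close()
transport.close()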

The Nonlinear Library
EA - Some observations from an EA-adjacent (?) charitable effort by patio11

The Nonlinear Library

Play Episode Listen Later Dec 9, 2022 12:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some observations from an EA-adjacent (?) charitable effort, published by patio11 on December 9, 2022 on The Effective Altruism Forum.

Hiya folks! I'm Patrick McKenzie, better known on the Internets as patio11. (Proof.) Long-time-listener, first-time-caller; I don't think I would consider myself an EA but I've been reading y'all, and adjacent intellectual spaces, for some time now.

Epistemic status: Arbitrarily high confidence with regards to facts of the VaccinateCA experience (though speaking only for myself), moderately high confidence with respect to inferences made about vaccine policy and mechanisms for impact last year, one geek's opinion with respect to implicit advice to you all going forward.

A Thing That Happened Last Year

As some of the California-based EAs may remember, the rollout of the covid-19 vaccines in California and across the U.S. was... not optimal. I accidentally ended up founding a charity, VaccinateCA, which ran the national shadow vaccine location information infrastructure for 6 months. The core product at the start of the sprint, which some of you may be familiar with, was a site which listed places to get the vaccine in California, sourced by a volunteer-driven operation to conduct an ongoing census of medical providers by calling them. Importantly, that was not our primary vector for impact, though it was very important to our trajectory.

I recently wrote an oral history of VaccinateCA. It may be worth your time. Obligatory disclaimer: I'm speaking, there and here, in entirely my personal capacity, not on behalf of the organization (now wound-down) or others.

A brief summary of impact: I think this effort likely saved many thousands of lives at the margin, at a cost of approximately $1.2 million. This feels remarkable relative to my priors for cost of charitably saving lives at scale in the US, and hence this post.

Some themes of the experience I think you may find useful:

Enabling trade as a mechanism for impact

To a first approximation, Google, the White House, the California governor's office, the Alameda County health department, the pharmacist at CVS, and several hundred thousand other actors have unified values and expectations with regards to the desirability of vaccinating residents of America against covid-19. They are also bad at trading with each other. Pathologically so, in many cases.

One of the reasons we had such leveraged impact is that we didn't have to build Google, or recruit a few hundred million Americans to use it every day. We just had to find a very small number of people within Google and convince them that Google users would benefit from seeing our data on their surfaces as quickly as possible.

Google and large national pharmacy chains cannot quickly negotiate an API, even given substantial mutual desire to do so. As it turns out, pharmacists already have a data store—pharmacists—and a transport layer—the English language spoken over a telephone call—and if you add a for loop, a cron job, and an SFTP upload to that then Google basically doesn't care about pharmacy chain IT anymore.
Repeat this by many other pairwise interactions between actors within an ecosystem, and we got leveraged impact through their ongoing operations, with a surprising amount of insight into (and perhaps some level of influence upon) policy decisions which your prior (and my prior) would have probably suggested "arbitrarily high confidence that that is substantially above your pay grade." We didn't have to be chosen by e.g. the White House as the officially blessed initiative. We just had to find that initiative and be useful to it. (Though, if—God forbid—I ever have to do this again, I would give serious consideration to becoming the national initiative prior to asking for permission to do so and then asking the White House whether the ...
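(Editor's sketch: the "for loop, a cron job, and an SFTP upload" glue described above, in miniature. Everything here is hypothetical, since the post doesn't publish the real pipeline; the host, credentials, fields, and paths are stand-ins, and paramiko is a third-party SSH/SFTP library.)

import csv
import paramiko

# The "for loop": flatten the day's phone-census results into a CSV.
calls = [
    {"location": "Example Pharmacy #12", "has_vaccine": "yes", "phone": "555-0100"},
    {"location": "Example Pharmacy #34", "has_vaccine": "no", "phone": "555-0101"},
]
with open("availability.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["location", "has_vaccine", "phone"])
    writer.writeheader()
    writer.writerows(calls)

# The "SFTP upload": push the file where a partner's ingest job can poll it.
# A crontab line such as "0 * * * * python3 push.py" supplies the "cron job".
transport = paramiko.Transport(("sftp.partner.example", 22))
transport.connect(username="vaccinateca", password="...")
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("availability.csv", "/inbound/availability.csv")
sftp.close()
transport.close()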

Your Cyber Path: How to Get Your Dream Cybersecurity Job
EP 84: The CIA triad - The Basis of Cyber Security (Confidentiality...and How These Are Used in Our Daily Careers)

Your Cyber Path: How to Get Your Dream Cybersecurity Job

Play Episode Listen Later Nov 25, 2022 28:53


https://www.yourcyberpath.com/84/

In this short episode, Jason and Kip discuss the first aspect of the CIA Triad, which is Confidentiality. They break down the critically important confidentiality point and how it works in the real world, highlighting that it's not about the information itself but more likely about where that information is in the flow. They also mention how confidentiality is brought up in certification exams and how it's always connected to encryption. They finish up by doing some mock interview questions about things like secure erase, encryption, and secure file transfer to simulate situations that you could face when applying for cybersecurity jobs.

What You'll Learn
●    What are the three states of data?
●    What questions related to confidentiality could you meet in your certification exams?
●    What interview questions could you get on confidentiality and how to answer them perfectly?
●    What is the difference between SFTP and FTPS?

Relevant Websites For This Episode
●   https://www.yourcyberpath.com/

Other Relevant Episodes
●    Episode 62 - The NIST Cybersecurity Framework
●    Episode 56 - Cybersecurity careers in the Defense sector
●    Episode 80 - Risk Management Framework with Drew Church
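(Editor's sketch of that last interview question in code: SFTP is the SSH file-transfer subsystem, one encrypted TCP connection, usually on port 22, while FTPS is classic FTP wrapped in TLS, with separate control and data channels. Hosts and credentials below are placeholders; paramiko is a third-party SSH library, and FTP_TLS ships in Python's standard library.)

import ftplib
import paramiko

# SFTP: file transfer as a subsystem of an SSH session (port 22).
t = paramiko.Transport(("files.example.com", 22))
t.connect(username="analyst", password="...")
sftp = paramiko.SFTPClient.from_transport(t)
sftp.get("/reports/q3.csv", "q3.csv")
sftp.close()
t.close()

# FTPS: the old FTP protocol with its channels upgraded to TLS (port 21).
ftps = ftplib.FTP_TLS("files.example.com")
ftps.login("analyst", "...")
ftps.prot_p()  # encrypt the data channel as well, not just the control channel
with open("q3_ftps.csv", "wb") as f:
    ftps.retrbinary("RETR /reports/q3.csv", f.write)
ftps.quit()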

The Cloud Pod
186: Google Cloud Next, More Like Google Cloud Passed

The Cloud Pod

Play Episode Listen Later Oct 31, 2022 72:24


On The Cloud Pod this week, Amazon EC2 Trn1 instances for high-performance model training are now available, 123 new things were announced at Google Cloud Next ‘22, several new Azure capabilities were announced at Microsoft Ignite, and many new announcements were made at Oracle CloudWorld.

Thank you to our sponsor, Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you're having trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.

Episode Highlights
⏰ Amazon EC2 Trn1 instances for high-performance model training are now available.
⏰ 123 new things were announced at Google Cloud Next ‘22.
⏰ Several new Azure capabilities were announced at Microsoft Ignite.
⏰ Many new announcements from Oracle CloudWorld.

Top Quote

DekNet
SFTP sobre SSH

DekNet

Play Episode Listen Later Sep 9, 2022 34:54


https://t.me/+ZTPOqXWVV2M4NTM8

The Tech Authority Podcast
30 Apps IT Professionals use daily - WinSCP

The Tech Authority Podcast

Play Episode Listen Later Jun 18, 2022 1:40


WinSCP is an open source free SFTP client, FTP client, WebDAV client, S3 client and SCP client for Windows. Its main function is file transfer between a local and a remote computer.
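(Editor's note: beyond the GUI, WinSCP also ships a console scripting mode, winscp.com, that is handy for unattended transfers. A rough sketch of driving it from Python follows; the install path, host, credentials, and host-key fingerprint are placeholders to replace with your own, and WinSCP will refuse to connect without a valid fingerprint.)

import subprocess

cmd = [
    r"C:\Program Files (x86)\WinSCP\winscp.com",
    "/ini=nul",        # don't read or touch the GUI's saved configuration
    "/log=upload.log",
    "/command",
    'open sftp://backup:s3cret@files.example.com/ -hostkey="ssh-ed25519 255 xxxx..."',
    r"put C:\exports\nightly.zip /incoming/",
    "exit",
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
result.check_returncode()  # a non-zero exit code means the transfer failed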

Screaming in the Cloud
Hard Charging Software onto the AWS Marketplace with David Gatti

Screaming in the Cloud

Play Episode Listen Later Mar 15, 2022 35:53


About David
David is an AWS expert who likes to design and build scalable solutions that are fully automated and take care of themselves. Now he is focusing on selling his own products on the AWS Marketplace.

Links:
0x4447: https://0x4447.com/
Products page: https://products.0x4447.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Today's episode is brought to you in part by our friends at MinIO, the high-performance Kubernetes native object store that's built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work. Getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that's exactly what MinIO offers. With superb read speeds in excess of 360 gigs and a 100 megabyte binary that doesn't eat all the data you've gotten on the system, it's exactly what you've been looking for. Check it out today at min.io/download, and see for yourself. That's min.io/download, and be sure to tell them that I sent you.

Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's promoted episode is brought to us by 0x4447. And my guest today is David Gatti, their CEO. David, thank you for taking the time to speak with me today.

David: Thank you for getting me on the show.

Corey: One of the things that I find fascinating about what you do and where you come from is that for the last five years, you've been running an independent company that I would classify, based upon our conversations, as pretty close to a consultancy. However, you've gone down the path that I didn't when I set up my own consultancy, and started actually selling software—not just software: solutions—as a packaged thing that you can wind up doling out to various customers, whereas I just went with the very high touch approach of, “Oh, let me come in and have a whole series of conversations with people.” Your scale is a heck of a lot more. So, do you view yourself these days as a software company, as a consultancy, or something else entirely?

David: So, right now, I did put aside the consultancy because, yeah, one thing that I realized: it's possible, but it's very hard to scale, and it's also hard to find people at the same level.
So yeah, the scalability of the business is quite hard, whereas with software sold on the AWS Marketplace, that is much easier to scale than what I was doing before, and that's why I decided to take a break from consulting and focus one hundred percent on the products that I sell on the AWS Marketplace, to see how this goes, how it actually works, and whether a business can be built around it.

Corey: The common wisdom that I've encountered is that consulting, especially when you're doing it yourself, is one of those things that is terrific when you find yourself in the position that I originally did of your employer showing up and, “Knock, knock,” “Who's there?” “Not you anymore. Get out.” And there's a somewhat, in my case, limited runway as far as how long I've got before I have to go find another job. With consulting, you can effectively go out and start talking to people, and provided that you can land a project, it starts throwing off revenue basically immediately, whereas building software, building packages, things that you end up selling to people, it's almost like a real estate business on some level, where you have to take a lot of investment up front to wind up building the thing, because no one is, generally speaking, going to pay you spec work to go ahead and build something for 18 months and come back and hope that it works.

David: Right.

Corey: I also bias towards the services because I'm bad at writing code. You, on the other hand, write things that seem to actually work, which is another refreshing difference.

David: Yes. So, I did that, but now I have a guy that is just a Linux expert. So, you were saying that there is a high investment in the beginning, but what actually happened in my case: I've been selling these products for the past three years basically as a hobby. So, when I was doing AWS consulting, I was seeing, like, a company has a problem, a repeating problem, so I was just creating a product, putting it on the Marketplace, and then sending it to them. So basically, they had a situation where I could manage those products, update them when there's a need to do an update, and there was always standardization behind that, right?

So, if they had, you know, five SFTP servers and there was a need to make an update, I was making the update on my image, putting it on the Marketplace, and then updating all those servers in one go, in a much quicker fashion than managing them one by one, right? And so I had this thing for three years. So now, when I started doing this full-time, I have a little bit of a leap on what's going on. So, I already had a bunch of clients that are using the products, so that actually helped me not to have to wait three years before I saw any revenue coming in.

Corey: I always thought that the challenge behind building something like this was that, well, you needed to actually be conversant in a programming language; that was the thing that you needed to package and build these things. But I take a look at what you have on the AWS Marketplace—and I will throw a link to this in the [show notes 00:04:39]—but you offer right now four different offerings: an Rsyslog server, a Samba server, a VPN server, and an SFTP server, and every one of those four things, back in my DevOps days, I built and implemented on AWS, generally either from scratch or from something in the Marketplace—and I'll get to that in a bit—that didn't really meet a variety of needs.
And every single time I built these things, it drove me up a wall because I had to do this by, like, solving a global problem locally, myself, to meet some pile of needs; then I had to worry about the maintenance of the thing, making sure that the care and feeding continued to work. And it just wasn't—it didn't work for me in the way that I wanted it to. It never occurred to me that I really could have just solved this whole thing once, [unintelligible 00:05:28] it on the Marketplace, and then just gone and grabbed the thing.

David: Exactly. So, that was my exact thinking here. Especially when you work with a client, this [unintelligible 00:05:38] was also a great [idea 00:05:39], because when you work with clients, they want to do things as fast as possible, right? So, they can say, “I need an SFTP server.” Of course, it takes, you know, half a day to set up something, but then they scream at you and say, like, “Hey, do the next thing. Do the next thing. Do the next thing.” And you never end up configuring the server in a reliable way; sometimes you misconfigure it because, oh, I forgot this option, and now everybody on the internet can access the server itself.

Corey: Wait, screw up a server config? That doesn't sound like something I would do.

David: Well, of course not.

Corey: Yeah, no one [unintelligible 00:06:08] they're going to until oops.

David: Yes. You're amazing and you're perfect, of course, but I'm not. And I was seeing, like, oh, you know, in the middle of the night, oh, I forgot this option. I forgot this. I forgot that.

And so there was never, basically, one place where the configuration was just correct, right? And that was something that sparked my idea when I realized the Marketplace exists. It's like, oh, wait a moment, I can spend a few weeks to do it right, put it there, and never worry about it again. And so when a client says, like, “Hey, I need this,” I can deploy it literally in less than one minute. You have any of the products that I'm selling up and running, right?

And of course, the VPN is going to be a little bit slower because it needs to generate all the certificates at the beginning, but, for example, the SFTP one is just poof: you deploy it with our CloudFormation file, provide a username and password, and you're up and running. And I see, for example, this thing with clients, which sometimes it's funny, where there are two clients that use the SFTP server only once a day for one hour. So, every day it's like one new instance created, then one instance removed, and one instance created and one instance removed. And so it keeps on going like that.

Corey: The thing that always drove me nuts about building these things out was first I had to go and find something, on those rare occasions where I used the Marketplace. Again, I wasn't really working in the same modern Marketplace that we think of today when we talk about the AWS Marketplace. It was very early on; the only way that it would deliver software was via, “Here's an AMI, grab the thing, and go ahead and deploy it, and it's going to have an additional hourly cost on it. The end.” And more or less the whole Henry Ford approach of, “Oh, you can get it in any color you want, as long as it's black.”

So, back in those days, I would spin up an OpenVPN server—and I did this at several companies—I would go and find the thing on the Marketplace from, I think it was, the OpenVPN company behind the project. Great, I grabbed the thing, it had no additional cost through the Marketplace.
I then had to go and get a custom license file from the vendor themselves, load the thing in, then start provisioning users. And this had no integration that I could discern with anything else we had going on, so all of this stuff was built through the web config on this thing; there was no facility for backing the thing up—certificate material, et cetera, et cetera—so if something happened to that instance or that image, or we had to go through a DR exercise, well, time to reprovision everyone by hand again. And it was annoying because the money didn't matter. At a company scale, it really doesn't for something like this, unless you're into the usurious ranges. It does not matter.

It's the, I want to manage this simply and effectively in a way that makes sense, and in many cases in a way that is congruent with our on-prem environment. So, “Oh, there's a custom AWS service that offers something kind of like this. Use that instead.” It's, yeah, I don't like the idea, personally, of having to use a higher-level managed service that I'm very often going to need the most right when things are getting wonky during an outage scenario. I want something that I understand and can work with.

And I've always liked, even if I have all the latest whiz-bang accesses into an environment, in production environments, I spin up something like this anyway, just to give myself a backdoor in the event that everything else breaks. And I really like how you've structured your VPN server as far as backing up its config, sharing its configs, you can scale it to more than one instance—what a ridiculous concept that is—and so on and so forth.

David: So, it's not more than one—I mean, yes, you can deploy it more than one time, but the thing that—because again, as you were saying, companies don't care about the cost, right? It's more about how annoying it is to use and set up, right? And so I'm one of those people that—I've been playing with servers since the '90s, right, and I kept rebuilding and recreating everything every single time from scratch.

And, yeah, it was always painful. It always took a lot of time. For example, our server took six months to set up the right way. And also the pricing [unintelligible 00:10:11] the competition has is quite aggravating, I will say. Like, it's very hard to scale above a certain point, especially for the midsize companies.

And the goal with the Marketplace is also, like, to make it as simple as possible. Because AWS itself doesn't make it easy to be on the Marketplace, and it's almost, like, crazy how hard it is. So, for anybody who might think, like, “Oh, I would like to try this AWS Marketplace thing,” I would say do it, but be super patient. You cannot rush it, because it's going to take you on average six months to understand even the process of uploading anything, updating it, and managing it, because the website that they've built has nothing to do with the console; it's a completely custom solution that is very clunky and still very old-fashioned in how you have to manage it.

Corey: Tell me more about that. I've never gone through the process of putting something up on the Marketplace. To my understanding, you need to be an AWS partner in order to use the Marketplace, correct?

David: No, you don't have to.

Corey: Okay.

David: No. Thankfully not. I hope this thing is not going to change. [crosstalk 00:11:20]—

Corey: Yeah.
I wound up manifesting it into existence by saying that. Yeah. If you're on the Marketplace team listening to this, don't do that, please. I really don't want to get yelled at and have made things worse for people.

David: Don't give them ideas. [laugh]. Okay?

Corey: Exactly.

David: No, anybody can do it. But yeah, how to add a new product: so, the process is you have to build an AMI first. And then you have to submit the AMI to AWS by first creating a special AMI role—sorry, I always get confused, AMI, [IAM 00:11:51], I never—IAM is users. Okay.

Corey: I think we have a few more acronyms that use most of the same letters. I think that's the right answer here.

David: [laugh]. So, either IAM or AMI, whichever is responsible for roles: you have to create a special role to give AWS access to your AMI. Then you submit the image to AWS, providing the role that they have to use. They scan it and they do simple checks to make sure that you don't, for example, have SSH enabled with regular users, and do some regular scanning to make sure that you're not using an image from ten years ago, right, of Linux. And once you pass that, you are able to actually create your first product.

Then you have to write your title and description, and provide, for example, the ports that need to be open, the URLs to separate resources, the pricing page, which takes on average one hour to fill out, because let's say that you have 20 instances that you support: for every instance, you have to write the price for that instance per one hour. Then if you want to have a discount of, let's say, 20%—because you can set it by the hour, or someone can pay you for the full year. And so for the full year, you might have a discount. So, you have to have also the price per hour discounted by the percentage that you want, and then you have to repeat it 40 times. Because there is no way to upload that.

Corey: That feels like the internal AWS billing system in some respects. “Well, if it's good enough for us, it's good enough for our customers.” And—

David: [laugh]. Exactly.

Corey: —now, I have empathy for the folks in the billing system internally; their job is very hard, but that doesn't mean that it's okay to wind up exposing those sharp edges to folks who are, you know, paying customers of these things.

David: Right. And it'd be a simple thing, like being able to import a CSV file with just two columns, and that would be perfect. But no, you have to do it by hand. There is no other way. So hopefully—

Corey: Or someone has to. Welcome to the crappiest internship of your life.

David: Exactly.

Corey: It feels like bringing people into data entry for stuff like that is cheating.

David: Exactly. So, you do that, and then, I don't remember exactly what the other steps are to creating a completely new product, because I did that three years ago and now I've been just updating those products; but yeah, then they have to review your submission, and once everything is okay, then your product is on the Marketplace and you can already accept everything. If you, for example, want to have the image also available in some specific regions that are not the default ones, you have to enable this by hand. I don't remember anymore how, but it's not obvious.

Corey: And you have to keep redoing this every time they launch a new region as well, I would imagine.

David: So, they say that you can enable the option to automatically add it, but it still won't work. Well, it will work, but… let's say, so in my case, I'm using CloudFormation.
I gave a complimentary CloudFormation file where, if you want to deploy my product, you go to the documentation page, you click the orange button, you basically provide the parameters, and you click next, next, next, and the product is deployed within a few minutes.

And in that CloudFormation file, I have a map of every AMI in every region. Okay? So, if they add a new region and they automatically add the AMI there, then if you don't get notified that there is a new region, you don't know that you have to update the CloudFormation file, and then someone might say, like, “Hey, David, why is this product not deployed in this region?” It's like, “Oops. I didn't know that I had to update the CloudFormation file with a new region.” Right?

Corey: Yeah, I'm a big believer in ClickOps, the idea of doing things in the console, but everything you're talking about sounds like a fraught enough process that I'm guessing you have some form of automation that helps you with a lot of this.

David: Yeah. So, I hate repeating anything more than once, so everything in my book is automated as much as possible. The documentation, for example, how I structure it: there is a section that tells you how to deploy it by just using the CloudFormation file and clicking next, next, next, next until you have it. And then there's also the option if you want to deploy manually, because you don't trust what the CloudFormation file is doing, right? Of course, you can see the source file if you wanted to, but sometimes people are a little bit wary about big CloudFormation files.

In any case, I have this option, but they have this option as a separate thing. So, AWS has an option where you could add a CloudFormation file that goes with your product. The problem is that to be able to submit a CloudFormation file natively, so they will take care of it, requires you to get Microsoft Office 365. Because they give you an Excel file that has, I think, a few thousand columns. And, for example, Numbers under [unintelligible 00:16:40], when you export, you save the final—or sorry, you export it—it will cut around 500 columns. So, you miss, like, two-thirds of what AWS sends you. And why they do that, I have no idea. I don't know if they still do it after three years, but when I was doing it, they told me, like, “Hey, this is the file. Fill it by hand.”

Corey: About that time period, that was exactly how they did large-scale corporate discounts on custom contracts: they would edit the AWS bill in Excel, or if not, the next closest thing to it, because there were periodic errors that looked an awful lot like someone typo-ing something by hand.

David: What—

Corey: Computers are generally bad at doing that, and it took an extra couple of weeks to get those bills, which is right around the speed of a human.

David: Wow.

Corey: I see none of those problems anymore, which tells me, that's right, someone finally upgraded off of Microsoft Excel to the new level. Probably Airtable.

David: [laugh]. Maybe. So, I don't know if that process is still there, but then I realized: oh, wait a moment, I can just have a CloudFormation file in an S3 bucket, publicly available, and just use that instead of going through that process. Because I didn't want to pay on a yearly basis for a product that I'm going to use literally once a year. That didn't make any sense to me, and so I decided I'm going to do it this way.
That's why, yeah, if they add a new region, I have to go out and update my own CloudFormation file, because I maintain that myself, whereas they would maintain it for me, I guess.

Corey: The way that I see all of the nuts and bolts of the engineering parts of getting all these things up and running on the Marketplace, it feels like it is finicky; it has the sharp edges that AWS is basically known for in many respects, but without the impetus of making that meaningfully better, just because there isn't an overriding business reason—it's not like there's a good competitor for something like this. So, if you want to sell things to AWS people in the most frictionless way possible, where it reflects on the AWS bill, causes discounting, counts for their spend commitments, and the rest, the AWS Marketplace is really the only game in town for a lot of that.

David: Right. So, I don't know if they don't do it because they don't have enough competition or pressure, because to me, when I first started doing this AWS Marketplace, it felt more like Amazon than AWS, right? It feels more like an Amazon team was behind it and not people from AWS itself. It felt like something completely different. Not to mention, yeah, the console that they provide is something completely custom that has nothing to do with the typical AWS console.

Corey: I've heard stories about the underpants store division's seller tools as well; very similar to the experience you're describing.

David: Mmm. And also the support is different. So, it's not connected to the AWS console one. The good thing about it: it's free. But it's also only by email. And so yeah, it's a very weird, clunky situation where, I mean, I'm someone that, I guess, loves the pain of AWS. [laugh].

I don't know if that's a good thing or a bad thing. But when I started, I decided, you know what, I'm going to figure it out, and once I do, I'm going to feel happy that I was able to. Maybe that's their goal: it's to give us purpose in life. So, maybe that's the goal of AWS. I don't know.

Corey: There are times I really wonder about that, where it feels like it could be so much more than it is, but it's not. And, again, my experience with it is very similar to what you've described, where it's buying an AMI, the end. But now they're talking about selling SaaS subscriptions on it, they're talking about selling professional services—in some cases—on it. And effectively, it almost feels like it's trying to become the Marketplace through which all IT transacting starts to happen. And the tailwind that sort of is giving energy to a lot of those efforts is, if you have a multimillion-dollar spend commitment with AWS in return for discounting, you have to make sure you spend enough within the timeframe, and 50% of all spend on the AWS Marketplace counts toward that.

Now, other cloud providers, it's 100% of spend, but you know, AWS is nothing if not very tight with the dollar. So okay, fine, whatever. There's a reason for companies to go down that path. Talk to me a little bit about the business aspect of it, because for me, it seems like the clear win, in the absence of anything else, is—especially at larger companies—they already have a business relationship with AWS. The value to someone selling software on the Marketplace feels like it would be, first and foremost, an end-run around companies' procurement departments.
Other than the technical challenges of getting things up and running on it, how have you found that it works as far as getting in front of additional customers, as far as driving adoption? You could theoretically, I imagine, have not gone down the Marketplace route at all and just sold this directly on your website, click here to buy a license file, the way that a lot of stuff I used to buy did as well, and that would have cut out a lot of the painful building-an-AMI-and-putting-it-into-the-Marketplace story. What's the value of being in the Marketplace?

David: Yeah, so in the beginning, the value was basically that it's on the Marketplace. As I was saying, I was using it with pre-existing clients, so it was easy for me because I knew the AWS images were there. So, it was easy to just click my own CloudFormation file and tell the client after one minute, “Hey, it's up and running. You have a bunch of profiles for your VPN. Enjoy and have fun.” Right?

That experience, once you have it on the Marketplace, is nice because it just works, and you don't have to do much work. Then I realized that AWS, in the search bar in the console—when you were typing, for example, you know, EC2, S3, CloudFormation, to find the service—what they were doing originally is, when you were typing in the search bar, you were getting the AWS services first, and then, when there was nothing left, they were showing the results from the Marketplace. Which was basically amazing, because you have primetime in the console with your product, you had to do zero marketing, and every week you got new clients that are using your product. And the trend was growing pretty, pretty well.

And that was a proposition that is just amazing. Like, nobody has that, because you can have Fortune 500 companies using your product without doing anything. It just—is it simple to deploy? Yes. Does it provide value? Is the price great? And people were just using them. Fast forward to now; what happened is AWS changed the console. And instead of showing the Marketplace after the services, now they show the sub-sections of the services, they show the results from the blog, the articles, videos, whatever, I don't even know what they've put there—

Corey: Originally, you could search my name in that search bar, and it would pop up a profile of me they did for re:Inforce in the security blog.

David: [laugh]. There you go.

Corey: “Meet Corey Quinn. A ‘cloud economist'—scare quotes and all—who does not work here.” And it was glorious. Now, they've changed the algorithm so it pops up, “Oh, you want Corey Quinn, you must mean IoT Core.” So, that blog post is still there, but it's below the fold, because of course they give precedence to a service that they have that nobody uses or understands. Because, Amazon.

David: Yeah, of course. And so that was awful, because suddenly I realized that, oh, I'm getting fewer and fewer new clients, because, you know, after six months, one year, people are shutting off their things because they're finished using them, and I wasn't getting new ones. But at that time, I was doing [AWS 00:24:06] consulting, so it's like, oh, maybe it was a glitch in the Matrix, whatever. I got lucky.

But then after a few months, I realized, wait a moment: when I was working in the AWS console, I saw that the search results changed, and I went like, oh, that's what happened, and that's why I'm getting fewer clients, right?
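Losing that funnel changed discovery, not the mechanics of the deploy itself, which remains the same one-click CloudFormation launch from the publicly hosted template. A hedged boto3 sketch of that "up and running in a minute" flow; the stack name, template URL, and parameter are illustrative placeholders:

```python
# Hedged sketch: launching a vendor's public, S3-hosted CloudFormation
# template from code instead of the console's orange button. All names,
# URLs, and parameters here are hypothetical.
import boto3

cf = boto3.client("cloudformation", region_name="us-east-1")

cf.create_stack(
    StackName="my-product",
    TemplateURL="https://example-bucket.s3.amazonaws.com/product.yaml",
    Parameters=[
        {"ParameterKey": "InstanceType", "ParameterValue": "t3.small"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # only needed if the template creates IAM resources
)

# Block until the stack settles, the CLI equivalent of watching the
# console's Events tab.
cf.get_waiter("stack_create_complete").wait(StackName="my-product")
print("deployed")
```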
So, in the beginning, that was a great thing, and that's why I'm actually paying you to promote my business and my products: because now there is no way to put the products in front of customers, since AWS took that away. And so that's why I decided to go full force on this, to make sure that I promote as much as possible, because that one cool feature that AWS was providing, they took away for whatever reason. Because blog posts are more important than their partners, [laugh] I guess.

Corey: Well, it depends on the partner and the tier of partner, and it feels like it's a matter of—to be clear, full disclosure: I am not an AWS partner; I'm not partnered with any vendor in this space, for either real or perceived conflict-of-interest issues, so I don't have a particular horse in the race. But back when there were a small number of partners, the network really worked. Now, there are tens of thousands of partners, and, well, what winds up being surfaced? Customers, as a result, seem to care less about various partner statuses, unless they're trying to check a box on some contractual requirement. Instead, they just want the problem solved, and it's becoming increasingly challenging to differentiate, just by the nature of how this works.

I don't believe, in 2022, that you could build almost anything, put it on the AWS Marketplace in isolation, and expect that to suddenly drive adoption by the mere fact that you're there. It feels, to me, at least on the other side of the fence, like the Marketplace experience is all about going there and looking for the name of the thing you already know you want, because you've heard about it through other means, and then you just click it and go, and that's the end of it. It's a procurement story; it's not a discoverability story.

David: Right. And yeah, so that's a bit disappointing, and I even made a post on Reddit about it, just to bring this up to AWS itself and say, like, “Hey, this UI change is pretty severe.” Because, I mean, they get a percentage of every hour the products are running, so basically they shot themselves in the foot: they make less money now that fewer products are being shown to potential customers. So, yeah, that's a disappointing thing.

When it comes to what other ways there are to show your products to potential customers, which you also asked about, there is an option where AWS can help you out. When I talked to them, I think last year, they said that if you reach $2 million in sales a year, then they will basically show you around to other potential customers, right? Which is a little bit disappointing, because especially if you're a small company like mine, it's pretty hard to get to that $2 million in a meaningful time. And once you reach that point, you might go like, “Hmm, how is this going to help me if you now show me in front of other people?” So, yeah.

And of course, I understand them, in the sense that if they show a product from the Marketplace to a big company and the product turns out to be of poor quality, then of course the client is going to tell AWS, like, “Why are you showing us something that just doesn't do its job?” Right? But it'd be nice to have a [unintelligible 00:27:24] when you say, “Okay, you're starting out. After a few years, we can show you to these midsize clients.” You don't have to go, immediately, to Fortune 500 companies.
That doesn't make any sense, right?

Corey: And I still—even the companies that are at that level, I've talked to them about how they've grown their business, and not a single one has ever credited anything AWS did to help them grow. Other than, “Well, they threw re:Invent, so we spent extortionate piles of money and set up a booth there, and the fact that we were allowed in the building to talk to people was helpful, I guess.” But it's all through their own work, so I'm not convinced, to be very direct with you, that AWS knows how to effectively drive sales and adoption of things on their own Marketplace. That is an increasing source of concern.

David: Right. And then there's no plan for what to do with a company that starts on the Marketplace once it's been there a few years and is established and big. Yeah, they don't have any way to go about it, which is a bit disappointing. But again, I like a challenge. I like the misery of AWS, so I'm just doing it. [laugh].

Corey: No, I hear you. Would you recommend other people in your position explore selling on the Marketplace, given the challenges and advantages both that you've experienced?

David: So, if you were to start from scratch, it will take you, like, three years—maybe not three years, but it's not something that should be the primary revenue source of the business if you want to go into the AWS Marketplace, because you have to have enough capital to do enough marketing to see if you can get in front of people. If you already do some consulting like me, where I did some stuff on the side and then realized, oh, people are using it, people like it, they give some feedback, they want new features—like, “Oh, maybe I can start growing this bigger and bigger,” right? It's not something that's going to happen immediately. And especially the updating process can get quite stressful, because when you make an update—so you have a version of a product that's working and running, right? Now, you make an update, and you have to spend at least a week or even sometimes two weeks to test it, to make sure that you didn't miss anything, because you don't want people to update something and have it stop working, right?

Corey: You can't break customer experiences on these things.

David: Yeah. No.

Corey: It becomes a nightmare.

David: Because, especially, you don't know if, literally, a Fortune 500 company is using your product or, like, a tiny company that has only ten employees, right?

Corey: If your update broke the file server with the VPN, it's unlikely that they're going to come back anytime soon, too.

David: Right.

Corey: You're also depending on AWS, in some respects, to steward the relationship, because you don't have direct contact with your buyers.

David: No. So, that's an important thing. They don't give you access to the contacts; they give you access to the company information. So, I actually do have Fortune 500 companies using my products, but yeah, there's no way to get in touch with them. The only thing that you get is the company name, the address, and the domain that they used to create an email. So, at least you can get a sense of, like, who this company is.

But yeah, there is no way to get in touch if there is a problem. So, the only way that you can notify the customer that there's a new update is when you make an update, there is a text area where you can say what's new, what you changed, right? And that's the only communication that you get with the client.
So if, for example, you make a big mistake, [laugh], you basically have just that little text box, and hopefully, someone reads it. But you know, AWS is known for sending 20 emails a week for every account that you open. Good luck getting through that noise.

Corey: Hope that you don't miss the important ones as you go through. No—

David: Exactly.

Corey: —I hear you. These are problems that I think are on AWS's plate to solve. Hopefully, someone over there is listening to this and will at least reach out with a bit of a better story. I really want to thank you for taking the time to speak with me today. We'll include links, of course, to this in the [show notes 00:31:09]. Where else can people find you?

David: They can find us basically on the product page of what we sell. So, we have products.0x4447.com/. That's where, basically, we keep all our products. We keep updating the page to provide more information about those products and how to get in touch with us; we provide training, demos, anything that you want. It's very easy to get in touch with us, unlike, sometimes, when it comes to AWS. So yeah, we are out there; it's pretty easy to find us. The domain—the company name is so unique that you either get our website or—

Corey: Easy to find on Google.

David: Yeah. So, we're basically—the hex editor. And that's basically it. [laugh].

Corey: Excellent. Well, we'll definitely put links to that in the show [notes 00:31:50]. Thank you so much for taking the time to speak with me today. I really appreciate it.

David: Thank you very much.

Corey: David Gatti, CEO of 0x4447. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that makes sure to mention exactly how long you've been working on the AWS Marketplace team.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

Ctrl+Alt+Azure
124 - SFTP with Azure Storage

Ctrl+Alt+Azure

Play Episode Listen Later Mar 8, 2022 41:15


(00:00) - Intro and catching up.
(06:10) - Show content starts.
Show links:
- SFTP on Azure Storage (Microsoft Docs)
- Building a modern data integration solution using SFTP (Jussi)
SPONSOR
This episode is sponsored by ScriptRunner. ScriptRunner is a great solution to centrally manage PowerShell scripts and to standardize and automate IT tasks via a graphical user interface for helpdesk or end users. Check it out on scriptrunner.com
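Once the feature is enabled on a storage account and a local user is provisioned, the endpoint speaks ordinary SFTP, so any standard client works. A hedged sketch with Python's paramiko; the account, container, user, and password are placeholders, and the username format should be checked against the current Azure docs:

```python
# Hedged sketch: upload a file to an SFTP-enabled Azure Storage account
# using paramiko (pip install paramiko). All credentials below are
# hypothetical placeholders; provision a local user in Azure first.
import paramiko

HOST = "mystorageacct.blob.core.windows.net"   # hypothetical account name
USERNAME = "mystorageacct.mycontainer.myuser"  # account.container.localuser
PASSWORD = "generated-by-azure"                # or use an SSH key instead

transport = paramiko.Transport((HOST, 22))
transport.connect(username=USERNAME, password=PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    sftp.put("report.csv", "report.csv")  # lands in the container's root
    print(sftp.listdir("."))              # list what's there now
finally:
    sftp.close()
    transport.close()
```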

Microsoft Cloud IT Pro Podcast
Episode 257 – The Address Space Formerly Known as 127/8

Microsoft Cloud IT Pro Podcast

Play Episode Listen Later Nov 25, 2021 34:21


Ben and Scott cover SFTP support on Azure Blob Storage, the GA of NFS v4.1 on Azure Files, and the new AZ-305 exam.

Game of Roses
Andara Rose Bowles, Welcome To Planet Earth: TWIBN 4-8-21

Game of Roses

Play Episode Listen Later Apr 8, 2021 74:06


Open: Details are given about where to buy the latest GOR 4TRR merch.
SOTW: NY legalizes recreational cannabis, and what is the role of marijuana in BN?
BAU: What if Covid never happened? What would the state of the game be?
BNN: James and Kirkconnell spotted together in NY, GB has a name, PP gonna podcast, and more!
PPOTW: April Fools jokes, Yankees games, delivery room ring lights, and more!
SFTP: ???
Learn more about your ad choices. Visit megaphone.fm/adchoices See acast.com/privacy for privacy and opt-out information.

The Laravel Podcast
Storage, with Frank de Jonge

The Laravel Podcast

Play Episode Listen Later Dec 8, 2020 60:14


Frank de Jonge Twitter - https://twitter.com/frankdejonge
Frank on GitHub - https://github.com/frankdejonge
Frank's Blog - https://blog.frankdejonge.nl/
Flysystem - https://flysystem.thephpleague.com/v2/docs/
Mollie Payments - https://www.mollie.com/en
The PHP League of Extraordinary Packages - https://thephpleague.com/#packages
Laravel Documentation: File Storage - https://laravel.com/docs/8.x/filesystem
Christoph Rumpel Episode - https://laravelpodcast.simplecast.com/episodes/the-service-container-with-christoph-rumpel
Streamed Downloads - https://laravel.com/docs/8.x/responses#streamed-downloads
Replicate Adapter - https://flysystem.thephpleague.com/v1/docs/adapter/replicate/
Flysystem V2 - https://flysystem.thephpleague.com/v2/docs/what-is-new/
Schiphol Airport - https://www.schiphol.nl/nl/
Ecologi, Tree Sponsorship - https://ecologi.com/frankdejonge
Episode Sponsorship
Transcription sponsored by Larajobs
Editing sponsored by Tighten

HCM Cloud Talk Radio
HCM Cloud Talk Radio - Oracle HCM Cloud CoE – UCM Utility

HCM Cloud Talk Radio

Play Episode Listen Later Nov 6, 2018 13:21


Are you an Oracle HCM Cloud customer or partner searching for an efficient mechanism to transfer files from/to UCM? Then join us on HCM Cloud Talk Radio as Srirama Sista, Oracle HCM Cloud Technical Solutions Manager, introduces the Oracle HCM Center of Excellence (CoE) UCM Utility, an API intended for customers and partners that want to use UCM as their content server instead of conventional SFTP.

The PPC Show Podcast
This Week In Ad Tech News and Headlines (May 15-19th)

The PPC Show Podcast

Play Episode Listen Later May 18, 2017 29:16


This week on The PPC Show, Paul and JD break down the top news and trends in ad tech and digital marketing.

Show Notes

GOOGLE
Google advertisers can now see historical Quality Score data in AdWords. Plus, no more hovering over individual keywords to see Quality Score data.
http://searchengineland.com/google-adwords-quality-score-reporting-improvements-275010
Two things: segment your data by day and watch your QS change over time. The big new development is that advertisers will finally get some historical Quality Score data in AdWords. Four new columns — “Qual. Score (hist.),” “Landing page exper. (hist.),” “Ad relevance (hist.)” and “Exp. CTR (hist.)” — show the last known score in the selected date range (as far back as January 2016).

Google is opening the beta for its “buy button,” dubbed Purchases on Google. Sales & Orders reported finding the option in Merchant Center Tuesday.
http://searchengineland.com/purchases-google-quietly-opens-beta-request-google-merchant-center-275182

Schedule offline conversion imports in AdWords
http://searchengineland.com/now-can-schedule-offline-conversion-imports-adwords-275307
Imports from files can be scheduled to run daily or weekly. You need to use Google Sheets or link to a file over HTTPS or SFTP; you can't upload a file on a schedule, obviously. Last June, Google launched a native conversion syncing solution for Salesforce users.

Google Adds Expandable AdWords Ads With Carousels on Mobile
http://www.thesempost.com/google-adds-expandable-adwords-ads-carousels/

TWITTER
Biz Stone is going back to Twitter, after Medium and Jelly. Welcome back, Biz!
Twitter updated its privacy policy on Wednesday so that it can use the information it collects about people's off-Twitter web browsing for up to 30 days, as opposed to the previous 10-day maximum.
http://marketingland.com/know-twitters-latest-privacy-policy-update-215112
The micro-blogging giant has also chosen to start tracking what apps are sitting alongside Twitter on users' phones, their locations, and what websites they've visited. For the latter, that's only for sites that integrate Twitter content, like embedded tweets. Furthermore, the company will "not store web page visit data for users who are in the European Union and EFTA States."
It's easy to understand why Twitter is making these changes: it's been reporting some torrid financial results of late. In February, it reported a loss of $167 million in the fourth quarter of 2016, compared to a year-earlier loss of $90 million. For 10 straight quarters it's reported slowing growth.
Settings & Privacy -> Your Twitter data
Paul: You are currently part of 18291 audiences from 3842 advertisers.
JD: You are currently part of 13733 audiences from 3078 advertisers.
Show notes: https://twitter.com/settings/your_twitter_data

FACEBOOK
Facebook Delivery Insights
Ever wonder if your ads are competing for visibility in the Facebook auction? Or how much they're competing against each other? There's a lot that goes into determining who sees which ads on Facebook. Put simply, there's too much content available to be able to show people everything they could potentially see on Facebook, every day. Well, Facebook took some big steps toward providing more campaign transparency and predictability with Delivery Insights.
https://blog.adstage.io/2017/05/17/facebook-delivery-insights/

Facebook Video Bug!
The bug affected billing only under the following conditions: for the video carousel ad unit; when the advertiser chose to bid on link clicks; and only for people who were on smartphone web browsers.
https://newsroom.fb.com/news/2017/05/video-carousel-ads-on-mobile-web/

Facebook Algo Update! Reducing Links to Low-Quality Web Page Experiences
https://www.facebook.com/business/news/reducing-links-to-low-quality-web-page-experiences

--- Send in a voice message: https://anchor.fm/the-ppc-show-podcast/message

BSD Now
171: The APU - BSD Style!

BSD Now

Play Episode Listen Later Dec 7, 2016 87:13


Today on the show, we've got a look at running OpenBSD on an APU, some BSD in your Android, managing your own FreeBSD cloud service with Ansible, and much more. Keep it tuned to your place to B...SD!

This episode was brought to you by Headlines

OpenBSD on PC Engines APU2 (https://github.com/elad/openbsd-apu2)
A detailed walkthrough of building an OpenBSD firewall on a PC Engines APU2. It starts with a breakdown of the parts that were purchased, totaling around $200. Then the reader is walked through configuring the serial console, flashing the ROM, and updating the BIOS. The next step is actually creating a custom OpenBSD install image and pre-configuring its serial console. Starting with OpenBSD 6.0, this step is done automatically by the installer.
Installation:
- Power off the APU2
- Insert the bootable OpenBSD installer USB flash drive into one of the USB slots on the APU2
- Power on the APU2, press F10 to get to the boot menu, and choose to boot from USB (usually option number 1)
- At the boot> prompt, remember the serial console settings (see above)
- Also at the boot> prompt, press Enter to start the installer
- Follow the installation instructions
The driver used for wireless networking is athn(4). It might not work properly out of the box. Once OpenBSD is installed, run fw_update with no arguments. It will figure out which firmware updates are required and will download and install them. When it finishes, reboot.

Where the rubber meets the road… (part one) (https://functionallyparanoid.com/2016/11/29/where-the-rubber-meets-the-road-part-one/)
A user describes their adventures installing OpenBSD and Arch Linux on a new Lenovo X1 Carbon (4th gen, Skylake). They also detail why they moved away from their beloved MacBook, which, while long, does describe a journey away from Apple that we've heard elsewhere. The journey begins with getting a new Windows laptop, shrinking the partition, and creating space for a triple-boot install of Windows / Arch / OpenBSD. Brian then details how he set up the partitioning and performed the initial Arch installation, getting it tuned to his specifications. Next up was OpenBSD, though, and that went sideways initially due to a new NVMe drive that wasn't fully supported (yet). The article is split into two parts (we will bring you the next installment at a future date), but he leaves us with the plan of attack to build a custom OpenBSD kernel with corrected PCI device identifiers. We wish Brian luck, and look forward to the “rest of the story” soon.

Howto setup a FreeBSD jail server using iocage and ansible (https://github.com/JoergFiedler/freebsd-ansible-demo)
Setting up a FreeBSD jail server can be a daunting task. However, when a guide comes along which shows you how to do that, including not exposing a single (non-jailed) port to the outside world, you know we had to take a closer look. This guide comes to us from GitHub, courtesy of Joerg Fiedler. The project goals seem notable:
- Ansible playbook that creates a FreeBSD server which hosts multiple jails.
- Travis is used to run/test the playbook.
- No service on the host is exposed externally.
- All external connections terminate within a jail.
- Roles can be reused using Ansible Galaxy.
- Combine any of those roles to create a FreeBSD server which perfectly suits you.
To get started, you'll need a machine with Ansible, Vagrant and VirtualBox, and your credentials to AWS if you want it to automatically create / destroy EC2 instances.
There's already an impressive list of Ansible roles created for you to start with:
- freebsd-build-server - Creates a FreeBSD poudriere build server
- freebsd-jail-host - FreeBSD jail host
- freebsd-jailed - Provides a jail
- freebsd-jailed-nginx - Provides a jailed nginx server
- freebsd-jailed-php-fpm - Creates a php-fpm pool and a ZFS dataset which is used as web root by php-fpm
- freebsd-jailed-sftp - Installs an SFTP server
- freebsd-jailed-sshd - Provides a jailed sshd server
- freebsd-jailed-syslogd - Provides a jailed syslogd
- freebsd-jailed-btsync - Provides a jailed btsync instance server
- freebsd-jailed-joomla - Installs Joomla
- freebsd-jailed-mariadb - Provides a jailed MariaDB server
- freebsd-jailed-wordpress - Provides a jailed WordPress server
Since the machines have to be customized before starting, he mentions that cloud-init is used to do the following:
- activate the pf firewall
- add a "pass all keep state" rule to pf to keep track of connection states, which in turn allows you to reload the pf service without losing the connection
- install the following packages: sudo, bash, python27
- allow passwordless sudo for user ec2-user
From there it is pretty straightforward: just a couple of commands to spin up the VMs, either locally on your VirtualBox host or in the cloud with AWS. Internally the VMs are auto-configured with iocage to create jails, where all your actual services run. A neat project; check it out today if you want a shake-n-bake type cloud + jail solution.

Colin Percival's bsdiff helps reduce Android APK bandwidth usage by 6 petabytes per day (http://android-developers.blogspot.ca/2016/12/saving-data-reducing-the-size-of-app-updates-by-65-percent.html)
A post on the official Android Developers blog talks about how they used bsdiff (and bspatch) to reduce the size of Android application updates by 65%. bsdiff was developed by FreeBSD's Colin Percival.
"Earlier this year, we announced that we started using the bsdiff algorithm (by Colin Percival). Using bsdiff, we were able to reduce the size of app updates on average by 47% compared to the full APK size." This post is actually about the second generation of the code. "Today, we're excited to share a new approach that goes further — File-by-File patching. App updates using File-by-File patching are, on average, 65% smaller than the full app, and in some cases more than 90% smaller."
Android apps are packaged as APKs, which are ZIP files with special conventions. Most of the content within the ZIP files (and APKs) is compressed using a technology called Deflate. Deflate is really good at compressing data, but it has a drawback: it makes identifying changes in the original (uncompressed) content really hard. Even a tiny change to the original content (like changing one word in a book) can make the compressed output of Deflate look completely different. Describing the differences between the original content is easy, but describing the differences between the compressed content is so hard that it leads to inefficient patches.
So, in the second generation of the code, they use bsdiff on each individual file, then package that, rather than diffing the original and new archives. bsdiff is used in a great many other places, including shrinking the updates for the Firefox and Chrome browsers.
You can find out more about bsdiff here: http://www.daemonology.net/bsdiff/
"A far more sophisticated algorithm, which typically provides roughly 20% smaller patches, is described in my doctoral thesis (http://www.daemonology.net/papers/thesis.pdf)."
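The algorithm is easy to kick the tires on from Python via the bsdiff4 bindings (an assumption on our part: any maintained binding of Colin's algorithm behaves similarly). A minimal sketch:

```python
# Hedged sketch: binary-diff two builds with the bsdiff4 Python binding
# (pip install bsdiff4). The file names are placeholders; the point is
# how small the patch is relative to the artifact it updates.
import bsdiff4

old = open("app-v1.bin", "rb").read()
new = open("app-v2.bin", "rb").read()

patch = bsdiff4.diff(old, new)  # usually far smaller than `new`
print(f"new build: {len(new)} bytes, patch: {len(patch)} bytes")

# The client-side half: old build + patch reproduces the new build exactly.
assert bsdiff4.patch(old, patch) == new
```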
Considering the gains, it is interesting that no one has implemented Colin's more sophisticated algorithm. Colin had an interesting observation (https://twitter.com/cperciva/status/806426180379230208) last night: “I just realized that bandwidth savings due to bsdiff are now roughly equal to what the total internet traffic was when I wrote it in 2003.”

News Roundup

Distrowatch does an in-depth review of NAS4Free (https://distrowatch.com/weekly.php?issue=20161114#nas4free)
Jesse Smith over at DistroWatch has done a pretty in-depth review of NAS4Free. The review starts by mentioning that NAS4Free works on three platforms, ARM/i386/AMD64, and that for the purposes of this review he would be using AMD64 builds. After going through the initial install (doing typical disk management operations, such as GPT/MBR, etc.), he was ready to begin using the product. One concern originally observed was that the initial boot seemed rather slow. Investigation revealed this was due to it loading the entire OS image into memory, and the first (long) disk read did take some time, but once loaded it was super responsive. The next steps involved doing the initial configuration, which meant creating a new ZFS storage pool. After this process was done, he did find one puzzling UI option called “VM” which indicated it can be linked to VirtualBox in some way, but the docs didn't reveal its secrets of usage. Additionally covered were some of the various “Access” methods, including traditional UNIX permissions, AD and LDAP, and then the various sharing services which are typical of a NAS, such as NFS / Samba and others. One neat feature was the built-in file browser via the web interface, which gives you another method of getting at your data when sometimes NFS / Samba or WebDAV aren't enough. Jesse gives us a nice round-up conclusion as well:
"Most of the NAS operating systems I have used in the past were built around useful features. Some focused on making storage easy to set up and manage, others focused on services, such as making files available over multiple protocols or managing torrents. Some strive to be very easy to set up. NAS4Free does pretty well in each of the above categories. It may not be the easiest platform to set up, but it's probably a close second. It may not have the prettiest interface for managing settings, but it is quite easy to navigate. NAS4Free may not have the most add-on services and access protocols, but I suspect there are more than enough of both for most people.
Where NAS4Free does better than most other solutions I have looked at is security. I don't think the project's website or documentation particularly focuses on security as a feature, but there are plenty of little security features that I liked. NAS4Free makes it very easy to lock the text console, which is good because we do not all keep our NAS boxes behind locked doors. The system is fairly easy to upgrade and appears to publish regular security updates in the form of new firmware. NAS4Free makes it fairly easy to set up user accounts, handle permissions and manage home directories. It's also pretty straight forward to switch from HTTP to HTTPS and to block people not on the local network from accessing the NAS's web interface.
All in all, I like NAS4Free. It's a good, general purpose NAS operating system. While I did not feel the project did anything really amazing in any one category, nor did I run into any serious issues. The NAS ran as expected, was fairly straight forward to set up and easy to manage.
This strikes me as an especially good platform for home or small business users who want an easy setup, some basic security and a solid collection of features."

Browsix: Unix in the browser tab (https://browsix.org/)
Browsix is a research project from the PLASMA lab at the University of Massachusetts, Amherst. The goal: run C, C++, Go and Node.js programs as processes in browsers, including LaTeX, GNU Make, Go HTTP servers, and POSIX shell scripts.
"Processes are built on top of Web Workers, letting applications run in parallel and spawn subprocesses. System calls include fork, spawn, exec, and wait." Pipes are supported, with pipe(2) enabling developers to compose processes into pipelines. Sockets include support for TCP socket servers and clients, making it possible to run applications like databases and HTTP servers together with their clients in the browser.
Browsix comprises two core parts:
- A kernel written in TypeScript that makes core Unix features (including pipes, concurrent processes, signals, sockets, and a shared file system) available to web applications.
- Extended JavaScript runtimes for C, C++, Go, and Node.js that support running programs written in these languages as processes in the browser.
This seems like an interesting project, although I am not sure how it would be used as more than a toy.

Book Review: PAM Mastery (https://www.cyberciti.biz/reviews/book-review-pam-mastery/)
nixCraft does a book review of Michael W. Lucas' “PAM Mastery”.
Linux, FreeBSD, and Unix-like systems are multi-user and need some way of authenticating individual users. Back in the old days, this was done in different ways: you needed to change each Unix application to use a different authentication scheme. Before PAM, if you wanted to use an SQL database to authenticate users, you had to write specific support for that into each of your applications. Same for LDAP, etc. So the Open Group led the development of PAM for Unix-like systems. Today Linux, FreeBSD, macOS, and many other Unix-like systems are configured to use a centralized authentication mechanism called Pluggable Authentication Modules (PAM). The book “PAM Mastery” deals with the black magic of PAM. Of course, each OS chose to implement PAM a little bit differently.
The book starts with the basic concepts of PAM and authentication. You learn about multi-factor authentication and why to use PAM instead of changing each program to authenticate the user. The author goes into great detail about why PAM is useful to developers and sysadmins, for several reasons. The examples cover CentOS Linux (RHEL and clones), Debian Linux, and FreeBSD. I like the way the author describes the PAM configuration files and the common modules that cover everyday scenarios for the sysadmin. The PAM configuration file format and PAM module interfaces are discussed in easy-to-understand language. Control flags in PAM can be very confusing for new sysadmins: modules can be stacked in a particular order, and the control flags determine how important the success or failure of a particular module is. There is also a chapter about using one-time passwords (Google Authenticator) for your applications. The final chapter is all about enforcing good password policies for users and apps using PAM.
The sysadmin will find this book useful, as it covers a common authentication scheme that can be used with a wide variety of applications on Unix. You will master PAM topics and take control of authentication for your organization's IT infrastructure.
If you are a Linux or Unix sysadmin, I would highly recommend this book. Once again Michael W. Lucas nailed it. The only book you may need for PAM deployment. Get “PAM Mastery” (https://www.michaelwlucas.com/tools/pam)

Reflections on Trusting Trust - Ken Thompson, co-author of UNIX (http://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html)
Ken Thompson's "cc hack": presented in the journal Communications of the ACM, Vol. 27, No. 8, August 1984, in a paper entitled "Reflections on Trusting Trust", Ken Thompson, co-author of UNIX, recounted a story of how he created a version of the C compiler that, when presented with the source code for the "login" program, would automatically compile in a backdoor to allow him entry to the system. This is only half the story, though. In order to hide this trojan horse, Ken also added to this version of "cc" the ability to recognize if it was recompiling itself, to make sure that the newly compiled C compiler contained both the "login" backdoor and the code to insert both trojans into a newly compiled C compiler. In this way, the source code for the C compiler would never show that these trojans existed.
The article starts off by talking about a contest to write a program that produces its own source code as output. Or rather, a C program, that writes a C program, that produces its own source code as output. The C compiler is written in C. What Thompson describes is one of many "chicken and egg" problems that arise when compilers are written in their own language. In this case, he uses a specific example from the C compiler:
Suppose we wish to alter the C compiler to include the sequence "\v" to represent the vertical tab character. The extension to Figure 2 is obvious and is presented in Figure 3. We then recompile the C compiler, but we get a diagnostic. Obviously, since the binary version of the compiler does not know about "\v," the source is not legal C. We must "train" the compiler. After it "knows" what "\v" means, then our new change will become legal C. We look up on an ASCII chart that a vertical tab is decimal 11. We alter our source to look like Figure 4. Now the old compiler accepts the new source. We install the resulting binary as the new official C compiler and now we can write the portable version the way we had it in Figure 3.
The actual bug I planted in the compiler would match code in the UNIX "login" command. The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user.
Such blatant code would not go undetected for long. Even the most casual perusal of the source of the C compiler would raise suspicions. The next step is to "simply add a second Trojan horse to the one that already exists. The second pattern is aimed at the C compiler. The replacement code is a Stage I self-reproducing program that inserts both Trojan horses into the compiler. This requires a learning phase as in the Stage II example. First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere."
So now there is a trojan'd version of cc.
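(An aside: the "Stage I self-reproducing program" Thompson leans on is what we'd now call a quine. A minimal Python rendition of the same trick; the comments are not part of the self-reproduction, only the two code lines are:)

```python
# Hedged sketch: a minimal quine, a program whose output is its own
# source. Thompson's Stage I trojan embeds this idea inside cc so the
# backdoor survives recompilation with no trace in any source file.
# Only the two lines below reproduce themselves; run them alone to check.
s = 's = %r\nprint(s %% s)'
print(s % s)
```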
If you compile a clean version of cc using the bad cc, you will get a bad cc. If you use the bad cc to compile the login program, it will have a backdoor. The source code for both backdoors no longer exists on the system. You can audit the source code of cc and login all you want; they are trustworthy. The compiler you use to compile your new compiler is the untrustworthy bit, but you have no way to know it is untrustworthy, and no way to make a new compiler without using the bad compiler.
"The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect."
"Acknowledgment: I first read of the possibility of such a Trojan horse in an Air Force critique of the security of an early implementation of Multics. I cannot find a more specific reference to this document. I would appreciate it if anyone who can supply this reference would let me know."

Beastie Bits
- Custom made Beastie Stockings (https://www.etsy.com/listing/496638945/freebsd-beastie-christmas-stocking)
- Migrating ZFS from mirrored pool to raidz1 pool (http://ximalas.info/2016/12/06/migrating-zfs-from-mirrored-pool-to-raidz1-pool/)
- OpenBSD and you (https://home.nuug.no/~peter/blug2016/)
- Watson.org FreeBSD and Linux cross reference (http://fxr.watson.org/)
- OpenGrok (http://bxr.su/)
- FreeBSD SA-16:37: libc (https://www.freebsd.org/security/advisories/FreeBSD-SA-16:37.libc.asc) -- A 26+ year old bug found in BSD's libc; all BSDs likely affected. A specially crafted argument can trigger a static buffer overflow in the library, with the possibility of rewriting the static buffers that follow, which belong to other library functions.
- HardenedBSD issues correction for libc patch (https://github.com/HardenedBSD/hardenedBSD/commit/fb823297fbced336b6beeeb624e2dc65b67aa0eb) -- the original patch improperly calculates how many bytes are remaining in the buffer.
- From December 27th until the 30th, the 33rd Chaos Communication Congress is going to take place in Hamburg, Germany. Think of it as the yearly gathering of the European hacker scene and their overseas friends. I am one of the persons organizing the "BSD assembly (https://events.ccc.de/congress/2016/wiki/Assembly:BSD)" as a gathering place for BSD enthusiasts and for waving the flag amidst all the other projects / communities.

Feedback/Questions
- Chris - IPFW + Wifi (http://pastebin.com/WRiuW6nn)
- Jason - bhyve pci (http://pastebin.com/JgerqZZP)
- Al - pf errors (http://pastebin.com/3XY5MVca)
- Zach - Xorg settings (http://pastebin.com/Kty0qYXM)
- Bart - Wireless Support (http://pastebin.com/m3D81GBW)