Today on the Salesforce Admins Podcast, we talk to Jennifer Lee, Lead Admin Evangelist at Salesforce and the host of How I Solved It and Automate This! Join us as we chat about everything coming with the Spring '25 release and what's new for Agentforce and AI on Salesforce. You should subscribe for the full […] The post Spring '25 Salesforce Features: AI, Flows, and User Management Updates appeared first on Salesforce Admins.
Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)
In this conversation, Krish Palaniappan introduces the fundamentals of product management through a practical example using the Snowpal web application. The discussion focuses on the implementation of a Kanban board feature, emphasizing the importance of communication between product managers and engineering teams. Key topics include defining requirements, user management of project lists, and the functionalities that should be supported in Kanban mode. The conversation highlights the iterative nature of product development and the need for clear, actionable requirements to ensure successful feature implementation.
Takeaways
Product management involves practical application rather than just theory.
Effective communication between product managers and developers is crucial.
Understanding user needs is essential for feature development.
Requirements should be clear and actionable for successful implementation.
User management of project lists is a key functionality.
The Kanban board enhances project management capabilities.
Iterative feedback is important for refining features.
Collaboration between teams accelerates product development.
Defining user actions helps in creating a better user experience.
Reusing existing components can simplify feature implementation.
Initial conversations are crucial for aligning product and engineering teams.
Iterative development allows for flexibility and responsiveness to user feedback.
User experience should be prioritized to avoid overwhelming users with complexity.
Clear communication and defined actions help prevent project delays.
Understanding user actions is essential for effective UI design.
API integration is a key component of modern software development.
Assumptions should be minimized to reduce rework and inefficiencies.
Collaboration between teams enhances productivity and project outcomes.
Using existing libraries can save time and resources in development.
Regular feedback loops are vital for successful product iterations.
Chapters
00:00 Introduction to Product Management Basics
02:54 Exploring the Snowpal Web Application
05:58 The Kanban Board Feature Discussion
09:14 Initial Conversations Between Product and Engineering Teams
11:55 Defining Requirements for the Kanban Board
15:11 User Management of Project Lists
18:01 Actions and Functionalities in Kanban Mode
25:00 Initial Conversations and Actions
28:50 Understanding User Actions and Functionality
31:12 Iterative Development and User Experience
34:03 Creating Project Lists and User Interactions
39:01 Managing User Actions and API Integration
45:39 Finalizing Features and Team Collaboration
Purchase the course in one of two ways:
1. Go to https://getsnowpal.com and purchase it on the Web.
2. On your phone: (i) if you are an iPhone user, go to http://ios.snowpal.com and watch the course on the go; (ii) if you are an Android user, go to http://android.snowpal.com.
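To make the requirements discussion concrete, here is a minimal sketch of the kind of data model a Kanban mode over project lists implies. The class and method names are hypothetical illustrations, not Snowpal's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical names for illustration; not Snowpal's actual data model.
@dataclass
class Card:
    title: str

@dataclass
class Column:
    name: str
    cards: List[Card] = field(default_factory=list)

@dataclass
class KanbanBoard:
    """A project list rendered in Kanban mode: named columns of cards."""
    columns: Dict[str, Column] = field(default_factory=dict)

    def add_column(self, name: str) -> None:
        self.columns[name] = Column(name)

    def move_card(self, title: str, src: str, dst: str) -> None:
        """One of the user actions discussed: dragging a card between columns."""
        card = next(c for c in self.columns[src].cards if c.title == title)
        self.columns[src].cards.remove(card)
        self.columns[dst].cards.append(card)

board = KanbanBoard()
board.add_column("To Do")
board.add_column("In Progress")
board.columns["To Do"].cards.append(Card("Define Kanban requirements"))
board.move_card("Define Kanban requirements", "To Do", "In Progress")
print([c.title for c in board.columns["In Progress"].cards])
```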
In this episode, Amy and Brad sit down with Michael Chan to discuss WorkOS, a tool simplifying authentication and authorization for developers. They explore how WorkOS makes complex processes like OAuth, SSO, and MFA easy to implement, compare it to other auth providers, and dive deep into AuthKit's capabilities.
Sponsors
WorkOS - WorkOS helps you launch enterprise features like SSO and user management with ease. Thanks to AuthKit for JavaScript, your team can integrate in minutes and focus on what truly matters: building your app.
Show Notes
00:00 - Intro
01:15 - Introduction to WorkOS (links: WorkOS, AuthKit, WorkOS on YouTube)
02:23 - Comparing WorkOS with Competitors
03:50 - Features of WorkOS AuthKit
06:53 - WorkOS's Evolution and Target Audience
09:30 - Challenges in Implementing Auth Solutions
10:30 - Should Developers Build Their Own Auth? (link: Selma's Blog Post: One Does Not Simply Delete Cookies)
12:50 - The Cascade of Auth Decisions: Emails and Databases
14:22 - WorkOS Integration with Astro and Remix
19:50 - Key Benefits of WorkOS for Developers
22:00 - Integrating AuthKit with Next and Remix (link: Sam Selikoff's YouTube video on WorkOS + AuthKit + Remix: Using AuthKit's Headless APIs in Remix)
24:01 - Challenges in Documentation for Developers (link: Divio's Guide to Documentation)
33:06 - The Future of Documentation and AI's Role
35:00 - Wrap-up
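As a rough illustration of what a hosted product like AuthKit abstracts away, here is a generic OAuth 2.0 authorization-code redirect sketch. The endpoint, client ID, and redirect URI are placeholders, not WorkOS's actual API or SDK.

```python
import secrets
from urllib.parse import urlencode

# Placeholder values for illustration only; not WorkOS's real endpoints or SDK.
AUTH_BASE = "https://auth.example.com/oauth2/authorize"
CLIENT_ID = "client_123"
REDIRECT_URI = "https://myapp.example.com/callback"

def build_login_redirect() -> tuple[str, str]:
    """Build the authorization-code redirect that a hosted auth provider handles for you."""
    state = secrets.token_urlsafe(16)  # CSRF protection; verify this value on the callback
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile email",
        "state": state,
    }
    return f"{AUTH_BASE}?{urlencode(params)}", state

url, state = build_login_redirect()
print(url)  # redirect the browser here; exchange the returned code for tokens server-side
```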
Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)
In this episode, Krish Palaniappan interviews Andrew Brooks, co-founder of contextual.io, an AI orchestration platform. The episode includes a live demonstration of the platform, showcasing its features, user interface, and capabilities for integrating AI solutions into business processes. They discuss the functionalities and features of Contextual, focusing on managing permissions, API access, usage-based pricing, and the role of services. They explore the mapping of services to tenants, promoting services across environments, and the importance of atomic services. They highlight the significance of reusing flows, interacting with agents, and visualizing real-time data flow. They delve into the intricacies of data flow in AI applications, focusing on the differences between event and HTTP flows, the importance of designing efficient flows for scalability, and the seamless integration of third-party APIs.
Takeaways
Contextual.io focuses on AI orchestration and workflow automation.
The platform is designed to be user-friendly for developers.
AI solutions can significantly enhance business processes.
Contextual allows for custom solutions tailored to specific organizational needs.
The platform supports integration with various third-party services.
User management and security are critical components of the platform.
Developers can create and manipulate object types and flows easily.
The visual editor simplifies the development process for AI solutions.
Contextual's API allows for seamless interaction with external systems.
Managing permissions is crucial to prevent unauthorized access.
Contextual offers a usage-based pricing model for flexibility.
Services in Contextual package components for deployment.
Understanding tenant mapping is essential for service management.
Promoting services can be done selectively across environments.
Atomic services allow for independent promotion of components.
Navigating the Contextual interface is user-friendly and intuitive.
The Hello AI World demo is a great starting point for new users.
Documentation is vital, but many developers prefer hands-on exploration.
Exporting and importing flows as JSON enhances collaboration.
Data flows through nodes in AI applications.
Understanding the difference between event and HTTP flows is crucial.
Integrating third-party APIs can simplify complex workflows.
Visual orchestration makes AI tools accessible to developers.
Splits and joins within flows allow for parallel processing.
Chapters
00:00 Introduction to Andrew Brooks and Contextual.io
08:44 Exploring Contextual's AI Orchestration Platform
15:30 Understanding AI Workflows and Use Cases
24:18 Live Demo: Navigating Contextual's Interface
38:00 Creating Object Types and API Integration
58:43 User Management and Security Features
01:16:39 The Importance of Atomic Services
01:22:19 Understanding Instructions and Documentation
01:30:05 Real-Time Data Flow Visualization
01:38:02 Uptime and Reliability of Services
01:51:59 Customizing the Flow Editor
02:05:07 Understanding Data Flow in AI Applications
02:12:13 Navigating Event and HTTP Flows
02:19:20 Designing Efficient Flows for Scalability
02:28:24 Integrating Third-Party APIs Seamlessly
02:37:29 Debugging and Monitoring Flows
02:49:53 Contextual's Mission and Value Proposition
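As a hedged sketch of the export/import idea discussed in the episode, the snippet below models a flow as plain JSON with an HTTP trigger and a split/join for parallel work. The schema is hypothetical and is not Contextual's actual export format.

```python
import json

# A hypothetical flow definition; Contextual's real export schema is not shown in the episode notes.
flow = {
    "name": "enrich-and-notify",
    "trigger": {"type": "http"},  # vs. an "event" trigger driven by a message bus
    "nodes": [
        {"id": "split", "type": "split"},                              # fan out for parallel work
        {"id": "enrich", "type": "service", "service": "crm-lookup"},
        {"id": "classify", "type": "agent", "agent": "intent-classifier"},
        {"id": "join", "type": "join", "waits_for": ["enrich", "classify"]},
        {"id": "notify", "type": "service", "service": "slack-webhook"},
    ],
    "edges": [
        ["split", "enrich"], ["split", "classify"],
        ["enrich", "join"], ["classify", "join"], ["join", "notify"],
    ],
}

# Exporting and importing JSON like this is what makes flows shareable across teams and environments.
print(json.dumps(flow, indent=2))
```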
In this episode we talk about the in-process database DuckDB, which reached version 1.0.0 in June and takes an innovative approach. DuckDB is started directly from your code and requires no permissions or user management, which is reminiscent of SQLite. We also examine the claim that the "Big Data" era is over, why that is, and what it actually has to do with DuckDB. ***Links*** DuckDB: https://duckdb.org/ MotherDuck: https://motherduck.com/ Blog: Big Data is Dead by Jordan Tigani https://motherduck.com/blog/big-data-is-dead/ Questions, feedback, and topic requests are welcome at podcast@inwt-statistics.de
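A minimal sketch of what "in-process, no server, no user management" looks like in practice, using DuckDB's Python package:

```python
import duckdb  # pip install duckdb

# No server process, no credentials, no user management: the database runs inside this process.
con = duckdb.connect("analytics.duckdb")  # or duckdb.connect() for a purely in-memory database

con.execute("CREATE TABLE IF NOT EXISTS events (user_id INTEGER, kind TEXT)")
con.execute("INSERT INTO events VALUES (1, 'click'), (1, 'view'), (2, 'click')")

# Columnar, analytics-oriented SQL over local data -- the "Big Data is dead" argument in one query.
print(con.execute("SELECT kind, count(*) FROM events GROUP BY kind").fetchall())
con.close()
```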
Today on the Salesforce Admins Podcast, we talk to Christine Stevens, Senior Salesforce Consultant at Turnberry Solutions. Join us as we chat about the keys to user management and why documentation is so important. You should subscribe for the full episode, but here are a few takeaways from our conversation with Christine Stevens. Start with […] The post Christine Stevens on User Management appeared first on Salesforce Admins.
In episode 124 of Jamstack Radio, Brian speaks with James Perkins of Clerk. Together they explore tools and practices for simplifying authentication and user management, the latest trends in DevRel hiring, and the correlation between internal feedback loops and productivity within small orgs.
As a developer, how do you manage the users that can access the system? Jonathan and Will share pointers and approaches for protecting your system or account from being accessed by multiple or anonymous users.
About this Episode
Different infrastructure-as-code tools
Ways to manage group permissions
Advantages of single sign-on
Two-factor authentication
Sponsors
Chuck's Resume Template
Developer Book Club starting with Clean Architecture by Robert C. Martin
Become a Top 1% Dev with a Top End Devs Membership
Picks
Jonathan - Watch The Lord of the Rings: The Rings of Power - Season 1
Jonathan - Finite and Infinite Games
Will - Asana
Will - Bearmax Massage Gun
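A small sketch of two of the ideas in this episode: group-based permission checks and TOTP two-factor verification. The pyotp library is one possible choice for the second factor, not one the hosts name.

```python
import pyotp  # pip install pyotp; the episode doesn't name a library -- this is just one option

# Group-based permissions: resolve a user's effective rights from group membership.
GROUP_PERMISSIONS = {"admins": {"deploy", "read"}, "developers": {"read"}}

def allowed(user_groups: set[str], action: str) -> bool:
    return any(action in GROUP_PERMISSIONS.get(g, set()) for g in user_groups)

print(allowed({"developers"}, "deploy"))  # False
print(allowed({"admins"}, "deploy"))      # True

# Two-factor authentication: verify a time-based one-time password as the second factor.
secret = pyotp.random_base32()  # generated and stored per user at enrollment
totp = pyotp.TOTP(secret)
code = totp.now()               # what the user's authenticator app would display
print(totp.verify(code))        # True within the current time window
```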
In this episode of PodSpot, the UK's number one HubSpot themed podcast, host Ian Townshend is joined by HubSpot's very own Senior Customer Onboarding Specialist, Siobhan Brady. From security and information management through to integrations and automation, this episode is brimming with insights.
The topics: Gorillas - Zapp - Arive - the VC scene - McMakler - HomeDay - Infarm - Nuri - Topi - Saiga +++ Gorillas is facing a down round #EXKLUSIV +++ Quick-commerce update: Zapp, Arive #EXKLUSIV +++ On the mood among the big VCs #ANALYSE +++ From September, VCs expect a fundraising marathon #ANALYSE +++ McMakler and HomeDay in crisis #ANALYSE +++ Revenue at Infarm is shrinking #ANALYSE +++ Nuri is facing a pay-to-play investment round #EXKLUSIV +++ Topi is about to close a large investment round #EXKLUSIV +++ Saiga is sliding into insolvency #EXKLUSIV Our sponsor: If your business needs a web application, for example because you are founding the next software-as-a-service startup, then listen up. Today's episode is presented by ROQ Technology. Anyone who has ever built a web app knows that the first steps cost a lot of time: choosing the technologies, the initial setup of the infrastructure and the app, or building login, user management, notifications, mailing, or chat. It is always the same steps, and a lot can go wrong. With the ROQ suite it is different: from day one you build on a powerful, flexibly customizable application, and features like login, user management, notifications, mailing, chat, and many more are already integrated. That quickly saves you more than 200 developer days, and your developers will love the modern tech stack, so your unique app goes live faster than ever. Even if you don't have a tech team yet, you can get started right away: the ROQ Founders' Program supports you in developing your app. Listeners of the Insider podcast get the complete ROQ suite free for three months. Go to www.roq.tech/insider and secure the exclusive offer. At the microphone: Alexander Hüsing, deutsche-startups.de - www.linkedin.com/in/alexander-huesing/ & www.twitter.com/azrael74 Sven Schmidt, Maschinensucher - www.linkedin.com/in/sven-schmidt-maschinensucher/ Background: The deutsche-startups.de podcast consists of the formats #Insider, #StartupRadar, #Interview, and #Startup101. More at: www.deutsche-startups.de/tag/Podcast/ Suggestions to podcast@deutsche-startups.de. You can find our anonymous mailbox here: www.deutsche-startups.de/stille-post/
Today on the Salesforce Admins Podcast, we talk to Andrew Russo, Salesforce Architect at BACA Systems. Join us as we chat about the amazing user management super app he built and how you can approach building apps to be an even more awesome admin. You should subscribe for the full episode, but here are a […] The post Create a User Management Super App with Andrew Russo appeared first on Salesforce Admins.
Highlights from this week's episode:
Sokratis' realization that big corporations were not the best thing for him (2:56)
Transitioning from Workable to Clerk.dev (3:40)
Convincing developers to outsource components to a service (9:36)
Clerk's layered solutions and how they affect the developer and the end user (12:41)
Starting with Clerk from scratch vs. using Clerk to replace an existing component (19:55)
Synergies and SaaS starter kit (24:06)
Building Clerk to avoid a single point of failure (29:19)
Reflecting on the transformation and growth of Workable, and how it was like working at eight different companies (33:03)
Lessons Sokratis has taken away from his years as a developer (42:18)
The Data Stack Show is a weekly podcast powered by RudderStack. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
As marketers, many of us use Google Analytics for basic monthly reports like traffic and general sales data to measure our websites. It also provides insight into how visitors are getting there, what they are viewing, how they are moving through your site, and what they are looking for, which helps marketers create more relevant content and make informed decisions based on their audience's behavior. It's something I haven't spent much time digging into, so I committed to an online course, "Google Analytics for Marketers," and so far it's been worth the investment of time and money. This episode isn't a comprehensive overview, but something to get you thinking about using analytics and optimizing your site and content. If you work with websites and measure them, Google Analytics is a treasure trove of data and capabilities, including:
Google Analytics and Google Analytics 4 - why you need them and what the key differences are
Proper setup - important things to look for, such as views, user management, filters, and basic settings
Filters, filters, filters - the best and worst thing for your GA
Reports - getting familiar with them, secondary dimensions, benchmarking, and behaviors
Audience - understanding your audience: demographics of users, the browsers they use, and how they engage with your site
Traffic reports - where people are coming from and the different types of referrals
Site reports - where people entered and which pages they accessed
Goals - setting goals, goals reports, and ecommerce goals
There is a lot to Google Analytics and the information it can tell you about your audience, helping them find the content they are looking for, making it easier to convert, and much more.
CONNECT WITH US
Enjoy this episode or have questions? We want to hear from you. Connect with Chris Sabbarese at Corona Tools on Twitter and our new GILN Facebook Group. This closed group is for like-minded individuals who care about and discuss gardening, plants, trees, and landscaping-related topics.
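For marketers comfortable with a little code, here is a hedged sketch of pulling a basic channel report with the GA4 Data API Python client. The property ID is a placeholder, and the exact dimension and metric names available can vary by property.

```python
# pip install google-analytics-data; requires GOOGLE_APPLICATION_CREDENTIALS to point to a service account key.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import DateRange, Dimension, Metric, RunReportRequest

PROPERTY_ID = "123456789"  # placeholder GA4 property ID

client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property=f"properties/{PROPERTY_ID}",
    dimensions=[Dimension(name="sessionDefaultChannelGroup")],  # where people are coming from
    metrics=[Metric(name="activeUsers"), Metric(name="conversions")],
    date_ranges=[DateRange(start_date="30daysAgo", end_date="today")],
)
response = client.run_report(request)

for row in response.rows:
    channel = row.dimension_values[0].value
    users, conversions = (v.value for v in row.metric_values)
    print(f"{channel}: {users} users, {conversions} conversions")
```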
On this edition of Petri Dish, we (virtually) sat down with Nuvolex to learn more about their SaaS Management platform and to better understand their automation tools. You can listen to the conversation below or check out their website, here. The post Petri Dish: Talking Tenant and Microsoft User Management with Nuvolex appeared first on Petri.
Welcome to HubShots Episode 230: What parts of HubSpot do Enterprises focus on? This edition we dive into the parts of HubSpot we see Enterprises focussing on, including:
• Lifecycle management
• Lead scoring (when it becomes relevant)
• User Management and Teams
• Content Partitioning
• Service Hub Pro NPS surveys
• Attribution reporting
• Advanced Workflows
Did a colleague forward this episode to you? Sign up here to get yours every Friday. Please forward this on to your work colleagues. Recorded: Monday 07 December 2020 | Published: Friday 11 December 2020
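As a generic illustration of the lead-scoring idea mentioned above (not HubSpot's actual scoring engine), a toy rule-based scorer might look like this:

```python
# A toy illustration of lead scoring; rule names and point values are made up for the example.
SCORING_RULES = [
    ("visited_pricing_page", 20),
    ("opened_last_email", 5),
    ("job_title_contains_manager", 15),
    ("country_in_target_market", 10),
]

def score_lead(attributes: dict[str, bool]) -> int:
    """Sum the points for every rule the lead satisfies."""
    return sum(points for rule, points in SCORING_RULES if attributes.get(rule))

lead = {"visited_pricing_page": True, "opened_last_email": True}
print(score_lead(lead))  # 25 -- route to sales once a threshold is crossed
```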
What are you getting paid for as a programmer? To create high-quality software, fast. Focus on a technology, become lightning fast with it, and produce software. Also, an observation: a lot of internet systems are user oriented, with user management, roles, and content with some defined visibility.
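A minimal sketch of that recurring pattern, users with roles and content with defined visibility:

```python
from dataclasses import dataclass

# Minimal sketch of the pattern the episode observes: users, roles, content with defined visibility.
@dataclass
class User:
    name: str
    roles: set

@dataclass
class Content:
    title: str
    visible_to: set  # roles allowed to see this item

def can_view(user: User, item: Content) -> bool:
    """A user may view content if they hold at least one of its permitted roles."""
    return bool(user.roles & item.visible_to)

post = Content("Quarterly roadmap", visible_to={"editor", "admin"})
print(can_view(User("alice", {"admin"}), post))  # True
print(can_view(User("bob", {"viewer"}), post))   # False
```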
In the news, OpenVPN and JumpCloud Partner to Bring Secure Cloud-based Authentication and User Management to VPN, IdenTrust and Device Authority Collaborate to Deliver Secure Lifecycle Management to the IoT, Tufin Prices NYSE IPO at $108 Million, Bad security hygiene still a major risk for enterprise IT networks and much more! Full Show Notes: https://wiki.securityweekly.com/ES_Episode133 Visit http://securityweekly.com/esw for all the latest episodes!
In this episode, I share my thoughts on user management. Host: Paul Joyner Email: paul@sysadmintoday.com Facebook: https://www.facebook.com/sysadmintoday Twitter: https://twitter.com/SysadminToday
Show Links
AD Manager Plus: https://www.manageengine.com/products/ad-manager/
Office 365 Offboarding script: https://gallery.technet.microsoft.com/office/Staff-Off-Boarding-Script-430bdd0a
Please Support the Channel: https://www.patreon.com/sysadmintoday
Greg Swift joins us to discuss using Terraform to manage Github users. Please excuse our technical difficulties with the live demo. The documents are uploaded to Github and a recorded demo will follow. Links: https://github.com/github-as-code https://www.hashicorp.com/blog/managing-github-with-terraform https://www.terraform.io/docs/
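Since the live demo isn't reproduced here, the sketch below shows the kind of GitHub org-membership change that Terraform's github provider manages declaratively, issued directly against the REST API. The org name, username, and token are placeholders.

```python
import os
import requests

# Illustration of the underlying GitHub API call that Terraform's github provider reconciles
# declaratively on every apply; ORG, USERNAME, and the token are placeholders.
ORG = "example-org"
USERNAME = "new-engineer"
TOKEN = os.environ["GITHUB_TOKEN"]

resp = requests.put(
    f"https://api.github.com/orgs/{ORG}/memberships/{USERNAME}",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"},
    json={"role": "member"},  # desired state: the user is an ordinary org member
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["state"])  # "pending" until the user accepts the invitation
```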
This week on BSD Now, Adrian Chadd on bringing up 802.11ac in FreeBSD, a PFsense and OpenVPN tutorial, and we talk about an interesting ZFS storage pool checkpoint project. This episode was brought to you by Headlines Bringing up 802.11ac on FreeBSD (http://adrianchadd.blogspot.com/2017/04/bringing-up-80211ac-on-freebsd.html) Adrian Chadd has a new blog post about his work to bring 802.11ac support to FreeBSD 802.11ac allows for speeds up to 500mbps and total bandwidth into multiple gigabits The FreeBSD net80211 stack has reasonably good 802.11n support, but no 802.11ac support. I decided a while ago to start adding basic 802.11ac support. It was a good exercise in figuring out what the minimum set of required features are and another excuse to go find some of the broken corner cases in net80211 that needed addressing. 802.11ac introduces a few new concepts that the stack needs to understand. I decided to use the QCA 802.11ac parts because (a) I know the firmware and general chip stuff from the first generation 11ac parts well, and (b) I know that it does a bunch of stuff (like rate control, packet scheduling, etc) so I don't have to do it. If I chose, say, the Intel 11ac parts then I'd have to implement a lot more of the fiddly stuff to get good behaviour. Step one - adding VHT channels. I decided in the shorter term to cheat and just add VHT channels to the already very large ieee80211channel map. The linux way of there being a channel context rather than hundreds of static channels to choose from is better in the long run, but I wanted to get things up and running. So, that's what I did first - I added VHT flags for 20, 40, 80, 80+80 and 160MHz operating modes and I did the bare work required to populate the channel lists with VHT channels as well. Then I needed to glue it into an 11ac driver. My ath10k port was far enough along to attempt this, so I added enough glue to say "I support VHT" to the iccaps field and propagated it to the driver for monitor mode configuration. And yes, after a bit of dancing, I managed to get a VHT channel to show up in ath10k in monitor mode and could capture 80MHz wide packets. Success! By far the most fiddly was getting channel promotion to work. net80211 supports the concept of dumb NICs (like atheros 11abgn parts) very well, where you can have multiple virtual interfaces but the "driver" view of the right configuration is what's programmed into the hardware. For firmware NICs which do this themselves (like basically everything sold today) this isn't exactly all that helpful. So, for now, it's limited to a single VAP, and the VAP configuration is partially derived from the global state and partially derived from the negotiated state. It's annoying, but it is adding to the list of things I will have to fix later. the QCA chips/firmware do 802.11 crypto offload. They actually pretend that there's no key - you don't include the IV, you don't include padding, or anything. You send commands to set the crypto keys and then you send unencrypted 802.11 frames (or 802.3 frames if you want to do ethernet only.) This means that I had to teach net80211 a few things: + frames decrypted by the hardware needed to have a "I'm decrypted" bit set, because the 802.11 header field saying "I'm decrypted!" is cleared + frames encrypted don't have the "i'm encrypted" bit set + frames encrypted/decrypted have no padding, so I needed to teach the input path and crypto paths to not validate those if the hardware said "we offload it all." 
Now comes the hard bit of fixing the shortcomings before I can commit the driver. There are .. lots. The first one is the global state. The ath10k firmware allows what they call 'vdevs' (virtual devices) - for example, multiple SSID/BSSID support is implemented with multiple vdevs. STA+WDS is implemented with vdevs. STA+P2P is implemented with vdevs. So, technically speaking I should go and find all of the global state that should really be per-vdev and make it per-vdev. This is tricky though, because a lot of the state isn't kept per-VAP even though it should be. Anyway, so far so good. I need to do some of the above and land it in FreeBSD-HEAD so I can finish off the ath10k port and commit what I have to FreeBSD. There's a lot of stuff coming - including all of the wave-2 stuff (like multiuser MIMO / MU-MIMO) which I just plainly haven't talked about yet. Viva la FreeBSD wireless! pfSense and OpenVPN Routing (http://www.terrafoundry.net/blog/2017/04/12/pfsense-openvpn/) This article tries to be a simple guide on how to enable your home (or small office) https://www.pfsense.org/ (pfSense) setup to route some traffic via the vanilla Internet, and some via a VPN site that you've setup in a remote location. Reasons to Setup a VPN: Control Security Privacy Fun VPNs do not instantly guarantee privacy, they're a layer, as with any other measure you might invoke. In this example I used a server that's directly under my name. Sure, it was a country with strict privacy laws, but that doesn't mean that the outgoing IP address wouldn't be logged somewhere down the line. There's also no reason you have to use your own OpenVPN install, there are many, many personal providers out there, who can offer the same functionality, and a degree of anonymity. (If you and a hundred other people are all coming from one IP, it becomes extremely difficult to differentiate, some VPN providers even claim a ‘logless' setup.) VPNs can be slow. The reason I have a split-setup in this article, is because there are devices that I want to connect to the internet quickly, and that I'm never doing sensitive things on, like banking. I don't mind if my Reddit-browsing and IRC messages are a bit slower, but my Nintendo Switch and PS4 should have a nippy connection. Services like Netflix can and do block VPN traffic in some cases. This is more of an issue for wider VPN providers (I suspect, but have no proof, that they just blanket block known VPN IP addresses.) If your VPN is in another country, search results and tracking can be skewed. This is arguable a good thing, who wants to be tracked? But it can also lead to frustration if your DuckDuckGo results are tailored to the middle of Paris, rather than your flat in Birmingham. The tutorial walks through the basic setup: Labeling the interfaces, configuring DHCP, creating a VPN: Now that we have our OpenVPN connection set up, we'll double check that we've got our interfaces assigned With any luck (after we've assigned our OPENVPN connection correctly, you should now see your new Virtual Interface on the pfSense Dashboard We're charging full steam towards the sections that start to lose people. Don't be disheartened if you've had a few issues up to now, there is no “right” way to set up a VPN installation, and it may be that you have to tweak a few things and dive into a few man-pages before you're set up. NAT is tricky, and frankly it only exists because we stretched out IPv4 for much longer than we should have. 
That being said it's a necessary evil in this day and age, so let's set up our connection to work with it. We need NAT here because we're going to masque our machines on the LAN interface to show as coming from the OpenVPN client IP address, to the OpenVPN server. Head over to Firewall -> NAT -> Outbound. The first thing we need to do in this section, is to change the Outbound NAT Mode to something we can work with, in this case “Hybrid.” Configure the LAN interface to be NAT'd to the OpenVPN address, and the INSECURE interface to use your regular ISP connection Configure the firewall to allow traffic from the LAN network to reach the INSECURE network Then add a second rule allowing traffic from the LAN network to any address, and set the gateway the the OPENVPN connection And there you have it, traffic from the LAN is routed via the VPN, and traffic from the INSECURE network uses the naked internet connection *** Switching to OpenBSD (https://mndrix.blogspot.co.uk/2017/05/switching-to-openbsd.html) After 12 years, I switched from macOS to OpenBSD. It's clean, focused, stable, consistent and lets me get my work done without any hassle. When I first became interested in computers, I thought operating systems were fascinating. For years I would reinstall an operating system every other weekend just to try a different configuration: MS-DOS 3.3, Windows 3.0, Linux 1.0 (countless hours recompiling kernels). In high school, I settled down and ran OS/2 for 5 years until I graduated college. I switched to Linux after college and used it exclusively for 5 years. I got tired of configuring Linux, so I switched to OS X for the next 12 years, where things just worked. But Snow Leopard was 7 years ago. These days, OS X is like running a denial of service attack against myself. macOS has a dozen apps I don't use but can't remove. Updating them requires a restart. Frequent updates to the browser require a restart. A minor XCode update requires me to download a 4.3 GB file. My monitors frequently turn off and require a restart to fix. A system's availability is a function (http://techthoughts.typepad.com/managing_computers/2007/11/availability-mt.html) of mean time between failure and mean time to repair. For macOS, both numbers are heading in the wrong direction for me. I don't hold any hard feelings about it, but it's time for me to get off this OS and back to productive work. I found OpenBSD very refreshing, so I created a bootable thumb drive and within an hour had it up and running on a two-year old laptop. I've been using it for my daily work for the past two weeks and it's been great. Simple, boring and productive. Just the way I like it. The documentation is fantastic. I've been using Unix for years and have learned quite a bit just by reading their man pages. OS releases come like clockwork every 6 months and are supported for 12. Security and other updates seem relatively rare between releases (roughly one small patch per week during 6.0). With syspatch in 6.1, installing them should be really easy too. ZFS Storage Pool Checkpoint Project (https://sdimitro.github.io/post/zpool-checkpoint) During the OpenZFS summit last year (2016), Dan Kimmel and I quickly hacked together the zpool checkpoint command in ZFS, which allows reverting an entire pool to a previous state. Since it was just for a hackathon, our design was bare bones and our implementation far from complete. Around a month later, we had a new and almost complete design within Delphix and I was able to start the implementation on my own. 
I completed the implementation last month, and we're now running regression tests, so I decided to write this blog post explaining what a storage pool checkpoint is, why we need it within Delphix, and how to use it. The Delphix product is basically a VM running DelphixOS (a derivative of illumos) with our application stack on top of it. During an upgrade, the VM reboots into the new OS bits and then runs some scripts that update the environment (directories, snapshots, open connections, etc.) for the new version of our app stack. Software being software, failures can happen at different points during the upgrade process. When an upgrade script that makes changes to ZFS fails, we have a corresponding rollback script that attempts to bring ZFS and our app stack back to their previous state. This is very tricky as we need to undo every single modification applied to ZFS (including dataset creation and renaming, or enabling new zpool features). The idea of Storage Pool Checkpoint (aka zpool checkpoint) deals with exactly that. It can be thought of as a “pool-wide snapshot” (or a variation of extreme rewind that doesn't corrupt your data). It remembers the entire state of the pool at the point that it was taken and the user can revert back to it later or discard it. Its generic use case is an administrator that is about to perform a set of destructive actions to ZFS as part of a critical procedure. She takes a checkpoint of the pool before performing the actions, then rewinds back to it if one of them fails or puts the pool into an unexpected state. Otherwise, she discards it. With the assumption that no one else is making modifications to ZFS, she basically wraps all these actions into a “high-level transaction”. I definitely see value in this for the appliance use case Some usage examples follow, along with some caveats. One of the restrictions is that you cannot attach, detach, or remove a device while a checkpoint exists. However, the zpool add operation is still possible, however if you roll back to the checkpoint, the device will no longer be part of the pool. Rather than a shortcoming, this seems like a nice feature, a way to help users avoid the most common foot shooting (which I witnessed in person at Linux Fest), adding a new log or cache device, but missing a keyword and adding it is a storage vdev rather than a aux vdev. This operation could simply be undone if a checkpoint where taken before the device was added. *** News Roundup Review of TrueOS (https://distrowatch.com/weekly.php?issue=20170501#trueos) TrueOS, which was formerly named PC-BSD, is a FreeBSD-based operating system. TrueOS is a rolling release platform which is based on FreeBSD's "CURRENT" branch, providing TrueOS with the latest drivers and features from FreeBSD. Apart from the name change, TrueOS has deviated from the old PC-BSD project in a number of ways. The system installer is now more streamlined (and I will touch on that later) and TrueOS is a rolling release platform while PC-BSD defaulted to point releases. Another change is PC-BSD used to allow the user to customize which software was installed at boot time, including the desktop environment. The TrueOS project now selects a minimal amount of software for the user and defaults to using the Lumina desktop environment. From the conclusions: What I took away from my time with TrueOS is that the project is different in a lot of ways from PC-BSD. Much more than just the name has changed. 
The system is now more focused on cutting edge software and features in FreeBSD's development branch. The install process has been streamlined and the user begins with a set of default software rather than selecting desired packages during the initial setup. The configuration tools, particularly the Control Panel and AppCafe, have changed a lot in the past year. The designs have a more flat, minimal look. It used to be that PC-BSD did not have a default desktop exactly, but there tended to be a focus on KDE. With TrueOS the project's in-house desktop, Lumina, serves as the default environment and I think it holds up fairly well. In all, I think TrueOS offers a convenient way to experiment with new FreeBSD technologies and ZFS. I also think people who want to run FreeBSD on a desktop computer may want to look at TrueOS as it sets up a graphical environment automatically. However, people who want a stable desktop platform with lots of applications available out of the box may not find what they want with this project. A simple guide to install Ubuntu on FreeBSD with byhve (https://www.davd.eu/install-ubuntu-on-freebsd-with-bhyve/) David Prandzioch writes in his blog: For some reasons I needed a Linux installation on my NAS. bhyve is a lightweight virtualization solution for FreeBSD that makes that easy and efficient. However, the CLI of bhyve is somewhat bulky and bare making it hard to use, especially for the first time. This is what vm-bhyve solves - it provides a simple CLI for working with virtual machines. More details follow about what steps are needed to setup vm_bhyve on FreeBSD Also check out his other tutorials on his blog: https://www.davd.eu/freebsd/ (https://www.davd.eu/freebsd/) *** Graphical Overview of the Architecture of FreeBSD (https://dspinellis.github.io/unix-architecture/arch.pdf) This diagram tries to show the different components that make up the FreeBSD Operating Systems It breaks down the various utilities, libraries, and components into some categories and sub-categories: User Commands: Development (cc, ld, nm, as, etc) File Management (ls, cp, cmp, mkdir) Multiuser Commands (login, chown, su, who) Number Processing (bc, dc, units, expr) Text Processing (cut, grep, sort, uniq, wc) User Messaging (mail, mesg, write, talk) Little Languages (sed, awk, m4) Network Clients (ftp, scp, fetch) Document Preparation (*roff, eqn, tbl, refer) Administrator and System Commands Filesystem Management (fsck, newfs, gpart, mount, umount) Networking (ifconfig, route, arp) User Management (adduser, pw, vipw, sa, quota*) Statistics (iostat, vmstat, pstat, gstat, top) Network Servers (sshd, ftpd, ntpd, routed, rpc.*) Scheduling (cron, periodic, rc.*, atrun) Libraries (C Standard, Operating System, Peripheral Access, System File Access, Data Handling, Security, Internationalization, Threads) System Call Interface (File I/O, Mountable Filesystems, File ACLs, File Permissions, Processes, Process Tracing, IPC, Memory Mapping, Shared Memory, Kernel Events, Memory Locking, Capsicum, Auditing, Jails) Bootstrapping (Loaders, Configuration, Kernel Modules) Kernel Utility Functions Privilege Management (acl, mac, priv) Multitasking (kproc, kthread, taskqueue, swi, ithread) Memory Management (vmem, uma, pbuf, sbuf, mbuf, mbchain, malloc/free) Generic (nvlist, osd, socket, mbuf_tags, bitset) Virtualization (cpuset, crypto, device, devclass, driver) Synchronization (lock, sx, sema, mutex, condvar_, atomic_*, signal) Operations (sysctl, dtrace, watchdog, stack, alq, ktr, panic) I/O Subsystem Special 
Devices (line discipline, tty, raw character, raw disk) Filesystems (UFS, FFS, NFS, CD9660, Ext2, UDF, ZFS, devfs, procfs) Sockets Network Protocols (TCP, UDP, UCMP, IPSec, IP4, IP6) Netgraph (50+ modules) Drivers and Abstractions Character Devices CAM (ATA, SATA, SAS, SPI) Network Interface Drivers (802.11, ifae, 100+, ifxl, NDIS) GEOM Storage (stripe, mirror, raid3, raid5, concat) Encryption / Compression (eli, bde, shsec, uzip) Filesystem (label, journal, cache, mbr, bsd) Virtualization (md, nop, gate, virtstor) Process Control Subsystems Scheduler Memory Management Inter-process Communication Debugging Support *** Official OpenBSD 6.1 CD - There's only One! (http://undeadly.org/cgi?action=article&sid=20170503203426&mode=expanded) Ebay auction Link (http://www.ebay.com/itm/The-only-Official-OpenBSD-6-1-CD-set-to-be-made-For-auction-for-the-project-/252910718452) Now it turns out that in fact, exactly one CD set was made, and it can be yours if you are the successful bidder in the auction that ends on May 13, 2017 (About 3 days from when this episode was recorded). The CD set is hand made and signed by Theo de Raadt. Fun Fact: The winning bidder will have an OpenBSD CD set that even Theo doesn't have. *** Beastie Bits Hardware Wanted by OpenBSD developers (https://www.openbsd.org/want.html) Donate hardware to FreeBSD developers (https://www.freebsd.org/donations/index.html#components) Announcing NetBSD and the Google Summer of Code Projects 2017 (https://blog.netbsd.org/tnf/entry/announcing_netbsd_and_the_google) Announcing FreeBSD GSoC 2017 Projects (https://wiki.freebsd.org/SummerOfCode2017Projects) LibreSSL 2.5.4 Released (https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.5.4-relnotes.txt) CharmBUG Meeting - Tor Browser Bundle Hack-a-thon (https://www.meetup.com/CharmBUG/events/238218840/) pkgsrcCon 2017 CFT (https://mail-index.netbsd.org/netbsd-advocacy/2017/05/01/msg000735.html) Experimental Price Cuts (https://blather.michaelwlucas.com/archives/2931) Linux Fest North West 2017: Three Generations of FreeNAS: The World's most popular storage OS turns 12 (https://www.youtube.com/watch?v=x6VznQz3VEY) *** Feedback/Questions Don - Reproducible builds & gcc/clang (http://dpaste.com/2AXX75X#wrap) architect - C development on BSD (http://dpaste.com/0FJ854X#wrap) David - Linux ABI (http://dpaste.com/2CCK2WF#wrap) Tom - ZFS (http://dpaste.com/2Z25FKJ#wrap) RAIDZ Stripe Width Myth, Busted (https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz) Ivan - Jails (http://dpaste.com/1Z173WA#wrap) ***
Secure user sign-up and sign-in is critical for many mobile and web applications. Amazon Cognito is the easiest way to secure your mobile and web applications by providing a comprehensive identity solution for end user management, registration, sign-in, and security. In this product deep dive, we will walk through Cognito’s feature set, which includes serverless flows for user management and sign-in, a fully managed user directory, integrations with existing corporate directories, and many other features. In addition, we will cover key use cases and discuss the associated benefits.
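A hedged sketch of the serverless sign-up and sign-in flow using boto3's cognito-idp client; the region, app client ID, and user values are placeholders, and USER_PASSWORD_AUTH must be enabled on the app client.

```python
import boto3  # pip install boto3

client = boto3.client("cognito-idp", region_name="us-east-1")
CLIENT_ID = "example-app-client-id"  # placeholder Cognito app client ID

# Register a user in the managed user directory.
client.sign_up(
    ClientId=CLIENT_ID,
    Username="jane@example.com",
    Password="CorrectHorse1!",
    UserAttributes=[{"Name": "email", "Value": "jane@example.com"}],
)

# After the user confirms (e.g. via emailed code), authenticate and receive JWTs.
resp = client.initiate_auth(
    ClientId=CLIENT_ID,
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "jane@example.com", "PASSWORD": "CorrectHorse1!"},
)
print(resp["AuthenticationResult"]["IdToken"][:40], "...")  # ID token for the signed-in user
```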
Would you like to learn more about Oracle’s Security Auditing and Reporting capabilities for the HCM Cloud Service? Muthuvel Arumugam, Oracle Senior Director and Enterprise Architect talks about the various audit reporting functions that are available today in the HCM Cloud Service.
As cloud services become essential business tools, managing and securing access to enterprise networks becomes more complex. Shared accounts, former employees keeping access to critical systems, and employees using their private cloud services at work all pose security risks. Chad Hensler of clearlogin joins me to discuss how to handle these challenges.
The AD User Self-Update is a workflow project that allows organizations to more effectively manage their Active Directory environments. It allows for users within an organization to request changes be made to their AD accounts which will be committed upon execution of a 3 step approval process. The workflow pulls the user’s AD information into a form from which the user can edit and submit the changes. The workflow checks to see if the user has an approved manager, if not then the user can search for a manager to submit their changes to. If the user does have a manager, they will be given an option to select that manager for submission or select/search for a different manager. Once submitted by the user, the workflow will be sent to the manager via email/link for approval. The manager will have the option to either accept or reject the changes. If the changes are accepted they will be updated and a confirmation email will be sent to the user. If the changes are rejected (manager can/must give reason in a text box) then the user will be notified via email stating what the manager wrote and the changes will not be updated. More information: itsdelivers.com/services/itautomation
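As a plain-Python sketch of the approval flow described above (not the vendor's actual implementation), the life cycle of a change request might be modeled like this; the real workflow runs inside an IT automation platform and writes the approved edits to Active Directory.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    """A user's requested AD edits, routed to a manager for approval."""
    user: str
    manager: str
    changes: dict
    status: str = "submitted"
    history: list = field(default_factory=list)

    def approve(self) -> None:
        self.status = "approved"
        self.history.append(f"{self.manager} approved; AD attributes to update: {self.changes}")
        # here the workflow would commit the edits to AD and email a confirmation to the user

    def reject(self, reason: str) -> None:
        self.status = "rejected"
        self.history.append(f"{self.manager} rejected: {reason}")
        # here the workflow would email the manager's reason back to the user, leaving AD unchanged

req = ChangeRequest("jdoe", "asmith", {"telephoneNumber": "555-0100", "title": "Analyst"})
req.approve()
print(req.status, req.history[-1])
```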
Justin Taylor interviews Baber Amin about Privileged User Management in Part 1 of a 2-part interview.