345 Tech Talks are conversations between Andrew and Paul about how to build and operate amazing software. We focus on software delivery, technology and architecture. Looking at a different topic each week, we roam freely over the domain of tech and invite you to ramble along with us!
In this episode, Andrew Rivers and Danny Hayter from 345 Technology chat about Agile projects and Agile commercials: how you manage them, and what to do when things go wrong!
In this episode, Andrew talks with Gerry Kelly from Optus Homes, who have designed an app that allows tenants to manage their home rental account. It integrates with existing housing management solutions in both the social and private rental sectors. For more information you can visit: https://optus-homes.com
In this episode, Andrew talks with David Royle from SRM Europe. They chat about new ways of thinking that put data science at the heart of your business decision-making, along with how to build a long-lasting, data centric culture into your business.
In this episode, Dr Andrew Rivers goes down deep geek with Nino Crudele, an Azure MVP and Certified Ethical Hacker, an expert in cloud security and governance. Please watch and enjoy!
In this episode, Andrew Rivers from 345 Technology is joined by Steve Pereira from Visible Value Stream Consulting - https://visible.is - talking about agile, lean and finding value in software projects.
In this episode, Andrew from 345 talks with Dirk Huibers from Spotr.ai about image recognition and AI.
In this episode Andrew is joined by Chris Tabb, partner at Leading Edge IT, and a Big Data expert. Please sit back and enjoy them talking about all sorts of things data - Chris' favourite topic!
Sudeshna Sen is a strategist and data scientist, with a wealth of experience in business strategy and data science, skilled in helping clients adopt AI as a strategic priority. She is Head of Data Science and Insights at a top events company. In this podcast, Andrew chats to her about all things data science and machine learning. Sudeshna is also a career strategist, helping people achieve their career goals. Visit her website and find out more here: https://www.theabundancepsyche.com
Andrew Burgess is one of the UK's leading experts in AI strategy. With a wealth of accreditations and roles including government advisor, Andrew has deep knowledge of AI that he is thankfully happy to share. In this special podcast we start by talking about an area that both our businesses are involved in - social housing. We drill into the ways in which AI is helping this sector, especially given the slightly different ethical lens compared with commercial businesses. We then talk about AI strategy more broadly, and how Andrew goes about developing AI strategy for organisations. Visit Andrew's company site, Greenhouse Intelligence, here: https://thegreenhouse.ai/ Andrew's personal site is here: https://ajburgess.com/ Be sure to sign up for the monthly newsletter "That Space Cadet Glow". You can find out more about Andrew's book, "The Executive Guide to Artificial Intelligence", here: https://ajburgess.com/blog/executive-guide-artificial-intelligence-palgrave-macmillan-2018/
In this episode Andrew and Danny chat about AI strategy, and how you get started in AI.
In this episode Andrew and Danny chat about the legacy product BizTalk, why you might have it in your datacenter, why you might want to get rid of it, and what you can do to make this happen using tools such as the BizTalk Migrator.
In this episode Andrew and Danny talk about 2020 and the effect not only on us at 345, but also how the same trends have affected everyone else. We also look forward to 2021, what's going to happen with technology and why we're optimistic despite it all.
In this episode Andrew and Danny look into why measuring the temperature in supermarket fridges is really important, and it turns out to be amazingly interesting!
In this episode we take a first look at IoT and cover off some of the common scenarios where IoT is a great solution.
In this episode Andrew and Danny look into the reasons why you might need a modern data warehouse, and how these differ from old-fashioned data warehouses.
How do you cut the most pairs of shoes out of a hide of leather? And make sure there are no blemishes? All this and more in this episode!
How will AI ensure you don't get a dented can of baked beans? Or that you don't get green crisps in your packet of cheese & onion? In this video Andrew and Danny chat about quality control in manufacturing and how AI is improving this all the time.
In this episode Andrew and Danny talk about the AI behind chat bots, how they can change your business, and why, as a consumer, you might be happy to use them.
In this episode, Andrew and Dan discuss the technology changes coming down the line that impact the future of integration on-premises - but also in the Cloud!
This is a recording of a bonus follow-up session off the back of our recent webinar where we were building Logic Apps LIVE! Andrew and Dan take you on a journey of discovery in the world of cloud integration, building your first logic app and showing you some great tips along the way.
This is a recording of our recent webinar where we were building Logic Apps LIVE! Andrew and Dan take you on a journey of discovery in the world of cloud integration, building your first logic app and showing you some great tips along the way.
In this episode Andrew and Danny run a simple demo that builds a machine learning model using Azure Machine Learning and Python. This demonstrates that the tools and the code to build machine learning models are easy to use and accessible... what you need to focus on is the data and choosing the correct model.
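As a flavour of how little code is involved, here is a minimal sketch in plain Python with scikit-learn - not the episode's exact Azure Machine Learning demo, just an illustration that training and evaluating a model is a handful of lines, and that the real effort goes into the data and the model choice.

```python
# A minimal sketch (not the episode's Azure ML demo): train and evaluate a
# simple classification model. The code is short - the work is in the data
# and in choosing a model that suits it.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                    # features and labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)            # hold data back to validate

model = LogisticRegression(max_iter=200)             # the "choose a model" step
model.fit(X_train, y_train)                          # training is one call

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```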
In this video Andrew and Dan discuss what a target solution would look like for a migrated BizTalk application running on Azure Integration Services. This considers things like how we support publish-subscribe, ordered delivery, correlation and port processing. This webinar series is all about migrating BizTalk applications to AIS, which is inspired by the announcement that Microsoft are releasing the BizTalk Migrator this autumn - a BizTalk Migration Tool that will make the journey of migrating from BizTalk to Azure simpler, faster and easier.
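For a sense of what some of these patterns can look like on Azure, here is a hedged sketch of publish-subscribe with correlation using Azure Service Bus topics and the azure-servicebus Python SDK. This is an illustration only - it is not the BizTalk Migrator's generated output, and the connection string, topic and subscription names are made up.

```python
# Illustrative sketch only: publish-subscribe and correlation on Azure Service
# Bus topics. Connection string, topic and subscription names are placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"   # hypothetical

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Publish-subscribe: every subscription on the topic receives a copy.
    with client.get_topic_sender(topic_name="orders") as sender:
        sender.send_messages(ServiceBusMessage(
            '{"orderId": "1234"}',
            correlation_id="1234",   # correlate work across the services handling this order
        ))
    # (Ordered delivery would use sessions - enable them on the subscription
    #  and send/receive with a session_id.)

    # A subscriber, playing roughly the role of a receive port plus processing.
    with client.get_subscription_receiver(
            topic_name="orders", subscription_name="billing") as receiver:
        for msg in receiver.receive_messages(max_wait_time=5):
            print("received:", str(msg), "correlation:", msg.correlation_id)
            receiver.complete_message(msg)
```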
In this video Andrew and Dan discuss what the technology stack would look like for a migrated BizTalk application running on Azure Integration Services. This webinar series is all about migrating BizTalk applications to AIS, which is inspired by the announcement that Microsoft are releasing the BizTalk Migrator this autumn - A BizTalk Migration Tool that will make the journey of migrating from BizTalk to Azure simpler, faster and easier.
At Integrate 2020 at the start of June Microsoft announced that they would be releasing a BizTalk Migrator - a tool to help you migrate BizTalk applications to Azure Integration Services. This webinar unpacks all of the information released so far so you can get the best insights into what it means for you.
A machine learning model is a mathematical function. There, that was easy! In order for machine learning AI models to operate on something you have to reduce it to numbers. Fortunately, everything in the world of computers is a number underneath. A photo is a collection of numbers. A video is a collection of numbers. You feed these into your model and what you get back is... more numbers. The maths that makes this happen is quite mind-bending. The good news - you don't need to know it! The AI tools we now have available do the heavy lifting for us and let us build and use models without needing a PhD in mathematics.
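To make that concrete, here is a toy Python sketch: the "photo" is just an array of numbers, and the "model" is just a function from numbers to numbers (a made-up brightness score, not a trained model).

```python
# A toy illustration of the idea above: an image is an array of numbers, and
# a "model" is a function that maps those numbers to more numbers. This is a
# made-up brightness score, not a real trained model.
import numpy as np

image = np.random.randint(0, 256, size=(28, 28))   # a 28x28 greyscale "photo"

def model(pixels: np.ndarray) -> float:
    """Map an array of numbers to a single number."""
    return float(pixels.mean() / 255.0)

print(model(image))   # what you get back is... more numbers
```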
In this video Andrew and Danny talk about the first steps you can take to start using AI within your organisation. It's as easy as calling an API. The only limits are your imagination and ambition.
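As a rough sketch of what "calling an API" looks like in practice, here is a hedged Python example that posts an image to a hosted vision service and prints the predictions. The endpoint, key and response shape are placeholders - swap in your provider's real API.

```python
# A hedged sketch of "it's as easy as calling an API": post an image to a
# hosted vision service and read back the predictions. Endpoint, key and
# response format are hypothetical placeholders.
import requests

ENDPOINT = "https://example-vision-service/analyze"   # hypothetical URL
API_KEY = "<your-api-key>"

with open("photo.jpg", "rb") as f:
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
    )

response.raise_for_status()
print(response.json())   # e.g. labels and confidence scores - depends on the service
```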
In this presentation we're talking about Data Lakes. This is a big topic, and we're covering all the bases for planning your Data Lake: What is a data lake? Why would you need one? What would you put in one? How would you build one? What do you do when you've got one? What does one cost? What problems should you avoid?
This video is a recording of the online March 2020 meetup of a combined Data Science South Coast and Solent IoT and ML Meetup. In this webinar Tom Wright presents a great talk on the use of data science in biddable marketing. The sheer scale of the data challenge here is mind boggling, with trillions of data points processed every month. Join Data Science South Coast here: https://www.meetup.com/Data-Science-South-Coast/ Join the Solent IoT and ML meetup here: https://www.meetup.com/Solent-IoT-and-Machine_Learning-Meetup/
Things you need to look out for in data lakes:
- Security
- Access and audit
- Data sovereignty
- Compliance, GDPR
We take you through the things you should get right up-front so your data lake solution is fit for purpose!
The diagram we're talking about in the video is here: https://345.technology/wp-content/uploads/2019/12/Data-Strategy-Your-Business-2048x1445.png The 345 data engineering pack is available for download here: https://345.technology/wp-content/uploads/2019/12/345-Technology-Data-Engineering-Pack.pdf
Show notes: https://www.345.systems/podcast/episode-11-first-impressions-from-blockchain-expo-global/

This week the 345 / Glu team were at Blockchain Expo Global (https://www.blockchain-expo.com/) at Kensington Olympia (https://olympia.london/), seeing what's happening in the world of blockchain and talking to people from across the industry. From this I've distilled some thoughts about what's hot at the moment and what's not. This is my personal take on the expo; I'm sure the rest of the guys will have more to add! On the 345 Tech Talks podcast we haven't even started talking about blockchain yet, so it would be unfair for us to dive in too deep. Just as well, because this is a good starting point for an overview.

What's Blockchain?
So what's blockchain? Well, we can conceive of a blockchain as a type of database where chunks of data ("blocks") are written one after another to form a series of connected links ("chain"), thus blockchain. There you go. Not too hard was it? The clever stuff is making that actually work, but from an application point of view, a database that has chunks of data written in sequence in a way that can't be tampered with is a great starting point on which you can begin understanding everything else. (There's a toy Python sketch of this structure further down these notes.)

Use Cases for Blockchain Technology
Owing to the write-once-and-it's-there-forever nature of blockchain there are certain applications that are best suited to the technology. For sure, cryptocurrency payments were the genesis of the technology. They're a great example of a permanent immutable record. Other areas that are great applications for blockchain are any type of legal document or assertion that needs to be recorded and used as evidence later. Proof of ownership. Copyright assertion. Land registry. Auditable events. All these are great examples of blockchain in action.

The Rise, Fall and Rise of Cryptocurrency Tokens
A year ago tokens were all the rage. People were setting up crypto businesses, creating tokens, inventing "tokenomics" for their business and then hoping people would buy into it. The mania ended soon afterwards, and lots of projects went with it. In some ways this is a good thing, because we need to sort the wheat from the chaff, sort the good and durable from the Ponzi schemes. Cryptocurrency tokens are in fact a great way to represent ownership of real things. You can make the indivisible divisible. You can own a millionth of a house, a tenth of a car and buy a sack of next year's harvest. In short, cryptocurrency tokens, backed by smart contracts (software code embedded within the token you own that enforces rules), offer a way to bring the tools of finance to everyone. You won't need to be able to access capital markets in London or New York in order to raise equity for your business in future. It's going to be available for everyone. We're a few years away from this, and this application of blockchain is in a bit of a lull. It's going to come back though, because the underlying need to widen access to finance hasn't gone away.

Wallets and Storage
A difference between this year and last is the level of sophistication there has been in the wallet space. Last year people were writing wallet apps and getting people to use them. This year we're talking integrated hardware, software, multi-signature workflow. You name it, the storage of cryptocurrency assets is maturing. Essentially, these are the modern equivalent of safety deposit boxes for your crypto assets. Really friendly people on the stands too.
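Here's the toy sketch promised above: a few lines of Python showing blocks linked by hashes, which is the core "chunks of data written in sequence that can't be tampered with" idea. Real blockchains add consensus, signatures and distribution on top; this is just the data structure.

```python
# A toy sketch of blocks linked into a chain: each block records a hash of
# the previous one, so earlier data cannot be altered without breaking every
# later link. Real blockchains add consensus and signatures on top of this.
import hashlib
import json

def make_block(data, previous_hash):
    block = {"data": data, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("genesis", previous_hash="0" * 64)
block_1 = make_block({"payment": "A pays B 5"}, previous_hash=genesis["hash"])
block_2 = make_block({"payment": "B pays C 2"}, previous_hash=block_1["hash"])

# Tampering with block_1 would change its hash and invalidate block_2's link.
print(block_2["previous_hash"] == block_1["hash"])   # True
```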
Talking about Glu
The project the 345 team have been engaged on lately is Glu (https://www.glu.lu), and the expo was an ideal place to validate our product ideas with industry insiders. Glu is the definitive product catalog for cryptocurrencies, tokens and blockchain, so in many ways we were pushing against an open door. The general feeling was that there is a need and an opportunity for Glu's product in this industry, to bind everyone together and help build trust in an industry so known for individuality and infighting. We left the expo with a renewed sense of mission to complete the Glu product. The live beta launches next month, so be sure to keep checking the website to see when it's landed! Thanks for listening, see you next week when I'll be back talking tech with Paul.
To view the episode on the 345 website click here (https://www.345.systems/podcast/episode-10-recap-reorient-remux/?utm_source=ep10&utm_medium=Audioboom&utm_campaign=345TechTalks). I've been wanting to do this episode for a while now, to place some context around why we have chosen the subjects we have, and where we have come from in terms of our thinking. Paul being away for the weekend has given me an ideal opportunity to sneak in with this one!

345 as an organisation stemmed from a great kickass dev team that the founders used to be in. When that particular team disbanded we left to form 345 and continue our good work. The specialisms we had adopted over the years gave 345 a particular focus on the following areas:

Business process automation. Call it what you will - Straight Through Processing (STP), Business Process Management (BPM), Integration, Orchestration - all of these terms mean a similar thing. We spent a lot of time writing software that automated business processes.

Microservices. When you're automating business processes you need to link steps of a process together, and each step can be expressed as a separate service. Microservices are born from a set of architectural principles that require any such service to be independent from its peers. This was a natural follow-on from the work we did in step 1.

IoT. Internet of Things - thousands, millions of devices connected over the Internet. Each transmitting and receiving data, which can be aggregated, transformed, stored, munged and mined to do a whole array of amazing things.

We also look at what we do as a set of "pillars", i.e. areas that define how we work:

Method. This pillar is all about how we build technology, and is the accumulated wisdom regarding how to build software, what you should think about to make sure you've covered all your bases, and what constitutes success. We focus on practice over process, because the right people doing the right things with the right goals will get you 90% of the way to your result.

Automation. This is how we remove the drudgery of repetitive tasks from creative people so they can focus on what they do best: using their brains to imagine a better future. We let the robots do what they are best at: consistently working at repetitive tasks.

Technology. We create stacks of technology that form the bedrock of proven solutions. Having pre-baked stacks helps us to focus on using technology to solve problems instead of worrying about whether we are using the right technology in the first place.

In this episode we go through the previous episodes and show where they fit into the three pillars, and talk about what's coming up, especially what's in the pipeline with regard to single page applications and IoT!
Summary
At a high level, our approach is split into 4 areas:
- Define - discovery and definition of what we're going to deliver
- Deliver - delivering the DevOps, infrastructure and shared services
- Exploit - maximising the ability to deliver apps onto this platform
- Maintain - ensuring we stay current, and making sure we can adapt to future needs
In this podcast we're looking at the first two of these. We are going from scratch to the point where you have the technology in place on which you can deliver and operate your microservices. When we talk about the technology platform, the latest iteration is the one we have implemented for Glu (https://www.glu.lu/?utm_source=ep09&utm_medium=Audioboom&utm_campaign=345TechTalks). This is launching in Q2 2019 and is a full, living example of the tech we are talking about.

Define
In the define phase we have 3 main areas of activity:
- Technical scope - discovering the main goals and constraints within which we work. This includes things like preferred cloud vendor, services that we must connect to, and how to apply security.
- Technical blueprint - creating a definition that can be shared with everyone involved so that we communicate a common understanding of what is going to be delivered. We already have a template based on what we would deliver, and we make the appropriate adaptations to this blueprint based on what we have learned in discovery.
- Implementation plan - this is the backlog of tasks that need to be done and a first cut at priority. We would approach this in an agile fashion; if you're following a different methodology you might choose a different way of project planning.
These activities are not expected to take a long time. We would expect this phase to be 2-4 weeks for a simple implementation and 4-8 weeks when there are particularly new or unique constraints in place. If this is taking longer than this timeframe you should take a hard look at your constraints to make sure you are not over-complicating things.

Deliver
After we've defined our blueprint we move swiftly on to implementing it. You can get in touch with us about how to reuse our out-of-the-box platform, you could modify it to suit your own needs, or simply follow these steps to create your own from scratch.
- Cloud accounts - we create the cloud accounts that resources will be deployed into. We can't start building without this, and it's the foundation for billing and security. At the very least we'd expect to have a master "parent" account, and within this there would be a DevOps account and a separate account for each environment so we get billing broken down for each.
- DevOps infrastructure - we create scripts to set up the DevOps environment, which will then persist for the duration of the project. Unlike typical application DevOps, this is triggered manually and tends to stay in place. In our case we use Spinnaker running on its own dedicated EKS cluster, and you could use other tools such as Jenkins in the same way.
- Environments - as described in episodes 6 and 7 we create repositories and DevOps processes for each environment. In the podcast we talk about how we treat different environments; for example, we spin up and tear down performance testing environments each time we use them, whereas many of the other environments are created once, kept live and then maintained. Note that we're talking about each of the environments being based on Kubernetes. We are not trying to replicate a complex on-premises deployment here, that's not the point. We're delivering a new, clean microservices platform that lets you build apps quickly using standardised technology.
- Logging and diagnostics - as we described in episode 1, we then put the logging and diagnostic tools in place in each of the environments, as this is shared across all of the applications that run on the platform.
- Data services - we then deploy the common data and messaging services that are also shared across applications. This would typically include NoSQL databases such as MongoDB or ScyllaDB and messaging such as Kafka. We would also apply service mesh technologies such as Istio.
All this gets us to the point where you have a platform you can hand over to your app teams. You're ready to cut the ribbon on it and start exploiting the opportunity you have created. Listen in next time for how best to get your app teams delivering and making use of this platform.
This article and episode is aimed at a technical audience: architects, developers and release managers.

A foundation for rapidly building microservices on Kubernetes
We're looking now at the DevOps stack we're building with, notably the stack that we are using to build Glu (https://www.glu.lu/?utm_source=ep08&utm_medium=Podcast&utm_campaign=345TechTalks) that will go into live Beta next month. If you've been following the podcast series you'll know that we're building a microservices platform leaning heavily on Kubernetes hosted in AWS. The tech stack we describe here is our way of doing this - if you're looking to put in place a similar stack elsewhere you should be able to get some great ideas from what we're discussing here. You can always book a free call with us to talk about your technology stack and your DevOps needs, we love hearing from you!

Going through the stack one piece at a time
We'll step through the stack piece by piece and explain what we use each of these for. This is a whistlestop tour of what we discuss in the episode, so please take time to listen to the episode in full!

GitHub
As we've said in the last episode (https://www.345.systems/podcast/episode-6-our-10-principles-for-devops-part-one/), Git is the source of truth. This is true for both infrastructure and applications. You can use any flavour of Git; we have chosen GitHub (https://github.com/) for a number of reasons:
- Cloud-based, no maintenance, developers can access from anywhere.
- The ecosystem of apps that play well with GitHub via webhooks.
- The tooling around pull requests, branch permissions and protecting branches, meaning we can keep our core code branches clean, tested and high quality.

Shippable
We use a cloud-based CI tool called Shippable (https://www.shippable.com/) that integrates with GitHub. This runs our CI process. We use this because:
- Builds run in containers so we can control the build stack easily.
- It works well with GitHub for reporting build success and code coverage.
- It's easy to use.
- It's inexpensive.
- It's cloud-based and hosted. Minimal maintenance.

Spinnaker
We use Spinnaker (https://www.spinnaker.io/) as a deployment platform. We host this in AWS using a dedicated Kubernetes cluster just for Spinnaker. This is because the DevOps tools need to run outside of the other environments. Spinnaker comes from the Netflix stable, and supports a number of deployment models we're interested in using such as canary and blue-green.

Pypyr
Pypyr (https://github.com/pypyr) is an opensource pipeline runner developed by fellow 345 partner Thomas (https://www.345.systems/author/thomas/). We use Pypyr because it lets us do many of the DevOps tasks that you normally write shell scripts for, but Pypyr lets you express them as YAML files. This gives us a more readable script, plus the underlying code to execute the steps is tested and high quality.

ClickUp
We use ClickUp (https://clickup.com/) for project management. Tasks integrate with GitHub well, and the software is not opinionated in how it is used, unlike some others (JIRA, we might be looking at you here). ClickUp lets us organise tasks very flexibly through its tagging system, which lets a task belong to multiple hierarchies at the same time. We can prioritise, organise by actor, organise by size, area of the system. It's easy to search, filter and update. The software is still maturing and there are areas where it can be seen as weak, however the flexibility we gain more than offsets this (to us, at any rate).
Developer Workstation
We use a POSIX workstation for development. A lot of the devs like to work natively on Macs, whereas if you're on Windows we use virtual machines running a flavour of Linux. Arch Linux (https://www.archlinux.org/) is a popular one, as it's so lean.

IDE
We are not prescriptive about IDE, but our default choice is Visual Studio Code (https://code.visualstudio.com/). Some of the guys use other editors; it depends on their personal choice.

Slack
For internal communications we use Slack (https://slack.com/). This gives us a flexible ChatOps platform that integrates with the tools above, so we get a feed from GitHub for pull requests and merges, we get a feed from Shippable for builds, and we can get feeds from ClickUp when actions have taken place on tasks.
This follows on from the first part where we covered the first 5 principles of DevOps:
- Automate everything
- Git is the source of truth
- No sensitive data or values are stored in Git
- Adopt an Infrastructure as Code (IaC) approach
- Adopt an Immutable Infrastructure approach
We get straight onto the content, diving into the remaining 5 principles:

Adopt an Immutable Application approach
Especially pertinent to testing. We need to make sure that our application is isolated when regression testing so that we don't pollute our results when other tests are running or other changes are being tested. This unblocks our release pipeline because we can test in parallel and not in series. A great thing here is that we can reduce our infrastructure costs using Kubernetes as we can create short-lived isolated applications for testing. Previously we might have had to have a large number of test environments to parallelise, at high cost.

Each infrastructure environment gets its own Git repository
This provides security isolation, but it is also good practice because we need to look at the demands on each different environment. Use Git branches to represent deployed instances of the same shape. We use GitHub Releases to trigger the deployment of changes to deployed infrastructure environments. Shared infrastructure services on Kubernetes, such as database services or queuing services, are deployed to the environment as part of infrastructure setup.

Each application microservice and shared library gets its own Git repository
Microservices are independently deployable and upgradable. This means that they have to have their own DevOps pipeline. More work to bootstrap a repo but more flexibility in the long run. This need not be high cost; DevOps pipelines are usually built on a cookie-cutter basis, using a standard template / pattern. Shared libraries can be packaged separately through a package manager or vendoring approach.

Application configuration is environment specific
Environment-specific properties (not sensitive properties such as secrets, keys, certs etc.) are output as part of infrastructure setup and stored in the DevOps Kubernetes cluster. Application configuration is defined in the application microservice Helm chart as Kubernetes ConfigMap or Secret templates. Application configuration is treated like code, meaning configuration changes are subject to the same peer reviews and quality processes as code changes, and Git is used to audit the changes. It's the responsibility of the developer to identify configurable parts of their code. It's the responsibility of us as leaders to coach and teach them.

Application changes must be backwards compatible
Independently upgrading individual microservices brings challenges. Especially relevant for microservices architectures, but good practice in general: if we're making changes we need to support backwards / forwards compatibility so that we don't break our system when we modify an interface. Ensure you can roll back when there are failures in an upgrade, or fix forward, depending on the failure and ability to roll back. Things to watch out for:
- Message protocols must be backwards and forwards compatible.
- Data changes must be compatible.
- Service interfaces must be compatible.
- Deleting elements from interfaces and name changes are particularly problematic.
- New operations are OK.
- New data elements are OK as long as you can deserialize / ignore them in your old versions.
Changes that are not backwards compatible require a new version of the service to be deployed, running side by side with the old one.
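As a small illustration of the "new data elements are OK if old versions can ignore them" point, here is a hedged Python sketch of a tolerant consumer; the message and field names are made up.

```python
# A sketch of the "tolerant reader" idea: the consumer picks out the fields
# it knows and ignores anything new, so adding data elements to a message
# does not break older services. Field names are illustrative only.
import json

def parse_order_v1(payload: str) -> dict:
    message = json.loads(payload)
    # Read only the fields this version understands; unknown fields are ignored.
    return {
        "order_id": message["orderId"],
        "amount": message.get("amount", 0),   # optional in older messages
    }

# A v2 producer has added "currency" - the v1 consumer still works unchanged.
print(parse_order_v1('{"orderId": "1234", "amount": 10, "currency": "GBP"}'))
```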
If you work in software development, and if you haven't been living in a cave since 1994, you'll have heard about DevOps. Everyone talks about it, everyone has their own idea about what's involved and everyone assumes that everyone else has got it better. Maybe it's more like sex than we thought... This is another big topic and we've split the discussion across two episodes. In this episode we're introducing DevOps and looking at principles 1-5. In the next episode we'll look at principles 6-10 and wrap up. This is a feast of content for those who work in software, and you're in for a treat if you like to draw on the experience of people with decades of industry experience. Here are the highlights of the episode:

Principle #1: Automate everything
Spare everyone from the toil of repetitive work. Humans should focus on solving problems, not cranking the handle on the machine. Automation is a prerequisite for quality. Without automation there is no repeatability, and without repeatability there is no quality. Our automation includes the following:
- Testing
- Static code analysis
- Building and packaging applications
- Deploying and configuring infrastructure

Principle #2: Git is the source of truth
Everything goes through a Git repository (well, almost - see #3). We create appropriate security boundaries around our knowledge and IP. We have an audit record of changes, who did what, who approved what, and when. This avoids issues from dispersed knowledge and information.

Principle #3: No sensitive data or values are stored in Git
Applies to application secrets, keys, certificates with a private key, personal data, tokens. You can get into big trouble very easily if you have credentials in the wrong place! Paul gives an example of someone who left their AWS keys in a Git repo that accidentally became public. Use secure stores such as AWS Secrets Manager, AWS KMS, HSMs, and Kubernetes Secrets. Adopt a rotation policy for secrets and make sure your DevOps process can handle rotating secrets. Think about how you will secure your sensitive information, how it is processed and who has access to it.

Principle #4: Adopt an infrastructure as code (IaC) approach
Infrastructure is declared as templates and can be automated. Changes to infrastructure are captured and can be rebuilt when needed.

Principle #5: Adopt an immutable infrastructure approach
VM nodes and containers are replaced rather than changed. This approach prevents configuration drift, which is a danger with mutable infrastructure tools such as Chef and Puppet. Updated images can be tested and verified prior to deployment. Live production infrastructure is not updated while running, which improves availability. This approach gives you the ability to roll back, perform canary deployments, blue/green deployments etc. We use desired state for services (e.g. cloud services) where we are not provisioning the infrastructure ourselves. An example of this is AWS EKS.
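To illustrate Principle #3, here is a minimal Python sketch (assuming AWS and boto3) of an application reading its credentials from AWS Secrets Manager at runtime rather than from anything committed to Git; the secret name is illustrative.

```python
# A minimal sketch of keeping secrets out of Git: fetch credentials from AWS
# Secrets Manager at runtime. The secret name is a made-up example; requires
# AWS credentials/permissions to be configured in the environment.
import json
import boto3

def get_database_credentials(secret_name: str = "prod/orders-db") -> dict:
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])

creds = get_database_credentials()
# Use creds["username"] / creds["password"] to connect - never hard-code them.
```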
In this episode we're closing off the remaining 4 outcomes for success in software development. In the first episode we covered Rapid Delivery, Available & Scalable and Secure. Remember, you can always read the 7 outcomes on the 345 site here (https://www.345.systems/how-we-help/). The last 4 outcomes discussed in this episode are:

Quality & Bug-Free
Quality in software is layered, and we approach it in a number of ways:
- Does the solution meet design intent? (This can be tech design and product design)
- Does the code pass static analysis, and does it appear to be well written?
- Does the code pass unit tests, and is code coverage sufficient?
- Does the solution work when integrated with the other components?
- Does the solution meet non-functional goals such as performance and resilience?
- Does the usability meet the needs of the users? Do users love the solution?
Each of these layers of quality control needs a different approach to testing and quality assurance. Each of these layers needs to be built into the process. Automate as many as you can. The hardest quality tests to automate are the ones around capturing design intent and assessing usability.

Costs Optimised
Organisations that feel they are spending too much on building and operating their technology first need to get a handle on where they are spending their money. A high spend on CapEx used to come with investment in large quantities of hardware, datacentres and even licences. If this is the main problem and it's draining cash from your business, make sure you really need hardware! Most businesses now should be able to operate with virtually no on-premises servers. It's only when you're still running legacy mainframes and minicomputers that you should be struggling here. Cloud can help, and if you're really pushed look at building a private cloud with some decent virtualisation technology. A high spend on OpEx can often come as a result of underperformance in some of the other areas. Operational spend on customer support, data entry and other manual processing can be a result of functional gaps in the software. High tech support costs can be a result of poor quality, with callouts to investigate and fix errors. High OpEx can also be a symptom of poor technology choice, or an inappropriate solution. When there are high development costs we need to break down where the spend is:
- High spend on testing is often a result of lack of automation. To fix it, reduce the toil and get your testing processes automated.
- High spend on developers can be a symptom of inappropriate technology choice (wrong tools for the job), inappropriate solution design (excessive complexity / excessive cross-dependencies) or wrong skills.
- High spend on architecture, analysis and design can be a symptom when organisations lack a delivery culture and produce documents as a displacement activity to avoid delivering anything tangible.

Functional & Lovable
We define software as being functional when it meets all required functions, and we're not using manual work-arounds to fill gaps in our business processes where the software should be. We're taking opportunities in our business and delivering success. When we struggle to meet our functional goals, it can be for a number of reasons. It can be a symptom of the fact that we've delivered too slowly and run out of budget, and hence cut scope. It can be because we're struggling to implement a poor design or use the wrong technology. Lovable software is another level. This is where our users connect emotionally with our software.
It's the vibe you get from Apple. It's competitive advantage; it's a reason why you get chosen ahead of your competitors. Our 6-Strata framework has specific areas where we look at emotional design. When we lack emotional design the first place we look is at the mix of personalities and skills in the team. It's generally a different mindset / worldview, not just a different skillset, that produces lovable design.

Standards Compliant
Standards are becoming an increasing part of our lives, whether we like it or not. Most businesses will find themselves affected by GDPR, accessibility, PCI-DSS or other general standards. Regulated businesses will need to meet standards specific to their industry. This outcome is not to be read as "am I standards compliant or not" - we find businesses that need to be generally are. The question is more like "am I able to meet the required standards easily, without negatively impacting my business?" If you're finding it hard to meet standards, it's often a symptom that you have chosen the wrong technology or you have problems in solution architecture. See this on the 345 website here (https://www.345.systems/podcast/episode-5-7-outcomes-for-success-in-software-development-part-2/).
Blog post for the episode is here: https://www.345.systems/podcast/episode-4-7-outcomes-for-success-in-software-development-part-1/

Define what success looks like for you
The 345 Method is a way of getting you to success. You start by looking at the following areas that are affected by your technology solutions:
- Rapid Delivery
- Available & Scalable
- Secure
- Quality & Bug-Free
- Costs Optimised
- Functional & Lovable
- Standards Compliant

You use a consistent scoring system
You use a scoring system from 0 to 10 that allows you to assess where you are. The scoring system is like this:
0 - I'm the worst in the world at this
1 - I'm the worst in my industry at this
2 - This is seriously detrimental to my business
3 - This has a negative impact on my business
4 - This is holding my business back a little
5 - I'd like to improve, but I'm doing OK
6 - I'm contributing to the success of my business
7 - I compare well with others in my industry
8 - I'm one of the leaders in my industry, we're providing competitive advantage
9 - I'm the best in my industry at this
10 - I'm the best in the world at this

Find the pain points so you can overcome them
When you have a score against these criteria you can establish where your main pain points are, so together we can drill into them and fix them. You can use our reference framework that we call the 6 Strata (https://www.345.systems/how-we-do-it), which provides you with a framework for action. This allows you to cover all the bases.

The main learnings from the podcast...

Rapid Delivery
We use 3 dimensions to look at rapid delivery: development velocity, release cadence and elapsed time.
Development velocity can be improved in a number of ways:
- Choose the right technologies and tools for the job (Technology)
- Use them in the right way (Solution)
- Ensure your team know how to use them (Skill)
- Make your team work in the right way (Development Process)
Release cadence can have a number of root causes that you should tackle:
- Manual infrastructure provision
- Manual deployments
- Manual testing cycles
- There can also be legitimate reasons to have a slow cadence, but try to keep a good process wherever you can
The elapsed time between writing code and it going live is too long:
- This is often the flip side of slow cadence
- Look at automation in your testing and release processes

Available & Scalable
Availability: for your solution to be available it means you can recover from any sort of loss. You might express this as RPO (recovery point objective) - how much time / data loss you can tolerate. You might also use RTO (recovery time objective) - how much time is required to restore the service.
Scalability: for a solution to be scalable, it means you can increase capacity when demand is high and reduce capacity when demand is low. Scalability can be hard with physical infrastructure as it takes time to provision, is expensive, and is hard to scale down.

Secure
We all want secure systems, right? Your ability to be secure will depend on a number of areas. Mostly, these are:
- Technology - you have chosen the appropriate technology
- Solution - your solution is designed with attack vectors in mind
- Operational processes - make sure production is locked down and cannot be tampered with
- Regular reviews - keep reviewing your security on a regular basis, and have a means of feeding the recommendations from security reviews into your backlog

Summary
That's a very quick whistlestop tour of the episode.
Start by defining success and then talk to us about how you can deliver software faster and better with the 345 Method, and unleash your inner software superhero.
The highlights of the podcast are:
Kubernetes contributes to 3 of the 7 Outcomes (https://www.345.systems/how-we-help/), specifically Rapid Delivery, Available & Scalable and Costs Optimised.
We briefly cover the concept of microservices: breaking an application into small units that are independently deployable and scalable. This reduces the complexity of our applications and reduces the regression burden as our services are isolated. Containerising applications means that your application is separated from other applications running on the same machine.
Basic Kubernetes terms:
- Cluster: a group of machines working together to host Kubernetes.
- Node: a machine in the cluster.
- Master node: a machine running Kubernetes services, which control, monitor and coordinate the applications running on the cluster.
- Worker node: a machine that hosts applications and has work assigned to it by the master nodes.
- Pod: a unit of deployment that can be one or more containers. Pods are scalable.
- Manifest: a file that describes how a pod should operate.
- Helm chart: a description of an application that spans multiple pods.
We discuss configuration of a pod, notably through a ConfigMap and Secrets. We look at deployment options for pods. These can be:
- ReplicaSet: multiple copies of the same container running across the cluster. This is the typical application option.
- DaemonSet: an instance of a worker that runs on each node. An example of this might be to collate logs.
- StatefulSet: an instance that is aware of state. Can be used to "remember" node names and to link to persistent storage. This is how we create NoSQL database clusters in Kubernetes.
We look at hosting options. In particular we call out:
- Amazon EKS - this is the one we typically use - hosts the master nodes and you then add your own worker nodes into the cluster.
- Azure AKS - equivalent to EKS and superseding Service Fabric.
- Workstation developers typically use Minikube to host their development version.
We also talk about options for high availability by spreading clusters over multiple datacenters and regions.
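If you want to poke at these concepts from code, here is a hedged sketch using the official Kubernetes Python client to list the nodes in a cluster and the pods scheduled onto them. It assumes you already have a kubeconfig pointing at a cluster such as Minikube or EKS.

```python
# A hedged sketch of the terms above, using the official Kubernetes Python
# client: list the nodes in a cluster and the pods scheduled onto them.
# Assumes a kubeconfig is already configured (e.g. for Minikube or EKS).
from kubernetes import client, config

config.load_kube_config()          # use the local kubeconfig
core = client.CoreV1Api()

for node in core.list_node().items:                     # the machines in the cluster
    print("node:", node.metadata.name)

for pod in core.list_namespaced_pod(namespace="default").items:
    # A pod is the unit of deployment - one or more containers scheduled together.
    print("pod:", pod.metadata.name, "on", pod.spec.node_name, pod.status.phase)
```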
In this episode we’re talking database technology. Specifically, we’re talking about how we have moved to NoSQL databases as standard as we are designing for load. There’s a lot of ground to cover ranging from the underlying computer science to the choices you can make to get started. The highlights of the podcast are: We begin by referencing the 7 Outcomes, especially those that are most relevant to NoSQL technology. Depending on the use case, NoSQL can assist with Rapid Delivery. Usually the drivers for NoSQL are Available & Scalable and Costs Optimised. We explore relational databases, their strengths and how they achieve a highly consistent, high-quality data model. We look at the problems that come with traditional relational databases, especially the issues of scale and availability. Horizontal scale is only achieved through a partitioning strategy. These strategies used to be implemented through system design but over time we have been able to take advantage of technologies that have already solved these problems. Availability is achieved through ensuring multiple copies of data are stored on separate machines. In the event of a failure we can fall back on a different copy of the data to ensure continuity of service. There are different strategies for achieving this, and the choice of technology depends on your design goals and use case. Data consistency: there is always a trade-off between consistency and performance. Technologies such as Cassandra that write to a log and have a background process to update the current data set give high write performance but lower consistency. Technologies such as MongoDB that ensure repeatable reads are necessarily slower but allow a more consistent experience when reading data. The choices we have made for our NoSQL: Fit for purpose, match the technology to the use case (rapid delivery) Can host in Kubernetes (at least, host the compute then map the storage) Efficient runtime (costs optimised) Highly available and scalable We look at the GLU NoSQL stack and the reasons we have chosen each: ElasticSearch (https://www.elastic.co/) : highly flexible and available with the way it manages nodes. We use this for high volumes of data that we need to query in a flexible manner, such as diagnostic and audit information. Prometheus (https://prometheus.io/) : geared towards time-series data, especially when this data needs to be examined in a near-real-time manner. We use this for system performance data and dashboard it using Grafana (https://grafana.com/) . ScyllaDB (https://www.scylladb.com/) : this is a Cassandra-like database that is more efficient on compute as there is no Java involved (a JVM needs tuning as part of a performance test cycle). Very fast, especially for writes. ArangoDB (https://www.arangodb.com/) : This is used as a multi-model database. We are especially interested in the graphing model as we can add and traverse relationships between entities. This is great for product recommendations, social information and other cases where relationships between different entities need to be captured and used in real time. We also mention cloud-hosted NoSQL databases, notably that Cosmos (https://azure.microsoft.com/en-gb/services/cosmos-db/) is an interesting one to consider. The show web page is here: https://www.345.systems/podcast/episode-2-from-yes-sql-to-nosql/
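As a small illustration of the consistency/performance trade-off discussed above, here is a hedged sketch using the Python cassandra-driver (which also works with ScyllaDB), choosing a weaker consistency level for a fast write and a stronger one for a read. The contact point, keyspace and table are made up, and this is not the Glu schema.

```python
# A hedged sketch of tuning consistency per query with the cassandra-driver
# (which also talks to ScyllaDB). Contact point, keyspace and table names
# are illustrative only.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["scylla.internal"])            # hypothetical contact point
session = cluster.connect("catalog")              # hypothetical keyspace

fast_write = SimpleStatement(
    "INSERT INTO events (id, payload) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ONE)       # fast, weaker consistency
session.execute(fast_write, ("evt-1", "created"))

safe_read = SimpleStatement(
    "SELECT payload FROM events WHERE id = %s",
    consistency_level=ConsistencyLevel.QUORUM)    # slower, more consistent
print(session.execute(safe_read, ("evt-1",)).one())
```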
This is the inaugural episode of the 345 Tech Talks podcast. In this episode Andrew and Paul discuss the issue of tracing and debugging microservices in Kubernetes. This is a technical deep dive into a subject that can make or break your ability to build, test and operate a large production system. A while back we wrote an article “Best Practices for Tracing and Debugging Microservices (https://www.345.systems/technical/best-practices-tracing-debugging-microservices/) ” that has turned out to be our most viewed web page ever on the 345 site. The original article is a brief look at some of the main considerations, so when we were looking for a subject for our first podcast episode this was an ideal candidate. Some of the main points from the episode: Building applications in Kubernetes helps with 3 of the 7 Outcomes for Success: Rapid Delivery, Availability & Scalability and Cost Optimised. The ability to read detailed diagnostic information is essential if you are going to build large scale distributed applications. This is especially important if a single process involves calls to many different microservices. You need to ensure you can piece together the diagnostic information from every component involved in fulfilling an operation. These can span many machines and services. Paul describes the best ways of doing this by passing a correlation identifier through all the services. We have a tech stack for containerized microservices running in Kubernetes that includes FluentBit, ElasticSearch, Kibana, Prometheus and Grafana. This stack is explained in detail, with descriptions of why each part of the stack is chosen. We discuss the information you need to trace and the structure of the data that is best. FluentBit and FluentD are data collectors that feed into ElasticSearch for storage. It’s best to have an interface to view, search and filter the log information and that’s where we use Kibana. Performance data is handled differently. We store this in Prometheus because it’s better at handling realtime time-series data. We also move this into ElasticSearch for long-term storage. We discuss how you need an archiving strategy. It’s important to understand how much data you need on fast storage, how much on slow storage, when you can put data into cold storage and when you can purge it. This helps you keep a good balance of performance and cost, whilst meeting any regulatory data retention requirements. If you’ve ever been the guy who needs to fix a system when it’s down, you know the value of good diagnostic information! The episode web page is here: https://www.345.systems/podcast/episode-1-tracing-and-debugging-microservices-in-kubernetes/ (https://www.345.systems/podcast/episode-1-tracing-and-debugging-microservices-in-kubernetes/)
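A minimal sketch of the correlation-identifier pattern Paul describes, in Python: reuse or mint an ID, log it in a structured form that ElasticSearch and Kibana can index, and pass it on to every downstream call. The header name and downstream URL are illustrative choices, not a prescribed standard.

```python
# A minimal sketch of the correlation-identifier pattern: every request
# carries an ID, each service logs it and passes it on, so log entries for
# one operation can be stitched together across services. The header name
# and downstream URL are illustrative.
import logging
import uuid
import requests

logging.basicConfig(format="%(message)s", level=logging.INFO)

def handle_request(headers: dict) -> None:
    # Reuse the caller's correlation ID, or mint a new one at the edge.
    correlation_id = headers.get("X-Correlation-Id", str(uuid.uuid4()))

    # Structured log entry - easy to index in ElasticSearch and filter in Kibana.
    logging.info('{"correlationId": "%s", "event": "order received"}', correlation_id)

    # Propagate the same ID to every downstream microservice we call.
    requests.post(
        "http://pricing-service/quote",              # hypothetical downstream service
        headers={"X-Correlation-Id": correlation_id},
        json={"sku": "ABC-123"},
    )

# handle_request({})   # an inbound call that arrives with no correlation ID yet
```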