How To Build and Design Cloud-Native Software From Scratch


Developers building cloud-native software take full advantage of public cloud offerings and containers to make their software distributed, flexible, and scalable. For developers, the cloud-native approach allows rapid innovation; for users and customers, it means getting the applications they want as quickly as possible, along with timely updates. Developers can also try out new ideas because cloud pricing structures charge only for what is used, and an idea that doesn’t work can simply be discarded. There’s no need to buy and install additional software, and new cloud-native software can be deployed at any time.

Here’s a look at how to build cloud-native software, including how to deploy code to containers that run under Kubernetes (K8s) and/or Docker, and how to install K8s both on a workstation and in the cloud.

Application Design

Originally, as they grew larger, applications were written with subroutines that served as building blocks for the larger application. Around 2015, this idea was taken much further as applications began to be broken down into microservices: small units of execution that each do one job well. Using this approach means that each part of an application is self-contained, modular, and independent, and each microservice can be updated independently of the rest of the application.

Communication between the microservices is handled using APIs. Using RESTful APIs allows calls to be made using standard protocols and information to be passed between services. REST stands for representational state transfer. A REST API is loosely coupled, which means that microservices and user interfaces can communicate without tight dependencies on each other’s internals.
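As a rough illustration of a microservice behind a REST API, here is a minimal Python sketch using only the standard library; the service name, port, and data are illustrative assumptions, not part of any particular stack.

    # A tiny "inventory" microservice exposing one REST endpoint.
    # Names, port, and data are illustrative only.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    STOCK = {"sku-123": 42, "sku-456": 7}  # stand-in for a real data store

    class InventoryHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # GET /stock/<sku> returns a JSON payload, anything else a 404
            sku = self.path.rsplit("/", 1)[-1]
            if self.path.startswith("/stock/") and sku in STOCK:
                body = json.dumps({"sku": sku, "quantity": STOCK[sku]}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Another microservice (or a UI) would simply call
        # GET http://localhost:8080/stock/sku-123 over HTTP.
        HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()

Because the only contract is the HTTP endpoint, the service behind it can be rewritten or replaced without touching its callers.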

Because the number of connected microservices can grow enormously, some way of mapping out the connectivity is needed. This is where a service mesh comes in. Its function is to look after communication between services and to ensure requests are delivered correctly, especially as the number of services increases. It is installed alongside the application code, as a series of network proxies, without affecting the application code itself.


Containers are discrete units of software used to hold the application code with all the binaries and libraries needed to run the application. Applications in containers are portable and require less management. Containers can be easily created, updated, and deleted, making it very easy to test and deliver new applications. There is no need to install anything on the host operating system because the container includes all its dependencies.
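To make that concrete, the sketch below uses the Docker SDK for Python (the docker package) to build an image from a project directory and run it as a container; the directory, tag, and port mapping are assumptions for illustration.

    # Build and run a container using the Docker SDK for Python.
    # Assumes the "docker" package and a Dockerfile in ./myapp (illustrative).
    import docker

    client = docker.from_env()  # talk to the local Docker daemon

    # Build an image from the application directory; the app's binaries and
    # libraries are baked into the image, so nothing is installed on the host.
    image, build_logs = client.images.build(path="./myapp", tag="myapp:0.1")

    # Run the container and expose port 8080 on the host.
    container = client.containers.run(
        "myapp:0.1",
        detach=True,
        ports={"8080/tcp": 8080},
    )
    print(container.short_id, container.status)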

Kubernetes is a container orchestrator that makes the provisioning, deployment, and scaling of any number of containers much easier. It performs tasks such as placing containers on hosts, removing containers, and even load balancing. It also makes portability easier with network and storage abstractions plus standardized resource and configuration definitions. Kubernetes can also be used to group containers into the logical units that make up an application, which makes them easier to manage and discover.

Fun fact: Kubernetes is often abbreviated as K8s. The ‘8’ stands for the eight letters between the ‘K’ and the ‘s’ (‘ubernete’).
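As a small sketch of working with those logical units, the official Kubernetes Python client (the kubernetes package) can list all the pods that carry a given label; the namespace and label below are illustrative assumptions.

    # List the pods that belong to one logical part of an application.
    # Assumes a working kubeconfig and the "kubernetes" Python package.
    from kubernetes import client, config

    config.load_kube_config()        # use the local kubeconfig
    core = client.CoreV1Api()

    pods = core.list_namespaced_pod(
        namespace="default",
        label_selector="app=myapp",  # illustrative label
    )
    for pod in pods.items:
        print(pod.metadata.name, pod.status.phase)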

Building the Application

Cloud-native applications are most likely to be built using Java, PHP, .NET, Python, Go, Ruby, or JavaScript. And they will be built with the ideas from the previous section front and centre in the developer’s mind.

The most popular cloud-based IDEs to use are:

  • CodePen
  • JSFiddle
  • Microsoft Azure Notebooks
  • Observable
  • Repl.it
  • Codenvy
  • Google Cloud Shell
  • Codeanywhere

Developers will plan the application as a collection of services. They will decouple the data from the processing and use microservices. And they will then use a service mesh to ensure the microservices talk to each other and can be used by other applications.

Cloud-native service mesh tools include:

  • Consul
  • Istio
  • Kuma

Examples of cloud-native networking tools include:

  • Calico
  • Cilium
  • Contiv
  • Flannel
  • Weave Net

To make working with Kubernetes-based cloud-native applications easier, there are applications such as:

  • Draft
  • Okteto
  • Skaffold
  • Telepresence


Deploying Code to Containers

Before an application is deployed, there are a number of tasks that need to be completed (a small build-script sketch follows this list). These include:

  • Minifying files
  • Optimizing images
  • Preprocessing CSS
  • Transpiling JavaScript – this is where source code written in one programming language is converted into another, for example TypeScript into JavaScript. Languages that transpile to JavaScript are called compile-to-JS languages.
  • Running code through a linter – a linter tool will scan code and identify any problems with it, e.g., programming errors or style issues. Some tools can help to fix problems as well.
  • Running tests
  • Invalidating caches – customers don’t want to see old data. If the content of a cache is no longer current, then it is declared invalid and can be deleted or refreshed, but not shown to customers.
  • Creating and/or moving files
  • Compiling source code into binary code
  • Taking the compiled code and packaging it for distribution
  • Producing documentation and/or release notes
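Here is a minimal build-script sketch showing how a few of these tasks can be chained together; the tools invoked (flake8, pytest, python -m build) are common choices but are assumptions, and a real pipeline would pick whatever fits the project.

    # Minimal pre-deployment script: lint, test, then package.
    # Tool names and paths are illustrative and project-specific.
    import subprocess
    import sys

    STEPS = [
        ["flake8", "src"],            # run the linter over the sources
        ["pytest", "tests"],          # run the test suite
        ["python", "-m", "build"],    # package the code for distribution
    ]

    for cmd in STEPS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"step failed: {' '.join(cmd)}")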

The other thing to bear in mind is that each application is composed of microservices, and not all of them will necessarily go into the same container. In fact, a container could contain a single microservice. Most likely, an application would be divided into logical parts, and each part could go in its own container. So, for example, the front end might be in one container and the back end of the application in another. That way, depending on the traffic the application is getting, new instances can be started to handle the increased volume. So, using containers allows an application to scale.

That is where Docker typically comes in to run the containers. The thing to remember about a scaled-up application is that all the containers in operation need to be coordinated to work effectively; the issue with many applications is that they don’t scale linearly. What’s needed is some way to manage containers and the activity between them. That’s where orchestration comes in, and it can be provided by Kubernetes, which automates the process of deploying, managing, and scaling applications in containers.
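As a hedged sketch of that kind of automation, the official Kubernetes Python client can scale a deployment up when traffic increases; the deployment name, namespace, and replica count here are illustrative.

    # Scale an illustrative "frontend" deployment to five replicas.
    # Assumes a working kubeconfig and the "kubernetes" Python package.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    apps.patch_namespaced_deployment_scale(
        name="frontend",                 # illustrative deployment name
        namespace="default",
        body={"spec": {"replicas": 5}},  # desired number of container instances
    )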

Installing Kubernetes on a Workstation

Before Kubernetes can be installed on Windows 10, Hyper-V and Docker have to be installed first. You can then install Kubernetes and use the Kubernetes dashboard.

Hyper-V is the Windows virtualization software that’s available with the Windows 10 Enterprise, Pro, or Education editions and needs at least 4 GB of RAM and CPU virtualization support.

To install Docker, visit the Docker website, click Get Docker Desktop for Windows (stable), and then follow the setup instructions. Docker creates a Linux virtual machine that runs on top of Hyper-V, and all the required services are started. Docker Desktop also includes a GUI setting that allows Kubernetes to be enabled.

The final stage is to install the Kubernetes Dashboard, which allows Kubernetes resources to be managed and is probably easier to use than kubectl, the CLI tool. To access the dashboard securely, several steps need to be followed: first, a new user needs to be created and granted permissions, after which the generated token can be used to log in. The dashboard can also be accessed using the default token. A workstation set up this way can be used to test new applications before they are uploaded to the cloud.
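As an illustrative sketch of that permission step, the Kubernetes Python client can create a dashboard service account and bind it to a cluster role; every name below is an assumption, and the same thing is often done with kubectl instead.

    # Create an illustrative admin service account for the dashboard.
    # Assumes the "kubernetes" Python package and a working kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()
    rbac = client.RbacAuthorizationV1Api()

    core.create_namespaced_service_account(
        namespace="kubernetes-dashboard",
        body={"metadata": {"name": "dashboard-admin"}},
    )

    rbac.create_cluster_role_binding(
        body={
            "metadata": {"name": "dashboard-admin"},
            "roleRef": {
                "apiGroup": "rbac.authorization.k8s.io",
                "kind": "ClusterRole",
                "name": "cluster-admin",
            },
            "subjects": [{
                "kind": "ServiceAccount",
                "name": "dashboard-admin",
                "namespace": "kubernetes-dashboard",
            }],
        },
    )

The token generated for that service account can then be used to log in to the dashboard.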


Installing Kubernetes in the Cloud

Kubernetes can be installed in the cloud manually or using an installer.

Doing it manually is probably the best way to understand exactly how everything works, but it requires each service to be set up by hand. Tools like Ansible and Terraform can be used to help with the process; they allow the creation of scripts for repeatable tasks. Ansible also lets users interact directly with the Kubernetes API server. Ansible would be the tool of choice for managing software resources, while Terraform would be used for provisioning infrastructure.

Using an installer is easier, and some installers also offer tooling for administrative tasks. Examples include:

  • Kops (Kubernetes Operations) uses a command line interface to create, delete, upgrade, and maintain clusters.
  • Kubeadm lets users get clusters up and running quickly. It does support upgrades.
  • Kubicorn works with kubeadm. It bootstraps a cluster, allows the management of infrastructure, and can also be used to take and save snapshots.

With Azure, users first create a resource group, where Azure resources can be deployed and managed. Next, an Azure Container Registry is created. Then, an Azure Kubernetes Service (AKS) cluster can be deployed that can authenticate to the container registry. And, finally, the application can be uploaded.
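As a small sketch of the first of those steps, the Azure SDK for Python can create the resource group programmatically; the subscription ID, group name, and region below are placeholders, and the container registry and AKS cluster would then be created inside that group in the same way (or with the Azure CLI or portal).

    # Create an illustrative resource group for the AKS deployment.
    # Assumes the "azure-identity" and "azure-mgmt-resource" packages
    # and an already-configured Azure login.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    subscription_id = "<your-subscription-id>"   # placeholder
    credential = DefaultAzureCredential()
    resource_client = ResourceManagementClient(credential, subscription_id)

    rg = resource_client.resource_groups.create_or_update(
        "rg-cloud-native-demo",          # illustrative group name
        {"location": "westeurope"},      # illustrative region
    )
    print(rg.name, rg.location)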

For users thinking of using AWS, Amazon offers the Amazon Elastic Kubernetes Service (EKS), a managed service that runs Kubernetes on AWS (typically on EC2 instances) and makes using Kubernetes easier.

There’s also the Amazon Virtual Private Cloud (VPC) service, which lets users define logically isolated sections of the AWS Cloud and then launch resources into a virtual network.

Users need to create an AWS account, install the AWS CLI, install the kops and kubectl tools mentioned above, and then create a dedicated user for kops in IAM (Identity and Access Management). A DNS name for the cluster needs to be set up, and then the files can be uploaded.
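As a hedged sketch of the dedicated-user step, the snippet below uses boto3 to create a kops group and user and attach one of the managed policies kops needs; the names are illustrative, and kops actually requires several more policies (Route 53, S3, IAM, VPC) than the one shown.

    # Create an illustrative IAM group and user for kops with boto3.
    # Assumes AWS credentials for an administrator are already configured.
    import boto3

    iam = boto3.client("iam")

    iam.create_group(GroupName="kops")
    iam.attach_group_policy(
        GroupName="kops",
        PolicyArn="arn:aws:iam::aws:policy/AmazonEC2FullAccess",  # one of several
    )

    iam.create_user(UserName="kops")
    iam.add_user_to_group(GroupName="kops", UserName="kops")

    # kops itself authenticates with this access key.
    key = iam.create_access_key(UserName="kops")["AccessKey"]
    print(key["AccessKeyId"])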

Bottom Line

Containers make deploying applications much more flexible, and using Kubernetes adds orchestration capabilities that make scaling up more manageable. Running new applications on a workstation gives an opportunity to test them before they are deployed in the cloud. However, there are a number of stages to go through in the process.

Let us know your thoughts in the comment section below or on LinkedIn, Twitter, or Facebook. We would love to hear from you!