BLOG

Kubernetes Best Practices, Use Cases & Case Studies

September 15, 2022

Kubernetes is one of the most popular container orchestration tools in the DevOps space today. Originally designed by Google, it is open source software that provides container clustering and orchestration, and it has become an integral part of the Cloud Native Computing Foundation (CNCF). Kubernetes has been widely adopted by enterprises and startups across the DevOps industry. Here we will look at some of its best practices and use cases, along with case studies on how we leverage Kubernetes at Niveus.

Google Kubernetes Engine (GKE) provides a highly automated and scalable managed platform to deploy and operate containerized applications anywhere – be it in hybrid or multi-cloud environments.

Kubernetes – Case studies and use cases 

Here are a few case studies and use cases for Kubernetes implementations with Niveus –

  • Niveus converted VMs into lightweight containers by deploying applications on Google Kubernetes Engine hosted on a hybrid environment for a leading private bank 
  • We built a scalable platform using Kubernetes & GCP for VWO / Wingify’s cloud native transformation 
  • We enabled application containerisation and deployment on GKE for a rental-solutions digital startup that was looking to migrate from AWS to GCP for reduced infrastructure costs 
  • A leading BPO service provider built an elastic platform on GKE with multi system integrations and provision to import various data sources for analysis

K8s use cases include building microservices architectures, lift-and-shift migrations from on-premise servers to the cloud, running cloud-native network functions, supporting Machine Learning workloads, providing computing power for resource-hungry tasks, and building a robust CI/CD software development lifecycle. 

Kubernetes – Best practices 

Kubernetes, or K8s for short, is used most effectively when the following best practices are implemented –

  1. Use namespaces – Namespaces in K8s help to organize objects, create logical partitions within your cluster, and improve security. A new cluster ships with three namespaces: default, kube-public and kube-system. Role-Based Access Control (RBAC) can restrict access to particular namespaces, limiting what a group can do and containing the blast radius of any mistakes that might occur. Limiting different teams to different namespaces also helps avoid duplicated work and resource conflicts.
  2. Use readiness and liveness probes – These probes run health checks against your workloads. A readiness probe checks whether a pod is ready to serve traffic; requests are routed to the pod only once the probe succeeds, and are directed elsewhere otherwise. A liveness probe tests whether an application is still running in order to maintain its health. For example, a web app could be checked for a response on a particular path. If the application doesn’t respond, the probe fails and the kubelet restarts the container. This is a useful recovery mechanism in case the process becomes unresponsive.
  3. Use auto-scaling – Auto-scaling can dynamically adjust the number of pods (Horizontal Pod Autoscaler), the amount of resources allocated to the pods (Vertical Pod Autoscaler), or the number of nodes in the cluster (Cluster Autoscaler), where appropriate, depending on the demand for resources.
  4. Use resource requests and limits – Set resource requests (the amount of CPU and memory a container is guaranteed, which the scheduler uses for placement) and limits (the maximum it may consume) so that containers don’t use too many resources and cause problems for other applications on the cluster. Without limits, pods can use more resources than they need, reducing the total resources available and causing issues such as nodes crashing or new pods failing to be scheduled.
  5. Deploy your pods across nodes – Avoid running pods individually; instead, always run them as part of a Deployment, DaemonSet, ReplicaSet or StatefulSet. This way, if one pod goes down, another replaces it and the workload keeps running. To further improve fault tolerance, anti-affinity rules can be used to spread the pods across multiple nodes.
  6. Use multiple nodes – Run K8s on multiple nodes if you want to build in fault tolerance. Multiple nodes in your cluster allow workloads to be spread between them.
  7. Use Role-Based Access Control (RBAC) – It is essential to use RBAC to properly secure your K8s cluster. Users, groups, and service accounts can be assigned permissions, following the principle of least privilege, to perform actions allowed by a Role in a particular namespace, or by a ClusterRole across the entire cluster. Each role can carry multiple permissions. To bind roles to users, groups, service accounts, or other entities, RoleBinding or ClusterRoleBinding objects are used.
  8. Use a cloud service for external hosting – Kubernetes is a powerful container orchestration platform, but it can be complex to set up and manage on your own hardware. Google Kubernetes Engine (GKE) offers K8s as a managed service, making it much easier to scale your cluster by adding or removing nodes, and leaving your engineers free to focus on what’s running on the cluster itself. If you’re interested in learning more about GKE on Google Cloud Platform (GCP), get in touch with us. We can help you get started and answer any questions you might have.
  9. Upgrade your Kubernetes version – Up-to-date versions of K8s fix vulnerabilities and improve security, so it’s important to run a recent version on your cluster. Older versions eventually fall out of support and no longer receive patches.
  10. Monitor your cluster resources and audit policy logs – Monitoring the components of the K8s control plane is important for keeping resource consumption under control; correct operation of the cluster depends on these components functioning properly. To enable audit logging, start the kube-apiserver with the appropriate flags turned on. This generates an audit log file containing all requests made to the K8s API. Review this file regularly to look for any potential issues on the cluster. The audit policy for the cluster is defined in a file such as audit-policy.yaml, and its rules can be customized as needed.
  11. Use a version control system – K8s configuration files should be kept in a Version Control System (VCS). This brings a raft of benefits, including increased security, an audit trail of changes, and greater cluster stability. Approval gates allow the team to peer-review changes before they are committed.
  12. Use a Git-based workflow (GitOps) – Successful K8s deployments require careful thought about workflow processes. A Git-based workflow with automated CI/CD (Continuous Integration / Continuous Delivery) pipelines increases deployment efficiency and speed, and leaves an audit trail of every deployment.
  13. Reduce the size of your containers – The smaller the image, the faster your builds and deployments, and the fewer resources your containers consume on the K8s cluster. Smaller images are pulled faster than larger ones and require less storage space. This approach also provides security benefits by reducing the attack surface available to malicious actors.
  14. Organize your objects with labels – K8s labels are key-value pairs that help you organize resources in a cluster and describe how different components of a system relate to each other. The official K8s documentation recommends a common set of labels such as app.kubernetes.io/name, instance, version, component, part-of, and managed-by when tagging objects.
  15. Use network policies – Network policies control traffic between objects in a K8s cluster at the IP and port level, similar in concept to security groups on cloud platforms. They restrict access to resources. Typically, all traffic should be denied by default, with explicit rules then added to allow required traffic.
  16. Use a firewall – A firewall in front of your K8s cluster helps restrict requests to the API server from the outside world. Allowlist trusted IP addresses and restrict open ports. This can go a long way in building a resilient K8s environment. 
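The readiness and liveness probe practice above can be sketched in a pod spec. This is a minimal illustration; the pod name, image, port, and the /ready and /healthz paths are hypothetical stand-ins for your application’s own health endpoints –

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app                    # hypothetical pod name
spec:
  containers:
  - name: web
    image: registry.example.com/web-app:1.0   # placeholder image
    ports:
    - containerPort: 8080
    readinessProbe:                # traffic is routed to the pod only after this succeeds
      httpGet:
        path: /ready               # assumed readiness endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                 # the kubelet restarts the container if this fails
      httpGet:
        path: /healthz             # assumed liveness endpoint
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```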
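The auto-scaling practice can be illustrated with a Horizontal Pod Autoscaler manifest. The Deployment name, replica bounds, and 70% CPU target below are hypothetical values to adapt to your workload –

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa                # hypothetical name
spec:
  scaleTargetRef:                  # the Deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # add pods when average CPU usage exceeds 70%
```

Note that CPU utilization is calculated against the pods’ resource requests, so the target Deployment must set requests for this metric to work.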
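The resource requests and limits practice maps to a few lines in each container spec; the figures below are illustrative, not recommendations –

```yaml
spec:
  containers:
  - name: web                      # hypothetical container
    image: registry.example.com/web-app:1.0   # placeholder image
    resources:
      requests:                    # guaranteed minimum, used by the scheduler for placement
        cpu: "250m"
        memory: "256Mi"
      limits:                      # hard ceiling; a container exceeding its memory limit is killed
        cpu: "500m"
        memory: "512Mi"
```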
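The anti-affinity rule mentioned under pod deployment can be expressed in a Deployment’s pod template. The app label is hypothetical; this soft rule asks the scheduler to prefer placing replicas on different nodes –

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:   # soft rule: prefer, don't require
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: web-app                               # hypothetical label on your pods
        topologyKey: kubernetes.io/hostname            # spread across distinct nodes
```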
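The RBAC practice can be sketched as a Role paired with a RoleBinding, granting read-only access to pods in a single namespace. The namespace, role name, and user are hypothetical –

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a                # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]                  # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]  # read-only, per least privilege
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane                       # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```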
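For the audit logging practice, a minimal audit policy file might look like the following sketch: it logs full request and response bodies for pod operations and only metadata for everything else. The rules are illustrative, not a recommended policy –

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse           # log full request and response bodies...
  resources:
  - group: ""                      # core API group
    resources: ["pods"]            # ...for pod operations
- level: Metadata                  # log only metadata for all other requests
```

The policy file is passed to the kube-apiserver via the --audit-policy-file flag, with --audit-log-path naming the output log file.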
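The labelling practice follows the common label set from the official documentation; the values below are hypothetical –

```yaml
metadata:
  labels:
    app.kubernetes.io/name: web-app          # hypothetical application name
    app.kubernetes.io/instance: web-app-prod # this particular installation
    app.kubernetes.io/version: "1.2.3"
    app.kubernetes.io/component: frontend
    app.kubernetes.io/part-of: online-shop   # the larger system it belongs to
    app.kubernetes.io/managed-by: helm       # the tool managing it
```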
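The deny-by-default network policy practice can be sketched as follows. Applied to a namespace (here the hypothetical team-a), it blocks all ingress and egress for the pods there until explicit allow rules are added in separate policies –

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a                # hypothetical namespace
spec:
  podSelector: {}                  # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress                         # deny all inbound and outbound traffic by default
```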

While GCP’s Kubernetes offering is a powerful tool, that power does come with a learning curve. If you’re looking to simplify your Kubernetes implementation, reach us at biz@niveussolutions.com


Author Rohan Shetty

Rohan Shetty is an experienced Cloud Leader, working with our Customer Engineering Team here at Niveus. As a certified Google Cloud Architect, Rohan works to connect Google Cloud solutions to business challenges, across industries.

