
Migrating Legacy Applications to Microservices: A Smooth Path to Agility with GKE

By Omkar Nadkarni | June 7, 2024

Migrating legacy applications to microservices is a powerful strategy for organizations seeking to modernize their infrastructure and embrace DevOps practices. However, this transition can present challenges. While containerization on Google Kubernetes Engine (GKE) offers numerous benefits, not all legacy workloads are easily adaptable to a microservices architecture. In this blog, we will explore the specific limitations and challenges associated with deploying containerized workloads built from migrated legacy applications in a DevOps environment. We’ll also discuss potential solutions to address these issues and ensure a smooth path to agility with GKE.

Ready to Migrate? Contact Niveus Today 

Microservices are a popular architectural style that builds an application as a collection of loosely coupled, single-function services. These services communicate with each other through APIs and can be independently developed, deployed, and scaled. Containerization is a virtualization technique that packages an application's code together with all its dependencies (libraries, binaries, configuration files) into a lightweight, standalone unit called a container. This approach offers several advantages over traditional VM deployments, including portability, agility, resource efficiency, and isolation, and provides a standardized unit for packaging and deploying applications. Popular containerization tools include Docker and containerd. However, managing many containers at scale is complex, which is where an orchestration platform such as Kubernetes, and its managed offering GKE, comes in.

Prerequisites for Successful Containerization

Before diving into VM to microservice migration and containerization, let’s explore the key factors that influence a smooth transition: 

  • Source Code Access: Buildpacks require access to the application's source code. With a Dockerfile, previously built binaries can be reused, except in the case of PHP applications.
  • Application Package Dependencies: It is crucial to check application package dependencies. If any packages are outdated and incompatible with the latest versions, they must be upgraded and tested.
  • Networking or Runtime Issues: Legacy applications with multiple services that cannot be split into multiple containers due to networking or runtime issues are better left running on virtual machines (VMs).
  • Application Code and Package Compatibility: If the application code and packages only function on older operating systems, running them on VMs is recommended.
  • Application Requirements: Applications that require physical machines or bare metal, due to specific resource needs, latency requirements, or network compatibility issues, should also continue to run on their current infrastructure.
  • Third-Party Applications: Third-party applications cannot be containerized directly and would require additional steps for successful integration.

Developer’s Code Changes

Optimizing your application for containerization is a crucial step when migrating to microservices. It involves some key code changes that ensure smooth operation and scalability within the containerized environment. Here’s a breakdown of essential considerations:

  • Logging Strategy: For efficient log management, prioritize sending logs to standard output (stdout) instead of writing them to files within the container. This allows Kubernetes to capture and centralize logs for easier monitoring and troubleshooting. A combined sketch covering logging, health checks, and configuration follows this list.
  • Health Checks: Implement a health check URL within your application. This endpoint verifies the container’s health and helps Kubernetes identify and restart unhealthy containers automatically.
  • Stateless Design: Containers are ephemeral, so design your application to be stateless. Persist session information and application state in external storage solutions like databases or cloud storage. This ensures consistent behavior across restarts and avoids data loss.
  • Configuration Management: Avoid hardcoding configuration details directly in your code. Instead, leverage environment variables or configuration files mounted as volumes. This allows for dynamic configuration changes without requiring code modifications.
  • Google Services Integration: If your application interacts with Google services, you might need to modify the code to authenticate and access them securely within the containerized environment; a short sketch of this pattern appears below. Refer to Google Cloud documentation for specific integration guidance.
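
To make these changes concrete, here is a minimal Python sketch, using only the standard library, that logs to stdout, exposes a health check URL, and reads its configuration from environment variables. The port, endpoint path, and variable names are illustrative assumptions, not requirements:

```python
import logging
import os
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

# Log to stdout so Kubernetes (and GKE's logging agent) can capture logs.
logging.basicConfig(stream=sys.stdout, level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

# Read configuration from the environment instead of hardcoding it;
# on Kubernetes these values can be injected from a ConfigMap or Secret.
APP_PORT = int(os.environ.get("APP_PORT", "8080"))
DB_HOST = os.environ.get("DB_HOST", "localhost")  # illustrative setting

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":  # health check URL for Kubernetes probes
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        logging.info(fmt % args)  # route request logs to stdout as well

if __name__ == "__main__":
    logging.info("starting on port %d, db host %s", APP_PORT, DB_HOST)
    HTTPServer(("", APP_PORT), Handler).serve_forever()
```

On GKE, an endpoint like /healthz would typically back the pod's liveness and readiness probes, and the environment variables would be populated from a ConfigMap or Secret rather than baked into the image.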

By following these practices, you can prepare your application for a smooth transition to a containerized architecture on platforms like GKE.
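
To illustrate the Google services point above: within GKE, the usual pattern is to rely on Application Default Credentials, typically supplied through Workload Identity, instead of shipping key files inside the image. Here is a minimal sketch using the google-cloud-storage client; the bucket and object names are placeholders:

```python
from google.cloud import storage  # pip install google-cloud-storage

# No explicit key file: the client picks up Application Default Credentials,
# which on GKE usually come from Workload Identity bound to the pod's
# Kubernetes service account.
client = storage.Client()

bucket = client.bucket("example-app-assets")   # placeholder bucket name
blob = bucket.blob("config/welcome.txt")       # placeholder object name
print(blob.download_as_text())
```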

Challenges With Enabling Containerization

While containerization offers significant benefits, it comes with its own set of challenges. One key consideration is the initial investment of time required from developers or DevOps teams. Transitioning to containerization involves tasks such as creating Dockerfiles, utilizing buildpacks, or crafting Kubernetes manifests. Additionally, there’s a learning curve associated with understanding these tools and their best practices. Another critical aspect is configuration management. Traditional VM deployments often rely on mutable configurations stored within the image itself, whereas container images are immutable by design. This shift necessitates changes in how application configurations are loaded during deployments on Kubernetes or Google Kubernetes Engine (GKE). To manage configuration dynamically, solutions such as environment variables, ConfigMaps, and Secrets can be effectively leveraged.
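
For example, a containerized application might read a ConfigMap-backed environment variable alongside a Secret mounted as a file. The variable name and mount path below are assumptions for illustration only:

```python
import os
from pathlib import Path

# Injected as an environment variable, e.g. from a ConfigMap referenced
# in the pod spec (the variable name here is illustrative).
feature_flags = os.environ.get("FEATURE_FLAGS", "")

# Mounted as a file, e.g. from a Kubernetes Secret declared as a volume;
# /etc/app-secrets is an assumed mount path, not a Kubernetes convention.
secret_path = Path("/etc/app-secrets/db-password")
db_password = secret_path.read_text().strip() if secret_path.exists() else ""

print("flags:", feature_flags, "| db password set:", bool(db_password))
```

Because both values arrive at deploy time, the same immutable image can move unchanged from development to production, with only its injected configuration differing.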

VM On-Premises (Application Build/Deploy and Layers)

Before delving into the containerization process, it’s important to understand the typical application building process and technology stack used in an on-premises environment.

When an application runs on a virtual machine, it consists of several layers:

  1. Physical VMs or Infrastructure
  2. Virtualization Layer
  3. Virtual Machines with the following sub-layers:
  • Operating System (OS)
  • Framework or Runtime Environment
  • Application Binaries
  • Application Packages
  • Application Logging (to file or standard output)
  • Application Environment Variables
  • Application Configuration Files
  • Secrets (for database connections, etc.)

A build and deploy process ties these layers together. Source code is built into binaries, or packaged as static artifacts, and stored in a repository; the binaries are then deployed to the VM. A VM image is created, and the startup script may be modified to point to the new package or to accommodate other changes from the new deployment.

Solutions for Addressing Challenges with Containerization

At Niveus, we understand the challenges associated with microservices migration strategies and the impact these can have on your business. That’s why we’ve developed a powerful in-house containerization platform designed to streamline your transition and empower you to leverage the full potential of GKE.

Explore how our platform simplifies every step of your containerization journey:

  • Manage the lifecycle of containerizing apps and create the required manifest for deployment on Kubernetes.
  • Enjoy a basic CI/CD pipeline to kickstart your projects.
  • Detect outdated packages that may not run with the base OS.
  • Set up Jenkins pipelines for CI/CD tasks effortlessly.
  • Utilize build packs for containerization, eliminating the need for a Dockerfile. Note: For PHP applications, a Dockerfile is required due to limitations.
  • Generate Helm Kubernetes manifests while seamlessly integrating with Argo CD and other CD solutions.
  • Address code changes (e.g., Storage bucket, GSM, Pub/Sub) with ease using code snippets, supplemented with manual intervention as necessary (see the Secret Manager sketch after this list).
  • Migrate third-party source code to containers using Anthos.
  • Secure cross-service communications and integrate with load balancers using Anthos service mesh or Istio.
  • Deploy applications on GKE behind Nginx, Istio, or any other open-source ingress controller, integrated with load balancers.
  • Collaborate closely with developers to resolve app integration issues and modernization challenges, ensuring seamless Kubernetes integration and robust health checks.
  • Minimize code and integration issues by providing design and architecture reviews conducted by experienced architects.
  • Leverage GKE for microservices monitoring, utilizing Cloud Monitoring or a variety of third-party APM and observability solutions such as Prometheus, Grafana, ELK, Dynatrace, or AppDynamics.
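
As an example of the GSM-related code change mentioned in the list above, here is a hedged sketch of fetching a secret from Google Secret Manager at startup instead of reading it from a local file. The project and secret IDs are placeholders:

```python
from google.cloud import secretmanager  # pip install google-cloud-secret-manager

def read_secret(project_id: str, secret_id: str, version: str = "latest") -> str:
    """Fetch a secret value from Google Secret Manager using ADC."""
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version}"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("UTF-8")

# Placeholder identifiers; replace with real project and secret names.
db_password = read_secret("example-project", "db-password")
```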

Typical Flow for Running Containers/Microservices on GKE

Once we migrate an application running on a VM to a pod running on GKE, we can leverage the advantages of containerization, DevOps, and the power of Google Cloud. Below, we walk through the CI/CD pipeline and technology stack on Google Cloud.

Technology Stack

A VM requires underlying infrastructure, an operating system, and tightly coupled application runtime environments and packages. In certain scenarios, particularly when running in the cloud, VM application images are created, rather than deploying directly onto VMs, to ensure scalability and to leverage cloud advantages.

CI/CD Workflow

On VMs:

Developers typically build and test applications either on their local machines or on virtual machines by deploying the binaries directly.

Containerization and Deployment on GKE:

  1. Code Change and Build Process: Developers make code changes and initiate the build process. Buildpacks or Dockerfiles are used to generate the application image (a scripted sketch of steps 1 to 3 follows this list).
  2. Image Deployment: Once the image is created, it is uploaded to an image repository, such as an artifact repository. Helm charts are then updated to include the Kubernetes manifests necessary for the application to function.
  3. Configuration Management: As container images are immutable, any application configuration, secrets, or settings must be mapped via ConfigMaps or Kubernetes Secrets.
  4. Pipeline Enhancements: This is a basic flow, and typically additional steps are added to the pipeline in collaboration with the client. These may include unit testing, software composition analysis (SCA), static application security testing (SAST), functional testing, dynamic application security testing (DAST), and more.
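
To make steps 1 to 3 concrete, here is a minimal scripted sketch of the flow. The image name, chart location, and release name are placeholders, and a real pipeline would run these commands from Jenkins, Cloud Build, or a similar CI system:

```python
import subprocess

# Placeholder identifiers, for illustration only.
IMAGE = "us-docker.pkg.dev/example-project/apps/legacy-app:v1.0.0"
CHART_DIR = "./charts/legacy-app"

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Build the application image (buildpacks would use `pack build`
#    instead of a Dockerfile).
run("docker", "build", "-t", IMAGE, ".")

# 2. Push the image to an artifact repository.
run("docker", "push", IMAGE)

# 3. Upgrade the Helm release; configuration and secrets come from
#    ConfigMaps/Secrets referenced by the chart, not from the image.
run("helm", "upgrade", "--install", "legacy-app", CHART_DIR,
    "--set", f"image={IMAGE}")
```

The additional pipeline stages from step 4 (unit tests, SCA, SAST, DAST) would slot in between the build and deploy commands.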

Future Considerations: While GKE concepts like ingress and egress, CNI (Container Network Interface), CSI (Container Storage Interface), etc., have not been covered in detail here, they will be addressed in future discussions.

Conclusion

While migrating from VMs to containerized microservices on Google Kubernetes Engine (GKE) might seem like a complex undertaking, the rewards are substantial. Increased agility, scalability, and resource efficiency all contribute to a more modern and efficient application landscape. By understanding the prerequisites, developer considerations, and potential challenges involved in a microservices migration strategy, you can ensure a smooth transition and unlock the full potential of containerized applications.

Ready to take the next step on your journey to microservices? Niveus can help! We offer a suite of tools and services specifically designed to streamline your VM to microservice migration, including our in-house containerization platform and expert guidance on integrating with Google Cloud services. Contact us today to discuss your specific needs and embark on your path towards a modern, cloud-native microservices architecture.

Begin Containerization Journey Today!

Author: Omkar Nadkarni

Omkar Nadkarni is a Senior Cloud Architect on the infrastructure modernization team. His extensive work on infrastructure solutions for business modernization has made him a key driver of large enterprise migrations.
