Set Up GitOps with ArgoCD for Continuous Delivery on EKS

Posted By : Rajat

Sep 02, 2024

In the rapidly evolving landscape of cloud-native technologies, Kubernetes has emerged as the de facto standard for container orchestration. As organizations scale their infrastructure, managing multiple Kubernetes clusters becomes inevitable. With this growth comes the challenge of ensuring consistency, reliability, and efficiency across all clusters. Enter ArgoCD, a powerful tool for continuous delivery and GitOps workflows in Kubernetes. In this blog post, we'll explore why integrating multiple clusters in ArgoCD is essential and how we can integrate multiple AWS EKS clusters in ArgoCD. If you are looking to leverage the potential of DevOps and blockchain together, explore our DevOps blockchain development services.

 


 

The Benefits of Integration

 

Centralized Management

 

Managing multiple Kubernetes clusters manually can be overwhelming and prone to errors. By integrating these clusters with ArgoCD, organizations gain a unified platform for application management and deployment. This integration simplifies operations by providing a centralized control point, which enhances efficiency and reduces operational complexity.

 

Consistency and Standardization

 

In a multi-cluster environment, variations in configurations can lead to inconsistencies in deployments. ArgoCD addresses this challenge by enforcing standardized configurations and deployment practices across all clusters. This consistency promotes adherence to best practices and ensures uniform deployment strategies.

 

Scalability

 

As organizations expand, they often utilize multiple clusters to balance workloads and enhance fault tolerance. ArgoCD facilitates this multi-cluster approach by supporting the seamless scaling of applications. This capability allows organizations to optimize resource usage and scale their applications effectively across clusters.

 

Monitoring, Health Checks, and Logging

 

ArgoCD provides comprehensive monitoring capabilities for the deployment status and health of applications across clusters. Through its user interface and API, organizations can access centralized visibility into application health and status. This integration ensures that monitoring and logging are streamlined, enabling a cohesive view of application performance across all clusters from a single dashboard.
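For example, once the clusters are registered, much of the same information is available from a terminal through the ArgoCD CLI; the application name below is a placeholder:

# List every cluster registered with ArgoCD and its connection state
argocd cluster list

# List all applications across clusters with their sync and health status
argocd app list

# Inspect a single application in detail (replace my-app with your application name)
argocd app get my-app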

 

Also, Explore | Containers, Microservices, And DevOps For Modern App Development

 

Integrating Multiple AWS EKS Clusters in ArgoCD

 

Let's say we have AWS accounts as follows:

 

  • Account A with account id:
  • Account B with account id:
  • Account C with account id:

 

Account A is where ArgoCD runs.

 

To authenticate with and access the external clusters, we need to add the configuration as follows:

 

In Account A:

 

  • Create an IAM role named argocd-manager with the assume role (trust) policy given below.
  • Create a role policy named argocd-role-policy and attach it to the argocd-manager role.

 

RolePolicyDocument
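A minimal sketch of what argocd-role-policy.json might contain, assuming the argocd-manager role only needs permission to assume the deployer roles that will be created in Accounts B and C (the account IDs are placeholders):

# Write the role policy that lets argocd-manager assume the cross-account deployer roles
cat > argocd-role-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": [
                "arn:aws:iam::<ACCOUNT_B_ID>:role/deployer",
                "arn:aws:iam::<ACCOUNT_C_ID>:role/deployer"
            ]
        }
    ]
}
EOF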

 

AssumeRolePolicyDocument
 

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam:::oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": [
                        "system:serviceaccount:argocd:argocd-server",
                        "system:serviceaccount:argocd:argocd-application-controller"
                    ],
                    "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}
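Assuming the trust policy above is saved as argocd-trust-policy.json (the file names here are illustrative), the role and its inline policy can be created with the AWS CLI roughly as follows:

# Create the argocd-manager role with the OIDC trust (assume role) policy
aws iam create-role \
  --role-name argocd-manager \
  --assume-role-policy-document file://argocd-trust-policy.json

# Attach the inline policy that allows assuming the deployer roles
aws iam put-role-policy \
  --role-name argocd-manager \
  --policy-name argocd-role-policy \
  --policy-document file://argocd-role-policy.json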

 

Also, Check | The Rise of Blockchain in DevOps solution

 

Now In Account B:

 

Create an IAM role named deployer with the following trust relationship:

 

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam:::role/argocd-manager"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
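Assuming the trust relationship above is saved as deployer-trust-policy.json (an illustrative file name), the role can be created in Account B with:

# Create the deployer role that argocd-manager is allowed to assume
aws iam create-role \
  --role-name deployer \
  --assume-role-policy-document file://deployer-trust-policy.json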

 

Map this role in the aws-auth ConfigMap in the Account B EKS cluster:

 

kubectl edit -n kube-system configmap/aws-auth


# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam:::role/my-role
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam:::role/deployer # deployer role arn
      username: deployer
  mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam:::user/admin
      username: admin
    - groups:
      - system:masters      
      userarn: arn:aws:iam:::user/alpha-user
      username: my-user
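Alternatively, if eksctl is available, the same mapping can be added without editing the ConfigMap by hand; the cluster name, region, and account ID below are placeholders:

# Map the deployer role into the aws-auth ConfigMap of the Account B cluster
eksctl create iamidentitymapping \
  --cluster eks-development \
  --region <region> \
  --arn arn:aws:iam::<ACCOUNT_B_ID>:role/deployer \
  --username deployer \
  --group system:masters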

 

Follow the same procedure in Account C as in Account B.

 

In Account A (where ArgoCD is installed), add the following configuration to the ArgoCD Helm chart values.

 

Note: The deployer IAM role must be created in Accounts B and C first, so that the following role ARNs exist:

 

  • arn:aws:iam:::role/deployer (deployer role in Account B)
  • arn:aws:iam:::role/deployer (deployer role in Account C)

 

global:
  securityContext:
    # Set the deployments' securityContext/fsGroup to 999 so that the user of the
    # Docker image can use the IAM Authenticator. This is needed because the IAM
    # Authenticator will try to mount a token at
    # /var/run/secrets/eks.amazonaws.com/serviceaccount/token; if the correct
    # fsGroup (999 corresponds to the argocd user) isn't set, this will fail.
    runAsGroup: 999
    fsGroup: 999

controller:
  serviceAccount:
    create: true
    name: argocd-application-controller
    annotations: {eks.amazonaws.com/role-arn: arn:aws:iam:::role/argocd-manager} # Account A - IAM role service account
    automountServiceAccountToken: true

server:
  serviceAccount:
    create: true
    name: argocd-server
    annotations: {eks.amazonaws.com/role-arn: arn:aws:iam:::role/argocd-manager} # Account A - IAM role service account
    automountServiceAccountToken: true

configs:
  # -- Provide one or multiple [external cluster credentials]
  # @default -- `[]` (See [values.yaml])
  ## Ref:
  ## - https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#clusters
  ## - https://argo-cd.readthedocs.io/en/stable/operator-manual/security/#external-cluster-credentials
  ## - https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#project-scoped-repositories-and-clusters
  clusterCredentials:
    - name: development
      server: https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.abc.region.eks.amazonaws.com # EKS cluster API server endpoint of Account B
      config:
        awsAuthConfig:
          clusterName: eks-development
          roleARN: arn:aws:iam:::role/deployer # Deployer role arn of Account B
        tlsClientConfig:
          # Base64 encoded PEM-encoded bytes (typically read from a client certificate file).
          caData: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx........==' # EKS cluster certificate authority
    - name: staging
      server: https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.abc.region.eks.amazonaws.com # EKS cluster API server endpoint of Account C
      config:
        awsAuthConfig:
          clusterName: eks-staging
          roleARN: arn:aws:iam:::role/deployer # Deployer role arn of Account C
        tlsClientConfig:
          # Base64 encoded PEM-encoded bytes (typically read from a client certificate file).
          caData: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx........==' # EKS cluster certificate authority
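With these values saved to a file (values.yaml is used as a placeholder name below), the configuration can be applied by installing or upgrading the ArgoCD release from the argo-helm repository:

# Add the Argo Helm repository (skip if it is already added)
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

# Install or upgrade ArgoCD in the argocd namespace with the custom values
helm upgrade --install argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace \
  -f values.yaml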

 

Also, Read | Increasing Inevitability of DevOps for Blockchain Development

 

Obtain the EKS certificate of the respective cluster using the AWS CLI:
 

aws eks describe-cluster \
        --region=${AWS_DEFAULT_REGION} \
        --name=${CLUSTER_NAME} \
        --output=text \
        --query 'cluster.{certificateAuthorityData: certificateAuthority.data}' | base64 -D
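The cluster API server endpoint used for the server field, and the base64-encoded certificate authority data used for caData, can be fetched in the same way (caData expects the base64-encoded value, so no decoding is needed there):

# API server endpoint for the server field
aws eks describe-cluster \
        --region=${AWS_DEFAULT_REGION} \
        --name=${CLUSTER_NAME} \
        --output=text \
        --query 'cluster.endpoint'

# Base64-encoded certificate authority data for the caData field
aws eks describe-cluster \
        --region=${AWS_DEFAULT_REGION} \
        --name=${CLUSTER_NAME} \
        --output=text \
        --query 'cluster.certificateAuthority.data'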

 

The important thing to note is that we need to set the deployments' securityContext/fsGroup to 999 so that the user of the Docker image can use the IAM Authenticator. We need this because the IAM Authenticator will try to mount a token at /var/run/secrets/eks.amazonaws.com/serviceaccount/token. If the correct fsGroup (999 corresponds to the argocd user) isn't set, this will fail.
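A quick way to confirm that the security context took effect after the Helm upgrade is to print it for every deployment in the namespace (the namespace is assumed to be argocd):

# Print the pod-level securityContext of each ArgoCD deployment
kubectl -n argocd get deployments \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.securityContext}{"\n"}{end}'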

 

If you are looking for DevOps services to manage your projects, explore the expertise of our skilled DevOps engineers.

 
