CNE AWS Ready: Configuring the Cloud Native Experience

CNE AWS Ready: Adding OSGi Modules or Client Extensions With Overlays

The Cloud Native Experience (CNE) toolkit uses overlays to customize standard Liferay deployments. Upload custom files (e.g., OSGi modules, client extensions, configuration files, or site initializers) to a dedicated S3 bucket and configure your environment to apply them during pod startup.

This approach eliminates the need for custom Liferay builds, applies changes on top of the standard container, and is fully managed through GitOps.

CNE supports two ways to deploy OSGi modules or client extensions via overlays:

  1. Manual overlays: Upload modules directly to the overlay bucket in AWS and update the GitOps configuration by hand. This workflow is useful for testing, debugging, or one-off deployments.

  2. Automated overlays: The bootstrap process and GitOps synchronization handle module deployment automatically whenever changes are committed to your repository. This is the recommended approach for most production environments.

Manual Overlay Workflow (AWS UI)

  1. Locate the Overlay Bucket.

    In the AWS Console, search for the S3 bucket associated with your environment. This bucket stores overlays for OSGi modules and client extensions.

  2. Upload your modules or extensions.

    Create a top-level folder in the bucket to store your overlay resources, then upload your OSGi modules, client extensions, or configuration files into this folder.
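    If you prefer the command line to the AWS Console, the same upload can be done with the AWS CLI. This is a sketch only: the bucket name, folder name (overlay_test_1), and local source path are examples to replace with your own values.

```shell
# Upload built client extensions into a top-level overlay folder.
# Bucket name, folder name, and local path are placeholders; substitute
# the values for your environment.
aws s3 cp \
   --recursive \
   dist/client-extensions/ \
   s3://gma-bstest1-overlay-able-dev-766749/overlay_test_1/osgi/client-extensions/
```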

  3. Enable the overlay in GitOps.

    In your environment-specific liferay.yaml, configure the overlay section to reference your bucket and resources:

    liferay-default:
       overlay:
          bucketName: gma-bstest1-overlay-able-dev-766749
          copy:
             - from: overlay_test_1/osgi/client-extensions/*
               into: osgi/client-extensions
          enabled: true
    
  4. Commit and push changes.

Argo CD synchronizes the environment and applies your overlay resources to the Liferay deployment.
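The commit and push in step 4 are ordinary Git operations against your GitOps repository; a minimal sketch (the file path and commit message are illustrative):

```shell
# Stage and push the overlay configuration change; Argo CD applies it
# on the next synchronization.
git add liferay/<projectId>/<environmentId>/liferay.yaml
git commit -m "Enable overlay for <environmentId>"
git push
```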

Automated Overlay Workflow (GitOps and CI)

  1. Locate the Overlay Bucket.

    Switch to the environment namespace:

     kubens liferay-<projectId>-<environmentId>
    

    List the overlay bucket resource to find the S3 bucket name:

    kubectl get buckets.s3.aws.m.upbound.io --selector component=overlay
    

    Copy the value in the EXTERNAL-NAME column. This is the S3 bucket name used to store overlay resources.
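    To capture the bucket name in a variable for later use, you can read it from the resource's external-name annotation. This sketch assumes the standard Crossplane crossplane.io/external-name annotation, which is what kubectl renders in the EXTERNAL-NAME column:

```shell
# Read the EXTERNAL-NAME value (the S3 bucket name) into a shell variable.
# Assumes the overlay bucket is the first (and only) match for the selector.
BUCKET_NAME=$(kubectl get buckets.s3.aws.m.upbound.io \
   --selector component=overlay \
   -o jsonpath='{.items[0].metadata.annotations.crossplane\.io/external-name}')
echo "${BUCKET_NAME}"
```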

  2. Retrieve CI Uploader Credentials.

    Switch to the environment namespace:

    kubens liferay-<projectId>-<environmentId>
    

    Identify the CI uploader user:

    kubectl get users.iam.aws.m.upbound.io --selector aws.liferay.com/username
    

    Retrieve access keys:

    AWS_S3_ACCESS_KEY_ID=$(kubectl get secret <ci-uploader-secret> -o jsonpath='{.data.username}' | base64 -d)
    AWS_S3_ACCESS_KEY_SECRET=$(kubectl get secret <ci-uploader-secret> -o jsonpath='{.data.password}' | base64 -d)
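
    Before wiring the keys into CI, you can sanity-check them by loading them into a throwaway AWS CLI profile (the profile name cne-ci-uploader is an example):

```shell
# Store the retrieved keys in a dedicated AWS CLI profile and confirm
# that they authenticate against AWS.
aws configure set aws_access_key_id "${AWS_S3_ACCESS_KEY_ID}" --profile cne-ci-uploader
aws configure set aws_secret_access_key "${AWS_S3_ACCESS_KEY_SECRET}" --profile cne-ci-uploader
aws sts get-caller-identity --profile cne-ci-uploader
```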
    
  3. Build and Upload Customizations.

    Configure the following GitHub Actions secrets:

    • AWS_REGION
    • AWS_S3_BUCKET_NAME
    • AWS_ACCESS_KEY_ID
    • AWS_ACCESS_KEY_SECRET

    Example workflow:

    name: CNE Workspace Build
    
    on: [push]
    
    jobs:
       build:
          runs-on: ubuntu-latest
    
          steps:
             - uses: actions/checkout@v2
             - name: Set up JDK 11
               uses: actions/setup-java@v2
               with:
                  java-version: '11'
                  distribution: 'adopt'
             - name: Build with Gradle
               run: cd workspace/ && ./gradlew build deploy
             - name: Configure AWS Credentials
               uses: aws-actions/configure-aws-credentials@v1
               with:
                  aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
                  aws-secret-access-key: ${{ secrets.AWS_ACCESS_KEY_SECRET }}
                  aws-region: ${{ secrets.AWS_REGION }}
             - name: Upload customizations to S3 bucket
               run: |
                  aws s3 cp \
                     --recursive \
                     --region ${{ secrets.AWS_REGION }} \
                     workspace/bundles/osgi/ \
                     s3://${{ secrets.AWS_S3_BUCKET_NAME }}/workspace_build_${{ github.run_number }}/osgi/
    
  4. Enable the Overlay in GitOps.

    In your environment-specific liferay.yaml (liferay/<projectId>/<environmentId>/liferay.yaml), configure the overlay to use the uploaded artifacts:

    liferay-default:
       overlay:
          bucketName: <overlay-bucket-name>
          copy:
             - from: "workspace_build_<build-number>/osgi/client-extensions/*"
               into: osgi/client-extensions
          enabled: true
    
    Tip

    Always quote the from path, for example "workspace_build_1/osgi/client-extensions/*", to avoid YAML parsing issues.

    Commit and push the change to your GitOps repository. Argo CD synchronizes the environment and applies the overlay configuration.

    Note

    Each build uses a unique folder to avoid overwriting previous artifacts. Overlays apply at pod startup through the init container. Manage overlay configuration through GitOps.

  5. Verify the Overlay Deployment.

    Check the init container logs in the Liferay pod:

    kubectl logs liferay-default-0 -c liferay-overlay
    

    Look for lines that show files copied from S3 to the pod.

    '/mnt/liferay/overlay/.../file.zip' -> '/temp/osgi/.../file.zip'
    
    Note

    Overlay files are applied during pod startup. Restart the pod to apply new overlay content.
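
    A minimal way to trigger that restart, assuming liferay-default is managed by a StatefulSet (as the pod name liferay-default-0 suggests):

```shell
# Delete the pod; the StatefulSet controller recreates it, and the
# liferay-overlay init container copies the new overlay content on startup.
kubectl delete pod liferay-default-0
```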