Legacy Knowledge Base
Published Jun. 30, 2025

Why the Memory Metrics in Liferay PaaS differ from Liferay DXP

Written By

Michael Wall

How To articles are not official guidelines or officially supported documentation. They are community-contributed content and may not always reflect the latest updates to Liferay DXP. We welcome your feedback to improve How To articles!

While we make every effort to ensure this Knowledge Base is accurate, it may not always reflect the most recent updates or official guidelines. We appreciate your understanding and encourage you to reach out with any feedback or concerns.

Legacy Article

You are viewing an article from our legacy "FastTrack" publication program, made available for informational purposes. Articles in this program were published without a requirement for independent editing or verification and are provided "as is" without guarantee.

Before using any information from this article, independently verify its suitability for your situation and project.
Note: Liferay has renamed its Liferay Experience Cloud offerings to Liferay SaaS (formerly LXC) and Liferay PaaS (formerly LXC-SM).

Issue

  • Explain why the memory metrics for the Liferay service shown in Liferay PaaS differ from those shown within Liferay DXP

Environment

  • Liferay PaaS

Resolution

Context: Liferay Service Memory Settings

The total memory available to the docker container (in MB) is set by the 'memory' value in the Liferay service LCP.json, for example:

"memory": 16384

The JVM heap memory is typically set in the LIFERAY_JVM_OPTS of the Liferay service LCP.json, or in the equivalent Environment Variable, using the -Xms and -Xmx flags, for example:

-Xms4096m

-Xmx12288m

-XX:MaxMetaspaceSize=1024m

-XX:MetaspaceSize=1024m

-XX:NewSize=1536m

-XX:MaxNewSize=1536m

etc.
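For illustration, the two settings above might be combined in the Liferay service's LCP.json roughly as follows. This is a sketch only: the placement of LIFERAY_JVM_OPTS under an "env" block is an assumption about a typical configuration, and the values simply repeat the example above, not a recommendation.

```json
{
  "memory": 16384,
  "env": {
    "LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m -XX:MaxMetaspaceSize=1024m -XX:MetaspaceSize=1024m -XX:NewSize=1536m -XX:MaxNewSize=1536m"
  }
}
```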

The recommended configuration is to set the -Xms flag to 25% of the available memory (in MB) and the -Xmx flag to 75% of the available memory (in MB). See here for more details. This applies with or without Auto-scaling enabled; see here for Auto-scaling setup details.
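The 25%/75% sizing rule above can be expressed as a small calculation. The sketch below (class and method names are illustrative, not part of any Liferay API) shows how the example values of -Xms4096m and -Xmx12288m follow from a 'memory' value of 16384:

```java
public class HeapSizing {

    // 25% of the container memory (in MB) for the initial heap (-Xms)
    static int xmsMb(int containerMemoryMb) {
        return containerMemoryMb * 25 / 100;
    }

    // 75% of the container memory (in MB) for the maximum heap (-Xmx)
    static int xmxMb(int containerMemoryMb) {
        return containerMemoryMb * 75 / 100;
    }

    public static void main(String[] args) {
        int containerMemoryMb = 16384; // the 'memory' value from LCP.json

        System.out.println("-Xms" + xmsMb(containerMemoryMb) + "m"); // -Xms4096m
        System.out.println("-Xmx" + xmxMb(containerMemoryMb) + "m"); // -Xmx12288m
    }
}
```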

Note that the Metaspace memory is in addition to the JVM heap memory.

The above snippet is an example, not a recommendation. See the appropriate Liferay DXP Deployment Checklist (DXP 7.4 here, DXP 7.3 here and DXP 7.2 here) for further information on JVM Options and performance tuning.

Liferay DXP Memory Metrics are based on JVM Heap Memory

The charts and metrics in the Liferay DXP > Server Administration > Resources tab are based on the JVM heap memory alone. 

They are calculated as follows:

Runtime runtime = Runtime.getRuntime();

// Heap currently reserved by the JVM
long totalMemory = runtime.totalMemory();

// Heap occupied by objects = reserved heap minus free heap
long usedMemory = totalMemory - runtime.freeMemory();

// Upper bound the JVM will grow the heap to, i.e. the -Xmx value
long maximumMemory = runtime.maxMemory();

out.println("Used Memory: " + usedMemory + " Bytes");

out.println("Total Memory: " + totalMemory + " Bytes");

out.println("Maximum Memory: " + maximumMemory + " Bytes");

Note: The Groovy script above can be run from the Server Administration > Script tab.

Where:

  • Total Memory is the total amount of heap memory in the JVM. The value returned by this method typically increases over time but generally doesn't go down, even after Garbage Collection increases the free memory.
  • Used Memory is the Total Memory less the free heap memory in the JVM. This can increase and decrease, e.g. after Garbage Collection. Free memory is available for the JVM to use but is still allocated to the JVM, so it is not free from the container's perspective.
  • Maximum Memory is the maximum amount of heap memory that the JVM will attempt to use, i.e. the -Xmx value defined for the JVM.

In summary: Triggering Garbage Collection may increase the ‘free memory’ and may reduce the ‘used memory’ but it won’t necessarily change the ‘total memory’.
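This behaviour can be demonstrated with a small standalone Java program using the same Runtime calls as the Groovy script above (the class name and the allocation amount are illustrative):

```java
public class GcDemo {

    // Heap occupied by objects = reserved heap minus free heap
    static long usedBytes(Runtime rt) {
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        // Allocate ~64 MB of short-lived garbage so there is something to collect.
        for (int i = 0; i < 64; i++) {
            byte[] garbage = new byte[1024 * 1024];
        }

        long usedBefore = usedBytes(rt);
        System.gc(); // request a collection; the JVM may honour it lazily
        long usedAfter = usedBytes(rt);

        // 'used' typically drops after GC, while 'total' stays allocated
        // to the JVM (and therefore still counts against the container).
        System.out.println("Used before GC: " + usedBefore + " Bytes");
        System.out.println("Used after GC:  " + usedAfter + " Bytes");
        System.out.println("Total Memory:   " + rt.totalMemory() + " Bytes");
        System.out.println("Maximum Memory: " + rt.maxMemory() + " Bytes");
    }
}
```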

See here for more details.

Liferay DXP Cloud Console Memory Metrics are based on Container Memory Usage

The charts and metrics in the Liferay Cloud Console > Monitoring are based on the docker container memory usage, from https://cloud.google.com/monitoring/api/metrics_kubernetes > container/memory/used_bytes, filtering for ‘non-evictable’ memory - i.e. memory that cannot be easily reclaimed by the kernel.

For the Liferay service the memory usage here includes:

  • JVM heap memory (Total Memory from the JVM) 
  • JVM non-heap memory (such as Metaspace, Thread, GC and others) 
  • Non-JVM memory used by the container

See here and here for more details on JVM non-heap memory.
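The gap between the heap figure reported by DXP and the JVM's real footprint can be inspected from within the JVM itself using the standard java.lang.management API. The sketch below (class name illustrative) prints heap and non-heap usage; container usage in the Cloud Console will be higher still, since it also counts thread stacks, GC structures and non-JVM memory:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class JvmMemoryBreakdown {

    // Bytes of non-heap JVM memory in use (Metaspace, code cache, etc.)
    static long nonHeapUsed() {
        return ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();

        // Heap usage: what the DXP Resources tab is based on
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();

        System.out.println("Heap used:     " + heap.getUsed() + " Bytes");
        System.out.println("Non-heap used: " + nonHeapUsed() + " Bytes");
    }
}
```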

Additional Information

  • The output from the 'free -m' command run from the Liferay DXP Cloud Console > Liferay Service > Shell is for the host machine, not the docker container, so is meaningless in this context.
  • In a multi-node environment the Liferay Cloud Console shows the memory usage for each individual node, whereas the Liferay DXP Resources metrics are for the current node only.
  • Liferay PaaS Real-Time Alerts for Memory are based on the container memory usage, the defined quotas and a DXP Cloud console user’s Alert Preferences. The default memory quota is 80%. See here and here for more details.
  • Liferay PaaS Auto-scaling is based on the average container memory (and average CPU) utilization versus the target average utilization defined in the LCP.json. See here for more details.
  • See here for the appropriate Liferay DXP 7.x Compatibility Matrix with the supported / recommended JDK version:

“Compatibility with Java JDK 8 runtime has been deprecated as of Q4.2023 (this does not apply to previous versions of DXP). Java JDK 11 is now the recommended runtime environment.”

  • See here for steps to create thread and heap dumps in Liferay PaaS.
  • For more advanced monitoring requirements Liferay recommends the use of an Application Performance Monitoring (APM) tool such as: 
    • Glowroot which is included in the Liferay DXP 2023.Q4 Quarterly Release and onwards. Glowroot can be configured for the Liferay PaaS Liferay service in all environments.
    • Dynatrace which can be configured for the Liferay PaaS Liferay service in High Availability (HA) production environments. Please contact your account manager for more information.
