Maintaining Clustered Installations

Setting up your Liferay DXP installation to function in a cluster provides performance and scalability improvements, but it also requires additional care to support and maintain properly. Maintenance includes deploying new and updated plugins and modules, installing patches and fix packs, changing configurations, and more. The cluster maintenance methods outlined here maximize server uptime and minimize maintenance risk. Liferay DXP supports two standard cluster maintenance techniques:

  • Rolling restarts: Nodes are shut down and updated one at a time.
  • Blue-Green deployment: The current environment (blue) is duplicated, the duplicate (green) is updated, and users are then cut over to the updated green environment.
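As a sketch, a rolling restart can be automated with a script like the following. The node names and the load balancer and update helpers (`lb_disable`, `lb_enable`, `update_node`) are hypothetical stand-ins; the real commands depend on your load balancer and application server.

```shell
#!/bin/sh
# Rolling-restart sketch: take one node out of rotation, update it,
# and return it to the cluster before touching the next node.
# All helper functions below are placeholders for real commands.
NODES="node1 node2 node3"

lb_disable() { echo "LB: draining $1"; }   # stand-in: remove node from load balancer
lb_enable()  { echo "LB: restoring $1"; }  # stand-in: return node to load balancer
update_node() { echo "updating $1"; }      # stand-in: e.g. stop, patch, restart

for node in $NODES; do
  lb_disable "$node"   # stop sending traffic to this node
  update_node "$node"  # e.g. install a revertible fix pack
  lb_enable "$node"    # node rejoins before the next one is drained
done
```

Because only one node is down at a time, the cluster keeps serving traffic throughout, which is why this technique only works for changes the remaining (un-updated) nodes can tolerate.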

The techniques are compared below.

Cluster Update Techniques

| Update                                                                  | Rolling Restart | Blue-Green |
|-------------------------------------------------------------------------|-----------------|------------|
| Activation keys                                                         | ✔               | ✔ [1]      |
| Application server updates                                              | ✔               | ✔          |
| Cluster code changes [3]                                                |                 | ✔          |
| Fix pack installation and removal (revertible fix pack)                 | ✔               | ✔          |
| Fix pack installation (non-revertible fix pack)                         |                 | ✔          |
| JVM setting changes                                                     | ✔               | ✔          |
| Java version (major)                                                    |                 | ✔          |
| Java version (minor)                                                    | ✔               | ✔          |
| Plugin/module installation                                              | ✔               | ✔          |
| Plugin/module update (backward-compatible data/schema changes)          | ✔               | ✔          |
| Plugin/module update (non-backward-compatible data/schema changes) [2]  |                 | ✔          |
| Portal property changes                                                 | ✔               | ✔          |
| System Setting changes via configuration admin files                    | ✔               | ✔          |

[1] Activation key update using Blue-Green is only supported for virtual cluster activation keys. Please see Virtual Cluster Activation Key for Liferay DXP and Liferay Commerce for details.

[2] Data and data schema changes that are not backward-compatible include, but are not limited to, the following:

  • Modifying data in existing columns
  • Dropping columns
  • Changing column types
  • Changing data formats used in columns (such as changing from XML to JSON)
  • Updating a Service Builder service module’s data schema to a version outside of the module’s required data schema range.

A module’s Liferay-Require-SchemaVersion (specified in its bnd.bnd) must match the module’s schema version value in the Release_ table. Installing a module with a new schema version updates the Release_ table with that schema version and triggers a data upgrade process. If you install such a module on one node, the schema version in the Release_ table no longer matches the Liferay-Require-SchemaVersion of the modules on the other nodes, and the module’s Service Builder services become unavailable until the module is installed on those nodes too. Such changes cannot be reverted: the database must be restored from a backup. Apply these schema version changes while all nodes are shut down.
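For illustration, the required schema version is declared in the module’s bnd.bnd, and the deployed version can be inspected in the Release_ table. The module name and version below are hypothetical examples.

```properties
# bnd.bnd (hypothetical module) -- the schema version this module requires
Liferay-Require-SchemaVersion: 2.0.1
```

```sql
-- Check the schema version currently recorded for the module
SELECT servletContextName, schemaVersion
FROM Release_
WHERE servletContextName = 'com.example.sample.service';
```

If the value returned differs from the Liferay-Require-SchemaVersion of the module deployed on a node, that node’s Service Builder services for the module are unavailable.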

[3] Cluster communication must stay intact, so cluster code must not be updated in rolling restarts. The Customer Portal identifies DXP fix packs that contain such changes as non-revertible. These are the packages (and classes) you must not change in rolling restarts:

  • com.liferay.portal.kernel.cluster
  • com.liferay.portal.kernel.cluster.*
  • com.liferay.portal.kernel.exception.NoSuchClusterGroupException
  • com.liferay.portal.kernel.scheduler.multiple
  • com.liferay.portal.kernel.scheduler.multiple.*
  • com.liferay.portal.cache.multiple
  • com.liferay.portal.cache.multiple.*
  • com.liferay.portal.scheduler.multiple
  • com.liferay.portal.scheduler.multiple.*
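As a safeguard before attempting a rolling restart, you might check whether a patch touches files under these packages. A minimal sketch, assuming a newline-separated list of changed file paths is available (the sample list below is hypothetical, and only the package prefixes are checked, not individual classes):

```shell
#!/bin/sh
# Flag changed files that fall under cluster-related packages,
# which must not be updated via a rolling restart.
FORBIDDEN="com/liferay/portal/kernel/cluster
com/liferay/portal/kernel/scheduler/multiple
com/liferay/portal/cache/multiple
com/liferay/portal/scheduler/multiple"

is_rolling_restart_safe() {
  # $1: newline-separated list of files changed by the patch
  echo "$1" | while read -r f; do
    for pkg in $FORBIDDEN; do
      case "$f" in
        "$pkg"/*) echo "unsafe: $f" ;;
      esac
    done
  done
}

# Hypothetical example: one cluster file and one unrelated file.
changed_files="com/liferay/portal/kernel/cluster/ClusterNode.java
com/liferay/portal/language/LanguageResources.java"

is_rolling_restart_safe "$changed_files"
# → unsafe: com/liferay/portal/kernel/cluster/ClusterNode.java
```

Any output means the change set requires Blue-Green deployment (or a full shutdown) rather than a rolling restart.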