Legacy Knowledge Base
Published Jun. 30, 2025

When enabling Cross Cluster Replication, the error "leader index [liferay-0] does not have soft deletes enabled. soft deletes must be enabled when the index is created by setting index.soft_deletes.enabled to true" is thrown

Written By

Jorge Diaz

How To articles are not official guidelines or officially supported documentation. They are community-contributed content and may not always reflect the latest updates to Liferay DXP. We welcome your feedback to improve How To articles!

While we make every effort to ensure this Knowledge Base is accurate, it may not always reflect the most recent updates or official guidelines. We appreciate your understanding and encourage you to reach out with any feedback or concerns.

Legacy Article

You are viewing an article from our legacy "FastTrack" publication program, made available for informational purposes. Articles in this program were published without a requirement for independent editing or verification and are provided "as is" without guarantee.

Before using any information from this article, independently verify itssuitability for your situation and project.

Issue

When we try to enable Cross Cluster Replication in our Elasticsearch 6 installation, we are getting the following error:

  • leader index [liferay-0] does not have soft deletes enabled. soft deletes must be enabled when the index is created by setting index.soft_deletes.enabled to true
$ curl -X PUT "127.0.0.1:9200/liferay-0/_ccr/follow?wait_for_active_shards=1&pretty" -H 'Content-Type: application/json' -d'
> {
>   "remote_cluster" : "leader",
>   "leader_index" : "liferay-0"
> }
> '
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "leader index [liferay-0] does not have soft deletes enabled. soft deletes must be enabled when the index is created by setting index.soft_deletes.enabled to true"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "leader index [liferay-0] does not have soft deletes enabled. soft deletes must be enabled when the index is created by setting index.soft_deletes.enabled to true"
  },
  "status" : 400
}

It seems the soft_deletes property must be set at index creation time, but Elasticsearch 6 sets it to false by default. How can we solve this?

Environment

  • DXP 7.0, 7.1, or 7.2 using Elasticsearch 6

Resolution

Unfortunately, it is not possible to change this value on an existing index: the index must be regenerated so that, at creation time, it is configured with soft_deletes set to true.

To change the configuration and regenerate the indexes, follow these steps:

Step 1: Add index.soft_deletes.enabled: true to the Liferay configuration

This is necessary so that, when Liferay creates an index, it is created with soft_deletes set to true.

To add this configuration:

  1. In Liferay, go to Control Panel → Settings → System Settings
  2. Open the "Elasticsearch 6" configuration
  3. In the "Additional Index Configurations" field, add:
index.soft_deletes.enabled: true
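
If you manage this configuration through OSGi .config files instead of System Settings, the same value can be set in the Elasticsearch connector's configuration file. The following is only a sketch: the configuration PID and file name below are assumptions that can vary by DXP version and connector, so verify them against your installation.

# Assumed file (verify the PID for your DXP version and Elasticsearch 6 connector):
# [Liferay Home]/osgi/configs/com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config
additionalIndexConfigurations="index.soft_deletes.enabled: true"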

Step 2: Regenerate all Liferay system indexes to have soft_deletes set to true.

You can do this from Liferay (simpler but slower) or from Elasticsearch (more complicated, but faster).

Option 2.1 - From Liferay (simpler but slower)

This option consists of regenerating the Elasticsearch indexes from the Liferay database:

  1. Click on Control Panel → Configuration → Search (in DXP 7.3 and 7.4, it is under Control Panel → Search, and the Index Actions tab)
  2. Click the Execute button to Reindex all search indexes

Note that, depending on how many entities there are, reindexing may be a time- and memory-intensive operation. Please plan the timing of your reindex accordingly.

After the reindex, the Elasticsearch indexes will be recreated with soft_deletes set to true.
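
To confirm that a recreated index picked up the setting, you can query it directly in Elasticsearch. For example (assuming the index name liferay-0 from the error above and the host localhost:9200):

curl "http://localhost:9200/liferay-0/_settings/index.soft_deletes.enabled?pretty"

The response should contain "soft_deletes" : { "enabled" : "true" } under the index settings; if the setting is absent, the index is still using the Elasticsearch 6 default (false) and was not recreated with the new configuration.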

Option 2.2 - From Elasticsearch (more complicated, but faster)

This option consists of regenerating the Elasticsearch indexes from the data found in Elasticsearch.

It will be faster than the previous option, but it is also more complex as it has more manual steps.

The idea is to create a temporary index with the correct configuration, copy the data from the original index into it, delete the original index, recreate it with the new configuration, and copy the data back from the temporary index.

Note: replace index-name with your index name and replace localhost:9200 with your host and port.

1. Download the configuration of the index from the URL: http://localhost:9200/index-name?pretty

2. Create a config.json with the following structure:

{
  "mappings" : { <<copy here the mappings from the index>> },
  "settings" : { <<copy here the settings from the index>> }
}

We will use this config.json file as a parameter to create the index with the new configuration.

3. Inside the settings section of the previous file:

  • Remove the attributes: creation_date, provided_name, uuid, and version, as they are regenerated internally by Elasticsearch. If they are not removed, an error will be thrown.
  • Add the option of soft deletes (see the sketch after this list for how the edited settings section can look):
    "soft_deletes" : {
      "enabled" : "true"
    },
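
As a rough sketch of how the edited config.json can look after steps 2 and 3 (the shard and replica counts below are placeholders, and the mappings and analysis placeholders stand for whatever your original index returned):

{
  "mappings" : { <<mappings copied from the original index>> },
  "settings" : {
    "index" : {
      "number_of_shards" : "1",
      "number_of_replicas" : "0",
      "soft_deletes" : {
        "enabled" : "true"
      },
      "analysis" : { <<analysis settings copied from the original index>> }
    }
  }
}

Note that creation_date, provided_name, uuid, and version no longer appear, and that soft_deletes sits next to the other entries inside the "index" block of the settings.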

4. Create a temporary index using the new configuration of the config.json file. You can do it using CURL:

curl -X PUT http://localhost:9200/index-name_temp -d @config.json -H'Content-Type: application/json'

5. Verify that the temporary index has been created with the same configuration as the original index, except that it has soft_deletes enabled. To do this, access the configuration of the source and temporary indexes and compare them:

  • Source index configuration: http://localhost:9200/index-name?pretty
  • Temporary index configuration: http://localhost:9200/index-name_temp?pretty

Download and compare both configurations to verify that they are identical, except that the temporary one has soft_deletes set to true.
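
One way to compare them is to download both configurations to files and diff them. For example (any JSON comparison tool works equally well):

curl -s "http://localhost:9200/index-name?pretty" -o original.json
curl -s "http://localhost:9200/index-name_temp?pretty" -o temp.json
diff original.json temp.json

The only differences you should see are the index name, the internally regenerated attributes (creation_date, provided_name, uuid, and version), and the added soft_deletes block.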

6. Copy the data from the original index to the temporary index using the _reindex API of Elasticsearch. You can do it using CURL:

curl -X POST http://localhost:9200/_reindex -d '{"source": {"index": "index-name"},"dest": {"index": "index-name_temp"}}' -H'Content-Type: application/json'
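
Before deleting the original index in the next step, it is worth checking that the document counts of the two indexes match. For example:

curl "http://localhost:9200/index-name/_count?pretty"
curl "http://localhost:9200/index-name_temp/_count?pretty"

Note that for large indexes the _reindex call above can take a while; the counts will only match once it has finished.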

7. Delete the original index. You can do it using CURL:

curl -X DELETE http://localhost:9200/index-name

8. Create a new index with the original name, using the new configuration of the config.json file. You can do it using CURL:

curl -X PUT http://localhost:9200/index-name -d @config.json  -H'Content-Type: application/json'

9. Copy the data from the temporary index back to the index with the original name using the _reindex API of Elasticsearch. You can do it using CURL:

curl -X POST http://localhost:9200/_reindex -d '{"source": {"index": "index-name_temp"},"dest": {"index": "index-name"}}'  -H'Content-Type: application/json'

10. Delete the temporary index. You can do it using CURL:

curl -X DELETE http://localhost:9200/index-name_temp

Important: during steps 7, 8, and 9 the original index is deleted and regenerated from the temporary index. During this time, some Liferay functionality will not work properly (for example, searches, Asset Publishers, and the Users list), so it is important to perform this operation during a period with no user activity.

Repeat this operation for all of the Liferay indexes in your Elasticsearch cluster.
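
Once all the leader indexes have been recreated with soft_deletes enabled, the _ccr/follow request from the Issue section should no longer fail with the soft deletes error. For example, re-running it on the follower cluster (assuming the follower index liferay-0 does not already exist there):

curl -X PUT "127.0.0.1:9200/liferay-0/_ccr/follow?wait_for_active_shards=1&pretty" -H 'Content-Type: application/json' -d'
{
  "remote_cluster" : "leader",
  "leader_index" : "liferay-0"
}
'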

 

Should I reindex in Liferay or Elasticsearch?

We recommend Option 2.1, reindexing from Liferay, as it is much simpler and a more proven operation.
If reindexing in your system is a very expensive operation that takes many hours and is not feasible because of the loss of service, you can opt for Option 2.2 instead. It also involves a period of unavailability, since the original index must be deleted and regenerated, but that period will be shorter because it avoids the logic where Liferay retrieves the data from the database and sends it to Elasticsearch.

 
