Configuring Internally Managed Services
Internally managed services are configured and managed within the chart. These services are defined using the `dependencies` property:
```yaml
dependencies:
  <name>:
    portalBundleDenyList: []
    portalComponentDenyList: []
    portalProperties: ""
    source: ""
    <type>: {}
```
You can define as many dependent services as needed using this model.
- `<name>`: A string used to distinguish between different resources. Each key represents a unique service dependency. Tip: keep the key short and concise, using no special characters or capital letters.
- `portalBundleDenyList`: An array of bundle symbolic names that should be disabled from loading.
- `portalComponentDenyList`: An array of OSGi components that should be disabled from loading.
- `portalProperties`: A string containing portal properties specific to the dependency.
- `source`: A string used to choose the workload type for the service. Currently, only `statefulset` is supported.
- `<type>`: An object that can contain all values defined in the chart's default.yaml file except for `configMap`, `autoscaling`, and `serviceAccount`.
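For instance, a hypothetical `cache` dependency could combine these fields as follows. This is only a sketch of the model's shape; the key name, bundle and component names, property, and image are illustrative placeholders, not values defined by the chart:

```yaml
dependencies:
  cache:
    # Illustrative names only; substitute real bundle/component names.
    portalBundleDenyList:
      - com.example.unneeded.bundle
    portalComponentDenyList:
      - com.example.UnneededComponent
    portalProperties: |
      example.property=value
    source: statefulset
    statefulset:
      replicaCount: 1
      image:
        repository: redis
        tag: "7"
```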
Database
Here's an example of implementing the model for a `database` dependency. This example uses PostgreSQL in a cluster with a single replica and 1Gi of persistent storage:
```yaml
dependencies:
  database:
    portalProperties: |
      jdbc.default.driverClassName=org.postgresql.Driver
      jdbc.default.url=jdbc:postgresql://liferay-default-database:5432/lportal?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
      jdbc.default.username=<user>
      jdbc.default.password=<password>
    source: statefulset
    statefulset:
      env:
        - name: POSTGRES_DB
          value: lportal
        - name: POSTGRES_PASSWORD
          value: <password>
        - name: POSTGRES_USER
          value: <user>
        - name: PGUSER
          value: <user>
        - name: PGDATA
          value: /var/lib/postgresql/data/db
      image:
        pullPolicy: IfNotPresent
        repository: postgres
        tag: 16
      livenessProbe:
        exec:
          command: ["sh", "-c", "pg_isready -U $POSTGRES_USER -d $POSTGRES_DB"]
      ports:
        - containerPort: 5432
          name: <port>
          protocol: TCP
      readinessProbe:
        exec:
          command: ["sh", "-c", "pg_isready -U $POSTGRES_USER -d $POSTGRES_DB"]
      replicaCount: 1
      service:
        ports:
          - name: <portname>
            port: 5432
            protocol: TCP
            targetPort: <port>
        type: ClusterIP
      startupProbe:
        exec:
          command: ["sh", "-c", "pg_isready -U $POSTGRES_USER -d $POSTGRES_DB"]
      storage: 1Gi
      updateStrategy:
        type: RollingUpdate
      volumeClaimTemplates:
        - metadata:
            name: liferay-database-pvc
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
      volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: liferay-database-pvc
```
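Since `<type>` accepts any of the workload values from the chart's default.yaml, you can also tune settings not shown above. For example, here's a sketch of adding CPU and memory requests and limits to the same database statefulset; the values are illustrative, and this assumes the chart passes `resources` through to the container spec:

```yaml
dependencies:
  database:
    statefulset:
      resources:
        requests:
          cpu: 250m
          memory: 512Mi
        limits:
          cpu: "1"
          memory: 1Gi
```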
Search Engine
Here's an example of implementing the model for a `search` dependency. This example uses Elasticsearch in a cluster with a single replica and 1Gi of persistent storage:
```yaml
dependencies:
  search:
    source: statefulset
    statefulset:
      env:
        - name: xpack.security.enabled
          value: "false"
        - name: xpack.security.transport.ssl.enabled
          value: "false"
        - name: xpack.security.http.ssl.enabled
          value: "false"
        - name: cluster.name
          value: liferay_cluster
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: "-Xms256m -Xmx256m"
        - name: ELASTIC_PASSWORD
          value: <password>
      image:
        repository: elasticsearch
        tag: 8.17.0
        pullPolicy: IfNotPresent
      initContainers:
        - command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          image: busybox:stable-uclibc
          name: increase-vm-max-map
          securityContext:
            privileged: true
        - command: ["sh", "-c", "ulimit -n 65536"]
          image: busybox:stable-uclibc
          name: increase-fd-ulimit
          securityContext:
            privileged: true
        - command:
            - sh
            - -c
            - |
              if [ ! -d ./plugins/analysis-icu ]; then
                bin/elasticsearch-plugin install --batch analysis-icu analysis-kuromoji analysis-smartcn analysis-stempel
              else
                echo "Plugins already installed!"
              fi
              if [ ! -e ./_config/log4j2.properties ]; then
                cp -rv ./config/* ./_config
              fi
          image: elasticsearch:8.17.0
          imagePullPolicy: IfNotPresent
          name: install-plugins
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/plugins
              name: liferay-search-pvc
              subPath: plugins
            - mountPath: /usr/share/elasticsearch/_config
              name: liferay-search-pvc
              subPath: config
      livenessProbe:
        tcpSocket:
          port: <port>
      podSecurityContext:
        fsGroup: 1000
      ports:
        - containerPort: 9200
          name: <portname>
          protocol: TCP
      readinessProbe:
        tcpSocket:
          port: <port>
      replicaCount: 1
      service:
        ports:
          - name: <portname>
            port: 9200
            protocol: TCP
            targetPort: <port>
        type: ClusterIP
      startupProbe:
        failureThreshold: 30
        tcpSocket:
          port: <port>
      storage: 1Gi
      updateStrategy:
        type: RollingUpdate
      volumeClaimTemplates:
        - metadata:
            name: liferay-search-pvc
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
      volumeMounts:
        - mountPath: /usr/share/elasticsearch/config
          name: liferay-search-pvc
          subPath: config
        - mountPath: /usr/share/elasticsearch/data
          name: liferay-search-pvc
          subPath: data
        - mountPath: /usr/share/elasticsearch/logs
          name: liferay-search-pvc
          subPath: logs
        - mountPath: /usr/share/elasticsearch/plugins
          name: liferay-search-pvc
          subPath: plugins
```
Next, add the necessary OSGi configuration file using the `configmap` property:
```yaml
configmap:
  data:
    com.liferay.portal.search.elasticsearch7.configuration.ElasticsearchConfiguration.config: |
      authenticationEnabled=B"false"
      clusterName="liferay_cluster"
      httpSSLEnabled=B"false"
      indexNamePrefix="liferay-"
      networkHostAddresses=["http://liferay-default-search:9200"]
      operationMode="REMOTE"
      password="<password>"
      username="<user>"
customVolumeMounts:
  x-search-internal:
    - mountPath: /opt/liferay/osgi/configs/com.liferay.portal.search.elasticsearch7.configuration.ElasticsearchConfiguration.config
      name: liferay-configmap
      subPath: com.liferay.portal.search.elasticsearch7.configuration.ElasticsearchConfiguration.config
```
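A note on the value syntax inside the `.config` file: OSGi `.config` files use the Configuration Admin typed-value format, where (as far as the standard format goes) a letter before the quoted value declares its type, plain quotes denote a String, and brackets denote an array. The keys below are illustrative, not real configuration properties:

```
exampleBoolean=B"false"
exampleInt=i"20"
exampleString="text"
exampleArray=["one","two"]
```

Here `B` marks a Boolean and `i` an int, which is why the examples in this section write values like `authenticationEnabled=B"false"`.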
File Storage
Here's an example of implementing the model for an `objectstorage` dependency. This example uses MinIO in a cluster with a single replica and 1Gi of persistent storage:
```yaml
dependencies:
  objectstorage:
    portalProperties: |
      dl.store.impl=com.liferay.portal.store.s3.S3Store
    source: statefulset
    statefulset:
      env:
        - name: MINIO_API_PORT_NUMBER
          value: "9000"
        - name: MINIO_CONSOLE_PORT_NUMBER
          value: "9001"
        - name: MINIO_DEFAULT_BUCKETS
          value: <buckets>
        - name: MINIO_REGION
          value: us-west-1
        - name: MINIO_ROOT_PASSWORD
          value: <password>
        - name: MINIO_ROOT_USER
          value: <user>
        - name: MINIO_SCHEME
          value: http
        - name: MINIO_SERVER_URL
          value: http://localhost:9000
      image:
        repository: bitnami/minio
        tag: 2024
        pullPolicy: IfNotPresent
      livenessProbe:
        httpGet:
          path: /minio/health/live
          port: api
          scheme: HTTP
      podSecurityContext:
        fsGroup: 1001
        fsGroupChangePolicy: OnRootMismatch
      ports:
        - containerPort: 9000
          name: api
          protocol: TCP
        - containerPort: 9001
          name: console
          protocol: TCP
      readinessProbe:
        httpGet:
          path: /minio/health/ready
          port: api
          scheme: HTTP
      replicaCount: 1
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        privileged: false
        readOnlyRootFilesystem: true
        runAsGroup: 1001
        runAsNonRoot: true
        runAsUser: 1001
        seLinuxOptions: {}
        seccompProfile:
          type: RuntimeDefault
      service:
        ports:
          - name: api
            port: 9000
            protocol: TCP
            targetPort: api
          - name: console
            port: 9001
            protocol: TCP
            targetPort: console
        type: ClusterIP
      startupProbe:
        httpGet:
          path: /minio/health/ready
          port: api
          scheme: HTTP
      storage: 1Gi
      updateStrategy:
        type: RollingUpdate
      volumeClaimTemplates:
        - metadata:
            name: liferay-objectstorage-pvc
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
      volumeMounts:
        - mountPath: /tmp
          name: liferay-objectstorage-pvc
          subPath: tmp-dir
        - mountPath: /opt/bitnami/minio/tmp
          name: liferay-objectstorage-pvc
          subPath: app-tmp-dir
        - mountPath: /.mc
          name: liferay-objectstorage-pvc
          subPath: app-mc-dir
        - mountPath: /bitnami/minio/data
          name: liferay-objectstorage-pvc
          subPath: data-dir
```
Next, add the necessary OSGi configuration file using the `configmap` property:
```yaml
configmap:
  data:
    com.liferay.portal.store.s3.configuration.S3StoreConfiguration.config: |
      accessKey="<accesskey>"
      bucketName="<bucketname>"
      connectionProtocol="HTTP"
      connectionTimeout=i"20"
      corePoolSize=i"3"
      httpClientMaxConnections=i"10"
      httpClientMaxErrorRetry=i"3"
      s3Endpoint="liferay-default-objectstorage:9000"
      s3PathStyle=B"true"
      s3Region="us-west-1"
      s3StorageClass="STANDARD"
      secretKey="<secretkey>"
customVolumeMounts:
  x-object-storage-internal:
    - mountPath: /opt/liferay/osgi/configs/com.liferay.portal.store.s3.configuration.S3StoreConfiguration.config
      name: liferay-configmap
      subPath: com.liferay.portal.store.s3.configuration.S3StoreConfiguration.config
```