Instructions for installing FCEv3 and its dependent components on CentOS 9 Stream. This guide details the installation and configuration of components that are normally delivered with the virtual machine (OVA), enabling solution deployment without the need for a VM. It does not cover all aspects of a successful FCE deployment; see the FCEv3 installation guide at help.recordpoint.com: Connectors – RecordPoint.
PART 0 — PREREQUISITES
Access Installation Files from RecordPoint
From the FCEv3 Connector Configuration within Records365, download the FCE application via the RPM Package Manager File link. This zip file contains the RPMs for the FCE application.
RecordPoint will supply a package for the file scanner application, diskover-2.2.3-<id>.zip. It contains two directories:
/diskover-2.2.3 — the file scanner application.
/packageRepos — several .repo files for various package repositories, typically copied into /etc/yum.repos.d/ on the target machine. Optional, provided for convenience.
Uncompress the two packages and make them available to the target installation environment.
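For illustration only (the FCE archive name below is a hypothetical placeholder; substitute the actual file names downloaded in the previous steps), the packages can be extracted on the target machine with unzip:
unzip fce-rpms.zip -d ~/fce-install
unzip diskover-2.2.3-<id>.zip -d ~/fce-install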
0.1 Port Access
The following ports must be accessible for users and services within the VM:
| Port | Service | Description |
|---|---|---|
| 8000 | Diskover | Data indexing and analytics |
| 8080 | FCE Management UI | Web-based management interface |
| 9090 | FCE API | API access for integrations |
| 5601 | Kibana | Dashboard and reporting interface |
| 15672 | RabbitMQ Management UI | Web-based administrative interface used to monitor and manage the RabbitMQ broker |
| 5672 | RabbitMQ client port | Must be open for RabbitMQ to perform work |
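If firewalld is active and these services must be reachable from outside the VM, the ports can be opened with firewall-cmd, as in the examples later in this guide. The lines below are illustrative only; open only the ports your deployment actually requires:
sudo firewall-cmd --add-port=8080/tcp --add-port=5601/tcp --add-port=15672/tcp --permanent
sudo firewall-cmd --reload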
0.2 Move Cockpit off port 9090
(If applicable)
The Cockpit project documents changing the listening address and port using a systemd socket override and SELinux port labeling.
See the Cockpit project for more information: TCP Port and Address
In the steps below, port 9091 is used as an example; replace 9091 with any free port. Binding to 127.0.0.1 keeps Cockpit local-only; use 0.0.0.0:PORT or a specific server IP if remote access is required.
Create a systemd override for Cockpit’s socket:
The two empty ListenStream= lines matter; they force systemd to drop all inherited listeners.
sudo mkdir -p /etc/systemd/system/cockpit.socket.d
sudo tee /etc/systemd/system/cockpit.socket.d/override.conf >/dev/null <<'EOF'
[Socket]
ListenStream=
ListenStream=
ListenStream=127.0.0.1:9091
EOF
Allow Cockpit’s new port in SELinux: Replace 9091 with the chosen port.
sudo semanage port -a -t websm_port_t -p tcp 9091 \
  || sudo semanage port -m -t websm_port_t -p tcp 9091
Reload systemd and restart cockpit:
sudo systemctl daemon-reload
sudo systemctl restart cockpit.socket
Verify Cockpit is listening on the new port:
sudo ss -lntp | grep ':9091'
PART 1 — SYSTEM PREPARATION
We recommend beginning with a general system update:
sudo dnf update -y
1.1 Create Vagrant User and Install Base Tools
sudo useradd -m -s /bin/bash vagrant
sudo passwd vagrant
sudo groupadd vagrant 2>/dev/null || true
sudo usermod -g vagrant vagrant
Install tooling (note: gedit is not required; it is simply the text editor used throughout this guide):
sudo dnf install -y epel-release
sudo dnf install -y cifs-utils
sudo dnf install -y sshpass
sudo dnf install -y python3-pip
sudo dnf install -y gedit
python3 -m pip install --upgrade pip
PART 2 — ELASTICSEARCH INSTALLATION
The installation instructions for Elasticsearch 8.19 using the RPM package are published by Elastic at Install Elasticsearch with RPM | Elastic Docs. The steps below are required for FCEv3 and are based on that official Elastic documentation.
2.1 Import the Elasticsearch GPG key
Download and install the public signing key:
sudo -i rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
2.2 Install the RPM repository
From the RecordPoint set of installation files, move /packageRepos/elasticsearch.repo and /packageRepos/kibana.repo to the /etc/yum.repos.d/ directory on the target machine.
The repository is now ready for use. Install Elasticsearch with this command:
sudo dnf install --enablerepo=elasticsearch elasticsearch
This command generates the password for the elastic user; store it for future use. Per the Elastic documentation, this password may be reset later via:
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
Execute the following commands to configure the elasticsearch service to start automatically using systemd:
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
By default, the Elasticsearch service doesn’t log information to the systemd journal. RecordPoint recommends enabling journalctl logging; to do so, remove the --quiet option from the ExecStart command line in the elasticsearch.service file.
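One way to do this without editing the packaged unit file directly is a systemd drop-in (a sketch only; confirm the exact ExecStart line shipped with your Elasticsearch version and copy it here without --quiet):
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
sudo tee /etc/systemd/system/elasticsearch.service.d/logging.conf >/dev/null <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid
EOF
sudo systemctl daemon-reload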
2.3 Add Elasticsearch CA to System Trust Store
sudo cp /etc/elasticsearch/certs/http_ca.crt \
  /etc/pki/ca-trust/source/anchors/elastic-http-ca.crt
sudo update-ca-trust
2.4 Configure TLS-Enabled Elasticsearch
This step enables X-Pack security with TLS for both the HTTP and transport layers in Elasticsearch.
Edit:
sudo gedit /etc/elasticsearch/elasticsearch.yml
Ensure the following settings are present and match exactly. These settings configure the cluster, enable security, and apply TLS certificates:
cluster.name: FCEv3-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
cluster.initial_master_nodes: ["node-1"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
Find and comment out the lines below (these were around lines 109 and 113 of the file in RecordPoint’s deployment tests):
#cluster.initial_master_nodes: ["localhost"] #http.host: 0.0.0.0
2.5 Update Elasticsearch systemd service settings
Create the systemd override directory:
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
Create this file:
sudo gedit /etc/systemd/system/elasticsearch.service.d/elasticsearch.conf
Add this content:
[Service]
LimitMEMLOCK=infinity
LimitNPROC=4096
LimitNOFILE=65536
Reload systemd daemon:
sudo systemctl daemon-reload
2.6 Open Firewall Ports for Elasticsearch
To enable external access to the Elasticsearch API:
sudo firewall-cmd --add-port=9200/tcp --permanent
sudo firewall-cmd --reload
2.7 Restart and validate Elasticsearch access
Replace $ELASTIC_PASSWORD with the elastic user’s password:
sudo systemctl restart elasticsearch.service
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
PART 3 — DISKOVER-WEB INSTALLATION
This Diskover-Web section of the guide is based on the Diskover 2.2.3 installation guide (specifically the “Diskover-Web Installation” section).
3.1 Install NGINX
Install the Remi repository on CentOS/RHEL 9.x:
sudo dnf install https://rpms.remirepo.net/enterprise/remi-release-9.rpm
Install the NGINX web server application on CentOS/RHEL 9.x:
sudo dnf install nginx
For SELinux on CentOS/RHEL 9.x, add the following to allow NGINX to start:
sudo semanage permissive -a httpd_t
Enable NGINX to start at boot, start it now, and check its status:
sudo systemctl enable nginx
sudo systemctl start nginx
sudo systemctl status nginx
3.2 Install PHP 8.1 and PHP-FPM (FastCGI)
Enable the Remi PHP 8.1 module stream:
sudo dnf module enable php:remi-8.1
Install PHP and other PHP packages:
sudo dnf install php php-common php-fpm php-opcache php-cli php-gd php-mysqlnd php-ldap php-pecl-zip php-xml php-xmlrpc php-mbstring php-json php-sqlite3
Copy the production php.ini template (php.ini-production) to /etc/php.ini:
sudo cp /usr/share/doc/php-common/php.ini-production /etc/php.ini
3.3 Configure PHP-FPM
Edit the PHP-FPM pool configuration:
sudo gedit /etc/php-fpm.d/www.conf
user = nginx
group = nginx
listen = /var/run/php-fpm/www.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
Start FPM:
sudo systemctl enable php-fpm
sudo systemctl start php-fpm
Update the PHP-FPM runtime directory permissions:
sudo chown -R root:nginx /var/run/php-fpm
sudo chown -R nginx:nginx /var/lib/php/session
sudo systemctl restart php-fpm
3.4 Deploy Diskover-Web Files
From the set of RecordPoint-provided files, within the “diskover-2.2.3” directory, locate the diskover-web directory and copy it to its deployment directory:
sudo cp -a diskover-web /var/www/
Set ownership to the nginx user:
sudo chown -R nginx:nginx /var/www/diskover-web
Create the actual files from the sample files:
cd /var/www/diskover-web/public && \
sudo -u nginx bash -c 'for f in *.txt.sample; do cp "$f" "${f%.sample}"; done' && \
sudo chmod 660 *.txt
Create the actual task files from the sample task files:
cd /var/www/diskover-web/public/tasks && \
sudo -u nginx bash -c 'for f in *.json.sample; do cp "$f" "${f%.sample}"; done' && \
sudo chmod 660 *.json
3.5 Full NGINX Virtual Host Configuration
Create a new NGINX configuration file:
sudo gedit /etc/nginx/conf.d/diskover-web.conf
Add the following content:
server {
    listen 8000;
    server_name diskover-web;
    root /var/www/diskover-web/public;
    index index.php index.html index.htm;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location / {
        try_files $uri $uri/ /index.php?$args =404;
    }

    location ~ \.php(/|$) {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        set $path_info $fastcgi_path_info;
        fastcgi_param PATH_INFO $path_info;
        try_files $fastcgi_script_name =404;
        fastcgi_pass unix:/var/run/php-fpm/www.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_read_timeout 900;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
    }
}
[Information] This step is only required if the contents of diskover-web.conf were populated from Diskover’s installation guide. Ensure the fastcgi_pass directive is set to:
fastcgi_pass unix:/var/run/php-fpm/www.sock;
Restart:
sudo systemctl restart nginx
3.6 Open Firewall Port for Diskover-Web
sudo firewall-cmd --add-port=8000/tcp --permanent
sudo firewall-cmd --reload
3.7 Configure Diskover-Web Constants.php
cd /var/www/diskover-web/src/diskover
sudo cp Constants.php.sample Constants.php
sudo gedit Constants.php
Locate and update the const ES_HOSTS section:
const ES_HOSTS = [
    [
        'hosts' => ['localhost'],
        'port' => 9200,
        'user' => 'elastic',
        'pass' => '<ElasticPassword>',
        'https' => TRUE,
        'sslverification' => FALSE
    ]
];
Locate “const ES_SSLVERIFICATION” and set its value to FALSE, then save the file.
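The resulting line should read similar to the following (exact spacing may differ between Diskover releases):
const ES_SSLVERIFICATION = FALSE;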
3.8 Set Dashboard as Default Page
cd /var/www/diskover-web/public
sudo ln -sf dashboard.php index.php
Restart:
sudo systemctl restart php-fpm nginx
PART 4 — FILE INDEXER INSTALLATION
The Diskover indexer section of the guide is based on the Diskover 2.2.3 installation guide (specifically the Diskover Indexer section).
4.1 Create Log Directory
sudo mkdir -p /var/log/diskover
Ensure correct permissions for the user running Diskover (typically root):
sudo chmod 755 /var/log/diskover
4.2 Install core Diskover
Returning to the RecordPoint set of provided files, navigate to the “diskover-2.2.3” directory and copy the diskover directory to its deployment location:
sudo cp -a diskover /opt/
4.3 Install Dependencies
cd /opt/diskover
python3 -m pip install -r /opt/diskover/requirements.txt
python3 -m pip install "elasticsearch<8.0.0"
python3 -m pip install croniter==1.0.15
4.4 Install Default Configs
The present working directory should still be /opt/diskover:
for d in configs_sample/*; do
  d=$(basename "$d")
  mkdir -p ~/.config/$d
  cp configs_sample/$d/config.yaml ~/.config/$d/
done
4.5 Configure Elasticsearch HTTPS
sudo gedit ~/.config/diskover/config.yaml
databases:
  elasticsearch:
    host: localhost
    port: 9200
    user: elastic
    pass: '<ElasticPassword>'
    https: True
    sslverification: False
4.6 Generate License files
Generate the hardware ID:
cd /opt/diskover
python3 diskover_lic.py -g
This will produce a hardware ID. Once the hardware ID is generated, pass it to your RecordPoint representative. RecordPoint will return two license files, diskover.lic and diskover-web.lic.
Copy the diskover.lic file to:
sudo cp diskover.lic /opt/diskover/
Copy the diskover-web.lic file to:
sudo cp diskover-web.lic /var/www/diskover-web/src/diskover/
Set proper ownership and permissions:
sudo chown nginx:nginx /var/www/diskover-web/src/diskover/diskover-web.lic
sudo chmod 644 /var/www/diskover-web/src/diskover/diskover-web.lic
4.7 Task Worker Setup
Create a service definition file for the scanner:
sudo gedit /etc/systemd/system/diskoverd.service
Paste in:
[Unit]
Description=diskoverd task worker daemon
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/diskover
ExecStart=/usr/bin/python3 /opt/diskover/diskoverd.py -n worker-%H
Restart=always

[Install]
WantedBy=multi-user.target
Enable the new service:
sudo chmod 644 /etc/systemd/system/diskoverd.service
sudo systemctl daemon-reload
sudo systemctl enable diskoverd.service
sudo systemctl start diskoverd.service
PART 5 — RABBITMQ INSTALLATION
These instructions for installing zero-dependency Erlang for RabbitMQ are based on the documentation found here: GitHub - rabbitmq/erlang-rpm: Latest Erlang/OTP releases packaged as a zero dependency RPM, just enough for running RabbitMQ.
The RabbitMQ installation steps are based on the official document found here:
Installing on RPM-based Linux | RabbitMQ
5.1 Import signing keys
These keys allow for RPM signature verification:
sudo rpm --import https://github.com/rabbitmq/signing-keys/releases/download/2.0/rabbitmq-release-signing-key.asc
sudo rpm --import https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key
5.2 Install Erlang
Download the Erlang 26 RPM:
curl -L -o erlang-26.2.5.16-1.el9.x86_64.rpm \
  https://github.com/rabbitmq/erlang-rpm/releases/download/v26.2.5.16/erlang-26.2.5.16-1.el9.x86_64.rpm
Install the Erlang 26 RPM:
sudo rpm -Uvh erlang-26.2.5.16-1.el9.x86_64.rpm
5.3 Install RabbitMQ
Download RabbitMQ 4.2 RPM:
curl -L -o rabbitmq-server-4.2.2-1.el8.noarch.rpm \
  https://github.com/rabbitmq/rabbitmq-server/releases/download/v4.2.2/rabbitmq-server-4.2.2-1.el8.noarch.rpm
Install RabbitMQ 4.2 RPM:
sudo rpm -Uvh rabbitmq-server-4.2.2-1.el8.noarch.rpm
After installation, these rpm files may be removed.
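For example:
rm -f erlang-26.2.5.16-1.el9.x86_64.rpm rabbitmq-server-4.2.2-1.el8.noarch.rpm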
5.4 Start RabbitMQ
sudo systemctl enable rabbitmq-server
sudo systemctl start rabbitmq-server
sudo systemctl status rabbitmq-server
5.5 Enable Management UI
sudo rabbitmq-plugins enable rabbitmq_management
sudo systemctl restart rabbitmq-server
5.6 Configure Required RabbitMQ Users
sudo rabbitmqctl add_user fceindexbuilder fceindexbuilder1 2>/dev/null || true
sudo rabbitmqctl change_password fceindexbuilder mypassword1
sudo rabbitmqctl set_permissions -p / fceindexbuilder ".*" ".*" ".*"
sudo rabbitmqctl add_user admin darkdata
sudo rabbitmqctl set_user_tags admin administrator
sudo rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
sudo rabbitmqctl delete_user guest
sudo rabbitmqctl list_users
PART 6 — FCEv3 INSTALLATION
Kibana 8.19.8 is installed using the instructions from this document:
Install Kibana with RPM | Elastic Docs
6.1 Install Required System Dependencies
Confirm Python development tools are installed:
sudo dnf install -y python3-devel
Install the pip dependencies:
pip3 install "numpy<2.0"
6.2 Install Kibana
If RecordPoint’s version of kibana.repo wasn’t used, create a file called kibana.repo in the /etc/yum.repos.d/ directory containing:
[kibana-8.X]
name=Kibana repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install, enable, and start Kibana:
sudo dnf install kibana
sudo systemctl enable kibana
sudo systemctl restart kibana
Kibana should become available at:
http://localhost:5601
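A quick way to confirm Kibana is responding locally (any HTTP status line, including a redirect to the setup page, indicates the service is up):
curl -sI http://localhost:5601 | head -n 1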
6.3 Diskover 2.2.3 on CentOS with SELinux Enforcing and No Global SHA-1
This section runs Diskover 2.2.3 on a hardened CentOS system with SELinux enforcing, without enabling the system-wide SHA-1 crypto policy. Diskover licensing currently depends on SHA-1, and modern RHEL crypto policies block SHA-1 at the policy level even when the algorithm is present. The steps below scope SHA-1 availability to Diskover processes only, avoiding global crypto weakening.
sudo mkdir -p /etc/ssl/diskover
sudo tee /etc/ssl/diskover/openssl-diskover.cnf >/dev/null <<'EOF'
openssl_conf = openssl_init

[openssl_init]
providers = provider_sect
alg_section = evp_properties

[evp_properties]
rh-allow-sha1-signatures = yes

[provider_sect]
default = default_sect
legacy = legacy_sect

[default_sect]
activate = 1

[legacy_sect]
activate = 1
EOF
This file is intentionally minimal.
Scope the OpenSSL override to Diskover by editing the php-fpm pool used by Diskover:
sudo gedit /etc/php-fpm.d/www.conf
Under the [www] section ensure:
clear_env = no
env[OPENSSL_CONF] = /etc/ssl/diskover/openssl-diskover.cnf
Restart php-fpm:
sudo systemctl restart php-fpm
Add a systemd override for php-fpm:
Note: This command opens the file in the nano editor. To save and exit, press Ctrl + X, then Y, then Enter.
sudo systemctl edit php-fpm
Insert:
[Service]
Environment=OPENSSL_CONF=/etc/ssl/diskover/openssl-diskover.cnf
Ensure SELinux File Contexts for Diskover. Label Diskover files correctly so SELinux allows access:
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/diskover-web(/.*)?"
sudo restorecon -Rv /var/www/diskover-web
Ensure the license file has the correct context:
sudo restorecon -v /var/www/diskover-web/src/diskover/diskover-web.lic
Set required SELinux booleans. Diskover requires outbound network access and executable memory for PHP/OpenSSL behavior. These are required under SELinux enforcing mode:
sudo setsebool -P httpd_can_network_connect 1
sudo setsebool -P httpd_execmem 1
SELinux Port Labeling:
Diskover-Web (nginx on port 8000) and Elasticsearch (port 9200):
sudo semanage port -m -t http_port_t -p tcp 8000 \
  || sudo semanage port -a -t http_port_t -p tcp 8000
sudo semanage port -m -t http_port_t -p tcp 9200 \
  || sudo semanage port -a -t http_port_t -p tcp 9200
Reboot:
sudo reboot
PART 7 — INSTALL FCEv3 COMPONENTS
At this point, all FCE prerequisites should be in place:
Elasticsearch URL is correct
Configuration Files all contain accurate information
Diskover License files have been generated and set
Dependencies are installed
Kibana + Elasticsearch are online
Diskover Web, Indexer, and Task Worker are installed
Necessary ports are free
FCE may now be deployed normally as described in the public documentation at Connector Setup – RecordPoint.
Important Note: When executing the configure-ova-rpms.sh script, the Elasticsearch and RabbitMQ passwords must be specified via the ‘--elastic-password’ and ‘--rabbitmq-password’ parameters. We also recommend piping the output to a file, which can aid with troubleshooting.
cd /opt/recordpoint-connector-IndexBuilder/Deployment
sudo chmod a+x ./configure-ova-rpms.sh
sudo ./configure-ova-rpms.sh --elastic-password ESPwdHere --rabbitmq-password darkdata > configure.out
Troubleshooting
Tail the NGINX error log (live) for issues with diskover-web at localhost:8000:
sudo tail -f /var/log/nginx/error.log
Or tail the last N lines (not live):
sudo tail -n 100 /var/log/nginx/error.log
If the connector.service does not start properly, ensure that RabbitMQ is running and that the RabbitMQ users exist after running the configure-ova script:
sudo rabbitmqctl add_user fceindexbuilder fceindexbuilder1 2>/dev/null || true
sudo rabbitmqctl change_password fceindexbuilder mypassword1
sudo rabbitmqctl set_permissions -p / fceindexbuilder ".*" ".*" ".*"
sudo rabbitmqctl add_user admin darkdata
sudo rabbitmqctl set_user_tags admin administrator
sudo rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
sudo rabbitmqctl delete_user guest
sudo rabbitmqctl list_users
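To inspect the connector service itself (assuming the unit is named connector.service as referenced above), the systemd status and recent journal entries are usually the fastest starting point:
sudo systemctl status connector.service rabbitmq-server
sudo journalctl -u connector.service -n 100 --no-pager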
Diskover Crawls
Directories must be labeled with the appropriate SELinux context to allow shared access. This ensures the vagrant user can execute against indexed data. In addition, the vagrant user must have appropriate filesystem permissions (read and write) on any directories it is expected to access.
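A sketch for a hypothetical crawl directory /mnt/share1 (the SELinux type and exact permissions depend on how the share is mounted and which services need access; adjust to your environment):
sudo semanage fcontext -a -t public_content_rw_t "/mnt/share1(/.*)?"
sudo restorecon -Rv /mnt/share1
sudo chgrp -R vagrant /mnt/share1
sudo chmod -R g+rwX /mnt/share1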