OLS died for no apparent reason?

#1
Suddenly, OLS 1.8.3 on my CloudLinux 9 server died. stderr.log contains these lines:

Code:
2025-06-25 00:04:01.202 [STDERR] 2025-06-25 00:04:01.140229 Cgid: Child process with pid: 3927493 was killed by signal: 1, core dump: 0
2025-06-25 00:04:01.492 [STDERR] 2025-06-25 00:04:01.492104 Cgid: Child process with pid: 3928861 was killed by signal: 1, core dump: 0
2025-06-25 00:04:01.504 [STDERR] 2025-06-25 00:04:01.504891 Cgid: Child process with pid: 3928882 was killed by signal: 1, core dump: 0
2025-06-25 00:04:01.770 [STDERR] 2025-06-25 00:04:01.770713 Cgid: Child process with pid: 3928778 was killed by signal: 1, core dump: 0
2025-06-25 00:04:02.063049 Cgroups returning success file: /sys/fs/cgroup/systemd/user.slice/user-1045.slice/litespeed-exec.scope/cgroup.procs, pid: 3929223
2025-06-25 00:04:02.202 [STDERR] 2025-06-25 00:04:02.058489 Cgid: Child process with pid: 3928989 was killed by signal: 1, core dump: 0
When I tried to start lsws again, the command systemctl start lsws exited without any error, and nothing new appeared in stderr.log either. But the lsws service was still dead, and ps showed that some litespeed processes were still alive.

Code:
[root@servername ~]# systemctl status lsws
× litespeed.service - The OpenLiteSpeed HTTP Server
     Loaded: loaded (/etc/systemd/system/litespeed.service; enabled; preset: disabled)
     Active: failed (Result: exit-code) since Wed 2025-06-25 00:07:28 +07; 5s ago
   Duration: 2.288s
    Process: 3934802 ExecStart=/usr/local/lsws/bin/lswsctrl start (code=exited, status=0/SUCCESS)
   Main PID: 3934818 (code=exited, status=1/FAILURE)
     CGroup: /system.slice/litespeed.service
             ├─3901827 "openlitespeed (lshttpd - main)"
             ├─3901828 "openlitespeed (lscgid)"
             └─3901858 "openlitespeed (lshttpd - #01)"

Jun 25 00:07:24 servername systemd[1]: Starting The OpenLiteSpeed HTTP Server...
Jun 25 00:07:24 servername lswsctrl[3934802]: [OK] litespeed: pid=3934818.
Jun 25 00:07:26 servername systemd[1]: Started The OpenLiteSpeed HTTP Server.
Jun 25 00:07:28 servername systemd[1]: litespeed.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 00:07:28 servername systemd[1]: litespeed.service: Failed with result 'exit-code'.
Jun 25 00:07:28 servername systemd[1]: litespeed.service: Unit process 3901827 (litespeed) remains running after unit stopped.
Jun 25 00:07:28 servername systemd[1]: litespeed.service: Unit process 3901828 (litespeed) remains running after unit stopped.
Jun 25 00:07:28 servername systemd[1]: litespeed.service: Unit process 3901858 (litespeed) remains running after unit stopped.
Jun 25 00:07:28 servername systemd[1]: litespeed.service: Unit process 3934822 (litespeed) remains running after unit stopped.
[root@servername ~]# ps aux |grep lsws
root     3934900  0.0  0.0   3876  2124 pts/1    S+   00:07   0:00 grep --color=auto lsws
[root@servername ~]# ps aux |grep litespeed
root     3901827  0.3  0.6 170976 107976 ?       S    Jun24   0:04 openlitespeed (lshttpd - main)
root     3901828  0.0  0.1  30104 20696 ?        S    Jun24   0:00 openlitespeed (lscgid)
apache   3901858  1.5  0.7 186992 127720 ?       SN   Jun24   0:21 openlitespeed (lshttpd - #01)
root     3934911  0.0  0.0   3876  2108 pts/1    S+   00:07   0:00 grep --color=auto litespeed
I had to kill those processes manually; only then could I start lsws again. Where else can I look for the cause of this, please?
Thanks.
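For reference, this is roughly the cleanup I did, as a sketch. It assumes all leftover processes match the name openlitespeed, as in the ps output above; adjust the pattern if your process names differ:

```shell
# List any leftover OpenLiteSpeed processes with their full command lines
pgrep -a -f openlitespeed

# Ask them to exit cleanly first
pkill -TERM -f openlitespeed
sleep 5

# Force-kill anything that ignored SIGTERM, then start the service again
pkill -KILL -f openlitespeed
systemctl start lsws
```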
 
#3
The only errors are:
Code:
2025-06-25 00:04:13.679325 [ERROR] [3929394] HttpListener::start(): Can't listen at address adminListener: Address already in use!
2025-06-25 00:04:13.679420 [ERROR] [3929394] HttpServer::addListener(adminListener) failed to create new listener
2025-06-25 00:04:13.679426 [ERROR] [3929394] [config:admin:listener:adminListener] failed to start listener on address *:7080!
2025-06-25 00:04:13.679433 [ERROR] [3929394] [config:admin:listener] No listener is available for admin virtual host!
2025-06-25 00:04:13.698777 [ERROR] [3929394] Fatal error in configuration, exit!
The last error, "Fatal error in configuration, exit!", is misleading. There is nothing wrong with the configuration; I only needed to kill the leftover processes before OLS could be restarted. It looks like OLS tried to restart itself but did not succeed. But why?
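"Address already in use" on *:7080 suggests the old main process was still holding the admin port, so the new instance could not bind it. A quick way to check who owns the port (ss is part of iproute2, standard on CloudLinux 9; lsof is an alternative if installed):

```shell
# Show any process listening on the OLS admin port 7080
ss -tlnp | grep ':7080'

# Alternative with lsof, if installed
lsof -nP -iTCP:7080 -sTCP:LISTEN
```

If an old openlitespeed PID shows up here after the unit has "failed", that matches the "Unit process ... remains running after unit stopped" messages in the systemctl status output.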
 