Memcache connection failed

jcm

New Member
#1
Hello,

I hope this is not a real noob question so I really hope someone can help.

I installed the OpenLiteSpeed image on Google Cloud from the Marketplace.

One thing I cannot get to work is memcache. It shows it is enabled, but "connection failed".

Host: localhost
(also tried 127.0.0.1 and /var/www/memcached.sock)
Port: 11211

When checking the memcached daemon status I noticed something that might be the cause, not sure:

● memcached.service - memcached daemon
Loaded: loaded (/lib/systemd/system/memcached.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2021-02-05 16:23:07 UTC; 17h ago
Docs: man:memcached(1)
Main PID: 661 (memcached)
Tasks: 10 (limit: 8939)
Memory: 38.2M
CGroup: /system.slice/memcached.service
└─661 /usr/bin/memcached -m 64 -p 11211 -u www-data -l 127.0.0.1 -P /var/run/memcached/memcached.pid -s /var/www/memcached.sock -a 0770 -p /tmp/memcached.pid

Feb 05 16:23:07 vm systemd[1]: Started memcached daemon.
Feb 05 16:23:07 vm systemd-memcached-wrapper[661]: Could not open the pid file /var/run/memcached/memcached.pid.tmp for writing: Permission denied


Running: ss -lptun | grep 11211
No result, just a new line, so no response at all.

Running: telnet localhost 11211
vm:~# telnet localhost 11211
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused

Please let me know if I am missing something or how I can fix this.

Thank you in advance!
 

Cold-Egg

Administrator
#2
That's normal, because you cannot connect to a Unix socket with an IP/port. Try
Code:
nc -U /var/www/memcached.sock
stats
If it outputs the stats without any error, then the Memcached service is good, and you might want to restart the cache plugin to see if it helps.
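If you prefer scripting that check, the same handshake can be done over the UNIX socket from Python — a minimal sketch, assuming the socket path used by this deployment (/var/www/memcached.sock); the parsing helper is kept separate so it works on any stats dump:

```python
import socket

def parse_stats(raw: str) -> dict:
    """Parse memcached 'stats' output: 'STAT <name> <value>' lines, terminated by 'END'."""
    stats = {}
    for line in raw.splitlines():
        if line.startswith("STAT "):
            _, name, value = line.split(" ", 2)
            stats[name] = value
    return stats

def memcached_stats(sock_path: str = "/var/www/memcached.sock") -> dict:
    """Connect to memcached over a UNIX socket, send 'stats', and return the parsed result."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(b"stats\r\n")
    raw = b""
    while not raw.endswith(b"END\r\n"):  # the stats dump always ends with END
        raw += s.recv(4096)
    s.close()
    return parse_stats(raw.decode())
```

If `memcached_stats()` raises ConnectionRefusedError or FileNotFoundError, the daemon is not listening on that socket, which matches the symptom in this thread.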
 

jcm

New Member
#3
Hi Cold-Egg,

Thank you for the response.

I have disabled/enabled the plugin with the host set as /var/www/memcached.sock and it is still showing that the connection failed.

The first part of the output shows rejected connections:

STAT pid 586
STAT uptime 80150
STAT time 1612771015
STAT version 1.5.22
STAT libevent 2.1.11-stable
STAT pointer_size 64
STAT rusage_user 9.827101
STAT rusage_system 6.205027
STAT max_connections 1024
STAT curr_connections 45
STAT total_connections 47
STAT rejected_connections 46
STAT connection_structures 46
STAT reserved_fds 20
...

I have installed from the GCP Marketplace (https://console.cloud.google.com/marketplace/product/gc-image-pub/openlitespeed-wordpress) and made no other modifications. Am I missing some permissions, as it is showing a rejected_connections count one lower than the total with every run?

service memcached start = it runs
service memcached status = error in the results as mentioned: "Could not open the pid file /var/run/memcached/memcached.pid.tmp for writing: Permission denied"

What else can I check/fix to get it to work? Thank you.
 

jcm

New Member
#4
It's finally showing connected!

Just one question: in the stats it still shows a rejected connections count one lower than the total, is that normal?

STAT pid 586
STAT uptime 83037
STAT time 1612773902
STAT version 1.5.22
STAT libevent 2.1.11-stable
STAT pointer_size 64
STAT rusage_user 11.952845
STAT rusage_system 8.218715
STAT max_connections 1024
STAT curr_connections 8
STAT total_connections 130
STAT rejected_connections 129
STAT connection_structures 120
 
#5
The GCloud deployment has memcached set up on a UNIX socket. In order for it to work with the LSCache plugin you have to use localhost as the specified directory and 0 as the port (the idea of the UNIX socket is to avoid routing, thus ports don't matter).

Regarding the PID file error: it is caused by weird Ubuntu/memcached permission issues in the specified PID folder. Even with root access I was unable to change it, chown-ing the folder to any user/group or access level; it just kept reverting the permissions of that directory. A workaround is to remove the -P line in the memcached.conf file (removing the line had no effect on a test server I used).
The bug has been around since 2013: Forum post describing the issue a bit

Locate memcached.conf
default path should be /etc/memcached.conf
the .conf file should have the following:

-u www-data (should be the same user as lsphp)
-s /var/www/memcached.sock (this is the UNIX socket)
-a 0770 (this is the permission level)
-p /tmp/memcached.pid (this is the temp PID location)
-P /var/run/memcached/memcached.pid

The last line is the problematic bit you see in the memcached status. It basically tells memcached where to put the PID file, but as described in the link above, it messes up permissions and results in a failure. The PID file was already created anyway, so either delete this line to remove the error message or just ignore it, as memcached should work as intended regardless.
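If you would rather automate removing that line than edit the file by hand, here is a rough sketch (just the workaround above in script form, not an official fix) that comments out any -P entry so the rest of memcached.conf is untouched:

```python
def comment_out_pidfile(conf_text: str) -> str:
    """Comment out any '-P <path>' line in a memcached.conf, leaving everything else as-is."""
    out = []
    for line in conf_text.splitlines():
        if line.strip().startswith("-P "):
            out.append("# " + line)  # keep the line for reference, just disabled
        else:
            out.append(line)  # lowercase -p (port / temp pid) lines are untouched
    return "\n".join(out) + "\n"

# Typical use (requires root), then restart memcached:
#   text = open("/etc/memcached.conf").read()
#   open("/etc/memcached.conf", "w").write(comment_out_pidfile(text))
```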

Bonus for memcached config:
While we are at it, a great way of optimizing memcached is to use igbinary serializer, it will greatly help reduce the memory footprint of memcached and will work wonders on small servers coupled with the UNIX socket.
In order to do that, locate the memcached.ini using the following command:
Code:
php -i | grep memcached
Open the .ini and replace the line SERIALIZER_PHP with SERIALIZER_IGBINARY.
Please note that you need the igbinary PECL extension and memcached built with igbinary support (add --enable-memcached-igbinary during memcached installation); both are done by default for you in the GCloud deployment of OLS.
After doing so you have to flush the old cache:
Code:
nc -U /var/www/memcached.sock
flush_all
quit
or
Code:
echo "flush_all" | nc -U /var/www/memcached.sock
How to read the stats section from the memcached command @Cold-Egg provided above:

Important bits:
limit_maxbytes = max cache size in bytes
bytes = current utilization (if "bytes" is close to limit_maxbytes, increase the memory/cache size)
evictions = number of items removed from the cache before they had expired, usually because there is no more space for new data (if the evictions stat increases, it is time to increase the cache size; ideally evictions should be 0)
delete_misses = data that could not be found during a delete operation, probably because of evictions or other factors.
Source
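As a worked example of acting on those numbers, the small helper below flags when it is time to grow the cache; the 90% threshold is an arbitrary illustration of mine, not a memcached recommendation:

```python
def cache_pressure(stats: dict) -> dict:
    """Summarize memory pressure from a memcached stats dict.

    Values are strings, exactly as the stats protocol returns them.
    """
    used = int(stats["bytes"])
    limit = int(stats["limit_maxbytes"])
    evictions = int(stats["evictions"])
    utilization = used / limit
    return {
        "utilization": round(utilization, 3),
        # near the limit, or already evicting unexpired items:
        # grow the cache (-m in memcached.conf)
        "grow_cache": utilization > 0.9 or evictions > 0,
    }
```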

Complete stats descriptions from memcached: Source
Code:
|-----------------------+---------+-------------------------------------------|
| Name                  | Type    | Meaning                                   |
|-----------------------+---------+-------------------------------------------|
| pid                   | 32u     | Process id of this server process         |
| uptime                | 32u     | Number of secs since the server started   |
| time                  | 32u     | current UNIX time according to the server |
| version               | string  | Version string of this server             |
| pointer_size          | 32      | Default size of pointers on the host OS   |
|                       |         | (generally 32 or 64)                      |
| rusage_user           | 32u.32u | Accumulated user time for this process    |
|                       |         | (seconds:microseconds)                    |
| rusage_system         | 32u.32u | Accumulated system time for this process  |
|                       |         | (seconds:microseconds)                    |
| curr_items            | 64u     | Current number of items stored            |
| total_items           | 64u     | Total number of items stored since        |
|                       |         | the server started                        |
| bytes                 | 64u     | Current number of bytes used              |
|                       |         | to store items                            |
| max_connections       | 32u     | Max number of simultaneous connections    |
| curr_connections      | 32u     | Number of open connections                |
| total_connections     | 32u     | Total number of connections opened since  |
|                       |         | the server started running                |
| rejected_connections  | 64u     | Conns rejected in maxconns_fast mode      |
| connection_structures | 32u     | Number of connection structures allocated |
|                       |         | by the server                             |
| response_obj_oom      | 64u     | Connections closed by lack of memory      |
| response_obj_count    | 64u     | Total response objects in use             |
| response_obj_bytes    | 64u     | Total bytes used for resp. objects. is a  |
|                       |         | subset of bytes from read_buf_bytes.      |
| read_buf_count        | 64u     | Total read/resp buffers allocated         |
| read_buf_bytes        | 64u     | Total read/resp buffer bytes allocated    |
| read_buf_bytes_free   | 64u     | Total read/resp buffer bytes cached       |
| read_buf_oom          | 64u     | Connections closed by lack of memory      |
| reserved_fds          | 32u     | Number of misc fds used internally        |
| cmd_get               | 64u     | Cumulative number of retrieval reqs       |
| cmd_set               | 64u     | Cumulative number of storage reqs         |
| cmd_flush             | 64u     | Cumulative number of flush reqs           |
| cmd_touch             | 64u     | Cumulative number of touch reqs           |
| get_hits              | 64u     | Number of keys that have been requested   |
|                       |         | and found present                         |
| get_misses            | 64u     | Number of items that have been requested  |
|                       |         | and not found                             |
| get_expired           | 64u     | Number of items that have been requested  |
|                       |         | but had already expired.                  |
| get_flushed           | 64u     | Number of items that have been requested  |
|                       |         | but have been flushed via flush_all       |
| delete_misses         | 64u     | Number of deletions reqs for missing keys |
| delete_hits           | 64u     | Number of deletion reqs resulting in      |
|                       |         | an item being removed.                    |
| incr_misses           | 64u     | Number of incr reqs against missing keys. |
| incr_hits             | 64u     | Number of successful incr reqs.           |
| decr_misses           | 64u     | Number of decr reqs against missing keys. |
| decr_hits             | 64u     | Number of successful decr reqs.           |
| cas_misses            | 64u     | Number of CAS reqs against missing keys.  |
| cas_hits              | 64u     | Number of successful CAS reqs.            |
| cas_badval            | 64u     | Number of CAS reqs for which a key was    |
|                       |         | found, but the CAS value did not match.   |
| touch_hits            | 64u     | Number of keys that have been touched     |
|                       |         | with a new expiration time                |
| touch_misses          | 64u     | Number of items that have been touched    |
|                       |         | and not found                             |
| auth_cmds             | 64u     | Number of authentication commands         |
|                       |         | handled, success or failure.              |
| auth_errors           | 64u     | Number of failed authentications.         |
| idle_kicks            | 64u     | Number of connections closed due to       |
|                       |         | reaching their idle timeout.              |
| evictions             | 64u     | Number of valid items removed from cache  |
|                       |         | to free memory for new items              |
| reclaimed             | 64u     | Number of times an entry was stored using |
|                       |         | memory from an expired entry              |
| bytes_read            | 64u     | Total number of bytes read by this server |
|                       |         | from network                              |
| bytes_written         | 64u     | Total number of bytes sent by this server |
|                       |         | to network                                |
| limit_maxbytes        | size_t  | Number of bytes this server is allowed to |
|                       |         | use for storage.                          |
| accepting_conns       | bool    | Whether or not server is accepting conns  |
| listen_disabled_num   | 64u     | Number of times server has stopped        |
|                       |         | accepting new connections (maxconns).     |
| time_in_listen_disabled_us                                                  |
|                       | 64u     | Number of microseconds in maxconns.       |
| threads               | 32u     | Number of worker threads requested.       |
|                       |         | (see doc/threads.txt)                     |
| conn_yields           | 64u     | Number of times any connection yielded to |
|                       |         | another due to hitting the -R limit.      |
| hash_power_level      | 32u     | Current size multiplier for hash table    |
| hash_bytes            | 64u     | Bytes currently used by hash tables       |
| hash_is_expanding     | bool    | Indicates if the hash table is being      |
|                       |         | grown to a new size                       |
| expired_unfetched     | 64u     | Items pulled from LRU that were never     |
|                       |         | touched by get/incr/append/etc before     |
|                       |         | expiring                                  |
| evicted_unfetched     | 64u     | Items evicted from LRU that were never    |
|                       |         | touched by get/incr/append/etc.           |
| evicted_active        | 64u     | Items evicted from LRU that had been hit  |
|                       |         | recently but did not jump to top of LRU   |
| slab_reassign_running | bool    | If a slab page is being moved             |
| slabs_moved           | 64u     | Total slab pages moved                    |
| crawler_reclaimed     | 64u     | Total items freed by LRU Crawler          |
| crawler_items_checked | 64u     | Total items examined by LRU Crawler       |
| lrutail_reflocked     | 64u     | Times LRU tail was found with active ref. |
|                       |         | Items can be evicted to avoid OOM errors. |
| moves_to_cold         | 64u     | Items moved from HOT/WARM to COLD LRU's   |
| moves_to_warm         | 64u     | Items moved from COLD to WARM LRU         |
| moves_within_lru      | 64u     | Items reshuffled within HOT or WARM LRU's |
| direct_reclaims       | 64u     | Times worker threads had to directly      |
|                       |         | reclaim or evict items.                   |
| lru_crawler_starts    | 64u     | Times an LRU crawler was started          |
| lru_maintainer_juggles                                                      |
|                       | 64u     | Number of times the LRU bg thread woke up |
| slab_global_page_pool | 32u     | Slab pages returned to global pool for    |
|                       |         | reassignment to other slab classes.       |
| slab_reassign_rescues | 64u     | Items rescued from eviction in page move  |
| slab_reassign_evictions_nomem                                               |
|                       | 64u     | Valid items evicted during a page move    |
|                       |         | (due to no free memory in slab)           |
| slab_reassign_chunk_rescues                                                 |
|                       | 64u     | Individual sections of an item rescued    |
|                       |         | during a page move.                       |
| slab_reassign_inline_reclaim                                                |
|                       | 64u     | Internal stat counter for when the page   |
|                       |         | mover clears memory from the chunk        |
|                       |         | freelist when it wasn't expecting to.     |
| slab_reassign_busy_items                                                    |
|                       | 64u     | Items busy during page move, requiring a  |
|                       |         | retry before page can be moved.           |
| slab_reassign_busy_deletes                                                  |
|                       | 64u     | Items busy during page move, requiring    |
|                       |         | deletion before page can be moved.        |
| log_worker_dropped    | 64u     | Logs a worker never wrote due to full buf |
| log_worker_written    | 64u     | Logs written by a worker, to be picked up |
| log_watcher_skipped   | 64u     | Logs not sent to slow watchers.           |
| log_watcher_sent      | 64u     | Logs written to watchers.                 |
| unexpected_napi_ids   | 64u     | Number of times an unexpected napi id is  |
|                       |         | received. See doc/napi_ids.txt            |
| round_robin_fallback  | 64u     | Number of times napi id of 0 is received  |
|                       |         | resulting in fallback to round robin      |
|                       |         | thread selection. See doc/napi_ids.txt    |
|-----------------------+---------+-------------------------------------------|
 

jcm

New Member
#6
Thank you for all the info a lot to digest.

Just a note on the host & port: what you have said is not true, as the port needs to be 11211 and the host /var/www/memcached.sock, otherwise it won't work.

See this doc for host/port: https://docs.litespeedtech.com/cloud/images/wordpress/
 
#7
Thank you for pointing out my mistake! I mixed up the order of the two possible configurations. What you stated above is false; the correct setup should be as follows:

Memcached over TCP/IP:
Host: localhost
Port: 11211

Memcached over UNIX socket:
Host: /path/to/memcached.sock (in your case: /var/www/memcached.sock)
Port: 0

You can check that in the memcached.conf file
-p 11211 (this is the line for port)
-l 127.0.0.1 (this is the line for local IP)
If you see those uncommented, then you are using memcached in TCP/IP mode, hence the need for an IP and port!

-s /var/www/memcached.sock (this is the UNIX socket)
-a 0770 (this is the permission level)
-p /tmp/memcached.pid (this is the temp PID location)
-P /var/run/memcached/memcached.pid
If you see those uncommented, you are using a UNIX socket and there is no IP or port, thus the need to select the path to memcached.sock as the host and 0 as the port!
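To tie the two configurations together, here is a quick sketch that infers the mode from which flags are uncommented in a Debian-style memcached.conf like the one above (the precedence and defaults are my assumptions for illustration, not official behavior):

```python
def detect_mode(conf_text: str):
    """Return ('unix', socket_path) if -s is set, else ('tcp', host, port) from -l/-p."""
    host, port, sock = "127.0.0.1", 11211, None
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith("#"):
            continue  # commented-out flags don't count
        if line.startswith("-s "):
            sock = line.split(None, 1)[1]
        elif line.startswith("-l "):
            host = line.split(None, 1)[1]
        elif line.startswith("-p ") and not line.split(None, 1)[1].endswith(".pid"):
            port = int(line.split(None, 1)[1])  # skip the odd '-p /tmp/memcached.pid' case
    # if a UNIX socket is configured, connect via its path (port 0 in LSCache)
    return ("unix", sock) if sock else ("tcp", host, port)
```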

Furthermore, executing the "nc -U /var/www/memcached.sock" command is only possible while using the UNIX socket (the -U switch is specifically for that); in TCP/IP mode you connect to memcached with telnet instead, as it requires an IP and port!

On another note, do not rely solely on the doc you referenced, as it is just a brief guide to how things are set up in the automatic deployment and thus lacks any advanced configuration.
For quick guides, you can read the OLS knowledge base resources located here, for any advanced documentation you should look at the LiteSpeed docs here.

Here is an example of a working config in LSCache:
LSCache memcached config.JPG
 

jcm

New Member
#8
Openlitespeed install on GCP/AWS:

Host: /var/www/memcached.sock
Port: 11211

That is literally all that works, so it is not false, and the plugin will even give you the two port options; look at your screenshot at the bottom. Port 0 = not connected for both installs, so it has to be one of those ports with the correct host, and not localhost.
 
#9
I am sorry to say that you are indeed incorrect! I literally provided the solution with a screenshot of an actually working LSCache config.
Your admin might be cached, so it just shows an expired value in the LSCache plugin, but I can assure you port 11211 does not and will never work with memcached on a UNIX socket, period! (There is no port to listen on!) The same is true for Redis if you want it configured on a UNIX socket.

Please consider reading the memcached documentation and googling how UNIX sockets and TCP/IP work and differ from each other.

@Cold-Egg, please provide some clarity whenever possible!
 

jcm

New Member
#11
when you use memcached as unix socket mode

host ---> /path/to/it.sock

port ---> 0
Hi Lsqtwrk,

I am a newbie in every sense of the word, especially when it comes to memcache. I've worked with Redis extensively, but memcache is a new thing.

So if you are saying path to it.sock and port 0, why does the documentation then refer to a path and port 11211? Is this unique to the cloud images, so the image was set up for that port on AWS/GCP, or am I missing something? See the LiteSpeed documentation: https://docs.litespeedtech.com/cloud/images/wordpress/
 
#12
The link you provided is a mere guide to the OLS deployment for GCP. It assumes you already know what you are doing (setting up a Linux server on your own); it is there to provide info on what was already pre-configured for you.
Furthermore, the entire link you are referencing does not mention port 11211 even once; in fact, it explicitly states it is using a UNIX socket for better performance and shows how to switch from memcached to Redis in the LSCache plugin. The knowledge of what a UNIX socket is and how it differs from TCP/IP falls under your understanding of how networking works.

I am not sure you even read the contents of your own link, but here are some screenshots that may clear up your confusion:

The doc clearly states that the object cache (so both memcached and Redis) is set to a UNIX socket:
OpenLiteSpeed GCP deployment auto set-up.JPG
They further provide info on how to switch from memcached to Redis in the LSCache plugin with a screenshot example; there is no mention of port 11211 in the entire document, and even the screenshot reflects this, as the port field is blank.
How to switch from memcached to redis in OpenLiteSpeed GCP deployment using LSCache.JPG
Here is another screenshot of the doc, stating where to look for the .conf files and how to change the user if you encounter any permission issues.
How to fix object cache permissions in OpenLiteSpeed.JPG

I hope this clears your confusion and you get things working properly!

For actual documentation of OLS and LSE, please look at the resources I already linked in my comment above:
For quick guides, you can read the OLS knowledge base resources located here, for any advanced documentation you should look at the LiteSpeed docs here.
 

lsqtwrk

Administrator
#13
Please let me know where exactly it gives 11211 with a socket? I will ask the doc team to update the inaccurate doc.
 

jcm

New Member
#14
Hi lsqtwrk,

That is the problem, I guess: there is no reference to the port in the documents, but the plugin itself gives the default ports as per the screenshot below: "Default port for Memcached is 11211. Default port for Redis is 6379." So it might be more useful to have a reference saying 0 if it needs to be 0, rather than not saying anything at all about the port.
 

Attachments
