minimizing buffer size (i.e. page cache) for bulk copy/rsync -- Re: Swappiness in Buster

coderman coderman at protonmail.com
Wed Jul 8 08:22:05 PDT 2020


‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Wednesday, July 8, 2020 7:53 AM, Zenaan Harkness <zen at freedbms.net> wrote:

> Anyone here able to answer this annoying buffer issue on bulk copies?

[ paraphrasing: page cache gets swamped by bulk copy, causing heavy latency in interactive desktop applications... ]

use an LD_PRELOAD hack to force mlockall() on the apps you want to keep interactive:

https://stackoverflow.com/questions/37818335/mlock-a-program-from-a-wrapper

---


If you have the sources for the program, add a command-line option so that the program calls mlockall(MCL_CURRENT | MCL_FUTURE) at some point. That locks it in memory.
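For illustration, a minimal sketch of such an option (the flag name --lock-memory is just an example, not from the original answer):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Sketch: a hypothetical --lock-memory flag that pins the whole process
   (current and future allocations) into RAM via mlockall(). */
int main(int argc, char *argv[])
{
    int i;

    for (i = 1; i < argc; i++) {
        if (strcmp(argv[i], "--lock-memory") == 0) {
            if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
                perror("mlockall");
                return 1;
            }
            fprintf(stderr, "All memory locked.\n");
        }
    }

    /* ... rest of the program ... */
    return 0;
}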

If you want to control the address spaces the kernel loads the program into, you need to delve into kernel internals. Most likely, there is no reason to do so; only people with really funky hardware would need to.

If you don't have the sources, or don't want to recompile the program, then you can create a dynamic library that executes the mlockall() call when it is loaded, and inject it into the process via LD_PRELOAD.

Save the following as lockall.c:

#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <string.h>
#include <errno.h>

/* Write a string to standard error using the raw write(2) syscall,
   retrying on short writes and on EINTR. */
static void wrerr(const char *p)
{
    if (p) {
        const char *q = p + strlen(p);
        ssize_t     n;

        while (p < q) {
            n = write(STDERR_FILENO, p, (size_t)(q - p));
            if (n > 0)
                p += n;
            else
            if (n != -1 || errno != EINTR)
                return;
        }
    }
}

/* Runs automatically when the library is loaded (before main), thanks to
   the constructor attribute. Locks all current and future pages of the
   process into RAM, or exits if the lock cannot be obtained. */
static void init(void) __attribute__((constructor));
static void init(void)
{
    int saved_errno = errno;

    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
        const char *errmsg = strerror(errno);
        wrerr("Cannot lock all memory: ");
        wrerr(errmsg);
        wrerr(".\n");
        exit(127);
    } else
        wrerr("All memory locked.\n");

    /* Restore errno so the interposed program does not see a change. */
    errno = saved_errno;
}

Compile it to a dynamic library liblockall.so using

gcc -Wall -O2 -fPIC -shared lockall.c -Wl,-soname,liblockall.so -o liblockall.so

Install the library somewhere typical, for example

sudo install -o 0 -g 0 -m 0664 liblockall.so /usr/lib/

so you can run any binary, and lock it into memory, using

LD_PRELOAD=liblockall.so binary arguments..

If you install the library somewhere else (not listed in /etc/ld.so.conf), you'll need to specify path to the library, like

LD_PRELOAD=/usr/lib/liblockall.so binary arguments..

Typically, you'll see the message "Cannot lock all memory: Cannot allocate memory." printed by the interposed library when running commands as a normal user. (The superuser, or root, typically has no such limit.) This is because, for obvious reasons, most Linux distributions limit the amount of memory an unprivileged user can lock; this is the RLIMIT_MEMLOCK resource limit. Run ulimit -l to see the per-process locked-memory limit currently in effect for the current user.
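The same limit can also be queried programmatically via getrlimit(RLIMIT_MEMLOCK); a minimal sketch:

#include <stdio.h>
#include <sys/resource.h>

/* Print the soft and hard RLIMIT_MEMLOCK limits for this process, i.e.
   how many bytes an unprivileged process may lock with mlock()/mlockall(). */
int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_MEMLOCK, &rl) == -1) {
        perror("getrlimit");
        return 1;
    }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("soft limit: unlimited\n");
    else
        printf("soft limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);

    if (rl.rlim_max == RLIM_INFINITY)
        printf("hard limit: unlimited\n");
    else
        printf("hard limit: %llu bytes\n", (unsigned long long)rl.rlim_max);

    return 0;
}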

I suggest you set a suitable limit on how much memory the process can lock, by running e.g. the ulimit -l 16384 bash built-in before executing the binary (to set the limit to 16384*1024 bytes, or 16 MiB), especially if running as superuser (root). If the process leaks memory, instead of crashing your machine (because it locked all available memory), the process will die (from SIGSEGV) when it exceeds the limit. That is, you'd start your process using

ulimit -l 16384
LD_PRELOAD=/usr/lib/liblockall.so binary arguments..

if using Bash or dash shell.
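If you prefer a small wrapper binary over shell built-ins, here is a sketch of a launcher that lowers RLIMIT_MEMLOCK with setrlimit() and then exec()s the target with the library preloaded (the path and the 16 MiB value are just examples, not from the original answer):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

/* Hypothetical launcher: lower RLIMIT_MEMLOCK to 16 MiB, preload
   liblockall.so, then exec the command given on the command line.
   Roughly equivalent to "ulimit -l 16384; LD_PRELOAD=... binary args". */
int main(int argc, char *argv[])
{
    struct rlimit rl;

    if (argc < 2) {
        fprintf(stderr, "Usage: %s binary [arguments..]\n", argv[0]);
        return 1;
    }

    rl.rlim_cur = 16384 * 1024;   /* soft limit: 16 MiB */
    rl.rlim_max = 16384 * 1024;   /* hard limit: 16 MiB */
    if (setrlimit(RLIMIT_MEMLOCK, &rl) == -1)
        perror("setrlimit");      /* raising the limit unprivileged will fail */

    if (setenv("LD_PRELOAD", "/usr/lib/liblockall.so", 1) == -1) {
        perror("setenv");
        return 1;
    }

    execvp(argv[1], &argv[1]);
    perror("execvp");             /* only reached if exec fails */
    return 127;
}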

If running as a dedicated user, most distributions use the pam_limits.so PAM module to set the resource limits "automatically". The limits are listed either in the /etc/security/limits.conf file, or in a file in the /etc/security/limits.d/ subdirectory, using the format described in limits.conf(5); the memlock item specifies the amount of memory each process can lock, in units of 1024 bytes. So, if your service runs as user mydev, and you wish to allow the user to lock up to 16 megabytes = 16384*1024 bytes per process, then add the line "mydev - memlock 16384" to /etc/security/limits.conf or /etc/security/limits.d/mydev.conf, whichever your Linux distribution prefers/suggests.

Prior to PAM, shadow-utils were used to control the resource limits. The memlock resource limit is specified in units of 1024 bytes; a limit of 16 megabytes would be set using M16384. So, if using shadow-utils instead of PAM, adding the line "mydev M16384" (followed by whatever other limits you wish to specify) to /etc/limits should do the trick.

---end-cut---

best regards,

