4 comments

  • dist-epoch 2 minutes ago
    Both Canonical and Microsoft recommend enabling a swap file for Ubuntu cloud images, even if you allocate plenty of RAM to the VM.
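
    For context, the usual way to add one looks something like this (the /swapfile path and 4 GB size are just examples):

      # create and enable a 4 GB swap file
      sudo fallocate -l 4G /swapfile
      sudo chmod 600 /swapfile
      sudo mkswap /swapfile
      sudo swapon /swapfile
      # keep it across reboots
      echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab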

    Any thoughts on that?

  • ChocolateGod 15 minutes ago
    I'd like to see Linux gain support for actual memory compression, without the need to go through zram, similar to macOS/Windows.
  • FooBarWidget 1 hour ago
    One pet peeve I have with virtual memory management on Linux is that, as memory usage approaches 100%, the kernel starts evicting executable pages because technically they're read-only and can be reloaded from disk. Thus, the entire system grinds to a halt in a behavior that looks like swapping, because every program that wants to execute instructions has to load them from disk again, only to have those pages evicted again when context switching to another program. This behavior is especially counterintuitive because disabling swap does not prevent it. There are no convenient settings for administrators to prevent this problem.

    It's good that we have better swapping now, but I wish they'd address the above. I'd rather have programs getting OOMKilled or throwing errors before the system grinds to a halt, where I can't even ssh in and run 'ps'.
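
    One partial mitigation I've found (a sketch, assuming systemd on a cgroup v2 system; the unit name and size are examples) is to reserve memory for the tools you need to recover with, so their pages are protected from reclaim:

      # protect sshd's working set from reclaim under memory pressure
      # (MemoryMin= requires the unified cgroup v2 hierarchy)
      sudo systemctl set-property ssh.service MemoryMin=64M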

    • man8alexd 50 minutes ago
      Actively used executable pages are explicitly excluded from reclaim. And if they are not used, why should they stay in memory when the memory is constrained? It is not the first time I have heard complaints about executable pages, but it seems to be some kind of common misunderstanding.

      https://news.ycombinator.com/item?id=45369516

    • robinsonb5 1 hour ago
      Indeed. I think what's really needed is some way to mark pages as "required for interactivity" so that nothing related to the user interface gets paged out, ever. That, I think, would go at least some way towards restoring the feeling of "having a computer's full attention" that we had thirty years ago.
      • akdev1l 37 minutes ago
        Seems like applications can call mlockall() to do this.
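
        From the shell you can check whether a process has locked memory this way; a non-zero VmLck means it has:

          grep VmLck /proc/<PID>/status   # locked (unswappable) memory of that process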
    • 112233 21 minutes ago
      Is there a way to make the Linux kernel schedule in a "batch-friendly" way? Say I do "make -j" and get 200 gcc processes doing a jobserver LTO link with 2 GB RSS each. In my head, the optimal way through such a mess is to get as many processes as can fit into RAM without swapping, run them to completion, and schedule additional processes as resources become available. A depth-first, "infinite latency" mode.

      Is there any combination of cgroups, /proc flags, and other forbidden knobs to get such behaviour?
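
      The closest I've gotten (a sketch, assuming systemd and cgroup v2; the limits are just examples) is to cap parallelism by load average and fence the whole build into a memory-limited cgroup, though it's not truly depth-first:

        # don't start new jobs above a load threshold, and hard-cap memory so
        # the build OOMs inside its own cgroup instead of thrashing the box
        systemd-run --scope -p MemoryMax=48G -p MemorySwapMax=0 \
            make -j"$(nproc)" -l"$(nproc)"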

    • nolist_policy 52 minutes ago
      Linux swap has been fixed on Chromebooks for years thanks to MGLRU. It's been upstream since Linux 6.1, and you can try it with

        echo y >/sys/kernel/mm/lru_gen/enabled
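
      To verify it took effect (the file reads back a bitmask of the enabled components):

        cat /sys/kernel/mm/lru_gen/enabled   # non-zero means MGLRU is active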
    • worldsavior 39 minutes ago
      Program instructions are small and thus fast to load, so there's no need to worry about that too much. I'd look at other things first.
      • twic 37 minutes ago
        Have you measured this, or is this just an opinion?
        • man8alexd 29 minutes ago
          Look into /proc/<PID>/status and /proc/<PID>/smaps
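
          For example, a rough way to total the resident executable (r-xp) mappings of a process:

            # sum the Rss of all r-x mappings in smaps, in kB
            awk '$2 ~ /^r-x/ { exe = 1 }
                 /^Rss:/     { if (exe) sum += $2; exe = 0 }
                 END         { print sum " kB" }' /proc/<PID>/smaps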
  • iberator 1 hour ago
    Another useless feature in the Linux kernel. Who uses swap space nowadays?! The last time I used swap on a Linux device was around the Pentium II era, but in reality closer to the 486DX era.
    • Titan2189 1 hour ago
      We use it in production. Workloads with unpredictable memory usage (32 MB to 4 GB per process), but we also want to start enough processes to saturate the CPU. Before we configured and enabled swap, we were either sitting at low CPU utilisation or hitting OOM.
    • ch_123 57 minutes ago
      I ran Linux without swap for some years on a laptop with a large-for-the-time amount of RAM (about 8 GB). It _mostly_ worked, but sudden spikes of memory usage would render the system unresponsive. Usually it would recover, but in some cases it required a power cycle.

      Similarly, on a server where you might expect most of the physical memory to get used, it ends up being very important for stability. Think of VM or container hosts in particular.

      • GCUMstlyHarmls 46 minutes ago
        I don't get why anti-swap sentiment is so prevalent in Linux discussions. Like, what does it hurt to stick 8-16-32 GB of extra "oh fuck" space on your drive?

        Either you never exhaust your system RAM, so it doesn't matter; you minimally exhaust it and swap during some peak load, but at least nothing goes down; or you exhaust it all and start having things get OOM'd, which feels bad to me.

        Am I out of touch? Surely it's the children who are wrong.

        • manuel_w 32 minutes ago
          The pro-swap stance has never made sense to me because it feels like a logical loop.

          There’s a common rule of thumb that says you should have swap space equal to some multiple of your RAM.

          For instance, if I have 8 GB of RAM, people recommend adding 8 GB of swap. But since I like having plenty of memory, I install 16 GB of RAM instead—and yet, people still tell me to use swap. Why? At that point, I already have the same total memory as those with 8 GB of RAM and 8 GB of swap combined.

          Then, if I upgrade to 24 GB of RAM, the advice doesn’t change—they still insist on enabling swap. I could install an absurd amount of RAM, and people would still tell me to set up swap space.

          It seems that for some, using swap has become dogma. I just don’t see the reasoning. Memory is limited either way; whether it’s RAM or RAM + swap, the total available space is what really matters. So why insist on swap for its own sake?

          • viraptor 0 minutes ago
            You're mashing together two groups. One claims having swap is good, actually. The other claims you need N times RAM for swap. They're not the same group.
          • ch_123 12 minutes ago
            You're implying that people are telling you to set up swap without any reason, when in fact there are good reasons - namely dealing with memory pressure. Maybe you could fit so much RAM into your computer that you never hit pressure - but why would you do that vs allocating a few GB of disk space for swap?

            Also, as has been pointed out by another commenter, 8GB of swap for a system with 8GB of physical memory is overkill.

            • tremon 5 minutes ago
              I'm also in the GP's camp; RAM is for volatile data, disk is for data persistence. The first "why would you do that" that needs to be addressed is why volatile data should be written to disk. And "it's just a few % of your disk" is not a sufficient answer to that question.
          • man8alexd 27 minutes ago
            This rule of thumb is outdated by two decades.

            The proper rule of thumb is to make the swap large enough to hold all inactive anonymous pages after the workload has stabilized, but not so large that it causes swap thrashing and a delayed OOM kill if a fast memory leak happens.
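
            You can get a first-order estimate from /proc/meminfo once the workload has warmed up:

              # rough upper bound on the swap this workload can usefully fill
              grep 'Inactive(anon)' /proc/meminfo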

            • tremon 12 minutes ago
              That's not useful as a rule of thumb, since you can't know the size of "all inactive anonymous pages" without doing extensive runtime analysis of the system under consideration. That's pretty much the opposite of what a rule of thumb is for.
        • ch_123 35 minutes ago
          I think it's some kind of misplaced desire to be "lightweight" and avoid allocating disk space that cannot be used for regular storage. My motivation way back when for wanting to avoid swap was concern about SSD wear, but that was solved a long time ago.
        • man8alexd 40 minutes ago
          8-16-32 GB of swap space without cgroup limits would get the system into swap thrashing and make it unresponsive.
      • solstice 50 minutes ago
        I had a similar experience with Kubuntu on an XPS 13 from 2016 with only 8 GB of RAM: the system would suddenly freeze so hard that a hard reboot was required. While looking for the cause, I noticed that the system had only 250 MB of swap space. After increasing that to 10 GB, there have been no further freezes so far.
    • wongarsu 48 minutes ago
      It's unloved on Linux because using Linux under memory pressure sucks. But that's not a good reason to abandon improvements, even more so with the direction RAM prices are headed.
      • man8alexd 35 minutes ago
        It sucks without proper cgroup limits because swap makes OOM slower to trigger. Either set the cgroup limits or make the swap small.
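
        A minimal sketch with raw cgroup v2 (assuming it's mounted at /sys/fs/cgroup with the memory controller enabled; the names and limits are examples):

          # fence a shell and its children behind memory and swap caps
          sudo mkdir /sys/fs/cgroup/batch
          echo 8G | sudo tee /sys/fs/cgroup/batch/memory.max
          echo 2G | sudo tee /sys/fs/cgroup/batch/memory.swap.max
          echo $$ | sudo tee /sys/fs/cgroup/batch/cgroup.procs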
        • ChocolateGod 13 minutes ago
          This requires additional setup from the user; the default setup should just "work".
    • SCdF 51 minutes ago
      You should still use swap. It's not "2x RAM" as advice anymore, and hasn't been for years: https://chrisdown.name/2018/01/02/in-defence-of-swap.html

      tl;dr: give it 4-8 GB and forget about it.
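
      And to sanity-check whether that's enough in practice:

        swapon --show   # configured swap and how much is in use
        vmstat 5        # watch the si/so columns for sustained swap traffic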

      • ch_123 27 minutes ago
        I've heard "square root of physical memory" as a heuristic, although in practice I use less than this with some of my larger systems.
        • man8alexd 21 minutes ago
          The proper rule of thumb is to make the swap large enough to hold all inactive anonymous pages after the workload has stabilized, but not so large that it causes swap thrashing and a delayed OOM kill if a fast memory leak happens.
    • sl-1 1 hour ago
      It is still useful for many workloads; I use it at work and on my own machines.