6 comments

  • randomtoast 2 hours ago
    0.2 tok/s is fine for experimentation, but it is not interactive in any meaningful sense. For many use cases, a well-quantized 8B or 13B that stays resident will simply deliver a better latency-quality tradeoff.
    • xaskasdf 1 minute ago
      Yeah, actually I wanted to see if this was possible at all. I managed to get around 3000 tokens/s on a PS2 with classic transformers, since the Emotion Engine is capable of 32-bit addressing but only has something like 32 MB of RAM. That raised the question of why it was so fast when I couldn't get that speed even with small models, and the answer is that the instructions go straight from memory to the GPU. That's the main difference from what a regular computer does during inference: it has to request them through the CPU every time. As I mentioned too, on professional cards you can avoid these problems naturally, since they have instructions precisely for this, but sadly I don't have 30k bucks to spare on a GPU :(
    • Wuzado 1 hour ago
      I can imagine a couple of scenarios in which a high-quality, large model would be much preferred over lower-latency models, primarily when quality is what matters.
    • tyfon 1 hour ago
      I didn't really understand the performance table until I saw the top ones were 8B models.

      But 5 seconds/token is quite slow, yeah. I guess this is for low-RAM machines? I'm pretty sure my 5950X with 128 GB RAM can run this faster on the CPU, with some layers / prefill on the 3060 GPU I have.

      I also see that they claim the process is compute-bound at 2 seconds/token, but that doesn't seem correct with a 3090?

      • tgrowazay 1 hour ago
        LLM speed is roughly <memory_bandwidth> / <model_size> tok/s.

        DDR4 tops out at about 27 GB/s

        DDR5 can do around 40 GB/s

        So for a 70B model at 8-bit quant, you will get around 0.3-0.5 tokens per second using RAM alone.
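
        As a quick back-of-envelope check (a sketch in Python; the bandwidth figures above are ballpark peaks, not measurements):

            def tok_per_s(bandwidth_gb_s, model_size_gb):
                # tokens/s is roughly memory bandwidth / bytes streamed per token (~ model size)
                return bandwidth_gb_s / model_size_gb

            print(tok_per_s(27, 70))  # DDR4-ish: ~0.39 tok/s for a 70B model at 8-bit
            print(tok_per_s(40, 70))  # DDR5-ish: ~0.57 tok/s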

        • uf00lme 25 minutes ago
          Channels matter a lot; quad-channel DDR4 is going to beat dual-channel DDR5 most of the time.
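
          Rough peak numbers, assuming DDR4-3200 vs DDR5-4800 (a sketch, not measurements):

              ddr4_3200 = 25.6  # GB/s per channel
              ddr5_4800 = 38.4  # GB/s per channel

              print(4 * ddr4_3200)  # quad-channel DDR4: ~102 GB/s
              print(2 * ddr5_4800)  # dual-channel DDR5: ~77 GB/s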
        • someguy2026 54 minutes ago
          DRAM speed is one thing, but you should also account for the data rate of the PCIe bus (and/or VRAM speed). But yes, holding it "lukewarm" in DRAM rather than on NVMe storage is obviously faster.
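
          Ballpark peak figures per tier for a 70B model at 8-bit (assuming PCIe 4.0 and a 3090-class card; a sketch, not measurements):

              model_gb = 70
              for name, bw in [("NVMe Gen4", 7), ("PCIe 4.0 x16", 32),
                               ("dual-ch DDR4", 50), ("RTX 3090 VRAM", 936)]:
                  print(f"{name:14s} {bw:4d} GB/s -> ~{bw / model_gb:.2f} tok/s")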
        • zozbot234 44 minutes ago
          Should be active param size, not model size.
        • vlovich123 59 minutes ago
          Faster than the 0.2 tok/s this approach manages.
  • rl3 1 hour ago
    Nice. I've been looking at doing something similar, more on the order of running a 1T model with less than half the available VRAM.

    One workup indicated it was theoretically possible to modify a piece of SGLang's routing layer to support JIT predict-ahead expert swaps from Gen5 NVMe storage straight into GPU memory.

    I'm hoping that proves true. The setup relies on NVIDIA Dynamo, so NIXL primitives are available to support that.

    Curious if anyone's tried this already.
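
    Roughly the shape I have in mind, as a hypothetical sketch in Python (predict_experts, nvme_read_into and the layer interface are stand-ins, not SGLang, Dynamo or NIXL APIs):

        from concurrent.futures import ThreadPoolExecutor

        LOOKAHEAD = 2  # how many layers ahead to speculate on routing

        def forward(layers, hidden, cache, pool):  # pool: e.g. ThreadPoolExecutor(max_workers=4)
            pending = {}
            for i, layer in enumerate(layers):
                # Speculatively start NVMe -> GPU copies for experts a few layers ahead.
                j = i + LOOKAHEAD
                if j < len(layers):
                    for e in predict_experts(layers[j], hidden):                # stand-in
                        if e not in cache and e not in pending:
                            pending[e] = pool.submit(nvme_read_into, e, cache)  # stand-in
                # Block only on the experts this layer actually routes to.
                for e in layer.route(hidden):                                   # stand-in
                    if e in pending:
                        pending.pop(e).result()
                hidden = layer(hidden, cache)
            return hidden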

  • exabrial 26 minutes ago
    I feel like we need an entirely new type of silicon for LLMs. Something completely focused on bandwidth and storage, probably at the sacrifice of raw computation power.
  • Wuzado 1 hour ago
    I wonder - could this be used for multi-tier MoE? E.g. active + most-used experts in VRAM, often-used in RAM, and less-used on NVMe?
    • rao-v 1 hour ago
      Yeah, I’ve often wondered why folks aren’t training two-tier MoEs for VRAM + RAM. We already have designs for shared experts, so it can’t be hard to implement a router that allocates 10x or 100x as often to “core” experts vs the “nice to have” experts. I suppose balancing during training is tricky, but some sort of custom loss on the router layers should work.

      I’ve also wondered why the routers aren’t trained to be serially consistent so you can predict layers to swap into VRAM a few layers ahead to maximize available bandwidth.
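
      Roughly what I have in mind for the first part, as a sketch (a made-up module, nothing from a real MoE codebase; the core_ratio target is arbitrary):

          import torch.nn as nn
          import torch.nn.functional as F

          class TieredRouter(nn.Module):
              # Experts [0, n_core) are meant to stay resident in VRAM; the rest live in RAM.
              def __init__(self, d_model, n_core, n_cold, core_ratio=0.9):
                  super().__init__()
                  self.gate = nn.Linear(d_model, n_core + n_cold, bias=False)
                  self.n_core = n_core
                  self.core_ratio = core_ratio

              def forward(self, x):  # x: [tokens, d_model]
                  probs = F.softmax(self.gate(x), dim=-1)
                  top1 = probs.argmax(dim=-1)
                  core_mass = probs[:, :self.n_core].sum(dim=-1).mean()
                  # Penalise the router when less than core_ratio of the probability
                  # mass lands on the VRAM-resident experts.
                  aux_loss = F.relu(self.core_ratio - core_mass)
                  return top1, aux_loss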

      • reitzensteinm 46 minutes ago
        I think part of the issue is that in production deployments, you're batching high enough that you'll be paging in those long tail experts constantly.

        Unless you're handling that in some kind of fancy way, you'll be holding up the batch while waiting for host memory, which will kill your throughput.

        It makes much more sense for non-batched local inference, especially if you can keep the MoE routing stable like you say, but most folks aren't optimising for that.

        • zozbot234 40 minutes ago
          Ideally, you should rearrange batches so that inference steps that rely on the same experts get batched together, then inferences that would "hold up" a batch simply wait for that one "long tail" expert to be loaded, whereupon they can progress. This might require checkpointing partial inference steps more often, but that ought to be doable.
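
          Very roughly, as a sketch (the gpu.is_resident / gpu.load_async calls are hypothetical, not any particular framework):

              from collections import defaultdict

              def schedule(pending, gpu):
                  # pending: list of (partial_state, expert_id) inference steps.
                  # Group steps by the expert they need so a cold expert only
                  # stalls its own group, not the whole batch.
                  by_expert = defaultdict(list)
                  for state, expert in pending:
                      by_expert[expert].append(state)

                  ready, waiting = [], []
                  for expert, states in by_expert.items():
                      if gpu.is_resident(expert):      # hypothetical residency check
                          ready.append((expert, states))
                      else:
                          gpu.load_async(expert)       # hypothetical DRAM/NVMe fetch
                          waiting.append((expert, states))
                  return ready, waiting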
          • reitzensteinm 26 minutes ago
            I think this is doable for very long tail experts that get swapped in for specialised topics - say, orbital mechanics.

            But for experts that light up at, say, 1% frequency per batch, you're doing an awful lot of transfers from DRAM which you amortize over a single token, instead of reads from HBM which you amortize over 32 tokens.
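
            Rough numbers, assuming a ~100 MB expert, a 32 GB/s PCIe 4.0 x16 link, and ~936 GB/s card memory (all assumed figures, not measurements):

                expert_gb = 0.1        # assumed size of one expert's weights
                pcie_bw   = 32         # GB/s, DRAM -> VRAM over PCIe 4.0 x16
                vram_bw   = 936        # GB/s, on-card memory
                batch     = 32

                paged    = expert_gb / pcie_bw          # ~3.1 ms, paid by a single token
                resident = expert_gb / vram_bw / batch  # ~3 us per token, shared by the batch
                print(paged / resident)                 # roughly 1000x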

      • svnt 50 minutes ago
        Maybe I am misunderstanding something but:

        1) This is basically the intention of several recent MoE models: keep particular generally useful experts hot in VRAM.

        2) Unless you can swap layers in faster than you consume them, there is no point to predicting layers (what does this even really mean? did you mean predicting experts?).

        It seems at the moment the best you can do is keep experts and layers more likely to be used for a given query in VRAM and offload the rest, but this is workload-dependent.

      • hedgehog 51 minutes ago
        I don't have links handy but there is active research in this area.
  • throwaway2027 1 hour ago
    Didn't DirectX add an API (DirectStorage) for loading assets directly to GPU memory? Would that work?
  • jauntywundrkind 1 hour ago
    Could be neat to see what giving the 8B something like 6 GB of RAM instead of 10 GB does. Something in between, where you still need NVMe, but not like the 3x ratio of the 70B model on 23 GB.

    Nice work. PCIe P2P (GPUDirect (tm)) is such great stuff. Cool to see!