Hacker News

I suspect his neural net library used in this could become quite useful and propagate far and wide if it were open-sourced: https://bellard.org/libnc/


For reference, this is what its License section says:

  The LibNC library is free to use as a binary shared library.
  Contact the author if access to its source code is required. 
https://bellard.org/libnc/libnc.html#License


Compression ratio/speed

  Program or model     size: bytes    ratio: bpb  speed: KB/s
  xz -9                24 865 244     1.99        1020
  LSTM (small)         20 500 039     1.64        41.7
  Transformer          18 126 936     1.45        1.79 (!)
  LSTM (large2)        16 791 077     1.34        2.38

Note that decompression is not faster than compression (unlike xz).


Okay. Stupid question. What in the world do the ratio numbers mean? I get that "bpb" means "byte-per-byte". Is it input bytes per output bytes? Or the other way around? And why do some of the ratios go below 1.0? e.g. NNCP v2 (Transformer) 0.914


bpb is bits per byte, e.g. 8/bpb gives the conventional compression ratio.
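A quick sketch of how the bpb column relates to the byte counts in the table above. The input corpus size here is an assumption: the thread never names it, but a 10^8-byte input (e.g. the enwik8 benchmark file) makes the arithmetic line up with every row.

```python
# Sanity-check the table's bpb figures.
# ASSUMPTION: the input is 10^8 bytes (enwik8-sized); the thread does not
# state this, but it is the only size consistent with the listed numbers.
ORIGINAL_BYTES = 10**8

compressed_sizes = {
    "xz -9": 24_865_244,
    "LSTM (small)": 20_500_039,
    "Transformer": 18_126_936,
    "LSTM (large2)": 16_791_077,
}

for name, compressed in compressed_sizes.items():
    bpb = compressed * 8 / ORIGINAL_BYTES  # bits of output per input byte
    ratio = 8 / bpb                        # conventional compression ratio
    print(f"{name:14s}  {bpb:.2f} bpb  ratio {ratio:.2f}x")
```

So "1.34 bpb" for the large LSTM means each input byte costs about 1.34 bits after compression, i.e. roughly a 6x conventional ratio; ratios below 1.0 bpb would simply mean better than 8x.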


Aha! thank you. Now I'm much more impressed


bits per byte?


It's a lightweight ML library... But I'm not sure if it makes sense for anything with CUDA as a dependency to be lightweight.



