This seems to be missing the point of Kolmogorov complexity. In Kolmogorov complexity we aren't concerned with the complexity of some fixed finite string, for this very reason.
It's a mathematical device for reasoning about compressibility. It's like saying "What's the use of Shannon Entropy of fixed strings?"
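To make the Shannon-entropy analogy concrete, here is a small sketch (my own example, not from the original discussion): the empirical entropy of one fixed string just measures its symbol frequencies, which shows why the interesting statements are about distributions and compressibility in general, not verdicts on a single string.

```python
from collections import Counter
from math import log2

def empirical_entropy(s: str) -> float:
    """Shannon entropy (bits/symbol) of the empirical symbol
    distribution of a fixed string s."""
    counts = Counter(s)
    n = len(s)
    # H = -sum p * log2(p) over the observed symbol frequencies
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(empirical_entropy("aaaa"))  # degenerate distribution: 0 bits/symbol
print(empirical_entropy("abab"))  # two equiprobable symbols: 1 bit/symbol
```

The number you get for any single string depends entirely on how you model it — exactly the language-dependence issue the thread is debating for K(s).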
Algorithmic information theory also lets you re-derive Gödel's incompleteness theorem and the undecidability of the halting problem; in a sense it's a superset of those results.
For example, we can talk about the compression of knowledge into a finite set of axioms, and about how many bits of information such a system can contain.
Or we can reason about objects like Chaitin's Omega, a truly random number in mathematics: incompressible in the sense that there is provably no program significantly shorter than n bits that produces its first n bits, and not computable outright — you can only approach it from below by brute-force enumeration of halting programs. Yet it is a well-defined real number, and we can actually know its first few bits. (It's also interesting as an oracle of knowledge: if you had its complete expansion, you could settle many questions in mathematics with it.)
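The "approach it from below" part can be sketched in code. This is a toy machine of my own construction (not Chaitin's actual one): its valid programs are the prefix-free set {0^k 1 : k ≥ 0}, and I simply stipulate that program 0^k 1 halts on step k. The dovetailing pattern — run everything with a growing step budget, add 2^-length for each program observed to halt — is the real mechanism by which Omega is approximable from below.

```python
from fractions import Fraction

def halts_within(k: int, budget: int) -> bool:
    # Simulate toy program 0^k 1 for `budget` steps; by stipulation it
    # halts on step k. (Illustrative semantics, decidable on purpose.)
    return k <= budget

def omega_lower_bound(budget: int) -> Fraction:
    """Dovetail: run every program up to `budget` steps and add 2^-len
    for each one observed to halt. The result only ever grows toward
    the toy machine's halting probability."""
    total = Fraction(0)
    for k in range(budget + 1):
        if halts_within(k, budget):
            total += Fraction(1, 2 ** (k + 1))  # program 0^k 1 has k+1 bits
    return total

for t in (0, 2, 5):
    print(t, omega_lower_bound(t))  # monotonically increasing lower bounds
```

For the real Omega the same enumeration works, but because halting is undecidable you never know how close your current lower bound is — which is exactly why its bits are so hard to pin down.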
I think the author's point is that K(s) is the shortest program in _any_ language. Thus for any string s there exists a language ("silly_s") in which s has a program of length 0 (or 1).
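That construction is easy to make concrete without any `eval` (sketch mine, names hypothetical): an interpreter hard-wired to one target string, under which that string has the empty program and every other string costs one extra marker character.

```python
# A "silly" language rigged so that K_silly(TARGET) = 0.
TARGET = "any fixed string you like"

def run_silly(program: str) -> str:
    """A total map from programs to outputs: the empty program outputs
    TARGET; any other program is 'x' + s, a literal that outputs s."""
    if program == "":
        return TARGET
    return program[1:]  # drop the one-character literal marker

assert run_silly("") == TARGET             # length-0 program for TARGET
assert run_silly("x" + "hello") == "hello" # any other s costs len(s) + 1
```

So the language is still total and Turing-equivalent in spirit — it just smuggles one chosen string into its definition, which is the move the next reply objects to.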
But this just misunderstands the definition. In the standard definition you pick _some_ Turing-complete language, and then define K(s) with respect to that language. You can then go on to show (the invariance theorem) that the choice of language changes K(s) by at most an additive constant, so in the limit of longer and longer strings it does not make much difference which language you pick.
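The mechanism behind that invariance can be sketched with two deliberately tiny languages (construction mine): if language B can embed language A behind a fixed-length header — an interpreter for A written in B — then K_B(s) ≤ K_A(s) + len(header) for every s.

```python
HEADER = "!"  # stands in for a constant-size "interpreter for A, written in B"

def run_A(program: str) -> str:
    # Language A: a program is simply a literal for its own output.
    return program

def run_B(program: str) -> str:
    # Language B: a program must start with HEADER; the rest is an A-program.
    assert program.startswith(HEADER)
    return run_A(program[len(HEADER):])

def compile_A_to_B(program: str) -> str:
    # The overhead is len(HEADER), independent of the program being wrapped.
    return HEADER + program

p = "hello"
assert run_B(compile_A_to_B(p)) == run_A(p)
```

Real languages make the header a full interpreter rather than one character, but the point survives: the translation cost is a constant that does not grow with s, which is why K is well defined "up to an additive constant."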
The author defines a language as a mapping from input to output, which is necessary but not sufficient. Their L_silly (written in Ruby) is not a self-contained language, since it uses "eval" and therefore depends on the entirety of Ruby.