People who do this:
val result = directProduct(cyclicGroupOfDegree3, finiteAbelianGroupOfDegree7)
and the second kind, people like me, who do this:
// Compute the direct product of 2 cyclic groups
val z = dP( cg1, cg2 )
You can easily guess that the 2nd kind are math majors. If my math professor started writing everything out in plain English like the first example, he'd never be able to cover a theorem per class. He'd still be writing the first lemma and the hour would be up.
So he resorts to 1-character symbols, precise notations, and terse comments. The idea is: if the reader doesn't grok direct products or cyclic groups, he's fucked anyway, so why bother? And if he does grok them, why impose cognitive overload by spelling it all out in great detail? Just use 1-character symbols and move on.
Now both these styles are in direct conflict with each other, and in the Fortran/C/C++ community during the 80s-90s (when every respectable programmer had a copy of Numerical Recipes on their desk), you would emulate the 2nd kind.
In the 2000s & later, people got rid of their Numerical Recipes & exchanged them for "14 days to EJB" and "Learn Python in 21 days" and "Master Ruby in 7 days" and the like... the community started becoming a lot less math-y and a lot more verbose, and style 1 is back in vogue. Nowadays I get pulled up constantly in code review for using single-character names... but I think this too will pass :)
cyclicGroupOfDegree3 is a terrible name, unless you need to be really specific about degrees. cyclicGroup, please. (and I could imagine: cyclicGroup.degree() == 3)
dP is also a terrible name. My first thought goes to derivatives. directProduct is the right name. dProduct, dProd (if you like the Numerical Recipes style, expounded below) are better than dP, but still wrong for a library.
So first, let's assume directProduct is a library somewhere; maybe one you've even created. So let's reconstruct:
val z = directProduct(cg1, cg2)
Better, and more believable. And if the declarations of cg1 and cg2 are obvious (ie, the lines directly preceding) then you might have a case. I imagine directProduct(group1, group2) would actually be a happy medium. And if you use z in the next line or two, I'd let that slide.
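For concreteness, here's a minimal sketch of what such a call might compute. This is illustrative only: `cyclic_group` and `direct_product` are invented names, not from any real library, and groups are represented bare-bones as element lists.

```python
def cyclic_group(degree):
    """Elements of Z_degree, represented as the integers 0..degree-1."""
    return list(range(degree))

def direct_product(g, h):
    """Elements of the direct product G x H, as ordered pairs (a, b)."""
    return [(a, b) for a in g for b in h]

cg1 = cyclic_group(3)
cg2 = cyclic_group(7)
z = direct_product(cg1, cg2)
print(len(z))  # |Z3 x Z7| = 3 * 7 = 21
```

Even in this toy form, `direct_product(cg1, cg2)` reads fine when the two declarations sit right above it, which is the point being made.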
The thing about Numerical Recipes is that often you're taking math syntax and coding it. Often doing so requires a good deal of commenting and temporary variables. One thing the book gets very wrong is its function declarations (the function body is a separate argument) -- at the very least, rooted in a past of 80-character lines and before code reviews. The first example in the book:
void flmoon(int n, int nph, long* jd, float* frac)
ought, in a modern era, be something like:
void MoonPhaseToDay(int n_cycles, int phase, long* julian_day, float* fractional_day);
If for no other reason than I can have some hope at understanding the second argument, or finding it again. You'll also notice flmoon is a misnomer -- it computes more than full moons!
The idea is: if the reader doesn't grok direct products or cyclic groups, he's fucked anyway, so why bother? And if he does grok them, why impose cognitive overload by spelling it all out in great detail? Just use 1-character symbols and move on.
This case analysis works if you only consider people who've been sitting in on the class the whole time, but someone who appears mid-way through the semester (or analogously, starts looking at some project's code without having been involved in its development) may understand group theory and still have trouble following the lecture.
// Compute the direct product of 2 cyclic groups
I would much rather give the function a name that describes its purpose (and give a full description in a comment at the function's definition) than annotate every use of the function.
Notation is what it is because it serves our (i.e. mathematicians') interest so well. If every one of us started writing
Integral Of Quadratic Polynomial = Cubic Polynomial plus constant
maybe we would increase the size of the audience a tad bit....but the downside would be, we'd move at a glacial pace & never make progress.
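To make the contrast concrete, the terse version of that plain-English sentence is a single line of standard notation (here for a generic quadratic, integrated term by term):

```latex
\int \left( ax^2 + bx + c \right) dx \;=\; \frac{a}{3}x^3 + \frac{b}{2}x^2 + cx + C
```

One line, unambiguous to anyone who has done a first calculus course, and it generalizes mechanically to any polynomial.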
Things like direct product and cyclic groups are basic....almost trivial even.
If a non-programmer asks you "Why do you guys say
int x = 10;
float y = 0.2;
Why not
x is a whole number whose value is 10
y is a fraction whose value is one fifth.
you can sit down & reason with him for a while... but if he insists that everything be spelled out in such verbose detail, you will at some point pick up your Starbucks coffee and say "Dude, this programming thing, it's not for you. The average program is tens of thousands of LOC, and if I start writing everything out in plain English, I'm going to get writer's cramp & file for disability insurance."
Trust me, math gets immensely complicated very, very fast. The only way to have even a fighting chance of keeping up is terse notation (and frequent breaks).
One reason for this schism is the lack of rigor.
e.g. When a programmer says "function", he is an order of magnitude less rigorous than what a mathematician means by that word. You ask a programmer what probability is, and he will say "you know, whether something will happen or not, how likely it is to happen, so if it doesn't happen we say 0, if it is sure to happen we say 1, otherwise it's some number between 0 & 1. Then you have Bayes' rule, random variables, distributions, blah blah... I can google it :)"
You ask a mathematician what probability is... even the most basic definition would be something like "a function from the sample space to the closed interval [0,1]". Note how incredibly precise that is. By the word "function", the mathematician has told you that if you take the Cartesian product of the domain, i.e. the set of unique outcomes of your experiment, with the range, which is the closed interval [0,1], you'll get a shit-ton of tuples, and if you then filter those tuples so that every outcome from the domain has exactly one image in the range, then that is what we call "probability". And this is just the beginning... the more advanced the mathematician is, the more precise he'll get.

I've seen hardcore hackers who've designed major systems that use numerical libraries walk out of a measure theory class on day 1, simply because they underestimate how little they know. Calling APIs is very, very different from doing math. The professor is like the compiler - he isn't going to care whether you know what a measure is or what a topological space is... it's a given that you've done the work laid out in the pre-reqs, and if you haven't, go write an API or something, don't bother the mathematician... at least that's the general attitude in most American universities I've seen. If you tell him "describe its purpose and give a full description", he will look at you as if you are from Mars, and then tell you to enroll in the undergraduate section of Real Analysis 101 :)
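That definition is precise enough to transcribe almost literally into code. A minimal sketch (representation is my own choice, not a standard library API): a finite probability measure as a dict, i.e. literally a function from the sample space to [0,1].

```python
from fractions import Fraction

# A fair coin: the sample space, and a function mapping each outcome
# to a value in the closed interval [0,1].
sample_space = {"heads", "tails"}
p = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}

# The defining properties, stated directly:
# 1. every outcome has exactly one image (dicts guarantee this),
#    and the domain of p is exactly the sample space;
assert set(p) == sample_space
# 2. every image lies in [0, 1];
assert all(0 <= v <= 1 for v in p.values())
# 3. the total mass is 1.
assert sum(p.values()) == 1
```

Of course a measure theorist would object that this only handles finite sample spaces -- which is exactly the kind of precision gap the comment above is describing.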
If a non-programmer asks you "Why do you guys say int x = 10; float y = 0.2; Why not x is a whole number whose value is 10; y is a fraction whose value is one fifth."
Knowing what language we're working in is enough to know that's exactly what that code says. This is not the case in mathematical notation. C × A could be the direct product of two groups, or maybe the tensor product of two graphs, or perhaps the product of two lattices, or it could be one of who knows how many other things. The issue here is not high precision, but the opposite: heavy reliance on convention and context in order to be unambiguous.
"C x A" is a combination of C and A. C, x, and A are defined once somewhere at the beginning of the paper/book.
Should I take this (which doesn't actually hold in the general case) to mean that every document uses its own language? That isn't exactly good for readability. We certainly don't consider every software project to be written in its own language (and no, the semantics of C does not tell us what the dP function does).
I would argue that the self-documenting code in your first example (albeit excessively verbose to suit your argument) is better than having to put a comment above every terse statement one writes.
Programming with descriptive and meaningful variable / function names that have intrinsic meaning skips this step: 'wait, what was the cg2 again? let me waste time by looking back through the code and figuring that out again'. Particularly for others reading your code later.
But I try to fit somewhere in between the two examples you've shown, meaningful but not too long -> the benefits of longer variable / function names diminish when names get too long, as variable names tend to blend together (see law of diminishing returns).
Personally, I much prefer the first expression you've written (well, without the exaggerated style you've given it). The second may save you some time in the short run, but when you come back to your code in a year, do you think you'll know what dP means? Differential of P? Distance to P? Distance from P?
I realize you've got a comment above it, but if you really code like that then your code is very different from any I've come across where the programmers used short, non-descriptive variable names.
As my coding has improved over the years, I've gone from an imperative, abbreviated variable style to a functional, long-named variable style. Even with the long names, my code is MUCH more readable and it's still about 2x less typing than the imperative style!
(Also, I don't think all math majors use that second style).
Math notation should be terse because you have to do symbolic manipulation on the expressions. It's easy to argue for longer variable names when you are just using the ready-made results. I think it's acceptable to use long names for programming.
I guess by degree you mean order? And by finiteAbelianGroupOfDegree7 you mean cyclicGroupOfDegree7. Of course, you should always use theorems to keep your notation consistent. :)