I agree that with vector math, overloaded arithmetic operators are easier to read. However, I don't see how you could add "overloading, but only for actual math" - once it's in, people will repurpose it for all kinds of cursed purposes.
The implementation would probably be ugly, but I wonder if it could be implemented by using a comptime string to represent the operation, e.g. something like:
    fn doMath(comptime op: []const u8, args: anytype) MathReturnType(op, args) {
        // TODO: implement me
    }

    const result = doMath(
        \\a + b
    ,
        .{ .a = a, .b = b },
    );
Where the implementation would call `.add` etc on the parameters when infix operators were used.
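For illustration, here is a minimal sketch of that dispatch. Everything in it is hypothetical (the single hard-coded "a + b" case, the toy Vec2 type); a real version would need a small comptime expression parser rather than a string comparison:

```zig
const std = @import("std");

// Hypothetical sketch: this doMath understands only the literal
// expression "a + b" and dispatches it to the operand's `add` method.
fn doMath(comptime op: []const u8, args: anytype) @TypeOf(args.a.add(args.b)) {
    comptime std.debug.assert(std.mem.eql(u8, op, "a + b"));
    return args.a.add(args.b);
}

// Toy operand type using the method-based arithmetic Zig already allows.
const Vec2 = struct {
    x: f32,
    y: f32,
    fn add(self: Vec2, other: Vec2) Vec2 {
        return .{ .x = self.x + other.x, .y = self.y + other.y };
    }
};

test "doMath dispatches + to add" {
    const r = doMath("a + b", .{
        .a = Vec2{ .x = 1, .y = 2 },
        .b = Vec2{ .x = 3, .y = 4 },
    });
    try std.testing.expectEqual(@as(f32, 4), r.x);
}
```

Since `op` is a comptime parameter, an unsupported expression fails at compile time rather than at runtime.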
There was a comment several days ago (https://news.ycombinator.com/item?id=29825516) that made me reconsider the whole enterprise of operator overloading even for math, specifically its last paragraph.
The gist is that operator overloads can easily preclude useful optimizations and efficient execution, and it's often more desirable to be fast than to have syntactic sugar; hence explicit function calls like multiply_add(a, b, c) instead of a + b*c. If you really want syntactic sugar for math, operator overloading probably isn't the way to implement it. It would be nicer to have something with the full expression in view so there can be optimizing reductions. Lisp macros can do that; you might have some other kind of parser (which might have to work on strings); or, with sufficient cleverness, you could build a nest of overloaded operators that accumulate the operations-to-perform as context and either require some doMath wrapper at the end or a final overload that produces a fully computed return type.
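A sketch of that last, context-accumulating idea in Zig, with all names hypothetical: each "operation" builds a tree node instead of computing, and eval() sees the whole expression, so a + b*c can be reduced to a single fused multiply-add via the real @mulAdd builtin:

```zig
const std = @import("std");

// Hypothetical expression tree: construction records the computation,
// evaluation happens only once the full context is available.
const Expr = union(enum) {
    leaf: f64,
    add: struct { l: *const Expr, r: *const Expr },
    mul: struct { l: *const Expr, r: *const Expr },

    fn eval(self: Expr) f64 {
        return switch (self) {
            .leaf => |v| v,
            // With the whole tree in hand, a + b*c reduces to one
            // fused multiply-add instead of two separately rounded ops.
            .add => |n| if (n.r.* == .mul)
                @mulAdd(f64, n.r.mul.l.eval(), n.r.mul.r.eval(), n.l.eval())
            else
                n.l.eval() + n.r.eval(),
            .mul => |n| n.l.eval() * n.r.eval(),
        };
    }
};

test "a + b*c takes the fused path" {
    const a = Expr{ .leaf = 1.0 };
    const b = Expr{ .leaf = 2.0 };
    const c = Expr{ .leaf = 3.0 };
    const prod = Expr{ .mul = .{ .l = &b, .r = &c } };
    const sum = Expr{ .add = .{ .l = &a, .r = &prod } };
    try std.testing.expectEqual(@as(f64, 7.0), sum.eval());
}
```

This is exactly what C++ expression templates do with overloaded operators; here the tree has to be built by hand, which is the ergonomic cost being debated.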
I prefer languages that don't cripple expressive freedom, so overall I'm not anti-operator-overloading in general, even though I think some overloads are pretty questionable (I dislike C++'s arrow overload for Optionals). But I no longer think that e.g. a math-focused library is an obvious win or an exception to the downsides of the expressive power granted by operator overloading.
I think vector math is a compelling example, but it is far from the only one. Bignum arithmetic is probably just as common, if not more so; in fact, if I understand correctly, Zig has a bignum library built into the standard library. Using that library will be painful because of this choice. There are plenty of other such examples: imagine implementing (and then using) something like SymPy in Zig.
One of the issues with operator overloading itself, in relation to bignums and the like, is the need for an allocator (there's nowhere to pass one), along with the lack of error handling (suppose allocation fails) and the question of when and how to clean up. The same applies to string concatenation at runtime via operators, and many other places where overloading could be used.
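Zig's built-in bignum type (std.math.big.int.Managed) already shows what the explicit-call style looks like, and why `a + b` couldn't express it. A sketch, assuming the current std API; the function name is made up:

```zig
const std = @import("std");
const Int = std.math.big.int.Managed;

// Hypothetical demo: every step needs an allocator, can fail, and
// must be cleaned up -- none of which an infix `a + b` can express.
fn sumDemo(gpa: std.mem.Allocator) !void {
    var a = try Int.initSet(gpa, 1 << 62);
    defer a.deinit();
    var b = try Int.initSet(gpa, 1 << 62);
    defer b.deinit();
    var sum = try Int.init(gpa);
    defer sum.deinit();
    // Explicit call site: an allocation failure surfaces as an error here.
    try sum.add(&a, &b);
}

test "bignum add is explicit" {
    try sumDemo(std.testing.allocator);
}
```

An overloaded `+` would have to smuggle in the allocator, swallow or panic on allocation failure, and leave ownership of the temporaries unclear.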
So even if Zig allowed operator overloading, those issues would still have to be solved.