>The string processing is powerful, but inconvenient when you want to do things like indexing by offsets or ranges, due to Unicode semantics. (This is probably a good thing in general.)
This is being too generous to Swift's poorly designed String API. The author gets into it immediately after the quote with an Array<Character> workaround, regex issues, and later Substring pain. It's not a fatal flaw (a language backed by one of the richest companies in the world can afford a few flaws), but AoC in particular shines a light on it.
I really like Swift as an application/games language but I think it unlikely it can ever escape that domain.
I wholeheartedly disagree and would counter that all other String APIs are wrong (bold statement, I know). Accessing a random index of a String is a complex (slow) operation, and as such it should be reflected as complex in the code, especially since people usually think it is not complex.
If you want an array of UInt8, just use that.
The part about the regex I agree with. They are slow and that’s a shame. I do not personally use regex much though, and don’t think it should be done much in prod either, unless there are no other options, but that does not excuse a poor implementation.
Regarding the domain, I recognize Swift seems to have difficulty escaping "native iOS/macOS apps," but IMHO it shouldn't be confined there. It is a language that is simple to use, with a reasonable memory-handling default (ARC), though it can also use a memory ownership model like Rust's. Generally speaking, using Swift is possible everywhere. I use it personally for an app (native and web front ends, and the back end), and it is extremely cool.
Its ecosystem is also becoming quite interesting. Most of the libs are from Apple, yes, but they are also of very high quality.
All in all I think it's a shame Swift is not more widely used in the industry.
FWIW, AoC is very non-representative of real-world string manipulation problems.
The AoC format goes out of its way to express all problem inputs and outputs as simple strings of basic ASCII text, just for compatibility with the widest range of programming environments. This is very different from almost all real-world problems, where the complexities of human language loom large.
I have to agree strongly with my sibling commenter. Every other language gets it horribly wrong.
In app dev (Swift's primary use case), strings are most often semantically sequences of graphemes. And, if you at all care about computer science, array subscripting must be O(1).
Swift does the right thing for both requirements. Beautiful.
OK, yes, maybe they should add a native `nthCharacter(n:)`, but that's nitpicking. It's a one-liner to add yourself.
I don't think Rust gets this horribly wrong. &str is some bytes which we've agreed are UTF-8 encoded text. So, it's not a sequence of graphemes, though it does promise that it could be interpreted that way, and it is a sequence of bytes but not just any bytes.
In Rust "AbcdeF"[1] isn't a thing, it won't compile, but "AbcdeF"[1..=1] says we want the UTF-8 substring starting from byte 1 through to byte 1 and that compiles, and it'll work because that string does have a valid UTF-8 substring there, it's "b" -- However it'll panic if we try to "€300"[1..=1] because that's no longer a valid UTF-8 substring, that's nonsense.
For app dev this is too low level, but it's nice to have a string abstraction that's at home on a small embedded device where it doesn't matter that I can interpret flags, or an emoji with appropriate skin tones, or whatever else as a distinct single grapheme in Unicode, but we would like to do a bit better than "Only ASCII works in this device" in 2025.
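In code, the distinction looks roughly like this (a minimal sketch; `is_char_boundary` is the standard library's way to check an offset before slicing):

```rust
fn main() {
    let s = "AbcdeF";
    // Byte 1 is an ASCII char boundary, so a one-byte range slice works.
    assert_eq!(&s[1..=1], "b");

    let euros = "€300"; // '€' occupies bytes 0..3 in UTF-8
    // Byte 1 falls in the middle of '€', so it is not a char boundary...
    assert!(!euros.is_char_boundary(1));
    // ...and &euros[1..=1] would panic at runtime.
    // Slicing at a real boundary is fine:
    assert_eq!(&euros[3..], "300");
}
```

The panic-vs-compile-error split is exactly the behavior described above: the range form compiles for any offsets, and validity is checked at runtime.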
> In Rust "AbcdeF"[1] isn't a thing, it won't compile, but "AbcdeF"[1..=1] says we want the UTF-8 substring starting from byte 1 through to byte 1 and that compiles, and it'll work because that string does have a valid UTF-8 substring there, it's "b" -- However it'll panic if we try to "€300"[1..=1]
I disagree. IMO, an API that uses byte offsets to take substrings of Unicode code points (or even larger units) is already a bad idea. But then having it panic when the byte offsets don't happen to fall on code point/(extended) grapheme cluster boundaries?
How are you supposed to use that when, as you say, "we would like to do a bit better than 'Only ASCII works in this device' in 2025"?
It's often the case that we know where a substring we want starts and ends, so this operation makes sense - because we know there's a valid substring this won't panic. For example if we know there's a literal colon at bytes 17 and 39 in our string foo, foo[18..39] is the UTF-8 text from bytes 18 to 38 inclusive, representing the string between those colons.
One source of confusion here, is not realising that UTF-8 is a self-synchronising encoding. There are a lot of tricks that are correct and fast with UTF-8 but would be a disaster in the other multi-byte encodings or if (which is never the case in Rust) this isn't actually a UTF-8 string.
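That "known boundaries" pattern can be sketched with a hypothetical helper (`between_colons` is a made-up name, not from the comment above): because `find` returns byte offsets that are guaranteed to be char boundaries, the range slice below cannot panic, even on multibyte text.

```rust
// Slice out the text between the first two colons, if both exist.
// find() returns byte offsets that are always valid char boundaries,
// so the range indexing below is panic-free by construction.
fn between_colons(s: &str) -> Option<&str> {
    let first = s.find(':')?;
    let second = s[first + 1..].find(':')? + first + 1;
    Some(&s[first + 1..second])
}

fn main() {
    // Works fine with multibyte characters on either side.
    assert_eq!(between_colons("héllo:wörld:done"), Some("wörld"));
    assert_eq!(between_colons("no colons here"), None);
}
```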
You can do better than "only ASCII works in this device", and making the default string type Unicode is the wrong way to do it. For some applications you might not need to interpret text at all, or you might only need to interpret ASCII even if the text is not purely ASCII. Other times you will want to do other things, but Unicode is not a very good character set (there are others, and which is appropriate depends heavily on the specific application; sometimes none are). And even if you are using Unicode, you still don't need a Unicode string type, and you don't need it to check for valid UTF-8 on every string operation by default, because that results in inefficiency.
In 1995 what you describe isn't crazy. Who knows if this "Unicode" will go anywhere.
In 2005 it's rather old-fashioned. There's lots of 8859-1 and cp1252 out there but people aren't making so much of it, and Unicode aka 10646 is clearly the future.
In 2015 it's a done deal.
Here we are in 2025. Stop treating non-Unicode text as anything other than an aberration.
You don't need checks "for every string operation". You need a properly designed string type.
I think using "extended grapheme clusters" (EGCs) (rather than code points or bytes) is a good idea. But why not let you do "x[:2]" (or "x[0..<2]") on a String to get the first two EGCs? (Maybe better yet, make that return "String?".)
That's what I meant by "must be O(1)". I.e. constant time. String's Index type puts the non-constant cost of identifying a grapheme's location into the index creation functions (`index(_:offsetBy:)`, etc). Once you have an Index, then you can use it to subscript the string in constant time.
Like I said, you can easily extend String to look up graphemes by integer index, but you should define it as a function, not a subscript, to honor the convention of using subscripts only for constant-time access.
It's also just not a normal use case. In ten years of exclusive Swift usage, I've never had to get a string's nth grapheme, except for toy problems like Advent of Code.
Because that implies that String is a random access collection. You cannot constant-time index into a String, so the API doesn't allow you to use array indexing.
If you know it's safe to do you can get a representation as a list of UInt8 and then index into that.
I disagree. I think it should be indexed by bytes. One reason is what the other comment explains about not being constant-time (which is a significant reason), although the other is that this restricts it to Unicode (which has its own problems) and to specific versions of Unicode, and can potentially cause problems when using a different version of Unicode. A separate library can be used to deal with code points and/or EGC if this is important for a specific application; these features should not be inherent to the string type.
In practice, that is tiring as hell: verbose, awkward, unintuitive, requiring index types tied to a specific string instance just to do numeric indexing, and a whole bunch of other unnecessary ceremony not required in other languages.
We don't care that it takes longer; we all know that. We still need to do a bunch of string operations anyway, and doing the equivalent thing is way worse in Swift than in pretty much any other language.
In Swift (and in other programming languages) it does use Unicode, but I think it would probably be better if it didn't. And even when there is a Unicode String type, I still think it probably should not be based on grapheme clusters; I explained some of the reasons for this.
In absolute terms, the number of people with deep knowledge here is probably higher than it's ever been. The ratio has changed, which is what I think he's actually lamenting. He mentions civil engineering; one thing I find myself often bringing up is that software engineering is a brand-new discipline. The operating domain of software is immense, maybe infinite. So I'm less curmudgeonly than Jon is here: we've got barely a century or so of software experience as a species; there's gonna be slop.
Hillel Wayne offers a much better, more formally grounded survey of how software engineers compare to “real” engineers - https://www.hillelwayne.com/post/are-we-really-engineers/. He actually interviews civil engineers turned software engineers, among other “Crossovers” as he dubs them. Turns out, software engineering is as “real” as other types of engineering to crossovers who’ve done both. Software engineers just _think_ that other types of engineering are more real.
The likelihood is low that Jon has spoken to any meaningful number of civil engineers in making his comparison to the software engineering field. Rather, he reasons from some idealized version of what he believes civil engineering to be.
The “crossovers” all studied engineering. I think that makes this an incredibly biased sample. At best, I think one can say that those who studied engineering apply the engineering design process they learned in other engineering fields to their new jobs in software.
Beyond that, I think the survey would need to expand the pool to a representative group working in software. Possibly those who studied CS, those who studied other fields, self-taught without formal higher education, and include people who studied (and possibly are licensed) software engineering, to provide a control group.
Even within the engineering field, not everyone is an engineer or practicing engineering. There are different levels of education and credentialing and those people fill useful positions. For some reason everyone in software insists that they are doing engineering and are an engineer without having studied any engineering topics. (I’m not talking about having to cover the chemistry, physics, differential equations and other topics that aren’t core to software.)
Thanks for taking the time to read the article! My responses are keyed by paragraph number.
1. The study is comparative in that it answers the question: do engineers from other fields consider software engineering "real"? The interviewees answer more than whether they apply an engineering design process; they comment on what they see in the industry from others, and resoundingly agree that software engineers are abundant and no different from "real" engineers.
2. We agree, aspects of the study could be improved. Nonetheless, I think Hillel’s analysis serves its purpose in leading the discussion forward on whether software engineering is “real” in a more productive direction than Jon Blow’s comments do, necessarily, as the topic of this HN thread.
3. Part 1 of Hillel’s article, linked above, addresses and agrees with your point directly, suggesting that we don’t yet have a vocabulary rich enough to describe all aspects of the work people do with and on software.
I'll be honest, I've read the article several times. I'm way too dumb to get what the author's point is.
I think the video opens up more interesting lines of discussion. There are two points I think the video has that are relevant. One is the discussion on deep knowledge (there is a body of knowledge to learn about software engineering including fundamentals of the discipline). The other is really about the engineering design process (there are domain specific techniques for software that can be used, but it's common to all engineering disciplines).
As other commenters mentioned, the fundamentals are probably more accessible and more widely known now than at any other time. The minimum bar to enter the field has probably gone down substantially because of all the abstraction and tooling that exists, and because there are no standards for job titles, everyone wants to be called an engineer.
Sure, this is probably a little much and there are plenty of people working in the field that don't need to know much of it. That brings us to the vocabulary, which is already in place! Engineer, technologist, technician, skilled trades, and unskilled labor are different categories of jobs within the existing fields of engineering. These range from requiring no formal education to requiring years of formal education. Again, everyone wants to be an engineer and there is nothing in America stopping them from calling themselves one.
And yet, if you have two tasks that each take 1 hour, and only 90 minutes until the deadline, only one task will be completed by you, regardless of quadrant, and you will then be reminded that priority is singular.
That scenario moves the goalpost towards a losing proposition.
You have two important and urgent things and already know you fail at least one of them.
TFA is about non urgent important tasks that you want to show incremental progress on.
If there are multiple of those, then those short bursts of actual work can be eaten up by context switching, reprioritization, scheduling, etc.
What I do in those cases is arbitrarily pick an order to do those things (likelihood of becoming urgent in the future, difficulty, unpleasantness, etc.). If I can't, I just pick a random order.
I would not have been as kind to FF7 as this author was. By any modern measure it is a bad game. The raw story material and the character design is fantastic. But the gameplay. The gameplay is so tedious the author couldn’t even continue without modifying the game.
I think the main problem with 13 is that it doesn't give the illusion of freedom. I finished FFX for the first time recently (having previously gotten stuck on Gagazet Seymour), and it's almost as restrictive as 13; it just hides it better with cities, temple puzzles, and the sphere grid. In 13 there's no illusion about the fact that you have no choice but to keep running straight down the hallways, leveling up only at the pace the game allows. A little misdirection or a few minigames would have gone such a long way to break up the pacing.
Totally agree; I started replaying 13 somewhat recently, and just felt worn down after playing for a few hours. The nonstop battles with no real break between them just isn't fun, despite the game having amazing aesthetics.
yeah, one of my problems with Japanese properties in general is this obvious design by committee thing and budget issues that turn great pitches into incoherent nonsense by the end
you can basically count on one hand how many anime style or Japanese gaming properties don't go off the rails and finish the initial story.
Final Fantasy VII is a prime example: the world building at the beginning of the first game in the “VII” franchise is fantastic. Even in such an arduous funnel, most people see that. But then there’s vampires, dueling identities, and ridiculous backstories that cause continuity issues, with a separate subsequently released game for each. They pretty much just ignore the final boss of the first game and the resolution in the closing cut scenes, because now there are multiple universes for no reason, with no foreshadowing whatsoever. like. stop. there was no cliffhanger or loose ends.
they even got fans jumping through mental hoops about the definition of “remake” to rationalize the unnecessary ridiculousness of breaking the story even further for no benefit, but in 4K. they could have just done the exact same story without the Kingdom Hearts nonsense, but it’s like someone at Square can’t be fired for some dumb, distinctly Japanese reason.
It seems like you’re talking about the modern FF7 Remake, which is actually an alternate universe version of the original FF7 series.
The original didn’t have a lot of the bits you are calling out because it wasn’t trying to retcon in 20+ years worth of story buildup, and is a much simpler story.
That’s not to justify the complexity of the FF7 lore. I think it and Kingdom Hearts are both ridiculously over complex now, but that people delineate between Remake and the original because of that huge divergence in story.
The article and rest of the comments are about the original game.
I was referring to the first game until I was referring to the remake. read it again?
I find the first game charming and evocative, while acknowledging there were some parts to expand upon, loose ends even, but the way they did that definitely jumped the shark.
Ah my bad. I misunderstood what the subject was in the paragraph when you talked about the “first game” and the criticisms after, since they’re largely scoped to the remake rather than earlier games in the franchise.
When I played it in college, I recognized that playing it fair and square would create undue competition for my studies.
I used the hex edit cheats to max out my characters, and breezed through it in 23 hours. A great decision. I got to experience all that plot, all the mini games, and only compromised the combat-building aspect of it. The last battle, even at max stats, was not a cakewalk. There were Weapons I still have not defeated.
They don't like the encounter rates. It definitely got annoying at times, but the encounters and leveling up have been part and parcel of this genre of RPGs.
Personally I'm the delayed gratification type where I'll level up to a point where I steamroll all the bad guys in the main storyline. Even high encounter rate areas aren't bad when it's a one hit kill.
I'd do that pretty much every playthrough when I got to the woods where you find Yuffie. I'd farm limit breaks up to lvl3 on the groups of 5-6 little bug guys that you can run into. By the time I'd get my limits to 3 they'd all be pretty OP in terms of their level.
I played several FF games a few years ago for the first time and felt like this for almost all of them (all on the Switch, don't remember which numbers offhand). The gameplay elements were just clunky and I gave up on those a few hours in, though I did complete X. Haven't done X-2 yet though.
Seems like Jonathan Blow, the creator of Jai, releases a game every decade or so. I wonder if the plan is to release his next game in a couple of years. Presumably it is written in this new language? Maybe the compiler will be released at the same time under the guise of “hey this is so production ready we have already shipped a game with it.”
    fn add(x: Option<i32>, y: Option<i32>) -> Option<i32> {
        if x.is_none() || y.is_none() {
            return None;
        }
        return Some(x.unwrap() + y.unwrap());
    }
The above looks kind of clunky because of the None checks it needs to perform, and it also sucks that we have to extract values out of both Options and construct a new Option from the result. However, we can do much better than this thanks to Option’s special properties! Here’s what we could do:
    fn add(x: Option<i32>, y: Option<i32>) -> Option<i32> {
        x.zip(y).map(|(a, b)| a + b)
    }
Do folks really prefer the latter example? The first one is so clear to me and the second looks inscrutable.
You do get better about using the functional operators as you use them, and they can be incredibly powerful and convenient in certain operations, but in this case he's missing the simplest implementation of this function using the `?` operator:
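Something along these lines, presumably (a reconstructed sketch based on the `Some(x? + y?)` version mentioned further down the thread, not the commenter's exact code):

```rust
fn add(x: Option<i32>, y: Option<i32>) -> Option<i32> {
    // `?` on an Option returns None from the function early
    // if the operand is None; otherwise it unwraps the value.
    Some(x? + y?)
}

fn main() {
    assert_eq!(add(Some(1), Some(2)), Some(3));
    assert_eq!(add(Some(1), None), None);
}
```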
To slightly elaborate on yccs27's good answer: at the moment you can use ? with both Option<T> and Result<T, E>, but only in functions that return the same type. If you have them mixed in the body, they won't work directly. What you should/can do there depends on specifics. For example, if you have a function that returns Option and you have a Result inside, and you want any E to turn into None, you can add .ok() before the ? and then it still looks nice. (The compiler will even suggest this one!)
And an even smaller caveat: if you use older (distro-provided) Rust versions, note that it may look like there is some partial implementation allowing this mixing via NoneError, etc. Ignore it; it doesn't work, and it was removed in later versions.
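A small sketch of that .ok() trick (`double_parsed` is a made-up name for illustration): the function returns Option, but contains a Result-producing call inside.

```rust
// Returns Option, but parse() yields Result<i32, ParseIntError>.
// .ok() converts the Err case to None, so `?` is usable here.
fn double_parsed(s: &str) -> Option<i32> {
    let n: i32 = s.parse().ok()?;
    Some(n * 2)
}

fn main() {
    assert_eq!(double_parsed("21"), Some(42));
    assert_eq!(double_parsed("nope"), None); // parse error becomes None
}
```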
But also: don't use your distro provided version of Rust. It's intended for compiling distro packages that depend on Rust, not for developing with Rust. Get Rust from https://rustup.rs
And hence for writing Rust code you would like to package for your distro of choice. I publish my software because I want others to use it, and including it in distro repos is the most user-friendly way to do that.
I also (having written a few tools in Rust; I dabble now and then) was under the impression that it only worked with Result. Is this recent, or have I just never encountered it?
TL;DR: learning Rust through canonical code in tutorials often requires the student to learn bits of the language that are more advanced than the actual problem the respective tutorial is trying to teach them to solve in Rust. ;)
I prefer the latter now that I understand how all the Result/Option transformations work. As a beginner this would be hard to read but the former looks clunky.
Clippy also got pretty good lately at suggesting such transformations instead of if... blocks. I.e. I guess that means they are considered canonical.
In general I find canonical Rust often more concise than what a beginner would come up with but it does require deeper understanding. I guess this is one of the reasons why Rust is considered 'hard to learn' by many people.
You could actually teach Rust using pretty verbose code that would work but it wouldn't be canonical (and often also not efficient, e.g. the classic for... loop that pushes onto a Vec vs something that uses collect()).
This is very true - to fully explain a "hello world" program you'd have to dive into macro syntax... When writing my Rust book I often start by showing the naive solution, and then later move to more canonical code once I've introduced more syntax that enables something more elegant. But I'm aware that I'm showing something non-optimal to start. Maybe that loses some trust, like people think they're learning something only to be told later on that it's wrong? On the other hand if you start with the optimal solution you have to teach so much syntax that it's overwhelming. I expect that some folk want every example to be perfect, but I'm going with an approach where you iterate through each chapter and as you progress through the whole book the examples get better and more idiomatic.
This is basically an eternal battle when teaching. Personally I prefer to try and stay to only idiomatic code if at all possible, but there's pros and cons to each approach.
(This is one reason why I'm glad other people are also writing books! Not everyone likes my style!)
I'm sorry, but this perspective is absolutely bizarre to me.
We should not avoid language features that reduce boilerplate and drastically increase comprehension for people who have experience in a language in order to cater to people who have minimal or no experience in that language.
Should we not use `?` in Rust because it might be obscure to someone who's never used the language? Should we not use any of the `Iterator` functions (including `zip` and `map`) because they might be confusing to C programmers who only know `for` loops?
Functional concepts are everywhere these days. Most of them are not hard. `zip` and `map` do not require understanding of homotopy type theory to understand, they are essentially trivial functions. They are available across every type you could possibly iterate over in Rust, and if you understand what they do on one of those you essentially understand what they do on all of them.
This is a toy example, but virtually every piece of hard evidence we have in this field shows that—within reason—more concise code has fewer bugs and is easier to comprehend than longer, more verbose equivalents. Writ large across a project, doubling or tripling the amount of code to bend over backwards accommodating complete novices is lunacy.
I half agree. The question mark version seems to be good as well, in fact also more idiomatic.
If the zip/map version is more common then I take everything back. But it seems less malleable and clear than the pattern matching examples. I had to stop and think for a second to get it. I find pattern matching in general more declarative for small amounts of items.
---
In terms of preferring clearer and simpler code: Absolutely yes. I avoid unnecessary abstractions, especially if the gains are so minor or questionable. It's a matter of empathy and foresight.
Whether that's the case here: I don't know. It might very well be that this is common and clear in the Rust world.
Every abstraction is unnecessary. And the gains are almost always minor… until you apply some of those abstractions across an entire code base.
C-style `for` loops were the norm for decades. Now virtually every language gives you some ability to iterate directly over every element in a collection. Replacing a `for` loop with an iterator over each element is never necessary. The old way worked for decades. The gains are minor. Should we go back to C-style `for` loops? If not, why not?
When you understand the answer to that, you’ll understand why that same logic applies to trivial functions like `zip` and `map` that simply take the idea one minor step further.
I’m not arguing against abstraction in general. I argue that clarity comes first and that in this specific instance pattern matching seems more clear to me.
Sorry, this is just a pet peeve of mine in general.
All the time I see people argue against “unnecessary” abstractions. But this almost always comes from the perspective of “I don’t personally understand it yet” which is just not a reasonable bar for anything to have to clear. Second most frequently it’s “less clear” which often just means “I haven’t internalized it yet” which is likewise a terrible evaluation method for something like functional iteration methods that have the possibility of being used virtually everywhere. And almost equally as often, the underlying objection is that it isn’t useful because the old way is just fine thanks, which is typically a perspective that has completely forgotten about all of the sharp edges and bugs that we all just grew to accept from the preexisting approach.
All of these types of objections are knee-jerk reactions. There are good arguments against bad abstractions, leaky abstractions, infrequently-used abstractions, overly-complex abstractions, and all sorts of other failures to abstract well. But people are so used to these that they reflexively oppose any new abstraction as overly complex or unnecessary simply because it’s new to them.
And that’s a terrible perspective to have, because quite literally all of the progress that has ever been made in the practice of software engineering has been due to abstraction.
The `zip` and `map` functions used here are actually from the module std::option, not std::iter. While they are the same idea "in essence", they have different implementations. The std::option ones are a simple pattern match, while the std::iter ones are more complex. For example, std::iter::zip returns a std::iter::Zip, while std::option's zip returns an Option of a tuple.
I'll also add that option's zip and map are also implemented with a pattern match, like above.
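A rough sketch of that idea (not the actual std source, just the two-case pattern match it amounts to, under a hypothetical name):

```rust
// What Option::zip boils down to: a simple match on the pair.
fn zip_sketch<A, B>(a: Option<A>, b: Option<B>) -> Option<(A, B)> {
    match (a, b) {
        (Some(a), Some(b)) => Some((a, b)),
        _ => None,
    }
}

fn main() {
    assert_eq!(zip_sketch(Some(1), Some("x")), Some((1, "x")));
    assert_eq!(zip_sketch::<i32, &str>(Some(1), None), None);
}
```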
One error and one deviation from the established norm for a toy example is a lot. At the scale of a codebase it would be a catastrophe.
But if you look closely you'll notice that `zip` and `map` were called directly on an array here and not actually on an iterator. That's a third implementation of the same concepts. If Rust had HKTs they could all be the exact same implementation, but not today.
The important thing, though, is that they all conceptually do the same thing. Understanding one essentially translates to understanding them all. If zip/map are called directly on two Options, you get an Option containing a tuple back out. If they're done on two arrays, you get an array containing tuples back out. If they're done on two iterators, you get an iterator containing tuples back out.
> But if you look closely you'll notice that `zip` and `map` were called directly on an array here and not actually on an iterator.
No, I don't think that's true, unless we're talking about two different things. In the article, and in the following post https://news.ycombinator.com/item?id=34428999, zip is used on an Option<i32>, takes another Option<i32>, and returns an Option<(i32, i32)> (which is a tuple, not an array), on which map is applied to extract the two values and add them.
> If zip/map are called directly on two Options, you get an Option containing a tuple back out. If they're done on two arrays, you get an array containing tuples back out. If they're done on two iterators, you get an iterator containing tuples back out.
But that's my whole point. std::option::map is not the same function as std::array::map, which is not the same function as core::iter::Iterator::map. One big difference, for example, is that core::iter::Iterator::map is lazy, while the others are not, hence the note to try to avoid chaining std::array::map, and being careful around it in performance-critical code: https://doc.rust-lang.org/src/core/array/mod.rs.html#466.
Even with HKTs, while you could share some code, that wouldn't solve the fact that the "direct map" (std::option's map, for example) is strict, while core::iter::Iterator::map is lazy. Especially in a language often used for performance-sensitive tasks, I can't agree that understanding one map translates to understanding them all, since that would ignore part of their ergonomics and, more importantly, their performance characteristics.
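The strict/lazy difference is easy to observe directly (a small sketch using a `Cell` counter to record when the closure actually runs):

```rust
use std::cell::Cell;

fn main() {
    let calls = Cell::new(0);

    // Iterator::map is lazy: the closure doesn't run until the
    // iterator is consumed.
    let mapped = [1, 2, 3].iter().map(|x| {
        calls.set(calls.get() + 1);
        x * 2
    });
    assert_eq!(calls.get(), 0); // nothing has run yet
    let doubled: Vec<i32> = mapped.collect();
    assert_eq!(doubled, vec![2, 4, 6]);
    assert_eq!(calls.get(), 3); // ran once per element on collect

    // array::map is eager: the closure runs for every element right away.
    let arr = [1, 2, 3].map(|x| {
        calls.set(calls.get() + 1);
        x * 2
    });
    assert_eq!(arr, [2, 4, 6]);
    assert_eq!(calls.get(), 6); // three more calls, immediately
}
```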
> Do folks really prefer the latter example? The first one is so clear to me and the second looks inscrutable.
Literally everything in programming is inscrutable until you learn it the first time. The latter should be trivial to understand for anyone who's spent even a little amount of time in a language with functional elements.
A day-one beginner doesn't understand a `for` loop. You probably think they're trivial. Bitwise operations are the same. They might be new to you, but `zip` and `map` frankly don't take much more effort to understand than anything else you probably take for granted. `zip` walks through everything in two separate wrappers and pairs up each element inside. `map` opens up a wrapper, lets you do something with what's inside, and re-wraps the result.
For instance, you can do the exact same thing with arrays. Pair up each element inside (like a zipper on clothing), then for every element inside, add them together:
    [1, 3].zip([4, 1]).map(|(a, b)| a + b) // [5, 4]
That said, you can write this specific function even simpler:
The latter is a lot clearer and simpler. The former requires me to reason about control flow, if, and early return, a whole bunch of magic concepts. The latter is just an expression made of normal functions; I could click through and read their implementation if I was confused.
You're correct, except that "else" is a keyword and so cannot be used there. You'd want
    _ => None,
instead, which is the "catch all" arm.
(For those that didn't catch it, the parent code is trying to use None, but it's actually a tuple, and there's four different cases here, not two. So the catch-all arm is better than spelling each of them out in this case.)
This has been a thing since Rust 1.0. Just use the beautiful properties of match (or the "later" `if let`, of course). I prefer this and wish I could say it was idiomatic, but some tools like clippy push users over to helper methods instead of simple applications of match.
Clippy will not complain about the parent's code. It's not really in place of an if; there's four cases there. To be honest, I find 'if let... else None' to be worse looking than the match, though I'm unsure if I truly prefer the zip version to the Some(x? + y?) version.
Once you know what `map` does with an `Option`, I'd say it is mostly pretty readable. Basically, `map` (when run against an `Option` value) is a way to say "if the value passed in has a value, run this function on it; otherwise return None."
The first one is very clear, I agree. However if I wrote Rust daily, I would probably be familiar with the second one and would prefer it. Here's an article kind of related to that, in this case talking about APL, that I think explains very well the tradeoffs: https://beyondloom.com/blog/denial.html.
To try with my own words: programming is about shared understanding of a problem, but also the tools used to solve the problem. Code is text, text has a target audience. When it is experts you can use more complex words, or more domain-specific words. When it's intended for a wider audience, taking the time to explain and properly define things, sometimes multiple times, can be necessary.
According to Rust's documentation of `Option`:
> zip returns Some((s, o)) if self is Some(s) and the provided Option value is Some(o); otherwise, returns None
> zip_with calls the provided function f and returns Some(f(s, o)) if self is Some(s) and the provided Option value is Some(o); otherwise, returns None
Using zip_with seems more appropriate (`x.zip_with(y, +)` or something), but zip_with is nightly-only. I also don't like how method chaining makes it so that x seems more "fundamental", or "in another category", than y and +, while really x and y are the same, and + is something else. The if solution clearly shows that x and y are the same, by treating them exactly the same. The second solution also introduces a and b from nowhere, doubling the number of variables used in the function. All small things, but I think they can help put words on why precisely the second isn't as readable as it may seem.
It's interesting how much can be said about a simple "add" function.
Not sure if it's only me, but after using `zip` for the first time in any language, I tend to overuse it even when there are better, more idiomatic alternatives.
What’s there to say? It works the same way as `zip` does for an iterator over a `Vec`. So if you understand `zip` as applied to an iterator over `Vec`, then you might understand `zip` applied to `Option`.
In other words:
Y is clear if you already understand X. Like how returning early is simple to understand if you understand if-blocks.
That’s the problem with replying to these kinds of questions: the response is so short and context-dependent that it can look curt.
EDIT: the first code is also inelegant in that it effectively checks if both values are `None` twice in the `Some` case: once in the if-condition and once in `unwrap()` (the check that panics if you try to unwrap a `None`).
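For illustration, a hypothetical reconstruction of that first style (the actual code isn't quoted in this thread; name and types are assumed):

```rust
// The Some-ness of x and y is checked once in the condition,
// then checked again inside each unwrap(), which would panic on None.
fn add(x: Option<i32>, y: Option<i32>) -> Option<i32> {
    if x.is_some() && y.is_some() {
        return Some(x.unwrap() + y.unwrap());
    }
    None
}
```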
My issue was with the `zip()` usage. For lists I know that it will stop short once one of the lists has run out of items, but I haven't seen it used this way to combine optional values. I'm assuming it only produces a result if all of the elements passed in are non-null (based on the prior code) but it still seems too clever IMO. IDK, maybe this is a common pattern I'm unaware of.
I've done a non-trivial amount of functional programming and know what `zip` and `map` do, but I can't work out off the top of my head how that example works.
Options can be mapped/zipped/“iterated” over (traversed might be a better word?).
So in this case it’s using that fact to form a tuple of non-null values whenever the option is not null, and then acting on that. I think it’s kinda neat, but I wouldn’t personally use zip in this case, I’d have gone with map or and_then depending on whether the output of my operation is T or Option<T>.
As someone new to Rust, I look at the latter and see `.zip()` (apparently unrelated to python zip?), and then a lambda/block which intuitively feels like a heavyweight thing to use for adding (even though I'm like 90% sure the compiler doesn't make this heavyweight).
By comparison, the first one is straightforward and obvious. It's certainly kinda "ugly" in that it treats Options as very "dumb" things. But I don't need to read any docs or think about what's actually happening when I read the ugly version.
So TLDR: This reads a bit like "clever" code or code-golfing, which isn't always a bad thing, especially if the codebase is mature and contributors are expected to mentally translate between these versions with ease.
What you find "clever" or not is really a function of what you are most used to seeing. There are likely many folks who use combinators frequently who find them easier to read, myself included.
The first example, to me, is the worst of all worlds: if you want to be explicit, use `match`. Otherwise, having a condition and then using `unwrap` just feels needlessly complicated... Just adding my subjective assessment to the bikeshed.
In the first example—the longer, tedious one—I have to look at the condition to make sure the resulting `unwrap`s never actually happen, and if I reason about it wrong I get an application panic.
x.zip(y).map(|(a, b)| a + b)
The above is 100% clear, can obviously never panic, trivially produces optimal code, and is how you'd write the exact same operation to add elements between sets, arrays, or anything else iterable.
People act like the above requires a Ph.D. in Haskell when it really requires about fifteen minutes of playing around with basic functional concepts that are in at least half the popular programming languages these days. At which point you realize a ton of annoyingly tedious problems can be solved in one line of code that can be easily comprehended by anyone else who's done the same thing.
It's the same thing as driving on the highway. Anyone driving slower than you is an idiot, anyone driving faster than you is a maniac.
> So TLDR: This reads a bit like "clever" code or code-golfing, which isn't always a bad thing, especially if the codebase is mature and contributors are expected to mentally translate between these versions with ease.
You contradict yourself. You can’t deride it as “clever” (whatever the quotes mean) and then in the next breath say that it might be a practical style.
And yes, Rust code in the wild does look like this.