Nonetheless, I'm also curious about the choice, but couldn't find a lot about it. I guess there has to be some trade-off to using LDPC instead of Reed-Solomon. I only found this paper, but haven't read through it, so no conclusion as of yet:
> Efforts are underway in National Aeronautics and Space Administration (NASA) to upgrade both the S-band (nominal data rate) and the K-band (high data rate) receivers in the Space Network (SN) and the Deep Space Network (DSN) in order to support upcoming missions such as the new Crew Exploration Vehicle (CEV) and the James Webb Space Telescope (JWST). These modernization efforts provide an opportunity to infuse modern forward error correcting (FEC) codes that were not available when the original receivers were built. Low-density parity-check (LDPC) codes are the state-of-the-art in FEC technology that exhibits capacity approaching performance. The Jet Propulsion Laboratory (JPL) has designed a family of LDPC codes that are similar in structure and therefore, leads to a single decoder implementation. The Accumulate-Repeat-by-4-Jagged-Accumulate (AR4JA) code design offers a family of codes with rates 1/2, 2/3, 4/5 and length 1024, 4096, 16384 information bits [1, 2]. Performance is less than one dB from capacity for all combinations.
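Taking the quoted numbers at face value (rate R = k/n, with k the information-block length), each rate/length combination implies a transmitted codeword length n = k/R. A quick back-of-the-envelope sketch, nothing beyond arithmetic on the abstract's figures:

```python
# Codeword lengths implied by the quoted AR4JA family: code rate R = k/n,
# so each (rate, information-length) pair gives n = k / R transmitted bits.
from fractions import Fraction

rates = [Fraction(1, 2), Fraction(2, 3), Fraction(4, 5)]
info_lengths_k = [1024, 4096, 16384]

for k in info_lengths_k:
    for r in rates:
        n = k / r  # total coded bits per block
        print(f"k = {k:>5} info bits at rate {r}: n = {int(n)} coded bits")
```

So the rate-1/2, k = 16384 member of the family would transmit 32768 coded bits per block, for example.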
My guess at this point is that it's probably just "We've used Reed-Solomon a bunch, we know it works. We're working on newer techniques, but let's use what we know works."
Reed-Solomon is better at handling longer runs of missing or corrupted data (which could come from objects passing by, for example), and it is a lot cheaper to decode. Computation in space is very expensive.
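To make the burst point concrete, here's a minimal, self-contained Python sketch (purely illustrative, nothing from the actual JWST coding chain): Reed-Solomon works on 8-bit symbols, and its correction budget is spent per corrupted symbol, so a long contiguous run of bad bits only costs a handful of symbols.

```python
# Illustration only: how a contiguous bit-error burst maps onto 8-bit symbols.
# Symbol-oriented codes like Reed-Solomon over GF(256) pay per corrupted
# symbol, not per corrupted bit, which is why bursts are comparatively cheap.

def symbols_hit_by_burst(burst_start_bit: int, burst_len_bits: int,
                         symbol_bits: int = 8) -> int:
    """Number of symbols touched by a contiguous burst of bit errors."""
    first_symbol = burst_start_bit // symbol_bits
    last_symbol = (burst_start_bit + burst_len_bits - 1) // symbol_bits
    return last_symbol - first_symbol + 1

# A 64-bit burst corrupts at most 9 bytes, so a code that corrects
# t >= 9 symbol errors (e.g. the classic RS(255, 223) with t = 16) absorbs it.
for start in (0, 3, 7):
    print(f"64-bit burst starting at bit {start}: "
          f"{symbols_hit_by_burst(start, 64)} symbols corrupted")
```

The same 64 errors scattered randomly could land in up to 64 different symbols, which is where interleaving helps.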
I suspect you're right, but it seems that the capacity advantage of convolutional codes only becomes significant at very low SNR, so maybe for deep-space probe applications. Also, unless interleaving is used, Reed-Solomon can do better against bursts of errors, though I'm not sure why the noise profile would be any different.
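On the interleaving point, here's a toy block-interleaver sketch (illustrative only, not the CCSDS scheme; the names and the depth of 5 are made up): codewords are written in as rows and transmitted column by column, so a channel burst no longer than the interleaver depth touches each codeword in at most one symbol.

```python
# Toy block interleaver: rows = codewords, transmission order = column-wise.
# A burst of up to `depth` consecutive channel bytes then corrupts at most
# one byte per codeword, which even a weak Reed-Solomon code can repair.

def interleave(codewords: list[bytes]) -> bytes:
    """Transmit one symbol from each codeword in turn (column-wise readout)."""
    length = len(codewords[0])
    depth = len(codewords)
    return bytes(codewords[row][col] for col in range(length) for row in range(depth))

def deinterleave(stream: bytes, depth: int) -> list[bytearray]:
    """Invert interleave(): regroup the received stream into `depth` codewords."""
    length = len(stream) // depth
    codewords = [bytearray(length) for _ in range(depth)]
    for i, byte in enumerate(stream):
        codewords[i % depth][i // depth] = byte
    return codewords

codewords = [bytes([value] * 8) for value in range(5)]   # five toy "codewords"
stream = bytearray(interleave(codewords))
stream[10:15] = b"\xff" * 5                              # 5-byte channel burst
received = deinterleave(bytes(stream), depth=5)
print([cw.count(0xFF) for cw in received])               # -> [1, 1, 1, 1, 1]
```

The cost is buffering and latency on both ends, and bursts longer than the interleaver depth still pile up inside individual codewords.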
So, as you say, maybe it was just faster to integrate the already-certified equipment at that stage of development.
idk, the article also mentions they've been working on it for 20 years. I wouldn't be surprised if they just got to a point that was good enough and then didn't want to mess with things.
real tragedy is that they didn't use cutting edge web7.0 tech for their front end smh
I think there's a pretty good chance that their data encoding scheme was working, and so they just left it in a working state, without upgrading it to use modern best practices.
Note that this mission was specced and designed ages ago, so just as the observations it makes are views of the past, the engineering behind it is a time capsule too.