[1] "[...] it turns out the error was caused due to buggy code and nothing I was or wasn't doing wrong." - if the code is so obviously buggy and the backdoor part isn't obviously a bug, the developers are probably just being sloppy, not malicious (bad or rushed development).
I think it can go the other way in a couple of cases though:
A) WD (by a government) or its staff (by WD management) were ordered to put in the backdoor, but didn't agree with doing so, thus made it obvious in the hopes that it would be found.
B) A backdoor that is found but written off as sloppy development is less damaging than a bug that, if found and analysed, looks deliberate (because bad development practices are hardly new for hardware manufacturers). _Potentially_ makes exploiting it less risky as well - if it's an obvious or known thing, the attacker could be anyone. If it's a subtle, undisclosed bug (that hasn't been used against many targets), that suggests, to some extent, the involvement of whoever could arrange for the bug to be placed there.
It probably isn't deliberate, but that possibility certainly isn't excluded either, so I'd be cautious about treating this as a hard and fast rule.
Wow. Don't think I've had a comment with negative points on HN before. Curious as to why the downvotes. I was attempting to point out that I did not consider what I said in the original comment to be a hard and fast rule.
Is the objection that the way I said it was too disrespectful (no offense was intended, hence the :-) )? Disagreeing with the original statement (it might have downvotes too, but maybe more upvotes are hiding that)? Too obvious? Something else?
If you feel compelled to put a smiley face after a sentence to make it more agreeable, it's a sign that you were uncomfortable with the tone of the sentence. Adding a smiley just comes across as passive aggressive or condescending. Just rewrite the original sentence so that you're comfortable with the tone, and omit the smiley.
Having identified a backdoor in a major product myself, I eventually had a clear and frank meeting with product managers about the cause. That being, it was there because it was expected only "certified" engineers would know about it.
The initial view was a hotfix that just changed the hardcoded password, because senior management felt lightning would not strike twice.
I found the password (actually, it used an s/key generator[0]) by similar means to this write-up. It didn't matter if I "found" it again because at this point I was in direct contact with them and considered "trusted". The principle that someone else could just as easily find it gets back to the lightning-striking-twice argument, and was dismissed for a long time. Eventually they agreed to change the behavior, but it took a while.
The timeline on the below link only refers to one specific incident; the discussion of the principle itself went on for roughly 11 months and left me extremely disillusioned with the concept of responsible disclosure.
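For anyone curious how hardcoded credentials get found "by similar means to this write up": the usual first pass is dumping printable strings from a firmware image, which a few lines of Python can mimic. The firmware blob below is entirely made up for illustration, though the embedded username is the one reported for the WDMyCloud backdoor:

```python
def extract_strings(data: bytes, min_len: int = 6):
    """Emit runs of printable ASCII at least min_len bytes long,
    which is essentially what the Unix `strings` utility does."""
    found, run = [], bytearray()
    for byte in data:
        if 32 <= byte < 127:  # printable ASCII range
            run.append(byte)
        else:
            if len(run) >= min_len:
                found.append(run.decode("ascii"))
            run.clear()
    if len(run) >= min_len:
        found.append(run.decode("ascii"))
    return found

# Made-up firmware blob with an embedded credential (the username is the
# one from the WDMyCloud advisory; everything else is filler bytes):
firmware = b"\x7fELF\x01\x00admin_login\x00\x03\x9cmydlinkBRionyg\x00\xff"
print(extract_strings(firmware))  # ['admin_login', 'mydlinkBRionyg']
```

Against a real image you would run this over the unpacked firmware and grep the output for things like `admin`, `passwd`, or `telnetd`.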
By the same token, if there's little accountability on the project, one might intentionally leave a debugging mechanism in for dubious purposes because they feel they have deniability if/when it's found. "Oh sorry, just a debugging thing I accidentally forgot."
It's just a rule of thumb, not an absolute. I tend to believe most developers are just doing a job (good or bad), and not out to get someone. So to me it's a numbers game where one explanation seems much more likely.
It's not that difficult to put in something much more subtle and still have deniability if the intent was malicious.
I think you give these guys way too much credit. It's more likely some clueless PM simply asked a clueless junior developer to add a backdoor and due to lack of good processes nobody ever reviewed that commit (assuming those guys even use version control).
If a recent CS grad doesn't have common sense, they chose the wrong career path. That individual would probably be equally bad in other industries, though.
Ah, yes; the Russian defense. I know it well. How did it go?
"The hard-coded backdoor that was found can't be a hard-coded backdoor, because Western Digital would never be so crass and incompetent as to put a hard-coded backdoor hidden in such a way that a security researcher would find it and attribute it to them."
How can one argue against such flawless logic when it even has an aphorism to describe it?
The key to the rule of thumb known as Occam's razor is not that the explanations be simple or complex, but that they be adequate to a knowledgeable person, and that given two (or more) adequate explanations, one simple and one (or more) complex, the complex ones be cut in favor of the simplest one.
The original statement was "Never attribute to stupidity alone that which is adequately explained by both stupidity and malice."
It seems to me that both are adequate explanations. One explanation requires one factor (stupidity), and the other two factors (stupidity and malice). The one factor explanation would seem to be the simpler of the two explanations.
It seems to me the original statement reverses the logic of Occam's razor by preferring the more complex explanation. I think my point still stands: the original statement makes as much sense as trying to reverse Occam's razor.
I always wondered how they were able to sell these devices so cheaply. I recently bought two 4TB ones to take the hard drives out, as they were each $30 cheaper than buying a regular 4TB hard drive.
I pulled some old MyBook hard drives out of their cases and discovered they were unreadable via a standard SATA connection. They were older MyBooks designed for XP, so I thought it was just having trouble because they were using 4k sectors.
I found some information that claimed the older MyBooks would AES encrypt the data (even if you never setup a password) making the data totally inaccessible if the factory enclosure ever broke.
Fuck that shit. I put the drive back in the enclosure and copied everything off, then formatted the disk from a real PC and threw that shit away. Today I always buy separate disks and enclosures that allow direct disk access.
I bet they do the encryption like that so that you can quickly securely wipe them (just drop the encryption key) or easily add a password to the encryption later (just encrypt the encryption key by a password). It's too bad if it's completely unusable without the enclosure, but that general technique is one that I'm a huge fan of.
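That key-wrapping technique can be sketched in a few lines of Python. This is a deliberately simplified illustration, not how any real drive implements it: XOR stands in for a proper algorithm such as AES key wrap, and the salt and iteration count are arbitrary demo values:

```python
import os
import hashlib

data_key = os.urandom(32)  # the key that actually encrypts every sector

def wrap_key(key: bytes, password: str) -> bytes:
    """Encrypt the data key under a password-derived key (KEK).
    XOR stands in here for a real algorithm such as AES key wrap."""
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(),
                              b"fixed-demo-salt", 100_000, 32)
    return bytes(a ^ b for a, b in zip(key, kek))

def unwrap_key(wrapped: bytes, password: str) -> bytes:
    return wrap_key(wrapped, password)  # XOR is its own inverse

# Adding a password later: just wrap the existing key. The sectors on
# disk never need to be re-encrypted.
wrapped = wrap_key(data_key, "hunter2")
assert unwrap_key(wrapped, "hunter2") == data_key

# Secure wipe: overwrite the stored key material and every sector is
# instantly unrecoverable, no full-disk erase pass required.
wrapped = os.urandom(32)
```

The design means the drive is always encrypting; the password only controls access to the key, which is why enabling or removing a password is instant.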
If the claim is true, would you rather someone be able to pull the drive out and access your data? This is like breaking an HSM. So why "fuck that shit"? I hear a lot of people worrying there's not enough encryption, so perhaps take this as a positive thing? Having an extra layer of protection is never a bad idea.
Because I didn't ask for it! In fact, I already ran TrueCrypt and LUKS on those drives, so I was encrypting the data myself! But now, if any of the enclosures break (the controllers go bad), I have no way of recovering the data at all! That's a shitty design.
What if this is the backup and the enclosure is damaged by some natural disaster but the disk is OK. I guess he should have thought of that possibility, but what if he didn't?
Yes, that's the entire point of the discussion. Yet people still claim you should back up your backup. But where are you going to back it up to? To another encrypted drive whose enclosure might fail? No, you're going to use an unencrypted HDD and be done with it. You can always add encryption on top of an unencrypted HDD.
Sure, I completely agree. It strikes me as unnecessary, however, for a manufacturer to deliberately add a counter-intuitive and indeed dangerous element to a device commonly used for backup. In particular, many home backup systems are not created by experts who might think of things like this.
You would think that, but at least with the 8TB version they are often standard Red drives that look exactly the same as any other. It could just be that since they are their own drives, they can afford to price them much closer to their manufacturing costs.
I think it's standard price discrimination— internal drives are often bought alongside other expensive equipment as part of a "build" so that purchaser is a lot less price sensitive overall than someone who walks into Best Buy looking for a USB3 drive to use with Time Machine.
Or the drives could be refurbished, returned or otherwise unable to be sold as new. Based on the pricing, that always seemed like the most likely explanation.
Worth checking the model before doing so - I've had at least one drive where the USB connection was built into the drive itself (i.e. no SATA connection). I think it was a 2.5" drive, but I'd presume it's at least possible with a 3.5" drive.
Well, shit. I've had one of these as a stand-in for hopefully eventually getting a Synology NAS, and now I'm paranoid about continuing to use my WD MyCloud. The thing is, even though I believe I have good reasons to trust Synology more, I don't want to have to trust anyone. Not Intel or AMD, not WD or Synology. Computers are quickly becoming a source of implicit distrust for me.
> Well, shit. I've had one of these as a stand-in for hopefully eventually getting a Synology NAS, and now I'm paranoid about continuing to use my WD MyCloud.
The hardware is still fine. You can put Debian on it!
There is a very active forum of people replacing the WD firmware with Debian on various models (EX2 Ultra, EX2100, EX4100):
https://forum.doozan.com/list.php?2
Sadly homebrew Debian variants aren't very safe either. Usually they lack timely OS updates, ship pregenerated ssh host keys in the images, stuff like that. Even Raspbian is pretty bad at this though better than many smaller ones.
I guess people here know that. The thing is, these little consumer devices can be had dead cheap. I bought one for my parents and it cost €30 more than the hard disk inside would have cost at retail. For €30 more I got a nice little case, an embedded device that is good enough for the task, and now I will install OMV on it.
Sure, at home I have a real server with ECC RAM, running a Solaris derivative with additional VMs... but hey...! ;-)
Aahhg. This gets repeated so often. FreeNAS and ZFS are not any worse, and are probably better than everything else, at dealing with non-ECC RAM. ECC is recommended because it's generally a smart idea.
ZFS is worse because ZFS in general only has one failure recovery method: restore from tape.
Most other mainstream filesystems have tools that attempt to repair the filesystem, meaning you can often recover the data.
ZFS also makes heavier use of memory, which means it is more likely to be affected by memory issues.
Is it likely? No. But since we mortals seldom have tape backups and usually can't afford to back up everything on our NAS, I'd say that ECC is recommended for ZFS more than for most other filesystems. That's just a consequence of how it is used and the lack of recovery tools, not of the filesystem itself.
If you're upgrading the overall specs, it's going to be more expensive, yep.
If you're trying to provide equivalent functionality, you can keep the same specs (or, more likely, get a spec upgrade anyway) and deliver the same quality end result (i.e., no ECC RAM) at a lower cost. You're unlikely to suffer a performance or integrity penalty compared to the equivalent off-the-shelf NAS solution.
If you want to substantially improve on the off-the-shelf NAS solution, ECC RAM is recommended, and will at that point, probably become more expensive.
Then again, if you want 10+ drives, you'll probably still be cheaper doing it yourself.
You need to upgrade the specs because ZFS is much more demanding than the normal Linux filesystems I assume most COTS solutions use (the absolute minimum requirement for FreeNAS is 8 GB of RAM; I would guess most off-the-shelf solutions have no more than 2 GB). The CPUs used in COTS solutions also wouldn't cut it in a FreeNAS system.
The big death blow, however, is the inability to grow a vdev, which means you have to pay for the redundancy each time you upgrade. That isn't cost effective, so the alternative is to buy all the drives up front, which robs you of the chance to take advantage of falling prices (as you probably don't need all that space on day one but gradually increase the storage used). This forces you to pick smaller drives that are more cost effective today (compared to buying a few larger drives that will become cheaper in the future), which means you will have to replace the drives sooner when you run out of room in your chassis (or get tired of the noise and power consumption).
I love FreeNAS; in my opinion it is the best solution for the home NAS. I use it myself, but I do consider it to be quite costly (whether it is worth it is up to you). The end result, all things considered, is easily more than double the cost of a COTS solution that you can easily grow, or of a more regular Linux+mdadm setup. And you also need to buy more of it up front rather than spreading the cost out over the years.
But it does depend on what your needs are and how much data you use.
You may not be able to grow an existing vdev, but you can add another vdev to an existing zpool. And although having all vdevs be identical gives more consistent performance, it’s also possible to mix different vdevs in the same zpool. Multiple vdevs also result in better performance, as each raidz/raidz2/raidz3 vdev only has the equivalent IOPS of a single drive, but without the write hole drawbacks of more traditional RAID5. Recovery time with multiple vdevs would also be much better than a single large vdev, which is becoming more and more of a concern as drive capacities continually increase while IOPS remain more or less the same.
Yes, all very true, but all of that is utterly irrelevant for the home NAS in 95% of cases since performance is not that critical and it comes at an enormous cost of wasted space, also physical space is a huge deal since adding two redundancy drives for each raidz2 quickly adds up. As well as noise etc.
And if we want to talk optimized setups, it really isn't recommended to add new vdevs to an already populated pool, since the usage will be quite unbalanced, which affects performance as well.
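The trade-off being debated here can be made concrete with some rough arithmetic (drive counts and sizes below are hypothetical):

```python
def raidz_pool(vdevs: int, drives_per_vdev: int, parity: int, drive_tb: float):
    """Rough usable capacity, redundancy overhead, and IOPS for a pool
    of identical raidz vdevs. Each raidz vdev delivers roughly the
    random IOPS of a single drive, so IOPS scale with vdev count."""
    usable = vdevs * (drives_per_vdev - parity) * drive_tb
    overhead = vdevs * parity * drive_tb
    iops_units = vdevs  # in multiples of one drive's IOPS
    return usable, overhead, iops_units

# Ten 4 TB drives either way: one wide raidz2 vdev vs. two narrow ones.
print(raidz_pool(1, 10, 2, 4.0))  # (32.0, 8.0, 1) -> more space, less IOPS
print(raidz_pool(2, 5, 2, 4.0))   # (24.0, 16.0, 2) -> double IOPS, 8 TB lost
```

Which column matters more depends on the workload, which is exactly the disagreement between the two comments above: home media serving rarely needs the extra IOPS, but it always needs the space.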
Synology has its own backdoor as well [1]. Luckily the backdoor is only active when the OS has crashed and the device is in recovery mode.
They also have some very odd practices, like hard-coding specific users and the shell to be used inside OpenSSH binary [2]. This was 2012; haven't checked if the code improved since then.
I'm aware of FreeNAS. I used to run a custom NAS setup. What I've learned is that it's not worth my time to be building NAS setups. It can be a surprising ordeal from both a hardware and software PoV. If I were to build my own NAS, the first item on my wish list would be open source hardware with no firmware blobs and no hidden processors that I didn't ask for. Where do you go to get that? AMD has PSP, Intel has ME, ARM has TrustZone (yes, I'm aware these are not identical), and all of them feel untrustworthy to me now.
If RISC-V is fast enough to power a NAS and open source SATA controllers exist, I would absolutely prefer a setup involving that.
If I'm not going to get that... I'd probably prefer a setup from Synology. They'll do a good enough job, probably better than I could, even if the price is a premium.
I have one and it's pretty awesome. It has been running for years, and I have replaced the disks multiple times by now.
It's very upgradable as well: you can put in more RAM, caches, and so on.
Open Hardware is pretty much a no-go at the moment, but iXSystems does support a lot of Open Source.
I would recommend this setup over any of these Synology things; when I compare my problems with those of my friends who have Synology, I usually come out pretty happy with my choices.
I understand what you’re trying to say, but what's the alternative? Stop using computers? Once Meltdown and Spectre are out of the picture, there will be another crisis. That’ll happen forever, I think.
Honestly, the older I get, the more sick of IT I become, but what can we do about this?
I'm in the market for a NAS right now, so this is perfect timing. I'm more willing to trust FreeNAS than MyCloud, but as soon as I write that, I remember that lots of open source is crap too.
You get FreeNAS on good hardware from people who know what they are doing. I have been running this for years and I have no complaints, except that the 8-slot variant did not exist when I bought mine.
I'm having modest success using an eSATA enclosure on Windows with Storage Spaces. On the Linux side, BTRFS may be suitable but harder to manage (This is part of what Synology/NetGear are providing management around.)
Unfortunately, I've learned this the hard way. Twice.
I'm sure it's more stable now than it was when I used it last, but it's frustrating when people say "no, this time it's really stable!" and then you lose data due to a bug. This same thing has never happened to me with ZFS on Linux, which is a shame because the licensing issues make it painful for me to use. I like up to date kernels.
I feel like btrfs should be phased out. It was clearly developed in a fashion that did nothing to prevent preventable bugs. This may be the same frustration that's leading RedHat to work on their own competing filesystem...
Any suggestions for a dynamically grow-able/shrinkable system of mixed drives for linux? When I last looked (and why I went with Storage Spaces) was that ZFS requires pools to be fixed at initialization, and I wasn't sure what other OSS solutions were mature at the time.
I have no suggestions honestly, I have looked into it pretty extensively and would like to go ZFS. But it will cost a decent amount for the setup I am looking for. It seems to be "all or nothing"
I recently moved to a FreeNAS machine, as these devices had the weakest CPU and as little RAM as possible while still functioning. That made buffering media over the network a challenge when it should have been effortless.
A general question: why is no one suing people who put in back doors? Where are the "reckless negligence" suits? Especially injured third parties, who never agreed to an overreaching EULA.
You can't sue on formal grounds; you need a material claim that your rights have been harmed. "Reckless negligence" sounds like a criterion for determining whether the damage is actually a liability of the defendant. And the fact that you had to install a patch, which is available according to other comments, is likely not enough. Unless there are rules I don't know about, or you can construct an argument that backdoors are per se illegal.
Maybe you could claim that someone is offering copyrighted material on the internet, because that's illegal per se, no downloading required, but beware of the backfire.
I was approached on Twitter by an attorney for a class action against Payfone et al when their demo popped up here. It won’t take much effort to find one who sees a case here.
Synology. Their software is the OS X of NAS's. Everything is laid out logically and they have great long-term support. They're the only NAS manufacturer with a statement about Meltdown that I could find.
FYI - Synology DSM does not support full disk encryption. It can do folder encryption through eCryptfs though. Just a heads up if that's something that you require.
Synology is arguably the best you can buy. Their long-term support is great, and the bundled apps are great by first-party standards. Their Knowledge Base covers every detail you would need to harden your NAS setup, including HTTPS w/ Lets Encrypt.
I really love my QNAP device, but the software stack is so amazingly complex it sort of scares me how many vulnerabilities might be lurking. Hopefully the QNAP security team is full of decent folks.
For reference, I have a QNAP J3455 device. It remotely streams up to 4 hardware-accelerated Plex streams, runs lots of SSL connections, hosts all my files in a myriad of cool ways, and never gets above 30% CPU load. It's amazing.
I was rather hoping that someone would list some low power options. When I look into building my own, I usually end up looking at what is basically a PC, with typical PC power draw.
A relevant but still more general question: how can we protect ourselves against backdoored products that are covertly subsidized by governments?
Sure, demanding the sources is a necessary first step. But what happens when the manufacturer refuses and there is not enough competition in the market (see Intel and laptops)?
This situation has been a problem for years now. What can be done? What regulation or law would help? What should we demand?
> A relevant but still more general question: How can we protect ourselves against backdoored products that are covertly subsidized by governments?
There's a big issue with quality on devices, but spreading conspiracy theories only harms that cause. There's no reason to believe this is connected to a government (it's way below the level of craft we've seen in that regard), and making dubious claims is more likely to cause people to take you and the broader argument less seriously.
> This situation has been a problem for years now. What can be done? What regulation or law would help? What should we demand?
Two good starting points would be protection for security researchers and the requirement that manufacturers promptly support devices for a reasonable amount of time. Things like this happen because there's very little perceived cost to shipping something shoddy compared with not getting as many features to market as quickly as possible.
A followup point, especially for restoring trust that there aren't sophisticated backdoors, would be not just source code but fully reproducible, user-installable builds. This is still fundamentally a losing game if you don't trust the hardware but it'd dramatically increase the odds of someone being able to notice an error, not to mention being a huge win for users’ ability to improve an orphaned device.
The reason why that's unlikely to happen is that companies treat source code as a significant asset, which is why I first mentioned a longer support period. My favorite approach for this problem would be regulation requiring mandatory release of source code, the toolchain, signing keys, etc. if the manufacturer stops supporting something, so the places which want to keep their trade secrets can still do so but are required to help their users at the same time.
"The reason why that's unlikely to happen is that companies treat source code as a significant asset, which is why I first mentioned a longer support period. My favorite approach for this problem would be regulation requiring mandatory release of source code, the toolchain, signing keys, etc. if the manufacturer stops supporting something, so the places which want to keep their trade secrets can still do so but are required to help their users at the same time."
I mostly concur, therefore I certainly hope you're all donating to the Software Freedom Conservancy for their GNU GPL enforcement efforts (see https://sfconservancy.org/supporter/ for more) and encouraging people to license their free software under a strongly-copylefted free software license such as the GNU GPL v3 or later, or the AGPL v3 or later. These licenses allow users to request and deserve to receive complete corresponding source code, build instructions, signing keys, and other materials needed to build the software.
We all need free software for all our computers and we need it whether a manufacturer supports something or not. It's not the public's job to look out for Western Digital's interests including their alleged trade secrets. Western Digital still supports the WDMyCloud device but apparently can't be trusted to handle the software that device runs. It would help WDMyCloud users to publish that device's entire software as free software (if they haven't already), as well as the other things you rightly mention (build instructions, signing keys, and anything else needed to get the device running) so users aren't waiting for this less trusted party to make better choices. Users ought to be free to run, inspect, modify, and share this software or get someone else they trust to do this work on their behalf.
Regarding the first point: I don't positively state that governments do covertly subsidize backdoored products, but it's realistic and a real attack vector. Even if, for example, the state were totally benign right now, we need to be able to talk about the worst case, in case the state goes haywire, without being put in the corner with the crackpots.
Regarding the second point, I completely agree with you. First, a period of binding legal responsibility of the manufacturer for the product, including the software, followed by release of the full source, toolchain, and necessary cryptographic material. I think this should be law, and the first period must be time-limited. Reproducible builds could ensure that backdoors in the software are at least detected retrospectively, and the company and people behind it can be held accountable for them.
> "There's a big issue with quality on devices but spreading conspiracy theories only harms that cause. There's no reason to believe this is connected to a government — and it's way below the level of craft we've seen..."
I think you could be underestimating how and to what level the IC misinformation game is played.
I'm not disagreeing with you, just pointing out that your argument isn't as airtight as you believe.
We've also seen that governmental attacks are sophisticated because the attackers don't want them to be detected or used by an enemy. The NSA attacks were things like intercepting specific hardware shipments or network links (which reduces the number of people who could possibly notice the problem) or getting people to use a random number generator which only they have the key to reverse[1].
In contrast, this is a textbook example of a sloppy developer who doesn't understand security but is writing network facing code which is never properly audited, and it's consistent with the number of other bugs mentioned.
Saying that it might be a government is like saying that because the CIA has killed people every pedestrian hit by a drunk driver is probably an assassination.
I would say that if people on HN are saying the attack is too stupid to be government supported, then the government has succeeded at their primary goal of having plausible deniability with these issues.
If we take recent history, we now have hard evidence of all sorts of conspiracy theory type stuff being absolutely true. With that in mind, do we just keep defaulting to 'not government' every time there's a deliberate backdoor identified? Sounds like a great way to maintain the status quo and ensure that no action is ever taken to curb this.
> this is a textbook example of a sloppy developer who doesn't understand security
Your argument is that because one thing which some people considered a conspiracy theory (but most experts did not) turned out to be true, we should believe all of them?
Yes, there’s a hardcoded password. The field has a long history of people adding those to make support easier, and I’d bet a lot more that that password means someone with that name worked on the mydlink project than that the NSA put it there, just as most burglaries are routine crime even if the CIA or FBI has been known to quietly bug houses.
I've always argued (not always seriously) that closed source anything should be illegal. The response is often: But how do I exploit? The desire to exploit is apparently as strong as the desire to cry CONSPIRACY THEORIES which in all honesty is just 1984 style crimestop. If gubberment can backdoor your head like that no system is safe :P
Let's just make open source the rule and be done with it.
You all like open source and you are going to keep it. This is why you agree.
Open source is only going to protect you from non-intentional backdoors. If the government really wanted a backdoor, there's nothing preventing them from loading a modified version that has one. Short of dumping the firmware from NVRAM plus reproducible builds, you won't be safe.
So the government that subsidizes such things is the government that's going to protect us...from it (i.e., our government)? Not to go all tinfoil hat on you, but I think it's fairly obvious what side the gov is on. Hint: It's not the same side we're on.
The article doesn't seem to make clear that the 04 firmware which fixes this has been out for years (mid 2014, specifically). One nice thing about this device is that it is a real Linux system which can be used for hosting cheap services.
The article mentions firmware version 2.30.165, whereas mine is running 2.11.168 and, when checking for updates, reports back that I have the latest. I have the EX4 models.
I only run mine on private/home networks with no remote access in to them.
Watch out for things like <img src="http://your_nas_ip?backdoor&evil+stuff"> on random websites. They can put in 500+ images to cover all of 192.168.0.0/23.
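The numbers in that comment check out: a /23 contains 512 addresses, so 500-odd `<img>` tags really would cover it. A quick sketch with Python's `ipaddress` module (the request path borrows the `nas_sharing.cgi` name mentioned elsewhere in the thread; the actual parameters are omitted):

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/23")
print(net.num_addresses)  # 512

# A hostile page would embed one <img> per candidate address; each image
# load becomes an unauthenticated GET against that address on the LAN.
urls = [f"http://{host}/cgi-bin/nas_sharing.cgi" for host in net.hosts()]
print(len(urls))  # 510 usable host addresses (network/broadcast excluded)
```

This is why "it's only on my home network" is weaker than it sounds: any page your browser renders can originate requests to RFC 1918 addresses.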
2.11.168 is the latest firmware for My Cloud Mirror gen 1 [1]. 2.30.165 is the latest firmware for My Cloud Mirror gen 2 [2].
Both firmwares were released in Nov 2017, and I suspect the vulnerabilities were fixed at that time as well. At the very least nas_sharing.cgi was removed in both versions. But I haven't had a chance to finish my investigations [3].
From what I can tell the hard-coded backdoor vulnerability was remediated, but I see no indication the unrestricted file upload vulnerability has been remediated in any of the firmwares I tested. But I'm not a security expert. I've reached out to Gulftech and WD for clarification.
WD probably contracted D-Link to make these devices for them, i.e. D-Link is the OEM. The latter has been known for quite a few router vulnerabilities...
...but on the bright(?) side, I remember finding lots of software and other fun stuff on "public" D-Link NASes a few years ago, including information critical to repairing the products of one well-known and notoriously-closed company. ;-)
FYI, some MyCloud devices can be modified to just run Debian. I treat my MyCloud as a cheap Linux box with lots of storage in a convenient form factor. If it weren't for that, I'd just build a computer.
I am now less sure that anything is secure once it can be reached through the internet.
Maybe I have the old-fashioned view: a NAS should NOT be reachable through the internet at all.
I really want a Time Capsule for all my iOS devices, and to have it only accessible within my network. But then I am also paranoid about bit rot on HDDs, as I have seen far too many of my photos and videos with this problem. And I don't believe any consumer-grade NAS is quite capable of handling it yet.
I have yet to find a use case where I want ALL of my files, photos, movies, or whatever accessible wherever I am. Most of the time I only need one file for work, and it is normally in Dropbox or email.
If you want remote access, don't make the NAS directly accessible from the internet, set up a properly secured VPN instead. It is an additional step, but you'll be the one in control of access, not whichever faulty services are running on your NAS.
And don't buy a NAS appliance, buy an inexpensive server like a Lenovo TS150 or HP MicroServer, add a couple of RAID disks (Btrfs or ZFS preferred over hardware or software raid) and run something Linux/BSD based that you have better control over.
Btrfs and ZFS should be able to prevent bitrot. ECC memory is highly recommended.
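The core of how Btrfs and ZFS catch bitrot is checksumming every block on write and verifying on read. A toy in-memory model (purely illustrative, nothing like the real on-disk formats):

```python
import hashlib

class ChecksummedStore:
    """Toy model of a checksumming filesystem: every block is stored
    alongside its SHA-256 hash, and reads verify the hash first."""
    def __init__(self):
        self.blocks = {}

    def write(self, block_id: int, data: bytes):
        self.blocks[block_id] = (data, hashlib.sha256(data).digest())

    def read(self, block_id: int) -> bytes:
        data, checksum = self.blocks[block_id]
        if hashlib.sha256(data).digest() != checksum:
            raise IOError(f"bitrot detected in block {block_id}")
        return data

store = ChecksummedStore()
store.write(0, b"family photos")

# Simulate silent corruption on the medium: one character changes but
# the stored checksum does not.
_, good_checksum = store.blocks[0]
store.blocks[0] = (b"faMily photos", good_checksum)

try:
    store.read(0)
except IOError as err:
    print(err)  # bitrot detected in block 0
```

With redundancy (mirrors or raidz), ZFS goes one step further and rewrites the bad block from a good copy instead of just failing the read; ECC RAM matters because a bit flipped in memory before the checksum is computed is corruption the scheme can never see.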
Yes, that is why I said consumer grade. I don't want to fiddle with things anymore. I know Synology offers Btrfs, but only on their higher-end machines. I think their newest DS218 supports neither Btrfs nor snapshots, and it doesn't have ECC memory.
All your bits will rot if your house burns down. You will need something cross region (maybe your whole city gets nuked) but then you'll be exposed to the internet again. Snail mail your drives I guess?
Correct me if I'm wrong but if this is on your home network then you're only vulnerable to other people on your network right? Just don't port forward access for a start.
" the D-Link DNS-320L had the same exact hard coded backdoor and same exact file upload vulnerability that was present within the WDMyCloud. So, it seems that the WDMyCloud software shares a large amount of the D-Link DNS-320L code, backdoor and all. There are also other undeniable examples such as misspelled function names and other anomalies that match up within both the WDMyCloud and the D-Link DNS-320L ShareCenter code."
I haven't had a chance to finish my investigations yet, but at least part of the vulnerabilities were corrected with the latest firmware, so I'd start by updating the firmware. I wrote more here: https://news.ycombinator.com/item?id=16091555
Either both contracted the software out to the same third party company who reused code or one contracted the software out to the other.
Many times I've personally seen, and worked on, code originally developed for one customer reused in a project for another customer. Mind you, this is totally legal the way our contracts are worded.
For all we know WD could have paid DLink to produce the whole damn thing and just stick a WD logo on them. That sort of thing is far from unheard of.
FFS, can't you just buy anything these days without worrying about backdoors, or vulnerabilities in CPUs and motherboards? Why can't we just have nice things? Browsers have to run DRM, CPUs can be controlled by any USB device, motherboards ship web servers with code execution from LAN and internet, and IoT cameras make up the biggest botnets ever known. At this point I'm just waiting for internet-controlled kettles and ovens that will burn down neighbourhoods and cities.
When I was shopping for a lock recently, the store had electronic locks you could control with your phone! There's no way a lock maker could possibly get IoT security right. Who in their right mind would buy that lock? Once it's eventually cracked, a burglar just has to recognize its distinctive panel and google the exploit.
Economics. Enough people prefer faster and cheaper. Security and reliability cost time and money. By the time your product ships it's already obsolete and still costs more than customers are willing to pay.
Probably won't change until the costs of these types of bugs/design flaws outweigh the costs of preventing them.
If the backdoor was a hack for easier debugging that got left in, not having it during development could have made for a longer, more expensive development cycle. In which case it should have been removed before production.
It could also be intended as a support tool to ease hard-to-debug issues remotely. Not having it could make support issues more expensive and slower. Very insecure and misguided (security by obscurity is not security) if this is the case, but not malicious, just a stupid attempt at saving money. This apparently was the case for another commenter on a different product.
While it's possible it was placed with malicious intent, there are plausible (and all too common) alternatives that explain it.
I (strongly) disagree. You should expect the worst when using crap that you, or anyone else (publicly) cannot verify. To expect any less is to become complacent. Products, and companies behind them, must earn your trust, and hiding behind binary blobs is not the way.
I mean, yes, you or I might just get an ITX board or a Pi and run our own Linux on it to set up a NAS, but most average users aren't going to do that, either because they don't have the knowledge and expertise, or because they don't have the time.
Vendors with names people trust should be held accountable when they fuck up like this.
Also you can't trust everything all the way down. A Pi still has proprietary video chips, every Intel and AMD board has some kind of management software, etc. etc. There is no way for one human being to be all Tony Stark and build literally everything from the ground up.
Your parent post is talking about expectations; you are talking about reality. Just as everyone should have access to clean water and medical care, yet in reality many don't.
If you don't have access to clean water, you should publicly object.
If you don't know how to determine whether you have clean water (e.g. the parent comment), then either demand proof or expect, uh, dysentery and a bad time. You should not assume you have clean water.
Quite the opposite, actually: Everyone should expect to be backdoored by proprietary products if they use them, and take appropriate security measures.
https://www.gnu.org/proprietary/ has a list of other kinds of malware found in proprietary software sorted by type and company or type of product. It's very informative reading and touches on issues which often come up on sites like Hacker News.
I didn't state how I think it should be, but how reality is.
Now, for your mother, the government has the responsibility to protect the people from harms the people can't solve on their own. This happens in crime investigations, ecological regulations, car regulations and nuclear material regulation, among many other areas. In my eyes, security in IT products is clearly one of those areas. Your mother should ask the politician she voted for why they don't fulfill their responsibility.
In Germany, the highest court ruled in a legally binding 2016 decision (paragraph 38 in [0]) that the state is responsible for the confidentiality and integrity of the population's IT equipment. So far the German government has chosen to ignore that decision.
Merely installing Ubuntu on a ThinkPad doesn't free you from proprietary software, not by a long shot.
Do you also run an open BIOS, no CPU ME, no cellphone basebands, OSS controllers in your hard drives, and control the firmware on all of your USB/PCIe peripherals?
If you actually do, you ought to realize that you're a one in hundreds of millions type individual, and that those hoops don't work for most people; even technically minded ones.
And if you don't, you're a hypocrite who should be more sympathetic to the numerous compromises that exist in reality.
> Do you also run an open BIOS, no CPU ME, no cellphone basebands, OSS controllers in your hard drives, and control the firmware on all of your USB/PCIe peripherals?
I sadly can't answer all these questions with yes yet, but I'm constantly looking to improve that situation.
Still, my situation is better in that regard than what most people have. Most people are literally illiterate in the digital world. Schools are failing in this respect: often they ignore digitalization, or think it just means using new media and the internet, which is totally the wrong angle from which to teach this transformation of our world. Google is especially evil in trying to profit from this situation by creating schooling centres where Google teaches its vision of a digital world using its tools and services.
> And if you don't, you're a hypocrite who should be more sympathetic to the numerous compromises that exist in reality.
Why am I a hypocrite for stating a simple fact? If you use a proprietary system, you are at the mercy of somebody else, who most likely wouldn't even answer your questions about it.
I'm very sympathetic to what happens to people around me and also those that are not around me. That is why I advocate for legislation and regulation to solve this problem. The state must fund free and open software and legally require open documents. Also, they must finally get their shit together and properly integrate the digital world into schools, and no, this is not done by giving every child a tablet. I want this to get better for everyone.
Still, I have little sympathy for the decision to buy, for example, Apple products. There are alternatives that are more open. Whoever chooses the closed system when an open alternative exists unnecessarily takes a risk by giving a very limited set of people a lot of power over them. Don't do that, don't choose proprietary systems. Don't use Windows, don't use Apple products. It's not hard not to.
What part makes no sense? The backdoor in question is "secret", it's "hard-coded" and it's a "backdoor". Saying "secret backdoor" may be redundant, but it still makes sense.
If you look in the document there is a section about a hard-coded backdoor admin account that cannot be changed. So secret hard-coded backdoor does make sense to me.
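For context, a hard-coded backdoor account typically looks something like this (a hypothetical sketch, not the actual MyCloud code; the username and password here are made up):

```python
# Hypothetical illustration of a hard-coded credential check.
BACKDOOR_USER = "support"   # baked into the firmware binary
BACKDOOR_PASS = "hunter2"   # identical on every shipped device

def authenticate(user: str, password: str, user_db: dict) -> bool:
    # The backdoor pair is checked before the real user database,
    # so no admin setting or password change can disable it.
    if user == BACKDOOR_USER and password == BACKDOOR_PASS:
        return True
    return user_db.get(user) == password

print(authenticate("support", "hunter2", {}))  # -> True, on every device
```

Because the credentials live in the firmware image rather than in any configuration, changing the admin password does nothing; only a firmware update can remove the account.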
While I'm not a fan of the title it was submitted with, a backdoor is a heck of a lot different than a "normal" vulnerability; to the point where just saying "vulnerability" is a bit misleading.
Perhaps: "WD MyCloud Multiple Vulnerabilities (including hard-coded backdoor)"
If someone actually wanted to implant a "secret" backdoor it would be disguised as a subtle bug and/or obfuscated in some way.
"Never attribute to malice that which is adequately explained by stupidity." Hanlon's Razor - https://en.wikipedia.org/wiki/Hanlon%27s_razor