I tried it between two enterprise D-Link switches, and also between a Synology NAS and a D-Link switch.
Link aggregation in general only improves performance with multiple connections to multiple machines, so using link aggregation between one client (Mac) and one NAS will likely result in zero performance improvement (the packets will only use one of the cables). It only makes sense if you have two or more NASes that you want to access simultaneously (or two or more clients accessing the same NAS, but that wouldn't be a use case for 2x 10GbE ports on a client).
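The "one cable per pair of machines" behaviour falls out of how LACP picks a link: the switch hashes packet headers so all packets of a flow stay in order on one physical port. A toy sketch of a layer-2 hash policy (the function and MAC addresses here are made up for illustration; real switches use similar but vendor-specific hashes):

```python
# Hypothetical sketch of how an 802.3ad (LACP) device picks a link per flow.
# A layer-2 hash of (src MAC, dst MAC) maps every packet of a given
# client<->NAS pair to the SAME physical link, so a single flow can
# never exceed one link's bandwidth.

def pick_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Toy layer-2 hash policy: XOR the last octets, modulo link count."""
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % num_links

# One Mac talking to one NAS: every packet lands on the same link.
links = {pick_link("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02", 2) for _ in range(1000)}
print(len(links))  # 1 -- only one of the two links ever carries traffic
```

Hash policies that also mix in IPs and ports (layer3+4) spread flows better across many clients, but still pin any single flow to one link.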
Synology also supports balance-slb bonding, which in theory gets around this single-connection restriction. However, I ran into some connection problems with some Windows clients. Never got to the bottom of them, but they went away when I disabled the bonding.
In any case, it is hard to saturate a 10GbE connection with a single NAS unless it is packed with SSDs, which I wouldn't assume for mass storage. So I am not sure there is much value in aggregating the links in the first place.
D-Link gear isn’t exactly what I’d judge networking standards on; it works, I suppose, but it’s hardly what I would install in even a small business office.
I have multiple LACP bonds on my Juniper EX2200 at home working without issue, though the single-stream limits you mentioned are the one thing LACP can’t fix.
"enterprise" D-Link switches aren't really a thing yet, regardless of what their marketing team wants to brand them as. :(
Cisco, HPE, etc. have "enterprise" switches. D-Link might in a decade.
> it is hard to saturate a 10gbe connection with a single NAS, unless it is packed with SSDs
No, it's just a matter of having enough spindles behind it.
As a rough guide, with a (say) average spinning rust HDD able to push out 100MB/s when reading, you'd only need 10 such drives to push out 1000MB/s (raw).
In the real world, you need extra spindles as some of the data being pushed out is just internal checksum/redundancy, and doesn't go over the network.
But for reading back large files with mostly sequential access, you'll hit 1GB/s from about 10 drives onwards pretty easily. More drives, more throughput.
I would defer to people with more enterprise hardware experience than me for serious NAS setups, but my experience with various generations of 12-disk Synology NASes is that you lose a lot of performance to disk vibration, inefficiencies of the RAID implementation, sync between drives, TCP, etc. So I don't think it scales linearly. With a Synology DS3615 and 12 HGST helium drives in RAID 5, I barely get over 1GB/s locally, while each drive individually is capable of over 200MB/s sustained.
Yeah, no idea with Synology. When I was originally looking at NAS solutions, they (and QNAP) just seemed expensive for not much product.
Went with FreeNAS instead, as I was already very familiar with building systems, it's based on FreeBSD (OSS), and it gives better tuning on the higher end.
LACP does generally “just work”; the problem is when you want one machine or session to be able to max out multiple links. The solution here remains the same as it always has been - multipath. I hope Apple has added SMB multichannel support for these users since I last checked.
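On the server side, Samba gates SMB3 multichannel behind a single smb.conf parameter (it was long marked experimental, and the client has to negotiate it too for multiple links to actually get used) - a minimal sketch:

```ini
# /etc/samba/smb.conf -- enable SMB3 multichannel on a Samba server.
# The client (e.g. macOS) must also support and negotiate multichannel,
# otherwise traffic still flows over a single connection.
[global]
    server multi channel support = yes
```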