
I do not at the moment - I perhaps phrased it poorly - but the cost of the in-ground copper wire is but one part of the costs of an ISP: datacenters, call centres, exchanges and DSLAMs, specialist staff, the CEO's bonus pay, and so on. To paint the cost of DSL as almost entirely maintenance of the in-ground copper is nonsense.


Well, that's going into the company to provide you with that in-ground wire. It's in the base cost, not affected by how much data you use, which was the main point. Marginal data is cheap.


So you're claiming that the only part of managing throughput is the in-ground wires? That ISPs don't use network admins to manage load congestion? That ISPs don't have to purchase new hardware and other capacity to handle increased bandwidth consumption?

Do you think all ISPs are running off the ancient network equipment they were originally set up with back in the day?

I'm honestly surprised that on /Hacker News/ of all places, so many people think that the only thing contributing to their bandwidth cost is in-ground copper.


Bandwidth costs money. Nobody says it doesn't. However, the amount of money 5Mbps of bandwidth costs is trivial compared to the cost of maintaining a separate copper pair to your house.

At /my scale/ I can buy bandwidth at around $1 per megabit/sec. 1Mbps is, uh, around 320 gigabytes of data per month. Even at half utilization that's less than a penny per gigabyte, and any but the smallest of local ISPs are going to dwarf my bandwidth usage. This is the sort of market where cost scales down quite a lot as you scale up, so I would bet quite a lot of money that Comcast pays less than I do.
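To spell out that arithmetic (a quick back-of-the-envelope sketch; the $1 per Mbps per month transit price is the figure above, everything else is plain unit conversion):

```python
# Back-of-the-envelope cost per gigabyte of transit.
# Assumes transit priced at $1 per Mbps per month, as quoted above.
SECONDS_PER_MONTH = 30 * 24 * 3600          # ~2.6 million seconds

def gb_per_month(mbps, utilization=1.0):
    """Gigabytes transferred in a month at a given sustained rate."""
    bits = mbps * 1e6 * SECONDS_PER_MONTH * utilization
    return bits / 8 / 1e9                   # bits -> bytes -> GB

full = gb_per_month(1)                      # ~324 GB at 100% utilization
half = gb_per_month(1, utilization=0.5)     # ~162 GB at 50% utilization

cost_per_gb = 1.00 / half                   # that megabit cost $1/month
print(round(full), round(half), round(cost_per_gb, 4))
# -> 324 162 0.0062
```

Even at half utilization, a dollar's worth of transit moves ~162 GB, i.e. well under a penny per gigabyte.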

sure, if I wanted to double my usage, I'd have to buy some new network equipment, and spend some network admin time, but generally speaking the cost of your network equipment (at least at my scale) is fairly small compared to the cost of buying the transit in the first place. It certainly doesn't double the cost of bandwidth; and even if it did, bandwidth would still be cheaper than the local loop to your customer's home.

So yeah, uh, your average DSL line is what, five megabits/sec on a really good day? Are you seriously trying to tell me that you can buy a copper loop for less than five bucks a month?

(Actually, I've been in the industry all my life and I've never been involved in buying copper loops, so I guess it's possible they are that cheap. If they are, and I find out (oy, the costs of almost everything in this industry are guarded like state secrets; it's such a pain in the ass), I'll start a DSL ISP. But I'd be very surprised if that were the case.)


Are you seriously trying to tell me that a company with a thousand users requiring 100kb/day versus a company with a thousand users requiring 10GB/day has exactly the same costs apart from just the upstream bandwidth?

That the latter company somehow gets 'cheaper' overall even though it requires newer network equipment to handle the load?

That the light internet users cause as many support issues as the heavy internet users?

Of course they don't - for the same number of users, heavy internet use increases backend costs. Those 100kb-a-day users aren't causing congestion with torrents or video, for example. And just like in-ground copper, exchanges, datacentres and the like aren't cheap to install or maintain. Sure they get a bit 'cheaper' with scale, but they're still not 'cheap' - especially if you've got to keep upgrading your equipment to handle load.

Costs in a business rise in things other than directly-related items. Take an example: I've spent the morning doing an emergency repair on an RMA'd item that the service boys don't have the time to do. This has meant that the other critical tasks I've had to do have been pushed back. Sure, you can say that the repair of the RMA'd item is just my wages + spare parts, but you'd be wrong. It's had knock-on costs elsewhere in the system.


>Are you seriously trying to tell me that a company with a thousand users requiring 100kb/day versus a company with a thousand users requiring 10GB/day has exactly the same costs apart from just the upstream bandwidth?

Assuming the same customer port speed for both? Actually, yeah. Costs would be almost the same, ignoring upstream bandwidth costs (really, costs would be pretty similar even including upstream bandwidth).

1000 customers at 10 gigabytes a day each works out to something like 1 gigabit of sustained throughput, if that, and these days all modern networking gear does at least one gigabit.

edit: I'm assuming a smooth distribution and that you're willing to run the link flat out, both of which are bad ideas. Give yourself 50% headroom, and buy two. Gigabit hardware is so cheap, you might as well.

You seem to imply that older networking equipment is cheaper. It's not. Sure, if you come by the office, I'll offload some crap on to you for free. This doesn't mean that it's cheaper than new stuff in production. There are many places in my network where a 10Mbps switch would be plenty fast; hell, I even sell 10Mbps ports to some people. But I don't want the headache of dealing with ancient crap. I'd rather pay the up-front costs to get new(er) stuff than deal with the operating expense inherent to using old hardware. If I sell someone a 10Mbps port, I'll take a gig switch and step down the port speed.

Anyhow, I guess at this point we're just arguing to personal authority, and I don't even know who you are or what you do. If you have knowledge of what the cost of a copper pair is, let me know.


I agree that we're at cross purposes, and you're closer to the nitty gritty than I am. I do admit to surprise that you find it more expensive to maintain old equipment than it is to purchase and maintain new equipment though. That's startling, and goes against the behaviours in the tech companies I've worked in.


>I do admit to surprise that you find it more expensive to maintain old equipment than it is to purchase and maintain new equipment though. That's startling, and goes against the behaviours in the tech companies I've worked in.

Really, running off old kit is the sysadmin equivalent of the "technical debt" talked about here in the context of messy spaghetti code. It's cheaper now, but you'll have to pay later. Usually with usurious interest.

You should keep servers closer to 3-5 years. Your SysAdmin will say 3; the IRS says 5. It's generally agreed that after 5 you should give it to your little sister or use it in the test lab. I mean, I know people who run old servers for fun; a good friend of mine runs his website off a .com-era 10-CPU Sun Enterprise. But that's hobby work, and even so his co-lo provider is losing money on the deal he's getting. If he was paying for his own power, he'd probably step down to a single-socket 12-core Opteron, which would have more compute power and more RAM, and use approximately 1/10th the power.

At my company, I estimate a reboot to cost me about a thousand dollars, that is, if everything comes up cleanly afterwards. Right now, we're debating if we should extend the servers out to four years rather than throwing them out after three. But in this case, I own the whole thing, rather than one department, so I feel it on both ends; both the problems caused by old garbage and the cost of buying new stuff.

Network hardware, generally speaking, lasts longer than server hardware, but depending on the badness of downtime on that particular network, a 5 year cycle still isn't bad, and running 10 year old kit is kind of crazy. (and if we are talking 100Mbps stuff, you are starting to talk about 10 year old kit.)

I remember at another contract gig, I was getting paid around seventy five bucks an hour (this was really a full time job in all but name, so they paid closer to full time job rates than short-term contract rates).

A lot of my time was spent repairing or cleaning up after 10 year old servers. Most of the time when I touched something, for what they were paying me alone, not counting downtime costs, they could have bought new kit. And half the time, these were critical dev servers: one of the hard drives had failed years ago, and now the second was returning read errors. It was absolutely crazy that they didn't just replace all of this garbage at once.

I mean, they eventually did, I was actually hired on to help virtualize all these tiny 10 year old servers on to much larger, newer servers. But the job would have been quite a lot easier if they had replaced these servers after 5 years rather than 10. I suspect that the downtime and cleanup ended up being more expensive than the capital cost of new servers would have been.



