CountVonGuetzli's comments

Arial is a licensed font, distributed by Monotype.



... is an Apache-licensed, metrically compatible alternative, for everyone else who doesn't already know what Arimo is.


You mean rack mounts for humans?


For us, introducing a simple device and location validation system (tracking which users log in with which devices and from where), combined with breached-password detection via HIBP, both of which can trigger an email validation code flow, practically solved the credential stuffing issues we had, almost immediately.

For the user it's kind of a soft MFA via email, where they don't have to enable it, but also don't always get the challenge.

Astonishingly, we had barely any complaints about the system via customer care and also didn't notice a drop in (valid) logins or conversion rates.
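
For anyone curious, here's a minimal sketch of that decision logic, not our actual code: the User model and send_email_code stub are made up for illustration, and only the HIBP range lookup is a real API.

    import hashlib
    import urllib.request
    from dataclasses import dataclass, field

    @dataclass
    class User:                                   # hypothetical user model
        email: str
        known_devices: set = field(default_factory=set)
        known_countries: set = field(default_factory=set)

    def is_breached_password(password: str) -> bool:
        """Check against the HIBP Pwned Passwords range API (k-anonymity:
        only the first 5 hex chars of the SHA-1 ever leave our server)."""
        sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        req = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "credential-stuffing-demo"},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read().decode()
        return any(line.split(":")[0] == suffix for line in body.splitlines())

    def send_email_code(user: User) -> None:
        ...  # stub: mail a one-time code and verify it before finishing login

    def handle_login(user: User, device_id: str, country: str, password: str) -> str:
        challenge = (
            device_id not in user.known_devices
            or country not in user.known_countries
            or is_breached_password(password)
        )
        if challenge:
            send_email_code(user)
            return "challenge_required"
        user.known_devices.add(device_id)   # remember the device after a clean login
        user.known_countries.add(country)
        return "ok"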


To me, that seems like a pretty reasonable approach... adding a password change at the end would probably be a good final addition.

I tend to generate my passphrases for sites now; my only complaint is that a password field should accept at least 100 characters. Assuming it's salted+hashed anyway, it's almost irresponsible to limit it to under 20 characters. I'd rather see a minimum of 15 chars and a suggestion to use a "phrase or short sentence" in the hint/tip.

I wrote an auth system and integrated the zxcvbn strength check and HIBP as default-enabled options. The password entry allowed up to 1kb of input, mostly as a practical limit. I also tend to prefer having auth separated from the apps, so that if auth fails (via DDoS, etc.), already-authenticated users aren't interrupted.
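
As a rough illustration of those default checks (not the actual system; the limits and score threshold here are just plausible values), using the zxcvbn Python port:

    from zxcvbn import zxcvbn  # pip install zxcvbn

    MIN_LEN = 15     # long enough to nudge people toward a phrase or short sentence
    MAX_LEN = 1000   # generous practical cap (~1kb), keeps hashing cost bounded
    MIN_SCORE = 3    # zxcvbn scores 0 (terrible) .. 4 (very strong)

    def validate_password(password: str) -> list[str]:
        """Return a list of problems; an empty list means the password is acceptable."""
        problems = []
        if len(password) < MIN_LEN:
            problems.append(f"Use at least {MIN_LEN} characters; a phrase or short sentence works well.")
        if len(password.encode("utf-8")) > MAX_LEN:
            problems.append(f"Passwords are limited to {MAX_LEN} bytes.")
        if zxcvbn(password)["score"] < MIN_SCORE:
            problems.append("This password is too guessable; try a longer or less common phrase.")
        return problems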


> a password field should accept at least 100 characters. Assuming it's salted+hashed anyway

There was recently a bug in a bcrypt implementation where characters after the first 64 were silently ignored.

Anyway, while it is easy to require a long password, it is almost impossible to detect password reuse. The only way to solve that issue is to not let users choose passwords at all; if they want to change a password, generate a new one for them. And that isn't happening unless sites are forced to do it by governments.
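
Independent of that particular bug, stock bcrypt itself only considers the first 72 bytes of input, which is easy to demonstrate (a small sketch using the pyca/bcrypt package):

    import bcrypt  # pip install bcrypt

    # bcrypt only looks at the first 72 bytes; anything beyond that is silently
    # ignored, so these two very different passwords verify against the same hash.
    prefix = b"x" * 72
    hashed = bcrypt.hashpw(prefix + b"correct horse", bcrypt.gensalt())
    print(bcrypt.checkpw(prefix + b"battery staple", hashed))  # True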


As long as I can use a password manager for passwords... unfortunately, I have to log in to the OS to get to the password manager itself.

I think there are plenty of other solutions, including 2FA and push notifications, and likely more valuable than any of the previously mentioned bits would be ensuring that SSO works across an organization.

In general, simply requiring a minimum length of say 15 chars and the suggestion to use a phrase or sentence is enough. I've switched Bitwarden to the word generation option with capitals and numbers, which usually works, except when there's an arbitrarily small maximum length on the input field.

I switched because trying to type 20 random characters, including special characters, in under 20s (a remote terminal limit on a VM I'd misconfigured and had no other way into) was pretty much impossible, and I had to run the reimage from scratch.
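
Something in the spirit of that word-based generator is easy to roll yourself; a minimal sketch (the tiny word list is a placeholder, a real one would be an EFF diceware list or similar):

    import secrets

    WORDS = ["copper", "lantern", "orbit", "velvet", "glacier", "mosaic", "falcon", "ember"]

    def passphrase(n_words: int = 4) -> str:
        """Word-based passphrase with capitals and a trailing digit."""
        words = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
        return "-".join(words) + str(secrets.randbelow(10))

    print(passphrase())  # e.g. "Velvet-Orbit-Ember-Falcon7"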


It would be really cool if it didn't just show the ping, but also how much worse it is than the theoretical optimum (the speed of light in a fiber optic medium, which I believe is about 30% slower than c).

I raise this because I've been in multiple system architecture meetings where people were complaining about latency between data centers, only to later realize that it was pretty close to what is theoretically possible in the first place.


I'm under the impression that within the hyperscalers (and probably the big colo/hosting firms, too), this is known. It's important to them and to customers, especially when a customer is trying to architect an HA or DR system and needs to ensure they don't inadvertently choose a region (or even a zone that isn't physically in the same place as other zones in the same region) that has "artificially" high latency from the primary zone (which can exist for all kinds of legitimate reasons).

This is not an uncommon scenario. My current employer specializes in SAP migrations to cloud and this is now a conversation we have with both AWS & GCP networking specialists when pricing & scoping projects... after having made incorrect assumptions and being bitten by unacceptable latency in the past.


Doesn't look like this is a ping[0]! Which is good. Rather it is a socket stream connecting over tcp/443. Ping (ICMP) would be a poor metric.

[0] https://github.com/mda590/cloudping.co/blob/8918ee8d7e632765...
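
As I read it, the measurement is roughly equivalent to timing a bare TCP handshake and closing the socket right away, something like this (my own sketch, not the project's code):

    import socket
    import time

    def tcp_connect_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
        """Time a plain TCP connect (SYN/SYN-ACK/ACK); no TLS, no bytes sent."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # connection is closed immediately
        return (time.perf_counter() - start) * 1000

    print(tcp_connect_ms("dynamodb.eu-central-1.amazonaws.com"))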


ping is synonymous with echo-request, which is largely transport agnostic.

but you're right


why 443? are you assuming ssl here? serious question, I'm not sure. But if it is, wouldn't it be hard to disregard the weight of SSL in the metric?


The code closes the connection immediately after opening a plain TCP socket, so no SSL work is done. Presumably 443 is just a convenient port to use.


tcp/443 is likely just an open port on the target service (DynamoDB, based on the domain name). TLS is not involved.

ICMP ECHO would be a bad choice as it is deprioritized by routers[0].

[0] https://archive.nanog.org/sites/default/files/traceroute-201...


The script connects to the well-known 'dynamodb.' + region_name + '.amazonaws.com' server, which expects HTTPS.


You would have to map out the cables to do that.

Light in fiber optic cable travels at roughly 70% of the speed of light, ~210,000 km/s. Earth's circumference is ~40,000 km, so the direct route from one side of the Earth to the other is about 20,000 km, which works out to roughly 100 milliseconds one way, or 200 ms round trip.
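
The back-of-the-envelope version of those numbers:

    FIBER_KM_PER_S = 210_000          # ~70% of c
    ANTIPODAL_KM = 40_000 / 2         # half of Earth's circumference

    one_way_ms = ANTIPODAL_KM / FIBER_KM_PER_S * 1000
    print(one_way_ms, 2 * one_way_ms)  # ~95 ms one way, ~190 ms round trip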


It's pretty trivial to do this; any big fiber company will provide you with Google Earth KMZ files (protected by NDA) when you're considering a purchase. This is absolutely necessary when designing a redundant network or if you want lower latency.


Since light travels at 100% the speed of light in a vacuum (by definition), I have wondered if latency over far distances could be improved by sending the data through a constellation of satellites in low earth orbit instead. Though I suspect the set of tradeoffs here (much lower throughput, much higher cost, more jitter in the latency due to satellites constantly moving around relative to the terrestrial surface) probably wouldn't make this worth it for a slight decrease in latency for any use case.


Hollow core fiber (HCF) is designed to substantially reduce the latency of normal fiber while maintaining equivalent bandwidth. It's been deployed quite a bit for low latency trading applications within a metro area, but might find more uses in reducing long-haul interconnect latency.


Absolutely! The distance to LEO satellites (like SpaceX's or Kuiper's) is small enough that you would beat the latency of fiber paths once the destination is far enough away.


In the past we just had line of sight microwave links all over the US instead.

I think it's just too damn expensive for your average webapp to cut out ten milliseconds from backend latency.


Yes. There are companies that sell microwave links over radio relay towers to various high frequency traders.


I am pretty sure this was one of the advertised strengths of Starlink. Technically the journey is a bit longer, but because you can rely on the full speed of light, you still come out ahead.


Cable mapping would be nice, but 100 ms is a meaningfully long amount of time, enough to make the straight-line comparison worthwhile.


clicking around that map, I don't see any examples where the latency is a long way out of line with the distance.

Obviously it's theoretically possible to do ~40% better by using hollow fibers and as-the-crow-flies fiber routing, but few are willing to pay for that.


The 'practical' way to beat fiber optics is to use either

(i) a series of overground direct microwave connections (often used by trading firms)

(ii) a series of laser links between low altitude satellites. This would be faster in principle for long distances, and presumably Starlink will eventually offer this service to people that are very latency sensitive


Low-bandwidth/low-latency people tend to also demand high reliability and consistency. A low-orbit satellite network might be fast but, because the sats move too quickly, it cannot be consistent in that speed. Sats also won't ever connect data centers, other than perhaps for administrative stuff. The bandwidth/reliability/growth potential just isn't there compared to bundles of traditional fiber.


> Low-bandwidth/low-latency people tend to also demand high reliability and consistency.

For trading applications, people will absolutely pay for a service that is hard down 75% of the time and has 50% packet loss the rest, but saves a millisecond over the fastest reliable line. Because otherwise someone else will be faster than you when the service is working.

They can get reliability and consistency with a redundant slower line.


Can you provide a source for this statement? The redundancy needed to transmit at the desired reliability with 50% packet loss would, I imagine, very quickly eat into any millisecond gains -- even with theoretically optimal coding.

Someone more familiar with Shannon than I could probably quickly back-of-the-napkin this.


Financial companies have taken and upgraded/invested in microwave links because they can be comparatively economical to get "as the crow flies" distances between sites:

https://www.latimes.com/business/la-fi-high-speed-trading-20...

https://arstechnica.com/information-technology/2016/11/priva...

https://en.wikipedia.org/wiki/TD-2#Reemergence

I'm not sure about the high packet loss statement, but it wouldn't surprise me if it's true, provided the latency is low enough to take advantage of arbitrage opportunities often enough to justify the cost.


Traders wouldn't use redundancy etc. Whenever a packet with info arrives, they would trade on that info (e.g. "$MSFT stock is about to go down, so sell before it drops!"). If there is packet loss, then some info is lost, and therefore some profitable trading opportunities are missed. But that's okay.

There are thousands of such opportunities each second - they can come from consumer 'order flow' - ie. information that someone would like to buy a stock tells you the price will slightly rise, so go buy ahead of them and sell after them in some remote location.


There is also a market for stocks that trade on different exchanges, resulting in fleeting differences in price between exchanges. Those who learn of price moves first can take advantage of such differences. In such cases, all you need to transmit is the current stock price. The local machine can then decide to buy or sell.


There's definitely a few billion a year in revenue for Starlink if they sell very low latency, medium bandwidth connections between Asia, the US, Europe and Australia to trading firms. Even if the reliability is much worse than fiber.


Starlink latencies sadly aren't competitive due to the routing paths it uses. And sadly there are currently no competitors to starlink.


The routing paths traveling via ground stations, you mean? My understanding is that they were experimenting with improvements to this, they just haven't deployed anything yet.


A radio will beat Starlink on ping times. Even a simple ham bouncing a signal off the ionosphere can win out over an orbiting satellite, at least for the very small amounts of data needed for a trade order. The difficulty in such schemes is reliability, which can be hit-or-miss depending on a hundred factors.


No, even with proposed inter-satellite routing paths, they are too slow. The trading industry has very much done the math on this.

The comparison is against radio and hollow-core fiber, not conventional fiber.


Laser links between satellites have been active since late 2022, or was there some additional improvement you're referring to?


I haven't kept track of that, but there is no other improvement. Even with the straightest possible laser links in space, they are too slow.


> sats move too quickly, cannot be consistent

Satellites in geostationary orbit are a (very common) thing.


Geostationary is so much farther away than LEO, though, so the latency is worse.


AU <-> South Africa & South America is way less than distance.


Author here - Interesting. Someone on X also gave this idea to me. Any good resources for how to accurately compute this?


The theoretical best latency would be something like great_circle_distance_between_regions / speed_of_light_in_fiber, both of which are pretty easy to find: the speed is a constant you can look up, and the distance you can compute from the coordinates of each region pair.
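
A minimal sketch of that calculation (the coordinates are approximate, since AWS doesn't publish exact data center locations, so treat the example pair as illustrative):

    from math import asin, cos, radians, sin, sqrt

    C_FIBER_KM_S = 200_000  # light in glass fiber travels at roughly 2/3 of c

    def great_circle_km(lat1, lon1, lat2, lon2):
        """Haversine distance between two points on Earth, in km."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    def best_case_rtt_ms(lat1, lon1, lat2, lon2):
        """Theoretical minimum round trip over a perfectly straight fiber path."""
        return 2 * great_circle_km(lat1, lon1, lat2, lon2) / C_FIBER_KM_S * 1000

    # Example: roughly us-east-1 (N. Virginia) to eu-central-1 (Frankfurt).
    print(best_case_rtt_ms(38.9, -77.4, 50.1, 8.7))   # ~66 ms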


That's what we did as well, via Wolfram Alpha. I.e., we were too lazy to look up everything ourselves and just asked it straight up how long a roundtrip would be between two destinations via fiber. We checked one result and it was spot on. This was six years ago though.


IIRC about 125 miles per ms


I recommend this book as well (absolute beginner here). Learned to see the world a bit differently because of it.


Also, if for example the SaaS you’re running sends a lot of system emails that really shouldn’t end up in spam filters, you can’t afford to let things like marketing campaigns negatively influence your domain’s spam score.

Easier and safer to have separate domains.


After doing the first-time CTO thing three years ago in an established company with over 100 engineers, I think these two are the minimum required reading:

An Elegant Puzzle: Systems of Engineering Management (https://lethain.com/elegant-puzzle)

and

The Art of Leadership: Small Things, Done Well (https://www.amazon.com/Art-Leadership-Small-Things-Done/dp/1...)

There are a lot more that were helpful to me, but those two encompass most of the important concepts and skills already in a usefully synthesized way, at least for me.


Having seen the parts tracking system of a European helicopter manufacturer: they had the most insanely detailed parts tracking system (including certs for every part, sub-assembly and supplier) I have ever seen. I would be surprised if it takes them a lot of effort to figure out which planes are affected.


Yes and no. In case engine maintenance is outsourced, which it often is, the necessary information might not be available to Delta, or any other operator / airline, right away.

That being said, this particular UK company caused the biggest civil aerospace scandal since the 737 MAX. I have zero tolerance for jeopardizing traceability of parts due to negligence; what the UK company did is outright criminal.

Unfortunately, there is nothing Delta and other airlines could have done to prevent this from happening.


From https://www.washingtontimes.com/news/2023/oct/5/airlines-gro...:

According to a report from The Wall Street Journal, when reporters went to visit the company’s headquarters in London, nobody in the building had heard of AOG Technics.

This means that nobody at Delta or any other airline even took the time to pay a visit to the company's so-called headquarters. For an airline looking to choose a new supplier for airplane parts, I'd say they skipped even the most basic due diligence checks.


“Hey, Boeing, have you heard of these guys?”


Wholeheartedly disagree that there is nothing they could have done.

Am I surprised? Not really. The whole thing is a pisstake because it seems so inconceivable on its face. Nobody would expect someone to do this in the industry at that level for sure. And so the kind of processes needed to mitigate it were unlikely to be employed.

However, in a very tightly controlled and regulated industry such as this I also don’t think it is unreasonable to expect that they should have the level of verification and just plain awareness of their suppliers to not get caught with their pants down on this en masse.

This isn’t exactly counterfeit and legit bins in an Amazon warehouse getting mixed together before you get the die roll on if you get the genuine $10 kitchen tool or if you get the alibaba special when it shows up at your front door.


> Unfortunately, there is nothing Delta and other Airlines could have done to prevent this from happening.

You posit that the necessary information was not available to Delta, at least not immediately, due to outsourced engine maintenance. Why can't Delta require their contractor (which wasn't the supplier, AOG) to check that documentation meets requirements? Why can't Delta check the documentation themselves once it is available, prior to the engines needing later service as in this case?


If it was the same supplier, AOG Technics, there isn't really much an airline or direct customer of AOG's could have done.

It was not a problem of absent documentation, because that is obviously rigorously checked by everyone in the chain. So Delta, and regulators, force suppliers to check all documentation for parts. Delta doesn't have to check their sub-tier suppliers' documentation, since their tier one is a certified company (in Europe that would be EASA Part 145 for maintenance). What Delta does is audit their tier 1s; those tier 1s audit their own tier 1s (Delta's sub-tier suppliers), and so on and so forth.

AOG was audited and certified, which means their paperwork, process descriptions and so on were compliant. And then AOG cheated, kind of like showing good parts for the audit and supplying crap later. If AOG, or anyone else, is that criminal, the only way to catch them is by chance or by an investigation after a part fails.


Yes, Milton Friedman, 1970, New York Times: https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...

Wikipedia has a summary on the idea behind the essay https://en.wikipedia.org/wiki/Friedman_doctrine


Unity is running as fast as possible in the opposite direction of Friedman's advice.

> The businessmen believe that they are defending free enterprise when they declaim that business is not concerned “merely” with profit but also with promoting desirable “social” ends; that business has a “social conscience” and takes seriously its responsibilities for providing employment, eliminating discrimination, avoiding pollution and whatever else may be the catchwords of the contemporary crop of reformers.

San Francisco-based Unity is in the vanguard of the new socially aware corp.

https://unity.com/esg

https://unity.com/unity-uk-binary-gender-pay-gap-report

https://unity.com/inclusion-diversity


Interesting that this is enough evidence to the contrary for you. To me that looks more like what they say, not what they do, and is a well crafted smoke screen. They still seem to be acting as if they’re purely interested in extracting maximum shareholder value, at the expense of the public.


> Interesting that this is enough evidence to the contrary for you.

Those are just a few easy examples. I've used Unity for a decade, visited the offices, worked with ex-Unity employees. They aren't maximizing shareholder value; they are very bloated, and they are trying to survive and avoid bankruptcy.

> at the expense of the public.

What does that even mean? There is no expense to the public. They give their product to 90% of developers for free. Unity used to charge every seat.


Not sure where you’re getting your information from, but I feel like the opposite is true. If you want to work remotely, they even have a remote working contract specifically designed for that case.

Maybe what looks archaic to you is that under French labor law, employees can't be treated differently if they have the same type of contract. Meaning one individual can't work two days from home and another three. Or one person can't just have their travel expenses to and from the office paid, unless all employees with the same contract have that too.

I believe this is why they had to introduce the new contract for remote work. With this, their strong worker protections stay intact while still allowing for more modern ways of working.

What is archaic is their bloody governmental control system. There is an inspector for every stupid thing you can imagine, and it is an enormous waste of time and money.

