
The bottom of the post mentions that sales have increased by 40% with the new design. It will probably take some time to know whether that sticks, but from that point of view it seems to have worked, even if it was maybe overpriced.


The 40% increase is ~$18k a month based on the numbers in the article. That means the redesign pays for itself in three months. That's the type of "regret" that I want.


But it's always medically necessary, because weight changes are a leading indicator of many issues, and the long-term trend is needed to see them. If people feel that they don't understand why it's necessary, maybe better communication and education are needed, but I don't think these cards help with that.


The furlough system transferred a large amount of money to households, because people had substantially reduced expenses (due to not having to commute, and service-based industries being closed, for example) but only slightly reduced income.

This is generally true in most countries: although the exact mechanism was different, people ended up with more discretionary income and fewer services to spend it on, and so demanded more goods.


They're probably less attainable specifically around San Francisco and Silicon Valley than in most of Western Europe, though, even on a SWE salary. It's definitely true in cheaper areas and states.


The maintainers are the process, since they're the ones reviewing it, so it's absolutely attacking the maintainers.


I feel like "fiasco" might be overstating it a little, but basically Atom is incredibly slow, and that's probably the main reason it never overtook Sublime and friends the way VS Code did.


I understood it as: reading from `/dev/random` will still block just after boot (i.e. before the CRNG is initialised) unless you pass the new `GRND_INSECURE` flag to `getrandom`. After it's initialised, however, it will never block, because it's now just using the CRNG.
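
A rough Python sketch of what I mean (the 0x0004 value for GRND_INSECURE is my reading of the Linux >= 5.6 kernel headers, since Python's os module doesn't expose it by name; older kernels will reject the flag with EINVAL):

    import os

    # Not named in Python's os module; assumed from the kernel headers.
    GRND_INSECURE = 0x0004

    # Just after boot, before the CRNG is initialised: returns possibly
    # low-quality bytes instead of blocking.
    early = os.getrandom(16, GRND_INSECURE)

    # Default flags: blocks only until the CRNG is initialised, never after.
    good = os.getrandom(16)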


Please don't confuse `/dev/random` and `getrandom` - they are separate interfaces.


This is true, but kicking off a large subset of the userbase every time an account is compromised (or more realistically, every time someone changes their password—which should certainly invalidate their old tokens) isn't going to be a valid trade off for most applications with customers.


A 'standard' JWT claim is Issued-At ('iat'). If you want lightweight JWTs, you're going to be loading minimal user data server-side anyway (logging, roles, etc.) -- it's trivial to compare the iat timestamp to a 'last change' field in your user object.

Of course, this gets to the point of the article -- if you're loading data server-side, that's not reeeaaallly the intended multi-party claim-exchange use case of JWTs...
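
Something like this, in Python with the PyJWT library (password_changed_at is a made-up field standing in for your 'last change' column):

    import time
    import jwt  # PyJWT

    SECRET = "server-side-secret"  # assumed HS256 shared secret

    def issue(user_id):
        # 'iat' is a registered claim; PyJWT encodes it as a unix timestamp.
        return jwt.encode({"sub": user_id, "iat": int(time.time())},
                          SECRET, algorithm="HS256")

    def is_stale(token, password_changed_at):
        # Stale if it was issued before the user's last credential change.
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        return claims["iat"] < password_changed_at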


You don't have to kick them off. You can instead use it as a trigger to check whether their session has been invalidated. Store the last password change date or something similar, and if the DB user is still valid and the password date is older than the JWT creation date, just update the JWT and let them continue. If not, require a new login or deny outright, depending on account status.

> every time someone changes their password—which should certainly invalidate their old tokens

Using a variation of the above, you can just reverify the JWT every X minutes, and know that if you change a password for normal reasons, all tokens will be invalidated within X minutes.

It's a trade off. If you can live with what you're trading (or at least live with it after mitigations are put in place), it might be worth it.
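
A sketch of that flow in Python with PyJWT (load_user and password_changed_at are stand-ins for whatever your user store looks like):

    import time
    import jwt  # PyJWT

    SECRET = "server-side-secret"   # assumed HS256 shared secret
    REVERIFY_AFTER = 15 * 60        # "X minutes" -- 15 is purely illustrative

    def check(token, load_user):
        # Returns a (possibly refreshed) token, or None to force a new login.
        try:
            claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return None
        if time.time() - claims["iat"] < REVERIFY_AFTER:
            return token  # young token: trust it without touching the DB
        user = load_user(claims["sub"])  # hypothetical DB lookup
        if user is None or user["password_changed_at"] > claims["iat"]:
            return None  # invalidated: require a new login or deny outright
        # Still valid: reissue so the next DB check is X minutes away.
        return jwt.encode({"sub": claims["sub"], "iat": int(time.time())},
                          SECRET, algorithm="HS256")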


We tie our user's session in the JWT to the session in a central database, which allows us to invalidate individual sessions.

The reason for using JWT is that the UI and backend consume the same session object seamlessly. Before, what we kept in our PHP session and what state we shared with the UI were manually kept in sync through an API request.
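
Not necessarily exactly what we do, but the basic shape in Python (PyJWT plus a sqlite3-style DB handle; table and field names are made up):

    import uuid
    import jwt  # PyJWT

    SECRET = "server-side-secret"  # assumed HS256 shared secret

    def issue(user_id, db):
        sid = str(uuid.uuid4())
        db.execute("INSERT INTO sessions (id, user_id, valid) VALUES (?, ?, 1)",
                   (sid, user_id))
        # 'jti' is the registered claim for a token/session identifier.
        return jwt.encode({"sub": user_id, "jti": sid}, SECRET, algorithm="HS256")

    def session_valid(token, db):
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        row = db.execute("SELECT valid FROM sessions WHERE id = ?",
                         (claims["jti"],)).fetchone()
        return bool(row and row[0])  # flip 'valid' to 0 to revoke one session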


> session object

what fields are in this session object besides an identifier?


>It's a trade off

But, what are you getting for this added complexity--above and beyond what "traditional" user session management provides?


You can serve pages that don't require DB access without ever hitting the DB. Static content that is restricted to authed accounts can be served immediately and directly. Requests that access resources other than ones shared with the auth system don't need to first hit the auth system to verify the account. Even if the auth system is on the same DB as the rest of the content, it's no longer a sequential bottleneck.
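
For concreteness, the whole auth check on such a server can be local CPU work. This sketch assumes the auth service signs with RS256 and the content server only holds the public key (PyJWT; RS256 needs the 'cryptography' package installed):

    import jwt  # PyJWT

    PUBLIC_KEY = open("auth_service.pub").read()  # hypothetical key file

    def can_serve(token):
        # Signature + expiry check only -- no round-trip to the auth DB.
        try:
            jwt.decode(token, PUBLIC_KEY, algorithms=["RS256"])
            return True
        except jwt.InvalidTokenError:
            return False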


Yeah, I can see scenarios where these features would be useful, but I wonder how many architectures truly benefit from them. I'd guess that a good many people implement JWT b/c it's the accepted approach, or in premature anticipation of a future scaled-out architecture that would land them in one of the beneficial scenarios.

Of course, if the DB concern is one of performance, then there's also a performance trade-off with JWT, since the (sometimes large) token is constantly transferred over the wire and compute is used to verify and decode it. That may be faster than a DB round-trip, but it's not free. Of course, "old-school" session management keeps the auth state in memory server-side, which can be the fastest--though it comes with a memory hit. But constantly processing the token in memory also carries a memory penalty (even if a slightly more transient one).

In any case, I think it's like a lot of popular tech that evokes strong reactions on both sides: The success of the tech causes it to be overused, invariably in use-cases for which it isn't as well-suited or was not originally designed.

In this case, the important need for session expiration alone makes me question hard whether JWT is a good fit for session management in a lot of user-facing apps. In some cases it will be, but in others, there are probably better solutions.


Sure, I'm not making a case for using JWT, I'm just noting the options and capabilities that you have if you do, which I think weren't accurately expressed initially.

There are cases where JWT can make a lot of sense. For example, a high capacity API where you don't necessarily want to check auth for every request, especially if you can rely on some sort of caching infrastructure. In that case it may make sense to use JWT, re-check the authentication on some set schedule (whether once an hour or day), and save yourself what might be a few thousand authentications a minute.


Yeah, I always thought the API scenario was the original use case for JWT anyway, and I believe it does make more sense there. I think API token expiry is a requirement too, but not as pressing or frequent as on the user side.

With the advent of SPAs and RESTful designs, it was probably tempting to say, "hey, let's allow our user-facing client apps to hit the API directly and use the same JWT token scheme for auth there". So, whereas it was generally a good scheme for APIs, it became a YMMV thing once it diverged into the client.


The problem, as the linked 'Slightly Sarcastic Flowchart' says, is: how do you handle the invalidation server going down? If you just assume that tokens are valid in this case, then an attacker just has to kill the server and they're back to being impossible to invalidate. If you assume they're invalid, you're back to having centralised state, which mostly defeats the purpose.


The bloom filter allows you to largely distribute the work of the server, and serves as a reasonable proxy during a failure of the server. More importantly, there's no reason it needs to be a centralized server. Invalidated tokens could be broadcast to a wide number of servers that each maintain the invalidated-token list (it's a great case for a CRDT, since it's append-only with a TTL). Normally it'd be a pretty compact list, and if it isn't, you probably want to take a more defensive posture anyway.

...and beyond that, if an attacker can take out your invalidation server, which needn't be directly accessible to the public, you've already had a pretty serious security breach. I think the least of your problems would be the invalidation server.
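
For anyone who hasn't used one, a toy version of such a revocation filter in Python (sizes are illustrative; each service would hold a copy and apply broadcast revocations to it):

    import hashlib

    class RevocationBloom:
        def __init__(self, bits=1 << 20, hashes=5):
            self.bits, self.hashes = bits, hashes
            self.bitmap = bytearray(bits // 8)

        def _positions(self, jti):
            for i in range(self.hashes):
                h = hashlib.sha256(f"{i}:{jti}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.bits

        def revoke(self, jti):
            for p in self._positions(jti):
                self.bitmap[p // 8] |= 1 << (p % 8)

        def maybe_revoked(self, jti):
            # False = definitely not revoked; True = consult the real list
            # (false positives are possible, false negatives are not).
            return all(self.bitmap[p // 8] & (1 << (p % 8))
                       for p in self._positions(jti))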


Thank you for this. While I love a good old nerd round of 'stump the wizard', I also appreciate someone pumping the brakes with some realism.

Some security conversations get to the point where it's stringing together a highly unlikely chain of scenarios requiring multiple pivots, multiple concurrent failures (stochastic or otherwise), and a state sponsored actor in order to slap a 'do not use' recommendation on something.

If companies like Nike can run _Magento_ and educational sites still require Flash, then I can use JWTs. Maybe...


It might even be a real use case for blockchain!


It’s largely irrelevant because the revocation bloom filters are cached on each service, and if the auth service is down then tokens can’t be revoked anyway so the list is still accurate enough.

TBH I don’t think the author of the article has experienced the nightmare that is a hot session store at large scale. You end up needing to troubleshoot IO latency issues with basically no tooling that can show you where the problem is, up against the hardware limits and whatever black box your cloud provider has made. Whereas with JWT everything happens in normal user space and can be reasonably reasoned about, with a bit of complexity but without razor-thin latency dependencies on IO performance.


I'm confused by your comment. Session stores seem like the easiest storage to scale horizontally. The workload is just a distributed hash, _maybe_ with atomic updates. "Three nines" durability over one day is perfectly acceptable. Use a consistent hash ring, no replication, and add nodes until you achieve the performance required.
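
i.e. something like this toy ring in Python (node names made up; adding a node only remaps roughly 1/N of the keys):

    import bisect, hashlib

    def _h(key):
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

    class HashRing:
        def __init__(self, nodes, vnodes=64):
            # Virtual nodes smooth the distribution across physical nodes.
            self.ring = sorted((_h(f"{n}#{v}"), n)
                               for n in nodes for v in range(vnodes))
            self.keys = [h for h, _ in self.ring]

        def node_for(self, session_id):
            # First ring position at or after the key's hash, wrapping around.
            i = bisect.bisect(self.keys, _h(session_id)) % len(self.ring)
            return self.ring[i][1]

    ring = HashRing(["sess-1", "sess-2", "sess-3"])
    print(ring.node_for("some-session-id"))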


It’s easier if you have one lookup key for a session, but often that’s not sufficient: consider terminating sessions by email address as well as by session id. The other issue is that you still have every service making network calls at high rates that require low latency, since they’re often inline with the UX. So you’re relying more on a chain of IO between services, which is nearly impossible to performance-troubleshoot with today’s tooling.

