
"structured query language" is actually a backronym, SEQUEL is indeed a programming language and the only mainstream 4GL. consider the output of the compiler (query planner) is a program with specific behavior, just that your sql code is not the only source - the other inputs are the schema and its constraints and statistics. it's an elegant way to factor the sourcecode for a program, I wonder if Raymond Boyce didn't die young what kind of amazing technology we might have today.

the best implementation of structured logging I've seen is dotnet build's binlogs (https://msbuildlog.com); I would love to see it evolve into a general-purpose logging solution

market research shows that 100% of the people interested in this style of development are mac users


I wonder too - for a DNS query do you ever need keepalive or chunked encoding? HTTP/1.0 seems appropriate and http2 seems like overkill


DNS seems like exactly the scenario where you would want http2 (or http/1.1 pipelining, but nobody supports that). You need to make a bunch of DNS requests at once and don't want to wait a round trip before making the next one.
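
roughly what that looks like in practice - a sketch assuming the httpx package with its http2 extra installed (`pip install httpx[http2]`), using cloudflare's JSON DoH endpoint purely for illustration:

    # several DoH lookups multiplexed over one connection, each fired as soon
    # as its name is known rather than waiting on the previous response
    import asyncio
    import httpx

    async def resolve(client: httpx.AsyncClient, name: str) -> dict:
        r = await client.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": name, "type": "A"},
            headers={"accept": "application/dns-json"},
        )
        r.raise_for_status()
        return r.json()

    async def main() -> None:
        names = ["example.com", "example.org", "example.net"]
        async with httpx.AsyncClient(http2=True) as client:
            # all three requests share one TCP+TLS connection; none waits for
            # another's response before being sent
            answers = await asyncio.gather(*(resolve(client, n) for n in names))
        for name, ans in zip(names, answers):
            print(name, [a["data"] for a in ans.get("Answer", [])])

    asyncio.run(main())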


ok, multiple requests make sense for keepalive (or just support a "batch" query - it's http already, why adhere so tightly to the udp protocol?)

http/1.0 w/ keepalive is common (amazon s3, for example) and a perfectly suitable, simple protocol for this


Keepalive is not really what you want here.

For this use case you want to be able to send off multiple requests before receiving their responses (you want to prevent head-of-line blocking).

If anything, keepalive is probably counterproductive. If that is your only option it's better to just make separate connections.


makes sense, but I would still prefer to solve that problem with "batch" semantics at a higher level rather than depend on the wire protocol to bend over backwards


The problem with batch semantics is you do have to know everything up front. You can't just do one request and then another 20 ms later.

For DNS this might come up while parsing a document. E.g. in HTML, first you see a <script> tag, fire off the DNS request for its domain, and go back to parsing. Before you get the DNS result you see an <img> tag for a different domain and want to fire off a DNS request for that too. With a batch method you would have to wait until you have all the domain names before sending off the request (this matters even more if you are receiving the file you are parsing over the network and don't know whether the next packet containing the next part of the file is 1ms away or 2000ms).


clearly dns requests ought to be batched in this scenario, but we can imagine a smarter mechanism than http2 multiplexing to do it

the problem with relying on the wire protocol to streamline requests that should've been batched is that it lacks the context to do it well
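
purely as a sketch of those higher-level semantics - no real DoH server exposes anything like this, and the endpoint URL and payload shape are made up:

    # hypothetical batch endpoint: one request carries every name, so the
    # server has full context and can plan its own upstream fan-out
    import httpx

    def resolve_batch(names: list[str]) -> dict[str, list[str]]:
        r = httpx.post(
            "https://dns.example/batch-query",      # made-up endpoint
            json={"type": "A", "names": names},
        )
        r.raise_for_status()
        return r.json()  # e.g. {"example.com": ["93.184.216.34"], ...}

    # answers = resolve_batch(["example.com", "example.org", "example.net"])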



microsoft's own stuff never seems to be what gets momentum. there's a strong aftermarket for better ways, like BCB and Delphi back in the borland era. the more things change, the more they stay the same!


not to be confused with jq for querying json?


pleasant contradiction to betteridge's law


some apps, when allowed to run in the background, cause a multi-second delay in global hotkeys. this happened on windows 10 too, but 11 pushes way more apps by default so it's more likely to hit. you can hunt down the culprit and remove its permission to run in the background, or just deny them all (I've yet to experience a negative consequence from that).


the question was why not use encryption (sqids/hashids/etc) to secure publicly exposed surrogate keys - I don't think this reply is on point... surrogate keys ideally are never exposed (for a slew of reasons beyond just leaking information), so securing them when they do have to be exposed is a perfectly reasonable thing to do (as seen everywhere on the internet). otoh, using any form of uuid as a surrogate key is an awful thing to do to your db engine (it makes its job significantly harder for no benefit).
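
a minimal sketch of what that looks like, assuming the sqids python package (`pip install sqids`):

    # the db keeps a plain integer surrogate key; the outside world only
    # ever sees an opaque string
    from sqids import Sqids

    sqids = Sqids(min_length=8)          # pad short ids so they look uniform

    public_id = sqids.encode([42])       # e.g. something like "zUWOgKxe"
    row_id = sqids.decode(public_id)[0]  # back to the integer for the WHERE clause
    assert row_id == 42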

> You've embrittled your system.

this is the main argument for keeping surrogate keys internal - they really should be thought of like pointers, and dangling pointers outside of your control are brittle. ideally anything exposed to the wild that points back to a surrogate key decodes with extra information you can use to invalidate it (like a safe pointer!)
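
a stdlib-only sketch of that idea (the secret, the packing format, and the generation check are all made up for illustration):

    # the public token packs the row id plus a generation counter and a mac,
    # so a stale or forged token can be rejected without ever dereferencing it
    import base64, hashlib, hmac, struct

    SECRET = b"server-side-secret"  # hypothetical key; store and rotate it properly

    def issue(row_id: int, generation: int) -> str:
        body = struct.pack(">QI", row_id, generation)   # 8-byte id, 4-byte generation
        tag = hmac.new(SECRET, body, hashlib.sha256).digest()[:8]
        return base64.urlsafe_b64encode(body + tag).decode().rstrip("=")

    def redeem(token: str, current_generation: int) -> int | None:
        raw = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))
        body, tag = raw[:12], raw[12:]
        if not hmac.compare_digest(tag, hmac.new(SECRET, body, hashlib.sha256).digest()[:8]):
            return None                                 # forged or corrupted token
        row_id, generation = struct.unpack(">QI", body)
        if generation != current_generation:
            return None                                 # dangling pointer, invalidated
        return row_id

    token = issue(42, generation=3)
    print(redeem(token, current_generation=3))  # 42
    print(redeem(token, current_generation=4))  # None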

