
I think the send/recv with a timeout example is very interesting, because in a language where futures start running immediately without being polled, the situation is likely to be the other way around. send with a timeout is probably safe (you may still send after the timeout has fired, which you might be sad about, but the message isn't lost), while recv with a timeout is probably unsafe, because you might read the message out of the channel but then discard it because you selected the timeout completion instead. And the fix is similar: you want to select either the timeout or 'something is available' from the channel, and if you select the latter you can peek to get the available data.
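To make the fix concrete, here is a minimal asyncio sketch (the Channel class and every name in it are invented for illustration, not any real library's API): the receiver races the timeout against a readiness signal rather than against the dequeue itself, so abandoning the wait can't lose a message.

    import asyncio
    from collections import deque

    class Channel:
        """Toy single-consumer channel with a readiness signal,
        so receivers can wait without consuming."""
        def __init__(self):
            self._items = deque()
            self._ready = asyncio.Event()

        def send(self, item):
            self._items.append(item)
            self._ready.set()

        async def wait_readable(self):
            await self._ready.wait()

        def pop(self):
            item = self._items.popleft()
            if not self._items:
                self._ready.clear()
            return item

    async def recv_or_timeout(ch, timeout):
        try:
            # Abandoning this wait on timeout is safe: nothing has been
            # dequeued yet, so no message can be lost here.
            await asyncio.wait_for(ch.wait_readable(), timeout)
        except asyncio.TimeoutError:
            return None
        return ch.pop()

The unsafe version would await the dequeue itself under the timeout, where a cancellation can land after the item has already been removed.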


Thanks, that is a great point.


Isn't this exactly what cancellation-safety is all about?


Single-core performance isn't just clock frequency: it has to be multiplied by average IPC, and really it's more complicated still, since you have to account for factors like new SIMD instructions. Effective IPC improvements are where a significant fraction of single-core speedup came from in this period.
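As a back-of-the-envelope illustration (all numbers below are made up):

    # single-core throughput ~ clock * average IPC,
    # ignoring SIMD width, memory behaviour, etc.
    old_perf = 3.0 * 1.0   # 3.0 GHz at 1.0 instructions/cycle
    new_perf = 3.5 * 2.5   # 3.5 GHz at 2.5 instructions/cycle
    print(f"{new_perf / old_perf:.2f}x effective vs {3.5 / 3.0:.2f}x clock")
    # 2.92x effective vs 1.17x clock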


What?

The visitor pattern is a technique for dynamic dispatch on two values (typically one represents 'which variant of data are we working with' and the other 'which operation are we performing'). You would not generally use that in recursive descent parsing, because when parsing you don't have an AST yet, so 'which variant of data' doesn't make sense, you are just consuming tokens from a stream.
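For illustration, here is the pattern in a Python sketch (names invented): the data variants and the operations vary independently, and accept() performs the double dispatch.

    class Num:
        def __init__(self, value): self.value = value
        def accept(self, visitor): return visitor.visit_num(self)

    class Add:
        def __init__(self, left, right): self.left, self.right = left, right
        def accept(self, visitor): return visitor.visit_add(self)

    class Evaluator:  # one operation over the variants...
        def visit_num(self, node): return node.value
        def visit_add(self, node): return node.left.accept(self) + node.right.accept(self)

    class Printer:    # ...and another, added without touching the node classes
        def visit_num(self, node): return str(node.value)
        def visit_add(self, node): return f"({node.left.accept(self)} + {node.right.accept(self)})"

    ast = Add(Num(1), Num(2))
    print(ast.accept(Evaluator()))  # 3
    print(ast.accept(Printer()))    # (1 + 2)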


> you are just consuming tokens from a stream.

My guy... Do you think that parsers just, like, concat tokens into tuples or something? Do you not understand that after lexing you have tokens (which are a "type") and AST node construction (an "operation"), and that the grammar of a language is naturally a graph? Like, where else would you get the "recursion" from?

If that doesn't make sense I invite you to read some literature:

> makeAST(): asks the tokenizer for the next token t, and then asks t to call the appropriate factory method: the int token and the id token call makeLeaf(), the left parenthesis token calls makeBinOp(), all other tokens should flag an error! Does the above "smell" like the visitor pattern to you or not? Who are the hosts and who are the visitors?

https://www.clear.rice.edu/comp212/02-fall/labs/11/


OK, I might be wrong about the visitor pattern, but what I really did not like is using the accept() and visitBlah() machinery to execute AST nodes: https://craftinginterpreters.com/representing-code.html#the-...

I did continue reading the book (I'm not the original author of that reply), but I do think it is distracting for newbies. I had to come back to this page over and over to refresh my memory of the pattern, because I usually read one chapter or a few sections every week, so each time I had to remind myself how this visitBlah() and accept() pair works. I really think a big switch() (or anything that works but is simpler) would be a lot easier to understand.

The other reason I dislike this kind of stuff is that I have someone on the team who really likes to use patterns for every piece of code. It's kinda difficult to tell whether it is over-engineering or not, but my principle is that intuition always beats fewer lines of code (or DRY), unless it is absurdly more lines of code or repetition. And to test that principle you just grab a newbie and see which one makes more sense to them.


> I really think a big switch() (or anything that works but is simpler) would be a lot easier to understand.

It's much easier conceptually to implement this using recursion instead of a while loop and a token stack (it's basically DFS). So I disagree with you there.

> The other reason I dislike this kind of stuff is that I have someone on the team who really likes to use patterns for every piece of code. It's kinda difficult to tell whether it is over-engineering or not, but my principle is that intuition always beats fewer lines of code (or DRY), unless it is absurdly more lines of code or repetition. And to test that principle you just grab a newbie and see which one makes more sense to them

I'm with you - I really don't give a shit about patterns (which was my whole original point - who cares). But that last part I don't agree with - systems code (like a parser) doesn't need to be legible to a noob. Of course we're talking about a textbook, so you're probably right, but like I said, most production parsers and AST traversals are written exactly this way. So anyone learning this stuff hoping to get a job doing it should just get used to it.


> so every time I had to remind myself how this visitBlah() and accept() pair works. I really think a big switch()…

This is just an alternative implementation of the visitor pattern. Whether you implement it using dynamic dispatch, a switch, or a chain of ifs, it's all the same pattern…
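For instance, with tuple-tagged AST nodes the same dispatch can be a single match statement (Python 3.10+; the node shapes here are invented for the sketch):

    def evaluate(node):
        match node:
            case ("num", value):
                return value
            case ("add", left, right):
                return evaluate(left) + evaluate(right)

    print(evaluate(("add", ("num", 1), ("num", 2))))  # 3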


Nope, you had it right.

Visitor thoroughly confuses me in the context of parsing (maybe in all contexts).

visit and accept are not the verbs I want to be seeing in the code. I want to see then, or, and try.


Parsers "accept" or "reject" programs. It's completely standard language.


Alright, so where can I read more about the visitor pattern's "reject" method?


> all other tokens should flag an error

I.e. the link I posted above.


I see that you've found an example of how recursive descent parsing actually can be implemented with the visitor pattern, which I've never come across before, and I didn't read it carefully enough to understand the motivation. But that doesn't mean they are the same thing: the recursive descent parsers I've seen before just inspect which tokens are seen and directly construct AST nodes.

As an addendum, the reason I don't understand the motivation is that the visitor pattern in the way I described it is useful when you have many different operations to perform on your AST. If you have only one operation on tokens - parsing into an AST - I'm not sure why you need dynamic dispatch on a second thing, the first thing being the token type. Maybe the construction is that different operations correspond to different 'grammar rules'?


> why you need dynamic dispatch on a second thing

You're overindexing on the maximally generic visitor pattern. If you have one type of visitor but nonetheless dispatch based on type, that's still the visitor pattern.

EDIT: to be honest, who even cares. My initial point was: why in the hell would you stop reading a book because a particular "pattern" offends you? And I'll reassert it here: who cares whether a recursive descent parser fits the exact definition of the visitor pattern or not - you have members of a class that do stuff (construct AST nodes) and possibly track other data and then call other members. I usually call that a visitor class even if it's the only one that ever exists <shrug>


OK, that's true, but my claim is that recursive descent parsing does not have to use the visitor pattern, and indeed using recursive descent parsing is not the same as using the visitor pattern (you can do the former without the latter, and I claim that you usually do).


> just inspect which tokens are seen and directly construct AST nodes

I'll repeat myself: this is not possible because you need to recursively construct the nodes (how else would you get a tree...).


I think I'm missing something here. If you have a grammar rule R with children A and B, and a function in your recursive descent parser that corresponds to R, why can R not call the parser functions for A and B, which return AST nodes themselves, and then construct another AST node using the results? Where was the visitor pattern required here?
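A minimal sketch of that shape, using a made-up two-rule grammar (expr := term ('+' term)*, term := NUMBER): the rule functions call each other and build nodes directly, with no accept()/visit() indirection anywhere.

    def parse_expr(tokens):              # the "R" rule
        node = parse_term(tokens)        # child "A"
        while tokens and tokens[0] == "+":
            tokens.pop(0)
            right = parse_term(tokens)   # child "B"
            node = ("add", node, right)  # construct the node from the results
        return node

    def parse_term(tokens):
        return ("num", int(tokens.pop(0)))

    print(parse_expr(["1", "+", "2"]))
    # ('add', ('num', 1), ('num', 2))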


Me too. No-one's denying that recursion is happening. We're just not sure about it being synonymous with the Visitor Pattern.


I would add Vernor Vinge's A Fire Upon the Deep and A Deepness in the Sky to these suggestions


From the headline I can't decide if it was a study in astronomy or gastronomy


It seems like it originated in the ML dialect of the Isabelle proof assistant in the mid-90s: https://web.archive.org/web/20190217164203/https://blogs.msd...


I'm not sure t-strings help here? Unless I misread the PEP, they still eagerly evaluate the interpolations.

There is an observation that you can use `lambda` inside to delay evaluation of an interpolation, but I think this lambda captures any variables it uses from the context.


> There is an observation that you can use `lambda` inside to delay evaluation of an interpolation, but I think this lambda captures any variables it uses from the context.

Actually lambda works fine here

    >>> name = 'Sue'
    >>> template = lambda name: f'Hello {name}'
    >>> template('Bob')
    'Hello Bob'


> An early version of this PEP proposed that interpolations should be lazily evaluated. [...] This was rejected for several reasons [...]

Bummer. This could have been so useful:

    statement_endpoint: Final = "/api/v2/accounts/{iban}/statement"
(Though str.format isn’t really that bad here either.)


There was a very, very long discussion on this point alone; there are a lot of weird edge cases, and it led to weird syntax. The high-level idea was to defer lazy evaluation to a later PEP if it's still needed badly enough.

There are a lot of existing workarounds in the discussions if you are interested enough in using it, such as using lambdas and t-strings together.


Would be useful in that exact case, but would be an absolute nightmare to debug, on par with using global variables as function inputs


Yeah, to be honest, every time this comes to mind I think “wow, this would be really neat!”, then realize just using .format() explicitly is way easier to read.


I do think that people are far too hesitant to bind member functions sometimes:

  statement_endpoint: Final = "/api/v2/accounts/{iban}/statement".format
(it's likely that typecheckers suck at this like they suck at everything else though)


> I'm not sure if t-strings help here?

That's correct, they don't. Evaluation of t-string expressions is immediate, just like with f-strings.

Since we have the full generality of Python at our disposal, a typical solution is to simply wrap your t-string in a function or a lambda.

(An early version of the PEP had tools for deferred evaluation but these were dropped for being too complex, particularly for a first cut.)
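For illustration, a sketch of the lambda approach (assuming Python 3.14's string.templatelib API; render() is a hypothetical helper, since a Template doesn't render itself):

    from string.templatelib import Template

    def render(template: Template) -> str:
        # hypothetical helper; ignores !conversions for brevity
        return "".join(
            part if isinstance(part, str) else format(part.value, part.format_spec)
            for part in template
        )

    greet = lambda name: t"Hello {name}"  # interpolations evaluated per call
    print(render(greet("Bob")))   # Hello Bob
    print(render(greet("Sue")))   # Hello Sue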


And that actually makes "Template Strings" a misnomer in my mind. I mean, deferred (and repeated) evaluation of a template is the thing that makes a template a template.

Kinda messy PEP, IMO, I'm less excited by it than I'd like to be. The goal is clear, but the whole design feels backwards.


Naming is hard; for instance, JavaScript also has its own template strings, which are also eagerly evaluated.


You could do something like t"Hello, {'name'}" (or wrap 'name' in a class to make it slightly less hacky).


A lambda only captures variables that haven't been passed in as arguments; parameters shadow the enclosing scope.


So I think the answer revolves around the composition law. In other words, is fmap (fmap set f) g equal to fmap set (g . f) for any f and g?

I think this holds only if f and g are pure (or maybe just if g is pure?). So maybe if you ignore unsafe operations in Haskell it's true? It's certainly not true in F#/OCaml. Imagine that f is fun _ -> 0 and g is fun _ -> next_int_from_mutable_counter (). Then fmap set (g . f) doesn't change the size of the set, while fmap (fmap set f) g produces a singleton set.
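Transcribed to Python with sets (the counter plays the impure g; names are mine):

    from itertools import count

    counter = count()
    f = lambda _: 0
    g = lambda _: next(counter)   # impure: fresh value on every call

    s = {1, 2, 3}
    composed = {g(f(x)) for x in s}                # fmap (g . f): 3 distinct values
    two_steps = {g(y) for y in {f(x) for x in s}}  # fmap g . fmap f: inner set is {0}
    print(len(composed), len(two_steps))           # 3 1 -- the composition law fails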


Let's assume all the functions are pure. Otherwise, even Maybe would not be a functor.


That's fair. Then I think, not considering any particular programming language, the answer depends on how Set determines equality of elements. If it uses some implementation of equality/comparison for type e that can say that some values of type e are equal that are nevertheless distinguishable, then Set isn't a functor.


Very good. I think that's the essence of the answer. The functor laws have an implicit dependency on the substitution property of equality, which guarantees that x = y implies f(x) = f(y).


I hadn't heard of this property before, thanks for introducing me to it!

I thought it sounded surprising for a type e to behave this way when I wrote my comment, but now I realise that it might be common in any implementation of equals that hides internal state, a simple example being a set/map datatype whose equality implementation ignores insertion order but whose iteration order depends on it.
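A Python sketch of that kind of type (names invented): equality ignores a field that a later function can still observe, so the deduplication in fmap g . fmap f loses information that fmap (g . f) keeps.

    class Tagged:
        """Equality ignores the tag, but the tag stays observable."""
        def __init__(self, key, tag):
            self.key, self.tag = key, tag
        def __eq__(self, other):
            return self.key == other.key   # tag deliberately ignored
        def __hash__(self):
            return hash(self.key)

    f = lambda n: Tagged(0, tag=n)   # distinct inputs -> "equal" values
    g = lambda t: t.tag              # observes the hidden state

    s = {1, 2}
    composed = {g(f(x)) for x in s}                # {1, 2}
    two_steps = {g(v) for v in {f(x) for x in s}}  # inner set dedups to one element
    print(composed, two_steps)                     # sizes differ -> not a functor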


I suppose by enumerations you mean sum types. I would argue that these are pretty fundamental? You have product types (structs/records/tuples) - a value is made up of x and y - and sum types - a value can be either X or Y. I think the combination of these is what you need to precisely express any concrete data type.
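In Python terms (an illustrative sketch, names invented):

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Cash:           # product: amount AND currency
        amount: float
        currency: str

    @dataclass
    class Card:           # product: number AND expiry
        number: str
        expiry: str

    Payment = Union[Cash, Card]   # sum: a payment is Cash OR Card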


I did mean sum types, variants, etc. It's not really clear what I meant by representing the data, but I'm referring to type inference: SML can't solve the problem, and Lisp doesn't have it.


I'm curious what you have in mind when it comes to ways in which OCaml is insistent on being functional while F# isn't. After all, OCaml has mutable data structures, mutable record fields, for loops and so on. Is it just that more libraries assume immutability and use functional abstractions?


To be fair, my knowledge of F# is a bit basic, but I meant stuff like classes (including abstract ones), interfaces, and the ingrained interop with C#.

