
I compute the resized windows and columns on mouseup to imitate how the Acme text editor does resizing, but the event handler which computes the resize could very well be called on mousemove if preferred. Maybe it'd be worth adding that as a customization in the library interface.
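
If I did add it, the knob might look something like this sketch (recomputeLayout and the resizeOn option are hypothetical names, not the current interface):

    // Hypothetical sketch, not Omnino's actual API: let the caller choose
    // whether resizing happens on mouseup (Acme-style) or live on mousemove.
    declare function recomputeLayout(x: number, y: number): void;

    function attachResizeHandler(handle: HTMLElement,
                                 resizeOn: "mouseup" | "mousemove"): void {
      let dragging = false;
      handle.addEventListener("mousedown", () => { dragging = true; });
      document.addEventListener("mousemove", (e) => {
        if (dragging && resizeOn === "mousemove") recomputeLayout(e.clientX, e.clientY);
      });
      document.addEventListener("mouseup", (e) => {
        if (dragging && resizeOn === "mouseup") recomputeLayout(e.clientX, e.clientY);
        dragging = false;
      });
    }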


Thanks. Which browser are you using? I noticed this issue in Firefox but thought I fixed it. I've tested it using the latest versions of Chrome and Firefox and it seemed to work.


Safari.


Omnino does not implement the "right-click to load" functionality of Acme, as it's meant to be a general-purpose library. However, I have implemented this functionality on my personal website and plan to open-source this code as a separate library soon. You can see it here: https://spelunca.xyz/


Very cool. I keep middle-clicking the Delwin button, which my browser interprets differently than I want.


Thanks. I'm not sure how many people have three mouse buttons, but it wouldn't hurt to make the buttons respond to middle-clicks the same way they do to left-clicks.
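
Something like this sketch would probably do it; the deleteWindow handler and the selector are hypothetical, not Omnino's API:

    declare function deleteWindow(e: Event): void;

    const btn = document.querySelector<HTMLElement>(".delwin")!;
    btn.addEventListener("click", deleteWindow);  // left click
    btn.addEventListener("auxclick", (e) => {
      if (e.button === 1) {   // 1 = middle button
        e.preventDefault();   // suppress the browser's default middle-click action
        deleteWindow(e);
      }
    });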


what is it with programmers and the need to defend a combination of syntax, semantics, and runtime? C++ is a programming language with advantages and disadvantages. end of story.


> C++ is a programming language with advantages and disadvantages. end of story.

Sounds like the beginning of a story, during which you explain the advantages and disadvantages, determine where C++ is best used, and compare it to other options.


would you also criticize newton's principia mathematica because he expressed the concept of infinitesimals using geometry instead of using leibniz's or lagrange's arguably clearer notations?

K&R was written in a different time, when computing had stricter (but not really different) constraints, but it's still arguably the clearest exposition of the language around.

considering computing hasn't changed that much since K&R was written, it's unlikely your idea of good code differs much from what was done 40 years ago. for example, functional programming, which is the popular dogma today, was invented around that time.

take the good (it's not hard to find in a book like K&R), discard the (perceived) bad, and move on with your life.


hand-roll it. a 3-minute programming task doesn't warrant pulling in a graph of dependencies a la leftpad.


But hand-rolling is also hard and annoying to support if you want long-name and short-name options, out-of-order arguments with potential spaces in odd places that still work as most users will expect, along with reasonable usage-instruction documentation.


See comment about Thor


Never do this. There's a basket of edge cases and weirdness that you'll miss in your three-minute re-implementation, and they aren't worth the pain you'll have using or debugging it later on.


There's a better alternative: don't have complex usage in the first place. You can get all the functionality you want by having multiple executables calling a common library.

Consider netcat. You read the man page and try to do something following its advice. Often enough, it just doesn't work, because it's trying to cover a range of unrelated usages through a single entrypoint.

It's easy to set up multiple exes cleanly. Have a non-executable module that contains the functions your application requires (e.g. call it lib.rb). For each distinct usage, create a separate executable script that imports lib.rb, extracts a, b and c from the arguments for just that usage, and then calls lib.usage(a, b, c).
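
A TypeScript rendering of the same shape, with hypothetical names (lib.ts standing in for lib.rb):

    // lib.ts -- shared, non-executable module
    export function compress(input: string, level: number, out: string): void {
      // ... the actual work lives here ...
    }

    // compress-file.ts -- one small executable per distinct usage
    import { compress } from "./lib";

    const [input, out] = process.argv.slice(2);  // only the args this usage needs
    compress(input, 9, out);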

Users will find this easy to discover. You will find it easy to maintain, even compared to dedicated parsing DSLs.

We don't special-case functions to do multiple things based on complex argument cases. We just create well-defined functions. We should think of executables in the same terms.


The edge cases are really only in the usage models. I've found option libraries all fairly poor, for the most part because they expose a global view of the options rather than an iterator view.

If args are in a structure that can be peeked and shifted, you're in a good place for context-sensitive options. It's just a lexer.

My tools tend towards this syntax:

    cmd [<global opt>...] subcommand [<local opt>...]
Composing the data structure for this with option libraries is often more work than iterating over a peekable stream of words.
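
For example, a peek/shift view over the args makes that grammar almost trivial (hypothetical, Node-flavored TypeScript):

    const args = process.argv.slice(2);
    const peek = () => args[0];
    const shift = () => args.shift();

    // global options: consume leading flags until the first bare word
    const globalOpts: string[] = [];
    while (peek()?.startsWith("-")) globalOpts.push(shift()!);

    // the next word is the subcommand; everything after stays shiftable,
    // so the subcommand can lex its own local options in context
    const subcommand = shift();
    const localArgs = args;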


As always, this depends. In principle I agree; for example, I wrote pastel-cli (https://github.com/piotrmurach/pastel-cli), which doesn't use any parser to figure out arguments. However, it's super basic; for anything more complex I would look for more powerful parsers. Lexing command input, as much fun as it is, can be a very thorny issue.


looks like the fox is guarding the hen-house.


i wonder if the size could be reduced by replacing the yacc code with a hand-written parser.


There is no yacc code in either V7 or mJS.

V7 uses a hand-written recursive-descent parser. Initially, it used ordinary C functions, and that created a problem on systems with a small stack size. E.g. each '(' starts statement parsing from the top, so 1 + (2 + (3 + (4 + 5))) consumed stack, and sometimes resulted in a stack overflow in e.g. interrupt handlers or network callbacks.

Therefore we rewrote the recursive descent using C coroutines, and that is an extremely sophisticated piece of work. See https://raw.githubusercontent.com/cesanta/v7/master/v7.c , search for #line 1 "v7/src/parser.c"
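
The coroutine rewrite itself is too big to excerpt, but the underlying idea is just to move parser state from the native stack to the heap. A toy illustration (not v7's code): summing arbitrarily nested 1 + (2 + (3 + ...)) with an explicit stack, so nesting depth costs heap instead of call stack:

    function evalSums(src: string): number {
      const stack: number[] = [0];  // one partial sum per paren level
      let digits = "";
      const flush = () => {
        if (digits) { stack[stack.length - 1] += parseInt(digits, 10); digits = ""; }
      };
      for (const ch of src) {
        if (ch >= "0" && ch <= "9") digits += ch;
        else if (ch === "(") { flush(); stack.push(0); }
        else if (ch === ")") { flush(); const inner = stack.pop()!; stack[stack.length - 1] += inner; }
        else flush();  // '+' or whitespace just ends a number
      }
      flush();
      return stack[0];
    }
    // evalSums("1 + (2 + (3 + (4 + 5)))") === 15, at any nesting depth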

mJS, on the other hand, uses the lemon parser generator, the one from the SQLite project. It generates a LALR parser, which is quite efficient in terms of memory.


So it is API compatible with V7?


Nope. It's similar, though, because some of the concepts, and some of the code, were reused. mJS does not really need an embedding API; the intent is that FFI is used.


I used to think that being a software engineer meant you were intelligent, and worked hard to earn the title, but after working in the industry for 6 years, I decided that, large-scale software project management problems aside, industrial computing just isn't that hard relative to math and the sciences.

Once you know a dozen languages and understand the running themes of computing, it's all sort of old-hat. Ironically, as the field has been flooded with young, inexperienced devs to satisfy market demand, the titles of "software developer" or "nerd" have become social badges to indicate one's intelligence and cutting-edge-ness. We use 50-year-old operating systems and call ourselves innovators.

Maybe, like Groucho Marx, I just don't want to belong to any club that would accept me as a member, but I think that if you're looking to level-up intellectually, studying math and science, but especially math, is the way to do it. I was never good at math, but I've spent the past year teaching myself calculus and the struggle has been well worth the expansion in my world-view.


I think "industrial computing just isn't that hard relative to math and the sciences" is only true if you are working on a limited set of applications.

In my job as a software engineer, I've had to: Invent new algorithms and prove their correctness. Formalize datatypes as CRDTs. Design (and model check) a specialized distributed consensus algorithm. Read and implement _many_ algorithms from academic papers, including motion vector estimation. And build complex statistical models.

All of the above require either formal use or the actual practice of mathematics. Sometimes very advanced mathematics including multivariate calculus, statistics, graph theory, number theory, category theory, and so on.

I guess what I'm saying is, software engineering can easily be as challenging as math and science, because at its "peak", it is math and science. Not all mathematicians and scientists work on hard problems. Not all software engineers do either. But don't be too quick to judge the field based on a limited sample size.


Can you elaborate on what you worked on? Was it part of academia or industry? I'd guess it was embedded systems, as that domain seems pretty open to formalization (being pretty low-level) and often demands high reliability.


Most of my formal work was for distributed systems. Some of it was for video transcode and packaging work. All of it was for industry.

For example, I implemented a distributed adaptive-bitrate video packager whose core synchronization algorithm I needed to both develop and then, for my own comfort, formally prove. I did a proof of correctness by exhaustion and, by induction, a proof that it achieved consensus in a minimal number of steps.

This is pretty typical for any "distributed" algorithm because intuiting correctness for concurrent and distributed stuff is ... really hard. That's why people prove garbage collection algorithms.

When designing large distributed systems, a thorough understanding of statistics is required to understand workloads. How many transactions per second should this microservice support? Better pull out that Poisson distribution. What cache size do I need? Better grab a Zipf distribution, some sample data, and R. Want to understand the interplay of several factors on workload? I hope you are comfortable with multivariable calculus.
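
To make the Zipf point concrete, a back-of-the-envelope sketch (assumes skew s = 1 and a cache that perfectly holds the k most popular of n items):

    // expected hit ratio when item popularity follows Zipf(s) over n items
    function zipfHitRatio(n: number, k: number, s = 1.0): number {
      let total = 0, top = 0;
      for (let rank = 1; rank <= n; rank++) {
        const w = 1 / Math.pow(rank, s);
        total += w;
        if (rank <= k) top += w;  // probability mass of the k hottest items
      }
      return top / total;
    }
    // zipfHitRatio(1_000_000, 10_000) is roughly 0.68: caching 1% of the
    // items serves about two-thirds of the requests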

I don't face problems like these daily, but when I do I'm fucking glad for every inch of math I know. Which to be honest is still probably not enough.


Thanks for this thoughtful answer.


I'm mildly surprised you didn't know calculus before becoming a software engineer. I suddenly realized that I'd falsely assumed calculus would be common knowledge to all programmers, and now I'm wondering what else I've been assuming that isn't.


I taught myself programming after graduating from art school. I never took calculus or trigonometry even in high school, so my math skills are as basic as you can get as an adult.

Still, I'm currently trying to teach myself math because it seems like it will help me improve as a developer. It's very hard, though, to learn this stuff when you're 32.


Why is it any harder at 32 than at 22?


It could be age-related cognitive decline, real or imagined, but it could also be prioritizing any of the following over learning largely irrelevant mathematics: children and family, exercise, bills/finances/health care, social obligations, politics.

Plus, it's much more likely for a 22 year old to be able to go to college / grad school and be required to learn math.


I'd assume 8 out of 10 web developers aren't proficient in calculus, especially the ones who switched careers from non-STEM fields or those who graduated from coding bootcamps.


Depends on what you're doing. My current work project is an embedded device that collects various sensor data. It does a lot of math. The data needs to be filtered, unit-converted, averaged, and integrated, all in real time. I don't have the luxury of a fast processor and have to employ efficient numeric algorithms that can save clock cycles. I use regression analysis to approximate curves from datasheets and statistical methods to check my results.

Some people actually do need to use math for their professional work. Is this capital-H hard? No. But you do need an education in the fundamentals to know what you're doing.
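
As a sketch of the kind of per-sample pipeline described above (hypothetical parameters, TypeScript standing in for the embedded code): an exponential moving average for filtering plus trapezoidal integration, both O(1) per sample:

    class SensorChannel {
      private ema = 0;   // filtered value, in engineering units
      private prev = 0;  // previous filtered sample
      integral = 0;      // running integral of the filtered signal
      constructor(private alpha: number,  // smoothing factor in (0, 1]
                  private scale: number,  // raw counts -> engineering units
                  private dt: number) {}  // sample period, seconds
      sample(raw: number): void {
        const x = raw * this.scale;                             // unit conversion
        this.ema += this.alpha * (x - this.ema);                // low-pass filter
        this.integral += this.dt * (this.prev + this.ema) / 2;  // trapezoid rule
        this.prev = this.ema;
      }
    }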


You can combine software engineering and math. Try formal methods. Beautiful abstract algebra, like Galois connections, and super-useful. An appetizer:

* http://www.concrete-semantics.org/

* http://dl.acm.org/citation.cfm?id=555142
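
For anyone wondering: a Galois connection between two posets is a pair of monotone maps f : A -> B and g : B -> A satisfying (my gloss, not from the linked texts)

    f(a) \sqsubseteq b \iff a \le g(b)

i.e. f and g are "almost inverses". This is how abstract interpretation relates a concrete semantic domain to an abstract one.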


> I think that if you're looking to level-up intellectually, studying math and science, but especially math, is the way to do it.

I couldn’t agree more.


i think software is like carpentry. more about wisdom than smarts


iirc dennis ritchie was an applied mathematician.

