Hacker News: bytasv's comments

We’ve been building a design/UI development app with first-class support for code generation (React + Next.js) - https://www.kubi.design

Its UI/UX is pretty similar to Figma’s (you can import from Figma). We also support integrating your own design system, which means you can use actual code components on the design canvas, with all their interactions and business logic. (We currently have the Ant Design and Mantine libraries integrated.)

We are looking for beta testers who want to speed up UI development or simply save money on front-end development.


Even if the process they have is working, that doesn't mean it can't be improved. And if you can clearly see the parts that could be improved, because you've already stepped on the same rake many times, why not do it? Of course, as a newcomer trying to change things you're always going to be met with strange looks, but that's the other side of it: how to implement the changes so that everyone is happy...


@1. You don't need threads if workers are enough for you; that's how you should do computation-intensive tasks...


Computation-intensive tasks often take large amounts of data as input, and sharing data with a worker always has to be done by serializing that data (in a message). So for large inputs this approach doesn't work: the main thread would block the CPU while serializing the messages.

But my biggest problem with workers is that they don't have an event loop, so I can't share asynchronous code between the main thread and the workers.


There is no need to serialize large amounts of data. The way it's designed to work in Node is that you use a stream. For example, let's say you have a multi-TB data dump in Amazon S3, you want to process it, and then upload a transformed multi-TB result set back to Amazon S3. (This is something I've worked on before.)

The way it works is you open a download stream from S3, pipe it into a Node.js transform stream, and then pipe that stream into an upload stream that uploads the data back to S3 using the multipart upload API.

The Node.js design is very much like using Unix pipes. You can pipe a huge multi-TB file through grep without blocking anything. The data just streams from disk into the grep process, grep filters it down to the lines that match, and then streams the results onto the screen.

Computation on huge streams in Node.js works the same way. Your event loop remains unblocked even when operating on a stream terabytes in size, because you are only ever touching a portion of the dataset at a time. Additionally, if you do it properly, your overall memory usage remains low, because you are exporting the data back out of the machine as fast as it comes in. I've used this technique to process streaming data many GB in size while keeping the Node process under 200 MB of memory from the system's perspective.

Recommended reading: https://nodejs.org/api/stream.html

Here is an example of an upload stream that I created for this use case, processing a large multi-TB data set and piping the result up to Amazon S3: https://www.npmjs.com/package/s3-upload-stream


For streaming, I can see that this can work.

But basically, what I wanted to do is implement a module that works as an index shared between threads (e.g., a search tree for fast lookup). However, since in Node.js all "threads" are separate processes, it is (AFAICT) impossible to make this efficient, as processes do not share data.


So in Node.js this would be accomplished by using a shared data store like Redis. For example, I run eight processes per c3.xlarge instance, and the instances share a Redis instance that contains data like that. Indexes in particular could be stored in Redis's hash structure.

Basically, Node.js is designed around the concepts of microservices and separation of concerns. Rather than doing everything in one giant, multithreaded, monolithic process, you break your service up into loosely coupled components that talk to each other via messaging and share common data stores. Some people really like this pattern (I'm a strong advocate of it myself) because it scales really, really well.
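The hash-index pattern might look something like this. `RedisStandIn` below is a hypothetical in-memory stand-in for a real Redis client (so the sketch is self-contained); with a real client, `hashSet`/`hashGet` would be HSET/HGET commands, and every Node process on the box would talk to the same server.

```javascript
// Hypothetical in-memory stand-in for a Redis client's hash commands.
class RedisStandIn {
  constructor() {
    this.hashes = new Map();
  }
  // Equivalent of HSET key field value.
  hashSet(key, field, value) {
    if (!this.hashes.has(key)) this.hashes.set(key, new Map());
    this.hashes.get(key).set(field, value);
  }
  // Equivalent of HGET key field (returns null when absent, like Redis).
  hashGet(key, field) {
    const hash = this.hashes.get(key);
    if (!hash || !hash.has(field)) return null;
    return hash.get(field);
  }
}

// Each worker process would build or consult the same index by key.
const store = new RedisStandIn();
store.hashSet('user-index', 'alice', 'user:1001');
store.hashSet('user-index', 'bob', 'user:1002');
console.log(store.hashGet('user-index', 'alice')); // user:1001
```

The point of the pattern is that the index lives outside any single process, so all eight workers see the same data without sharing memory.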


Well, the "index" was merely an example. Actually, what I want to do is implement persistent data structures (a.k.a. functional or immutable data structures) in a combination of javascript and C++. See [1]

[1] http://en.wikipedia.org/wiki/Persistent_data_structure
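The key property of a persistent structure can be sketched in plain JavaScript (the C++ part aside). In this minimal binary search tree, every insert returns a new root that shares untouched subtrees with the previous version, so old versions remain readable:

```javascript
// Persistent BST insert: never mutates, returns a new root that shares
// every subtree the insertion path didn't touch.
function insert(node, key) {
  if (node === null) return { key, left: null, right: null };
  if (key < node.key) {
    return { key: node.key, left: insert(node.left, key), right: node.right };
  }
  if (key > node.key) {
    return { key: node.key, left: node.left, right: insert(node.right, key) };
  }
  return node; // key already present: reuse the existing version
}

function contains(node, key) {
  while (node !== null) {
    if (key === node.key) return true;
    node = key < node.key ? node.left : node.right;
  }
  return false;
}

const v1 = insert(insert(insert(null, 5), 2), 8);
const v2 = insert(v1, 3); // v1 is untouched; v2 shares the subtree rooted at 8
console.log(contains(v1, 3), contains(v2, 3)); // false true
```

Because versions share structure, an insert allocates only the nodes along one root-to-leaf path; that sharing is also what makes these structures safe to hand between threads without copying, which is the part that wants shared memory.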


The `servicebus` module is a really cool way to coordinate events between microservices, especially if they don't necessarily "know" about each other.


It sounds like Node.js is not what you want, then.


You might agree with the author if you are working with a self-managing team where everyone is capable of delivering something daily.

But more often that's not the case:

1. "Members of the team already know what I did yesterday, because they can see the completed items in the "DONE" column on the Scrum Board." - this is not true, because in case someone is a slacker in a team and does nothing you won't be able to track that, you can't see progress of "doing nothing" unless two days in a row you hear the same person still "working on the same problem". Everyone who participates in a stand-ups knows that bad feeling when you are on a standup and you don't have to say anything about what you have done, you force yourself to fix that because that embarrasses you.

2. "This question is a little maddening, isn't it? I can tell you what I'm planning on doing today, like maybe finishing the task that I just told you I was halfway through, and then grabbing the next..." - this is to prevent multiple persons on jumping the same task, that's your daily planning.

3. "Waiting for the daily stand-up is a terrible way to deal with impediments." - again, nobody forces you to wait for a standup to deal with problems that arise, this is used to explain why were you spending so much time on fixing some small bug. For example - you spent whole day trying to fix a bug, but the fix itself was a "one-liner". After looking to your commit I could make a false assumption that you were slacking the whole day and only made a single "change" where on a standup you have time to explain what problems forced you to spent whole day on tracking down that bug... Another example - someone spends whole day on a feature but still doesn't finish that even though it's trivial to implement, he might either explain that he faced a problems that prevented him from implementing that feature faster or he won't have anything to say because he was slacking


Is "Countries represented" only from W15 or overall? If it's only from W15, could it be possible to know the list of all countries represented since the beginning of YC?


This is only representative of W15 companies. We don't have a list of countries overall, since it's not something we've always tracked. But that would certainly be interesting to see.


I'm also using it, but only on a side project. The primary reason for choosing ESLint over the other two is the ability to create your own rules.

I think that for whatever style rule you set, you should be able to add a lint rule for it, or else nobody will care about it.

Talking about the other two:

- JSLint is too restrictive in many cases (I've heard many call it too "Crockfordish"), and you don't have much control over how you write your code.

- JSHint is much like JSLint, but with many more options to customise how you want to write your code.
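To illustrate why custom rules are the differentiator: an ESLint rule is just a module whose `create` function returns AST visitor callbacks. Here's a minimal sketch of a hypothetical rule forbidding `console.log` calls (the rule name and forbidden pattern are illustrative, not from ESLint's built-in set):

```javascript
// Hypothetical custom ESLint rule: disallow console.log calls.
// Follows ESLint's rule module shape: meta + create(context) -> visitors.
const noConsoleLog = {
  meta: {
    type: 'suggestion',
    docs: { description: 'disallow console.log' },
  },
  create(context) {
    return {
      // Called by ESLint for every function-call node in the AST.
      CallExpression(node) {
        const callee = node.callee;
        if (
          callee.type === 'MemberExpression' &&
          callee.object.name === 'console' &&
          callee.property.name === 'log'
        ) {
          context.report({ node, message: 'Unexpected console.log.' });
        }
      },
    };
  },
};

module.exports = noConsoleLog;
```

Because a rule is just an object like this, any team style convention (naming, import ordering, house API usage) can be enforced mechanically rather than in code review.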


It would be really interesting to see the same test done with 2, and then maybe 3, dynos running; I wonder how well it would scale with more dynos. Is it possible to buy dynos for less than a month at a time?

