The "hide" link is right next to the "flag" link. Using flag instead of hide puts more strain on the mods, and is not the right thing to do for "this topic doesn't apply to my interests."
If your problem is that you have no means to control what other people find important enough to talk about on a public forum, in their spare time, or that the means at your disposal are insufficient to make other people saying things that make you uncomfortable go away... that isn't a problem that can or should be fixed. Hell, the desire you've expressed could uncharitably be interpreted as contributing to the very problem that has people around you discussing politics in the first place.
FWIW I agree with you and recognize that to be one of the reasons it frequently isn’t allowed.
I also think there are very few places with the power to foster meaningful dialogue with and among the people who build stuff in Silicon Valley. I have dozens of friends, coworkers, etc. who are in FAANG or the newer big tech companies, and all of them are extremely well paid, and most will insist they work for positive reasons. I believe that most of them believe in other people, and don't want to build a surveillance society or one that concentrates all wealth and power in a few hands.
For this reason, I think that some conversations on here are important to have - the impact technology is having on people outside the tech sphere, and the effect of our companies' leaders on the economy, geopolitics, and power generally. Mark Zuckerberg is a powerful player on the world stage. So are Paul Graham and Sundar Pichai. Davos just took place - leaders from major economies are seeking guidance from the people many of us here work for. Let nobody say they aren't participating in politics. Where you work matters; what you build matters. It's not tinkering around in people's garages anymore - they're building the infinity gauntlet and someone is gathering all the gems. The Death Star plans are on AWS.
To pretend otherwise is to deny one's responsibility - frequently profitable in the short term. In the long term, the pendulum tends to swing back.
But it is the right thing to do for "this topic violates HN guidelines both in letter and in spirit, as well as predictably causing low-quality discussion threads".
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
To the latter point: hundreds of comments in, and nobody has even brought up the intellectual-curiosity angle here (what limits are in place on the Federal government using data from Federal programs for law enforcement purposes? And does it matter if the program is administered by individual states?).
Instead it's just political rage bait, including citing the Rev Niemöller poem as if we're talking about Nazis.
(It used to be part of Internet culture that the moment you compared something mundane to the Nazis, you automatically lost the argument and were mocked mercilessly. We should bring that back.)
I've never had a good experience with individual metrics leaderboards. On one team we had a JIRA story point tracker shown on a TV by a clueless exec. Devs did everything they could to game the system, and tasks that involved uncertainty (hard tasks) went undone. I believe it contributed to the cog culture that caused an exodus of developers.
However, I love the idea of an occasional team based leaderboard for an event. I've held bug and security hackathons with teams of 3-5 and have had no problem with them.
> But Omarchy is a reminder that we live in a world where software isn’t just software, but the people who make it.
I get that people are totally within their rights to ban movies/software/sports, etc., by creators whose beliefs they disagree with. However, software is the people who make it? I rarely, if ever, know the authors who create software or what they believe in.
Not shocking: a lot of time goes into making Vite, and they need to make money.
One approach is to set up consulting services. It looks like Void Zero's approach is to start building value-add tools and features on top of Vite that are no longer free.
The decision users must make now is whether it's worth the risk of investing in Vite, assuming that more and more functionality will move to the paid tier.
Consulting doesn't scale though and draws resources from development itself, harming both the OSS and the business side.
All functionality that is currently part of the OSS projects will stay there. OSS projects such as Vite, Vitest, Rolldown and Oxc will stay open source.
Ultimately, the (financial) success of Vite+ is directly tied to the health, stability, and adoption of the free, open-source Vite ecosystem, so the incentive to move existing functionality behind the paid tier is rather low.
Nearly everything we discuss here is political. Some people get triggered when the current administration gets criticized and pretend that's the line where politics starts, but it isn't. Accepting what the government does without question is the height of incuriosity. If the site owners had any interest in enforcing the guidelines, this place wouldn't be a cesspool of LLM shovelware self-promotion.
Come on now, security isn't easy but it's not rocket science. If someone is competent enough to be developing applications they're certainly competent enough to do security correctly by researching first.
Firstly, they are not prizes, they are microscopic chances at prizes. The company is cutting down on cost by playing prospect theory-based games with their employees' salary.
Secondly, it is not a reward for working efficiently; it is a reward for working the most efficiently. There is only one winner and the other participants get nothing. The company gets the added value either way, of course. This is great if you want to break up employee solidarity, too.
Thirdly, the term 'efficiency' implies there is some sort of process optimization going on; the same energy goes in, more value comes out. I doubt workers are suddenly provided with better tooling during 'power hour', so 'efficiency' probably just means 'run faster, exhaust yourself'.
Finally, you can bet 'power hour' metrics are used to adjust regular productivity targets related to other 'incentives', overall making the jobs more demanding and less rewarding over time.
The whole system reeks of commoditization of labor, treating workers as mere goods that can be bought and systematically manipulated, and thinking about living, feeling humans in this way is disgusting to me.
>The company is cutting down on cost by playing prospect theory-based games with their employees' salary.
I suppose we can't expect Amazon workers to go into Excel and calculate their yearly earnings from prizes?
If they did this, it's a non-issue.
But I suppose we can't expect Amazon workers to do this... correct?
>The whole system reeks of commoditization of labor
You say this like it's a bad thing. If you can turn labor into something an unskilled/uneducated worker can do, humanity is more productive. This is the goal of creating processes.
... you say this in response to someone responding to you and specifically describing why the downvotes are flooding in. Engineers like you give the whole profession a bad name, and we already have one. Seriously, as another poster commented, I sincerely hope you can find a way to empathize. One day the engineering bubble may just burst, and you may yet find yourself in a job you hate but need. Perhaps that would help with some perspective?
But then again, I'm an engineer, so maybe I make better decisions.
Seems unlikely. Engineers are as prone to bad decision making as anyone else. Some of them - the supercilious dickheads who believe their own hype - even more so.
Unions form to prevent a race to the bottom. Individual employees have no power at all, so employers can keep pushing them. Changing jobs is not frictionless, even in a good job market. Most people who work in low-paying positions can't afford to miss a few months of pay because they quit their job to look for something better. Since people have to work to survive, especially in the US with its poor social safety net, you get situations where people are forced to take a job no matter how exploitative it is.
There is no such thing as a race to the bottom. This is a common trope from people who don't understand economics. Union benefits are zero-sum rent extraction, and in the long run they inhibit economic growth and with it broad-based wage growth.
> It is indeed not frictionless and requires some effort, but it's certainly doable.
You've just explained why this power isn't realistic to invoke. If you're a warehouse worker, can you afford to be unemployed long enough to find another job? Can your family?
I think yelling "I quit!" and slamming the door happens only in movies.
In real life most people first secure a new job and then resign from their current position.
To answer your question: no, my family cannot afford me not working for half a year either. I believe this is the case for most people not living with their parents.
You say that as if the cost of getting a different job were zero, when in fact, if there's only one employer around and you have a family or something similar, it's more likely to be in the tens of thousands of dollars once you include future medical bills for stress-related illnesses. It's very expensive to be poor and/or out of options.
My understanding is that one of the primary differentiators between serfs and lords was that lords were able to own land.
Since many (most?) people in the US cannot afford the land and other resources necessary to provide for themselves (e.g. via farming or craftsmanship), I'm not sure that it's unfair to draw a parallel between them and serfs.
Many of the people I grew up with in the Southwest own their homes (for the sake of this discussion, finance == own). Almost none of my friends in NYC own their homes. I recently moved out of NYC to a New England state, and most of the people in my office own their homes.
It's really not that uncommon for people outside of major cities to own their home?
Anyone know if there have been any improvements to cold start times for Lambdas in a VPC? That was the absolute death knell for us. If you're using Lambdas as a service backend for mobile/web apps, it's extremely common those Lambdas will be talking to a DB, and any decent security practice would require that DB to be in a VPC. Cold starts for Lambdas in a VPC could be on the order of 8-10 seconds: https://medium.freecodecamp.org/lambda-vpc-cold-starts-a-lat...
I just got out of a session at re:Invent where they said they're working on improving VPC cold start times by reducing the amount of time it takes to associate an ENI with your Lambda function. The method they're using also reduces the number of IPs needed to 1 per subnet.
We recently had to abandon Lambdas: 10+ second cold starts, and for some reason adding an API Gateway adds another +200ms to each request (Google it; apparently a common issue).
So: 10+ second cold starts, plus 200ms + 200-300ms (around 500-600ms avg) per call to the Lambda function. Complete garbage for our application, at least (I imagine latency might not be an issue if you use it for background processing).
Switched over to EC2, less than 200ms response total, no cold starts.
Agreed! I'm much more concerned with VPC performance - I don't have a single lambda outside of a VPC. Firecracker is extremely cool, and I'm very glad to see the improved perf at the VM level, but that's not my bottleneck.
Thankfully, in my case, I have a very steady flow of data so I don't expect too many cold starts.
One thing though: do your Lambdas need both public and private access? If not, you can place them in a private-only subnet, since the slow part is the ENI for the NAT Gateway.
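For what it's worth, a minimal sketch of pointing a function at private-only subnets with boto3; the function name, subnet IDs, and security group ID below are placeholders, not anything from this thread:

    import boto3

    lambda_client = boto3.client("lambda")

    # Attach the function to private-only subnets (no route to an Internet Gateway).
    lambda_client.update_function_configuration(
        FunctionName="my-backend-fn",  # hypothetical function name
        VpcConfig={
            "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # private subnets only
            "SecurityGroupIds": ["sg-cccc3333"],
        },
    )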
Cold starts for the VM are only part of the problem. If you're on a JITed runtime, a cold start also means compilation and optimization. It would be nice if they had ways to cache the JITed machine code so they could start back up with it already compiled and optimized.
You can generally resolve it yourself by poking seldom used functions to keep them hot. But no, they haven’t provided a solution to cold start (unless you consider ec2 or fargate a solution).
> You can generally resolve it yourself by poking seldom used functions to keep them hot.
We've tried this and it helps somewhat but when AWS attempts to scale your function based on load, cold starts re-appear. We've moved away from Lambdas where a dependable response time is required.
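For reference, the warming trick mentioned above is usually just a scheduled rule (e.g. CloudWatch Events every few minutes) invoking the function with a marker payload that the handler short-circuits. A minimal Python sketch, with the payload shape made up:

    import json

    def handler(event, context):
        # Scheduled warm-up pings carry a marker field and skip the real work.
        if event.get("warmup"):
            return {"statusCode": 200, "body": "warm"}

        # ... normal request handling ...
        return {"statusCode": 200, "body": json.dumps({"ok": True})}

As noted above, this only keeps already-provisioned containers warm; new containers created on scale-out still pay the cold start.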
If you are experiencing cold starts it means that function is not used very often. If it's not used very often that likely means it's not user facing (or something less important like a Terms of Service page). If that's the case, why do you need instant response times?
No, that's not what it means. If you have high concurrent execution, you get a 'cold start' every time the underlying service 'scales out' to support more load.
The MORE you use lambda concurrently, the more you hit the cold start issue.
Granted, it's just for that one cold start execution per-scale node (and they could probably just optionally pre-fire to warm things in that instance, like with a cache), but it's definitely there horizontally.
I really wish they would add an init() callback that is called on cold start but before any traffic is sent to your Lambda. It wouldn't help when there are no Lambdas running, but it could be useful when things are scaling up, especially if you could ask for additional concurrency above what's actually needed, to absorb spikes.
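There's no such pre-traffic hook as far as I know; the closest existing thing is doing heavy setup at module scope, which runs once per execution environment during the cold start (still with the first request in flight, so it doesn't hide the latency, it just avoids repeating the work on every invocation). A Python sketch with a hypothetical table name:

    import boto3

    # Module-level setup runs once per container, during the cold start.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("my-table")  # hypothetical table name

    def handler(event, context):
        # The client and table objects above already exist on every invocation.
        resp = table.get_item(Key={"id": event.get("id", "unknown")})
        return resp.get("Item", {})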
I don't think so. When it spins it up, the request is already in flight. Otherwise this would have been solved by everyone but instead everyone sees terrible cold start times.
This is along the lines of what the other responses to this comment have said, but https://hackernoon.com/im-afraid-you-re-thinking-about-aws-l... gives a very detailed overview. It's titled "I'm afraid you’re thinking about AWS Lambda cold starts all wrong", because the way you're thinking about cold start times is common (and wrong).
That's not entirely true. While your warm Lambdas can and will take the traffic, if your traffic ramps up, additional Lambda instances will be spun up, and you will pay cold start prices as they spin up. So even if you have a heavily used Lambda fn, depending on the traffic your p99 will still look pretty bad, and you will not be able to guarantee that all requests will be processed in x ms or less.
That's not for in-VPC functions, although if the underlying instance changes with the Firecracker migration, users might see ENI start improvements. Currently your ENI usage is roughly floor(3GB / Memory Allocation) * (Concurrent Executions). If the 3GB changes, users will see huge gains, as each ENI creation can take around 9s.
I'm wondering how that's even possible if it includes the time for downloading your code from S3. I.e., normal cold starts (as I understand it) involve fetching the code from S3 to install on a VM. Perhaps they aren't including that time when claiming single-millisecond cold start times?