x1f604's comments | Hacker News

BAR runs fine on low-end CPUs... until you have like 2,000 units on Speed Metal

This is one of the problems that BAR solves beautifully - a player could leave and rejoin later and the game would continue running just fine. An existing player can choose to take their stuff or not, or take it and give it back when the player rejoins. Truly elegant.

The history behind BAR is also fascinating.

There was the Total Annihilation RTS and while it had the normal 2d overhead view, all the data was in 3d.

A Swedish gaming clan put together a hardware-accelerated, fully 3D engine to replay recorded Total Annihilation demos. As it gained more and more features, they realized they were recreating most of what was needed to actually play TA, so they closed the loop and turned it into a full game engine, which they called SpringRTS. There was the default, faithful TA game code, but there was also a very popular mod that was not afraid to change things a bit, basically "we like Total Annihilation but also think it could be better", which they called Balanced Annihilation. We are almost there. BA lived under the Spring project for a few years, but really, when you think about it, there are IP problems with it using the TA assets; also, I suspect someone wanted to do engine work but was having a hard time upstreaming it. So it forked off the Spring project: they rebuilt all the units (same unit, different skin), are doing a ton of great engine work, and called it BAR (retronymed into Beyond All Reason, but I suspect it originally stood for Balanced Annihilation Reborn or something like that). So BAR is basically a highly modified, legally distinct Total Annihilation.

Zero-K is another great RTS based on this engine. It drifts further from the TA formula than BAR does.


BAR in general is such a great showcase that A) FOSS games can be good, work great, scale nicely, and be fun, and B) a game can be built by gamers who play and enjoy their own game, for other gamers, which seems more and more rare these days.

Are there any BAR forks or alternatives that don't force me into proprietary walled gardens for exchange with the community? I would even donate! Even regularly!

What is BAR?

Beyond All Reason as mentioned in a sibling thread.

"Currently in development". It's got robots.


> It's got robots.

"Shut up and take my money, I'm making a donation" :)


You might also like Iron Harvest. It takes place in an alternate history WWI setting with dieselpunk mechs.

I feel like the issue is more that their pathing algorithm is very inefficient. I'm not sure why using multiple cores would solve the problem if the cause of the lag is that their pathing algorithm is cubic time or something.

The pathfinding algorithm has been decently optimized. The reality is that 600 units finding paths and keeping them up to date is a lot of work.

Some of the pathfinding is precomputed, some cannot be as it involves other units and formations.

Most other RTS games work around this by either relaxing the constraints or implementing some amount of parallelism.


There is also some exciting work going on to improve the path finding of formations to make them behave more naturally: https://gitea.wildfiregames.com/0ad/0ad/pulls/8608

A realistic version would have unit commanders pathfinding over long distances, and the plebs following them unless they got merc'ed.

I think this can actually be used to optimize the pathfinding. Basically, create a fake unit that follows the real/complex path, then have the other units follow it with basic collision avoidance or a very toned-down pathfinding algorithm, similar to the hack I did where I just counted loop iterations and bailed after some threshold.
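That idea is simple enough to sketch. Below is a toy model, not code from any real engine; every type and name is made up. One leader walks the expensive, precomputed path; followers just steer toward it with a cheap separation pass instead of each running the pathfinder:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical 2D point; nothing here comes from the Spring/BAR codebase.
struct Vec2 {
    float x = 0, y = 0;
};

// The "fake unit": one expensive path computed once, shared by the group.
struct Leader {
    std::vector<Vec2> waypoints;  // result of the full pathfinder
    std::size_t next = 0;
    Vec2 pos;

    void step(float speed) {
        if (next >= waypoints.size()) return;
        Vec2 d{waypoints[next].x - pos.x, waypoints[next].y - pos.y};
        float len = std::sqrt(d.x * d.x + d.y * d.y);
        if (len < speed) { pos = waypoints[next++]; return; }  // snap to waypoint
        pos.x += d.x / len * speed;
        pos.y += d.y / len * speed;
    }
};

// Followers skip pathfinding entirely: steer toward the leader, plus a cheap
// pairwise separation term instead of per-unit path queries.
struct Follower {
    Vec2 pos;

    void step(const Vec2& target, const std::vector<Follower>& others, float speed) {
        Vec2 d{target.x - pos.x, target.y - pos.y};
        float len = std::sqrt(d.x * d.x + d.y * d.y);
        if (len > 1e-3f) { pos.x += d.x / len * speed; pos.y += d.y / len * speed; }
        for (const auto& o : others) {  // local avoidance only, no path queries
            float dx = pos.x - o.pos.x, dy = pos.y - o.pos.y;
            float dist = std::sqrt(dx * dx + dy * dy);
            if (dist > 1e-3f && dist < 1.0f) {  // push apart when too close
                pos.x += dx / dist * 0.1f;
                pos.y += dy / dist * 0.1f;
            }
        }
    }
};
```

The per-follower cost is now a linear scan for local avoidance instead of a full path query per unit, which is the whole point of the trick.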

> Demis Hassabis has recently given an estimate of human-level AGI in 5 years

He said 50% chance of AGI in 5 years.


Yes, and I wonder how many "5 year" project estimates, even for well-understood engineering endeavors, end up being accurate to within 50% (obviously overruns are more common than the opposite)?

I took his "50% 5-year" estimate as essentially a project estimate for something semi-concrete they are working towards. That sort of timeline and confidence level doesn't seem to allow for a whole lot of unknowns and open-ended research problems to be solved, but OTOH who knows if he is giving his true opinion, or where those numbers came from.


It is not rare for flash storage devices to lose data on power loss, even data that is FLUSH'd. See https://news.ycombinator.com/item?id=38371307

There are known cases where power loss during a write can corrupt previously written data (data at rest). This is not some rare occurrence. This is why enterprise flash storage devices have power loss protection.

See also: https://serverfault.com/questions/923971/is-there-a-way-to-p...


I wish someone would sell an SSD that was at most a firmware update away from switching between being a regular NVMe drive and a ZNS NVMe drive. The latter just doesn't leave much room for the firmware to be clever and quietly swallow data.

Maybe also add a pSLC formatting mode for a namespace so one can be explicit about that capability...

It just has to be a drive that's usable as a generic gaming SSD, so people can just buy it and have casual fun with it, like they did with Nvidia GTX GPUs and CUDA.


Unfortunately, manufacturers almost always prefer price gouging on features that "CuStOmErS aRe NoT GoInG tO nEeD". Is there even a ZNS device available nowadays for someone who isn't a hyperscale datacenter operator?


Either you ask a manufacturer like WD, or you go to eBay, AFAIK.

That said, ZNS is actually specifically about extracting more value out of the same hardware (as the firmware no longer causes write amplification behind your back), which in turn means that the value of such a ZNS-capable drive ought to be strictly higher than that of the traditional-only version with the same hardware.

And given that enterprise SSDs seem to only really get value from an OEM's holographic sticker on them (compare almost-new used prices for drives with the sticker against the same plain SSD/HDD model number without the premium sticker), besides the common write-back-emergency capacitors that allow a physical write-back cache in the drive to ("safely") claim write-through semantics to the host, it should IMO be in the interest of the manufacturers to push ZNS:

ZNS makes the exact same hardware perform better for ZNS-appropriate applications, despite requiring less fancy firmware. In particular, there's much less need for a write-back cache, as the drive no longer sorts individual random writes into something less prone to write amplification: the host software is responsible for grouping data to minimize write amplification (usually by arranging for data that will likely be deleted together to sit physically in the same erase block).

Also, I'm not sure exactly how "bad" bins of flash behave, but I'd not be surprised if ZNS's support for zones having less usable space than the LBA/address range they occupy (which, by the way, can change upon recycling/erasing the zone!) would allow rather poor-quality flash to still be utilized effectively, as even rather unpredictable degradation can be handled this way. Copy-on-write storage systems (like Btrfs, or many modern database backends, specifically LSM-tree ones) inherently need some slack/empty space, so it's rather easy to cope with that space decreasing as a result of write operations, regardless of whether the application/user data has actually grown from the writes: you just buy and add another drive/cluster node when you run out of space, and until then you can use 100% of the SSD's flash capacity, instead of wasting capacity up front just so the drive's usable capacity never has to decrease over the warranty period.

That said: https://priceblaze.com/0TS2109-WesternDigital-Solid-State-Dr... claims (by part number) to be this model: https://www.westerndigital.com/en-ae/products/internal-drive... . That's about 150 $/TB. Refurbished; doesn't say how much life has been sucked out of them.

Give me, say, a Samsung 990 Pro 2 TB for 250 EUR but with firmware for ZNS-reformatting, instead of the 200 EUR MSRP/173 EUR Amazon.de price for the normal version.

Oh, and please let me use a decent portion of that 2 GB of LPDDR4 as a controller memory buffer, at least if I'm in a ZNS-only formatting situation. It's not needed for keeping large block-translation tables around, after all: ZNS only needs to track where a logical zone is physically located (wear leveling), and which individual blocks are marked dead in that physical zone (an easy linear mapping between the non-contiguous usable physical blocks and the contiguous usable logical blocks). Beyond that, I guess it technically needs to keep track of open/closed zones, write pointers, and filled/valid lengths.
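That bookkeeping really is small, which is easy to see by modeling it. A toy sketch follows (not real firmware and not the NVMe ZNS command set; all names are made up): per zone, a physical-zone mapping, a write pointer, and a capacity that may shrink when the zone is recycled:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy model of the per-zone state described above; purely illustrative.
enum class ZoneState : std::uint8_t { Empty, Open, Closed };

struct Zone {
    std::uint32_t physical_zone;  // where this logical zone currently lives (wear leveling)
    std::uint64_t write_pointer;  // next writable block within the zone
    std::uint64_t capacity;       // usable blocks; may shrink on reset as flash wears out
    ZoneState state = ZoneState::Empty;
};

struct ZonedDrive {
    std::vector<Zone> zones;

    // Writes are sequential-only: an append lands at the write pointer or fails.
    bool append(std::uint32_t z, std::uint64_t blocks) {
        Zone& zn = zones[z];
        if (zn.state == ZoneState::Closed) return false;
        if (zn.write_pointer + blocks > zn.capacity) return false;
        zn.state = ZoneState::Open;
        zn.write_pointer += blocks;
        return true;
    }

    // Reset erases the zone; the firmware may remap it and report a smaller
    // capacity if some physical blocks have been marked dead.
    void reset(std::uint32_t z, std::uint64_t new_capacity) {
        zones[z].write_pointer = 0;
        zones[z].capacity = new_capacity;
        zones[z].state = ZoneState::Empty;
    }
};
```

Contrast this with a conventional FTL, which has to maintain a full logical-to-physical translation table at block granularity; that table is what eats the DRAM.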

Furthermore, I don't even need them to warranty the device lifespan in ZNS mode, only that the drive isn't bricked by activating ZNS. It would be nice to get as many drive-writes of warranty as the non-ZNS version gets, though.


ZNS (Zoned Namespace) technology seems to offer significant benefits by reducing write amplification and improving hardware efficiency. It makes sense that manufacturers would push for ZNS adoption, as it can enhance performance without needing complex firmware. The potential for using lower-quality flash effectively is also intriguing. However, the market dynamics, like the value added by OEM stickers and the need for write-back capacitors, complicate things. Overall, ZNS appears to be a promising advancement for specific applications.


From the book:

(Warning: Spoilers ahead)

> The next day I told Parry that I was flattered but would not make pentaborane. He was affable, showed no surprise, no disappointment, just produced a list of names, most of which had been crossed off; ours was close to the bottom. He crossed us off and drove off in his little auto leaving for Gittman's, or perhaps, another victim. Later I heard that he visited two more candidates who displayed equal lack of interest and the following Spring the Navy put up its own plant, which blew up with considerable loss of life. The story did not make the press.


When Gergel was writing this, I was working for one of the similar companies, a research chemicals company also fairly close to the bottom of that list.

We made lots of different unique chemicals ourselves but distributed many more.

Quite a few from Columbia Organics, I remember their isopropyl bromide well.


Hahaha. Fuck. The history of pentaborane is littered with human tragedy. What an appropriate compound for this troubled age.


Definitely don’t read about the history of acetylene then.

Same as it’s always been.


Hahah. Oh gosh. As an aside: Your username checks out. Azides are nothing to be sneezed at either, IIRC.


Hah, first time someone noted that connection!

On the original topic of the thread, check out Chemical Forces video on boranes - [https://youtu.be/8hrYlhTYl5U?si=4SDJq4MxAEu714iY]

I’m not a chemist, but I used to read my copy of ‘Chemistry of Powders and Explosives’ to get to sleep, and synthesized a few out of curiosity over the years. There are some real fun wiki holes in the topic too.

The azides do tend to be a bit unstable as well, same as the fulminates.

Most are still more stable than the organic peroxides, at least if they’re uncontaminated.

Energetics chemists tend to be the Leeroy Jenkins of scientists.

Lead(II) azide has mostly been replaced by lead styphnate or other compounds in commercial use, which are safer to synthesize [https://en.m.wikipedia.org/wiki/Lead_styphnate]


In the analytical lab we had been using dinitrophenylhydrazine, in very low concentrations, in the determination of trace aldehydes. When the previous bottle was almost empty, I found out it could not be reordered from our established supplier. One chemist showed me how little there was left; he had been banging the bottle against the bench to get the last gram out. I was about in shock, apparently less so than the compound itself, and advised him not to do that again, because it's like a cross between TNT and rocket fuel.

Then I found out the DNPH was no longer available in dry form; it's now packed under water, under a different part number and with a revised SDS.


I don't think it's a register allocation failure; it's in fact necessitated by the ABI requirement (calling convention) that the first parameter be passed in xmm0 and the return value also be placed in xmm0.

So when you have an algorithm like clamp, which requires v to be "preserved" throughout the computation, you can't overwrite xmm0 with the first instruction; basically you need to "save" and "restore" it, which means an extra instruction.

I'm not sure why this causes the extra assembly to be generated in the "realistic" code example though. See https://godbolt.org/z/hd44KjMMn


Even with -march=x86-64-v4 at -O3 the compiler still generates fewer lines of assembly for the incorrect clamp compared to the correct clamp for this "realistic" code:

https://godbolt.org/z/hd44KjMMn


Even with -march=znver1 at -O3 the compiler still generates fewer lines of assembly for the incorrect clamp compared to the correct clamp for this "realistic" code:

https://godbolt.org/z/WMKbeq5TY


Yes, you are correct, the faster clamp is incorrect because it does not return v itself when v compares equal to lo or hi.
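The failure mode is observable without reading assembly. Here is a value-semantics sketch, not the exact code from the godbolt links (real std::clamp takes and returns references), showing why returning v itself matters when v merely compares equal to a bound, e.g. -0.0f vs +0.0f:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Correct clamp semantics: returns v itself whenever lo <= v <= hi.
float clamp_correct(float v, float lo, float hi) {
    return v < lo ? lo : (hi < v ? hi : v);
}

// "Faster" variant: when v compares equal to lo, std::max(lo, ...) returns
// its first argument, so clamping v = -0.0f with lo = 0.0f yields +0.0f
// instead of v.
float clamp_fast(float v, float lo, float hi) {
    return std::max(lo, std::min(v, hi));
}
```

Both agree on every strictly in-range or out-of-range input; they only diverge on values that compare equal to a bound without being bitwise identical to it, which is exactly why the compiler can't substitute one for the other.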

