It's a shame the campaign of RA3 was boring. They got the theme and cutscenes right, but the campaign missions were rather slow, generic and forgettable.
It's the opposite of C&C3, which had a good campaign, but whose theme was a step back from the sci-fi of Tiberian Sun. The GDI/NOD units in particular were far less futuristic, and the alien ones were a bit too similar to each other in style. The cutscenes were also mostly boring compared to the earlier games.
If I recall correctly, the expansion pack for C&C3 was much more interesting in these aspects, but the gameplay suffered.
The hash-based algorithm is only O(n) because the entry size has an upper limit. In the more general case it would be something more like O(m(n · e)), where n is the number of entries, e is the maximum entry size, and m is a function describing how caching and other hardware details affect the computation. With small enough data the hash is very fast thanks to CPU caches: even if it takes more steps, each step takes less time. The article explains this topic in a less handwavy manner.
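To make the per-entry cost concrete, here's a toy FNV-1a hash in TypeScript (my sketch, not from the article): the loop has to touch every byte of the entry, so hashing n entries of size up to e is O(n · e) work, not O(n).

```typescript
// FNV-1a over a byte buffer: the loop visits every byte, so hashing one
// entry is O(e), and hashing n entries is O(n * e). The "O(n)" claim
// only holds when e is bounded by a constant.
function fnv1a32(data: Uint8Array): number {
  let h = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (const byte of data) {
    h ^= byte;
    h = Math.imul(h, 0x01000193) >>> 0; // 32-bit multiply by the FNV prime
  }
  return h;
}
```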
Also, memory access is constant time only up to some limit imposed by the hardware; once the data no longer fits in system memory, the implementation needs significant changes. So the hash algorithm will not stay O(n) once you go past the available memory.
The sorting algorithms do not suffer from these complexities quite as much, and similar approaches can be used with data sets that do not fit a single system's memory. The sorting-based algorithms will likely win in the galactically large cases.
Edit: Also, once the hash table would need to grow beyond what the hash function can describe (e.g. beyond 64-bit integers), you need to widen the hash's data type. This is essentially a hidden log(n) factor, as the required width of the data type grows as log(n) of the maximum data size.
Interestingly, you need a hash wide enough to be unique for all data points with high probability, and it doesn't take much to show that this requires at least O(log(n)) bits if all items are unique.
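A back-of-the-envelope sketch of that lower bound (the formula and constants are my own assumption, via the birthday bound): keeping the collision probability for n random hashes below p needs roughly 2·log2(n) bits, so the required hash width necessarily grows with log(n).

```typescript
// Birthday bound: P(collision) ~= n^2 / 2^(b+1) for n items and b-bit
// hashes. Solving for b gives b ~= 2*log2(n) - log2(p) - 1.
function hashBitsNeeded(n: number, p: number): number {
  return Math.ceil(2 * Math.log2(n) - Math.log2(p) - 1);
}

console.log(hashBitsNeeded(1e9, 1e-6)); // ~79 bits for a billion items
```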
Also, if items take up k bytes, then computing the hash typically costs O(k), so both the hashing and radix sort are O(n·k).
Really, radix sort should be considered O(N), where N is the total amount of data in bytes. It can beat the theoretical O(n log n) comparison limit because it sorts lexicographically, which is not always an option.
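For illustration, a minimal LSD radix sort over fixed-width byte strings (a sketch under the assumption of fixed-length keys): each of the `width` passes touches every item once, so the total work is proportional to the total number of bytes N.

```typescript
// LSD radix sort on fixed-length byte strings. Each pass is stable and
// touches every item once, so total work is width * n = O(N) bytes.
function radixSortBytes(items: Uint8Array[], width: number): Uint8Array[] {
  let out = items;
  // Process byte positions from least significant (last) to most (first).
  for (let pos = width - 1; pos >= 0; pos--) {
    const buckets: Uint8Array[][] = Array.from({ length: 256 }, () => []);
    for (const item of out) buckets[item[pos]].push(item); // stable pass
    out = buckets.flat(); // concatenate buckets in byte-value order
  }
  return out; // lexicographic order
}
```

Note that the output is lexicographic by construction, which is exactly why it can't replace a comparison sort when you need some other ordering.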
I encountered similar trouble with both Firefox and Chrome on Ubuntu.
Based on a quick Firefox performance profile of the minified source, most of the time seems to be spent in functions that look like frame handling, and there are some signs of time calculations.
One educated guess would be that something in the frame time calculations goes wrong, possibly due to the timer resolution being restricted to prevent timing-based fingerprinting. That would cause the next frame's computation to start immediately instead of after the intended timeout.
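A hypothetical reconstruction of that failure mode (the structure and constants are my guesses, not taken from the actual minified code): if the clock is coarsened, the computed wait until the next frame can come out as zero or negative, and the loop spins.

```typescript
// Guessed sketch, not the site's actual code. If performance.now() is
// coarsened (say, to 100 ms steps) to resist fingerprinting, the wait
// until the next frame can come out <= 0, so frames run back to back.
const FRAME_MS = 1000 / 60;
let nextFrameAt = performance.now() + FRAME_MS;

function tick(): void {
  // ... render one frame ...
  const now = performance.now();        // may be quantized to coarse steps
  const delay = nextFrameAt - now;      // <= 0 whenever the clock lags
  nextFrameAt += FRAME_MS;
  setTimeout(tick, Math.max(0, delay)); // delay 0 => immediate re-run
}

tick();
```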
Extracting text from DOCX is easy. Anything related to layout is non-trivial and extremely brittle.
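To illustrate the easy part, a rough sketch using the jszip package (my choice of library; error handling and run/paragraph structure are deliberately ignored): a .docx is just a zip whose word/document.xml holds the text in <w:t> elements.

```typescript
// Minimal text extraction from a .docx: unzip, read word/document.xml,
// collect the contents of every <w:t> run. All layout info is dropped,
// which is exactly why this part is easy.
import JSZip from "jszip";
import { readFileSync } from "fs";

async function extractText(path: string): Promise<string> {
  const zip = await JSZip.loadAsync(readFileSync(path));
  const xml = await zip.file("word/document.xml")!.async("string");
  return [...xml.matchAll(/<w:t[^>]*>([^<]*)<\/w:t>/g)]
    .map((m) => m[1])
    .join("");
}
```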
To get the layout correct, you need to reverse engineer details down to Word's numerical accuracy so that content lands in the right position in more complex cases. People love creating brittle documents where a single pixel of difference can break the layout, misaligning content or pushing it onto separate pages.
This becomes a major problem in cases like the text saying "look at the picture above" when the picture was not anchored properly and floated to the next page due to rendering differences from a specific version of Word.
One major problem here is the mixing-up of UX and technical implementation details. From a UX point of view, the link in the example goes nowhere; it just opens a dialog. From that point of view, the fact that it uses anchors to do so is not really relevant.
From a purely technical point of view, the question is rather irrelevant, as the distinction between a button and a link is mostly about how a human perceives it; it does not matter to the program.
This likely explains the awkwardness the author mentions feeling about this implementation, and it is supported by the fact that opening this kind of "link" in a new tab makes no sense (because it does not go anywhere).
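For reference, the pattern under discussion looks roughly like this (element ids and structure are made up, not from the article): an anchor whose click handler suppresses navigation and opens a dialog, which is exactly why "open in new tab" is meaningless for it.

```typescript
// An <a> element whose only job is to open a dialog. Nothing navigates,
// so middle-click / "open in new tab" has no sensible behaviour.
const link = document.querySelector<HTMLAnchorElement>("#settings-link")!;
const dialog = document.querySelector<HTMLDialogElement>("#settings-dialog")!;

link.addEventListener("click", (event) => {
  event.preventDefault(); // stop the href="#" pseudo-navigation
  dialog.showModal();     // the "link" is really a button in disguise
});
```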
> A more reasonable question would be, how many partially completed sudoku grids are there which have a unique solution. We don't know the answer to that.
If you interpret "unique" to mean that two puzzles leading to the same solution count as one, the answer equals the number of completed grids: remove one number from each completed grid and you get one such puzzle per grid, and under this interpretation there cannot be more than one per grid.
The interpretation you probably mean is how many partially completed sudoku grids have only a single solution rather than several, which is a much more interesting question.
No, in Sudoku, the constraint of a "unique solution" is that a valid puzzle (i.e. an incomplete grid) must only have one correct way of being filled in without violating the rules. If there are two possible ways of filling the grid without violating the rules, the puzzle does not have a unique solution.
It has nothing to do with whether the solution to one puzzle (starting grid) happens to be the same as the solution to a different puzzle (starting grid).
Then the maximum number of solvable puzzles must be the maximum number of unique grids, since you make a puzzle by removing grid entries, and you remove them under the constraint that the puzzle remains solvable with a single solution, etc.
Imagine you start with one unique solved grid. There are 81 numbers, so by removing just one entry you arguably get 81 different "puzzles" that all lead to the same unique solution. From each of these you can also remove a second number; that's 81 × 80 = 6480 removal orders, but C(81, 2) = 3240 distinct puzzles, all still with the same unique solution.
Now imagine taking away 61 numbers from the solved grid to get a sudoku with 20 starting numbers. Some of those layouts have to be discarded because they allow multiple possible solutions, but you will still be left with millions of possible puzzles that look distinct to a human and require different strategies to solve, all leading to the same solved grid.
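For anyone who wants to check the counts, a quick upper-bound calculation (ignoring the unique-solution constraint, which only shrinks the numbers): the number of distinct ways to remove k cells from one solved grid is C(81, k).

```typescript
// Binomial coefficient C(n, k), computed iteratively. Values for large k
// are floating-point approximations, which is fine for order-of-magnitude
// estimates like these.
function choose(n: number, k: number): number {
  let result = 1;
  for (let i = 1; i <= k; i++) result = (result * (n - k + i)) / i;
  return result;
}

console.log(choose(81, 1));  // 81 one-cell removals
console.log(choose(81, 2));  // 3240 two-cell removals
console.log(choose(81, 61)); // ~4.7e18 candidate 20-clue layouts
```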
In theory yes, but layouts that only require filling in missing numbers are not puzzles. Layouts with missing numbers become puzzles when some thought must go into entry selection. The question is how many puzzles requiring logical deduction exist for each unique completed 9×9 grid.
To a first approximation, the number of required logical deductions just goes up (exponentially?) with the number of missing digits. That's why many sudoku books group puzzles by "difficulty" simply by stating how many digits are given.
Of course humans can make much more interesting puzzles, where you are expected to follow a certain chain of logical deductions to reach the solution. But simple "machine-made" sudokus seem to sell well enough, so I see no reason to exclude them from the definition.
> No, in Sudoku, the constraint of a "unique solution" is that a valid puzzle (i.e. an incomplete grid) must only have one correct way of being filled in without violating the rules.
My answer was directed at the reply to that comment, namely:
> Then the maximum number of solveable puzzles must be the maximum number of unique grids