Turing machines make a useful simplification: they eliminate the (literal) edge case of hitting the end of the tape. We can either think of this as having an "infinite tape", or imagine a "tape factory" at either end which extends the tape faster than the TM can read it.
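The "tape factory" view has a direct computational analogue: represent the tape as a sparse map where cells spring into existence, initialised to blank, the first time the head touches them. This is a minimal sketch (the names `BLANK`, `tape`, and `head` are illustrative, not from any particular TM formalism):

```python
from collections import defaultdict

# An "unbounded" tape: unvisited cells are manufactured on demand,
# already holding the blank symbol, so the head can never fall off an edge.
BLANK = 0
tape = defaultdict(lambda: BLANK)

head = 0
tape[head] = 1        # write a symbol at the starting cell
head -= 1             # move left, past where the tape "began"
symbol = tape[head]   # the factory supplies a fresh blank cell
```

Only finitely many cells are ever materialised during any finite run, which is why the "infinite tape" costs nothing in practice.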
Mathematically, this sort of simplification is used a lot: when our model requires some sort of bound, we can work under the assumption that it is 'sufficiently large' that we can ignore edge cases. Another example is the set of "real" numbers, which we can represent as decimals, assuming a sufficiently large number of decimal places to avoid rounding errors. In fact, similar to the "tape factories" of a TM, we can think of each real number as having a "decimal-place factory" which produces new digits more quickly than we can read them (for example, during Cantor diagonalisation).
The infinities in hypercomputation don't seem to provide such a simplification. Their 'sufficiently large' assumption applies to the number of steps which can be executed in a unit of time, which avoids the edge case of non-halting programs. I'm not convinced that's a useful simplification.