No it’s not. If the heads aren’t moving in lockstep, what actually happens is that there’s extremely rarely a moment in time when both head locations are whole numbers.
At one moment you have { head1loc=0, head2loc=4.17 }: head #1 has reached a cell position on the tape while head #2 is still moving. A while later you have { head1loc=0.872, head2loc=5 }: head #1 is on its way and head #2 has reached a cell position.
Looks like for a Turing machine with two asynchronously moving heads, there’s no usable concept of “state” at all. That’s very close to what happens when doing actual parallel programming on real hardware.
Good luck applying your idea of computation as a sequence of state transitions to that.
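To make that concrete, here’s a minimal Python sketch (the speeds and starting position are made-up illustrative numbers, nothing measured): two heads drifting along the tape at unsynchronized rates, so a snapshot taken at an arbitrary instant almost never finds both of them on a whole-numbered cell at once.

    # Hypothetical, unsynchronized head speeds (illustrative values only).
    HEAD1_SPEED = 0.872   # cells per unit of time
    HEAD2_SPEED = 0.830   # cells per unit of time
    HEAD2_START = 4.17    # head #2 starts partway between cells 4 and 5

    def snapshot(t):
        """Positions of both heads at physical time t."""
        return {"head1loc": HEAD1_SPEED * t,
                "head2loc": HEAD2_START + HEAD2_SPEED * t}

    for t in (0.0, 0.25, 0.5, 1.0):
        s = snapshot(t)
        on_cells = {name: pos.is_integer() for name, pos in s.items()}
        print(f"t={t}: {s}  on a cell: {on_cells}")

At t=0 only head #1 sits on a cell, at t=1 only head #2 does, and in between neither does; at no sampled instant is there a classic “both heads on cells” state.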
I'm not sure it's useful to try to discuss concurrency with someone who refuses to believe that a group of machines can be in a state (which is really the point of the article).
Even if a Turing machine were a physical object (I had thought you would be aware that it's not), either one of those moves completes before the other, or both complete simultaneously.
Sure, it’s an abstract machine invented as a mathematical model of computation.
I’d like to point out that the very moment you hooked a second, asynchronously moving head up to the machine, the abstraction leaked spectacularly. With two asynchronous heads, the machine no longer has a well-defined state; suddenly we need to consider physical time instead of just counting steps; and so on.
A Turing machine is a useful abstraction for sequential computing, but it’s nearly useless for studying parallel computing problems. The same happens with many other abstractions and approaches when you go parallel.
> either one of those moves completes before the other
Yeah, but in parallel computing we don’t know that order and have no way of knowing it. That makes the “sequence of state transitions” idea, IMO, nearly useless.
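Here’s a tiny illustration (mine, not from the article) of what “no way of knowing” looks like in code: two threads each finish a “head move” and record it in a shared log, and nothing in the program determines which entry lands first — it can differ from run to run.

    import threading

    log = []
    log_lock = threading.Lock()

    def move(head):
        # ... imagine the actual head movement happening here ...
        with log_lock:
            log.append(f"{head} completed")

    threads = [threading.Thread(target=move, args=(h,)) for h in ("head1", "head2")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Either order is possible; the program never learns it ahead of time.
    print(log)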
We use the tape's reference frame, of course! Looking at data stored on some particular device in the physical system is going to force a particular reference frame, where you will have a defined simultaneity. A global definition of simultaneity is only really needed if your system promises to provide sequential consistency.
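Rough sketch of what I mean (my own illustration, not anything from the article): each writer touches two independent devices, and each device’s own log is a perfectly well-defined sequence in that device’s frame — but the two logs don’t have to agree on the interleaving unless something in the system promises sequential consistency.

    import threading

    class Device:
        """A 'tape': it serializes whatever writes reach it, in its own frame."""
        def __init__(self, name):
            self.name = name
            self.log = []
            self.lock = threading.Lock()

        def write(self, value):
            with self.lock:
                self.log.append(value)

    tape_a, tape_b = Device("A"), Device("B")

    def writer(label):
        tape_a.write(label)
        tape_b.write(label)

    threads = [threading.Thread(target=writer, args=(h,)) for h in ("head1", "head2")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("A's frame:", tape_a.log)   # a definite order, in A's frame
    print("B's frame:", tape_b.log)   # also definite -- but possibly different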