Variants

Sat 02:30 PM

Prof. Calvin

CSCI 5100

Sketch

  • Turing Machines
    • Rehash
    • Multi-tape
    • Non-deterministic

Questions

  • Why use this model?
    • All of these models are equivalent in power
    • People writing proofs have largely converged on the TM
    • It seems to be the easiest for most people to reason about

Multi-tape

Motivation

  • The PDA had a stack; why can’t a TM write in two places?
  • We already have one place a TM can write!
  • I want my TM to have multiple tapes!
  • My device has multiple SSDs/HDDs.

Minimal model

  • Take a TM.
  • Add any number \(n\) of “work tapes”.
  • The work tapes are initially blank.
  • The TM can read from and write to any tape.
  • The TM maintains a head position on every tape (one way to formalize the transition function is sketched below).
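  • A sketch of the formalization, following the standard multi-tape definition: with the input tape plus \(n\) work tapes, each step reads, writes, and moves on all \(n+1\) tapes at once.
    • \(\delta: Q \times \Gamma^{n+1} \rightarrow Q \times \Gamma^{n+1} \times \{L, R\}^{n+1}\)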

Theorem

  • \(A\) is T-recognizable iff some multi-tape TM recognizes \(A\)

  • It is trivial for a multi-tape machine to emulate a single-tape machine.

    • Leave all work tapes blank and never move their heads.

Theorem

  • \(A\) is T-recognizable iff some multi-tape TM recognizes \(A\)
  • We will show we can convert a multi-tape TM to a single-tape TM.

Statement

  • \(S\) single tape, \(M\) multi-tape
  • \(S\) simulates \(M\) by storing the contents of all of \(M\)’s tapes in blocks on its one tape.
  • \(S\) introduces a novel “tape break” symbol.
    • Like EOF
  • \(S\) introduces a novel “head here” symbol.

Graphically

[Figure: three tapes. Tape 1: a a b b _ _ _, Tape 2: 1 0 1 _ _ _ _, Tape 3: c c c a _ _ _]

Single Tape

[Figure: the three tapes merged onto one tape: a a b b # 1 0 1 # c c c a _]

Show heads

[Figure: each tape now shows its head position; the merged tape a a b b # 1 0 1 1 # c c c a _ tracks every virtual head, with the single physical head currently in the third block]

Extend Alphabet

[Figure: on the merged tape, the symbol under each virtual head is replaced by a “dotted” copy drawn from an extended alphabet, so one tape records both contents and head positions]
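  • One way to write the extended alphabet, following the standard construction (the dot notation for the “head here” marker is an assumption consistent with the slide above):
    • \(\Gamma' = \Gamma \cup \{\dot{\sigma} : \sigma \in \Gamma\} \cup \{\#\}\)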

In Practice

  • To simulate one step of \(M\), \(S\) scans its whole tape to locate the symbols under every virtual head before it writes anything.
  • \(S\) keeps every virtual head location up to date after each simulated move.
  • \(S\) has to extend a tape block by shifting/copying its contents, or by correctly guessing the work space needed up front (a sketch of this bookkeeping follows the list).
    • This is curiously similar to how e.g. a Python dictionary grows its backing storage.
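
  • A minimal Python sketch of this bookkeeping (illustrative only: the names encode, read_heads, and move_head and the (symbol, dotted) pair encoding are assumptions, not the lecture’s exact construction):
# Sketch: S stores M's tapes on one list, separated by '#', and marks each
# virtual head by "dotting" the symbol under it, here a (symbol, True) pair.
BLANK, SEP = "_", "#"

def encode(tapes, heads):
    """Lay out M's tapes on one single tape: # t1 # t2 # ... # tk #."""
    cells = [SEP]
    for tape, h in zip(tapes, heads):
        for i, sym in enumerate(tape):
            cells.append((sym, i == h))      # dotted symbol marks the virtual head
        cells.append(SEP)
    return cells

def read_heads(cells):
    """First pass: scan the whole tape to collect the symbols under the dots."""
    return [sym for sym, dotted in (c for c in cells if c != SEP) if dotted]

def move_head(cells, tape_index, direction):
    """Second pass (for one virtual head): move it left or right in its block.
    Moving right onto '#' means the block is full, so shift everything over."""
    block = -1
    for i, c in enumerate(cells):
        if c == SEP:
            block += 1
        elif block == tape_index and c[1]:   # found the dotted cell
            j = i + (1 if direction == "R" else -1)
            if cells[j] == SEP and direction == "L":
                j = i                        # at the left end of the block: stay put
            elif cells[j] == SEP:            # at the right end: extend by copying/shifting
                cells.insert(j, (BLANK, False))
            cells[i] = (cells[i][0], False)  # un-dot the old cell
            cells[j] = (cells[j][0], True)   # dot the new one
            return

# Example: the three tapes from the slides, heads on the leftmost cells.
cells = encode([list("aabb"), list("101"), list("ccca")], [0, 0, 0])
print(read_heads(cells))    # ['a', '1', 'c']
move_head(cells, 1, "R")    # move tape 2's virtual head right
print(read_heads(cells))    # ['a', '0', 'c']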

Theorem 8

  • \(A\) is T-recognizable iff some multi-tape TM recognizes \(A\)

Proof.

[Figure: the merged single tape with its dotted head markers and physical head, as constructed above]

Nondeterministic

Definition

  • A Nondeterministic TM (NTM) is the same 7-tuple, except
  • The transition function is no longer:
    • \(\delta: Q \times \Gamma \rightarrow Q \times \Gamma \times \{L, R\} \quad (L = \text{Left}, R = \text{Right})\)
  • Instead, utilize the power set (a small example follows):
    • \(\delta: Q \times \Gamma \rightarrow \mathcal{P} (Q \times \Gamma \times \{L, R\})\)
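
  • As a concrete illustration of the power-set signature, a fragment of such a \(\delta\) can be written as a Python dict from a (state, symbol) pair to a set of (state, symbol, move) triples (the state and symbol names here are made up):
# Hypothetical fragment of a nondeterministic delta: each (state, symbol)
# maps to a *set* of possible (next state, write symbol, move) triples.
delta = {
    ("q0", "a"): {("q0", "a", "R"), ("q1", "x", "R")},  # two possible moves
    ("q1", "a"): {("q1", "a", "R")},                    # singleton: effectively deterministic
    ("q1", "_"): set(),                                 # no options: this branch dies
}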

Recall

  • The NFA and the DFA are equivalent in power
  • The PDA and the deterministic PDA are not equivalent in power
  • The NTM and TM are equivalent in power.

Theorem

  • \(A\) is T-recognizable iff some NTM recognizes \(A\)

  • It is trivial for an NTM to emulate a TM

    • Impose the restriction that every set in the image of \(\delta\) has cardinality 1 (see the sketch below).
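
  • In the same hypothetical dict encoding as above, the restriction is one line (delta_tm is an assumed deterministic dict of single triples):
# Hypothetical: lift a deterministic delta to an equivalent nondeterministic one
# by wrapping each transition in a singleton set.
delta_ntm = {key: {triple} for key, triple in delta_tm.items()}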

Theorem

  • \(A\) is T-recognizable iff some NTM recognizes \(A\)
  • We will show we can convert an NTM to a TM.

Insight

  • We can view the set of possible computations on an input as a tree.
  • Consider fast exponentiation by squaring:
def exp_by_squaring(a, n):
    if n == 0:
        return 1
    elif n == 1:
        return a
    elif n % 2 == 0:
        return exp_by_squaring(a * a, n // 2)              # even: square and halve
    else:
        return a * exp_by_squaring(a * a, (n - 1) // 2)    # odd: peel off one factor
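
  • For example, a quick check of the version above:
print(exp_by_squaring(2, 10))   # 1024
print(exp_by_squaring(3, 5))    # 243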

Insight

  • It would be much faster if we could skip the checks and magically pick the right branch.
def exp(a, n, magic):
    # Conceptual only: "magic" names the branch a nondeterministic machine
    # would jump to; as real Python this would evaluate every branch.
    return [1, a, exp(a * a, n // 2), a * exp(a * a, (n - 1) // 2)][magic]

Insight

  • Imagine the choices growing into a tree:
def exp(a, n, magic):
    # Conceptual only: the even branch expands into its own four options.
    return [1, a, [1, a, exp(a ** 4, n // 4), a * exp(a ** 4, (n - 1) // 4)], a * exp(a * a, (n - 1) // 2)][magic]

Nondeterministic

[Figure: a start node with four branches labelled one, 'a', odd, and even]

Recurse

[Figure: the “odd” branch expands into its own four branches: one, 'a', odd, even]

Recurse everywhere

[Figure: the “even” branch expands in the same way]

Recurse recursively

[Figure: the expansion continues inside the new “odd” branch as well]

It’s a tree

[Figure: the same structure with the labels dropped: a plain tree of computations]

Take one path

[Figure: the same tree, with one root-to-leaf path singled out]

Multi-tape

[Figure: the branch labels stored on tapes (tape 1: one, tape 2: 'a', tape 3: odd), laid out on a single tape between # separators: # one 'a' odd even #]

Store state as separator

[Figure: the same layout, but the # separators are replaced by the machine’s state symbols \(q_i\) and \(q_j\)]

Create new tape on branch

[Figure: when the “odd” branch is taken, the configuration is copied into a new block \(q_j\) one 'a' odd even \(q_k\) appended on the tape]
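
  • A minimal Python sketch of that strategy (illustrative assumptions throughout: configurations are (state, tape, head) tuples, \(\delta\) uses the hypothetical dict encoding from earlier, and the names step and run_bfs are made up). Keep a queue of configurations, copy the tape whenever \(\delta\) offers more than one option, and explore breadth-first so an accepting branch is found if one exists:
from collections import deque

BLANK = "_"

def step(delta, config):
    """Expand one configuration (state, tape, head) into all of its successors.
    Every option delta offers gets its own copy of the tape: a new 'tape' per branch."""
    state, tape, head = config
    for nstate, write, move in delta.get((state, tape[head]), set()):
        ntape = list(tape)                      # copy the whole tape for this branch
        ntape[head] = write
        nhead = head + 1 if move == "R" else max(head - 1, 0)
        if nhead == len(ntape):
            ntape.append(BLANK)                 # grow the tape on demand
        yield (nstate, ntape, nhead)

def run_bfs(delta, start, accept, w, max_steps=10_000):
    """Deterministic breadth-first simulation of a nondeterministic machine."""
    queue = deque([(start, list(w) or [BLANK], 0)])
    for _ in range(max_steps):
        if not queue:
            return False                        # every branch died: reject
        state, tape, head = queue.popleft()
        if state == accept:
            return True                         # some branch accepted: accept
        queue.extend(step(delta, (state, tape, head)))
    return None                                 # still undecided after the step budget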

Theorem 9

  • \(A\) is T-recognizable iff some NTM recognizes \(A\)

Proof.

[Figure: the branch-copying tape layout from the previous slide]

Aside

  • There exists a formulation called a “Turing Enumerator”
  • It is a generator rather than a recognizer.
  • Left as an exercise.

Stretch Break