Okay, how is this for a definition of convergence:

An infinite sequence `xs :: [()]` converges if there exists an *n* such that for every *m > n*, `xs !! m = xs !! n`. In other words, this is a bunch of programs which either halt or don’t, and after some point, either they all halt or they all don’t.

Then an infinite sequence `xs :: [a]` converges if `map f xs` converges for every `f :: a -> ()`.
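To make the "programs which either halt or don't" reading concrete, here is a rough, hedged probe (`haltsWithin` is my own name, not anything standard): it forces a `()` value under a time budget. It is only a heuristic, since real divergence is undecidable and `timeout` cannot interrupt every tight non-allocating loop.

```haskell
import Control.Exception (SomeException, evaluate, try)
import System.Timeout (timeout)

-- Time-bounded probe: did forcing this unit value halt within the budget?
-- Heuristic only: divergence is undecidable in general.
haltsWithin :: Int -> () -> IO Bool
haltsWithin micros u = do
  r <- timeout micros (try (evaluate u) :: IO (Either SomeException ()))
  return (case r of Just (Right _) -> True; _ -> False)

main :: IO ()
main = do
  haltsWithin 100000 () >>= print          -- halts
  haltsWithin 100000 undefined >>= print   -- bottom: caught as an exception
```

In this reading, each element of an `xs :: [()]` is such a "program", and convergence says that past some index they all behave the same way.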

Sound good?


Explain to me how the convergence determines if they halt? I mean, “cycle [()]” converges at n == 0, but “map f $ cycle [()]” doesn’t halt.

I think I understand:

An infinite list `ys :: [()]` converges iff there exists an *n* such that for all *m > n*, `ys !! m == ys !! n`, where bottom is considered equal to bottom.

So a list `xs :: [a]` converges exactly when it eventually provides the same (or indistinguishable) input to any given function.

On computable types does this give any convergent lists which aren’t eventually constant?

If, for some Hausdorff topology, you limit the definition to continuous partial functions `a -> ()`, then I *think* you recover the normal definition of convergence. If it doesn't converge (topology-wise), you can always pick some continuous function that separates pairs of points by failing on one.

And since I was talking Haskell, continuous partial functions are what I meant.

After some thought, I don’t think this is a good definition. For example, consider the following:

```haskell
s = [ replicate n False ++ repeat True | n <- [0..] ]
```

I would really like to say that this converges to `repeat False`. However, the following function demonstrates that it does not converge:

```haskell
f = convert . even . fromJust . findIndex id
  where
    convert True  = ()
    convert False = undefined
```
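For what it's worth, here is a self-contained version of this counterexample (`probe` is my own helper name). Forcing `f` on successive elements of `s` alternates between `()` and bottom, so `map f s` never stabilizes:

```haskell
import Control.Exception (SomeException, evaluate, try)
import Data.List (findIndex)
import Data.Maybe (fromJust)

s :: [[Bool]]
s = [ replicate n False ++ repeat True | n <- [0..] ]

f :: [Bool] -> ()
f = convert . even . fromJust . findIndex id
  where
    convert True  = ()
    convert False = undefined

-- Does forcing (f (s !! n)) produce (), or hit bottom?
probe :: Int -> IO Bool
probe n = do
  r <- try (evaluate (f (s !! n))) :: IO (Either SomeException ())
  return (either (const False) (const True) r)

main :: IO ()
main = mapM_ (\n -> probe n >>= \ok -> putStrLn (show n ++ ": " ++ show ok)) [0 .. 5]
-- The first True in (s !! n) sits at index n, so f halts exactly when n is even:
-- 0: True, 1: False, 2: True, 3: False, ...
```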

If we changed the primitive definition to be on total elements of `Bool` (using the standard discrete equality, of course), and changed the general definition to quantify only over total continuous functions `a -> Bool`, then this sequence would converge (because each such function has a finite modulus of uniform continuity: a point in its input past which it never examines).

I'm worried that it is overly restrictive. But overly restrictive here would mean it causes too many sequences to converge, and thinking about it, I don't think it does.

This new definition in words: a sequence converges if its elements are eventually indistinguishable by every continuous total predicate.
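To illustrate the "finite modulus" point, here is a hedged sketch (`g` is my own example, not from the original discussion): a total continuous predicate on infinite `[Bool]` lists can only ever examine a finite prefix, so its value along `s` must eventually stabilize.

```haskell
s :: [[Bool]]
s = [ replicate n False ++ repeat True | n <- [0..] ]

-- A total continuous predicate: it examines only index 2 of its input,
-- so a prefix of length 3 decides it (its modulus of uniform continuity).
g :: [Bool] -> Bool
g xs = xs !! 2

main :: IO ()
main = print (take 6 (map g s))
-- [True,True,True,False,False,False] : stabilizes at False once n > 2
```

The same stabilization argument applies to any total continuous `[Bool] -> Bool`, which is why this sequence converges under the new definition.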

Re your comment: you say

> However, you have weakened the test for convergence (by allowing fewer test functions), so I think that this can only be more permissive, not less.

Loren: Uh, yeah, that's what I was saying.

Despite my hunch, it turns out it is too permissive. I had thought of this definition before, and discounted it for the following reason, then considered it again and forgot the reason :-).

The problem is that every continuous total predicate on a connected space is constant. So, under this new definition, every sequence of reals converges. This is obviously not correct.
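A hedged sketch of why this happens, under my own simplified representation (a computable real as a stream of rational approximations; the type and names are assumptions for illustration): the natural candidate for a non-constant predicate, "is x positive?", fails to be total, because on 0 no finite approximation ever decides.

```haskell
-- A computable real as rational approximations x_n with |x - x_n| <= 1/2^n.
type CReal = Int -> Rational

-- Attempt at a non-constant total predicate. It is correct on reals
-- bounded away from 0, but loops forever on 0 itself: continuity forces
-- any total Bool-valued function on the (connected) reals to be constant.
isPositive :: CReal -> Bool
isPositive x = go 0
  where
    go n
      | x n >  1 / 2 ^ n = True
      | x n < -1 / 2 ^ n = False
      | otherwise        = go (n + 1)

main :: IO ()
main = print (isPositive (const 1), isPositive (const (-1)))
-- Fine on 1 and -1; isPositive (const 0) would diverge.
```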

I’m going to study more topology and ASD and then come back to this question.