Technology Musings

September 23, 2012

Today's Thoughts on Algorithmic Information Theory

JB

I know this won't make a lot of sense to most people, but I thought I needed to store these thoughts in a more permanent place before I forget them. I've been reading a paper today on Algorithmic Information Theory, and it made me think these thoughts (most of which are probably ill-founded):

  1. Is the Chaitin halting probability really a probability?  In other words, if I have the first 5 bits of omega, does this mean that the halting frequency in a random sample of programs will really match those bits to some extent?  (See the omega-sampling sketch after this list.)
  2. To what extent are self-delimiting programs an intrinsic part of AIT, vs just a cheap trick to get the math right?  It makes sense simply because you need the program length.  But on the other hand, if omega is a probability, it seems like having the program length pre-coded would do a number on its interpretation as a probability.
  3. If log2(2^omega) is omega, is perhaps log2(omega) = c?  Is c the complexity number I'm looking for, for my axioms?
  4. Is it possible to express the idea of a formal relationship as the relationship between an algorithmic specification and its logical depth?  I.e., we should expect a material cause when logical depth is low, but a formal one when logical depth is high.  Could this be expressed as a ratio?
  5. Algorithmic complexity on partial specifications.  The word "red", as implemented in 12 pt font or written in the clouds.  Perhaps the partial specification is something that can be detected non-algorithmically, via an Oracle machine?  In other words, instead of "give me the Nth object that matches specification X", you say, "give me *an* object that matches specification X."  It would not require an iteration over the whole of infinity, but could cover a potentially large part of it very quickly.  Or, perhaps, it drops the problem down a level of Cantorian infinity.  (See the oracle-query sketch after this list.)
  6. Recognizers vs generators = P/NP problem = logical depth vs algorithmic complexity (see the recognizer/generator sketch after this list)
  7. What is the relationship between the additive constant in AIT and the expressibility of a language?  In other words, if C is small, then it is easier to express random strings, at the penalty of having no easily expressible strings.  A larger C will permit more compressibility.  This might also be usable, at least in theory, to determine the total number of "special" strings available (see the counting sketch after this list).
  8. How can algorithmic information theory be expanded for programs that take arguments?  (Note: the number of arguments shouldn't matter; only whether one of them is itself a program, perhaps?  Or whether the result is a program?  Maybe what the Wolfram class of the argument is?)
  9. Might semantics and apobetics be higher Turing degrees?
  10. Might ethical norms be an expression of compressibility across Turing degrees?
  11. What about bad plans?  Is it possible to state, even in abstract, non-determinable terms, what a bad plan is?
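
To make item 1 concrete, here's a minimal Python sketch.  The real omega is uncomputable and depends on the choice of universal prefix-free machine, so this uses a made-up toy machine (my invention, not Chaitin's) whose halting probability works out to exactly 1/2; the question is whether coin-flip sampling of self-delimiting programs reproduces that value.

    import random

    def run_random_program(rng):
        """Draw a self-delimiting program by flipping fair coins, reading
        bits in pairs: 00 = halt, 01 = loop forever (detectable here only
        because the toy machine is so simple), 10/11 = keep reading.
        Returns True iff the sampled program halts."""
        while True:
            pair = (rng.random() < 0.5, rng.random() < 0.5)
            if pair == (False, False):
                return True   # 00: halt
            if pair == (False, True):
                return False  # 01: infinite loop, never halts
            # 10 or 11: continue reading more of the program

    def estimate_omega(samples=100000, seed=0):
        rng = random.Random(seed)
        halted = sum(run_random_program(rng) for _ in range(samples))
        return halted / samples

    # This machine halts with probability 1/4 at each pair and continues
    # with probability 1/2, so its omega is (1/4)*(1 + 1/2 + 1/4 + ...) = 1/2,
    # i.e. binary 0.1000...  The sample frequency should match those bits.
    print(estimate_omega())  # ~0.5

So at least for this toy machine, the answer seems to be yes: the sampled halting frequency matches the bits of its omega.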
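
Item 5's two query styles, side by side.  The spec and the "oracle" below are hypothetical stand-ins; a real Oracle machine's answer isn't produced by any algorithm, so here a plain function just plays that role.

    import string
    from itertools import count, product

    def nth_match(spec, n):
        """Generator-style query: enumerate every string in length order
        and return the nth one satisfying spec -- the 'iterate over the
        whole of infinity' approach."""
        seen = 0
        for length in count(0):
            for chars in product(string.ascii_lowercase, repeat=length):
                s = "".join(chars)
                if spec(s):
                    seen += 1
                    if seen == n:
                        return s

    def any_match(spec, oracle):
        """Oracle-style query: ask for *some* object satisfying spec;
        which witness comes back is left entirely to the oracle."""
        witness = oracle(spec)
        assert spec(witness), "the oracle must return something matching the spec"
        return witness

    # Hypothetical partial specification: the word "red", however implemented.
    spec = lambda s: "red" in s

    print(nth_match(spec, 1))                    # "red", found by brute enumeration
    print(any_match(spec, lambda _: "red sky"))  # the oracle just hands over a witness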
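
Item 6 in miniature, using subset-sum as the stock recognizer/generator example (my choice of illustration; nothing in AIT mandates it): checking a proposed answer is cheap, while producing one seems to require search.

    from itertools import combinations

    def recognize(nums, target, indices):
        """Recognizer (verifier): given a certificate -- the indices of a
        proposed subset -- checking it takes time linear in the input."""
        return sum(nums[i] for i in indices) == target

    def generate(nums, target):
        """Generator: with no certificate in hand, search for a subset
        summing to target.  Brute force examines up to 2^n candidates --
        the P vs NP gap in miniature."""
        for r in range(len(nums) + 1):
            for combo in combinations(range(len(nums)), r):
                if recognize(nums, target, combo):
                    return combo
        return None

    nums = [3, 34, 4, 12, 5, 2]
    print(recognize(nums, 9, (2, 4)))  # True: nums[2] + nums[4] = 4 + 5 = 9
    print(generate(nums, 9))           # (2, 4), but only after searching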
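
And for item 7, the standard counting argument at least bounds the number of "special" strings: there are fewer than 2^(n-c) binary programs shorter than n-c bits, so at most that many n-bit strings can be compressed by c or more bits, no matter what the machine constant is.  The constant shifts *which* strings are special, not how many there can be.  A quick tabulation:

    def max_compressible(n, c):
        """Upper bound on the number of n-bit strings x with K(x) < n - c:
        there are only 2**(n-c) - 1 binary programs of length < n - c, and
        each program outputs at most one string."""
        return 2 ** (n - c) - 1

    n = 16
    for c in range(1, 6):
        frac = max_compressible(n, c) / 2 ** n
        print(f"c={c}: at most {frac:.2%} of {n}-bit strings compress by {c}+ bits")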