Archive for August 22nd, 2017
Kurt Gödel and Computation
The proof that Kurt Gödel formalized for number theory was to be followed by work demonstrating that the same method applies to formal axiomatic systems in other contexts. The modern approach, built on the Turing machine (arrived at almost simultaneously by Emil Post), yields a more general proof and addresses the problem originally posed by David Hilbert.
The proof, as Gödel wrote it, applied to his particular formalization of number theory, although it was meant to apply to formal axiomatic systems in general. That generality could not be established in Gödel's original paper because mathematics still lacked a precise definition of an effective procedure, that is, of a computational algorithm, which Turing would later capture with his machine model.
Once Alan Turing pinned down the notion of an effective procedure by inventing an idealized computer, now called the Turing machine (a model also devised independently by Emil Post), the proof could be made fully general.
Hilbert’s fundamental requirement for a formal mathematical system was that there be an objective criterion for deciding whether a proof written in the system’s language is valid. In modern terms: there must be a Turing machine, an algorithm, or a computer program that performs proof checking.
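Hilbert’s criterion can be made concrete with a toy example. The sketch below uses Hofstadter’s MIU string-rewriting system (an illustration of my own, not mentioned in the text above): a short program mechanically decides whether a sequence of lines is a valid derivation from the axiom MI, which is exactly the kind of objective, programmable test Hilbert demanded.

```python
def successors(s):
    """All strings derivable from s by one MIU rule."""
    out = set()
    if s.endswith("I"):            # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):          # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):    # Rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):    # Rule 4: UU -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def check_proof(lines, axiom="MI"):
    """Objective criterion: every line is the axiom or
    follows from some earlier line by one rule."""
    for k, line in enumerate(lines):
        if line == axiom:
            continue
        if not any(line in successors(prev) for prev in lines[:k]):
            return False
    return True

print(check_proof(["MI", "MII", "MIIII", "MIIIIU", "MUIU"]))  # True
print(check_proof(["MI", "MU"]))                              # False
```

The checker never needs insight or creativity; it only compares strings, which is what makes proof checking algorithmic.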
Thus the modern, compact definition of a formal axiomatic system as a recursively enumerable set of assertions is an immediate consequence: a program can, in principle, enumerate every theorem, even though for any substantial set of axioms doing this by hand would take an astronomical amount of time. Proposals along these lines were made earlier in LISP (Levin, 1974), and more recently by Gregory Chaitin (1982), who proposed that algorithmic information theory should work on individual objects instead of the sets and probability distributions studied by Claude Shannon and Norbert Wiener.
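A recursively enumerable set of assertions can be illustrated with a self-contained toy. The sketch below assumes Hofstadter’s pq-system (my choice of example, not from the text): the string `'-'*a + 'p' + '-'*b + 'q' + '-'*(a+b)` asserts that a + b equals a + b, and a generator lists every theorem one after another.

```python
from itertools import count

def pq_theorems():
    """Enumerate all theorems of the toy pq-system:
    '-'*a + 'p' + '-'*b + 'q' + '-'*(a+b) encodes a + b = a+b."""
    for total in count(2):           # dovetail over a + b so nothing is skipped
        for a in range(1, total):
            b = total - a
            yield "-" * a + "p" + "-" * b + "q" + "-" * (a + b)

gen = pq_theorems()
print([next(gen) for _ in range(4)])
# ['-p-q--', '-p--q---', '--p-q---', '-p---q----']
```

The generator never halts, but every theorem appears after finitely many steps, which is all that recursive enumerability requires.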
Algorithmic information theory focuses on individual objects, rather than on the sets and probability distributions considered in the information theory of Claude Shannon and Norbert Wiener. How many bits does it take to define how to compute an individual object? These questions led to computability theory.
This theory, also called recursion theory, is a branch of mathematical logic that originated in the 1930s with the study of computable functions by Alan Turing, and was developed further by Kolmogorov, Chaitin, Levin, Martin-Löf, and Solomonoff. There are numerous papers on questions in this theory.
The question of completeness concerns the NP-complete class, the subset of NP problems in computational complexity theory with the property that any problem in NP can be reduced to an NP-complete problem by a polynomial-time reduction. In practice, it asks whether an efficient algorithm for the problem exists.
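The gap between checking and finding a solution can be sketched with SAT, the canonical NP-complete problem. The encoding below is a hypothetical illustration of my own: the verifier runs in polynomial time, which is exactly what places SAT in NP, while the only obvious solver tries all 2^n assignments.

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable index, polarity).
# This one encodes (x0 or not x1) and (x1 or x2) and (not x0 or not x2).
FORMULA = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]

def verify(formula, assignment):
    """Polynomial-time check that a candidate assignment (the
    'certificate') satisfies every clause: this cheap verifier
    is what puts SAT in NP."""
    return all(any(assignment[v] == pol for v, pol in clause) for clause in formula)

def solve(formula, n_vars):
    """Exhaustive search over all 2**n_vars assignments:
    exponential in general, unless P = NP."""
    for bits in product([False, True], repeat=n_vars):
        if verify(formula, bits):
            return bits
    return None

print(solve(FORMULA, 3))  # (False, False, True)
```

Any NP problem can be translated into such a formula by a polynomial-time reduction, so a fast `solve` for SAT would give a fast algorithm for every problem in NP.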