Machine learning of limit programs (i.e., programs allowed finitely many mind changes about their legitimate outputs) for computable functions is studied. Learning of iterated limit programs is also studied. To partially motivate these studies, it is shown that, in some cases, interesting global properties of computable functions can be proved from suitable (n+1)-iterated limit programs for them that cannot be proved from any n-iterated limit programs for them. It is shown that learning power is increased when (n+1)-iterated limit programs, rather than n-iterated limit programs, are to be learned. Many trade-off results are obtained regarding learning power, the number (possibly zero) of limits taken, program size constraints and information, and the number of errors tolerated in the final programs learned.
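The mind-change mechanism behind limit programs can be illustrated with a toy sketch: a value h(n) is computed in the limit as h(n) = lim_t g(n, t), where the computable guesser g may revise its output finitely many times before stabilizing. The predicate searched for below (whether n has the form k² + k) is my own choice for illustration and is in fact decidable outright; only the staged guess-and-revise mechanism is the point.

```python
def g(n, t):
    # Stage-t guess: has a witness k <= t with k*k + k == n been found yet?
    return 1 if any(k * k + k == n for k in range(t + 1)) else 0

def limit_value(n, stages=100):
    # Run the guesser through increasing stages, recording how often
    # it changes its mind before (apparently) converging.
    guesses = [g(n, t) for t in range(stages)]
    changes = sum(1 for a, b in zip(guesses, guesses[1:]) if a != b)
    return guesses[-1], changes

print(limit_value(12))  # 12 = 3*3 + 3: guess converges to 1 after one mind change
print(limit_value(13))  # no witness exists: guess stays 0, zero mind changes
```

An n-iterated limit program nests this construction n deep, with each layer taking a limit of the layer below; the abstract's results compare the proving and learning power available at successive depths.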
Hypercomputation is about the feasibility of machines and systems that are either more expressive or computationally more powerful than the Turing machine. Several researchers and thinkers have put forth supposedly knock-out arguments against hypercomputation. Nevertheless, these arguments are not as unassailable as they seem to be, and here I explain why.
A packing function on a set Ω in ℝ^n is a one-to-one correspondence between the set of lattice points in Ω and the set ℕ_0 of non-negative integers. It is proved that if r and s are relatively prime positive integers such that r divides s − 1, then there exist two distinct quadratic packing polynomials on the sector {(x, y) ∈ ℝ^2 : 0 ≤ y ≤ rx/s}. For the rational numbers 1/s, these are the unique quadratic packing polynomials. Moreover, quadratic quasi-polynomial packing functions are constructed for all rational sectors.