The theorems of inductive inference in computational learning theory may be interpreted as the ultimate theoretical constraints on the ability of finite machines to fabricate hypotheses and make predictions from information or data. However, all of the models of learning in the inductive inference literature require the hypothesis to be exactly correct infinitely often, provide no way to measure how far off a hypothesis's prediction is, and require the hypothesis to make predictions that are directly measurable. Furthermore, most of these models do not allow the learning of uncomputable functions, and for those that do, the literature has not explicitly noted this property. New notions of learning that rectify these deficiencies are introduced and examined. The first criterion considers a hypothesis successful if each prediction on x is within δ(x) of f(x), the value at x of the function f to be learned. The second criterion considers a hypothesis successful if each prediction on x is equal to f on some domain element within ε(x) of x, so long as that nearby value of f is within v(x) of f(x). These new learning criteria respectively model (a) learning in science with imprecise hypotheses, and (b) learning in science with hypotheses that ignore noisy and bad data, smoothing in image recognition with radius of smoothing ε and threshold v, and learning languages (i.e., two-valued functions) with imprecise hypotheses.
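The two success criteria can be illustrated with a small sketch over a finite integer domain. This is a simplification for intuition only: the criteria in the text concern hypotheses over an unbounded domain, and the names `delta_success` and `eps_v_success` are mine, not the paper's.

```python
def delta_success(h, f, domain, delta):
    """First criterion (sketch): every prediction h(x) lies
    within delta(x) of the target value f(x)."""
    return all(abs(h(x) - f(x)) <= delta(x) for x in domain)

def eps_v_success(h, f, domain, eps, v):
    """Second criterion (sketch): each prediction h(x) equals f(y)
    for some domain element y within eps(x) of x, where that nearby
    value f(y) is also within v(x) of f(x)."""
    return all(
        any(h(x) == f(y) and abs(f(y) - f(x)) <= v(x)
            for y in domain if abs(y - x) <= eps(x))
        for x in domain
    )

# Example: a hypothesis that is everywhere off by 1 satisfies the
# first criterion with slack delta(x) = 1 but not with delta(x) = 0.
domain = range(10)
f = lambda x: x * x
h = lambda x: x * x + 1
print(delta_success(h, f, domain, lambda x: 1))   # True
print(delta_success(h, f, domain, lambda x: 0))   # False

# Example: a hypothesis that predicts the neighbor's value succeeds
# under the second criterion with eps(x) = v(x) = 1.
g = lambda x: x
k = lambda x: min(x + 1, 9)
print(eps_v_success(k, g, domain, lambda x: 1, lambda x: 1))  # True
print(eps_v_success(k, g, domain, lambda x: 1, lambda x: 0))  # False
```

The second predicate makes explicit that the v(x) bound constrains the nearby function value used to justify the prediction, not the prediction's distance from f(x) directly.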