Classical statistical inference is nonmonotonic in nature. We show how it can be formalized in the default logic framework. The structure of statistical inference is the same as that represented by default rules. In particular, the prerequisite corresponds to the sample statistics, the justifications require that we do not have any reason to believe that the sample is misleading, and the consequence corresponds to the conclusion sanctioned by the statistical test.
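As a schematic illustration (using the standard form of a Reiter default rule, not notation taken from the paper), the correspondence described above can be written as

\[
\frac{\alpha : \beta}{\gamma}
\qquad\longleftrightarrow\qquad
\frac{\text{sample statistics} : \text{no reason to believe the sample is misleading}}{\text{conclusion sanctioned by the statistical test}}
\]

where the rule is read: if the prerequisite \(\alpha\) holds and the justification \(\beta\) is consistent with what is known, then conclude \(\gamma\).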
As programs become more complex, it is increasingly important to analyze their behavior statistically. This article describes two separate but synergistic tools for statistically analyzing large Lisp programs. The first tool, called CLIP (Common Lisp Instrumentation Package), allows the researcher to define and run experiments, including the experimental conditions (parameter values of the planner or simulator) and the data to be collected. The data are written to data files that can be analyzed by statistics software. The second tool, called CLASP (Common Lisp Analytical Statistics Package), allows the researcher to analyze data from experiments using graphics, statistical tests, and various kinds of data manipulation. CLASP has a graphical user interface (built with CLIM, the Common Lisp Interface Manager) and also allows data to be processed directly by Lisp functions. Finally, the article describes a number of other data-analysis modules that have been added to work with CLIP and CLASP.
The goal of this work is the further development of neoclassical analysis, which extends the scope and results of classical mathematical analysis by applying fuzzy logic to conventional mathematical objects, such as functions, sequences, and series. This allows us to reflect and model the vagueness and uncertainty of our knowledge, which results from imprecision of measurement and inaccuracy of computation. Based on the theory of fuzzy limits, we develop the structure of statistical fuzzy convergence and study its properties. Relations between statistical fuzzy convergence and fuzzy convergence are considered in the First Subsequence Theorem and the First Reduction Theorem. Algebraic structures of statistical fuzzy limits are described in the Linearity Theorem. Topological structures of statistical fuzzy limits are described in the Limit Set Theorem and the Limit Fuzzy Set Theorems. Relations between statistical convergence, statistical fuzzy convergence, ergodic systems, fuzzy convergence, and convergence of statistical characteristics, such as the mean (average) and the standard deviation, are studied in Secs. 2 and 4. The introduced constructions and obtained results open new directions for further research, which are considered in the Conclusion.
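For background (this is the standard crisp definition, not a result of the paper), a sequence \((x_k)\) converges statistically to \(L\) when the indices at which it strays from \(L\) have natural density zero:

\[
\lim_{n\to\infty} \frac{1}{n}\,\bigl|\{\, k \le n : |x_k - L| \ge \varepsilon \,\}\bigr| = 0
\quad\text{for every } \varepsilon > 0 .
\]

Statistical fuzzy convergence can then be understood, roughly, as the analogue obtained when the crisp limit \(L\) in this scheme is replaced by a fuzzy limit in the sense of the theory of fuzzy limits.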
In this study, a new regression method called Kappa regression is introduced to model conditional probabilities. The regression function is based on Dombi's Kappa function, which is well known in fuzzy theory. Here, we discuss how the Kappa function relates to the Logistic function and how it can be used to approximate the Logistic function. We introduce the so-called Generalized Kappa Differential Equation and show that both the Kappa and the Logistic functions can be derived from it. Kappa regression, like binary Logistic regression, models the conditional probability of the event that a dichotomous random variable takes a particular value at a given value of an explanatory variable. This new regression method may be viewed as an alternative to binary Logistic regression, but whereas in binary Logistic regression the explanatory variable is defined over the entire Euclidean space, in the Kappa regression model the predictor variable is defined over a bounded subset of the Euclidean space. We also show that Kappa regression is asymptotically Logistic regression. The advantages of this novel method are demonstrated by means of an example, after which some implications are discussed.
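For reference (the abstract does not state the Kappa function's formula, so only the familiar baseline is shown), binary Logistic regression models the conditional probability of the event \(Y = 1\) at a value \(x\) of the explanatory variable as

\[
P(Y = 1 \mid x) \;=\; \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}, \qquad x \in \mathbb{R},
\]

whereas Kappa regression models the same kind of conditional probability with the predictor restricted to a bounded subset of the Euclidean space.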
I will make an argument about who will benefit from this special issue on data science and related topics.
In dermatology, optical coherence tomography (OCT) is used to visualize the skin down to a depth of a few millimeters. These images are affected by speckle, which can hinder their interpretation but which also carries information that locally characterizes the visualized tissue. In this paper, we propose to differentiate the skin layers by locally modeling the speckle in OCT images. The ability of four probability density functions (Rayleigh, Lognormal, Nakagami, and Generalized Gamma) to model the distribution of speckle in each skin layer is analyzed. Based on this study, we propose to classify the pixels of OCT images using the estimated parameters of the most appropriate distribution. Quantitative results on 30 images are compared with the manual delineations of five experts. The results confirm the potential of the method to generate useful data for robust segmentation.
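As a minimal sketch of the distribution-selection step (assuming amplitude samples from a homogeneous region; the function names, the maximum-likelihood comparison, and the use of scipy.stats are illustrative choices, not the authors' implementation):

```python
# Illustrative sketch (not the authors' code): compare how well several
# candidate distributions fit a sample of speckle amplitudes from one
# OCT region, using maximum-likelihood fits from scipy.stats.
import numpy as np
from scipy import stats

candidates = {
    "Rayleigh": stats.rayleigh,
    "Lognormal": stats.lognorm,
    "Nakagami": stats.nakagami,
    "Generalized Gamma": stats.gengamma,
}

def best_fit(amplitudes):
    """Return (name, params, log-likelihood) of the best-fitting model."""
    results = []
    for name, dist in candidates.items():
        # Fix the location at 0, since speckle amplitudes are non-negative.
        params = dist.fit(amplitudes, floc=0)
        loglik = np.sum(dist.logpdf(amplitudes, *params))
        results.append((name, params, loglik))
    return max(results, key=lambda r: r[2])

# Synthetic data standing in for a homogeneous OCT region.
rng = np.random.default_rng(0)
sample = stats.nakagami.rvs(nu=1.5, scale=0.8, size=2000, random_state=rng)
print(best_fit(sample)[0])
```

The estimated parameters of the selected distribution could then serve as per-pixel (or per-patch) features for classifying skin layers, in the spirit of the approach summarized above.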