This paper examines aspects of an induction process whose total run time can be significantly decreased through the use of a vector processor and appropriate algorithms and code. The simplified induction algorithm assumes that all attribute values are real-valued, that a number of candidate attributes are available at every node, and that possible partition points are given. The algorithm chooses the one-step optimal attribute at each node and then builds a tree. Two slightly different variants of the algorithm are considered. For the first, the results include a comparison between the times taken for a scalar compilation and for a compilation with automatic vectorization and some vector compiler directives. In the second variant, calls to the vector library are used for some parts and compiler directives for the others.
Timings for building the tree on a learning sample and for passing a test sample down an existing tree are compared in each case. The gain in speed, even for relatively short vectors, is about a factor of two.
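A rough sketch of the per-node work such an algorithm performs may help. The code below is an illustrative assumption, not the paper's code: it chooses the one-step optimal split over pre-specified partition points using a Gini impurity criterion, with the inner comparisons expressed as NumPy whole-array operations of the kind a vectorizing compiler or vector library call would accelerate. The function names and data layout are invented for this sketch.

```python
# Illustrative sketch (not the paper's code): one-step optimal split selection
# over pre-specified partition points, with vectorizable whole-array operations.
import numpy as np

def gini(counts):
    """Gini impurity of a class-count vector."""
    n = counts.sum()
    if n == 0:
        return 0.0
    p = counts / n
    return 1.0 - np.sum(p * p)

def best_split(X, y, thresholds):
    """X: (n_samples, n_attrs) real values; y: (n_samples,) class labels;
    thresholds: dict attr_index -> array of candidate partition points.
    Returns (best_attr, best_threshold) minimizing weighted Gini impurity."""
    classes = np.unique(y)
    best = (None, None, np.inf)
    n = len(y)
    for a, cuts in thresholds.items():
        cuts = np.asarray(cuts)
        # Boolean mask, shape (n_cuts, n_samples): sample i goes left under cut j.
        left = X[:, a][None, :] <= cuts[:, None]
        for j, mask in enumerate(left):
            lc = np.array([np.sum(y[mask] == c) for c in classes])
            rc = np.array([np.sum(y[~mask] == c) for c in classes])
            score = (lc.sum() * gini(lc) + rc.sum() * gini(rc)) / n
            if score < best[2]:
                best = (a, cuts[j], score)
    return best[0], best[1]
```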
This paper focuses on the formal proof of parallel programs dedicated to distributed memory symmetric interconnection networks; communications are realized by message passing. We have developed a method to formally verify the computational correctness of this kind of application. Using the notion of Cayley graphs to model the networks in the Nqthm theorem prover, we have formally specified and mechanically proven correct a large set of collective communication primitives. Our compositional approach allows us to reuse these libraries of pre-proven procedures to validate complex application programs within Nqthm. This is illustrated by three examples.
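For readers unfamiliar with the Cayley-graph view of such networks, the following sketch is purely illustrative plain Python; it does not reproduce the Nqthm formalisation or the paper's primitives. It builds the n-dimensional hypercube as the Cayley graph of (Z/2)^n with the unit vectors as generators and shows a dimension-by-dimension broadcast of the kind such collective communication primitives implement.

```python
# Illustrative sketch only (not the paper's Nqthm formalisation): the n-dimensional
# hypercube as the Cayley graph of (Z/2)^n with the unit vectors as generators.
def hypercube_cayley(n):
    """Return adjacency lists: node -> neighbours, nodes encoded as n-bit ints."""
    generators = [1 << i for i in range(n)]          # unit vectors of (Z/2)^n
    return {v: [v ^ g for g in generators]           # group operation = XOR
            for v in range(1 << n)}

def broadcast_rounds(n, root=0):
    """Dimension-by-dimension broadcast: in round d every node already holding
    the message forwards it along dimension d; after n rounds all 2**n nodes hold it."""
    have = {root}
    for d in range(n):
        have |= {v ^ (1 << d) for v in have}
    return have
```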
Mutations are structural changes in DNA that can cause protein malfunction and genetic disease. This paper describes a machine learning approach to analyzing mutations associated with Osteogenesis Imperfecta (OI), also known as brittle bone disease. We apply SORCER, a second-order rule induction system, to predict clinical phenotypes of OI from mutation and neighboring amino acid sequences in the COL1A1 gene. Averaged over ten 10-fold cross-validations, SORCER gives more accurate results than C4.5, with an average accuracy of about 81.2%. The paper discusses the advantages and limitations of SORCER and demonstrates its use for the initial exploration of biological sequences.
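As a sketch of the evaluation protocol only (SORCER itself is not reproduced here; scikit-learn's DecisionTreeClassifier stands in for a C4.5-style learner, and the function name is an assumption), accuracy averaged over ten repetitions of stratified 10-fold cross-validation could be estimated as follows.

```python
# Sketch of the repeated cross-validation protocol, with a decision tree as a
# stand-in learner; not the study's actual pipeline.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

def repeated_cv_accuracy(X, y, repeats=10, folds=10, seed=0):
    """Mean test accuracy over `repeats` runs of stratified `folds`-fold CV."""
    scores = []
    for r in range(repeats):
        skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed + r)
        for train, test in skf.split(X, y):
            clf = DecisionTreeClassifier().fit(X[train], y[train])
            scores.append(clf.score(X[test], y[test]))
    return float(np.mean(scores))
```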
The purpose of this expert system is to assess a predisposition to bleeding in a patient undergoing a tonsillectomy and/or adenoidectomy, as may occur with patients who have certain blood conditions such as hemophilia, von Willebrand’s disease, and platelet function defects. This objective is achieved by establishing a correlation between the patients’ responses to a medical questionnaire and the relative quantities of blood lost during their operations.
Three major modules constitute the system: the automated questionnaire to be completed by the patient, the expert system proper, and the patient database. The questionnaire is divided into fourteen sections on topics such as the occurrence of bruises and positive familial history. The expert system takes the responses to the questionnaire and determines risk categories for patients undergoing the operations. The patient database contains the patients’ questionnaire responses, personal data and the risk assessments generated by the expert system.
Subsequent to the development of the expert system prototype, inductive learning techniques were examined to automatically classify patients as either normal or abnormal. Specifically, Quinlan’s ID3 induction algorithm was used whereby a training set of patient data generates a classification tree whose leaf nodes correspond to the classification outcomes.
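For reference, a minimal ID3-style sketch is given below. It is illustrative only (plain Python, categorical attributes, information-gain splitting) and is not the system's implementation.

```python
# Minimal ID3-style sketch: recursively split on the categorical attribute
# with the highest information gain; leaves hold class labels.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    n = len(labels)
    split = {}
    for row, lab in zip(rows, labels):
        split.setdefault(row[attr], []).append(lab)
    return entropy(labels) - sum(len(s) / n * entropy(s) for s in split.values())

def id3(rows, labels, attrs):
    if len(set(labels)) == 1:
        return labels[0]                                  # pure leaf
    if not attrs:
        return Counter(labels).most_common(1)[0][0]       # majority leaf
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))
    tree = {}
    for value in {row[best] for row in rows}:
        idx = [i for i, row in enumerate(rows) if row[best] == value]
        tree[value] = id3([rows[i] for i in idx], [labels[i] for i in idx],
                          [a for a in attrs if a != best])
    return (best, tree)
```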
This paper presents the architecture of the hematology expert system, identifies some problems encountered during the knowledge engineering process and presents some statistical data pertaining to the accuracy of its risk assessments. It also offers a comparative review of the expert system and inductive learning methodologies.
Inference is a very general reasoning process that allows us to draw consequences from some body of knowledge. Machine learning (ML) uses the three kinds of possible inference: deductive, inductive, and analogical. We describe here different methods, based on these inferences, that have been developed during the last decade to improve the way machines can learn. We have already presented the most classical approaches in a book (Kodratoff, 1988) and in several review papers (Kodratoff, 1989, 1990a, 1992). These results are described here only briefly, in order to leave room for newer results. We also include genetic algorithms as an induction technique. We restrict our presentation to the symbolic aspects of connectionism.
A learning system can also be viewed as a mechanism skimming interesting knowledge out of the flow of information that runs through it. We present several existing learning systems from this point of view.
A questionnaire, designed to assess bleeding predispositions in tonsillectomy and/or adenoidectomy patients, was administered to 236 otherwise healthy children. For comparative purposes, 114 patients with bleeding disorders were also studied. An unsupervised non-metric clustering technique was used in an attempt to separate bleeders from non-bleeders based solely on the responses to the questionnaire. Non-metric techniques are essential for the classification process because of the large number of missing attribute values in the patient data set. As a benchmark, a supervised inductive machine learning strategy was also used to classify the patients.
Performance results are compared and contrasted between the techniques across different subsets of the patient data. These techniques are also evaluated as a methodology for determining the relative significance of attributes vis-à-vis the reduction of the dimensionality of a large medical data set. In this investigation, the classification rate achieved using the non-metric technique (73%) was only marginally poorer than the rate using the supervised technique (76%). Moreover, these results were obtained with an accompanying 80% reduction in the number of attributes used to perform the analysis.
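A minimal sketch of the kind of dissimilarity that tolerates missing answers is shown below; the study's actual non-metric technique is not reproduced, and the Gower-style matching coefficient and function name are assumptions made for illustration.

```python
# Illustrative sketch: simple matching dissimilarity over categorical questionnaire
# responses that ignores attributes missing (None) in either patient.
def matching_dissimilarity(a, b):
    """a, b: equal-length lists of categorical answers; None marks a missing value."""
    compared = mismatched = 0
    for x, y in zip(a, b):
        if x is None or y is None:
            continue                      # missing values do not contribute
        compared += 1
        mismatched += (x != y)
    return mismatched / compared if compared else 1.0   # no overlap: maximally dissimilar
```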
The osteoinductive activity of bone morphogenetic proteins can be demonstrated by implantation into a muscle pouch in the mouse to induce ectopic cartilage and bone. To study endogenous BMP expression after implantation of 2 mg of native bovine BMP, cDNA clones of the chondroinductive factors BMP-4 and BMP-7 were constructed and used as probes. The investigation concentrated on the early phase of regeneration, three and seven days after implantation. Expression of the endogenous BMP-4 and BMP-7 genes during chondrogenesis was observed. Both BMP-4 and BMP-7 genes were significantly expressed as early as seven days after implantation. The results suggest that ectopically implanted bovine BMP can induce chondrogenesis and that BMP-4 and BMP-7 participate in this process. This observation could be applied clinically to the repair of damaged cartilage.
Frobenius reciprocity asserts that induction from a subgroup and restriction to it are adjoint functors in categories of unitary G-modules. In the 1980s, Guillemin and Sternberg established a parallel property of Hamiltonian G-spaces, which (as we show) unfortunately fails to mirror the situation where more than one G-module “quantizes” a given Hamiltonian G-space. This paper offers evidence that the situation is remedied by working in the category of prequantum G-spaces, where this ambiguity disappears; there, we define induction and multiplicity spaces and establish Frobenius reciprocity as well as the “induction in stages” property.
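For reference, the classical adjunction and the induction-in-stages property read, in standard notation (not quoted from the paper):

```latex
% Classical Frobenius reciprocity for a subgroup H <= G: induction is adjoint
% to restriction,
\[
  \operatorname{Hom}_G\bigl(\operatorname{Ind}_H^G V,\, W\bigr)
  \;\cong\;
  \operatorname{Hom}_H\bigl(V,\, \operatorname{Res}^G_H W\bigr),
\]
% and induction in stages for K <= H <= G:
\[
  \operatorname{Ind}_H^G \circ \operatorname{Ind}_K^H \;\cong\; \operatorname{Ind}_K^G .
\]
```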
Lie bialgebras occur as the principal objects in the infinitesimalization of the theory of quantum groups — the semi-classical theory. Their relationship with the quantum theory has made available some new tools that we can apply to classical questions. In this paper, we study the simple complex Lie algebras using the double-bosonization construction of Majid. This construction expresses algebraically the induction process given by adding and removing nodes in Dynkin diagrams, which we call Lie induction.
We first analyze the deletion of nodes, corresponding to the restriction of adjoint representations to subalgebras. This uses a natural grading associated to each node. We give explicit calculations of the module and algebra structures in the case of the deletion of a single node from the Dynkin diagram for a simple Lie (bi-)algebra.
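As a standard illustration of such a grading (conventional notation, not drawn from the paper's calculations): deleting node i of the A_n diagram induces a three-step Z-grading of sl_{n+1}.

```latex
% Standard example: deleting node $i$ of the $A_n$ diagram grades
% $\mathfrak{sl}_{n+1}$ in three pieces,
\[
  \mathfrak{sl}_{n+1} \;=\; \mathfrak{g}_{-1}\oplus\mathfrak{g}_{0}\oplus\mathfrak{g}_{1},
  \qquad
  \mathfrak{g}_{0}\;\cong\;\mathfrak{sl}_{i}\oplus\mathfrak{sl}_{n+1-i}\oplus\mathbb{C},
\]
\[
  \mathfrak{g}_{1}\;\cong\;\mathbb{C}^{i}\otimes\bigl(\mathbb{C}^{\,n+1-i}\bigr)^{*},
  \qquad
  \mathfrak{g}_{-1}\;\cong\;\bigl(\mathbb{C}^{i}\bigr)^{*}\otimes\mathbb{C}^{\,n+1-i},
\]
% so restricting the adjoint representation to $\mathfrak{g}_0$ exhibits the two
% "off-diagonal" modules alongside the adjoint modules of the subalgebras.
```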
We next consider the inverse process, namely that of adding nodes, and give some necessary conditions for the simplicity of the induced algebra. Finally, we apply these results to the exceptional series of simple Lie algebras, in the context of finding obstructions to the existence of finite-dimensional simple complex Lie algebras of types E9, F5 and G3. In particular, our methods give a new point of view on why there cannot exist such an algebra of type E9.
In this paper, we study lattices of subgroups that can be used to obtain lifts of π-partial characters. In particular, we find a condition on the lattices so that the associated lifts will be well-behaved regarding induction and restriction on normal subgroups. We show that the lattices obtained from all subnormal subgroups have this property, but the lattices obtained from all normal subgroups do not.
Natural melanin is a sustainable and environmentally friendly pigment. The structural properties of melanin are of great significance for its functional applications. In this study, the preliminary composition and structure of melanin extracted from induced cultures of Lasiodiplodia theobromae were qualitatively characterized by ultraviolet-visible (UV-Vis) spectrophotometry, elemental analysis (EA), Fourier transform infrared (FTIR) spectroscopy, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR) and ultra-performance liquid chromatography-mass spectrometry (UPLC-MS). The results showed that the secreted melanin contained many functional groups, such as hydroxyl groups, alkane chains and aromatic rings, corresponding to benzoquinone and aliphatic structures. The melanin secreted by Lasiodiplodia theobromae can therefore be identified as a type of allomelanin. This work is expected to provide a reference for the application of melanin in various fields.
Glaser and Strauss’s grounded theory (GT) was a new theory of conceptual sociology. GT emerged from Glaser’s empirical sociology and Strauss’s pragmatic symbolic interactionism. The result of GT was a practical method to build theory from qualitative data. Unfortunately, the practicality of GT hid its ontology, epistemology, and philosophical foundations. Three waves, or historical periods, of scientific philosophy have attempted to reveal GT’s foundations. These waves, inductivism, positivism, and social constructivism, left GT with no ground and no theory. This chapter pinpoints where GT went off track by examining the foundations of GT. We use Hegel’s philosophy to develop a fourth wave of GT using a foundation of dialectic ontological processes, with precisely what that means being explored in-depth. The intent of this fourth wave of GT is to get it back on track. Fourth-wave GT recognizes the multiplicity of intra-active, self-organizing ontologies necessary to defining ground and theory. Fourth-wave GT embraces Peirce’s, Heidegger’s, Bhaskar’s, and Žižek’s ontologies, resisting the grounding fallacies of logical positivism and social constructivism, contributing to the question of what theory can be, creating both ground and theory by attending deeply to philosophical concerns.
Grounded theory, as a qualitative research method, constructs theories grounded in data on previously unexplored phenomena. To this end, it requires the researcher to be able to form concepts and ideas from the data, to withstand uncertainty and a lack of understanding, and to accept a return, at any point, to the initial stage of the research process. Although grounded theory permits the flexibility required to produce and interpret themes that emerge from the data, its three dominant “waves” are techniques of knowing that are separated from “being-in-the-world.” This means they discard the existence of the researcher’s ontological status and that of others in time, space, and matter. This state of affairs maintains the “(inter)subjectivity–objectivity” dualism and yields a “disembodied organization research” that develops and turns inductive inferences into general categories. Against this backdrop, this chapter discusses the fallacies of each of the three waves of grounded theory and advances a fourth wave that adds theoretical anchoring and, therefore, provides a basis for developing theories.
Considering the impact of architectural decisions in the design of space for human habitation on human beings and their activities, this chapter discusses the increasingly evident and necessary contemporary confluence of many disciplines and human-oriented sciences, with architecture as the meeting ground for understanding emergent phenomena of human habitation. As both a general rubric and a specific phenomenon, architectural emergence is the chosen focus of discussion, and other phenomena are related to it. Attention is given to the phenomena of architectural induction, emergence, and convergence as having strategic and explanatory value in understanding the tension between two competing mentalities: the globally dominant nature-for-humans attitude and the less practiced humans-for-nature attitude.
We introduce some core principles related to systems thinking: interaction, the establishment of systems through organization and self-organization (emergence), and the constructivist role of the observer, including the use of language. It is not effective to deal with systemic properties in a non-systemic, reductionist way, i.e., by treating properties acquired by systems as properties possessed by objects. We consider the reduced language adopted in consumer societies as functional to maintaining a consumerist attitude. In consumer societies, language is suited to keeping people in the role of consumers with a limited ability to design and create; in this context, freedom is understood merely as freedom of choice. To counteract this reduced language, we propose the diffusion of suitable games, cartoons, comics, pictures, images, concepts and words that can enrich everyday language, especially that of young people, and provide an effective way of inducing some elementary aspects of systems thinking in everyday life. The purpose is to have a language with which to design and develop things, and not merely to select from what is already available. We list a number of proposals for the design of such games, stories and pictures.
Cholesterol oxidase (COX) induction in cells of Rhodococcus sp. (CIP 105335, strain GK1) depended on the sterol side chain. Octanoate or Tween 80 showed a stimulating effect on enzyme synthesis by this strain. COX occurred in a secreted form and/or a cell surface-bound form depending on the carbon source used for growth. Extractability of the cell surface-linked form from cells with phosphate buffer, or with the buffer containing a non-ionic detergent, varied with the carbon source. The detergent-extracted COX was co-solubilized with mycolate and three proteins of around 52, 26, and 19 kDa. Some of these cell surface proteins might be involved in anchoring COX within the mycolate-rich cell surface. The secreted, buffer-extracted, and detergent-extracted enzyme forms showed the same substrate specificity and molecular mass, around 60 kDa. COX of GK1 is thus exported across the cell membrane and is either localized on the cell surface or released into the growth medium, depending on the carbon source.