
  • Article (No Access)

    A GENETIC ALGORITHM FOR IMPROVING ACCURACY OF SOFTWARE QUALITY PREDICTIVE MODELS: A SEARCH-BASED SOFTWARE ENGINEERING APPROACH

    In this work, we present a genetic algorithm for optimizing the predictive models used to estimate software quality characteristics. Software quality assessment is crucial in software development since it helps reduce cost, time and effort. However, software quality characteristics cannot be measured directly; they can only be estimated from other measurable software attributes (such as coupling, size and complexity). Software quality estimation models establish a relationship between the unmeasurable characteristics and the measurable attributes. These models, however, are hard to generalize and reuse on new, unseen software because their accuracy deteriorates significantly. In this paper, we present a genetic algorithm that adapts such models to new data. We give empirical evidence that our approach outperforms the C4.5 machine learning algorithm and random guessing.
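
    The abstract does not spell out how the models are encoded or evolved. As a minimal, hedged sketch of the general idea only (re-tuning a simple quality rule on new data with a genetic algorithm), the Python below evolves the weights of a linear scoring rule over coupling, size and complexity metrics; the data, encoding, operators and fitness function are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: a tiny genetic algorithm that re-tunes the weights of a
# linear "quality" rule on new data. The encoding, operators, and fitness function
# are assumptions for illustration, not the model described in the paper.
import random

random.seed(0)

# Hypothetical new data: (coupling, size, complexity) metrics and a 0/1 quality label.
DATA = [((3, 120, 7), 1), ((1, 40, 2), 0), ((5, 300, 12), 1),
        ((2, 80, 4), 0), ((4, 200, 9), 1), ((1, 60, 3), 0)]

def accuracy(weights):
    """Fitness: fraction of modules the weighted rule classifies correctly."""
    w0, w1, w2, w3 = weights
    correct = 0
    for (coupling, size, complexity), label in DATA:
        score = w0 + w1 * coupling + w2 * size + w3 * complexity
        correct += int((score > 0) == bool(label))
    return correct / len(DATA)

def mutate(weights, rate=0.3):
    """Perturb each weight with small Gaussian noise at the given rate."""
    return tuple(w + random.gauss(0, 1) if random.random() < rate else w for w in weights)

def crossover(a, b):
    """Uniform crossover: pick each weight from either parent."""
    return tuple(random.choice(pair) for pair in zip(a, b))

def evolve(pop_size=30, generations=50):
    population = [tuple(random.uniform(-1, 1) for _ in range(4)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=accuracy, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=accuracy)

best = evolve()
print("adapted weights:", best, "accuracy on new data:", accuracy(best))
```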

  • Chapter (No Access)

    A SURVEY OF SOFTWARE INSPECTION TECHNOLOGIES

    Software inspection is a proven method that enables the detection and removal of defects in software artifacts as soon as these artifacts are created. It usually involves activities in which a team of qualified personnel determines whether the created artifact is of sufficient quality; detected quality deficiencies are subsequently corrected. In this way, an inspection can not only contribute to software quality improvement but also lead to significant budget and time benefits. These advantages have already been demonstrated in many software development projects and organizations.

    Since Fagan's seminal paper of 1976, the body of work on software inspection has grown considerably and matured. This survey provides an overview of that large body of contributions, which take the form of incremental improvements and new methodologies proposed to leverage and amplify the benefits of inspections within software development and even maintenance projects. To structure this large volume of work, the survey first introduces the core concepts and relationships that together embody the field of software inspection. It then discusses the inspection-related work in the context of the resulting taxonomy.

    The survey is beneficial for researchers as well as practitioners. Researchers can use the presented survey taxonomy to evaluate existing work in this field and identify new research areas. Practitioners, on the other hand, get information on the reported benefits of inspections. Moreover, they find an explanation of the various methodological variations and get guidance on how to instantiate the various taxonomy dimensions for the purpose of tailoring and performing inspections in their software projects.

  • Chapter (No Access)

    METRICS FOR IDENTIFYING CRITICAL COMPONENTS IN SOFTWARE PROJECTS

    Improving the field performance of telecommunication systems is a key objective of both telecom suppliers and operators, as an increasing number of business-critical systems worldwide rely on dependable telecommunication. Early defect detection improves field performance in terms of reduced field failure rates and reduced intrinsic downtime. Cost-effective software project management therefore focuses resources on intensive validation of the areas with the highest criticality. This article outlines techniques for identifying such critical areas in software systems. It concentrates on the practical application of criticality-based predictions in industrial development projects, namely the selection of a classification technique and the use of its results in directing management decisions. The first part comprehensively compares and evaluates five common classification techniques (Pareto classification, classification trees, factor-based discriminant analysis, fuzzy classification, neural networks) for identifying critical components; results from a large-scale industrial switching project are included to show the practical benefits. Once it is known which technique should be applied, the second area gains even more attention: what are the impacts for practical project management within given resource and time constraints? Several selection criteria based on the results of a combined criticality and history analysis are provided, together with concrete implementation decisions.
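
    The chapter evaluates all five techniques on industrial data; as a small, hedged illustration of just one of them, the Python sketch below applies a Pareto-style classification, flagging the top 20 percent of modules ranked by a simple criticality indicator so that validation effort can be focused on them. The module names, metrics, weights and cut-off are invented for illustration and are not taken from the chapter.

```python
# Illustrative sketch only: Pareto-style classification of components, flagging the
# "vital few" modules ranked by a naive criticality indicator. All values invented.
modules = {
    "call_control":   {"changes": 42, "complexity": 310, "faults_last_release": 9},
    "billing":        {"changes": 12, "complexity": 120, "faults_last_release": 2},
    "routing":        {"changes": 35, "complexity": 280, "faults_last_release": 7},
    "ui_layer":       {"changes":  8, "complexity":  90, "faults_last_release": 1},
    "protocol_stack": {"changes": 27, "complexity": 240, "faults_last_release": 5},
}

def criticality(metrics):
    """A naive combined indicator: weighted sum of change count, complexity and past faults."""
    return metrics["changes"] + metrics["complexity"] / 10 + 5 * metrics["faults_last_release"]

ranked = sorted(modules, key=lambda name: criticality(modules[name]), reverse=True)
cutoff = max(1, round(0.2 * len(ranked)))      # top 20 percent of modules
critical = ranked[:cutoff]

print("focus intensive validation on:", critical)
```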

  • Chapter (No Access)

    ON SOFTWARE ENGINEERING AND LEARNING THEORY FACILITATING LEARNING IN SOFTWARE QUALITY IMPROVEMENT PROGRAMS

    “Knowledge” is one of the main results of software engineering, software projects and software process improvement. During software engineering projects, developers learn how to apply certain technologies and how to solve particular development problems. During software process improvement, developers and managers learn how effective and efficient their development processes are, and how to improve these processes. As “learning” is so important in software practice, it is logical to examine it more closely. What is learning? How does learning take place? Is it possible to improve the conditions for learning?

    This chapter presents an overview of learning theories and the application of these theories in the software-engineering domain. It is not our intention to be complete; our objective is to show how established learning theories can help to facilitate learning in software development practice.

  • Chapter (No Access)

    AN APPLICATION OF GENETIC PROGRAMMING TO SOFTWARE QUALITY PREDICTION

    Because highly reliable software is becoming an essential ingredient in many systems, software developers apply various techniques to discover faults early in development, such as more rigorous reviews, more extensive testing, and strategic assignment of key personnel. Our goal is to target reliability enhancement activities to those modules that are most likely to have problems. This paper presents a methodology that incorporates genetic programming for predicting the order of software modules based on the expected number of faults. This is the first application of genetic programming to software engineering that we know of. We found that genetic programming can be used to generate software quality models whose inputs are software metrics collected earlier in development, and whose output is a prediction of the number of faults that will be discovered later in development or during operations. We established ordinal evaluation criteria for models, and conducted an industrial case study of software from a military communications system. Case study results were sufficiently good to be useful to a project for choosing modules for extra reliability enhancement treatment.
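
    The paper's genetic-programming setup and ordinal evaluation criteria are only summarized above. As a rough, hedged sketch of the general approach (evolving an arithmetic expression over module metrics and judging candidates by how well they reproduce the ordering of modules by fault count), the Python below is illustrative only: the metrics, data, operators and parameters are assumptions, not the authors' models or case-study data.

```python
# Illustrative sketch only: a minimal genetic-programming loop that evolves an
# arithmetic expression over module metrics and scores candidates by how well they
# rank modules by fault count (pairwise order agreement, an ordinal criterion).
import random

random.seed(1)

METRICS = ["loc", "fan_out", "changes"]
# Hypothetical training data: metric values per module and faults found later.
MODULES = [({"loc": 900, "fan_out": 14, "changes": 30}, 12),
           ({"loc": 150, "fan_out":  3, "changes":  4},  1),
           ({"loc": 600, "fan_out":  9, "changes": 22},  8),
           ({"loc": 300, "fan_out":  5, "changes":  7},  2),
           ({"loc": 750, "fan_out": 11, "changes": 18},  9)]

def random_tree(depth=3):
    """Random expression tree: an operator node (op, left, right), a metric name, or a constant."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(METRICS + [random.uniform(0, 5)])
    op = random.choice(["+", "-", "*"])
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, metrics):
    """Evaluate an expression tree on one module's metric values."""
    if isinstance(tree, str):
        return metrics[tree]
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    a, b = evaluate(left, metrics), evaluate(right, metrics)
    return a + b if op == "+" else a - b if op == "-" else a * b

def ordinal_fitness(tree):
    """Fraction of module pairs whose predicted order matches the true fault order."""
    preds = [(evaluate(tree, m), faults) for m, faults in MODULES]
    agree = total = 0
    for i in range(len(preds)):
        for j in range(i + 1, len(preds)):
            (p1, f1), (p2, f2) = preds[i], preds[j]
            if f1 != f2:
                total += 1
                agree += int((p1 > p2) == (f1 > f2))
    return agree / total if total else 0.0

def mutate(tree):
    """Occasionally replace a subtree with a fresh random tree."""
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree()
    op, left, right = tree
    return (op, mutate(left), mutate(right))

population = [random_tree() for _ in range(40)]
for _ in range(30):
    population.sort(key=ordinal_fitness, reverse=True)
    survivors = population[:20]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=ordinal_fitness)
print("pair-order agreement of the best evolved program:", ordinal_fitness(best))
```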

  • Chapter (No Access)

    Approximate Reasoning about Complex Objects in Distributed Systems: Rough Mereological Formalization

    We propose an approach to approximate reasoning by systems of intelligent agents based on the paradigm of rough mereology. In this approach, the knowledge of each agent is formalized as an information system (a data table) from which similarity measures on the objects manipulated by that agent are inferred. These similarity measures are based on rough mereological inclusions, which formally render the degree to which one object is a part of another. In this way, each agent constructs its own rough mereological logic, in which it is possible to express approximate statements of the form: an object x satisfies a predicate Ψ in degree r. The agents communicate by means of mereological functors (connectives among distinct rough mereological logics) that propagate similarity measures from simpler to more complex agents; establishing these connectives is the main goal of negotiations among agents. The presented model of approximate reasoning subsumes such models of approximate reasoning as fuzzy controllers and neural networks. Our approach may be termed analytic, in the sense that all basic constructs are inferred from data.
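
    The abstract refers to rough mereological inclusions inferred from an agent's data table. One commonly used concrete instance is the fraction of attributes on which two objects agree; the small Python sketch below computes this inclusion over an invented information system and uses it to state that an object satisfies a predicate in some degree. The table, objects and predicate are illustrative assumptions, not taken from the chapter.

```python
# Illustrative sketch only: a data-table ("information system") based rough inclusion,
# mu(x, y) = |attributes on which x and y agree| / |all attributes|, and its use to
# state "x satisfies a predicate in degree r". The table and predicate are invented.
ATTRIBUTES = ["shape", "size", "material"]

# One agent's information system: objects described by attribute values.
TABLE = {
    "part_a": {"shape": "round",  "size": "small", "material": "steel"},
    "part_b": {"shape": "round",  "size": "small", "material": "brass"},
    "part_c": {"shape": "square", "size": "large", "material": "steel"},
}

def rough_inclusion(x, y):
    """Degree to which object x is a part of (is similar to) object y."""
    agree = sum(1 for a in ATTRIBUTES if TABLE[x][a] == TABLE[y][a])
    return agree / len(ATTRIBUTES)

def satisfies_in_degree(x, prototype):
    """Degree r with which x satisfies the predicate 'is like the prototype object'."""
    return rough_inclusion(x, prototype)

for obj in TABLE:
    print(obj, "is like part_a in degree", round(satisfies_in_degree(obj, "part_a"), 2))
```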