

  Bestsellers

  • Article (No Access)

    ON SASAKIAN–EINSTEIN GEOMETRY

    We introduce a multiplication ⋆ (which we call a join) on the space of all compact Sasakian–Einstein orbifolds [formula] and show that [formula] has the structure of a commutative associative topological monoid. The set [formula] of all compact regular Sasakian–Einstein manifolds is then a submonoid. The set of smooth manifolds in [formula] is not closed under this multiplication; however, the join [formula] of two Sasakian–Einstein manifolds is smooth under some additional conditions, which we specify. We use this construction to obtain many old and new examples of Sasakian–Einstein manifolds. In particular, in every odd dimension greater than five we obtain spaces with arbitrary second Betti number.

  • Article (No Access)

    An analysis of QoS ranking prediction framework techniques

    Building a high-quality cloud computing framework is a key challenge for researchers in today's on-demand service environment. The non-functional properties of a service are referred to as its Quality-of-Service (QoS), and obtaining QoS values generally requires real-world usage experience. Many organizations, such as Amazon, HP and IBM, offer various cloud services to customers, yet no technique is available to measure real-world usage and estimate a ranking of the cloud services. From the customer's side, it is very difficult to choose the right cloud service provider (CSP) that fulfills all of their requirements. To remove this confusion in selecting the right CSP, this paper proposes QoS ranking prediction methods named CloudRank1, CloudRank2 and CloudRank3. Various experiments are carried out on real-world QoS data using Amazon EC2 services, yielding promising results and solutions.
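The abstract does not spell out the CloudRank algorithms themselves, but the core idea of ranking services purely from a user's observed usage data can be sketched in a few lines of Python. This mean-based ranking is a simplification for illustration only, not the paper's actual CloudRank1–3 methods; the service names and response times are invented.

```python
def rank_services(observations):
    """Rank cloud services by their mean observed QoS value.

    observations: dict mapping service name -> list of observed
    response times (lower is better). Services the user never
    invoked are simply omitted, mirroring ranking approaches that
    work only from observed usage data.
    """
    means = {s: sum(v) / len(v) for s, v in observations.items() if v}
    # Sort ascending: the fastest (best-QoS) service is ranked first.
    return sorted(means, key=means.get)

# Hypothetical response-time observations (ms) for three EC2 regions.
ranking = rank_services({
    "ec2-us-east": [110.0, 95.0, 101.0],
    "ec2-eu-west": [180.0, 175.0],
    "ec2-ap-south": [90.0, 92.0],
})
```

A real QoS ranking method would also have to reconcile rankings from users with different observation sets; this sketch only covers the single-user case.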

  • Article (No Access)

    DESIGN LEVEL HYPOTHESIS TESTING THROUGH REVERSE ENGINEERING OF OBJECT-ORIENTED SOFTWARE

    Comprehension of an object-oriented (OO) system, its design and use of OO features such as aggregation, generalisation and other forms of association is a difficult task to undertake without the original design documentation for reference. In this paper, we describe the collection of high-level class metrics from the UML design documentation of five industrial-sized C++ systems. Two of the systems studied were libraries of reusable classes. Three hypotheses were tested between these high-level features and the low-level class features of a number of class methods and attributes in each of the five systems. A further two conjectures were then investigated to determine features of key classes in a system and to investigate any differences between library-based systems and the other systems studied in terms of coupling.

    Results indicated that, for the three application-based systems, no clear patterns emerged for hypotheses relating to generalisation. There was, however, a clear (positive) statistical significance for all three systems studied between aggregation, other types of association and the number of methods and attributes in a class. Key classes in the three application-based systems tended to contain large numbers of methods, attributes, and associations, significant amounts of aggregation but little inheritance. No consistent, identifiable key features could be found in the two library-based systems; both showed a distinct lack of any form of coupling (including inheritance) other than through the C++ friend facility.
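The positive association reported above between aggregation, other associations and class size can be illustrated with a plain Pearson correlation. The per-class counts below are fabricated for illustration, and the paper's exact statistical tests are not specified in the abstract.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-class data: number of aggregation relationships
# vs. number of methods, as might be collected from UML documents.
aggregations = [0, 1, 1, 2, 3, 4, 5]
methods      = [3, 5, 4, 8, 9, 12, 15]
r = pearson(aggregations, methods)   # strongly positive
```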

  • Article (No Access)

    A FRAMEWORK FOR COMPARING REQUIREMENTS TRACING EXPERIMENTS

    The building of traceability matrices by those other than the original developers is an arduous, error prone, prolonged, and labor intensive task. Thus, after-the-fact requirements tracing is a process where the right kind of automation can definitely assist an analyst. Recently, a number of researchers have studied the application of various methods, often based on information retrieval, to after-the-fact tracing. The studies are diverse enough to warrant a means for comparing them easily as well as for determining areas that require further investigation. To that end, we present here an experimental framework for evaluating requirements tracing and traceability studies. Common methods, metrics and measures are described. Recent experimental requirements tracing journal and conference papers are catalogued using the framework. We compare these studies and identify areas for future research. Finally, we provide suggestions on how the field of tracing and traceability research may move to a more mature level.
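The information-retrieval style of after-the-fact tracing mentioned above typically scores candidate links by textual similarity between a requirement and each artifact. A minimal sketch, assuming TF-IDF weighting with cosine similarity; the requirement and artifact texts are hypothetical, and real tools add stemming, stop-word removal and thresholding:

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF vectors (with smoothed IDF) for tokenized documents."""
    df = Counter(t for d in docs for t in set(d))
    n = len(docs)
    return [
        {t: tf * math.log(1 + n / df[t]) for t, tf in Counter(d).items()}
        for d in docs
    ]

def cosine(u, v):
    """Cosine similarity between two sparse weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# One requirement traced against two hypothetical code artifacts.
corpus = [
    "user shall login with password".split(),
    "login password check for user".split(),
    "report generation module".split(),
]
req, code_a, code_b = tfidf(corpus)
sim_a = cosine(req, code_a)   # shares 'user', 'login', 'password'
sim_b = cosine(req, code_b)   # shares no terms
```

Candidate links whose similarity exceeds a chosen threshold would then populate one row of the traceability matrix for the analyst to vet.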

  • Article (No Access)

    A COMPARISON OF EFFORT ESTIMATION METHODS FOR 4GL PROGRAMS: EXPERIENCES WITH STATISTICS AND DATA MINING

    This paper presents an empirical study analysing the relationship between a set of metrics for Fourth-Generation Language (4GL) programs and their maintainability. An analysis has been made using historical data of several industrial projects and three different approaches: the first relates metrics and maintainability using techniques of descriptive statistics, and the other two are based on Data Mining techniques. A discussion of the results obtained with the three techniques is also presented, as well as a set of equations and rules for predicting the maintenance effort in this kind of program. Finally, we performed experiments on the prediction accuracy of these methods using new, unseen data that were not used to build the knowledge model. The results were satisfactory, as each technique applied separately provides a useful perspective that gives the manager complementary insight into the data.

  • Article (No Access)

    A THEORETICAL AND EMPIRICAL ANALYSIS OF THREE SLICE-BASED METRICS FOR COHESION

    Sound empirical research suggests that we should analyze software metrics from both a theoretical and a practical perspective. This paper describes the results of an investigation into the respective merits of two cohesion metrics based on program slicing, Tightness and Overlap, originally proposed by Weiser for the procedural paradigm. We compare and contrast these two metrics with a third metric for the OO paradigm, first proposed by Counsell et al., based on Hamming Distance and expressed in a matrix-based notation. We theoretically validated the three metrics using the properties of Kitchenham and then empirically validated the same three metrics; some revealing properties of the metrics were found as a result. In particular, the OO-based metric was the most stable of the three, and module length was not a confounding factor for the Hamming Distance-based metric; it was, however, for the two slice-based metrics, supporting previous work by Meyers and Binkley. The number of module slices, however, was found to be an even stronger influence on the values of the two slice-based metrics, whose near-perfect correlation with each other suggests that they may be measuring the same software attribute. We calculated and then compared the three metrics using, first, a set of manufactured, pre-determined modules as a preliminary analysis and, second, approximately nine thousand functions from the modules of multiple versions of the Barcode system, used previously by Meyers and Binkley in their empirical study. The over-arching message of the research is that a combination of theoretical and empirical analysis can help significantly in comparing the viability, and indeed the choice, of a metric or set of metrics. More specifically, although cohesion is a subjective measure, there are certain properties of a metric that are less desirable than others, and it is these 'relative' features that distinguish metrics, make their comparison possible and their value more evident.
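Weiser's slice-based Tightness and Overlap metrics discussed above have compact standard definitions: with a module of n statements and one slice per output variable, Tightness is the fraction of statements common to every slice, and Overlap averages that common core's share of each slice. A minimal sketch, modeling slices simply as sets of statement indices (the slice sets below are invented):

```python
def tightness(slices, module_length):
    """|intersection of all slices| / module length."""
    inter = set.intersection(*slices)
    return len(inter) / module_length

def overlap(slices):
    """Mean, over slices, of |intersection| / |slice|."""
    inter = set.intersection(*slices)
    return sum(len(inter) / len(s) for s in slices) / len(slices)

# Hypothetical 10-statement module with two output-variable slices.
s1 = {1, 2, 3, 4, 5, 6}
s2 = {1, 2, 3, 7, 8}
t = tightness([s1, s2], 10)   # |{1,2,3}| / 10 = 0.3
o = overlap([s1, s2])         # (3/6 + 3/5) / 2 = 0.55
```

The near-perfect correlation the paper reports between the two metrics is visible in the formulas: both are driven by the size of the same slice intersection.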

  • Article (No Access)

    A MODEL FOR PREDICTING CLASS MOVEMENT IN AN INHERITANCE HIERARCHY

    In this paper, we present an empirical study investigating whether class movement and relocation within an inheritance hierarchy can be predicted based on size, coupling and cohesion, for four Java open-source systems. Our results showed that class movement may not be predicted based on coupling and cohesion; while class size was found to be a factor that may help predict class movement, it does not per se predict class movement within an inheritance hierarchy. We found a significantly higher odds ratio for larger classes to be moved within an inheritance hierarchy than for smaller classes, suggesting that, counter-intuitively, larger classes tend to be more susceptible to movement than smaller classes. We also found that in the four systems, while classes with high coupling, low cohesion and larger size tended to be moved within their respective inheritance hierarchies, classes with high coupling, low cohesion and relatively smaller size tended to be candidates for deletion. Finally, while we found that class coupling and size tended to rise as the systems evolved, we found no statistical support for a decline in class cohesion. Directed towards developers and project managers, the message that the research conveys is that excessive growth in class size is at the root of a class's deterioration in terms of movement; development controls should be exercised to avoid such growth.
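The odds-ratio comparison reported above is a standard 2×2 contingency-table calculation, sketched here with hypothetical class counts (the study's actual counts are not given in the abstract):

```python
def odds_ratio(moved_a, not_moved_a, moved_b, not_moved_b):
    """Odds ratio of movement for group A (e.g. large classes)
    versus group B (e.g. small classes)."""
    return (moved_a / not_moved_a) / (moved_b / not_moved_b)

# Hypothetical release pair: 20 of 60 large classes moved,
# 10 of 140 small classes moved.
or_large_vs_small = odds_ratio(20, 40, 10, 130)   # (20/40) / (10/130)
```

An odds ratio well above 1, as in this invented example, is the shape of evidence behind the paper's claim that larger classes are more susceptible to movement.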

  • Article (No Access)

    A SYSTEMATIC REVIEW OF THE EMPIRICAL VALIDATION OF OBJECT-ORIENTED METRICS TOWARDS FAULT-PRONENESS PREDICTION

    Object-oriented (OO) approaches to software development promised better maintainable and reusable systems, but the complexity resulting from OO features often introduces faults that are difficult to detect or anticipate during the software change process. Thus, the earlier such faults are detected and fixed, the lower the maintenance costs. Several OO metrics have been proposed for assessing the quality of OO design and code, and several empirical studies have been undertaken to validate the impact of OO metrics on fault-proneness (FP). The question now is: which metrics are useful in measuring the FP of OO classes? Consequently, we investigate the existing empirical validation of CK + SLOC metrics with respect to their significance, validation and usefulness. We used the systematic literature review (SLR) methodology over a number of relevant article sources, and our results identify 29 relevant empirical studies. Further analysis indicates that coupling, complexity and size measures have a strong impact on the FP of OO classes. Based on the results, we conclude that, when only CK + SLOC metrics are used, these metrics can serve as good predictors for building quality fault-prediction models that could assist in focusing resources on high-risk components liable to cause system failures.
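To make the review's conclusion concrete, a fault-proneness screen driven by coupling, complexity and size might look like the sketch below. The metric names follow the CK suite (CBO for coupling, WMC for complexity) plus SLOC, but the thresholds are purely illustrative, not values from the reviewed studies:

```python
def fault_prone(cbo, wmc, sloc, thresholds=(14, 20, 500)):
    """Flag a class as potentially fault-prone when any of coupling
    (CBO), complexity (WMC) or size (SLOC) exceeds its threshold.
    The thresholds here are illustrative assumptions, not results
    from the systematic review."""
    t_cbo, t_wmc, t_sloc = thresholds
    return cbo > t_cbo or wmc > t_wmc or sloc > t_sloc

# A small, simple class vs. a large, highly coupled one.
flags = [fault_prone(3, 8, 120), fault_prone(22, 35, 900)]
```

Real fault-prediction models in this literature are typically statistical (e.g. logistic regression over the metrics) rather than fixed-threshold rules; the rule form is used here only for brevity.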

  • Article (No Access)

    Enhancing Software Maintenance via Early Prediction of Fault-Prone Object-Oriented Classes

    Object-oriented software (OOS) dominates the software development world today and thus has to be of high quality and maintainable. However, its growing size and complexity affect both the delivery of high-quality software products and their maintenance. From the perspective of software maintenance, software change impact analysis (SCIA) is used to avoid performing changes in the "dark". Unfortunately, OOS classes are not without faults, and existing SCIA techniques only predict the impact set. The intuition is that if a class is faulty and a change is implemented on it, the risk of software failure increases. To balance these concerns, maintenance should incorporate both impact and fault-proneness (FP) predictions. Therefore, this paper proposes an extended SCIA approach that incorporates both activities. The goal is to provide important information that can be used to focus verification and validation efforts on the high-risk classes that would probably cause severe failures when changes are made. This will in turn improve maintenance and testing efficiency and preserve software quality. This study constructed a prediction model using software metrics and fault data from a NASA data set in the public domain. The results obtained were analyzed and presented. Additionally, a tool called Class Change Recommender (CCRecommender) was developed to assist software engineers in computing the risks associated with making a change to any OOS class in the impact set.

  • Article (No Access)

    Investigating the Effect of Aspect-Oriented Refactoring on the Unit Testing Effort of Classes: An Empirical Evaluation

    This paper aims at investigating empirically the effect of aspect-oriented (AO) refactoring on the unit testability of classes in object-oriented software. The unit testability of classes is addressed from the perspective of the unit testing effort, and particularly from the perspective of the construction of unit test cases (TCs). We investigated three research questions: (1) the impact of AO refactoring on source code attributes (size, complexity, coupling, cohesion and inheritance), attributes that are mostly related to the unit testability of classes; (2) the impact of AO refactoring on unit test code attributes (size, assertions, invocations and data creation), attributes that are indicators of the effort involved in writing the code of unit TCs; and (3) the relationships between the variations observed after AO refactoring in both source code and unit test code attributes. We used different techniques in the study: correlation analysis, statistical tests and linear regression. We performed an empirical evaluation using data collected from three well-known open-source (Java) software systems (JHOTDRAW, HSQLDB and PETSTORE) that have been refactored using AO programming (AspectJ). Results suggest that: (1) overall, the effort involved in the construction of unit TCs of refactored classes was reduced; (2) the variations of source code attributes have the most impact on method invocations between unit TCs; and (3) the variations of unit test code attributes are influenced more by the variation of the complexity of refactored classes than by the other class attributes.

  • Article (Open Access)

    Metrics Visualization Techniques Based on Historical Origins and Functional Layers for Developments by Multiple Organizations

    Software developments involving multiple organizations, such as Open Source Software (OSS)-based projects, tend to have numerous defects when one organization develops and another organization edits the program source code files. Developments with complex file creation and modification histories (origins) and software architectures (functional layers) are increasing in OSS-based development. As examples, we focus on an Android smartphone project and a VirtualBox development project, and propose new visualization techniques for product metrics based on file origins and functional layers. One is the Metrics Area Figure, which can intuitively express duplication of edits by multiple organizations using overlapping figures. The other is Origin City, which was inspired by Code City; it can represent scale and other measurements while simultaneously stacking functional layers as 3D buildings. The contributions of our paper are to propose these new techniques, implement them as web applications, and share the results of our questionnaire. Our proposed techniques are useful not only for visualizing the measured metrics, but also for improving product quality.

  • Article (No Access)

    EMSA: Extensibility Metric for Software Architecture

    Software extensibility, the capability of adding new functions to a software system, is established by its software architecture. Therefore, developers need to evaluate this capability when designing the architecture. To support the evaluation, researchers have proposed metrics based on quality models or scenarios. However, those metrics are vague or subjective, depending on specific systems and evaluators. We propose the extensibility metric for software architecture (EMSA), which represents the degree of extensibility of a software system based on its architecture. To reduce the subjectivity of the metric, we first identify a typical task of adding new functions to a software system. Second, we define metrics based on the characteristics of software architecture and its changes, and finally combine them into a single metric. The originality of EMSA comes from defining metrics based on software architecture and extensibility tasks and integrating them into one. Furthermore, we translate the resulting degree into an effort estimate expressed in person-hours. To evaluate EMSA, we conducted two types of user studies, obtaining measurements in both a laboratory setting and a real-world project. The results show that the EMSA estimation is reasonably accurate [6.6% MMRE and 100% PRED(25%)], even in a real-world project (93.2% accuracy and 8.5% standard deviation).
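The accuracy figures quoted above use two standard effort-estimation measures, MMRE (mean magnitude of relative error) and PRED(25%) (the fraction of estimates within 25% of the actual value). Both are easy to state in code; the effort values below are invented:

```python
def mmre(actual, predicted):
    """Mean magnitude of relative error."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, level=0.25):
    """Fraction of predictions whose relative error is within `level`."""
    hits = sum(abs(a - p) / a <= level for a, p in zip(actual, predicted))
    return hits / len(actual)

# Hypothetical person-hour efforts: actual vs. EMSA-style estimates.
actual    = [10.0, 20.0, 40.0]
predicted = [11.0, 19.0, 42.0]
m = mmre(actual, predicted)     # (0.1 + 0.05 + 0.05) / 3
p25 = pred(actual, predicted)   # all three within 25%
```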

  • Article (Open Access)

    Quantitative Measurement of Scientific Software Quality: Definition of a Novel Quality Model

    This paper presents a novel quality model, which provides a quantitative assessment of the attributes evaluated at each stage of development of scientific applications. The model is defined by selecting a set of attributes and metrics that affect the quality of applications, based on established quality standards. Its practical application and verification are confirmed by two case studies. The first is an application for solving one-dimensional and two-dimensional Schrödinger equations using the discrete variable representation method. The second is an application for calculating an ECG-derived heart rate and respiratory rate. The first application follows a development model for scientific applications that includes some software engineering practices; the second does not use a specific development model and is instead developed ad hoc. The quality of the applications is evaluated through comparative analyses using the proposed model. Based on software quality metrics, the results of this study indicate that the application for solving one-dimensional and two-dimensional Schrödinger equations produces more desirable results.

  • Article (No Access)

    An Empirical Study on the Architecture Instability of Software Projects

    Software architecture is an artifact that expresses how the initial concept of a software system has actually been implemented. However, changes to the requirements imply continuous modification of the software system and may affect its architecture. It is expected that when a software system reaches a mature state, the requirements for evolution decrease and its architecture becomes more stable. The paper analyzes how the architecture of a software system evolves during its life cycle, with the aim of obtaining quantitative information on its possible instability after it has been declared mature. The goal is to verify whether architectural instability decreases as the software system matures and to identify the software components that are most unstable across multiple releases. The paper proposes metrics that measure the instability of the architecture of a software system and its components through different releases. Open-source software projects classified as mature and active, together with their historical data, are analyzed. The results of the empirical study point out that the instability of software projects continues to evolve even after they are declared mature. The proposed metrics give useful support for investigating the instability of a software project, even if further factors can be analyzed. Furthermore, the study can be replicated on other software systems belonging to different domains and developed using different programming languages.
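The paper defines its own release-to-release instability metrics, which the abstract does not detail. To make the notion concrete, the classic component-level measure is Martin's instability, I = Ce / (Ca + Ce), shown here only as an illustration of how instability is quantified from dependencies:

```python
def instability(afferent, efferent):
    """Martin's instability I = Ce / (Ca + Ce): 0 means maximally
    stable (many dependents, few dependencies), 1 means maximally
    unstable. Shown for illustration; the paper's release-based
    metrics differ."""
    total = afferent + efferent
    return efferent / total if total else 0.0

# Hypothetical components: a core library vs. a UI layer.
i_core = instability(afferent=9, efferent=1)   # many dependents
i_ui   = instability(afferent=0, efferent=6)   # only outgoing deps
```

Tracking such a value per component across releases is one simple way to observe the kind of lingering architectural churn the study reports in mature projects.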

  • Article (No Access)

    AN ALGEBRAIC APPROACH TO INDUCTIVE LEARNING

    The paper presents a framework for the induction of concept hierarchies based on a consistent integration of metric and similarity-based approaches. The hierarchies used are subsumption lattices induced by the least general generalization (lgg) operator commonly used in inductive learning. Using some basic results from lattice theory, the paper introduces a semantic distance measure between objects in concept hierarchies and discusses its applications for solving concept learning and conceptual clustering tasks. Experiments with well-known ML datasets represented in three types of languages, propositional (attribute-value), atomic formulae and Horn clauses, are also presented.
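For the propositional (attribute-value) case, the lgg operator and a lattice-flavored distance can be sketched very simply: the lgg keeps attributes on which two examples agree and generalizes the rest to a variable, and one naive distance counts how much had to be generalized away. The paper's lattice-theoretic measure is more refined; this sketch (with an invented `"_"` variable marker) only illustrates the idea:

```python
def lgg(x, y):
    """Least general generalization of two attribute-value tuples:
    positions that disagree are generalized to a variable ('_')."""
    return tuple(a if a == b else "_" for a, b in zip(x, y))

def distance(x, y):
    """A naive semantic distance: the fraction of attributes the
    lgg had to generalize away (illustrative, not the paper's
    exact lattice-based measure)."""
    g = lgg(x, y)
    return sum(v == "_" for v in g) / len(g)

a = ("red", "round", "small")
b = ("red", "square", "small")
g = lgg(a, b)          # ('red', '_', 'small')
d = distance(a, b)     # one of three attributes generalized
```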

  • Article (No Access)

    The subset of R3 not realizing metrics on the curve complex

    In [F. Zhang, R. Qiu and Y. Zou, The subset of R3 realizing metrics on the curve complex, Topology Appl. 193 (2015) 259–269], the authors defined a subset 𝒱 of R3 and a metric dN through each point of 𝒱 for the curve complex 𝒞(S). To further understand the curve complex, we consider the whole set R3. Adapting the flow theory on the torus, we prove that for any point of R3∖𝒱, dN is not a metric on 𝒞(S) or 𝒞0(S). This means that 𝒱 is the maximal subset of R3 realizing metrics on the curve complex.

  • Article (No Access)

    Enhancing Recommender Diversity Using Gaussian Cloud Transformation

    The recommender systems community is paying great attention to diversity as a key quality beyond accuracy in real recommendation scenarios. Multifarious diversity-increasing approaches have been developed in the literature to enhance recommendation diversity while making personalized recommendations to users. In this work, we present the Gaussian Cloud Recommendation Algorithm (GCRA), a novel method designed to balance accuracy and diversity in personalized top-N recommendation lists in order to capture the user's complete spectrum of tastes. Our proposed algorithm does not require semantic information. We also propose a unified framework that extends traditional CF algorithms by utilizing GCRA to improve recommendation performance. Our work builds upon prior research on recommender systems. Though slightly detrimental to average accuracy, our method can capture the user's complete spectrum of interests. Systematic experiments on three real-world data sets demonstrate the effectiveness of our proposed approach in balancing accuracy and diversity.
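One common way to quantify the diversity of a top-N list (not necessarily the measure used in this paper) is intra-list diversity: the average pairwise dissimilarity of the recommended items. A minimal sketch, with an invented genre-overlap similarity:

```python
from itertools import combinations

def intra_list_diversity(items, sim):
    """Average pairwise dissimilarity (1 - similarity) of a
    recommendation list; higher means more diverse."""
    pairs = list(combinations(items, 2))
    return sum(1.0 - sim(a, b) for a, b in pairs) / len(pairs)

# Hypothetical item genres and a Jaccard similarity over them.
genres = {"m1": {"action"}, "m2": {"action", "comedy"}, "m3": {"drama"}}

def jaccard(a, b):
    ga, gb = genres[a], genres[b]
    return len(ga & gb) / len(ga | gb)

ild = intra_list_diversity(["m1", "m2", "m3"], jaccard)
```

A diversity-aware recommender like GCRA effectively trades a little predicted accuracy for a higher value of a measure of this kind.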

  • Article (No Access)

    Fuzzy Network Based Framework for Software Maintainability Prediction

    Software-metrics-based maintainability prediction is leading to the development of new, sophisticated techniques for constructing prediction models. This paper proposes a new software maintainability prediction (SMP) framework based on Fuzzy Networks, a novel exploratory modeling technique. The proposed framework utilizes both the metric data collected from the software system and subjective appraisals from experts. An application example of the framework is shown. In comparison to Standard Fuzzy System based models, Fuzzy Network based models improve transparency by more than 71.3% and accuracy by more than 11.0%. This confirms that the Fuzzy Network based framework is more appropriate for constructing SMP models.
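Fuzzy approaches like the one above map crisp metric values onto linguistic terms via membership functions before applying rules. A minimal sketch of the common triangular membership function; the linguistic terms and breakpoints below are invented for illustration, as the abstract does not give the paper's actual membership functions or rule base:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic terms for a coupling metric normalized to [0, 1].
coupling = 0.2
mu_low    = triangular(coupling, 0.0, 0.25, 0.5)
mu_medium = triangular(coupling, 0.25, 0.5, 0.75)
```

A fuzzy rule such as "IF coupling is low AND size is small THEN maintainability is high" would then combine such membership degrees; a fuzzy *network* chains several small rule bases instead of one large one, which is the transparency gain the paper reports.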

  • Article (No Access)

    A Metrics Framework for Product Development in Software Startups

    Business cases and customer problem spaces are evolving quicker than ever before, and more startups are adopting the lean startup methodology to match this speed of changing customer needs. This phenomenon, however, comes with its own set of opportunities and challenges for startups to build great products while catering to customer pain points. To this end, there is a need for a metrics framework which can help startups succeed in creating good software solutions and building successful business models around those solutions. Metrics can help measure the effectiveness of the product in relation to the customer problem and help drive key decisions in both the product and business aspects of the startup. This paper reviews current frameworks on metrics for software products, studies their appropriateness in the context of software startups, and proposes a metrics framework to help provide good software experiences while subsequently building good business models around those experiences. The framework is designed to cover aspects of both the product and business space, ranging from identification of the problem space to the evolution of the solution. The proposed framework is validated using a case study of a successful startup. The framework aims to help startups in their journey to success by providing an end-to-end, structured approach to metric identification.

  • Article (No Access)

    QUALITY OF SERVICE PREDICTION USING FUZZY LOGIC AND RUP IMPLEMENTATION FOR PROCESS ORIENTED DEVELOPMENT

    In a competitive business landscape, large organizations such as insurance companies and banks are under high pressure to innovate, improvise and differentiate their products and services while continuing to reduce the time-to-market for new product introductions. Generating a single view of the customer is critical from the different perspectives of systems developers over a period of time, because of the existence of disconnected systems within an enterprise. Therefore, to increase revenues and optimize costs, it is important to build enterprise systems that align closely with the business requirements by reusing existing systems. While building distributed applications, it is important to take into account proven processes like the Rational Unified Process (RUP) to mitigate risks and increase the reliability of systems. Experiences in developing applications in Java Enterprise Edition (JEE) with a customized RUP are presented in this paper. RUP is adopted into an onsite-offshore development model along with ISO 9001 and SEI CMM Level 5 standards. This paper provides an RUP-based approach to achieve increased reliability with higher productivity and lower defect density, along with competitiveness through cost-effective custom software solutions. Early qualitative software reliability prediction is done using fuzzy expert systems, from which the expected number of defects in the software prior to experimental testing is obtained. The predicted results are then compared with the actual values obtained during testing.