High performance of mathematical functions is essential for fast scientific calculations because such functions are used very frequently in scientific computing. This paper presents the performance of important Fortran intrinsic functions on the fastest vector supercomputers.
The relationship between CPU time and the number of arguments for which function values are computed is assumed to be linear, and the speed of each function was measured in terms of the two parameters of this linear model (a fixed startup cost and an incremental cost per argument). The author also examines how the speed of a function varies with the selection of arguments. The computers tested in the present paper are the Cray C9016E/16256-4, Fujitsu VP2600/10, Hitachi S-3800/480 and NEC SX-3/14R.
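As an illustration of this kind of measurement, the sketch below times a vectorized elementary function for several argument counts and fits the assumed linear model T(n) ≈ a + b·n by least squares. The function (NumPy's sqrt as a stand-in for a Fortran intrinsic such as SQRT), the vector lengths, and the repetition count are assumptions chosen for illustration, not details taken from the paper.

```python
import time
import numpy as np

# Hypothetical illustration: time a vectorized elementary function for
# several argument counts n and fit the assumed linear model T(n) = a + b*n.
# np.sqrt stands in here for a Fortran intrinsic such as SQRT.
sizes = np.array([10_000, 50_000, 100_000, 500_000, 1_000_000])
times = []
for n in sizes:
    x = np.random.rand(n) + 1.0           # arguments in a safe domain
    t0 = time.perf_counter()
    for _ in range(20):                   # repeat to reduce timer noise
        np.sqrt(x)
    times.append((time.perf_counter() - t0) / 20)

# Least-squares fit: a = startup overhead, b = cost per argument.
b, a = np.polyfit(sizes, np.array(times), 1)
print(f"startup a = {a:.3e} s, per-argument b = {b:.3e} s")
print(f"asymptotic rate ~ {1.0 / b:.3e} arguments/s")
```

The per-argument slope b determines the asymptotic speed, while the intercept a captures call and vector-startup overheads that dominate for short argument vectors.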
The increasing complexity of available infrastructures, with specific features (caches, hyperthreading, dual cores, etc.) or with complex architectures (hierarchical, parallel, distributed, etc.), makes it extremely difficult to build analytical models that allow for satisfactory predictions. Hence, the question arises of how to validate algorithms if a realistic analytic analysis is no longer possible. As in many other sciences, one answer is experimental validation. Nevertheless, experimentation in computer science is a difficult subject that today still opens more questions than it solves: What can an experiment validate? What is a "good experiment"? How does one build an experimental environment that allows for "good experiments"? And so on. In this paper we provide some hints on this subject and show how some tools can help in performing "good experiments", mainly in the context of parallel and distributed computing. More precisely, we focus on four main experimental methodologies, namely in situ (real-scale) experiments (with an emphasis on PlanetLab and Grid'5000), emulation (with an emphasis on Wrekavoc), benchmarking, and simulation (with an emphasis on SimGrid and GridSim). We provide a comparison of these tools and methodologies from both a quantitative and a qualitative point of view.
This paper proposes data envelopment analysis (DEA) as a suitable data analysis tool to overcome facility management (FM) benchmarking difficulties: FM performance benchmarking analysis is often unsophisticated, relying heavily on simple statistical representation, and linking hard cost data with soft customer satisfaction data is often problematic. A case study is presented to show that DEA can provide FM personnel with an objective view of performance improvements. An objective of the case study is to investigate the relative efficiency of nine facilities with the same goals and to determine the most efficient facility. The case is limited to nine buildings, evaluated on four input and nine output criteria. The paper concludes by demonstrating that DEA-generated improvement targets can be applied when formulating FM outsourcing policies, strategies and improvements. Facility managers can apply DEA-generated improvement targets in formulating FM outsourcing policies, specifications development, FM strategy and planning. FM benchmarking with DEA can enhance continuous improvement in service efficiency and cost saving, which will help reduce utility costs as well as pollution. This paper fills a gap in FM benchmarking research by applying DEA, which handles both soft and hard data simultaneously. It also contributes to future research on a trade-off sensitivity test between FM cost, service performance and reliability.
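For readers unfamiliar with DEA, the sketch below solves a standard input-oriented CCR efficiency model with a generic linear-programming solver. The data values and the helper name ccr_efficiency are invented for illustration, and the formulation is the textbook envelopment model rather than the exact specification used in the case study.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data only: 4 facilities (DMUs), 2 inputs, 1 output.
X = np.array([[3.0, 5.0],   # inputs per DMU (rows = DMUs)
              [4.0, 3.0],
              [6.0, 7.0],
              [5.0, 4.0]])
Y = np.array([[10.0],       # outputs per DMU
              [12.0],
              [11.0],
              [14.0]])
n_dmu, n_in = X.shape
n_out = Y.shape[1]

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o (envelopment form)."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n_dmu)
    c[0] = 1.0                                   # minimize theta
    A_ub, b_ub = [], []
    for i in range(n_in):                        # sum_j lambda_j x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(n_out):                       # sum_j lambda_j y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    bounds = [(0, None)] * (1 + n_dmu)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(n_dmu):
    print(f"DMU {o + 1}: efficiency = {ccr_efficiency(o):.3f}")
```

A DMU with a score of 1 lies on the efficient frontier; for an inefficient DMU, the optimal lambda weights identify the peer facilities whose mix of inputs and outputs serves as its improvement target.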
Previous research based on data envelopment analysis (DEA) has ranked and benchmarked the achievements of participating nations/regions in the Olympics. This paper contributes an analysis of the Asian Games, with particular attention to two main issues, namely a common comparison basis for ranking decision-making units (DMUs) and the reference feasibility between inefficient DMUs and their benchmark targets. The paper extends previous DEA research by introducing an improved context-dependent DEA model whose empirical results establish a unique and fair ranking system for all participating nations/regions, implying two corresponding suggestions for rank improvement. A series of stepwise learning targets is further identified, providing a gradual performance-improvement path for the inefficient participants. These results will be helpful for strategic decision making in the sport management of the Asian nations/regions.
This study aims to formulate the least-distance range adjusted measure (LRAM) in data envelopment analysis (DEA) and apply it to evaluate the relative efficiency of, and provide benchmarking information for, Japanese banks. In DEA, the conventional range adjusted measure (RAM) is a well-defined model that satisfies a set of desirable properties. However, because of the practicality of the least-distance measure, we formulate the LRAM and propose the use of an effective mixed integer programming (MIP) approach to compute it. The formulated LRAM (1) satisfies the same desirable properties as the conventional RAM, (2) provides least-distance benchmarking information for inefficient decision-making units (DMUs), and (3) can be computed easily with the proposed MIP approach. We apply the LRAM to a Japanese banking data set covering the period 2017–2019. Based on the results, the LRAM generates higher efficiency scores and allows inefficient banks to improve their efficiency with smaller input–output modifications than those required by the RAM, indicating that the LRAM can provide easier-to-achieve benchmarking information for inefficient banks. Therefore, from the perspective of the managers of DMUs, this study provides a valuable LRAM for efficiency evaluation and benchmarking analysis.
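For orientation, the conventional RAM efficiency score that the LRAM builds on is commonly written as below. This is the standard textbook form with m inputs and s outputs, stated here as background rather than as the paper's exact notation; s_i^- and s_r^+ are the input and output slacks of the evaluated DMU, and R_i^-, R_r^+ are the observed ranges of each input and output across all DMUs.

```latex
\mathrm{RAM} \;=\; 1 \;-\; \frac{1}{m+s}
\left( \sum_{i=1}^{m} \frac{s_i^{-}}{R_i^{-}}
     + \sum_{r=1}^{s} \frac{s_r^{+}}{R_r^{+}} \right),
\qquad
R_i^{-} = \max_j x_{ij} - \min_j x_{ij}, \quad
R_r^{+} = \max_j y_{rj} - \min_j y_{rj}.
```

The conventional RAM maximizes the (range-normalized) slacks, which pushes benchmarks far from the evaluated unit; the LRAM instead seeks the nearest efficient target, which is why its suggested input–output adjustments are smaller.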
In some computer vision applications there is a need to group, into one or more clusters, only a part of the whole dataset. This happens, for example, when samples of interest for the application at hand are present together with several noisy samples.
In this paper we present a graph-based algorithm for cluster detection that is particularly suited to detecting clusters of any size and shape, without the need to specify either the number of clusters or other parameters.
The algorithm has been tested on data coming from two different computer vision applications. A comparison with four other state-of-the-art graph-based algorithms is also provided, demonstrating the effectiveness of the proposed approach.
This paper presents a methodology for generating pairs of attributed graphs with a lower- and upper-bounded graph edit distance (GED). It is independent of the type of attributes on nodes and edges. The algorithm is composed of three steps: randomly generating a graph, generating another graph as a sub-graph of the first, and adding structural and semantic noise to both. These graphs, together with their bounded distances, can be used to manufacture synthetic databases of large graphs. The exact GED between large graphs cannot be obtained for runtime reasons, since it has to be computed through an optimal algorithm with an exponential computational cost. Through such a database, we can test the behavior of known or new sub-optimal error-tolerant graph-matching algorithms against lower and upper GED bounds on large graphs, even though we do not have the true distance. It is not clear how the error induced by the use of sub-optimal algorithms grows with problem size. Thus, with this methodology, we can generate graph databases and analyze whether the current assumption that algorithms' behavior extrapolates from matching small graphs to large graphs is correct. We also show that, with some restrictions, the methodology returns the optimal GED in quadratic time and that it can also be used to generate graph databases for testing exact sub-graph isomorphism algorithms.
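A minimal sketch of the three-step generation idea (random graph, sub-graph, noise on both) is shown below using networkx. The function name generate_pair, the graph sizes, the noise levels, and the way attributes are assigned are illustrative assumptions, and the bookkeeping needed to track the exact lower and upper edit-distance bounds described in the paper is omitted.

```python
import random
import networkx as nx

def generate_pair(n_nodes=100, edge_p=0.05, keep_ratio=0.8, noise_edges=5, seed=0):
    """Generate a pair of attributed graphs following the three-step recipe:
    (1) random graph, (2) sub-graph of it, (3) structural/semantic noise on both."""
    rng = random.Random(seed)

    # Step 1: random base graph with a simple node attribute ("semantic" label).
    g1 = nx.gnp_random_graph(n_nodes, edge_p, seed=seed)
    for v in g1.nodes:
        g1.nodes[v]["label"] = rng.randint(0, 9)

    # Step 2: second graph as an induced sub-graph of the first.
    kept = rng.sample(list(g1.nodes), int(keep_ratio * n_nodes))
    g2 = g1.subgraph(kept).copy()

    # Step 3: structural noise (edge insertions) and semantic noise (label changes).
    for g in (g1, g2):
        nodes = list(g.nodes)
        for _ in range(noise_edges):
            u, v = rng.sample(nodes, 2)
            if not g.has_edge(u, v):
                g.add_edge(u, v)                          # structural noise
        for v in rng.sample(nodes, noise_edges):
            g.nodes[v]["label"] = rng.randint(0, 9)       # semantic noise

    return g1, g2

g1, g2 = generate_pair()
print(g1.number_of_nodes(), g2.number_of_nodes())
```

In the full methodology, the number of removed nodes and the amount of injected noise determine the lower and upper GED bounds that accompany each generated pair.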
Interoperability among different development tools is not a straightforward task since ontology editors rely on specific internal knowledge models which are translated into common formats such as RDF(S). This paper addresses the urgent need for interoperability by providing an exhaustive set of benchmark suites for evaluating RDF(S) import, export and interoperability. It also demonstrates, in an extensive field study, the state-of-the-art of interoperability among six Semantic Web tools. From this field study we have compiled a comprehensive set of practices that may serve as recommendations for Semantic Web tool developers and ontology engineers.
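As a concrete illustration of what an import/export interoperability check involves, the snippet below round-trips a tiny RDF(S) fragment between Turtle and RDF/XML with rdflib and compares the resulting graphs. This is a generic example with made-up data, not one of the benchmark suites described in the paper.

```python
from rdflib import Graph
from rdflib.compare import isomorphic

# A tiny RDF(S) fragment (Turtle syntax) used as the test ontology.
ttl = """
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .
ex:Dog rdfs:subClassOf ex:Animal .
ex:Dog rdfs:label "Dog" .
"""

# Export to RDF/XML (as one tool might do) and re-import it (as another would).
g_in = Graph().parse(data=ttl, format="turtle")
xml_data = g_in.serialize(format="xml")
g_out = Graph().parse(data=xml_data, format="xml")

# The interoperability check: the round-tripped graph should be isomorphic
# to the original, i.e. no triples were lost or altered in translation.
print("round-trip preserved the graph:", isomorphic(g_in, g_out))
```

The benchmark suites in the paper essentially systematize checks of this kind across many RDF(S) constructs and tool pairs, recording where knowledge is lost or distorted.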
The increasing demand for image dehazing-based applications has raised the value of efficient evaluation and benchmarking of image dehazing algorithms. Several perspectives, such as inhomogeneous foggy, homogeneous foggy, and dark foggy scenes, have been considered in multi-criteria evaluation. Benchmarking for the selection of the best image dehazing intelligent algorithm based on multi-criteria perspectives is a challenging task owing to (a) multiple evaluation criteria, (b) criteria importance, (c) data variation, (d) criteria conflict, and (e) criteria trade-off. A generally accepted framework for benchmarking image dehazing performance is unavailable in the existing literature. This study proposes a novel multi-perspective (i.e., an inhomogeneous foggy scene, a homogeneous foggy scene, and a dark foggy scene) benchmarking framework for the selection of the best image dehazing intelligent algorithm based on multi-criteria analysis. Experiments were conducted in three stages. First, an evaluation experiment was performed with five algorithms to form part of the matrix data. Second, the image dehazing intelligent algorithms were crossed with a set of target evaluation criteria to obtain the matrix data. Third, the image dehazing intelligent algorithms were ranked through the integrated best-worst and VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) methods. Individual and group decision-making contexts were applied to demonstrate the efficiency of the proposed framework. The mean was used to objectively validate the ranks given by the group decision-making contexts. A checklist and benchmarking scenarios were provided to compare the proposed framework with an existing benchmark study. The proposed framework achieved a significant result in terms of selecting the best image dehazing algorithm.
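The ranking step can be illustrated with a compact VIKOR computation. The decision matrix, criterion weights, and benefit/cost designations below are invented for illustration; in the study the weights would come from the best-worst method rather than being set by hand.

```python
import numpy as np

# Illustrative decision matrix: rows = dehazing algorithms, columns = criteria.
F = np.array([[0.92, 0.85, 3.1],
              [0.88, 0.90, 2.4],
              [0.95, 0.80, 4.0],
              [0.90, 0.88, 2.9]])
weights = np.array([0.5, 0.3, 0.2])       # assumed; BWM would supply these
benefit = np.array([True, True, False])   # third criterion (e.g., runtime) is a cost
v = 0.5                                   # weight of the "group utility" strategy

# Best (f*) and worst (f-) value per criterion, respecting benefit/cost direction.
f_star = np.where(benefit, F.max(axis=0), F.min(axis=0))
f_minus = np.where(benefit, F.min(axis=0), F.max(axis=0))

# Weighted normalized distances, then S (group utility) and R (individual regret).
D = weights * (f_star - F) / (f_star - f_minus)
S, R = D.sum(axis=1), D.max(axis=1)

# Compromise index Q; smaller is better.
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
print("ranking (best first):", np.argsort(Q) + 1)
```

Group decision making then amounts to aggregating the Q rankings (or the underlying weights) produced by several decision makers before selecting the compromise solution.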
Increasing demand for open-source software (OSS) has raised the value of efficient selection in terms of quality; usability is an essential quality factor that significantly affects system acceptability and sustainability. Most large and complex software packages are partitioned across multiple portals and involve many users, each with their own role in the software package; those users have different perspectives on the software package, defined by their knowledge, responsibilities, and commitments. Thus, a multi-perspective approach has been used in usability evaluation to overcome the challenge of inconsistency between users' perspectives; this inconsistency would otherwise lead to an ill-advised decision on the selection of a suitable OSS. This study aimed to assist public and private organizations in evaluating and selecting the most suitable OSS. Evaluating OSS software packages to choose the best one is a challenging task owing to (a) multiple evaluation criteria, (b) criteria importance, and (c) data variation; it is therefore a sophisticated multi-criteria decision-making (MCDM) problem. Moreover, a multi-perspective usability evaluation framework for OSS selection is lacking in the current literature. Hence, this study proposes a novel multi-perspective usability evaluation framework for the selection of OSS based on multi-criteria analysis. An integration of the best-worst method (BWM) and VIKOR MCDM techniques is used for weighting criteria and ranking OSS alternatives: BWM is used to weight the evaluation criteria, whereas VIKOR is applied to rank the OSS-LMS alternatives. Individual and group decision-making contexts, as well as internal and external group aggregation, were used to demonstrate the efficiency of the proposed framework. A well-organized algorithmic procedure is presented in detail, and a case study is examined to illustrate the validity and feasibility of the proposed framework. The results demonstrate that the BWM-VIKOR integration works effectively to solve the OSS software package benchmarking/selection problem. Furthermore, the ranks of the OSS software packages obtained from the VIKOR internal and external group decision making were similar, and the best OSS-LMS under both was the 'Moodle' software package. Significant differences were identified among the group scores in the objective validation, indicating that the ranking results of the internal and external VIKOR group decision making were valid and supporting the validity of the framework.
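To make the weighting step concrete, here is a small sketch of the linear best-worst method solved with a generic LP solver. The best/worst criteria and the pairwise comparison vectors are invented for illustration and are not taken from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative BWM input for 4 criteria. Criterion 0 is assumed "best",
# criterion 3 "worst"; the comparison vectors are made-up example values.
best, worst = 0, 3
best_to_others = np.array([1, 2, 4, 8])    # a_Bj: best compared to each criterion
others_to_worst = np.array([8, 4, 2, 1])   # a_jW: each criterion compared to worst
n = len(best_to_others)

# Linear BWM: minimize xi subject to
#   |w_B - a_Bj * w_j| <= xi  and  |w_j - a_jW * w_W| <= xi,  sum(w) = 1, w >= 0.
# Decision variables: [w_0, ..., w_{n-1}, xi]
c = np.zeros(n + 1)
c[-1] = 1.0
A_ub, b_ub = [], []
for j in range(n):
    for sign in (1.0, -1.0):
        row = np.zeros(n + 1)
        row[best] += sign
        row[j] -= sign * best_to_others[j]
        row[-1] = -1.0
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(n + 1)
        row[j] += sign
        row[worst] -= sign * others_to_worst[j]
        row[-1] = -1.0
        A_ub.append(row); b_ub.append(0.0)
A_eq = [np.concatenate((np.ones(n), [0.0]))]
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + 1), method="highs")
weights, consistency = res.x[:n], res.x[-1]
print("criterion weights:", np.round(weights, 3), "consistency index:", round(consistency, 3))
```

The resulting weights then feed a VIKOR ranking like the one sketched for the dehazing study above; the residual xi serves as a consistency check on the decision maker's pairwise comparisons.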
Measuring company efficiency is an important issue for both managers and investors. Efficiency measurement is always important because organizations are constantly striving to increase internal productivity. However, investors are more concerned about sustainability than many executives believe. Almost 75% of investment community respondents strongly believed that improvements in operational efficiency were often accompanied by progress in terms of sustainability. This study examined companies listed on the Taiwan 50 and Taiwan Mid-Cap 100 Indexes and measured and ranked their operational efficiencies, identifying representatives with high investment potential among these highly capitalized blue chip stocks from various industries. The results will provide managers with recommendations for improving operational efficiency through competitive mapping, as well as a list of the most attractive targets for investment.
The accurate prediction of the strength of protein–ligand interactions is a very difficult problem despite impressive advances in the field of biomolecular modeling. There are good reasons to believe that quantum mechanical methods can help with this task, but the application of such methods in the context of scoring is still in its infancy. Here we benchmark several wave function theory (WFT), density functional theory (DFT) and semiempirical quantum mechanical (SQM) approaches against high-level theoretical references for realistic test cases. Based on our findings for systematically generated model systems of real protein/ligand complexes from the PDB-bind database, we can recommend SCS-MP2 and B2-PLYP-D3 as reference methods, TPSS-D3+Dabc/def-TZVPP as the best DFT approach and PM6-DH+ as a fast and accurate alternative to full ab initio treatments.
Membrane proteins perform a number of crucial functions as transporters, receptors, and components of enzyme complexes. Identification of membrane proteins and prediction of their topology is thus an important part of genome annotation. We present here an overview of transmembrane segments in protein sequences, summarize data from large-scale genome studies, and report results of benchmarking of several popular internet servers.
In this study, we simulated the algorithmic performance of a small neutral atom quantum computer and compared its performance when operating with all-to-all versus nearest-neighbor connectivity. This comparison was made using a suite of algorithmic benchmarks developed by the Quantum Economic Development Consortium. Circuits were simulated with a noise model consistent with experimental data from [Nature 604, 457 (2022)]. We find that all-to-all connectivity improves simulated circuit fidelity by 10%–15%, compared to nearest-neighbor connectivity.
This paper contributes toward the benchmarking of control architectures for bipedal robot locomotion. It considers architectures that are based on the Divergent Component of Motion (DCM) and composed of three main layers: a trajectory optimization layer, a simplified model control layer, and a whole-body quadratic programming (QP) control layer. While the first two layers use simplified robot models, the whole-body QP control layer uses a complete robot model to produce desired position, velocity, or torque inputs at the joint level. The paper then compares two implementations of the simplified model control layer, which are tested with position, velocity, and torque control modes for the whole-body QP control layer. In particular, both an instantaneous and a receding-horizon controller are presented for the simplified model control layer. We also show that one of the proposed architectures allows the humanoid robot iCub to achieve a forward walking velocity of 0.3372 m/s, the highest walking velocity achieved by the iCub robot.
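For reference, the DCM that such architectures are built around is usually defined from the linear inverted pendulum model as follows. This is the standard formulation with generic symbols rather than the paper's exact notation: x is the horizontal CoM position, z_0 the (constant) CoM height, and r_zmp the zero-moment point.

```latex
\xi = x + \frac{\dot{x}}{\omega}, \qquad
\omega = \sqrt{\frac{g}{z_0}}, \qquad
\dot{\xi} = \omega\,(\xi - r_{\mathrm{zmp}}), \qquad
\dot{x} = -\omega\,(x - \xi).
```

The first relation isolates the unstable part of the CoM dynamics; since the CoM converges to the DCM (last relation), the simplified model control layer only has to stabilize the DCM dynamics by choosing r_zmp, either instantaneously or over a receding horizon, while the whole-body QP layer realizes the resulting references on the full robot.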
A technological innovation audit can help companies improve innovation performance by identifying key factors in innovation management and providing benchmarking guidance. Combining extant audit theories, the paper develops an audit framework that is more comprehensive and better suited to Chinese firms. Using a beta-testing method on a medium-sized sample of cross-sectional Chinese companies, the authors test the applicability of the operational audit scoreboard in the sample firms. Moreover, the authors qualitatively identify the key advantages and disadvantages in technological innovation in Chinese companies that differentiate successful from less successful innovation management practice. Three types of innovation management are found in the sample, and possible implications for technological innovation development strategy in Chinese firms are proposed.
Industries need to raise their standards to remain competitive and to adopt modern methods and techniques that improve the effectiveness of their systems, which is commonly achieved through benchmarking. The rationale of the study is to review benchmarking techniques and to rank them on the basis of their application in service industries. To rank the benchmarking techniques, the analytic network process (ANP) and the technique for order preference by similarity to ideal solution (TOPSIS) are used. An integrated multi-criteria decision-making (MCDM) model is used for prioritizing best practices in the Indian service sector. The study identifies different types of benchmarking techniques, among which generic benchmarking, external benchmarking and internal benchmarking occupy the first three ranks, providing a basis for several critical success factors (CSFs), such as planning, reliability, standardization, time behavior and usability, as the more important parts of benchmarking. Thus, the authors propose a model for evaluating benchmarking techniques through MCDM, which gives executives confidence to adopt benchmarking in their industries.
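The TOPSIS step of such an ANP/TOPSIS pipeline can be sketched in a few lines. The decision matrix and weights below are illustrative placeholders (in the study the weights would come from the ANP), and all criteria are assumed to be benefit criteria for simplicity.

```python
import numpy as np

# Illustrative decision matrix: rows = benchmarking techniques, columns = CSF scores.
X = np.array([[7.0, 8.0, 6.0],
              [6.0, 7.0, 8.0],
              [8.0, 6.0, 7.0]])
w = np.array([0.4, 0.35, 0.25])   # assumed weights (ANP would provide these)

# 1) Vector-normalize each column, 2) apply the criterion weights.
V = w * X / np.linalg.norm(X, axis=0)

# 3) Ideal and anti-ideal solutions (all criteria treated as benefits here).
v_plus, v_minus = V.max(axis=0), V.min(axis=0)

# 4) Distances to the ideal/anti-ideal and 5) closeness coefficient.
d_plus = np.linalg.norm(V - v_plus, axis=1)
d_minus = np.linalg.norm(V - v_minus, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("closeness coefficients:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness) + 1)
```

Techniques with the highest closeness coefficient are nearest to the ideal solution and farthest from the anti-ideal one, which is how the generic, external and internal benchmarking types end up in the top ranks.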
This paper examines how 169 executives from leading North American electronics businesses assess their competitive operations abilities and performance, trends in business and manufacturing performance, technological readiness and manufacturing strategies. Comparative analysis reveals that the world-class electronics performers are accelerating their overall competitiveness, both internally by investing in technology and human assets and externally by forging strong relationships with customers. As a result, the most successful firms display heightened levels of organizational agility and performance.
A range of management accounting innovations (MAIs) have emerged in response to increasing technological change and the proliferation of globalization. Researchers have offered alternative views concerning these MAIs, ranging from rational-economic perspectives to social-organizational process perspectives that explore how MAIs are adopted and implemented in different organizational settings. This paper contributes to the literature on implementation by discussing the network view and subsidiaries' capabilities, both absorptive and combinative, in the diffusion of MAIs in group organizations. The paper identifies four possible sources of diffusion of MAIs that have not been discussed in the literature.