In this paper we present techniques for comparing the tracking performance of B-spline based contour trackers. Three trackers reported to give good tracking performance were considered for our empirical evaluation: the CONT-IMM tracker, the Condensation tracker, and Baumberg's tracker. Four test conditions were set, and for each test the performance of every tracker was assessed against four performance measures. The results reveal some interesting findings about the behaviour of the trackers under various conditions.
In this work we present a framework for on-the-fly video transcoding that exploits computer-vision techniques to adapt Web access to user requirements. The proposed transcoding approach aims to cope both with the user's bandwidth and resource capabilities and with the user's interest in the video content. We propose an object-based semantic transcoding that, according to user-defined classes of relevance, applies different transcoding techniques to the objects segmented in a scene. Object extraction is performed by on-the-fly video processing, without manual annotation. Multiple transcoding policies are reviewed, and a performance evaluation metric based on the Weighted Mean Square Error (and the corresponding PSNR) is defined that takes the perceptual user requirements into account by means of the classes of relevance. Results are analyzed by varying transcoding techniques, bandwidth requirements, and video types (indoor and outdoor scenes), showing that the use of semantics can dramatically improve the bandwidth-to-distortion ratio.
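The Weighted Mean Square Error the abstract mentions can be sketched as below: per-pixel squared error scaled by a relevance weight map derived from the user-defined classes of relevance. This is a minimal illustration, not the paper's exact formulation; the normalization choice (weights summing to one) is an assumption.

```python
import numpy as np

def weighted_mse(original, transcoded, weights):
    """Weighted MSE: squared error scaled by a per-pixel relevance map.

    `weights` encodes the classes of relevance (e.g. larger values on
    segmented foreground objects). It is normalized to sum to 1 so the
    result reduces to plain MSE when all weights are equal.
    """
    original = np.asarray(original, dtype=np.float64)
    transcoded = np.asarray(transcoded, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    return float((w * (original - transcoded) ** 2).sum())

def weighted_psnr(original, transcoded, weights, peak=255.0):
    """PSNR computed from the weighted MSE (8-bit peak by default)."""
    wmse = weighted_mse(original, transcoded, weights)
    return float("inf") if wmse == 0 else 10.0 * np.log10(peak ** 2 / wmse)
```

With uniform weights this reproduces ordinary MSE/PSNR; raising the weights inside a region of interest makes distortion there dominate the score, which is exactly how a semantic metric penalizes degradation of relevant objects more than background.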
In the current prostate biopsy procedure, the doctor must hold the ultrasonic probe throughout the entire process. This increases the doctor's fatigue, and the motion of the probe is more likely to damage the patient's anus. We developed a medical device that can assist doctors in prostate scanning and biopsy puncture, focusing on a passive interlocking mechanism for adjusting the position and posture of a transrectal ultrasonic probe. Based on this mechanism, a passive interlocking position and posture adjustment mechanism with seven degrees of freedom (DOF), capable of assisting the doctor in prostate scanning and puncture intervention, was designed to meet clinical requirements. A physical prototype of the mechanism was built and commissioned, and its locking torque and braking torque were measured. The results show that the ultrasonic probe achieves reliable locking and effectively meets the operational requirements.
This study proposes an improved assessment system for vocational education, thereby helping to increase the efficiency of technology college education. Data Envelopment Analysis (DEA) was used to examine relative managerial efficiency, evaluating the current-period and cross-period efficiency of 38 technological institutes upgraded from junior colleges in Taiwan by 1998. In addition, the managerial efficiency variation of each individual institute between 1995 and 1998 was determined. The results show that the operational category is significant among the primary analysis variables, i.e. private schools perform significantly better than public schools in terms of managerial efficiency, whereas geographical location is not significant. This study also verifies that integrating the results of relative managerial efficiency analysis and managerial efficiency variation analysis can be a powerful approach for designing managerial strategies that are both appropriate and effective. Some strategies to improve organization-wide operational competencies for site decision-makers are recommended, and a new way of thinking about constructing a more appropriate evaluation system for the educational authorities is introduced.
This paper proposes a risk scoring model to assess the performance of 27 publicly listed US online companies by applying Data Envelopment Analysis (DEA) and comparing it with the traditional financial measure Return on Equity (ROE). The DEA evaluation involves two stages: (1) computing operating efficiency and effectiveness to measure a company's operating performance, and (2) measuring the return per unit of risk to provide guidance for investors. The risk scoring model is useful for both investors and company managers: for investors, it yields a new stock-selection strategy; for managers, it provides a risk-adjusted performance evaluation process. Empirical results show that, for the Internet industry, a company's effectiveness is more important than its operating efficiency, and that investors in efficient online companies obtain higher returns.
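Several of these abstracts rely on DEA efficiency scores. As a concrete illustration of how such a score is computed, the following is a minimal sketch of the standard input-oriented CCR envelopment model (not the specific formulations used in these papers), solved with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.

    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Solves  min theta  s.t.  X^T lam <= theta * x_k,  Y^T lam >= y_k,
    lam >= 0, with decision vector [theta, lam_1, ..., lam_n].
    """
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0  # minimize theta
    # input constraints: sum_j lam_j * x_ji - theta * x_ki <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # output constraints: -sum_j lam_j * y_jr <= -y_kr
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun  # theta* = 1 means DMU k lies on the efficient frontier
```

A DMU scoring theta* = 1 is efficient; theta* = 0.5 means it could, in principle, produce the same outputs with half its inputs.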
In ubiquitous data stream mining, different devices often aim to learn concepts that are similar to some extent. In many applications, such as spam filtering or news recommendation, the underlying concept of the data stream (e.g., interesting mail/news) is likely to change over time. Therefore, the resulting model must be continuously adapted to such changes. This paper presents a novel Collaborative Data Stream Mining (Coll-Stream) approach that exploits similarities in the knowledge available from other devices to improve local classification accuracy. Coll-Stream integrates the community knowledge using an ensemble method in which classifiers are selected and weighted based on their local accuracy on different partitions of the feature space. We evaluate Coll-Stream's classification accuracy under concept drift, noise, and varying partition granularity and concept similarity relative to the local underlying concept. The experimental results, on both synthetic and real-world datasets, show that the model Coll-Stream produces achieves stability and accuracy in a variety of situations.
Performance evaluation is one of the most important problems for retail chains and can affect tactical and strategic decisions. This paper proposes a grey-based multi-criteria performance evaluation model for the retail sector. The model integrates the Decision-Making Trial and Evaluation Laboratory (DEMATEL) method with a modified Grey Relational Analysis (GRA). First, the grey-based DEMATEL method determines, from experts' assessments, the importance of the performance indicators to be used in GRA. Then, the proposed modified GRA method evaluates and ranks the retail stores with respect to the predetermined performance indicators. Finally, the effectiveness and applicability of the developed approach are illustrated with a case study using actual data from a retail chain in Turkey.
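The GRA step described above can be sketched as follows. This is the classical formulation (normalize, compute deviations from the reference series, apply the grey relational coefficient with distinguishing coefficient zeta = 0.5, then weight), restricted for brevity to larger-is-better criteria; the paper's modified GRA will differ in detail.

```python
import numpy as np

def grey_relational_grades(data, weights, zeta=0.5):
    """Classical GRA ranking sketch (benefit criteria only).

    data: (alternatives, criteria); weights: criteria importance
    (e.g. derived from grey DEMATEL), assumed to sum to 1.
    Returns one grey relational grade per alternative; larger is better.
    """
    data = np.asarray(data, dtype=float)
    # linear normalization to [0, 1] per criterion (larger-is-better)
    lo, hi = data.min(axis=0), data.max(axis=0)
    norm = (data - lo) / (hi - lo)
    # deviation of each alternative from the reference series (all 1.0)
    delta = np.abs(1.0 - norm)
    dmin, dmax = delta.min(), delta.max()
    # grey relational coefficients, then weighted aggregation
    xi = (dmin + zeta * dmax) / (delta + zeta * dmax)
    return xi @ np.asarray(weights, dtype=float)
```

Ranking the stores then amounts to sorting the returned grades in descending order, with the DEMATEL-derived weights controlling how much each indicator contributes.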
The issue of efficiency analysis of network and multi-stage systems, one of the most interesting fields in data envelopment analysis (DEA), has attracted much attention in recent years. A pure serial three-stage (PSTS) process is a specific kind of network in which all the outputs of the first stage are used as the only inputs of the second stage and, in addition, all the outputs of the second stage are used as the only inputs of the third stage. In this paper, a new three-stage DEA model is developed for PSTS processes using the concept of a three-player Nash bargaining game. In this model, all stages cooperate to improve the overall efficiency of the main decision-making unit (DMU). In contrast to centralized DEA models, the proposed model provides a unique and fair decomposition of the overall efficiency among the three stages and eliminates the ambiguity centralized models face when decomposing the overall efficiency score. Some theoretical aspects of the proposed model, including the convexity and compactness of its feasible region, are discussed. Since the proposed bargaining model is a nonlinear mathematical program, a heuristic linearization approach is also provided. A numerical example and a real-life supply chain case study are used to check the efficacy and applicability of the proposed model, and the results on both are compared with those of existing centralized DEA models in the literature. The comparison confirms the efficacy and suitability of the proposed model, while the pitfalls of the centralized DEA models are also resolved. A comprehensive sensitivity analysis of the breakdown point associated with each stage is also conducted.
Performance evaluation is relevant for supporting managerial decisions related to the improvement of public emergency departments (EDs). As different criteria from the ED context and several alternatives need to be considered, selecting a suitable Multicriteria Decision-Making (MCDM) approach is a crucial step in ED performance evaluation. Although some methodologies have been proposed to address this challenge, a more complete approach is still lacking. This paper bridges this gap by integrating three MCDM methods. First, the Fuzzy Analytic Hierarchy Process (FAHP) is used to determine the criteria and sub-criteria weights under uncertainty, followed by interdependence evaluation via the fuzzy Decision-Making Trial and Evaluation Laboratory (FDEMATEL). Fuzzy logic is combined with AHP and DEMATEL to represent vague judgments. Finally, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is used for ranking EDs. The approach is validated on a real 3-ED cluster. The results reveal the critical role of Infrastructure (21.5%) in ED performance and the interactive nature of Patient safety (C+R = 12.771). Furthermore, this paper identifies the weaknesses that must be tackled to upgrade the performance of each ED.
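The final TOPSIS ranking step can be sketched as below: weighted vector normalization, distances to the ideal and anti-ideal solutions, and the relative-closeness score. This is standard crisp TOPSIS for illustration; the paper feeds it weights obtained from FAHP/FDEMATEL, which are assumed here to be given as plain numbers.

```python
import numpy as np

def topsis_scores(data, weights, benefit):
    """TOPSIS relative closeness; higher = closer to the ideal solution.

    data: (alternatives, criteria); weights: criteria weights
    (e.g. from FAHP in the paper); benefit: True where larger is better.
    """
    X = np.asarray(data, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit)
    # weighted vector normalization of the decision matrix
    V = w * X / np.linalg.norm(X, axis=0)
    # ideal and anti-ideal solutions per criterion
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # relative closeness C_i in [0, 1]
```

Ranking the EDs then means sorting alternatives by their closeness scores; an alternative that attains the best value on every criterion scores exactly 1.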
The study of interactions between host and pathogen proteins is important for understanding the underlying mechanisms of infectious diseases and for developing novel therapeutic solutions. Wet-lab techniques for detecting protein–protein interactions (PPIs) can benefit from computational predictions, and machine learning is one of the computational approaches that can assist biologists by predicting promising PPIs. A number of machine-learning-based methods for predicting host–pathogen interactions (HPI) have been proposed in the literature, and the techniques used for assessing the accuracy of such predictors are of critical importance in this domain. In this paper, we question the effectiveness of K-fold cross-validation for estimating the generalization ability of HPI prediction for proteins with no known interactions. K-fold cross-validation does not model this scenario, and we demonstrate a sizable difference between its performance and the performance of an alternative evaluation scheme called leave one pathogen protein out (LOPO) cross-validation. LOPO is more effective in modeling the real-world use of HPI predictors, specifically for cases in which no information about the interacting partners of a pathogen protein is available during training. We also point out that currently used metrics, such as the areas under the precision-recall or receiver operating characteristic curves, are not intuitive to biologists, and we propose simpler and more directly interpretable metrics for this purpose.
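The difference between K-fold and LOPO cross-validation is in how the splits are generated: LOPO holds out every interaction pair involving one pathogen protein at a time, so the test protein is genuinely unseen during training. A minimal sketch of the split generation (protein IDs below are illustrative placeholders, and the grouping logic is an assumption consistent with the scheme's name):

```python
def lopo_splits(pairs):
    """Leave-one-pathogen-protein-out cross-validation splits.

    pairs: list of (host_protein, pathogen_protein) examples.
    Yields (held_out_protein, train_indices, test_indices). Unlike
    K-fold, no pair involving the held-out pathogen protein appears
    in training, modeling prediction for a protein with no known
    interactions.
    """
    pathogen_proteins = sorted({p for _, p in pairs})
    for held_out in pathogen_proteins:
        train = [i for i, (_, p) in enumerate(pairs) if p != held_out]
        test = [i for i, (_, p) in enumerate(pairs) if p == held_out]
        yield held_out, train, test
```

Under plain K-fold, other pairs involving the same pathogen protein can remain in the training folds, which leaks information and inflates the accuracy estimate relative to the no-known-interactions scenario.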
This paper presents the design and performance of a prototype of a new humanoid arm developed at the LARM2 laboratory of the University of Rome “Tor Vergata”. The new arm, called the LARMbot PK arm, is an upper limb designed for the LARMbot humanoid robot. LARMbot is a humanoid robot designed to move freely in open spaces and to adapt to its task environment. Its objective is to transport objects weighing a few kilograms in order to facilitate the restocking of workstations, to manage small warehouses, and to perform other tasks feasible for humanoids. The LARMbot PK arm is based on a parallel tripod structure with linear actuators that provide high agility of movement. It is built from components that are available on the market or can be 3D printed, offering a quality-to-price ratio well suited to user-oriented humanoid robots. Experimental tests with the built prototype demonstrate the capabilities of the proposed solution in terms of agility, autonomy, and power, validating the LARMbot PK arm as a satisfactory solution for the new upper limbs of the LARMbot humanoid robot.
Fragility functions that estimate the probability of exceeding different levels of damage in slab-column connections of existing non-ductile reinforced concrete buildings subjected to earthquakes are presented. The proposed fragility functions are based on experimental data from 16 investigations conducted in the last 36 years, comprising a total of 82 specimens. Fragility functions corresponding to four damage states are presented as functions of the level of peak interstorey drift imposed on the connection. For damage states involving punching shear failure and loss of vertical carrying capacity, the fragility functions also depend on the vertical shear in the connection produced by gravity loads, normalised by the nominal vertical shear strength in the absence of unbalanced moments. Two sources of uncertainty in the estimation of damage as a function of lateral deformation are studied and discussed. The first is the specimen-to-specimen variability of the drifts associated with a damage state, and the second is the epistemic uncertainty arising from using small samples of experimental data and from interpreting the experimental results. For a given peak interstorey drift ratio, the proposed fragility curves permit the estimation of the probability of experiencing different levels of damage in slab-column connections.
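Fragility functions of this kind are commonly expressed as lognormal cumulative distribution functions of the demand parameter. As a generic illustration (the functional form is the standard lognormal model, but the parameter values below are illustrative, not the paper's fitted values):

```python
import math

def fragility(idr, median, beta):
    """P(damage state reached or exceeded | peak interstorey drift ratio).

    Lognormal fragility: Phi((ln(idr) - ln(median)) / beta), where
    `median` is the drift at 50% exceedance probability and `beta` is
    the logarithmic standard deviation capturing specimen-to-specimen
    variability. Phi is evaluated via the error function.
    """
    z = (math.log(idr) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Evaluating such a curve at the drift demanded by an earthquake scenario yields the exceedance probability for each damage state; for example, at a drift equal to the median the probability is exactly 0.5, and it increases monotonically with drift.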
A reliable estimate of the actual capacity and deformability of existing reinforced concrete buildings in earthquake-prone areas is essential for pre- or post-earthquake interventions. This study is concerned with the evaluation of the structural overstrength, the global ductility and the available behaviour factor of existing reinforced concrete buildings designed and constructed according to past generations of earthquake resistant design codes. For the estimation of these global performance characteristics, different failure criteria are incorporated in a methodology established to predict the failure mode of the buildings. As an application, a typical five-storey building of the 1960s, designed according to the then prevailing design codes, is selected and analysed in the inelastic range. Both bare and infilled structural forms of this building are studied. For this structure, the plastic hinge rotation capacity is the critical failure criterion. The same structure, designed according to current design codes, is re-evaluated using the same methodology, in order to calibrate the procedure and to compare the static and dynamic inelastic performances of the two frames. The results indicate that existing buildings exhibit higher overstrength than their contemporary counterparts, but with much reduced ductility capacity. Perimeter infill walls compensate for this lack of ductility by augmenting the buildings' stiffness and overall lateral resistance. The methodology is subsequently applied to a larger inventory of typical existing buildings, as described in a companion publication.
The results of a parametric study are presented, concerned with the evaluation of the structural overstrength, the global ductility and the available behaviour factor of existing reinforced concrete (RC) buildings designed and constructed according to past generations of earthquake resistant design codes in Greece. For the estimation of these parameters, various failure criteria are incorporated in a methodology established to predict the failure mode of such buildings under planar response, as described in detail in a companion publication. A collection of 85 typical building forms is considered. The influence of various parameters is examined, such as the geometry of the structure (number of storeys, bay width, etc.), vertical irregularity, the contribution of the perimeter frame masonry infill walls, the period of construction, the design code and the seismic zone coefficient. The results from inelastic pushover analyses indicate that existing RC buildings exhibit higher overstrength than their contemporary counterparts, but with much reduced ductility capacity. The presence of perimeter infill walls considerably increases their stiffness and lateral resistance, while further reducing their ductility. Fully infilled frames exhibit generally good behaviour, while structures with an open floor exhibit the worst performance because a soft storey is created. Shear failure becomes critical in buildings with partial-height infills; it is also critical for buildings with isolated shear wall cores at the elevator shaft. Of the five different forms of irregularity considered in this study, buildings with column discontinuities in the ground storey exhibit the worst performance. Furthermore, buildings located in the higher seismicity zone are more vulnerable, since the increase in their lateral resistance and ductility capacity is disproportionate to the increase in seismic demand.
This paper reports the results of an empirical study designed to investigate how alternative forms of designer participation, and the measures used to evaluate designers' performance, affect the simultaneous achievement of quality and cost in product development. These effects are tested in the weight-determination process at different QFD deployment teams for model-change products. The findings are based on a questionnaire survey of 207 Japanese manufacturing companies. We tested the interaction effects of three levels of participation and two levels of evaluation measure on quality and cost performance using a proportional-odds model. The results show that joint participation (JP) is the dominant category in all deployment teams. At mechanism deployment, simultaneous attainment of quality and cost is possible through JP combined with evaluation by uncontrollable information (UC). At the function and parts deployment teams, JP is most effective when combined with controllable information (C). The difference in the results between the mechanism, function and parts levels lies in the controllability of the evaluation measures, which corresponds to team size: UC is associated with teams having a complex, broad structure, while C is associated with simple, small teams.
The ISO 56002 international standard for innovation management systems was published in 2019. In this paper, we review the rationale, the key features, and the evidence base for this new standard. The primary objective of the standard is to promote the professionalisation of the field by providing a framework for management and organisational practice. The standard was developed by a wide range of stakeholders, including consultants and professional associations, and therefore features most of the elements we would expect from such a high-level, generic approach: strategy, organisation, leadership, planning, support, process, performance evaluation, and improvement. We examine the empirical base for each of these components. We also identify some critical shortcomings, such as the implicit adoption of a linear model, the lack of specific tools to support practice, and the absence of any significant variation in application by sector or context. Finally, we recommend how the standard could be improved and implemented in practice.
Some 200–300 local government agencies (LGAs) in the United States (US) use suites of indicators to monitor economic and environmental trends and social well-being, often against goals. Many reservations have been expressed about these community indicator programs (CIPs), including about their internal logic: the validity of the indicators in relation to the goals, the measurability and plausibility of the goals, and data reliability.
To test these criticisms, a formal evaluation of a CIP was conducted, using the well-respected Santa Monica, California, CIP as the case study. The research tests the plausibility of a CIP as well as its effectiveness, using program evaluation techniques, in particular the tool of evaluability assessment. Using Likert ratings against several criteria, the author finds that the plausibility of the CIP is reasonably robust and that it is a particularly effective tool of governance. More importantly, the research yields an evaluative methodology for assessing all goal-based CIPs: a framework that can help strengthen existing CIPs and is transferable to appraising similar documents. Lastly, the research offers several lessons of value to urban planners and managers, including a discussion of targets, regionalism and the nature of "success" in urban management.
South Africa is regarded as a leading developing country in terms of SEA practice. However, the lack of empirical research to evaluate and learn from this wealth of practical experience can be considered a major lost opportunity, not only for South Africa but also for the development of our understanding of SEA in the developing world. This paper provides the results of an effectiveness review of six high-profile SEA case studies within the South African context. Measured against four key performance areas (KPAs) and nine key performance indicators (KPIs), the results show a high degree of ineffectiveness across all six cases in terms of 'direct outputs'. The main areas of weakness were the inability to influence the contents of plans and programmes, as well as decision making in general. It can thus be concluded that, based on the 'poor' direct-effectiveness results, SEA is not achieving its objectives within the South African context. However, certain 'indirect outputs' also emerged, such as highlighting deficiencies and gaps in existing policy, as well as examples where SEA facilitated capacity building and raised awareness of sustainability issues. Moreover, the SEAs also contributed significantly to information generation and sharing. The results suggest that practitioners need either to redefine the purpose of SEA or to fundamentally rethink the way SEA is applied within the South African context. The paper concludes by making proposals for future international research.
The need for empirical research and systematic performance evaluation of SEA, to advance theoretical understanding as well as practice, has been widely expressed. To promote such research, any performance evaluation has to be conceptually justified, methodologically sound, practically viable and tailored to the local context. This paper describes an SEA quality and effectiveness review protocol for application within the South African context. Drawing on international perspectives and debates, it describes the conceptual thinking underpinning the structure of the protocol, in terms of its approach and framework, together with a methodological justification of how the review areas and indicators were designed. Finally, a critical evaluation of its application to selected case studies is presented. The paper concludes that the conceptual framework and methodology could be applied in any context, although the contents, in terms of review areas and indicators, need to be adapted to accommodate different understandings of and perspectives on SEA.
During outbreaks of epidemic diseases, the importance of real-time communication (RTC) systems increases dramatically. People use RTC systems to communicate with others, present projects, attend online courses, and share videos. Given the variety of network conditions, applications, and scenarios, how to choose an appropriate system for high-quality RTC is an open question. To the best of our knowledge, there is no general, unified method for comprehensively evaluating the performance of publicly available RTC systems. In this paper, we systematically evaluate several performance aspects of RTC systems. Our method treats each system as a black box, so it can easily be adapted to other systems, including other video transmission systems such as streaming and live broadcasting. Using this measurement method, we evaluate three web-based and three software-based RTC systems in two video conferencing (VC) scenarios and two screen sharing (SS) scenarios. We measure the received video quality (graphical quality and frame rate) at the receiver, the upload bitrate at the sender, and four kinds of local resource usage. Furthermore, we propose a new metric for the ability of a system to handle insufficient-bandwidth situations; it is the first metric to directly measure the rate adaptation mechanism of RTC systems. We expect the measurement method, the metric, and our findings to help future system development.
Our detailed analysis reveals that (1) the software-based systems use bitrate more efficiently than the web-based systems; (2) the web-based systems are more tolerant of insufficient bandwidth than the software-based systems; (3) the studied RTC systems are currently not designed for sharing dynamic videos, because their low-frame-rate strategy limits this usage; and (4) decreases in graphical quality are more likely to be noticed than decreases in frame rate, so frame-rate adjustment should be considered for rate adaptation.