The concept of the so-called computer capacity was proposed in 2012 and applied to the analysis of processors of various kinds. Here, we analyze the evolution of processors using the computer capacity as the main analytical tool. It is shown that during the transition "from old to new" the manufacturers change the parameters that affect the computer capacity, which allows us to predict the parameter values of subsequent processors. Intel processors are used as the main example due to their high popularity and the public availability of detailed descriptions of their technical characteristics.
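For context, a sketch of the underlying notion as commonly defined in the literature (a paraphrase, not a quotation from this paper): the computer capacity is the exponential growth rate of the number of distinct computational tasks the machine can execute,

\[
C(I) \;=\; \limsup_{T \to \infty} \frac{\log N(T)}{T},
\]

where N(T) denotes the number of different admissible sequences of instructions whose total execution time is T. Changing instruction latencies or the degree of parallelism changes N(T) and hence the capacity C(I).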
Dynamic range and spurious-free dynamic range are two of the most critical performance indexes in the field of radio frequency (RF). However, the definitions of both indexes are ambiguous, and their characterization ability is insufficient, resulting in unfair, and even mutually incompatible, performance evaluations in practice. In this study, a new index named radio frequency distortion dynamic range, together with a corresponding evaluation method, is proposed to achieve a fair and detailed dynamic range evaluation by unifying the existing definitions and improving the performance resolution ability. First, a sliding threshold selection method is introduced to replace the classification-based definition of dynamic range and thereby characterize more details of the dynamic range. Second, a "performance body" evaluation method is proposed to obtain a more comprehensive evaluation by generalizing the current single-condition evaluation to one based on scanning critical conditions. Experiments show that the proposed index with the proposed evaluation method reduces the ambiguity of dynamic range evaluation and can distinguish performance differences that the current indexes cannot.
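To make the sliding-threshold idea concrete, here is a minimal sketch with toy data and hypothetical function names (not the paper's exact procedure): sweeping a distortion threshold yields a curve of usable input-power range versus threshold, rather than a single scalar index.

```python
import numpy as np

def sliding_threshold_ranges(power_dbm, distortion_db, thresholds_db):
    """For each distortion threshold, return the width (dB) of the widest
    contiguous input-power interval whose distortion stays below it."""
    ranges = []
    for th in thresholds_db:
        ok = distortion_db < th
        best, start = 0.0, None
        for i, flag in enumerate(ok):
            if flag and start is None:
                start = i
            if (not flag or i == len(ok) - 1) and start is not None:
                end = i if flag else i - 1
                best = max(best, power_dbm[end] - power_dbm[start])
                start = None
        ranges.append(best)
    return ranges

# Hypothetical measurement: distortion grows linearly with input power.
power = np.linspace(-60, 0, 121)              # input power, dBm
distortion = -80 + 1.2 * (power + 60)         # distortion level, dB (toy model)
print(sliding_threshold_ranges(power, distortion, thresholds_db=[-40, -30, -20]))
```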
A compositional modeling and performance evaluation technique for traffic control systems based on Stochastic Timed Petri Nets (STPNs) is presented. We use STPNs to specify traffic and traffic control at an intersection, and a random distribution model to describe the motion of vehicles in a road segment between any two consecutive intersections. A traffic control system is thus modeled as a composition of individual intersection models and segment random distribution models. A technique is presented to incrementally evaluate the system's performance by analyzing intersections separately according to a carefully selected order. The analysis technique conforms to accepted practice in transportation research. Compared to existing Petri net models of traffic control systems, our technique dramatically reduces the complexity of analysis.
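A minimal sketch of the compositional evaluation idea (the interface between models is an assumption here, not the paper's tool): intersections are analyzed one at a time in the selected order, with each intersection's analyzed departures, modulated by the connecting segment's model, feeding the arrival process of the next.

```python
def evaluate_network(intersections, order, segments):
    """intersections: dict name -> callable(arrival_rate) -> departure_rate
    order: list of intersection names (the carefully selected evaluation order)
    segments: dict (upstream, downstream) -> progression factor of the segment."""
    arrival = {name: 0.0 for name in intersections}
    arrival[order[0]] = 0.5  # hypothetical external demand (vehicles/s)
    departure = {}
    for name in order:
        departure[name] = intersections[name](arrival[name])
        for (u, v), factor in segments.items():
            if u == name:
                arrival[v] += factor * departure[name]
    return departure

# Toy intersection models: each passes a capacity-capped fraction of arrivals.
inter = {"A": lambda lam: min(lam, 0.4), "B": lambda lam: min(lam, 0.3)}
print(evaluate_network(inter, ["A", "B"], {("A", "B"): 0.9}))
```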
With the increasing requirements of distributed software systems, software agents are becoming a mainstream technology for software engineering and data management. Scalability and adaptability are two key challenges that must be addressed. In this work, a new model is introduced for building large-scale, highly dynamic distributed software systems, using a hierarchy of homogeneous agents capable of service discovery. The performance of the agent system can be improved using different combinations of optimisation strategies. A modelling and simulation environment has been developed to aid the performance evaluation process. Two case studies are given, and simulation results are included that show the impact of agent mobility and the choice of performance optimisation strategies on overall system performance.
The problem of performance evaluation of business processes supported by Workflow Management Systems is a recent research issue. In this paper we propose a measurement framework in which several aspects concerning the timing and working of a business process, either as a whole or in terms of its components, can be precisely quantified.
Our approach is based on the workflow model introduced by the Workflow Management Coalition and introduces some fundamental measures from which a number of derived measures can be hierarchically obtained.
The paper describes the basic structures and the primitive operators of the framework as well as the fundamental and derived measures. Techniques for the evaluation of complex processes are also discussed. The proposed framework is quite general and can be applied to research and commercial workflow management systems with relatively little implementation effort.
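As an illustration of the fundamental/derived hierarchy (a minimal construction, not the framework's actual operators): fundamental measures are raw timestamps of activity state changes, from which measures such as waiting time and service time are derived.

```python
from datetime import datetime

# Hypothetical audit trail of one process instance: (activity, event, time).
log = [
    ("review", "enabled",   datetime(2024, 1, 1, 9, 0)),
    ("review", "started",   datetime(2024, 1, 1, 9, 20)),
    ("review", "completed", datetime(2024, 1, 1, 10, 0)),
]

def fundamental(activity, event):
    """Fundamental measure: the timestamp of a state change."""
    return next(t for a, e, t in log if a == activity and e == event)

def waiting_time(activity):   # derived measure: enabled -> started
    return fundamental(activity, "started") - fundamental(activity, "enabled")

def service_time(activity):  # derived measure: started -> completed
    return fundamental(activity, "completed") - fundamental(activity, "started")

print(waiting_time("review"), service_time("review"))
```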
Feature engineering is one aspect of knowledge engineering. Besides feature selection, the appropriate assignment of feature values is also crucial to the performance of many software applications, such as text categorization (TC) and speech recognition. In this work, we develop a general method to enhance TC performance through the use of context-dependent feature values (also known as term weights), which are obtained by a novel adaptation of a context-dependent adjustment procedure previously shown to be effective in information retrieval. The motivation of our approach is that the general method can be used with different text representations and in combination with other TC techniques. Experiments on several test collections show that our context-dependent feature values can improve TC over traditional context-independent unigram feature values, using a strong classifier such as the Support Vector Machine (SVM), which past work has found hard to improve upon. We also show that the relative performance improvement of our method over the context-independent baseline is comparable to the levels attained by recent word embedding methods in the literature, while an advantage of our approach is that it does not require the substantial training needed to learn word embedding representations.
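A simplified sketch of context-dependent term weighting (the boost rule below is hypothetical, not the paper's adjustment procedure): a term's context-independent tf-idf weight is adjusted using co-occurrence statistics between the term and the other terms of the document.

```python
import math
from collections import Counter

def tfidf(term, doc_terms, df, n_docs):
    """Context-independent baseline weight."""
    tf = doc_terms.count(term)
    return tf * math.log((1 + n_docs) / (1 + df.get(term, 0)))

def context_adjusted(term, doc_terms, df, n_docs, cooc):
    """Boost the baseline when the document's other terms co-occur with
    `term` in the training collection (hypothetical adjustment rule)."""
    base = tfidf(term, doc_terms, df, n_docs)
    neighbours = [w for w in doc_terms if w != term]
    support = sum(cooc.get((term, w), 0.0) for w in neighbours)
    return base * (1.0 + support / max(len(neighbours), 1))

doc = ["bank", "river", "water"]
df = Counter({"bank": 50, "river": 10, "water": 20})       # document frequencies
cooc = {("bank", "river"): 0.8, ("bank", "water"): 0.6}    # from training data
print(context_adjusted("bank", doc, df, n_docs=100, cooc=cooc))
```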
Probabilistic behavior is omnipresent in computer-controlled systems, in particular in so-called safety-critical hybrid systems, for various reasons, such as uncertain environments or fundamental properties of nature. In this paper, we extend the existing hybrid process algebra ACPsrths with probability without sacrificing the nondeterministic choice operator. The existing approximate probabilistic bisimulation relation is fragile and not robust, in the sense of being dependent on the deviation range of the transition probability. To overcome this defect, a novel approximate probabilistic bisimulation, inspired by the idea of Probably Approximately Correct (PAC), is proposed that relaxes the constraints on the transition probability deviation range. Traditional temporal logics, including probabilistic ones, are expressive, but they produce only true or false verdicts and are therefore not suitable for performance evaluation. To address this problem, we present a new performance evaluation language that extends quantitative analysis from the value range {0,1} to the real numbers in order to reason about probabilistic systems. The corresponding algorithms for performance evaluation are then given. Finally, an industrial example demonstrates the effectiveness of our method.
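One plausible PAC-style reading of the relaxed relation (a formalization sketch, not necessarily the authors' exact definition): instead of requiring the transition-probability deviation bound to hold exactly, it is required to hold with a prescribed confidence,

\[
s \,\mathcal{R}\, t \;\Longrightarrow\; \Pr\!\big(\,\lvert P(s,a,C) - P(t,a,C)\rvert \le \varepsilon \,\big) \;\ge\; 1 - \delta
\]

for every action a and every \(\mathcal{R}\)-closed set of states C, with accuracy parameter \(\varepsilon\) and confidence parameter \(\delta\), so that the relation no longer hinges on a single fixed deviation range.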
Path generation means generating a path, or a set of paths, such that the generated path satisfies specified properties or constraints. To our knowledge, generating a path whose performance evaluation value lies within a given interval has received scant attention. This paper formulates the path generation problem as an optimization problem by designing a suitable fitness function, adapts the Markov decision process with a reward model into a weighted digraph by eliminating multiple edges and non-goal dead nodes, constructs paths using a priority-based indirect coding scheme, and finally modifies the bat algorithm with heuristics to solve the optimization problem. Simulation experiments were carried out for different objective functions, population sizes, numbers of nodes, and interval ranges. Experimental results demonstrate the effectiveness and superiority of the proposed algorithm.
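A minimal sketch of priority-based indirect path coding (a standard decoding scheme; the details here are illustrative, not necessarily the paper's): a candidate solution is a vector of node priorities, decoded into a path by repeatedly moving to the unvisited successor of highest priority.

```python
def decode_path(priorities, adjacency, source, goal):
    """Decode a node-priority vector into a path from source to goal."""
    path, current, visited = [source], source, {source}
    while current != goal:
        candidates = [v for v in adjacency.get(current, []) if v not in visited]
        if not candidates:
            return None  # dead end; would be penalised by the fitness function
        current = max(candidates, key=lambda v: priorities[v])
        visited.add(current)
        path.append(current)
    return path

adjacency = {0: [1, 2], 1: [3], 2: [3], 3: []}   # toy digraph
priorities = {0: 0.1, 1: 0.9, 2: 0.4, 3: 0.7}    # one candidate solution
print(decode_path(priorities, adjacency, source=0, goal=3))  # -> [0, 1, 3]
```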
Temporal logics are a rich family of logical systems designed for specifying properties about events and changes in the world over time. Traditional temporal logic, however, is limited to binary outcomes (true or false) and lacks the capacity to specify performance properties of a system, such as the maximum, minimum, or average cost between states. Current languages do not accommodate the quantification of such performance properties, especially in scenarios involving infinite execution paths where performance properties such as cumulative sums may fail to converge. To this end, this paper introduces a novel formal language for assessing system performance, which captures not only temporal dynamics but also various performance-related properties. The paper then uses reinforcement learning techniques to compute the values of performance-property formulas. Finally, in the experimental part, a formal-language representation of system performance properties was implemented, the values of the performance-property formulas were computed using reinforcement learning, and the effectiveness and feasibility of the proposed method were validated.
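A toy sketch of the value-computation step under simplifying assumptions of our own (tabular TD(0) on a small Markov chain, with a discount factor keeping values finite on infinite paths, which is the kind of quantity a performance-property formula might denote):

```python
import random

random.seed(0)
states = [0, 1, 2]
transitions = {0: [(0.7, 1), (0.3, 2)], 1: [(1.0, 2)], 2: [(1.0, 0)]}
cost = {0: 1.0, 1: 4.0, 2: 0.5}   # hypothetical per-state costs
gamma, alpha = 0.9, 0.05          # discount factor, learning rate
V = {s: 0.0 for s in states}      # estimated discounted cumulative cost

def step(s):
    """Sample the successor of state s."""
    r, acc = random.random(), 0.0
    for p, nxt in transitions[s]:
        acc += p
        if r <= acc:
            return nxt
    return transitions[s][-1][1]

s = 0
for _ in range(200_000):
    nxt = step(s)
    V[s] += alpha * (cost[s] + gamma * V[nxt] - V[s])  # TD(0) update
    s = nxt
print({k: round(v, 2) for k, v in V.items()})
```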
In this paper, we propose clustering methods based on weighted quasiarithmetic means of T-transitive fuzzy relations. We first generate a T-transitive closure R^T from a proximity relation R based on a max-T composition, and produce a T-transitive lower approximation (opening) R_T from R through the residuation operator. We then aggregate a new T-indistinguishability fuzzy relation using a weighted quasiarithmetic mean of R^T and R_T, and build a clustering algorithm on the proposed T-indistinguishability. We compare clustering results from three critical t_i-indistinguishabilities: minimum (t3), product (t2), and Łukasiewicz (t1). A weighted quasiarithmetic mean of the t1-transitive closure R^{t1} and the t1-transitive lower approximation (opening) R_{t1} with weight p = 0.5 demonstrates the superiority and usefulness of clustering that starts from a proximity relation R under the proposed algorithm. The algorithm is then applied to a practical evaluation of the performance of higher education in Taiwan.
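A minimal sketch for the T = minimum case (the residuation-based opening is replaced by a placeholder here): the min-transitive closure is computed by iterating max-min composition to a fixed point, and the two relations are combined with a weighted arithmetic mean, i.e. the quasiarithmetic mean with identity generator, at p = 0.5.

```python
import numpy as np

def max_min_closure(R):
    """Min-transitive closure via repeated max-min composition."""
    C = R.copy()
    while True:
        comp = np.max(np.minimum(C[:, :, None], C[None, :, :]), axis=1)
        nxt = np.maximum(C, comp)
        if np.allclose(nxt, C):
            return C
        C = nxt

R = np.array([[1.0, 0.8, 0.4],
              [0.8, 1.0, 0.5],
              [0.4, 0.5, 1.0]])   # toy proximity relation (reflexive, symmetric)

closure = max_min_closure(R)
opening = np.minimum(R, closure)  # placeholder; the true opening needs residuation
p = 0.5
aggregated = p * closure + (1 - p) * opening  # weighted quasiarithmetic mean
print(aggregated)
```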
Fuzzy regression models are developed to construct the relationship between independent and dependent variables in a fuzzy environment. To increase the explanatory performance of a fuzzy regression model, the least-squares method is usually applied to determine the numeric coefficients based on a concept of distance. In this paper, we consider the fuzzy linear regression model with fuzzy input, fuzzy output, and crisp parameters, introduce a new distance based on the geometric centroid and incentre points (GCIP) of triangular fuzzy numbers, and merge the least-squares method with the new GCIP distance to propose a least-squares GCIP distance method. Finally, an example of employee job performance is given to illustrate the effectiveness and feasibility of the method. Comparisons with existing methods under the same total-estimation-error distance criterion show that the explanatory performance of the GCIP method is satisfactory and the calculation is relatively simple.
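A simplified sketch using only the centroid component of the GCIP distance (the incentre component and the fuzzy-coefficient algebra are omitted, and the data are hypothetical): the geometric centroid of a triangular fuzzy number (l, m, r) is ((l + m + r)/3, 1/3), and least squares is applied to the centroid abscissae.

```python
import numpy as np

def centroid(tfn):
    """Geometric centroid of a triangular fuzzy number (l, m, r)."""
    l, m, r = tfn
    return np.array([(l + m + r) / 3.0, 1.0 / 3.0])

def centroid_distance(a, b):
    """Centroid part of a GCIP-style distance between two fuzzy numbers."""
    return np.linalg.norm(centroid(a) - centroid(b))

def fit_crisp_coeffs(X, Y):
    """Crisp a, b for Y ~ a + b*X, minimising squared centroid-abscissa error."""
    x = np.array([centroid(t)[0] for t in X])
    y = np.array([centroid(t)[0] for t in Y])
    b, a = np.polyfit(x, y, 1)
    return a, b

X = [(1, 2, 3), (3, 4, 5), (5, 6, 7)]    # hypothetical fuzzy predictor scores
Y = [(2, 3, 4), (5, 6, 7), (8, 9, 10)]   # hypothetical fuzzy job performance
a, b = fit_crisp_coeffs(X, Y)
print(f"Y ~ {a:.2f} + {b:.2f} X  (centroid-based least squares)")
```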
In the binary context, a consecutive-k-out-of-n: G system works if and only if at least k consecutive components are working. In the multi-state context, a consecutive-k-out-of-n: G system is in state j or above (j = 1, 2, …, M) if and only if at least k_l consecutive components are in state l or above for all l (1 ≤ l ≤ j). In this paper, we use minimal path vectors to evaluate the system state distribution. When M = 3, a recursive formula is provided for evaluating the system state distribution. When M ≥ 4, an algorithm is provided to bound the system state distribution. These bounds are sharper than those reported in the literature.
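For the binary case, the system reliability can be computed by a short dynamic program over the length of the current run of working components; a sketch (an independent implementation, not the paper's minimal-path-vector method):

```python
def consecutive_k_out_of_n_G(p, k):
    """p: list of component working probabilities; returns the probability
    that at least k consecutive components work (system reliability)."""
    run = [1.0] + [0.0] * (k - 1)  # run[j] = P(current run = j, no success yet)
    success = 0.0
    for pi in p:
        new = [0.0] * k
        for j, prob in enumerate(run):
            new[0] += prob * (1 - pi)     # component fails: run resets to 0
            if j + 1 == k:
                success += prob * pi      # run reaches k: system works
            else:
                new[j + 1] += prob * pi   # run grows by one
        run = new
    return success

print(consecutive_k_out_of_n_G([0.9] * 5, k=3))  # i.i.d. example
```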
Mobile ad hoc networks (MANETs) are ad hoc networks in which the nodes cooperatively route traffic to destination nodes that are beyond the wireless range of the source nodes. The nodes in the network act as both end devices and routers. The routing mechanism in MANETs differentiates them from other wireless networks. Developing a routing protocol that is both light on resources and efficient is a challenging task. Several routing protocols have been developed, but reactive routing protocols have found favor in most applications since they obtain a route only when a node has data to send, which results in lower routing load and better conservation of the nodes' meagre resources. The two prominent reactive routing protocols for mobile ad hoc networks are Dynamic Source Routing (DSR) and Ad hoc On-demand Distance Vector (AODV) routing. Both protocols have similar on-demand behavior, but differences in their protocol mechanisms can lead to significant performance differentials. These performance differentials are analyzed under varying network load and mobility.
ADR (Atomic Delayed Replication) is a controllable replication manager implemented on top of commercial distributed relational databases. ADR's goal is to enable various well-defined trade-offs between database coherence, throughput, and response time in large database networks, e.g. for telecom applications. By combining a strategy for distributed database design with a specific replication protocol, ADR preserves the ACID properties with a controlled relaxation of coherence between primary and secondary copies. We first discuss the formal characteristics of ADR and present the implementation techniques required to realize them on top of commercial distributed database technology. Then, after reviewing a validated analytical performance model for the approach, we demonstrate its flexibility by summarizing experiences with two industrial ADR applications in telecommunications management, both jointly developed with Philips Laboratories. One is database support for the integrated operation and evolution of Intelligent Network telephone services, where secondary copies are held within a distributed database system optimized for throughput and availability during schema evolution. The other concerns database support for mobile phones in a city-wide DECT (Digital Enhanced Cordless Telecommunications) setting, where secondary copies are held in main-memory caches outside the DBMS.
Adding virtual channels to wormhole-routed networks greatly improves performance because virtual channels reduce blocking by acting as "bypass" lanes for non-blocked messages. Although several analytical models have been proposed in the literature for k-ary n-cubes with deterministic routing, most of them do not include the effects of virtual channel multiplexing on network performance. This paper proposes a new and simple analytical model to compute message latency in k-ary n-cubes with an arbitrary number of virtual channels. Results from simulation experiments confirm that the proposed model exhibits a good degree of accuracy for various network sizes and under different operating conditions. The proposed model is then used to investigate the relative performance merits of two different organisations of virtual channels.
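Analytical models of this kind commonly capture virtual-channel multiplexing through Dally's average multiplexing degree (whether this paper adopts exactly this form is an assumption):

\[
\bar{V} \;=\; \frac{\sum_{i=1}^{V} i^{2} P_i}{\sum_{i=1}^{V} i\, P_i},
\]

where \(P_i\) is the probability that i of the V virtual channels of a physical channel are busy; the message latency is then scaled by \(\bar{V}\) to account for the slowdown each message experiences from sharing the physical bandwidth.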
As pervasive and high-density wireless networks become increasingly common, it is critical to address the intermittent disconnection, high error rates, and collisions that degrade the performance of wireless media access control protocols such as slotted ALOHA Time Division Multiple Access (slotted ALOHA/TDMA) and Direct Sequence Code Division Multiple Access (DS/CDMA). We propose adaptive techniques for improving the performance of media access protocols through awareness of the mobile communication environment. These techniques involve detection of intermittent disconnection, high error rates, and collisions. Upon detection and notification of these conditions by snooping devices, the media access control layer adapts its operation and synchronization accordingly to reduce delay and loss of bandwidth. Results from our simulation studies show that adaptive TDMA improves performance by as much as 12 times over basic TDMA, and adaptive CDMA by as much as 4 times over basic CDMA, in wireless networks with high-density cells. Overall, adaptive CDMA still performs better than adaptive TDMA, by about 4 times.
Wireless Sensor Networks (WSNs) have proven their success in a variety of applications for monitoring physical and critical environments. However, the streaming nature, limited resources, and unreliability of wireless communication are among the factors that affect the Quality of Service (QoS) of WSNs. In this paper, we propose a data mining technique to extract behavioral patterns of the sensor nodes during their operation. The behavioral patterns, which we refer to as Chronological Patterns, describe the set of sensors that report on events within a defined time interval and the order in which the events were detected. Chronological Patterns can serve as a helpful tool for predicting behaviors in order to enhance the performance of the WSN and thus improve the overall QoS. The proposed technique consists of a formal definition of Chronological Patterns and a new representation structure, which we refer to as the Chronological Tree (CT), that facilitates the mining of these patterns. To assess the performance of the CT, several experiments have been conducted to evaluate it under different density factors.
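A minimal sketch of how such patterns might be collected (the CT's internal layout is not given here, so this generic counting prefix tree is an assumption): events are grouped into fixed-length windows, and each ordered sequence of (sensor, event) pairs is inserted into the tree.

```python
from collections import defaultdict

def make_node():
    return {"count": 0, "children": defaultdict(make_node)}

def insert(tree, sequence):
    """Insert one ordered window of (sensor, event) pairs, counting prefixes."""
    node = tree
    for item in sequence:
        node = node["children"][item]
        node["count"] += 1

events = [(0.5, "s1", "heat"), (1.2, "s2", "smoke"), (7.9, "s1", "heat")]
window = 5.0  # hypothetical interval length, seconds
tree = make_node()
buckets = defaultdict(list)
for t, sensor, ev in events:
    buckets[int(t // window)].append((sensor, ev))
for seq in buckets.values():
    insert(tree, seq)
print(tree["children"][("s1", "heat")]["count"])  # -> 2
```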
In wireless networks, the vulnerability of the radio link to effects such as noise, interference, free-space loss, shadowing, and multipath fading must be considered. MAC protocols developed for these networks do not take these perturbations into account. It has been shown in the literature that 802.11 suffers from what is called the '802.11 anomaly'. This anomaly has two aspects: the throughput of every node in an 802.11 network falls to that of the worst-performing node, and the bandwidth is divided by the number of mobile nodes in the network. To improve the quality of service of a BSS (Basic Service Set) and to resolve the 802.11 anomaly, cross-layer approaches have been developed, based especially on information provided by the physical layer. In this study we propose a new cross-layer scheme, AMCLM (Adaptive Multi-services Cross-Layer MAC). The goal of this protocol is to improve the Quality of Service (QoS) of Mobile Nodes (MNs) connected in a BSS by temporarily disassociating those whose SNR (Signal to Noise Ratio) is below a defined threshold. In this way, the network's throughput is improved. Our approach aims to improve global network QoS through unselfish decisions by the nodes. To show the benefit of our method, a performance evaluation of the protocol has been carried out: we built the discrete Markov chain associated with the behavior of the AMCLM protocol to analyze the throughput of mobile nodes connected to the BSS.
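A minimal sketch of the analysis style (the actual AMCLM chain and its parameters are not reproduced; the states and rates below are hypothetical): build the protocol's discrete Markov chain, solve for the stationary distribution, and read off throughput as an expectation over states.

```python
import numpy as np

# Hypothetical 3-state chain: associated / low-SNR / temporarily disassociated.
P = np.array([[0.90, 0.10, 0.00],
              [0.60, 0.20, 0.20],
              [0.50, 0.00, 0.50]])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalised.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

throughput_per_state = np.array([5.0, 1.0, 0.0])  # Mb/s, hypothetical
print("pi =", np.round(pi, 3), " mean throughput =", pi @ throughput_per_state)
```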
Motivated by emerging applications, we consider sensor networks where the sensors themselves (not just the sinks) are mobile. Furthermore, we focus on mobility scenarios characterized by heterogeneous, highly changing mobility roles in the network. To capture these high dynamics of diverse sensory motion we propose a novel network parameter, the mobility level, which, although simple and local, quite accurately takes into account both the spatial and the speed characteristics of motion. We then propose adaptive data dissemination protocols that use the mobility level estimate to optimize performance, essentially exploiting high mobility (redundant message ferrying) as a cost-effective replacement for flooding: sensors tend to propagate less data in the presence of high mobility, while nodes of high mobility are favored for moving data around. These dissemination schemes are enhanced by a distance-sensitive probabilistic message flooding inhibition mechanism that further reduces communication cost, especially for fast nodes of high mobility level and as the distance to the data's destination decreases. Our simulation findings demonstrate significant performance gains of our protocols compared to non-adaptive protocols: adaptation increases the success rate and reduces latency (by up to 15%) while significantly reducing energy dissipation (in most cases by up to 40%). Our adaptive schemes also achieve significantly higher message delivery ratios and satisfactory energy-latency trade-offs compared to flooding when sensor nodes have limited message queues.
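An illustrative sketch of a distance-sensitive, mobility-aware forwarding rule (the functional form and parameters are assumptions, not the paper's mechanism): the forwarding probability drops as the node's mobility level rises and as the distance to the destination shrinks, so fast nodes near the target ferry the message rather than flood it.

```python
import random

def forward_probability(mobility_level, dist_to_dest, max_dist,
                        p_min=0.1, p_max=1.0):
    """Flooding probability inhibited by mobility and by proximity to target."""
    distance_factor = dist_to_dest / max_dist          # in [0, 1]
    inhibition = mobility_level * (1.0 - distance_factor)
    return max(p_min, p_max - inhibition)

def should_forward(mobility_level, dist_to_dest, max_dist):
    return random.random() < forward_probability(mobility_level,
                                                 dist_to_dest, max_dist)

# A fast node close to the destination rarely floods:
print(forward_probability(mobility_level=0.9, dist_to_dest=10, max_dist=100))
# A slow node far from the destination almost always forwards:
print(forward_probability(mobility_level=0.1, dist_to_dest=90, max_dist=100))
```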
Extreme actions, such as impact loads, involve many uncertainties and hence cannot be adequately analyzed by a deterministic approach. In this paper, an effective framework for the performance evaluation of reinforced concrete (RC) beams subjected to impact loading is proposed. For this purpose, a simple yet effective model considering shear-flexural interaction is developed based on available impact test results. By incorporating the shear effect, both the maximum displacement and the impact force are well predicted, validating the proposed model for the impact analysis of RC beams. The joint probability density function (PDF) of two damage indexes, the local drift ratio and the overall support rotation, is used to represent the local shear damage degree and the overall flexural damage degree. Taking advantage of the probabilistic framework and the effective model, a reliability analysis of RC beams under different impact scenarios is performed. The damage, described in this study by the joint PDF, is highly affected by the combination of impact mass and velocity. Mass-velocity (m-v) diagrams for various performance levels are therefore generated for the damage assessment of RC beams. Furthermore, the contribution of the local and global responses to the failure probability is quantified using the proposed probabilistic framework.
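A toy sketch of how an m-v diagram can be assembled within a probabilistic framework (the capacity model below is a stand-in, not the paper's validated beam model): failure probabilities are estimated by Monte Carlo over a grid of impact masses and velocities, and iso-probability contours in the m-v plane delimit performance levels.

```python
import numpy as np

rng = np.random.default_rng(42)

def failure_probability(mass, velocity, n_samples=20_000):
    """Toy model: failure when impact kinetic energy exceeds a random capacity."""
    demand = 0.5 * mass * velocity**2                                    # J
    capacity = rng.lognormal(mean=np.log(5e3), sigma=0.3, size=n_samples)  # J
    return np.mean(demand > capacity)

masses = np.linspace(100, 1000, 10)     # kg
velocities = np.linspace(1, 10, 10)     # m/s
pf = np.array([[failure_probability(m, v) for v in velocities] for m in masses])
# Points where pf stays below a target (e.g. 1e-2) delimit one performance
# level in the m-v plane; sweeping targets yields the family of m-v curves.
print(np.round(pf, 3))
```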