Traffic congestion is now nearly ubiquitous in urban areas and occurs most severely during rush hours. Rush-hour avoidance is an effective way to ease congestion, so accurately identifying the rush hour is an important step toward alleviating it. This paper provides a method for calculating the fuzzy peak hour of an urban traffic network from flow, speed and occupancy data. The calculation draws on the betweenness centrality of network theory, the optimal separation method, time-period weighting, probability–possibility transformations and trapezoidal approximations of fuzzy numbers. The fuzzy peak hour of the urban road traffic network (URTN) is a trapezoidal fuzzy number [m1, m2, m3, m4]. It helps us (i) to determine the traffic condition at each moment in more detail, (ii) to distinguish the five traffic states of the traffic network within one day, (iii) to analyze how each traffic state appears and disappears and (iv) to find the temporal pattern of residents' travel in a city.
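The trapezoidal fuzzy number [m1, m2, m3, m4] mentioned above can be read as a membership function over the hours of the day. A minimal sketch (the specific times below are hypothetical, not taken from the paper):

```python
def trapezoidal_membership(t, m1, m2, m3, m4):
    """Membership degree of time t in the trapezoidal fuzzy peak hour [m1, m2, m3, m4]."""
    if t <= m1 or t >= m4:
        return 0.0          # outside the support: definitely not peak
    if t < m2:
        return (t - m1) / (m2 - m1)   # rising edge: congestion building up
    if t <= m3:
        return 1.0          # core: fully established peak
    return (m4 - t) / (m4 - m3)       # falling edge: congestion dissipating

# Hypothetical morning peak: builds from 7:00, fully established
# 7:30-9:00, gone by 9:30 (times in decimal hours).
peak = (7.0, 7.5, 9.0, 9.5)
print(trapezoidal_membership(8.0, *peak))   # in the core -> 1.0
print(trapezoidal_membership(7.25, *peak))  # midway up the rising edge -> 0.5
```

Intermediate membership values on the edges are what lets the method describe the gradual appearance and disappearance of each traffic state, rather than forcing a crisp on/off peak boundary.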
In this paper, we present a logical framework that helps users assess a software system in terms of required survivability features. Survivability evaluation is essential when linking foreign software components to an existing system or obtaining software systems from external sources: it is important to ensure that foreign components or systems will not compromise the current system's survivability properties. Given the increasingly large scope and complexity of modern software systems, an evaluation framework is needed that accommodates uncertain, vague, or even ill-known knowledge to support robust evaluation against multi-dimensional criteria. Our framework incorporates user-defined constraints on survivability requirements, and links necessity-based possibilistic uncertainty and those constraints to logical reasoning. A proof-of-concept system has been developed to validate the proposed approach. To the best of our knowledge, our work is the first attempt to incorporate vague, imprecise information into software system survivability evaluation.
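The necessity-based possibilistic uncertainty mentioned above rests on two standard measures from possibility theory: the possibility Π(A) = max over states in A of π(s), and the necessity N(A) = 1 − Π(¬A). A small illustrative sketch (the survivability verdicts and distribution values are invented for illustration, not taken from the paper):

```python
def possibility(pi, event):
    """Pi(A): the degree to which A is consistent with what is known."""
    return max((pi[s] for s in event), default=0.0)

def necessity(pi, event, universe):
    """N(A) = 1 - Pi(complement of A): the degree to which A is certain."""
    complement = [s for s in universe if s not in event]
    return 1.0 - possibility(pi, complement)

# Hypothetical possibility distribution over survivability verdicts
pi = {"survivable": 1.0, "degraded": 0.4, "failed": 0.1}
states = list(pi)

print(possibility(pi, ["survivable", "degraded"]))  # max(1.0, 0.4) -> 1.0
print(necessity(pi, ["survivable"], states))        # 1 - max(0.4, 0.1) -> 0.6
```

Necessity is the natural measure for requirement checking: a high N("survivable") says the evidence rules out the alternatives, which is stronger than merely saying survivability is possible.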
In this paper, fuzzy linear regression models with fuzzy or crisp outputs and fuzzy or crisp inputs are considered. We define risk-neutral, risk-averse and risk-seeking fuzzy linear regression models. To do so, two equality indices are applied to express the degree of equality between a pair of fuzzy numbers. We also develop three mathematical models to obtain the parameters of the fuzzy linear regression models; their objective is to minimize the difference between the total spread of the observed and estimated values. The advantage of our proposed models is their simplicity in programming and computation.
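To make the objective above concrete, here is a sketch of evaluating the spread-difference criterion for a candidate model with symmetric triangular fuzzy coefficients, each a (center, spread) pair. This is one common parameterization in fuzzy regression, assumed here for illustration; the paper's own models may differ in detail:

```python
def estimate(x, a, c):
    """Fuzzy estimate for crisp input x under coefficients A_i = (a[i], c[i]):
    returns (center, spread) of the symmetric triangular output."""
    center = a[0] + a[1] * x
    spread = c[0] + c[1] * abs(x)   # spreads add; |x| keeps the spread nonnegative
    return center, spread

def total_spread_difference(data, a, c):
    """Sum over observations of |estimated spread - observed spread|:
    the kind of quantity the fitting models seek to minimize."""
    return sum(abs(estimate(x, a, c)[1] - e) for x, _, e in data)

# Hypothetical data: (crisp input x, fuzzy output center y, fuzzy output spread e)
data = [(1.0, 2.1, 0.3), (2.0, 3.9, 0.5), (3.0, 6.2, 0.8)]
a = (0.0, 2.0)   # candidate coefficient centers
c = (0.1, 0.2)   # candidate coefficient spreads
print(total_spread_difference(data, a, c))  # close to 0 -> spreads nearly match
```

An optimizer (linear or quadratic programming, depending on the model) would search over a and c to drive this difference down while respecting the chosen equality index between observed and estimated fuzzy outputs.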
To handle data with large variations, an interval piecewise regression method with automatic change-point detection by quadratic programming is proposed as an alternative to Tanaka and Lee's method. Their unified quadratic programming approach alleviates the tendency of some coefficients to become crisp in possibilistic regression by linear programming, and obtains the possibility and necessity models at the same time. However, it cannot guarantee the existence of a necessity model when a proper regression model is not assumed, especially for data with large variations. Using automatic change-point detection, the proposed method guarantees a necessity model with a better measure of fitness by accounting for variability in the data. When no piecewise terms appear in the estimated model, the proposed method reduces to Tanaka and Lee's model. The proposed method is therefore an alternative for handling data with large variations: it not only reduces the number of crisp coefficients of the possibility model obtained by linear programming, but also simultaneously obtains the fuzzy regression models, including possibility and necessity models, with better fitness. Two examples are presented to demonstrate the proposed method.
Linguistic summarization makes it possible to express large volumes of quantitative data in easily understandable, natural-language-based forms. While various methods have been proposed for linguistic summarization, none addresses data in which possibilistic and probabilistic uncertainties exist together. In this study, we establish a tie between the Z-number concept and type-I and type-II quantified sentences in order to calculate the truth degree of a linguistic summary covering both possibilistic and probabilistic information. The proposed approach employs copulas to obtain a joint probability distribution of the variables included in a type-II quantified sentence.
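For background, the truth degree of a basic type-I quantified sentence "Q objects are S" is classically computed via Zadeh's calculus: take the average membership of the records in the summarizer S and pass it through the quantifier's membership function. A minimal sketch, with an invented membership function for "most" and hypothetical data (the paper's Z-number extension layers possibilistic and probabilistic information on top of this):

```python
def truth_degree(quantifier, summarizer_degrees):
    """Truth of 'Q objects are S': T = mu_Q(sigma-count(S) / n)."""
    proportion = sum(summarizer_degrees) / len(summarizer_degrees)
    return quantifier(proportion)

def most(p):
    """A commonly used piecewise-linear membership function for 'most'."""
    if p <= 0.3:
        return 0.0
    if p >= 0.8:
        return 1.0
    return (p - 0.3) / 0.5

# Hypothetical degrees to which six records satisfy the summarizer 'high speed'
degrees = [0.9, 0.8, 1.0, 0.7, 0.6, 0.8]
print(truth_degree(most, degrees))  # proportion is about 0.8, so truth is near 1
```

The type-II case ("Q R objects are S") additionally weights each record by its membership in a qualifier R, which is where a joint distribution over the involved variables, and hence the copula construction, becomes necessary.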
This paper gives a brief overview of the well-known impossibility–possibility theorem on constructing a social welfare function from individual functions. The Analytic Hierarchy Process uses a fundamental scale of absolute numbers to represent judgments about dominance in paired comparisons. It is shown that such a function can be derived in two ways. One is from the synthesized functions of the judgments of each individual. The other is obtained by first combining the corresponding pairwise comparison judgments made by all the individuals, thus obtaining a matrix of combined judgments for the group, and then deriving a welfare function for the group. With consistency, the four conditions imposed by Arrow are satisfied; with inconsistency, an additional condition is needed.
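The second route described above, combining individual pairwise judgments into one group matrix and then deriving priorities, can be sketched with two standard AHP techniques: element-wise geometric-mean aggregation (which preserves the reciprocal property of the matrices) and the row geometric-mean approximation to the priority vector. The matrices below are invented for illustration:

```python
import math

def combine(matrices):
    """Group matrix: element-wise geometric mean of individual judgment matrices."""
    k, n = len(matrices), len(matrices[0])
    return [[math.prod(m[i][j] for m in matrices) ** (1.0 / k)
             for j in range(n)] for i in range(n)]

def priorities(matrix):
    """Priority weights via the row geometric-mean method, a standard
    approximation to the principal-eigenvector priorities of AHP."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Two hypothetical individuals comparing three alternatives on the 1-9 scale
m1 = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
m2 = [[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]]

group = combine([m1, m2])
print([round(w, 3) for w in priorities(group)])  # weights sum to 1
```

The resulting group weights rank the alternatives, here preserving the ordering both individuals broadly agree on, and play the role of the derived group welfare function in this sketch.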
The purpose of this fairly nontechnical introduction to uncertainty management is to identify various forms of uncertainty, and to survey methods for managing some of these uncertainties. Our emphasis is on topics that may not be familiar to software engineers or, to a lesser extent, to knowledge engineers. These topics include Bayesian estimation, fuzziness, time Petri nets, rough sets, belief and evidence, and possibility theory. Uncertainty management has been studied in the contexts of information systems and of artificial intelligence. We attempt to present a balanced view of the contributions from both areas.