Enhancing ecological efficiency is a crucial avenue for realizing carbon emission reduction without compromising economic and social development. This study introduces a fairness-concern ecological efficiency evaluation model that addresses the overestimation of efficiency values in traditional models while maintaining the balance between production activities and environmental conservation in the measurement of ecological efficiency. The theoretical model is applied to 28 OECD countries and reveals their general level of ecological efficiency from 2013 to 2017. The efficiency assessments presented here offer policymakers valuable insights for enhancing efficiency according to diverse preferences.
Using data from the 2011 China Household Finance Survey, structural equation modeling shows that respondents with a higher time preference rate have a significantly higher probability of investing in stocks, implying that short-term-oriented households prefer stock investment. Social insurance programs and insurance policies held by the family have a significant direct positive effect on stock investment, and also a significant direct positive effect on the respondent's time preference, which further indirectly increases the family's stock investment. These results show that the safety net built by the Chinese government, including social security and commercial insurance, is very likely to attract more short-term investors into the stock market. The empirical results provide new evidence to explain the extreme volatility of the Chinese stock market and also testify to the policy effect of building an environment in which people can earn property income in China.
In this paper, we analyze the impact of joint energy and output price uncertainty on input demands in a mean–variance framework. We find that an increase in the expected output price will surely cause the risk-averse firm to increase input demand, while an increase in the expected energy price will surely cause the firm to decrease the demand for energy but increase the demand for the non-risky inputs. Furthermore, we investigate the two cases with only an uncertain energy price and only an uncertain output price. In the case with only an uncertain energy price, we find that the uncertainty has no impact on the demands for the non-risky inputs. We also show that the concepts of elasticity and decreasing absolute risk aversion (DARA) play an important role in the comparative statics analysis.
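To fix ideas, the kind of objective such a mean–variance analysis typically starts from can be sketched as follows; the notation (output price, energy price, risk-aversion parameter) is assumed for illustration and is not taken from the paper.

```latex
% Illustrative mean-variance objective for a firm facing random output and
% energy prices (notation assumed, not the paper's):
\[
  \tilde{\pi} \;=\; \tilde{p}\, f(x, e) \;-\; \tilde{w}_e\, e \;-\; w_x\, x,
  \qquad
  \max_{x,\; e}\;\; \mathbb{E}[\tilde{\pi}] \;-\; \frac{\lambda}{2}\,\operatorname{Var}(\tilde{\pi}),
\]
% where x is the non-risky input, e the energy input, and \lambda > 0 captures
% risk aversion; comparative statics follow from the first-order conditions.
```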
When decisions are made under uncertainty (DMUU), the decision maker either has at their disposal an interval of possible profits for each alternative (interval DMUU) or a discrete set of payoffs for each decision, where the profit associated with a given alternative depends on the state of nature (scenario DMUU). Existing methods for ranking decisions in the second problem take into account, to differing extents, how the profits assigned to alternatives are ordered in the payoff matrix and where a given outcome stands relative to the other outcomes for the same state of nature. The author proposes and describes several alternative procedures that connect the structure of the payoff matrix with the selected decision. These methods are tailored to the objectives and the nature of the decision maker. They draw on Savage's approach, the maximin joy criterion, normalization techniques, and elements of expected utility maximization and prospect theory.
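As a minimal sketch of the scenario-DMUU criteria mentioned above, the snippet below computes the Wald maximin choice, Savage's minimax-regret choice, and one plausible reading of the maximin joy criterion on a toy payoff matrix; the matrix values are illustrative and not drawn from the paper.

```python
import numpy as np

# Illustrative payoff matrix: rows = decisions, columns = states of nature.
payoffs = np.array([
    [10.0,  4.0,  7.0],
    [ 6.0,  6.0,  6.0],
    [12.0,  1.0,  5.0],
])

# Wald maximin: pick the decision whose worst-case payoff is largest.
maximin_choice = int(np.argmax(payoffs.min(axis=1)))

# Savage: regret = shortfall from the best payoff in each state (column);
# pick the decision with the smallest maximum regret.
regret = payoffs.max(axis=0) - payoffs
savage_choice = int(np.argmin(regret.max(axis=1)))

# "Maximin joy" (one plausible reading): joy = gain over the worst payoff
# in each state; pick the decision whose minimum joy is largest.
joy = payoffs - payoffs.min(axis=0)
joy_choice = int(np.argmax(joy.min(axis=1)))

print(maximin_choice, savage_choice, joy_choice)
```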
A Mobile Ad hoc Network (MANET) requires the optimal allocation of both shared channels and power, so obtaining a trade-off between energy consumption and delay is the major challenge. Maximizing the lifetime of a resource-constrained, unreliable wireless network depends on optimal power allocation. We propose an Integrated Spider Monkey Optimization Algorithm (ISMOA) to minimize energy consumption and enhance throughput in MANETs. A Nelder–Mead model (NMM) is used to improve the local leader stage of the spider monkey optimization algorithm. The approach improves total energy consumption, throughput, and delay. The experiments are carried out in NS-2, and the results demonstrate low delay and energy consumption together with high throughput. Moreover, the proposed method outperforms state-of-the-art methods.
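The hybridization idea behind the local-leader refinement can be illustrated generically as below; this is not the authors' ISMOA, only a sketch in which a short Nelder–Mead search polishes the best candidate of a swarm on a placeholder objective.

```python
import numpy as np
from scipy.optimize import minimize

# Generic sketch of the hybridization idea (not the authors' ISMOA): refine a
# swarm's local-leader position with a short Nelder-Mead search on the fitness.
def fitness(x):
    # Placeholder objective standing in for an energy/delay cost model.
    return np.sum(x ** 2) + 0.1 * np.sum(np.abs(x))

rng = np.random.default_rng(1)
swarm = rng.uniform(-5, 5, size=(8, 3))          # 8 candidate allocations
leader = swarm[np.argmin([fitness(x) for x in swarm])]

# Local-leader refinement step: a few Nelder-Mead iterations around the leader.
result = minimize(fitness, leader, method="Nelder-Mead",
                  options={"maxiter": 50, "xatol": 1e-4, "fatol": 1e-4})
if result.fun < fitness(leader):
    leader = result.x
print(np.round(leader, 4), round(float(result.fun), 6))
```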
Data mining extracts valuable information that shows where improvement is possible and goals can be met. Existing data mining techniques are applied in education to analyze the performance of students, classes, and institutions, helping teachers and management identify where they lag and improve. Some mining techniques are also used to predict and identify categories of special children. In this paper, data mining techniques are applied to predict achievement in special-school study for categories of children such as mental retardation (MR), autism, and cerebral palsy, using collected assessment details. Vocational training is considered essential for special-school children's future survival. Apriori rule mining and high-utility pattern mining algorithms are applied to mine the most achievable and essential training factors as patterns, based on assessments made between the ages of 10 and 14, where the Madras Developmental Programming Scale is followed in special schools. We can predict which categories of children achieve the necessary factors, which helps identify alternative methods for training a particular activity. Essential factors not achieved at this level will be trained at the next pre-vocational level so that the children can progress to vocational training. These predictions will help teachers train the children according to their achievements.
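A minimal sketch of the level-wise Apriori idea used for such pattern mining is shown below; the factor names, toy records, and support threshold are placeholders, not the study's actual assessment items.

```python
from itertools import combinations

# Toy assessment records: each set lists training factors a child achieved
# (factor names are illustrative placeholders, not the study's actual items).
records = [
    {"self_care", "motor_skills", "communication"},
    {"self_care", "communication"},
    {"self_care", "motor_skills"},
    {"motor_skills", "communication"},
]
min_support = 0.5  # keep itemsets achieved in at least half of the records

def support(itemset):
    return sum(itemset <= r for r in records) / len(records)

# Level-wise Apriori search: grow candidate itemsets one item at a time,
# keeping only those that meet the minimum support threshold.
items = sorted({i for r in records for i in r})
frequent, size = {}, 1
current = [frozenset([i]) for i in items]
while current:
    kept = [c for c in current if support(c) >= min_support]
    frequent.update({c: support(c) for c in kept})
    size += 1
    current = [frozenset(u) for u in
               {a | b for a in kept for b in kept if len(a | b) == size}]

for itemset, s in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(set(itemset), round(s, 2))
```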
Purpose: The appropriate surgical procedure for Stage 3 osteonecrosis of the femoral head (ONFH) remains controversial. This study aimed to evaluate and compare postoperative quality of life (QOL) in patients who underwent total hip arthroplasty (THA) and bipolar hemiarthroplasty (BHA) for ONFH based on comprehensive and disease-specific scales using patient-directed questionnaires.
Methods: We included 54 of 66 patients who underwent artificial joint replacement for ONFH at our hospital between April 2013 and September 2020 and were more than 1 year post-surgery. THA was performed for ONFH Stage 4 and BHA for Stage 3 or below. The mean postoperative observation period in the THA and BHA groups was 3.9 and 3.7 years, respectively. The Short-Form 6-Dimension measure was used to calculate utility values.
Results: No significant differences in questionnaire results regarding disease-specific or comprehensive measures were observed after arthroplasty for ONFH between the THA and BHA groups. The utility values were 0.60 and 0.58 in the THA and BHA groups, respectively.
Conclusion: The postoperative QOL was similar between patients who underwent THA for Stage 4 ONFH and BHA for Stage 3 ONFH. Therefore, THA or BHA can be performed on patients with ONFH after considering age, stage classification, and previous medical conditions.
In complete financial markets, given a particular market variable, which could be finite dimensional (e.g., a price vector of a collection of stocks) or infinite dimensional (e.g., a price trajectory of some security over some period of time), the unique optimal strategy of consumption and investment in European claims contingent on that variable is obtained from two kinds of preference structure. Several examples are given to illustrate the optimality of the strategy. Results obtained in this paper are an extension of Jankunas [4].
Utility of DNA Profiling in Quality Control of Medicinal Herbs.
This paper develops a financial utility function that incorporates compliance with a qualitative characteristic and studies its impact on market equilibrium prices, whether the criterion is Sharia compliance, fair trade, environmental, social and governance principles, or another ethical aspect. The goal is to show that individual utility can depend on parameters other than wealth and risk aversion, and that these parameters therefore influence equilibrium market prices. This is done by examining a possible utility function that takes into account the individual's sensitivity to the criterion and the intrinsic quality of compliance with that criterion. To demonstrate the effectiveness of the proposed utility function, a simulation is run using an agent-based approach on the NetLogo platform. Examination of the impact of these parameters makes clear that compliance with a qualitative characteristic affects individual utility, supply and demand, and hence equilibrium prices. This research highlights the importance of ethical considerations in individual decision making and how markets respond to them.
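One plausible form of such a utility function, written only to illustrate the idea and not taken from the paper, augments a standard mean–variance term with a compliance term:

```latex
% Illustrative only; symbols are assumptions, not the paper's specification.
\[
  U_i(\tilde{w}, q) \;=\; \mathbb{E}[\tilde{w}] \;-\; \frac{\lambda_i}{2}\,\operatorname{Var}(\tilde{w}) \;+\; s_i\, q,
\]
% where \lambda_i is the agent's risk aversion, s_i the agent's sensitivity to
% the ethical criterion, and q \in [0, 1] the asset's intrinsic quality of
% compliance with that criterion.
```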
In this paper we present a first attempt to represent the social behavior of actors in a resource sharing context in such a way that different forms of solidarity can be detected and measured. We expect that constructing agent-based models of water-related interactions at the interface of urban and rural areas, and running social simulations to study the occurrence and consequences of solidary behavior, will produce insights that may eventually contribute to water and land resource management practice. We propose a typology for solidary behavior, present the agent-based architecture that we are using, show some illustrative results, and formulate some questions that will guide our future work.
This article introduces both a new algorithm for reconstructing epsilon-machines from data and the notion of decisional states. These are defined as the internal states of a system that lead to the same decision, based on a user-provided utility or pay-off function. The utility function encodes a priori knowledge external to the system; it quantifies how costly it is to make mistakes. The intrinsic underlying structure of the system is modeled by an epsilon-machine and its causal states. The decisional states form a partition of the lower-level causal states that is defined according to the higher-level user's knowledge. From a complex-systems perspective, the decisional states are thus the "emerging" patterns corresponding to the utility function. The transitions between these decisional states correspond to events that lead to a change of decision. The new REMAPF algorithm estimates both the epsilon-machine and the decisional states from data. Application examples are given for hidden model reconstruction, cellular automata filtering, and edge detection in images.
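The partition-by-decision idea (not the REMAPF algorithm itself) can be sketched on a toy machine as follows; the states, predictive distributions, and utility table are illustrative assumptions.

```python
import numpy as np

# Toy epsilon-machine: each causal state has a distribution over the next
# symbol (0 or 1).  States and probabilities are illustrative only.
next_symbol_probs = {
    "A": np.array([0.9, 0.1]),
    "B": np.array([0.2, 0.8]),
    "C": np.array([0.85, 0.15]),
}

# User-provided utility: utility[d][s] = payoff of taking decision d
# when the next symbol turns out to be s.
utility = {"predict_0": np.array([1.0, -1.0]),
           "predict_1": np.array([-1.0, 1.0])}

# Decisional states: partition the causal states by the decision that
# maximizes expected utility under each state's predictive distribution.
def best_decision(probs):
    return max(utility, key=lambda d: float(utility[d] @ probs))

partition = {}
for state, probs in next_symbol_probs.items():
    partition.setdefault(best_decision(probs), []).append(state)

print(partition)   # e.g. {'predict_0': ['A', 'C'], 'predict_1': ['B']}
```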
To study the optimal R&D strategies of a firm in a dynamic environment, this paper introduces a reputation model of two-stage R&D decision-making that employs signaling games based on Schumpeter's process of creative destruction. There are two players in the game: a sender with private information about its own synthesized capability, which is of type H (high) or L (low), and a receiver without private information. The reputation model examines whether the type L sender has an incentive to build reputation in the first phase. We solve the game without reputation by backward induction and compare the results with those obtained when reputation is considered. We show that the optimal signal of the type L sender is larger in phase two if it builds up reputation in phase one. The type L sender's utility is lower in phase one if it builds up reputation then; however, it receives higher utility in phase two. Our results can be applied to setting marketing and sales strategies.
This paper explores whether the decisions made by a negotiator during negotiations are consistent with her preferences. By considering the entire set of offers exchanged during a negotiation, the measures of consistency developed in this paper provide a compact representation of important behavioral characteristics throughout the negotiation process.
The consistency measures developed in this paper are validated with data from an experimental study in which the impact of two factors on negotiation processes is studied: the availability of analytical support and imposed vs. elicited preferences. We find that negotiators behave more consistently when preferences are assigned to them by the experimenters than when their preferences are elicited. On the other hand, an impact of analytical support is only found when preferences are elicited. These results shed light on both the design of negotiation experiments and the development of negotiation support systems.
Options are among the most important forms of compensation and incentive structuring. Standard option pricing theory provides guidelines but not a conclusive prescription of how to value executive stock options. Academic research on this subject has gone in several related but distinct directions. This paper examines one thread of this research stream: binomial models based on expected utility. We start by illustrating the procedures for estimating executive option values using expected utility analysis in a binomial framework. Using a common set of inputs based on empirical data, we compare option values and company costs based on differences in inputs and assumptions. Our findings identify variables that are important and others with relatively minor impact. We also examine the effect of dividends on executive stock options values, a topic that has been largely ignored to date. We present the argument for why the economic cost of an option equals its economic value, which contrasts with standard accounting procedures. This conflict between economics and accounting, while not new, can explain why corporations are so uncomfortable with new accounting rules for expensing executive stock options.
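A minimal one-period sketch of the expected-utility (certainty-equivalent) valuation idea is given below; the parameters, the CRRA specification, and the outside-wealth assumption are illustrative choices, not the paper's calibration or inputs.

```python
import numpy as np

# One-period binomial sketch: an undiversified, risk-averse executive values
# the option at its certainty equivalent, while the company cost is the
# standard risk-neutral value.  All numbers below are assumptions.
S0, K, u, d, r = 100.0, 100.0, 1.25, 0.80, 0.05   # stock, strike, moves, rate
p_real, gamma, outside_wealth = 0.55, 2.0, 200.0   # real prob., CRRA, other wealth

payoff = np.array([max(S0 * u - K, 0.0), max(S0 * d - K, 0.0)])

# Company cost: risk-neutral valuation.
q = ((1 + r) - d) / (u - d)
company_cost = (q * payoff[0] + (1 - q) * payoff[1]) / (1 + r)

# Executive value: certainty equivalent under CRRA utility of terminal wealth.
def crra(w, g):
    return np.log(w) if g == 1.0 else w ** (1 - g) / (1 - g)

wealth = outside_wealth * (1 + r) + payoff          # undiversified terminal wealth
eu = p_real * crra(wealth[0], gamma) + (1 - p_real) * crra(wealth[1], gamma)
certainty_equiv = (eu * (1 - gamma)) ** (1 / (1 - gamma))   # invert CRRA (gamma != 1)
exec_value = (certainty_equiv - outside_wealth * (1 + r)) / (1 + r)

print(round(company_cost, 2), round(exec_value, 2))   # exec value < company cost
```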
Discrete choice modeling (DCM) is widely used in economics, social studies, and marketing research for estimating utilities and preference probabilities of multiple alternatives. Data for the model are elicited from respondents who are presented with several sets of items characterized by various attributes; each respondent chooses the best alternative in each set. Estimation of utilities is usually performed with multinomial logit (MNL) modeling, and hierarchical Bayesian (HB) software is typically used to find individual utilities through iterative estimation. This paper describes an easy and convenient empirical Bayesian way to construct priors and combine them with the likelihood on individual-level data, allowing the modeler to obtain posterior estimates of MNL utilities without iterative evaluation. Logistic modeling of the posterior frequencies is performed using a linear link on their log-odds, which clarifies the results of DCM. The problem of overfitting is considered, and an optimum balance between signal and noise is suggested, trading off the precision of individual prediction against the smoothing of the overall data. Actual market research data are used and the results are discussed.
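For readers less familiar with MNL, the core computation is a softmax over attribute-based utilities within each choice set; the attribute matrix and part-worths in the sketch below are illustrative assumptions, not estimates from the paper's data.

```python
import numpy as np

# Minimal multinomial-logit sketch: choice probabilities in one choice set.
X = np.array([            # rows = alternatives, columns = attributes
    [1.0, 0.0, 2.5],
    [0.0, 1.0, 1.5],
    [0.0, 0.0, 3.0],
])
beta = np.array([0.8, 0.4, -0.6])   # part-worth utilities (assumed)

utilities = X @ beta
probs = np.exp(utilities - utilities.max())       # numerically stabilized softmax
probs /= probs.sum()

chosen = 1                                        # respondent picked alternative 1
log_likelihood = np.log(probs[chosen])            # contribution to the likelihood
print(np.round(probs, 3), round(float(log_likelihood), 3))
```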
Maximum difference (MaxDiff) is a discrete choice modeling approach widely used in marketing research for finding utilities and preference probabilities among multiple alternatives. It can be seen as an extension of the paired comparisons of the Thurstone and Bradley–Terry techniques to the simultaneous presentation of three, four, or more items to respondents. A respondent identifies the best and the worst items, so the remaining ones are deemed intermediate in preference. Estimation of individual utilities is usually performed with hierarchical Bayesian (HB) multinomial logit (MNL) modeling. The MNL model can be reduced to a logit model on data composed of two specially constructed design matrices of prevalence from the best and the worst sides. The composed data can be large, which makes logistic modeling less precise and very demanding in computer time and memory. This paper describes how utilities and choice probabilities can be obtained from the raw data, applying empirical Bayes techniques instead of HB methods. This approach enriches MaxDiff and is useful for estimation on large data sets. The results of the analytical approach are compared with HB-MNL and several other techniques.
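A common way to write the best-worst choice probabilities, sketched below with assumed utilities, is a logit over the utilities for the "best" pick and a logit over the negated utilities (among the remaining items) for the "worst" pick; this is a generic MaxDiff formulation, not necessarily the exact specification used in the paper.

```python
import numpy as np

# MaxDiff sketch: items shown in one task, with assumed (not estimated) utilities.
u = np.array([0.9, 0.1, -0.4, -0.6])

# Probability each item is chosen as "best": logit over u.
p_best = np.exp(u) / np.exp(u).sum()

# Given the modal best item, probability each remaining item is chosen as
# "worst": logit over -u restricted to the remaining items.
best = int(np.argmax(p_best))
mask = np.arange(len(u)) != best
p_worst = np.zeros_like(u)
p_worst[mask] = np.exp(-u[mask]) / np.exp(-u[mask]).sum()

print(np.round(p_best, 3), np.round(p_worst, 3))
```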
With the widespread growth of cloud technology, a virtual server deployed on a cloud platform may collect useful data from a client and then disclose the client's sensitive data without permission. Hence, from the perspective of cloud clients, it is very important to take concrete technical measures to protect privacy on the client side. Accordingly, different privacy protection techniques have been presented in the literature for safeguarding the original data. This paper presents a technique for privacy preservation of cloud data using the Kronecker product and Bat algorithm-based coefficient generation. Overall, the proposed method proceeds in two steps. In the first step, the PU coefficient is found optimally using the PUBAT algorithm with a new objective function. In the second step, the input data and the PU coefficient are used to compute the privacy-protected data for publishing in the cloud environment. For the performance analysis, experiments are performed on three datasets, namely Cleveland, Switzerland, and Hungarian, and evaluation uses accuracy and DBDR. The proposed algorithm obtained an accuracy of 94.28%, compared with 83.64% for the existing algorithm, demonstrating utility; it also obtained a DBDR of 35.28%, compared with 12.89% for the existing algorithm, demonstrating the privacy measure.
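One plausible reading of Kronecker-product-based perturbation, sketched purely for illustration and not the paper's exact construction, is to build a transform as the Kronecker product of small coefficient blocks and apply it to the data before publishing.

```python
import numpy as np

# Hedged sketch of Kronecker-product data perturbation (one plausible reading,
# not the paper's exact construction).
rng = np.random.default_rng(0)
data = rng.normal(size=(6, 4))            # 6 records, 4 attributes (toy data)

A = rng.uniform(0.5, 1.5, size=(2, 2))    # small coefficient blocks; in the
B = rng.uniform(0.5, 1.5, size=(2, 2))    # paper these would be optimized
T = np.kron(A, B)                         # 4 x 4 transform over the attributes

protected = data @ T                      # published, perturbed records

# With an invertible T, an authorized party holding A and B can undo the step.
recovered = protected @ np.linalg.inv(T)
print(np.allclose(recovered, data))
```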
Cloud computing has become a powerful mechanism for initiating secure communication among users. Advances in the technology have led to various services, such as network, resource, and platform access. However, handling large datasets and ensuring security are major issues in cloud systems. Hence, this paper proposes Jaya–Whale Optimization (JWO), which integrates the Jaya algorithm with the Whale Optimization Algorithm (WOA) and adopts homomorphic encryption for secure data transmission in the cloud. The original data are preserved by generating a Data Protection (DP) coefficient using the proposed JWO algorithm, whose fitness is calculated from privacy and utility parameters to select the optimal solution. The sanitized data are generated by XORing the Key Information Product (KIP) matrix with the key vector. Finally, the data owner provides the key to users for retrieving the original data from the sanitized data. Experiments are carried out on the Cleveland, Hungarian, and Switzerland datasets, and the analysis shows that the proposed JWO provides superior performance in terms of BD, accuracy, and fitness, with values of 0.720, 0.822, and 0.722, respectively.
The uncertainty premium is the premium derived from not knowing the sure outcome (the risk premium) and from not knowing the precise odds of outcomes (the ambiguity premium). We generalize Pratt's risk premium to an uncertainty premium based on Klibanoff et al.'s (2005) smooth model of ambiguity. We show that the uncertainty premium can decrease as the decision maker's risk aversion increases. This happens because increasing risk aversion always results in a lower ambiguity premium. The positive ambiguity premium may provide an additional explanation of the equity premium puzzle.
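For background, the standard definitions being generalized can be written as follows; the notation is assumed for illustration, and the abstract's uncertainty premium is defined analogously to Pratt's premium but relative to the smooth-ambiguity functional.

```latex
% Pratt's risk premium \pi for a wealth prospect \tilde{w} under utility u:
\[
  u\!\left(\mathbb{E}[\tilde{w}] - \pi\right) = \mathbb{E}\!\left[u(\tilde{w})\right],
  \qquad
  \pi \;\approx\; \tfrac{1}{2}\,\sigma_{\tilde{w}}^{2}\, A(w),
  \quad A(w) = -\frac{u''(w)}{u'(w)}.
\]
% Klibanoff et al. (2005) smooth ambiguity preferences, where \mu is a
% second-order prior over models \theta and \phi captures ambiguity attitude:
\[
  V(\tilde{w}) \;=\; \mathbb{E}_{\mu}\!\left[\,\phi\!\left(\mathbb{E}_{\pi_{\theta}}\!\left[u(\tilde{w})\right]\right)\right].
\]
```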