This paper is concerned with new generalizations and refinements of the diamond-α integral Cauchy–Schwarz inequality on time scales. A new generalization and refinement of the Cauchy–Schwarz inequality related to the diamond integral on time scales is also considered. The inequalities obtained unify continuous inequalities associated with the Cauchy–Schwarz inequality and their corresponding discrete forms.
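For context, the classical diamond-α Cauchy–Schwarz inequality on a time scale 𝕋, which results of this kind generalize and refine, is usually stated as follows (notation follows the standard time-scales literature; this is background, not the paper's new inequality):

\[
\int_a^b \lvert f(x)\,g(x)\rvert \,\diamond_\alpha x \;\le\; \left( \int_a^b f^2(x)\,\diamond_\alpha x \right)^{1/2} \left( \int_a^b g^2(x)\,\diamond_\alpha x \right)^{1/2},
\]

where $a, b \in \mathbb{T}$, $\alpha \in [0,1]$, and $f, g$ are $\diamond_\alpha$-integrable on $[a,b]_{\mathbb{T}}$; taking $\alpha = 1$ recovers the delta-integral case and $\alpha = 0$ the nabla-integral case.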
Ontologies are attracting increasing attention in software engineering research due to their ability to precisely model the semantic aspects of systems. Enriching software system models with ontology principles, especially when developing interactive systems such as Service-Oriented Architectures (SOA), can lead to the automated production of high-quality code. Additionally, ontology-aware specification of services from the abstract to the concrete level enables early, precise extraction of the services required by users. This paper introduces a model-driven, ontology-aware service development process to reduce the burden of code generation. The integrated approach uses a stepwise refinement methodology, supported by a novel refinement algorithm, to automate SOA development. The effectiveness of the approach is evaluated with three proposed parameters that examine the characteristics of the refined model at each refinement step across several practical SOA case studies. Finally, we report the recall, precision, F-measure, and accuracy of discovered querying services, together with a time analysis, for various scenarios in AWS and Netflix before and after applying the ontology. The results show an average improvement of 17% in these metrics after applying the ontology.
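For reference, the retrieval metrics named above are conventionally computed as in the following minimal sketch (the function and its binary-relevance assumption are illustrative, not the paper's evaluation code):

```python
def retrieval_metrics(retrieved: set, relevant: set, total: int):
    """Compute precision, recall, F-measure and accuracy for one
    service-discovery query, given the set of retrieved services,
    the set of truly relevant services, and the corpus size.
    Assumes both sets are subsets of the corpus."""
    tp = len(retrieved & relevant)   # relevant services found
    fp = len(retrieved - relevant)   # irrelevant services returned
    fn = len(relevant - retrieved)   # relevant services missed
    tn = total - tp - fp - fn        # correctly ignored services
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    accuracy = (tp + tn) / total if total else 0.0
    return precision, recall, f_measure, accuracy
```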
Hybrid dynamic systems include both continuous and discrete state variables. Properties of hybrid systems, which have an infinite state space, can often be verified using ordinary model checking together with a finite-state abstraction. Model checking can be inconclusive, however, in which case the abstraction must be refined. This paper presents a new procedure to perform this refinement operation for abstractions of hybrid systems. Following an approach originally developed for finite-state systems [11, 25], the refinement procedure constructs a new abstraction that eliminates a counterexample generated by the model checker. For hybrid systems, analysis of the counterexample requires the computation of sets of reachable states in the continuous state space. We show how such reachability computations with varying degrees of complexity can be used to refine hybrid system abstractions efficiently. Examples illustrate our counterexample-guided refinement procedure. Experimental results for a prototype implementation indicate significant advantages over existing methods.
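The counterexample-guided refinement loop described here follows a well-known skeleton, sketched below; the callables stand in for the model checker, the continuous reachability analysis, and the refinement step, and are placeholders rather than the authors' implementation:

```python
def cegar(abstraction, model_check, is_spurious, refine):
    """Counterexample-guided abstraction refinement (CEGAR) skeleton.

    model_check(abstraction)  -> None if the property holds on the
                                 abstraction, else a counterexample path.
    is_spurious(path)         -> True if reachability analysis in the
                                 continuous state space shows the path
                                 cannot occur in the concrete system.
    refine(abstraction, path) -> a finer abstraction eliminating the path.
    """
    while True:
        counterexample = model_check(abstraction)
        if counterexample is None:
            return "property verified"
        if not is_spurious(counterexample):
            return ("property violated", counterexample)
        # Spurious path: refine so the model checker can no longer
        # produce this counterexample.
        abstraction = refine(abstraction, counterexample)
```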
It is well known that the plasticity of magnesium and magnesium alloys is too low for them to be processed at low temperature. Normal hot extrusion improves the plasticity of magnesium alloys, but not noticeably. In this paper, cold extrusion of commercial AZ31 magnesium alloy with severe plastic deformation was performed at room temperature at extrusion ratios ranging from 2:1 to 12.5:1 in order to refine the grain size of the alloy; grain sizes of 2–3 µm were ultimately obtained, and the influence of grain size on plasticity was further investigated. The initial billet, with a grain size of 300 µm, had an elongation of about 10.4%. After extrusion at room temperature at the various extrusion ratios, the average grain size was refined to below 10 µm and the elongation increased to 24–30%. To study the effect of different processing techniques on the plasticity of AZ31 magnesium alloy, we also examined the correspondence between plasticity and grain size. All results showed that the increase in elongation can be attributed mainly to the refinement of the magnesium grain size, and that when the average grain size of AZ31 magnesium alloy fell below 5 µm, the plasticity achieved by cold extrusion increased noticeably.
The crystallization of a melt-spun Fe-Pt-B amorphous alloy ribbon 20 µm thick in a steady high magnetic field (HMF) was investigated. The applied magnetic field intensity ranged from 0 to 10 T, with the ribbons annealed at temperatures ranging from 673 K to 873 K. The direction of the applied magnetic field was perpendicular to the direction of the surface velocity of the Cu wheel. The thermomagnetic properties of the samples were obtained with a thermogravimetric analyzer with a magnet attached to the outside of the furnace, which was filled with flowing inert gas. The phase transformations of the Fe-Pt-B amorphous alloy on heating, with and without the steady HMF, were studied by X-ray diffractometry together with the melt-spun ribbon. The average grain size of the samples annealed in the HMF, calculated from the XRD patterns, was about 10 nm. The HMF is thought to be effective in refining the nanograins produced by the crystallization of the amorphous ribbon during annealing.
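Grain size is conventionally estimated from XRD line broadening via the Scherrer equation; the abstract does not name the method, so the formula below is offered only as the standard approach one would assume:

\[
D = \frac{K\lambda}{\beta\cos\theta},
\]

where $D$ is the mean crystallite size, $K \approx 0.9$ is the shape factor, $\lambda$ is the X-ray wavelength, $\beta$ is the line broadening at half maximum (in radians), and $\theta$ is the Bragg angle.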
Mesh adaptation is a reliable and effective way to improve the precision of flow simulation in computational fluid dynamics. Mesh refinement is a common technique for simulating steady flows; to dynamically optimize the mesh for transient flows, mesh coarsening must also be part of the iterative procedure. In this paper, we propose a robust mesh adaptation method that includes both refinement and coarsening. A k-way tree data structure is adopted to store and access the parent–children relationships of mesh elements. Local element subdivision is employed to refine the mesh, and element mergence is devised to coarsen it. Unrefined elements adjacent to a refined element are converted to polyhedra to eliminate suspended points (hanging nodes), which also prevents refinement from diffusing from one refined element to its neighbors. Based on an adaptation detector for vortex recognition, the mesh adaptation was integrated to simulate the unsteady flow around a tri-wedge. The numerical results show that the mesh zones where vortices are located are refined in real time and that the vortices are resolved better with mesh adaptation.
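A minimal sketch of the parent–children bookkeeping that such a k-way tree implies is given below (class and method names are illustrative, not the authors' data structure):

```python
class MeshElement:
    """Node of a k-way refinement tree: each refined element stores the
    children it was subdivided into, so coarsening can merge them back."""

    def __init__(self, element_id, parent=None):
        self.element_id = element_id
        self.parent = parent
        self.children = []  # empty list => leaf (active element)

    def refine(self, child_ids):
        """Local element subdivision: replace this leaf with k children."""
        self.children = [MeshElement(cid, parent=self) for cid in child_ids]
        return self.children

    def coarsen(self):
        """Element mergence: drop the children, reactivating this element."""
        self.children = []

    def active_leaves(self):
        """Iterate over the currently active (unrefined) elements."""
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.active_leaves()
```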
A method for improving the segmentation of images is presented. It involves taking an initial segmentation provided by some other means, and modifying the region boundaries depending on the estimated region models until an equilibrium is reached. The advantages of this technique are: (1) no parameters are required, (2) it is invariant under constant scalings of the image intensities, and (3) it is relatively insensitive to the position and topology of the initial segmentation. Examples are given of its application to single and multi-scale intensity images, textured images, range images and multi-band satellite images.
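A minimal sketch of such a boundary-refinement loop is shown below, assuming simple mean-intensity region models (an illustrative choice; the paper's estimated region models may be richer):

```python
import numpy as np

def refine_segmentation(image, labels, max_iters=100):
    """Iteratively adjust region boundaries: each pixel is reassigned to
    the neighboring region whose model (here, the region's mean
    intensity, an illustrative choice) best explains it, repeating
    until an equilibrium is reached."""
    for _ in range(max_iters):
        means = {r: image[labels == r].mean() for r in np.unique(labels)}
        changed = 0
        h, w = labels.shape
        for y in range(h):
            for x in range(w):
                # Candidate regions: the pixel's own and its 4-neighbors'.
                candidates = {labels[y, x]}
                if y > 0: candidates.add(labels[y - 1, x])
                if y < h - 1: candidates.add(labels[y + 1, x])
                if x > 0: candidates.add(labels[y, x - 1])
                if x < w - 1: candidates.add(labels[y, x + 1])
                best = min(candidates,
                           key=lambda r: abs(image[y, x] - means[r]))
                if best != labels[y, x]:
                    labels[y, x] = best
                    changed += 1
        if changed == 0:  # equilibrium: no pixel wants to move
            break
    return labels
```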
In software product line engineering (SPLE), many studies have addressed commonality- and variability-based feature extraction methods and the reasoning over and refinement of feature models (FMs), aiming to enhance the appropriateness and reusability of the constructed FMs in compliance with feature-oriented development. Existing methods, however, cannot give assurance for the developed applications, because the FMs contain ambiguities between features generated from analyzers' intuitions, and these ambiguities hinder the reuse of such applications. Moreover, accuracy measurements of models based on mathematical, theoretical verification methods are difficult to apply in practice. A refinement technique is therefore needed to enhance FM accuracy.
This paper aims to identify abnormal feature duplications and collisions based on feature attributes, in order to address the potential ambiguities between the features in an FM generated for a target domain, and to construct more precise FMs by presenting a technique for eliminating such abnormalities. For this purpose, profiles of the formalized attributes were first defined based on MDR. Based on the semantics of and relationships between the attributes, duplications and collisions were identified using an analysis matrix and generalized into rules by level. These rules were then evaluated to remove the duplications and collisions. In addition, using a supporting analyzer, the features in the initial FM were registered in a repository and analyzed for feature duplications and collisions based on the saved attribute data.
Refining away the ambiguities between such features should enable the construction of more precise application FMs and the generation of common features with higher reusability. Furthermore, environments using the support tools are expected to make the similarity analysis and reuse of features more convenient.
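A minimal sketch of attribute-based duplication and collision detection in this spirit (the attribute profiles and the matching rule are illustrative, not the paper's MDR-based profiles):

```python
def analyze_features(features):
    """Cross-compare feature attribute profiles, in the spirit of an
    analysis matrix, to flag duplications (matching values on all shared
    attributes) and collisions (conflicting values on a shared attribute).

    `features` maps a feature name to a dict of attribute -> value.
    """
    names = sorted(features)
    duplications, collisions = [], []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = set(features[a]) & set(features[b])
            if not shared:
                continue
            conflicts = [k for k in shared
                         if features[a][k] != features[b][k]]
            if conflicts:
                collisions.append((a, b, conflicts))
            else:
                duplications.append((a, b))
    return duplications, collisions
```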
Cogito 1 is the first iteration of a Z-based integrated methodology and support system for formal software development. This paper gives an overview of the Cogito methodology and associated tools. Particular emphasis is placed on the way in which Cogito integrates the various phases of the formal development process and provides comprehensive tool support for all phases of development addressed by the methodology.
An unfolding of a polyhedron is a single connected planar piece without overlap resulting from cutting and flattening the surface of the polyhedron. Even for orthogonal polyhedra, it is known that edge-unfolding, i.e., performing cuts only along the edges of the polyhedron, is not sufficient to guarantee a successful unfolding in general. However, if additional cuts parallel to polyhedron edges are allowed, it has been shown that every orthogonal polyhedron of genus zero admits a grid-unfolding with quadratic refinement. Using a new unfolding technique developed in this paper, we improve upon the previous result by showing that linear refinement suffices. For 1-layer orthogonal polyhedra of genus g, we show a grid-unfolding algorithm using only 2(g−1) additional cuts, affirmatively answering an open problem raised in the recent literature. Our approach not only requires fewer cuts but also yields much simpler algorithms.
In this paper, we propose a stochastic Knapsack Problem (KP) based mathematical model for small-scale vegetable sellers in India and solve it with an advanced Genetic Algorithm. The knapsack problem considered here is a bounded one, in which the vegetables are the objects. In this model, we assume that the different available vegetables (objects) have different available weights, purchase costs, and profits. The maximum weight of vegetables a seller can transport is limited by the carrying capacity of the vegetable carrier, and the seller's business capital is also limited. The aim of the proposed mathematical model is to maximize the total profit of the loaded/traded items under a set of predefined constraints on the part of the vegetable seller or retailer. The problem is solved in a Type-2 fuzzy environment, and the Critical Value (CV) reduction method is used to defuzzify the objective value. We propose an improved genetic-algorithm-based approach incorporating two features, namely refinement and immigration. We first consider benchmark instances and subsequently some redefined cases for experimentation. Moreover, we solve some randomly generated instances of the proposed KP in the Type-2 fuzzy environment.
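A compact sketch of one GA generation with the two named features, refinement (local improvement of the best individuals) and immigration (injection of fresh random individuals), is shown below; the operators and rates are illustrative, not the paper's exact algorithm:

```python
import random

def next_generation(population, fitness, crossover, mutate,
                    local_improve, random_individual,
                    elite_k=2, immigrants=2):
    """One GA generation with refinement and immigration.

    Refinement:  the elite_k best individuals are polished with a
                 local-improvement heuristic before being carried over.
    Immigration: fresh random individuals replace the worst ones,
                 maintaining diversity in the population.
    """
    ranked = sorted(population, key=fitness, reverse=True)
    elite = [local_improve(ind) for ind in ranked[:elite_k]]
    newcomers = [random_individual() for _ in range(immigrants)]
    offspring = []
    # Fill the remaining slots by selection, crossover and mutation,
    # drawing parents from the better half of the ranked population.
    while len(elite) + len(newcomers) + len(offspring) < len(population):
        parent1, parent2 = random.sample(ranked[:max(2, len(ranked) // 2)], 2)
        offspring.append(mutate(crossover(parent1, parent2)))
    return elite + newcomers + offspring
```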
In a finite multicriteria game, one or more systems of weights might be implicitly used by the agents by playing a Nash equilibrium of the corresponding trade-off scalar games. In this paper, we present a refinement concept for equilibria in finite multicriteria games, called scalarization-stable equilibrium, that selects equilibria stable with respect to perturbations on the scalarization. An existence theorem is provided together with some illustrative examples and connections with some other refinement concepts are investigated.
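For orientation, the trade-off scalarization referred to above is standardly defined as follows (generic notation, not the paper's):

\[
u_i^{w}(s) \;=\; \sum_{k=1}^{m_i} w_{i,k}\, u_i^{k}(s),
\qquad w_{i,k} \ge 0,\quad \sum_{k=1}^{m_i} w_{i,k} = 1,
\]

where $u_i^{k}$ is player $i$'s payoff under criterion $k$; a Nash equilibrium of the scalarized game $(u_1^{w},\dots,u_n^{w})$ is then an equilibrium of the multicriteria game under the weight system $w$.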
This paper studies new refinement concepts for correlated equilibria based on altruistic behavior of the players and generalizes some refinement concepts previously developed by the authors for Nash equilibria. Effectiveness of the concepts, relations with the corresponding notions for Nash equilibria and with other correlated equilibrium refinements are investigated. The analysis of the topological properties of the set of solutions concludes the paper.
We prove several properties of kernels and cokernels in the category of augmented involutive stereotype algebras: (1) this category has kernels and cokernels, (2) the cokernel is preserved under the passage to the group stereotype algebras, and (3) the notion of cokernel allows one to prove that the continuous envelope Env𝒞⋆(Z⋅K) of the group algebra of a compact buildup of an abelian locally compact group is an involutive Hopf algebra in the category of stereotype spaces (Ste,⊙). The last result plays an important role in the generalization of Pontryagin duality to arbitrary Moore groups.
To help understand various reproducing kernels used in applied sciences, we investigate the inclusion relation of two reproducing kernel Hilbert spaces. Characterizations in terms of feature maps of the corresponding reproducing kernels are established. A full table of inclusion relations among widely-used translation invariant kernels is given. Concrete examples for Hilbert–Schmidt kernels are presented as well. We also discuss the preservation of such a relation under various operations of reproducing kernels. Finally, we briefly discuss the special inclusion with a norm equivalence.
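The classical starting point for such inclusion results, due to Aronszajn, can be stated as:

\[
\mathcal{H}_{K_1} \subseteq \mathcal{H}_{K_2}
\quad\Longleftrightarrow\quad
\lambda^2 K_2 - K_1 \ \text{is a positive definite kernel for some } \lambda > 0,
\]

in which case the inclusion map is bounded with norm at most $\lambda$.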
Natural-resource pipelines require continuous monitoring to avoid algae blooms, corrosion, leakages, and damage at joints, and to enable better management. Monitoring is performed through Wireless Sensor Networks in combination with various communication methods using image sensors, but these face limitations such as time consumption, color variations, blurred data, and bit or data losses. To overcome these limitations, this paper proposes a Panoramic Image Transmission and Refinement Technique that uses acoustic sensors, Hybrid Orthogonal Frequency Division Multiplexing (Hybrid-OFDM), and a mosaicing technique, and aims to sense, capture, and forward data to the surface station. The data are smoothed using the Random Sample Consensus (RANSAC) algorithm. The proposed method is tested and validated through MATLAB simulations. The results support the technique's efficiency in terms of low energy consumption, reduced communication, and maximum data delivery at low cost.
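A generic RANSAC skeleton of the kind used to reject outliers when smoothing or registering image data is sketched below (the model-fitting callables are placeholders, not the paper's MATLAB implementation):

```python
import random

def ransac(data, fit_model, error, min_samples, threshold, iterations=1000):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample
    and keep the model supported by the largest consensus (inlier) set.

    data        : list of observations
    fit_model   : callable fitting a model to a list of observations
    error       : callable giving the residual of one observation
    """
    best_model, best_inliers = None, []
    for _ in range(iterations):
        sample = random.sample(data, min_samples)
        model = fit_model(sample)
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    # Refit on all inliers for the final, smoothed estimate.
    if best_inliers:
        best_model = fit_model(best_inliers)
    return best_model, best_inliers
```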
High-quality multiple sequence alignments (MSAs) can provide insights into the architecture and function of protein families. Existing MSA tools often generate results inconsistent with the biological distribution of conserved regions because they position amino acid residues and gaps by symbols alone. We propose RPfam, a refiner that produces curated-like MSAs for modeling the protein families in the Pfam database. RPfam refines automatic alignments by scoring alignments with the PFASUM matrix, restricting realignments to badly aligned blocks, optimizing the block scores by dynamic programming, and running refinements iteratively using the Simulated Annealing algorithm. Experiments show that RPfam effectively refined the alignments produced by the MSA tools ClustalO and Muscle with reference to the curated seed alignments of the Pfam protein families. In particular, RPfam improved the quality of the ClustalO alignments by 4.4% and of the Muscle alignments by 2.8% on the gp32 DNA binding protein-like family. The Supplementary Table is available at http://www.worldscinet.com/jbcb/.
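The simulated-annealing acceptance rule that drives such iterative refinement can be sketched as follows (the scoring and realignment functions are placeholders, not RPfam's code):

```python
import math
import random

def anneal(alignment, score, propose_realignment,
           t_start=1.0, t_end=1e-3, cooling=0.95):
    """Simulated-annealing refinement loop: a candidate realignment of a
    badly aligned block is accepted if it improves the score, or with
    probability exp(delta / T) otherwise; T decays geometrically."""
    current, current_score = alignment, score(alignment)
    t = t_start
    while t > t_end:
        candidate = propose_realignment(current)
        delta = score(candidate) - current_score
        if delta > 0 or random.random() < math.exp(delta / t):
            current, current_score = candidate, current_score + delta
        t *= cooling
    return current
```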
For almost 20 years, research on firm-level innovation has relied upon the concept of innovation capability (IC) of Lawson and Samson [(2001). Developing innovation capability in organisations: A dynamic capabilities approach. International Journal of Innovation Management, 5(3), 377–400]. Of note, these authors stated that the concept needs to be 'refined, validated and tested using other research methods' (p. 396). To date, empirical studies heeding this call have been hard to find. By relying on this untested concept, researchers risk not attaining comprehensive insights into the firm-level mechanisms underpinning the transformation of ideas and knowledge into innovations. This paper proposes a rethinking of the IC concept. The analysis is based on survey data from 69 firms involved in the Australian maritime industry, using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The results suggest that the IC concept might be refined from the seven dimensions initially conceptualised to three dimensions, renamed institutionalising innovation, implementing innovation, and stimulating innovation.
Uncertainty is often present in the processes that are part of our routines, so tools for understanding the consequences of unpredictability are convenient to have. We introduce a general framework for dealing with uncertainty in the realm of distribution sets, which are descriptions of imprecise probabilities. We propose several non-biased refinement strategies to obtain sensible forecasts about the results of uncertain processes. Initially, uncertainty in a system is modeled as the non-deterministic choice of its possible behaviors. Our refinement hypothesis translates non-determinism into imprecise probabilistic choices. Imprecise probabilities allow us to propose a notion of uncertainty refinement in terms of set inclusions. Later on, unpredictability is tackled through a strategic approach using uncertainty profiles and angel/daemon games (𝔞∕𝔡-games). Here, imprecise probabilities form the set of mixed strategies, and Nash equilibria correspond to natural uncertainty refinements. We use this approach to study the performance of Web applications, in terms of response times, under stress conditions.
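The set-inclusion notion of refinement mentioned above can be written, in generic credal-set notation (an illustration, not necessarily the paper's exact definition):

\[
\mathcal{P}_1 \preceq \mathcal{P}_2
\quad\Longleftrightarrow\quad
\mathcal{P}_1 \subseteq \mathcal{P}_2,
\]

i.e., a set of probability distributions $\mathcal{P}_1$ refines $\mathcal{P}_2$ when every distribution compatible with $\mathcal{P}_1$ is also compatible with $\mathcal{P}_2$: the description has become less imprecise.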
Business environments are complex and evolve dynamically in reaction to environmental or contextual changes. Nowadays, many businesses use business processes to describe and automate their operations. However, current business process systems are based on executing statically defined workflows, which do not naturally allow emergent needs to be handled. In this work we develop mechanisms that allow workflows to adapt dynamically, with assurances that the derived workflows still achieve the main aims of the originals. Specifically, these mechanisms are self-adaptive workflows, based on policies, and self-managed adaptations, based on refinement checking of an underlying formal model. The formal model, developed in the process algebra Communicating Sequential Processes (CSP), allows the correctness of a desired adaptation to be checked at system runtime, when the adaptation is being attempted. In this paper we present an overview of our approach.
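In the traces model of CSP, for instance, the refinement check underlying such assurance takes the form:

\[
\mathit{Spec} \sqsubseteq_{T} \mathit{Impl}
\quad\Longleftrightarrow\quad
\mathrm{traces}(\mathit{Impl}) \subseteq \mathrm{traces}(\mathit{Spec}),
\]

so every behavior of the adapted workflow is a behavior permitted by the original specification.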