Recent advances in scheduling and networking have paved the way for efficient exploitation of large-scale distributed computing platforms such as computational grids and huge clusters. Such infrastructures hold great promise for the highly resource-demanding task of verifying and checking large models, provided that model checkers are designed with a high degree of scalability and flexibility in mind.
In this paper we focus on the mechanisms required to execute a high-performance, distributed, symbolic model checker on top of a large-scale distributed environment. We develop a hybrid algorithm for slicing the state space and dynamically distributing the work among the worker processes. We show that the new approach is faster, more effective, and thus much more scalable than previous slicing algorithms. We then present a checkpoint-restart module that has very low overhead. This module can be used to combat failures, the likelihood of which increases with the size of the computing platform. However, checkpoint-restart is even more useful to the scheduling system: it can be used to avoid reserving large numbers of workers, thus making the distributed computation work-efficient. Finally, we discuss for the first time the effect of reordering on the distributed model checker and show that the distributed system performs reordering more efficiently than the sequential one.
We implemented our contributions on a network of 200 processors, using a distributed scalable scheme that employs a high-performance industrial model checker from Intel. Our results show that the system was able to verify real-life models much larger than was previously possible.
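To make the slicing idea concrete, the following toy sketch (in Python, not from the paper, and explicit-state rather than symbolic) picks the Boolean splitting variable whose two cofactors are most balanced and partitions a set of reachable states into two slices that could be handed to different workers; all names and the example state space are hypothetical.

from itertools import product

def balance_of_split(states, var):
    """How evenly Boolean variable `var` splits `states` (1.0 = perfectly balanced)."""
    if not states:
        return 0.0
    ones = sum(1 for s in states if s[var])
    zeros = len(states) - ones
    return min(ones, zeros) / max(ones, zeros) if max(ones, zeros) else 0.0

def choose_splitting_variable(states, num_vars):
    """Pick the variable whose two cofactors are most balanced."""
    return max(range(num_vars), key=lambda v: balance_of_split(states, v))

def slice_states(states, var):
    """Partition the state set into two slices on the chosen variable."""
    return ({s for s in states if s[var]}, {s for s in states if not s[var]})

if __name__ == "__main__":
    # States are assignments to 4 Boolean variables, kept explicit here for clarity;
    # a symbolic checker would manipulate BDDs rather than explicit sets.
    reachable = {s for s in product([0, 1], repeat=4) if s[0] ^ s[2]}  # hypothetical reachable set
    v = choose_splitting_variable(reachable, 4)
    slice_a, slice_b = slice_states(reachable, v)
    print(f"split on x{v}: |A| = {len(slice_a)}, |B| = {len(slice_b)}")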
Sound empirical research suggests that we should analyze software metrics from both a theoretical and a practical perspective. This paper describes the results of an investigation into the respective merits of two cohesion-based metrics for program slicing. The Tightness and Overlap metrics were those originally proposed by Weiser for the procedural paradigm. We compare and contrast these two metrics with a third metric for the OO paradigm, first proposed by Counsell et al., which is based on Hamming Distance and a matrix notation. We validated the three metrics theoretically, using the properties of Kitchenham, and then empirically; some revealing properties of the metrics were found as a result. In particular, the OO-based metric was the most stable of the three; module length was not a confounding factor for the Hamming Distance-based metric, although it was for the two slice-based metrics, supporting previous work by Meyers and Binkley. The number of module slices, however, was found to be an even stronger influence on the values of the two slice-based metrics, whose near-perfect correlation with each other suggests that they may be measuring the same software attribute. We calculated and then compared the three metrics using, first, a set of manufactured, pre-determined modules as a preliminary analysis and, second, approximately nine thousand functions from the modules of multiple versions of the Barcode system, used previously by Meyers and Binkley in their empirical study. The over-arching message of the research is that a combination of theoretical and empirical analysis can help significantly in comparing the viability, and indeed informing the choice, of a metric or set of metrics. More specifically, although cohesion is a subjective measure, certain properties of a metric are less desirable than others, and it is these 'relative' features that distinguish metrics, make their comparison possible, and make their value more evident.
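For readers unfamiliar with the slice-based metrics, the sketch below computes Tightness and Overlap in their commonly cited form (the intersection of the output-variable slices relative to module length, and the mean coverage of each slice by that intersection); it is a minimal illustration with hypothetical line-number sets, not the instrumentation used in the study.

# Minimal sketch of the slice-based cohesion metrics discussed above, using the
# commonly cited definitions: for a module with one slice per output variable,
#   Tightness = |SL_int| / module_length
#   Overlap   = (1/|V_O|) * sum_i |SL_int| / |SL_i|
# where SL_int is the intersection of all slices. Slices are modelled simply as
# sets of line numbers; this is illustrative only.

def tightness(slices, module_length):
    """Fraction of the module shared by every output-variable slice."""
    if not slices or module_length == 0:
        return 0.0
    common = set.intersection(*slices)
    return len(common) / module_length

def overlap(slices):
    """Average proportion of each slice covered by the common intersection."""
    if not slices:
        return 0.0
    common = set.intersection(*slices)
    return sum(len(common) / len(s) for s in slices if s) / len(slices)

if __name__ == "__main__":
    # Hypothetical module of 10 lines with two output variables.
    slice_x = {1, 2, 3, 5, 7, 8}
    slice_y = {2, 3, 5, 9, 10}
    print(tightness([slice_x, slice_y], module_length=10))  # 0.3
    print(overlap([slice_x, slice_y]))                      # (3/6 + 3/5) / 2 = 0.55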
Smart contracts are programs that run on a blockchain. In recent years, due to the persistent occurrence of security-related accidents involving smart contracts, the effective detection of vulnerabilities in smart contracts has received extensive attention from researchers and engineers. Machine learning-based vulnerability detection techniques have the advantage that they do not need expert rules for determining vulnerabilities. However, existing approaches cannot identify vulnerabilities when the versions of smart contract compilers are updated. In this paper, we propose OC-Detector (Opcode Clustering Detector), a smart contract vulnerability detection approach based on clustering opcode instructions. OC-Detector learns the characteristics of opcode instructions to cluster them and replaces opcode instructions belonging to the same cluster with the ID of that cluster. After that, the similarity between the contract under analysis and contracts in a vulnerability database is calculated to identify vulnerabilities. The experimental results demonstrate that OC-Detector improves the F1 value for detecting vulnerabilities by 0.04 to 0.40 compared to DC-Hunter, Securify, SmartCheck, and Osiris. Additionally, compared to DC-Hunter, the F1 value is improved by 0.27 when detecting vulnerabilities in smart contracts compiled with different compiler versions.
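A minimal sketch of the cluster-and-match idea follows, assuming hard-coded opcode clusters, n-gram Jaccard similarity, and a hypothetical one-entry vulnerability database; OC-Detector itself learns the clusters from opcode characteristics and may use a different similarity measure.

# Illustrative only: opcodes that play similar roles share a (here hand-picked)
# cluster ID, so bytecode produced by different compiler versions normalises to
# the same form before similarity matching.
CLUSTERS = {
    "PUSH1": "C_PUSH", "PUSH2": "C_PUSH", "PUSH32": "C_PUSH",
    "DUP1": "C_DUP", "DUP2": "C_DUP",
    "SWAP1": "C_SWAP", "SWAP2": "C_SWAP",
    "CALL": "C_CALL", "DELEGATECALL": "C_CALL", "STATICCALL": "C_CALL",
    "JUMP": "C_JUMP", "JUMPI": "C_JUMP",
}

def normalise(opcodes):
    """Replace each opcode with its cluster ID (unknown opcodes kept as-is)."""
    return [CLUSTERS.get(op, op) for op in opcodes]

def ngrams(seq, n=3):
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def similarity(contract_a, contract_b, n=3):
    """Jaccard similarity of cluster-ID n-grams of two opcode sequences."""
    a, b = ngrams(normalise(contract_a), n), ngrams(normalise(contract_b), n)
    return len(a & b) / len(a | b) if a | b else 0.0

if __name__ == "__main__":
    vulnerable = ["PUSH1", "DUP1", "CALL", "SWAP1", "JUMPI"]         # hypothetical DB entry
    candidate  = ["PUSH2", "DUP2", "DELEGATECALL", "SWAP2", "JUMP"]  # compiled differently
    print(similarity(vulnerable, candidate))  # high despite different raw opcodes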
In this paper, we present algorithms for clustering the vertices of fuzzy graphs (FGs) and intuitionistic fuzzy graphs (IFGs). These algorithms are based on the edge density of the given graph. We apply the algorithms to practical problems to identify the most prominent cluster among them. We also introduce parameters for intuitionistic fuzzy graphs.
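As an illustration only (not the authors' algorithms), one simple density-style clustering keeps the edges whose membership degree reaches a threshold and takes connected components of the resulting crisp graph:

from collections import defaultdict

def alpha_cut_clusters(vertices, fuzzy_edges, alpha):
    """fuzzy_edges maps (u, v) -> membership degree in [0, 1]; illustrative sketch."""
    adj = defaultdict(set)
    for (u, v), mu in fuzzy_edges.items():
        if mu >= alpha:          # keep only sufficiently strong edges
            adj[u].add(v)
            adj[v].add(u)
    clusters, seen = [], set()
    for start in vertices:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:             # simple DFS over the thresholded graph
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adj[node] - component)
        seen |= component
        clusters.append(component)
    return clusters

if __name__ == "__main__":
    vertices = ["a", "b", "c", "d"]
    edges = {("a", "b"): 0.9, ("b", "c"): 0.2, ("c", "d"): 0.8}
    print(alpha_cut_clusters(vertices, edges, alpha=0.5))  # [{'a', 'b'}, {'c', 'd'}]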
A review of various methods for the generation of ultrashort X-ray pulses using relativistic electron beams from conventional accelerators is presented. Both spontaneous and coherent emission of electrons are considered.