Combat data collected from World War II and a cellular automaton combat model called MANA are shown to display fractal properties. This strongly supports our earlier hypotheses about the nature of combat attrition data. It also provides a method by which we can judge a combat model's ability to produce realistic synthetic combat data. The data appear to display properties extremely similar to those of the fractal cascade models used to describe turbulent dynamics. Interestingly, the fractal parameters appear to depend on how the model is set up, implying that they are determined by the boundary and initial conditions. Examination of the dynamical rules used in the MANA simulation suggests that the model entities need to respond to changes in the level of order on the battlefield grid for fractal behavior to occur. Such data imply that the entropy of the battlefield depends on the scale at which it is examined. We speculate that, in a military context, such fractal-like force distributions effectively act to isolate the highest level of command from disorder at the lowest. If disorder within a force grows to the point where that force can no longer maintain a fractal-like distribution, the force distribution may tend to become uniformly random, effectively destroying its viability as a combat unit.
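A minimal sketch of the kind of scaling check this abstract alludes to: aggregate an attrition time series over windows of increasing size and fit the slope of log standard deviation against log window size; a non-trivial slope is consistent with fractal, cascade-like structure. The function name, the choice of scales, and the synthetic series are illustrative assumptions, not the authors' actual procedure or data.

```python
import numpy as np

def scaling_exponent(casualties, scales=(1, 2, 4, 8, 16, 32)):
    """Estimate a power-law scaling exponent for an attrition time series.

    If the series were fractal (cascade-like), the standard deviation of
    block sums over windows of size s should scale roughly as s**H for a
    non-trivial exponent H; for independent data H is close to 0.5.
    """
    casualties = np.asarray(casualties, dtype=float)
    sds = []
    for s in scales:
        n = (len(casualties) // s) * s
        blocks = casualties[:n].reshape(-1, s).sum(axis=1)
        sds.append(blocks.std())
    # Slope of log(sd) against log(scale) is the scaling exponent.
    slope, _ = np.polyfit(np.log(scales), np.log(sds), 1)
    return slope

# Synthetic stand-in series; no real WWII or MANA output is used here.
rng = np.random.default_rng(0)
series = rng.lognormal(mean=0.0, sigma=1.0, size=4096)
print(f"estimated scaling exponent: {scaling_exponent(series):.2f}")
```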
A system of two components is analyzed as a two-period game. After period 1 the system can be fully operational, in one of two states of intermediate degradation, or failed. Analogously to the changing failure rates of dependent systems analyzed with Markov methods, the unit costs of defense and attack and the contest intensities change in period 2. As the values of the two intermediate states increase from zero, which gives the series system, towards their maxima, which gives the parallel system, the defender becomes more advantaged and the attacker more disadvantaged. Simulations illustrate the players' efforts in the two time periods and their utilities as the parameters change. The defender withdraws from defending the system when the values of both degraded states are very low. The attacker withdraws from attacking the system when the values of both degraded states are very high. In the benchmark case the defender prefers the one-period game and the attacker prefers the two-period game, but if the attacker's unit cost of attack is large for one component, and the value of the degraded system with this component operational is above a low threshold, the defender prefers the two-period game to obtain high utility in period 2 against a weak attacker. When the values of the degraded states are above certain low values, the players exert higher efforts in period 1 of a two-period game than in a one-period game, as investments in the future to ensure high rather than low reliability in period 2.
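A hedged sketch of the kind of two-component attacker-defender contest described here, assuming a ratio-form (Tullock) contest success function with a contest intensity parameter and a single period; the paper's exact functional forms, parameter values, and second-period updating may differ. It illustrates the stated effect that higher degraded-state values favor the defender.

```python
import numpy as np

def contest(defense, attack, intensity=1.0):
    """Ratio-form (Tullock) contest success function: probability that the
    defender keeps a component operational. Assumed form for illustration."""
    if defense == 0 and attack == 0:
        return 0.5
    d, a = defense ** intensity, attack ** intensity
    return d / (d + a)

def defender_utility(T, t, v_full=1.0, v_deg=(0.3, 0.3),
                     c_def=(1.0, 1.0), intensity=1.0):
    """One-period expected system value minus defense cost for two components.
    Both components survive -> v_full, only component i survives -> v_deg[i],
    neither survives -> 0. T and t are per-component defense and attack efforts."""
    p = [contest(T[i], t[i], intensity) for i in range(2)]
    value = (p[0] * p[1] * v_full
             + p[0] * (1 - p[1]) * v_deg[0]
             + (1 - p[0]) * p[1] * v_deg[1])
    return value - c_def[0] * T[0] - c_def[1] * T[1]

# As the degraded-state values rise from 0 (series system) towards v_full
# (parallel system), a fixed defense effort becomes more worthwhile.
efforts_T, efforts_t = (0.2, 0.2), (0.2, 0.2)
for v in (0.0, 0.5, 1.0):
    u = defender_utility(efforts_T, efforts_t, v_deg=(v, v))
    print(f"v_deg = {v:.1f} -> defender utility {u:.3f}")
```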
The objective of this paper is to understand why the pace of technical change has been slower in the defense and space industries than in semiconductors. To answer this question, we adopt an inductive method that exploits historical facts and patent statistics to provide empirical data. Basing our rationale on the appropriation and structural-inertia arguments, we show that the constraints imposed by reliability in the defense and space industries impede the appropriation of supra-normal profits from innovation and thus slow the pace of technical change. The main managerial implication of this result is that technical inertia contributes to competitive success when the reliability constraint reaches high levels.
Machine learning can improve quality in many areas of modern life, and machine learning models are often built with open data. As this trend grows, however, so do the monetary losses caused by attacks on machine learning models, so preparation is indispensable before deploying them. Models can be compromised in various ways, including poisoning attacks, in which harmful data are injected into the training data to make the models substantially less accurate. How much disruption such intrusions cause depends on the circumstances of each case. This research proposes a method to safeguard machine learning models against poisoning in settings where the training data are drawn from numerous sources, a diversity that itself forms a barrier to poisoning attacks. Each source is evaluated separately, and each data component is weighted by its ability to affect the precision of the machine learning model; the theoretical effect of using corrupted data from each source is also appraised. The extent to which a subgroup of data can undermine overall accuracy determines an estimated removal rate for each source, and excluding the isolated data on that basis keeps the remaining data untainted. To evaluate the efficacy of the proposed measure, we compared it with well-known standard techniques by assessing model accuracy after the change. In this test, with 17% of the training data corrupted, the model achieved 89% accuracy under the proposed appraisal method, compared with 83% under the traditional technique. The proposed corrective technique thus boosted the resilience of the model against harmful intrusion.
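A hedged sketch, in Python with scikit-learn, of a per-source filtering defense in the spirit of the approach described above: remove one source at a time, measure the change in held-out accuracy, and exclude the sources whose removal improves accuracy beyond a threshold. The threshold, the label-flipping poison pattern, and the synthetic data are assumptions made for illustration, not the paper's exact procedure or numbers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic multi-source training set with one deliberately poisoned source.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

n_sources = 5
source_id = np.random.default_rng(0).integers(n_sources, size=len(y_train))
y_poisoned = y_train.copy()
y_poisoned[source_id == 3] = 1 - y_poisoned[source_id == 3]  # flip labels of source 3

def accuracy_without(excluded):
    """Train on all sources except those in `excluded`, score on clean hold-out."""
    keep = ~np.isin(source_id, list(excluded))
    model = LogisticRegression(max_iter=1000).fit(X_train[keep], y_poisoned[keep])
    return accuracy_score(y_val, model.predict(X_val))

baseline = accuracy_without(set())
gains = {s: accuracy_without({s}) - baseline for s in range(n_sources)}
suspect = {s for s, g in gains.items() if g > 0.01}  # assumed removal threshold
print("baseline accuracy:", round(baseline, 3))
print("gain from removing each source:", {s: round(g, 3) for s, g in gains.items()})
print("excluded sources:", suspect, "-> accuracy:", round(accuracy_without(suspect), 3))
```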
We present a review of detector systems used in accelerator-based security applications. The applications discussed span stockpile stewardship, material interdiction, treaty verification, and spent nuclear fuel assay. The challenge for detectors in accelerator-based applications is separating the desired signal from the background, frequently at high input count rates. Typical techniques to address the background challenge include shielding, timing, selection of sensitive materials, and choice of accelerator.
The history of U.S. intelligence and military assessments of the security implications of climate change goes back decades, to the 1980s. Since that time, hundreds of analyses of climate change, a rapidly growing body of literature on the impacts of human-caused climate change, and reports from every U.S. defense, intelligence, and security agency have acknowledged the links between climate and security, with a focus on two key areas: the vulnerability of U.S. military bases and assets to the threats posed by climate change, and the risk that the consequences of climate change will cause political instability that may lead to increased U.S. military interventions.