This paper presents a system based on new operators for handling sets of propositional clauses compactly represented by means of ZBDDs. The high compression power of these data structures allows efficient encodings of structured instances. A specialized operator for the distribution of sets of clauses is introduced and used to perform multiresolution on clause sets. Cut eliminations between clause sets of exponential size can then be performed using polynomial-size data structures. The ZRES system, a new implementation of the original Davis-Putnam procedure of 1960, solves two problems that are hard for resolution and currently beyond the reach of the best SAT provers.
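The Davis-Putnam procedure eliminates variables one at a time by adding all resolvents on the chosen variable. The following is a minimal, uncompressed sketch of that idea on plain Python sets of clauses; the paper's contribution is precisely to replace this explicit representation with ZBDDs, which this sketch does not attempt.

```python
# Illustrative sketch of Davis-Putnam (1960) variable elimination by
# resolution -- NOT the paper's ZBDD representation. A clause is a
# frozenset of signed integer literals (-v means the negation of v).

def resolve_on(clauses, var):
    """Eliminate `var`: add all resolvents on it and drop every clause
    mentioning it (naive multiresolution, without compression)."""
    pos = [c for c in clauses if var in c]
    neg = [c for c in clauses if -var in c]
    rest = {c for c in clauses if var not in c and -var not in c}
    for p in pos:
        for n in neg:
            r = (p - {var}) | (n - {-var})
            if not any(-lit in r for lit in r):  # skip tautologies
                rest.add(r)
    return rest

def dp_sat(clauses, variables):
    """Return True iff the clause set is satisfiable."""
    for v in variables:
        if frozenset() in clauses:
            return False
        clauses = resolve_on(clauses, v)
    return frozenset() not in clauses

# (x1 v x2) & (~x1 v x2) & (~x2)  -- unsatisfiable
cnf = {frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2})}
```

Since the number of resolvents can blow up exponentially, the explicit sets above are exactly what the ZBDD-based operators are designed to avoid materializing.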
An approach to accommodating semantic heterogeneity in a federation of interoperable, autonomous, heterogeneous databases is presented. A mechanism is described for identifying and resolving semantic heterogeneity while at the same time honoring the autonomy of the database components that participate in the federation. A minimal, common data model is introduced as the basis for describing sharable information, and a three-pronged facility for determining the relationships between information units (objects) is developed. Our approach serves as a basis for the sharing of related concepts through (partial) schema unification without the need for a global view of the data that is stored in the different components. The mechanism presented here can be seen in contrast with more traditional approaches such as “integrated databases” or “distributed databases”. An experimental prototype implementation has been constructed within the framework of the Remote-Exchange experimental system.
A Gauss code for a virtual knot diagram is a sequence of crossing labels, each repeated twice and assigned a + or - symbol to distinguish overcrossings from undercrossings. Eliminating these symbols leaves a Gauss code for the shadow of the diagram, one type of virtual pseudodiagram. While it is then impossible to determine which particular virtual diagram the shadow came from, we can consider the collection of all diagrams, called resolutions of the shadow, that would yield such a code. We compute the average virtual bridge number over all these diagrams and show that for a shadow with n classical precrossings, the average virtual bridge number is .
BM@N is the first experiment at the NICA accelerator complex; it aims to study relativistic heavy-ion interactions in the energy region corresponding to high net baryon densities. For charged-particle identification in BM@N, two time-of-flight systems, TOF400 and TOF700, are used. After the experiment's first physics run, a series of calibrations was performed. The calibration methods are described in detail, together with estimates of the time and coordinate resolution of the systems.
Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost, and demonstrates a useful approach for validating high-order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness decays exponentially unless higher accuracy is used. It is concluded that at least a 7th-order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of five wavelengths while maintaining an overall error tolerance low enough to capture both the mean flow and the acoustics.
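The payoff of higher design order can be seen even in a toy setting. The sketch below (not the paper's Euler solver) compares standard central-difference stencils of orders 2, 4 and 6 applied to d/dx sin(x); the step size and evaluation point are arbitrary choices for the demo.

```python
# Error of central-difference stencils of increasing design order
# applied to d/dx sin(x), illustrating why tight tolerances over long
# propagation distances favor high-order methods.
import math

STENCILS = {  # standard central-difference weights, keyed by offset
    2: {1: 1/2, -1: -1/2},
    4: {2: -1/12, 1: 8/12, -1: -8/12, -2: 1/12},
    6: {3: 1/60, 2: -9/60, 1: 45/60, -1: -45/60, -2: 9/60, -3: -1/60},
}

def deriv_error(order, h=0.1, x=1.0):
    approx = sum(w * math.sin(x + k * h)
                 for k, w in STENCILS[order].items()) / h
    return abs(approx - math.cos(x))  # exact derivative is cos(x)

errors = {p: deriv_error(p) for p in (2, 4, 6)}
```

Each step up in order shrinks the error by roughly a factor of h^2, which is the mechanism behind the exponential decay of usefulness of a fixed-order scheme noted in the abstract.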
The feasible interpolation theorem for semantic derivations from [J. Krajíček, Interpolation theorems, lower bounds for proof systems, and independence results for bounded arithmetic, J. Symbolic Logic 62(2) (1997) 457–486] allows one to derive, from a short semantic derivation (e.g. in resolution) of the disjointness of two NP sets U and V, a small communication protocol (a general dag-like protocol in the sense of Krajíček (1997)) computing the Karchmer–Wigderson multi-function KW[U,V] associated with the sets, and such a protocol further yields a small circuit separating U from V. When U is closed upwards, the protocol computes the monotone Karchmer–Wigderson multi-function KWm[U,V] and the resulting circuit is monotone. Krajíček [Interpolation by a game, Math. Logic Quart. 44(4) (1998) 450–458] extended the feasible interpolation theorem to a larger class of semantic derivations using the notion of real communication complexity (e.g. to the cutting planes proof system CP). In this paper, we generalize the method to a still larger class of semantic derivations by allowing randomized protocols. We also introduce an extension of the monotone circuit model, monotone circuits with a local oracle (CLOs), that corresponds to communication protocols for KWm[U,V] making errors. The new randomized feasible interpolation thus shows that a short semantic derivation (from a certain class of derivations larger than in the original method) of the disjointness of U and V, with U closed upwards, yields a small randomized protocol for KWm[U,V] and hence a small monotone CLO separating the two sets. This research is motivated by the open problem of establishing a lower bound for the proof system R(LIN/F2) operating with clauses formed by linear Boolean functions over F2.
The new randomized feasible interpolation applies to this proof system and also to (the semantic versions of) cutting planes CP, to small-width resolution over CP of Krajíček [Discretely ordered modules as a first-order extension of the cutting planes proof system, J. Symbolic Logic 63(4) (1998) 1582–1596] (the system R(CP)), and to random resolution RR of Buss, Kolodziejczyk and Thapen [Fragments of approximate counting, J. Symbolic Logic 79(2) (2014) 496–525]. The method does not yet yield lengths-of-proofs lower bounds; for this it is necessary to establish lower bounds for randomized protocols or for monotone CLOs.
Benson and Goodearl [Periodic flat modules, and flat modules for finite groups, Pacific J. Math. 196(1) (2000) 45–67] proved that if M is a flat module over a ring R such that there exists an exact sequence of R-modules 0 → M → P → M → 0 with P a projective module, then M is projective. The main purpose of this paper is to generalize this theorem to any exact sequence of the form 0 → M → G → M → 0, where G is an arbitrary module over R. Moreover, we seek counterparts, in Gorenstein homological algebra, of the pure projective and pure injective modules.
We use purely combinatorial arguments to give a formula to compute all graded Betti numbers of path ideals of paths and cycles. As a consequence, we can give new and short proofs for the known formulas of regularity and projective dimensions of path ideals of paths.
Over an infinite field K, we investigate the minimal free resolutions of certain configurations of lines. We explicitly describe the minimal free resolution of complete grids of lines and obtain an analogous result for the so-called complete pseudo-grids. Moreover, we characterize the total Betti numbers of configurations obtained by imposing a multiplicity condition on the lines of either a complete grid or a complete pseudo-grid. Finally, we analyze when a complete pseudo-grid is seminormal, unlike a complete grid. The main tools involved in our study are the mapping cone procedure and properties of liftings, pseudo-liftings and weighted ideals. Although complete grids and pseudo-grids are hypersurface configurations, and many results about configurations of this type have already been stated in the literature, we give new contributions, in particular about the maps of the resolution.
A wavelet-based forecasting method for time series is introduced. It is based on a multiple resolution decomposition of the signal, using the redundant "à trous" wavelet transform which has the advantage of being shift-invariant.
The result is a decomposition of the signal into a range of frequency scales. The prediction is based on a small number of coefficients on each of these scales. In its simplest form it is a linear prediction based on a wavelet transform of the signal. This method uses sparse modelling, but can be based on coefficients that are summaries or characteristics of large parts of the signal. The lower level of the decomposition can capture the long-range dependencies with only a few coefficients, while the higher levels capture the usual short-term dependencies.
We show the convergence of the method towards the optimal prediction in the autoregressive case. The method performs well, as shown in simulation studies and in studies involving financial data.
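A shift-invariant decomposition of the kind described above can be sketched in a few lines. The version below uses a causal Haar filter (each smooth value averages the current sample with one 2^j steps in the past), a convenient choice for forecasting since no future values are used; it is an illustrative assumption, not necessarily the exact filter of the paper.

```python
# Redundant (shift-invariant) "a trous" wavelet decomposition with a
# causal Haar filter: every scale keeps the full signal length, and the
# signal equals the sum of the detail scales plus the final smooth.

def a_trous(signal, levels):
    """Return (details, smooth); len(details) == levels, each the same
    length as the input, with signal == sum(details) + smooth."""
    smooth = list(signal)
    details = []
    for j in range(levels):
        step = 2 ** j  # holes ("trous") widen by a factor 2 per level
        nxt = [(smooth[t] + smooth[max(t - step, 0)]) / 2
               for t in range(len(smooth))]
        details.append([s - n for s, n in zip(smooth, nxt)])
        smooth = nxt
    return details, smooth

x = [float(i % 5) for i in range(32)]
details, smooth = a_trous(x, 3)
```

A forecaster would then fit a small linear model on a few coefficients per scale: the deepest smooth summarizes long-range behaviour cheaply, while the first detail scales carry the short-term dependencies.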
The method of Lin and Huang [Lin and Huang [2004] “Decomposition of incident and reflected higher harmonic waves using four wave gauges,” Coast. Eng. 51(5), 395–406] (LH) is improved for the resolution of incident and reflected strongly nonlinear regular waves in shallow water from measurements of four stationary wave gauges. For the first harmonics, wavenumbers, amplitudes and initial phases are obtained by a nonlinear least-squares method. For higher harmonics, the wavenumbers of free and bound modes are determined from the linear dispersion relation and from multiples of the first-harmonic wavenumbers, respectively, and the remaining unknowns are solved by a linear least-squares method. The auto-correlation function is used to determine the fundamental wave period so as to obtain good performance from the Fourier transform. The efficiency and accuracy of the present method are demonstrated using artificial data and numerical flume data. It is also demonstrated that the present method is less sensitive to signal noise and gauge spacing. Comparison between the present method and the LH method indicates the necessity of employing a nonlinear method in determining the fundamental wavenumbers of nonlinear shallow-water waves. Finally, the present method is extended to account for obliquely incident waves. Sensitivity tests indicate the robustness of the extended method with respect to incident angles. A relative positioning of the gauges in the array that avoids singularity is suggested.
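The linear least-squares step for a single harmonic can be made concrete. The toy sketch below recovers the in-phase and quadrature amplitudes of a*cos(wt) + b*sin(wt) from time samples via the 2x2 normal equations; the full four-gauge incident/reflected decomposition stacks one such fit per gauge and harmonic, which this sketch does not reproduce.

```python
# Least-squares fit of a single harmonic a*cos(w t) + b*sin(w t) by
# solving the 2x2 normal equations directly (pure Python, no NumPy).
import math

def fit_harmonic(times, values, w):
    c = [math.cos(w * t) for t in times]
    s = [math.sin(w * t) for t in times]
    scc = sum(ci * ci for ci in c)
    sss = sum(si * si for si in s)
    scs = sum(ci * si for ci, si in zip(c, s))
    syc = sum(y * ci for y, ci in zip(values, c))
    sys_ = sum(y * si for y, si in zip(values, s))
    det = scc * sss - scs * scs          # assumes enough samples (det != 0)
    a = (syc * sss - sys_ * scs) / det
    b = (sys_ * scc - syc * scs) / det
    return a, b  # amplitude = hypot(a, b), phase = atan2(-b, a)

w = 2.0                                   # known angular frequency (toy value)
ts = [0.05 * k for k in range(200)]
ys = [1.5 * math.cos(w * t) - 0.8 * math.sin(w * t) for t in ts]
a, b = fit_harmonic(ts, ys, w)
```

For the first harmonics, where the wavenumber itself is unknown, this linear fit is no longer enough, which is why the abstract insists on a nonlinear least-squares method there.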
The purpose of this paper is to give, via totally different techniques, an alternate proof to the main theorem of [18] in the category of modules over an arbitrary ring R. In effect, we prove that this theorem follows from establishing a sequence of equalities between specific classes of R-modules. Actually, we tackle the following natural question: What notion emerges when iterating the very process applied to build the Gorenstein projective and Gorenstein injective modules from complete resolutions? In other words, given an exact sequence of Gorenstein injective R-modules G = ⋯ → G_1 → G_0 → G_{-1} → ⋯ such that the complex Hom_R(H, G) is exact for each Gorenstein injective R-module H, is the module Im(G_0 → G_{-1}) Gorenstein injective? We settle such a question in the affirmative and the dual result for the Gorenstein projective modules follows easily via a similar treatment to that used in this paper. As an application, we provide the Gorenstein versions of the change of rings theorems for injective modules over an arbitrary ring.
Let X be a zero-dimensional scheme in ℙ¹ × ℙ¹. Then X has a minimal free resolution of length 2 if and only if X is ACM. In this paper we determine a class of reduced schemes whose resolutions, similarly to the ACM case, can be obtained from their Hilbert functions and depend only on their distributions of points in a grid of lines. Moreover, a minimal set of generators of the ideal of these schemes is given by curves split into unions of lines.
This paper deals with the validity in fuzzy logic of some classical schemes of reasoning, namely, with those of disjunctive reasoning, resolution, reductio ad absurdum, and the so-called constructive dilemma.
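One way to make such validity questions concrete is a brute-force check over truth degrees. The sketch below uses the standard Zadeh connectives (OR = max, NOT x = 1 - x), an assumption for illustration only since the paper's semantics may differ, and verifies the classical observation (going back to Lee) that resolution stays valid whenever both premises have truth degree above 0.5.

```python
# Brute-force check of fuzzy resolution under Zadeh semantics:
# from v(a OR b) and v((NOT a) OR c), bound v(b OR c) whenever both
# premise degrees exceed 0.5. Connectives: OR = max, NOT x = 1 - x.

def resolution_holds(a, b, c):
    p = max(a, b)            # premise 1: a OR b
    q = max(1 - a, c)        # premise 2: (NOT a) OR c
    if min(p, q) <= 0.5:     # the bound is only claimed above 0.5
        return True
    return max(b, c) >= min(p, q)   # conclusion: b OR c

grid = [i / 20 for i in range(21)]  # truth degrees 0.0, 0.05, ..., 1.0
all_ok = all(resolution_holds(a, b, c)
             for a in grid for b in grid for c in grid)
```

The 0.5 threshold is essential: with a = 0.5 and b = c = 0, both premises have degree 0.5 while the conclusion has degree 0, so no such bound holds at or below 0.5.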
Spectral domain optical coherence tomography (SDOCT) is a noninvasive, cross-sectional imaging technique that measures the depth-resolved reflectance of tissue by Fourier transforming the spectral interferogram, with no scanning of the reference arm required. Interferometric synthetic aperture microscopy (ISAM) is an optical computed-imaging technique for measuring the optical properties of biological tissues which can overcome the compromise between depth of focus and transverse resolution. This paper describes the principles of SDOCT and ISAM, which multiplexes raw acquisitions to provide quantitatively meaningful data with reliable, spatially invariant resolution at all depths. A mathematical model for a coherent microscope with a planar scanning geometry and spectral detection was described. The two-dimensional fast Fourier transform (FFT) of the spectral data in the transverse directions was calculated. Then the nonuniform ISAM resampling and filtering were implemented to yield the scattering potential within the scalar model, and an inverse FFT was used to obtain the ISAM reconstruction. Simulations with one scatterer, multiple scatterers, and added noise were carried out with ISAM to demonstrate spatially invariant resolution. ISAM images were compared to those obtained using standard optical coherence tomography (OCT) methods. The high quality of the results validates the soundness of the model and shows that diffraction-limited resolution can be achieved outside the focal plane.
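The SDOCT principle alone (without the ISAM resampling step) can be illustrated numerically: a single reflector makes the spectral interferogram oscillate in wavenumber, and a Fourier transform over wavenumber localizes it in depth. The sampling grid and units below are invented for the demo.

```python
# Toy SDOCT demo: a reflector at depth z0 produces an interferogram
# ~ cos(2*pi*z0*n/N) over N wavenumber samples; the magnitude of its
# discrete Fourier transform peaks at the bin corresponding to z0.
import cmath
import math

N = 256
z0_bin = 40                     # reflector depth, in DFT-bin units
interferogram = [math.cos(2 * math.pi * z0_bin * n / N) for n in range(N)]

def dft(x):
    """Naive O(N^2) discrete Fourier transform (clarity over speed)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * f * t / n)
                for t in range(n)) for f in range(n)]

spectrum = dft(interferogram)
depth_profile = [abs(v) for v in spectrum[: N // 2]]  # positive depths only
peak = max(range(len(depth_profile)), key=depth_profile.__getitem__)
```

In a real instrument the data are acquired on a nonuniform wavelength grid and the beam is focused, which is where the ISAM resampling and filtering described above come in.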
Whole brain emulation aims to re-implement functions of a mind in another computational substrate with the precision needed to predict the natural development of active states, insofar as the influence of random processes allows. Furthermore, brain emulation does not present a possible model of a function, but rather the actual implementation of that function, based on the details of the circuitry of a specific brain. We introduce a notation for representations of mind state, mind transition functions and transition update functions, whose elements and relations must be quantified in accordance with measurements in the biological substrate. To discover the limits of significance in terms of the temporal and spatial resolution of measurements, we point out the importance of brain-region- and task-specific constraints, as well as the importance of in-vivo measurements. We summarize further problems that need to be addressed.
Neutrophils are key effector cells involved in host defence against invading organisms such as bacteria and fungi. Their over-recruitment, uncontrolled activation and defective removal contribute to the initiation and propagation of many chronic inflammatory conditions. Neutrophil apoptosis is a physiological process that terminates the cells' functional responsiveness and induces phenotypic changes that render them recognizable by phagocytes (e.g. macrophages). Evidence indicates that neutrophil apoptosis and the subsequent removal of these cells by macrophages occur via mechanisms that do not elicit an inflammatory response, and that these processes are fundamental for the successful resolution of inflammation. The molecular mechanisms regulating apoptosis in neutrophils are being elucidated, and consequently it is now believed that selective induction of neutrophil apoptosis is a potential target for therapeutic intervention.
This chapter presents the Horn-First Combinatorial Strategy (HFCS) within the Conflict-Driven Clause Learning (CDCL) solver framework, which leverages unit propagation to resolve conflicts via a directed acyclic entailment diagram. HFCS boosts the role of Horn clauses in clause management and their strategic use in solving, emphasizing the learning of clauses generated from their involvement. Experimental results confirm that HFCS significantly enhances solver performance.
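Why privilege Horn clauses? A definite Horn formula is decided by unit propagation alone, the same mechanism at the core of CDCL. The sketch below shows that propagation core on rule-form Horn clauses; the clause encoding and the HFCS heuristics themselves are simplified assumptions for illustration.

```python
# Unit propagation on Horn clauses written as (body, head) rules:
# ((), 'a') is the fact a, (('a', 'b'), 'c') is a & b -> c, and
# head=None marks a goal clause (all-negative), e.g. (('b','c'), None)
# for ~b | ~c. Derive facts to fixpoint; a firing goal clause is a
# conflict.

def unit_propagate_horn(clauses):
    """Return the set of derived facts, or None on conflict."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if set(body) <= facts:          # every body literal is derived
                if head is None:
                    return None             # conflict: goal clause violated
                if head not in facts:
                    facts.add(head)
                    changed = True
    return facts

# a; a -> b; a & b -> c; goal ~b | ~c  -- propagation finds a conflict
horn = [((), 'a'), (('a',), 'b'), (('a', 'b'), 'c'), (('b', 'c'), None)]
result = unit_propagate_horn(horn)
```

Because Horn clauses are conflict-productive under cheap propagation like this, prioritizing them in clause management and in clause learning is a natural lever for a CDCL solver.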
Singular mobiles were introduced by Encinas and Hauser in order to conceptualize the information which is necessary to prove strong resolution of singularities in characteristic zero. It turns out that after Hironaka's Annals paper from 1964 essentially all proofs rely — either implicitly or explicitly — on the data collected in a mobile, often with only small technical variations. The present text explains why mobiles are the appropriate resolution datum and how they are used to build up the induction argument of the proof.
Confocal microscopes provide clear, thin optical sections with little disturbance from regions of the specimen that are not in focus. In addition, they appear to provide somewhat greater lateral and axial image resolution than non-confocal microscope optics. To address the question of resolution and contrast transfer of light microscopes, a new test slide has been developed that enables direct measurement of the contrast transfer characteristics (CTC) of microscope optics at the highest numerical aperture. With this new test slide, the performance of a confocal scanning laser microscope operating in the confocal reflection mode and the non-confocal transmission mode was examined. The CTC curves show that the confocal instrument maintains exceptionally high contrast (up to twice that of non-confocal optics) as the dimension of the object approaches the diffraction limit of resolution; at these dimensions, image detail is lost with non-confocal microscopes owing to a progressive loss of image contrast. Furthermore, we have calculated theoretical CTC curves by modelling the confocal and non-confocal imaging modes using discrete Fourier analysis. The close agreement between the theoretical and experimental CTC curves supports the earlier prediction that the coherent confocal and the incoherent non-confocal imaging modes have the same limit of resolution (defined here as the inverse of the spatial frequency at which the contrast transfer converges to zero). The apparently greater image resolution of the coherent confocal optics is a consequence of the improved contrast transfer at spacings close to the resolution limit.