Computer vision obtains object and environment information by emulating human visual perception. As one of the main tasks of computer vision, image classification is used not only for face recognition, traffic scene recognition, image retrieval, and automatic photo categorization but also as a theoretical basis for object detection and image segmentation. In this paper, we build on the existing CNN architecture ConvNeXt. By adapting the residual connections and convolutional structure of the network, we achieve a balance between classification accuracy and inference speed. These modifications reduce both computation and memory consumption while keeping accuracy largely unchanged, making the network better suited to lightweight deployment.
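The abstract does not specify the exact block design. As a minimal sketch of the kind of structure being tuned, the following PyTorch module implements a depthwise-convolution block with a residual connection whose kernel size and expansion ratio can be shrunk to trade accuracy for computation; the class name LightConvNeXtBlock and all hyperparameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class LightConvNeXtBlock(nn.Module):
    """Illustrative ConvNeXt-style block: depthwise conv + pointwise MLP + residual.
    Kernel size and expansion ratio are assumptions chosen to keep compute low."""
    def __init__(self, dim, kernel_size=5, expansion=2):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size,
                                padding=kernel_size // 2, groups=dim)  # depthwise
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, expansion * dim)   # pointwise expand
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * dim, dim)   # pointwise project

    def forward(self, x):                 # x: (N, C, H, W)
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)         # to (N, H, W, C) for LayerNorm/Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)
        return residual + x               # residual connection

x = torch.randn(1, 64, 32, 32)
print(LightConvNeXtBlock(64)(x).shape)   # torch.Size([1, 64, 32, 32])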
We give faster methods to compute discrete convolutions. We assume that all the inputs are packed, that is, strings are packed into words such that each word holds α = ⌊w / log₂|Σ|⌋ characters, where w is the length of a machine word and Σ is the alphabet. The output of our methods is also packed, that is, each word of the output contains more than one element of the result. The approach is based on word-level parallelism and the FFT. Given two strings with m and n (n ≥ m) characters that are packed into ⌈m/α⌉ and ⌈n/α⌉ words respectively, their convolution can be computed by the FFT in O((nu/w) log(mu/w)) time, where u = (⌈log_|Σ| m⌉ + 2) log₂|Σ|. Experiments show that our method is three times faster than the convolution using the standard trick.
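The abstract does not spell out the packing layout or the FFT step. The sketch below is a plain illustration of the two ingredients it names: packing α = ⌊w / log₂|Σ|⌋ characters per machine word, and computing a discrete convolution of the underlying character sequences with an FFT (NumPy). The word length W, packing layout, and function names are assumptions for illustration, not the paper's packed-word algorithm.

import numpy as np

W = 64  # machine word length in bits (assumption)

def pack(s, sigma):
    """Pack string s over an alphabet of size sigma into 64-bit words,
    alpha = floor(W / log2(sigma)) characters per word."""
    bits = int(np.ceil(np.log2(sigma)))
    alpha = W // bits
    words = []
    for i in range(0, len(s), alpha):
        word = 0
        for j, c in enumerate(s[i:i + alpha]):
            word |= c << (j * bits)
        words.append(word)
    return words, alpha

def convolve_fft(a, b):
    """Discrete convolution of two integer sequences via FFT."""
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()          # next power of two
    fa = np.fft.rfft(np.asarray(a, float), size)
    fb = np.fft.rfft(np.asarray(b, float), size)
    return np.rint(np.fft.irfft(fa * fb, size)[:n]).astype(np.int64)

# Example: convolve two small strings over an alphabet of size 4.
text = [1, 3, 0, 2, 1, 1]
patt = [2, 1, 3]
print(pack(text, 4)[0])       # packed representation of the text
print(convolve_fft(text, patt))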
We consider the problem of pattern matching with wildcards on packed strings: find all occurrences of a pattern in a text, where both may contain wildcards. Using the convolution of packed strings, we present algorithms that are faster than the previous O(n log m)-time algorithm, where m is the length of the pattern and n the length of the text. The algorithm runs in O((nu/w) log(mu/w) + (n log|Σ|/w) log^(1+o(1))(w/log|Σ|) + (m log|Σ|/w) log(w/log|Σ|) + occ) time, where occ is the number of occurrences of the pattern in the text. Experiments show that the method is faster than the bit-parallel wildcard matching algorithm for long patterns.
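The abstract relies on the standard reduction of wildcard matching to convolutions. A common formulation (due to Clifford and Clifford) encodes each wildcard as 0 and reports position i whenever Σ_j p_j t_{i+j} (p_j − t_{i+j})² = 0, which expands into three convolutions computable with the FFT. The sketch below implements that classical reduction directly in NumPy, not the packed-word algorithm of the paper; the function names are illustrative.

import numpy as np

def correlate_fft(a, b):
    """Cross-correlation sum_j a[j] * b[i + j] for all alignments i, via FFT."""
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()
    fa = np.fft.rfft(a[::-1], size)           # reversing turns convolution into correlation
    fb = np.fft.rfft(b, size)
    return np.fft.irfft(fa * fb, size)[len(a) - 1:len(b)]

def wildcard_match(text, pattern):
    """Positions i where pattern matches text; 0 encodes a wildcard in either string."""
    t = np.asarray(text, dtype=float)
    p = np.asarray(pattern, dtype=float)
    # sum_j p_j t_{i+j} (p_j - t_{i+j})^2 = sum p^3 t - 2 p^2 t^2 + p t^3
    s = (correlate_fft(p ** 3, t)
         - 2 * correlate_fft(p ** 2, t ** 2)
         + correlate_fft(p, t ** 3))
    return [i for i, v in enumerate(np.rint(s)) if v == 0]

# Example: alphabet {1, 2, 3}, 0 is the wildcard.
print(wildcard_match([1, 2, 3, 1, 2, 0, 3], [2, 0, 1]))   # matches at 1 and 4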
This paper presents a systematic study of abstract Banach measure algebras over homogeneous spaces of compact groups. Let H be a closed subgroup of a compact group G, and let G/H be the left coset space associated with the subgroup H in G. Also, let M(G/H) be the Banach measure space consisting of all complex measures on G/H. We then introduce abstract notions of convolution and involution on the Banach measure space M(G/H).
For continuous functions A and B, which are allowed to change sign, we consider the nonlocal differential equation
Gomoku is an ancient board game. The traditional approach to solving Gomoku is to apply tree search to a Gomoku game tree. Although the rules of Gomoku are straightforward, its game tree complexity is enormous. Unlike many other board games such as chess and Shogi, the Gomoku board state is visually intuitive: analyzing the visual patterns on a Gomoku board is fundamental to playing the game. In this paper, we designed a deep convolutional neural network model to help the machine learn from training data collected from human players. Based on this original neural network model, we made some changes and obtained two variant neural networks. We compared the performance of the original neural network with its variants in our experiments. Our original neural network model achieved 69% accuracy on the training data and 38% accuracy on the testing data. Because the decisions made by the neural network are intuitive, we also designed a hard-coded convolution-based Gomoku evaluation function to assist the neural network in making decisions. This hybrid Gomoku artificial intelligence (AI) further improved the performance of a pure neural-network-based Gomoku AI.
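The abstract does not give the network layout. A minimal sketch of the kind of board-to-move CNN it describes might look like the following PyTorch module; the input encoding (two planes for black and white stones on a 15×15 board), the layer sizes, and the class name are assumptions for illustration, not the authors' model.

import torch
import torch.nn as nn

class GomokuNet(nn.Module):
    """Illustrative policy network: board planes in, move logits out."""
    def __init__(self, board_size=15, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(channels, 1, 1)      # one logit per intersection

    def forward(self, board):                      # board: (N, 2, 15, 15)
        logits = self.head(self.features(board))   # (N, 1, 15, 15)
        return logits.flatten(1)                   # (N, 225) move logits

# Example: score an empty board position.
net = GomokuNet()
probs = torch.softmax(net(torch.zeros(1, 2, 15, 15)), dim=1)
print(probs.shape)   # torch.Size([1, 225])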
Image-related computations are becoming mainstream in today's computing environment. However, Computer Science departments still offer image-related courses mostly as electives or at the graduate level. As imaging's importance increases, its coverage in the core courses needs to be considered. Most departments cannot afford to add another course to their curriculum at this level, so the only remaining choice is to integrate image-related applications into the existing core courses. This paper addresses this issue as it relates to an Algorithms course. The specific prerequisites and goals of our Algorithms course are described. The image-related applications that have been adopted are presented, and their impact on the course is discussed.
The nonuniform FFT (NuFFT) is widely used in many applications. Focusing on the most time-consuming part of the NuFFT computation, the data translation step, we develop an automatic parallel code generation tool for data translation targeting emerging multicores. The key components of this tool are two scalable parallelization strategies, namely source-driven parallelization and target-driven parallelization. Both strategies employ equally sized geometric tiling and binning to improve data locality while trying to balance workloads across the cores through dynamic task allocation. They differ in the partitioning and scheduling schemes used to guarantee mutual exclusion in data updates. The tool also includes a code generator and a code optimizer for the data translation. We evaluated our tool on a commercial multicore machine for both 2D and 3D inputs under different sample distributions with large data set sizes. The results indicate that both parallelization strategies scale well as the number of cores and the number of dimensions of the data space increase. In particular, the target-driven parallelization outperforms the other when samples are nonuniformly distributed. The experiments also show that our code optimizations bring a 32%–43% performance improvement to the data translation step of the NuFFT.
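As background for the binning idea the abstract mentions, the sketch below shows a generic way to bin nonuniform 2D samples onto equally sized geometric tiles and then spread each bin's samples onto a uniform grid with a kernel; each bin can be treated as an independent task. The bin count, Gaussian kernel, and function names are illustrative assumptions; the paper's generated code and its source-driven and target-driven scheduling schemes are not reproduced here.

import numpy as np

def bin_samples(coords, n_bins):
    """Group 2D sample indices by equally sized geometric bins (tiles)."""
    idx = np.clip((coords * n_bins).astype(int), 0, n_bins - 1)   # coords in [0, 1)^2
    bins = {}
    for k, (bx, by) in enumerate(idx):
        bins.setdefault((bx, by), []).append(k)
    return bins

def translate(coords, values, grid_size, n_bins=8, width=3, tau=0.1):
    """Spread nonuniform samples onto a uniform grid with a Gaussian kernel.
    Each bin is an independent task; here the bins are processed sequentially."""
    grid = np.zeros((grid_size, grid_size))
    for members in bin_samples(coords, n_bins).values():
        for k in members:
            x, y = coords[k] * grid_size
            i0, j0 = int(x), int(y)
            for i in range(i0 - width, i0 + width + 1):
                for j in range(j0 - width, j0 + width + 1):
                    if 0 <= i < grid_size and 0 <= j < grid_size:
                        grid[i, j] += values[k] * np.exp(-((x - i)**2 + (y - j)**2) / tau)
    return grid

# Example: 1000 random samples spread onto a 64x64 grid.
rng = np.random.default_rng(0)
pts = rng.random((1000, 2))
print(translate(pts, rng.random(1000), 64).shape)   # (64, 64)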
We present a monotonic convolution for planar regions A and B bounded by line and circular arc segments. The Minkowski sum equals the union of the cells with positive crossing numbers in the arrangement of the convolution, as is the case for the kinetic convolution. The monotonic crossing number is bounded by the kinetic crossing number, and also by the maximum number of intersecting pairs of monotone boundary chains, which is typically much smaller. We give a Minkowski sum algorithm based on the monotonic convolution. The running time is O(s + nα(n) log n + m²), versus O(s + n²) for the kinetic algorithm, with s the input size and with n and m the number of segments in the kinetic and monotonic convolutions. For inputs with a bounded number of turning points and inflection points, the running time is O(sα(s) log s), versus Ω(s²) for the kinetic algorithm. The monotonic convolution is 37% smaller than the kinetic convolution, and its arrangement is 62% smaller, based on 21 test pairs.
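For intuition about boundary convolutions, the sketch below computes a Minkowski sum in the convex polygonal special case, where the convolution of two boundaries reduces to merging their edge vectors by polar angle. It does not handle the non-convex regions with circular arcs addressed by the paper; the function name and polygon representation are assumptions for illustration.

import math

def minkowski_sum_convex(P, Q):
    """Minkowski sum of two convex polygons given as CCW vertex lists.
    Boundary convolution: merge the edge vectors of P and Q by polar angle."""
    def edges(poly):
        return [(poly[(i + 1) % len(poly)][0] - poly[i][0],
                 poly[(i + 1) % len(poly)][1] - poly[i][1]) for i in range(len(poly))]
    def lowest(poly):
        return min(poly, key=lambda p: (p[1], p[0]))
    # Start at the sum of the lowest vertices and walk the merged edge sequence.
    start = (lowest(P)[0] + lowest(Q)[0], lowest(P)[1] + lowest(Q)[1])
    all_edges = sorted(edges(P) + edges(Q),
                       key=lambda e: math.atan2(e[1], e[0]) % (2 * math.pi))
    result = [start]
    for dx, dy in all_edges[:-1]:
        result.append((result[-1][0] + dx, result[-1][1] + dy))
    return result   # collinear vertices from parallel edges are kept

# Example: sum of a unit square and a triangle.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
tri = [(0, 0), (2, 0), (0, 1)]
print(minkowski_sum_convex(square, tri))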
The taxonomy of flattened Moebius strips (FMS) is reexamined in order to systematize the basis for its development. An FMS is broadly characterized by its twist and its direction of traverse. All values of twist can be realized by combining elementary FMS configurations in a process called fusion but the result is degenerate; a multiplicity of configurations can exist with the same value of twist. The development of degeneracy is discussed in terms of several structural factors and two principles, conservation of twist and continuity of traverse. The principles implicate a corresponding pair of constructs, a process of symbolic convolution, and the inner product of symbolic vectors. Combining constructs and structural factors leads to a systematically developed taxonomy in terms of twist categories assembled from permutation groups. Taxonomical structure is also graphically revealed by the geometry of an expository edifice that validates the convolution process while displaying the products of fusion. A formulation that combines some of the algebraic precepts of Quantum Mechanics with the primitive combinatorics and degeneracies inherent to the FMS genus is developed. The potential for further investigation and application is also discussed. An appendix outlines the planar extension of the fusion concept and another summarizes a related application of convolution.
We analyze the transverse momentum spectra of J/ψ mesons produced in high energy gold–gold (Au–Au), deuteron–gold (d–Au), lead–lead (Pb–Pb), proton–lead (p–Pb), and proton–(anti)proton (p–p(p̄)) collisions measured by several collaborations at the Relativistic Heavy Ion Collider (RHIC), the Tevatron proton–antiproton collider, and the Large Hadron Collider (LHC). The collision energy (the center-of-mass energy) spans a wide range, from dozens of GeV up to 13 TeV (the top LHC energy). We consider two participant (contributor) partons, a charm quark and an anti-charm quark, in the production of J/ψ. The probability density of each quark is described by a modified Tsallis–Pareto-type function (the TP-like function), with both quarks making suitable contributions to the J/ψ transverse momentum spectrum. The convolution of two TP-like functions is therefore applied to represent the J/ψ spectrum. We fit the experimental data with this convolution function and extract the trends of the power exponent, effective temperature, and revised index with centrality, rapidity, and collision energy. Beyond that, we capture the characteristics of the J/ψ spectrum, which is of great significance for better understanding the production mechanism of J/ψ in high energy collisions.
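As a schematic of the fitting procedure, the sketch below numerically convolves two identical Tsallis–Pareto-type distributions to obtain a transverse momentum spectrum, f(pT) = ∫ f₁(p) f₂(pT − p) dp. The specific TP-like parameterization used here (power n, effective temperature T, revised index a0, constituent mass m0) is an assumption for illustration and is not necessarily the exact form fitted in the paper.

import numpy as np

def tp_like(pt, T=0.4, n=6.0, a0=1.0, m0=1.5):
    """Assumed TP-like shape: pt^a0 * [1 + (sqrt(pt^2 + m0^2) - m0)/(n*T)]^(-n)."""
    mt = np.sqrt(pt**2 + m0**2)
    return pt**a0 * (1.0 + (mt - m0) / (n * T))**(-n)

def jpsi_spectrum(pt_grid, **params):
    """Convolution of two TP-like quark distributions:
    f(pT) = int_0^pT f1(p) f2(pT - p) dp, evaluated on a uniform grid."""
    dp = pt_grid[1] - pt_grid[0]
    f = tp_like(pt_grid, **params)
    conv = np.convolve(f, f)[:len(pt_grid)] * dp    # discrete convolution, truncated
    return conv / (conv.sum() * dp)                 # normalize to unit area

pt = np.linspace(0.0, 20.0, 2001)
spec = jpsi_spectrum(pt)
print(spec.sum() * (pt[1] - pt[0]))   # ~1.0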
The coupled fractional Fourier transform is a recent ramification of the two-dimensional fractional Fourier transform, wherein the kernel is not a tensor product of one-dimensional copies but relies on two angles that are coupled to yield a new pair of transform parameters. In this paper, we introduce a novel two-dimensional Wigner distribution, coined the coupled fractional Wigner distribution (CFrWD). The prime advantage of this ramification of the Wigner distribution is that the CFrWD can efficiently handle higher-order-phase and chirp signals, which constitute a wide class of signals arising in modern communication systems. To begin with, we study some fundamental properties of the proposed CFrWD, including the marginal, shifting, conjugate-symmetry and anti-derivative properties. In addition, we formulate Moyal's principle, an inversion formula, and the convolution and correlation theorems associated with the CFrWD. Finally, we demonstrate the efficacy of the CFrWD for estimating and detecting both one-component and multi-component linear-frequency-modulated signals.
This paper is mainly concerned with the equations satisfied by the algebra of truth values of type-2 fuzzy sets. The elements of that algebra are all mappings from the unit interval into itself, with operations given by certain convolutions of operations on the unit interval. There are a number of positive results. Among them is a decision procedure, similar to the method of truth tables, for determining when an equation holds in this algebra. One particular equation that holds in this algebra implies that every subalgebra of it that is a lattice is a distributive lattice. It is also shown that this algebra is locally finite. Many questions are left unanswered. For example, we do not know whether or not this algebra has a finite equational basis, that is, whether there is a finite set of equations from which all equations satisfied by this algebra follow. This and various other topics about the equations satisfied by this algebra are discussed.
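For concreteness, the convolutions in question combine functions f, g : [0, 1] → [0, 1] by a sup-min convolution over the pointwise join or meet, e.g. (f ⊔ g)(x) = sup{ min(f(y), g(z)) : max(y, z) = x }. The sketch below evaluates these two operations on a discretized unit interval; the discretization and the example membership functions are illustrative assumptions, not part of the paper.

import numpy as np

def sup_min_convolution(f, g, op):
    """(f * g)(x) = max over (y, z) with op(y, z) = x of min(f[y], g[z]),
    where f, g are values on a uniform grid over [0, 1] and op is max or min."""
    n = len(f)
    out = np.zeros(n)
    for y in range(n):
        for z in range(n):
            x = op(y, z)
            out[x] = max(out[x], min(f[y], g[z]))
    return out

# Two illustrative fuzzy truth values on an 11-point grid over [0, 1].
t = np.linspace(0, 1, 11)
f = np.exp(-20 * (t - 0.3) ** 2)       # "about 0.3"
g = np.exp(-20 * (t - 0.7) ** 2)       # "about 0.7"

join = sup_min_convolution(f, g, max)  # type-2 analogue of "or"
meet = sup_min_convolution(f, g, min)  # type-2 analogue of "and"
print(join.round(2))
print(meet.round(2))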
We present a toolbox to compute and extract information from inhomogeneous (i.e. unequally spaced) time series. The toolbox contains a large set of operators, mapping from the space of inhomogeneous time series to itself. These operators are computationally efficient (time and memory-wise) and suitable for stochastic processes. This makes them attractive for processing high-frequency data in finance and other fields. Using a basic set of operators, we easily construct more powerful combined operators which cover a wide set of typical applications.
The operators are classified as either macroscopic operators (which have a limit value when the sampling frequency goes to infinity) or microscopic operators (which strongly depend on the actual sampling). For inhomogeneous data, macroscopic operators are more robust and more important. Examples of macroscopic operators are (exponential) moving averages, differentials, derivatives, and moving volatilities; a sketch of an exponential moving average on unequally spaced timestamps is given below.
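A minimal sketch of such a macroscopic operator, an exponential moving average evaluated directly on irregular timestamps, follows. The previous-point interpolation used here and the function name are simplifying assumptions; the toolbox described in the paper defines a richer family of interpolation schemes and iterated operators.

import math

def ema_inhomogeneous(times, values, tau):
    """Exponential moving average of an unequally spaced series.
    Uses previous-point interpolation: the value is held constant between ticks."""
    ema = values[0]
    out = [ema]
    for i in range(1, len(times)):
        mu = math.exp(-(times[i] - times[i - 1]) / tau)   # decay over the actual gap
        ema = mu * ema + (1.0 - mu) * values[i - 1]
        out.append(ema)
    return out

# Example: irregular timestamps (in seconds) and prices.
t = [0.0, 0.4, 1.7, 1.9, 5.0]
z = [100.0, 101.0, 100.5, 102.0, 101.0]
print(ema_inhomogeneous(t, z, tau=2.0))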
Pricing and hedging of financial instruments whose payoff depends on the joint realization of several underlyings (basket options, spread options, etc.) require multivariate models that are at the same time computationally tractable and flexible enough to accommodate the stylized facts of asset returns and of their dependence structure. Among the most popular are models with variance gamma (VG) marginals. The aim of this paper is to compare four multivariate models that are characterized by VG laws at unit time and to assess their performance by considering the flexibility they offer in calibrating the dependence structure for fixed marginals.
A new kind of convolution is introduced for probability measures, whose combinatorics is related to non-crossing partitions with no inner blocks other than singletons, the partitions corresponding to the fermionic creation and annihilation operators and Pauli's principle.
We show that there are eight special cases of the conditionally free convolution of Bożejko, Leinert and Speicher with the property that no nontrivial weights appear in the corresponding moment-cumulant formula. All eight convolutions are given. These include the free, the Boolean and the Fermi convolutions, another special case of the 𝐭-free convolution, and four more convolution laws that have not been treated before.
We define two families of deformations of probability measures depending on the second free cumulants and the corresponding new associative convolutions arising from the conditionally free convolution. These deformations do not commute with dilation of measures, which means that the limit theorems cannot be obtained as a direct application of the theorems for the conditionally free case. We calculate the general form of the central and Poisson limit theorems. We also find the explicit form for three important examples.
In Ref. 2 the authors introduced field operators in one-mode type interacting Fock spaces whose spectral measures have common symmetric Jacobi recurrence coefficients but differ in the nonsymmetric ones. We show that the convolution of measures arising from the addition of such field operators is the universal convolution of Accardi and Bożejko. We also present the associated central limit theorem in a more general form than in Ref. 2 and give a proof based on the properties of the convolution.
Infinite divisibility for the free additive convolution was studied in Ref. 20, where a complete characterization of ⊞-infinitely divisible distributions was given, and it was explained in Ref. 21 that this characterization is an analogue of the classical Lévy–Khintchine characterization. In fact, the analogue of the Gaussian distribution appeared even earlier, when the central limit theorem for the free additive convolution was proven in Ref. 19.
In this paper we define the notion of infinite divisibility with respect to the conditionally free convolution and describe the infinitely divisible compactly supported probability measures relative to this convolution. We also show that the Lévy–Khintchine measures associated with an infinitely divisible distribution μ can be calculated, as in the classical and free cases, as a weak limit of measures related to the convolution semigroup generated by the pair (μ, φ), where (μ, φ) is infinitely divisible in this sense.
In 2000, Carnovale and Koornwinder defined a q-convolution and proved that for some classes of measures it is associative and commutative. We investigate its positivity preserving properties, one of which is the notion of q-positivity related to q-moments. In this paper we describe an algebraic interpretation of q-positivity which leads us to the definition of a (p, q)-convolution. It has a form similar to the q-convolution of Carnovale and Koornwinder, coming from a braided algebra. For the new convolution we find an appropriate analogue of the Fourier transform and also present a central limit theorem.