In this paper, we investigate for the first time the deterministic learning problem associated with spherical scattered data. We design quadrature-weighted kernel regularized regression learning schemes associated with deterministic scattered data on the unit sphere and the spherical cap. By employing the minimal norm interpolation technique and leveraging results on the numerical integration of spherical radial basis functions over these surfaces, we derive the corresponding learning rates. Notably, our algorithm design and error analysis methods diverge from those typically employed in randomized learning algorithms. Our findings show that the learning rates are influenced by both the mesh norm of the scattered data and the smoothness of the radial basis function. This implies that when the radial basis function is sufficiently smooth, the learning rate achieved with deterministic samples outperforms that obtained with random samples. Furthermore, our results provide theoretical support for the feasibility of deterministic spherical learning, with potential applications in tectonic plate geology and the Earth sciences.
A common operation in fabricating many products is bending a thin, flat sheet of material over a three-dimensional boundary curve that outlines the desired shape. Typically the shape of the closed boundary curve is given, and the geometric problem of interest is to compute the developable surface that forms when the material is bent to conform to this curve. A classical method of solving this problem on the drawing board is by triangulation development: approximating the boundary curve by line segments and the surface by triangles whose vertices are the boundary points. Choosing the proper boundary triangulation was the job of the draftsman. Automating this process on a computer requires identifying triangulations approximating developable surfaces. Moreover, since there are usually two or more developable surfaces that interpolate a given space curve, there must be a means of selecting the most appropriate solution for the application at hand. This paper describes a computer-based method that generates a boundary triangulation by geometrically simulating the bending of the sheet as it would occur during closing of the blankholder in a sheet-metal forming problem.
We give formulae for all complex geodesics of the symmetrized bidisc G. There are two classes of geodesics: flat ones, indexed by the unit disc, and geodesics of degree 2, naturally indexed by G itself. The flat geodesics foliate G, and there is a unique geodesic through every pair of points of G. We also obtain a trichotomy result for left inverses of complex geodesics.
We give sufficient conditions for a closed smooth hypersurface W in the n-dimensional Bergman ball to be interpolating or sampling. As in the recent work [5] of Ortega-Cerdà, Schuster and the second author on the Bargmann–Fock space, our sufficient conditions are expressed in terms of a geometric density of the hypersurface that, though less natural, is shown to be equivalent to Bergman ball analogs of the Beurling-type densities used in [5]. In the interpolation theorem we interpolate L2 data from W to the ball using the method of Ohsawa–Takegoshi, extended to the present setting, rather than the Cousin I approach used in [5]. In the sampling theorem, our proof is completely different from [5]. We adapt the more natural method of Berndtsson and Ortega-Cerdà [1] to higher dimensions. This adaptation motivated the notion of density that we introduced. The approaches of [5] and the present paper both work in either the case of the Bergman ball or of the Bargmann–Fock space.
We initiate a detailed study of two-parameter Besov spaces on the unit ball of ℝn consisting of harmonic functions whose sufficiently high-order radial derivatives lie in harmonic Bergman spaces. We compute the reproducing kernels of those Besov spaces that are Hilbert spaces. The kernels are weighted infinite sums of zonal harmonics and natural radial fractional derivatives of the Poisson kernel. Estimates of the growth of kernels lead to characterization of integral transformations on Lebesgue classes. The transformations allow us to conclude that the order of the radial derivative is not a characteristic of a Besov space as long as it is above a certain threshold. Using kernels, we define generalized Bergman projections and characterize those that are bounded from Lebesgue classes onto Besov spaces. The projections provide integral representations for the functions in these spaces and also lead to characterizations of the functions in the spaces using partial derivatives. Several other applications follow from the integral representations such as atomic decomposition, growth at the boundary and of Fourier coefficients, inclusions among them, duality and interpolation relations, and a solution to the Gleason problem.
This paper is motivated by the quest for a non-group irreducible finite-index depth-2 maximal subfactor. We compute the generic fusion rules of the Grothendieck ring of Rep(PSL(2,q)), q a prime power, by applying a Verlinde-like formula to the generic character table. We then prove that this family of fusion rings (ℛq) interpolates to all integers q≥2, providing (when q is not a prime power) the first example of an infinite family of non-group-like simple integral fusion rings. Furthermore, they pass all the known criteria for (unitary) categorification. This provides infinitely many serious candidates for solving the famous open problem of whether there exists an integral fusion category that is not weakly group-theoretical. We prove that a complex categorification (if any) of an interpolated fusion ring ℛq (with q not a prime power) cannot be braided, and so its Drinfeld center must be simple. More generally, this paper proves that a non-pointed simple fusion category is non-braided if and only if its Drinfeld center is simple, and also that every simple integral fusion category is weakly group-theoretical if and only if every simple integral modular fusion category is pointed.
Dimension-adaptive sparse grid interpolation is a powerful tool for obtaining surrogate functions of smooth, medium- to high-dimensional objective models. In the case of expensive models, the efficiency of the sparse grid algorithm is governed by the time required for the function evaluations. In this paper, we first briefly analyze the inherent parallelism of the standard dimension-adaptive algorithm. Then, we present an enhanced version of the standard algorithm that permits, in each step of the algorithm, a specified number of function evaluations (equal to the number of desired processes) to be executed in parallel, thereby increasing the parallel efficiency.
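The per-step batching described in this abstract can be sketched as follows. This is a generic illustration using a Python thread pool, not the authors' implementation: the adaptive algorithm's candidate-selection heuristic is replaced here by simple FIFO order, and all names and defaults are ours.

```python
from concurrent.futures import ThreadPoolExecutor

def adaptive_step(f, candidates, n_procs=4):
    """One refinement step: evaluate exactly n_procs pending grid points
    in parallel, returning their values for the error-indicator update
    together with the remaining candidate pool. `candidates` is a list
    of grid points (tuples of coordinates)."""
    batch = candidates[:n_procs]
    with ThreadPoolExecutor(max_workers=n_procs) as pool:
        values = list(pool.map(f, batch))       # concurrent evaluations, order preserved
    return batch, values, candidates[n_procs:]
```

A driver would call `adaptive_step` repeatedly, using the returned values to update hierarchical surpluses and promote new candidate points.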
We first reobtain, in a simpler way, the Haldane fractional statistics at thermal equilibrium. We then show that the mean occupation number for fractional statistics is invariant under a group of duality transformations, a non-abelian subgroup of a fractional linear group.
Fine-resolution frequency estimation of a single-tone complex sinusoidal signal in additive white Gaussian noise is important in many fields. In this paper, a generic analytical expression is proposed to refine the residual of a dichotomous search, leading to an estimator requiring far fewer iterations than the conventional dichotomous search estimator. Compared with other existing estimators, the proposed estimator offers a better trade-off between performance and computational complexity. Simulation results demonstrate that the root-mean-square error (RMSE) of the proposed estimator is closer to the Cramer–Rao lower bound (CRLB) than that of other estimators over the whole frequency interval when the signal-to-noise ratio (SNR) is above a threshold.
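As a point of reference, the conventional dichotomous search that the proposed estimator improves upon can be sketched as follows: a coarse FFT peak pick, then repeated halving of a frequency step while moving toward the larger periodogram value. A minimal noiseless illustration; function names and the iteration count are ours, not the paper's.

```python
import numpy as np

def dichotomous_freq_est(x, fs, iters=15):
    """Single-tone frequency estimate: coarse FFT peak, then a
    dichotomous (step-halving) search on the periodogram magnitude
    within half a bin of the peak."""
    N = len(x)
    n = np.arange(N)
    P = lambda f: np.abs(np.sum(x * np.exp(-2j * np.pi * f * n)))  # periodogram at normalized f
    k = np.argmax(np.abs(np.fft.fft(x)))      # coarse estimate: FFT peak bin
    f, step = k / N, 0.5 / N                  # refine within +/- half a bin
    for _ in range(iters):
        step /= 2
        f += step if P(f + step) > P(f - step) else -step
    return fs * f
```

Each iteration halves the search step, so the residual error shrinks geometrically; the paper's analytical refinement replaces most of these iterations with a closed-form correction.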
In this paper, we present a novel method to solve the inter-frame interpolation problem in image morphing. We use our improved snake model, associated with a gravitational force field, to locate control points on the object contours. Afterwards, we apply the greedy algorithm in free-form deformations to achieve optimal warps among feature point pairs in the starting and ending frames. The new method minimizes an energy function defined over the intermediate frames; this energy imposes frame-wise and curve-wise constraints among the interpolated frames.
In this paper, we propose an improved version of the neighbor embedding super-resolution (SR) algorithm proposed by Chang et al. [Super-resolution through neighbor embedding, in Proc. 2004 IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR), Vol. 1 (2004), pp. 275–282]. The neighbor embedding SR algorithm requires intensive computational time when finding the K nearest neighbors of the input patch in a huge set of training samples. We tackle this problem by clustering the training samples into a number of clusters; for the input patch, we first find the nearest cluster center and then find the K nearest neighbors in the corresponding cluster. In contrast to Chang's method, which uses Euclidean distance to find the K nearest neighbors of a low-resolution patch, we define a similarity function and use it to find the K most similar neighbors of a low-resolution patch. We then use local linear embedding (LLE) [S. T. Roweis and L. K. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science 290(5500) (2000) 2323–2326] to find optimal coefficients, with which the linear combination of the K most similar neighbors best approximates the input patch. These coefficients are then used to form a linear combination of the K high-frequency patches corresponding to the K respective low-resolution patches (or the K most similar neighbors). The resulting high-frequency patch is then added to the enlarged (or up-sampled) version of the input patch. Experimental results show that the proposed clustering scheme efficiently reduces computational time without significantly affecting the performance.
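The cluster-then-search pipeline can be sketched as below. This is a simplified stand-in, not the paper's method: plain k-means replaces whatever clustering is used, Euclidean distance replaces the paper's similarity function inside the cluster, and all names and defaults are ours. Only the LLE weight computation follows Roweis and Saul directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def lle_weights(x, nbrs):
    """LLE reconstruction weights summing to 1 (Roweis & Saul)."""
    D = nbrs - x                                      # K x d neighbor offsets
    G = D @ D.T                                       # local Gram matrix
    G += 1e-6 * (np.trace(G) + 1.0) * np.eye(len(G))  # regularize near-singular G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()

def kmeans(X, k, iters=20):
    """Plain k-means over the low-resolution training patches."""
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels

def sr_patch(lr, lr_train, hf_train, centers, labels, K=5):
    """Nearest cluster center, K nearest neighbors inside that cluster,
    LLE weights, and the combined high-frequency patch."""
    c = np.argmin(((centers - lr) ** 2).sum(-1))
    idx = np.where(labels == c)[0]
    nn = idx[np.argsort(((lr_train[idx] - lr) ** 2).sum(-1))[:K]]
    w = lle_weights(lr, lr_train[nn])
    return w @ hf_train[nn]          # add this to the up-sampled input patch
```

Restricting the neighbor search to one cluster is what cuts the computational cost from O(number of training patches) to roughly O(number of clusters + cluster size) per input patch.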
Digital scan conversion (DSC) is the process of converting received ultrasound signals, or echoes, acquired along multiple scan lines at varying angles (polar coordinates), to a Cartesian raster format for display. In this paper, we propose a new DSC technique that uses nearest-neighbor interpolation together with linear interpolation between adjacent scan lines to reduce artifacts in the far field, with smaller angular separation between the interpolated lines. A hardware implementation is described that uses only a FIFO register and a display memory. Rapid prototyping on an ARM processor with FPGA resources validates the operation of the described system. Experimental results of the implemented design demonstrated the expected operation of the reduced-complexity architecture in terms of required memory, and the quality of the retrieved images was improved.
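The underlying polar-to-Cartesian mapping can be sketched in software as follows: for each output pixel, take the nearest range sample along the beam and interpolate linearly between the two adjacent scan lines in angle. This is an illustrative software model only, with an assumed sector geometry (beams fanning out around the +y axis); the paper's contribution is the reduced-memory hardware realization, which is not reproduced here.

```python
import numpy as np

def scan_convert(echo, r_max, thetas, nx, ny):
    """Map a polar echo array (n_lines x n_samples) to a Cartesian
    raster: nearest neighbor in range, linear interpolation in angle
    between adjacent scan lines. `thetas` are the sorted beam angles
    measured from the +y axis."""
    n_lines, n_samp = echo.shape
    out = np.zeros((ny, nx))
    xs = np.linspace(-r_max, r_max, nx)
    ys = np.linspace(0.0, r_max, ny)
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            r = np.hypot(x, y)
            th = np.arctan2(x, y)                       # angle from the +y axis
            if r > r_max or th < thetas[0] or th > thetas[-1]:
                continue                                # pixel outside the sector
            ir = min(int(round(r / r_max * (n_samp - 1))), n_samp - 1)
            j = int(np.clip(np.searchsorted(thetas, th) - 1, 0, n_lines - 2))
            t = (th - thetas[j]) / (thetas[j + 1] - thetas[j])
            out[iy, ix] = (1.0 - t) * echo[j, ir] + t * echo[j + 1, ir]
    return out
```

In hardware, the same per-pixel arithmetic is replaced by precomputed addresses and blend factors so that only a FIFO and the display memory are needed.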
Fractional band-pass filters are a promising area in signal processing. They are especially attractive for processing biomedical signals, such as EEG, where large signal distortion is undesirable. We present two structures of fractional band-pass filters: one an analog of the classical second-order filter, and one arising from the parallel connection of two fractional low-pass filters. We discuss the Laguerre Impulse Response Approximation (LIRA) method for filter implementation, along with sufficient conditions under which a filter can be realized with it. We then discuss methods of filter tuning; in particular, we present some analytical results along with an optimization algorithm for numerical tuning. The filters are implemented and tested on EEG signals. We discuss the results, highlighting possible limitations and the potential for further development.
Transcendental functions cannot be expressed algebraically, which makes efficient and accurate approximation challenging. Lookup tables (LUTs) and piecewise fitting are common traditional methods. However, they either trade approximation accuracy for computation and storage or require unaffordable resources when the expected accuracy is high. In this paper, we develop a high-precision approximation method that is error-controllable and resource-efficient. The method divides a transcendental function into two parts based on slope. The steep part is approximated by a LUT with interpolation, while the gentle part is approximated by the range-addressable lookup table (RALUT) algorithm. The boundary between the two parts is adjusted adaptively according to the expected accuracy. Moreover, we analyze the error sources of our method in detail and propose an optimal selection method for table resolution and data bit-width. The proposed algorithm is verified on an actual FPGA board, and the results show that the method's error can be made arbitrarily low. Compared with other methods, the proposed algorithm shows a more stable increase in resource consumption as the required accuracy grows, consuming fewer hardware resources especially at middle accuracies, with at least 30% of LUT slices saved.
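The two-part split can be illustrated with a toy software model: a uniform LUT with linear interpolation on the steep region and a range-addressable table (piecewise-constant values over variable-width input ranges) on the gentle region. This is a sketch of the general idea under assumed parameters (split point, table sizes, tolerance), not the paper's fixed-point hardware design or its adaptive boundary selection.

```python
import numpy as np

def build_hybrid(f, split, hi, n_lut=64, tol=1e-3):
    """Tables for f on [0, hi]: uniform LUT on the steep part [0, split],
    greedy RALUT (each range grown while f stays within tol of the
    range-start value) on the gentle part (split, hi]."""
    xs = np.linspace(0.0, split, n_lut)
    lut = f(xs)                                   # uniform LUT entries
    bounds, vals = [split], []
    x, step = split, (hi - split) / 4096          # fine probing grid for range growth
    while x < hi:
        v = f(x)
        x2 = x
        while x2 + step <= hi and abs(f(x2 + step) - v) <= tol:
            x2 += step
        x = x2 + step
        bounds.append(min(x, hi))
        vals.append(v)
    return (xs, lut), (np.array(bounds), np.array(vals))

def eval_hybrid(x, lut_part, ralut_part):
    xs, lut = lut_part
    bounds, vals = ralut_part
    if x <= xs[-1]:                               # steep part: LUT + linear interpolation
        i = min(int(x / xs[-1] * (len(xs) - 1)), len(xs) - 2)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return (1.0 - t) * lut[i] + t * lut[i + 1]
    i = int(np.searchsorted(bounds, x, side="right")) - 1  # gentle part: RALUT
    return float(vals[min(i, len(vals) - 1)])
```

The RALUT side stores one value per range, so flat regions of the function consume very few entries, which is the resource saving the hybrid exploits.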
In the present paper, the smoothness of a Coalescence Hidden-variable Fractal Interpolation Surface (CHFIS), as described by its Lipschitz exponent, is investigated. This is achieved by considering the simulation of a generally uneven surface using CHFIS. The influence of free variables and Lipschitz exponent on the smoothness of CHFIS is demonstrated by considering interpolation data generated from a sample surface.
We consider the theory and applications of bivariate fractal interpolation surfaces constructed as attractors of iterated function systems. In particular, surfaces of this kind constructed on rectangular domains have proven efficient in computer graphics and image processing. The methodology followed is based on the labeling of the vertices of the rectangular domain rather than on constraints satisfied by the contractivity factors or the boundary data.
Let T = {τ1, τ2, …, τK; p1, p2, …, pK} be a position dependent random map on [0, 1], where {τ1, τ2, …, τK} is a collection of nonsingular maps on [0, 1] into [0, 1] and {p1, p2, …, pK} is a collection of position dependent probabilities on [0, 1]. We assume that the random map T has a unique absolutely continuous invariant measure μ with density f*. Based on interpolation, a piecewise linear approximation method for f* is developed and a proof of convergence of the piecewise linear method is presented. A numerical example for a position dependent random map is presented.
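A piecewise-linear approximation of f* can be illustrated with a Monte-Carlo stand-in: run a long orbit of the random map T, applying τk at each step with position-dependent probability pk(x), histogram the orbit, and interpolate node values linearly. This is not the paper's interpolation-based operator scheme or its convergence proof, only a simulation sketch; all names and defaults are ours.

```python
import numpy as np

def pl_invariant_density(maps, probs, n_nodes=17, n_orbit=100000, seed=1):
    """Nodes and node values of a piecewise-linear estimate of the
    invariant density f* of the random map T = {tau_k; p_k}."""
    rng = np.random.default_rng(seed)
    x, samples = rng.random(), np.empty(n_orbit)
    for i in range(n_orbit):
        p = np.array([pk(x) for pk in probs])     # position-dependent probabilities
        c = np.cumsum(p / p.sum())
        k = int(np.searchsorted(c, rng.random())) # choose which tau_k to apply
        x = maps[k](x)
        samples[i] = x
    hist, _ = np.histogram(samples, bins=n_nodes - 1, range=(0.0, 1.0), density=True)
    nodes = np.linspace(0.0, 1.0, n_nodes)
    vals = np.empty(n_nodes)                      # node value = mean of adjacent bins
    vals[0], vals[-1] = hist[0], hist[-1]
    vals[1:-1] = 0.5 * (hist[:-1] + hist[1:])
    return nodes, vals
```

For the random map with τ1(x) = x/2, τ2(x) = (x+1)/2 and constant probabilities 1/2, the invariant measure is Lebesgue, so the estimated density should be close to 1 everywhere.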
The neural network discussed in this paper is a self-trained network for LArge Memory STorage And Retrieval (LAMSTAR) of information. It employs features such as forgetting, interpolation, extrapolation and filtering to enhance processing and memory efficiency and to allow zooming in and out of memories. The network is based on modified SOM (Self-Organizing Map) modules and on arrays of link-weight vectors that channel information vertically and horizontally throughout the network. Direct feedback and up/down counting serve to set these link weights via a higher-hierarchy performance-evaluator element, which also provides high-level interrupts. Pseudo-random modulation of the link weights prevents dogmatic network behavior. The input word is a coded vector of several sub-words (sub-vectors). These features facilitate very rapid intelligent retrieval and diagnosis over very large memories, giving the network the properties of a self-adaptive expert system with continuously adjustable weights. The authors have applied the network to simple medical-diagnosis and fault-detection problems.
Good-quality terrain models are becoming increasingly important as applications such as runoff modelling are developed that demand better surface-orientation information than is available from traditional interpolation techniques. A consequence is that poor-quality elevation grids must be massaged before they provide usable runoff models. Rather than using direct data acquisition, this project concentrated on using available contour data because, despite modern techniques, contour maps are still the most widely available form of elevation information. Recent work on the automatic reconstruction of curves from point samples and on the generation of medial axis transforms (skeletons) has greatly helped in expressing the spatial relationships between sets of topographic contours. With these techniques, the insertion of skeleton points into a TIN model guarantees the elimination of all "flat triangles", whose three vertices share the same elevation. Additional assumptions about the local uniformity of slopes give enough information to assign elevation values to these skeleton points. In addition, various interpolation techniques were compared using the enriched contour data. Examination of the quality and consistency of the resulting maps indicates the properties required of an interpolation method to produce terrain models with valid slopes. The result is a surprisingly realistic model of the surface, that is, one that conforms well to our subjective interpretation of what a real landscape should look like.
The problem of reconstructing a three-dimensional object from parallel slices has application in computer vision and medicine. Here we explore a specific existence question: given two polygons in parallel planes, is it always possible to find a polyhedron that has those polygons as faces, and whose vertices are precisely the vertices of the two polygons? We answer this question in the negative by providing an example of two polygons that cannot be connected to form a simple polyhedron. One polygon is a triangle, the other a somewhat complicated shape with spiraling pockets.