We establish a moderate deviation principle for linear eigenvalue statistics of β-ensembles in the one-cut regime with a real-analytic potential. The main ingredient is to obtain uniform estimates for the correlators of a family of perturbations of β-ensembles using the loop equations.
Contemporary wideband radio telescope backends are generally developed on Field Programmable Gate Array (FPGA) or hybrid (FPGA+GPU) platforms. One of the challenges faced while developing such instruments is the functional verification of the signal processing backend at various stages of development. In the case of an interferometer or pulsar backend, the typical requirement is for one independent noise source per input, with provision for a common, correlated signal component across all the inputs, with a controllable level of correlation. This paper describes the design of an FPGA-based variable correlation Digital Noise Source (DNS), and its applications to built-in testing and debugging of correlators and beamformers. This DNS uses a Central Limit Theorem-based approach for the generation of Gaussian noise, and the architecture is optimized for resource requirements and ease of integration with existing signal processing blocks on the FPGA.
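The statistics behind such a CLT-based noise source can be sketched in a few lines: summing twelve uniform variates yields an approximately Gaussian sample with unit variance, and mixing a shared "common" stream into each channel gives a controllable correlation coefficient. This is an illustrative NumPy model of the statistics only, not the FPGA implementation; the function names and the mixing formula are our own assumptions.

```python
import numpy as np

def clt_gaussian(n_samples, n_sum=12, rng=None):
    """Approximate Gaussian noise via the Central Limit Theorem: sum n_sum
    uniform variates. n_sum=12 gives unit variance with no extra scaling."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random((n_samples, n_sum))   # uniform on [0, 1), variance 1/12 each
    # The sum of 12 such variates has mean 6 and variance 1; centre it.
    return u.sum(axis=1) - n_sum * 0.5

def correlated_channels(n_ch, n_samples, rho, rng=None):
    """n_ch unit-variance noise channels with pairwise correlation ~rho,
    built by mixing one common stream into independent per-channel streams."""
    rng = np.random.default_rng() if rng is None else rng
    common = clt_gaussian(n_samples, rng=rng)
    indep = np.stack([clt_gaussian(n_samples, rng=rng) for _ in range(n_ch)])
    return np.sqrt(rho) * common + np.sqrt(1.0 - rho) * indep

x = correlated_channels(2, 100_000, rho=0.5, rng=np.random.default_rng(0))
print(np.corrcoef(x)[0, 1])   # close to the requested rho of 0.5
```

On an FPGA the uniform variates would come from independent pseudo-random generators (e.g. LFSRs) rather than a software RNG, but the variance and correlation bookkeeping is the same.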
We present an overview of the ‘ICE’ hardware and software framework that implements large arrays of interconnected field-programmable gate array (FPGA)-based data acquisition, signal processing and networking nodes economically. The system was conceived for application to radio, millimeter and sub-millimeter telescope readout systems that have requirements beyond typical off-the-shelf processing systems, such as careful control of interference signals produced by the digital electronics, and clocking of all elements in the system from a single precise observatory-derived oscillator. A new generation of telescopes operating in these frequency bands, designed with a vastly increased emphasis on digital signal processing to support their detector multiplexing technology or high-bandwidth correlators (with data rates exceeding a terabyte per second), is becoming common. The ICE system is built around a custom FPGA motherboard that makes use of a Xilinx Kintex-7 FPGA and an ARM-based co-processor. The system is specialized for specific applications through software, firmware and custom mezzanine daughter boards that interface to the FPGA through the industry-standard FPGA mezzanine card (FMC) specification. For high-density applications, the motherboards are packaged in 16-slot crates with ICE backplanes that implement a low-cost passive full-mesh network between the motherboards in a crate, allow high-bandwidth interconnection between crates and enable data offload to a computer cluster. A Python-based control software library automatically detects and operates the hardware in the array.
Examples of specific telescope applications of the ICE framework are presented, namely the frequency-multiplexed bolometer readout systems used for the South Pole Telescope (SPT) and Simons Array and the digitizer, F-engine, and networking engine for the Canadian Hydrogen Intensity Mapping Experiment (CHIME) and Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) radio interferometers.
A 32GHz bandwidth, VLBI-capable correlator and phased array has been designed and deployed at the Smithsonian Astrophysical Observatory’s Submillimeter Array (SMA). The SMA Wideband Astronomical ROACH2 Machine (SWARM) integrates two instruments: a correlator with 140kHz spectral resolution across its full 32GHz band, used for connected interferometric observations, and a phased array summer used when the SMA participates as a station in the Event Horizon Telescope (EHT) very long baseline interferometry (VLBI) array. For each SWARM quadrant, Reconfigurable Open Architecture Computing Hardware (ROACH2) units, an open-source design from the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER), are equipped with a pair of ultra-fast analog-to-digital converters (ADCs), a field programmable gate array (FPGA) processor, and eight 10 Gigabit Ethernet (GbE) ports. A VLBI data recorder interface designated the SWARM digital back end, or SDBE, is implemented with a ninth ROACH2 per quadrant, feeding four Mark6 VLBI recorders with an aggregate recording rate of 64 Gbps. This paper describes the design and implementation of SWARM, as well as its deployment at the SMA, with reference to verification and science data.
Traditionally, back-ends for radio telescopes have been built using a hardware-based approach with ASICs, FPGAs, etc. With advancements in the processing power of CPUs, software-based systems have emerged as an alternative, an approach that has received additional impetus with the advent of GPU-based computing. We present here the design of a hybrid system combining the best of FPGAs, CPUs and GPUs to implement a next-generation back-end for the upgraded GMRT. This back-end can process 400 MHz bandwidth signals from 32 dual-polarized antennas, for both interferometry and beamformer applications, including narrowband spectral line modes for the interferometer; incoherent array and phased array modes of operation for the beamformer; and a voltage mode attached to a real-time coherent dedispersion system for the beamformer. We describe in detail the design and architecture of this system, including its novel features and capabilities. We also present sample results from the system that validate its performance in conjunction with the entire receiver chain of the upgraded GMRT.
In this paper, we discuss the characteristics and operation of the Astro Space Center (ASC) software FX correlator, an important component of the space–ground interferometer of the Radioastron project. This project performs joint observations of compact radio sources using a 10m space radio telescope (SRT) together with ground radio telescopes at 92, 18, 6 and 1.3 cm wavelengths. We describe the main features of space–ground VLBI data processing for the Radioastron project using the ASC correlator. The quality of the implemented fringe search procedure provides positive results without significant losses in correlated amplitude. The ASC correlator has a computational power close to real-time operation and offers a number of processing modes: “Continuum”, “Spectral Line”, “Pulsars”, “Giant Pulses” and “Coherent”. Special attention is paid to the peculiarities of Radioastron space–ground VLBI data processing. The algorithms for time delay and delay rate calculation are also discussed, as these are of fundamental importance for data correlation with space–ground interferometers. During five years of successful Radioastron SRT operation, the ASC correlator has shown high potential for satisfying the steadily growing needs of current and future ground and space VLBI science. Results of ASC software correlator operation are demonstrated.
In radio interferometry, the quantization process introduces a bias in the magnitude and phase of the measured correlations which translates into errors in the measurement of source brightness and position in the sky, affecting both the system calibration and image reconstruction. In this paper, we investigate the biasing effect of quantization in the measured correlation between complex-valued inputs with a circularly symmetric Gaussian probability density function (PDF), which is the typical case for radio astronomy applications. We start by calculating the correlation between the input and quantization error and its effect on the quantized variance, first in the case of a real-valued quantizer with a zero mean Gaussian input and then in the case of a complex-valued quantizer with a circularly symmetric Gaussian input. We demonstrate that this input-error correlation is always negative for a quantizer with an odd number of levels, while for an even number of levels, this correlation is positive in the low signal level regime. In both cases, there is an optimal interval for the input signal level for which this input-error correlation is very weak and the model of additive uncorrelated quantization noise provides a very accurate approximation. We determine the conditions under which the magnitude and phase of the measured correlation have negligible bias with respect to the unquantized values: we demonstrate that the magnitude bias is negligible only if both unquantized inputs are optimally quantized (i.e. when the uncorrelated quantization error model is valid), while the phase bias is negligible when (1) at least one of the inputs is optimally quantized, or when (2) the correlation coefficient between the unquantized inputs is small. Finally, we determine the implications of these results for radio interferometry.
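The sign behavior of the input-error correlation described above can be checked with a small Monte Carlo model of a real-valued uniform quantizer (the real-valued case the paper treats first). The quantizer below and the chosen signal levels are illustrative assumptions, not the authors' exact setup: an odd number of levels gives a mid-tread characteristic (a zero level), an even number a mid-rise one.

```python
import numpy as np

def uniform_quantizer(x, n_levels, step=1.0):
    """Uniform quantizer: mid-tread (has a zero level) for odd n_levels,
    mid-rise for even n_levels; output clipped to n_levels levels."""
    if n_levels % 2:
        q = step * np.round(x / step)            # mid-tread
    else:
        q = step * (np.floor(x / step) + 0.5)    # mid-rise
    lim = step * (n_levels - 1) / 2
    return np.clip(q, -lim, lim)

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)

# Odd number of levels (3): input-error correlation comes out negative.
x3 = 1.0 * x                                     # RMS input ~ step size
e3 = uniform_quantizer(x3, n_levels=3) - x3      # quantization error
rho_odd = np.corrcoef(x3, e3)[0, 1]

# Even number of levels (4) at low signal level: the correlation is positive,
# since nearly every sample lands on the two innermost (+/- step/2) levels.
x4 = 0.05 * x                                    # RMS input << step size
e4 = uniform_quantizer(x4, n_levels=4) - x4
rho_even = np.corrcoef(x4, e4)[0, 1]

print(rho_odd, rho_even)   # rho_odd < 0, rho_even > 0
```

Sweeping the input RMS level in such a model also reveals the intermediate regime the abstract mentions, where the input-error correlation is very weak and the additive uncorrelated quantization noise model is accurate.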
With the ever-increasing data rates in radio astronomy, a universal Field Programmable Gate Array (FPGA)-based hardware platform which can be used at different locations in the signal processing chain, like a beamformer, data router or correlator, would reduce development time significantly. In this paper, we present the design of such a platform, the UniBoard2. With UniBoard2, both large rack-based and single-board systems can be made. Standard Quad Small Form-factor Pluggable (QSFP) input and output (IO) interfaces on the front side make it easy to interface UniBoard2 to standard 40 Gigabit Ethernet (GbE) network equipment. Hardware design challenges, like transceiver links, power supplies, power dissipation and cooling are described. The paper concludes with some examples of systems (like beamformers and correlators) that can be built using the UniBoard2 hardware platform.
One of the main technologies to open up a wider field of view for today’s radio telescopes is the phased array. This is especially the case for radio astronomy instruments operating below 2GHz. Nowadays, existing dish-type instruments are being upgraded with phased array feeds (PAF) in the focal plane. This increases the field of view at the expense of needing more analog electronics and digital signal processing. One of the digital signal processing functionalities used to combine the digitized signals from the PAF is a beam-former, which creates multiple high-sensitivity beams within the field of view of the dish. Before beams can be formed, the signals from the PAF need to be calibrated using a correlator. In this paper, we present a solution where these two operations are combined by using the beam-former also as a correlator. The statistics unit that is part of the beam-former implementation can be used as well for calculating correlation products. With the proper settings of the beam-former weights, each beamlet (a frequency sub-band pointed in a given direction) can be used as a single cross-correlation product. By implementing the correlator on the beam-former, the digital resources and development time can be reduced. To validate the idea, two versions of the algorithm were implemented in the Apertif PAF system on the Westerbork Synthesis Radio Telescope (WSRT). Results show that both the two full-bandwidth correlation matrices per beam, needed to determine the static beam weights for the calibration, and a single column of the correlation matrix, used to compensate for any drift between the receiver chains, can be computed.
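The weight trick can be illustrated with a toy model: if each of two beamlets is given a one-hot weight vector, the beamlet pair reduces to a plain element pair, and the beam-former's cross-statistic equals the ordinary cross-correlation product between those elements. The data model and names below are hypothetical, not the Apertif implementation.

```python
import numpy as np

def beamform(x, w):
    """Beamlet output: weighted sum over array elements.
    x: (n_elem, n_samp) complex element time series; w: (n_elem,) weights."""
    return w @ x

# Toy data: 4 elements sharing a common complex Gaussian signal plus
# independent receiver noise (purely illustrative).
rng = np.random.default_rng(0)
n_elem, n_samp = 4, 100_000
common = rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp)
x = 0.5 * common + (rng.standard_normal((n_elem, n_samp))
                    + 1j * rng.standard_normal((n_elem, n_samp)))

# One-hot beam weights select single elements, so the beamlet cross-statistic
# is exactly the element-pair cross-correlation product.
w_i, w_j = np.eye(n_elem)[1], np.eye(n_elem)[2]
beam_i, beam_j = beamform(x, w_i), beamform(x, w_j)
vis_via_beams = np.vdot(beam_i, beam_j) / n_samp   # beamlet cross-statistic
vis_direct = np.vdot(x[1], x[2]) / n_samp          # direct correlation
print(np.allclose(vis_via_beams, vis_direct))      # True: identical products
```

In a real PAF system each beamlet occupies one frequency sub-band, so different element pairs can be scheduled onto different beamlets; the one-hot weights here only show why the statistics unit of the beam-former suffices for correlation.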
We present an overview of the Graphics Processing Unit (GPU)-based spatial processing system created for the Canadian Hydrogen Intensity Mapping Experiment (CHIME). The design employs AMD S9300x2 GPUs and readily available commercial hardware in its processing nodes to provide a cost- and power-efficient processing substrate. These nodes are supported by a liquid-cooling system which allows continuous operation with modest power consumption and in all but the most adverse conditions. Capable of continuously correlating 2048 receiver-polarizations across 400 MHz of bandwidth, the CHIME X-engine constitutes the most powerful radio correlator currently in existence. It receives 6.6 Tb/s of channelized data from CHIME’s FPGA-based F-engine, and the primary correlation task requires 8.39×10¹⁴ complex multiply-and-accumulate operations per second. The same system also provides formed-beam data products to commensal FRB and pulsar experiments; it constitutes a general spatial-processing system of unprecedented scale and capability, with correspondingly great challenges in computation, data transport, heat dissipation, and interference shielding.
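The quoted operation count follows from simple arithmetic: an N-input correlator forms N(N+1)/2 products (cross-correlations plus autocorrelations), each updated once per channelized sample, at an aggregate sample rate equal to the processed bandwidth. A quick back-of-envelope check, assuming one complex multiply-and-accumulate per product per sample:

```python
# Back-of-envelope check of the CHIME X-engine correlation rate.
n_inputs = 2048                              # receiver-polarizations
bandwidth_hz = 400e6                         # processed bandwidth
baselines = n_inputs * (n_inputs + 1) // 2   # cross + auto products
cmac_per_s = baselines * bandwidth_hz        # one CMAC per product per sample
print(f"{cmac_per_s:.3e}")                   # → 8.393e+14, matching the abstract
```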
Modern and upcoming radio telescopes at low frequencies are often characterized by hundreds or thousands of antenna elements operating at wide bandwidths up to about 0.5GHz. A spectral correlator for such an array is required to estimate the cross-power spectrum of the response of each element with that of every other element with a high spectral resolution. The resulting all-to-all connectivity between signals from the entire array poses a serious bottleneck. In this paper, we propose a simple digital receiver architecture that interfaces the digitized time series from a large number of antenna elements to a High-Performance Computing (HPC) cluster through a communication switch to overcome the data ingest bottleneck. Each HPC node can then perform wideband processing in finite but significant time-slices for the entire array. We explain in detail the implementation of our architecture for the proposed expansion of the Ooty Wide Field Array (OWFA) into a 1056-element array. Since the proposed digital receiver is based on a Field Programmable Gate Array (FPGA), it can be reconfigured for different applications. This is illustrated by considering the case of Phased Array Feeds (PAF) for the proposed expanded Giant Metrewave Radio Telescope (eGMRT).
Optical processors offer many useful operations for computer vision. The maturity of these systems and the repertoire of operations they can perform is increasing rapidly. Hence a brief updated overview of this area merits attention. Many of the new algorithms employed can also be realized in digital and analog VLSI technology and hence computer vision researchers should benefit from this review. We consider optical morphological, feature extraction, correlation and neural network systems for different levels of computer vision with image processing examples and hardware fabrication work in each area included.