In this paper, we study wormhole-routed networks and assess their suitability for real-time traffic in a priority-driven paradigm. Traditional blocking flow control in wormhole routing may lead to priority inversion, in the sense that high-priority packets can be blocked by low-priority packets for an unbounded time. This priority inversion causes frequent deadline misses even at low network loads. This paper therefore proposes two preemptive flow control policies in which high-priority packets can preempt network resources held by low-priority packets, thereby resolving the priority inversion. Our simulations show that the preemptive flow controls significantly reduce deadline miss ratios for various real-time traffic configurations.
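As a minimal illustration (not the paper's exact policies), the sketch below contrasts blocking and preemptive arbitration for a single virtual channel; the `Packet` and `VirtualChannel` types and the eviction behavior are assumptions:

```python
# Sketch: a high-priority packet may preempt the virtual channel held by a
# lower-priority packet instead of blocking behind it indefinitely.
from dataclasses import dataclass

@dataclass
class Packet:
    pid: int
    priority: int  # lower value = higher priority

@dataclass
class VirtualChannel:
    holder: Packet | None = None

def request_channel(vc: VirtualChannel, pkt: Packet, preemptive: bool) -> bool:
    """Return True if pkt acquires the channel."""
    if vc.holder is None:
        vc.holder = pkt
        return True
    if preemptive and pkt.priority < vc.holder.priority:
        # Preempt: the low-priority holder is evicted (buffered or dropped
        # and retransmitted, depending on the policy), so the high-priority
        # packet is never blocked for an unbounded time.
        vc.holder = pkt
        return True
    return False  # blocking behavior: wait for the holder to release

vc = VirtualChannel()
request_channel(vc, Packet(pid=1, priority=5), preemptive=True)
print(request_channel(vc, Packet(pid=2, priority=1), preemptive=True))  # True
```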
We extend the classic work of R.J. Parikh on context-free languages with min and max operators over a unary alphabet. The new theory, called CAN (Compositional Algebra of Numbers), can be used to model software processes that can be concatenated, executed concurrently, and invoked recursively. We propose and analyze an algorithm that constructs the execution time sets of a CAN in semilinear form. Finally, we consider several interesting variations of CAN whose execution time sets can also be constructed algorithmically.
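For background (this is the standard definition, not a result of the paper): a set of naturals is semilinear when it is a finite union of linear sets, which is the form in which such an algorithm outputs execution time sets:

```latex
S \;=\; \bigcup_{j=1}^{k} \left\{\, c_j + \lambda_1 p_{j,1} + \cdots + \lambda_{m_j} p_{j,m_j} \;\middle|\; \lambda_1,\dots,\lambda_{m_j} \in \mathbb{N} \,\right\}
```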
We examine the minimum amount of memory required for real-time, as opposed to one-way, computation accepting nonregular languages. We consider deterministic, nondeterministic, and alternating machines working within strong, middle, and weak space, and processing general or unary inputs. In most cases, we are able to show that the lower bounds for one-way machines remain tight in the real-time case. Memory lower bounds for nonregular acceptance on other devices are also addressed. It is shown that increasing the number of stacks of real-time pushdown automata can yield an exponential improvement in the total amount of space used for nonregular language recognition.
In this paper we develop a theory of timed systems that relates to time granularity. We use the well-known notion of bisimulation equivalence to study the effect of changing granularity. We identify situations where measuring time more accurately has no effect on the equivalence; similarly, we present situations where measuring time less accurately has no effect on the equivalence. We also present properties of the situations where the semantics is indeed altered by a change in time granularity.
The development of suitable EEG-based emotion recognition systems has become a major goal for Brain-Computer Interface (BCI) applications over the last decades. However, algorithms and procedures for real-time classification of emotions remain scarce. The present study investigates the feasibility of real-time emotion recognition through the selection of parameters such as an appropriate time window segmentation, target frequency bands, and cortical regions. We recorded the EEG neural activity of 24 participants while they watched and listened to an audiovisual database composed of positive and negative emotional video clips. We tested 12 different temporal window sizes, 6 frequency-band ranges, and 60 electrodes located across the entire scalp. Our results showed a correct classification of 86.96% for positive stimuli; the correct classification for negative stimuli was slightly lower (80.88%). The best time window size among the tested 1 s to 12 s segments was 12 s. Although more studies are still needed, these preliminary results provide a reliable basis for developing accurate EEG-based emotion classification.
Emotion estimation systems based on brain and physiological signals such as electroencephalography (EEG), blood-volume pressure (BVP), and galvanic skin response (GSR) have gained special attention in recent years due to the possibilities they offer. The field of human-robot interaction (HRI) could benefit from a broadened understanding of brain and physiological emotion encoding, together with the use of lightweight software and cheap wearable devices, and thus improve the capability of robots to fully engage with users' emotional reactions. In this paper, a previously developed methodology for real-time emotion estimation aimed at use in the field of HRI is tested under realistic circumstances using a self-generated database of dynamically evoked emotions. Other state-of-the-art real-time approaches address emotion estimation using constant stimuli to facilitate the analysis of the evoked responses, which keeps them far from real scenarios, in which emotions are evoked dynamically. The proposed approach studies the feasibility of the previously developed emotion estimation methodology under an experimental paradigm that imitates a more realistic scenario, using a dramatic film to dynamically evoke emotions. The emotion estimation methodology proved to meet real-time constraints while maintaining high emotion estimation accuracy on the self-produced multi-signal database of dynamically evoked emotions.
This paper proposes a real-time super-resolution (SR) system. The system performs a fast SR algorithm that generates a high-resolution image from a low-resolution image using direct regression functions with an up-scaling factor of 2. The algorithm comprises two stages: feature learning and SR image prediction. The feature learning stage is performed offline, during which several regression functions are trained. The SR image prediction stage is implemented on the proposed system to generate high-resolution image patches. The system, implemented on a Xilinx Virtex-7 field-programmable gate array, achieves an output resolution of 3840×2160 (UHD) at 85 fps with a throughput of 700 Mpixels/s. Structural similarity (SSIM) is measured to assess image quality. Experimental results show that the proposed system provides high image quality for real-time applications and scales well with resolution.
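As an illustrative sketch (not the paper's trained functions), the prediction stage can be pictured as applying a learned linear regressor that maps each flattened low-resolution patch to a 2x-upscaled patch; the patch size and the random stand-in regressor below are assumptions:

```python
# Patch-wise regression SR with up-scaling factor 2: hr_patch = W @ lr_patch + b.
import numpy as np

def predict_sr(lr: np.ndarray, W: np.ndarray, b: np.ndarray, p: int = 3) -> np.ndarray:
    """Upscale a grayscale image by 2 using one learned regressor.

    Each flattened p*p LR patch (stride p) is mapped to a flattened
    (2p)*(2p) HR patch.
    """
    h, w = lr.shape
    h, w = h - h % p, w - w % p            # crop to a multiple of the patch size
    hr = np.zeros((2 * h, 2 * w))
    for y in range(0, h, p):
        for x in range(0, w, p):
            v = lr[y:y + p, x:x + p].reshape(-1)
            hr[2 * y:2 * y + 2 * p, 2 * x:2 * x + 2 * p] = \
                (W @ v + b).reshape(2 * p, 2 * p)
    return hr

# Usage with a random regressor, for shape checking only:
rng = np.random.default_rng(0)
W = rng.standard_normal((36, 9))   # maps a 3x3 LR patch to a 6x6 HR patch
b = np.zeros(36)
print(predict_sr(rng.random((12, 12)), W, b).shape)  # (24, 24)
```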
Michael Fischer has proposed a mutual exclusion algorithm that ingeniously exploits real time. We prove this algorithm correct using the time-honored technique of establishing an appropriate invariant.
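For readers unfamiliar with the protocol, here is a minimal sketch of Fischer's algorithm (timing constants assumed; CPython's atomic assignment stands in for an atomic shared register). Safety rests on the real-time assumption that the delay D exceeds the maximum time between a process reading x == 0 and completing its write x := i:

```python
import threading, time

x = 0          # shared register; 0 means "free"
D = 0.05       # real-time delay: must exceed the maximum write latency

def fischer(i: int, critical_section) -> None:
    global x
    while True:
        while x != 0:          # busy-wait until the register looks free
            time.sleep(0.001)
        x = i                  # announce intent within bounded time
        time.sleep(D)          # wait long enough for competing writes to land
        if x == i:             # register still holds our id: safe to enter
            critical_section()
            x = 0              # release
            return
        # another process overwrote x; back off and retry

def demo(i: int) -> None:
    fischer(i, lambda: print(f"process {i} in its critical section"))

threads = [threading.Thread(target=demo, args=(n,)) for n in (1, 2, 3)]
for t in threads: t.start()
for t in threads: t.join()
```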
Based on evolutionary computation, a new 3D route planner for unmanned air vehicles is presented. In our evolutionary route planner, the individual candidates are evaluated with respect to the workspace, so explicit computation of the configuration space is avoided. Using Digital Terrain Elevation Data, our approach can find a near-optimal route that efficiently increases the probability of survival. By using a problem-specific representation of candidate solutions and genetic operators, routes are generated in real time and can take into account different kinds of mission constraints, such as minimum route leg length, flying altitude, maximum turning angle, and a fixed approach vector to the goal position.
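A hypothetical sketch of the kind of constraint-penalized fitness such a planner evaluates directly in the workspace (the penalty weights and constraint values are assumptions, not the paper's settings):

```python
import math

def turn_angle(a, b, c):
    """Heading change (degrees) at waypoint b, on the 2D projection."""
    h1 = math.atan2(b[1] - a[1], b[0] - a[0])
    h2 = math.atan2(c[1] - b[1], c[0] - b[0])
    d = math.degrees(abs(h2 - h1))
    return min(d, 360 - d)

def fitness(route, min_leg=1.0, max_turn=45.0, min_alt=50.0):
    """Lower is better: path length plus penalties for violated constraints."""
    cost = sum(math.dist(route[i], route[i + 1]) for i in range(len(route) - 1))
    for i in range(len(route) - 1):
        if math.dist(route[i], route[i + 1]) < min_leg:
            cost += 1e3                      # leg too short
    for i in range(1, len(route) - 1):
        if turn_angle(route[i - 1], route[i], route[i + 1]) > max_turn:
            cost += 1e3                      # turn too sharp
    for (_, _, z) in route:
        if z < min_alt:
            cost += 1e3                      # below safe flying altitude
    return cost

print(fitness([(0, 0, 60), (5, 0, 70), (10, 2, 80)]))
```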
In this paper, we present a real-time lip-synch system that animates a 2-D avatar's lip motion in synch with an incoming speech utterance. To achieve real-time operation, the processing time was minimized by "merge and split" procedures, resulting in a coarse-to-fine phoneme classification. At each stage of phoneme classification, the support vector machine (SVM) method was applied to reduce the computational load while maintaining the desired accuracy. The coarse-to-fine phoneme classification is accomplished via two stages of feature extraction: in the first stage, each speech frame is acoustically analyzed for three classes of lip opening using Mel-Frequency Cepstral Coefficients (MFCC) as features; in the second stage, each frame is further refined into a detailed lip shape using formant information. The method was implemented in 2-D lip animation, and the system was demonstrated to be effective in accomplishing real-time lip-synch. The approach was tested on a PC using Microsoft Visual Studio with an Intel Pentium IV 1.4 GHz CPU and 384 MB RAM. The methods of phoneme merging and SVM achieved about twice the recognition speed of a method employing a Hidden Markov Model (HMM). The typical latency per frame with the proposed method was on the order of 18.22 ms, while the HMM method under identical conditions required about 30.67 ms.
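A minimal sketch of the two-stage, coarse-to-fine idea using scikit-learn (the feature dimensions, class counts, and random stand-in data are assumptions, not the paper's setup):

```python
# Stage 1: an SVM maps each frame's MFCC vector to one of three lip-opening
# classes. Stage 2: a per-class SVM refines the lip shape from formants.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
mfcc = rng.standard_normal((300, 13))       # stand-in MFCC features
formants = rng.standard_normal((300, 2))    # stand-in formant features
coarse_y = rng.integers(0, 3, 300)          # 3 coarse lip-opening classes
fine_y = rng.integers(0, 4, 300)            # finer lip-shape classes

coarse = SVC().fit(mfcc, coarse_y)
fine = {c: SVC().fit(formants[coarse_y == c], fine_y[coarse_y == c])
        for c in range(3)}

def classify_frame(m, f):
    """Return (coarse lip-opening class, refined lip-shape class)."""
    c = int(coarse.predict(m.reshape(1, -1))[0])
    return c, int(fine[c].predict(f.reshape(1, -1))[0])

print(classify_frame(mfcc[0], formants[0]))
```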
This paper presents a method for evaluating the quality of images altered by Gaussian blur. The method is based on the observation that in bokeh-mode images the region of interest (foreground) is sharp, while the remaining parts (background) are intentionally blurred to enhance the perceptual quality of the image. The blurriness of the background increases attention towards the foreground part of the image. The proposed quality metric is obtained by combining the attention factor and the sharpness of the region of interest. The accuracy, in terms of Spearman's rank-order correlation coefficient (SROCC), on popular and publicly available databases such as LIVE, VCL, TID2008, CSIQ, and TID2013 is 0.963, 0.925, 0.900, 0.930, and 0.930, respectively. The proposed method achieves high and consistent SROCC values compared to the majority of state-of-the-art algorithms. Furthermore, in terms of speed, the proposed method surpasses other state-of-the-art methods. The MATLAB code of the proposed metric is publicly available at https://drive.google.com/drive/folders/1SRmUp0N157Ati9l3kV13uoCxw5PhMgQn?usp=sharing.
Real-time 3D reconstruction of static scenes can be achieved by fusing an RGB-D image sequence. It is common practice to divide space into uniform voxels and use a truncated signed distance function (TSDF) to represent surface information. To represent a large-scale scene, a voxel hashing algorithm that stores voxels compactly can be used, but most conventional methods do not consider the complexity and roughness of object surfaces in the scene, so the scene is represented at a uniform resolution. This somewhat limits the range of scene representation and the speed of real-time reconstruction. In this paper, a large-scale scene reconstruction algorithm based on voxel hashing storage with an LOD representation is proposed. The main contributions are twofold: (1) The depth image is preprocessed with smoothing filters, which preserves data accuracy while effectively reducing the distortion caused by the sensor itself and by rapid motion, providing better support for the voxel hashing, model rendering, and frame-to-model camera position tracking stages. (2) 3D reconstruction with an LOD representation is realized: we take the viewing distance and the roughness of the model surface as criteria to control the adaptive subdivision and representation of spatial voxel blocks. Finally, we carried out qualitative and quantitative evaluations of the algorithm and confirmed that it achieves real-time reconstruction at different levels of detail on commercial graphics hardware, with good fusion results in large-scale scenes.
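A minimal sketch of the underlying voxel-block hash map with a per-block TSDF and an LOD level (the hashing constants are the widely used spatial-hashing primes; the block structure and bucket layout are assumptions, not the paper's implementation):

```python
from dataclasses import dataclass, field
import numpy as np

P1, P2, P3 = 73856093, 19349669, 83492791   # large primes for spatial hashing

def block_hash(bx: int, by: int, bz: int, n_buckets: int) -> int:
    return ((bx * P1) ^ (by * P2) ^ (bz * P3)) % n_buckets

@dataclass
class VoxelBlock:
    coords: tuple          # integer block coordinates
    tsdf: np.ndarray = field(default_factory=lambda: np.ones((8, 8, 8)))
    weight: np.ndarray = field(default_factory=lambda: np.zeros((8, 8, 8)))
    lod: int = 0           # level of detail: raised for distant/smooth blocks

class VoxelHashMap:
    def __init__(self, n_buckets: int = 1 << 20):
        self.n = n_buckets
        self.buckets: dict[int, list[VoxelBlock]] = {}

    def get_or_alloc(self, c: tuple) -> VoxelBlock:
        bucket = self.buckets.setdefault(block_hash(*c, self.n), [])
        for blk in bucket:                    # resolve collisions by chaining
            if blk.coords == c:
                return blk
        blk = VoxelBlock(c)                   # allocate on first access
        bucket.append(blk)
        return blk

m = VoxelHashMap()
print(m.get_or_alloc((3, -1, 7)).lod)   # 0
```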
The Prioritized Production System (PRIOPS) is an architecture that supports time-constrained, knowledge-based embedded system programming and learning. Inspired by the theory of automatic and controlled human information processing in cognitive psychology, PRIOPS supports a two-tiered processing approach. The automatic partition provides for the compilation of productions into constant-time-constrained processes that react to environmental conditions. The notion of a habit in humans approximates the concept of automatic processing, trading flexibility and generality for efficiency and predictability in dealing with expected environmental situations. Explicit priorities allow critical automatic activities to preempt and defer the execution of lower-priority processing. An augmented version of the Rete match algorithm implements O(1), priority-scheduled automatic matching. The controlled partition supports more complex, less predictable activities such as problem solving, planning, and learning, which apply in novel situations for which automatic reactions do not exist. The PRIOPS notation allows the programmer of knowledge-based embedded systems to work at a more appropriate level of abstraction than conventional embedded system programming techniques provide. This paper explores programming and learning in PRIOPS in the context of a maze traversal program.
This paper presents an algorithm for allocating on-chip FPGA Block RAMs in the implementation of real-time video processing systems. The effectiveness of the algorithm is shown through the implementation of realistic image processing systems. The algorithm, which is based on a heuristic, seeks the most cost-effective way of allocating memory objects to the FPGA Block RAMs. The experimental results show that the algorithm generates results close to the theoretical optimum for most design cases.
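As a hypothetical illustration of this style of heuristic (the paper's actual cost model is not reproduced here), a first-fit-decreasing pass that packs memory objects into 18 Kb Block RAMs might look like:

```python
def allocate(objects, bram_bits=18432):
    """objects: list of (name, size_bits). Returns per-BRAM object lists."""
    brams = []                         # each entry: [remaining_bits, [names]]
    for name, size in sorted(objects, key=lambda o: -o[1]):  # largest first
        for bram in brams:
            if bram[0] >= size:        # first BRAM with enough free space
                bram[0] -= size
                bram[1].append(name)
                break
        else:
            brams.append([bram_bits - size, [name]])  # open a new BRAM
    return [names for _, names in brams]

objs = [("line_buf", 15360), ("lut", 4096), ("fifo", 8192), ("coef", 2048)]
print(allocate(objs))
```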
Data hiding in the least significant bits (LSBs) of audio signals is an appealing steganographic method. The large volume of real-time production and transmission of audio data makes it difficult to store and analyze these signals; hence, steganalysis of audio signals requires online operation, whereas most existing steganalysis methods work on stored media files. In this paper, we present a steganalysis technique that can detect the existence of data embedded in the least significant bits of natural audio samples. The algorithm is designed to be simple, accurate, and hardware-implementable, and a hardware implementation of the algorithm is presented. The proposed hardware analyzes the histogram of an incoming stream of audio signals using a sliding-window strategy, without needing to store the signals. The algorithm is mathematically modeled to show its capability to accurately predict the amount of embedding in an incoming audio stream. Audio files with different amounts of embedded data were used to test the algorithm and its hardware implementation. The experimental results demonstrate the functionality and high accuracy of the proposed method.
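A minimal sketch of the sliding-window idea: maintain a running histogram of incoming samples, updated incrementally as the window slides, so nothing beyond the window is stored. The imbalance statistic shown is the classic pair-equalization effect exploited by LSB steganalysis (e.g., the chi-square attack), used here as a placeholder, not the paper's exact estimator:

```python
from collections import deque, Counter

class SlidingHistogram:
    def __init__(self, window: int):
        self.window = window
        self.buf = deque()
        self.hist = Counter()

    def push(self, sample: int) -> None:
        self.buf.append(sample)
        self.hist[sample] += 1
        if len(self.buf) > self.window:       # slide: evict the oldest sample
            self.hist[self.buf.popleft()] -= 1

    def lsb_pair_imbalance(self) -> float:
        """LSB embedding tends to equalize the counts of value pairs
        (2k, 2k+1); a low imbalance over a full window suggests embedding."""
        num = den = 0
        for v, c in self.hist.items():
            if v % 2 == 0:
                c2 = self.hist.get(v + 1, 0)
                num += abs(c - c2)
                den += c + c2
        return num / den if den else 0.0

h = SlidingHistogram(window=4096)
for s in [0, 1, 1, 2, 3, 3, 3, 4] * 100:
    h.push(s)
print(round(h.lsb_pair_imbalance(), 3))
```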
Refining and petrochemical processing facilities utilize various process control applications to raise productivity and enhance plant operation. A client-server communication model is used to integrate these highly interacting applications across the multiple network layers of distributed control systems. This paper presents an optimum process control environment that merges sequential and regulatory control, advanced regulatory control, multivariable control, unit-based process control, and plant-wide advanced process control into a single collaborative automation platform, ensuring optimum operation of processing equipment and maximum yield across all manufacturing facilities. The main control module is replaced by a standard real-time server. The input/output racks are physically and logically decoupled from the controller by converting them into distributed autonomous process interface systems. Real-time data distribution service middleware provides seamless, cross-vendor interoperable communication among all process control applications and the distributed autonomous process interface systems. A detailed performance analysis was conducted to evaluate the average communication latency and aggregate messaging capacity among process control applications and distributed autonomous process interface systems. The overall performance results confirm the viability of the new proposal as the basis for designing an optimal collaborative automation platform that can handle all process control applications. The proposal also imposes no inherent limit on aggregate data messaging capacity, making it suitable for scalable automation platforms.
Oil and gas processing facilities utilize various process automation systems with proprietary controllers. As the systems age, older technologies become obsolete, resulting in frequent premature capital investments to sustain their operation.
This paper presents a new design of automation controller that provides inherent mechanisms for upgrades and/or partial replacement of any obsolete components, without requiring a complete system replacement during the expected life cycle of the processing facilities.
The input/output racks are physically and logically decoupled from the controller by converting them into distributed autonomous process interface systems. The proprietary input/output communication between the conventional controller CPU and the associated input/output racks is replaced with standard real-time data distribution service middleware, providing seamless cross-vendor interoperable communication between the controller and the distributed autonomous process interface systems. The objective of this change is to allow flexibility of supply of all controller subcomponents from multiple vendors, safeguarding against premature automation obsolescence.
A detailed performance analysis was conducted to evaluate the viability of using standard real-time data distribution service middleware in the design of the automation controller to replace the proprietary input/output communication. The key simulation measurements used to demonstrate performance sustainability as the controller grows in size (measured by the number of input/output signals) are communication latency, variation in packet delays, and communication throughput. The overall performance results confirm the viability of the new proposal as the basis for designing cost-effective, evergreen process automation solutions that yield an optimal total cost of ownership throughout the systems' life span. The only limiting factor is the selected network infrastructure.
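To make the decoupling concrete, here is an illustrative-only sketch in plain Python (not a real DDS API): the controller and each distributed process interface system interact solely through named topics, so either side can be replaced by another vendor's implementation without changing the other:

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """Toy topic-based publish/subscribe bus standing in for DDS middleware."""
    def __init__(self):
        self.subs: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, cb: Callable) -> None:
        self.subs[topic].append(cb)

    def publish(self, topic: str, msg) -> None:
        for cb in self.subs[topic]:
            cb(msg)

bus = Bus()
# The controller subscribes to analog-input updates from any I/O system:
bus.subscribe("ai/unit1", lambda v: print(f"controller saw AI update: {v}"))
# A distributed process interface system publishes a new reading:
bus.publish("ai/unit1", {"tag": "FT-101", "value": 42.7})
```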
Evolvable hardware (EHW) is widely used in the design of fault-tolerant systems. A fault-tolerant system is inherently a real-time system, and the recovery time matters in fault detection and recovery. However, when applying EHW, this real-time characteristic is usually ignored. In this paper, a fault-tolerant strategy based on EHW is proposed. The recovery time, predicted by fault tree analysis (FTA), is treated as a constraint. A configuration library is built in the design phase to accelerate the repair of anticipated faults, and an evolutionary algorithm (EA) based on similarity is applied to evolve repair circuits for unanticipated faults. When the library reaches its upper limit, the target system is reconfigured using the EA-repair technique. Extensive experiments show that our method improves the fault tolerance of the system while satisfying the real-time requirement on an FPGA platform. In long-running systems, our method maintains a higher fault recovery rate.
Recently, an increasing number of real-time systems are implemented on multicore platforms. To fully utilize the computational power of multicore systems, the scheduling problem for the real-time parallel task model is receiving growing attention. Different types of scheduling algorithms and analysis techniques have been proposed for parallel real-time tasks modeled as directed acyclic graphs (DAGs). In this paper, we study the scheduling problem for DAGs under the decomposition paradigm. We propose a new schedulability test and a corresponding decomposition strategy, and show that this new decomposition approach strictly dominates the latest decomposition-based approach. Simulations are conducted to evaluate the real-time performance of our proposed scheduling algorithm against state-of-the-art scheduling and analysis methods of different types. Experimental results show that our method consistently outperforms other global methods under different parameter settings.
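Any decomposition strategy starts from two DAG parameters: the total work C and the critical-path length L. A task with deadline D can be feasible only if L <= D, and the slack D - L is what decomposition distributes among subtasks as intermediate deadlines. A small sketch (the graph encoding is assumed):

```python
from functools import lru_cache

wcet = {"a": 2, "b": 3, "c": 4, "d": 1}             # per-node WCETs
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

C = sum(wcet.values())                              # total work

@lru_cache(maxsize=None)
def longest_from(v: str) -> int:
    """Longest (heaviest) path starting at node v."""
    return wcet[v] + max((longest_from(s) for s in succ[v]), default=0)

L = max(longest_from(v) for v in wcet)              # critical-path length

D = 10
print(f"C={C}, L={L}, feasible only if L <= D: {L <= D}")  # C=10, L=7
```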
This paper presents a novel design of a coprocessor that performs hardware-accelerated task scheduling for embedded real-time systems consisting of mixed-criticality real-time tasks. The proposed solution is based on the Robust Earliest Deadline (RED) algorithm and on previously developed hardware architectures for real-time task scheduling. Thanks to the hardware implementation of the scheduler as a coprocessor, the scheduler operations (i.e., instructions) always complete in two clock cycles, regardless of the actual or even the maximum number of tasks in the system. The proposed scheduler was verified using a simplified version of UVM, applying billions of randomly generated instructions as inputs. Chip area costs were evaluated by synthesis for an Intel FPGA Cyclone V and for a 28-nm TSMC ASIC. Three versions of real-time task schedulers were compared: an EDF-based scheduler designed for hard real-time tasks only, a GED-based scheduler, and the proposed RED-based scheduler, which is suitable for tasks of various criticalities. According to the synthesis results, the RED-based scheduler consumes more LUTs and occupies a larger chip area than the original EDF-based scheduler with equivalent parameters. However, the RED-based scheduler handles variations in task execution times better, achieves higher CPU utilization, and can be used to schedule hard real-time, soft real-time, and non-real-time tasks combined in one system, which is not possible with the former algorithms.
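A simplified software sketch of a RED-style acceptance test (the paper's contribution is the hardware coprocessor; the task fields, values, and deadline tolerances below are illustrative). Tasks are kept EDF-ordered; a new task is admitted only if no task would miss its deadline plus tolerance, otherwise the least-value task becomes the rejection candidate:

```python
def accept(tasks, new, now):
    """tasks: list of dicts with 'wcet', 'deadline', 'tolerance', 'value'.
    Returns (accepted task set, rejected task or None)."""
    trial = sorted(tasks + [new], key=lambda t: t["deadline"])  # EDF order
    t, ok = now, True
    for task in trial:
        t += task["wcet"]                        # finish time under EDF
        if t > task["deadline"] + task["tolerance"]:
            ok = False                           # overload detected
            break
    if ok:
        return trial, None
    victim = min(trial, key=lambda t: t["value"])  # reject least valuable
    trial.remove(victim)
    return trial, victim

ready = [{"wcet": 2, "deadline": 5, "tolerance": 1, "value": 10}]
new = {"wcet": 4, "deadline": 6, "tolerance": 0, "value": 3}
print(accept(ready, new, now=0))
```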