For filtering and compressing data before sending it to a cloud server, fog computing is a natural fit. Fog computing enables an alternative way to reduce the complexity of medical image processing and steadily improve its dependability. Medical images are produced by imaging modalities such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound (US). These images are large and require a huge amount of storage, a problem addressed by compression, an area in which much work has been done. However, before adding more techniques to fog, a high compression ratio (CR) must be achieved in a shorter time, thereby consuming less network traffic. This study implements an image compression technique based on the Le Gall 5/3 integer wavelet transform (IWT) and a set partitioning in hierarchical trees (SPIHT) encoder; MRI images are used in the experiments. The suggested technique compresses the medical image with an improved CR and a lower compression time (CT), achieving an average CR of 84.8895% and an average peak signal-to-noise ratio (PSNR) of 40.92 dB. Compared with IWT with Huffman coding, the proposed approach reduces the CT by 36.7434 s and improves the CR by 12%; the existing approach has a CR of 72.36%. The suggested work's shortcoming is that the high CR causes a decline in the quality of the medical images. Future work can raise the PSNR values and extend the method to compress colored and 3-dimensional medical images.
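For illustration, the core of the Le Gall 5/3 IWT is a two-step integer lifting scheme (predict, then update). Below is a minimal one-level, 1-D sketch; the function name, boundary handling, and driver values are our own assumptions, not the paper's implementation, and the SPIHT encoding of the resulting coefficients is omitted.

```python
# Minimal sketch of one level of the 1-D Le Gall 5/3 integer wavelet
# transform (lifting scheme with symmetric boundary extension).
def legall53_forward(x):
    """Split x (even length) into low-pass s and high-pass d coefficients."""
    n = len(x)
    d = [0] * (n // 2)  # high-pass (detail) coefficients
    s = [0] * (n // 2)  # low-pass (approximation) coefficients
    # Predict step: odd samples minus the floored average of even neighbours.
    for i in range(n // 2):
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]  # mirror at boundary
        d[i] = x[2 * i + 1] - ((x[2 * i] + right) >> 1)
    # Update step: even samples plus a rounded quarter of neighbouring details.
    for i in range(n // 2):
        left = d[i - 1] if i > 0 else d[i]  # mirror at boundary
        s[i] = x[2 * i] + ((left + d[i] + 2) >> 2)
    return s, d

approx, detail = legall53_forward([12, 14, 13, 90, 91, 89, 12, 13])
```

Because every step uses only integer additions and shifts, the transform is exactly invertible, which is what makes it suitable for lossless and near-lossless medical image compression.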
We have been developing a general theory of information distance and a paradigm for applying this theory to practical problems [3, 19, 20]. Several problems are associated with this theory. On the practical side, among other issues, the strict requirement of the triangle inequality is unrealistic in some applications; on the theoretical side, the universality theorems for normalized information distances were only proved in a weak form. In this paper, we introduce a complementary theory that resolves or avoids these problems.
This article also serves as a brief expository summary of this area. We tell the stories of how and why some of the concepts were introduced, of recent theoretical developments, and of interesting applications. These applications include whole genome phylogeny, plagiarism detection, document comparison, music classification, language classification, fetal heart rate tracing, question answering, and a wide range of other data mining tasks.
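In practice, applications like those above typically use the normalized compression distance (NCD), the computable form of the normalized information distance in which a real compressor stands in for Kolmogorov complexity. A minimal sketch follows; the choice of zlib as the compressor is ours and purely illustrative.

```python
import zlib

# Normalized compression distance: NCD(x, y) =
#   (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# where C(.) is the compressed length under a real compressor.
def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar byte strings yield a smaller distance than unrelated ones.
print(ncd(b"ACGTACGTACGT" * 20, b"ACGTACGAACGT" * 20))
print(ncd(b"ACGTACGTACGT" * 20, b"the quick brown fox" * 20))
```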
Multilingual text compression exploits the existence of the same text in several languages to compress the second and subsequent copies by reference to the first. We explore the details of this framework and present experimental results for parallel English and French texts.
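As a crude single-process illustration of compression by reference (our sketch, not the paper's method), zlib's preset-dictionary support lets the French text refer back to substrings of the English one; real parallel texts are translations, so the gain comes from shared names, numbers, and phrases.

```python
import zlib

# Toy stand-in texts; real experiments would use parallel corpora.
english = b"Article 1. The committee on agriculture met in Brussels on Monday, 3 June."
french = b"Article 1. Le comite de l'agriculture s'est reuni a Bruxelles lundi 3 juin."

baseline = zlib.compress(french)                 # French compressed alone

co = zlib.compressobj(zdict=english[-32768:])    # zlib's window is 32 KiB
by_reference = co.compress(french) + co.flush()  # shared substrings become references

print(len(baseline), len(by_reference))
```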
Sorting techniques have numerous applications in computer science. Current real number and integer sorting techniques for the reconfigurable mesh operate in constant time, using a reconfigurable mesh of size n × n to sort n numbers. This paper presents a constant time algorithm to sort n items on a reconfigurable network with switches and processors. Also, new constant time selection and compression algorithms are given. All results may also be implemented on the 3-D reconfigurable mesh.
The recrystallization behavior of a nickel-base single crystal superalloy cold-deformed by compression has been investigated. The effects of plastic strain, annealing temperature, and annealing time have been studied, and a recrystallization diagram has been obtained. Recrystallization shows a very strong dependence on temperature: for the single crystal superalloy with 4.5% strain, full recrystallization was observed after annealing at 1300°C for 1 h, surface recrystallization at 1250°C for 1 h, cellular recrystallization at 1150°C for 1 h, and no recrystallization at 1100°C for 1 h. As the temperature drops, the volume fraction of the γ′ phase increases, which increasingly restricts the migration of recrystallized boundaries. With increasing annealing time or strain, the sensitivity to recrystallization increases. The recrystallization tendency of the standard-heat-treated superalloy is weaker than that of the as-cast single crystal superalloy, because the standard heat treatment reduces microsegregation, lowers the eutectic amount, and produces a homogeneous γ′ phase distribution, which decreases the number of preferential nucleation sites for recrystallized grains and increases the resistance to recrystallization boundary migration.
Simulations have been carried out on [001]-oriented Ni3Al nanowires with a square cross-section to investigate the mechanism of failure under tensile and compressive strain. The results show that the elastic limit of the nanowire reaches about 15% strain, with a yield stress of 5.99–6.48 GPa under tensile strain. In the elastic stage, the deformation is carried mainly by uniform elongation of the bonds between atoms. Under further tensile strain, slip in the {111} planes occurs at room temperature to accommodate the applied strain. Under compression, the nanowires accommodate the strain by forming twins within the nanowires.
Wireless sensor networks (WSNs) are ubiquitous nowadays and have applications in a variety of domains such as machine surveillance, precision agriculture, intelligent buildings, and healthcare. Detecting anomalous activities in such domains has long been a subject of intense study. As sensor networks generate enormous amounts of data every second, detecting anomalous events accurately in this data is a challenging task. Most existing techniques for anomaly detection do not scale to big data, and accuracy may be compromised when dealing with such large volumes. To address these issues, this paper proposes a unified framework for anomaly detection in big sensor data, based on data compression and Hadoop MapReduce-based parallel fuzzy clustering; the clusters are further refined for better classification accuracy. The modules of the proposed framework are compared with various existing state-of-the-art algorithms. For the experimental analysis, real sensor data of ICU patients was taken from the PhysioNet library. The comparative analysis reveals that the proposed framework is more time-efficient and shows better classification accuracy.
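For intuition, here is a minimal single-node sketch of fuzzy c-means, the classic form of the fuzzy clustering that such a framework parallelizes with MapReduce; the function and parameter names are ours, and the paper's distributed refinement step is not shown.

```python
import numpy as np

# Fuzzy c-means: alternate between membership-weighted centroids and the
# standard membership update u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
def fuzzy_c_means(X, c=2, m=2.0, iters=100, eps=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < eps:
            return centers, U_new                       # converged
        U = U_new
    return centers, U

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, U = fuzzy_c_means(X, c=2)
```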
The manufacturing techniques of sandwich composites containing core layers of weft-knit glass fabric (WG) and weft-knit carbon fabric (WC) with carbon fabric skin layers are discussed herein. The core layers of the sandwich composites were fabricated with WG-reinforced epoxy (E) resin, WC-reinforced epoxy resin, and polyurethane foam (F). The core layer was then stacked with two pieces of carbon fabric on the top and bottom surfaces to fabricate the sandwich composites. Three sandwich composites were developed in this study: a plain carbon fabric sandwich composite with a WG core layer (C/E/WG), one with a WC core layer (C/E/WC), and one with an F core layer (C/E/F). A two-step manufacturing procedure was developed to achieve sufficient adhesion between the skin and core layers. The tensile, flatwise compressive, and longitudinal compressive properties of these sandwich composites were measured according to the relevant ASTM standards on a materials test system (MTS 810). The experimental results revealed that the WC core materials displayed excellent resistance to a flatwise compressive force, whereas the foam core material showed weak resistance. Under longitudinal compression, the skin and core layer of the C/E/F specimen separated, indicating that the C/E/F specimen could not withstand longitudinal force. Moreover, the C/E/WG and C/E/WC specimens both bent at the end of the same test.
Multifractal theory has been widely used in many fields. In this paper, methods are proposed to extract two kinds of multifractal descriptors, for gray-level series and for two-dimensional surfaces of gray images, based on multifractal detrended fluctuation analysis (MF-DFA). Tests on several textures show that the proposed multifractal parameters describe texture features well. Experiments were conducted to verify the robustness of the proposed parameters in three respects: noise immunity, degree of image blurring, and compression ratio. Comparisons were made between the proposed parameters and texture feature parameters computed by standard multifractal analysis, by differential box counting, and by the gray-level co-occurrence matrix. The results demonstrate that the proposed exponents H(2) and h(2) have strong noise immunity and are robust to image compression and blurring.
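For reference, in standard MF-DFA (the general method, not this paper's specific descriptors) the exponents h(q) come from the power-law scaling of the q-th order fluctuation function over the segment scale s:

```latex
% q-th order fluctuation function over N_s segments of scale s;
% F^2(s,\nu) is the detrended variance within segment \nu.
F_q(s) = \left\{ \frac{1}{N_s} \sum_{\nu=1}^{N_s}
         \left[ F^{2}(s,\nu) \right]^{q/2} \right\}^{1/q}
         \;\propto\; s^{\,h(q)}
```

Here h(2) reduces to the classical Hurst-type exponent reported above, and a nontrivial dependence of h(q) on q signals multifractality.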
We present a new high-speed parallel architecture and its VLSI implementation to design special-purpose hardware for real-time lossless image compression/decompression using a decorrelation scheme. The proposed architecture can easily be implemented using state-of-the-art VLSI technology, and the hardware yields a high compression rate. A prototype 1-micron VLSI chip based on this architectural idea has been designed. The scheme compares favourably with the lossless JPEG standard image compression schemes. We also discuss the issues and difficulties of parallelizing the lossless JPEG standard still-image compression schemes.
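The decorrelation idea is to replace each pixel by its prediction residual, which is sharply peaked around zero and therefore cheap to entropy-code. A minimal sketch using the classic lossless-JPEG-style predictor a + b - c follows; it illustrates the principle only, not the paper's parallel architecture.

```python
import numpy as np

def decorrelate(img):
    """Replace each pixel with its residual against the predictor
    a + b - c (a: left, b: above, c: upper-left neighbour). The residual
    image concentrates near zero, so entropy coding compresses it well."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[1:, 1:] = img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1]
    pred[0, 1:] = img[0, :-1]   # first row: predict from the left neighbour
    pred[1:, 0] = img[:-1, 0]   # first column: predict from above
    return img - pred           # residuals; invertible, hence lossless
```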
As has already been shown, link-layer compression is very effective in packet networks. In particular, packet compression is especially useful when encryption is applied to the network packets: encrypting the packets makes the data random in nature, so no compression can be applied afterwards. It is believed that low-level encryption will be applied to the vast majority of Internet Protocol (IP) networks in the near future, and a large number of very sophisticated encryption devices have already been manufactured. Based on these facts, we claim that hardware devices that can compress network streams at link speed (performing the compression just before encryption) will also be widely used in future networks. In this paper, we present such a hardware compressor/decompressor core that works at speeds up to 10 Gb/s, is fairly inexpensive, and can easily be plugged into an existing network node without causing any side effects. We additionally examine the performance-versus-complexity tradeoffs of such compressor/decompressor devices. Finally, we argue that compression devices with throughput ranging from 0.5 to 10 Gb/s can be efficiently implemented based on the reference architecture.
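A toy demonstration (ours, not the paper's) of why compression must precede encryption: ciphertext is effectively random and therefore incompressible, so reversing the order wastes the compressor entirely.

```python
import os
import zlib

payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 64

compress_first = len(zlib.compress(payload))   # compress, then encrypt: small
ciphertext = os.urandom(len(payload))          # stand-in for an encrypted payload
encrypt_first = len(zlib.compress(ciphertext)) # compressing ciphertext even grows it

print(len(payload), compress_first, encrypt_first)
```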
Mobile communication holds great potential for users, fulfilling the dream of real-time multimedia communication with voice, image, and text. The large amount of redundancy in a still image should be removed by a suitable image compression algorithm (ICA) before transmission over a wireless channel. An ICA should therefore be adaptive, simple, cost-effective, and suitable for practical implementation. Hardware implementations of the different algorithms have improved with modern, fast, and cost-effective technologies. The main aim of this paper is to review and demonstrate various ICAs developed for image transmission over wireless channels, as well as their hardware implementations. Finally, this review builds a bridge for researchers to future comparative studies of the different algorithms and architectures, and stands as a reference point for developing more controllable and flexible structures.
We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to extend the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes, based on existing schemes but targeting compression for PCM-based systems. We perform a two-level evaluation. First, we quantify the performance of the compression in terms of compressed size and bit-flips, and how they are affected by errors. Next, we feed these parameters into a statistical simulator to study how they affect the endurance of the system. Our simulation results reveal that our technique, built on top of Error Correcting Pointers (ECP) but using a high-performance cache-oriented compression algorithm modified to better suit our purpose, further extends the lifetime of the memory system. In particular, it guarantees that at least half of the physical pages remain usable for 25% longer than ECP, which is slightly more than 5% longer than a scheme that can correct 16 failures per block.
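PCM cells wear out per write, so the bit-flip count quantified above is the natural wear metric: fewer bits flipped between the old and new contents of a memory line means a longer-lived memory. A toy sketch of the metric itself (ours, not CEPRAM's machinery):

```python
# Count the bits that differ between two equal-length memory lines.
def bit_flips(old: bytes, new: bytes) -> int:
    return sum(bin(a ^ b).count("1") for a, b in zip(old, new))

old_line = b"\x00\xff\x0f\x00"
new_line = b"\x00\xf0\x0f\x01"
print(bit_flips(old_line, new_line))  # -> 5 of 32 bits flip
```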
Since different regions of an image have different importance, in some special multimedia applications only the important information in the regions that users are really interested in needs to be encrypted and emphatically protected. However, the regions of interest (ROI) are often irregular parts, such as the face and the eyes. Assuming the bulk data are transmitted without damage, we propose a chaotic image encryption algorithm for ROI. ROI with irregular shapes are detected and chosen arbitrarily. The chaos-based image encryption algorithm, with scrambling, S-box, and diffusion stages, is then used to encrypt the ROI. Further, the whole image is compressed with Huffman coding. Finally, a message authentication code (MAC) of the compressed image is generated based on chaotic maps. The simulation results show that the encryption algorithm has a good security level and can resist various attacks. Moreover, the compression method improves the storage and transmission efficiency to some extent, and the MAC ensures the integrity of the transmitted data.
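As a hedged sketch of the kind of chaotic keystream commonly used in the diffusion stage of chaos-based image ciphers (the logistic map; the parameters and XOR structure are illustrative, not this paper's exact algorithm):

```python
# Generate n pseudo-random bytes by iterating the logistic map x -> r*x*(1-x),
# which behaves chaotically for r near 4; x0 acts as the secret key.
def logistic_keystream(x0: float, n: int, r: float = 3.99) -> bytes:
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

roi_pixels = bytes(range(16))                          # stand-in ROI data
ks = logistic_keystream(x0=0.3141592, n=len(roi_pixels))
cipher = bytes(p ^ k for p, k in zip(roi_pixels, ks))  # diffusion via XOR
```

Because the map is deterministic, the receiver regenerates the same keystream from x0 and XORs again to decrypt.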
We describe data structures for representing simplicial meshes compactly while supporting online queries and updates efficiently. Our data structure requires about a factor of five less memory than the most efficient standard data structures for triangular or tetrahedral meshes, while efficiently supporting traversal among simplices, storing data on simplices, and insertion and deletion of simplices.
Our implementation of the data structures uses about 5 bytes/triangle in two dimensions (2D) and 7.5 bytes/tetrahedron in three dimensions (3D). We use the data structures to implement 2D and 3D incremental algorithms for generating a Delaunay mesh. The 3D algorithm can generate 100 million tetrahedra within 1 Gbyte of memory, including the space for the coordinates and all data used by the algorithm. The algorithm runs as fast as Shewchuk's Pyramid code, the most efficient we know of, while using a factor of 3.5 less memory overall.
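For context, even the plain indexed representation below (a generic baseline of ours, not the paper's structure) spends 12 bytes per triangle on connectivity alone and supports no adjacency queries, which is what makes 5 bytes/triangle with traversal and updates notable.

```python
import numpy as np

# Array-of-indices baseline: each triangle stores three 32-bit vertex indices.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
triangles = np.array([[0, 1, 2], [1, 3, 2]], dtype=np.uint32)

print(triangles.nbytes / len(triangles), "bytes/triangle of connectivity")  # 12.0
```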
Several representations and coding schemes have been proposed to represent 2D triangulations efficiently. In this paper, we propose a new practical approach to reduce the main memory needed to represent an arbitrary triangulation while maintaining constant time for some basic queries. This work focuses on the connectivity information of the triangulation rather than the geometry (vertex coordinates), since the combinatorial data represent the main part of the storage. The main idea is to gather triangles into patches, reducing the number of pointers by eliminating the internal pointers within patches and reducing multiple references to vertices. To accomplish this, we define and use stable catalogs of patches that are closed under the basic standard update operations: insertion and deletion of vertices, and edge flips. We present bounds and results concerning special catalogs, and experimental results that exhibit the practical gain of such methods.
Power circuits are data structures which support efficient algorithms for highly compressed integers. Using this new data structure it has been shown recently by Myasnikov, Ushakov and Won that the Word Problem of the one-relator Baumslag group is in P. Before that the best known upper bound was non-elementary. In the present paper we provide new results for power circuits and we give new applications in algorithmic algebra and algorithmic group theory: (1) We define a modified reduction procedure on power circuits which runs in quadratic time, thereby improving the known cubic time complexity. The improvement is crucial for our other results. (2) We improve the complexity of the Word Problem for the Baumslag group to cubic time, thereby providing the first practical algorithm for that problem. (The algorithm has been implemented and is available in the CRAG library.) (3) The main result is that the Word Problem of Higman's group is decidable in polynomial time. The situation for Higman's group is more complicated than for the Baumslag group and forced us to advance the theory of power circuits.
This paper is part of an ongoing program which aims to show that the problem of satisfiability of a system of equations in a free group (hyperbolic, or even toral relatively hyperbolic, group) is NP-complete. To that end, we study compression of solutions with straight-line programs (SLPs), as suggested originally by Plandowski and Rytter in the context of a single word equation. We review some basic results on SLPs and give full proofs in order to keep this fundamental part of the program self-contained. Next we study systems of equations with constraints in free groups, and more generally in free products of abelian groups. We show how to compress minimal solutions with extended Parikh constraints; this type of constraint can express semi-linear conditions such as alphabetic information. The result relies on some combinatorial analysis and has not been shown elsewhere. We show similar compression results for Boolean formulas of equations over a torsion-free δ-hyperbolic group, where the situation is much more delicate than in free groups. As a byproduct, we improve the estimate of the "capacity" constant used by Rips and Sela in their paper "Canonical representatives and equations in hyperbolic groups" from a double-exponential bound in δ to a single-exponential bound. The final section shows compression results for toral relatively hyperbolic groups using the work of Dahmani: given a system of equations over a fixed toral relatively hyperbolic group, for every solution of length N there is an SLP for another solution such that the size of the SLP is bounded by some polynomial p(s + log N), where s is the size of the system.
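For intuition, an SLP is a context-free grammar producing exactly one word, and it can be exponentially shorter than the word it encodes; this is what makes the p(s + log N) bound above meaningful. A toy sketch (ours):

```python
# Toy SLP: X_0 -> "ab" and X_i -> X_{i-1} X_{i-1}. n+1 rules encode a word
# of length 2^(n+1), illustrating the exponential compression SLPs provide.
def expand(rules, sym):
    rhs = rules[sym]
    return rhs if isinstance(rhs, str) else expand(rules, rhs[0]) + expand(rules, rhs[1])

n = 10
rules = {0: "ab"}
for i in range(1, n + 1):
    rules[i] = (i - 1, i - 1)   # X_i -> X_{i-1} X_{i-1}

word = expand(rules, n)
print(len(rules), "rules encode a word of length", len(word))  # 11 rules, 2048
```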
For a compact surface Σ (orientable or not, and with boundary or not), we show that the fixed subgroup, Fix ℬ, of any family ℬ of endomorphisms of π1(Σ) is compressed in π1(Σ), i.e. rk(Fix ℬ) ≤ rk(H) for any subgroup Fix ℬ ≤ H ≤ π1(Σ). On the way, we give a partial positive solution to the inertia conjecture, both for free and for surface groups. We also investigate direct products, G, of finitely many free and surface groups, and give a characterization of when G satisfies that rk(Fix ϕ) ≤ rk(G) for every ϕ ∈ Aut(G).
We show that, given an equation over a finitely generated free group, the set of all solutions in reduced words forms an effectively constructible EDT0L language. In particular, the set of all solutions in reduced words is an indexed language in the sense of Aho. The language characterization we give, as well as further questions about the existence or finiteness of solutions, follows from our explicit construction of a finite directed graph which encodes all the solutions. Our result incorporates the recently invented recompression technique of Jeż, and a new way to integrate solutions of linear Diophantine equations into the process. As a byproduct of our techniques, we improve the complexity from quadratic nondeterministic space in previous works to NSPACE(n log n) here.