This volume contains articles written by leading researchers in the fields of algorithms, architectures, and information systems security. The first five chapters address several challenging geometric problems and related algorithms. These topics have major applications in pattern recognition, image analysis, digital geometry, surface reconstruction, computer vision, and robotics. The next five chapters focus on various optimization issues in VLSI design and test architectures, and in wireless networks. The last six chapters comprise scholarly articles on information systems security covering privacy issues, access control, enterprise and network security, and digital image forensics.
https://doi.org/10.1142/9789812836243_fmatter
https://doi.org/10.1142/9789812836243_0001
Let p and q be two points in a simple polygon Π. This chapter provides two rubberband algorithms for computing a shortest path between p and q that is contained in Π. The two algorithms use previously known results on triangular or trapezoidal decompositions of simple polygons, and have either O(n) or O(n log n) time complexity (where the super-linear time complexity is only due to preprocessing, i.e., for the trapezoidal decomposition of the simple polygon Π).
https://doi.org/10.1142/9789812836243_0002
Recently a Delaunay refinement algorithm has been proposed that can mesh domains as general as piecewise smooth complexes. These domains include polyhedra, smooth and piecewise smooth surfaces, volumes enclosed by them, and above all non-manifold spaces. The algorithm is guaranteed to capture the input topology at the expense of four tests, some of which are computationally intensive and hard to implement. The goal of this paper is to present the theory that justifies a refinement algorithm with a single disk test in place of four tests of the previous algorithm.
The algorithm is supplied with a resolution parameter that controls the level of refinement. We prove that, when the resolution is fine enough (this level is reached very fast in practice), the output mesh becomes homeomorphic to the input while preserving all input features. Moreover, regardless of the refinement level, each k-manifold element in the input complex is meshed with a triangulated k-manifold. Boundary incidences among elements maintain the input structure. Implementation results reported in a companion paper corroborate our claims.
https://doi.org/10.1142/9789812836243_0003
Let (A, B, C) be a triple of disjoint closed convex sets in the plane such that each of them contributes at least one point to the boundary ∂ of the convex hull of their union. If there are three points a ∈ A, b ∈ B, c ∈ C that belong to ∂ and follow each other in clockwise (counterclockwise) order, we say that the orientation of the triple (A, B, C) is clockwise (counterclockwise). We construct families of disjoint closed convex sets {C1,…,Cn} in the plane such that every triple has a unique orientation, but there are no points p1,…,pn in general position in the plane whose triples have the same orientations. In other words, these families cannot be represented by a point set of the same order type. This answers a question of A. Hubard and L. Montejano. We also show that the size of the largest subfamily representable by points, which can be found in any family of n disjoint closed convex sets in general position in the plane, is O(n^(log 8/log 9)). Some related Ramsey-type geometric problems are also discussed.
https://doi.org/10.1142/9789812836243_0004
The least-squares method is often used for solving optimization problems such as fitting a line to a point sequence obtained by experiments. This paper describes some extensions of the method and presents an application to a rather different geometric optimization problem. Given a sorted sequence of points (x1,y1), (x2,y2),…,(xn,yn), a line y = ax+b that optimally approximates the sequence can be computed by determining the constants a and b that minimize the sum of squared vertical deviations from the line, ∑i (yi − axi − b)². It suffices to solve the system of linear equations obtained by differentiating this sum with respect to a and b. In this paper we extend the method to the problem of approximating a point sequence by a 1-joint polyline. Another problem we consider is a geometric optimization problem: given a set of points in the plane, we want to insert a new point so that its distances to the existing points are as close as possible to distances specified as input data. If the criterion is to minimize the sum of squared errors, an application of the same idea, combined with the notion of an arrangement of lines, leads to an efficient algorithm.
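As a concrete illustration of the normal-equation step described above, the following minimal sketch fits y = ax + b to a point sequence by solving the 2×2 linear system obtained from the partial derivatives with respect to a and b. It illustrates only the standard least-squares step, not the chapter's 1-joint polyline algorithm; the function name fit_line and the sample data are our own.

import numpy as np

def fit_line(xs, ys):
    # Fit y = a*x + b by minimizing sum_i (y_i - a*x_i - b)^2.
    # Differentiating with respect to a and b gives the 2x2 normal equations
    #   a*sum(x^2) + b*sum(x) = sum(x*y)
    #   a*sum(x)   + b*n      = sum(y)
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.array([[np.sum(xs * xs), np.sum(xs)],
                  [np.sum(xs), len(xs)]])
    rhs = np.array([np.sum(xs * ys), np.sum(ys)])
    a, b = np.linalg.solve(A, rhs)
    return a, b

# Example: points scattered around y = 2x + 1
print(fit_line([0, 1, 2, 3], [1.1, 2.9, 5.2, 6.8]))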
https://doi.org/10.1142/9789812836243_0005
Depth recovery from gradient vector fields is required when reconstructing a surface (in three-dimensional space) from its gradients. Such a reconstruction task arises, for example, in computer vision techniques that calculate surface normals (such as shape from shading, photometric stereo, shape from texture, shape from contours, and so on). Surprisingly, discrete integration has not been studied very intensively so far. This chapter presents three classes of methods for solving problems of depth recovery from gradient vector fields: a two-scan method, a Fourier-transform based method, and a wavelet-transform based method. These methods extend previously known techniques, and related proofs are given in a short but concise form.
The two-scan method consists of two different scans through a given gradient vector field; the final surface height values are obtained by averaging these two scans. Fourier-transform based methods are noniterative, so boundary conditions are not needed, and their robustness to noisy gradient estimates can be improved by choosing appropriate weighting parameters. The wavelet-transform based method overcomes a disadvantage of the Fourier-transform based methods, which implicitly require that the surface height function be periodic. Experimental results using synthetic and real images are also presented.
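To make the Fourier-domain idea concrete, here is a minimal sketch of a classical least-squares integrator in the spirit of Frankot and Chellappa, which recovers a height map from gradient estimates p = ∂z/∂x and q = ∂z/∂y. It is an illustration of the general approach, not the specific weighted method of this chapter, and the periodicity assumption it makes is exactly the limitation the wavelet-based method addresses.

import numpy as np

def integrate_gradients_fft(p, q):
    # Least-squares integration of a gradient field in the Fourier domain
    # (Frankot-Chellappa style). Assumes the height function is periodic
    # over the image domain; the result is determined up to a constant.
    rows, cols = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(cols) * 2 * np.pi,   # frequencies along x
                       np.fft.fftfreq(rows) * 2 * np.pi)   # frequencies along y
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                      # avoid division by zero at the DC term
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                          # fix the free additive constant
    return np.real(np.fft.ifft2(Z))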
https://doi.org/10.1142/9789812836243_0006
In this paper a new convolutional compactor with a single output is introduced. It is characterized by very good error-masking properties: no odd errors and no 2-bit, 4-bit, or 6-bit errors at the inputs of the compactor are masked. Additionally, the depth of the compactor, i.e., the necessary number of flip-flops, is only O(log2 n), which makes the proposed convolutional compactor of special interest when a very large number of chip-internal scan chains or test outputs are to be compressed.
https://doi.org/10.1142/9789812836243_0007
A new built-in self-test (BIST) scheme for scan-based circuits is proposed for reducing energy consumption during testing. In a random-testing environment, a significant amount of energy is wasted in the LFSR and the circuit under test (CUT) by useless patterns that do not contribute to fault dropping. Another major source of energy drainage is the loss due to random switching activity in the CUT and in the scan path between applications of two successive vectors. In this work, a pseudorandom test sequence is generated by a linear feedback shift register (LFSR), and useless patterns are identified by fault simulation. The switching activity caused in the CUT, including the scan path, by the remaining (useful) patterns is estimated, and an optimal reordering that minimizes switching activity is determined. Two components of switching activity are identified: intrinsic (independent of test ordering) and variable (dependent on test ordering). An efficient technique for computing the switching activity occurring in the scan path is reported. Next, we design a mapping logic that modifies the state transitions of the LFSR such that only the useful vectors are generated by the LFSR, in the desired sequence. Earlier techniques of energy reduction were based on blocking useless vectors at the inputs to the CUT after they were produced by the LFSR. Similarly, test reordering was used for low-energy BIST design, but only in the context of deterministic testing. The novelty of this approach lies in the fact that it not only inhibits the LFSR from generating the useless patterns, but also forces the patterns to appear in a desired sequence. Further, it reduces test application time without affecting fault coverage. Experimental results on ISCAS-89 benchmark circuits reveal a significant amount of energy savings in the LFSR during testing. The design is simple and is applicable to a general scan-based test scheme such as the STUMPS architecture.
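For intuition about why vector ordering affects scan-path energy, the following toy sketch reorders a set of useful test vectors with a greedy nearest-neighbour heuristic so that consecutive vectors differ in few bit positions (a crude proxy for the transition count in the scan path). It is only an illustrative heuristic under that simplification, not the optimal reordering or the LFSR mapping logic developed in the paper.

def hamming(u, v):
    # Number of bit positions in which two equal-length test vectors differ.
    return sum(a != b for a, b in zip(u, v))

def reorder_tests(vectors):
    # Greedy nearest-neighbour reordering: start from the first vector and
    # repeatedly append the remaining vector closest (in Hamming distance)
    # to the last one, as a crude proxy for minimizing scan switching activity.
    remaining = list(vectors)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order

tests = ["1010", "1110", "0001", "1011"]
print(reorder_tests(tests))   # consecutive vectors now differ in few bits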
https://doi.org/10.1142/9789812836243_0008
In this paper we summarize different methodologies that have been proposed for congestion estimation and reduction in the floorplanning and placement steps for large-scale circuits. Moreover, we formulate the problem of congestion and wirelength minimization in a given initial floorplan to alleviate congestion early in the design cycle. A number of flow-based approaches that concurrently move blocks from congested regions to non-congested regions have been proposed. These techniques are highly inaccurate due to interference between the block movements. Furthermore, large changes in the floorplan do not preserve region and wirelength constraints. To solve this problem, we propose a novel approach that uses a minimum-perturbation zero slack assignment (ZSA) for distributing the congestion while preserving the wirelength quality of the design. The proposed technique employs an effective congestion estimator and uses a variation of ZSA for congestion distribution. Our experimental results show that by applying our method we can reduce congestion by 40% with, on average, less than an 11% increase in wirelength.
https://doi.org/10.1142/9789812836243_0009
This paper deals with the channel assignment problem in a hexagonal cellular network with two-band buffering, supporting multimedia services. We consider the simplest case of a multimedia cellular network dealing with only two different types of multimedia signals, where each cell has a single demand for each type. We first derive lower bounds on the minimum bandwidth requirement for assigning multimedia channels to a seven-node subgraph of the hexagonal cellular network. We next estimate lower bounds on the minimum bandwidth requirement for assigning channels in some real-life situations where the relative values of the frequency-separation constraints are somewhat restricted. Next, we present an algorithm for solving the multimedia channel assignment problem in its most general form, using a genetic algorithm (GA). We then propose a technique for cleverly re-using channels, exploiting the hexagonal symmetry of the cellular network and using only eighteen distinct frequency bands on a nine-node subgraph of the network. With this re-use of channels, we first find the required frequency-separation constraints among the channels to be assigned to the different nodes of the network, and then use our proposed GA-based algorithm for assigning the multimedia channels for the complete network. Experiments with different values of the frequency-separation constraints show that the proposed assignment algorithm converges very rapidly and generates near-optimal results with a bandwidth very close to our derived lower bound.
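The core feasibility check behind such an assignment is whether every pair of assigned channels respects its required frequency separation; a GA fitness function typically penalizes violations of these constraints. The small sketch below counts such violations; the data layout (a separation matrix sep and a channel list assignment) is our own illustrative assumption, not the encoding used in the paper.

def separation_violations(assignment, sep):
    # assignment[i] is the channel (an integer frequency index) given to node i;
    # sep[i][j] is the minimum required separation between the channels of
    # nodes i and j. Returns how many node pairs violate their constraint.
    n = len(assignment)
    return sum(1
               for i in range(n)
               for j in range(i + 1, n)
               if abs(assignment[i] - assignment[j]) < sep[i][j])

# Tiny invented example: three mutually adjacent cells requiring separation 2.
sep = [[0, 2, 2], [2, 0, 2], [2, 2, 0]]
print(separation_violations([0, 2, 4], sep))   # 0 violations
print(separation_violations([0, 1, 4], sep))   # 1 violation (nodes 0 and 1)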
https://doi.org/10.1142/9789812836243_0010
In a radio network, a set of pre-placed radio stations S = {s1, s2,…,sn} communicate with each other by transmitting and receiving radio signals. Each radio station si is assigned a range ρ(si), which is a non-negative real number. A radio station si can send a message to another radio station sj if the Euclidean distance between si and sj is less than or equal to ρ(si). An exhaustive literature survey of the range assignment problem is presented in this paper. Several optimization problems that arise in assigning ranges to the radio stations in different application-specific environments are discussed, along with the best known algorithms for solving them.
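For concreteness, the sketch below builds the directed communication graph induced by a range assignment (an edge from si to sj exists whenever the distance from si to sj is at most ρ(si)) and checks whether every station can reach every other one via multi-hop paths. This only illustrates the basic model; the function names and sample stations are our own.

import math

def comm_graph(stations, ranges):
    # Directed communication graph: station i can reach station j directly
    # if dist(i, j) <= ranges[i].
    n = len(stations)
    return {i: [j for j in range(n)
                if i != j and math.dist(stations[i], stations[j]) <= ranges[i]]
            for i in range(n)}

def reachable(adj, src):
    # Stations reachable from src, possibly via multi-hop paths.
    seen, stack = {src}, [src]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def strongly_connected(adj):
    # True if every station can reach every other one.
    return all(len(reachable(adj, s)) == len(adj) for s in adj)

stations = [(0, 0), (3, 0), (3, 4)]
ranges = [3.0, 5.0, 4.0]    # a candidate range assignment
print(strongly_connected(comm_graph(stations, ranges)))   # True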
https://doi.org/10.1142/9789812836243_0011
As the global information infrastructure is becoming more ubiquitous, digital business transactions are increasingly performed using a variety of mobile devices and across multiple communication channels. This new service-oriented paradigm is making the protection of privacy an increasing concern, as it relies on rich context representations (e.g., of location and purpose) and requires users to provide a vast amount of information about themselves and their behavior. This information is likely to be protected by a privacy policy, but restrictions to be enforced may come from different input requirements, possibly under the control of different authorities. In addition, users retain little control over their personal information once it has been disclosed to third parties. Secondary use regulations are therefore increasingly demanding attention. In this paper, we present the emerging trends in the data protection field to address the new needs and desiderata of today's systems.
https://doi.org/10.1142/9789812836243_0012
In the context of ubiquitous computing, small mobile devices are used as a platform for consuming various services. Specifically, personal data and information of a person (called the data owner) are distributed over many third-party organizations (data/service providers), and are being made available as Web services, such as monthly financial statements, personal medical data services (e.g., X-ray results), etc. Very often, the personal-information Web services are not used just by the data owner, but by third-party data consumers who work on cases on behalf of the data owner, such as financial advisers or doctors. In this environment, the data consumers often are not the same as the data owner. Access control is enforced to prevent confidential personal information from falling into the hands of “unauthorized” users. However, in many critical situations, such as emergencies, relevant information may need to be released even if users are not explicitly authorized. In this paper, we present the notion of situational role and propose a risk-based access control model that makes decisions by assessing the risk in releasing data in the situation at hand. Specifically, it employs the “access first and verify later” strategy so that needed personal information is released without delaying access for a decision-making third party, while still providing an adequate mechanism for the appropriate release of personal information by a third-party provider. Our approach employs the notion of situational role and uses semantics in building situational role hierarchies. It computes the semantic distance between the credential attributes required by the situational role and the actual role of a user requesting access, which is essentially what is used in assessing the risk.
https://doi.org/10.1142/9789812836243_0013
This chapter examines issues and methods for survivability of systems under malicious penetrating attacks. To protect from such attacks, it is necessary to take steps to prevent them from succeeding. At the same time, it is important to recognize that not all attacks can be averted at the outset; those that are partially successful may be unavoidable, and comprehensive support is required for identifying and responding to such attacks. We describe our Topological Vulnerability Analysis (TVA) system, which analyzes vulnerability to multi-step network penetration. At the core of the TVA system are graphs that represent known exploit sequences that attackers can use to penetrate computer networks. We show how TVA attack graphs can be used to compute actual sets of hardening measures that guarantee the safety of given critical resources. TVA can also correlate received alerts, hypothesize missing alerts, and predict future alerts. Thus, TVA offers a promising solution for administrators to monitor and predict the progress of an intrusion, and take quick appropriate countermeasures.
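To illustrate the kind of reasoning an attack graph supports, the following toy sketch computes the forward closure of attacker capabilities over a set of exploits, each modeled as a (preconditions, postconditions) pair; a critical resource is safe under a hardening choice if its condition never enters this closure. This is a simplified monotonic model for illustration only, not the TVA implementation, and the example conditions are invented.

def reachable_conditions(initial, exploits):
    # Forward closure over an attack graph: an exploit fires when all of its
    # preconditions hold, adding its postconditions to the attacker's state.
    # 'exploits' is a list of (preconditions, postconditions) pairs.
    state = set(initial)
    changed = True
    while changed:
        changed = False
        for pre, post in exploits:
            if set(pre) <= state and not set(post) <= state:
                state |= set(post)
                changed = True
    return state

exploits = [
    ({"attacker_on_internet", "webserver_vuln"}, {"shell_on_webserver"}),
    ({"shell_on_webserver", "db_trusts_webserver"}, {"access_to_database"}),
]
initial = {"attacker_on_internet", "webserver_vuln", "db_trusts_webserver"}
print(reachable_conditions(initial, exploits))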
https://doi.org/10.1142/9789812836243_0014
Most commercial antivirus software fails to detect unknown and new malicious code. In order to handle this problem, generic virus detection is a viable option. A generic virus detector needs features that are common to viruses. Recently, Kolter et al.18 proposed an efficient generic virus detector using n-grams as features. The fixed-length n-grams used there suffer from the drawback that they cannot capture meaningful long sequences. In this paper we propose a new method of variable-length n-gram extraction based on the concept of episodes and demonstrate that they outperform fixed-length n-grams in malicious code detection. The proposed algorithm requires only two scans over the whole data set, whereas most of the classical algorithms require a number of scans proportional to the maximum length of the n-grams.
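For reference, the fixed-length byte n-gram features that serve as the baseline here can be extracted with a few lines of code; the sketch below counts overlapping n-byte windows in a binary. The hexadecimal sample is an invented stand-in for real executable content, and the episode-based variable-length extraction proposed in the paper is more involved than this.

from collections import Counter

def byte_ngrams(data, n):
    # Count fixed-length n-grams (overlapping n-byte windows) in a binary blob.
    # This is the baseline feature set that variable-length, episode-based
    # n-grams aim to improve on.
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

sample = bytes.fromhex("4d5a90000300000004000000ffff0000")  # hypothetical file prefix
print(byte_ngrams(sample, 4).most_common(3))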
https://doi.org/10.1142/9789812836243_0015
Digital images can now be easily created, altered, and manipulated with no obvious traces of having been subjected to any of these operations. There are currently no established methodologies to verify the authenticity and integrity of digital images in an automatic manner. Digital image forensics is an emerging research field with important implications for ensuring the credibility of digital images. In an attempt to assist these efforts, this chapter surveys the recent developments in the field of digital image forensics. Proposed techniques in the literature are categorized into three primary areas based on their focus: image source identification, discrimination of synthetic images, and image forgery detection. The main idea of the proposed approaches in each category is described in detail, and reported results are discussed to evaluate the potential of the methods.
https://doi.org/10.1142/9789812836243_0016
Recent web-based applications offer users free service in exchange for access to personal communication, such as on-line services and instant messaging. The inspection and retention of user communication is generally intended to enable targeted marketing. However, unless specifically stated otherwise by the collecting service's privacy policy, such records have an indefinite lifetime and may be later used or sold without restriction. In this paper, we show that it is possible to protect a user's privacy from these risks by exploiting mutually oblivious, competing communication channels. We create virtual channels over online services (e.g., Google's Gmail, Microsoft's Hotmail) through which messages and cryptographic keys are delivered. The message recipient uses a shared secret to identify the shares and ultimately recover the original plaintext. In so doing, we create a wired “spread-spectrum” mechanism for protecting the privacy of web-based communication. We discuss the design and implementation of our open-source Java applet, Aquinas, and consider ways that the myriad of communication channels present on the Internet can be exploited to preserve privacy.
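The underlying idea of distributing shares over mutually oblivious channels can be illustrated with a toy XOR-based split: each channel carries one share that alone looks like random noise, and only a recipient holding all shares can recombine them. This sketch is our simplified illustration of the share-splitting principle, not the actual Aquinas protocol or its key-management scheme.

import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_message(plaintext: bytes, n_channels: int):
    # Split a message into n XOR shares, one per communication channel.
    # Any n-1 shares look like random noise; XOR-ing all n recovers the text.
    shares = [os.urandom(len(plaintext)) for _ in range(n_channels - 1)]
    last = reduce(xor_bytes, shares, plaintext)
    return shares + [last]

def recover(shares):
    return reduce(xor_bytes, shares)

shares = split_message(b"meet at noon", 3)   # e.g. one share per webmail account
print(recover(shares))                       # b'meet at noon'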