This book presents a selective collection of papers from the 20th International Symposium on Computer and Information Sciences, held in Istanbul, Turkey. The selected papers span a wide spectrum of topics in computer networks, including internet and multimedia, security and cryptography, wireless networks, parallel and distributed computing, and performance evaluation. They represent the latest research results of academics from more than 30 countries.
https://doi.org/10.1142/9781860947308_fmatter
PREFACE.
INVITED TALKS.
CONTENTS.
https://doi.org/10.1142/9781860947308_0001
This paper presents a lightweight passive replication protocol for deterministic servers in message-passing distributed systems. The protocol allows any server, not necessarily the primary, to take responsibility for processing a received client request and coordinating with the other replica servers after obtaining the delivery sequence number of the request from the primary. Thanks to this feature, the protocol, combined with conventional load-balancing techniques, can avoid extreme load conditions on the primary. Therefore, the protocol promises better scalability of deterministic and replicated services compared with traditional protocols. Simulation results indicate that the proposed protocol can reduce the average response time of a client request by 22.4% to 52.3%.
https://doi.org/10.1142/9781860947308_0002
In this paper, PAS (partial-path combination approach for constrained path selection) is proposed to find delay-constrained paths with the same order of complexity as Dijkstra’s algorithm. The performance of PAS as an underlying path selection heuristic in multicast routing is evaluated using randomly generated sessions on random networks. Simulations show that PAS produces delay-constrained paths very fast without significantly trading off tree cost for speed.
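For context, the following is a minimal sketch of the delay-constrained path selection problem that PAS targets: find a low-cost path whose end-to-end delay stays within a bound. The code below is a generic Dijkstra-style label search with constraint pruning over an assumed edge list representation, not the PAS algorithm itself.

```python
import heapq

def delay_constrained_path(graph, src, dst, delay_bound):
    """graph[u] = list of (v, cost, delay) edges; returns (cost, path) or None."""
    heap = [(0.0, 0.0, src, [src])]   # (cost so far, delay so far, node, path)
    expanded = {}                     # node -> smallest delay among already-expanded labels
    while heap:
        cost, delay, u, path = heapq.heappop(heap)
        if u == dst:
            return cost, path
        if expanded.get(u, float("inf")) <= delay:
            continue                  # dominated: a cheaper label with no more delay covered u
        expanded[u] = delay
        for v, c, d in graph.get(u, []):
            if delay + d <= delay_bound and v not in path:
                heapq.heappush(heap, (cost + c, delay + d, v, path + [v]))
    return None
```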
https://doi.org/10.1142/9781860947308_0003
It is expected that the proportional fair (PF) scheduler will be used widely in cdma2000 1×EV-DO systems because it maximizes the sum of each user’s utility, which is given by the logarithm of its average throughput. However, in terms of short-term average throughput, the PF scheduler may lead to large RTT variation. We analyze the impact of the PF scheduler on TCP start-up behavior through NS-2 simulation. To show the impact of PF scheduling on TCP, we also analyze the packet transmission delay under the PF scheduling policy through a mathematical model.
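For reference, the proportional fair rule is usually stated as follows (a commonly cited form, not necessarily the exact variant analyzed in the paper): at slot \(t\) the base station serves the user with the largest ratio of instantaneous feasible rate to exponentially averaged throughput,

\[
k^{*}(t) = \arg\max_{k} \frac{R_k(t)}{T_k(t)}, \qquad
T_k(t+1) = \left(1 - \frac{1}{t_c}\right) T_k(t) + \frac{1}{t_c}\, R_k(t)\,\mathbf{1}\{k = k^{*}(t)\},
\]

where \(R_k(t)\) is the feasible rate of user \(k\), \(T_k(t)\) its average throughput, and \(t_c\) the averaging window. This rule maximizes \(\sum_k \log T_k\), the utility referred to above, and the bursty per-user service it induces is what drives the short-term delay and RTT variation experienced by TCP.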
https://doi.org/10.1142/9781860947308_0004
This paper describes a novel method for flow assignment in survivable connection-oriented networks. The considered optimization problem is NP-complete and computationally difficult. In order to reduce the size of the problem, we propose to consider all of the network’s eligible routes, i.e. those that do not violate a predetermined hop-limit value. We focus on one of the restoration methods, local rerouting, used in the popular Multiprotocol Label Switching (MPLS) network technique to provide survivability. Seven different networks are analyzed to examine and evaluate the proposed approach.
https://doi.org/10.1142/9781860947308_0005
This paper focuses on how to allocate bandwidth fairly for new and handoff calls in multimedia cellular networks. Most (if not all) of the schemes proposed in the literature give priority to handoff calls at the expense of blocking new calls and degrading channel utilization. We present a new bandwidth allocation scheme based on a guard policy. Accordingly, new calls are blocked if the amount of occupied bandwidth is greater than a predefined bandwidth threshold. The scheme monitors the elapsed real time of handoff calls and, according to both a time threshold parameter and the call type, handoff calls are either prioritized or treated as new calls. We also introduce a crucial general performance metric Z that can be used to measure the performance of different bandwidth allocation schemes and compare them. Z, which is a performance/cost ratio, is a function of the new call blocking probability, the handoff call dropping probability and the system utilization taken together. Simulation results show that our scheme outperforms other traditional schemes in terms of the performance/cost ratio, and maintains its superiority under different network circumstances.
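The admission rule described above can be sketched as follows; this is a minimal illustration with hypothetical parameter names, not the paper's exact scheme: new calls face a guard bandwidth threshold, while a handoff call is prioritized only once its elapsed real time exceeds a time threshold and is otherwise treated like a new call.

```python
def admit(call, used_bw, capacity, bw_threshold, time_threshold):
    """call = {'type': 'new' | 'handoff', 'bw': requested bandwidth, 'elapsed': seconds}."""
    if used_bw + call['bw'] > capacity:
        return False                     # no free bandwidth at all
    if call['type'] == 'handoff' and call['elapsed'] >= time_threshold:
        return True                      # prioritized handoff: only total capacity matters
    # new calls, and "young" handoff calls treated as new, face the guard threshold
    return used_bw + call['bw'] <= bw_threshold
```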
https://doi.org/10.1142/9781860947308_0006
This study proposes two new systems for the client and server sides of video streaming services. The client-side performance is improved by employing inactive clients as local servers for their peers, while the server-side performance is increased by a redundant hierarchy of servers.
https://doi.org/10.1142/9781860947308_0007
Most applications consider network latency an important metric for their operation. Latency plays a particular role in time-sensitive applications such as data transfers or interactive sessions. Smart packets in cognitive packet networks can learn to find low-latency paths by explicitly expressing delay in their routing goal functions. However, to maintain the quality of paths, packets need to continuously monitor the round-trip delay that paths produce, so that the algorithm can learn of any change. The acquisition of network status requires space in packets and lengthens their transmission time. This paper proposes an alternative composite goal, consisting of path length and the buffer occupancy of nodes, that requires less storage space in packets while offering performance similar to a delay-based goal. Measurements in a network testbed and simulation studies illustrate the problem and the solution addressed in this study.
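As a concrete illustration of such a composite goal (the weights here are assumptions, not values from the paper), a smart packet can score a candidate path by a weighted sum of its hop count and the buffer occupancy of the nodes it traverses, removing the need to carry measured round-trip delays:

```python
def composite_goal(path_nodes, buffer_occupancy, w_hop=1.0, w_buf=1.0):
    """Score a candidate path by hop count plus queued packets along it (lower is better).
    path_nodes: ordered list of node ids; buffer_occupancy: {node_id: queue length}."""
    hops = len(path_nodes) - 1
    queued = sum(buffer_occupancy[n] for n in path_nodes)
    return w_hop * hops + w_buf * queued
```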
https://doi.org/10.1142/9781860947308_0008
Today’s business model, although its boundaries are not clear, comprises several layers: end users, Internet Service Providers (ISPs), exchange points, and carriers. Arbitrary technological upgrades and the absence of interaction in this layered business model have led to critical system shortcomings. The foremost issues caused by this model are (1) unoptimized topology, (2) low liquidity, (3) waste of resources, and (4) reduced interoperability.
We introduce a novel concept for coping with the issues and shortcomings of today’s business model. The Integrated Neutral Topology Optimizer (INTO) is an overall arbiter layer with commercial functionality. This layer operates with ISPs and carriers during the leasing process and continually monitors the traffic to optimize bandwidth usage. Inter- and intra-carrier connections are made and maintained according to the decisions of this layer.
https://doi.org/10.1142/9781860947308_0009
In this paper, we present our research on characterizing voice and video traffic behavior in large-scale Internet videoconferencing systems. We built a voice and video traffic quality measurement testbed to collect videoconferencing traffic traces from several sites all over the world that were connected to our testbed via disparate network paths on the Internet. Our testbed also featured the H.323 Beacon, an H.323 session performance assessment tool we have developed, and various other open-source and commercial tools. Our findings, obtained by analyzing the collected traffic traces, demonstrate the impact on end-user perception of audiovisual quality of (1) end-point technologies that use popular audio and video codecs and (2) network health status, characterized by variations in delay, jitter, and lost and re-ordered packets in the network. The perceptual data used in our analysis include both objective and subjective quality measures. These measures were collected from our testbed experiments for a few sample tasks involving various levels of human interaction in Internet videoconferences.
https://doi.org/10.1142/9781860947308_0010
We propose a dynamic bandwidth allocation scheme for downlink real-time video streaming in cellular networks. Our scheme maximizes bandwidth utilization while satisfying QoS constraints, e.g., the packet loss probability. It dynamically determines the amount of bandwidth to allocate at each unit time interval by measuring the queue length and the accumulated packet loss probability. The simulation results show that our scheme achieves the same level of performance, in terms of bandwidth utilization and packet loss probability, as can be accomplished with the pre-calculated effective bandwidth.
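A hypothetical sketch of such a per-interval allocation loop is given below; the adjustment step and the queue and loss targets are illustrative assumptions, not values or rules from the paper:

```python
def next_allocation(current_bw, queue_len, loss_prob,
                    queue_target, loss_target, step, bw_min, bw_max):
    """Return the bandwidth to allocate for the next unit time interval."""
    if loss_prob > loss_target or queue_len > queue_target:
        return min(bw_max, current_bw + step)   # QoS at risk: allocate more bandwidth
    return max(bw_min, current_bw - step)       # QoS satisfied: reclaim bandwidth
```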
https://doi.org/10.1142/9781860947308_0011
The restoration methods for MPLS (Multi-Protocol Label Switching) networks are 1+1, 1:1, 1:N, and M:N. However, these methods waste network resources and cannot provide complete restoration when the physical connectivity is cut. There is no guarantee that this problem does not arise when these restoration methods are applied to GMPLS (Generalized MPLS) networks. This paper proposes a restoration mechanism that decreases the chance of encountering the same problem when the Kini mechanism is applied over a GMPLS network. The mechanism computes a backup path at the time of working-path computation, and resource utilization can be maximized if resource allocation is executed only after a network fault. When a network fault occurs, the method extends the PATH or RESV messages of RSVP-TE to allocate the resource as quickly as possible.
https://doi.org/10.1142/9781860947308_0012
A wideband chirped fiber Bragg grating (FBG) dispersion compensator operating in C band is designed theoretically by numerically solving the coupled mode equations. The power reflectivity spectrum and dispersion characteristics of the chirped fiber Bragg gratings are analysed. In order to achieve wideband dispersion compensation with a low insertion loss, grating length, average refractive index change, apodization profile and chirp parameter of the grating should be precisely optimized. The chirped FBG designs achieved in this study have resulted in a negative dispersion of 4.95 ps/nm for a grating length of L = 10 mm and a negative dispersion of 9.76 ps/nm for a grating length of L = 20 mm with 16 nm bandwidth at around 1550 nm.
https://doi.org/10.1142/9781860947308_0013
Reducing power consumption to extend network lifetime is one of the most important challenges in designing wireless sensor networks. One promising approach to reducing energy consumption is node scheduling, which keeps only a subset of sensor nodes active and puts the others into a low-power sleep state. However, most previous work on node scheduling considers only sensing coverage. In this paper, we consider sensing coverage and communication connectivity simultaneously and address the issue of constructing a minimal connected cover set in a wireless sensor network. We propose a centralized, Voronoi tessellation (CVT) based algorithm to select the minimum number of active sensor nodes needed to cover the region of interest (ROI) completely. The constructed sensor set is connected when a sensor node’s communication radius is at least twice its sensing radius. For other situations, where the CVT algorithm alone cannot maintain network connectivity, we design a Steiner minimum tree (SMT) based algorithm to ensure connectivity. Finally, we evaluate the performance of the proposed algorithms through numerical experiments.
https://doi.org/10.1142/9781860947308_0014
This paper deals with message scheduling for a CAN (Controller Area Network) that is based on a distributed control scheme integrating the actuators and sensors of a humanoid robot. For a humanoid robot to implement distributed processing, each control unit should have an efficient control method, fast calculation, and valid data exchange capabilities. A preliminary study revealed that CAN has better performance and is easier to implement than other networks such as FIP (Factory Instrumentation Protocol), VAN (Vehicle Area Network), etc. Since a humanoid robot has to process all the control signals from many actuators and sensors, the communication time limitation depends on the transmission speed and data length of CAN. In this paper, a CAN message scheduling scheme is proposed for a humanoid robot under the conditions of a jitter-prone message group, a high message load on the network, and transmission errors. In addition, the worst-case response time was compared with the limitation time given by the simulation algorithm. The proposed message scheduling can guarantee the CAN time limitation, and can be used to generate the walking patterns for a humanoid robot.
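Worst-case response-time comparisons of this kind are typically grounded in the classical CAN schedulability analysis. The sketch below is a hedged illustration of that standard Tindell-style fixed-point iteration, not necessarily the exact formulation used in the paper; all times are assumed to be in integer bit-times.

```python
def can_wcrt(msg, higher_prio, tau_bit):
    """Worst-case response time of a CAN message, or None if its deadline is missed.
    msg and each entry of higher_prio: {'C': transmission time, 'T': period,
    'J': queuing jitter, 'D': deadline}; msg additionally carries 'B', the blocking
    time of the longest lower-priority frame. All times in integer bit-times."""
    w = msg['B']
    while True:
        w_next = msg['B'] + sum(
            -(-(w + hp['J'] + tau_bit) // hp['T']) * hp['C']   # ceiling division
            for hp in higher_prio
        )
        if msg['J'] + w_next + msg['C'] > msg['D']:
            return None                       # deadline exceeded: not schedulable
        if w_next == w:
            return msg['J'] + w + msg['C']    # fixed point reached
        w = w_next
```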
https://doi.org/10.1142/9781860947308_0015
This paper introduces a new J2ME RMI package that makes use of object compression in order to minimize transmission time. The package also makes use of object encryption for secure channels. The currently used RMI package for wireless devices provides neither of these features. Our package substantially outperforms the existing Java package in the total time needed to compress, transmit, and decompress an object over GPRS networks, even under adverse conditions. The results show that the extra time incurred to compress and decompress serialized objects is small compared to the time required to transmit the object without compression in GPRS networks. Existing RMI code for J2ME can be used with our new package without modification.
https://doi.org/10.1142/9781860947308_0016
iSCSI (Internet Small Computer System Interface) is a block-oriented storage access protocol that enables a user to access remote storage as a local block device over general TCP/IP networks. Since iSCSI uses standard Ethernet switches and routers for this kind of access, it is not limited to Ethernet technologies: it can also be used to create a storage networking system without any distance restrictions, and it applies equally to a wireless network environment. Accordingly, focusing on this applicability, this paper presents an alternative approach to overcoming the limited storage space of mobile devices based on the iSCSI initiator driver, which was originally designed for wired networks. Its potential in a wireless network is also evaluated.
https://doi.org/10.1142/9781860947308_0017
A wireless sensor network (WSN) consisting of a large number of micro-sensors with low-power transceivers can be an effective tool for data gathering in various environments. The energy constraint is the main challenge a WSN faces. Focusing on the characteristics of routing in WSNs, we propose a novel Robust Cluster-based Multi-hop routing algorithm (RCM) that can save energy remarkably. The algorithm adaptively organizes sensors into multiple clusters, where each cluster includes a header and several members. Members take charge of data collection and communication with their header, while the header carries out data fusion and forwards packets to the sink at cluster granularity. Furthermore, dynamic header rotation and node fail-to-resume mechanisms balance the energy cost across nodes and improve robustness, thereby prolonging the lifetime of the network. Simulations in ns-2 demonstrate the advantages of the algorithm, including energy efficiency, scalability and robustness.
https://doi.org/10.1142/9781860947308_0018
Security is a major concern in sensor networks, and key establishment is the basic element of secure communication. In this paper, we propose a new pairwise key establishment mechanism based on clustering and polynomial sharing, and we also propose a new authentication mechanism. Through analysis, we show that our key establishment mechanism achieves good performance and that the proposed authentication mechanism provides both unicast and broadcast authentication.
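As background on what "polynomial sharing" usually denotes, the sketch below illustrates pairwise key establishment from a symmetric bivariate polynomial in the style of Blundo et al.; it is an assumption-laden illustration, not the paper's clustering-based construction.

```python
import random

Q = 2**31 - 1   # a prime modulus (illustrative choice)

def symmetric_poly(t):
    """Random symmetric bivariate polynomial f(x, y) of degree t over GF(Q)."""
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = random.randrange(Q)
    return a

def share(a, node_id):
    """Node's share g(y) = f(node_id, y), returned as univariate coefficients."""
    t = len(a) - 1
    return [sum(a[i][j] * pow(node_id, i, Q) for i in range(t + 1)) % Q
            for j in range(t + 1)]

def pairwise_key(my_share, peer_id):
    """Evaluate the share at the peer's id; both sides obtain f(u, v) = f(v, u)."""
    return sum(c * pow(peer_id, j, Q) for j, c in enumerate(my_share)) % Q

# Example: nodes 17 and 42 derive the same key from their individual shares.
f = symmetric_poly(t=3)
assert pairwise_key(share(f, 17), 42) == pairwise_key(share(f, 42), 17)
```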
https://doi.org/10.1142/9781860947308_0019
The Hypertext Transfer Protocol (HTTP) has progressively increased in sophistication and scope, somewhat in parallel with the growth of the Internet. HTTP’s first incarnation allowed only one transaction (request and retrieval) per connection, incurring a high overhead penalty for repetitive and laborious Transmission Control Protocol (TCP) connection management. HTTP was progressively optimised through the inclusion of pipelined and persistent connections [1,2] to improve its non-optimal use of TCP. We introduce HTTP-MPLEX, an application-layer multiplexing adaptation of HTTP 1.1 that compresses GET requests and multiplexes responses. Our protocol is backwards compatible. It minimises verbose request header overhead, reduces the need for multiple server-client connections, and allows prioritised object delivery through a companion response encoding scheme.
https://doi.org/10.1142/9781860947308_0020
The current Internet infrastructure is suffering from various types of Distributed Denial of Service (DDoS) attacks. Internet worms are one of the most crucial problems in the field of computer security today. Worms can propagate so fast that most Internet services around the world may be disabled by the DDoS effects of their self-propagation. In our earlier research, we presented Traffic Rate Analysis (TRA) to analyze the characteristics of network traffic for DDoS attacks. In this research, we propose a Support Vector Machine (SVM) approach combined with TRA to automatically detect DDoS attacks. Experimental results show that the SVM can be a highly useful classifier for detecting DDoS attacks.
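A minimal sketch of such a detection pipeline is shown below; the synthetic stand-in features, kernel and parameters are assumptions made for illustration (the real features would come from Traffic Rate Analysis, e.g. per-window rates and TCP-flag ratios), not the authors' setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-window traffic-rate features: rows are observation
# windows, label 1 marks windows recorded during an attack, 0 marks normal traffic.
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.2, scale=0.05, size=(500, 4))
attack = rng.normal(loc=0.8, scale=0.10, size=(500, 4))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0, gamma='scale'))
clf.fit(X_tr, y_tr)
print('detection accuracy on held-out windows:', clf.score(X_te, y_te))
```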
https://doi.org/10.1142/9781860947308_0021
In this paper we analyze the performance of end-to-end security in wireless applications. The WTLS (Wireless Transport Layer Security) handshake protocol is used as the key security protocol. Several scenarios and different cryptosystems are considered. We took an experimental approach and implemented the protocols and the necessary crypto primitives on both a wireless handheld device and a server. Tests are performed over a GSM provider network. Processing, queuing and transmission delays are considered in the analysis. Results are interpreted from both the client’s and the server’s points of view. Not only the key sizes proposed by the WTLS standard, but also stronger key sizes are tested. Results show that (i) Elliptic Curve Cryptosystems (ECC) perform better than the RSA cryptosystem, and (ii) it is possible to use ECC key sizes larger than the ones proposed in the WTLS standard without significant performance degradation. In our tests, GSM CSD and GPRS bearers are taken into account. Another interesting result is that these two bearers perform close to each other in the WTLS handshake protocol because of similar and significant traversal delays in both bearers.
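For intuition about result (i), the micro-benchmark sketch below (not the paper's GSM/GPRS testbed) contrasts an ECDSA signature on a 256-bit curve with an RSA signature at a roughly comparable 3072-bit modulus, using the pyca/cryptography library; the curve, key size and iteration count are illustrative assumptions.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, rsa, padding

message = b'WTLS handshake payload (illustrative)'

ec_key = ec.generate_private_key(ec.SECP256R1())
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

t0 = time.perf_counter()
for _ in range(100):
    ec_key.sign(message, ec.ECDSA(hashes.SHA256()))
t1 = time.perf_counter()
for _ in range(100):
    rsa_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
t2 = time.perf_counter()

print(f'ECDSA P-256: {(t1 - t0) * 10:.2f} ms per signature')
print(f'RSA-3072:    {(t2 - t1) * 10:.2f} ms per signature')
```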
https://doi.org/10.1142/9781860947308_0022
Ubiquitous computing aims at pursuing naturalness, harmony and adaptation. We think that semantics is the key aspect of the cooperation and communication of people, devices and smart devices. This paper presents a semantic context model for an adaptive middleware. We emphasize the fusion of semantic context information in a smart vehicle space for ubiquitous computing. Semantic web technology and the Web Ontology Language are used to build the ontology of the smart vehicle space. In addition, we propose an application scenario and a prototype system. The contribution is twofold. First, we present the architecture of the semantic context model. Second, we have built the ontology of the smart vehicle space, with which we perform context fusion and inference for middleware adaptation.
https://doi.org/10.1142/9781860947308_0023
Context-awareness and security are critical issues in ubiquitous computing. In this paper we present a framework for context-aware authorization in ubiquitous computing environments. We present an architecture consisting of an authorization infrastructure and a context infrastructure. The authorization infrastructure makes decisions to grant access rights based on both contexts and policies specified in a flexible language. The context infrastructure provides contexts at various levels of abstraction and enables context users to acquire contexts by submitting a query or using an event notification mechanism. The policy specification language allows one to authorize, prohibit, delegate, and revoke access rights. It also has constructs to package policies, resolve conflicts among policies, and specify the interaction with the context infrastructure.
https://doi.org/10.1142/9781860947308_0024
Elliptic Curve Cryptography is one of the emerging public key cryptography algorithms used to provide communications security. Providing security entails a decrease in performance, and this decrease should be kept minimal in order to make the security algorithm attractive. Today, Internet communications and sensor networks enjoy broad usage and research interest. In this paper, the performance of the most dominant Internet security protocol, Secure Sockets Layer (SSL), and of TinySec, the link-layer security mechanism for sensor networks, both enhanced with Elliptic Curve Cryptography, is studied.
https://doi.org/10.1142/9781860947308_0025
In this paper, we design a new heuristic for an important extension of the minimum power multicasting problem in ad hoc wireless networks [20,21]. Assuming that each transmission takes a fixed amount of time, we impose constraints on the number of hops allowed to reach the destination nodes in the multicasting application. This setting is applicable to time-critical or real-time applications, and the relative importance of the nodes may be indicated by these delay bounds. We design a filtered beam search procedure to solve this problem. The performance of our algorithm is demonstrated on numerous test cases by benchmarking it against an optimal algorithm on small problem instances, and against a modified version of the well-known Broadcast Incremental Power (BIP) algorithm [20] for relatively large problems.
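For reference, the benchmark mentioned above builds on the Broadcast Incremental Power (BIP) algorithm [20]. The sketch below is a hedged illustration of plain BIP for the broadcast case under the common distance-power model p = d^alpha, without the hop-count constraints of this paper or BIP's sweep refinement.

```python
import math

def bip(positions, source, alpha=2.0):
    """Broadcast Incremental Power sketch. positions: {node: (x, y)};
    returns {node: assigned transmit power} so that all nodes are covered from source."""
    def cost(u, v):
        return math.dist(positions[u], positions[v]) ** alpha

    power = {n: 0.0 for n in positions}      # current transmit power per node
    in_tree = {source}
    while len(in_tree) < len(positions):
        # cheapest way to cover one more node: raise some in-tree node's power
        inc, u, v = min(((cost(u, v) - power[u], u, v)
                         for u in in_tree for v in positions if v not in in_tree),
                        key=lambda t: t[0])
        power[u] = max(power[u], cost(u, v)) # raise u's power just enough to reach v
        in_tree.add(v)
    return power
```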
https://doi.org/10.1142/9781860947308_0026
This paper presents a robust video segmentation method particularly aimed at videophone and videoconference type video sequences containing slowly moving objects. Classical change detection approaches based only on the frame difference of two successive image frames usually result in incorrect segmentations for video sequences containing slow motion or temporary poses. The video segmentation approach presented in this paper carries out block-based change detection using numerous preceding image frames to accomplish successful segmentation in the presence of slow motion and temporary poses.
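An illustrative numpy sketch of the block-based, multi-frame idea is given below; the block size, thresholds and voting rule are assumptions for illustration, not the paper's parameters. Accumulating evidence over several preceding frames lets slowly moving or momentarily static objects still register as changed blocks.

```python
import numpy as np

def changed_blocks(frames, block=16, diff_thresh=10.0, vote_thresh=0.5):
    """frames: list of grayscale images (H, W) as float arrays, oldest first;
    the last entry is the current frame. Returns a boolean mask over blocks."""
    current = frames[-1]
    h, w = current.shape
    votes = np.zeros((h // block, w // block))
    for past in frames[:-1]:
        diff = np.abs(current - past)
        for by in range(h // block):
            for bx in range(w // block):
                patch = diff[by*block:(by+1)*block, bx*block:(bx+1)*block]
                if patch.mean() > diff_thresh:
                    votes[by, bx] += 1
    # a block is "changed" if enough preceding frames disagree with the current one
    return votes >= vote_thresh * (len(frames) - 1)
```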
https://doi.org/10.1142/9781860947308_0027
In reliable multicast systems with a receiver-oriented reliability approach, the reliability mechanism works based on the detection of communication failures by the receivers themselves. Such mechanisms imply that a receiver can detect that it has missed one or more messages by means of the successful receipt of a subsequent message. However, if some receivers miss the last message and there is no subsequent message for a relatively long period of time, the receiver group gets into an undesirable state in which “all or none” semantics, i.e. atomicity, is violated. This paper proposes a mechanism to prevent such a scenario by introducing a new reliable multicast protocol, RDDP-LAN, that provides a loss-tolerant, totally ordered and atomic reliable multicast service for LANs of Ethernet type. To evaluate the system, a prototype of the protocol has been implemented and investigated in a Linux environment. Throughput performance results and the cost of the early message loss detection mechanism are presented.
https://doi.org/10.1142/9781860947308_0028
The IETF NEMO working group proposed the basic support protocol for network mobility to support the movement of a mobile network consisting of several mobile nodes. However, this protocol has been found to suffer from the so-called ‘dog-leg problem’, and despite alternative research efforts to solve this problem, there are still limitations on the efficiency of real-time data transmission and intra-domain communication. Accordingly, this paper proposes a new route optimization methodology that uses unidirectional tunneling and a tree-based intra-domain routing mechanism, which can significantly reduce delay in both signaling and data transmission.
https://doi.org/10.1142/9781860947308_0029
Prediction is one of the most important factors preventing a Web system from collapsing when high-intensity transactional traffic arrives at it. It is necessary to ensure that the quality of prediction is good enough to keep the utilization of the Web system under control, by allocating its resources among the different types of incoming traffic that can reach the system and by controlling bursty transaction arrivals. We have developed an algorithm that resides in a Web switch and distributes the workload among a set of servers following a resource allocation policy based on quality of service (QoS) attributes. The algorithm includes a reward scheme to control the accuracy of the predictions in real time.
https://doi.org/10.1142/9781860947308_0030
Next-generation wireless network environments are expected to support anywhere, anytime connectivity for high-performance applications such as multimedia, full-motion video and high data rates with appropriate quality of service (QoS). To provide a solution to these demanding needs, service providers intend to combine different technologies into heterogeneous networks. In this study, we propose a fuzzy-based vertical handoff scheme for interworking between WLAN and GSM. We concentrate on crucial features such as user preference, network conditions, speed, RSS and support for different service types. After extensive simulation, the results show that the proposed handoff scheme maintains a stable utilization against mobility under a specific load. Moreover, the proposed scheme provides good results for the call blocking and dropping probabilities, Grade of Service (GoS), and throughput.
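A simplified illustration of fuzzy-style handoff scoring is sketched below; the membership functions, weights and hysteresis margin are assumptions, not the paper's rule base. Each input is mapped to a [0, 1] degree, the degrees are combined into a per-network desirability score, and a handoff is triggered only when the candidate network beats the current one by a margin.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def desirability(rss_dbm, load, speed_kmh, user_pref):
    good_rss   = tri(rss_dbm, -90, -60, -30)
    light_load = tri(load, -0.2, 0.0, 0.8)
    slow       = tri(speed_kmh, -10, 0, 30)   # WLAN favours slow-moving users
    return 0.4 * good_rss + 0.2 * light_load + 0.2 * slow + 0.2 * user_pref

def handoff_to_wlan(current_score, wlan_inputs, margin=0.1):
    """Hand off only when the WLAN score exceeds the current network's by a margin."""
    return desirability(*wlan_inputs) > current_score + margin
```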
https://doi.org/10.1142/9781860947308_0031
The reliability issue in multicasting involves the challenges of detecting and recovering packet losses and of delivering the entire data in order. In this work, existing reliable multicast protocols are classified into three main groups, namely tree-based, NACK-only and router-assisted, and a representative protocol of each group is selected to demonstrate the advantages and disadvantages of the corresponding approach. The selected protocols are SRM, PGM and RMTP. The performance characteristics of these protocols are empirically evaluated using simulation results. A public Network Simulator 2 (NS-2) implementation of RMTP was also developed as part of this study, and the protocols were evaluated by investigating the performance metrics of distribution delay, recovery latency and request overhead with respect to varying multicast group size and link loss rate.
https://doi.org/10.1142/9781860947308_0032
Simulations, emulations, and test deployments play a central role in the design and development of ad hoc network protocols and software. Among these three methods, emulation has gained considerable popularity. This is partly because emulation addresses the efficiency-accuracy trade-off of simulations by incorporating real hardware or software into the synthetic environment in a controlled manner. Although generally treated as a special case of simulation, emulation experimentation has its own problems and pitfalls. These problems do not typically show up in simulation studies, since they stem from the presence of real hardware or software in the experiment. In this paper, a sketch of an emulation-integrated development lifecycle is presented to establish the boundaries of emulation experiments, and the problems specific to emulations are identified and discussed.
https://doi.org/10.1142/9781860947308_0033
The problem of maximizing network lifetime in wireless ad hoc networks is addressed with a cooperative routing approach. The network lifetime optimization problem is defined as a linear programming problem, and it is shown through simulations that networks utilizing cooperative transmission have a longer lifetime than those that do not.
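For reference, the non-cooperative version of the lifetime-maximization LP is commonly written as follows (a standard formulation in the spirit of Chang and Tassiulas, to which cooperative-transmission terms would be added; it is not claimed to be the paper's exact model):

\[
\begin{aligned}
\max_{T,\; f_{ij}\ge 0} \quad & T\\
\text{s.t.} \quad & \sum_{j} f_{ij} - \sum_{j} f_{ji} = T\, g_i && \forall i \neq \text{sink},\\
& \sum_{j} e^{\mathrm{tx}}_{ij}\, f_{ij} + e^{\mathrm{rx}} \sum_{j} f_{ji} \le E_i && \forall i,
\end{aligned}
\]

where \(f_{ij}\) is the total traffic sent from node \(i\) to node \(j\) over the lifetime \(T\), \(g_i\) the data generation rate of node \(i\), \(e^{\mathrm{tx}}_{ij}\) and \(e^{\mathrm{rx}}\) the per-bit transmission and reception energies, and \(E_i\) the initial battery energy of node \(i\).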
https://doi.org/10.1142/9781860947308_0034
This paper discusses a novel approach to identifying entities involved in ad-hoc wireless communications through using an efficient visual code system called UbiCode. UbiCode, used in an out-of-band channel in order to bootstrap trust between entities unknown to each other, facilitates a mechanism for demonstrative identification of entities involved in ad-hoc wireless communications. We present the design of UbiCode as well as identification protocols leveraging the system. We also demonstrate our approach through a proof-of-concept implementation.
https://doi.org/10.1142/9781860947308_0035
Mobile IP (MIP) is the current standard for supporting mobility in wireless mobile networks. However, the latency and packet losses experienced during handovers may significantly degrade the quality of a seamless connection in Mobile IP. Thus, minimizing handover latency and the number of lost packets during a handover is an important task in wireless mobile networks. In this paper, we propose a new mobility management scheme to handle the movements of Mobile Nodes (MNs) among different cells in the same wireless network, in order to reduce data loss and maintain uniform connectivity. We also provide performance results for both the proposed and the basic horizontal handovers in wireless mobile networks. The simulation results show that the proposed model significantly decreases latency at the expense of slightly increased data traffic.
https://doi.org/10.1142/9781860947308_0036
Third-generation mobile telecommunications systems enable multimedia services, combining the growth in mobile communications with the growth of the Internet. On the other hand, WLANs are easy to implement, cheaper to construct and provide high-speed services compared with other wireless technologies. In order to achieve access independence and to maintain smooth interoperation with wireline and wireless terminals across the Internet, security is an important issue that must be handled. One step towards interoperability may be providing secure interoperation between 802.11 WLAN and 3G networks. Although both systems have authentication mechanisms in their own domains, there is no single mechanism that provides seamless authentication between the two domains. In this paper, an AAA-based authentication mechanism between 3G networks and 802.11 networks is presented. The authentication mechanism provides seamless authentication between the two domains using the existing authentication mechanisms in each domain, and provides a way to distribute the session key and encryption algorithm between the 3G core network and the 802.11 core network.