Considering continuous routing, we analyze the transient behavior of n×n routers with input buffering, split input buffering, output buffering, and central buffering with dedicated virtual circuits, one for each source-destination pair in a network. Under similar buffer space requirements, output buffering has the highest throughput. Split input buffering and central buffering have comparable performance; split input buffering slightly outperforms central buffering for large switches. Input buffering is known to saturate at packet generation rates above 0.586. By extending these models, we compare two 1024-node, unique-path multistage networks, one configured with (approximately modeled) input-buffered 32×32 STC104 switches and the other with central-buffered 4×4 Telegraphos switches (Telegraphos I version). Surprisingly, the network configured with the smaller switches performs better, owing to the higher peak bandwidth of the Telegraphos switch and the saturation of the input-buffered STC104 switch.
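To make the 0.586 figure concrete, the following sketch (not the authors' transient model; parameter names such as n_ports and n_slots are illustrative) simulates head-of-line blocking in a saturated n×n switch with FIFO input buffers; per-port throughput approaches 2 − √2 ≈ 0.586 as the switch grows.

```python
# Minimal sketch, assuming the classic saturated head-of-line (HOL) model:
# every input always holds a HOL packet with a uniformly random destination,
# and each output serves one randomly chosen contender per slot.
import random

def hol_throughput(n_ports=32, n_slots=20000, seed=1):
    random.seed(seed)
    hol = [random.randrange(n_ports) for _ in range(n_ports)]  # HOL destination per input
    delivered = 0
    for _ in range(n_slots):
        contenders = {}
        for i, dest in enumerate(hol):
            contenders.setdefault(dest, []).append(i)          # inputs competing per output
        for _out, inputs in contenders.items():
            winner = random.choice(inputs)
            hol[winner] = random.randrange(n_ports)            # next packet moves to the head
            delivered += 1
    return delivered / (n_ports * n_slots)

print(f"saturation throughput ~ {hol_throughput():.3f}")       # close to 0.586 for large n
```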
Transmission efficiency and robustness are two important properties of networks, and a number of optimization strategies have been proposed recently. We propose a scheme to enhance network performance by adding a small fraction of links (edges) to an existing network topology, and we present four strategies for adding edges efficiently. We aim to minimize the maximum node betweenness in the network in order to improve its transmission efficiency, and experiments on both Barabási–Albert (BA) and Erdős–Rényi (ER) networks confirm the effectiveness of the four edge-addition strategies. We also evaluate the effect on other metrics such as average path length, average betweenness, robustness, and degree distribution. Our work can help service providers optimize network performance by adding a small fraction of edges, or plan incremental extensions of an existing network topology.
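As an illustration of the objective, the sketch below implements a naive greedy edge-addition strategy; this is an assumption on our part, not one of the paper's four strategies. Each added edge is chosen, by exhaustive search over non-edges, to minimize the resulting maximum node betweenness of a small BA graph (networkx assumed available).

```python
# Minimal sketch of edge addition aimed at minimizing maximum node betweenness.
import networkx as nx

def add_edges_min_max_betweenness(G, k):
    """Greedily add k edges, each chosen to minimize the resulting max betweenness."""
    for _ in range(k):
        best_edge, best_max = None, float("inf")
        for u, v in list(nx.non_edges(G)):          # snapshot, since G is modified below
            G.add_edge(u, v)
            worst = max(nx.betweenness_centrality(G).values())
            G.remove_edge(u, v)
            if worst < best_max:
                best_edge, best_max = (u, v), worst
        G.add_edge(*best_edge)
        print(f"added {best_edge}, max betweenness -> {best_max:.4f}")
    return G

G = nx.barabasi_albert_graph(50, 2, seed=42)
print(f"initial max betweenness: {max(nx.betweenness_centrality(G).values()):.4f}")
add_edges_min_max_betweenness(G, 3)
```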
Of the many factors that contribute to communication performance, perhaps one of the least investigated is that of message-buffer alignment. Although the generally accepted practice is to page-align buffer memory for best performance, our studies show that the actual relationship of buffer alignment to communication performance cannot be expressed with such a simple formula. This paper presents a case study in which porting a simple network performance test from one language to another resulted in a large performance discrepancy even though both versions of the code consist primarily of calls to messaging-layer functions. Careful analysis of the two code versions revealed that the discrepancy relates to the alignment in memory of the message buffers. Further investigation revealed some surprising results about the impact of message-buffer alignment on communication performance: (1) different networks and node architectures prefer different buffer alignments; (2) page-aligned memory does not always give the best possible performance, and, in some cases, actually yields the worst possible performance; and, (3) on some systems, the most significant factor affecting network performance is the relative alignment of send and receive buffers with respect to each other.
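The sketch below is a hypothetical illustration, not the paper's benchmark code: it shows one way to place send and receive buffers at chosen offsets from a page boundary so that alignment effects can be measured; a real test would time messaging-layer transfers rather than an in-memory copy.

```python
# Minimal sketch: allocate buffers at controlled offsets from a page boundary.
import ctypes

PAGE = 4096  # typical page size; a real test would query it from the OS

def buffer_at_offset(size, offset=0):
    """Return (memoryview, backing object) of `size` bytes starting `offset`
    bytes past a page boundary. The backing object must be kept alive."""
    raw = ctypes.create_string_buffer(size + PAGE + offset)   # over-allocate
    skip = (-ctypes.addressof(raw)) % PAGE + offset           # distance to boundary + offset
    view = memoryview(raw).cast("B")[skip:skip + size]
    return view, raw

send, _keep_s = buffer_at_offset(1 << 20, offset=0)    # page-aligned send buffer
recv, _keep_r = buffer_at_offset(1 << 20, offset=64)   # receive buffer 64 B past a boundary
recv[:] = send                                          # stand-in for a timed network transfer
```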
We propose a symmetry-based scheme, drawing on results from group theory, and use it to build a new class of data-center network models. These models outperform current network models with respect to a number of performance criteria. Greater symmetry in networks is important, as it leads to simpler structure and more efficient communication algorithms; it also tends to produce better scalability and greater fault tolerance. Our models are general and are expected to find many applications, but they are particularly suitable for large-scale data-center networks.
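As a hedged illustration of the kind of symmetry involved, and not the authors' construction, the sketch below builds a Cayley graph of the group Z_2^n with the unit vectors as generators; Cayley graphs are vertex-transitive, so the network looks identical from every node.

```python
# Minimal sketch: a Cayley graph of Z_2^n (this particular choice of generators
# yields the n-dimensional hypercube, used here only to illustrate symmetry).
import networkx as nx

def cayley_graph_z2n(n):
    gens = [1 << i for i in range(n)]      # unit vectors of Z_2^n as bit masks
    G = nx.Graph()
    for v in range(1 << n):
        for g in gens:
            G.add_edge(v, v ^ g)           # group operation: bitwise XOR
    return G

G = cayley_graph_z2n(4)
# Every node has the same degree and the same local view of the network.
print(sorted(set(dict(G.degree()).values())))   # -> [4]
```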
Network traffic results from applications operating on a network and is believed to have a significant impact on network performance. The majority of current network performance analyses assume that traffic is transmitted along shortest paths, which is too simple to reflect a real traffic process. A real traffic process depends on the characteristics of the application processes involved, including realistic user behavior. In this paper, applications are first divided into three categories according to realistic application process characteristics: random applications, customized applications, and routine applications. Numerical simulations are then carried out to analyze the effect of different applications on network performance. The main results show that (i) network efficiency for the BA scale-free network is lower than for the ER random network when the same single application is loaded on each network, and (ii) customized applications have the greatest effect on network efficiency when multiple mixed applications are loaded on the BA network.
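For context, the sketch below constructs the two topologies compared above and computes their static global efficiency (the mean of inverse shortest-path lengths); this is only a topological baseline under our assumptions, not the paper's traffic-driven simulation of the three application categories.

```python
# Minimal sketch: static global efficiency of comparable BA and ER topologies.
import networkx as nx

n, avg_deg = 500, 6
ba = nx.barabasi_albert_graph(n, avg_deg // 2, seed=1)        # scale-free topology
er = nx.erdos_renyi_graph(n, avg_deg / (n - 1), seed=1)       # random topology, same mean degree

print("BA global efficiency:", round(nx.global_efficiency(ba), 3))
print("ER global efficiency:", round(nx.global_efficiency(er), 3))
```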
A social network is composed of social individuals and their relationships. In many real-world applications, such a network evolves dynamically over time and events. A social network can be naturally viewed as a multiagent system if locally interacting social individuals are regarded as autonomous agents. In this paper, we present an Autonomy-Oriented Computing (AOC) based model of a social network and study the dynamics of the network based on this model. The AOC model defines the profiles of agents, their service-based interactions, and the evolution of the network, and it emphasizes the autonomy of the agents. The model can reveal dynamic relationships among global performance, local interaction (partner selection) strategies, and network topology. The experimental results show that the agent network forms a community with a high clustering coefficient, and that network performance changes dynamically with the formation of the network and with the agents' local interaction strategies. We analyze the performance and topology of the agent network and examine the factors that affect its performance and evolution.
In this paper, we develop methods to estimate the network coverage of a TTL-bound query packet flooded over an unstructured p2p network. The estimation, based on the degree distribution of the network, reveals that certain cycle-forming edges, which we call cross and back edges, reduce the coverage of peers in p2p networks and also generate a large number of redundant messages, thus wasting precious bandwidth. We therefore develop models to estimate the back/cross edge probabilities and the network coverage of the peers in the presence of these edges. Extensive simulation on random, power-law, and Gnutella networks verifies the correctness of the model. The results highlight the fact that for real p2p networks, which are large but finite, the percentage of back/cross edges can increase enormously with increasing distance from a source node, leading to huge traffic redundancy.
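The following sketch is a direct flooding simulation rather than the paper's analytical model: it floods a TTL-bound query from a source over a random overlay and counts coverage together with redundant deliveries, i.e. copies arriving over back/cross edges at already-visited peers (for brevity, forwarding back to the sender is not suppressed, as a real Gnutella client would do).

```python
# Minimal sketch: TTL-bound flooding coverage and redundant-message count.
import networkx as nx

def flood(G, source, ttl):
    visited = {source}
    frontier = [source]
    messages = redundant = 0
    for _ in range(ttl):
        next_frontier = []
        for u in frontier:
            for v in G.neighbors(u):
                messages += 1
                if v in visited:
                    redundant += 1          # delivered over a back/cross edge
                else:
                    visited.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return len(visited), messages, redundant

G = nx.gnm_random_graph(10000, 40000, seed=7)    # sparse random overlay
for ttl in range(1, 6):
    cov, msgs, red = flood(G, source=0, ttl=ttl)
    print(f"TTL={ttl}: coverage={cov}, messages={msgs}, redundant={red} ({red / msgs:.0%})")
```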
Technological advances such as high-speed Ethernet and ATM have provided a means for business organizations to employ high-performance networking. However, few studies have verified these architectures' typical performance in a business environment. This study analyzed the network performance of high-speed Ethernet and ATM when configured as LAN backbones. The results revealed that ATM exhibited performance superior to high-speed Ethernet, but when adjustments were made for differences in line speed, the throughput was similar. In addition to analyzing empirical data about each technology's performance, the advantages and limitations of using ATM in a business network are discussed.
We present the equivalent path (single link) method for estimating blocking probabilities in Wavelength Division Multiplexing (WDM) networks without wavelength converters, assuming fixed routing with First-Fit wavelength assignment. Our approach views the WDM network as a set of layers (colors) in which traffic blocked in one layer overflows to another layer. We then replace an end-to-end path with an equivalent single-link system. The first two moments of both the end-to-end traffic and the background traffic are calculated, and the Bernoulli–Poisson–Pascal (BPP) moment-matching function is used to compute the single-link blocking probability. The results presented indicate the accuracy of our method.
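For reference, the sketch below shows only the classical Erlang-B building block for a single link with W wavelengths under Poisson traffic, plus a crude link-independence estimate for a path; the paper's BPP moment matching of overflow traffic is not reproduced here.

```python
# Minimal sketch, assuming Poisson offered traffic and independent links.
def erlang_b(offered_load, servers):
    """Blocking probability of an M/M/W/W link, via the stable recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def path_blocking(loads, wavelengths):
    """Crude end-to-end estimate: a path is blocked if any hop is blocked."""
    p_free = 1.0
    for a in loads:
        p_free *= 1.0 - erlang_b(a, wavelengths)
    return 1.0 - p_free

print(erlang_b(offered_load=8.0, servers=16))                 # single-link blocking
print(path_blocking(loads=[8.0, 6.0, 7.5], wavelengths=16))   # three-hop path estimate
```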