This paper introduces a new framework for the design of parallel algorithms that may be executed on multiprogrammed architectures with variable resources. These features, in combination with an implied ability to handle fault tolerance, facilitate environments such as the GRID. A new model, BSPGRID, is presented, which exploits the bulk synchronous paradigm to allow existing algorithms to be easily adapted and reused. It models computation, communication, external memory accesses (I/O) and synchronization. By combining the communication and I/O operations, BSPGRID allows the straightforward design of portable algorithms while permitting them to execute on non-dedicated hardware and/or changing resources, as is typical for machines in a GRID. Even with this degree of dynamicity, the model still offers a simple and tractable cost model. Each program runs in its own virtual BSPGRID machine, and its emulation on a real computer is demonstrated to show the practicality of the framework. A dense matrix multiplication algorithm and its emulation in a multiprogrammed environment are given as an example.
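The abstract does not give BSPGRID's cost formula, but models in the bulk synchronous family typically charge each superstep as T = w + g·h + L, where w is the maximum local computation, h the maximum number of words any processor communicates, g the per-word communication cost, and L the synchronization cost. A minimal sketch of such accounting follows; folding I/O words into the communication term h is an illustrative assumption based on the abstract's statement that BSPGRID combines communication and I/O, not the paper's actual definition:

```python
# Illustrative BSP-style superstep cost: T = w + g*h + L.
# Assumption: I/O words are charged at the same rate g as
# communicated words, mirroring BSPGRID's combined treatment.

def superstep_cost(work, comm_words, io_words, g, L):
    """Cost of one combined superstep across all processors."""
    w = max(work)  # the slowest processor's local computation
    # h-relation: the busiest processor's communication + I/O volume
    h = max(c + i for c, i in zip(comm_words, io_words))
    return w + g * h + L

# Example with 4 virtual processors:
work = [100, 120, 90, 110]   # local computation per processor
comm = [10, 8, 12, 9]        # words communicated per processor
io   = [5, 5, 5, 5]          # I/O words per processor
print(superstep_cost(work, comm, io, g=4, L=50))  # 120 + 4*17 + 50 = 238
```

The total cost of an algorithm is then the sum of its superstep costs, which is what makes the model tractable even when resources vary between supersteps.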
Today many cosmological simulations exist, spread throughout the world, and it is difficult for an astronomer to find the one of interest in order to compare it with observational data or with the results of other types of simulations. The aim of this work is to follow the Virtual Observatory idea of simplifying astronomers' work and to begin unifying the world of simulations, just as IVOA has done for observational data. The Italian Theoretical Virtual Observatory (ITVO) database (DB) was born with the idea of designing a DB structure for cosmological simulations general enough to ingest not only the metadata of one specific simulation but of many different types (N-body, N-body+SPH, Mesh, etc.). The goal is to provide a web service to the astrophysical community allowing a single query to search many kinds of simulation archives, as well as data obtained through many levels of post-processing, and to use appropriate IVOA tools to analyze the data at every level. We therefore present the first DB structure in which two types of metadata coexist, one coming from an N-body+SPH simulation and another from an N-body+Mesh simulation, along with some examples of tools that permit searching and immediately comparing theoretical and observational data. This project is being developed as part of the VO-Tech/DS4, ITVO and VObs.it efforts.
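The abstract does not describe ITVO's actual schema; the sketch below is a hypothetical illustration of the underlying design idea, namely that metadata from different simulation types (N-body+SPH, N-body+Mesh, etc.) can coexist in one searchable structure by splitting common fields from type-specific parameters. All table and column names are invented for illustration:

```python
import sqlite3

# Hypothetical schema (not ITVO's real DB design): a generic
# `simulation` table holds metadata common to all simulation
# types, while type-specific parameters go in a key-value table,
# so heterogeneous simulations remain queryable with one query.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE simulation (
    id        INTEGER PRIMARY KEY,
    name      TEXT,
    sim_type  TEXT,      -- e.g. 'N-body+SPH', 'N-body+Mesh'
    box_size  REAL,      -- illustrative unit: Mpc/h
    redshift  REAL
);
CREATE TABLE sim_param (          -- type-specific metadata
    sim_id  INTEGER REFERENCES simulation(id),
    key     TEXT,
    value   TEXT
);
""")
con.execute("INSERT INTO simulation VALUES (1,'runA','N-body+SPH',100.0,0.0)")
con.execute("INSERT INTO simulation VALUES (2,'runB','N-body+Mesh',50.0,1.0)")
con.execute("INSERT INTO sim_param VALUES (1,'sph_particles','512^3')")
con.execute("INSERT INTO sim_param VALUES (2,'grid_cells','1024^3')")

# A single query spans both simulation types:
rows = con.execute(
    "SELECT name, sim_type FROM simulation WHERE box_size >= 50 ORDER BY id"
).fetchall()
print(rows)  # [('runA', 'N-body+SPH'), ('runB', 'N-body+Mesh')]
```

The key-value side table is one common way to let each simulation type carry its own parameters without changing the shared schema, which matches the abstract's goal of ingesting many different simulation types in one DB.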
A Large Ion Collider Experiment (ALICE) at CERN is a general-purpose heavy-ion experiment designed to study the physics of strongly interacting matter and the quark-gluon plasma in high-energy nucleus-nucleus collisions. The unprecedented scale of the computing resources needed to store, reconstruct and analyze the data that will be taken by the experiment is a strong reason for moving ALICE computing toward a distributed, GRID-based approach. In this paper, the ALICE distributed computing environment (AliEn) and its application to physics data analysis, as seen from the user's point of view, is briefly discussed.