In recent years, intensive use of computing has been the main investigative strategy in several scientific research projects. Progress in computing technology has opened unprecedented opportunities for the systematic collection of experimental data and the associated analysis, which were considered impossible only a few years ago.
This paper focuses on the strategies in use: it reviews the components required for an effective solution that ensures the storage, long-term preservation, and worldwide distribution of the large quantities of data involved in a major scientific research project.
The paper also presents several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that must be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.
Recent advances in simulation optimization research and the explosive growth in computing power have made it possible to optimize complex stochastic systems that are otherwise intractable. In the first part of this paper, we classify simulation optimization techniques into four categories based on how the search is conducted. We provide tutorial expositions on representative methods from each category, with a focus on recent developments, and compare the strengths and limitations of each category. In the second part of this paper, we review applications of simulation optimization in various contexts, with detailed discussions of health care, logistics, and manufacturing systems. Finally, we explore the potential of simulation optimization in the new era. Specifically, we discuss how simulation optimization can benefit from cloud computing and high-performance computing, its integration with big data analytics, and the value of simulation optimization in helping address challenges in the engineering design of complex systems.
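As a minimal illustration of the simulation-optimization setting this survey covers, the sketch below runs a plain random search over a noisy simulation, using sample averaging to estimate performance; the objective, noise model, and search space are hypothetical and do not come from the paper.

```python
# Illustrative sketch only: a generic simulation-optimization loop using random
# search with sample averaging. Objective, noise, and bounds are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def simulate(x, n_reps=30):
    """Stochastic simulation: noisy evaluation of an unknown performance measure."""
    true_cost = (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2   # hypothetical ground truth
    noise = rng.normal(0.0, 1.0, size=n_reps)            # simulation noise
    return float(np.mean(true_cost + noise))             # sample-average estimate

def random_search(bounds, budget=200):
    """Pick the candidate design with the best estimated mean performance."""
    best_x, best_est = None, np.inf
    for _ in range(budget):
        x = rng.uniform(bounds[:, 0], bounds[:, 1])
        est = simulate(x)
        if est < best_est:
            best_x, best_est = x, est
    return best_x, best_est

bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
x_star, cost = random_search(bounds)
print(f"best design {x_star}, estimated cost {cost:.2f}")
```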
The ATLAS experiment at CERN relies on a Worldwide Distributed Computing Grid infrastructure to support its physics program at the Large Hadron Collider. ATLAS has integrated cloud computing resources to complement its Grid infrastructure and has conducted an R&D program on Google Cloud Platform. These initiatives leverage key features of commercial cloud providers: lightweight configuration and operation, elasticity, and availability of diverse infrastructures. This paper examines the seamless integration of cloud computing services as a conventional Grid site within the ATLAS workflow management and data management systems, while also offering new setups for interactive, parallel analysis. It underscores pivotal results that enhance the on-site computing model and outlines several R&D projects that have benefited from large-scale, elastic resource provisioning models. Furthermore, this study discusses the impact of cloud-enabled R&D projects in three domains: accelerators and AI/ML, ARM CPUs, and columnar data analysis techniques.
Cloud computing offers opportunities to increase productivity and reduce costs. Quickly adapting to changing needs is key to maintaining cloud applications. In traditional development, coordination is implemented within the computational code. Although change impact analyses have been studied, adjusting the implementation is time-consuming and error-prone when the coordination strategy changes. Exogenous coordination separates the coordination from the computational code to cope with this problem. This separation improves the reusability of components. Additionally, other applications with similar interaction patterns can reuse the coordination specification. The main contribution of this paper is a methodology to develop and maintain cloud applications following the exogenous approach. To illustrate the idea, we introduce a new framework named OCCIwareBIP, which integrates JavaBIP, a framework for the exogenous coordination of concurrent Java components, into OCCIware, a framework for designing cloud applications. We also leverage the coordination model to verify the deadlock-freedom of the cloud application. Finally, we present an application that shows the ability of our approach to guarantee safety and to provide the benefits of modularization in developing concurrent cloud applications.
In task scheduling, the large search space of resources means that allocating appropriate resources to tasks takes a long time, which increases the execution time and execution cost of the algorithms. For this reason, this paper proposes a cloud computing task fuzzy scheduling strategy based on a hybrid search algorithm and differential evolution, which incorporates a classification based on the normal distribution and a variety of mutation strategies on top of the standard differential evolution algorithm. In the mutation strategy, the individual difference vector assigns a priority to each task, and tasks are arranged according to resource allocation rules. The strategy addresses the slow convergence of the standard differential evolution algorithm and its tendency to fall into local optima, and can effectively solve the cloud computing task scheduling problem. A cloud computing task scheduling algorithm with time and cost constraints is designed and tested in a simulation environment. The experimental results show that the algorithm not only shortens task processing time and reduces execution cost, but also fully satisfies users' actual quality-of-service requirements by adjusting the weights of the time and cost factors.
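The abstract does not give the algorithm's details, so the following is only a rough sketch of differential-evolution-based task-to-VM scheduling under a weighted time-cost objective; task lengths, VM speeds and prices, and the weights are hypothetical, and the paper's fuzzy logic and normal-distribution classification are not reproduced.

```python
# Minimal sketch: each individual is a real-valued vector whose genes are decoded
# into a task-to-VM assignment; DE/rand/1 mutation and greedy selection search
# for a low weighted combination of makespan and cost.
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_vms = 20, 4
task_len = rng.uniform(10, 100, n_tasks)      # hypothetical task lengths
vm_speed = np.array([1.0, 1.5, 2.0, 2.5])     # hypothetical VM speeds
vm_price = np.array([0.1, 0.2, 0.3, 0.4])     # hypothetical cost per time unit
w_time, w_cost = 0.6, 0.4                     # user-chosen weights

def decode(ind):
    """Map each task's continuous gene to a VM index."""
    return (np.clip(ind, 0, 0.999) * n_vms).astype(int)

def fitness(ind):
    vm = decode(ind)
    exec_t = task_len / vm_speed[vm]
    makespan = max(np.bincount(vm, weights=exec_t, minlength=n_vms))
    cost = float(np.sum(exec_t * vm_price[vm]))
    return w_time * makespan + w_cost * cost

def de(pop_size=30, gens=100, F=0.5, CR=0.9):
    pop = rng.random((pop_size, n_tasks))
    fit = np.array([fitness(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0, 1)          # DE/rand/1 mutation
            cross = rng.random(n_tasks) < CR
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial)
            if f_trial < fit[i]:                             # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return decode(pop[best]), fit[best]

assignment, score = de()
print("task-to-VM assignment:", assignment, "objective:", round(score, 2))
```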
Cloud computing is widely used today in various applications and services; it refers to the utilization of computational resources as per user requirements through the Internet. However, despite the numerous advancements in cloud services and applications, there are various security threats associated with it, mainly due to data outsourcing to third-party-controlled data centers. In this context, this paper introduces a new model called the Attribute-based Advanced Security Model (AASM) for reliable data sharing in the cloud. This model combines the Advanced Encryption Technique (AET) with an Attribute-Based Signature (ABS) to ensure secure data sharing in the cloud while efficiently controlling data access. The model enables encrypted access control on the data owner's side with advanced access privileges, ensuring user privacy through an anonymous authentication model using ABS. By implementing these measures, the model provides security for cloud providers and users while safeguarding against malicious attacks. The effectiveness of the proposed model is evaluated based on factors such as time complexity, security, and accountability.
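The AET and ABS constructions in the paper are not specified here, so the following sketch only illustrates the general pattern of combining symmetric encryption with a signature for authenticated cloud sharing, using AES-GCM and Ed25519 from the `cryptography` package as stand-ins; the attribute policy string is a mock placeholder.

```python
# Conceptual stand-in only: AES-GCM for encryption and Ed25519 for signer
# authentication; a real attribute-based signature scheme would differ.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Data owner side: encrypt the record and bind an (illustrative) access policy.
record = b"patient-record-123"
policy = b"role:doctor AND dept:cardiology"       # hypothetical attribute policy
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, record, policy)  # policy as associated data

# Authentication stand-in: sign the ciphertext and verify with the public key;
# a real ABS scheme would hide the signer's identity behind its attributes.
signer = Ed25519PrivateKey.generate()
signature = signer.sign(ciphertext)
signer.public_key().verify(signature, ciphertext)        # raises on failure

# Authorized user side: anyone granted the key per the policy can decrypt.
assert AESGCM(key).decrypt(nonce, ciphertext, policy) == record
print("ciphertext verified and decrypted")
```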
Cloud computing forms a mainstream in the emerging field of Internet of Things (IoT) networks, providing high-capacity storage and access to data whenever needed. Cloud architecture is highly vulnerable to various anomalies due to its centralised processing, which can ruin an organisation's reputation or cause a loss of trust. Preventing anomalies in cloud architecture extends the lifetime of the system and increases privacy preservation. In this research, blockchain technology is adopted to facilitate secure communication in the network, and anomaly detection is performed using the proposed Hexabullus-optimisation-based fuzzy classifier driven by entropy-based rules. The significance of this research lies in the calculation of entropy and in anomaly detection using optimal rules generated by the proposed hexabullus optimisation. The experimental results show that the proposed blockchain-enabled cloud architecture prevents attacks more efficiently. The proposed hexabullus-optimisation-based anomaly detection, evaluated against existing methods, attained an improved accuracy of 88%, precision of 88%, and recall of 90%, making it highly efficient in providing secure communication of data in the cloud.
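As a rough illustration of the entropy-based rule idea, the sketch below computes Shannon entropy over feature windows and flags high-entropy windows as anomalous; the Hexabullus optimisation and the fuzzy rule base are not reproduced, and the threshold and traffic features are hypothetical.

```python
# Illustrative sketch: entropy-based anomaly flagging on feature windows.
# A fixed, hand-picked threshold plays the role of the optimised fuzzy rules.
import numpy as np

rng = np.random.default_rng(2)

def shannon_entropy(values, bins=16):
    """Shannon entropy (bits) of a window of observations."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

normal_window = rng.normal(0, 1, 500)          # hypothetical benign traffic feature
anomalous_window = rng.uniform(-10, 10, 500)   # hypothetical anomalous traffic

THRESHOLD = 3.7                                 # illustrative rule: high entropy => anomaly
for name, window in [("normal", normal_window), ("anomalous", anomalous_window)]:
    h = shannon_entropy(window)
    print(f"{name}: entropy={h:.2f}, anomaly={h > THRESHOLD}")
```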
Cloud computing is a rapidly advancing paradigm that enables users to access various services and resources anytime, anywhere. With this advancement, security has become a major concern for business organisations and individuals, and hence it is essential to ensure that services are provided with high data security. Numerous research efforts have focused on devising effective techniques to enhance data security. However, with increasing connectivity, security remains a major challenge. This paper devises a novel data protection scheme in the cloud using the Twofish encryption algorithm and a key generation scheme based on the Bald Eagle Pelican Optimization (BEPO) algorithm. The proposed Twofish+BEPO_KeyGen is implemented in several phases: initialization, registration, key generation, data encryption, authentication, validation and data sharing, and data decryption. Here, the Twofish algorithm is used to encrypt the data to be outsourced to the cloud, and the security key required for encryption is generated by the BEPO algorithm. The efficacy of the Twofish+BEPO_KeyGen approach is examined using metrics such as memory usage, validation time, normalized variance, and conditional privacy, for which it achieves values of 76.3 MB, 37.278 s, 1.665, and 0.926, respectively.
This paper describes a procedure for using third-party tools and applications while avoiding the development of complex communication software modules for data sharing. A common practice in robotics is the use of middlewares to interconnect different software applications, hardware components, or even complete systems. This allows code and tool reuse, minimizing development effort. In this way, applications developed for one middleware can be shared with others by establishing communication bridges between them. The most widespread procedure is the development of software modules that use the low-level communication resources that middlewares provide. This procedure has many advantages but one clear disadvantage: the complexity of development. The proposed procedure is based on the use of cloud technologies for data sharing without the development of middleware bridges. Different middlewares are interrelated through the development of a compatible robot model. This procedure has enabled the use of the ArmarX middleware tools and the application of the results obtained to the humanoid robot TEO, which uses the YARP middleware, in an easy and fast way.
Current healthcare applications commonly incorporate Internet of Things (IoT) and cloud computing concepts. IoT devices produce massive amounts of patient data in the healthcare industry. These data, stored in the cloud, are analyzed using mobile devices' built-in storage and processing power. The Internet of Medical Healthcare Things (IoMHT) integrates health monitoring components, including sensors and medical equipment, to remotely monitor patient records and thereby provide more intelligent and sophisticated healthcare services. In this research, we address chronic kidney disease (CKD), one of the deadliest illnesses with a high fatality rate worldwide, and aim to provide the best possible healthcare services to users of e-health and m-health applications by presenting an IoT- and cloud-based healthcare delivery system for the prediction and observation of CKD and its severity level. The suggested architecture gathers patient data from connected IoT devices and saves it in the cloud alongside real-time data, pertinent medical records collected from the UCI Machine Learning Repository, and relevant medical documents. We further use a Deep Neural Network (DNN) classifier to predict CKD and its severity. To boost the effectiveness of the DNN classifier, a Particle Swarm Optimization (PSO)-based feature selection technique is also applied. We compare the performance of the proposed model using different classification measures and different classifiers. A Quick Flower Pollination Algorithm (QFPA) and DNN-based IoT and cloud CKD diagnosis model is presented in this paper. The CKD diagnosis steps in the QFPA-DNN model involve data gathering, preparation, feature selection, and classification stages.
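The sketch below illustrates the general pipeline only: swarm-based binary feature selection wrapped around a small neural-network classifier. Synthetic data stands in for the UCI CKD dataset, scikit-learn's MLPClassifier stands in for the DNN, and a basic binary PSO stands in for the PSO/QFPA variants described in the paper.

```python
# Rough sketch: binary PSO selects a feature subset that maximizes the
# cross-validated accuracy of a small neural-network classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=400, n_features=24, n_informative=8,
                           random_state=0)       # stand-in for CKD records

def score(mask):
    """Cross-validated accuracy of the classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=200, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def binary_pso(n_particles=6, iters=5, w=0.7, c1=1.4, c2=1.4):
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) > 0.5).astype(float)
    vel = rng.normal(0, 1, (n_particles, d))
    pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
    g = pbest_val.argmax()
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(d), rng.random(d)
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = (rng.random(d) < 1 / (1 + np.exp(-vel[i]))).astype(float)
            val = score(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i].copy(), val
                if val > gbest_val:
                    gbest, gbest_val = pos[i].copy(), val
    return gbest, gbest_val

mask, acc = binary_pso()
print(f"selected {int(mask.sum())} features, CV accuracy {acc:.3f}")
```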
Cloud services are used to meet diverse computing needs such as cost, security, scalability, and availability. Rapid evolution in the distributed and cloud domains is common for the deployment of large and dynamic workflows. Resource and task mapping depends on the user's objectives, such as cost reduction or execution completion within a stipulated time, while maintaining certain quality-of-service requirements. Multiple virtual machine instances can be launched with different configurations, such as operating system, server type, and applications. Although workflow scheduling is an NP-hard problem, a variety of decision-making techniques are available for optimal resource allocation. In this research paper, different algorithms are studied and compared with evolutionary approaches. Workflow scheduling using a genetic algorithm is implemented and discussed. This paper aims to design a decision-making technique to optimize cloud resources. It is an adaptive scheduling approach that maximizes profit by reducing execution time. The implemented approach is useful to cloud service providers for maximizing profit and resource efficiency in their services.
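As a rough sketch of genetic-algorithm workflow scheduling of the kind described, the code below evolves task-to-VM assignments under a profit-style objective with a deadline penalty; task sizes, VM speeds and prices, the revenue figure, and the deadline are all hypothetical.

```python
# Minimal GA sketch: a chromosome maps each workflow task to a VM type, and
# fitness rewards provider profit (revenue minus VM cost) while penalizing
# deadline violations.
import numpy as np

rng = np.random.default_rng(4)
n_tasks, n_vms = 15, 3
task_size = rng.uniform(5, 50, n_tasks)       # hypothetical workloads
vm_speed = np.array([1.0, 2.0, 4.0])
vm_cost = np.array([0.05, 0.12, 0.30])        # cost per time unit
revenue, deadline = 40.0, 120.0               # hypothetical revenue and deadline

def fitness(chrom):
    exec_t = task_size / vm_speed[chrom]
    makespan = max(np.bincount(chrom, weights=exec_t, minlength=n_vms))
    cost = float(np.sum(exec_t * vm_cost[chrom]))
    penalty = 10.0 * max(0.0, makespan - deadline)   # deadline violation penalty
    return revenue - cost - penalty                  # provider profit proxy

def ga(pop_size=40, gens=80, p_mut=0.1):
    pop = rng.integers(0, n_vms, (pop_size, n_tasks))
    for _ in range(gens):
        fit = np.array([fitness(c) for c in pop])
        # binary tournament selection
        parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: fit[i])
                       for _ in range(pop_size)]]
        # one-point crossover on consecutive parent pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_tasks)
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
        # mutation: reassign random tasks to random VMs
        mut = rng.random(children.shape) < p_mut
        children[mut] = rng.integers(0, n_vms, mut.sum())
        pop = children
    fit = np.array([fitness(c) for c in pop])
    best = fit.argmax()
    return pop[best], fit[best]

schedule, profit = ga()
print("best schedule:", schedule, "profit proxy:", round(profit, 2))
```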
Cloud computing has been growing rapidly in recent years, although it still accounts for a small part of organizations' overall Information Technology (IT) spending. Both the private and public sectors are embracing cloud computing as it offers an innovative computing model, simplifying IT resource management and offering cost savings and flexible scaling. The question is no longer whether to adopt cloud computing, but what should be adopted and how. Transaction cost economics offers a rationale for adoption, and decision-making theory helps construct stages for adopting and operating cloud computing to provide effective and optimal IT solutions for organizations. This paper offers decision makers an overview of cloud computing, especially in utilizing the values offered and selecting the resources or operations that can be migrated to the cloud.
Cloud computing is the on-demand availability of computing resources. The current cloud computing environment uses different algorithms for its security; RC4 is one of them. However, due to the serial implementation of RC4 over cloud computing, the process of encryption and data transmission suffers from poor latency. This research addresses that issue by implementing a parallel RC4 algorithm in a cloud computing environment. The parallel RC4 model has been developed in a web-based architecture relevant to the cloud computing environment. The method processes data simultaneously in four parallel pipelines encrypted with the RC4 algorithm, increasing the efficiency of data encryption, decryption, and transmission over a cloud environment by using four pipelines at once. These four pipelines receive data and encrypt them using a key, and the four streams are then merged into a single cipher. The pipelines increase the speed of communication over the cloud and largely resolve the latency issue. Data sent from node devices to cloud servers is divided into four equal streams and then encrypted using RC4 separately but in parallel. After transmission, the data are first recombined and then decrypted using the RC4 algorithm. The sample data sets are evaluated on the basis of the start and end times of data encryption and decryption. Results show that the proposed method speeds up the whole process by 3.7% on average due to the parallel technique and reduces the time notably. Future work on this research involves strengthening the security of the RC4 algorithm, since it can be breached easily.
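The following sketch illustrates the four-pipeline idea only: the payload is split into four streams, each RC4-encrypted concurrently, then merged, and later split and decrypted per stream. The textbook RC4 KSA/PRGA is used; sharing one key across pipelines and using a thread pool rather than true process-level parallelism are simplifications of the described system.

```python
# Sketch of four-pipeline RC4: split, encrypt streams concurrently, merge,
# then split and decrypt on the receiving side (RC4 is symmetric).
from concurrent.futures import ThreadPoolExecutor

def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key-scheduling algorithm followed by the PRGA keystream XOR."""
    S = list(range(256))
    j = 0
    for i in range(256):                          # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                             # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def split4(data: bytes):
    """Divide the payload into four nearly equal streams."""
    n = -(-len(data) // 4)                        # ceiling division
    return [data[k:k + n] for k in range(0, 4 * n, n)]

key = b"demo-session-key"                         # hypothetical shared key
payload = b"sensor readings destined for the cloud " * 50

with ThreadPoolExecutor(max_workers=4) as pool:
    cipher_streams = list(pool.map(lambda s: rc4(key, s), split4(payload)))
ciphertext = b"".join(cipher_streams)             # merged cipher sent over the cloud

# Receiver side: split the merged cipher the same way and decrypt each stream.
recovered = b"".join(rc4(key, s) for s in split4(ciphertext))
assert recovered == payload
print(len(ciphertext), "cipher bytes transmitted")
```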
Clinical trials generate a large amount of data that has been underutilized due to obstacles that prevent data sharing, including risks to patient privacy, data misrepresentation, and invalid secondary analyses. To address these obstacles, we developed a novel data sharing method that ensures patient privacy while also protecting the interests of clinical trial investigators. Our flexible and robust approach involves two components: (1) an advanced cloud-based querying language that allows users to test hypotheses without direct access to the real clinical trial data and (2) corresponding synthetic data for the query of interest that allows for exploratory research and model development. Both components can be modified by the clinical trial investigator depending on factors such as the type of trial or the number of patients enrolled. To test the effectiveness of our system, we first implement a simple and robust permutation-based synthetic data generator. We then use the synthetic data generator coupled with our querying language to identify significant relationships among variables in a realistic clinical trial dataset.
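One simple variant of a permutation-based generator is sketched below: each column is shuffled independently, preserving marginal distributions while breaking record-level linkage. The clinical trial columns are hypothetical, and the paper's actual generator and querying language may differ substantially.

```python
# Minimal sketch of independent column permutation as a privacy-preserving
# synthetic data generator; marginals are kept, joint structure is discarded.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
trial = pd.DataFrame({
    "age": rng.integers(40, 80, 200),
    "treatment": rng.choice(["drug", "placebo"], 200),
    "systolic_bp": rng.normal(135, 15, 200).round(1),
})

def permute_synthetic(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Return a synthetic copy with every column permuted independently."""
    gen = np.random.default_rng(seed)
    return pd.DataFrame({c: gen.permutation(df[c].to_numpy()) for c in df.columns})

synthetic = permute_synthetic(trial)
# Column means match; joint relationships (e.g., treatment effect) are deliberately lost.
print(round(trial["systolic_bp"].mean(), 1), round(synthetic["systolic_bp"].mean(), 1))
```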
Innovating organizations are adopting and quickly implementing the open innovation (OI) approach for developing new products and services. There is a growing need to improve the process, to achieve faster and better outcomes. The integration of disruptive digital technologies (DTs) into the innovation processes bolsters the development of new business models, innovation processes, and ecosystems. However, there is limited information regarding the management of a digitalized OI process, and the role of different DTs across the stages of the innovation process. A conceptualized framework has been established, which integrates different DTs, and maps them across the stages of the OI process. The framework identifies the links between different dimensions and attributes of different DTs, such as big data, the IoT, cloud computing, artificial intelligence, blockchain and social media, and the stages of the OI process. The framework also provides a consolidated approach for understanding the benefits and challenges of different DTs across the OI process.
As biomedical research data grow, researchers need reliable and scalable solutions for storage and compute. There is also a need to build systems that encourage and support collaboration and data sharing, resulting in greater reproducibility. This has led many researchers and organizations to use cloud computing [1]. The cloud not only enables scalable, on-demand resources for storage and compute, but also supports collaboration and continuity during virtual work, and can provide superior security and compliance features. Moving to or adding cloud resources, however, is not trivial or without cost, and may not be the best choice in every scenario. The goal of this workshop is to explore the benefits of using the cloud in biomedical and computational research, and considerations (pros and cons) for a range of scenarios, including individual researchers, collaborative research teams, consortia research programs, and large biomedical research agencies and organizations.
Recent advancements in Artificial Intelligence (AI) and data center infrastructure have brought the global cloud computing market to the forefront of conversations about sustainability and energy use. Current policy and infrastructure for data centers prioritize economic gain and resource extraction, inherently unsustainable models that generate massive amounts of wasted energy and heat. Our team proposes the formation of policy around earth-friendly computation practices rooted in Indigenous models of circular systems of sustainability. By looking to alternative systems of sustainability rooted in Indigenous values of aloha ‘āina, or love for the land, we find examples of traditional ecological knowledge (TEK) that can be imagined alongside Solarpunk visions for a more sustainable future: one in which technology works with the environment, reusing electronic waste (e-waste) and improving data life cycles.