Nowadays, the Internet of Things (IoT) is reshaping numerous application domains. Amid the different communication protocols currently available (MQTT, CoAP, AMQP, DPWS, etc.) and the abundance of management platforms, the IoT domain has stumbled into vertical silos of proprietary systems that hinder interoperability. Within this intricate and fragmented ecosystem, an intelligent and scalable architecture promoting interoperability becomes imperative to maximize the potential of IoT. This paper introduces a system designed to enhance the convergence of IoT protocols (e.g., MQTT, CoAP, AMQP) with the Web of Things (WoT) paradigm. Our proposed gateway-based solution integrates two systems: Stack4Things (S4T) and the Data eXchange Mediator Synthesizer (DeXMS). The role of the latter is to adapt, at the gateway level, the IoT protocols and expose the resources and functionalities of IoT devices as RESTful APIs over HTTP. Meanwhile, the former (i.e., S4T), leveraging its Dynamic DNS system, ensures that these RESTful resources (exposed at the gateway) are accessible over the Web through publicly routable Uniform Resource Locators (URLs), even when the gateway is deployed behind networking middleboxes (e.g., NATs and firewalls).
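As a rough illustration of the gateway-level adaptation described above, the sketch below exposes a hypothetical device reading as a RESTful HTTP resource; it is not DeXMS or S4T code, and names such as read_device_temperature() and the /devices/... path are assumptions.

```python
# Minimal sketch (not the actual DeXMS/S4T code): a gateway-side HTTP bridge that
# exposes a hypothetical IoT device reading as a RESTful resource.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_device_temperature() -> float:
    # Placeholder for a protocol-specific call (e.g., a CoAP GET or an MQTT request/response).
    return 21.5

class GatewayBridge(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/devices/sensor-1/temperature":
            body = json.dumps({"value": read_device_temperature(), "unit": "C"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "Unknown resource")

if __name__ == "__main__":
    # In the described architecture, S4T would make this endpoint publicly reachable
    # (e.g., via its Dynamic DNS and tunnelling) even behind NATs and firewalls.
    HTTPServer(("0.0.0.0", 8080), GatewayBridge).serve_forever()
```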
The prevalence of distributed denial-of-service (DDoS) flooding attacks is one of the most serious risks to cloud computing security. The primary objective of such attacks is to exhaust the resources of the targeted system in order to make it unavailable to authorized users. Attackers typically conduct DDoS flooding attacks at the application and network levels. Detecting these attacks is difficult when the computing infrastructure is multi-mesh-geo distributed, includes multiple parallel services, and spans a large number of domains, and the situation becomes more complicated when many independent administrative users are using the services. The main objective of this research is to identify indicators that can be used to detect DDoS flooding attacks. Over the course of our study, we therefore established a composite metric that considers application, system, network, and infrastructure elements as possible indicators of a DDoS attack; our findings show that such attacks manifest through a combination of these variables. Simulated traffic, including the high traffic volumes produced by flooding attacks, is investigated in the cloud. We propose a composite metric-based intrusion detection system, ICMIDS, that uses K-Means clustering and the Genetic Algorithm (GA) to detect attempts to flood the cloud environment. ICMIDS employs a multi-threshold algorithmic strategy to identify malicious traffic on a cloud-based network; this strategy requires a comprehensive analysis of all factors, which is crucial for ensuring the continuity of cloud-based operations. The monitoring system develops, administers, and stores a profile database, denoted Profile DB, which records the composite metric for each virtual machine. The results of a series of tests are compared against the ISCX benchmark dataset and statistical baselines; they indicate that ICMIDS achieves a reasonably high detection rate and the lowest false alarm rate in the majority of the situations examined during the tests conducted to validate and verify its efficacy.
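The following sketch conveys the flavour of a composite-metric plus K-Means detection step; the feature set, weights, thresholds, and the omission of the GA stage are all assumptions for illustration rather than ICMIDS internals.

```python
# Illustrative sketch only: combine per-VM indicators into a composite metric and
# cluster the profiles with K-Means, roughly in the spirit of the described ICMIDS.
# The weights, thresholds, and feature names are assumptions, not the paper's values.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: application latency, CPU load, network packets/s, infrastructure I/O wait (normalized to [0, 1]).
profiles = rng.random((200, 4))
weights = np.array([0.25, 0.25, 0.3, 0.2])           # assumed weighting of the four indicator groups
composite = profiles @ weights                        # one composite score per VM profile

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
suspect_cluster = int(np.argmax([composite[labels == k].mean() for k in range(2)]))

# Multi-threshold style decision: flag VMs that are both in the "hot" cluster and above a score threshold.
flagged = np.where((labels == suspect_cluster) & (composite > 0.7))[0]
print(f"{len(flagged)} VM profiles flagged for possible flooding traffic")
```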
Cloud computing’s simulation and modeling capabilities are crucial for big data analysis in smart grid power; they are key to extracting practical insights, making the grid resilient, and improving energy management. Because of challenges with data scalability and real-time analytics, advanced methods are required to extract useful information from the massive, ever-changing datasets produced by smart grids. This research proposes Dynamic Resource Cloud-based Processing Analytics (DRC-PA), which integrates cloud-based processing and analytics with dynamic resource allocation algorithms. Computational resources must be able to adjust to changing grid circumstances, and DRC-PA ensures that big data analysis can scale accordingly. The DRC-PA method has several potential uses, including power grid optimization, anomaly detection, demand response, and predictive maintenance; it thus enables smart grids to proactively adjust to changing conditions, boosting resilience and sustainability in the energy ecosystem. A thorough simulation analysis using realistic smart grid scenarios confirms the usefulness of the DRC-PA approach, showing that it is more efficient than traditional methods in terms of accuracy, scalability, and real-time responsiveness. In addition to resolving existing issues, the suggested method reshapes contemporary energy systems by paving the way for innovations in grid optimization, decision support, and energy management.
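As a loose illustration of dynamic resource allocation of the kind the DRC-PA description implies, the sketch below scales a pool of analytics workers against the incoming smart-meter message rate; the thresholds, capacities, and scaling steps are assumed values, not the paper's algorithm.

```python
# A minimal, assumed sketch of threshold-based dynamic resource allocation: the number
# of analytics workers tracks the incoming smart-grid data rate.
def scale_workers(current_workers, msgs_per_sec, capacity_per_worker=500,
                  low_util=0.3, high_util=0.8, min_w=1, max_w=64):
    utilization = msgs_per_sec / (current_workers * capacity_per_worker)
    if utilization > high_util:
        return min(max_w, current_workers * 2)       # scale out under load spikes
    if utilization < low_util and current_workers > min_w:
        return max(min_w, current_workers // 2)      # scale in when the grid is quiet
    return current_workers

workers = 4
for rate in [800, 2600, 7000, 3000, 400]:            # simulated smart-meter message rates
    workers = scale_workers(workers, rate)
    print(f"rate={rate:>5} msgs/s -> workers={workers}")
```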
The main idea of this framework is to overcome the drawbacks commonly associated with conventional cloud-based methods. Computation and storage resources in Internet of Things (IoT) networks are distributed closer to the network’s edge, reducing the latency of real-time data processing; by shortening data transfers, less bandwidth is consumed. Architectural problems, data safety, interoperability, and resource allocation remain challenges that prevent the successful implementation of these ideas. The proposed work is a cloud-enabled fog computing framework (C-FCF) for data center systems based on the IoT platform. It brings cloud computing to a new level of scalability by combining a scalable architecture, uniform communication interfaces, dynamic resource allocation algorithms, and a data-centered approach with strong security protocols. The wireless sensor network (WSN) dimension of this technology demonstrates the system’s versatility, as it can perform different tasks in industries such as smart cities, healthcare, transportation, and industrial automation services. These applications illustrate C-FCF’s capability to foster innovation, model effectiveness, and uncover integration potential within the IoT network. Virtual simulation analysis is used to validate C-FCF’s effectiveness in realistic scenarios; the simulations provide evidence of low latency, efficient resource utilization, and strong overall system performance, underlining the practicality of applying C-FCF in different IoT settings. This advanced computing architecture, which surpasses the limitations of conventional technology and covers many different use cases, has the potential to change the data processing and management paradigm in IoT-enabled settings.
In the new generation of power grids, the smart grid (SG) integrates sophisticated characteristics, including situation awareness, two-way communication, and distributed energy supplies. An integrated SG relies on various operational components, including devices with sensors, meters, and renewable power sources. Securely handling and storing the electricity data acquired from an SG raises several challenges: its digitization and growing number of interconnections make it vulnerable to cyberattacks, and transmitting this enormous amount of data directly to the cloud creates issues with latency, security, privacy, and excessive bandwidth consumption. Edge computing (EC) addresses this problem by moving data processing to the network’s periphery, close to the embedded devices. With improved data processing speeds, a more responsive and resilient grid can be achieved, responding instantly to changes in energy demand and supply. EC also reduces the volume of sensitive data sent to central servers, reducing potential security breaches: data can be better protected from intrusions by being analyzed locally, with only pertinent information transferred to the cloud. Blockchain is likewise an intriguing paradigm for the SG, with many benefits. The SG’s decentralization and the need for improved cybersecurity have prompted a lot of work on using blockchain technology; since data saved in a blockchain is immutable, it is crucial to find reliable ways to verify that data are accurate and comply with high quality standards before storing them in the blockchain. This paper presents a Cloud-Edge Fusion Blockchain model for the smart grid (CEFBM-SG), a practical solution for storing precise power data that enables the safe execution of adaptable transactions. Consequently, the SG’s dependability, resilience, and scalability are improved as the number of distributed energy resources (DERs) connected to it increases. CEFBM-SG utilizes edge computing to enhance responsiveness and dependability. Security analyses and performance evaluations demonstrate CEFBM-SG’s exceptional security and efficiency.
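The toy sketch below illustrates the "validate before the data become immutable" point with a hash-linked ledger and an edge-side quality gate; it is not CEFBM-SG itself, and the validation rules and field names are assumptions.

```python
# Toy sketch (not CEFBM-SG): an edge node validates meter readings against simple
# quality rules before appending them to a hash-linked ledger.
import hashlib, json, time

chain = [{"index": 0, "prev_hash": "0" * 64, "data": "genesis", "timestamp": 0.0}]

def block_hash(block) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def is_valid_reading(reading) -> bool:
    # Edge-side quality gate: plausible consumption range and required fields (assumed rules).
    return 0.0 <= reading.get("kwh", -1) <= 100.0 and "meter_id" in reading

def append_reading(reading) -> bool:
    if not is_valid_reading(reading):
        return False                                   # rejected before reaching the immutable ledger
    block = {"index": len(chain), "prev_hash": block_hash(chain[-1]),
             "data": reading, "timestamp": time.time()}
    chain.append(block)
    return True

print(append_reading({"meter_id": "DER-7", "kwh": 3.2}))    # True: stored
print(append_reading({"meter_id": "DER-7", "kwh": -50}))    # False: fails the quality gate
```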
Data redundancy consumes huge amounts of storage space when setting up or employing cloud and fog storage. Existing solutions primarily target static environments and must be revised for the dynamic nature of the cloud. Data deduplication solutions help minimize and control this issue by eradicating duplicate data from cloud storage systems. Since it can improve both storage economy and security, data deduplication (DD) over encrypted data is a crucial problem in computing and storage systems. In this research, a novel approach to building secure deduplication systems across cloud and fog environments is developed, based on a modified cryptographic model for deduplication (MCDD) and convergent encryption (CE): each file is encrypted twice, once with MCDD and once with CE. The suggested approach focuses on the two most significant objectives of such systems: data redundancy must be minimized, and the data must be secured using a robust encryption method. The approach is ideally suited for tasks such as a user uploading new data to cloud or fog storage, and it eliminates data redundancy by detecting duplicates at the block level. The testing results indicate that the recommended methodology surpasses several state-of-the-art techniques in terms of computational efficiency and security.
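A minimal sketch of block-level convergent encryption is shown below to illustrate how identical blocks deduplicate after encryption; the MCDD layer is not reproduced, and the 4 KiB block size and in-memory store are assumptions. It relies on the 'cryptography' package.

```python
# Sketch of block-level convergent encryption (CE) for deduplication.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BLOCK_SIZE = 4096
store = {}  # fingerprint -> ciphertext (stands in for cloud/fog block storage)

def ce_encrypt_block(block: bytes) -> tuple[str, bytes]:
    key = hashlib.sha256(block).digest()                    # convergent key derived from content
    nonce = hashlib.sha256(b"nonce" + block).digest()[:12]  # deterministic nonce so equal blocks collide
    ciphertext = AESGCM(key).encrypt(nonce, block, None)
    return hashlib.sha256(ciphertext).hexdigest(), ciphertext

def upload(data: bytes) -> int:
    new_blocks = 0
    for i in range(0, len(data), BLOCK_SIZE):
        fingerprint, ciphertext = ce_encrypt_block(data[i:i + BLOCK_SIZE])
        if fingerprint not in store:      # duplicate blocks are detected and stored only once
            store[fingerprint] = ciphertext
            new_blocks += 1
    return new_blocks

print(upload(b"A" * 8192), "new blocks")  # 1: the two identical blocks deduplicate
```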
Ciphertext-policy attribute-based encryption (CP-ABE) extends identity-based encryption by taking a set of attributes as a user’s public key, which enables scalable access control over outsourced data in cloud storage services. However, a decryption key corresponding to an attribute set may be owned by multiple users, and malicious users may therefore be willing to share their decryption keys for profit. In addition, the authority that issues decryption keys in a CP-ABE system is able to generate an arbitrary decryption key for any user, including unauthorized ones. Key abuse by both malicious users and the authority has been regarded as one of the major obstacles to deploying CP-ABE systems in real-world commercial applications. In this paper, we address these two kinds of key abuse in CP-ABE systems and propose two accountable CP-ABE schemes supporting any LSSS-realizable access structure. The two proposed schemes allow any third party (with the help of authorities, if necessary) to publicly verify the identity associated with an exposed decryption key, and allow an auditor to publicly determine whether a malicious user or the authorities should be held responsible for an exposed decryption key, in such a way that the key abuser cannot deny it. Finally, we prove that the two schemes achieve publicly verifiable traceability and accountability.
In recent years, intensive use of computing has been the main strategy of investigation in several scientific research projects. Progress in computing technology has opened unprecedented opportunities for the systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago.
This paper focuses on the strategies currently in use: it reviews the various components that are necessary for an effective solution ensuring the storage, long-term preservation, and worldwide distribution of the large quantities of data required in a large scientific research project.
The paper also discusses several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.
In this paper, we consider optimal scheduling algorithms for scientific workflows with two typical structures, fork&join and tree, on a set of provisioned (virtual) machines under budget and deadline constraints in cloud computing. First, given a total budget B, by leveraging a bi-step dynamic programming technique, we propose optimal algorithms in pseudo-polynomial time for both workflow structures with minimum scheduling length as the goal. Our algorithms are efficient if the total budget B is polynomially bounded by the number of jobs in the respective workflows, which is usually the case in practice. Second, we consider the dual of this optimization problem: minimizing the cost when the deadline D of the computation is fixed. We transform this problem into the standard multiple-choice knapsack problem via a parallel transformation.
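The snippet below is a much-simplified pseudo-polynomial DP for a single fork&join stage (pick one machine option per parallel job, minimize the stage makespan under a total budget); it only illustrates the style of DP described above, not the paper's bi-step algorithm, and the option values are invented.

```python
# Minimal budget-constrained DP sketch for one fork&join stage: each parallel job picks
# one machine option (cost, time); minimize the stage makespan subject to a budget B.
import math

def min_makespan(jobs, budget):
    # jobs: list of option lists, each option a (cost, time) pair for one VM type.
    dp = [0.0] + [math.inf] * budget          # dp[b] = best makespan with exactly b budget spent
    for options in jobs:
        new = [math.inf] * (budget + 1)
        for b in range(budget + 1):
            for cost, time in options:
                if cost <= b and dp[b - cost] < math.inf:
                    new[b] = min(new[b], max(dp[b - cost], time))
        dp = new
    return min(dp)                            # best makespan over all total costs <= budget

jobs = [[(3, 10), (5, 6), (8, 4)],            # job 1: cheaper machines run longer
        [(2, 12), (4, 7), (7, 5)],
        [(1, 15), (6, 6)]]
print(min_makespan(jobs, budget=12))          # makespan achievable within total budget 12
```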
Building a high-quality cloud computing framework is a challenge for researchers in the present scenario, where on-demand services are required. The non-functional properties of a service are referred to as its Quality of Service (QoS), and real-world usage experience is generally required to obtain QoS values. Many organizations, such as Amazon, HP, and IBM, offer various cloud services to customers, but no technique is available to measure real-world usage and estimate a ranking of the cloud services. From the customer’s side, it is very difficult to choose the right cloud service provider (SP) that fulfills all of their requirements. To reduce this confusion in selecting the right CSP, this paper proposes QoS ranking prediction methods called CloudRank1, CloudRank2, and CloudRank3. Various experiments are conducted on real-world QoS data using Amazon EC2 services, yielding sound results and solutions.
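As an illustration of rank-oriented QoS prediction in the general spirit of CloudRank-style methods (not the paper's exact algorithms), the sketch below weights neighbours by Kendall rank correlation on co-observed services and predicts the target user's missing response times.

```python
# Illustrative rank-oriented QoS prediction sketch; the data matrix is made up.
import numpy as np
from scipy.stats import kendalltau

# Response times (s) of 5 services observed by 4 users; np.nan marks unobserved entries.
qos = np.array([
    [0.3, 0.9, 0.4, 1.2, 0.5],
    [0.2, 1.0, 0.5, 1.1, 0.4],
    [0.8, 0.3, 1.0, 0.2, 0.9],
    [0.3, 0.8, np.nan, 1.3, np.nan],   # target user: services 2 and 4 unobserved
])
target = 3
observed = ~np.isnan(qos[target])

def similarity(u):
    tau, _ = kendalltau(qos[u][observed], qos[target][observed])
    return max(tau, 0.0)               # ignore negatively correlated users

weights = np.array([similarity(u) for u in range(len(qos)) if u != target])
others = np.array([qos[u] for u in range(len(qos)) if u != target])
predicted = qos[target].copy()
predicted[~observed] = weights @ others[:, ~observed] / weights.sum()

print("ranking (best first):", np.argsort(predicted))   # rank services by predicted response time
```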
This paper introduces an efficient and scalable cloud-based privacy-preserving model using a new optimal cryptography scheme for anomaly detection in large-scale sensor data. The proposed model maintains a better trade-off between the reliability and scalability of cloud computing resources by detecting anomalies directly from encrypted data. Conventional data analysis methods rely on complex and large numerical computations for anomaly detection. Moreover, a symmetric scheme that uses a single key to encrypt and decrypt the data incurs large computational complexity, because multiple users can access the original data simultaneously with the same shared secret key. Hence, a classical public key encryption technique, RSA, is adopted to perform encryption and decryption with different key pairs. Furthermore, the random generation of public keys in RSA is controlled in the proposed model by optimizing the public key with a new hybrid local pollination-based grey wolf optimizer (LPGWO) algorithm. For convenience, a single private server handling the organization’s data within a collaborative public cloud data center is allowed to decrypt the secure sensor data using the LPGWO-based RSA optimal cryptographic scheme. The data encrypted with this optimal cryptographic scheme are then clustered in the collaborative public servers of the cloud platform using the Neutrosophic c-Means (NCM) clustering algorithm, which mainly handles data partitioning and the classification of anomalies. Experimental validation was conducted using four datasets of publicly available sensor data from the Intel laboratory. The experimental outcomes prove the efficiency of the proposed framework in providing data privacy with high anomaly detection accuracy.
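The sketch below shows RSA encryption and decryption of a sensor reading with the 'cryptography' package; the LPGWO optimization is reduced to a placeholder fitness-based selection among candidate key pairs, which is an assumption for illustration only.

```python
# Illustrative sketch only: RSA encryption/decryption of a sensor reading. The LPGWO
# key-optimization step is replaced by a stand-in fitness function over candidate keys.
import time
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def fitness(private_key) -> float:
    # Assumed stand-in for the LPGWO objective (e.g., balancing key quality and encryption cost).
    start = time.perf_counter()
    private_key.public_key().encrypt(b"probe", oaep)
    return time.perf_counter() - start

candidates = [rsa.generate_private_key(public_exponent=65537, key_size=2048) for _ in range(3)]
best = min(candidates, key=fitness)                      # pick the "fittest" key pair

ciphertext = best.public_key().encrypt(b"temp=23.4C node=17", oaep)
print(best.decrypt(ciphertext, oaep) == b"temp=23.4C node=17")   # True
```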
Cloud computing is a rapidly evolving computing technology. The cloud is a type of distributed computing system that provides scalable computational resources on demand, including storage, processing power, and applications as a service via the Internet. With the assistance of virtualization, cloud computing allows for transparent data and service sharing across cloud users, as well as access to thousands of machines in a single event. Virtual machine (VM) allocation is a difficult task in virtualization and an important aspect of VM migration. This process discovers the optimum way to place VMs on physical machines (PMs), with clear implications for resource usage, energy efficiency, and application performance, among other things. Hence, an efficient VM placement technique is required. This paper presents a VM allocation technique based on the elephant herd optimization scheme. The proposed method is evaluated using real-time workload traces, and the empirical results show that it reduces energy consumption and maximizes resource utilization compared to existing methods.
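Below is a simplified, population-based placement search loosely following the clan-update and separation ideas of elephant herd optimization (EHO); it is not the paper's exact algorithm, and the VM demands, host capacity, and fitness weighting are assumptions.

```python
# Simplified EHO-style VM placement sketch.
# Fitness = number of active hosts (energy proxy) + penalty for CPU overcommitment.
import random

VM_CPU = [2, 4, 1, 8, 2, 4, 1, 2]       # assumed VM CPU demands
HOST_CAP, N_HOSTS = 16, 4

def fitness(placement):
    load = [0] * N_HOSTS
    for vm, host in enumerate(placement):
        load[host] += VM_CPU[vm]
    active = sum(1 for l in load if l > 0)
    overload = sum(max(0, l - HOST_CAP) for l in load)
    return active + 10 * overload        # heavily penalize infeasible placements

def clan_update(member, matriarch, rate=0.5):
    # Move a clan member toward its matriarch by copying a fraction of its assignments.
    return [m if random.random() < rate else c for c, m in zip(member, matriarch)]

random.seed(1)
clans = [[[random.randrange(N_HOSTS) for _ in VM_CPU] for _ in range(6)] for _ in range(3)]
for _ in range(100):
    for clan in clans:
        clan.sort(key=fitness)
        matriarch = clan[0]
        clan[1:-1] = [clan_update(m, matriarch) for m in clan[1:-1]]
        clan[-1] = [random.randrange(N_HOSTS) for _ in VM_CPU]   # "separation": replace the worst

best = min((ind for clan in clans for ind in clan), key=fitness)
print("best placement:", best, "fitness:", fitness(best))
```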
In recent years, cloud computing technologies have developed rapidly to provide suitable on-demand network access all over the world. A cloud service provider offers numerous types of cloud services to users. The most significant issue, however, is how to attain optimal virtual machine (VM) allocation for the user and design an efficient big data storage platform that satisfies the requirements of both the cloud service provider and the user. Therefore, this paper presents two novel strategies for optimizing VM resource allocation and cloud storage. An optimized cloud cluster storage service is introduced using a binarization based on modified fuzzy c-means clustering (BMFCM) algorithm to overcome the negative effects of the repetitive nature of big data traffic. The BMFCM algorithm can be implemented transparently and also addresses problems associated with massive data storage. VM selection is optimized using a hybrid COOT-reverse cognitive fruit fly (RCFF) optimization algorithm, whose main aim is to improve the handling of massive big data traffic and storage locality. CPU utilization, VM power, memory dimension, and network bandwidth are taken as the fitness function of the hybrid COOT-RCFF algorithm. When implemented in CloudSim and Hadoop, the proposed methodology offers improvements in completion time, overall energy consumption, makespan, user-provider satisfaction, and load ratio. The results show that the proposed methodology improves execution time and data retrieval efficiency by up to 32% and 6.3%, respectively, over existing techniques.
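A compact fuzzy c-means with a final binarization step is sketched below to convey the general idea behind BMFCM of hardening memberships for cluster-based storage placement; the data, cluster count, and iteration budget are assumptions, not the paper's algorithm.

```python
# Compact, illustrative fuzzy c-means (FCM) with a binarization step.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centers = um @ X / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)
    return centers, u

# Toy "data blocks" described by two features (e.g., access frequency, size score).
X = np.vstack([np.random.default_rng(1).normal(loc, 0.1, (50, 2)) for loc in (0.2, 0.5, 0.8)])
centers, u = fuzzy_c_means(X)
binary_assignment = np.argmax(u, axis=0)                 # binarization: hard one-hot membership
print(np.bincount(binary_assignment))                    # blocks per storage cluster
```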
The future holds the possibility of hospitals remotely sharing medical images obtained through non-invasive systems with patients. The advent of the cloud and the storage and deployment of medical healthcare images in the cloud have increased the need for cryptographic techniques to protect them from unauthorized access and malicious attacks. The Digital Imaging and Communications in Medicine (DICOM) standard is widely compatible across medical imaging instruments globally, and the pixel data of DICOM images require strong privacy and security. A novel ECDH-based cryptographic approach is suggested to encrypt the original DICOM image as well as the ROI pixel data extracted from DICOM images. Experimental results prove that medical image encryption via ECDH is more robust, efficient, and faster than existing medical image encryption schemes.
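The following sketch derives a shared key via ECDH and encrypts stand-in pixel data with AES-GCM using the 'cryptography' package; it illustrates the general ECDH-based approach only, and the curve choice, key derivation, and ROI handling are assumptions rather than the paper's scheme.

```python
# Hedged sketch: ECDH key agreement plus AES-GCM encryption of DICOM pixel data.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

sender_key = ec.generate_private_key(ec.SECP256R1())      # e.g., the imaging source / hospital
receiver_key = ec.generate_private_key(ec.SECP256R1())    # e.g., the cloud consumer / clinician

def derive_aes_key(private_key, peer_public_key) -> bytes:
    shared = private_key.exchange(ec.ECDH(), peer_public_key)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"dicom-pixel-data").derive(shared)

pixel_data = os.urandom(512 * 512 * 2)                    # stand-in for a 16-bit 512x512 DICOM frame
nonce = os.urandom(12)
ciphertext = AESGCM(derive_aes_key(sender_key, receiver_key.public_key())).encrypt(nonce, pixel_data, None)

# The receiver derives the same key from its private key and the sender's public key.
recovered = AESGCM(derive_aes_key(receiver_key, sender_key.public_key())).decrypt(nonce, ciphertext, None)
print(recovered == pixel_data)                            # True
```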
Cloud computing (CC), which provides numerous benefits to customers, is a new revolution in information technology. These benefits include on-demand access, support, scalability, and reduced-cost usage of computing resources. However, with prevailing techniques, system authentication remains challenging and leaves the system vulnerable. Thus, this paper presents hashed access policy (AP)-based secure data transmission using Barrel Shift-centric Whirlpool Hashing-Secure Diffie Hellman ASCII Key-Elliptic Curve Cryptography (BSWH-SDHAK-ECC). The data owner (DO) first registers their information; the user then logs in and verifies their profile based on the registration. After successful verification, the user selects the data to upload to the Cloud Server (CS). The AP is created; the image attributes are then extracted, and a hash code is produced for the AP using the BSWH approach. Concurrently, the selected image is compressed using the Adaptive Binary Shift-based Huffman Encoding (ABSHE) technique and then encrypted using the SDHAK-ECC algorithm. Lastly, the created AP, along with the encrypted image, is uploaded to the CS. The data user sends a request to access and download the data, after which the AP is provided to the user by the data owner. The user then sends it to the CS, which checks it against the stored AP; when the two match, the encrypted data is downloaded and decrypted. Finally, the experimental outcomes reveal that the proposed model achieves a higher security value of 0.9970, showing the proposed framework’s efficient performance in contrast to prevailing techniques.
Amplifying Spatial Awareness via GIS — Tech which brings Healthcare Management, Preventative & Predictive Measures under the same Cloud
When it is not just about size, you gotta' be Smart, too!
Chew on It! How Singapore-based health informatics company MHC Asia Group crunches big-data to uncover your company's health
Digital tool when well-used, it is Passion
Carving the Digital Route to Wellness
Big Data, Bigger Disease Management and Current preparations to manage the Future Health of Singaporeans
A Conversation with Mr Arun Puri
Extreme Networks: Health Solutions
Big Data in Clinical Research Sector
SINGAPORE – A*STAR Scientists Reveal How Stem Cells Defend Against Viruses: New Insights to the Mechanisms bring Broad Implications to Stem Cell Therapy and Disease Diagnosis
SINGAPORE – Singtel - Singapore Cancer Society Race against Cancer 2015
SINGAPORE – NCCS and IMCB to Collaborate on Research for New Treatments to Benefit Cancer Patients
SINGAPORE – MerLion’s Finafloxacin Shown to be More Efficacious than Ciprofloxacin in the Treatment of Complicated Urinary Tract Infections
TAIWAN AND UNITED STATES – Professor Yuk-ling Yung Receives Gerard P. Kuiper Prize
SWEDEN – New Data Confirm Tresiba® U200 Delivers Significantly Lower Rates of Confirmed Hypoglycaemia versus Insulin Glargine U100
UNITED KINGDOM – Using Ultrasound to Clean Medical Instruments
THE NETHERLANDS & UNITED STATES – Philips and Dutch Radboud University Medical Centre Introduce First Diabetes Prototype App with Integrated Online Community to Empower Patients and Enhance Continuity of Care
UNITED STATES – New Clinical Architecture Content Cloud Establishes Reliable Single Source for Terminology Updates, Makes It Easier to Keep Healthcare Systems Up-to-Date
UNITED STATES – Genomic Analysis for All Cancer Patients
UNITED STATES – Birds That Eat at Feeders Are More Likely to Get Sick, Spread Disease, International Research Team Says
From Home to Hospital: Digitisation of Healthcare.
Microsoft with RingMD, Oneview Healthcare, Vital Images, Aruba, and Clinic to Cloud: The Ecosystem of Healthcare Solutions Providers in Asia.
Data Helps in Improving Nursing Practice, Making Better Decisions.
Launch of Asian Branch for QuintilesIMS Institute.
In recent years, many studies have utilized DNA methylome data to answer fundamental biological questions. Bisulfite sequencing (BS-seq) has enabled measurement of genome-wide absolute levels of DNA methylation at single-nucleotide resolution. However, due to the ambiguity introduced by bisulfite treatment, the alignment process, especially in large-scale epigenetic research, is still considered a huge burden. We present Cloud-BS, an efficient BS-seq aligner designed for parallel execution in a distributed environment. Utilizing the Apache Hadoop framework, Cloud-BS splits sequencing reads into multiple blocks and transfers them to distributed nodes. By decomposing the alignment procedure into separate map and reduce tasks and optimizing the internal key-value structure for the MapReduce programming model, the algorithm significantly improves alignment performance without sacrificing mapping accuracy. In addition, Cloud-BS minimizes the innate burden of configuring a distributed environment by providing a pre-configured cloud image. Cloud-BS shows significantly improved bisulfite alignment performance compared to other existing BS-seq aligners. We believe our algorithm facilitates large-scale methylome data analysis. The algorithm is freely available at https://paryoja.github.io/Cloud-BS/.
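The toy in-process sketch below conveys the map/reduce decomposition such aligners use (C-to-T conversion and candidate generation in the map phase, best-hit selection in the reduce phase); it is not Cloud-BS, and the reference, reads, and mismatch threshold are invented for illustration.

```python
# Conceptual map/reduce sketch of bisulfite-aware alignment (toy, in-process; not Cloud-BS).
from collections import defaultdict

REFERENCE = "ACGTTTGACCGTAGCTTACG"
CONVERTED_REF = REFERENCE.replace("C", "T")     # bisulfite-aware reference (C->T strand)

def map_phase(read_id, read):
    converted = read.replace("C", "T")          # unmethylated Cs read as Ts after bisulfite treatment
    for pos in range(len(CONVERTED_REF) - len(converted) + 1):
        window = CONVERTED_REF[pos:pos + len(converted)]
        mismatches = sum(a != b for a, b in zip(window, converted))
        if mismatches <= 1:
            yield read_id, (pos, mismatches)    # key-value pair: read -> candidate alignment

def reduce_phase(read_id, candidates):
    return read_id, min(candidates, key=lambda c: c[1])   # keep the best-scoring hit per read

reads = {"r1": "TTGACC", "r2": "GTAGTT"}        # toy reads; in practice these are split into blocks
shuffled = defaultdict(list)
for rid, read in reads.items():
    for key, value in map_phase(rid, read):
        shuffled[key].append(value)
print(dict(reduce_phase(k, v) for k, v in shuffled.items()))
```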
This paper presents a possible solution for speeding up the integration of heterogeneous data into the big data mainstream. Data enrichment and the convergence of all possible sources are still in their early stages; as a result, existing techniques must be retooled to better integrate existing databases, as well as those specific to the Internet of Things, so that the advantages of big data can be exploited toward the final goal of creating a web of data. In this paper, semantic web-specific solutions are used to design a system based on intelligent agents. It addresses problems specific to the automation of database migration, with the final goal of creating a common ontology over various data repositories and producers so that they can be integrated into systems based on a big data architecture.
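As a small illustration of the kind of lifting such an agent performs, the sketch below maps rows from a legacy source into RDF triples under a shared ontology namespace using rdflib; the ontology IRI and property names are assumptions.

```python
# Hedged sketch: lifting legacy/IoT rows into RDF under a shared ontology namespace.
from rdflib import Graph, Literal, Namespace, RDF

ONT = Namespace("http://example.org/common-ontology#")     # assumed common ontology
g = Graph()
g.bind("ont", ONT)

legacy_rows = [                                             # stand-in for a relational/IoT source
    {"id": "sensor-42", "type": "TemperatureSensor", "value": 21.7},
    {"id": "sensor-43", "type": "HumiditySensor", "value": 55.0},
]

for row in legacy_rows:
    subject = ONT[row["id"]]
    g.add((subject, RDF.type, ONT[row["type"]]))
    g.add((subject, ONT.hasValue, Literal(row["value"])))

print(g.serialize(format="turtle"))                         # triples ready for a big-data/RDF store
```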