It is important to understand the behavior of an information network and its features. In this research, we explore this idea by applying a multidimensional data analysis system, a system that significantly enhances data intrusion detection. To accomplish this, we gather data from various sources, such as network traffic, user behavior, and system logs. Drawing on these sources, the proposed system detects and prevents cyber threats more accurately. Artificial intelligence techniques, including deep learning, clustering, and principal component analysis (PCA), are used. These are essential for analyzing complex patterns within the data and enable the detection of sophisticated and evolving intrusion techniques. Multidimensional data allows the capture of intricate, non-linear relationships, improving the system's ability to differentiate between normal and abnormal activities. Real-time data processing and AI-driven algorithms enhance detection speed and enable faster responses to potential intrusions. We tested the system on benchmark datasets; the results showed significant improvements in detection rates and a reduction in false positives compared with traditional network intrusion detection systems (NIDS). The integration of AI yields a more adaptive and scalable approach to intrusion detection, allowing the system to learn from new attack patterns and continuously refine its capabilities.
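The abstract names PCA, clustering, and deep learning as the analysis core. Below is a minimal sketch of one such pipeline, PCA compression followed by distance-to-centroid anomaly scoring; the feature matrix, cluster count, and 99th-percentile threshold are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: PCA for dimensionality reduction followed by K-Means
# clustering; flows far from every learned cluster are flagged as anomalous.
# Feature data and the distance threshold are illustrative assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))  # stand-in multidimensional flow features
X_test = rng.normal(size=(100, 20))

scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=5).fit(scaler.transform(X_train))
Z_train = pca.transform(scaler.transform(X_train))

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(Z_train)

# Distance of each point to its nearest centroid; large distances are anomalies.
train_dist = np.min(kmeans.transform(Z_train), axis=1)
threshold = np.percentile(train_dist, 99)  # assumed cutoff

Z_test = pca.transform(scaler.transform(X_test))
test_dist = np.min(kmeans.transform(Z_test), axis=1)
alerts = test_dist > threshold
print(f"{alerts.sum()} of {len(alerts)} flows flagged as anomalous")
```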
The rapid advancement and integration of renewable energy systems (RES) such as solar, wind, and hydropower have intensified the need for robust network security solutions to protect against emerging cyber vulnerabilities. These systems are increasingly interconnected with digital grids and IoT devices, heightening their exposure to cyber threats that, if exploited, could disrupt energy supply and lead to severe socio-economic repercussions. This paper proposes an artificial intelligence (AI)-driven approach to enhance network security specifically for renewable energy (RE) infrastructures, targeting vulnerabilities that affect data integrity and operational stability. This research introduces an Adaptive Spider Wasp optimizer-mutated Extreme Gradient Boosting (ASW-XGBoost) model as a novel solution designed to improve detection accuracy and enhance resilience across diverse RE networks. The proposed method begins with the creation of a dataset representative of both power system behaviors and potential cyber-attacks, pre-processed using a normalization algorithm to improve data quality. Feature extraction leverages a scalable approach to identify critical indicators unique to RE environments. The ASW-XGBoost model combines the optimization advantages of adaptive spider wasp algorithms with the classification robustness of XGBoost, allowing precise identification of attack signatures even within fluctuating renewable power outputs. Performance evaluations, conducted in simulated power networks with high renewable penetration, demonstrate that ASW-XGBoost surpasses conventional methods in both detection rate and operational efficiency. The findings underscore the model's capacity to adapt to dynamic, renewable-intensive environments, offering a more responsive solution to evolving cyber threats. This paper concludes with a discussion of the implications of AI-enhanced security protocols for the RE sector, highlighting ASW-XGBoost's potential as a foundation for further research and application in sustainable energy cybersecurity.
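The abstract does not spell out the Adaptive Spider Wasp update rules, so the sketch below substitutes a plain random search over XGBoost hyperparameters to illustrate the overall optimize-then-classify loop; the dataset, labels, and search ranges are all illustrative assumptions, not the paper's design.

```python
# Hedged sketch: an XGBoost intrusion classifier tuned by a simple random
# search standing in for the Adaptive Spider Wasp optimizer, whose update
# rules are not given in the abstract. All data and ranges are assumptions.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 30))                         # stand-in grid features
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)   # stand-in attack labels
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=1)

best_score, best_params = -1.0, None
for _ in range(20):  # each iteration mimics evaluating one candidate solution
    params = {
        "n_estimators": int(rng.integers(100, 500)),
        "max_depth": int(rng.integers(3, 10)),
        "learning_rate": float(rng.uniform(0.01, 0.3)),
    }
    model = XGBClassifier(**params)
    model.fit(X_tr, y_tr)
    score = f1_score(y_val, model.predict(X_val))
    if score > best_score:
        best_score, best_params = score, params

print(best_params, round(best_score, 3))
```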
In the current landscape, the Internet of Things (IoT) finds its utility across diverse sectors, including finance, healthcare, and beyond. However, security emerges as the principal obstacle impeding the advancement of IoT. Given the intricate nature of IoT cybersecurity, traditional security protocols fall short when addressing the unique challenges within the IoT domain. Security strategies anchored in the cybersecurity knowledge graph present a robust solution to safeguard IoT ecosystems. The foundation of these strategies lies in the intricate networks of the cybersecurity knowledge graph, with Named Entity Recognition (NER) serving as a crucial initial step in its implementation. Conventional cybersecurity entity recognition approaches in the IoT grapple with the complexity of cybersecurity entities, characterized by their sophisticated structures and vague meanings. Additionally, these traditional models are inadequate at discerning all the interrelations between cybersecurity entities, rendering their direct application in IoT security impractical. This paper introduces an innovative Cybersecurity Entity Recognition Model, referred to as CERM, designed to pinpoint cybersecurity entities within the IoT. CERM employs a hierarchical attention mechanism that proficiently maps the interdependencies among cybersecurity entities. Leveraging these mapped dependencies, CERM precisely identifies IoT cybersecurity entities. Comparative evaluation experiments illustrate CERM's superior performance over existing entity recognition models, marking a significant advancement in the field of IoT security.
Detecting software vulnerabilities is a vital component of cybersecurity, concentrating on identifying and remedying weaknesses or flaws in software that malicious actors could exploit. Improving Android security includes using robust software vulnerability detection processes to identify and mitigate possible threats. Leveraging advanced methods such as dynamic and static analysis together with machine learning (ML) approaches grounded in fractal theory, these models scan Android apps for vulnerabilities at an early stage. Effective software vulnerability detection is critical for mitigating safety risks and protecting systems and data from cyber-attacks. Android malware detection employing deep learning (DL) harnesses the power of neural networks (NNs) to identify and mitigate malicious apps targeting the Android platform and complex systems. DL approaches, namely recurrent neural networks (RNNs) and convolutional neural networks (CNNs), can be trained on massive datasets encompassing benign and malicious samples. This study develops a Hyperparameter Tuned Deep Learning Approach for Robust Software Vulnerability Detection (HPTDLA-RSVD) technique. The primary aim of the HPTDLA-RSVD technique is to ensure Android malware security using an optimal DL model. In the HPTDLA-RSVD technique, the min-max normalization method is applied to scale the input data into a uniform format. In addition, the HPTDLA-RSVD methodology employs ant lion optimizer (ALO)-based feature selection, named ALO-FS, for choosing better feature sets. Besides, the HPTDLA-RSVD technique uses a deep belief network (DBN) model for vulnerability detection and classification. Moreover, the slime mould algorithm (SMA) is executed to boost the hyperparameter tuning process of the DBN approach. The HPTDLA-RSVD approach is examined experimentally on a benchmark database. The simulation outcomes imply that the HPTDLA-RSVD approach performs better than existing approaches with respect to distinct measures.
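The min-max normalization step named here has a standard closed form, x' = (x - min) / (max - min), applied per feature. A minimal sketch, with an assumed sample matrix:

```python
# Hedged sketch of the min-max normalization step named in the abstract:
# rescale each feature column to [0, 1] via x' = (x - min) / (max - min).
# The sample matrix is an illustrative assumption.
import numpy as np

X = np.array([[2.0, 200.0],
              [4.0, 400.0],
              [6.0, 800.0]])

col_min = X.min(axis=0)
col_max = X.max(axis=0)
X_scaled = (X - col_min) / (col_max - col_min)
print(X_scaled)  # every column now spans exactly [0, 1]
```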
Synopsis
The research problem
This study investigated how firms employ corporate social responsibility (CSR) as a precautionary strategy in response to heightened concerns about cybersecurity following the adoption of data breach disclosure laws in the United States.
Motivation
CSR has garnered substantial attention in contemporary society. Simultaneously, the last few decades have witnessed a rapid surge of the digital economy. However, it remains unclear how CSR is adapting to digitalization. In this study, I focused on cybersecurity, a pivotal challenge in the digital age.
Theoretical reasoning
The enactment of data breach disclosure laws enhances the reporting of cybersecurity incidents and intensifies concerns about cybersecurity, prompting firms to take measures to mitigate the adverse impacts of data breaches. Building on the theory that CSR functions like an insurance policy, I hypothesized that firms increase their engagement in CSR to fortify their reputation after the enactment of data breach disclosure laws, helping cushion the potential impact of future breaches.
Analyses
The main analysis employed a difference-in-differences research design to compare the changes in CSR engagement between firms with high and low levels of cybersecurity risk following the enactment of data breach disclosure laws in the United States. Cross-sectional analyses delved into the underlying mechanisms. Additional analyses first explored the role of CSR in mitigating stock price decline and then illustrated reputational concerns after data breaches.
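For concreteness, a generic difference-in-differences specification of the kind this design implies can be written as below; the variable names are illustrative assumptions rather than the study's exact model.

```latex
% Hedged sketch of a generic DiD specification; names are illustrative.
\begin{equation}
  CSR_{it} = \beta_0 + \beta_1 \, HighRisk_i \times Post_t
           + \gamma' X_{it} + \alpha_i + \delta_t + \varepsilon_{it}
\end{equation}
```

Here $HighRisk_i$ flags firms with high cybersecurity risk, $Post_t$ marks years after a disclosure law takes effect, $\alpha_i$ and $\delta_t$ are firm and year fixed effects, and $\beta_1$ captures the differential change in CSR engagement.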
Findings
The main analysis showed that firms with high cybersecurity risk increase their CSR engagement to a greater extent following the adoption of data breach disclosure laws. CSR initiatives are particularly pronounced for firms likely to incur significant losses from data breaches, aligning with the theoretical framework and offering insight into the underlying mechanisms. I also found that firms with fewer financial constraints exhibit stronger CSR initiatives. Furthermore, these CSR initiatives are distinct and cannot be substituted by investments in information technology. The additional analysis illustrates that firms with superior CSR performance undergo a smaller stock price decline surrounding data breach announcements. This supports the notion that CSR functions much like insurance, shielding against the impacts of data breaches. Subsequently, this study presents direct evidence on firms’ concerns regarding the reputational impact of cybersecurity. Overall, this study underscores cybersecurity concerns as a driving force behind social responsibility initiatives in this digital era.
Target population
This research holds significance for policymakers worldwide who are considering cybersecurity-related regulations and for firms seeking effective risk management strategies in the face of cybersecurity challenges.
Synopsis
The research problem
This study examines the influence of firms’ business strategies on their cybersecurity risk disclosures (CRDs).
Motivations
The exponential expansion of the digital economy and the increasing reliance on online data storage and processing have made cybersecurity breaches a critical issue for businesses worldwide. The disclosure of cybersecurity risks is vital in fostering transparency and communication between firms and external stakeholders. This study draws from the Miles and Snow (1978) strategic typology to explore how firms’ chosen strategies influence their CRDs. This investigation is important because firms that adopt a prospector- or a defender-type strategy have different strategic focuses, leading to varying degrees of exposure to cybersecurity risks. As a result, these firms have various incentives to engage in CRDs. By enhancing our understanding of CRDs’ determinants, this study provides insights into the way in which business strategies shape firms’ approaches to communicating cybersecurity risks.
Despite the increasing scholarly attention paid to firms’ disclosure of cybersecurity risks, there remains a lack of literature concerning the factors influencing the extent of CRDs. Our study fills the gap in the literature by investigating how firms’ business strategies shape CRDs. By uncovering the influence of business strategies on CRDs, our study provides valuable insights into the information environment within firms.
The test hypotheses
Hypothesis 1a posits that firms adopting a prospector-type strategy are more likely to make more CRDs than firms following a defender-type strategy. Conversely, Hypothesis 1b suggests that firms with a prospector-type strategy are less inclined to provide extensive CRDs than their defender-type counterparts.
Target population
Regulators, investors, and other stakeholders.
Adopted methodology
Ordinary least squares regressions.
Analyses
Our independent variable of interest is business strategy, which we categorize into prospectors, analyzers, and defenders based on the Miles and Snow (1978) strategic typology. As our dependent variable, we employ the CRD score developed by Florackis et al. (2023), which uses machine-learning-based textual analysis to quantify CRDs. For the analysis of consequences, we use Tobin's Q as an indicator of firm value.
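As a rough sketch of the baseline regression implied here (variable names are illustrative assumptions, not the paper's exact specification):

```latex
% Hedged sketch of the baseline OLS model; names are illustrative.
\begin{equation}
  CRD_{it} = \beta_0 + \beta_1 \, STRATEGY_{it} + \gamma' Controls_{it}
           + IndustryFE + YearFE + \varepsilon_{it}
\end{equation}
```

with $STRATEGY_{it}$ a Miles and Snow score distinguishing prospectors from defenders and $CRD_{it}$ the Florackis et al. (2023) disclosure score; a positive $\beta_1$ would support Hypothesis 1a.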
Findings
We find that firms adopting a prospector-type strategy are more inclined to provide extensive CRDs than firms following a defender-type strategy. Moreover, we observe that the impact of business strategies on CRDs is heightened in firms with strong corporate governance attributes, including effective boards, robust internal controls, and the engagement of industry expert auditors or Big Four accounting firms. In addition, our findings indicate that the strengthened relationship between business strategies and CRDs contributes positively to firm value.
Network intrusion detection is becoming a challenging task as cyberattacks grow more and more sophisticated. Failure to prevent or detect such intrusions can have serious consequences. Machine learning approaches try to recognize network connection patterns to classify unseen and known intrusions, but they also require periodic re-training to keep performance at a high level. In this paper, a novel continuous-learning intrusion detection system, called Soft-Forgetting Self-Organizing Incremental Neural Network (SF-SOINN), is introduced. SF-SOINN, besides providing continuous learning capabilities, is able to perform fast classification, is robust to noise, and obtains good performance with respect to existing approaches. The main characteristic of SF-SOINN is the ability to remove nodes from the neural network based on an estimate of their utility. SF-SOINN has been validated on the well-known NSL-KDD and CIC-IDS-2017 intrusion detection datasets, as well as on some artificial data, to show its classification capability on more general tasks.
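The paper's exact utility formula is not reproduced in this abstract, so the sketch below illustrates only the general idea of utility-driven node removal, with an assumed decaying-reward utility and distance-based node insertion standing in for SF-SOINN's actual rules.

```python
# Hedged sketch of utility-based node pruning in the spirit of "soft
# forgetting": nodes whose utility falls below a fraction of the mean are
# removed. The utility update is an assumed decaying reward, not the
# paper's exact formula.
import numpy as np

class PrunableNodeSet:
    def __init__(self, dim, decay=0.95, prune_ratio=0.1):
        self.nodes = np.empty((0, dim))   # prototype vectors
        self.utility = np.empty(0)        # one utility estimate per node
        self.decay, self.prune_ratio = decay, prune_ratio

    def observe(self, x, new_node_dist=2.0):
        if len(self.nodes) == 0:
            self.nodes, self.utility = x[None, :], np.array([1.0])
            return
        self.utility *= self.decay        # gradually forget old usefulness
        dists = np.linalg.norm(self.nodes - x, axis=1)
        winner = int(np.argmin(dists))
        if dists[winner] > new_node_dist:                 # far from all nodes:
            self.nodes = np.vstack([self.nodes, x])       # insert a new node
            self.utility = np.append(self.utility, 1.0)
        else:
            self.utility[winner] += 1.0                   # reward matched node
            self.nodes[winner] += 0.1 * (x - self.nodes[winner])

    def prune(self):
        keep = self.utility >= self.prune_ratio * self.utility.mean()
        self.nodes, self.utility = self.nodes[keep], self.utility[keep]

rng = np.random.default_rng(0)
net = PrunableNodeSet(dim=4)
for _ in range(500):
    net.observe(rng.normal(size=4))
net.prune()
print(len(net.nodes), "nodes retained after pruning")
```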
Battlefield of Things (BoT) is a modern defense network that connects smart military devices to strategic networks. Cybersecurity plays a vital role in maintaining the security of BoT networks and provides encrypted communication with combat devices on an end-to-end or peer-to-peer basis. This paper proposes an approach to BoT networks that operates on a three-tier architecture, consisting of an application and service layer, a network and cybersecurity layer, and, finally, a battlefield layer; implements CNN-YOLO-based target detection; and formulates information security policies, privacy rules, and IT laws to govern algorithmic data access and authorization. The architecture connects a battlefield combat equipment network to a command data center's ground base station via wireless, Bluetooth, sensor, radio, and Ethernet links. This paper also analyzes prior Internet of Things (IoT) device attack strategies by collecting datasets of IoT security breaches from external sources. How the system security works, what breach techniques an attacker can use, how these can be avoided, and how systems can be strengthened against future attacks are discussed in detail.
Malicious attacks on software applications are on the rise as more people use Internet of Things (IoT) devices and high-speed internet. When a software system crash is caused by malicious action, a malware imaging method can be used to examine the application. In this study, we present a novel malware classification method that captures suspected operations in image features of various discrete sizes, allowing us to identify IoT device malware families. To decrease deep neural network training time, essential local and global image features are selected using a combined local and global feature descriptor (LBP-GLCM). The classification performance of the proposed deep learning model is improved by combining the predictions of weak learners (CNNs) and using them as knowledge input to a multi-layer perceptron meta-learner. This neural network ensemble with stacked generalization is used to improve generalization ability. The public dataset used for performance evaluation contains 5472 samples from 11 different malware families. To compare the proposed methodology with current malware detection systems, we developed a baseline experiment. The proposed approach improved malware classification results to 98.5% and 98.4% accuracy when using 256×256 and 200×200 image sizes, respectively. Overall, the results showed that the stacked generalization ensemble with multi-step feature extraction is a more effective method in terms of classification performance and response time.
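A minimal sketch of the combined LBP-GLCM descriptor named above, concatenating a local binary pattern histogram (local texture) with gray-level co-occurrence statistics (global texture) from a grayscale malware image; the parameter choices (P, R, distances, angles) are illustrative assumptions.

```python
# Hedged sketch of an LBP-GLCM descriptor for a grayscale malware image.
# Parameters are illustrative, not the paper's tuned configuration.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

image = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in

# Local texture: uniform LBP histogram.
P, R = 8, 1
lbp = local_binary_pattern(image, P, R, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)

# Global texture: GLCM contrast/homogeneity/energy/correlation.
glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                        for p in ("contrast", "homogeneity",
                                  "energy", "correlation")])

descriptor = np.hstack([lbp_hist, glcm_feats])  # input to the CNN/MLP ensemble
print(descriptor.shape)
```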
Nowadays, information and data security and availability are of utmost importance. However, because security is a process rather than a state, there is an increasing demand for technologies or architectural solutions that allow a computer system to adjust its level of security in response to changes in its environmental or network characteristics. In this paper, an architecture for a self-managing adaptive router/firewall is proposed to facilitate intelligent, real-time self-protection of a computer system. We also show how the proposed architecture might be used to control other system mechanisms or resources (for example, RAM).
Intelligent computing techniques are of paramount importance to the treatment of cybersecurity incidents. In this Artificial Intelligence (AI) context, while most of the algorithms explored in the cybersecurity domain aim to present solutions to intrusion detection problems, these algorithms seldom address the correction procedures explored in the resolution of cybersecurity incidents that have already taken place. In practice, knowledge about cybersecurity resolution data and procedures is under-used in the development of intelligent cybersecurity systems, and is sometimes even lost and not used at all. In this context, this work proposes the Case-based Cybersecurity Incident Resolution System (CCIRS), a system that integrates case-based reasoning (CBR) techniques and the IODEF standard in order to retain concrete problem-solving experiences of cybersecurity incident resolution so they can be reused in the resolution of new incidents. Different types of experimental results obtained so far with the CCIRS show that, with our approach, information security knowledge can be retained in a reusable memory, improving the resolution of new cybersecurity problems.
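As a rough illustration of the CBR "retrieve" step at the heart of such a system (the feature encoding of IODEF fields and the similarity metric here are assumptions, not the CCIRS design):

```python
# Hedged sketch of CBR retrieval: past incidents are encoded as feature
# vectors (e.g., derived from IODEF fields) and the most similar stored
# case is returned so its resolution can be reused.
import numpy as np

case_base = [
    {"features": np.array([1, 0, 1, 0]),
     "resolution": "block source IP, patch exposed service"},
    {"features": np.array([0, 1, 0, 1]),
     "resolution": "revoke credentials, force password reset"},
]

def retrieve(new_incident):
    """Return the stored case most similar to the new incident (cosine)."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(case_base, key=lambda c: cos(c["features"], new_incident))

incident = np.array([1, 0, 1, 1])  # stand-in encoding of a new incident report
print(retrieve(incident)["resolution"])
```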
A method of classifying network security data based on multi-feature extraction is proposed to address the instability of nonlinear time series in network security threats. Cybersecurity information is divided according to the principle of acquiring multiple attributes, and on this basis an adaptive estimation technique is optimized. With the proposed method, a cybersecurity information classification system is constructed according to the phase-space reconstruction principle, so that a dynamic and autonomous adaptive estimation of the cybersecurity threat can be completed to ensure the feasibility of cybersecurity information classification. The experimental results prove that the cybersecurity information classification technology based on multi-attribute extraction can effectively guide chaos into adjacent orbits and reasonably control the training scale. Moreover, the accuracy of the estimation is guaranteed, and the cybersecurity threat can be estimated thanks to the method's high-speed convergence and strong proximity. Therefore, the proposed classification technology can assist professionals and back-office managers in guaranteeing security by facilitating the timely receipt of information.
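Phase-space reconstruction of a scalar time series is conventionally done by time-delay embedding; a minimal sketch, with the delay tau and embedding dimension m as assumed parameters rather than values from the paper:

```python
# Hedged sketch of phase-space (time-delay) reconstruction, the standard
# technique behind the reconstruction principle the abstract invokes.
import numpy as np

def delay_embed(series, m=3, tau=2):
    """Embed a 1-D series into m-dimensional delay vectors."""
    n = len(series) - (m - 1) * tau
    return np.stack([series[i * tau : i * tau + n] for i in range(m)], axis=1)

t = np.linspace(0, 20 * np.pi, 2000)
series = np.sin(t) + 0.1 * np.random.randn(len(t))  # stand-in threat metric
vectors = delay_embed(series, m=3, tau=2)
print(vectors.shape)  # (1996, 3) delay vectors for downstream classification
```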
When analyzing cybersecurity datasets with machine learning, researchers commonly need to decide whether or not to include Destination Port as an input feature. We assess the impact of Destination Port as a predictive feature by building predictive models with three different input feature sets and four combinations of web attacks from the CSE-CIC-IDS2018 dataset. First, we use Destination Port as the only input feature to our models. Second, all features (from CSE-CIC-IDS2018) except Destination Port are used to build the models. Third, all features including Destination Port are used to train and test the models. All three feature sets obtain respectable classification results in detecting web attacks with LightGBM and CatBoost classifiers in terms of Area Under the Receiver Operating Characteristic Curve (AUC) scores, with AUC exceeding 0.90 in all scenarios. We observe the best classification performance when Destination Port is combined with all of the other CSE-CIC-IDS2018 features, although performance is still respectable when Destination Port is the only input feature. Additionally, we validate that Botnet attacks also yield respectable AUC with Destination Port as the only input feature. This highlights that practitioners must be mindful of whether to include Destination Port as an input feature when it exhibits the lopsided label distributions we clearly identify in this study. Our brief survey of the existing CSE-CIC-IDS2018 literature also discovered that many studies incorrectly treat Destination Port as a numerical input feature in machine learning models. Destination Port should be treated as a categorical input, as its values do not represent quantities that can meaningfully enter the models' mathematical equations.
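A minimal sketch of the recommended handling: declare Destination Port as a categorical column so LightGBM partitions on category membership rather than on numeric thresholds. The data below is synthetic, not CSE-CIC-IDS2018.

```python
# Hedged sketch: treat Destination Port as categorical in LightGBM via the
# pandas "category" dtype instead of as a number with arithmetic meaning.
import numpy as np
import pandas as pd
from lightgbm import LGBMClassifier

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "Destination Port": rng.choice([80, 443, 8080, 3389], size=5000),
    "Flow Duration": rng.exponential(1000.0, size=5000),
})
y = (df["Destination Port"] == 8080).astype(int)  # toy label

# Correct handling: declare the port as categorical, not numeric.
df["Destination Port"] = df["Destination Port"].astype("category")

model = LGBMClassifier(n_estimators=100)
model.fit(df, y)  # LightGBM consumes pandas category columns natively
print(model.predict(df.head()))
```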
Innovations driving the Indian dental consumables market.
A breath of hope.
Cybersecurity tips to prevent healthcare organizations from having to swallow a bitter pill.
Nutritional solutions to reduce harmful impact of air pollution on cardiovascular health.
For the month of January 2021, APBN features a cover story on innovations in a home-based faecal calprotectin test for self-management of inflammatory bowel disease. In the Features section, we bring you highlights of new technologies and trends in cell-line research and development. Taking a dive into healthcare technology, the Spotlights section covers interviews with two experts on data analytics in healthcare and on mitigating cybersecurity threats in healthcare systems. Under the same umbrella of health technology, the articles in the Columns section cover the use of artificial intelligence in healthcare.
The extensive integration of interconnected devices and the inadvertent information obtained from untrusted sources have exposed the Industrial Control Systems (ICS) ecosystem to remote attacks through the exploitation of new and old vulnerabilities. Unfortunately, although recognized as an emerging risk in light of the recent rise in cyber attacks, cybersecurity for ICS has not been addressed adequately, both in terms of technology and, most importantly, in terms of organizational leadership and policy. In this paper, we present our findings regarding the cybersecurity challenges for Smart Grid and ICS and the need for changes in the way organizations perceive cybersecurity risk and leverage resources to balance the needs of information security and operational security. Moreover, we present empirical data that point to cybersecurity governance and technology principles that can help public and private organizations successfully navigate the technical cybersecurity challenges for ICS and Smart Grid systems. We believe that by identifying and mitigating the inherent risks in their systems, operations, and processes, enterprises will be better positioned to shield themselves against current and future cyber threats.
Cybersecurity has become a great concern in many real-world applications involving adversaries, as Machine Learning (ML) algorithms are ever more widely used. This concern is even more challenging on Internet of Things (IoT) platforms. As IoT-enabled applications grow at a rapid pace in every sector, security-related incidents grow as well. ML algorithms are widely deployed to perform data analysis, reasoning, and decision-making over the data emanating from IoT devices, and securing this data during collection, communication, and computation is a major challenge. Attackers try to find weaknesses in ML algorithms and deceive them into learning the wrong information from the data. Countermeasures therefore need to be developed to evaluate the security of ML models, and developing such countermeasures requires understanding all possible attacks on these models. Data poisoning attacks are a class of adversarial attacks on ML in which an adversary has the power to alter a small fraction of the training data in order to make the trained classifier satisfy certain objectives. Recent data poisoning techniques such as the Fast Gradient Sign Method (FGSM) are static and give the attacker only limited control over the creation of adversarial data. In this research, we develop a more robust data poisoning technique for deep neural networks using Generative Adversarial Networks (GANs) to create a data poisoning attack. We then evaluate the performance of the proposed algorithm and compare the results with those obtained by FGSM.
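For reference, the FGSM baseline the study compares against is a one-step perturbation along the sign of the loss gradient; a minimal sketch with a stand-in model and data (the paper's GAN-based generator is not reproduced here):

```python
# Hedged sketch of the Fast Gradient Sign Method (FGSM): perturb an input
# by epsilon times the sign of the loss gradient. Model, data, and epsilon
# are stand-ins, not the paper's experimental setup.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 20, requires_grad=True)  # stand-in feature vectors
y = torch.randint(0, 2, (8,))               # stand-in labels

loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.1  # assumed perturbation budget
x_adv = (x + epsilon * x.grad.sign()).detach()       # one-step FGSM update
print((x_adv - x.detach()).abs().max())  # each coordinate moved by epsilon
```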