  • Article (No Access)

    The Synergy of Scientometric Analysis and Knowledge Mapping with Topic Models: Modelling the Development Trajectories of Information Security and Cyber-Security Research

    An important part of an organisation’s mission is protecting its information assets from inside and outside threats. As the information environment has become more diverse and inclusive, security concerns have shifted from information assets residing within the organisation to information assets and networked devices exposed to the broader cyberspace, such as cloud environments, the Internet of Things, and the mobile Internet. Organisations have to keep up with knowledge and trends in information security and cyber-security to safeguard their information assets, and knowledge mapping can aid in this knowledge management process. Mandatory standards and government regulations help industries establish best practices in cyber-security. Knowledge mapping and scientometric analysis across disciplines also provide a tracking system that notifies researchers and practitioners when new solutions and technologies facilitating threat detection emerge. While various topics in information security and cyber-security have been extensively investigated in academia, identifying salient themes and development trajectories in these research areas remains relatively unexplored. This study employs scientometric analysis and topic modelling to develop knowledge maps that visualise core concepts associated with information security and cyber-security research over time and across disciplines. Using scientometric analysis and knowledge mapping with topic models, the study identifies the commonalities, differences, and relationships between the information security and cyber-security research domains. This approach yields insights into how these research areas have evolved and how the learning and teaching of cyber-security might be improved. The proposed approach to developing the knowledge map may be extended to other research areas.
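
    Below is a minimal, illustrative sketch of the core comparison idea (not the authors’ actual pipeline): fit a single LDA model over pooled documents from both domains, then compare each domain’s mean topic weights to surface shared and distinct themes. The toy corpora infosec_docs and cybersec_docs and all parameter values are hypothetical placeholders; the sketch assumes the gensim and numpy libraries.

    from gensim import corpora, models
    import numpy as np

    # Hypothetical tokenised abstracts from each research domain.
    infosec_docs = [
        ["access", "control", "policy", "information", "asset"],
        ["risk", "management", "compliance", "standard", "policy"],
    ]
    cybersec_docs = [
        ["intrusion", "detection", "network", "cloud", "threat"],
        ["malware", "iot", "device", "network", "threat"],
    ]

    # Fit one LDA model over the pooled corpus so both domains share topics.
    pooled = infosec_docs + cybersec_docs
    dictionary = corpora.Dictionary(pooled)
    bow = [dictionary.doc2bow(d) for d in pooled]
    lda = models.LdaModel(bow, id2word=dictionary, num_topics=2,
                          random_state=0, passes=20)

    def mean_topic_weights(docs):
        """Average topic distribution over one domain's documents."""
        weights = np.zeros(lda.num_topics)
        for d in docs:
            topics = lda.get_document_topics(dictionary.doc2bow(d),
                                             minimum_probability=0.0)
            for topic_id, p in topics:
                weights[topic_id] += p
        return weights / len(docs)

    # Similar weights across domains mark shared themes; large gaps mark
    # domain-specific ones. Repeating this per publication year traces the
    # development trajectories that the knowledge map visualises.
    print("infosec :", mean_topic_weights(infosec_docs))
    print("cybersec:", mean_topic_weights(cybersec_docs))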

  • Article (No Access)

    Exploring Symmetrical and Asymmetrical Dirichlet Priors for Latent Dirichlet Allocation

    Latent Dirichlet Allocation (LDA) has gained much attention from researchers and is increasingly being applied to uncover underlying semantic structures in a variety of corpora. However, nearly all researchers use symmetrical Dirichlet priors, often unaware of the practical implications they carry. This research is the first to explore the effects of symmetrical and asymmetrical Dirichlet priors on topic coherence and human topic ranking when uncovering latent semantic structures from scientific research articles. More specifically, we examine the practical effects of several classes of Dirichlet priors on 2000 LDA models created from abstract and full-text research articles. Our results show that, for full-text data, symmetrical or asymmetrical priors on the document–topic distribution or the topic–word distribution have little effect on topic coherence scores and human topic ranking. In contrast, for abstract data, asymmetrical priors on the document–topic distribution yield a significant increase in topic coherence scores and improved human topic ranking compared with a symmetrical prior. Symmetrical or asymmetrical priors on the topic–word distribution show no real benefit for either abstract or full-text data.
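
    The prior comparison can be sketched with gensim, whose LdaModel accepts alpha="symmetric" or alpha="asymmetric" for the document–topic prior and eta for the topic–word prior. This is illustrative only; the paper’s experiment spans 2000 models plus human topic ranking, and the toy corpus docs here is a hypothetical placeholder.

    from gensim import corpora, models
    from gensim.models import CoherenceModel

    # Hypothetical tokenised corpus.
    docs = [
        ["topic", "model", "latent", "dirichlet", "allocation"],
        ["prior", "symmetric", "asymmetric", "coherence", "score"],
        ["corpus", "abstract", "text", "article", "topic"],
    ]
    dictionary = corpora.Dictionary(docs)
    bow = [dictionary.doc2bow(d) for d in docs]

    for alpha in ("symmetric", "asymmetric"):  # document-topic prior
        lda = models.LdaModel(bow, id2word=dictionary, num_topics=2,
                              alpha=alpha, eta="symmetric",  # topic-word prior
                              random_state=0, passes=50)
        cm = CoherenceModel(model=lda, corpus=bow, dictionary=dictionary,
                            coherence="u_mass")
        print(f"alpha={alpha}: u_mass coherence = {cm.get_coherence():.3f}")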

  • Article (Free Access)

    Public and Private Information: Firm Disclosure, SEC Letters, and the JOBS Act

    This paper examines the impact of the recently passed Jumpstart Our Business Startups (JOBS) Act on the behavior of market participants. Using the JOBS Act, which relaxed mandatory information disclosure requirements, as a natural experiment on firms’ choices of the mix of hard accounting information and textual disclosures, we find that, relative to a peer group of firms, initial public offering (IPO) firms reduce accounting disclosures and change their textual disclosures. Because textual disclosure allows a partial revelation of IPO quality, only textual disclosures affect underpricing. We also find that the Securities and Exchange Commission (SEC) changes its behavior post-JOBS Act when responding to draft registration statements: the SEC’s comment letters to firms are more negative in tone and more forceful in their recommendations, focusing on quantitative information. Finally, under the JOBS Act, investors place more emphasis on the information produced by the SEC when pricing the stock; returns following the public release of the letters vary by about 4% depending on letter tone.
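
    As a rough illustration of how the tone of a letter can be quantified, the sketch below computes a simple dictionary-based tone score. The paper’s exact measure may differ, and the word lists here are tiny hypothetical stand-ins for a full finance lexicon such as Loughran-McDonald.

    import re

    # Tiny hypothetical word lists; a real study would use a full lexicon.
    NEGATIVE = {"deficiency", "restate", "concern", "failure", "weakness"}
    POSITIVE = {"improve", "strength", "benefit", "growth", "success"}

    def tone(text):
        """(positive - negative) word counts, scaled by document length."""
        words = re.findall(r"[a-z]+", text.lower())
        neg = sum(w in NEGATIVE for w in words)
        pos = sum(w in POSITIVE for w in words)
        return (pos - neg) / max(len(words), 1)

    letter = ("We note a material weakness and a deficiency in your revenue "
              "recognition disclosure; please restate the affected periods.")
    print(f"tone = {tone(letter):+.3f}")  # more negative = more critical letter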

  • Article (No Access)

    Automatic analysis of microblogging data to aid in emergency management

    Microblogging platforms like Twitter have, in recent years, become one of the most important sources of information for a wide spectrum of users. As a result, these platforms have become valuable resources for supporting emergency management. During any crisis, it is necessary to sift through a huge amount of social media text within a short span of time to extract meaningful information. Extracting emergency-specific information, such as topic keywords, landmarks, or geo-locations of sites, from these texts plays a significant role in building an application for emergency management. This paper therefore highlights different aspects of the automatic analysis of tweets to help in developing such an application. It focuses on: (1) identifying crisis-related tweets using machine learning; (2) exploring topic model implementations and their effectiveness on short messages (as short as 140 characters), performing an exploratory data analysis on crisis-related short texts collected from Twitter, and using different visualizations to understand the commonalities and differences between topics and between different crisis-related datasets; and (3) providing a proof of concept for identifying and retrieving geo-locations from tweets and extracting GPS coordinates from these data to plot them approximately on a map.
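
    A minimal sketch of step (1), crisis-tweet identification: a TF-IDF bag-of-words representation feeding a logistic regression, an illustrative baseline rather than the paper’s exact classifier. The labelled example tweets are hypothetical, and the sketch assumes scikit-learn.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled tweets: 1 = crisis-related, 0 = not.
    tweets = [
        "flood water rising near the bridge, roads closed",
        "earthquake felt downtown, people evacuating the mall",
        "great coffee at the new cafe this morning",
        "watching the game tonight with friends",
    ]
    labels = [1, 1, 0, 0]

    # TF-IDF over unigrams and bigrams, classified by logistic regression.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression())
    clf.fit(tweets, labels)

    # Classify an unseen tweet.
    print(clf.predict(["bridge collapsed, emergency services on site"]))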

  • Chapter (No Access)

    Chapter 10: Analyzing Textual Information at Scale

    We provide an overview of recent advances in textual analysis for the social sciences. Count-based economic models, structured statistical tools, and plain-vanilla machine learning apparatus each have their own merits and limitations. To capture complex linguistic structures in a data-driven way while ensuring computational scalability and economic interpretability, a general framework for analyzing large-scale text-based data is needed. We discuss recent attempts to combine the strengths of neural network language models, such as word embeddings, with those of generative statistical models, such as topic models. We also describe typical sources of texts and the application of these methodologies to issues in finance and economics, and we discuss promising future directions.
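
    One simple way to combine the two strands the chapter surveys is to use word-embedding geometry as a semantic check on a generative topic model. The sketch below (illustrative, not the chapter’s framework) scores each LDA topic by the average pairwise word2vec similarity of its top words; the toy corpus and all parameters are hypothetical, and the sketch assumes gensim.

    from itertools import combinations
    from gensim import corpora, models

    # Hypothetical tokenised corpus of finance-related snippets.
    docs = [
        ["stock", "return", "volatility", "market", "price"],
        ["earnings", "disclosure", "report", "firm", "investor"],
        ["market", "price", "investor", "return", "firm"],
    ]
    dictionary = corpora.Dictionary(docs)
    bow = [dictionary.doc2bow(d) for d in docs]

    # A generative topic model and a neural word-embedding model, side by side.
    lda = models.LdaModel(bow, id2word=dictionary, num_topics=2,
                          random_state=0, passes=50)
    w2v = models.Word2Vec(docs, vector_size=16, min_count=1, epochs=200, seed=0)

    # Score each topic by the mean pairwise embedding similarity of its top
    # words: a crude semantic-coherence check on the statistical topics.
    for t in range(lda.num_topics):
        top = [w for w, _ in lda.show_topic(t, topn=4)]
        sims = [w2v.wv.similarity(a, b) for a, b in combinations(top, 2)]
        print(f"topic {t}: {top}, coherence = {sum(sims) / len(sims):.3f}")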