Panax ginseng is one of the most frequently used herbs in the world. Numerous trials have evaluated its clinical benefits. However, the quality of these studies has not been comprehensively and systematically assessed. We reviewed randomized controlled trials (RCTs) of Panax ginseng to evaluate their quality and risk of bias. We searched four English databases, without publication date restriction. Two reviewers extracted details about the studies' methodological quality, guided by the Consolidated Standards of Reporting Trials (CONSORT) checklist and its extension for herbal interventions. Risk of bias was determined using the Cochrane Risk of Bias tool. Of 475 potentially relevant studies, 58 met our inclusion criteria. In these 58 studies, 48.3% of the suggested CONSORT checklist items and 35.9% of the extended herbal items were reported. The quality of RCTs published after the CONSORT checklist was introduced improved: until 1995 (before CONSORT) (n = 4), studies reported 32.8% of the items; from 1996 to 2006 (CONSORT published and revised) (n = 30), 46.1%; and from 2007 onwards (n = 24), 53.5% (p = 0.005). After the CONSORT extension for herbal interventions was published in 2006, RCT quality also improved, although not significantly: until 2005 (n = 34), studies reported 35.2% of the extended herbal items, and from 2006 onwards (n = 24), 37.3% (p = 0.64). Most studies were rated as having an "unclear" risk of bias. Overall, the quality of Panax ginseng RCT methodology has improved since the CONSORT checklist was introduced, but more can be done to improve the methodological quality of, and reporting in, RCTs.
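A minimal sketch of how the across-period comparison described in the abstract above could be reproduced, assuming per-study reporting percentages are available; the figures below are invented for illustration, and the review does not state which statistical test produced its p-values.

```python
# Hypothetical illustration only: comparing mean CONSORT-item reporting rates
# across the three publication periods mentioned in the abstract. The per-study
# percentages are invented, and the review's actual test may differ.
from scipy.stats import f_oneway

pre_consort   = [30.1, 28.5, 35.9, 36.7]              # until 1995 (n = 4)
consort_era   = [40.2, 44.8, 51.0, 47.5, 46.9, 45.1]  # 1996-2006 (illustrative subset)
post_revision = [50.3, 55.7, 52.2, 56.1, 53.0]        # 2007 onwards (illustrative subset)

stat, p = f_oneway(pre_consort, consort_era, post_revision)
print(f"F = {stat:.2f}, p = {p:.4f}")
```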
Formal methods, based on model checking, have been introduced as an efficient approach to Trojan detection. However, systematic property generation for detecting a wide range of Trojans is the main challenge in these methods; generating properties efficient enough to pinpoint the projected Trojan precisely in the targeted hardware has been a bottleneck of this approach. This paper evaluates the feasibility of a systematic property generation methodology for Trojan detection in cryptosystems by analyzing the vulnerable points of an implemented design with respect to the covered class of Trojans. The vulnerability analysis reduces the search space of formal methods and considerably improves their scalability. We show that these properties can be generated systematically and that the generated properties can be used in conventional model-checking tools to find hard-to-detect Trojans efficiently. Experimental results show that the proposed method reduces Trojan detection time considerably and that the required memory makes it possible to evaluate circuits for several hundred clock cycles.
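As a rough illustration of what systematic property generation might look like in practice, the sketch below emits model-checker properties for a list of signals flagged by a hypothetical vulnerability analysis; the signal names, the control-state convention and the NuSMV-style LTL template are all assumptions, not the paper's actual method.

```python
# Hypothetical sketch of systematic property generation for Trojan detection.
# Signal names and the property template are illustrative; the paper's actual
# vulnerability analysis and property format are not reproduced here.

# Suspicious nets identified by a (hypothetical) vulnerability analysis,
# e.g. rarely-activated or low-observability signals in a crypto core.
vulnerable_signals = ["round_counter_en", "key_sched_bypass", "dbg_leak_mux"]

def make_properties(signals):
    """Emit one NuSMV-style LTL assertion per signal stating that the signal
    may only be asserted when the control FSM allows it."""
    props = []
    for sig in signals:
        # G (globally): the signal never deviates from specification-defined behaviour.
        props.append(f"LTLSPEC G ({sig} = TRUE -> ctrl_state = ALLOWED_{sig.upper()})")
    return props

for prop in make_properties(vulnerable_signals):
    print(prop)   # properties would then be fed to a conventional model checker
```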
The outsourcing market has been growing in recent years and is expected to keep doing so, but this growth has been limited by the failure of many projects, which in some cases has led organizations to take those services back in-house (insourcing). These failures have been due mainly to problems with providers: lack of experience and capacity to take on the projects, and difficulties of communication. There are good-practice frameworks for managing outsourcing projects on the client side, but not for providers, who base the provision of services on their past experience and technical capabilities. The aim of this paper is to establish the need for a methodology that guides providers throughout the whole outsourcing life cycle and facilitates the provision of quality services and their management.
Ontology instance migration is one of the complex and not yet fully solved problems in knowledge management. A solution is required when an ontology schema evolves over its life cycle and the assertions have to be transferred to the newer version. The problem becomes more complex in distributed settings when, for example, several autonomous software entities use and exchange partial assertional knowledge in a domain that is formalized by different though semantically overlapping descriptive theories. Such an exchange is essentially the migration of the assertional part of an ontology to other ontologies belonging to, or used by, different entities. The paper presents our method and tool for migrating instances between ontologies that have structurally different but semantically overlapping schemas. The approach is based on manually coded transformation rules describing the changes between the input and the output ontologies. The tool is implemented as a plug-in for the ProjectNavigator prototype software framework. The article also reports the results of our three evaluation experiments. In these experiments we evaluated the degree of structural change up to which our approach remains valid. We also chose the ontology sets in one of the experiments so that the results are comparable with ontology alignment software. Finally, we checked how well our approach scales as the number of migrated ontology instances grows to figures characteristic of industrial ontologies. In our opinion the evaluation results are satisfactory and suggest directions for future work.
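A minimal sketch of rule-based instance migration under the assumption that instances are represented as (subject, predicate, object) triples; the rule format and the ontology terms below are invented for illustration and do not reproduce the ProjectNavigator plug-in described above.

```python
# Minimal sketch of rule-based ontology instance migration, assuming instances
# are represented as (subject, predicate, object) triples. All terms and rules
# below are hypothetical.

# Manually coded transformation rules: old schema term -> new schema term.
class_rules    = {"old:Employee": "new:StaffMember"}
property_rules = {"old:worksFor": "new:employedBy"}

def migrate(triples):
    migrated = []
    for s, p, o in triples:
        p = property_rules.get(p, p)          # rename properties
        if p == "rdf:type":
            o = class_rules.get(o, o)         # retype instances
        migrated.append((s, p, o))
    return migrated

source = [("inst:ann", "rdf:type", "old:Employee"),
          ("inst:ann", "old:worksFor", "inst:acme")]
print(migrate(source))
# [('inst:ann', 'rdf:type', 'new:StaffMember'), ('inst:ann', 'new:employedBy', 'inst:acme')]
```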
In this study, response surface methodology was used to examine the effects of temperature and time on the development of niobium carbide coatings on AISI D3 steel. The effect of niobizing temperature (900–1100°C) and period (2–6 hours) on coating thickness, hardness, fracture toughness, coefficient of friction and wear rates was investigated. ANOVA was conducted to analyze the experimental data, and it was observed that the coating thickness and microhardness increased with temperature and time. The response surfaces developed for fracture toughness, coefficient of friction and wear rates were found to exhibit a complex structure that is significantly influenced by temperature, time and their interactions. The correlation coefficients of the developed regression models range between 0.82 and 0.99. Using the empirical formulas obtained with these mathematical models, it is predicted that niobium carbide coatings with the targeted properties can be obtained more economically and practically with the thermochemical method.
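For readers unfamiliar with response surface modelling, the sketch below fits a full quadratic surface for coating thickness as a function of temperature and time and reports its coefficient of determination; the data points are invented, and only the 900–1100°C and 2–6 h ranges come from the abstract.

```python
# Illustrative sketch of fitting a second-order response surface for coating
# thickness as a function of niobizing temperature and time. The observations
# are hypothetical and do not reproduce the study's measurements.
import numpy as np

# (temperature [°C], time [h], thickness [µm]) -- invented observations
data = np.array([
    [900, 2,  4.1], [900, 6,  7.9], [1000, 4,  9.8],
    [1100, 2, 9.2], [1100, 6, 16.4], [1000, 2,  6.8], [1000, 6, 12.3],
])
T, t, y = data[:, 0], data[:, 1], data[:, 2]

# Design matrix for the full quadratic model: 1, T, t, T*t, T^2, t^2
X = np.column_stack([np.ones_like(T), T, t, T * t, T**2, t**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print("coefficients:", np.round(coef, 6), " R^2 =", round(r2, 3))
```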
As the technology associated with the "Web Services" trend gains significant adoption, the need for a corresponding design approach becomes increasingly important. This paper introduces a foundational model for designing (composite) services. The innovation of this model lies in the identification of four interrelated viewpoints (interface behaviour, provider behaviour, choreography, and orchestration) and their formalization from a control-flow perspective in terms of Petri nets. By formally capturing the interrelationships between these viewpoints, the proposal enables the static verification of the consistency of composite services designed in a cooperative and incremental manner. A proof-of-concept simulation and verification tool has been developed to test the possibilities of the proposed model.
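A minimal sketch of the kind of place/transition Petri-net machinery such a control-flow formalization relies on; the places, transition and marking below are hypothetical and do not reproduce the paper's four-viewpoint model.

```python
# Minimal place/transition Petri-net sketch (illustrative only).

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical fragment of an interface-behaviour viewpoint: receive an order, then confirm it.
net = PetriNet({"order_received": 1})
net.add_transition("send_confirmation", ["order_received"], ["order_confirmed"])
net.fire("send_confirmation")
print(net.marking)   # {'order_received': 0, 'order_confirmed': 1}
```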
Due to the current technological and economic context, there is an increasing need for cooperating information systems based on federated databases. Though the technical issues of these architectures have long been studied, the way to build them has not triggered as much effort. This paper describes a general architecture, a methodology and a CASE environment intended to address the problem of providing users and programmers with an abstract interface to independent, heterogeneous and distributed databases. The architecture comprises a hierarchy of mediators and a repository that dynamically transform actual data into a virtual homogeneous database and allow client applications to query it. The InterDB approach provides a complete methodology to define this architecture, including schema recovery through reverse engineering, database integration and mapping building. The methodology is supported by the DB-MAIN CASE tool, which helps developers generate the mediators and their repository.
The preceding papers have shown the impressive versatility and potential of agent-based modeling in developing an understanding of industrial and labor dynamics. The main attraction of agent-based models is that the actors — firms, workers, and networks — that are the objects of study in the 'real world,' can be represented directly in the model. This one-to-one correspondence between model agents and economic actors provides greater clarity and more opportunities for analysis than many alternative modeling approaches. However, the advantages of agent-based modeling have to be tempered by disadvantages and as yet unsolved methodological problems. In this brief summary drawn from the discussion at the closing session of WILD@ACE, we review three of these open problems in the context of the papers presented at the conference: How can agent-based models be empirically validated? What criteria should be used to evaluate the explanatory success of agent-based models? And how can the conclusions of research on similar topics be integrated?
This paper presents an overview of a novel approach called Transcendental Psychology Methodology (TPM), largely inspired by Bach-y-Rita's work and developed by A.I. Mirakyan (1929–1995) and his associates. Beginning with the perceptual constancy problem, in which stimuli do not change perceptually despite variations in their physical characteristics, Mirakyan recognized that contemporary accounts of constancy included both theoretical contradictions and empirical discrepancies. This led him to propose TPM as an alternative to the traditional Product Basis Paradigm (PBP). Whilst PBP focuses upon perceptual phenomena, TPM focuses upon the underlying processes and upon the principles that support the flexibility needed to create complex, coherent representations under different stimulus conditions. The central idea that generative perceptual processes may be universal is grounded in Bach-y-Rita's famous experiments and has important parallels with present-day conceptions of the construction of meaning in neurophysiological processes. TPM-inspired studies of a wide range of perceptual phenomena have so far suggested basic principles that can be applied to all perceptual processes, regardless of their modality. Its new conception of the perception of spatial extent can contribute to our understanding of the visual effects that Bach-y-Rita's blind subjects experience, and it may provide a useful general tool for uncovering the psychological principles behind Bach-y-Rita's findings. Because of its axiomatic approach and its focus upon universal, generative processes, TPM may also be useful in other disciplines, e.g., in providing a link between neurophysiological and psychological levels of investigation. It may ultimately serve as a general theory of how sensory experience is created.
It would be useful for IS researchers to know the major methodological approaches used in the IS field and the trends MIS research is experiencing in this regard. For this reason, we have conducted a meta-research study of three leading IS journals over a period of 15 years in an attempt to provide the MIS field with an overview of the research methodologies used and to help researchers choose appropriate research methods. A total of 651 survey-based studies reported in three leading MIS journals between 1992 and 2006 are reviewed. The findings indicate that a considerable proportion of MIS research has been conducted using the survey method. In terms of research purpose, there has been an imbalance between the use of exploratory and explanatory surveys. Consequently, it is to be expected that in the immediate future the trend will be much more towards the development of models and the explanation of relationships, that is, towards explanatory studies.
This paper presents the results of research work carried out to develop a novel gaming model for formative knowledge assessment (FKA). The benefits of FKA are well acknowledged by educators, as it aids better learning. However, because FKA is frequent, students' willingness to take frequent assessments is often poor. Students often expressed that they felt uncomfortable, and this resulted in a lack of active participation. These are considered major challenges to be addressed. Hence, this research aimed to find a way to make the assessments interesting. Based on the positive responses to game-based learning, a theoretical model that integrates the principles of extensive games (EGs) and the concept map (CM), namely the concept tree game (CTG), has been defined and developed. Further, qualitative research was performed to evaluate the impact of the FKA model. The major data-gathering techniques used in this work were a class test, the proposed FKA method, participant observation, a survey and a focussed team interview. The data were analysed statistically, and the results revealed that the model has been effective and useful for educators in assessing the knowledge of students; for students, it helped them to know where they stand and to learn effectively.
Artificial markets are an emerging form of agent-based simulation in which agents represent individual industries, firms, or consumers interacting under simulated market conditions. While artificial markets demonstrate considerable potential for advancing innovation research, the validity of the method depends on the ability of researchers to construct agents that faithfully capture the key behavior of targeted entities. To date, few such methods have been documented in the academic literature.
This article describes a novel method for combining qualitative innovation research (case studies, grounded theory, and sequence analysis) with software engineering techniques to synthesize simulation-ready theories of adoption behavior. A step-by-step example is provided from the transportation domain. The result was a theory of adoption behavior that is sufficiently precise and formal to be expressed in Unified Modeling Language (UML). The article concludes with a discussion of the limitations of the method and recommendations for future applications to the study of the diffusion of innovation.
The purpose of this paper is to contribute to understanding innovation dynamics in services, in particular the link between innovation and productivity. A methodology to explain this link is suggested. Instead of establishing a single, direct connection between innovation and labor productivity, as in earlier approaches in the services literature, a simultaneous equations model is used. We put forward an extended version of the CDM (Crepon, Duguet and Mairesse) model, incorporating two feedback effects and using innovation activities rather than the more restrictive R&D proxy. Activities prior to the innovation implementation are also taken into account, allowing for direct and indirect effects on labor productivity. Moreover, we discuss and handle the oftentimes overlooked methodological problems affecting this relationship. Micro data for ten service sectors in Portugal are used to estimate the model. The existence of a Schumpeterian virtuous cycle is confirmed, pointing to a mechanism reinforcing innovation investment returns. We find that innovation activities have a positive impact on labor productivity, but no evidence was found of a significant direct effect of innovation output. Labor productivity also improves with management capabilities. Relationships with customers, suppliers and cooperation partnerships significantly increase the probability of innovating, suggesting that stimulating organizational networking is a key element in a service firm's innovation strategy.
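A stylized sketch of the extended CDM structure described in the abstract above, written in generic notation; the variable names, functional forms and the single feedback term shown are assumptions, not the paper's exact specification.

```latex
% Stylized, generic CDM-type simultaneous-equations system (illustrative only):
% innovation activities with feedback from productivity, an innovation output
% (knowledge production) equation, and a labor productivity equation.
\begin{align}
  IA_i &= x_i'\beta + \lambda\, q_i + \varepsilon_i   && \text{innovation activities (feedback from productivity)} \\
  k_i  &= z_i'\gamma + \delta\, IA_i + u_i            && \text{innovation output / knowledge production} \\
  q_i  &= w_i'\theta + \pi\, k_i + \nu_i              && \text{labor productivity}
\end{align}
```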
The issue of boundary determination in ecosystems remains problematic, and this is exacerbated in a dynamic, emerging innovation context with actors joining and leaving. As part of wider research on how firms innovate in emerging ecosystems, the boundaries of several early innovation ecosystems were explored.
The wider ecosystem was initially researched using evolutionary approaches, with extensive interviews and mapping. Then five in-depth, firm-focussed case studies were undertaken, and specific innovation ecosystem boundaries were mapped as they emerged and evolved.
The findings point to ‘identity’, a common early approach, being limited as the ecosystem evolves. The influence of competence and relationships plays an increasing role. It is suggested that as the innovation ecosystem develops through its lifecycle, different approaches aligned to Santos and Eisenhardt’s (2005) four foci can be employed, starting with identity, then competence, then power and finally issues of efficiency.
Actor-network theory (ANT) represents a research paradigm applicable to innovation management (IM) research. Its unique ontology of second-degree objectivity through symmetry can be combined with many research contexts, methods, and concepts. This paper summarises the insights from a literature review of 299 Web of Science articles on both ANT and IM. The meta-features of all articles are analysed to identify 25 articles for in-depth analysis. Three ANT literature streams are identified: descriptive, managerial proactive, and participatory proactive. These three streams are found to differ in terms of their specific methods, concepts, and research contexts. An additional subset of the 10 most cited articles is used to validate the findings. We suggest that IM researchers should select a specific ANT approach based on the context and the theorised hypothesis of their research. Knowledge of the options within ANT and how they can be applied to different IM contexts can help IM researchers in maximising research outcomes.
The linkages between disaster and environmental damage are recognized as important to predicting, preventing and mitigating the impact of disasters. Environmental Impact Assessment (EIA) procedures are well developed for non-disaster situations. However, they are conceptually and operationally inappropriate for use in disaster conditions, particularly in the first 120 days after the disaster has begun. The paper provides a conceptual overview of the requirements for an environmental impact assessment procedure appropriate for disaster conditions. These requirements are captured in guidelines for a Rapid Environmental Impact Assessment (REA) for use in disasters. The REA guides the collection and assessment of a wide range of factors which can indicate: (1) the negative impacts of a disaster on the environment, (2) the impacts of environmental conditions on the magnitude of a disaster and (3) the positive or negative impacts of relief efforts on environmental conditions. The REA also provides a foundation for recovery program EIAs, thus improving the overall post-disaster recovery process. The REA is designed primarily for relief cadres, but is also expected to be usable as an assessment tool with disaster victims. The paper discusses the field testing of the REA under actual disaster conditions.
The challenge for implementing an ecosystems approach to environmental decision-making processes, such as spatial planning, is to understand the range, nature and amount of ecosystem services currently provided and the potential for such service provision in the future. The ability to spatially represent ecosystem services is a critical element of the evidence base on which to make decisions about how physical space is used most effectively and sustainably, and about the way people and activities are distributed at different spatial scales. This paper reports on the outcomes of a research project originally undertaken for the UK Department for Environment, Food and Rural Affairs which developed a methodology for mapping ecosystem services using GIS and readily available, existing land use/land cover datasets. Critical components of the methodology are network analysis and stakeholder engagement techniques, used to determine which datasets are appropriate for which services and to define the relevant typology of ecosystem services and their relationship to land use/land cover types. The methodology was developed and tested successfully in the context of green grid (green infrastructure) networks in a major UK regeneration area, the Thames Gateway, to the east of London, and its potential use in impact assessment was further explored through a number of case studies.
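An illustrative sketch of the basic lookup step behind such mapping, assuming a simple scoring of ecosystem services per land-cover class; the classes, services and scores are invented and do not reproduce the project's datasets or its stakeholder-derived typology.

```python
# Hypothetical sketch of scoring ecosystem services from a land-cover grid.
# The classes, services and scores are invented for illustration only.

# land-cover class -> {ecosystem service: indicative provision score 0-3}
service_lookup = {
    "woodland":  {"carbon_storage": 3, "recreation": 2, "flood_regulation": 2},
    "grassland": {"carbon_storage": 1, "recreation": 2, "flood_regulation": 1},
    "urban":     {"carbon_storage": 0, "recreation": 1, "flood_regulation": 0},
}

# A toy 2x3 land-cover grid standing in for a GIS raster.
grid = [["woodland", "grassland", "urban"],
        ["grassland", "woodland", "urban"]]

def service_map(grid, service):
    """Return a grid of provision scores for one ecosystem service."""
    return [[service_lookup[cell][service] for cell in row] for row in grid]

print(service_map(grid, "carbon_storage"))   # [[3, 1, 0], [1, 3, 0]]
```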
One of the requirements for a good Strategic Environmental Assessment (SEA) is its capacity to adjust itself to the planned decision-making process. This paper presents recent experiences involving the application of SEA in Brazil in three different contexts. In the first case, an SEA was conducted to meet a request of the Ministry of Tourism for information to prepare the Development Plan for Sustainable Tourism in the North Coast. The second case is an initiative undertaken by the Secretary of Environment of the State of Bahia for the construction of a seaport-industrial complex in the region of Ilhéus (Bahia). Finally, an SEA commissioned by a group of environmental NGOs to assess options for the development of a mining-metal and chemical-gas complex in the Pantanal Region near the Bolivian and Paraguayan border is presented. The paper highlights the differences in the contexts of the three studies (responsibilities in the decision-making process, stages of the planning process, etc.) as well as in their methodological approaches. Difficulties, gaps, advances and findings in each case are also analysed to assess the effectiveness of each SEA.
Environmental Risk Assessment (ERA) is a powerful set of technical and analytical instruments for analyzing environmental impacts, and it has found application in supporting Decision-Making Processes (DMPs) over the last two decades. However, ERA has not been applied in an integrated way in Strategic Decision-Making (SDM) processes, and there has been no systematic research on approaches and methods for using ERA to support SDM. In this paper, a new approach and methodological system of ERA for the SDM process is set up and then applied to the Principal Coastal Functional Zoning (PCFZ) in Xiamen Bay, China, as a case study to verify the feasibility of the proposed approach and its methodology. The results show that the approach and methodology of ERA for SDM can integrate ERA into the entire SDM process and thereby support the PCFZ directly. Furthermore, this approach avoids or mitigates the dire environmental risks that are sometimes introduced by SDM processes.
The ability to efficiently and effectively reuse ontologies is commonly acknowledged to play a crucial role in the large-scale dissemination of ontologies and ontology-driven technology, and it is thus a prerequisite for the ongoing realization of the Semantic Web. In this article, we give an account of ontology reuse from a process point of view. We present a methodology that can be utilized to systematize and monitor ontology engineering processes in scenarios reusing available ontological knowledge in the context of a particular application. Notably, and by contrast to existing approaches in this field, our aim is to provide means to overcome the poor reusability of existing resources rather than to solve the more general issue of building new, more reusable knowledge components. To do so, we investigate the impact that the application context of an ontology (the tasks the ontology has been created for and will be utilized in) has on the feasibility of a reuse-oriented ontology development strategy, and we provide guidelines that take these aspects into account. The applicability of the methodology is demonstrated through a case study performed in collaboration with an international eRecruitment solution provider.