In this paper, a proposal for storing and retrieving the quantum state of a nonstationary photon wave packet in quantum dots is presented. The input photon considered here has two central frequencies, unlike the usual photon. To store the quantum state of the input photon more stably, we control the magnetic field to transfer coherence from electronic spins to nuclear spins. We also suggest a method for producing a photon with two central frequencies, and point out a possible way to encode quantum information.
The rdfs:seeAlso predicate plays an important role in linking web resources in the Semantic Web. According to the W3C definition, it indicates that the object resource provides additional information about the subject resource. Since providing additional information can take various forms, the definition is generic. In other words, an rdfs:seeAlso link can convey different meanings to users and can represent different kinds of patterns and relationships between web resources. These patterns are unknown and have to be specified to help organizations and individuals interlink and publish their datasets on the Web of Data using the rdfs:seeAlso link. In this paper, we investigate the traditional usages of rdfs:seeAlso and then present a methodology for specifying the patterns of rdfs:seeAlso usage in the Semantic Web. The results of our investigation show that the discovered patterns constitute a significant portion of rdfs:seeAlso usages on the Web of Data.
Sina Weibo, the most popular Chinese social platform, with hundreds of millions of user-contributed images and texts, is growing rapidly. However, the noise between image and text, as well as their incomplete correspondence, makes accurate image retrieval and ranking difficult. In this paper, we propose a deep learning framework that uses visual features, text content, and Weibo popularity to calculate the similarity between an image and its text, training the model to maximize the likelihood of the target description sentence given the training image. In addition, the retrieval results are reranked using the popularity of the image. Comparison experiments on a large-scale Sina Weibo dataset demonstrate the validity of the proposed method.
Because traditional cross-media retrieval technology cannot obtain retrieval information in a timely and accurate manner in the complex learning environment of artificial intelligence, this paper proposes research on cross-media intelligent perception and retrieval analysis technology based on deep learning for education. Based on cross-media theory, this paper analyzes the pre-processing of cross-media data, transforms single media forms such as text, voice, image, and video into a cross-media integration covering network space and physical space, and designs a cross-media intelligent perception platform system. Using a multi-kernel canonical correlation analysis algorithm, this paper develops a new joint learning framework for cross-modal retrieval based on joint feature selection and subspace learning. Furthermore, the algorithm is tested experimentally. The results show that the proposed retrieval analysis technology is significantly better than traditional media retrieval technology: it can effectively identify text semantics and classify visual images, and it better maintains the relevance of data content and the consistency of semantic information, which has important reference value for cross-media applications.
The purpose of this paper is to retrieve and study the highly cited papers of 20 traditional Chinese medicine journals in China, as well as the correlation between their citation frequency and download frequency, in order to provide guidance for improving the influence and academic quality of these journals.
Bibliometric analyses were conducted on 1103 papers from 20 traditional Chinese medicine journals published from 2011 to 2020, retrieved from the China Academic Journal Network Publishing Database (CAJD) in the China National Knowledge Infrastructure (CNKI). SPSS 17.0 software was used to analyze the correlation between citation frequency and download frequency by conducting regression fitting and establishing mathematical models.
The results showed that the 1103 papers were cited 93,051 times in total, an average of 84.36 citations per paper, and were downloaded 2,058,442 times in total, an average of 1866.22 downloads per paper. China Journal of Chinese Materia Medica ranked first in number of papers, total citations, and total downloads. Journal of Chinese Medicinal Materials ranked first in citations per paper. One of Li's papers was cited the most (983 times). There were 629 (57.03%) papers whose first author was from a university. The first authors were distributed across 29 regions and 2 special administrative regions (Macao, Hong Kong) in China. Authors from Beijing published 283 (25.66%) papers, ranking first. The number of papers supported by funds was 882 (79.96%). The correlation analysis showed that the citation frequency and the download frequency of the highly cited papers had a highly positive correlation at both the journal and paper level, regardless of whether the sample data of journals were normally or non-normally distributed. The correlation coefficients at the journal level and the paper level were 0.9765 and 0.6677, respectively; the correlation was stronger at the journal level than at the paper level, and the optimal regression fit in both cases was a cubic polynomial. Among the 1103 papers, there were 684 (62.01%) research papers and 419 (37.99%) review papers. The main citation period of the top 15 papers was from the 2nd to the 6th year after publication, accounting for 78.39% of their citations.
Papers on clinical therapeutics, papers on the pharmacological effects and mechanisms of traditional Chinese medicine, and papers on traditional Chinese medicine and natural medicine were the main sources of the highly cited papers in these journals. Editors should focus on the above-mentioned research areas when selecting manuscripts in order to exploit these excellent sources extensively, while also paying attention to review papers, focusing on major national or key projects, promoting dissemination online, and retaining authors through quality services, so as to improve the influence and academic quality of their journals.
The neural network discussed in this paper is a self-trained network for LArge Memory STorage And Retrieval (LAMSTAR) of information. It employs features such as forgetting, interpolation, extrapolation, and filtering to enhance processing and memory efficiency and to allow zooming in and out of memories. The network is based on modified SOM (Self-Organizing Map) modules and on arrays of link-weight vectors that channel information vertically and horizontally throughout the network. Direct feedback and up/down counting set these link weights, serving as a higher-hierarchy performance-evaluator element that also provides high-level interrupts. Pseudo-random modulation of the link weights prevents dogmatic network behavior. The input word is a coded vector of several sub-words (sub-vectors). These features facilitate very rapid intelligent retrieval and diagnosis over very large memories, with the properties of a self-adaptive expert system with continuously adjustable weights. The authors have applied the network to simple medical diagnosis and fault detection problems.
The Summary-based Object-Oriented Reuse Library System (SOORLS) was developed to support both librarians who manage databases of object-oriented reusable components and software developers who intend to use these components to develop software on the Web. This paper presents the library management functions implemented by SOORLS, with a focus on a software reuse approach based on the summary contents of the library. The cluster-based classification scheme proposed in this paper alleviates the labor-intensive domain analysis problem often attributed to traditional facet-based classification schemes. We then concentrate on the facilities offered by SOORLS' tools, as well as its Web-based architecture, which allows distributed access to reusable components on servers from a variety of platforms.
In this paper, we present a low-complexity method for analysing short and dynamic biomedical sequences. The method uses the Daubechies D4 wavelet in combination with similarity fitness schemes for retrieval. We also present a more traditional way of retrieving dynamic biomedical sequences based on stretching the sequences in time. The first method has been shown to outperform both Fourier-based methods and the Haar wavelet in retrieving biomedical sequences of dynamic lengths.
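The abstract gives no implementation, but the periodised D4 transform it builds on can be sketched as follows; the function names, the two-level "signature", and the Euclidean distance are illustrative assumptions, not the paper's similarity fitness scheme:

```python
import numpy as np

# Daubechies D4 scaling (low-pass) coefficients, orthonormal normalisation
s3 = np.sqrt(3.0)
C = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

def d4_step(x):
    """One level of the periodised D4 transform: returns (approximation, detail)."""
    n = len(x)
    a = np.zeros(n // 2)
    d = np.zeros(n // 2)
    for k in range(n // 2):
        idx = [(2 * k + j) % n for j in range(4)]   # wrap around at the boundary
        a[k] = sum(C[j] * x[idx[j]] for j in range(4))
        # quadrature-mirror high-pass: g[j] = (-1)^j * C[3 - j]
        d[k] = (C[3] * x[idx[0]] - C[2] * x[idx[1]]
                + C[1] * x[idx[2]] - C[0] * x[idx[3]])
    return a, d

def d4_signature(x, levels=2):
    """Coarse approximation coefficients used as a compact sequence signature."""
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, _ = d4_step(a)
    return a

def similarity_distance(x, y, levels=2):
    # Smaller distance between signatures = more similar sequences.
    return float(np.linalg.norm(d4_signature(x, levels) - d4_signature(y, levels)))
```

Because the transform is orthonormal, the signal's energy is preserved across each level, which is what makes distances in the coefficient domain meaningful.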
This paper proposes a classification and retrieval technique for process program reuse, composed of syntax-level and semantic-level processing. The syntax level uses the following process features: phase, paradigm, technique, standard, and application domain. The facet approach is applied here, in which each feature corresponds to a facet. Since no complicated computations are involved, this level is expected to be efficient. The semantic level uses the following process program contents: product, product component, exception, work assignment, tool, and role. These are organized into a semantic network for classification and retrieval. Since process program contents are used, this level is expected to be precise. It may, however, be inefficient, because complicated computations are involved. The proposed technique applies the syntax-level processing before the semantic level. Since most unreusable process programs are filtered out at the syntax level, the semantic level operates on relatively few process programs. Therefore, the technique is expected to be both efficient and precise.
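The two-level idea (cheap facet filtering first, costlier content comparison on the survivors) can be illustrated with a small sketch; the class, facet values, and scoring here are hypothetical, and simple set overlap stands in for the paper's semantic-network matching:

```python
from dataclasses import dataclass

@dataclass
class ProcessProgram:
    name: str
    facets: dict            # syntax level: phase, paradigm, technique, standard, domain
    contents: frozenset     # semantic level: products, components, tools, roles, ...

def retrieve(query_facets, query_contents, library):
    # Syntax level: exact facet matching filters out most unreusable programs cheaply.
    survivors = [p for p in library
                 if all(p.facets.get(f) == v for f, v in query_facets.items())]
    # Semantic level: the costlier content comparison runs only on the few survivors.
    scored = sorted(survivors,
                    key=lambda p: len(p.contents & query_contents),
                    reverse=True)
    return [p.name for p in scored if p.contents & query_contents]
```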
When dealing with massive amounts of primarily read-only data, significant improvements can be made over a distributed DBMS in making these data available to a large network. This paper outlines methods for heuristic query routing and cooperative caching which manage read-only replicas of data in a fully distributed manner on a network of arbitrary topology. These methods ensure that query throughput increases steadily as nodes are added to the network while maintaining good response time. The resulting system is capable of providing automatic resource discovery and information retrieval over a wide-area network without relying on resource directories.
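A minimal sketch of cooperative caching of read-only replicas, assuming a toy chain topology toward a single origin node; the class and field names are illustrative, and the paper's heuristic routing over arbitrary topologies is reduced here to a fixed next hop:

```python
from collections import OrderedDict

class Node:
    """A node caching read-only replicas; misses are routed toward the origin."""
    def __init__(self, name, capacity=2):
        self.name = name
        self.capacity = capacity
        self.cache = OrderedDict()   # LRU cache of read-only replicas
        self.next_hop = None         # toy topology: next node toward the data origin
        self.store = {}              # only the origin node holds master copies

    def query(self, key):
        if key in self.cache:                      # local replica: fast path
            self.cache.move_to_end(key)
            return self.cache[key], self.name
        if key in self.store:                      # master copy at the origin
            return self.store[key], self.name
        value, source = self.next_hop.query(key)   # route the miss onward
        self.cache[key] = value                    # cache cooperatively on the way back
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)         # evict least recently used
        return value, source
```

Because replicas are read-only, caching a copy at every node along the query path needs no invalidation protocol, which is what lets throughput grow as nodes are added.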
This article gives a short outlook on the DDBJ website for depositing, retrieving, and analyzing sequences of genes and genomes.
In this paper, we present a novel scheme for video content representation that exploits spatio-temporal information. A pseudo-object-based shot representation containing more semantics is proposed to measure shot similarity, and a force-competition approach is proposed to group shots into scenes based on the content coherence between shots. Two color-object content descriptors, Dominant Color Histograms (DCH) and Spatial Structure Histograms (SSH), are introduced. To represent temporal content variations, a shot is segmented into several subshots of coherent content, and the shot similarity measure is formulated as a subshot similarity measure that serves shot retrieval. With this shot representation, scene structure can be extracted by analyzing the splitting and merging force competitions at each shot boundary. Experimental results on real-world sports video show that our approach to video shot retrieval achieves the best performance on average recall (AR) and average normalized modified retrieval rank (ANMRR), and experiments on MPEG-7 test videos achieve promising results with the proposed scene extraction algorithm.
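A quantised color histogram compared by histogram intersection is a common baseline for descriptors in the spirit of the DCH; this sketch is that generic baseline, not the paper's exact DCH definition:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Quantised RGB histogram, normalised to sum to 1 (pixels: (N, 3), values 0-255)."""
    q = np.asarray(pixels) * bins // 256                  # per-channel quantisation
    codes = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(codes.astype(int), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """1.0 for identical colour distributions, 0.0 for completely disjoint ones."""
    return float(np.minimum(h1, h2).sum())
```

In a shot-similarity setting, one histogram per subshot and the maximum (or average) pairwise intersection between two shots' subshot histograms would give a temporal-variation-aware score.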
Interpreting the semantics of a photo is a hard problem. However, for storing and indexing large multimedia collections, it is essential to build systems that can automatically extract semantics from digital photos. In this research we show how content and context can be fused to extract semantics from digital photographs. Our experiments show that if we properly model the context associated with media, we can interpret semantics using only part of the high-dimensional content data.
Artificial neural networks are used to retrieve vertical profiles of atmospheric temperature from simulated microwave radiometer data, and global and local retrieval algorithms are compared. The global retrieval experiments show that the overall root mean square error in the retrieved profiles of a test dataset is about 7% lower than the overall error of a linear statistical retrieval, and the retrieval errors are about 0.5 K lower at the 200- and 250-hPa levels. The local experiments show that the differences between the neural network and linear statistical retrieval approaches vary from region to region but are not remarkable. In comparison with the global retrieval, the local retrieval also performs well.
Knowledge representation and similarity measures play an important role in classifying vague legal concepts. To account for fuzziness and context-sensitive effects in the representation of precedents, a fuzzy factor hierarchy is studied. Current distance-based and feature-based similarity measures operate only at the surface level and can do no more than compare objects; a deep-level similarity measure that can evaluate the results of the surface-level one is therefore needed. A structural, factor-based similarity measure that integrates the surface-level and deep-level measures is proposed, along with an argument model based on the proposed knowledge representation and similarity measure. Considering the vague legal concepts in the United Nations Convention on Contracts for the International Sale of Goods (CISG), a fuzzy legal argument system is constructed. The main purpose of the proposed system is to support legal education.
In this paper, we present a content-based image retrieval system. The content of an image includes both low-level features, such as colors and textures, and high-level features, such as spatial constraints and the shapes of relevant regions. Based on object technology, the image features and behaviors are modeled and stored in a database. Images can be retrieved by example (show me images similar to this image) or by selecting properties from pickers such as a sketched shape, a color histogram, a spatial-constraint interface, a list of keywords, or a combination of these. The integration of high- and low-level features in the object-oriented database is an important property of our work.
The processing of information into long-term storage (consolidation of memory) and the retrieval of processed knowledge are not independent of the physiological state of animals and humans. The neuroendocrine system, which comprises central nervous and peripheral components, i.e., peptidergic neurons and the formation of membrane-active steroids in the brain on the one hand, and hormones released into the circulation on the other, is the primary messenger of bodily states. The neuroendocrine system responds rapidly to environmental changes and, in turn, assures optimal conditions for processing information into long-term storage. Retrieval of knowledge is then affected either by the pro-active influence of neuroendocrine principles during learning and consolidation or by their simple presence (tonic actions) during retrieval. These general conclusions can be drawn from studies devoted to the mnemonic effects of the circulating adreno-sympathetic catecholamines epinephrine and norepinephrine, adrenal corticosteroids, and the (neuro)peptide vasopressin. The action of these hormones is central nervous in nature, via direct or indirect mechanisms involving the central nucleus of the amygdala and the hippocampus as major targets.
First I give a brief description of the classical Hopfield model, introducing the fundamental concepts of patterns, retrieval, pattern recognition, neural dynamics, and capacity, and describe the fundamental results obtained in this field by Amit, Gutfreund and Sompolinsky [1], using the non-rigorous replica method, and the rigorous version given by Pastur, Shcherbina and Tirozzi [2] using the cavity method. Then I give a formulation of the theory of Quantum Neural Networks (QNN) in terms of the XY model with Hebbian interaction, and discuss the problem of retrieval and storage. The retrieval states are the states of minimum energy. I apply the estimates found by Lieb [3], which give lower and upper bounds on the free energy and on the expectation of the observables of the quantum model. I also discuss some experiments and the search for the ground state using Monte Carlo dynamics applied to the equivalent classical two-dimensional Ising model constructed by Suzuki et al. [6]. At the end there is a list of open problems.
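The classical Hopfield model described above can be sketched directly: Hebbian couplings plus zero-temperature asynchronous dynamics. This is a minimal illustration of the classical case only, not the quantum XY formulation; the pattern count, network size, and corruption level are arbitrary choices well below the capacity threshold:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, with J_ii = 0."""
    P, N = patterns.shape
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)
    return J

def retrieve(J, state, sweeps=10):
    """Zero-temperature asynchronous dynamics: align each spin with its local field."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

rng = np.random.default_rng(1)
xi = rng.choice([-1, 1], size=(3, 200))    # P = 3 patterns, N = 200 spins (low load)
J = hebbian_weights(xi)
probe = xi[0].copy()
probe[:20] *= -1                           # corrupt 10% of the spins
overlap = retrieve(J, probe) @ xi[0] / 200
```

At load P/N well below the capacity 0.138N, the corrupted probe should relax back to (or very near) the stored pattern, so the overlap with `xi[0]` is expected to be close to 1.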