Recommender Systems (RS) have gained much attention in industry and academia. An RS offers valuable suggestions to users while they interact with a website or application. With the advancements in the mobile environment (ME), users' interest has turned to movie recommendation systems. Such a system overcomes the problem of surplus information about movies and offers users only the relevant items by analyzing their interests and preferences. This work presents a hybrid movie recommendation system based on unsupervised clustering and supervised deep learning. The dataset used is the IMDB movie reviews dataset. The clustering technique used is adaptive density-based clustering, or adaptive DBSCAN. Next, the similarity calculation is performed using the cosine similarity metric. Finally, a Convolutional Neural Network (CNN) model integrated with Adaptive Red Deer (ARD) Optimization returns the most relevant entries on the movie list with higher accuracy. The implementation is carried out in the MATLAB simulation environment. The system's performance is measured in terms of accuracy, recall, precision, F-measure, RMSE and MAE.
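To make the pipeline above concrete, the following Python sketch shows only the clustering and similarity stages on a handful of placeholder reviews: standard DBSCAN with a cosine metric over TF-IDF vectors, followed by cosine-similarity ranking. It is an illustration under assumed parameters, not the paper's adaptive DBSCAN or its ARD-optimized CNN.

```python
# Hypothetical sketch of the clustering and similarity stages only
# (not the paper's adaptive DBSCAN or ARD-optimized CNN).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_similarity

reviews = ["a gripping thriller with a strong lead",
           "slow paced drama, beautiful cinematography",
           "fast action thriller, weak plot"]          # placeholder IMDB-style reviews

X = TfidfVectorizer().fit_transform(reviews)

# Density-based clustering of review vectors; eps/min_samples are illustrative,
# the paper adapts these parameters rather than fixing them.
labels = DBSCAN(eps=0.8, min_samples=2, metric="cosine").fit_predict(X)

# Cosine similarity between a query movie's reviews and the rest of the collection.
sims = cosine_similarity(X[0], X).ravel()
ranking = np.argsort(-sims)
print(labels, ranking)
```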
Recommender systems are becoming a popular and important set of personalization techniques that assist individual users in navigating the rapidly growing amount of information. A good recommender system should not only find the objects preferred by users, but also help users discover their personalized tastes. The former corresponds to high accuracy of the recommendation, the latter to high diversity. A big challenge is to design an algorithm that provides both highly accurate and diverse recommendations. Traditional recommendation algorithms only take into account the contributions of similar users; thus, they tend to recommend popular items, ignoring the diversity of recommendations. In this paper, we propose a recommendation algorithm that considers the effects of both similar and dissimilar users under the framework of collaborative filtering. Extensive analyses on three datasets, namely MovieLens, Netflix and Amazon, show that our method performs much better than the standard collaborative filtering algorithm in terms of both accuracy and diversity.
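A minimal sketch of the idea of letting dissimilar (negatively correlated) users contribute alongside similar ones, assuming a mean-centered user-based CF formulation; the weighting scheme below is illustrative, not the paper's exact algorithm.

```python
# Minimal user-based CF sketch in which negatively correlated ("dissimilar")
# users also contribute to the prediction; illustrative only.
import numpy as np

R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)   # toy user-item ratings, 0 = unrated

mask = R > 0
means = R.sum(1) / np.maximum(mask.sum(1), 1)
centered = np.where(mask, R - means[:, None], 0.0)

# Pearson-like similarity on mean-centered ratings (can be negative).
norms = np.linalg.norm(centered, axis=1, keepdims=True) + 1e-9
S = (centered / norms) @ (centered / norms).T
np.fill_diagonal(S, 0.0)

def predict(u, i):
    w = S[u] * mask[:, i]                    # keep only users who rated item i
    if np.abs(w).sum() == 0:
        return means[u]
    # dissimilar users pull the prediction away from their (centered) rating
    return means[u] + (w @ centered[:, i]) / np.abs(w).sum()

print(round(predict(0, 2), 2))
```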
Recommender systems have developed rapidly and successfully. They aim to help users find relevant items from a potentially overwhelming set of choices. However, most existing recommender algorithms focus on the traditional user-item similarity computation rather than incorporating social interests into the recommender system. Each user has their own field of preference, and they may influence their friends' preferences in their field of expertise when social interest is considered in their friends' item collecting. In order to model this social interest, in this paper we propose a simple method to compute users' social interest in specific items in the recommender system, and then integrate this social interest with similarity preference. The experimental results on two real-world datasets, Epinions and Friendfeed, show that this method can significantly improve not only the algorithmic accuracy (precision) but also the diversity of recommendations.
Users' ratings in recommender systems can be predicted from their historical data, item content, or preferences. In the recent literature, scientists have used complex networks to model a user-user or an item-item network of the RS. Community detection methods can also cluster users or items to improve the prediction accuracy further. However, the number of links in a modeled network is often too large for proper clustering, and community clustering is an NP-hard problem with high computational complexity. Thus, we combine fuzzy link importance and K-core decomposition in complex network models to provide more accurate rating predictions while reducing the computational complexity. The experimental results show that the proposed method can improve the prediction accuracy by 4.64% to 5.71% on the MovieLens dataset and avoid solving NP-hard problems in community detection compared with existing methods. Our research reveals that the links in a modeled network can be reasonably managed by defining fuzzy link importance, and that K-core decomposition can provide a simple clustering method with relatively low computational complexity.
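The K-core step can be illustrated as follows; the sketch replaces the paper's fuzzy link importance with a plain similarity cutoff and uses networkx, so both the threshold and the toy graph are assumptions.

```python
# Illustrative K-core step only: threshold a user-user similarity graph and
# extract cores; the paper's fuzzy link importance is replaced here by a
# simple similarity cutoff.
import networkx as nx

similarities = {("u1", "u2"): 0.9, ("u1", "u3"): 0.7,
                ("u2", "u3"): 0.8, ("u3", "u4"): 0.2}

G = nx.Graph()
for (a, b), s in similarities.items():
    if s >= 0.5:                      # keep only "important" links (assumed cutoff)
        G.add_edge(a, b, weight=s)

core_number = nx.core_number(G)       # coreness of every retained user
clusters = nx.k_core(G, k=2)          # densest part used as a user cluster
print(core_number, list(clusters.nodes))
```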
Network diffusion processes play an important role in solving the information overload problem. It has been shown that diffusion-based recommendation methods have the advantage of generating both accurate and diverse recommendations for online users. Although numerous existing works use rating information as link weights or thresholds to retain useful links, few studies use rating information to evaluate the recommendation results. In this paper, we measure the average rating of the recommended products and find that diffusion-based recommendation methods run the risk of recommending low-rated products to users. In addition, we use the rating information to improve network-based recommendation algorithms. The idea is to aggregate the diffusion results on multiple user-item bipartite networks, each of which contains only the links of certain ratings. By tuning the parameters, we find that the new method can sacrifice a small amount of recommendation accuracy to improve the average rating of the recommended products.
With the rapid growth of commerce and the development of Internet technology, a large number of user consumption preferences have become available for online market intelligence analysis. A critical demand is to reduce the impact of information overload by using recommendation algorithms. Rooted in physical dynamics, network-based recommendation algorithms based on mass diffusion have been popular for their simplicity and efficiency. In this paper, to address the problem that most network-based recommendation algorithms cannot distinguish how much a user likes the collected items, and to make the resource configuration more reasonable, we propose a novel method called biased network-based inference (BNBI). The proposed method treats rating systems and non-rating systems differently and measures a user's preference for items by means of item similarity. The proposed method is evaluated on real datasets (MovieLens and Last.FM) and compared with some classic existing recommendation algorithms. Experimental results show that the proposed method is more effective and can reduce the impact of item diversity and discover the real interests of users.
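Both diffusion-based abstracts above build on the standard (unweighted) network-based inference, or mass-diffusion, baseline; the numpy sketch below implements that baseline on a toy user-item adjacency matrix. It is the baseline only, not BNBI or the rating-aggregated variant.

```python
# Standard network-based inference / mass-diffusion baseline: resource spreads
# from the user's collected items to users, then back to items, each time
# divided equally by node degree.
import numpy as np

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)   # users x items adjacency (collected = 1)

k_item = A.sum(axis=0)                       # item degrees
k_user = A.sum(axis=1)                       # user degrees

def recommend(u):
    f = A[u].copy()                          # step 0: unit resource on collected items
    user_res = A @ (f / np.maximum(k_item, 1))           # step 1: items -> users
    f_final = A.T @ (user_res / np.maximum(k_user, 1))   # step 2: users -> items
    f_final[A[u] > 0] = -np.inf              # do not re-recommend collected items
    return np.argsort(-f_final)

print(recommend(0))
```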
Collaborative Filtering (CF) is currently one of the most popular and widely used personalization techniques. It generates personalized predictions based on the assumption that users with similar tastes prefer similar items. One of the major drawbacks of CF from a computational point of view is its limited scalability, since the computational effort required by CF grows linearly with both the number of available users and the number of items. This work proposes a novel, efficient variant of CF employed over a multidimensional content-addressable space. The proposed approach heuristically decreases the computational effort required by the CF algorithm by limiting the search process only to potentially similar users. Experimental results demonstrate that the proposed heuristic approach is capable of generating predictions with high levels of accuracy, while significantly improving performance in comparison with traditional implementations of CF.
Recommender systems have already been engaging multiple criteria for the production of recommendations. Such systems, referred to as multicriteria recommenders, demonstrated early on the potential of applying Multi-Criteria Decision Making (MCDM) methods to facilitate recommendation in numerous application domains. On the other hand, systematic implementation and testing of multicriteria recommender systems in the context of real-life applications still remains rather limited. Previous studies dealing with the evaluation of recommender systems have outlined the importance of carrying out careful testing and parameterization of a recommender system before it is actually deployed in a real setting. In this paper, the experimental analysis of several design options for three proposed multiattribute utility collaborative filtering algorithms is presented for a particular application context (recommendation of e-markets to online customers), under conditions similar to those expected during actual operation. The results of this study indicate that the performance of recommendation algorithms depends on the characteristics of the application context, as these are reflected in the properties of the evaluation data set. Therefore, it is important to experimentally analyze the various design choices for multicriteria recommender systems before their actual deployment.
Matrix factorization models often reveal the low-dimensional latent structure in high-dimensional spaces while bringing space efficiency to large-scale collaborative filtering problems. Improving the training and prediction time efficiency of these models is also important, since an accurate model may raise practical concerns if it is slow to capture the changing dynamics of the system. For the training task, powerful improvements have been proposed, especially using SGD, ALS, and their parallel versions. In this paper, we focus on the prediction task and combine matrix factorization with approximate nearest neighbor search methods to improve the efficiency of top-N prediction queries. Our efforts result in a meta-algorithm, MMFNN, which can employ various common matrix factorization models, drastically improve their prediction efficiency, and still perform comparably to standard prediction approaches, or sometimes even better, in terms of predictive power. Using various batch, online, and incremental matrix factorization models, we present detailed empirical analysis results on many large implicit feedback datasets from different application domains.
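The prediction task targeted here can be sketched as: factorize the feedback matrix, then answer a top-N query by scoring all items against the user factor. The exhaustive dot-product scan below is what an approximate nearest-neighbor index would replace; the plain SVD factorization is an assumed stand-in for the various MF models the paper supports.

```python
# Sketch of the top-N prediction task: factorize, then score items for a user.
# The exhaustive scoring loop is what an ANN index would approximate.
import numpy as np

R = np.random.default_rng(0).integers(0, 2, size=(50, 200)).astype(float)  # toy implicit feedback

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 10
P, Q = U[:, :k] * s[:k], Vt[:k, :].T        # user and item factors

def top_n(user, n=5):
    scores = Q @ P[user]                     # exhaustive scoring over all items
    scores[R[user] > 0] = -np.inf            # mask already-consumed items
    return np.argpartition(-scores, n)[:n]

print(top_n(3))
```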
A recommender system predicts user preferences by mining users' historical behavior data. This paper proposes a social recommendation method combining trust relationships and distance metric factorization. On the one hand, recommender systems suffer from a cold start problem, which can be effectively alleviated by adding social relations. At the same time, to alleviate the sparsity of the trust matrix, we use the Jaccard similarity coefficient and the Dijkstra algorithm to reconstruct the trust matrix and explore potential user trust relationships. On the other hand, the traditional matrix factorization algorithm models preferences as the dot product of user and item latent factors; however, the dot product does not satisfy the triangle inequality property, which affects the final recommendation quality. The primary motivation behind our approach is to combine the best of both worlds and mitigate the inherent weaknesses of each paradigm. By combining the advantages of the two ideas, it is demonstrated that our algorithm can enhance recommendation performance and improve cold start behavior in recommender systems.
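A hedged sketch of the trust-matrix densification step described above: Jaccard similarity over explicit trust lists combined with shortest-path (Dijkstra) distances in the trust graph to infer indirect trust. The 50/50 weighting and the max with direct trust are illustrative choices, not the paper's formulation.

```python
# Illustrative trust-matrix reconstruction: Jaccard on trust lists plus
# Dijkstra hop distances for indirect trust. Weights are assumptions.
import networkx as nx

trusts = {"a": {"b", "c"}, "b": {"c"}, "c": {"d"}, "d": set()}

def jaccard(u, v):
    inter = trusts[u] & trusts[v]
    union = trusts[u] | trusts[v]
    return len(inter) / len(union) if union else 0.0

G = nx.DiGraph((u, v) for u, vs in trusts.items() for v in vs)
dist = dict(nx.all_pairs_dijkstra_path_length(G))    # hop distances in the trust graph

def inferred_trust(u, v):
    direct = 1.0 if v in trusts[u] else 0.0
    indirect = 1.0 / dist[u][v] if v in dist.get(u, {}) and dist[u][v] > 0 else 0.0
    return max(direct, 0.5 * jaccard(u, v) + 0.5 * indirect)

print(round(inferred_trust("a", "d"), 2))
```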
Recommender systems combine research from user profiling, information filtering and artificial intelligence to provide users with more intelligent information access. They have proven to be useful in a range of Internet and e-commerce applications. Recent research has shown that a content-based (or case-based) perspective on collaborative filtering for recommendation can provide significant benefits in decision support accuracy over traditional collaborative techniques, particularly as dataset sparsity increases. These benefits derive both from the use of more sophisticated case-based similarity metrics and from the proactive maintenance of item similarity knowledge using data mining. This article presents a natural next step in this ongoing research to improve the quality of recommender systems by validating these findings in the context of more complex models of collaborative filtering, as well as by demonstrating that such techniques also preserve recommendation diversity, one of the key issues affecting traditional recommender systems.
Recommender systems bring together ideas from information retrieval and filtering, user profiling, and machine learning in an attempt to provide users with more proactive and personalized information systems. Forwarded as a response to the information overload problem, recommender systems have enjoyed considerable theoretical and practical successes, with a range of core techniques and a compelling array of evaluation studies to demonstrate success in many real-world domains. That said, there is much yet to understand about the strengths and weaknesses of recommender systems technologies and in this article, we make a fine-grained analysis of a successful case-based recommendation approach. We describe a detailed, fine-grained ablation study of similarity knowledge and similarity metric contributions to improved system performance. In particular, we extend our earlier analyses to examine how measures of interestingness can be used to identify and analyse relative contributions of segments of similarity knowledge. We gauge the strengths and weaknesses of knowledge components and discuss future work as well as implications for research in the area.
Increasing availability of information has furthered the need for recommender systems across a variety of domains. These systems are designed to tailor each user's information space to suit their particular information needs. Collaborative filtering is a successful and popular technique for producing recommendations based on similarities in users' tastes and opinions. Our work focuses on these similarities and on the fact that current techniques for defining which users contribute to recommendations are in need of improvement.
In this paper we propose the use of trustworthiness as an improvement to this situation. In particular, we define and empirically test a technique for eliciting trust values for each producer of a recommendation based on that user's history of contributions to recommendations.
We compute a recommendation range to present to a target user. This is done by leveraging under/overestimate errors in users' past contributions in the recommendation process. We present three different models to compute this range. Our evaluation shows how this trust-based technique can be easily incorporated into a standard collaborative filtering algorithm and we define a fair comparison in which our technique outperforms a benchmark algorithm in predictive accuracy.
We aim to show that the presentation of absolute rating predictions to users is more likely to reduce user trust in the recommendation system than presentation of a range of rating predictions. To evaluate the trust benefits resulting from the transparency of our recommendation range techniques, we carry out user-satisfaction trials on BoozerChoozer, a pub recommendation system. Our user-satisfaction results show that the recommendation range techniques perform up to twice as well as the benchmark.
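One possible reading of the recommendation-range computation described above is sketched below, assuming the range is derived from each neighbor's history of signed prediction errors; the paper defines three different models for this aggregation, and the split into mean under- and overestimates here is only an illustration.

```python
# Hypothetical range computation from past under/overestimate errors; the
# sign convention and the use of plain means are assumptions, not the paper's models.
import numpy as np

point_prediction = 3.6
neighbor_errors = np.array([-0.4, 0.2, -0.1, 0.5, -0.3])   # predicted minus actual, past cases

under = neighbor_errors[neighbor_errors < 0]    # past underestimates (predicted too low)
over = neighbor_errors[neighbor_errors > 0]     # past overestimates (predicted too high)

low = point_prediction - (over.mean() if over.size else 0.0)     # correct for past overestimation
high = point_prediction - (under.mean() if under.size else 0.0)  # correct for past underestimation
print(f"recommendation range: [{low:.2f}, {high:.2f}]")
```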
Collaborative Filtering (CF) is a popular technique employed by Recommender Systems, a term used to describe intelligent methods that generate personalized recommendations. Some of the most efficient approaches to CF are based on latent factor models and nearest neighbor methods, and have received considerable attention in recent literature. Latent factor models can tackle some fundamental challenges of CF, such as data sparsity and scalability. In this work, we present an optimal scaling framework to address these problems using Categorical Principal Component Analysis (CatPCA) for the low-rank approximation of the user-item ratings matrix, followed by a neighborhood formation step. CatPCA is a versatile technique that utilizes an optimal scaling process where original data are transformed so that their overall variance is maximized. We considered both smooth and non-smooth transformations for the observed variables (items), such as numeric, (spline) ordinal, (spline) nominal and multiple nominal. The method was extended to handle missing data and incorporate differential weighting for items. Experiments were executed on three data sets of different sparsity and size, MovieLens 100k, 1M and Jester, aiming to evaluate the aforementioned options in terms of accuracy. A combined approach with a multiple nominal transformation and a "passive" missing data strategy clearly outperformed the other tested options for all three data sets. The results are comparable with those reported for single methods in the CF literature.
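A simplified stand-in for the two-stage pipeline above: a low-rank approximation of the ratings matrix followed by neighborhood formation. Plain TruncatedSVD replaces CatPCA's optimal scaling and zeros replace the "passive" missing-data handling, so the sketch captures only the overall structure, not the paper's contribution.

```python
# Low-rank approximation + neighborhood formation, with TruncatedSVD as an
# assumed stand-in for CatPCA and zero-filled missing ratings.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
R = rng.integers(0, 6, size=(100, 40)).astype(float)    # toy ratings; 0 plays the role of "missing"

latent = TruncatedSVD(n_components=8, random_state=0).fit_transform(R)

nn = NearestNeighbors(n_neighbors=11, metric="cosine").fit(latent)
_, idx = nn.kneighbors(latent[[0]])                     # neighborhood of user 0
print(idx.ravel()[1:])                                  # drop the user itself
```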
Recommender systems have proven to be an effective method for dealing with the problem of information overload when finding interesting products. It remains a challenge to increase the accuracy and diversity of recommendation algorithms to fulfill users' preferences. To provide a better solution, in this paper we propose a novel recommendation algorithm based on a heterogeneous diffusion process on a user-object bipartite network. This algorithm generates personalized recommendation results on the basis of the physical dynamics of resource diffusion, which is influenced by objects' degrees and users' interest degrees. Detailed numerical analysis on two benchmark datasets shows that the presented algorithm achieves high accuracy and also generates more diverse recommendations.
Interest in image annotation and recommendation has increased due to the ever-rising amount of data uploaded to the web. Despite the many efforts undertaken so far, accuracy and efficiency still remain open problems. Here, a complete image annotation and tourism recommender system is proposed. It is based on probabilistic latent semantic analysis (PLSA) and hypergraph ranking, exploiting the visual attributes of the images and the semantic information found in image tags and geo-tags. In particular, semantic image annotation relies on the PLSA, exploiting the textual information in image tags. It is further complemented by visual annotation based on visual image content classification. Tourist destinations strongly related to a query image are recommended using hypergraph ranking enhanced by enforcing group sparsity constraints. Experiments were conducted on a large image dataset of Greek sites collected from Flickr. The experimental results demonstrate the merits of the proposed model. Semantic image annotation by means of the PLSA has achieved an average precision of 92% at 10% recall. The accuracy of content-based image classification is 82.6%. An average precision of 92% is measured at 1% recall for tourism recommendation.
Collaborative filtering methods are widely accepted and used for item recommendation in various applications and domains. Their simplicity and ability to provide recommendations without the need for specific domain knowledge make them widely used in both the academic and the industrial community. However, the continuous dynamics of the system make them vulnerable to different kinds of changes, such as changes in users' preferences or the appearance of new users and items in the system. This is particularly pronounced in user-based collaborative filtering, where recommendations are based on both the user's and the nearest neighbors' long-term profiles. In this work, an approach is presented that recognizes some of these changes and provides an upgraded model with improved performance. Such changes include deviations in the user's mean ratings, changes in the neighbors' similarities, as well as deviations from the neighbors' mean ratings. For each of the proposed improvements, a set of matching parameters is identified that forms the user's long-term profile. The performance of the proposed models compared to standard user-based collaborative filtering methods is evaluated in terms of prediction accuracy. The experimental results, obtained on a real data set, show a sound improvement when adjusting for deviations from the user's mean ratings and for the neighbors' similarities, as well as a promising improvement when adjusting for the neighbors' mean ratings.
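One way to read the mean-rating deviation adjustment is sketched below: the user's static long-term mean is nudged toward a recent-window mean before it is used in the usual mean-centered prediction. The windowing and the damping factor are assumptions, not parameters from the paper.

```python
# Hypothetical drift adjustment of a user's mean rating; window size and
# damping factor are assumed for illustration.
import numpy as np

ratings_over_time = np.array([4, 4, 5, 4, 3, 3, 2, 3], dtype=float)  # one user's ratings, oldest first
long_term_mean = ratings_over_time.mean()
recent_mean = ratings_over_time[-3:].mean()     # assumed recent window of 3 ratings

alpha = 0.5                                     # assumed damping toward recent behaviour
adjusted_mean = long_term_mean + alpha * (recent_mean - long_term_mean)

# adjusted_mean would then replace the static user mean in the usual
# mean-centered prediction:
# r_hat(u, i) = mean_u + sum_v sim(u, v) * (r_vi - mean_v) / sum_v |sim(u, v)|
print(round(long_term_mean, 2), round(adjusted_mean, 2))
```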
Clustering-based recommender systems restrict the search for similar users to small user clusters, providing fast recommendations on large-scale datasets. Groups can then be naturally distributed across different data partitions, scaling up the number of users the recommender system can handle. Unfortunately, as the number of users and items included in a cluster solution increases, the precision of a clustering-based recommender system decreases. We present a novel approach that introduces a cluster-based distance function used for neighborhood computation. In our approach, clusters generated from the training data provide the basis for neighborhood selection. Then, to expand the search for relevant users, we use a novel measure that exploits the global cluster structure to infer distances to users outside the cluster. Empirical studies on five widely known benchmark datasets show that our proposal is very competitive in terms of precision, recall, and NDCG. However, the strongest point of our method lies in its scalability, reaching speedups of 20× in a sequential computing evaluation framework and up to 100× in a parallel architecture. These results show that an efficient implementation of our cluster-based CF method can handle very large datasets while also providing good results in terms of precision, avoiding the high computational costs involved in the application of more sophisticated techniques.
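The cluster-bounded neighborhood idea can be sketched as follows: neighbors are drawn from the user's own KMeans cluster, and distances to users outside it are approximated through cluster centroids. The centroid shortcut is an assumed stand-in for the paper's cluster-based distance function.

```python
# Sketch of cluster-bounded neighborhoods with a cheap cross-cluster distance;
# the centroid-based approximation is an assumption, not the paper's measure.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
profiles = rng.random((500, 30))                 # toy user profile vectors

km = KMeans(n_clusters=10, random_state=0).fit(profiles)
labels, centroids = km.labels_, km.cluster_centers_

def distance(u, v):
    if labels[u] == labels[v]:                   # exact distance inside the cluster
        return np.linalg.norm(profiles[u] - profiles[v])
    # cheap approximation across clusters: user -> own centroid -> other centroid
    return (np.linalg.norm(profiles[u] - centroids[labels[u]])
            + np.linalg.norm(centroids[labels[u]] - centroids[labels[v]]))

print(round(distance(0, 1), 3))
```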
Recommender systems' evaluation is usually based on predictive accuracy and information retrieval metrics, with better scores meaning recommendations are of higher quality. However, new algorithms are constantly developed, and comparing algorithm results within an evaluation framework is difficult since different settings are used in the design and implementation of experiments. In this paper, we propose a guidelines-based approach that can be followed to reproduce experiments and results within an evaluation framework. We have evaluated our approach using a real dataset and well-known recommendation algorithms and metrics, to show that it can be difficult to reproduce results if certain settings are missing, resulting in more evaluation cycles being required to identify the optimal settings.
In this paper we describe a general framework for parallel optimization based on the island model of evolutionary algorithms. The framework runs a number of optimization methods in parallel with periodic communication. In this way, it essentially creates a parallel ensemble of optimization methods. At the same time, the system contains a planner that decides which of the available optimization methods should be used to solve the given optimization problem and changes the distribution of such methods during the run of the optimization. Thus, the system effectively solves the problem of online parallel portfolio selection.
The proposed system is evaluated on a number of common benchmarks with various problem encodings, as well as on two real-life problems: optimization in recommender systems and the training of neural networks for the control of electric vehicle charging.
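A toy, sequential simulation of the island model described above: each island runs a simple (1+1)-style optimizer on the same objective, with the best solution migrating periodically. The planner that reallocates methods between islands, and real parallelism, are omitted; the objective, step sizes, and migration period are all assumptions.

```python
# Toy sequential island model: four "islands" hill-climb the same objective
# with different step sizes, periodically migrating the best solution.
import numpy as np

def sphere(x):                       # simple benchmark objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(3)
islands = [rng.normal(size=5) for _ in range(4)]   # one current solution per island
steps = [0.5, 0.1, 0.05, 0.01]                     # each island uses a different step size

for generation in range(200):
    for i, x in enumerate(islands):
        candidate = x + rng.normal(scale=steps[i], size=5)   # (1+1)-style mutation
        if sphere(candidate) < sphere(x):
            islands[i] = candidate
    if generation % 25 == 0:                                  # periodic migration
        best = min(islands, key=sphere)
        islands[rng.integers(len(islands))] = best.copy()

print(round(sphere(min(islands, key=sphere)), 4))
```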