professional support. The good modularization makes development and engineering a lot more efficient. For example, it is easy to combine different neural structures to formulate powerful hybrid models, or replace one module with others. Thus, we could easily build hybrid and composite recommendation models to simultaneously capture different characteristics and factors.
2.4 On Potential Limitations
Are there really any drawbacks or limitations to using deep learning for recommendation? In this section, we aim to address several commonly cited arguments against the use of deep learning in recommender systems research.
• Interpretability.
Despite its success, deep learning is well known to behave as a black box, and providing explainable predictions seems to be a genuinely challenging task. A common argument against deep neural networks is that the hidden weights and activations are generally non-interpretable, limiting explainability. However, this concern has largely been eased by the advent of neural attention models, which have paved the way for deep neural models that enjoy improved interpretability [126, 146, 178]. While interpreting individual neurons still poses a challenge for neural models (not only in recommender systems), present state-of-the-art models are already capable of some degree of interpretability, enabling explainable recommendation. We discuss this issue in more detail in the open issues section; a minimal illustration of attention-based explanation is sketched after this list.
• Data Requirement.
A second possible limitation is that deep learning is known to be data-hungry, in the sense that it requires sufficient data to fully support its rich parameterization. However, compared with other domains (such as language or vision) in which labeled data is scarce, it is relatively easy to gather a significant amount of data in the context of recommender systems research. Million- and billion-scale datasets are commonplace not only in industry but are also released as academic datasets.
• Extensive Hyperparameter Tuning.
A third well-established argument against deep learning is the need for extensive hyperparameter tuning. However, we note that hyperparameter tuning is not a problem exclusive to deep learning but one of machine learning in general (e.g., regularization factors and the learning rate similarly have to be tuned for traditional matrix factorization; see the second sketch after this list). Granted, deep learning may introduce additional hyperparameters in some cases. For example, a recent work [145], an attentive extension of the traditional metric learning algorithm [60], introduces only a single additional hyperparameter.
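To make the interpretability point concrete, the first sketch below is a minimal, hypothetical attention-pooling layer written in PyTorch (it is not taken from the cited works [126, 146, 178]): the softmax weights over a user's interacted items are exposed explicitly, so the items that dominate the pooled user profile can be surfaced as a rough explanation of a recommendation.

```python
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    """Attention over a user's interaction history; the learned weights
    can be inspected as a (rough) explanation of the recommendation."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each historical item

    def forward(self, item_embs: torch.Tensor):
        # item_embs: (n_history_items, dim)
        weights = torch.softmax(self.score(item_embs).squeeze(-1), dim=0)
        user_vec = weights @ item_embs  # attention-weighted user profile
        return user_vec, weights        # weights are directly inspectable

# Toy usage: which of 5 history items contributed most to the profile?
history = torch.randn(5, 16)             # 5 items, 16-dim embeddings
profile, attn = AttentivePooling(16)(history)
print(attn, attn.argmax().item())        # largest weight ~ "explanation"
```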
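To illustrate that extensive tuning is not specific to deep models, the second sketch runs a small grid search over the learning rate and regularization strength of a plain SGD matrix factorization; the function names and the toy rating matrix are hypothetical, and the same outer loop would wrap any deep recommender.

```python
import itertools
import numpy as np

def mf_rmse(R, k=8, lr=0.01, reg=0.1, epochs=50, seed=0):
    """Train plain SGD matrix factorization on the observed entries of R
    and return the training RMSE (a held-out split would be used in practice)."""
    rng = np.random.default_rng(seed)
    users, items = np.nonzero(R)
    P = 0.1 * rng.standard_normal((R.shape[0], k))
    Q = 0.1 * rng.standard_normal((R.shape[1], k))
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    preds = np.einsum("ud,ud->u", P[users], Q[items])
    return float(np.sqrt(np.mean((R[users, items] - preds) ** 2)))

# Grid search over two "traditional" hyperparameters: learning rate and
# regularization factor -- the same knobs a deep model would also expose.
R = np.random.default_rng(1).integers(1, 6, size=(20, 15)).astype(float)
grid = itertools.product([0.005, 0.01, 0.05], [0.01, 0.1])
best_lr, best_reg = min(grid, key=lambda hp: mf_rmse(R, lr=hp[0], reg=hp[1]))
print("best learning rate / regularization:", best_lr, best_reg)
```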
3 DEEP LEARNING BASED RECOMMENDATION: STATE-OF-THE-ART
In this section, we first introduce the categories of deep learning based recommendation models and then highlight state-of-the-art research prototypes, aiming to identify the most notable and promising advances in recent years.
3.1 Categories of deep learning based recommendation models
To provide a bird's-eye view of this field, we classify the existing models based on the types of deep learning techniques they employ. We further divide deep learning based recommendation models into the following two categories. Figure 1 summarizes the classification scheme.
•
Recommendation with Neural Building Blocks. In this category, models are divided into subcategories in conformity with the aforementioned deep learning models: MLP, AE, CNN, RNN, RBM, NADE, AM, AN, and DRL based recommender systems. The deep learning technique in use determines the applicability of the recommendation model. For instance, MLPs can easily model the non-linear interactions between users and items; CNNs are capable of extracting local and global representations from heterogeneous