Prior to describing the noteworthy properties and features that form most trust and reputation models, both concepts ought to be presented. Trust may be expressed in multiple ways depending on the enforcement environment. Misztal [79] defined trust as “the belief that the person, who has a degree of freedom to disappoint our expectations, will meet an obligation under all circumstances over which they have control,” whereas Mohammadi et al. [80] defined it as “the risk of accepting or denying a decision,” to name but a couple. As a computer science concept, we may define trust as the mechanism to evaluate, establish, maintain, and revoke trustworthy relationships between entities of the same or different networks within one or multiple environments. In the case of reputation, Mui et al. [82] contemplated it as “a perception that an agent creates through past behaviors about its intentions and norms.” Thus, reputation is interpreted as a feasible methodology to foresee an agent’s trust. Notwithstanding, reputation models are not the only methodology, as role-based trust models [20] and identity-based trust models [21] could also be considered methodologies to predict an agent’s trustworthiness.
Trust and reputation models in turn comprise a set of properties (generic characteristics employed by any trust model, regardless of whether it is role based, attestation based, reputation based, etc.) and features (specific characteristics considered only by reputation-based trust models) that are understood independently of enforcement scenarios. Nevertheless, how these properties and features are addressed by the authors allows differentiating multiple trust and reputation approaches within the same enforcement scenario, as well as obtaining more accurate outputs. Therefore, we consider it crucial to briefly describe such properties and features before detailing how they are technically addressed in the proposals introduced later in Section 4. Through the process of identifying a set of general properties and features in trust and reputation models, we have grouped them into the three main groups described in the following sections.
2.1.1 Properties.
In the first place, the properties group (Figure 1, left) encloses a list of attributes that have been considered since the origins of trust and reputation standardization efforts [117].
One of the uppermost factors directly affecting trust is time and the impact it has on trust and reputation models. Traditionally, dynamism has been contemplated as an intrinsic variable that empowers trust and reputation models to adapt to conceivable changes over time. Hence, this property is the most contemplated among trust and reputation models [23, 51, 120]. These models are also designed to offer a set of services based on particular scenarios. Thus, the context-dependence property permits adjusting design and deployment requirements [116] considering characteristics such as the character of entities, interactions between entities, application environments, and so on. Such is the importance of time that dynamism is not the only property related to it: context-dependence characteristics are sensitive to the decay of time, and in consequence, their values and the context itself may be altered over time.
Another prominent property of trust and reputation models is the quantification of the trust score. By means of this property, a model symbolizes the trust and reputation level of an entity following continuous [28] or discrete [43] quantification values. Besides, the quantification property is indirectly employed in the feedback propagation process with third parties since, depending on the quantification approach taken, recommendations may be based on a particular range of real values or on a finite set of labels, respectively. With regard to information sharing, trust and reputation models must guarantee the user’s data protection, since the integrity of trust and reputation information is a transcendental property for accomplishing an accurate and secure model. This property therefore aims to ensure that digital information remains uncorrupted over its entire lifecycle and can be modified, without tampering with its validity, only by those authorized to do so. In the same way, considering mechanisms that ensure data integrity [125] may facilitate the identification of illicit data modifications by misbehaving users. Thus, it would be worthwhile for trust and reputation models to be able to determine entities’ benevolence, as well as to identify each entity actively or passively participating in the actions surrounding the models. On the one hand, benevolence denotes the trustor’s perception of the trustee’s kindness in making efforts without looking for rewards [127]. In this vein, benevolence may be comprehended as a property to dwindle concerns and uncertainty about the willingness behind the trustee’s actions and hence to foresee feasible misbehaviors in the long run. On the other hand, and related to misbehavior, trust and reputation models should incorporate mechanisms to associate a unique identity with each participant involved in such models. By means of identity, not only would the security of models be enhanced by being able to verify the identity of participants [119], but conventional attacks to obtain a new identity in cases of poor reputation [23] could also be lessened.
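The two quantification styles above can be illustrated side by side: the same underlying trust level expressed as a continuous value in [0, 1] or mapped onto a finite label set. The label names and boundaries below are illustrative assumptions.

```python
# Sketch of the quantification property: mapping a continuous trust score
# onto a small, finite set of discrete labels (boundaries are assumptions).

def to_discrete(score: float) -> str:
    """Map a continuous trust score in [0, 1] onto a discrete label."""
    if score < 0.25:
        return "distrust"
    if score < 0.5:
        return "low trust"
    if score < 0.75:
        return "trust"
    return "high trust"

print(to_discrete(0.82))  # -> high trust
```

Whichever representation a model chooses then constrains the form its recommendations can take when propagated to third parties.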
To conclude the general property list depicted in Figure 1, we introduce how the asymmetry, transitivity, privacy-preserving, reward and punishment, and attack resilience properties affect trust and reputation models. The asymmetry property refers to the fact that a trust relationship between two entities does not imply bidirectional trust between them. Therefore, in most cases, trust is seen as non-reciprocal in nature [48]. Transitivity is also a pivotal property that needs to be clarified. Some trust and reputation models interpret it as the fact of disseminating trust values or recommendations [1]. Nevertheless, from our standpoint, such a definition is not totally precise. To the best of our knowledge, transitivity in trust and reputation models refers to the fact that an entity A may trust an entity B, and the latter may in turn trust an entity C; hence, entity A may also trust entity C. Nevertheless, the trust value that entity B holds on entity C is not the same as the one entity A holds on entity C. In this regard, it is worth noting that feedback dissemination does not necessarily entail the transitivity property [23], since the disseminated feedback is not directly utilized as the final trust value; rather, other parameters such as credibility or weighting are applied on top of it. Therefore, the final trust value is adjusted with respect to the initially disseminated trust value received.
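The transitivity argument above can be sketched as follows: A derives a trust value for C from B’s recommendation, but discounts it by A’s own trust in B and by B’s credibility as a recommender, so trust(A, C) differs from trust(B, C). The multiplicative discounting and the credibility weight are illustrative assumptions.

```python
# Sketch of discounted transitivity: B's reported trust in C is weighted
# by A's trust in B and by B's credibility, so A's derived value for C is
# lower than B's own value (weights are assumed, not from any one model).

def transitive_trust(trust_a_b: float, trust_b_c: float,
                     credibility_b: float = 0.8) -> float:
    """A's derived trust in C via B, discounted by A's view of B."""
    return trust_a_b * credibility_b * trust_b_c

t_ab, t_bc = 0.9, 0.7
t_ac = transitive_trust(t_ab, t_bc)
print(round(t_ac, 3))  # strictly lower than t_bc
```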
The privacy-preserving property is not an unprecedented concept, but in recent years it has gained considerable attention, so it is decisive to consider it [77]. In trust and reputation models, privacy preservation endeavors not only to guarantee a security level when an entity’s sensitive information may be partially shared with others (via recommendations) but also to diminish the ability to infer sensitive information from the model itself. Therefore, privacy preservation intends, to some extent, to dwindle the impact of misbehavior involving shared data. Considering feasible misbehaviors, trust and reputation models should withstand conventional attacks such as the Sybil attack [84], collusion attack [143], on-off attack [121], or swing attack [62], to name but a few. Hence, attack resilience attempts to add an extra security level by confronting some of the most well-known trust and reputation attacks.
Due to the fact that attacks have a direct impact on the performance and reputation of trust models, reward and punishment techniques are contemplated to boost honest praxis [58]. In this regard, reward and punishment mechanisms, which are mainly static [59], endeavor to guide entities’ behavior toward a more secure, trustworthy, and efficient environment. Ergo, a well-behaved entity may benefit from a partial increase in its trust and reputation score so that other entities may be more likely to select its services, whereas a poorly behaving entity is less likely to be selected.
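A minimal sketch of such a static reward-and-punishment mechanism follows. The specific increments are assumptions; making the penalty larger than the reward is a common (assumed) choice that also hampers on-off attackers who alternate good and bad behavior.

```python
# Sketch of a static reward-and-punishment mechanism: a fixed reward for
# honest interactions and a larger fixed penalty for misbehavior, with the
# score clamped to [0, 1]. The increments are illustrative assumptions.

REWARD, PENALTY = 0.05, 0.20

def apply_outcome(score: float, well_behaved: bool) -> float:
    score = score + REWARD if well_behaved else score - PENALTY
    return min(1.0, max(0.0, score))  # keep the score within [0, 1]

score = 0.6
score = apply_outcome(score, True)   # honest interaction -> small reward
score = apply_outcome(score, False)  # misbehavior -> larger punishment
print(round(score, 2))
```

Because reputation is lost faster than it is gained, an entity cannot cheaply rebuild trust after misbehaving, which is exactly the incentive structure the property aims for.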
2.1.2 Features.
Concerning the features group (see Figure 1, right), it encompasses a set of the most prominent characteristics for designing and implementing trust and reputation models. Among the most well-known features, we can underline the two key information sources covered by reputation models: direct trust and indirect trust. Direct trust, also known as historical trust [126], refers to an entity’s own experience acquired through previous interactions with other entities. Conventionally, this trust information source is contemplated to be the most reliable for calculating a final trust and reputation score, since it is free of the factors related to the interpretation of recommendations that we discuss in the next paragraph. Notwithstanding, there are circumstances where it is necessary to behold another information source such as indirect trust [68]. Indirect trust is interpreted as the experience that an entity can acquire from third entities via relationships previously established with them. Traditionally, indirect trust tends to have a lower weighting than direct trust. Nonetheless, when trustworthiness information is not available first hand, or when trust and reputation models require further information to be more accurate, the weighting of these two trust information sources may be adjusted [104].
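The aggregation of the two information sources can be sketched as a simple weighted sum, with direct trust weighted more heavily by default. The weight value is an illustrative assumption and, as noted above, could be shifted toward indirect trust when no first-hand experience exists.

```python
# Sketch of combining the two trust information sources: a weighted sum
# where direct (first-hand) trust dominates by default. The 0.7 weight is
# an illustrative assumption, adjustable per scenario.

def combined_trust(direct: float, indirect: float, w_direct: float = 0.7) -> float:
    """Weighted aggregation of direct and indirect trust (weights sum to 1)."""
    return w_direct * direct + (1 - w_direct) * indirect

print(round(combined_trust(0.9, 0.4), 2))        # direct experience dominates
print(round(combined_trust(0.9, 0.4, 0.3), 2))   # e.g., little first-hand data
```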
As mentioned earlier, time is one of the factors with the vastest influence on trust. In this sense, the forgetting factor [18] is a crucial feature to handle the repercussions of the passage of time over trust relationships. In general terms, the forgetting factor allows utilizing aging functions so that past interactions are gradually forgotten, whereas recent interactions acquire a higher relevance. Nevertheless, this is not the only factor that has an impact on trust and reputation scores. Bearing in mind that trust and reputation models leverage recommendations as trust information, it also implies thinking about factors such as a recommendation’s credibility and subjectivity. Credibility refers to the validation and verification of the feedback gathered from a recommender [120]. Through the consideration of this characteristic, trust and reputation approaches intend to diminish inaccurate trust evaluations. In the case of subjectivity [147], it arises due to the very nature of trust, which may be influenced by the personal interpretations of entities. In this regard, a recommendation might be misinterpreted by an entity because the recommender carried out a personalization of the trust and reputation evaluation of which the receiving entity is unaware.
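An aging function of the kind the forgetting factor relies on can be sketched with exponential decay: each past outcome is weighted by how old it is, so recent interactions dominate the aggregate. The decay rate `lam` and the timestamped-history representation are illustrative assumptions.

```python
# Sketch of a forgetting factor: timestamped interaction outcomes are aged
# with an exponential function so recent evidence weighs more. The decay
# rate `lam` is an illustrative assumption.
import math

def aged_trust(outcomes, now: float, lam: float = 0.1) -> float:
    """Decay-weighted mean of (timestamp, outcome) pairs."""
    weights = [math.exp(-lam * (now - t)) for t, _ in outcomes]
    return sum(w * o for w, (_, o) in zip(weights, outcomes)) / sum(weights)

history = [(0.0, 1.0), (5.0, 1.0), (9.0, 0.0)]  # (time, outcome) pairs
print(round(aged_trust(history, now=10.0), 3))
```

Note that the single recent bad outcome pulls the score well below the plain average of the three outcomes, which is the intended effect of forgetting older evidence.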
To bring the feature list to an end, satisfaction [39] is a feature that reflects the user’s gratification after estimating a trust and reputation value. Even though not all trust and reputation models take the satisfaction feature into account as part of their approaches, it plays a key role because it allows discovering possible imprecise predictions as well as assessing the trust of a new interaction. Therefore, satisfaction should be contemplated in most reputation-based trust models.