
From manifesta to krypta: The relevance of categories for trusting others

Published: 03 April 2013

Abstract

In this article we consider the special abilities agents need to assess trust through inference and reasoning. We analyze the case in which trust towards unknown counterparts can be inferred by reasoning on abstract classes or categories of agents defined for a concrete application domain. We present a scenario of interacting agents together with a computational model implementing different strategies for assessing trust. Assuming a medical domain, categories covering both the competencies and the dispositions of possible trustees are exploited to infer trust towards possibly unknown counterparts. The proposed approach to the cognitive assessment of trust relies on agents' abilities to analyze heterogeneous information sources along different dimensions. Trust is inferred from specific observable properties (manifesta), that is, explicitly readable signals indicating the internal features (krypta) that regulate agents' behavior and effectiveness on specific tasks. Simulative experiments evaluate the performance of trusting agents adopting different strategies for delegating tasks to possibly unknown trustees; the results show the relevance of this kind of cognitive ability in open multiagent systems.
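As a rough illustration of the idea described above, the sketch below maps an observable category label (a manifesta) to estimated inner features (krypta) and combines them into a degree of trust using a Kosko-style sigmoid activation, in the spirit of the fuzzy-cognitive-map machinery mentioned in the author tags. The category names, krypta values, weights, and bias are illustrative assumptions, not the paper's actual model or parameters.

```python
# Minimal sketch (not the authors' implementation): inferring trust in an
# unknown trustee from its declared category (manifesta -> krypta -> trust).
# All category profiles and weights below are hypothetical.
import numpy as np

# Hypothetical category profiles: prior estimates of hidden features
# (krypta) associated with an observable category label (a manifesta).
CATEGORY_KRYPTA = {
    "cardiologist":         {"competence_heart_task": 0.9, "willingness": 0.6},
    "general_practitioner": {"competence_heart_task": 0.5, "willingness": 0.8},
    "nurse":                {"competence_heart_task": 0.3, "willingness": 0.9},
}

def sigmoid(x, k=5.0):
    """Squashing function of the kind used in fuzzy cognitive maps."""
    return 1.0 / (1.0 + np.exp(-k * x))

def trust_from_category(category, w_competence=0.7, w_willingness=0.3):
    """Combine estimated krypta into a degree of trust for a heart-related task.

    A linear weighting followed by a sigmoid mimics a one-step fuzzy
    cognitive map update; the weights and bias are assumptions.
    """
    krypta = CATEGORY_KRYPTA[category]
    activation = (w_competence * krypta["competence_heart_task"]
                  + w_willingness * krypta["willingness"]
                  - 0.5)  # bias so mid-range inputs map near a trust of 0.5
    return sigmoid(activation)

if __name__ == "__main__":
    for cat in CATEGORY_KRYPTA:
        print(f"{cat:22s} -> trust = {trust_from_category(cat):.2f}")
```

In this toy setting, a delegating agent with no direct experience of a trustee could rank candidates by such category-based estimates, falling back on direct observation or reputation when it becomes available.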



Published In

ACM Transactions on Intelligent Systems and Technology, Volume 4, Issue 2
Special section on agent communication, trust in multiagent systems, intelligent tutoring and coaching systems
March 2013, 339 pages
ISSN: 2157-6904
EISSN: 2157-6912
DOI: 10.1145/2438653

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 03 April 2013
Accepted: 01 April 2011
Revised: 01 March 2011
Received: 01 August 2010
Published in TIST Volume 4, Issue 2


Author Tags

  1. Trust by reasoning
  2. cognitive analysis
  3. fuzzy cognitive maps
  4. open systems

Qualifiers

  • Research-article
  • Research
  • Refereed
