research-article
Open access

Harnessing Collective Differences in Crowdsourcing Behaviour for Mass Photogrammetry of 3D Cultural Heritage

Published: 24 December 2022

Abstract

Disorganised and self-organised crowdsourcing activities that harness collective behaviours to achieve a specific level of performance and task completeness are not well understood. Such phenomena become indistinct when highly varied environments are present, particularly for crowdsourcing photogrammetry-based 3D models. Mass photogrammetry can democratise traditional close-range photogrammetry procedures by outsourcing image acquisition tasks to a crowd of non-experts to capture geographically scattered 3D objects. To improve public engagement, we need to understand how individual behaviour in collective efforts works in traditional disorganised crowdsourcing and how it can be organised for better performance. This research aims to investigate the effectiveness of disorganised and self-organised collaborative crowdsourcing. It examines the collaborative dynamics among participants and the trends we could leverage if team structures were incorporated. Two scenarios were proposed and constructed: asynchronous crowdsourcing, which implicitly aggregates isolated contributions from disorganised individuals; and synchronous collaborative crowdsourcing, which assigns participants into a crowd-based self-organised team. Our experiment demonstrated that a self-organised team working in synchrony can effectively improve crowdsourced photogrammetric 3D models in terms of model completeness and user experience. Through our study, we demonstrated that this crowdsourcing mechanism can provide a social context where participants can exchange information via implicit communication and collectively build a shared mental model of their responsibilities and task goals. It stimulates participants’ prosocial motivation and reinforces their commitment. With more time and effort invested, their positive sense of ownership increases, fostering higher dedication and better contribution.
Our findings shed further light on the potential of adopting team structures to encourage effective collaboration in conventionally individual-based voluntary crowdsourcing settings, especially in the digital heritage domain.

1 Introduction

The past decade has seen an increase in the use of photogrammetry for digitally recording and sharing cultural heritage. The trend is widespread not only in institutions and groups that specialise in working with cultural heritage [2, 57, 70, 71]; groups and individuals with particular interests in heritage are also using photogrammetric models in a myriad of academic research and creative projects, re-purposing such models for use in designs and entertainment within the creative sector [1, 12, 20, 21]. While such endeavours are worthwhile, it may be unsustainable in terms of resources to digitally capture, process, and store artefacts at all levels of priority, given the inestimable number of cultural heritage objects scattered over large geographical regions. The British Museum, for example, has been collaborating since 2014 with Sketchfab, one of the largest online 3D model platforms. At the time of writing, the British Museum has digitised and published 273 high-quality 3D models from its collection, which have gained 1.6M views and 13.6K likes [63]. Despite the quality, the publication of the models has been relatively slow, implying that closed digitisation work may require more financial budgeting, technical support, and human resources than the museum’s digital team can provide. That is why mass photogrammetry [13], which uses crowdsourcing to achieve its goals, may become necessary.
Our main goal is to obtain 3D reconstructions of cultural heritage objects that faithfully capture the form and colour of the actual artefacts. The 3D models are neither digital models that require designers’ interpretation nor technically sophisticated documentation that involves additional geometric information or interior measurements [44, 45]. Mass photogrammetry rarely achieves professional, archivable copies due to the high variability of imaging devices, lens settings, and lighting conditions. We aim instead to capture digital surrogates with adequate visual resemblance to the original physical objects for most consumption scenarios, including communication, sharing, cultural exchange, creative products, and even research [13]. Since offline close-range photogrammetry procedures can be separated into on-site image acquisition and off-site image processing [40], the time-intensive image acquisition may benefit from engaging the public. Our research therefore proposes crowd-based image acquisition, as it can use ubiquitous smartphones and accessible DSLR cameras, together with the crowd’s voluntary attitude, to reconstruct photogrammetry-based 3D artefacts at a massive scale, possibly beyond the reach of institutions.
High-quality image acquisition is not easy as far as mass photogrammetry is concerned. Following the offline close-range photogrammetry with multi-image acquisition approaches, the object should be, in principle, acquired by multiple overlapping images from locations chosen to enable sufficient intersecting angles of bundles of rays in object space [40]. The number of photographs is unspecified, depending on the target’s shape, form, and details. When extended to mass photogrammetry, the simple solution would be to collect as many high-resolution images as possible and hope that the images have at least 60% overlaps and cover every possible angle at different distances [13]. To reconstruct high-quality 3D models, we focus primarily on capturing cultural heritage objects that possess the following characteristics:
Display: sufficient space for manoeuvring one’s camera all around the object
Material: avoid materials with low contrast to the background, e.g., transparent, translucent, glowing, or reflective surfaces
Illumination: suitable lighting conditions to avoid overexposure and heavy shadows
Therefore, the challenge is to outsource image acquisition tasks to a large group of amateurs so that their contributions can be used to reconstruct high-quality photogrammetric 3D models. The prevalent crowdsourcing trends we observed focus on stimulating as many independent contributions as possible to ensure the diversity of the crowdsourced data [73]. We think collecting isolated contributions may not be the best way to crowdsource 3D models because this may yield inadequate and fragmented data (please see our definition of what constitutes high-quality data in Section 2.2 and why conventional crowdsourcing is not suitable in Section 2.3). To create crowdsourcing activities that can be mapped to collective tasks to ensure the quality of photogrammetric 3D models, more effective mechanisms should be explored.
It is well established that collaborative teamwork can yield synergetic effects [27] and thereby solve more complex problems [6, 54]. Some researchers have proposed implementing team structures in typical crowdsourcing mechanisms [4, 43, 59, 60, 67] to reduce individual workload, shorten completion time, boost individual performance, and so on. However, the majority of existing collaborative crowdsourcing approaches fall short of facilitating productive collaboration due to inflexible and inactive team mechanisms [66]. It is therefore vital to understand crowd behaviours and collaboration dynamics, as they can reveal the characteristic behaviours embedded in the crowdsourcing activity.
The unique nature of crowdsourcing 3D cultural heritage objects introduces a variability of elements in terms of task complexity, professionalisation, incentives of the crowd, and so on. With the introduction of team structures, there is also the added layer of disorganised and self-organised groups that may come together asynchronously and synchronously to complete a set of tasks. These unknowns have never been studied, and we believe that understanding these phenomena may provide us with the knowledge for facilitating a more optimised crowdsourcing workflow to obtain large numbers of high-quality 3D models. To fill this research gap, we aim to investigate the effectiveness and differences of crowdsourcing behaviours that are carried out following two different crowdsourcing mechanisms:
Group A: a group of disorganised individuals working independently and asynchronously
Group B: a crowd-based self-organised team working together synchronously
Group A follows a bottom-up procedure, as in most conventional crowdsourcing approaches: it implicitly harnesses collective efforts, since individual participants are unaware that their contributions will later be aggregated. Group A allows for isolated, asynchronous contributions. Group B, on the other hand, can be seen as a top-down approach, because participants are assigned to a team and are fully aware that they will be working together synchronously towards a common goal. We investigate the viability of incorporating team structures in terms of 3D model completeness by testing three hypotheses:
H1: The disorganised group of individuals working asynchronously will produce an incomplete 3D model.
H2: The crowd-based self-organised team working synchronously will produce a complete 3D model.
H3: The combination of imagery generated by the disorganised group and the crowd-based self-organised team will produce a complete 3D model that is better than that of either group separately.
The paper is organised as follows. We first review existing practices on crowdsourcing 3D models. We then propose a plausible solution to mitigate the challenge of obtaining high-quality data by implementing team structures in a collaborative crowdsourcing mechanism, assisted with proper task assignment and coordination. A Web App was developed to facilitate our crowdsourcing activity in terms of image acquisition and uploads. In the next section, our exploratory study investigates two scenarios constructed for two different crowdsourcing mechanisms. We used multiple metrics to measure the crowdsourcing performances at both the group level and the individual level. These include Productivity (quality of the 3D reconstruction); Dedication (time and effort spent); Experience (image acquisition and application interaction); Psychological Ownership (to what extent participants think they own the images and reconstructed 3D models); and Future Intention (willingness to take more pictures and recommend this activity to friends). This is followed by data analysis and findings. The paper ends with a discussion of the limitations of this work and how it can be generalised to establish a more optimised workflow for future crowdsourcing activities.

2 Related Works

Here, we outline the literature that forms the basis of our work.

2.1 Mass Photogrammetry

Awareness of the need to digitally preserve cultural heritage has led to an increasing trend in building 3D digital heritage databases both locally and globally [15, 22, 33, 47, 48, 49, 64, 65]. Thanks to recent technological advances, photogrammetry techniques are now accessible and relatively easy to operate for 3D reconstructions. Traditional offline photogrammetric procedures involve on-site image acquisition and asynchronous image processing in an off-site laboratory, including measurement and orientation calibration. The off-site steps can be automated to a large extent due to the rapid development in computational power and distributed processing algorithms. By employing multi-image triangulation (e.g., bundle adjustment), the recording cameras can be automatically calibrated with suitable imaging configuration and a high level of redundancy of images. In this way, it is possible to use cameras with a lower level of mechanical stability [40].
However, on-site image acquisition is still laborious and time-consuming, and its success directly affects the overall photogrammetric accuracy. Since photo-taking is now a common activity in everyday life, such time-intensive tasks may benefit from crowdsourcing. Within crowdsourcing works [10], even if each individual is drafted to perform only a small portion of the task, the sheer volume of workers can absorb idiosyncratic contributions and yield consistent performance overall. We project that a promising solution may be mass photogrammetry. It differentiates itself from conventional photogrammetric procedures by engaging the crowd for image recording. It may help reduce costs, increase scalability, and alleviate problems often raised in the traditional data collection process.

2.2 The Need for Quality Crowdsourced Data

The quality of 3D reconstruction in this paper is defined in terms of 3D model completeness with respect to the achievable accuracy of the input data. Close-range photogrammetry with a multi-image configuration can guarantee the internal quality of the photogrammetric adjustment process, if provided with sufficient intersecting angles of bundles of image rays [40]. The accuracy of the overall 3D reconstruction is thus primarily determined by the number of images contributed, the different viewing angles, and sufficient overlapping among images. An ideal multi-image acquisition with 360\(^{\circ }\) capture can generate a complete model. However, in actual fieldwork, there may be unavoidable barriers blocking viewing angles, for which there is no workaround. To illustrate our point, if an object is placed against a wall, the widest arc one can capture is 270\(^{\circ }\), resulting in an achievable accuracy of \(3/4\) model completeness. The achievable accuracy of a photogrammetric 3D model hence depends on the object’s surroundings, size, and materials.
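As a hypothetical illustration (not from the paper), the achievable completeness described above is simply the fraction of a full 360° orbit that remains physically reachable:

```python
def achievable_completeness(accessible_arc_deg: float) -> float:
    """Fraction of a full 360-degree capture that is physically reachable.

    accessible_arc_deg: the widest arc (in degrees) around the object
    from which a camera can still be positioned.
    """
    if not 0 <= accessible_arc_deg <= 360:
        raise ValueError("arc must lie within [0, 360] degrees")
    return accessible_arc_deg / 360.0

# An object placed against a wall leaves at most a 270-degree arc,
# giving an achievable completeness of 3/4, as in the example above.
print(achievable_completeness(270))  # 0.75
```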
The knowledge, experience, and skills that participants possess may all contribute to the overall performance of mass photogrammetry and the eventual quality of the 3D reconstructions. If the image quality is poor or the quantity is inadequate, the model produced will be of low quality and incomplete. It has been proposed [13] that the quality of photogrammetric models can be written as the equation \(M = w_1\mu + w_2\epsilon + w_3\delta + w_4\zeta\), where \(\mu\) is the object’s material and form; \(\epsilon\) represents the object’s environment; \(\delta\) is the device used; \(\zeta\) is the skill of the photographer; and \(M\) represents the standard practice in the range [0, 1], with the arbitrary weights indicating how much a person has control over a given variable. Since our goal is to increase the quality of 3D reconstruction, we need to increase the weights for all the variables.
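As a hedged illustration, the weighted-sum score from [13] can be computed directly; the equal weights and the example factor values below are purely hypothetical, not taken from the paper:

```python
def model_quality(mu, eps, delta, zeta, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted-sum quality score M = w1*mu + w2*eps + w3*delta + w4*zeta.

    Each factor is assumed to be normalised to [0, 1]:
      mu    - suitability of the object's material and form
      eps   - suitability of the object's environment
      delta - capability of the capturing device
      zeta  - skill of the photographer
    The weights express how much control one has over each variable; they
    should sum to 1 so that M also lies in [0, 1].
    """
    w1, w2, w3, w4 = weights
    assert abs(w1 + w2 + w3 + w4 - 1.0) < 1e-9, "weights should sum to 1"
    return w1 * mu + w2 * eps + w3 * delta + w4 * zeta

# Hypothetical scores: good material/environment/device, average skill.
print(model_quality(0.8, 0.9, 0.7, 0.5))
```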
First, participants must pay attention to the target itself (material, form, and environment). There is general unpredictability in fieldwork, such as obstacles blocking views, visitors crowding around the monument, and locations that may pose potential risks to participants’ safety. For the experimental purposes of this paper, we chose an object with a suitable form and environment so as to eliminate these external, uncontrollable factors. Secondly, the photogrammetric accuracy is also affected by the resolution of pictures, which is often determined by the capturing devices. Ch’ng et al.’s study has determined that images captured by affordable devices (smartphones) can produce 3D models with no significant visual differences compared to digital SLR cameras [13]. As w1, w2, and w3 can be controlled, special focus should be placed on optimising the weight of the fourth term, w4 (i.e., the skill of the photographer). We assume that most participants would know how to take photos using a device that they own. Instead of emphasising basic photo-taking techniques, we should focus on the procedure of mass photogrammetry and its requirements on image quality and quantity. A rule of thumb is to increase the number of images participants can contribute. During the process, participants would also need to pay attention to image quality issues, such as overexposure, lack of focus, and the wrong field of view. The ideal scenario for 3D reconstruction is to manoeuvre one’s camera all around the target with sufficient overlap between sequential images (a standard overlap would be >60%) so that they can form stereo pairs, facilitating the further configuration, orientation, and measurement [13, 40].

2.3 Collaborative Crowdsourcing with Team Structures

There has been extensive research on the effectiveness of crowdsourcing regarding motivation factors [8, 26, 34, 38, 51, 72]; task allocation [3, 31, 39]; IT-mediated interfaces [29, 36, 55]; data collection and quality control [11, 17, 23, 25, 35, 41]; as well as some moderating roles, such as task-related self-efficacy [62], network connectivity [32], and community commitment [69]. Despite different incentives and quality-control mechanisms, these works all concentrate on stimulating as many independent contributions as possible to ensure the diversity of the crowdsourced data, which is the prevalent approach in crowdsourcing [73]. In our experience, previous approaches may not be entirely suitable for crowdsourcing photogrammetric 3D models. In our crowdsourcing activity, participants must contribute overlapping photographs from as many angles as possible with no tangible rewards. Individuals may find such tasks cumbersome and tedious, especially for targets that are too large to complete quickly or that offer no immediate results. In our pilot studies, even when individuals were motivated to contribute more, they tended to capture objects from certain vantage points and were likely to miss unremarkable surfaces with no attractive features. This often results in both insufficiency and redundancy, i.e., an inadequate number of pictures to cover all the angles and details, yet an overabundance of pictures from similar angles. Consequently, the crowdsourced models would likely be poorly constructed, impairing the overall performance.
As understood from organisational psychology and social theories [61, 68], cooperative goals can stimulate synergetic effects [27], i.e., the performance gains of teamwork, which help participants overcome challenges that would be impossible to meet when working alone, as well as improve data quality [6, 54]. Cooperative structures also provide opportunities to invoke intrinsic motivation, a positive influencing factor for enjoyment and deep competence satisfaction [27, 58], through the experience of social relatedness that satisfies innate needs. In recent years, many collaborative crowdsourcing activities have therefore emerged, such as collaboration among mathematicians [52], sentence translation [5], and digital storytelling [53]. However, these crowdsourcing campaigns still follow a bottom-up process, which encourages the crowd to select tasks according to their own knowledge and skills [58]. One frequent consequence is a mismatch between participants and the contextualised knowledge required for the open call [10, 58]. Thus, generic disorganised collaborative crowdsourcing may not facilitate productive mass photogrammetry, since its tasks are location-based and require matching volunteers with desired demographic characteristics.
Crowdsourcing often fails because initiators fail to plan properly for crowd engagement. As [18] pointed out, crowdsourcing is not only about how crowd workers are sourced, but more fundamentally about how the crowd can be organised and coordinated. The effectiveness of collaboration among crowdsourcing participants is highly critical and has been increasingly emphasised [67]. Therefore, to fully tap the potential of the crowd, we should impose some organisational control over the crowdsourcing process that can dictate what the crowd needs to work on. Unlike for employees or suppliers in an organisation, however, such central authority must be limited, for the crowd should still be self-selecting and spontaneous in voluntary activities. A step-wise design and refinement requires a high expenditure of time and effort in coordination [16]. We therefore need to consider the trade-offs, balancing coordination costs against efficiency.

3 Research Gap

There is a growing body of research exploring the potential of integrating team structures into crowdsourcing to enhance effective collaboration for high-quality crowdsourced data [31, 42, 54, 59, 67]. Team structures can not only stimulate synergetic effects by emphasising the cooperative goal but, more importantly, provide the social context [56] in which individuals can share and exchange information as well as adapt their behaviours via observation [37]. Collaboration-based crowdsourcing thus occurs when self-selected members communicate and work together to solve a problem [14].
In our literature review, we have not found studies that examine how to leverage crowdsourcing to reconstruct 3D models via mass photogrammetry with no tangible rewards. It is also not fully understood how team structures would affect crowd behaviours and overall performance. Additionally, existing studies on voluntary crowdsourcing have mainly focused on analysing isolated contributions by individuals, so the collaboration dynamics among participants are equally unknown. We are thus motivated to investigate the potential of incorporating team structures to encourage effective collaboration in crowdsourcing settings that have traditionally been individual-based, within which we aim to examine the collaboration dynamics, as they can help reveal the embedded characteristic behaviours.

4 Methodology

This section describes the rationale by which we tested our hypotheses. First, we developed a Web App to facilitate the crowdsourcing activity. We then recruited two groups of volunteers for two crowdsourcing scenarios.

4.1 Web App for Facilitating the Crowdsourcing Process

The Web App can access mobile device sensors such as the camera, GPS, gyroscope, and photo storage, which is sufficient for our needs. Its main functionality consists of the Home page, where the basic information is listed, and the Contribution page, where users can take and upload images along with optional descriptions (Figure 1). When users are directed to the Contribution page, they can click File-Upload to either choose multiple pictures from their album or take pictures directly. To facilitate ease of upload, multiple images can be selected and uploaded at the same time. While the files are uploading, a progress bar is displayed for each file, giving users instant visual feedback. Users can also cancel an upload during the process simply by clicking the cross symbol associated with each file. In addition, we store the EXIF data of the collected images for the necessary geographical information.
Fig. 1.
Fig. 1. The Web App developed for facilitating our crowdsourcing activities (https://digitalheritage.site).
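The EXIF standard records GPS coordinates as degree/minute/second rationals plus a hemisphere reference. A minimal helper for converting them to decimal degrees might look like the following; this is a hypothetical sketch, not the paper's implementation:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert an EXIF-style GPS coordinate (degrees, minutes, seconds
    plus a hemisphere reference 'N'/'S'/'E'/'W') to decimal degrees.

    Southern and western hemispheres are expressed as negative values.
    """
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    return -decimal if ref in ("S", "W") else decimal

# e.g., 29 deg 48' 36" N corresponds to 29.81 decimal degrees
print(dms_to_decimal(29, 48, 36, "N"))
```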

4.2 Crowdsourcing Experiments

4.2.1 Site of Study.

The experimental site is located at our university’s public square (Figure 2). The public square simulates a heritage site where crowds tend to gather. Additionally, the site was chosen because our participants could easily access it and perform tasks safely. The size of the fountain is approximately 12 \(\times\) 12 \(\times\) 4 metres, suitable for the designated group size of 10. We deliberately picked a monument of this height because the top would pose certain challenges for participants to solve, as they would face in real-world scenarios.
Fig. 2.
Fig. 2. The Fountain located at the public square of the university is chosen as the target object.

4.2.2 Equipment and Software.

Volunteers could choose their own devices according to their preferences (most participants used their own smartphones). They were asked to grant the Web App access to mobile sensors, including GPS, albums, and cameras, and to upload full-size images via the Web App.

4.2.3 Participants.

Ethics approval was obtained from our institution prior to recruitment. A total of 20 university students were recruited and later randomly divided into two groups, each consisting of five females and five males. Participants were assigned a unique ID for anonymisation. They were provided with a detailed participant information sheet and consent form. There was also a statement on the Web App stating: “By pressing the submit button, I acknowledge the statement in the consent form and agree to take part in this study”. Participants could stop and leave at any time.
To obtain high-quality data, we needed to ensure that volunteers understood how mass photogrammetry works and the basic criteria for the required images. This was achieved by visualising photogrammetric procedures using the Lion Statue captured from the Tiantong Temple as an example (Figure 3). All volunteers were told: “Photogrammetry is a science of measuring photographs. It imitates the stereoscopy of binocular human vision and can be used to convert multiple images into a digital 3D model. As shown in the figure, each white dot represents one photo from the corresponding angle. For 3D reconstruction, one needs to take a sufficient number of sequential images from different angles with sufficient overlapping (>60%).”
Fig. 3.
Fig. 3. The reconstruction of the Lion Statue from the Tiantong Temple in Ningbo was used as an illustrating example to help participants understand the process of mass photogrammetry (Model by [13]).

4.2.4 Different Crowdsourcing Mechanisms and Task Assignments.

We set a preliminary framework for the field settings with achievable sub-tasks. These tasks were given to all the participants in an identical way, as follows: The goal for us is to reconstruct a digital 3D model of the Fountain located at our university following the instructions:
To attain this goal, you should at least follow the steps in sequence:
(1) Get to the destination
(2) Take photos of the target from as many different angles as possible
(3) Make sure there is sufficient overlap (>60%) between successive images
(4) Upload your images via the Web App
Both groups were given one week to finish this task. The difference (Table 1) between the two groups was that all the members in Group A were given the instructions individually. They did not know that there were other volunteers and assumed that they had to complete the task on their own. Hence, members in Group A could perform the tasks at any time during the given week. On the other hand, all the members in Group B were given instructions as a group and were required to perform the tasks as a team. Although we did not fix a particular time-slot for Group B, they discussed within the group to choose a time suitable for every member of the group to capture the Fountain on-site at the same time.
Table 1.
Group A (Disorganised individuals working independently):
– Bottom-up approach; disorganised, asynchronous
– Unaware of the implicit aggregation
– Goal: each participant was told individually that he/she needed to capture as many pictures of the Fountain as necessary to reconstruct the 3D model.
Group B (Crowd-based self-organised team working synchronously):
– Top-down approach; self-organised, synchronous
– Aware of the team collaboration
– Goal: the team was told that they needed to cooperate to capture as many pictures of the Fountain as necessary to reconstruct a 3D model.
Table 1. Experimental Settings: Two Scenarios were Formalised and Controlled for Two Different Crowdsourcing Structures: Asynchronous Crowdsourcing, which Implicitly Aggregates Isolated Contributions from Individual Participants; and Synchronous Collaborative Crowdsourcing, which Assigns Participants into a Crowd-based Team to Work Towards a Common Goal

4.2.5 Data Collection and Processing.

We collected full-size images that contain both the pixel information and metadata, including geolocations and IP addresses, for behavioural data. A follow-up questionnaire was used for basic user profiles, self-reported effort, and Web App feedback. We also conducted semi-structured interviews asking participants about their general experience and behavioural intentions. Data were analysed using R and Python. The findings are reported in the next section.
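As one illustration of how behavioural data of this kind could be analysed in Python, group-level time spent can be estimated from EXIF DateTimeOriginal timestamps by summing each participant's first-to-last photo span. This is a hypothetical sketch; the paper does not specify its analysis code:

```python
from datetime import datetime

EXIF_FMT = "%Y:%m:%d %H:%M:%S"  # EXIF DateTimeOriginal timestamp format

def minutes_spent(timestamps):
    """Time a participant spent on-site, estimated as the span between
    their first and last photo (EXIF DateTimeOriginal strings)."""
    times = sorted(datetime.strptime(t, EXIF_FMT) for t in timestamps)
    return (times[-1] - times[0]).total_seconds() / 60.0

def group_minutes(sessions):
    """Sum per-participant spans to get a group-level dedication figure."""
    return sum(minutes_spent(ts) for ts in sessions)

# Hypothetical example: two participants spending 20 and 15 minutes each.
demo = [
    ["2022:05:10 10:00:00", "2022:05:10 10:20:00"],
    ["2022:05:10 16:05:00", "2022:05:10 16:20:00"],
]
print(group_minutes(demo))  # 35.0
```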

5 Findings

This section reports our findings. We examined the crowdsourcing performances of Group A (Disorganised individuals working independently) and Group B (Crowd-based self-organised team working synchronously) using multiple metrics, through which we deduced the effects that the crowd-based team structure imposed on crowdsourcing performance. Since all participants were randomly assigned to groups and given identical instructions, we can discount exogenous effects relating to demographics and task assignment.

5.1 Group-level Performance

5.1.1 Productivity.

At the group level (Table 2), the total number of photos contributed by Group B is roughly six times that of Group A. Group A contributed only 144 pictures of varying image quality. Since members of Group A could freely choose the time-slots they preferred within the given week, some captured the Fountain in the morning and some at dusk, resulting in different lighting conditions. The image set from Group A had loose overlapping: only 61 out of 144 images could be used in the final 3D reconstruction. As a result, Group A produced a 3D representation that covered only 1/3 of the Fountain (Figure 4).
Fig. 4.
Fig. 4. Model 1: the 3D model reconstructed using photos contributed by Group A (Disorganised individuals working independently) is sparse and partially reconstructed. It covers only 1/3 of the Fountain.
Table 2. (Productivity: completeness, images, overlapping, model details; Dedication: time)
Group A: 1/3 representation (Figure 4); 144 images of varied quality; loosely overlapped (61 images used); missing much information; 202 mins in total.
Group B: a complete representation (Figure 5); 919 images, generally of high quality; densely overlapped (713 images used); containing the essential information needed for 3D reconstruction; 430 mins in total.
Combination of Groups A and B: a complete representation (Figure 7); 1,063 images in total; densely overlapped (816 images used); including more details (e.g., text on the labels can be read clearly); 632 mins in total.
Table 2. Crowdsourcing Performances at Group-level: Group B (Crowd-based Self-organised Team Working Synchronously) Outperformed Group A (Disorganised Individuals Working Independently) in Both Productivity and Dedication
Group B contributed 919 pictures in total, 713 of which were used in 3D reconstruction. Group B successfully yielded a complete representation of the Fountain (Figure 5).
Fig. 5.
Fig. 5. Model 2: the 3D model reconstructed using photos contributed by Group B (Crowd-based self-organised team working synchronously) has more overlapping photos and yielded a more thorough reconstruction of the Fountain.
Two heat-maps (Figure 6) were generated using locational information to illustrate crowd behaviour during the activities. As observed, the footprints of Group B were clearly denser and covered a larger area. The footprints of Group A were relatively sparse, implying that participants in Group A took photos from fewer angles and at less varied distances. The reason may be habitual or preferential: most participants may have considered these angles suitable for capturing the essential features of the target.
Fig. 6.
Fig. 6. The heat-maps tracing the location of photos taken. Group A (Disorganised individuals working independently) has a sparser positional heat-map. Group B (Crowd-based self-organised team working synchronously) has a denser heat-map, which appears to have more circular coverage around the target object).
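Heat-maps of this kind can be produced by binning photo geolocations into a grid and counting photos per cell. The sketch below is a minimal, hypothetical illustration; the cell size and coordinates are not taken from the study:

```python
from collections import Counter

def heatmap_bins(points, cell_deg=0.0001):
    """Bin (lat, lon) photo locations into a grid for a density heat-map.

    cell_deg is the grid cell size in degrees (0.0001 deg is roughly 11 m
    at the equator). Returns a Counter mapping (row, col) cell indices to
    photo counts. int() truncation is fine for positive coordinates.
    """
    grid = Counter()
    for lat, lon in points:
        grid[(int(lat / cell_deg), int(lon / cell_deg))] += 1
    return grid

# Hypothetical example: three photos, two falling in the same cell.
pts = [(29.81001, 121.55001), (29.81002, 121.55003), (29.81200, 121.55200)]
counts = heatmap_bins(pts)
print(max(counts.values()))  # 2
```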

5.1.2 Dedication.

Group-level dedication was measured using total time spent. Group B spent a total of 430 minutes as a team to complete the task, while Group A spent only 202 minutes in total. Therefore, we can conclude that Group B outperformed Group A in group-level dedication.

5.1.3 3D Model Reconstruction.

Here, we reiterate our purpose: to what extent can crowdsourced images be used to reconstruct a targeted monument? Our hypotheses H1 and H2 were verified in both cases. We continued the analysis to test hypothesis H3 by combining the two sets of data. The combination consisted of 1,063 pictures, 816 of which could be used in the 3D reconstruction, producing a complete representation of the Fountain (Figure 7).
Fig. 7.
Fig. 7. Model 3: the 3D model reconstructed (using the combination of imageries generated by the disorganised group of individuals and the crowd-based self-organised team) is a complete representation of the actual Fountain and contains more details compared to the 3D models reconstructed from either one of the groups.
Although the final 3D model appears roughly similar to Model 2 (Figure 5), more detailed features can be observed when zoomed in. For instance, while the sign on one of the columns of the Fountain cannot be read on Model 2, the words can easily be read on Model 3 (Figure 7).
Although some of the images from Group A vary in quality, this does not affect the overall result: uniform precision of the derived coordinates can be obtained via multi-image triangulation, provided a sufficient number and configuration of image rays are available [40]. Thus, most of the credit for reconstructing the full 3D model should still go to the contributions of Group B.

5.2 Individual-level Performance

To learn more about the cognitive-affective aspects, we investigated individual-level performance across five dimensions: Productivity, Dedication, Experience, Psychological Ownership, and Future Intention (Table 3).
Table 3.
| Group | Stat. | Productivity: Images | Dedication: Time | Dedication: Effort | Experience: Task | Experience: Web App | Ownership: Images | Ownership: 3D model | Future Intention: Take more pictures | Future Intention: Recommend to friends |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Group A | Min | 5 images | 10 mins | 1 | 1 | 3 | 2 | 1 | 4 | 3 |
| Group A | Avg. | 15 images | 20 mins | 4.7 | 5.8 | 6.1 | 4.7 | 4.4 | 5.6 | 5.4 |
| Group A | Max | 26 images | 35 mins | 10 | 10 | 7 | 7 | 7 | 7 | 7 |
| Group A | Std. | 7.5 | 9.9 | 2.8 | 2.8 | 1.3 | 1.6 | 2.1 | 1.2 | 1.3 |
| Group B | Min | 55 images | 20 mins | 2 | 4 | 4 | 3 | 3 | 3 | 1 |
| Group B | Avg. | 92 images | 43 mins | 4.8 | 6.4 | 5.8 | 5.6 | 4.6 | 5.2 | 4.6 |
| Group B | Max | 165 images | 90 mins | 8 | 10 | 7 | 7 | 6 | 7 | 7 |
| Group B | Std. | 31.9 | 21.1 | 2.3 | 2.4 | 1.2 | 1.7 | 0.8 | 1.4 | 1.8 |
Table 3. Crowdsourcing Performances at Individual-level: Group A (Disorganised Individuals Working Independently) and Group B (Crowd-based Self-organised Team Working Synchronously)
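The Min/Avg./Max/Std. rows of Table 3 are standard summaries of the raw per-participant values. A minimal sketch of how such a row might be computed; we assume the sample standard deviation and one-decimal rounding (the table's values suggest this, but the paper does not state it), and the helper name is ours:

```python
from statistics import mean, stdev

def table_row(values):
    """Summarise one measure (e.g., images uploaded per participant)
    into the Min / Avg. / Max / Std. columns reported in Table 3."""
    return {
        "min": min(values),
        "avg": round(mean(values), 1),
        "max": max(values),
        "std": round(stdev(values), 1),  # sample standard deviation (n - 1)
    }
```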

5.2.1 Productivity and Dedication.

As observed in Figures 9 and 10, members of Group B showed stronger dedication: they spent more time while reporting slightly less effort. Combined with the results shown in Figure 8, we can conclude that Group B outperformed Group A at the individual level in terms of Productivity and Dedication.
Fig. 8.
Fig. 8. Individual-level comparison between Group A and Group B in terms of the number of images uploaded: Group B greatly outperformed Group A in individual productivity. Even the smallest contribution in Group B (n = 55 images) exceeds the largest contribution in Group A (n = 26 images).

5.2.2 Experience.

We also explored participants’ experience of two essential stages in the crowdsourcing activity: Task Execution (i.e., image capture) and Web App Interaction (i.e., image upload). In general, members of both groups reported similar experiences for task execution. Although the mean value of Group B (\(\overline{x}\) = 6.4) is greater than that of Group A (\(\overline{x}\) = 5.8) (Table 3), the two groups had an identical median value (\(\tilde{x}\) = 5.5) (Figure 11). As for the Web App interaction, members of Group A had a better experience, which can be attributed to the fact that members of Group B took more images and had to spend considerably more time uploading them.

5.2.3 Psychological Ownership.

Positive psychological ownership, as noted in multiple studies [9, 19, 30, 50], can entail reciprocity and encourage individuals to be more altruistic. To gauge psychological ownership, we asked participants to what extent they felt they owned the contributed images and the reconstructed 3D model. As demonstrated in Figure 12, members of Group B tended to have a stronger sense of ownership of both the contributed images and the 3D reconstruction.
These phenomena can be ascribed to the fact that every member of Group B contributed more images and was more dedicated than members of Group A (Figures 8, 9, and 10). Higher dedication and contribution can lead to more substantial psychological ownership as people invest greater time and effort [9, 50]. We also learnt from the interviews that members of Group B were genuinely more interested in the 3D reconstruction and expressed higher excitement at seeing the final model.
Fig. 9.
Fig. 9. Individual-level comparison between Group A and Group B in terms of the time spent in task execution: Group B outperformed Group A, because the minimum time spent by a Group B member (min = 20 min) equals the median time spent in Group A (\(\tilde{x}\) = 20 min), and the median time spent in Group B (\(\tilde{x}\) = 35 min) equals the maximum time spent by a Group A member (max = 35 min).
Fig. 10.
Fig. 10. Individual-level comparison between Group A and Group B in terms of the self-reported effort in task execution: in general, members of Group B reported efforts similar to those of Group A. Although the mean value of Group B (\(\overline{x}\) = 4.8) is greater than that of Group A (\(\overline{x}\) = 4.7), the data of Group A are relatively more dispersed, and the median value of Group B (\(\tilde{x}\) = 4) is less than that of Group A (\(\tilde{x}\) = 4.5), implying that members of Group B reported slightly less effort.
Fig. 11.
Fig. 11. Individual-level comparison between Group A and Group B in terms of the overall experience in crowdsourcing activities: both groups reported similar task execution experiences. For the Web App interaction, members of Group A had a better experience in general.
In Group A, a few participants also expressed strong feelings of possessing the images and the 3D model (i.e., chose the maximum value of 7 for both questions). Interestingly, we found that those in Group A who contributed more and gave higher ratings were invariably those intrinsically fascinated by, or working in fields related to, digital heritage, whereas those who gave lower scores and contributed less expressed indifference toward digital heritage. This also explains the data dispersion of Group A (Figure 12).
Fig. 12.
Fig. 12. Individual-level comparison between Group A and Group B in terms of psychological ownership: members of Group B tend to have a stronger sense of psychological ownership of the images contributed, because both the minimum value (min = 3) and the median value (\(\tilde{x}\) = 6.5) of Group B are greater than those of Group A. For the final 3D model, although the maximum value of Group A (max = 7) is greater than that of Group B, the data of Group A are more scattered, and the median value (\(\tilde{x}\) = 5) and the mean value (\(\overline{x}\) = 4.6) of Group B are greater than those of Group A (\(\tilde{x}\) = 4, \(\overline{x}\) = 4.4). Hence, Group B experienced stronger feelings of psychological ownership.

5.2.4 Future Intention.

We inspected participants’ future intentions by asking whether they were willing to take more images and whether they would recommend such crowdsourcing activities to their peers (Figure 13). We discovered that Group A held a more positive attitude. One possible reason is that every member of Group A contributed considerably less than any member of Group B, requiring significantly less time and effort, which left them more inclined to contribute further. This prediction was supported by the interviews, in which many members of Group A admitted that they knew they could have contributed more during the activity. It also implies that the disorganised, independent crowdsourcing context that Group A followed did not stimulate participants’ full potential. While intentions are not equivalent to actual actions, we believe it is still a good sign that participants expressed willingness to participate again and to contribute more.
Fig. 13.
Fig. 13. Individual-level comparison between Group A and Group B in terms of their future intention: Group A expressed more willingness to take more pictures, because its mean value (\(\overline{x}\) = 5.6) is greater than that of Group B, despite the two groups having the same median value (\(\tilde{x}\) = 5.5). Group A is also more willing to recommend the activity to friends, because both its mean value (\(\overline{x}\) = 5.4) and its median value (\(\tilde{x}\) = 5.5) are larger than those of Group B (\(\overline{x}\) = 4.6, \(\tilde{x}\) = 4.5).
Another reason emerged in the interviews with Group B. One member of Group B gave the lowest scores to both ‘Future Intention’ questions, explaining: “I have already spent too much time”, “I think I have contributed enough”, “I don’t think any of my friends like digital heritage or any related area”. Such responses are entirely understandable, since participants were volunteering extra time with no monetary reward.

5.2.5 Cross-Evaluation.

To gain more insight, we cross-evaluated responses from follow-up questionnaires and semi-structured interviews. Figure 8 shows that the minimum number of images a Group B member contributed (n = 55) is greater than the maximum number a Group A member contributed (n = 26). Even participants in Group B who were not interested in the contents of the task contributed more than those in Group A who were. When interviewed, these Group B participants explained that they continued taking pictures because the other members of the group were still doing so, and that they constantly adjusted their shooting angles because they could see others taking pictures from different angles at different locations. This is a sign of vicarious learning and vicarious reinforcement through observation [7]. We can conclude that the synchronous on-site activity creates a social context where participants can observe and exchange information, even without explicit communication.
Some participants in Group B individually contributed more than 100 images (the total number of images Group A contributed as a group was only 144). However, unlike those in Group A who felt they owned both the images and the 3D model entirely (i.e., rated a score of 7 in Figure 12), the major contributors in Group B did not consider that they owned the 3D model in full. As they mentioned in the interviews, “other members worked really hard as well”, “I saw people went upstairs in nearby buildings so that they can capture angles from the top”, “I think other people contributed more than I did”. Such recognition relieved contributors’ excessive possessiveness, resulting in positive psychological ownership.
All members of Group B expressed appreciation for their teammates’ contributions even though no explicit communication occurred during the activities. As they responded, “I did not really talk to other participants since we did not know each other”. The crowd-based team was self-organised, without a hierarchical structure or any leader roles; members did not need to give or receive orders. Since task interdependence (i.e., image acquisition) was low, members did not need to communicate verbally. Nevertheless, implicit team coordination [46] existed, because members of Group B could observe what others were doing and adjust their behaviours accordingly.
Although, in follow-up interviews, all participants of both groups stated that they understood the importance of reconstructing photogrammetric 3D models, some noteworthy points can be highlighted. Members of Group A demonstrated varying levels of comprehension of mass photogrammetry as a topic, depending on their inherent interest in or familiarity with the subject. Members of Group B, however, appeared more familiar with the general photogrammetric process and more committed to the goal. Regardless of participants’ inherent interests or motivation, the crowd-based team structure that Group B followed had positive effects on participant behaviours and performance.
Additionally, members of Group B were more concerned with credit, even though they knew they could not own the 3D model. We spotted a trend: the more they contributed, the more they desired to be appreciated. “It would be nice if my name could be mentioned somewhere near the 3D reconstruction”, “I think the project initiators should give credits to people who have contributed”. The fact that we have not yet integrated any acknowledgement functionality may therefore hinder future willingness to some extent, highlighting the need to include such features as part of the mechanism in the future.
Our cross-evaluation disclosed that volunteers with little or no intrinsic motivation were more reluctant to participate in the future. This indicates that even if crowd-based team structures do promote individuals’ behaviour and performance, inherent attitudes are difficult to change. Motivation and coordination issues can be challenging for crowdsourcing sustainability, and screening for the right audience may be an important step for the success of ongoing crowdsourcing campaigns.
The efforts participants expended to contribute images can be viewed as prosocial behaviours, since there were no tangible rewards. Prosocial behaviours are directly driven by prosocial motivations, which have been proven to be a reliable predictor of performance and productivity [24, 28, 43]. Overall, the potential of Group B members was effectively stimulated throughout the mass photogrammetry activity. Thus, the results suggest that the crowd-based team structures successfully promoted participants’ prosocial motivations and prosocial behaviours.
To conclude, we can make a rational inference concerning the effects crowd-based team structures have on crowd behaviours and crowdsourcing performance (Figure 14), discussed further in the Discussion section. The model framework created as a result of this study can be used for future voluntary collaborative crowdsourcing activities.
Fig. 14.
Fig. 14. A chart depicting our identification of the effects of crowd-based self-organised team structures on collective behaviour. Forming an ad-hoc team requires coordination. Team structures provide a social context different from the asynchronous crowdsourcing that encourages isolated contributions from disorganised individuals. Such synchronous on-site collaborative crowdsourcing context allows for implicit communications and vicarious learning via observation, enhancing team cognition and positive psychological ownership. The illustration shows participants’ prosocial motivations were stimulated throughout the crowdsourcing, leading to stronger dedication and higher productivity.

6 Discussion

The present research successfully demonstrated that self-organised team structures can leverage collective behaviours for collaborative crowdsourcing and facilitate mass photogrammetry of 3D objects. Through our experimental study, we have validated the hypotheses raised at the beginning of the article.
The disorganised crowdsourcing that Group A conducted is the pervasive approach in most crowdsourcing scenarios: it implicitly aggregates isolated contributions by encouraging individuals to explore the tasks freely and contribute asynchronously. The results (Figure 4) validated H1 - The disorganised group of individuals working asynchronously will produce an incomplete 3D model. Group B followed the self-organised synchronous collaborative crowdsourcing mechanism, which integrates crowd-based team structures for effective collaboration. The results (Figure 5) verified H2 - The crowd-based self-organised team working synchronously will produce a complete 3D model. The results (Figure 7) obtained by combining data from both groups validated H3 - The combination of imageries generated by the disorganised group and the crowd-based self-organised team will produce a complete 3D model that is better than that of each group separately.
Our experiments revealed that a crowd-based, self-organised team working in synchrony could effectively facilitate crowdsourcing 3D models in terms of model completeness. It also alleviated, to some degree, participant fatigue in the repeated photo-taking practice. Conventional crowdsourcing, which engages a group of disorganised participants working independently, was sub-optimal because the aggregation of isolated contributions produced only a partially reconstructed 3D model. The model generated from the combined contributions of both groups achieved the best detail and completeness, but most of the credit should be attributed to the contributions of Group B - the crowd-based self-organised team, which performed better and more consistently than the traditional crowdsourcing approach throughout our fieldwork.
We further investigated why a crowd-based, self-organised team positively affected the crowdsourced models. Our crowdsourcing was voluntary and location-dependent: it provided no tangible rewards and required spontaneous participants to conduct on-site tasks. Compared to the disorganised, independent crowdsourcing context (Group A), forming an ad-hoc team required relatively high coordination costs. Although scheduling could be flexible, lighting and weather conditions were limitations that could greatly affect the quality of the final reconstruction. Additionally, non-experts were recruited to compose a group of amateurs similar to future volunteers from any community. No administrator or team leader was assigned, and no formal rules or established norms were set; thus, the team was self-organised with fluid team boundaries, entailing less pressure and more freedom and flexibility.
After the team was formed, synchronous collaboration was scheduled at the designated location. The emergent collaboration created a social context different from traditionally individual-based crowdsourcing scenarios (Group A). Participants’ physical presence allowed them to observe peer activities and adapt their own through vicarious learning. Through implicit communication, the exchanged information reinforced a shared mental model pertaining to their responsibilities and collective goal. Evidence of this can be seen in how members of Group B consistently gave credit to their teammates; they also had higher familiarity with the photogrammetric process and became more concerned about the 3D reconstruction. As more time and effort were invested, participants experienced psychological ownership of the content they contributed, further stimulating their prosocial motivations. Participants’ prosocial behaviours in turn led to higher dedication and more contributions. Such a positive cycle is beneficial for mass photogrammetry.
Although crowd-based team structures can promote individual behaviour and performance, inherent attitudes are difficult to change. Participants with little or no intrinsic motivation are unlikely to participate or contribute in future activities. This indicates that, in the long run, there will be challenges in sustaining such activities in terms of coordination and motivation, and it implies that the right kinds of volunteers will need to be targeted, as this can be a critical step towards success in collective performance. We also learnt that participants can be motivated by simple acknowledgements, such as giving credit where credit is due; these can be a viable approach for maintaining participant interest.
Despite the positive findings, there are limitations and issues that can be further investigated and improved. Future research can explore influential factors such as task complexity, team composition and personality traits, technology mediation, and remote work. Our participants were limited to university demographics, who may have similar experiences and interests given their age group. Further work could be conducted within a larger context with more diversified demographics so that our findings can be generalised to other scenarios, especially where demographic factors may affect team collaboration. In the present research, the crowd-based team was randomly assigned by the initiator; results may improve if team formation is bottom-up, i.e., self-selection, where participants can decide whether they want to work alone or collaboratively.
For our data collection and analysis, we collected the image contributions via the Web App, tracked users’ behavioural data from the metadata stored in the images, and gathered self-reported feedback using questionnaires and semi-structured interviews. Although we regenerated the footprints of participants after the campaigns, it would have been better to have observers during the fieldwork so that behavioural data could be documented in situ. This may help researchers gain a deeper understanding of the correlations between specific behaviours and the cognition that might have shaped them. Future research can also include timestamps during collaboration: while we recorded the total time participants spent on the tasks, a more accurate record of time may help uncover collaborative dynamics at both the individual and team levels. Such awareness can assist initiators in identifying potential inhibiting factors in organisational structures within specific contexts. Furthermore, more appropriate and accurate prescriptive measurements (such as bipartite network analysis) can be used, with particular criteria revealing structural features embedded within the collaboration network and its temporal dynamics.
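The bipartite network analysis suggested above could start from participant-viewpoint contribution pairs. A minimal sketch, assuming viewpoints have already been discretised (e.g., into grid cells like those behind Figure 6); the function and variable names are illustrative, not part of the paper's method:

```python
from collections import defaultdict
from itertools import combinations

def project_onto_participants(contributions):
    """Project a bipartite participant-viewpoint network onto participants:
    two contributors are linked with weight equal to the number of
    viewpoints they both covered. High weights suggest redundant coverage;
    unlinked participants covered unique viewpoints."""
    covered_by = defaultdict(set)
    for participant, viewpoint in contributions:
        covered_by[viewpoint].add(participant)
    edge_weights = defaultdict(int)
    for participants in covered_by.values():
        for a, b in combinations(sorted(participants), 2):
            edge_weights[(a, b)] += 1
    return dict(edge_weights)
```

With timestamps attached to each pair, the same projection could be computed per time window to expose the temporal dynamics mentioned above.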
In this experiment, we demonstrated through data that the disorganised group of individuals working asynchronously could improve overall model completeness by voluntarily adding more pictures with detailed features. Since the members of this group expressed a higher interest in taking more pictures and a considerably higher willingness to recommend the activity to friends, we can explore the use of this factor in organising asynchronous collaborative crowdsourcing in the future. For instance, a future crowdsourcing campaign could visualise the reconstruction process in real time, starting from an initially incomplete model and refining it as more participants contribute photos. We could also explore approaches to facilitate online asynchronous collaboration via technology-mediated communication strategies. Since participation is often spontaneous, it is important to investigate sustained participation in ongoing crowdsourcing campaigns.

7 Conclusion

This article underscores the importance of integrating team collaboration mechanisms into crowdsourcing activities. We proposed a collaborative mass photogrammetry model. Our results demonstrate that crowd-based self-organised team structures can effectively reduce coordination costs and improve crowdsourcing performance in terms of overall time spent, effort devoted, and model completeness. Recent crowdsourcing literature has mainly concentrated on examining overall campaign designs to identify influential motivational factors to be modified or leveraged in future work. Extending these prior results, our research not only discusses how self-organised teamwork can motivate crowds to participate, but also pioneers the understanding of the collaborative dynamics among participants during crowdsourcing activities. Such an understanding can provide reflective assessments and help leverage collective behaviour within task-oriented collaborative activities. Finally, the present research combines the collective work of the synchronous crowd-based self-organised team with that of the asynchronous disorganised group. The dynamic interplay between the groups is beneficial for overall performance and can be investigated further to formulate a more comprehensive crowdsourcing framework. Our approach can be complementary to, and an extension of, current crowdsourcing approaches for improving productivity and data quality with as little coordination cost as possible.

Acknowledgments

This work was carried out at the NVIDIA Joint-Lab on Mixed Reality, NVIDIA Technology Centre at the University of Nottingham Ningbo China. The authors wish to express their gratitude to CapturingReality, and Epic Games for their support with RealityCapture, the photogrammetry software that made this research possible. Our sincere thanks to all the volunteers for participating in this exciting and worthwhile project. Finally, our special appreciation to the NVIDIA AI Technology Center, Singapore, for their continued support and knowledge exchange in technologies we use for our global cultural heritage research.

References

[1]
Munir Abbasi, Panayiota Vassilopoulou, and Lampros Stergioulas. 2017. Technology roadmap for the creative industries. Creative Industries Journal 10, 1 (Jan.2017), 40–58.
[2]
Irene Aicardi, Filiberto Chiabrando, Andrea Maria Lingua, and Francesca Noardo. 2018. Recent trends in cultural heritage 3D survey: The photogrammetric computer vision approach. 257–266.
[3]
Harini Alagarai Sampath, Rajeev Rajeshuni, and Bipin Indurkhya. 2014. Cognitively inspired task design to improve user performance on crowdsourcing platforms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 3665–3674.
[4]
Mohammad Allahbakhsh, Samira Samimi, Hamid Reza Motahari-Nezhad, and Boualem Benatallah. 2014. Harnessing implicit teamwork knowledge to improve quality in crowdsourcing processes. In Proceedings - IEEE 7th International Conference on Service-Oriented Computing and Applications, SOCA 2014, Vol. 13. Institute of Electrical and Electronics Engineers Inc., 17–24.
[5]
Dimitra Anastasiou and Rajat Gupta. 2011. Comparison of crowdsourcing translation with Machine Translation. Journal of Information Science 37, 6 (Dec.2011), 637–659.
[6]
Hayward P. Andres. 2013. Team cognition using collaborative technology: A behavioral analysis. Journal of Managerial Psychology 28, 1 (Jan.2013), 38–54.
[7]
Albert Bandura. 1965. Vicarious processes: A case of no-trial learning. 1–55.
[8]
Daren C. Brabham. 2010. Moving the crowd at threadless: Motivations for participation in a crowdsourcing application. Information Communication and Society 13, 8 (Dec.2010), 1122–1145.
[9]
Helen Campbell Pickford, Genevieve Joy, and Kate Roll. 2016. Psychological ownership: Effects and applications. Oxford Saïd Business School 32 (Oct.2016).
[10]
Jess Chandler, Gabriele Paolacci, and Pam Mueller. 2013. Risks and rewards of crowdsourcing marketplaces. In Handbook of Human Computation (Nov. 2013).
[11]
Peng Peng Chen, Hai Long Sun, Yi Li Fang, and Jin Peng Huai. 2018. Collusion-proof result inference in crowdsourcing. Journal of Computer Science and Technology 33, 2 (Mar.2018), 351–365.
[12]
Danzhao Cheng and Eugene Ch’ng. 2020. Potentials for learning history through role-playing in virtual reality. In ACHS 2020 FUTURES - Association of Critical Heritage Studies 5th Biennial Conference.
[13]
Eugene Ch’ng, Shengdan Cai, Tong Evelyn Zhang, and Fui Theng Leow. 2019. Crowdsourcing 3D cultural heritage: Best practice for mass photogrammetry. Journal of Cultural Heritage Management and Sustainable Development 9, 1 (Jan.2019), 24–42.
[14]
Eoin Cullina, Kieran Conboy, and Lorraine Morgan. 2015. Measuring the crowd - A preliminary taxonomy of crowdsourcing metrics. Proceedings of the 11th International Symposium on Open Collaboration, OPENSYM 2015 (2015).
[15]
CyArk. 2003. Digitally record, archive and share cultural heritage. https://www.cyark.org/.
[16]
Linus Dahlander and Henning Piezunka. 2020. Why crowdsourcing fails. Journal of Organization Design 9, 1 (Dec.2020), 24.
[17]
Florian Daniel, Pavel Kucherbaev, Cinzia Cappiello, Boualem Benatallah, and Mohammad Allahbakhsh. 2018. Quality control in crowdsourcing: A survey of quality attributes, assessment techniques, and assurance actions. arxiv:1801.02546
[18]
Indika Dissanayake, Jie Zhang, and Bin Gu. 2015. Task division for team success in crowdsourcing contests: Resource allocation and alignment effects. Journal of Management Information Systems 32, 2 (2015), 8–39.
[19]
Sara Loughran Dommer and Vanitha Swaminathan. 2013. Explaining the endowment effect through ownership: The role of identity, gender, and self-threat. Journal of Consumer Research 39, 5 (Feb.2013), 1034–1050.
[20]
Pierre Drap, Julien Seinturier, and David Scaradozzi. 2007. Photogrammetry for virtual exploration of underwater archaeological sites. In Proceedings of the 21st International Symposium, CIPA. http://www.venus-project.eu.
[21]
Aviv Elor and Samantha Conde. 2020. Exploring the creative possibilities of infinite photogrammetry through spatial computing and extended reality with wave function collapse serious games view project. https://www.researchgate.net/publication/344157899.
[22]
Europeana. 2008. Discovering inspiring Europeana’s cultural heritage. https://www.europeana.eu/en.
[23]
Ju Fan, Guoliang Li, Beng Chin Ooi, Kian-lee Tan, and Jianhua Feng. 2015. iCrowd: An adaptive crowdsourcing framework. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, Vol. 2015-May. ACM, New York, NY, USA, 1015–1030.
[24]
Adam M. Grant and Justin M. Berg. 2011. Prosocial Motivation at Work: When, Why, and How Making a Difference Makes a Difference. Oxford University Press.
[25]
Bin Guo, Huihui Chen, Zhiwen Yu, Wenqian Nan, Xing Xie, Daqing Zhang, and Xingshe Zhou. 2017. TaskMe: Toward a dynamic and quality-enhanced incentive mechanism for mobile crowd sensing. International Journal of Human Computer Studies 102 (Jun.2017), 14–26.
[26]
Mark Hedges and Stuart Dunn. 2018. Motivations and benefits. In Academic Crowdsourcing in the Humanities. Elsevier, 87–103.
[27]
Guido Hertel. 2011. Synergetic effects in working teams. Journal of Managerial Psychology 26, 3 (Mar.2011), 176–184.
[28]
Jia Hu and Robert C. Liden. 2015. Making a difference in the teamwork: Linking team prosocial motivation to team processes and effectiveness. Academy of Management Journal 58, 4 (Aug.2015), 1102–1127.
[29]
Yun Huang, Corey White, Huichuan Xia, and Yang Wang. 2017. A computational cognitive modeling approach to understand and design mobile crowdsourcing for campus safety reporting. International Journal of Human-Computer Studies 102 (Jun.2017), 27–40.
[30]
Ata Jami, Maryam Kouchaki, and Francesca Gino. 2021. I own, so I help out: How psychological ownership increases prosocial behavior. Journal of Consumer Research 47, 5 (Feb.2021), 698–715.
[31]
Jiuchuan Jiang, Bo An, Yichuan Jiang, Chenyan Zhang, Zhan Bu, and Jie Cao. 2021. Group-oriented task allocation for crowdsourcing in social networks. IEEE Transactions on Systems, Man, and Cybernetics: Systems 51, 7 (Jul.2021), 4417–4432.
[32]
Yuanyuan Jiao, Yepeng Wu, and Linna Hao. 2021. Does crowdsourcing lead to better product design: The moderation of network connectivity. Journal of Business and Industrial Marketing ahead-of-print, (Aug.2021).
[33]
Jared Katz. 2017. Digitized Maya music: The creation of a 3D database of Maya musical artifacts. Digital Applications in Archaeology and Cultural Heritage 6 (Sep.2017), 29–37.
[34]
Nicolas Kaufmann, Thimo Schulze, and Daniel Veit. 2011. More than fun and money: Worker motivation in crowdsourcing. A study on Mechanical Turk. In Proceedings of the Seventeenth Americas Conference on Information Systems (AMCIS 2011). Paper 340.
[35]
Gabriella Kazai. 2011. In search of quality in crowdsourcing for search engine evaluation. In Proceedings of the 33rd European Conference on Advances in Information Retrieval, LNCS Vol. 6611. 165–176.
[36]
Shashank Khanna, Aishwarya Ratan, James Davis, and William Thies. 2010. Evaluating and improving the usability of mechanical turk for low-income workers in India. In Proceedings of the First ACM Symposium on Computing for Development - ACM DEV’10. ACM Press, New York, NY, USA, 1.
[37]
Steve W. J. Kozlowski and Bradford S. Bell. 2008. Team learning, development, and adaptation. In V. I. Sessa and M. London (Eds.), Work Group Learning: Understanding, Improving and Assessing How Groups Learn in Organizations. Taylor and Francis Group/Lawrence Erlbaum Associates, 15–44. https://www.taylorfrancis.com/chapters/edit/10.4324/9780203809747-7/team-learning-development-adaptation-steve-kozlowski-bradford-bell.
[38]
Huigang Liang, Meng Meng Wang, Jian Jun Wang, and Yajiong Xue. 2018. How intrinsic motivation and extrinsic incentives affect task effort in crowdsourcing contests: A mediated moderation model. Computers in Human Behavior 81 (Apr.2018), 168–176.
[39]
Thomas Ludwig, Christoph Kotthaus, Christian Reuter, Sören van Dongen, and Volkmar Pipek. 2017. Situated crowdsourcing during disasters: Managing the tasks of spontaneous volunteers through public displays. International Journal of Human Computer Studies 102 (Jun.2017), 103–121.
[40]
Thomas Luhmann, Stuart Robson, Stephen Kyle, and Jan Boehm. 2019. Close-Range Photogrammetry and 3D Imaging. De Gruyter.
[41]
Roman Lukyanenko, Jeffrey Parsons, Yolanda F. Wiersma, and Mahed Maddah. 2019. Expecting the unexpected: Effects of data collection design choices on the quality of crowdsourced user-generated content. MIS Quarterly 43, 2 (Jan.2019), 623–647.
[42]
Ioanna Lykourentzou, Shannon Wang, Robert E. Kraut, and Steven P. Dow. 2016. Team dating: A self-organized team formation strategy for collaborative crowdsourcing. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, New York, NY, USA, 1243–1249.
[43]
Benedikt Morschheuser, Juho Hamari, and Alexander Maedche. 2019. Cooperation or competition – When do people contribute more? A field experiment on gamification of crowdsourcing. International Journal of Human-Computer Studies 127 (Jul.2019), 7–24.
[44]
M. Mudge, Carla Schroer, Graeme Earl, Kirk Martinez, Hembo Pagi, Corey Toler-Franklin, Szymon Rusinkiewicz, Gianpaolo Palma, Melvin Wachowiak, M. Ashley, N. Matthews, Tommy Noble, and M. Dellepiane. 2010. Principles and practices of robust, photography-based digital imaging techniques for museums. In The 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage VAST.
[45]
Mark Mudge, Michael Ashley, and Carla Schroer. 2007. A digital future for cultural heritage. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, Vol. 36.
[46]
Kengo Nawata, Hiroyuki Yamaguchi, and Mika Aoshima. 2020. Team implicit coordination based on transactive memory systems. Team Performance Management: An International Journal 26, 7/8 (Aug.2020), 375–390.
[47]
Ikrom Nishanbaev. 2020. A web repository for geo-located 3D digital cultural heritage models. Digital Applications in Archaeology and Cultural Heritage 16 (Mar.2020).
[50]
Jon L. Pierce, Tatiana Kostova, and Kurt T. Dirks. 2003. The state of psychological ownership: Integrating and extending a century of research. Review of General Psychology 7, 1 (Mar.2003), 84–107.
[51]
Luiz Fernando Silva Pinto and Carlos Denner dos Santos. 2018. Motivations of crowdsourcing contributors. Innovation and Management Review 15, 1 (Jun.2018), 58–72.
[52]
Polymath Projects. 2009. Massively collaborative mathematical projects. https://polymathprojects.org/.
[53]
Ofilia I. Psomadaki, Charalampos A. Dimoulas, George M. Kalliris, and Gregory Paschalidis. 2019. Digital storytelling and audience engagement in cultural heritage management: A collaborative model based on the Digital City of Thessaloniki. Journal of Cultural Heritage 36 (Mar.2019), 12–22.
[54]
Habibur Rahman, Senjuti Basu Roy, Saravanan Thirumuruganathan, Sihem Amer-Yahia, and Gautam Das. 2019. Optimized group formation for solving collaborative tasks. The VLDB Journal 28, 1 (Feb.2019), 1–23.
[55]
Bahareh Rahmanian and Joseph G. Davis. 2014. User interface design for crowdsourcing systems. In Proceedings of the Workshop on Advanced Visual Interfaces AVI. Association for Computing Machinery, 405–408.
[56]
Roni Reiter-Palmon, Ben Wigert, and Triparna de Vreede. 2012. Team creativity and innovation: The effect of group composition, social processes, and cognition. In Handbook of Organizational Creativity. Elsevier, 295–326.
[57]
Fabio Remondino. 2011. Heritage recording and 3D modeling with photogrammetry and 3D scanning. Remote Sensing 3, 6 (Jun.2011), 1104–1138.
[58]
Jie Ren, Pinar Ozturk, and William Yeoh. 2019. Online crowdsourcing campaigns: Bottom-up versus top-down process model. Journal of Computer Information Systems 59, 3 (May2019), 266–276.
[59]
Christoph Riedl and Anita Williams Woolley. 2017. Teams vs. crowds: A field test of the relative contribution of incentives, member ability, and emergent collaboration to crowd-based problem solving performance. Academy of Management Discoveries 3, 4 (Dec.2017), 382–403.
[60]
Markus Rokicki, Sergej Zerr, and Stefan Siersdorfer. 2015. Groupsourcing: Team competition designs for crowdsourcing. In Proceedings of the 24th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 906–915.
[61]
Neal Schmitt, Jose M. Cortina, Michael J. Ingerick, and Darin Wiechmann. 2003. In Handbook of Psychology, Vol. 12: Industrial and Organizational Psychology. 77–105.
[62]
Xiaoxiao Shi, Richard Evans, Wei Pan, and Wei Shan. 2021. Understanding the effects of personality traits on solver engagement in crowdsourcing communities: A moderated mediation investigation. Information Technology and People, ahead-of-print (Mar. 2021).
[63]
The British Museum. 2014. Sketchfab@britishmuseum. https://sketchfab.com/.
[64]
The Palace Museum. 2001. Digital Heritage Database. https://digicol.dpm.org.cn/.
[65]
Nestor Tsirliganis, George Pavlidis, Anestis Koutsoudis, Despina Papadopoulou, Apostolos Tsompanopoulos, Konstantinos Stavroglou, Zacharenia Loukou, and Christodoulos Chamzas. 2004. Archiving cultural objects in the 21st century. Journal of Cultural Heritage 5, 4 (Oct.2004), 379–384.
[66]
Maja Vukovic. 2009. Crowdsourcing for enterprises. In 2009 Congress on Services - I. IEEE, 686–692.
[67]
Rong Wang. 2020. Marginality and team building in collaborative crowdsourcing. Online Information Review 44, 4 (Apr.2020), 827–846.
[68]
Bernhard Weber and Guido Hertel. 2007. Motivation gains of inferior group members: A meta-analytical review. Journal of Personality and Social Psychology 93, 6 (Dec.2007), 973–993.
[69]
Wei Wu and Xiang Gong. 2020. Motivation and sustained participation in the online crowdsourcing community: The moderating role of community commitment. Internet Research 31, 1 (Oct.2020), 287–314.
[70]
Naci Yastikli. 2007. Documentation of cultural heritage using digital photogrammetry and laser scanning. Journal of Cultural Heritage 8, 4 (Sep.2007), 423–427.
[71]
H. M. Yilmaz, M. Yakar, S. A. Gulec, and O. N. Dulgerler. 2007. Importance of digital close-range photogrammetry in documentation of cultural heritage. Journal of Cultural Heritage 8, 4 (2007), 428–433.
[72]
Xuanhui Zhang, Shijie Song, Yuxiang (Chris) Zhao, and Qinghua Zhu. 2018. Motivations of volunteers in the Transcribe Sheng project: A grounded theory approach. Proceedings of the Association for Information Science and Technology 55, 1 (Jan.2018), 951–953.
[73]
Yuxiang Zhao and Qinghua Zhu. 2014. Evaluation on crowdsourcing research: Current status and future direction. Information Systems Frontiers 16, 3 (2014), 417–434.

Cited By

  • (2023) Using a digital participatory approach to facilitate inclusivity in Jordanian heritage sites: Stakeholders’ requirements and a proposed system. Architecture Papers of the Faculty of Architecture and Design STU 28, 3, 3–9. DOI: 10.2478/alfa-2023-0014. Online publication date: 20-Sep-2023.
  • (2022) 3D modeling of wooden artifacts using crowdsourced unmanned aerial vehicle data: A case study of the Trojan Horse. Mobilya ve Ahşap Malzeme Araştırmaları Dergisi 5, 2, 155–166. DOI: 10.33725/mamad.1207416. Online publication date: 26-Dec-2022.
Published In

Journal on Computing and Cultural Heritage, Volume 16, Issue 1
March 2023, 437 pages
ISSN: 1556-4673
EISSN: 1556-4711
DOI: 10.1145/3572829

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 24 December 2022
Online AM: 27 October 2022
Accepted: 08 June 2022
Revised: 01 June 2022
Received: 16 December 2021
Published in JOCCH Volume 16, Issue 1


Author Tags

  1. Crowdsourcing
  2. mass photogrammetry
  3. team structures
  4. collaboration dynamics
  5. crowd behaviour
  6. task allocation
  7. cultural heritage

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • AHRC ‘Shaping the Connected Museum II’
