1. Introduction
Revealing the network structure and thereafter reconstructing the ongoing network dynamics from observational data is a fundamental inverse problem in network science [
1,
2]. As the reconstruction of complex networked systems from data observed in dynamical processes plays an essential role in practical applications aimed at the understanding and control of networked dynamics, it has recently attracted increasing attention across a wide range of research fields [
3]. Prominent applications range from discovering genetic regulatory networks from gene expression data in computational biology [
4,
5], uncovering functional or structural brain networks from sensed data in neuroscience [
6,
7], reconstructing contact networks from contagion data in epidemiology [
8,
9], to revealing hidden social connections from social media data on information cascades in social science [
10,
11]. In the typical setting investigated in the literature, observational data for reconstructing network structure and inferring parameters of dynamical processes are given as time series [
3]. Most previous research has focused on the network reconstruction problem under the assumption that the entire time series of the network dynamics is accessible to ensure sufficient information is provided for accurate network inference. However, as investigated by the work of [
8], in many real-world cases such as neuron cascades and epidemic spreading, the first stage of propagation is hard to measure and only a limited number of data points will be observed. Despite the experimental or technical limitations for data collection, obtaining high-precision estimates from less data is always desirable, especially when measuring the dynamical quantities is costly [
12]. Motivated by the dilemma between the availability of observational data and the accuracy of inference, in this paper, we explore the issue of accelerating the convergence of inference by discovering more informative observational data. However, different from previous literature such as [
8], we develop and explore a framework of how convergence of estimates can be accelerated through targeted interaction with the networked dynamics. Our framework thus supposes that the observer can influence the dynamical process on the network and we explore how such influence can be optimally deployed to improve the inference of unknown parameters of the dynamics.
To derive dynamical process parameters or reconstruct network topology from observational data, it is often necessary to draw on domain-specific expertise [
3]. Here, we place the problem of speeding up inference in the context of opinion dynamics using the well-known competitive influence maximization framework [
13,
14], which studies the competition among external controllers who aim to maximally spread their opinions in the network through strategically distributing their influencing resources. Specifically, a common assumption while investigating the competitive influence maximization problem is that the external controllers are unaware of the strategy being used by their opponents during the competition. However, as, e.g., shown in [
15], knowing the opponent’s strategy allows for a better design of influence allocations. For instance, in the setting of [
15] when the controller has more resources than its opponent, a good strategy is to target the same agents as the opponent to shadow the opponent’s influence. Otherwise, the controller should avoid wasting resources and rather target agents not targeted by the opponent. Making use of such heuristics, however, presupposes knowledge of the opponent’s strategy. Moreover, as there are inherent time limits in many practical applications of competitive influence maximization [
16,
17], there may be limited time to learn from observation of opponents. Indeed, the need to infer the opponent's behaviour within a short time frame arises in many real-world contexts, such as identifying the source of fake news in a social network as quickly as possible to stop it from spreading [
18], analysing the provenance of extreme opinions to prevent radicalization [
19], and uncovering the strategy of the opposing political parties before a given deadline to gain advantages in the election [
20]. Therefore, accelerating inference to obtain better estimates of an opponent's strategy from dynamical data within a short time frame is an important problem relevant to competitive influence maximization.
To be more concrete, in this paper we explore the problem of opponent strategy inference in the setting of the competitive voting dynamics as studied in [
15,
16,
21]. This choice is motivated by the popularity of the voter model in opinion dynamics as well as its high levels of tractability [
22]. Specifically, in the voting dynamics, opinions are represented as binary variables, and each agent in the network holds one of two opinions. On top of the internal agents, following the work of [
15,
16,
23], the external controllers exert their influence on the network by building unidirectional connections with agents, in which the intensity of their targeting is represented by link weights. Opinions propagate according to the rule that agents flip their opinion states with probabilities proportional to the number of neighbouring agents holding the opposing opinion and the link weights from opposing controllers [
21]. The problem we are interested in is how one of the controllers can change its control allocations to accelerate its learning of the opposing controller's targeting through observation of the voting dynamics.
Since we model external controllers as exerting influence by building unidirectional connections with agents in the network, the connections from the external controller can also be viewed as edges that constitute part of the network topology. Therefore, our research problem of opponent strategy inference is closely related to the topic of network structure inference. There is a rich literature on reconstructing network structure from information flows [
3], and a detailed review of the related work within the domains of epidemiology and information spreading is given in
Section 2. Most relevant to our modelling approach, [
11,
24,
25] infer the network topology from time series of binary-state data. More specifically, [
11,
24] treat the connections between agents as binary variables, and transform the network inference problem into identifying the existence of binary links between agents. Hence, these approaches are unsuitable for inferring continuous interaction intensities between agents and from the external controllers. Further to the works of [
11,
24], Chen and Lai [
25] remove the binary restriction and consider the network inference problem in a continuous space by developing a data-driven framework to predict link weights. Nevertheless, none of these works investigate the network inference problem from the perspective of manipulating the opinion diffusion process to accelerate the convergence of estimation, which is an important lever if one wants to obtain an estimate with an accuracy guarantee within a short and limited observation time.
To address the current gaps in accelerating the convergence of inference, in this paper, we follow the setting of our previous work [
26], in which we relate the problem of accelerating opponent strategy inference with network control. By doing so, we assume an active strategic controller who tries to minimise the uncertainty of inference of an opponent’s strategy by optimally allocating its control resources to agents in the network based on the voter model. In other words, we explore how a controller can modify network dynamics such that the influence of opponents becomes easier to identify. Note that we always assume only limited resources are available for the active controller to interfere with the network dynamics, since for most real-world applications [
14,
27], there are natural resource constraints.
In the following, our main interest is in designing heuristic algorithms for allocating limited resources of the active controller. This will enable the generation of more informative observational data during the opinion propagation process and thereby accelerate the convergence of the estimations of the opponent’s strategy. Our paper is based on results that have previously been presented at the Conference on Complex Networks and their Applications 2021 [
26]. Beyond a more detailed exposition of the problem, we additionally extend the previously presented results in two important ways. First, we discuss the predictive ability of an optimizing controller in the face of different opponent strategies. Second, we propose an improved algorithm (which we name the two-step-ahead optimization). In contrast to what we presented in [
26], this new method also accounts for indirect influence between agents in the optimization of resource allocations.
Our main contributions are as follows: First, before our work of [
26], network inference in the field of information spreading had not been studied from the perspective of strategically interacting with the opinion dynamics to speed up the process of inference. In this paper, we extend the results from [
26] and provide a systematic investigation of how to optimally deploy resources in order to maximally accelerate the opponent strategy inference. Second, we model the opinion propagation process for an individual agent in the network as a non-homogeneous Markov chain and further derive estimators of the opponent’s strategy via maximum likelihood estimation. We also provide uncertainty quantification of our estimators by using the variance deduced from the expectation of the second-order derivative of the likelihood function. This, in turn, is used to inform decisions on the optimal allocations and understand the process of inference acceleration. Third, we develop several heuristic algorithms for speeding up opponent strategy inference via minimizing the variance of estimators, and test the effectiveness of our algorithms in numerical experiments.
The key findings of our work are as follows. First, we demonstrate that it is possible to accelerate the inference process by strategically interacting with the network dynamics. Second, we consider two settings: One is accelerating the inference of the opponent strategy at a single node, when only the inferred node is controllable. The other is minimizing the variance of the opponent influence at the inferred node when both the inferred node and its neighbours are controllable. In the first setting, we find that the optimized resource allocation is inversely proportional to the sum of neighbouring opinion states. In the second setting, we observe two regimes of the optimized resource allocations depending on the amount of resources available to the active controller. If the active controller has very limited resources, then it should target the inferred node only. In contrast, if resources are large, a better strategy is to not target the inferred node, but instead focus only on neighbouring nodes. Third, in the scenario of inferring opponent strategies over entire networks, strategic allocations become increasingly important as more resources are available to the active controller. We also find that nodes with lower degrees that are targeted with smaller amounts of resources by the opponent will generally have a smaller variance in inference.
The structure of this paper is as follows.
Section 2 gives an overview of the state of the art in network structure inference.
Section 3 formalises the problem of accelerating opponent strategy inference for the voter model and presents heuristics for solving the opponent strategy inference problem.
Section 4 shows the corresponding results after applying the heuristics.
Section 5 summarises the main findings and discusses some ideas for future work.
2. Related Work
As our study is based on opinion dynamics, we first provide an overview of existing research from the closely related domain of reconstructing network structure in epidemiology and information spreading. Starting from the seminal work of Gomez-Rodriguez et al. [
28], inferring networks using maximum likelihood methods in this area has been extensively explored in a variety of scenarios. In Gomez-Rodriguez et al. [
28], the authors treat network structure inference as a binary optimization problem (i.e., whether or not there is an edge between two agents) and propose the NetInf algorithm based on the maximization of the likelihood of the observed cascades in a progressive cascade model [
29], where the opinion propagation occurs as a one-off process. To improve the performance of the NetInf algorithm in the progressive cascade model, Rodriguez and Schölkopf [
30] propose the MultiTree algorithm by including all directed trees in the optimization. In addition, algorithms have been developed to infer the intensity of connections by Braunstein et al. [
8] based on the susceptible–infected–recovered model, which is also a progressive cascade model. Moreover, some other works have incorporated prior knowledge about the network structure (e.g., sparsity [
31], motif frequency [
32], or community structure [
33]) to improve the performance of network inference given limited amounts of data.
In order to incorporate uncertainty in inference, several other works employ Bayesian inference using Markov chain Monte Carlo methods. Early works in the domain of epidemiology [
34,
35] treat the network model (e.g., an Erdős-Rényi random graph, or a scale-free network [
36]) as known, and use Bayesian inference to discover the network model parameters as well as diffusion parameters (e.g., the infection rate). However, the assumption of knowing the network model is too restrictive and, in most cases, inference of structural information is necessary. The most representative work of using Bayesian inference to reconstruct network structure from information cascades is the work by Gray et al. [
2], which has improved estimates of network structure, especially in the presence of noise or missing data, and is also based on the progressive cascade model. However, their work assumes that the adjacency matrix of the underlying graph is binary, and it is therefore not suitable for inferring the intensity of connections.
Most of the above-mentioned works reconstruct network structure from observations of information cascades or infection trees and are based on progressive cascade models. However, the assumption of progressive cascade models that once an agent gets infected its state remains unchanged is inappropriate for modelling opinion dynamics, as opinion states can switch back and forth in most cases. The exceptions that explore network structure based on non-progressive models (e.g., the voter model, the susceptible–infected–susceptible (SIS) model, the Ising model) are Barbillon et al. [
9], Li et al. [
24], Chen and Lai [
25] and Zhang et al. [
11]. In more detail, Barbillon et al. [
9] apply the matrix-tree theorem to infer the network structure based on a susceptible–infected–susceptible model. To preserve the representation of information cascades as a directed acyclic graph, as in works based on progressive cascade models, the information propagation is encoded as a matrix with $n \times m$ dimensions, where $n$ represents the number of individuals and $m$ is the length of the time series. Unlike Barbillon et al. [
9] and all works based on progressive cascade models, which require input sequences of agents with infection times sorted from a root and monotonically increasing, the works by Li et al. [
24], Chen and Lai [
25] and Zhang et al. [
11] reconstruct network structure from observations of binary-state dynamics. In more detail, Li et al. [
24] translate the network structure inference into a sparse signal reconstruction problem by linearization and solve it via convex optimization. Moreover, Chen and Lai [
25] develop a model combining compressive sensing and a clustering algorithm for network reconstruction. However, the above works only consider unidirectional infection (e.g., in the SIS model, a susceptible node in contact with an infected node becomes infected with a certain probability, whereas an infected node does not return to the susceptible state through contact with susceptible nodes but according to a systematic recovery rate). Instead, Zhang et al. [
11] solve the network inference problem by expectation maximization with a focus on the setting that two states are equivalent (as, e.g., in the voter model) and utilize bidirectional dynamics to calculate transition probabilities to reduce the amount of data needed for accurate estimation. However, this work treats an edge as a binary variable (i.e., the existence or absence of a link between two nodes), and it is not suitable for inferring the link weight between two agents.
To summarise, most works in the field of epidemiology and information propagation infer network structure from information cascades or infection trees, which correspond to directed acyclic graphs, and are not applicable to situations where opinions can change back and forth. Moreover, none of these works combines network control with network structure inference such that external controllers interact with the intrinsic dynamics of opinion propagation to elicit more information during inference.
3. Model Description and Methods
We consider a population of N agents exchanging opinions through a social network G. The social connections between agents are represented by an adjacency matrix $W = (w_{ij})$, with $w_{ij} = 1$ indicating the existence of a social link between agent i and agent j and $w_{ij} = 0$ otherwise. Note that agents i and j are called neighbours if there is a link between them. Moreover, we assume that each of the N agents holds a binary opinion at time t denoted as $s_i(t)$ ($s_i(t) \in \{0, 1\}$). In addition, opinion propagation through the social network follows the classic voter model [
22] where agents copy one of their neighbours' opinions according to a probability proportional to the weight of social connections.
On top of the classic voter model, following the works of [
15,
16,
21], we consider the existence of two external controllers, named controllers A and B. In more detail, controllers A and B are zealots who have fixed opinions $s_A(t) = 1$ and $s_B(t) = 0$ for all t. By building unidirectional and non-negatively weighted links $a_i(t) \ge 0$ and $b_i(t) \ge 0$ to agent i at time t, the two external controllers A and B exert their influence on the social network and therefore interact with the intrinsic opinion dynamics. Here, the sums of the link weights are subject to budget constraints, i.e., $\sum_{i=1}^{N} a_i(t) \le B_A$ and $\sum_{i=1}^{N} b_i(t) \le B_B$, where $B_A$ and $B_B$ are the total resources available to controllers A and B, respectively. The weighted links $a_i(t)$ and $b_i(t)$ are also taken into consideration in the opinion updating process. In more detail, we assume a parallel and discrete-time opinion updating for the whole population as follows: at time t, agent i ($i = 1, \dots, N$) updates its opinion to $s_i(t+1) = 1$ with probability
$$p_i(t) = \frac{\sum_j w_{ij} s_j(t) + a_i(t)}{d_i + a_i(t) + b_i(t)} \qquad (1)$$
and to $s_i(t+1) = 0$ with probability
$$1 - p_i(t) = \frac{\sum_j w_{ij} \big(1 - s_j(t)\big) + b_i(t)}{d_i + a_i(t) + b_i(t)}, \qquad (2)$$
where $d_i = \sum_j w_{ij}$ is the degree of node i. From the equations for $p_i(t)$ and $1 - p_i(t)$, note that the opinion transition probabilities are determined only by the neighbouring states of the updated agent and the weighted links from the controllers, and they are independent of the current opinion of the updated agent. For a better understanding of our framework an illustration is given in
Figure 1. Take agent i as an example and assume unit-strength connections between agents and from the controllers. Agent i in
Figure 1 is linked with three other agents (one of which holds opinion 0 and two of whom hold opinion 1), and is targeted by controller A. Therefore, in the next update, agent i will have probability $(2+1)/(3+1) = 3/4$ to stay in opinion 1 and probability $1/4$ to flip its opinion to 0.
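The update rule and the worked example above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper: the function names are ours, and unit link weights are assumed as in the Figure 1 example.

```python
import random

def p_opinion_one(neighbours_one, degree, a_i, b_i):
    """Probability that agent i holds opinion 1 after the next update,
    given the number of neighbours currently holding opinion 1, the
    agent's degree, and the link weights a_i (controller A, opinion 1)
    and b_i (controller B, opinion 0)."""
    return (neighbours_one + a_i) / (degree + a_i + b_i)

def update_opinion(neighbours_one, degree, a_i, b_i):
    """One stochastic update of agent i's binary opinion."""
    return 1 if random.random() < p_opinion_one(neighbours_one, degree, a_i, b_i) else 0

# Worked example from Figure 1: three neighbours, two of which hold
# opinion 1, and unit-strength targeting by controller A only.
p = p_opinion_one(neighbours_one=2, degree=3, a_i=1.0, b_i=0.0)
assert abs(p - 3 / 4) < 1e-12  # stays at opinion 1 with probability 3/4
```

Note that the transition probability does not depend on agent i's own current opinion, consistent with the update rule described above.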
From the perspective of the external controllers, they aim to maximize their influence by strategically allocating resources to agents in the network in the context of competitive influence maximization. According to [
15], knowing the opponent's strategy allows for an efficient budget allocation to maximise influence. However, even though it may be possible to directly observe agents' opinions at each time step, observing the strategies of controllers, i.e., whether an agent is targeted by an external controller, or even how strong the intensity of influence from the controllers is, is often very challenging [
37]. For instance, considering opinion propagation on social media, as users adopt a new opinion, they may post it without mentioning the source. Thus, we only observe the time when the user's opinion changed, but not who it was influenced by.
To solve this problem of opponent-strategy reconstruction from observable data, we model the updating process of agent i ($i = 1, \dots, N$) as a non-homogeneous Markov chain [
38] where the Markov property is retained but the transition probabilities $p_i(t)$ and $1 - p_i(t)$ depend on time. Further to this formalization, we assume an active controller A infers the strategy of the passive and constant controller B, who has fixed budget allocations (i.e., $b_i(t) = b_i$ for all i and t), from the time series of agents' opinion changes. Here, the time series are given by a matrix $S \in \{0, 1\}^{N \times T}$, where T is the length of the observation period. In other words, while updating the voting dynamics, we obtain a data matrix S with N rows and T columns in which each row of S denotes the binary opinion dynamics of an agent over an observation period of length T. Taking the data matrix S as an input, we are interested in decoding the unknown parameters $b_1, \dots, b_N$ (referred to as $b_i$ in the following) from the input. Given the transition probabilities $p_i(t)$ and $1 - p_i(t)$ of the opinion flow between agents in the presence of the controllers, a commonly-used method for solving such parametric inference is maximum-likelihood estimation (MLE) [
28]. Specifically, replacing $s_i(t+1)$ and $s_j(t)$ with the data actually observed along the time series from 0 to T yields the log-likelihood function of agent i
$$\mathcal{L}_i(b_i) = \sum_{t=0}^{T-1} \left[ s_i(t+1) \ln \frac{\sum_j w_{ij} s_j(t) + a_i(t)}{d_i + a_i(t) + b_i} + \big(1 - s_i(t+1)\big) \ln \frac{d_i - \sum_j w_{ij} s_j(t) + b_i}{d_i + a_i(t) + b_i} \right], \qquad (3)$$
where $d_i$ is the degree of node i, i.e., $d_i = \sum_j w_{ij}$. This log-likelihood function gives the likelihood of observing an agent's time series, given the parameter $b_i$. Depending on the opinion state in the next step $s_i(t+1)$, either $\ln p_i(t)$ or $\ln(1 - p_i(t))$ is taken into account in the log-likelihood function of Equation (
3). We then estimate the budget allocations of controller B to be the values $b_i$ that are most likely to generate the given data matrix S after T observations. Therefore, we maximize the log-likelihood function $\mathcal{L}_i(b_i)$ in Equation (
3) with respect to the budget allocations of controller B to obtain an estimate of $b_i$, denoted as $\hat{b}_i$ in the following.
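As an illustration of this estimation step, the sketch below computes the log-likelihood of a single agent's trajectory and maximizes it by a dense grid search. This is our own sketch, not the paper's implementation; the grid search and the bound `b_max` are simplifying assumptions standing in for a proper numerical optimizer.

```python
import math

def log_likelihood(b, states_i, neighbour_sums, a_series, degree):
    """Log-likelihood of agent i's observed opinion trajectory as a
    function of controller B's unknown allocation b. states_i[t] is
    s_i(t), neighbour_sums[t] the sum of i's neighbours' opinions at
    time t, and a_series[t] controller A's (known) allocation at t."""
    ll = 0.0
    for t in range(len(states_i) - 1):
        p = (neighbour_sums[t] + a_series[t]) / (degree + a_series[t] + b)
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard against log(0)
        ll += math.log(p) if states_i[t + 1] == 1 else math.log(1.0 - p)
    return ll

def mle_b(states_i, neighbour_sums, a_series, degree, b_max=10.0, grid=2000):
    """Maximum-likelihood estimate of b via grid search over [0, b_max]."""
    candidates = [k * b_max / grid for k in range(grid + 1)]
    return max(candidates,
               key=lambda b: log_likelihood(b, states_i, neighbour_sums,
                                            a_series, degree))
```

For instance, a trajectory of an agent of degree 4, with constant neighbour sum 2 and allocation 1 from controller A, that flips opinion on every step is best explained by a transition probability of 1/2, i.e., by the allocation b = 1.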
According to the consistency of maximum likelihood estimates [
39], for a sufficiently large dataset, the estimator asymptotically converges to the true value. However, in this paper, we are interested in the problem of whether the observations of opinion states can be improved by interfering with the opinion dynamics so that we obtain good-fit estimates within limited observations. To achieve this, instead of passively observing, we assume that controller A is an active controller who strategically allocates its resources to accelerate the inference of the strategy of its opponent (i.e., the allocations $b_i$, $i = 1, \dots, N$). To evaluate the goodness of fit of the inference obtained from MLE, a commonly-used measurement is the Fisher information [
40]. Specifically, the Fisher information is used to test if the maximum likelihood estimators are aligned with the dataset and to derive a measure of dispersion between the true value and the estimator. Following [
40], the Fisher information $I_T(b_i)$ about $b_i$ is given by the expectation of the second-order partial derivative of Equation (
3) with respect to $b_i$, which is given by
$$I_T(b_i) = \mathbb{E}\left[ \frac{\partial^2 \mathcal{L}_i(b_i)}{\partial b_i^2} \right]. \qquad (4)$$
For ease of exposition, let $\sigma_i(t) = \sum_j w_{ij} s_j(t)$ and $c_i(t) = d_i + a_i(t) + b_i$. Given this, Equation (
4) can be written as
$$I_T(b_i) = \sum_{t=0}^{T-1} \mathbb{E}\left[ \frac{\partial^2}{\partial b_i^2} \Big( s_i(t+1) \ln p_i(t) + \big(1 - s_i(t+1)\big) \ln\big(1 - p_i(t)\big) \Big) \right].$$
Moreover, in Equation (
4) we have, for each summand,
$$\mathbb{E}\left[ \frac{\partial^2}{\partial b_i^2} \Big( s_i(t+1) \ln p_i(t) + \big(1 - s_i(t+1)\big) \ln\big(1 - p_i(t)\big) \Big) \right] = -\frac{\sigma_i(t) + a_i(t)}{\big(d_i - \sigma_i(t) + b_i\big)\, c_i(t)^2}.$$
Each of these terms is non-positive; correspondingly, their sum over t from 0 to $T-1$ is non-positive and will decrease as the length of observation T increases. Hence, the Fisher information $I_T(b_i)$ is also non-positive and monotonously decreasing as T increases.
As mentioned above, knowledge of the Fisher information is used to determine whether the maximum likelihood estimator is close to the true value. Specifically, for a large enough sample (i.e., large T), the maximum likelihood estimator $\hat{b}_i$ converges in distribution to a normal distribution centred at the true value $b_i$ [
39], i.e.,
$$\hat{b}_i \sim \mathcal{N}\left( b_i,\; -\frac{1}{I_T(b_i)} \right), \qquad (5)$$
where $\mathcal{N}$ stands for a normal distribution with mean $b_i$ and variance $\mathrm{Var}_i(T) = -1/I_T(b_i)$ for agent i. As the Fisher information is non-positive and monotonously decreasing along observations, the variance is always positive and, after a long period of observations, we will obtain more information and produce an estimator $\hat{b}_i$ closer to the true value $b_i$. Moreover, by taking the first-order partial derivative of $\mathrm{Var}_i(T)$ with respect to $b_i$, one obtains
$$\frac{\partial \mathrm{Var}_i(T)}{\partial b_i} = \frac{1}{I_T(b_i)^2} \frac{\partial I_T(b_i)}{\partial b_i} = \frac{1}{I_T(b_i)^2} \sum_{t=0}^{T-1} \big(\sigma_i(t) + a_i(t)\big)\, \frac{c_i(t) + 2\big(d_i - \sigma_i(t) + b_i\big)}{\big(d_i - \sigma_i(t) + b_i\big)^2\, c_i(t)^3} > 0, \qquad (6)$$
and we find that the variance is monotonously increasing with the increase of $b_i$ regardless of the values of $a_i(t)$ and $\sigma_i(t)$. Note that the variance in Equation (
5) is calculated from the Fisher information at the true value. As the true value of $b_i$ is unknown, in practical calculations we later replace the true value of $b_i$ with $\hat{b}_i$ to calculate the estimated variance $-1/I_T(\hat{b}_i)$.
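Under the sign convention used here (Fisher information as the expected second derivative, hence non-positive), both the information and the plug-in variance can be computed directly from the observed neighbour sums and the active controller's allocations. The sketch below is our own illustration, not the authors' code:

```python
def fisher_information(b, neighbour_sums, a_series, degree):
    """Expected second derivative of the log-likelihood with respect to b;
    non-positive, and decreasing as more observations are added."""
    info = 0.0
    for sigma, a in zip(neighbour_sums, a_series):
        c = degree + a + b          # c_i(t) = d_i + a_i(t) + b_i
        p = (sigma + a) / c         # probability of adopting opinion 1
        info -= p / ((1.0 - p) * c * c)
    return info

def estimated_variance(b_hat, neighbour_sums, a_series, degree):
    """Plug-in variance of the MLE: the unknown true b is replaced by
    its current estimate b_hat."""
    return -1.0 / fisher_information(b_hat, neighbour_sums, a_series, degree)
```

In line with the monotonicity discussed above, doubling the number of observations roughly halves the variance, and a larger b yields a larger variance for fixed observations.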
By introducing the Fisher information, we transform the problem of accelerating opponent strategy inference by interacting with the opinion dynamics into strategically deploying the budget of controller A to maximally decrease the variance of estimates. As the Fisher information can be represented in a recursive way, where the Fisher information at time T is calculated from the Fisher information at time $T-1$ plus two additional terms, the variance can also be calculated recursively via
$$\mathbb{E}\big[\mathrm{Var}_i(t+1)\big] = -\left( I_t(b_i) + \frac{1}{c_i(t)^2} - \frac{1}{c_i(t)\big(d_i - \sigma_i(t) + b_i\big)} \right)^{-1}, \qquad (7)$$
where $c_i(t) = d_i + a_i(t) + b_i$, $\sigma_i(t) = \sum_j w_{ij} s_j(t)$, and $\mathbb{E}[\mathrm{Var}_i(t+1)]$ represents the expected variance at time $t+1$.
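The recursive update can be sketched in one line of Python. This is our sketch under the notation of this section, with c = degree + a + b the normalization constant and sigma the current sum of neighbouring opinions; the two additional information terms are 1/c^2 and -1/(c*(degree - sigma + b)).

```python
def expected_variance_next(info_t, sigma_t, a_t, b, degree):
    """Expected estimator variance one update ahead. The current Fisher
    information info_t gains two terms, 1/c^2 and -1/(c*(degree - sigma_t + b)),
    with c = degree + a_t + b; their sum is the expected (negative)
    information added by the next observation."""
    c = degree + a_t + b
    info_next = info_t + 1.0 / (c * c) - 1.0 / (c * (degree - sigma_t + b))
    return -1.0 / info_next
```

Because the sum of the two added terms is negative whenever sigma_t + a_t > 0, the expected variance shrinks with every further observation, and the active controller can choose a_t so that it shrinks as fast as possible.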
Inspired by the recursive expression for the variance in Equation (
7), we propose two types of heuristics in which we explore configurations of the budget allocations of controller
A at time t for node i (i.e., $a_i(t)$, $\forall t$) to maximally decrease the expected variance of the estimators in future updates. Because of the combinatorics involved when dealing with arbitrary numbers of updates, we limit considerations to looking one or two steps ahead and correspondingly label the resulting heuristics
one-step-ahead optimization and
two-step-ahead optimization. Our strategy here is as follows. At time
t, controller
A has an estimate of the influence of controller
B and an estimate of the variance around it. It then allocates its influence in such a way as to minimize the expected variance of its next estimate either one or two updating steps in the future.
In the following, we first give the formalized expressions for minimizing the variance of a single estimator $\hat{b}_i$ via optimizing the budget allocation on a single node i in the one-step-ahead and two-step-ahead scenarios, respectively. The extensions of these two heuristics are further discussed in
Section 4, in which we consider optimizing the budget allocations over multiple nodes to minimize the sum of variances for the entire network.
3.1. One-Step-Ahead Optimization
Specifically, for the one-step-ahead optimization scenario, the argument of the objective function through which we aim to minimize the one-step-ahead variance of estimator $\hat{b}_i$ is
$$a_i^*(t) = \operatorname*{arg\,min}_{a_i(t)} \mathbb{E}\big[\mathrm{Var}_i(t+1)\big], \qquad (8)$$
where $a_i^*(t)$ is the optimized budget allocation for controller A at time t in order to minimize the expected variance at time $t+1$. Analogous to Equation (
7), we have
$$\mathbb{E}\big[\mathrm{Var}_i(t+1)\big] = -\left( I_t(\hat{b}_i) + \frac{1}{c_i(t)^2} - \frac{1}{c_i(t)\big(d_i - \sigma_i(t) + \hat{b}_i\big)} \right)^{-1}$$
and $c_i(t) = d_i + a_i(t) + \hat{b}_i$.
To define an experimental setup, we focus on obtaining a step-wise optimized budget allocation for node i which can differ at each time step t, while keeping the budget allocations for all other nodes fixed. The one-step-ahead optimization algorithm then proceeds according to the following steps:
- (i)
To satisfy the premise of enough samples before using the Fisher information to calculate the variance of a maximum likelihood estimator, we let controller A target all nodes equally with a fixed budget allocation for the first m updates and record the likelihood at time m.
- (ii)
If the current updating step t is less than the length of the total time series T, we calculate the current estimator $\hat{b}_i$ by maximizing the likelihood function $\mathcal{L}_i(b_i)$ with respect to $b_i$ and evaluate the Fisher information $I_t(\hat{b}_i)$. Then, we calculate the expectation of the variance defined in Equation (
8). Next, we obtain the optimized $a_i^*(t)$ by applying the interior point optimization algorithm [
41]. Finally, we update the network with the new assignment of $a_i^*(t)$ and simulate the stochastic voting dynamics to obtain the next-step states for all nodes.
- (iii)
The procedure is terminated when a fixed number of observations T have been made.
This procedure is more formally presented in Algorithm 1. The main body of Algorithm 1 (lines 3–7) corresponds to step (ii). After applying Algorithm 1, we obtain a sequence of optimized allocations $a_i^*(t)$ for $t = m, \dots, T-1$. Note that the initial states of agents are generated randomly such that equal fractions of the initial opinions of agents are 0 or 1.
Algorithm 1: One-step-ahead optimization
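Step (ii) can be illustrated with a simple grid search standing in for the interior-point routine. This is our sketch with hypothetical inputs: the current Fisher information `info_t`, the current neighbour sum `sigma_t`, and the current estimate `b_hat`.

```python
def one_step_ahead_allocation(info_t, sigma_t, b_hat, degree, budget, grid=200):
    """Choose controller A's allocation a in [0, budget] that minimises
    the expected one-step-ahead variance of the estimate of b. A dense
    grid search stands in for the interior-point optimizer."""
    best_a, best_var = 0.0, float("inf")
    for k in range(grid + 1):
        a = k * budget / grid
        c = degree + a + b_hat  # normalization with candidate allocation a
        info_next = info_t + 1.0 / (c * c) - 1.0 / (c * (degree - sigma_t + b_hat))
        var = -1.0 / info_next  # expected variance one step ahead
        if var < best_var:
            best_a, best_var = a, var
    return best_a
```

For this sketch's objective, the unconstrained optimum sits at a = degree - 2*sigma_t + b_hat (our derivation), so given enough budget the optimized allocation decreases as more neighbours already hold opinion 1, consistent with the inverse relation to the neighbouring opinion sum reported in the key findings.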
3.2. Two-Step-Ahead Optimization
For the two-step-ahead optimization scenario, we label the optimized budget allocations for node i at time t and $t+1$ as $a_i^*(t)$ and $a_i^*(t+1)$. Then, the objective function for minimizing the two-step-ahead variance is calculated by the expected negative inverse Fisher information two steps ahead, given by:
$$\big(a_i^*(t), a_i^*(t+1)\big) = \operatorname*{arg\,min}_{a_i(t),\, a_i(t+1)} \mathbb{E}\big[\mathrm{Var}_i(t+2)\big], \qquad (9)$$
where
$$\mathbb{E}\big[\mathrm{Var}_i(t+2)\big] = -\left( I_t(\hat{b}_i) + \Delta_i(t) + \mathbb{E}\big[\Delta_i(t+1)\big] \right)^{-1}, \qquad \Delta_i(t) = \frac{1}{c_i(t)^2} - \frac{1}{c_i(t)\big(d_i - \sigma_i(t) + \hat{b}_i\big)}.$$
Note that the probabilities of agent i having opinion 1 or 0 at the current time step depend on its neighbouring states at the previous time step. As in the one-step-ahead procedure, when performing the optimization of Equation (
9), the neighbouring states $s_j(t)$ for $j \in \mathcal{N}_i$ are known. Therefore, the expressions for $p_i(t)$ and $1 - p_i(t)$ only contain one unknown parameter, which is $a_i(t)$. However, in the expressions of $p_i(t+1)$ and $1 - p_i(t+1)$, the sum of the respective neighbouring opinions $\sigma_i(t+1)$ is unknown, and thus the full expressions for $p_i(t+1)$ and $1 - p_i(t+1)$ are obtained via applying the law of total probability
$$p_i(t+1) = \sum_{m=0}^{d_i} P\big(\sigma_i(t+1) = m\big)\, \frac{m + a_i(t+1)}{d_i + a_i(t+1) + \hat{b}_i}, \qquad (10)$$
where
$$P\big(\sigma_i(t+1) = m\big) = \sum_{k=1}^{l}\; \prod_{j \in x_k} p_j(t) \prod_{j \in \mathcal{N}_i \setminus x_k} \big(1 - p_j(t)\big). \qquad (11)$$
In the above,
l stands for the number of combinations leading to $\sigma_i(t+1) = m$ and the elements of $\{x_1, \dots, x_l\}$, represented as $x_k$ for $k = 1, \dots, l$, indicate all possible combinations of the neighbourhood of node
i adding up to
m at time $t+1$. If we denote the neighbourhood of node
i as $\mathcal{N}_i$, then $\mathcal{N}_i \setminus x_k$ returns the set of elements in $\mathcal{N}_i$ but not in $x_k$. Inserting Equations (
10) and (
11) into Equation (
9) yields the full expression for the goal function. The optimization procedure for the two-step-ahead scenario follows along the lines of Algorithm 1, except that in step (ii) we update every two steps using Equation (
9), as we optimize $a_i^*(t)$ and $a_i^*(t+1)$ in one loop. As shown in Equations (
10) and (
11), to calculate the probability that node
i has state 1 at time $t+2$, we have to list all combinations of nodes leading to a sum of neighbouring states from 0 to $d_i$. Therefore, the time complexity for calculating Equation (
11) is $O(2^{d_i})$ and grows exponentially if we look more than two steps ahead. As it becomes infeasible to calculate the combinatorics for more than two steps ahead on large networks, in this paper we only consider looking one or two steps ahead.
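The combinatorial step behind the law of total probability can be sketched as follows: to obtain the distribution of the neighbouring opinion sum one step ahead, we enumerate every subset of m neighbours that could hold opinion 1. This is our illustration; `p_neigh` holds each neighbour's (hypothetical) probability of adopting opinion 1 at the next update.

```python
from itertools import combinations

def prob_neighbour_sum(p_neigh, m):
    """P(sum of neighbouring opinions == m) one step ahead, obtained by
    enumerating every combination x_k of m neighbours holding opinion 1
    and weighting it by the neighbours' update probabilities."""
    indices = range(len(p_neigh))
    total = 0.0
    for x_k in combinations(indices, m):  # the l = C(d, m) combinations
        chosen = set(x_k)
        prob = 1.0
        for j in indices:  # j in x_k holds opinion 1, otherwise opinion 0
            prob *= p_neigh[j] if j in chosen else 1.0 - p_neigh[j]
        total += prob
    return total
```

Summing over all m enumerates all 2^d subsets of a degree-d neighbourhood, which is exactly the exponential cost that makes looking further than two steps ahead infeasible on large networks.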
5. Discussion
In this paper, we have proposed an approach to apply network control in the context of a network inference problem. In our setting, an active controller interacts with a process of opinion dynamics on a network and aims to influence the resulting opinion dynamics in such a way that estimates of an opposing controller’s strategy can be accelerated. Existing approaches related to such types of inference problems are often based on the assumption that the inference is performed using given data. In contrast, our approach aims to strategically interfere with the networked dynamics to generate more informative datasets.
By using the variance deduced from the Fisher information as a criterion of inference uncertainty, we have proposed several optimization heuristics. In a first step, in a benchmark scenario in which an active controller can target nodes uniformly by an adjustable amount of influence, we have demonstrated that interference with the system’s dynamics can substantially accelerate the convergence of estimates about opponents. We have then proceeded to develop more sophisticated optimization heuristics, based on step-wise updating of the interference with the dynamics and have shown that such approaches are typically effective if the active controller has a relatively large budget.
Next we have explored the one-step-ahead and two-step-ahead heuristics systematically in a variety of scenarios. First, in a scenario in which the active controller only aims at inference of a single node, we find that only very limited acceleration can be achieved by targeting only this node. However, far more substantial results can be achieved by also targeting the node's neighbours. For the latter setting we have demonstrated the effectiveness of a simple heuristic, which relies on targeting only the focal node when the controller's budget is small and only conditionally influencing the focal node's neighbours when budget availability is large. Conditional targeting of neighbours should be carried out whenever a majority of them are not aligned with the active controller.
Furthermore, we have explored the effectiveness of inference acceleration for networks with varying amounts of degree heterogeneity for different settings of the opponent's influence allocations. As one might expect, we find that both predicting opponent influence at nodes with large degrees and precisely predicting large opponent influence at nodes are difficult. The first is essentially due to the presence of a large, changing environment of the node, which makes it difficult to distinguish the influence of control from the influence of neighbours. This finding is consistent with results presented in [
25] in the context of link inference from static data. The second is due to the effect that large opponent control tends to fix a node in a static state, which makes it difficult to precisely predict the amount of opponent’s influence.
As a consequence of the above, if an opponent targets uniformly at random, the inferrability of its influence is strongly related to the number of high-degree nodes in a network. Correspondingly, using our optimization schemes, we find that inference becomes more difficult the larger the degree heterogeneity of a network. This finding also holds when the opponent's influence strength is drawn randomly with inverse proportionality to node degrees. In this case, networks with higher degree heterogeneity will also have larger average variance, since they have more low-degree nodes with large opponent influence, which also impedes inference.
Even though the framework we suggest is more general, the results of our paper are restricted to accelerating opponent strategy inference in voting dynamics. However, we believe that the heuristics we propose can also be used in other complex systems with binary-state dynamics, such as the Ising model and the susceptible–infected–susceptible model, which we leave for future work. Another limitation of our study is that we only consider opponents with a fixed strategy. Therefore, an interesting line of future enquiry might be to explore inference acceleration in scenarios in which opponent influence changes dynamically.