We explore the power of luminous mobile robots in both discrete and continuous domains. In the discrete domain, we solve a problem named MIS (maximal independent set) Filling on a given connected graph. In the continuous domain, we solve the Mutual Visibility problem on the usual Euclidean plane. These problems are well motivated by real-world applications such as information dissemination in a network and pattern formation. We first discuss the MIS Filling problem and then move to the Mutual Visibility problem.
For a given graph, it is natural to ask whether a swarm of autonomous (no external controller), anonymous (no identifier), myopic (limited visibility) luminous robots operating under Look-Compute-Move (LCM) cycles can achieve a specific graph property using only local knowledge. We answer this question affirmatively for the maximal independent set (MIS) of an arbitrary graph. The given graph G = (V, E) has specific vertices, called doors, through which the robots enter the graph. The goal is to occupy a set of vertices V1 ⊂ V such that V1 is an MIS of G. This is a natural extension of the filling problem by Hideg and Lukovszki [2], where the aim is to fill every vertex with a robot, entering through the door, without collision. They later extended the problem to arbitrary connected graphs in the asynchronous setting [3]. Please refer to [6] for detailed literature.
We then approach a fundamental problem in swarm robotics, introduced by Di Luna et al. [1], called the Mutual Visibility problem, which has been widely studied for over a decade. On the Euclidean plane, the problem has been explored under various constraints such as axis agreement, fault tolerance, and adversarial schedulers. Robots are considered to be opaque, meaning a robot can obstruct other robots' lines of sight. The problem aims to arrange the robots in a configuration in which any two robots are visible to each other. The literature largely considers the luminous robot model, in which each robot carries an externally visible persistent light whose color is chosen from a constant-sized predefined color set. Robots are oblivious except for the color of the light; they cannot remember anything else from the past. Two collocated robots constitute a collision, which is not allowed in this problem. Recently, Sharma et al. [7] presented an O(1) algorithm with asynchronous robots having no agreement on the coordinate axes. The problem has also been explored from the fault-tolerance perspective: Poudel et al. [4] gave an algorithm that tolerates any number of mobile faulty robots, assuming that the robots agree on one coordinate axis. Refer to [5] for detailed related works.
Model and Problems.
Here, we first discuss the model considered for the MIS Filling problem on a graph and the associated problem we solve. Then, we highlight two models for two variants of the Mutual Visibility problem, together with the problem definitions. The first model is called MVInaccurate, and the second is MVDisoriented.
▶ MIS Filling: A connected, port-labelled graph is given with specific vertices, called doors. At each vertex v of the graph, the incident edges carry distinct port numbers from [1, 2, …, δ(v)], where δ(v) is the degree of v. The robots enter the graph through the doors, and each of them has a limited visibility range: a robot with z hops of visibility can see all the vertices and the associated ports within distance z of its current position. As soon as a robot leaves a door, another one appears at that door. We define the following problem.
Problem 1: Given an anonymous connected port-labelled graph G = (V, E) with k doors, the objective is to relocate the robots appearing at the doors such that, at termination, the robots occupy a set of vertices V1 ⊂ V that forms a maximal independent set of G.
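As an illustration of the target property, the terminal condition of Problem 1 can be checked centrally as follows. This is only a sketch of the MIS condition itself, not part of the distributed IND/MULTIND algorithms; the adjacency-list representation and the helper name are our assumptions.

```python
def is_mis(adj, occupied):
    """Check whether `occupied` is a maximal independent set of the graph
    given by adjacency lists `adj` (centralized checker, for illustration)."""
    occ = set(occupied)
    # Independence: no two occupied vertices are adjacent.
    for v in occ:
        if occ & set(adj[v]):
            return False
    # Maximality: every unoccupied vertex has an occupied neighbour.
    return all(occ & set(adj[v]) for v in adj if v not in occ)
```

For the path 0–1–2–3, the set {0, 2} is an MIS, while {0, 1} violates independence and {0} violates maximality.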
▶ MVInaccurate: A set of N luminous, opaque point robots is deployed on the Euclidean plane, each operating under LCM cycles. We introduce a new flavour of challenge to the existing model, where a robot may exhibit inaccurate movement. Inaccuracy is defined as an angular deviation that makes a robot r reach a point T′ instead of the actual target point T, where \(\overline{rT} = \overline{rT^{\prime }}\) (refer to Fig. 1). We call θr = ∠TrT′ the inaccuracy angle of r. We assume that θr < 90°, and this angle is not necessarily the same for different robots. No robot has knowledge of N. The current position of a robot r is considered to be the origin of its local coordinate system. However, the robots are assumed to agree on one coordinate axis (without loss of generality, the y-axis). The robots are susceptible to mobility faults, where a faulty robot becomes immobile thenceforth, but its light remains functional. We define the following two problems.
Problem 2: We are given N anonymous, luminous, opaque point robots on the 2D plane with a common coordinate axis. They might exhibit angular inaccuracy in their movement. Starting from any configuration, the robots must reach a configuration in which no three robots are collinear.
Problem 3: N luminous point robots with a common coordinate axis might exhibit angular inaccuracy in their movement. In addition, they are vulnerable to mobility faults. The goal is to obtain a configuration in which all non-faulty robots achieve mutual visibility.
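The inaccuracy model above can be made concrete with a small geometric sketch: the adversary effectively rotates the intended target T about the robot's position r by the inaccuracy angle, preserving the travelled distance. The function name and signature are ours, for illustration only.

```python
import math

def inaccurate_target(r, T, theta):
    """Return the point T' actually reached by a robot at r aiming for T,
    modelled as a rotation of T about r by the inaccuracy angle `theta`
    (radians). By construction |rT'| = |rT|, matching the model."""
    dx, dy = T[0] - r[0], T[1] - r[1]
    c, s = math.cos(theta), math.sin(theta)
    return (r[0] + c * dx - s * dy, r[1] + s * dx + c * dy)
```

For example, a robot at the origin aiming at (1, 0) with a 90° deviation lands at (0, 1): the same distance from r, but on a different ray.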
▶ MVDisoriented: In this model, we study the mutual visibility of N luminous point robots from the fault-tolerance perspective, dropping the one-axis agreement assumed in [4]. The movements of the robots are accurate, but they are completely disoriented, meaning they do not agree on any coordinate system or orientation. A robot is vulnerable to mobility faults, which may occur at any time. We define the following problem.
Problem 4: N disoriented luminous mobile robots are deployed on the 2D plane, out of which f (< N) can become faulty at any time. The goal is to achieve mutual visibility among the non-faulty robots.
Results and Brief High-Level Idea.
We propose two algorithms, IND and MULTIND, in [6] for two versions of Problem 1, corresponding to a single door and multiple doors, respectively. The IND algorithm runs under an asynchronous scheduler using robots with 3 hops of visibility, Δ + 8 colors, and O(log Δ) bits of memory (Δ is the maximum degree). MULTIND runs under a semi-synchronous scheduler using robots with 5 hops of visibility, Δ + k + 7 colors, and O(log (Δ + k)) bits of memory. Both algorithms take O(m²) epochs, where m is the number of robots that form an MIS. The robots traverse the graph in a manner similar to a depth-first search.
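The interplay between the DFS-like traversal and the MIS condition can be caricatured sequentially: visit vertices in depth-first order from the door and settle a robot wherever no already-settled neighbour exists. This is a centralized sketch of the invariant, not the distributed IND algorithm with its colors and limited visibility.

```python
def dfs_mis(adj, door):
    """Greedy MIS along a depth-first traversal from the door vertex.
    A vertex is 'settled' (joins the MIS) iff none of its neighbours
    is already settled; the result is always a maximal independent set."""
    mis, visited = set(), set()
    stack = [door]
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        if not (mis & set(adj[v])):  # no settled neighbour -> settle here
            mis.add(v)
        stack.extend(u for u in adj[v] if u not in visited)
    return mis
```

On the path 0–1–2–3 with the door at vertex 0, this settles vertices 0 and 2.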
We solve Problem 2 in [5] under a semi-synchronous setting using 2 colors (color optimal). We propose another algorithm under the asynchronous setting that needs 3 colors. Both algorithms require O(N) epochs. We also prove that the problem becomes unsolvable if θr ≥ 90°. We use a technique that sequentially relocates the robots to break the collinearities, imagining virtual horizontal lines through the robots to serialize them. In this process, we ensure that no robot inadvertently becomes collinear with two other robots with which it was not previously collinear. Additionally, we assume y-axis agreement, where robots know the north (positive y-axis) and south (negative y-axis) directions. Upon activation, a robot r checks whether it is eligible to move by inspecting the colors of all visible robots located northward. If all of them have the color FINISH, r checks whether it is part of some collinearity. We exploit the fact that the middle robot of a collinearity always detects it. If r finds itself part of multiple collinearities, it chooses a target and moves vertically northward; inaccuracy might make r collinear again, but after this movement we ensure that r is part of at most one collinearity. When r detects itself in exactly one collinearity \(\overline{r_irr_j}\) with two other robots ri and rj, it executes another movement with respect to the line \(\overleftrightarrow{r_ir_j}\) to break the collinearity. If there is no collinearity, r sets its color to FINISH and terminates.
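The key local test, that the middle robot of a collinearity always detects it, reduces to a cross-product and a dot-product check, sketched below (the predicate name is ours).

```python
def is_middle(r, a, b, eps=1e-9):
    """True iff robot r lies strictly between robots a and b on one line,
    i.e. r is the middle robot that detects the collinearity."""
    # Collinear: the cross product of vectors r->a and r->b vanishes.
    cross = (a[0] - r[0]) * (b[1] - r[1]) - (a[1] - r[1]) * (b[0] - r[0])
    if abs(cross) > eps:
        return False
    # Strictly between: the vectors r->a and r->b point in opposite directions.
    dot = (a[0] - r[0]) * (b[0] - r[0]) + (a[1] - r[1]) * (b[1] - r[1])
    return dot < -eps
```

An endpoint of a collinear triple fails the second check, which is why only the middle robot is guaranteed to detect the collinearity from its own position.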
Problem 3 is solved in [5] using 10 colors in the asynchronous setting, and the proposed algorithm runs in O(N) epochs. Under mobility failures, the middle robot of a collinearity might be faulty. In that case, we move the last robot (based on the common y-axis) of the collinearity, guided by the visible faulty robots through extra colors. Thus, with the help of a faulty robot rf, a non-faulty robot r moves with respect to the line \(\overleftrightarrow{rr_f}\) to break the collinearity. We ensure that any non-faulty robot r moves at most four times, which yields the O(N) running time.
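A collinearity-breaking move of this kind amounts to choosing a destination strictly off the guiding line. The sketch below steps a fixed distance along the line's unit normal; the step rule and names are illustrative assumptions, not the papers' exact movement rule.

```python
import math

def off_line_target(r, a, b, d):
    """A hypothetical destination for robot r: a point at distance d from r
    along the unit normal of the line through a and b, so that r is no
    longer collinear with a and b after the move."""
    vx, vy = b[0] - a[0], b[1] - a[1]
    n = math.hypot(vx, vy)
    nx, ny = -vy / n, vx / n  # unit normal to the line ab
    return (r[0] + d * nx, r[1] + d * ny)
```

For a robot at (1, 0) lying on the line through (0, 0) and (2, 0), a unit step lands at (1, 1), off the line.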
We solve Problem 4 under the FSYNC setting, and the proposed algorithm runs in O(N²) rounds using 21 colors. When the robots agree on one coordinate axis, it is easier to serialize them; without any agreement on the coordinate system, serialization becomes much harder. Moreover, it is difficult for opaque robots to achieve the goal if all of them move simultaneously, so serialization is useful in designing algorithms for mutual visibility. We introduce the concept of layers, analogous to the virtual horizontal lines in the case of y-axis agreement. The boundary of the convex hull \(\mathcal {CH}\) of all the robots on the plane, denoted by \(\mathcal {CH}^0\), is the outermost layer of the robots. For j ≥ 1, \(\mathcal {CH}^j\), referred to as the j-th layer, is the boundary of the convex hull of all the robots excluding those on \(\mathcal {CH}^0 \cup \mathcal {CH}^1 \cup \cdots \cup \mathcal {CH}^{j-1}\). We move the non-faulty robots starting from the outermost layer \(\mathcal {CH}^0\) (refer to Fig. 2) towards the innermost layer \(\mathcal {CH}^k\). The challenge arises when all of the robots on a layer are faulty; in that case, the robots on the next layer must be able to infer that it is their turn to move towards the innermost layer.
First, our aim is to identify the innermost layer of the robots using colors. Starting from \(\mathcal {CH}^0\), a robot r moves to a side of the subsequent inner layer. The purpose is to eventually reach \(\mathcal {CH}^{k-1}\) (when the innermost layer is linear) or \(\mathcal {CH}^k\) (when the innermost layer is non-linear). After this, the robots move so that they either become corners of a common convex hull inside \(\mathcal {CH}^k\) or reach points on an ellipse (for a linear \(\mathcal {CH}^k\)) around the layer \(\mathcal {CH}^k\).
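The layer structure \(\mathcal {CH}^0, \mathcal {CH}^1, \ldots\) is exactly convex-hull onion peeling. A centralized sketch follows, using Andrew's monotone chain for each hull; it returns only hull corners and assumes the robots are in general position (no three collinear on a layer boundary), whereas the paper's layers include every robot on the boundary.

```python
def convex_layers(points):
    """Peel convex-hull layers CH^0, CH^1, ... from a point set.
    Centralized illustration of the layer structure; each layer is the
    list of corners of the current convex hull, in CCW order."""
    def hull(pts):
        # Andrew's monotone chain: lower chain + upper chain, CCW.
        pts = sorted(pts)
        if len(pts) <= 2:
            return pts
        def half(seq):
            h = []
            for p in seq:
                # Pop while the last turn is not strictly counter-clockwise.
                while len(h) >= 2 and \
                      (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) - \
                      (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0]) <= 0:
                    h.pop()
                h.append(p)
            return h[:-1]
        return half(pts) + half(pts[::-1])
    layers, rest = [], list(points)
    while rest:
        h = hull(rest)
        layers.append(h)
        rest = [p for p in rest if p not in h]
    return layers
```

For a square with one robot at its center, the four corners form \(\mathcal {CH}^0\) and the center alone forms \(\mathcal {CH}^1\).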