swarm_nl/testing_guide.rs
//! A doc-only module explaining how to run the core library tests.
//!
//! > **Note**: the library is compatible with both the `tokio` and `async-std` runtimes; however, all
//! > tests are written against the `tokio` executor.
//! > Therefore, to run the tests you must specify the runtime feature flag, e.g. `cargo test
//! > --features=tokio-runtime`, unless it is already set as the default runtime in Cargo.toml.
//!
//! Tests are organised into the following modules:
//!
//! - `node_behaviour`: unit tests for single-node setup and behaviour.
//! - `layer_communication`: tests involving the synchronization between two nodes.
//! - `replication`: integration tests involving replication configuration and behaviour.
//! - `sharding`: integration tests involving sharding configuration and behaviour.
//!
//! # Node behaviour testing
//!
//! These are simple unit tests that check the behaviour of a single node. To run these tests,
//! simply run the following command:
//!
//! ```bash
//! cargo test node_ --features=tokio-runtime
//! ```
//!
//! # Layer communication testing
//!
//! To create tests for communication between two nodes, we use Rust's conditional compilation
//! features to set up different nodes and test their communication; a sketch of this split is
//! shown below. All commands for running these tests should be run with `-- --nocapture` so that
//! the expected results can be verified from the output.
//!
//! For these tests, we've created two test nodes: `node1` and `node2`.
//!
//! - Node 1 is set up by calling the `setup_node_1` function, which uses a pre-configured
//! cryptographic keypair and the `setup_core_builder_1` function to configure a default node.
//! This keeps its identity consistent across tests.
//!
//! - Node 2 is set up by calling the `setup_node_2` function, which creates a new node identity
//! every time it is called.
//! It then adds Node 1 as its bootnode and establishes a connection by dialing Node 1.
//!
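//! As a rough sketch, the feature-gated split looks like the following (shown for the peer
//! dialing test described next). The helper names come from the description above, but the
//! attributes, signatures and bodies here are illustrative assumptions rather than the crate's
//! exact test code:
//!
//! ```rust,ignore
//! // Compiled only when the tests are built as the listening node.
//! #[cfg(feature = "test-listening-node")]
//! #[tokio::test]
//! async fn dialing_peer_works() {
//!     // Node 1 uses a pre-configured keypair, so its identity is stable across runs.
//!     let node = setup_node_1().await;
//!     // ... keep the node alive and wait for the dialing peer (hypothetical).
//! }
//!
//! // Compiled only when the tests are built as the dialing node.
//! #[cfg(feature = "test-dialing-node")]
//! #[tokio::test]
//! async fn dialing_peer_works() {
//!     // Node 2 gets a fresh identity; `setup_node_2` adds node 1 as its bootnode and dials it.
//!     let node = setup_node_2().await;
//!     // ... assert that the connection to node 1 was established (hypothetical).
//! }
//! ```
//!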
//! ## Peer dialing tests
//!
//! The peer dialing test checks whether a node can dial another node, using a `listening` node
//! and a `dialing` node. To run this test, start the listening node by running the following
//! command in one terminal:
//!
//! ```bash
//! cargo test dialing_peer_works --features=test-listening-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! Then, in another terminal, run the dialing node:
//!
//! ```bash
//! cargo test dialing_peer_works --features=test-dialing-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! The application event handler will log the dialing node's peer id and the listening node's
//! peer id.
//!
//! ## Fetching tests
//!
//! The fetching test checks whether a node can fetch a value from another node.
//! These tests use a `server` node and a `client` node.
//!
//! To run these tests, first start the server node in one terminal:
//!
//! ```bash
//! cargo test rpc_fetch_works --features=test-server-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! And in another terminal, run the client node:
//!
//! ```bash
//! cargo test rpc_fetch_works --features=test-client-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! Then you can check that the server node prints out a _"Recvd incoming RPC:"_ message with the
//! data sent by the client node.
//!
//! ## Kademlia tests
//!
//! For the Kademlia tests, we have a `reading` node and a `writing` node.
//! We use a time delay to simulate the reading node "sleeping", so as to allow the writing node
//! to make changes to the DHT.
//!
//! When the reading node "wakes up", it tries to read the value from the DHT. If the value is
//! what it expects, the test passes.
//!
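//! The timing pattern looks roughly like the sketch below. The record key/value and the DHT
//! read/write helpers are placeholders for whatever the test module actually uses, not the
//! crate's API:
//!
//! ```rust,ignore
//! #[cfg(feature = "test-reading-node")]
//! #[tokio::test]
//! async fn kademlia_record_store_itest_works() {
//!     // "Sleep" long enough for the writing node to insert its record into the DHT.
//!     tokio::time::sleep(std::time::Duration::from_secs(10)).await;
//!     // "Wake up" and read the value back; the test passes if it matches what we expect.
//!     let value = read_record(b"test-key").await; // hypothetical helper
//!     assert_eq!(value, b"test-value".to_vec());
//! }
//!
//! #[cfg(feature = "test-writing-node")]
//! #[tokio::test]
//! async fn kademlia_record_store_itest_works() {
//!     // Write the record that the reading node expects to find.
//!     write_record(b"test-key", b"test-value").await; // hypothetical helper
//! }
//! ```
//!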
//! To run this test, run the following command in one terminal to launch the "reading" node:
//!
//! ```bash
//! cargo test kademlia_record_store_itest_works --features=test-reading-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! And then run the following command in another terminal to launch the "writing" node:
//!
//! ```bash
//! cargo test kademlia_record_store_itest_works --features=test-writing-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! ### Record providers tests
//!
//! For the provider-record tests, we again have a `reading` node and a `writing` node.
//!
//! We first run the "writing" node to store a record in the DHT. Then we run a "reading" node to
//! fetch the list of providers of the record that's been written.
//!
//! Then we simply assert that node 1 is a provider of the record.
//!
//! To run this test, first run the "writing" node:
//!
//! ```bash
//! cargo test kademlia_provider_records_itest_works --features=test-writing-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! Then, in another terminal, run the "reading" node:
//!
//! ```bash
//! cargo test kademlia_provider_records_itest_works --features=test-reading-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! ## Gossipsub tests
//!
//! **Join/Exit tests**
//!
//! For the Gossipsub tests, we have a `subscribe` node and a `query` node.
//!
//! When the "subscribe" node is set up, it joins a mesh network. Then node 2 is set up, connects
//! to node 1, sleeps for a while (to allow propagation of data from node 1) and then joins the
//! network. After joining, it queries the network layer for gossiping information. This
//! information includes the topics the node is currently subscribed to, the peers that node 2
//! knows about (which is node 1) and the networks they are part of; the peers that have been
//! blacklisted are also returned.
//!
//! In this test, we assert that node 1 is part of the mesh network that node 2 is subscribed to.
//!
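//! In pseudocode, that final check amounts to something like the sketch below; the query helper
//! and the field names on the returned gossiping information are illustrative placeholders, not
//! the crate's API:
//!
//! ```rust,ignore
//! // Run on the "query" node after it has joined the network (names are placeholders).
//! let info = query_gossip_info().await;                 // hypothetical helper
//! assert!(info.topics.contains(&topic));                // we joined node 1's topic
//! assert!(info.known_peers.contains(&node_1_peer_id));  // node 1 is in the mesh we joined
//! ```
//!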
//! To run this test, first run the "subscribe" node:
//!
//! ```bash
//! cargo test gossipsub_join_exit_itest_works --features=test-subscribe-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! Then, in another terminal, run the "query" node:
//!
//! ```bash
//! cargo test gossipsub_join_exit_itest_works --features=test-query-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! **Publish/Subscribe tests**
//!
//! For this test, we have a `listening` node and a `broadcast` node. The first node is set up
//! and joins a mesh network. Then, node 2 is set up, connects to node 1, sleeps for a few
//! seconds (to allow propagation of data from node 1), joins the network that node 1 is already
//! a part of, and then sends a broadcast message to every peer in the mesh network.
//!
//! The indicator of the success of this test is revealed in the application's event handler
//! function, which logs the message received from node 2.
//!
//! To run this test, first run the "listening" node in one terminal:
//!
//! ```bash
//! cargo test gossipsub_message_itest_works --features=test-listening-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! Then run the "broadcast" node in another terminal:
//!
//! ```bash
//! cargo test gossipsub_message_itest_works --features=test-broadcast-node --features=tokio-runtime -- --nocapture
//! ```
//!
//! # Replication tests
//!
//! For each replication test, we set up nodes as separate async tasks that dial each other to
//! form a replica network.
//!
//! The `setup_node` function builds each node with replication configured.
//!
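//! A rough sketch of that setup is shown below; `setup_node` mirrors the helper described above,
//! but the ports, bootnode arguments and per-node test body are illustrative assumptions:
//!
//! ```rust,ignore
//! // Each node runs as its own async task and dials the others to form the replica network.
//! let node_1 = tokio::spawn(async {
//!     let node = setup_node(49155, &[]).await;                           // first node, no bootnodes
//!     drive_replication_checks(node).await;                              // hypothetical test body
//! });
//! let node_2 = tokio::spawn(async {
//!     let node = setup_node(49156, &["/ip4/127.0.0.1/tcp/49155"]).await; // dials node 1
//!     drive_replication_checks(node).await;
//! });
//! let _ = tokio::join!(node_1, node_2);
//! ```
//!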
//! For basic replication tests, we check that the network behaves as expected for:
//! - Joining and exiting the network
//! - Replicating and fetching data from the network
//! - Fully replicating a node from the replica network
//!
//! For Strong Consistency tests, we check that:
//! - The number of confirmations is correct in a network of only 2 nodes
//! - The number of confirmations is correct in a network of 3 nodes
//! - The number of confirmations is correct in a network with `MinPeers` set to 2
//!
//! For Eventual Consistency tests, we check that:
//! - A node is updated upon newly joining a replica network
//! - Lamport ordering of updates is respected
//! - A value is replicated across the network
//!
//! # Sharding tests
//!
//! To set up the testing environment for sharding, we implement `ShardStorage` to define the
//! behaviour of `fetch_data()`, which fetches data separated by `-->`. We also implement the
//! `Sharding` trait for range-based sharding to test the behaviour of the network.
//!
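//! As a rough sketch of what that looks like (the trait shapes and type signatures below are
//! illustrative assumptions, not the crate's exact definitions):
//!
//! ```rust,ignore
//! // Test-only storage: entries are kept as "key-->value" strings, and `fetch_data()`
//! // splits each entry on the "-->" separator to return the value for a key.
//! struct TestStorage {
//!     buffer: Vec<String>, // e.g. "apple-->fruit"
//! }
//!
//! impl ShardStorage for TestStorage {
//!     fn fetch_data(&mut self, key: Vec<u8>) -> Vec<u8> {
//!         let wanted = String::from_utf8_lossy(&key);
//!         for entry in &self.buffer {
//!             if let Some((stored_key, value)) = entry.split_once("-->") {
//!                 if stored_key == wanted {
//!                     return value.as_bytes().to_vec();
//!                 }
//!             }
//!         }
//!         Vec::new()
//!     }
//! }
//!
//! // Range-based key-to-shard mapping used by the test `Sharding` implementation
//! // (again, just the idea, not the trait's real signature).
//! fn locate_shard(key: u64) -> String {
//!     if key < 100 { "shard-low".into() } else { "shard-high".into() }
//! }
//! ```
//!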
//! In each test, we set up nodes as separate async tasks that form the sharded network.
//! The `setup_node` function builds each node with replication and sharding configured.
//!
//! For sharding, we check that the network behaves as expected for:
//!
//! - Joining and exiting a sharded network
//! - Data forwarding between shards
//! - Replication between nodes in a shard
//! - Sharding and fetching from local storage
//! - Fetching sharded data from the network
207//!