Figure 2.
1D CNN process.
Figure 3.
Elemental network structure of an LSTM RNN.
Figure 4.
An overview of the proposed CNN–LSTM NARX layers.
Figure 5.
Imputation flowchart for every feature at Piccadilly station, Manchester, UK.
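The per-feature imputation in Figure 5 can be summarised as a minimal pandas sketch. The interpolation method and edge-fill step below are assumptions for illustration; the flowchart governs the actual procedure. The column names match Tables 3 and 4.

```python
import pandas as pd

# Feature columns of the Piccadilly station dataset (see Tables 3 and 4).
FEATURES = ["PM2.5", "M_DIR", "M_SPED", "M_T", "NO", "NO2", "O3"]

def impute_features(df: pd.DataFrame) -> pd.DataFrame:
    """Fill gaps in every feature column. Assumption: time-based linear
    interpolation, then forward/backward fill for leading/trailing gaps."""
    out = df.copy()
    for col in FEATURES:
        out[col] = out[col].interpolate(method="time")  # assumes a DatetimeIndex
        out[col] = out[col].ffill().bfill()             # remaining edge gaps
    return out
```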
Figure 6.
Sample of data from 1 May 2016 to 1 September 2016 for Piccadilly station, Manchester, UK, before imputation.
Figure 7.
Sample of data from 1 May 2016 to 1 September 2016 for Piccadilly station, Manchester, UK, after imputation.
Figure 8.
A sample dataset showing how data shifting is performed for two look-back hours.
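For readers following Figure 8, this is a minimal sketch of the shifting operation for two look-back hours; the function name and lag-column suffix are illustrative, not the paper's code.

```python
import pandas as pd

def add_lookback(df: pd.DataFrame, hours: int = 2) -> pd.DataFrame:
    """Append lagged copies of every column so each row carries the
    previous `hours` observations, as illustrated in Figure 8."""
    frames = [df]
    for lag in range(1, hours + 1):
        # e.g. "PM2.5" becomes "PM2.5_t-1" for the one-hour lag
        frames.append(df.shift(lag).add_suffix(f"_t-{lag}"))
    # The first `hours` rows have no complete history, so drop them.
    return pd.concat(frames, axis=1).dropna()
```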
Figure 9.
Training vs. testing in time-series split cross-validation (n = 10) for the Beijing dataset.
Figure 10.
Training vs. testing in time-series split cross-validation (n = 10) for the Manchester dataset.
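The expanding-window splits of Figures 9 and 10 correspond to scikit-learn's `TimeSeriesSplit`. A minimal sketch, assuming `X` and `y` hold the features and PM2.5 target in time order (the placeholder arrays below stand in for the real data):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.random.rand(43824, 7)  # placeholder feature matrix (hourly data)
y = np.random.rand(43824)     # placeholder PM2.5 target

tscv = TimeSeriesSplit(n_splits=10)
for i, (train_idx, test_idx) in enumerate(tscv.split(X), start=1):
    # The training window grows with each iteration, and the test fold
    # always follows it in time, matching Figures 9 and 10.
    print(f"iteration {i}: train={len(train_idx)}, test={len(test_idx)}")
```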
Figure 11.
CNN–LSTM layers for the seventh iteration with (d0, o1) for the Beijing dataset.
Figure 12.
CNN–LSTM layers for the seventh iteration with (d0, o1) for the Manchester dataset.
Figure 13.
Real PM2.5 data from part of the seventh-iteration results, comparing the value ranges of the Beijing and Manchester datasets.
Figure 14.
Real vs. CNN–LSTM and its NARX variants in part of the seventh iteration results for the Beijing dataset.
Figure 15.
Real vs. CNN–LSTM and its NARX variants in part of the seventh iteration results for the Manchester dataset.
Figure 16.
Real vs. LSTM and its NARX variants in part of the seventh iteration results for the Beijing dataset.
Figure 17.
Real vs. LSTM and its NARX variants in part of the seventh iteration results for the Manchester dataset.
Figure 18.
Real vs. Extra Trees and its NARX variants in part of the seventh iteration results for the Beijing dataset.
Figure 19.
Real vs. Extra Trees and its NARX variants in part of the seventh iteration results for the Manchester dataset.
Figure 20.
Real vs. XGBRF and its NARX variants in part of the seventh iteration results for the Beijing dataset.
Figure 21.
Real vs. XGBRF and its NARX variants in part of the seventh iteration results for the Manchester dataset.
Figure 22.
Evaluation results of non-NARX and NARX in terms of coefficient of determination for the Beijing dataset.
Figure 23.
Evaluation results of non-NARX and NARX in terms of index of agreement for the Beijing dataset.
Figure 24.
Evaluation results of non-NARX and NARX in terms of root mean square error for the Beijing dataset.
Figure 25.
Evaluation results of non-NARX and NARX in terms of normalised root mean square error for the Beijing dataset.
Figure 26.
Evaluation results of non-NARX and NARX in terms of offline training time for the Beijing dataset.
Figure 27.
Evaluation results of non-NARX and NARX in terms of coefficient of determination for the Manchester dataset.
Figure 28.
Evaluation results of non-NARX and NARX in terms of index of agreement for the Manchester dataset.
Figure 29.
Evaluation results of non-NARX and NARX in terms of root mean square error for the Manchester dataset.
Figure 30.
Evaluation results of non-NARX and NARX in terms of normalised root mean square error for the Manchester dataset.
Figure 31.
Evaluation results of non-NARX and NARX in terms of offline training time for the Manchester dataset.
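The metrics behind Figures 22–31 (and Tables 5 and 6) can be computed as below. The IA formula is Willmott's index of agreement; normalising RMSE by the observed range is an assumption here, since other NRMSE conventions (e.g., by the mean) exist.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def evaluate(obs: np.ndarray, pred: np.ndarray) -> dict:
    """Return R2, IA, RMSE, and NRMSE for observed vs. predicted PM2.5."""
    rmse = np.sqrt(mean_squared_error(obs, pred))
    # Willmott's index of agreement (IA), bounded in [0, 1].
    denom = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    ia = 1.0 - np.sum((obs - pred) ** 2) / denom
    return {
        "R2": r2_score(obs, pred),
        "IA": ia,
        "RMSE": rmse,                             # µg/m³
        "NRMSE": rmse / (obs.max() - obs.min()),  # assumed range normalisation
    }
```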
Table 1.
Related work summary.
| Reference | Algorithms | Prediction Horizon | Evaluation Metrics | Pros | Cons |
|---|---|---|---|---|---|
| [19] | APNet (CNN–LSTM with normalised batching) | Used past 24 h to predict next hour | RMSE, MAE, IA | Viability and usefulness for predicting PM2.5 were validated experimentally. | Forecasts did not precisely follow real trends; they were shifted and distorted. |
| [22] | CNN–LSTM | Used past 24–72 h to predict next 3 h | RMSE, correlation coefficient | Their model processes input from many sites in a city. | They did not verify that their model can be applied to cities other than the one experimented upon. |
| [23] | CNN–LSTM | Used past 4, 12, and 24 h to predict next hour | MAE, RMSE | They combined meteorological, traffic, and air-pollution-station data to compare the effectiveness of adding external sources for better air-quality prediction. | They used all the data and features available, which incurs a high computation cost and long execution time. |
| [24] | Multivariate CNN–LSTM | Used past week to predict next 24 h | MAE, RMSE | The CNN extracted air-quality features, decreasing training time, while long-term historical input data aided the LSTM in prediction. | More evaluation metrics, such as R2 or IA, could have verified how close their models' output was to actual values. |
| [20] | LSTM | Used past 24 h to predict next hour | RMSE, NRMSE, R2, IA | Using NARX minimised the data input, speeding up the process and improving LSTM accuracy. | Evaluation using standard K-Fold is inaccurate for time series. |
Table 2.
Beijing, China dataset statistics.
| Statistic | PM2.5 | Cumulated Hours of Rain | Cumulated Wind Speed |
|---|---|---|---|
| Count | 41,757 | 43,824 | 43,824 |
| Mean | 98.61321 | 0.194916 | 23.88914 |
| Standard Deviation | 92.04928 | 1.415851 | 50.01006 |
| Minimum | 0 | 0 | 0.45 |
| Percentile (25%) | 29 | 0 | 1.79 |
| Percentile (50%) | 72 | 0 | 5.37 |
| Percentile (75%) | 137 | 0 | 21.91 |
| Maximum | 994 | 36 | 585.6 |
| Empty Count | 2067 | 0 | 0 |
| Loss Percentage | 4.72% | 0.00% | 0.00% |
| Coverage Percentage | 95.28% | 100.00% | 100.00% |
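The summaries in Tables 2–4 are standard descriptive statistics plus gap accounting. A minimal pandas sketch, assuming the hourly data sits in a DataFrame `df` with one row per hour:

```python
import pandas as pd

def dataset_stats(df: pd.DataFrame) -> pd.DataFrame:
    """Reproduce the layout of Tables 2-4: describe() plus gap accounting."""
    stats = df.describe(percentiles=[0.25, 0.50, 0.75]).T
    stats["Empty Count"] = df.isna().sum()
    stats["Loss Percentage"] = df.isna().mean() * 100       # % of missing hours
    stats["Coverage Percentage"] = df.notna().mean() * 100  # % of present hours
    return stats
```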
Table 3.
Piccadilly station, Manchester, UK dataset statistics before processing.
| Statistic | PM2.5 | M_DIR | M_SPED | M_T | NO | NO2 | O3 |
|---|---|---|---|---|---|---|---|
| Count | 39,962 | 42,768 | 42,768 | 42,768 | 42,801 | 42,710 | 42,790 |
| Mean | 10.2795 | 197.5673 | 3.3021 | 9.1598 | 18.0077 | 37.2121 | 28.2244 |
| Standard Deviation | 10.2253 | 82.0140 | 1.8266 | 5.6743 | 29.9828 | 18.2559 | 19.3880 |
| Minimum | −4 | 0.1 | 0 | −6.9 | 0 | 1.5181 | 0.0998 |
| Percentile (25%) | 4.3 | 138.9 | 1.9 | 5.2 | 3.3162 | 22.9991 | 11.8744 |
| Percentile (50%) | 7.6 | 205.4 | 2.9 | 8.9 | 8.1880 | 34.7902 | 26.4430 |
| Percentile (75%) | 13.1 | 258.1 | 4.4 | 13.1 | 19.7654 | 49.0941 | 41.8099 |
| Maximum | 404.3 | 360 | 13.8 | 30.6 | 671.7575 | 256.1077 | 138.5515 |
| Empty Count | 3862 | 1056 | 1056 | 1056 | 1023 | 1114 | 1034 |
| Loss Percentage | 8.81% | 2.41% | 2.41% | 2.41% | 2.33% | 2.54% | 2.36% |
| Coverage Percentage | 91.19% | 97.59% | 97.59% | 97.59% | 97.67% | 97.46% | 97.64% |
Table 4.
Piccadilly station, Manchester, UK dataset statistics after imputation and processing.
| Statistic | PM2.5 | M_DIR | M_SPED | M_T | NO | NO2 | O3 |
|---|---|---|---|---|---|---|---|
| Count | 43,824 | 43,824 | 43,824 | 43,824 | 43,824 | 43,824 | 43,824 |
| Mean | 10.4240 | 197.6653 | 3.3172 | 9.1687 | 17.9776 | 37.3214 | 28.2042 |
| Standard Deviation | 9.7384 | 81.0646 | 1.8110 | 5.6170 | 29.6567 | 18.1270 | 19.2519 |
| Minimum | 0 | 0.1 | 0 | −6.9 | 0 | 1.5181 | 0.0998 |
| Percentile (25%) | 4.8 | 141.6 | 1.9 | 5.3 | 3.4017 | 23.2407 | 12.0241 |
| Percentile (50%) | 7.9 | 205.6 | 3 | 9 | 8.4868 | 34.9435 | 26.4929 |
| Percentile (75%) | 12.7940 | 256.6 | 4.4 | 13 | 19.8489 | 49.2579 | 41.6438 |
| Maximum | 404.3 | 360 | 13.8 | 30.6 | 671.7575 | 256.1077 | 138.5515 |
Table 5.
Prediction evaluation metrics averaged over the 10 time-series K-Fold splits for the Beijing dataset.
| No | Algorithm Name | R2 ↑ | IA ↑ | RMSE (µg/m³) ↓ | NRMSE ↓ | Ttr (Seconds) ↓ |
|---|---|---|---|---|---|---|
| 1 | CNN–LSTM | 0.93151 | 0.98237 | 23.22744 | 0.03776 | 31.83709 |
| 2 | (d0, o1) | 0.93498 | 0.98304 | 22.56670 | 0.03670 | 33.48102 |
| 3 | (d0, o4) | 0.93358 | 0.98264 | 22.88752 | 0.03715 | 31.94185 |
| 4 | (d0, o24) | 0.93136 | 0.98185 | 23.23515 | 0.03780 | 30.90029 |
| 5 | (d8, o1) | 0.93472 | 0.98309 | 22.60365 | 0.03677 | 34.92095 |
| 6 | LSTM | 0.93000 | 0.98157 | 23.45492 | 0.03817 | 23.10278 |
| 7 | (d0, o1) | 0.93372 | 0.98266 | 22.81122 | 0.03709 | 24.75054 |
| 8 | (d0, o4) | 0.93329 | 0.98270 | 22.86120 | 0.03719 | 24.73670 |
| 9 | (d0, o24) | 0.92800 | 0.98108 | 23.77952 | 0.03870 | 23.65251 |
| 10 | (d8, o1) | 0.93119 | 0.98220 | 23.30740 | 0.03764 | 25.30951 |
| 11 | ET | 0.92624 | 0.98027 | 24.21871 | 0.03926 | 3.86640 |
| 12 | (d0, o1) | 0.92583 | 0.98018 | 24.27789 | 0.03936 | 1.67357 |
| 13 | (d0, o4) | 0.92609 | 0.98028 | 24.21005 | 0.03927 | 1.97124 |
| 14 | (d0, o24) | 0.92633 | 0.98030 | 24.15589 | 0.03921 | 4.09777 |
| 15 | (d8, o1) | 0.92482 | 0.97992 | 24.43607 | 0.03964 | 1.75481 |
| 16 | XGBRF | 0.92051 | 0.97881 | 25.32772 | 0.04087 | 1.39812 |
| 17 | (d0, o1) | 0.92106 | 0.97893 | 25.24395 | 0.04061 | 0.90726 |
| 18 | (d0, o4) | 0.92137 | 0.97904 | 25.19564 | 0.04058 | 0.98165 |
| 19 | (d0, o24) | 0.92124 | 0.97901 | 25.21104 | 0.04064 | 1.70556 |
| 20 | (d8, o1) | 0.92116 | 0.97897 | 25.22721 | 0.04060 | 1.11933 |
| 21 | APNet [19] | N/A | 0.97831 | 24.22874 | N/A | N/A |
| 22 | NARX LSTM (d8, o1) [20] | 0.9291 | 0.98150 | 23.64560 | 0.03750 | 15.518 |
Table 6.
Prediction evaluation metrics averaged over the 10 time-series K-Fold splits for the Manchester, UK dataset.
| No | Algorithm Name | R2 ↑ | IA ↑ | RMSE (µg/m³) ↓ | NRMSE ↓ | Ttr (Seconds) ↓ |
|---|---|---|---|---|---|---|
| 1 | CNN–LSTM | 0.73343 | 0.91014 | 4.60168 | 0.04338 | 45.65250 |
| 2 | (d0, o1) | 0.75676 | 0.92043 | 4.41522 | 0.04093 | 62.93308 |
| 3 | (d0, o4) | 0.75719 | 0.92129 | 4.40502 | 0.04082 | 63.98762 |
| 4 | (d0, o24) | 0.72561 | 0.90614 | 4.68568 | 0.04383 | 68.29444 |
| 5 | (d8, o1) | 0.75587 | 0.92121 | 4.42376 | 0.04098 | 67.65048 |
| 6 | LSTM | 0.71410 | 0.90178 | 4.80527 | 0.04494 | 36.36427 |
| 7 | (d0, o1) | 0.75132 | 0.91719 | 4.46954 | 0.04131 | 56.96864 |
| 8 | (d0, o4) | 0.74757 | 0.91746 | 4.50223 | 0.04162 | 51.03376 |
| 9 | (d0, o24) | 0.70886 | 0.89991 | 4.85817 | 0.04536 | 54.52106 |
| 10 | (d8, o1) | 0.74860 | 0.91608 | 4.48958 | 0.04164 | 55.04288 |
| 11 | ET | 0.75236 | 0.91677 | 4.48561 | 0.04100 | 10.69692 |
| 12 | (d0, o1) | 0.75413 | 0.91787 | 4.47682 | 0.04096 | 2.29273 |
| 13 | (d0, o4) | 0.75144 | 0.91707 | 4.50112 | 0.04108 | 3.35555 |
| 14 | (d0, o24) | 0.75453 | 0.91775 | 4.47306 | 0.04095 | 10.22563 |
| 15 | (d8, o1) | 0.74594 | 0.91575 | 4.55117 | 0.04158 | 2.41416 |
| 16 | XGBRF | 0.73285 | 0.91280 | 4.64339 | 0.04220 | 5.45900 |
| 17 | (d0, o1) | 0.74011 | 0.91516 | 4.59084 | 0.04192 | 1.77346 |
| 18 | (d0, o4) | 0.74169 | 0.91564 | 4.57746 | 0.04181 | 2.10689 |
| 19 | (d0, o24) | 0.74247 | 0.91579 | 4.57905 | 0.04183 | 4.59505 |
| 20 | (d8, o1) | 0.74069 | 0.91578 | 4.58845 | 0.04193 | 1.71298 |
Table 7.
An excerpt from the Beijing dataset matching the sharp transition in the results (cv = calm and variable, NW = northwest).
| Timestep | Date and Time | PM2.5 | Cumulated Wind Speed | Combined Wind Direction |
|---|---|---|---|---|
| 30126 | 9 June 2013 5:00 | 130 | 1.78 | cv |
| 30127 | 9 June 2013 6:00 | 153 | 2.23 | cv |
| 30128 | 9 June 2013 7:00 | 110 | 1.79 | NW |
| 30129 | 9 June 2013 8:00 | 21 | 3.58 | NW |
| 30130 | 9 June 2013 9:00 | 14 | 9.39 | NW |
| 30131 | 9 June 2013 10:00 | 13 | 17.44 | NW |
| 30132 | 9 June 2013 11:00 | 36 | 23.25 | NW |
| 30133 | 9 June 2013 12:00 | 14 | 29.06 | NW |
Table 8.
Output statistics of CNN–LSTM and LSTM, along with their NARX variants, vs. the training and testing output for the seventh iteration for the Beijing dataset.
| Testing Count = 3722 | Mean | SD | Min | 25% | 50% | 75% | 95% | 99% | 99.99% | Max |
|---|---|---|---|---|---|---|---|---|---|---|
| Training | 101.9 | 95.1 | 0 | 29 | 75 | 144 | 289 | 434 | 915.5 | 994 |
| Testing | 78 | 56 | 4 | 37 | 66 | 107.3 | 182 | 257.3 | 459.2 | 466 |
| CNN–LSTM | 77.5 | 53.8 | 4 | 36 | 66 | 107 | 179 | 248.6 | 379.7 | 382 |
| (d0, o1) | 78.4 | 54.9 | 4 | 37 | 67 | 108 | 182 | 255.3 | 396.7 | 399 |
| (d0, o4) | 78.2 | 53.5 | 5 | 38 | 67 | 107 | 178 | 248 | 383.6 | 387 |
| (d0, o24) | 77.1 | 52.7 | −3.0 | 37 | 66 | 106 | 176.5 | 247.9 | 363.5 | 365 |
| (d8, o1) | 78.8 | 55.1 | −15.0 | 38 | 67 | 108 | 182 | 254.6 | 400.4 | 403 |
| LSTM | 78.2 | 53.8 | −7.0 | 36 | 66 | 107 | 181.5 | 255.6 | 359.2 | 360 |
| (d0, o1) | 77.8 | 52.8 | −7.0 | 38 | 67 | 107 | 177 | 246.3 | 368.1 | 370 |
| (d0, o4) | 77.1 | 53.1 | −11.0 | 36 | 66 | 106 | 177 | 248 | 361.7 | 364 |
| (d0, o24) | 77 | 51.8 | −12.0 | 36 | 67 | 106 | 174 | 247.3 | 357.1 | 359 |
| (d8, o1) | 77.8 | 52.8 | 8 | 38 | 67 | 107 | 177 | 244.3 | 368 | 371 |
Table 9.
Output statistics of Extra Trees and XGBRF, along with their NARX variants, vs. the training and testing output for the seventh iteration for the Beijing dataset.
| Testing Count = 3722 | Mean | SD | Min | 25% | 50% | 75% | 95% | 99% | 99.99% | Max |
|---|---|---|---|---|---|---|---|---|---|---|
| Training | 101.9 | 95.1 | 0 | 29 | 75 | 144 | 289 | 434 | 915.5 | 994 |
| Testing | 78 | 56 | 4 | 37 | 66 | 107.3 | 182 | 257.3 | 459.2 | 466 |
| ET | 79.5 | 54.2 | 5 | 39 | 69 | 108 | 179.5 | 252 | 418.2 | 422 |
| (d0, o1) | 79.6 | 54.7 | 6 | 39 | 69 | 108 | 179.5 | 254.2 | 451.5 | 473 |
| (d0, o4) | 79.6 | 54.7 | 5 | 39 | 69 | 107 | 179.5 | 253.3 | 439.7 | 442 |
| (d0, o24) | 79.6 | 54.4 | 5 | 39 | 69 | 109 | 180 | 253 | 424.2 | 425 |
| (d8, o1) | 79.6 | 55 | 5 | 39 | 69 | 108 | 181 | 254.3 | 447.9 | 449 |
| XGBRF | 79.4 | 54.5 | 10 | 39 | 71 | 105 | 178 | 252.3 | 442.3 | 448 |
| (d0, o1) | 79.4 | 54.7 | 10 | 39 | 70 | 105 | 178 | 251.3 | 449 | 455 |
| (d0, o4) | 79.4 | 54.6 | 9 | 39 | 70 | 105 | 178 | 251.3 | 449.3 | 455 |
| (d0, o24) | 79.4 | 54.6 | 10 | 39 | 70 | 105 | 178 | 252.3 | 446.7 | 452 |
| (d8, o1) | 79.4 | 54.7 | 10 | 39 | 70.5 | 105 | 178.5 | 252 | 449 | 455 |
Table 10.
Output statistics of CNN–LSTM and LSTM, along with their NARX variants, vs. the training and testing output for the seventh iteration for the Manchester dataset.
| Testing Count = 3960 | Mean | SD | Min | 25% | 50% | 75% | 95% | 99% | 99.99% | Max |
|---|---|---|---|---|---|---|---|---|---|---|
| Training | 10 | 9.7 | 0 | 4.5 | 7.5 | 12.3 | 27.8 | 45.2 | 253.9 | 404.3 |
| Testing | 11.8 | 10.1 | 0 | 6 | 9 | 15 | 28 | 48.4 | 131 | 135 |
| CNN–LSTM | 11.3 | 7.9 | −1.0 | 6 | 9 | 15 | 26 | 40 | 65.4 | 67 |
| (d0, o1) | 11.1 | 7.7 | −3.0 | 6 | 9 | 14 | 25 | 40 | 66 | 66 |
| (d0, o4) | 11.4 | 7.9 | −1.0 | 6 | 9 | 15 | 26 | 40 | 65 | 65 |
| (d0, o24) | 11.1 | 7.4 | −2.0 | 6 | 9 | 14 | 25 | 38 | 59.6 | 60 |
| (d8, o1) | 11.3 | 7.6 | −1.0 | 6 | 9 | 14 | 26 | 39.4 | 66 | 66 |
| LSTM | 11.4 | 7.6 | −2.0 | 6 | 9 | 14 | 26 | 41 | 61.4 | 63 |
| (d0, o1) | 11.6 | 8.1 | −3.0 | 6 | 9 | 14 | 26 | 42 | 78.4 | 80 |
| (d0, o4) | 11.2 | 8 | −1.0 | 6 | 9 | 14 | 26 | 41 | 71.4 | 73 |
| (d0, o24) | 11.5 | 7.6 | −3.0 | 6 | 9 | 15 | 26 | 40 | 66.6 | 67 |
| (d8, o1) | 11.3 | 8.4 | −4.0 | 6 | 9 | 14 | 27 | 43 | 73.8 | 75 |
Table 11.
Output statistics of Extra Trees and XGBRF, along with their NARX variants, vs. the training and testing output for the seventh iteration for the Manchester dataset.
| Testing Count = 3960 | Mean | SD | Min | 25% | 50% | 75% | 95% | 99% | 99.99% | Max |
|---|---|---|---|---|---|---|---|---|---|---|
| Training | 10 | 9.7 | 0 | 4.5 | 7.5 | 12.3 | 27.8 | 45.2 | 253.9 | 404.3 |
| Testing | 11.8 | 10.1 | 0 | 6 | 9 | 15 | 28 | 48.4 | 131 | 135 |
| ET | 11.4 | 8.5 | 2 | 6 | 9 | 14 | 26 | 43 | 113.7 | 120 |
| (d0, o1) | 11.4 | 8.5 | 2 | 6 | 9 | 15 | 26 | 40 | 110.8 | 112 |
| (d0, o4) | 11.5 | 8.7 | 2 | 6 | 9 | 15 | 26 | 42 | 140 | 142 |
| (d0, o24) | 11.4 | 8.5 | 2 | 6 | 9 | 15 | 26 | 40 | 122.9 | 130 |
| (d8, o1) | 11.6 | 8.8 | 2 | 6 | 9 | 15 | 27 | 43 | 130.1 | 138 |
| XGBRF | 11.6 | 9.8 | 3 | 6 | 9 | 14 | 27 | 46.4 | 189.2 | 190 |
| (d0, o1) | 11.6 | 9.4 | 3 | 6 | 9 | 14 | 27 | 46 | 157.2 | 158 |
| (d0, o4) | 11.5 | 9.3 | 3 | 6 | 9 | 14 | 27 | 45.4 | 147.2 | 148 |
| (d0, o24) | 11.5 | 9.2 | 3 | 6 | 9 | 14 | 27 | 46 | 139.6 | 140 |
| (d8, o1) | 11.6 | 9.4 | 3 | 6 | 9 | 14 | 27 | 46 | 157.2 | 158 |