In last month’s “Bad Science” blog posting, “Bad Science: Show me a protocol, and I will show a scenario where mine is better …”, I commented on the paper “Performance Study of AODV with Variation in Simulation Time and Network Size”, published in the “International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE)”, as drawing an unjustified conclusion: presenting results as “general” despite their having been obtained in an extremely “narrow” study.
That very same paper is, by the way, a great example of another “Bad Science” sin: absent, incomplete or biased bibliographies.
In the paper “Performance Study of AODV with Variation in Simulation Time and Network Size”, this first becomes evident in the “Routing protocols in MANETS” section, included in full below:
Several routing protocols have been proposed for MANETs. These can be classified into different categories based on different criteria. Based on the technique used for updating routing information (i.e. the way they react to the changes in network topology) they are classified into three major categories: i) proactive (also known as table-driven) protocols, ii) reactive (known as source initiated or demand-driven) protocols and iii) hybrid protocols. Based on the role of routing nodes and the organization of the network, routing protocols are classified into flat protocols and hierarchical protocols [8].
Proactive protocols maintain consistent up-to-date routing information from each node to every other node in the MANET. The nodes need to maintain one or more tables which contain latest routing information. If there is a change in network topology that has to be included by broadcasting updated information throughout the network in order to maintain consistent routing information. Optimized Link State Routing protocol (OLSR), Destination Sequenced Distance-Vector routing protocol (DSDV), and Wireless Routing Protocol (WRP) are the examples of proactive protocols.
Reactive or on-demand routing protocols are source-initiated, which do not maintain or constantly update their routing tables with the latest route topology. In this type of routing routes are created only when there is a need by the source node. When a node requires a route to a destination, it initiates a route discovery process within the network. It will be completed when one or more routes are established or all possible route permutations have been tested. The main motivation in designing on-demand protocols is to reduce large amount of overhead for maintaining the routing table in the table-driven protocols in the dynamic MANET and hence they are widely used. AODV, DSR, TORA, DYMO are examples of reactive protocols.
Hybrid routing protocols have combined the advantage of both proactive and reactive routing protocols to balance the delay and control overhead. The main disadvantage of hybrid routing protocols is that the nodes which have high level topological information maintain more routing information that leads to more memory and power consumption [7]. The most typical hybrid routing scheme is zone routing protocol (ZRP).
That’s right: this paper, presenting a simulation study of the AODV routing protocol, does not even bother to include a reference to the AODV protocol specification (RFC 3561, published in 2003) – nor, for that matter, to the specifications of any of the other protocols it discusses in that section.
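As an aside, for readers who have never met AODV: the “reactive” behaviour the quoted section describes means that a node floods a route request (RREQ) only when it actually needs a route, and routes are installed when the corresponding route reply (RREP) travels back along the discovered path. Below is a minimal sketch of that idea in Python – all names are my own invention, and it deliberately ignores the sequence numbers, TTLs, timeouts and route maintenance of the actual RFC 3561 state machine:

```python
from collections import deque

class Node:
    """Toy AODV-style node: keeps routing state only for routes it has needed."""
    def __init__(self, name, neighbours):
        self.name = name
        self.neighbours = neighbours   # names of directly reachable nodes
        self.routes = {}               # destination -> next hop, filled on demand

def discover_route(nodes, source, destination):
    """Flood a route request ("RREQ") from `source`; on success, install
    next-hop entries along the path, mimicking the route reply ("RREP")."""
    # Breadth-first flood, remembering the predecessor of each reached node.
    predecessor = {source: None}
    queue = deque([source])
    while queue:
        current = queue.popleft()
        if current == destination:
            # "RREP": walk back along the reverse path, installing forward routes.
            hop = destination
            while predecessor[hop] is not None:
                nodes[predecessor[hop]].routes[destination] = hop
                hop = predecessor[hop]
            return True
        for neighbour in nodes[current].neighbours:
            if neighbour not in predecessor:   # each node rebroadcasts a RREQ once
                predecessor[neighbour] = current
                queue.append(neighbour)
    return False   # flood exhausted without reaching the destination

# A four-node chain A-B-C-D: A learns a route to D only when it asks for one.
nodes = {n: Node(n, nb) for n, nb in
         {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}.items()}
assert "D" not in nodes["A"].routes   # no proactive table maintenance
discover_route(nodes, "A", "D")
print(nodes["A"].routes)              # {'D': 'B'}
```

The only point of the sketch is to make the proactive/reactive distinction concrete: nothing is stored in A’s routing table until A asks for a route.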
The related works section of this paper (included in full, bolding is mine) reads:
Comparison of AODV and DSR was done [9] for packet transmission to variation of simulation time. It is found that **AODV protocol performs much better than DSR protocol**. Authors in [10] compared AODV and DSDV protocols and they observed that **AODV is better than DSDV in almost every aspect**. Authors in [11] compared the performance of AODV, OLSR and TORA using NS2 (Network Simulator version2). The **result shows that AODV performs better in PDF and throughput compared to OLSR and TORA**. However **TORA performs better than OLSR and AODV related to end-to-end delay metrics**. Comparison of AODV, DSDV and DSR routing protocols [12] was done using NS-2 and the result **shows that AODV performs better than other two protocols**.
I sense a common theme going on here… Recapitulating the bolded parts above:
- AODV protocol performs much better than DSR protocol
- AODV is better than DSDV in almost every aspect
- result shows that AODV performs better in PDF and throughput compared to OLSR and TORA
- TORA performs better than OLSR and AODV related to end-to-end delay metrics.
- shows that AODV performs better than other two protocols
All but one of the “related work” items indicate the superiority of AODV, and the only “AODV-critical” related work given is critical of AODV on a single metric only.
And for all but one of these “related work” items indicating the superiority of AODV, only “general” arguments are given: “much better”, “almost every aspect”, “performs better”, and “performs better than the other two protocols” – no specifics, no caveats, no conditions. Stating universal truths is almost always a “red flag”…
There are, of course, several things that make this “Bad Science”:
- The obvious bias – in this case, I guess in order to justify why studying AODV (as opposed to the other protocols) is relevant, only papers arguing the superiority of AODV are considered.
- The assumption that “because others have done a simulation study, another one must also be relevant” – something I discussed in an earlier Bad Science posting, “Bad Science: Incessant Protocol Comparisons”. What new information is this “yet another simulation study” supposed to bring?
- Without even discussing the veracity of “is AODV superior”, a long list of related work exists which paints a more nuanced picture.
Arguing (scientifically) why related work expressing different conclusions is irrelevant to the study being conducted would be a worthy use of a “related work” section – simply ignoring it means that it’s not science, not even junk science – it’s simply junk. Again, this is something which any peer review should have caught – and while the authors are to blame, it is befuddling how something like this manages to get past peer reviewers and editors, which, again, casts considerable doubt on the seriousness of the “International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE)”.