Like many (most, I suspect?) scientists, I have Google Scholar set up to send me email alerts. For example, I get an email when publications matching certain keywords are released, or when whatever fuzzy (or rather "dizzy", since it sometimes seems to be on drugs) algorithm Google uses identifies "related research" to my research topics.
I also get an email when Google finds new citations to papers that I have published. I tend to read – at least diagonally – papers which cite mine. It's interesting to see what other scientists do in the same domain as me, occasionally it's a good source of inspiration – and (worst case) it's satisfying to get confirmation that others have read, and found cite-worthy, something I've published in the past.
Sometimes, however, it's a source of utter befuddlement. Towards the end of 2017, I received the Google Scholar alert shown to the right … and the first citation immediately befuddled me.
While I do teach, I have never taught primary school students – nor have I ever published anything related to teaching primary school students. (The closest I come is that I contributed a chapter to "Une introduction à la science informatique pour les enseignants de la discipline en lycée" – a textbook intended to help high-school teachers learn how to teach computer science.)
Thus, ignoring the small grammatical glitch, a paper titled "A Study of Academy Achievement of Government and Non Government Primary School Students in Relation to Reinforcement", citing my work, was … intriguing.
That paper is published in something called the "International Education & Research Journal", which claims on its webpage to be an "International Indexed Journal", a "Double Reviewed Journal", and a "Multi-disciplinary research journal" with an "Impact Factor Journal" of 4.064.
We will get back to all that, after taking a look at the actual paper in question. (Local copy cached here). It's written by an assistant professor and an associate professor at the "Department of Teacher Education" of an institution called "SHUATS, India". The abstract promises a quantitative and qualitative, data-based analysis of the performance of some pedagogical approaches applied to primary school students in a specific district (Allahabad, India).
That’s incredibly far from my own scientific domain, which (i) implies that I am incompetent in having a valid opinion about the scientific content of the paper, and (ii) makes it very curious how it can possibly cite anything that I have ever published.
So, I – naively – set forth and read the paper …
As I believe is common within the scientific field in question, the paper cites related work thus (albeit with slightly inconsistent notation):
…. development of adolescents Allen (2005)…..
…. equal to your age or ability (Hardcastle, 2002) ….
And then, after having read through five pages, which contain nuggets such as this:
Methuen (1987) argues that to spare the rod is to spoil the child, and there is very good reason to believe this.
(And, spoiler alert, no, their experiments did not include administering small electric shocks to primary school students who failed to properly memorize their multiplication tables …)
I finally got to the bibliography, which started:
REFERENCES:
1. S. Marti, T.J. Giuli, K. Lai, and M. Baker, “Mitigating Routing Misbehavior in Mobile Ad hoc Networks,” Proc. 6th Annual ACM/IEEE Int’l. Conf. Mobile Computing and Networking (Mobicom’00), Boston, Massachusetts, August 2000, pp. 255-265.
2. E.M. Royer, and C.-K. Toh, “A Review of Current Routing Protocols for Ad hoc Mobile Wireless Networks,” IEEE Personal Communications, vol. 2, no. 6, April 1999, pp. 46-55.
3. S. Ramanathan, and M. Steenstrup, “A Survey of Routing Techniques for Mobile Communications Networks,” Mobile Networks And Applications, vol. 2, no. 1, October 1996, pp. 89-104.
4. T. Clausen, P. Jacquet, and L. Viennot, “Comparative Study of Routing Protocols for Mobile Ad hoc Networks,” Med-Hoc-Net’02, Sardegna, Italy, September 2002, 10 pp.
5. C.E. Perkins, and P. Bhagwat, “Highly Dynamic Destination-Sequenced Distance-Vector (DSDV) for Mobile Computers,” Proc. ACM Conf. Communications Architectures and Protocols (SIGCOMM’94), London, UK, August 1994, pp. 234-244.
6. T. Clausen, G. Hansen, L. Christensen, and G. Behrmann, “The Optimized Link State Routing Protocol – Evaluation Through Experiments and Simulation,” Proc. 4th Int’l. Symp. Wireless Personal Multimedia Communications, Aalborg, Denmark, September 2001, 6 pp.
…
A couple of comments are required here:
- The citation style (Hardcastle, 2002) and the format of the references section (numbered references) do not match up. That's a red flag that the journal lacks proper editorial processes.
- None of the citations in the paper are found in the references section.
- The two papers in the references section of which I am a co-author are entirely consistent with the rest of the listed references: all squarely on the topic of mobile, wireless, ad hoc networks, which was a key activity of mine a decade ago.
- None of the papers listed in the references section have anything to do with the topic of this paper.
- None of the papers listed in the references section seem to be cited in this paper.
Now, as it happens, I recognized that very references section, and a quick look in my archives confirmed my suspicion. In fact, all the references are taken from the references section of this paper, specifically:
- References 1-28 correspond to references 1-28 from this paper,
- References 29-36 correspond to references 36-43 from this paper.
This is, at worst, both plagiarism (my bet is that someone copied the BibTeX file and removed a few entries …) and academic dishonesty (publishing a paper with faux citations to give the pretence of being serious) – at best, it is an instance of gross oversight.
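As a purely speculative illustration of that bet (the file name and the whole setup below are hypothetical, not anything recovered from the actual manuscript): in LaTeX, the references section is generated from whatever .bib file is supplied, entirely independently of citations that are typed by hand in the text.

```latex
% Speculative sketch: how a numbered reference list, unrelated to the
% author-year citations in the text, can end up in a manuscript.
% "networking.bib" is a hypothetical copied bibliography file.
\documentclass{article}
\begin{document}

% In-text citations typed by hand, so LaTeX never checks them
% against the bibliography:
Methuen (1987) argues that \dots{} equal to your age or ability
(Hardcastle, 2002) \dots

% \nocite{*} dumps every entry of the copied .bib file into the
% references, numbered in database order -- none of them cited above.
\nocite{*}
\bibliographystyle{unsrt}
\bibliography{networking}

\end{document}
```

Run through BibTeX, such a manuscript would produce exactly the symptom observed here: author-year citations in the prose, and a numbered reference list with no connection to them.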
Giving the benefit of the doubt, let’s assume that it’s “just” gross oversight. But, by whom? The authors? The editor of the journal? The reviewers?
The authors may inadvertently have submitted an incorrect manuscript to the journal. But in that case, the editorial staff should have caught it: a cursory glance, as indicated, reveals the mismatch between the citation style and the style of the references section, which a responsible editorial assistant would have raised.
A reputable journal would also have a review process, with several independent reviewers whose job it is to verify the scientific validity of the submission. That includes verifying that the domain-appropriate related work is discussed, which mechanically requires at least looking through the references. I find it hard to believe that a reviewer looking through the references section of this paper would not stumble over two facts:
- None of the domain-appropriate related work is cited, and
- All of the listed work is entirely irrelevant to this paper (a very narrow field of computer networking vs. primary-school pedagogy).
This leads me to conclude that this paper was subject neither to an appropriate editorial process, nor to any sort of peer review.
Recall the journal's claims: an "International Indexed Journal", a "Double Reviewed Journal", and a "Multi-disciplinary research journal" with an "Impact Factor Journal" of 4.064.
I do not know what "Double Reviewed Journal" means – but to dispel any suspicion that it might mean "double-blind review", the journal website helpfully has a page on its review process, where one can read:
IERJ – International Education and Research Journal employ the peer review process in order to maintain academic standards and insure the validity of individual works submitted for publication. In addition, follows a single-blinded peer review process, to ensure independent editorial decision-making.
But, also, that double-blind review can be requested:
IERJ – International Education and Research Journal will grant a double-blinded peer-review process upon an author’s request, and this requires the prior approval of the Editor-in-Chief
If the claimed process, depicted in the flow chart to the right (from the IERJ website), was followed, then the editorial assistant, the reviewers, and the editor-in-chief are all guilty of gross negligence in their respective oversight of the publication process. It is hard to believe, however, that this many academics would get something so basic so incredibly wrong …
… More likely, then, no editorial or review process was followed at all, which indicates that IERJ is a potential "predatory publisher" … so let's look a little closer at the journal …
The front page of the journal's website claims an "Impact Factor Journal" of 4.064 – which, if it were a Thomson Reuters Impact Factor (which, for all its flaws, is the usually accepted "authentic" impact factor), would be quite respectable.
However, elsewhere on the front page, the claimed Impact Factor is qualified by "(SJIF)" – "Scientific Journal Impact Factor". This is distinctly different from the Thomson Reuters Impact Factor, and the company generating the "SJIF" has a "helpful" webpage that explains its "evaluation methodology" and lists criteria such as (see image to the right):
A. Scientific Quality
– Scientific papers published in last year;
– Indexation shown in the list of databases.
B. Internationalization:
– Language of the titles and abstracts of articles;
– Language of published articles;
– International Editorial Board;
– Long-term cooperation with foreign reviewers;
– Degree of popularity of the journal for the international market.
C. Stability:
– The regularity of appearance;
– Age of journal.
D. Technical quality:
– DOI number;
– Availability on the Internet;
– Availability of information, editorial, table of contents, abstracts, full text articles;
– The tools used to manage bibliographies.
E. Standards:
– Procedures for reviewing;
– Statements about the originality of the article;
– Legal ownership of published content, copyright transfer, etc.
F. Editorial Quality:
– The first page of the cover: title, ISSN, frequency, volume / number / month / year;
– Editorial information;
– Contacts (editor and publisher);
– Detailed guidelines (instructions) for authors;
– Uniform structure of the article.
G. Print and website score:
– Quality of photos and illustrations, graphs and tables, as well as the ability to print in color and quality of the paper.
It's quite unclear how an "Impact Factor" value of "4.064" is calculated from those "criteria" – for that matter, it's not clear from that webpage whether "4.064" is good or bad. And … really, how does "the ability to print in color and quality of the paper" factor into scientific impact?
Another page on the website of the company generating the SJIF suggests that a journal can "buy" an Impact Factor for the modest sum of US$50, and that it takes just a week to generate. And the FAQ on the same site reads:
What is the minimum requirement for evaluation of SJIF?
• At least one issue has been published.
• At least 3 articles must have been published.
For reference, the "Impact Factor" of a journal for a given year is well-defined in academia: it is the number of citations recorded during that year to papers published by the journal during the two preceding years, divided by the total number of papers the journal published during those two years. Thus, computing a "real" Impact Factor after just a single issue is … not possible. Even for a US$50 bribe fee.
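To make that definition concrete, here is the usual two-year formula, written out with 2017 as a purely illustrative year (the notation and the example numbers below are mine, not anything published by the journal or by SJIF):

```latex
% Two-year Impact Factor for 2017 (illustrative year):
%   C_2017          -- citations recorded in 2017 to papers the journal published in 2015-2016
%   N_2015, N_2016  -- number of papers the journal published in 2015 and 2016
\[
  \mathrm{IF}_{2017} \;=\; \frac{C_{2017}}{N_{2015} + N_{2016}}
\]
% Hypothetical example: 812 citations in 2017 to 100 + 100 published papers
% gives IF_2017 = 812 / 200 = 4.06.
```

For a journal with only a single issue to its name, both terms in the denominator are zero, so the quotient above simply isn't defined – which is precisely why a genuine Impact Factor cannot exist after one issue.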
It, therefore, seems established that the SJIF is:
- A far cry from an objective measure of the “impact” of an academic journal
- Not based on the same data as the Thomson Reuters Impact Factor (which is, strictly, based on entries recorded in the JCR)
- Not calculated according to the established equation for an Impact Factor (i.e., based on citation and article count, only)
- … yet is named so as to invite confusion with the "real" Impact Factor.
Given this, and the glaring flaws in the paper in question … draw your own conclusions as to the seriousness of both SJIF, the "Impact Factor" company, and IERJ, the journal …