Assessing the Quality of a Research Paper

When considering a research idea, we are bound to rely on previous findings on the topic. Work already done in the field forms the foundation for our research and shapes its course and value. Inaccurate findings may lead to imprecise applications and propagate further fallacies into the new scientific knowledge we construct. To set a solid basis for research on any topic, and to prevent the multiplication of misinformation, it is crucial to critically evaluate existing scientific evidence and to know which information can be regarded as plausible.

So what are the criteria for determining whether a result can be trusted? As taught in introductory psychology courses, errors may emerge at any phase of the research process. It therefore all boils down to how the research has been conducted and how the results are presented.

Meltzoff (2007) emphasizes the key issues that can produce flawed results and interpretations and that should therefore be carefully considered when reading articles. Here is a reminder of what to bear in mind when reading a research article:

Research question
The research must clearly inform the reader of its aims. Terms should be clearly defined, all the more so if they are new or used in specific, uncommon ways. As a reader, you should pay particular attention to errors in logic, especially those concerning causation, relationship, or association.

Sample
To support trustworthy conclusions, a sample needs to be representative and adequate. Representativeness depends on the method of selection as well as on the assignment. For example, random assignment has advantages over systematic assignment in establishing group equivalence. A sample can be biased by the use of volunteers or by selective attrition. An adequate sample size can be determined by a power analysis.
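For instance, a prospective power analysis for an independent-samples t-test can be run with the Python statsmodels library; a minimal sketch, with placeholder values for effect size, alpha, and power:

```python
# Minimal sketch: minimum sample size per group for an independent-samples
# t-test, using statsmodels' power-analysis utilities. The effect size,
# alpha, and power below are conventional placeholder assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # assumed medium effect (Cohen's d)
    alpha=0.05,               # conventional significance level
    power=0.80,               # conventional target power
    alternative="two-sided",
)
print(f"required sample size per group: {n_per_group:.0f}")  # ~64
```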

Control of confounding variables
Extraneous variation can influence research findings; therefore, methods to control relevant confounding variables should be applied.

Research designs
The research design should be suitable to answer the research question. Readers should distinguish true experimental designs with random assignment from pre-experimental research designs.

Criteria and criteria measures
The criteria measures must demonstrate reliability and validity for both the independent and the dependent variables.

Data analysis
Appropriate statistical tests should be applied for the type of data obtained, and assumptions for their use met. Post hoc tests should be applied when multiple comparisons are performed. Tables and figures should be clearly labelled. Ideally, effect sizes should be included throughout, giving a clear indication of the variables' impact.
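For instance, Cohen's d for two independent groups divides the mean difference by the pooled standard deviation; a minimal sketch, with made-up scores:

```python
# Minimal sketch: Cohen's d for two independent groups, using the
# pooled standard deviation. All numbers below are made up.
import numpy as np

def cohens_d(group_a, group_b):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    # Pool the variances, weighting each group by its degrees of freedom.
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical scores for a treatment and a control group.
print(f"d = {cohens_d([5.1, 6.2, 5.8, 6.5], [4.2, 4.9, 5.0, 4.4]):.2f}")
```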

Discussion and conclusions
Does the study allow generalization? The study's limitations should also be mentioned, and the discussion and conclusions should be consistent with its results. It is a common mistake to emphasize the results that accord with the researcher's expectations while glossing over those that do not. Do the authors of the article at hand fall into the same trap?

Ethics
Last but not least, were the ethical standards met? For more information, refer to the APA's Ethical Principles of Psychologists and Code of Conduct (2010).

References
American Psychological Association. (2010, June 1). Ethical principles of psychologists and code of conduct. Retrieved July 28, 2011, from http://www.apa.org/ethics/code/index.aspx

Meltzoff, J. (2007). Critical Thinking About Research. Washington, DC: American Psychological Association.

 

Edited by: Maris Vainre


Evaluating the Quality of Research

IEEE issues guidelines for assessing the impact of research articles

By Kathy Pretz | 6 June 2014

Evaluating the impact of scientific research is a notoriously difficult problem with no standard solution. Nevertheless, making such evaluations has become increasingly important, as more universities and research administrators, research agencies, funding and government organizations and, ultimately, taxpayers want to assess the results of public and private research.

The pressure is on to find a way to measure the value of a researcher’s published work. But this has led to oversimplified and ultimately incorrect methods.

“Technically incorrect use of bibliometric indicators has caused great concern in the scholarly community,” says Gianluca Setti, vice president of IEEE Publication Services and Products.

As an example, some employers use a single bibliometric indicator of a journal—the Thomson Reuters Impact Factor (IF)—as a gauge for evaluating every individual paper published in that journal and the researchers who authored them. Setti points out that the IF was not designed for that purpose. In fact, it was introduced to help librarians decide whether to renew journal subscriptions.

Thanks to the “scientific recognition” attached to a citation, the IF is indeed a legitimate proxy for the relative importance of a journal within its field: the more citations, the higher the IF and the more “important” that journal is.

Currently, Setti says, the IF is being misused when it alone is employed to assess the performance of a researcher, not only for salary increases but also for decisions on hiring, promotion, and tenure. In medicine, biology, and other areas, the situation is even worse, according to Setti, because of the practice of computing a single indicator to rank individual performance by totaling (or averaging) the IFs of the publications produced by a scientist in a given period. Doing so has no significance from a bibliometric point of view, he says.

Setti points to several problems with using the IF as a gold standard for assessing the quality of research.

First, the IF of a scholarly journal is a measure reflecting the average number of citations to the articles it contains. Yet citations are not evenly distributed but skewed: in each journal, only a few articles receive an appreciable number of citations, while most are cited only a few times, if at all, Setti notes.
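For reference, the IF of a journal for year $Y$ is commonly given as the two-year average

$$
\mathrm{IF}_Y \;=\; \frac{\text{citations received in year } Y \text{ by items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}.
$$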

As a consequence, basic statistics suffice to show that an average measure like the IF, upon which the reputation of a journal is based, is largely unrelated to the quality (for example, the number of citations) of any specific article.
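A minimal simulation, with made-up, heavy-tailed citation counts, makes the point:

```python
# Minimal sketch (not IEEE's analysis): simulate a skewed citation
# distribution and compare the journal's mean citation count (an IF-like
# average) with what a typical article actually receives.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical journal: 1,000 articles with log-normal citation counts,
# a common stand-in for the skew described above.
citations = rng.lognormal(mean=1.0, sigma=1.2, size=1000).astype(int)

print(f"mean (IF-like average): {citations.mean():.1f}")
print(f"median (typical article): {np.median(citations):.0f}")
print(f"share of articles below the mean: {(citations < citations.mean()).mean():.0%}")
```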

Second, the IF has several weak points from a bibliometric standpoint that have been criticized by the scientific community. As a result, improved indicators have been introduced.

Indicators

  • The Eigenfactor score, developed by Jevin West and Carl Bergstrom at the University of Washington, in Seattle, computes the ranking of a scientific journal based on an algorithm similar to the one Google uses to rank Web pages in a search (a toy sketch of this style of computation follows this list). Journals are rated according to the number of citations their articles attract, with citations from highly ranked journals weighted to make a larger contribution to the Eigenfactor. Citations by authors to their own articles are excluded. Furthermore, unlike the IF, the Eigenfactor measures the performance of the journal as a whole; as such, it tends to be larger for journals that publish a substantial number of papers.
  • The Article Influence Score is computed by normalizing the Eigenfactor to the number of papers published in a specific journal, to obtain a measure of the average impact of an individual article.
  • The Scimago Journal Ranking is similar to the Article Influence Score but with the partial inclusion of self-citations, to better evaluate—the thinking goes—the impact of journals that are the sole reference of a small scientific community.
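To give the flavor of such a computation, here is a toy sketch with made-up citation counts; it is a deliberate simplification, not the official Eigenfactor algorithm:

```python
# Toy sketch (made-up numbers, simplified model -- NOT the official
# Eigenfactor algorithm): rank journals by power iteration on a
# journal-to-journal citation matrix, as PageRank does for Web pages.
import numpy as np

# C[i, j] = citations from journal j's articles to journal i (hypothetical).
C = np.array([
    [0.0, 30.0, 10.0],
    [20.0, 0.0, 40.0],
    [5.0, 15.0, 0.0],
])
np.fill_diagonal(C, 0.0)       # Eigenfactor excludes self-citations

P = C / C.sum(axis=0)          # normalize columns: each journal "votes"
                               # with a total weight of 1

rank = np.full(3, 1 / 3)       # start from a uniform ranking
for _ in range(100):           # power iteration converges to the
    rank = P @ rank            # dominant eigenvector of P
rank /= rank.sum()

# In the spirit of the Article Influence Score: normalize by the
# (hypothetical) number of articles each journal published.
articles = np.array([120, 300, 80])
print("journal rank:", rank.round(3))
print("per-article influence:", (rank / articles).round(5))
```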

Concerns

One of the scientific community's main bibliometric concerns is that a journal's impact is multidimensional and cannot be captured by any single bibliometric indicator. For instance, the Article Influence Score and the IF need to be considered jointly.

Another problem is that the misuse of a single indicator to evaluate the impact of research has led to manipulation, mainly by artificially inflating the number of self-citations. Although citing oneself is legitimate in cases of previous relevant work in the same area or when a scientist is part of a large research group, recently the number of self-citations for some journals has increased dramatically, according to Setti. In some cases, this has led Thomson Reuters to exclude some publications from its Journal Citation Reports.

Distorted view

Over the years, the IF has become the single most widely used factor measuring an article’s impact. And therein lies the problem.

“The IF can simply not be used for this purpose,” Setti says. “And what is worse, its use leads to many unintended consequences, including the manipulation of the indicator.” A better measure for the impact of an individual research article is simply the actual number of citations it has received, he says. Yet he believes reducing impact evaluation to that simple number is also inappropriate.

That’s because citation practices can vary widely across disciplines and subdisciplines. What’s more, the number of authors contributing to a specific field can be vastly different. And the count can include citations to poor work or even to incorrect results.

In short, Setti says, even if bibliometrics and citation analysis can be used as an additional source of information, nothing can replace human judgment through a fair peer-review process in assessing the impact of a research article or of a scientist.

Recommendations

The IEEE Board of Directors in September issued a statement on the correct use of bibliometric indicators. The statement includes the following guidelines for assessing the quality of research papers in the engineering, computer science, and information technology areas.

  • The use of multiple complementary bibliometric indicators is of fundamental importance to offer a comprehensive and balanced view of each scholarly journal. Accordingly, IEEE has adopted the Eigenfactor and the Article Influence Score in addition to the Impact Factor for assessing its publications. IEEE also welcomes the adoption of other complementary measures at the article level, such as the number of citations from different sources and the so-called altmetrics, once these have been validated and recognized by the scientific community. Altmetrics cover other aspects of the impact of a work, such as article views, downloads, or mentions by news outlets or social media.
  • A journal-based metric such as the Impact Factor does not capture the quality of individual papers and must not be used alone to gauge single-article quality or to evaluate individual scientists. In fact, it cannot be assumed that any single article published in a high-impact journal, as determined by any particular journal metric, will be highly cited.
  • While bibliometrics may be employed as a source of additional information for quality assessment within a specific area of research, the primary means for assessing either the scientific quality of a research project or of an individual scientist should be peer review.

IEEE also condemns any practice aimed at influencing the number of citations of a specific journal with the sole purpose of artificially influencing the corresponding indices.


For more information, see the full text of the IEEE statement.

 

 
