There is no doubt that the Association to Advance Collegiate Schools of Business (AACSB) is one of the leading and most respected accrediting organizations for business schools worldwide. The association is approaching its 100th anniversary and has advanced business education since its inception. For a university to be accredited, it must uphold the standards AACSB outlines on its website.

One of the most important standards a business school must meet to be eligible for AACSB accreditation is assessing the impact of its academic activities. There are three common ways for a university to demonstrate the impact of its professors' research.

The first is publishing in highly recognized, peer-reviewed journals. Peer review is clearly a necessity for publishing quality research, but what exactly does “highly recognized” mean? While this may seem a worthy goal on the surface, the assessment has problems. The age of a publication does not always correlate with the quality of the research it publishes; judging by reputation assumes that newer journals are automatically not up to par with their senior counterparts. An author could publish groundbreaking research in a newer journal and not receive the credit or prestige that the same work would earn in a large, well-known journal. Compounding this restriction, authors from underfunded institutions are simply priced out of these journals.

Another way for schools to demonstrate the impact of their intellectual contributions is citation counts, but this method has major flaws. Impact factors are calculated annually as the mean number of citations to articles published in a given journal over the two preceding years (Vanclay). Many researchers have found flaws in how these impact factors are computed, among them Jerome Vanclay, who wrote an entire paper on the topic:

“A review of Garfield’s journal impact factor and its specific implementation as the Thomson Reuters impact factor reveals several weaknesses in this commonly-used indicator of journal standing” (Vanclay).
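As a concrete illustration of the calculation described above, here is a short sketch with hypothetical numbers (the journal and all figures are invented for illustration, not taken from any real dataset):

```python
# Sketch of the journal impact factor formula: citations received this
# year to articles from the two preceding years, divided by the number
# of citable articles published in those two years.

def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    """Mean citations per article over the two preceding years."""
    return citations_to_prior_two_years / articles_in_prior_two_years

# Hypothetical journal: 120 articles published across 2013-2014,
# which together received 300 citations during 2015.
print(impact_factor(300, 120))  # → 2.5
```

Note that the result is a single journal-wide average: it says nothing about how many citations any individual article received, which is precisely the objection raised below.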

The main issue is applying the impact factor (IF) of an entire journal to represent the importance of one specific article. This pressures researchers to publish in journals with high IFs even when their topic does not fit, so highly rated journals are flooded with submissions they cannot publish because the topics do not match. It also pushes publishers toward rejecting groundbreaking research on narrow topics that are not cited very often: high citation counts keep these journals popular and drive business, so accepting a manuscript with a narrow base of interested researchers works against the publisher's interests. Bruce Alberts, the editor-in-chief of Science, described many of these problems with impact factors:

“The misuse of the journal impact factor is highly destructive, inviting a gaming of the metric that can bias journals against publishing important papers in fields (such as social sciences and ecology) that are much less cited than others (such as biomedicine). And it wastes the time of scientists by overloading highly cited journals such as Science with inappropriate submissions from researchers who are desperate to gain points from their evaluators” (Alberts).

The final way a school can demonstrate the impact of its contributions is through download rates for electronic journals, and this is where open access becomes so important. Journal hosting software records every time an article is downloaded, so the counts are precise, and since there is no charge to view a paper in an open access journal, more researchers can download the work and use it in their own research. Download rates are also more accurate than impact factors because they track one specific article: applying the impact factor of an entire journal to a single paper misrepresents that paper's influence, while a download count measures it directly.

Of the three most common metrics schools use to assess the impact of their activities for AACSB, we believe download rates are the most accurate. Many in the research world are questioning impact factors, and this sparks an interesting conversation. We at Clute hope the download rates in our open access journals will help professors everywhere receive the credit their research deserves.


Alberts, Bruce. “Impact Factor Distortions.” Science, 17 May 2013. Web. 18 Sept. 2015.

Vanclay, Jerome. “Impact Factor: Outdated Artefact or Stepping-Stone to Journal Certification?” Scientometrics 92.2 (2012): 211-38. Springer Netherlands, 2012. Web. 18 Sept. 2015.

Clute Institute
[email protected]