Wednesday, November 13, 2013

Scholarly Output: What Should Be Measured?

Citation analysis rests on four implicit dimensions: productivity, visibility, reputation, and impact.  Each of these has been discussed either directly or indirectly in the scholarly communications and citation analysis literature, but not explicitly in terms of faculty evaluation criteria.  This is primarily because the application of webometrics (discussed here) departs from the control and domain of academic publishing companies as sources of reputational metrics.  The following is a brief discussion of each.

Productivity
Academic “productivity” typically refers only to research and refereed publication, not to teaching or outreach.  As a simple quantitative measure (i.e., a numeric count) of artifacts such as books, chapters, articles, presentations, and grants, productivity is the traditional method of evaluating academic output (Leahey 2007; Adkins and Budd 2006).  There are few reliable metrics for productive teaching or outreach other than counts of activities, like student credit hours, contact hours, or internal and external committee/board memberships (Massy 2010).  Productivity is easily derived from a CV by counting each artifact or activity listed, as sketched below.  In some cases, CVs also include citation counts for journal articles (and, more recently, books and book chapters) and journal impact factors (JIF) to convey the weight, importance, or recognition of the work.  However, these metrics apply only to published materials indexed by citation databases.  While traditionally important, such products account for only a portion of what is commonly expected of tenure-track faculty, missing the rest of the academic footprint: the dissertation, book reviews, conference presentations (and proceedings), research reports, grant activity, and teaching activities (Youn and Price 2009).  There are subjective ways to evaluate the quality and importance of these works, but none comparable to bibliometrics.  Counting these products is a very limited way to assess academic output.
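
As a minimal sketch of this counting approach (in Python, with hypothetical CV entries and invented category names), productivity reduces to a tally of artifacts by type:

    from collections import Counter

    # Hypothetical CV entries; only the "type" field matters for the tally.
    cv_entries = [
        {"type": "article", "title": "Walkability and transit use"},
        {"type": "article", "title": "Zoning reform outcomes"},
        {"type": "chapter", "title": "Regional planning methods"},
        {"type": "presentation", "title": "ACSP conference talk"},
        {"type": "grant", "title": "DOT pilot study"},
    ]

    # Productivity as a simple count of artifacts by type.
    productivity = Counter(entry["type"] for entry in cv_entries)
    print(productivity)
    # Counter({'article': 2, 'chapter': 1, 'presentation': 1, 'grant': 1})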


The meaning of “productivity” is also discipline-specific: expectations for research activity, scholarly publication, and other creative works vary (Dewett and Denisi 2004).  Some disciplines have devised weighting systems to show how specific activities or outputs count toward promotion and tenure, or merit-pay evaluation (see, for example, Davis and Rose 2011; Mezrich and Nagy 2007).  While controversial, academic activities and productivity have funding implications for public universities in the eyes of state legislatures and the public, especially during challenging economic times (see Musick 2011; Townsend and Rosser 2007; Webber 2011; O’Donnell 2011).  As public universities become increasingly self-reliant for funding, they may need to adopt more of a private business model for accountability (Adler and Harzing 2009).  Pressure to dissolve tenure systems that protect “unproductive” faculty members should be confronted constructively and creatively instead of being dismissed out of hand.
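
For illustration only, a weighting system of the kind just described might score outputs as below.  The weights are invented for this sketch, not drawn from Davis and Rose (2011) or Mezrich and Nagy (2007); real schemes are negotiated within each discipline or department.

    # Hypothetical weights; actual values vary by department and discipline.
    weights = {"article": 3.0, "chapter": 2.0, "presentation": 1.0, "grant": 4.0}

    def merit_score(counts: dict, weights: dict) -> float:
        """Weighted sum of output counts for a merit or P&T evaluation."""
        return sum(weights.get(kind, 0.0) * n for kind, n in counts.items())

    print(merit_score({"article": 2, "chapter": 1, "grant": 1}, weights))  # 12.0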

Visibility
Traditionally, academic visibility was assumed to be a function of productivity.  As Leahey (2007) argues, visibility, like productivity, is a form of social capital.  If academics are prolific, more of their peers are likely to be aware of them, leading to other opportunities for professional gain.  Pre-internet visibility included the number of books sold, journal or publication impact based on where an article appeared, or conference presentations with large attendance (depending on the prestige and popularity of the conference).  Visibility could also include newspaper, radio, or even television references, though these are uncommon for the typical academic.  The web, on the other hand, provides visibility and the ability to reach far beyond traditional academic borders.  And because the web is an electronic archive, web visibility can be measured through searches that count the number of web mentions, web pages, or web links pointing to an academic product.  Academics who strategically publish their work on the internet (personal pages, blogs, institutional repositories, etc.) will have greater visibility (Beel, Gipp, and Wilde 2010).  Self-promotion can benefit an academic's discipline, institution, and academic unit.  However, visibility is distinct from productivity and reputation because it provides little indication of the quality of the work.
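
As a hedged sketch of what such measurement might look like, visibility per product could be tabulated from search results.  The counts below are placeholders; in practice they would come from search engine queries (e.g., a quoted-title search) against each product.

    # Placeholder web counts per product (hypothetical titles and figures).
    web_counts = {
        "Walkability and transit use": {"mentions": 14, "pages": 9, "links": 5},
        "Zoning reform outcomes": {"mentions": 3, "pages": 2, "links": 1},
    }

    # Visibility per product: how widely it appears on the web.
    for title, c in web_counts.items():
        print(title, "->", c["mentions"], "mentions,", c["links"], "links")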


Leahey (2007) finds that productivity is positively correlated with visibility, and that visibility in turn has a positive personal impact in the form of compensation for faculty members.  There are other benefits to academic visibility as well, including attracting good students, internal and external financial support for research, and departmental growth through increased enrollments and additional departmental resources (Baird 1986).  The link between productivity and visibility is also expressed through, and motivated by, the promotion and tenure process, which secures lifetime employment and other benefits.  Once tenure is granted, however, the challenge for some departments is to keep faculty productive and creative in promoting their work for the benefit of their academic units.
 
Reputation
Web 2.0, or the social web, provides the means for generating reputation metrics from online user behavior (Priem and Hemminger 2010).  This includes social bookmarking (Taraborelli 2008), social collection management (Neylon and Wu 2009), social recommendations (Heck and Peters 2010), publisher-hosted comment spaces (Adie 2009), microblogging (Priem and Costello 2010), user-edited references (Adie 2009), blogs (Hsu and Lin 2008), social networks (Roman 2011), data repositories (Knowlton 2011), and social video (Anderson 2009).  All of these modes rely on users to view, tag, comment on, download, share, or store academic output on the web so that usage metrics can be tracked.  This requires meaningful interaction with the content, for which there is no clear incentive structure for users (Cheverie, Boettcher, and Buschman 2009).


Benefits include potentially faster feedback and broader assessment of impact on audiences (Priem and Hemminger 2010).  However, these “audiences” may not have any particular level of validity or authority.  In terms of webometric analysis, reputation, recognition, and prestige are related: they refer to the number or rank of sites that mention an academic's work.  While the total number of links or mentions indicates the visibility or accessibility of scholarly artifacts, being recognized by esteemed researchers and institutions is a measure of value in the academic information market.  The same concept is used in citation analysis, where more weight and value are placed on citations whose citing authors have themselves been cited often.  Just as in the traditional tenure review process, positive external reference letters carry more weight when they come from well-known and respected individuals in the discipline.  Popularity and respect are manifest on the web in the amount of attention gained, either through backlinks or traffic.  It can be argued that the amount of attention a person gains does not reflect the quality of their work within their discipline; however, positive citations and positive book reviews have been relied upon for years, for lack of other metrics.
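
To make the weighting idea concrete, here is an illustrative (not authoritative) sketch in which a backlink counts for more when the linking site is itself heavily linked, echoing how citation analysis favors citations from highly cited authors.  The sites and inlink counts are invented.

    import math

    # Each backlink is weighted by how heavily linked the citing site
    # itself is (log-damped so one huge hub does not dominate).
    backlinks = [
        {"source": "university.edu", "source_inlinks": 900},
        {"source": "personal-blog.net", "source_inlinks": 12},
    ]

    reputation = sum(math.log1p(b["source_inlinks"]) for b in backlinks)
    print(round(reputation, 2))  # ~9.37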

Impact
Along with productivity, visibility, and reputation, impact is the fourth dimension.  The impact measure takes into account reputation per academic product or artifact.  In other words, “impact” expresses the amount of attention generated by each article, chapter, report, presentation, etc. across an academic's career.  One could assume that impact is always higher for senior faculty because their work has been in circulation longer than that of younger faculty.  But time likely has a bigger effect on visibility than on reputation.  High visibility (i.e., wide availability) can influence reputational characteristics, but in ways different from positive reviews by respected colleagues.  In cyberspace, reputation is gained by having others express interest in an academic product by referring (or linking) to it.
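
Read this way, impact can be sketched as attention (reputation) normalized by the number of products.  The figures below are hypothetical:

    # Impact as attention per academic product (invented figures).
    total_attention = 120  # backlinks/mentions summed across all products
    product_count = 15     # articles, chapters, reports, presentations, etc.

    impact = total_attention / product_count
    print(impact)  # 8.0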
 

Kousha, Thelwall, and Rezaie (2010) distinguish formal from informal online impact: formal impact is measured through sources such as Google Scholar (GS) citations, while informal impact is associated with gray sources such as online course syllabi, scholarly presentations (conference or seminar presentations), and blogs.  They also conclude that informal online impact is significant and increasing in several disciplines.  Another approach, “altmetrics” (see altmetrics.org), is “the creation and study of new metrics based on the social web for analyzing, and informing scholarship” (Priem, Taraborelli, Groth, and Neylon 2010, 1).  It assesses scholarly impact through measures of usage (downloads and views), peer review (expert opinion), citations, and alt-metrics (storage, links, bookmarks, conversations).  Kousha et al. (2010), Priem et al. (2010), and Bollen, Rodriguez, and Van de Sompel (2007) make strong cases for usage-based metrics, but do not emphasize the full range of academic outputs as suggested in this article.

Consider an academic product (e.g., a journal article) mentioned on three web pages.  In Case A, only one of these pages has a backlink (a link from another web page); in Case B, the same product, with the same visibility, has nine backlinks.  While the product has the same amount of visibility in both cases, Case B expresses a higher level of reputation and, hence, impact.  It is also possible for a lower-visibility product to have a higher impact than a higher-visibility product (see Cases C and D, sketched below).  It is unclear at this point whether there is a relationship between measures of visibility and reputation, due to the lack of empirical data.  A product needs to first be available and visible to gain attention and be valued by peers or the public.  However, low-quality, highly visible work will not have high impact as described here.  The point is that visibility is not an end in itself (see Franceschet 2010; Dewett and Denisi 2004) and that mentions and links are better measures because they signify high quality and impact.
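
A minimal sketch of the four cases (Cases A and B follow the text above; the numbers for Cases C and D are invented to illustrate the contrast):

    # pages = web pages mentioning the product (visibility);
    # backlinks = links pointing at those pages (reputation).
    cases = {
        "A": {"pages": 3, "backlinks": 1},
        "B": {"pages": 3, "backlinks": 9},  # same visibility, more reputation
        "C": {"pages": 2, "backlinks": 8},  # lower visibility, higher impact
        "D": {"pages": 6, "backlinks": 2},  # higher visibility, lower impact
    }

    for name, c in cases.items():
        print(f"Case {name}: visibility={c['pages']}, reputation={c['backlinks']}")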

The key points thus far are that (a) the web contains far more types of academic output than traditional citation databases, (b) urban planning and the other social sciences should recognize the value of gray literature (discussed here) to their disciplines and attend to it in faculty evaluation through webometrics, and (c) a web presence and visibility can be expected to grow in importance as academic programs compete for increasingly scarce resources.  Academics should see the advantages of web visibility and use it both to communicate their scholarship more broadly and to assess scholarly impact.

References
Adie, E. 2009. Commenting on scientific articles (PLoS edition). Nascent. http://blogs.nature.com/wp/nascent/2009/02/commenting_on_scientific_artic.html.

Adkins, D., and J. Budd. 2006. “Scholarly productivity of US LIS faculty.” Library & Information Science Research 28 (3): 374-389.


Adler, N. J., and A. W. Harzing. 2009. “When knowledge wins: Transcending the sense and nonsense of academic rankings.” Academy of Management Learning and Education 8 (1): 72-95.


Anderson, K. 2009. The Impact Factor: A Tool from a Bygone Era? The Scholarly Kitchen. http://scholarlykitchen.sspnet.org/2009/06/29/is-the-impact-factor-from-a-bygone-era/.


Baird, L. L. 1986. “What characterizes a productive research department?” Research in Higher Education 25 (3): 211-225.


Beel, J., B. Gipp, and E. Wilde. 2010. “Academic search engine optimization (ASEO).” Journal of Scholarly Publishing 41 (2): 176-190.


Davis, E. B., and J. T. Rose. 2011. “Converting Faculty Performance Evaluations Into Merit Raises: A Spreadsheet Model.” Journal of College Teaching & Learning (TLC) 1 (2).


Dewett, T., and A. S. Denisi. 2004. “Exploring scholarly reputation: It’s more than just productivity.” Scientometrics 60 (2): 249-272.


Falagas, M. E., E. I. Pitsouni, G. A. Malietzis, and G. Pappas. 2008. “Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses.” The FASEB Journal 22 (2): 338-342.


Franceschet, M. 2010. “The difference between popularity and prestige in the sciences and in the social sciences: A bibliometric analysis.” Journal of Informetrics 4 (1): 55-63.


Heck, T., and I. Peters. 2010. “Expert recommender systems: Establishing Communities of Practice based on social bookmarking systems.” In Proceedings of I-KNOW 2010, 458-464.


Hsu, C. L., and J. C. C. Lin. 2008. “Acceptance of blog usage: The roles of technology acceptance, social influence and knowledge sharing motivation.” Information & Management 45 (1): 65-74.


Knowlton, A. 2011. “Internet Usage Data.” Communication Studies Theses, Dissertations, and Student Research. http://digitalcommons.unl.edu/commstuddiss/17.


Kousha, K. 2005. “Webometrics and Scholarly Communication: An Overview.” Quarterly Journal of the National Library of Iran [online] 14 (4).


Kousha, K., M. Thelwall, and S. Rezaie. 2010. “Using the web for research evaluation: The Integrated Online Impact indicator.” Journal of Informetrics 4 (1): 124-135.


Leahey, E. 2007. “Not by productivity alone: How visibility and specialization contribute to academic earnings.” American Sociological Review 72 (4): 533-561.


Massy, W. F. 2010. “Creative Paths to Boosting Academic Productivity.”


Mezrich, R., and P. G. Nagy. 2007. “The academic RVU: a system for measuring academic productivity.” Journal of the American College of Radiology 4 (7): 471-478.


Musick, Marc A. 2011. An Analysis of Faculty Instructional and Grant-based Productivity at The University of Texas at Austin. Austin, TX. http://www.utexas.edu/news/attach/2011/campus/32385_faculty_productivity.pdf.


Neuhaus, C., and H. D. Daniel. 2008. “Data sources for performing citation analysis: an overview.” Journal of Documentation 64 (2): 193-210.


Neylon, C., and S. Wu. 2009. “Article-level metrics and the evolution of scientific impact.” PLoS Biology 7 (11): e1000242.


O’Donnell, R. 2011. Higher Education’s Faculty Productivity Gap: The Cost to Students, Parents & Taxpayers. Austin, TX.


Priem, J., and K. L. Costello. 2010. “How and why scholars cite on Twitter.” Proceedings of the American Society for Information Science and Technology 47 (1): 1-4.


Priem, J., and B. H. Hemminger. 2010. “Scientometrics 2.0: New metrics of scholarly impact on the social Web.” First Monday 15 (7).


Priem, J., D. Taraborelli, P. Groth, and C. Neylon. 2010. “alt-metrics: A manifesto.” http://altmetrics.org/manifesto.


Roman, D. 2011. “Scholarly publishing model needs an update.” Communications of the ACM 54 (1): 16. http://dl.acm.org/ft_gateway.cfm?id=1866744&type=html.


Taraborelli, D. 2008. “Soft peer review: Social software and distributed scientific evaluation.” In Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP 08). Institut d’Etudes Politiques d’Aix-en-Provence.


Townsend, B. K., and V. J. Rosser. 2007. “Workload issues and measures of faculty productivity.” Thought & Action 23: 7-19.


Webber, K. L. 2011. “Measuring Faculty Productivity.” In University Rankings, 105-121.


Youn, T. I. K., and T. M. Price. 2009. “Learning from the experience of others: The evolution of faculty tenure and promotion rules in comprehensive institutions.” The Journal of Higher Education 80 (2): 204-237.
