Research

The Digital Information Research Foundation (DIRF) works to promote the use of digital technology and digital information across countries. Its divisions include training, research, projects, publications, conferences, and consultancy.


ICT Projects

Papers and Reports. Through its research base, DIRF has published many papers on digital information literacy, digital information content, ICT, web content management, and related topics.

Project Management

The digital conversion and archiving of data and information are essential for the effective exploitation of information resources. Many digital conversion practices …



CALL FOR PAPERS

Conferences


Collaborative Partner

Institute of Electronic and Information Technology (IEIT)

Mosharaka


High Education Forum, Taiwan



Cambridge International Academics



Research Assessment Practices

In research evaluation, individuals, groups, institutions, countries, and disciplines are assessed using bibliometric and non-bibliometric indicators. Every indicator has strengths and pitfalls in reflecting research performance. Despite the San Francisco Declaration on Research Assessment, assessment systems still rely on traditional publication and citation metrics for tenure, funding, and performance evaluation.

Citation-based metrics such as the Impact Factor have known limitations in research assessment, and they can be gamed: authors, journals, and institutions have incentives to inflate them artificially, which makes reliance on them questionable. Recognising these issues, we work on newer assessment patterns that are natural and resist manipulation by authors, journals, and institutions.

While working on alternative forms of assessment, converting the evaluation models into an accurate metric is challenging. Because citations, publications, and impact factors readily produce simple numbers, research evaluation systems quickly adopt them for ranking. A viable objective evaluation must therefore also translate its measures into metrics. We currently pursue two major models. The first is a peer-review metric, in which sentiment analysis translates review text into measurable scores. The second applies natural language processing, for example identifying unique phrases in scientific texts. To learn more, contact us at service at dirf.org.
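As a rough illustration of the first model, a minimal lexicon-based sketch shows how free-text review comments could be mapped to a numeric score. The lexicon, word lists, and scoring rule here are entirely hypothetical, chosen only to demonstrate the idea; a production system would use a trained sentiment model rather than hand-picked word sets.

```python
# Illustrative sketch (hypothetical lexicon): map peer-review text to a
# score in [-1, 1] by counting positive and negative cue words.

POSITIVE = {"novel", "rigorous", "clear", "significant", "sound"}
NEGATIVE = {"flawed", "unclear", "incremental", "weak", "unsound"}

def review_score(review: str) -> float:
    """+1.0 if only positive cues appear, -1.0 if only negative,
    0.0 if no cue words are found."""
    words = [w.strip(".,;:!?").lower() for w in review.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(review_score("A novel and rigorous study with clear results."))   # 1.0
print(review_score("The analysis is flawed and the writing unclear."))  # -1.0
```

Aggregating such scores across all reviews of a paper would yield the kind of measurable peer-review metric described above, without relying on citation counts.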