The Digital Information Research Foundation (DIRF) works to bring digital technology into society and to promote the use of digital information across countries. Its divisions include training, research, projects, publications, conferences, and consultancy.
Papers and Reports. Through its research base, DIRF has published many papers on digital information literacy, digital information content, ICT, web content management, and related topics.
Digital conversion and digital archiving of data and information are significant for the effective exploitation of information resources. Many digital conversion practices
Sixth International Conference on Science & Technology Metrics (STMet 2025)
October (second week) 2025
University of Macau, Macau
Fifth International Conference on Digital Data Processing (DDP 2025)
University of Bedfordshire, Luton, UK
Third International Conference on Modelling and Forecasting Global Economic Issues (MFGEI 2025)
Seventh International Conference on Real Time Intelligent System (RTIS 2025)
Collaborative Partners: Institute of Electronic and Information Technology (IEIT) | Mosharaka | High Education Forum, Taiwan | Cambridge International Academics
In research evaluation, individuals, groups, institutions, countries, and disciplines are assessed using bibliometric and non-bibliometric indicators. Every indicator has strengths and pitfalls as a reflection of research performance. Despite the San Francisco Declaration on Research Assessment (DORA), assessment systems still rely on traditional publication and citation metrics for tenure, funding, and performance evaluation.
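For illustration, one of the most familiar bibliometric indicators is the h-index: the largest h such that a researcher has h papers with at least h citations each. The sketch below computes it from a hypothetical list of per-paper citation counts; the figures are invented for demonstration.

```python
# Illustrative sketch: computing the h-index, a common bibliometric
# indicator, from a hypothetical list of per-paper citation counts.
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

if __name__ == "__main__":
    # Three papers have at least 3 citations each, so the h-index is 3.
    print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3
```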
Impact Factors and other citation-based metrics have well-documented limitations in research assessment, and they are increasingly artificial: some researchers and journals deliberately inflate the numbers, which makes reliance on them for assessment questionable. Recognising these issues, we work on generating newer assessment patterns that are natural and do not permit any kind of manipulation by authors, journals, or institutions.
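To see why such metrics are easy to nudge, consider the standard two-year Journal Impact Factor: citations received in year Y to items published in years Y-1 and Y-2, divided by the number of citable items published in those two years. The arithmetic sketch below uses made-up figures, not real journal data.

```python
# Illustrative sketch of the two-year Journal Impact Factor formula.
# All figures are invented for demonstration purposes.
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Citations in year Y to items from Y-1 and Y-2, divided by
    the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

print(impact_factor(420, 200))  # -> 2.1
# Adding 40 journal self-citations lifts the same journal to 2.3,
# one way the metric can be inflated artificially.
print(impact_factor(460, 200))  # -> 2.3
```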
While working on alternative forms of assessment, converting the evaluation models into an accurate metric is challenging. Because citations, publications, and impact factors readily reduce to a few numbers, research evaluation systems quickly adopt them for ranking. A viable objective evaluation must therefore translate its measures into metrics as well. We currently pursue two major models, sketched below. The first is a peer-review metric, in which sentiment analysis translates review text into measurable scores. The second applies natural language processing to features such as unique phrases in scientific texts.
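As a minimal sketch of the first idea, the snippet below scores peer-review sentences with NLTK's off-the-shelf VADER sentiment analyzer. This is an illustrative stand-in, not DIRF's actual model; the review texts are invented.

```python
# Minimal sketch: turning peer-review text into numeric scores with
# sentiment analysis, using NLTK's VADER analyzer as a stand-in.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

reviews = [
    "The methodology is rigorous and the results are convincing.",
    "The experiments are inadequate and the conclusions overreach.",
]

sia = SentimentIntensityAnalyzer()
for text in reviews:
    # 'compound' is a normalized score in [-1, 1]; averaging such scores
    # across a paper's reviews gives one possible review-based metric.
    score = sia.polarity_scores(text)["compound"]
    print(f"{score:+.3f}  {text}")
```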
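For the second idea, one conventional way to surface distinctive phrases in scientific texts is TF-IDF over word n-grams. The sketch below uses scikit-learn and invented abstracts; it is an assumption-laden illustration of the general technique, not DIRF's implementation.

```python
# Minimal sketch: surfacing distinctive phrases in scientific text with
# TF-IDF over word n-grams (a stand-in for the NLP model mentioned above).
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "We propose a graph neural network for citation recommendation.",
    "A randomized trial of digital literacy training in rural libraries.",
    "Graph neural network embeddings improve scholarly search ranking.",
]

vec = TfidfVectorizer(ngram_range=(2, 3), stop_words="english")
tfidf = vec.fit_transform(abstracts)
terms = vec.get_feature_names_out()

# For each document, report its highest-weighted phrase.
for row in tfidf.toarray():
    print(terms[row.argmax()])
```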
To learn more about these models, contact us at service@dirf.org.