Ranking of Journals, Institutions, and Countries
Discipline and Domain Analysis
Databases and Datasets for Evaluation
Evaluation Tools and Indicators
Visibility and Impact
Scientific Collaboration and Cooperation Analysis
National Evaluation Systems
Open Access and Open Publishing
New Indices for Evaluation
Data Science and Digital Repositories
e-Science in the Cloud
Track 1: Open Science
Science relies on the sharing of ideas, research results, data, and methods. Quite often, scientific
discoveries and innovations fail to reach society because scientific publications remain closed. If
science is built on existing knowledge, that knowledge needs to reach everyone, and Open Science
leads this knowledge exchange. This track addresses the challenges and issues involved and paves
the way to develop and nurture open science.
Track 2: Peer Review
Peer review is the process by which experts critically examine research contributions using
standard metrics. It helps maintain and enhance quality directly, by detecting weaknesses and
errors in specific works, and it informs decisions on publication, research grants, employment,
promotion, and tenure. Peer review promotes accountability and improves the quality of work.
Track 3: Content and Text mining based metrics
This track presents works that use numeric indices to process unstructured (textual) information
and that build data mining (statistical and machine learning) algorithms. Information can be
extracted to derive summaries for the words contained in documents, or to compute summaries for
the documents based on the words they contain. One can analyze words or clusters of words used in
documents, and one can analyze documents to determine similarities between them or how they
relate to other variables. Text mining studies "turn text into numbers" (meaningful indices),
which can then be incorporated into other analyses.
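The "turn text into numbers" step described above is often done with TF-IDF weighting, which scores a term highly when it is frequent in one document but rare across the collection. The following is a minimal sketch (the function name and sample documents are illustrative, not from the call itself):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [
    "open science metrics".split(),
    "open access publishing metrics".split(),
]
w = tf_idf(docs)
# "science" occurs only in the first document, so it receives a positive
# weight there; "metrics" occurs in both documents, so its idf is zero.
```

The resulting per-document weight vectors can then feed clustering or similarity analyses of the kind the track invites.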
All submitted papers will undergo double-blind, participative peer review by at least three
reviewers. The review process will determine which papers are accepted for presentation at the
international conference. Authors of accepted papers who register for the conference will have
access to the evaluations and any feedback provided by the reviewers who recommended acceptance,
so they can improve the final versions of their papers accordingly.
Mentoring support is available to young researchers and to authors from developing countries.
Track 4: Technology Transfer Metrics
Technology transfer extends the reach of research outcomes to a wider audience. Measures of technology transfer are of interest to practitioners, program managers, and policymakers in evaluating technology transfer programs. Building metrics can lead to better measurement of effectiveness, efficiency, and return on investment, and the introduction of newer metrics will help identify benchmarking tools. This track will discuss currently existing metrics, research on the development of new metrics, and their applications.
Chair: Mohammad Hassanzadeh, Tarbiat Modares University, Iran
Track 5: Open Science Metrics
The primary goal of this track is to construct, identify, and specify relevant metrics and indicators for open science. Measuring the extent of openness in science, and creating or using metrics for it, is important: measuring open science helps realize its innovation potential. We can thus find out how collaboration and knowledge sharing occur and how institutions support them. Openness also yields enormous amounts of data and knowledge. This track gauges openness in science using the available data.
Unique identifiers in a database are used to distinguish fields from each other; a unique identifier is used when information is called from the database and needs to be distinguished from other information in it (IBM). In the e-world, unique identifiers play a key role in metrics and evaluation, particularly in the analysis of research productivity data. They tend to reduce errors, omissions, and duplication, and they play a major role in improving research data analysis. Several unique identification systems are employed for research data management, such as ORCID, DataCite, and DOI. However, analyses of data or case studies involving unique identifiers are scarce in the literature. This special track intends to address such research database processing.
Co-Chairs: J.K. Vijayakumar, King Abdullah University of Science and Technology, Saudi Arabia
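One concrete way identifiers such as ORCID reduce errors is through a built-in check digit. An ORCID iD ends with an ISO 7064 MOD 11-2 check character, which catches most mistyped digits. A minimal validation sketch (the function name is illustrative; the example iD below is the well-known sample iD that ORCID publishes in its own documentation):

```python
def orcid_checksum_valid(orcid):
    """Validate an ORCID iD using the ISO 7064 MOD 11-2 check digit."""
    digits = orcid.replace("-", "")
    total = 0
    for ch in digits[:-1]:          # all characters except the check digit
        total = (total + int(ch)) * 2
    remainder = total % 11
    expected = (12 - remainder) % 11
    check = "X" if expected == 10 else str(expected)
    return digits[-1] == check

# ORCID's published sample iD passes; altering the last digit fails.
print(orcid_checksum_valid("0000-0002-1825-0097"))  # True
print(orcid_checksum_valid("0000-0002-1825-0098"))  # False
```

Checks like this let research databases reject malformed identifiers at ingest time, before duplication or mislinking can occur.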
The post-conference modified versions of the papers will be published in the following journals.
1. Journal of Digital Information Management
2. Journal of Contemporary Eastern Asia
3. Special Section in Research Evaluation
4. Journal of Scientometric Research
5. Malaysian Journal of Library & Information Science
6. International Journal of Computational Linguistics Research
Full Paper Submission: September 15, 2019
Notification of Acceptance/Rejection: October 15, 2019
Registration Due: November 20, 2019
Camera Ready Due: November 20, 2019
Workshops/Tutorials/Demos: December 03, 2019
Main Conference: December 02-04, 2019