Keynote speakers

Service Science
Are we researching and educating professionals for the future?
Barry Davies
University of Gloucestershire, UK

In 2008, IBM and the University of Cambridge Institute for Manufacturing (IfM) published ‘Succeeding through Service Innovation’. They focused on the growth of services in economies (now typically 70–80% in the developed world). The scope for innovation moves increasingly beyond goods into new service provision, not least through ICT and Web 2.0. Have their observations elicited a response? Are there movements in other fields in similar directions?
Cambridge IfM and IBM set their arguments within a systems perspective, clarifying what they define as service systems. They ask a series of critical questions for business, government and third-sector organisations. The deep challenge is seen as integrating various disciplines and strands of knowledge: the isolation of individual disciplines is limiting and ultimately destructive of our ability to enhance, improve and innovate in service provision.
There are knowledge gaps and skills gaps. They argue that the only effective response is an interdisciplinary one, one that cuts across our traditional ways of researching and teaching. This cross-cutting direction has also been reinforced by developments in the postgraduate design curriculum at Stanford University.
At the heart of their approach is the development of T-shaped professionals, who combine deep professional knowledge with an ability to empathise and reach out to other professionals. Can we create a new interdiscipline of Service Science, Management, Engineering and Design?

Bioinformatics and the difference between dead and living matter
Dr. Hanan Al Shargi
Department of Computer Science
The George Washington University
Washington, DC 20052, USA

Bioinformatics deals with genetic configurations of limited information capacity. It seems problematic that discrete instructions comprising only several thousand symbols could be responsible for the whole richness of living systems. In line with the growing evidence of epigenetic effects, the phenomenon of life is viewed as a collective effect relying on the infrastructure of the physical world, analogous to distributed activities that use remote access over the Internet. Such data-intensive processing is considered the determinant of the particular behavior of living matter. The suggested construction is implemented by means of a holographic mechanism with shared content-addressable access; experimental testing of this hypothetical machinery will be discussed. Remarkably, the holistic nature of holographic content-addressable access incorporates instantaneous correlations between distant micro-objects, the seemingly inexplicable quantum entanglement that has long been suspected to play a pivotal role in biology. The same mechanism thus produces selective feedback for large DNA and protein molecules. The unique identification of organisms, needed for sharing the underlying bandwidth and storage resources, comes from the structural specificities of their macromolecules. The controlled reactions of macromolecules under elaborate holographic operations translate into the higher functionality of animate matter. Emulating this biologically inspired computational scheme with digital holography yields an information retrieval system that displays sophisticated “intelligent” capabilities.
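The notion of content-addressable access, retrieval driven by the content of a cue rather than by an explicit address, can be illustrated with a very small digital sketch. The Java fragment below is not the holographic machinery described above; it is only a hypothetical correlation-based associative lookup (the class name, method name and data are invented for illustration) showing how a corrupted cue can still select the matching stored pattern.

```java
import java.util.Arrays;

/*
 * Minimal sketch (not the speaker's implementation): content-addressable
 * retrieval by correlation. Stored patterns and the cue are +/-1 vectors;
 * the stored pattern with the highest correlation to the cue is returned,
 * so the content of the cue, not an address, selects the record.
 */
public class ContentAddressableDemo {

    /** Return the stored pattern that best matches the (possibly noisy) cue. */
    static int[] recall(int[][] stored, int[] cue) {
        int[] best = stored[0];
        int bestScore = Integer.MIN_VALUE;
        for (int[] pattern : stored) {
            int score = 0;
            for (int i = 0; i < cue.length; i++) {
                score += pattern[i] * cue[i];   // inner product of +/-1 vectors
            }
            if (score > bestScore) {
                bestScore = score;
                best = pattern;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] stored = {
            {  1,  1, -1, -1,  1, -1 },
            { -1,  1,  1, -1, -1,  1 },
            {  1, -1,  1,  1, -1, -1 }
        };
        int[] cue = { 1, 1, -1, 1, 1, -1 };   // first pattern with one flipped element
        System.out.println(Arrays.toString(recall(stored, cue)));
        // Prints [1, 1, -1, -1, 1, -1]: the uncorrupted first pattern is recovered.
    }
}
```

The sketch only conveys the general principle of associative recall; the abstract's proposal concerns an optical, holographic realization of such access rather than a digital scan over stored vectors.
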
Normalization for Relational Databases: Manual, Automatic or None
Prof. Dr. Ali YAZICI
Head of Software Engineering Department, Atilim University,
Ankara, Turkey

In relational database design, normalization is a systematic way of organizing relations so that the integrity of the data is maintained throughout the lifecycle of the database system. The concept of normalization was first introduced by Edgar F. Codd, the inventor of the relational model, in 1970. Codd went on to define the second and third normal forms in 1971, and Codd and Raymond F. Boyce defined the Boyce-Codd normal form in 1974. Higher normal forms were defined by other researchers in subsequent years, the most recent being the sixth normal form introduced by Chris Date, Hugh Darwen, and Nikos Lorentzos in 2002.
Although developing software tools for this task has attracted many database researchers since the 1970s, CASE tools for database design still lack an automatic normalization component.
In this talk, existing research on automatic normalization will first be surveyed. Then JMathNorm, a complete normalization tool designed by the author using Java and Mathematica, will be introduced. JMathNorm supports interactive use of modules for experimenting with fundamental set operations such as closure and full closure, together with modules for obtaining a minimal cover of the functional dependency (FD) set and for testing an attribute set for a candidate key. JMathNorm's GUI is written in Java and uses Mathematica's JLink facility to drive the Mathematica kernel. JMathNorm has been tested with several benchmark datasets and in the classroom environment, and has been found effective, in particular for studying normalization theory.
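
As background to the operations the abstract mentions, the Java sketch below shows the textbook attribute-closure algorithm on which such tools rest. It is not JMathNorm's code; the class name, the representation of FDs, and the example relation are illustrative only.

```java
import java.util.*;

/*
 * Minimal sketch of the textbook attribute-closure algorithm, the building
 * block behind closure computation, minimal-cover derivation and candidate-key
 * testing. Illustrative only; not taken from JMathNorm.
 */
public class ClosureDemo {

    /** Closure of `attrs` under the FDs in `fds` (an FD X -> Y is stored as key X, value Y). */
    static Set<String> closure(Set<String> attrs, Map<Set<String>, Set<String>> fds) {
        Set<String> result = new HashSet<>(attrs);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Map.Entry<Set<String>, Set<String>> fd : fds.entrySet()) {
                // Apply X -> Y whenever X already lies inside the closure and Y adds something new.
                if (result.containsAll(fd.getKey()) && !result.containsAll(fd.getValue())) {
                    result.addAll(fd.getValue());
                    changed = true;
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Example relation R(A, B, C, D) with FDs A -> B and B,C -> D.
        Map<Set<String>, Set<String>> fds = new HashMap<>();
        fds.put(Set.of("A"), Set.of("B"));
        fds.put(Set.of("B", "C"), Set.of("D"));

        System.out.println(closure(Set.of("A", "C"), fds));   // prints A, B, C, D (order may vary)
    }
}
```

In the example, {A, C}+ covers every attribute of R while neither {A}+ nor {C}+ does, so the same routine doubles as a candidate-key check, the kind of interactive module the abstract lists for JMathNorm.
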