Copyright © Erik Hollnagel 2016
All Rights Reserved.
Writing and publishing peer-reviewed papers is a traditional way to contribute to the progress of science - although today it has sometimes degenerated into a mindless hunt for citation scores, impact factors, and various indices. The less said about that, the better. (1)
It has also become the custom to try to compile some kind of overview of one's own work. Several of my esteemed colleagues have done that, and I frequently benefit from it when I try to find an old paper or report. For people of my age, who started when writing was done on a manual typewriter and 'printing' was done using mimeographs, compiling such an overview has the advantage (at least for oneself) of getting everything together in a more accessible (electronic) form.
The compilation will include published journal papers, some published conference papers, as well as some reports or notes that might be of interest to others. (A complete list of publications, including many technical reports not listed here, is available on request.) All this is done without trying to be too pretentious, but do accept my apologies, just in case I get carried away.
I have tried to provide links to publications that are free to download. This regrettably excludes most of the journal papers.
(1) I nevertheless cannot refrain from noting that the reliance on calculations to represent the quality of a journal paper, or even of a researcher's publications, is an ETTO. It is, of course, much easier to pick the scores (impact factors, etc.) and produce a single number than actually to read papers - or even abstracts - and decide about the quality of the work oneself. The latter is definitely more thorough and in the long run fairer. But it struggles in vain against more powerful forces. (If you agree with this line of reasoning, you might enjoy the following paper: Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(9), 696-700. DOI: 10.1371/journal.pmed.0020124)