Why most machine UIs are terrible for science and business, and one easy step to fix it! (it’s not what you think!)

Photo: a white and gray control panel, by Şahin Sezer Dinçer on Pexels.com

My sincerest apologies for the most clickbait title ever. It had to be done, because the title integrates into a larger system of SEO and the peculiarities of the human brain. And therein lies the rub when it comes to user interfaces, or “UIs”: measurements are not done in a vacuum, but require integration into a broader workflow.

What you might think I will talk about:

We have all cursed at the various machine user interfaces presented to us on outdated, isolated computers in the laboratory (because future support for sold instruments is *very* limited), trying to coax them into doing what we need them to do. Ask any user, and their first, and often only, point of contact with the machine is their greatest source of frustration. The user interface is the result of a fine balance between three conflicting pressures:

  • Firstly, there are the limitations in resources (funding, time, interest and care) on the side of the manufacturer. Additionally, there is often a real or perceived need for some secrecy about how the cookie crumbles on the inside.
  • Secondly, there are the demands and needs of the (initially imaginary and often overly simplified) user at the time of development.
  • Lastly, the company engineers try to avoid opening up avenues for the craftier flavours of user to accidentally break things. This means restricting the operations that can be performed.

What I actually want to highlight:

Regardless of your particular variant of manufacturer-supplied UI, the majority of these interfaces at least do some of what they claim to do on the surface. The problem appears when you step back and take a wider view of the interface's position in the experimental chain. The problem is not so much the user interface design, but the underlying principle of the human as an integral part of the operation. The user interface here adds an untraceable, hard-to-replicate step to this operation.

As detailed in this post from a few weeks back, from a holistic or systems perspective, the experimental chain consists of a great number of individual steps, which include things like operating machines and noting the details down. At the moment, the glue that combines these steps is a human in the loop, usually armed with a paper notebook and a partial understanding of what constitutes relevant metadata. To say that this is, and always has been, a disaster would be an understatement. Problems in the reproducibility of experiments arise partly from the inability of the researcher to record the important aspects of an experiment, and to communicate those aspects to the outside world. In short: a lot of time is spent doing things that are impossible to reproduce even with the best of efforts. It has been like this for a long time, and we ignore it at our own peril.

The full solution lies in a (partial) reinterpretation of what it means to do science. We need to stop mindlessly going through the motions of our ancestors in this pastiche of science we currently find ourselves in, and refocus on the core of the scientific method. This means a focus not on maximising metrics of publications and grants, but on making unbiased findings, with reproducibility front and center. Needless to say, this is a long-term goal; I'm not holding my breath. More on that later, as it is a whole beast of a topic best tackled neither alone nor on the sidelines.

Back to the topic at hand: what's wrong with the machine UIs? The problem is that most of them are not designed to aid traceability outside of their scope (or even within it). All actions have to be (manually, and often poorly) recorded by the user, and combined with their other actions within the particular experiment they are conducting. More often than not, this record is spread over several paper logbooks, with all the associated disadvantages of that format. If a log is kept within a machine UI, it is unlikely to be readable outside of that particular machine software (version), rendering it useless. This lack of transparency and automated record-keeping is, as mentioned, wholly inadequate for the traceability and reproducibility of an experiment, or of the path to a scientific finding.

Rethinking the scientific workflow implies that any and all actions and choices become, wherever possible, part of the scientific record of that experiment. The current insular nature of machine UIs is incompatible with this concept. A change is required, but the first step may be more modest than you might think: the inclusion of a messaging API would suffice for starters. An open file format that integrates a step-by-step process log would be a good addition to this.
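To make that last idea concrete, here is a minimal sketch of what appending to such a step-by-step process log could look like, written in Python and using JSON Lines. The schema and field names are my own illustrative assumptions, not an existing standard.

```python
import json
import datetime

def log_step(logfile, actor, action, parameters):
    """Append one process step as a JSON Lines record (illustrative schema)."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,            # "user" or "machine"
        "action": action,          # e.g. "set_temperature"
        "parameters": parameters,  # e.g. {"value": 25.0, "unit": "degC"}
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

# e.g. record a user-driven temperature change:
log_step("process_log.jsonl", "user", "set_temperature", {"value": 25.0, "unit": "degC"})
```

One record per line means the log can be appended to indefinitely, read by any language, and grepped by a human in a pinch.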

An easy solution

The inclusion of a simple messaging API in the machine UI would mean that any action, be it user-driven or machine-selected, can be accessed and recorded. These messages could be made available over a plain network socket, through a REST API, or via something fancier like MQTT, RabbitMQ, or even EPICS, though the simpler, the better, usually.
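As a rough illustration of the REST variant, here is a minimal sketch using only the Python standard library. The port, route, and message fields are invented for the example and do not come from any real instrument software.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

actions = []            # every user- or machine-driven action gets appended here
actions_lock = threading.Lock()

def record_action(actor, action, parameters):
    """Called by the instrument software whenever anything happens."""
    with actions_lock:
        actions.append({"actor": actor, "action": action, "parameters": parameters})

class ActionLogHandler(BaseHTTPRequestHandler):
    """Expose the running action log at a single read-only endpoint."""
    def do_GET(self):
        if self.path == "/actions":
            with actions_lock:
                body = json.dumps(actions).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    record_action("user", "start_measurement", {"scan_range_deg": [5, 90]})
    HTTPServer(("", 8080), ActionLogHandler).serve_forever()
```

A read-only endpoint like this cannot break anything on the machine, which should soothe the nerves of the engineers from the third bullet point above.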

Translation of these messages will be necessary, but for the medium-seasoned programmer or Ph.D. student this is achievable. Ideally, these messages would end up in an electronic lab journal together with the other steps. It requires a slight change in thinking and working by the scientist as well, and it will (at least initially) be a slower style of science while the necessary communications backend is established.
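That translation step could be as small as a script that polls the instrument and appends anything new to the journal. A sketch, reusing the hypothetical /actions endpoint from above, with an invented instrument address:

```python
import json
import time
import urllib.request

JOURNAL = "lab_journal.jsonl"
ENDPOINT = "http://instrument-pc.local:8080/actions"  # illustrative address

seen = 0
while True:
    with urllib.request.urlopen(ENDPOINT) as response:
        actions = json.loads(response.read())
    with open(JOURNAL, "a") as journal:
        for entry in actions[seen:]:   # only record actions we have not seen yet
            journal.write(json.dumps(entry) + "\n")
    seen = len(actions)
    time.sleep(5)  # polling every few seconds is plenty for lab timescales
```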

Future improvements will require not just the implementation of an output stream of messages, but also allowing (limited) external control of the machine. This then allows the machine to be integrated into larger set-ups in automated laboratories. To my surprise, I recently saw some XRD equipment manufacturers already starting to offer such external control APIs for some of their diffractometers, and I can only applaud their efforts in this direction.
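For completeness, such external control could look something like the sketch below. The /control route and the payload are entirely made up here; any real command set will be vendor-specific.

```python
import json
import urllib.request

def send_command(base_url, command, parameters):
    """POST a control command to a hypothetical instrument endpoint."""
    payload = json.dumps({"command": command, "parameters": parameters}).encode()
    request = urllib.request.Request(
        base_url + "/control",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())  # e.g. {"status": "accepted"}

# e.g. ask a diffractometer to run a scan as one step in an automated workflow:
# send_command("http://instrument-pc.local:8080", "start_scan", {"start_deg": 5, "stop_deg": 90})
```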