Refreshing the imp

imp2: a modular approach to data correction.

During the daytime, I am spending most of my time getting the new detector to work somehow (strange problems abound). In the evenings, however, I am preparing for more remote beamtime.


This time, I have taken the opportunity again to update the code of the imp2 data correction procedures. After getting mostly positive comments on the modular data processing concept at the CanSAS meeting last month, I am more confident about its future.
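To give an idea of what "modular" means here, the following is a minimal sketch of the concept (with hypothetical function names and made-up correction values, not the actual imp2 API): each correction step is a small, self-contained module that transforms the data, and modules can be chained in whatever order the instrument and measurement method require.

```python
def dark_current_correction(data, dark=0.1):
    """Subtract a (hypothetical) dark-current level from every value."""
    return [value - dark for value in data]

def transmission_correction(data, transmission=0.8):
    """Divide by the (hypothetical) measured sample transmission."""
    return [value / transmission for value in data]

def run_pipeline(data, modules):
    """Apply each correction module to the data in sequence."""
    for module in modules:
        data = module(data)
    return data

# Chain the corrections in the order appropriate for this measurement:
raw = [1.1, 2.1, 3.1]
corrected = run_pipeline(raw, [dark_current_correction, transmission_correction])
```

The advantage of this structure is that individual corrections can be added, removed, reordered, or checked in isolation, rather than being entangled in one monolithic routine.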

However, the code was written 1-1.5 years ago, and I have since learned more about proper coding practices. Looking back at it, I noticed several implementation inefficiencies, some duplicated code, and far too many ways of doing the same thing.

The clean-up of the code took a bit longer than expected (during the clean-up I also had to backtrack, as some intended changes turned out not to be possible), but it is now at an advanced stage. The core has been updated; now the modules need to be updated to work with the core again.

After that, it is time to work on the documentation again. Documentation is one of the most neglected aspects of most software, but it is required for future-proofing your code. One major change resulted from the CanSAS discussions: the scaling uncertainty (or what I previously called the "absolute uncertainty") is best defined as a relative measure. This change needs to be properly documented. It would also be the first major deviation from the "Everything SAXS" review paper on which this approach is based.
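A small illustration (with made-up numbers, not taken from imp2) of why a scaling uncertainty is more naturally expressed as a relative quantity: a single fractional value applies to every intensity regardless of magnitude, whereas an absolute value would have to be restated for every scale.

```python
def scale_with_uncertainty(intensity, scale, rel_scale_uncertainty):
    """Scale an intensity and propagate a relative scaling uncertainty.

    The uncertainty contribution from the scaling factor is simply the
    relative uncertainty times the scaled intensity.
    """
    scaled = intensity * scale
    uncertainty = scaled * rel_scale_uncertainty
    return scaled, uncertainty

# A 2 % relative scaling uncertainty yields 2 % of whatever the scaled
# intensity happens to be, for weak and strong scatterers alike:
low, low_u = scale_with_uncertainty(10.0, 0.5, 0.02)
high, high_u = scale_with_uncertainty(1000.0, 0.5, 0.02)
```

Defined this way, one number characterizes the scaling step for an entire dataset, which is much easier to document and to compare between instruments.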

Once that is done, it is time to ask around for collaborations. The code as it stands works quite well, but it really needs a user interface to lower the barrier to using it. At the moment, configuration means writing a custom processing script for each instrument and measurement method. Writing a GUI to alleviate these configuration headaches, however, requires serious effort and should perhaps not be attempted alone.

I do hope that the changes result in easier access to the code, allowing for simpler checking of the data corrections. It would be excellent if we could somehow agree on a more standardized approach to data collection and correction so we can focus on the application of the technique instead of getting lost in forests of raw datafiles.

