The displaced volume correction (pointed at here and here) is one that, to my great regret, does not appear in the “Everything SAXS” paper, as I did not know about it at the time. However, it is time for some further thoughts on it. We start with its definition…
This is a correction that only needs to be considered for measurements on samples or sample dispersions where the sample takes up a significant fraction of the volume. A rule of thumb would be to use this for samples that occupy a volume fraction of at least 1% within the dispersant *.
What happens in these cases is that there is a reduction in the amount of background material that the primary beam passes through, since part of that space is now occupied not by background material but by sample. There is simply less background material in the beam. This implies a reduction in the background signal, by an amount proportional to the volume of sample in the beam. This is not something that is compensated for by the transmission measurement: the sample may have a very similar absorption probability to the background, yet still occupy a large fraction of the space.
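To make the reasoning concrete, the measured dispersion signal can be sketched as a simple toy model (the notation and function name here are mine, not from any particular software package):

```python
def measured_intensity(i_sample, i_solvent_liquid, i_walls, phi):
    """Toy model of the measured dispersion signal.

    The sample contributes its own scattering, the solvent contributes
    only from the (1 - phi) fraction of the volume it still occupies,
    and the container walls are unaffected by the displaced volume.
    All intensities are assumed to be already corrected for
    transmission and exposure.
    """
    return i_sample + (1.0 - phi) * i_solvent_liquid + i_walls
```

Note that a plain background subtraction implicitly assumes `phi = 0`, which is why it over-subtracts the solvent signal for concentrated dispersions.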
What complicates matters firstly is that this only reduces the background signal originating from the solvent, while leaving the background signal from the sample container walls unaffected. This means that the background signal needs to be “disassembled” into its components, and that the background scattering signal from the liquid needs to be reduced in a scaling procedure.
What you do, see figure 2, is to use the “fancy background subtraction” procedure to separate the wall scattering from the solvent scattering. You use the same procedure to separate the wall signal from the dispersion scattering. The solvent scattering can then be multiplied by its (remaining) volume fraction, and subsequently subtracted from the dispersion scattering with a simple background subtraction procedure, to obtain the scattering pattern from the sample only.
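A minimal sketch of this scaling and subtraction step, assuming the wall signal has already been isolated and all intensities share a common q-grid (the function and variable names are illustrative, not from a specific package):

```python
def displaced_volume_subtract(i_dispersion, i_solvent, i_walls, phi):
    """Sketch of the displaced-volume-corrected background subtraction.

    i_dispersion, i_solvent, i_walls: intensities (scalars or arrays on
    a shared q-grid), already corrected for transmission and exposure.
    phi: analyte volume fraction, between 0 and 1.
    """
    # Separate the wall contribution from both measurements
    # (the "fancy background subtraction" step in the text):
    i_disp_liquid = i_dispersion - i_walls
    i_solv_liquid = i_solvent - i_walls
    # Only (1 - phi) of the illuminated dispersion volume is solvent,
    # so scale the solvent signal accordingly before subtracting:
    return i_disp_liquid - (1.0 - phi) * i_solv_liquid
```

For `phi = 0` this reduces to the usual simple background subtraction, so the correction only changes the result when the analyte occupies a noticeable volume fraction.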
The second complication is that there is a bit of a chicken-and-egg problem: you cannot do this correction without knowledge of the volume fraction occupied by the sample. That volume fraction, however, may result from the analysis of the corrected scattering pattern (which you don’t have yet). It may be possible to do this correction in an iterative manner (yet untested). Alternatively, the volume fraction of analyte needs to be determined in another way.
So, will it matter? To be honest, I am not sure I have ever had a case where this played a large role, but if we are to achieve the ultimate precision, it is to be taken into account. Its effects will be significant if 1) the analyte volume fraction is significant, and 2) the scattering signal from the sample is weak compared to that of the solvent. Proteins in solution are a prime example, but dispersed polymers and vesicles may also be affected.
*) While this holds for a range of samples, here we’ll consider the case of an analyte dispersed in a solvent.