Many AI systems are designed with a specific use case in mind. However, as stated, the use case may evolve over time and slowly drift away from the designers’ original intentions. It is therefore important to clearly document the initial assumptions and the conditions under which the AI system was intended to be used. For example, does the AI system expect a specific environment, or does the training set contain known biases? If an AI system is publicly available, the documentation of its reliability should be publicly available as well.
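As a minimal sketch of how such assumptions could be captured in machine-readable form, the following Python snippet defines a simple documentation record, loosely inspired by the idea of model cards. All names and fields here are illustrative assumptions, not a prescribed standard:

```python
# Hypothetical, machine-readable record of an AI system's intended use.
# Field names and the example values are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class IntendedUseRecord:
    """Documents the conditions under which an AI system is meant to operate."""
    system_name: str
    intended_use: str                  # the use case the designers had in mind
    expected_environment: str          # e.g. sensor type, lighting, input domain
    known_training_biases: list[str] = field(default_factory=list)
    out_of_scope_uses: list[str] = field(default_factory=list)


# Example record for a hypothetical pedestrian-detection model.
record = IntendedUseRecord(
    system_name="pedestrian-detector-v1",
    intended_use="Detect pedestrians in daylight urban traffic scenes.",
    expected_environment="Front-facing RGB camera, daylight conditions.",
    known_training_biases=["Training images collected only in European cities."],
    out_of_scope_uses=["Night-time operation", "Aerial imagery"],
)
print(record)
```

Keeping such a record alongside the system makes later drift visible: a deployment that contradicts `expected_environment` or falls under `out_of_scope_uses` can be flagged against the documented assumptions.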
In addition to reliability, the reproducibility of an AI system’s results is important. Not only is reproducibility a desirable technical property of an AI system (e.g. to investigate the cause of faulty results), it is also an important prerequisite for trust. If a result cannot be reproduced, its explainability – and therefore trust in the AI system – may suffer.
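One common technical prerequisite for reproducible results is controlling all sources of randomness. The sketch below, which assumes a PyTorch-based system, fixes the seeds of the relevant random number generators so that a run can be repeated exactly; it is an illustration of the general idea, not a complete recipe:

```python
# A minimal sketch of fixing randomness for reproducible runs (assumes PyTorch).
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Seed Python, NumPy and PyTorch RNGs so a run can be repeated exactly."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    # Deterministic kernels may cost performance, but without them
    # bit-identical results across runs are generally not guaranteed.
    torch.use_deterministic_algorithms(True)


set_seed(42)
first = torch.rand(3)
set_seed(42)
second = torch.rand(3)
assert torch.equal(first, second)  # identical draws: the run is repeatable
```

Seeding alone is not sufficient in practice: library versions, hardware, and parallel execution order can also affect results, so documenting the full software and hardware environment is part of making results reproducible.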