Digital Copy of Transparent Events
To explore digital simulations, we investigate the concepts of copy, replica, and digital counterpart. A key step is defining criteria for verifying the trustworthiness of digital copies of physical models. This involves logical systems for comparing digital and physical behaviors.
Toy example: a simulation of a fair die can be evaluated by comparing the probabilities implemented by the algorithm with the theoretical probabilities of the die, without testing output frequencies.
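As a minimal sketch (in Python, with illustrative names), a transparent die simulation makes the probabilities it implements readable directly from the source, so they can be matched against the theoretical values without running any frequency tests:

```python
import random

# Transparent simulation of a fair six-sided die: the sampling rule is
# visible in the source code.
def roll_fair_die() -> int:
    # random.randint(1, 6) assigns probability 1/6 to each face.
    return random.randint(1, 6)

# Verification by inspection, not by output frequencies: the probabilities
# implemented by the algorithm are read off the code and compared with the
# theoretical probabilities of a fair die.
algorithmic_probabilities = {face: 1 / 6 for face in range(1, 7)}
theoretical_probabilities = {face: 1 / 6 for face in range(1, 7)}
assert algorithmic_probabilities == theoretical_probabilities
```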
Digital Copy of Opaque Events
In complex cases, especially where systems are not transparent, theoretical probabilities cannot be used to verify trustworthiness. In these cases, the program's output frequencies must be compared with experimental data. This applies both to unpredictable physical objects and to digital models whose source code is inaccessible because of program opacity.
Paradigmatic examples are ML systems and proprietary software, for which the source code is not available. For these, trustworthiness can be checked only by comparing observed output frequencies with experimental data.
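A frequency-based check of this kind might be sketched as follows (the helper name, the chi-square goodness-of-fit test, and the significance threshold are assumptions made for illustration; the opaque system is treated purely as a black box whose outputs are sampled):

```python
from collections import Counter
from scipy.stats import chisquare

def frequencies_compatible(opaque_outputs, experimental_outputs, alpha=0.05):
    """Compare the output frequencies of an opaque program with
    experimentally observed data via a chi-square goodness-of-fit test.
    Both samples are assumed to range over the same set of outcomes."""
    categories = sorted(set(experimental_outputs))
    observed = Counter(opaque_outputs)
    expected = Counter(experimental_outputs)

    f_obs = [observed[c] for c in categories]
    # Rescale the experimental counts so both samples share the same total,
    # as the goodness-of-fit test requires.
    scale = sum(f_obs) / sum(expected[c] for c in categories)
    f_exp = [expected[c] * scale for c in categories]

    _, p_value = chisquare(f_obs, f_exp)
    # Failing to reject the null hypothesis means the observed frequencies
    # are statistically compatible with the experimental data.
    return p_value >= alpha

# Example (hypothetical data): outputs of a black-box die simulator
# compared against rolls recorded from a physical die.
# ok = frequencies_compatible(black_box_rolls, real_die_rolls)
```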
Weak Reliability Criteria
Trustworthiness criteria must be flexible, allowing for differences between a copy and its model. Sometimes only specific features or values matter; in these cases, experimental analyses may be the only criteria available.
For example, a program predicting the number of incoming university freshmen could be considered trustworthy if it provides less detailed but still relevant outputs compared with another system already evaluated as trustworthy. In certain cases, even over-estimations might be acceptable, for instance when a precautionary principle requires ensuring sufficient resources.
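A minimal sketch of such a weak criterion, assuming a hypothetical tolerance and a reference value produced by a system already judged trustworthy:

```python
def weakly_reliable(prediction: float,
                    trusted_reference: float,
                    tolerance: float = 0.10,
                    allow_overestimation: bool = True) -> bool:
    """Weak reliability check for a coarse-grained prediction (e.g. the
    total number of incoming freshmen) against a reference value from a
    system already evaluated as trustworthy.

    The prediction is accepted if it lies within a relative tolerance of
    the reference or, under a precautionary reading, if it merely
    over-estimates the reference so that sufficient resources are planned.
    """
    relative_error = (prediction - trusted_reference) / trusted_reference
    if abs(relative_error) <= tolerance:
        return True
    if allow_overestimation and relative_error > 0:
        return True
    return False

# Example: an over-estimate of enrolments is accepted under the
# precautionary reading, while an under-estimate of the same size is not.
assert weakly_reliable(prediction=1150, trusted_reference=1000)      # +15%, accepted
assert not weakly_reliable(prediction=850, trusted_reference=1000)   # -15%, rejected
```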