Help for: Mean photometric error
Photometric measurements are affected by observational errors, which are modelled here as Gaussian. The user can either fix a constant 1-sigma error that is applied to all stars in the simulation and to all 9 photometric bands, or employ an error that varies with the star's magnitude and/or the photometric band. In the latter case the values must be specified in a table with 18 columns: the first 9 columns list magnitudes in the 9 photometric bands, in order of increasing value, and the remaining 9 columns list the corresponding 1-sigma photometric errors. An example is the following:

 Mu    Mb    Mv    Mr    Mi    Mj    Mh    Mk    Ml   sMu   sMb   sMv   sMr   sMi   sMj   sMh   sMk   sMl
 3.0   2.5   3.0   3.0   3.0   3.0   3.0   3.0   3.0  0.02  0.01  0.01  0.02  0.02  0.02  0.02  0.02  0.02
 5.0   5.1   5.0   5.0   5.0   5.0   5.0   5.0   5.0  0.02  0.01  0.02  0.03  0.03  0.03  0.03  0.03  0.03
 8.5   7.5   8.5   8.5   8.5   8.5   8.5   8.5   8.5  0.03  0.02  0.02  0.03  0.03  0.03  0.03  0.03  0.03
10.0   9.0  10.0  10.0  10.0  10.0  10.0  10.0  10.0  0.04  0.03  0.03  0.03  0.03  0.03  0.03  0.03  0.03
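The Gaussian error model described above can be sketched in a few lines of Python. This is only an illustration of the statistical model, not the program's actual code; the function name perturb_magnitude is our own.

```python
import random

def perturb_magnitude(true_mag, sigma):
    """Return the true magnitude perturbed by a Gaussian
    observational error with the given 1-sigma dispersion."""
    return random.gauss(true_mag, sigma)

# With a constant 1-sigma error, the same sigma is applied to
# every star and every band:
observed_u = perturb_magnitude(10.0, 0.02)
```

Averaged over many stars, the perturbed magnitudes scatter around the true value with standard deviation equal to sigma.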

Let's assume a synthetic star has U and B magnitudes Mu=6.0 and Mb=4.9. The program searches for the pair of tabulated values that bracket the star's magnitude in each band. In the case of the U band these two values are 5.0 and 8.5; in the case of the B band they are 2.5 and 5.1. The 1-sigma error assigned to the star is the value tabulated for the larger of the two bracketing magnitudes. This means that the error in U will be 0.03 mag (the value tabulated for U=8.5) and the error in B will be 0.01 mag (the value tabulated for B=5.1). The allowed range of the 1-sigma dispersion in any band is 0.0 - 1.0 mag.
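The bracketing rule above amounts to a step-function lookup: find where the star's magnitude falls in the sorted column of tabulated magnitudes and take the error of the upper bracket. A minimal sketch, assuming stars fainter than the last tabulated magnitude are clamped to the last row (the function name sigma_for_mag and that edge-case choice are our own assumptions, not the program's documented behaviour):

```python
import bisect

def sigma_for_mag(mag, grid_mags, grid_sigmas):
    """Return the tabulated 1-sigma error for the larger of the
    two tabulated magnitudes that bracket `mag`.

    `grid_mags` must be sorted in increasing order, as required
    by the error table; `grid_sigmas` are the matching errors."""
    i = bisect.bisect_left(grid_mags, mag)  # index of the upper bracket
    i = min(i, len(grid_mags) - 1)          # clamp beyond the last row
    return grid_sigmas[i]

# U and B columns from the example table:
u_mags, u_sigmas = [3.0, 5.0, 8.5, 10.0], [0.02, 0.02, 0.03, 0.04]
b_mags, b_sigmas = [2.5, 5.1, 7.5, 9.0], [0.01, 0.01, 0.02, 0.03]

sigma_u = sigma_for_mag(6.0, u_mags, u_sigmas)  # upper bracket U=8.5 -> 0.03
sigma_b = sigma_for_mag(4.9, b_mags, b_sigmas)  # upper bracket B=5.1 -> 0.01
```

Run on the worked example, this reproduces the errors quoted in the text: 0.03 mag in U and 0.01 mag in B.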