Guides say m_yaw and m_pitch are multipliers applied to the sensitivity; both cvars default to 0.022.

From what I found around the internet, plus some common sense, one could set these cvars to different values in order to get different x and y sensitivities (essentially just changing the ratio between the two).

At the same time, though, I found some pros using non-default values, but with the same value for both cvars, e.g. 0.014 or 0.011 for both m_yaw and m_pitch, thus maintaining the default 1:1 ratio.

I asked myself why they didn't just keep the defaults and pick a different sensitivity value instead.

This led me to a hypothesis I'd like to discuss: what if m_pitch and m_yaw represent the smallest angular movement the model can make between one frame and the next? Wouldn't it then be good to set m_yaw and m_pitch to the lowest possible values and raise the sensitivity to keep the same sens*yaw and sens*pitch products? That way, the smallest angular movement my model could make would be much finer (thus more precise) than 0.022, and yet I'd have the same sensitivity I've always used.
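To make the hypothesis concrete, here is a minimal sketch assuming the engine uses the Quake-derived formula, where the turn per frame is mouse_counts * sensitivity * m_yaw. Under that assumption, the smallest possible turn (one mouse count) is just the product sens*yaw, so splitting the product differently between the two cvars changes nothing:

```python
# Assumed Quake-derived mouse formula (an assumption, not verified engine code):
#   yaw_delta_degrees = mouse_counts * sensitivity * m_yaw
# The smallest possible turn is one mouse count, so the minimum angular step
# is sensitivity * m_yaw -- only the product matters, not the individual values.

def min_yaw_step(sensitivity: float, m_yaw: float) -> float:
    """Smallest yaw change (in degrees) produced by a single mouse count."""
    return sensitivity * m_yaw

# Default m_yaw with sens 2.0:
print(min_yaw_step(2.0, 0.022))   # 0.044
# Halved m_yaw with doubled sens -- same product, same minimum step:
print(min_yaw_step(4.0, 0.011))   # 0.044
```

If this formula is right, it would explain why lowering the cvars while raising the sensitivity produced no visible precision difference in my test: the minimum step never actually changed.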

I tested it and, while the sensitivity remained the same, I can't notice any real difference in pointer precision, but maybe I'm just too much of a noob to see it. I also fear that by lowering the values I may end up with more floating-point approximation error than before.
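On the floating-point worry: a quick way to check is to round the candidate cvar values to 32-bit floats (a common storage format for engine cvars, though that's an assumption here) and compare the relative errors. Because floats have fixed *relative* precision, a smaller value like 0.011 is not stored any less accurately than 0.022:

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python double to the nearest 32-bit float,
    which is what many engines store cvars as (assumption)."""
    return struct.unpack('f', struct.pack('f', x))[0]

for v in (0.022, 0.011, 0.0055):
    f = to_float32(v)
    rel_err = abs(f - v) / v
    print(f"{v}: stored as {f!r}, relative error {rel_err:.2e}")
```

The relative error stays on the order of 1e-8 at every scale, so lowering the cvar by itself shouldn't introduce meaningfully more rounding than the default.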