Maybe my question doesn't make sense due to a misunderstanding, but please explain what I'm missing, because I did read posts and the wiki and it's still not clear to me.
As I understand it, setting a low value for qmax will improve the quality by increasing the bitrate. Maybe I misunderstood something, but doesn't lowering Q (the quantizer) decrease the number of quantization levels and thus the bitrate, which would mean a degradation in quality? Or does lowering Q in ffmpeg mean increasing the number of quantization levels? If the latter is true, then it makes sense that a lower qmax improves the quality.
If the above is true, then increasing qmax will decrease the number of quantization levels, which means fewer bits for coding each level. And if the number of bits per level is lower, the total bits per frame will be lower, so how does the encoder manage to reach the desired bitrate?
You are right in your interpretation of the relation between the quantization factor and the bitrate.
But in any case, for a given quantizer range, you can still ask for a target bitrate, and then there are two cases: either the target bitrate is reachable with a quantizer inside the allowed range and the encoder converges to it, or the qmin/qmax bounds clamp the quantizer and the actual bitrate ends up above or below the target.
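As a concrete illustration of the second case (the filenames and the 2 Mbit/s target are placeholders, and this assumes the libx264 encoder), you can combine a bitrate target with a quantizer bound; -qmax then clamps the quantizer that the rate control is allowed to pick:

    # ask for ~2 Mbit/s, but never let the quantizer go above 28
    ffmpeg -i input.mp4 -c:v libx264 -b:v 2M -qmax 28 output.mp4

If 2 Mbit/s would require a quantizer coarser than 28, the clamp wins and the output comes out larger than requested.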
But with ffmpeg, qmax may have a different meaning, as it is a codec-dependent parameter. For x264 it should be a quantizer (see here), but with some other codecs it doesn't represent a quantization level but a quality range.
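As an illustration of the x264 case (the filenames are placeholders), you can bypass rate control entirely and fix the quantizer; for H.264 the QP scale runs from 0 (best) to 51 (worst):

    # constant-quantizer encode: a lower -qp means finer quantization and higher quality
    ffmpeg -i input.mp4 -c:v libx264 -qp 20 output.mp4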
qmax and qmin define the 'quality range' within which you want to encode. Opposite to what most people (at least me) would expect, the higher the values, the lower the quality.
Values around qmin 16 and qmax 26 are visibly 'very good'; lowering qmin below 16 costs extra space without adding visible quality.
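Putting those numbers into a command (again assuming libx264; the filenames are placeholders), you would restrict the encoder to that quality range like this:

    # keep the quantizer between 16 (finest allowed) and 26 (coarsest allowed)
    ffmpeg -i input.mp4 -c:v libx264 -qmin 16 -qmax 26 output.mp4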
So if you increase the video quality, the encoded output will be closer to the original, and that often requires a higher bitrate; internally it often means that lower quantizer values (finer steps, hence more levels) are used.
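To make that concrete with a toy calculation (the numbers are illustrative only): if transform coefficients span 0-255 and the quantization step is Q, you get roughly 256/Q levels. Q = 16 gives 16 levels, about 4 bits per coefficient; Q = 4 gives 64 levels, about 6 bits. So lowering Q multiplies the number of levels and raises the bitrate, which is why a lower qmax bound means higher quality.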