When Excel determines the axis values it uses to represent your data in a chart, the values are evenly distributed.
For example, if you plot the following series in an Excel line chart:
[0.22,0.33,0.44,0.55,0.66,0.77,0.88,0.99,1.1,1.21,1.32,1.43,1.54,1.65,1.76,1.87,1.98,2.09,2.2]
Excel determines that the y-axis values should be [0,0.5,1,1.5,2,2.5].
What technique or formula does Excel use to determine these values?
Please have a look at:
Reasonable optimized chart scaling
And
Algorithm for "nice" grid line intervals on a graph
There are more questions like this.
After doing some experimenting, I conclude that Excel:
1) keeps the Y axis starting at zero unless you explicitly tell it otherwise;
2) sets the Y axis maximum one major tick higher than your largest value;
3) chooses the major tick unit more arbitrarily - it clearly has "preferred" units (0.1, 0.2, 0.5, 1, 2, 5, 10, 50, 100, etc.) and uses the smallest preferred unit that results in between 5 and 10 major ticks while meeting the requirements above (see the sketch below this list).
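To make that concrete, here is a minimal Python sketch of the heuristic as I understand it from those observations. It is not Microsoft's documented algorithm: the candidate units (1, 2, 5 scaled by powers of ten), the choice to count major *intervals* rather than axis labels for the 5-10 window, and the helper name `excel_like_axis` are all my assumptions.

```python
import math

# Candidate major-tick units: 1, 2 and 5 scaled by powers of ten
# (0.1, 0.2, 0.5, 1, 2, 5, 10, ...), my reading of the "preferred" units above.
PREFERRED = (1, 2, 5)

def excel_like_axis(max_value, y_min=0.0):
    """Guess (y_min, y_max, major_unit) from the observations above:
    the axis starts at 0 (unless y_min is overridden), the maximum lands
    one major tick above the largest value, and the unit is the smallest
    preferred value that gives between 5 and 10 major intervals."""
    span = max_value - y_min
    # Start small enough that the first candidates give too many ticks,
    # then grow the unit until the tick count falls into the 5-10 window.
    exponent = math.floor(math.log10(span)) - 1
    while True:
        for m in PREFERRED:
            unit = m * 10 ** exponent
            y_max = y_min + (math.floor(span / unit) + 1) * unit  # one tick above the max
            ticks = round((y_max - y_min) / unit)                 # counting intervals, not labels
            if 5 <= ticks <= 10:
                return y_min, y_max, unit
        exponent += 1

# The series from the question: 0.22, 0.33, ..., 2.2
data = [round(0.11 * i, 2) for i in range(2, 21)]
print(excel_like_axis(max(data)))   # -> (0.0, 2.5, 0.5), i.e. ticks at 0, 0.5, ..., 2.5
```

For the question's data this reproduces the observed axis of [0, 0.5, 1, 1.5, 2, 2.5]; whether Excel counts the 5-10 window over intervals or over labels is ambiguous from that one example.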