What is the practical use of the connection between Image Render Output Dimensions and Camera Sensor Size?
When you change the output image's X resolution, this scales the camera render frame's Y dimension. As far as I know, this counterintuitive behaviour has to do with the camera's sensor size (set in the "Camera" panel of the camera's "Object Data" tab), which is set to "Auto" by default (unless you choose a sensor preset, in which case the default depends on the chosen sensor) and which fits the image dimensions into a frame of certain proportions. If you reduce the X dimension far enough, that frame switches from growing in the view's Y direction to shrinking along the X axis. With "Horizontal" as sensor fit, the camera render frame's extent along the view's X axis is kept fixed while the Y extent is scaled to preserve the image's X-to-Y resolution ratio. With "Vertical" it is the complementary behaviour, with the frame's extent along the view's Y axis being kept fixed.
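To make sure I understand the rule I just described, here is a rough sketch of it in Python. This is only my reconstruction of the behaviour, not Blender's actual code; the function name, the exact "Auto" tie-break, and the return values are my assumptions:

```python
import math

def camera_view_frame(sensor_fit, sensor_x, sensor_y, lens, res_x, res_y):
    """Hypothetical sketch of how sensor fit + output resolution might map
    to the camera frame. sensor_* and lens are in millimetres."""
    aspect = res_x / res_y
    if sensor_fit == 'AUTO':
        # Assumption: Auto behaves like Horizontal when the image is at
        # least as wide as it is tall, like Vertical otherwise, and always
        # uses the single "Size" value (here sensor_x).
        fit_horizontal = res_x >= res_y
        sensor = sensor_x
    elif sensor_fit == 'HORIZONTAL':
        fit_horizontal = True
        sensor = sensor_x
    else:  # 'VERTICAL'
        fit_horizontal = False
        sensor = sensor_y
    if fit_horizontal:
        # X extent is pinned to the sensor; Y follows the resolution ratio.
        frame_w = sensor
        frame_h = sensor / aspect
    else:
        # Y extent is pinned; X follows the resolution ratio.
        frame_h = sensor
        frame_w = sensor * aspect
    # Field of view along the fitted axis depends only on sensor and lens.
    fov_deg = math.degrees(2 * math.atan(sensor / (2 * lens)))
    return frame_w, frame_h, fov_deg
```

With a 36 x 24 mm sensor, a 50 mm lens, and a 1920 x 1080 output, this sketch gives a 36 x 20.25 mm frame in "Auto"; switching to "Vertical" pins the 24 mm height and widens the frame to about 42.7 mm, which would also explain the zoom-out described below when switching to "Vertical".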
With "Auto" as sensor fit, the view angle never exceeds the limits of a square render frame for equal values in X and Y, even if I choose extremely high values:
As soon as I increase or decrease one of the two values, the visible portion of the scene shrinks in X or Y:
With "Sensor Fit" set to "Vertical" I get a square view of the scene for equal X and Y, but for a growing X-to-Y ratio the view can become so wide that I can't zoom out any further:
With "Sensor Fit" set to "Horizontal", the same happens in the view's vertical direction.
There's also a zoom-out of the scene when switching "Sensor Fit" from "Auto" or "Horizontal" to "Vertical".
What is the point of this behaviour? Does it have consequences for how far I can scale up my final rendering when printing it out (for example as a wallpaper ;))? And why isn't this scaling of the visible scene possible in X and Y simultaneously?