In the spirit of the blog, here is an elementary use of NSImage. I have a PNG image file that I added to the project, named "Ella." She's cute, but sometimes gets kinda suspicious. Hmm...
We have a standard Cocoa application with absolutely nothing added to the nib. In the AppDelegate, we put our code. It comes from the template with an outlet to the window, from which we will grab its content view, and draw the image there. In the first part of the code, we examine the sizes of the image and the view.
The output is this:
The image is 554 x 392, but the view is only 480 x 360. What to do?
The drawing is always hard for me to wrap my head around. The method we'll use is from NSImage (presumably drawInRect:fromRect:operation:fraction:). It draws into the current graphics "context": it takes a specified rectangle of pixels from the source image and draws them into a specified rectangle in the view. If the two rectangles don't match, it will stretch either or both dimensions to fit. Stretching only one dimension is not OK; it looks weird.
In the second section of code, we figure out which dimension overflows the view by the larger ratio. That ratio becomes f, the factor we use to adjust by.
In the third section, we do the actual drawing. Like I said, we take a given width and height from the source, and we also give the function the specified rectangle in the view to draw into. We'll use the whole view for that.
What we need to do is to adjust the number of pixels we take from the source to match how much room we have in the view, or adjust the number of pixels in the destination rectangle.
Choosing the first method, we can either crop one dimension, or ask for additional pixels that aren't present in the image. If our source rectangle extends beyond the image, the missing pixels are supplied by the view's own background (here, white).
We want to keep the aspect ratio (width / height) the same. The ratio of the image width to view width is 554/480 = 1.154, and for the heights it's 392/360 = 1.09. Proportionally, our image is larger in width than in height. So we specify 1.154 * the view's dimensions = 554 x 415 pixels to be taken from the image, including the 23 rows of height the image doesn't have. The rect's origin is NSZeroPoint. The 23 missing pixels are filled in with white at the top of the view.
If the image is smaller than the view (that is, if the test !(NSContainsRect(vRect, iRect)) fails), then we'll simply paste a view-sized rectangle of pixels from the image into the view (missing ones supplied for free)!