Consider this admirable script, which draws a (circular) gradient:
https://github.com/paiv/AngleGradientLayer/blob/master/AngleGradient/AngleGradientLayer.m
int w = CGRectGetWidth(rect);
int h = CGRectGetHeight(rect);
and then
angleGradient(data, w, h ..
and then it loops over all of those,
for (int y = 0; y < h; y++)
for (int x = 0; x < w; x++) {
basically setting the color
*p++ = color;
But wait - wouldn't this be working in points, not pixels?
How, really, would you draw to the physical pixels on dense screens?
Is it a matter of:
Let's say the density is 4 on the device. Draw just as in the above code, but on a bitmap four times as big in each dimension, and then put that bitmap in the rect?
That seems messy - but is that it?
What it sounds like you are looking for is the scale property on UIScreen:
https://developer.apple.com/documentation/uikit/uiscreen/1617836-scale
This tells you the number of pixels the coordinate system gives you per point. iOS devices basically work in non-Retina (point) coordinates. Old link explaining what is going on here:
http://www.daveoncode.com/2011/10/22/right-uiimage-and-cgimage-pixel-size-retina-display/
Don't use his macros, since some devices now have a scale of 3.0, but the post explains what is going on.
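As a minimal sketch of the idea (my own, not from that post): read the scale once and size your pixel buffer from it. The rect here is hypothetical.
CGFloat scale = [UIScreen mainScreen].scale;   // 1.0, 2.0, or 3.0 depending on the device
CGRect rect = CGRectMake(0, 0, 100, 200);      // a hypothetical rect, in points
int w = (int)(CGRectGetWidth(rect) * scale);   // width in physical pixels
int h = (int)(CGRectGetHeight(rect) * scale);  // height in physical pixels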
[Note: the code in the GitHub example calculates the gradient on a points basis, not on a pixel basis. -Fattie]
The code is working in pixels. First, it fills a simple raster bitmap buffer with the pixel color data. That obviously has no notion of an image scale or unit other than pixels. Next, it creates a CGImage from that buffer (in a bit of an odd way). CGImage also has no notion of a scale or unit other than pixels.
The issue comes in where the CGImage is drawn. Whether scaling is done at that point depends on the graphics context and how it has been configured. There's an implicit transform in the context that converts from user space (points, more or less) to device space (pixels).
The -drawInContext: method ought to convert the rect using CGContextConvertRectToDeviceSpace() to get the rect for the image. Note that the unconverted rect should still be used for the call to CGContextDrawImage().
So, for a 2x Retina display context, the original rect will be in points. Let's say 100x200. The image rect will be doubled in size to represent pixels, 200x400. The draw operation will draw that to the 100x200 rect, which might seem like it would scale the large, highly detailed image down, losing information. However, internally, the draw operation will scale the target rect to device space before doing the actual draw, and fill a 200x400-pixel area from the 200x400-pixel image, preserving all of the detail.
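To make that concrete, here is a rough sketch (mine, not the repo's actual code) of a -drawInContext: that fills at pixel resolution, assuming a CALayer subclass with QuartzCore imported, and a hypothetical angleGradient(data, w, h) standing in for the repo's real fill function:
- (void)drawInContext:(CGContextRef)ctx
{
    CGRect rect = self.bounds; // in points (user space)
    // Convert to device space to learn the true pixel dimensions.
    CGRect deviceRect = CGContextConvertRectToDeviceSpace(ctx, rect);
    int w = (int)deviceRect.size.width;
    int h = (int)deviceRect.size.height;

    // Fill a raw RGBA buffer at pixel resolution.
    size_t bytesPerRow = (size_t)w * 4;
    uint32_t *data = calloc((size_t)h, bytesPerRow);
    angleGradient(data, w, h); // hypothetical: writes one RGBA value per pixel

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef bmp = CGBitmapContextCreate(data, w, h, 8, bytesPerRow, space,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGImageRef image = CGBitmapContextCreateImage(bmp);

    // Draw the pixel-sized image into the point-sized rect; the context's
    // user-to-device transform maps it 1:1 onto physical pixels.
    CGContextDrawImage(ctx, rect, image);

    CGImageRelease(image);
    CGContextRelease(bmp);
    CGColorSpaceRelease(space);
    free(data);
}
Note that for a layer-backed draw, the context's transform reflects the layer's contentsScale, which is why the answer below stresses setting it.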
So, based on the magnificent answer of KenThomases, and a day of testing, here's exactly how you draw at the physical-pixel level. I think.
The critical elements:
first ...
second ...
third ...
Example ...
It's absolutely critical to set contentsScale at initialization time. I tried a few OS versions, and it seems that, for better or worse, the default contentsScale for layers is unfortunately 1 rather than the screen density, so do not forget to set it! (Note that other systems in the OS will also use it to know how to handle your layer efficiently, etc.)