From what I understand, this means rendering the image at a higher resolution and then downsampling it back to the target resolution. My question is whether there is any way (theoretically) to optimize this.
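For clarity, here is roughly what I picture the downsampling step doing, sketched in Python with numpy. This is just my rough understanding, not a real renderer: the 2x scale factor and the frame sizes are made-up examples, and the "render" is faked with random pixels.

```python
# Rough sketch of the downsampling half of supersampling: average each
# 2x2 block of the high-resolution frame into one output pixel.
import numpy as np

def downsample_2x(hi_res: np.ndarray) -> np.ndarray:
    """Box-filter a (H, W, C) image down to (H/2, W/2, C) by averaging 2x2 blocks."""
    h, w = hi_res.shape[:2]
    blocks = hi_res[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, -1)
    return blocks.mean(axis=(1, 3)).astype(hi_res.dtype)

# e.g. a 1920x1080 target rendered at 3840x2160 and averaged back down
hi_res = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)
final = downsample_2x(hi_res)   # -> shape (1080, 1920, 3)
```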
My thought on this, as a non-programmer, is to increase the resolution only at the edges: use edge detection to find the edge pixels and place them into a quadtree, render just the quadtree cells that contain edges at higher resolution, then blend the original image with that higher-quality edge layer (rough sketch below).
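Something like this, very roughly, in Python. I've simplified the quadtree down to a fixed grid of tiles just to show the idea, and `render_tile` is a hypothetical stand-in for whatever the engine's renderer would actually expose; the tile size and threshold are arbitrary.

```python
# Very rough sketch of the idea: find edge tiles with a cheap gradient test,
# re-render only those tiles at higher resolution, and paste the downsampled
# result back over the base frame. A quadtree would subdivide adaptively
# instead of using a fixed tile size.
import numpy as np

TILE = 32  # tile size in pixels (arbitrary)

def edge_tiles(gray: np.ndarray, threshold: float = 10.0):
    """Yield (y, x) corners of tiles whose mean gradient magnitude exceeds threshold."""
    gy, gx = np.gradient(gray.astype(float))
    strength = np.hypot(gx, gy)
    h, w = gray.shape
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            if strength[y:y + TILE, x:x + TILE].mean() > threshold:
                yield y, x

def refine_edges(frame: np.ndarray, render_tile) -> np.ndarray:
    """Replace edge tiles with a 2x-rendered, box-downsampled version.

    `render_tile(y, x, scale)` is a hypothetical callback that would ask the
    renderer for that tile at `scale` times the resolution.
    """
    gray = frame.mean(axis=2)
    out = frame.copy()
    for y, x in edge_tiles(gray):
        hi = render_tile(y, x, scale=2)                 # (2*TILE, 2*TILE, 3)
        blocks = hi.reshape(TILE, 2, TILE, 2, -1)
        out[y:y + TILE, x:x + TILE] = blocks.mean(axis=(1, 3))
    return out
```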
Do any of you think this is feasible, or has something like this already been made?
p.s. please be gentle