*A well-known problem in computer vision is the identification of edges and surfaces in images. A robot roaming on the ground will appreciate being able to distinguish green grass from the water of a lake.*

The robot has a camera on its head and is given a still image consisting of pixels in the rgb palette. Let $f : \Omega \to [0,1]^3$ be the "real world" image, defined on a domain $\Omega \subset \mathbb{R}^2$, of which the camera picture is a discretization. We suppose that $f$ is piecewise continuous in the following sense: the set $C_f$ of continuity points of $f$ is open, dense in $\Omega$ and has a finite number of connected components. The *edge points* are therefore the points of $\Omega \setminus C_f$ and the *surfaces* are the connected components of $C_f$. To characterize $\Omega \setminus C_f$, we make use of the modulus of continuity, defined as

$$\omega_f(x, r) = \sup \left\{ \lVert f(y) - f(x) \rVert : y \in \Omega,\ \lvert y - x \rvert \leq r \right\}$$

for all $x \in \Omega$ and $r > 0$.
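On a discrete image, the ball of radius $r$ can be approximated by a square window of pixels. A minimal sketch (the function name and the toy green/blue image are illustrative choices):

```python
import numpy as np

def modulus_of_continuity(img, x, y, r):
    """Largest rgb distance between pixel (x, y) and any pixel in
    the square window of half-width r around it -- a discrete
    stand-in for the ball of radius r."""
    h, w = img.shape[:2]
    x0, x1 = max(0, x - r), min(h, x + r + 1)
    y0, y1 = max(0, y - r), min(w, y + r + 1)
    window = img[x0:x1, y0:y1].reshape(-1, img.shape[2])
    return np.max(np.linalg.norm(window - img[x, y], axis=1))

# Toy image: green "grass" on the left, blue "lake" on the right.
img = np.zeros((4, 4, 3))
img[:, :2] = (0.0, 1.0, 0.0)
img[:, 2:] = (0.0, 0.0, 1.0)
```

Deep inside a surface the modulus is zero for small $r$ (e.g. at pixel $(0, 0)$), while at a pixel touching the boundary it jumps to the green–blue distance $\sqrt{2}$ (e.g. at pixel $(0, 1)$ with $r = 1$).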

**Lemma.** *A point $x \in \Omega$ is an edge point of $f$ if and only if*

$$\lim_{r \to 0^+} \omega_f(x, r) > 0.$$

This lemma gives a hint as to how we can detect edge points. We may simply compare a pixel $x$ to its neighbours $y$ and say that $x$ is an edge point when the maximum distance $\max_y \lVert f(y) - f(x) \rVert$ (in the rgb palette) to a neighbour is greater than a fixed threshold $\varepsilon > 0$.
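This naive detector takes a few lines with NumPy; here is one possible sketch (the 8-neighbour choice and edge-replicating padding are my assumptions, not fixed parts of the method):

```python
import numpy as np

def edge_mask(img, eps):
    """Flag a pixel as an edge point when the rgb distance to at
    least one of its 8 neighbours exceeds the threshold eps."""
    h, w, _ = img.shape
    # Replicate the border so pixels on the frame compare against
    # themselves rather than wrapping around the image.
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    mask = np.zeros((h, w), dtype=bool)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            shifted = padded[1 + dx:1 + dx + h, 1 + dy:1 + dy + w]
            mask |= np.linalg.norm(img - shifted, axis=2) > eps
    return mask

# Toy image: green "grass" on the left, blue "lake" on the right.
img = np.zeros((4, 4, 3))
img[:, :2] = (0.0, 1.0, 0.0)
img[:, 2:] = (0.0, 0.0, 1.0)
mask = edge_mask(img, 0.5)  # True exactly on the two boundary columns
```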

There are two problems with this approach. First, the set of edge points so obtained will be irregular and very messy. Second, the approach is not robust to noise and may be broken by intricate texture patterns. Here, I'm going to focus on resolving the first issue: obtaining smooth edges.

# Obtaining smooth edges.

(We assume that the image is simple and contains no noise or textures.) The idea is the following. As we identify surfaces $S_1, \dots, S_n$, in a growing number $n$, we correspondingly adjust a smooth partition of unity

$$p_1 + p_2 + \dots + p_n = 1, \qquad p_i : \Omega \to [0, 1].$$

At each pixel $x$, we understand $p_i(x)$ as being the probability that $x$ is a point of the $i$th surface. Once we have algorithmically established a satisfying set of probabilities, we define the $i$th surface as being the set of points on which $p_i$ is greater than any other $p_j$.
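Assuming the $p_i$ are stored as arrays over the pixel grid, this final assignment is a pointwise argmax; the values below are made up for illustration:

```python
import numpy as np

# Toy example with n = 2 surfaces on a single row of 4 pixels.
# Each entry of p holds the values of one p_i across the image.
p = np.array([
    [[0.9, 0.7, 0.2, 0.1]],   # p_1: peaks on the left surface
    [[0.1, 0.3, 0.8, 0.9]],   # p_2: peaks on the right surface
])
labels = p.argmax(axis=0)      # each pixel goes to the winning surface
# labels == [[0, 0, 1, 1]]
```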

The idea of an algorithm to construct the partition of unity is the following. Let $K_h$ be a positive radial function, decreasing with the bandwidth $h$ and such that its maximum also decreases with $h$; and let $\varepsilon > 0$ be a fixed threshold, to be adjusted. Pick a pixel $x$ and find the largest ball $B(x, r)$ of radius $r$ around $x$ such that the distances between $f(x)$ and the values of $f$ at the other pixels of $B(x, r)$ are less than $\varepsilon$. If $r$ is not too small, we *spawn* a new surface and let $p_1$ be the kernel $K_r$ centered at $x$. We then *grow* this surface by trying to fit other balls in this way at all the points of $B(x, r)$, replacing $p_1$ by the maximum of itself with the kernels corresponding to these balls. We maintain (weakly) decreasing radii for the balls as we get away from $x$, and the height of $p_1$ correspondingly gets smaller. Once we have explored the whole connected component of $x$, we start again, this time from an unexplored point, to obtain $p_2$. If no points were left unexplored, we can start again from a point at which $\max_i p_i$ attains a minimum. We do this a fixed number of times or until $\max_i p_i$ gets close to being constant.
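Since the implementation never made it into the post, here is one possible sketch of a single growing pass, under several simplifying assumptions of mine: square windows stand in for balls, the kernel is a Gaussian bump whose height and width both shrink with the radius, growth is a breadth-first search over 4-neighbours, and the restart-from-a-minimum refinement is omitted. All names are illustrative:

```python
import numpy as np
from collections import deque

def largest_radius(img, x, y, eps, r_max=10):
    """Largest r such that every pixel in the square window of
    half-width r around (x, y) is within rgb distance eps of (x, y)."""
    h, w, _ = img.shape
    r = 0
    while r < r_max:
        x0, x1 = max(0, x - r - 1), min(h, x + r + 2)
        y0, y1 = max(0, y - r - 1), min(w, y + r + 2)
        window = img[x0:x1, y0:y1].reshape(-1, 3)
        if np.max(np.linalg.norm(window - img[x, y], axis=1)) > eps:
            return r
        r += 1
    return r

def kernel(h, w, cx, cy, r):
    """Gaussian bump centered at (cx, cy): both its height and its
    width shrink with r, so small balls contribute small kernels."""
    yy, xx = np.meshgrid(np.arange(w), np.arange(h))
    d2 = (xx - cx) ** 2 + (yy - cy) ** 2
    return (r / (r + 1.0)) * np.exp(-d2 / (2.0 * max(r, 0.5) ** 2))

def grow_surfaces(img, eps=0.5, r_min=1):
    """One pass: spawn a surface at an unexplored pixel whose ball is
    large enough, BFS outward with weakly decreasing radii, and take
    the pointwise max of the kernels of all accepted balls."""
    h, w, _ = img.shape
    explored = np.zeros((h, w), dtype=bool)
    surfaces = []
    for sx in range(h):
        for sy in range(w):
            if explored[sx, sy]:
                continue
            r = largest_radius(img, sx, sy, eps)
            if r < r_min:
                explored[sx, sy] = True
                continue
            p = kernel(h, w, sx, sy, r)
            explored[sx, sy] = True
            queue = deque([(sx, sy, r)])
            while queue:
                x, y, r_prev = queue.popleft()
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if not (0 <= nx < h and 0 <= ny < w) or explored[nx, ny]:
                        continue
                    # weakly decreasing radii away from the seed
                    nr = min(largest_radius(img, nx, ny, eps), r_prev)
                    if nr >= r_min:
                        p = np.maximum(p, kernel(h, w, nx, ny, nr))
                        explored[nx, ny] = True
                        queue.append((nx, ny, nr))
            surfaces.append(p)
    return surfaces

# Toy image: green "grass" on the left, blue "lake" on the right.
img = np.zeros((4, 4, 3))
img[:, :2] = (0.0, 1.0, 0.0)
img[:, 2:] = (0.0, 0.0, 1.0)
surfaces = grow_surfaces(img, eps=0.5)
```

On the toy image this finds two surfaces, and the argmax labelling assigns the leftmost column to the green surface and the rightmost to the blue one.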

Note that if the $p_i$ are translates of a single radial kernel $K_h$ for a fixed $h$, then the surfaces are Voronoi cells and the edges form a Voronoi diagram.
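A quick numerical sanity check of this remark (Gaussian kernels and the sample centers are arbitrary choices of mine): with identical radial kernels, the pixel-wise argmax of the kernels coincides with the nearest-center labelling, i.e. the Voronoi cells of the centers.

```python
import numpy as np

def voronoi_labels(h, w, centers, bandwidth=2.0):
    """Return the argmax labelling for identical Gaussian kernels
    centered at `centers`, plus the nearest-center labelling."""
    yy, xx = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([xx, yy], axis=-1).astype(float)           # (h, w, 2)
    d2 = ((pts[None] - centers[:, None, None]) ** 2).sum(-1)  # (n, h, w)
    kernels = np.exp(-d2 / (2 * bandwidth ** 2))
    return kernels.argmax(axis=0), d2.argmin(axis=0)

centers = np.array([[1.0, 1.0], [6.0, 5.0], [2.0, 7.0]])
labels, nearest = voronoi_labels(8, 8, centers)
# labels == nearest everywhere: max kernel <=> min distance
```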

The post will be updated ~~this weekend~~ in the near future with the Python implementation of the algorithm. I already know that the modulus of continuity approach yields decent results (but with irregular and broken edges); we will see if this modification works.

**Update:** I didn’t have the time to program and experiment with this. It will have to wait until I approach image segmentation more seriously.