NVIDIA Researchers' Advances in Deep Learning

Here’s an excerpt: The deep learning network uses the well-established physics principle of diffusion to better understand the relationship between neighboring pixels. This helps it differentiate, for example, between neighboring pixels of a bicycle’s wheel, its spokes and the empty space in between. That is a spatial affinity used for image segmentation, but the network could be trained to learn many other affinities as well: color, tone, texture and so on.

The spatial propagation network learns to define and model these affinities purely from data, rather than relying on hand-designed models. The learning model can be applied to any task that requires pixel-level labels, including image matting (think Photoshop), image colorization and face parsing, to name a few. The model could even discover affinities, such as functional or semantic relationships in an image, that might not occur to people.
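
To make the idea of propagation concrete, here is a minimal NumPy sketch of one left-to-right pass of a linear propagation layer of this general kind. It is an illustration only, not NVIDIA's implementation: the function name, the stand-in coarse score map and the randomly generated affinity weights are all assumptions, and in the real network the affinity weights would be predicted by a trained CNN rather than supplied by hand.

```python
import numpy as np

def propagate_left_to_right(x, w):
    """One-directional linear propagation over an image.

    x : (H, W) array of per-pixel values (e.g., a coarse segmentation score map).
    w : (H, W, 3) array of affinity weights linking each pixel to its
        top-left, left, and bottom-left neighbors in the previous column.
        In a learned setting these weights would come from a deep network;
        here they are simply inputs to the sketch.
    """
    H, W = x.shape
    h = np.zeros_like(x)
    h[:, 0] = x[:, 0]  # first column has nothing to propagate from yet
    for j in range(1, W):
        # Gather the three neighbors from the previous column
        # (edge rows reuse the nearest valid neighbor).
        up = np.roll(h[:, j - 1], 1)
        up[0] = h[0, j - 1]
        mid = h[:, j - 1]
        down = np.roll(h[:, j - 1], -1)
        down[-1] = h[-1, j - 1]

        # Normalize the weights so their absolute values sum to at most 1,
        # which keeps the recurrence stable.
        wj = w[:, j, :]
        norm = np.maximum(np.abs(wj).sum(axis=1, keepdims=True), 1.0)
        wj = wj / norm
        lam = wj.sum(axis=1)

        # Each pixel blends its own input with its propagated neighbors.
        neighbors = wj[:, 0] * up + wj[:, 1] * mid + wj[:, 2] * down
        h[:, j] = (1.0 - lam) * x[:, j] + neighbors
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coarse = rng.random((8, 8))        # stand-in for a coarse score map
    affinity = rng.random((8, 8, 3))   # stand-in for learned affinities
    refined = propagate_left_to_right(coarse, affinity)
    print(refined.shape)  # (8, 8)
```

In this toy version, strong affinities let a pixel inherit most of its value from its neighbors, while weak affinities leave the original value largely untouched; repeating the scan in the other three directions would spread information across the whole image, which is the diffusion-like behavior described above.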
