Multispectral Texture Synthesis Using RGB Convolutional Neural Networks

Research output: Contribution to journal › Article › peer-review

Abstract

State-of-the-art red-green-blue (RGB) texture synthesis algorithms rely on style distances that are computed through statistics of deep features. These deep features are extracted by classification neural networks that have been trained on large datasets of RGB images. Extending such synthesis methods to multispectral images is not straightforward, since the pretrained networks are designed for and have been trained on RGB images. In this work, we propose two solutions to extend these methods to multispectral imaging (MSI). Neither of them requires additional training of the neural network from which the second-order neural statistics are extracted. The first one involves optimizing over batches of random triplets of spectral bands during training. The second one projects multispectral pixels onto a 3-D space. We further explore the benefit of a color transfer operation upstream of the projection to avoid the potentially abnormal color distributions induced by the projection. Our experiments compare the performance of the various methods through different metrics. We demonstrate that they can be used to perform exemplar-based texture synthesis, achieve good visual quality, and come close to state-of-the-art methods on RGB bands.
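The second solution described above, projecting multispectral pixels onto a 3-D space so that a pretrained RGB network can process them, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a PCA-based linear projection over the spectral axis, which is one plausible choice; the projection actually used in the article may differ.

```python
# Hypothetical sketch: reduce a B-band multispectral image to 3 channels
# via PCA over the spectral axis, so an RGB-pretrained network can
# extract deep features from it. The projection choice (PCA) is an
# assumption for illustration, not the paper's stated method.
import numpy as np

def project_to_3d(ms_image):
    """Project an (H, W, B) multispectral image onto 3 channels via PCA."""
    h, w, b = ms_image.shape
    pixels = ms_image.reshape(-1, b).astype(np.float64)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    # Principal axes of the B x B spectral covariance matrix.
    cov = centered.T @ centered / (len(centered) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top3 = eigvecs[:, np.argsort(eigvals)[::-1][:3]]  # (B, 3)
    projected = centered @ top3
    return projected.reshape(h, w, 3), mean, top3

def back_project(projected, mean, top3, shape):
    """Approximately invert the projection back to B spectral bands."""
    h, w, b = shape
    recon = projected.reshape(-1, 3) @ top3.T + mean
    return recon.reshape(h, w, b)

# Example: an 8-band image reduced to 3 channels and reconstructed.
rng = np.random.default_rng(0)
ms = rng.random((16, 16, 8))
proj, mean, axes = project_to_3d(ms)
recon = back_project(proj, mean, axes, ms.shape)
```

The back-projection is what would map a synthesized 3-channel texture back to the full spectral resolution; the abstract's color-transfer step would sit upstream of `project_to_3d` to keep the projected channels in a plausible color distribution.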

Original language: English
Article number: 5402914
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 63
Publication status: Published - 1 Jan 2025

Keywords

  • Multispectral imaging (MSI)
  • texture synthesis
