
VLMaterial: Procedural Material Generation with Large Vision-Language Models


Procedural materials, represented as functional node graphs, are ubiquitous in computer graphics for photorealistic material appearance design. They allow users to perform intuitive and precise editing to achieve desired visual appearances. However, creating a procedural material given an input image requires professional knowledge and significant effort. In this work, we leverage the ability to convert procedural materials into standard Python programs and fine-tune a large pre-trained vision-language model (VLM) to generate such programs from input images. To enable effective fine-tuning, we also contribute an open-source procedural material dataset and propose to perform program-level augmentation by prompting another pre-trained large language model (LLM). Through extensive evaluation, we show that our method outperforms previous methods on both synthetic and real-world examples.
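To make the idea concrete: a procedural material expressed as a Python program is a sequence of generator and filter calls that compute texture maps, where every numeric constant is an editable parameter. The paper's actual node-graph API is not reproduced here; the following is a minimal self-contained sketch using NumPy, with hypothetical helpers `value_noise` and `blend` standing in for generator and blend nodes.

```python
import numpy as np

def value_noise(size: int, seed: int, octaves: int = 4) -> np.ndarray:
    """Hypothetical generator node: multi-octave value noise in [0, 1]."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    for o in range(octaves):
        cells = 2 ** (o + 2)  # coarse grid resolution for this octave
        coarse = rng.random((cells, cells))
        # Nearest-neighbor upsample of the coarse grid, weighted per octave.
        img += np.kron(coarse, np.ones((size // cells, size // cells))) / 2 ** o
    return img / img.max()

def blend(a: np.ndarray, b: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Hypothetical blend node: linear interpolation driven by a mask."""
    return a * (1.0 - mask) + b * mask

def material(size: int = 256) -> dict[str, np.ndarray]:
    """One 'program' defines one material as a set of texture maps."""
    base = value_noise(size, seed=7, octaves=3)
    grain = value_noise(size, seed=13, octaves=5)
    albedo = blend(0.25 * np.ones((size, size)),
                   0.60 * np.ones((size, size)), base)
    roughness = np.clip(0.4 + 0.5 * grain, 0.0, 1.0)
    return {"albedo": albedo, "roughness": roughness}
```

A VLM fine-tuned for this task maps an input photo to such a program; because the output is ordinary Python, the predicted material remains editable parameter by parameter.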

By Beichen Li and 6 other authors.
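The abstract's augmentation step prompts a second pre-trained LLM to rewrite existing material programs into plausible variants. The paper's prompts and model choice are not given here; below is a minimal sketch assuming the `openai` Python client, with `gpt-4o` as a placeholder model name, keeping only variants that still compile.

```python
from openai import OpenAI  # assumed client; any chat-capable LLM API would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AUGMENT_PROMPT = """Below is a procedural material program in Python.
Write a variant of it: perturb numeric parameters and reorder independent
calls, but keep it a valid program for the same kind of material.
Return only Python code.

{program}"""

def augment(program_src: str) -> str | None:
    """Hypothetical program-level augmentation: one LLM-rewritten variant."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper's choice of LLM is not assumed
        messages=[{"role": "user",
                   "content": AUGMENT_PROMPT.format(program=program_src)}],
    )
    variant = resp.choices[0].message.content
    try:
        compile(variant, "<variant>", "exec")  # discard invalid rewrites
    except SyntaxError:
        return None
    return variant
```

Filtering on compilation (and, in practice, on successful rendering) keeps the augmented training set executable, which is what makes augmentation at the program level safer than free-form text rewriting.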


Read more on: language models

Related news:

TopoNets: High-Performing Vision and Language Models with Brain-Like Topography

TinyStories: How Small Can Language Models Be and Still Speak Coherent English? (2023)

Letting Language Models Write My Website