A performance comparison of CNNs trained on synthetically generated or recorded data for gaze estimation
Description
In this work, the data generator PeopleSansPeople will be adapted to create a dataset of images of the 3D human assets included in the Microsoft RocketBox Avatar Library. This dataset will be used to train a Convolutional Neural Network (CNN) for eye gaze estimation. The resulting network will be compared with a CNN trained on a naturally recorded dataset, to validate the assumption that synthetic data can be used to obtain plausible results.
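To illustrate the kind of model involved, a minimal PyTorch sketch of a gaze-regression CNN is shown below. This is not the project's actual architecture: the layer sizes, the 64x64 input resolution, and the (yaw, pitch) output parameterization are all assumptions made for the example.

```python
# Illustrative sketch only: a small CNN that regresses a 2D gaze direction
# (yaw, pitch) from RGB eye-region crops. All layer sizes and the input
# resolution are placeholder assumptions, not details of this work.
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),  # predicted (yaw, pitch), e.g. in radians
        )

    def forward(self, x):
        return self.head(self.features(x))

model = GazeCNN()
batch = torch.randn(4, 3, 64, 64)  # stand-in for synthetic or recorded eye crops
gaze = model(batch)
print(gaze.shape)  # one (yaw, pitch) pair per image
```

The same network could be trained once on the synthetically generated images and once on the recorded dataset, so that any performance difference can be attributed to the training data rather than the architecture.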