Is Self-Supervised Learning a Surrogate of Supervised Learning?
EasyChair Preprint 15263 • 5 pages • Date: October 18, 2024

Abstract
Over the decades in deep learning and machine learning, supervised learning has stood as the anchor, but the enormous amount of unannotated data, the high cost of annotation, the lengthy annotation cycle, and the need for expert annotators have been drawbacks. These drawbacks brought semi-supervised learning into the limelight, yet semi-supervised learning still requires a portion of annotated data. The expense of obtaining correctly annotated data has given rise to the nascent field of self-supervised learning. Self-supervised learning extracts useful information from vast amounts of unannotated data through a pretext task, and the learned representations are then used on a downstream task. This emerging strategy has become an active topic of research. The use of unannotated data to achieve supervised-level performance raises the question of whether self-supervised learning is a surrogate for supervised learning. In this work, we review the research literature to address whether these two strategies should act as surrogates or be synergized, comparing their accuracy and their robustness to attacks and to out-of-distribution detection. Just as supervised learning faces problems with annotation, self-supervised learning calls for a good pretext task for its downstream task. We recommend that, to answer the question posed, all of these factors must be taken into account.

Keyphrases: downstream, out-of-distribution, pretext, robust, self-supervised, semi-supervised learning, supervised learning
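As a concrete illustration of the pretext idea discussed in the abstract (this example is not from the paper itself), a rotation-prediction pretext task generates supervision "for free" from unannotated data: each image is rotated by a random multiple of 90 degrees, and the rotation index serves as the label. The helper function and the toy dataset below are hypothetical, offered only as a minimal sketch:

```python
import numpy as np

def make_rotation_pretext(images, seed=0):
    """Build a pretext dataset from unlabeled images: rotate each image
    by a random multiple of 90 degrees and use the rotation index as a
    self-generated label -- no human annotation is needed."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))   # 0, 90, 180, or 270 degrees
        xs.append(np.rot90(img, k))   # pretext input
        ys.append(k)                  # pretext label, derived from the data itself
    return np.stack(xs), np.array(ys)

# Unlabeled data: four toy 8x8 "images"
unlabeled = np.arange(4 * 8 * 8, dtype=float).reshape(4, 8, 8)
x, y = make_rotation_pretext(unlabeled)
print(x.shape, y.shape)  # pretext inputs and their self-generated labels
```

A model pretrained to predict `y` from `x` would then be fine-tuned on the downstream task, which is the transfer step the abstract compares against fully supervised training.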