Deep Learning/Anomaly Detection

Anomaly Detection Paper Collection

sdbeans 2022. 1. 23. 16:59

Patch SVDD

  • ACCV 2020
  • patch-wise detection method for image data (looks at small sections of an image)
  • self-supervised learning
  • the randomly initialized encoder of Deep SVDD is also used here, but taken further to study why the random features were separable
  • Deep SVDD: trains an encoder to map normal data close to a single center in feature space; distance to the center is the anomaly score
  • Patch-wise Deep SVDD: applies the SVDD idea per patch rather than per image, pulling features of spatially nearby patches together, so defects can be localized
  • self-supervised learning: the encoder additionally predicts the relative position of neighboring patches, which keeps the features informative
  • hierarchical encoding: encoders with different receptive-field sizes are combined so anomalies of varying size are detected
  • generating anomaly maps: patch-level anomaly scores are aggregated into a pixel-wise anomaly map
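The Deep SVDD objective that these patch-wise variants build on is compact enough to sketch (a minimal plain-Python sketch of the standard Deep SVDD formulation; Patch SVDD itself additionally pulls features of adjacent patches together rather than toward one fixed center):

```python
def anomaly_score(feature, center):
    """Squared Euclidean distance to the center; higher means more anomalous."""
    return sum((f - c) ** 2 for f, c in zip(feature, center))

def svdd_loss(features, center):
    """Deep SVDD training loss: mean distance of (normal) features to the center."""
    return sum(anomaly_score(f, center) for f in features) / len(features)

# toy 2-D features: "normal" points sit near the center, an anomaly far away
center = [0.0, 0.0]
normal_feats = [[0.1, -0.1], [0.0, 0.2]]
print(svdd_loss(normal_feats, center))     # small loss on normal data
print(anomaly_score([3.0, 4.0], center))   # 25.0 for a far-away point
```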







Deep SAD

  • ICLR 2020
  • deep semi-supervised anomaly detection
  • end-to-end deep method for general semi-supervised AD
  • generalization of unsupervised Deep SVDD method to the semi-supervised AD setting
  • same loss term as Deep SVDD for unlabeled data
  • for labeled data, a new loss term weighted by a hyperparameter that controls the balance between the labeled and unlabeled terms
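The loss structure above can be sketched in plain Python (assuming the formulation from the Deep SAD paper: the labeled term raises the distance to the power of the label, so labeled anomalies, y = -1, are pushed away from the center; `eta` is the balancing hyperparameter):

```python
def dist2(feat, center):
    return sum((f - c) ** 2 for f, c in zip(feat, center))

def deep_sad_loss(unlabeled, labeled, center, eta=1.0, eps=1e-6):
    """unlabeled: list of features; labeled: list of (feature, y) pairs with
    y = +1 for known-normal and y = -1 for known-anomalous samples."""
    u = sum(dist2(x, center) for x in unlabeled) / max(len(unlabeled), 1)
    # (distance)**y: y=+1 keeps the SVDD pull, y=-1 rewards large distances
    l = sum((dist2(x, center) + eps) ** y for x, y in labeled) / max(len(labeled), 1)
    return u + eta * l

center = [0.0, 0.0]
near_anomaly = deep_sad_loss([], [([0.1, 0.0], -1)], center)
far_anomaly = deep_sad_loss([], [([10.0, 0.0], -1)], center)
# the loss drops as a labeled anomaly moves away from the center
```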



A Two-Stage Autoencoder For Visual Anomaly Detection

  1. a single RotNet serves as the encoder (RotNet is a self-supervision approach that relies on predicting image rotations as the pretext task in order to learn image representations) (https://paperswithcode.com/method/rotnet)
  2. from the decoder, discriminative representations are obtained for training
  3. 2 parallel decoders for image reconstruction
  4. reconstruction errors from the 2 decoders are combined into the anomaly score
  • this paper is for images, not time series data
  • does not consider temporal structure
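The rotation pretext task from step 1 is easy to sketch: each training image is rotated by 0/90/180/270 degrees and the network must predict which rotation was applied (a minimal sketch with images as 2-D lists; real RotNet operates on image tensors):

```python
def rot90(img):
    """Rotate a 2-D list of pixels 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def rotation_pretext(img):
    """Return the 4 (rotated_image, rotation_label) training pairs."""
    pairs, cur = [], img
    for label in range(4):
        pairs.append((cur, label))
        cur = rot90(cur)
    return pairs

img = [[1, 2],
       [3, 4]]
pairs = rotation_pretext(img)
# pairs[1] holds the 90-degree rotation with label 1: ([[3, 1], [4, 2]], 1)
```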

Choosing Effective Projections for Fast and Accurate Anomaly Detection

  • ODD (Outlier Detection and Description workshop) 2021
  • https://oddworkshop.github.io/assets/papers/2.pdf
  • Chimera
  • consensus-based approach
  • selects the best subset of projections
  • requires a reliable uni-modal unsupervised statistical test, so S-Chimera is introduced to remove this requirement
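The notes don't detail Chimera's selection procedure, but the general idea of consensus over 1-D projections can be illustrated with a toy robust-z vote (a generic illustration only, not the paper's algorithm):

```python
import statistics

def consensus_score(point, data, projections, z_thresh=3.0):
    """Fraction of 1-D projections on which `point` is a robust-z outlier
    relative to `data` (median/MAD based)."""
    votes = 0
    for proj in projections:
        vals = [proj(x) for x in data]
        med = statistics.median(vals)
        mad = statistics.median([abs(v - med) for v in vals]) or 1e-9
        if abs(proj(point) - med) / (1.4826 * mad) > z_thresh:
            votes += 1
    return votes / len(projections)

data = [[0.0, 0.0], [1.0, 1.0], [-1.0, -1.0], [0.5, -0.5], [2.0, 0.0]]
axes = [lambda p: p[0], lambda p: p[1]]
# an extreme point is flagged on both axis projections, an inlier on neither
print(consensus_score([100.0, 100.0], data, axes))  # 1.0
print(consensus_score([0.0, 0.0], data, axes))      # 0.0
```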






Enhancing Unsupervised Anomaly Detection with Score-Guided Network

  • https://arxiv.org/abs/2109.04684
  • submitted to IEEE (Sept 2021)
  • Score-Guided AutoEncoder (SG-AE)
  • has official code
  • challenges of unsupervised learning:
  • this model has:
  • can learn richer representations when given sufficient information
  • not on time series data
  • maybe the score function can be extended to time series, but how? would calculating it at every time step be time-consuming?
  • does it work with CNNs?

 

Learning and Evaluating Representations for Deep One-class Classification

 

DOC3-Deep One Class Classification using Contradictions

  • https://arxiv.org/abs/2105.07636
  • learning from contradictions a.k.a. universum learning
  • success depends on tuning hyperparameters
  • 2 goals:
  • effectiveness also depends on the type of universum used
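The notes don't reproduce DOC3's loss, but universum learning in general (Vapnik's "learning with contradictions") penalizes universum samples that land far from the decision boundary, e.g. with an epsilon-insensitive term (a toy sketch of that general idea, not DOC3's exact objective):

```python
def universum_loss(scores, eps=0.1):
    """Epsilon-insensitive universum penalty: universum (contradiction)
    samples should score near the decision boundary (0); only deviations
    beyond eps are penalized."""
    return sum(max(0.0, abs(s) - eps) for s in scores) / max(len(scores), 1)

print(universum_loss([0.05, -0.08]))  # 0.0 -- inside the eps tube
print(universum_loss([1.1, -1.1]))    # positive -- confident scores are penalized
```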




Contrastive Predictive Coding

  • https://arxiv.org/abs/2107.07820
  • ICML 2021 Workshop
  • CPC
  • addresses the problem of lack of data
  • self-supervised representation learning setting
  • the patch-wise contrastive loss directly becomes the anomaly score
  • for images
  • patches within an image are contrasted against each other
  • can also create anomaly segmentation masks
  • 2 adjustments:
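The patch-wise contrastive loss is the standard InfoNCE objective: a predicted patch representation should match its true target (positive) rather than other patches from the image (negatives), and per the notes this loss itself serves as the anomaly score (a minimal sketch over plain vectors):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(anchor, positive, negatives, temperature=1.0):
    """InfoNCE / CPC loss for one anchor: cross-entropy of picking the
    positive among positive + negatives by similarity."""
    logits = [dot(anchor, positive) / temperature]
    logits += [dot(anchor, neg) / temperature for neg in negatives]
    m = max(logits)  # subtract the max for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[0]  # -log softmax probability of the positive

# a well-predicted patch (anchor matches positive) scores low, a mismatched
# patch scores high -> usable directly as an anomaly score
low = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
high = info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```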

Semi-Supervised Anomaly Detection Algorithm Using Probabilistic Labeling (SAD-PL)

DOCC: Deep one-class crop classification via positive and unlabeled learning for multi-modal satellite imagery

  • https://www.sciencedirect.com/science/article/pii/S0303243421003056
  • International Journal of Applied Earth Observation and Geoinformation 2021
  • multi-modal time series satellite images
  • deep one-class crop framework:
  • input: only samples of 1 target class
  • automatically extracts features, without labeling based on prior knowledge
  • beneficial for large-scale mapping when multi-class samples are difficult to obtain
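The notes don't give DOCC's training objective; a standard building block for positive-and-unlabeled learning is the non-negative PU risk estimator, which needs only positive-sample losses, unlabeled-sample losses, and a class-prior estimate (a generic PU-learning sketch, not necessarily DOCC's exact loss):

```python
def mean(xs):
    return sum(xs) / max(len(xs), 1)

def nn_pu_risk(pos_as_pos, pos_as_neg, unl_as_neg, prior):
    """Non-negative PU risk: prior-weighted positive risk plus a negative
    risk estimated from unlabeled data, clipped at zero to counter the
    overfitting a negative estimate would cause.
    pos_as_pos: losses of positive samples classified as positive
    pos_as_neg: losses of positive samples classified as negative
    unl_as_neg: losses of unlabeled samples classified as negative
    prior: assumed fraction of positives among the unlabeled data"""
    neg_risk = mean(unl_as_neg) - prior * mean(pos_as_neg)
    return prior * mean(pos_as_pos) + max(0.0, neg_risk)

risk = nn_pu_risk([0.2], [0.8], [0.5], 0.3)  # prior*0.2 + max(0, 0.5 - prior*0.8)
```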