Ndung'u 2023: Advances on the morphological classification of radio galaxies

https://doi.org/10.1016/j.newar.2023.101685

  1. SKA will generate datasets on the scale of exabytes. SKA-LOW data rate 10 TB/s, SKA-MID 19 TB/s.
  2. MeerKAT raw data rate 2.2 TB/s. MWA 300 GB/s, LOFAR 13 TB/s.
  3. EMU will map 70M radio sources, SKA 500M.
  4. Need for classification: science, serendipitous discovery (Ray 2016).
  5. Yatawatta 2021 gives a smart calibration package based on deep reinforcement learning.
  6. LOFAR sensitivity 100 $\mu$Jy, resolution $6''$.
  7. FR0s are ~5 times more numerous than FR Is and FR IIs combined. The gradient in Fig. 3 is nice.
  8. Is it right that WATs and NATs arise from both FR Is and FR IIs?
  9. IVOA (International Virtual Observatory Alliance) standards make data FAIR (findable, accessible, interoperable, reusable).
  10. Besides classification, need to focus on source extraction and anomaly detection; create AI alternatives to PyBDSF (PyBDSF sketch after this list).
  11. Table 2 does not include a dataset created from Sasmal 23.
  12. Most works rely on deep and shallow CNNs (ConvNets, loosely inspired by the human visual cortex); deeper networks (more layers) generally perform better (minimal CNN-with-dropout sketch after this list).
  13. Regularization techniques (e.g. dropout, which randomly deactivates neurons during training) are used to mitigate overfitting.
  14. Bowles 21 uses attention-gate layers to suppress irrelevant info.
  15. Tang 22 uses a multidomain, multibranch CNN that takes multiple inputs.
  16. Brand 23 aligned the principal component (major axis) of every galaxy image with a coordinate axis via PCA, as a rotation-standardization step (alignment sketch after this list).
  17. Sadeghi 21 extracts Zernike image moments as features for an SVM. Ntwaetsile 21 uses Haralick features to capture texture (Haralick sketch after this list).
  18. Tang 19 successfully transferred a FIRST-trained model to NVSS, but not the reverse (fine-tuning sketch after this list).
  19. Wang 21 uses an SKNet (selective-kernel) module for attention. Zhang 22 uses SE-Net (squeeze-and-excitation; SE-block sketch after this list).
  20. Tiramisu (Pino 21) works best for source extraction, detection, localization and then classification.
  21. Main drawback of AI is the lack of explainability.
  22. Another drawback is that we are reducing 4D data cubes (two spatial axes, frequency, polarization) into 2D images.
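
On item 10: a minimal source-extraction sketch with PyBDSF, the conventional (non-AI) baseline named above. The filename and threshold values are placeholders, and defaults may differ between PyBDSF versions.

```python
import bdsf

# Run PyBDSF source finding on a continuum image (filename and thresholds are placeholders).
img = bdsf.process_image(
    "mosaic.fits",
    thresh_isl=3.0,   # island threshold in sigma
    thresh_pix=5.0,   # peak threshold in sigma
)

# Export a source list that downstream ML classifiers could use for cutout positions.
img.write_catalog(outfile="mosaic_srl.fits", format="fits", catalog_type="srl", clobber=True)
```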
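
On items 12-13: a minimal PyTorch sketch of a shallow CNN with dropout regularization for two-class (e.g. FR I vs FR II) cutout classification. The layer sizes and the 128x128 single-channel input are assumptions, not the architecture of any specific paper.

```python
import torch
import torch.nn as nn

class SimpleRadioCNN(nn.Module):
    """Minimal CNN with dropout for two-class radio-morphology cutouts (hypothetical architecture)."""

    def __init__(self, n_classes=2, p_drop=0.5):
        super().__init__()
        # Convolutional feature extractor: two conv/pool stages.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Dense classifier head; dropout randomly zeroes activations during training (regularization).
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p_drop),
            nn.Linear(32 * 32 * 32, 64),   # assumes 128x128 single-channel input cutouts
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SimpleRadioCNN()
logits = model(torch.randn(4, 1, 128, 128))   # batch of 4 cutouts -> (4, 2) class scores
```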
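
On item 16: a sketch of PCA-based rotation standardization, aligning the intensity-weighted principal axis of a cutout with the image x-axis. This illustrates the idea only; Brand 23's exact procedure may differ.

```python
import numpy as np
from scipy import ndimage

def align_principal_axis(img):
    """Rotate a cutout so its intensity-weighted principal axis lies along the image x-axis."""
    ys, xs = np.indices(img.shape)
    w = np.clip(img, 0, None)                       # non-negative flux as pixel weights
    xbar = (w * xs).sum() / w.sum()
    ybar = (w * ys).sum() / w.sum()
    coords = np.vstack([xs.ravel() - xbar, ys.ravel() - ybar])
    cov = np.cov(coords, aweights=w.ravel())        # 2x2 intensity-weighted covariance
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]              # first principal component = major axis
    angle = np.degrees(np.arctan2(major[1], major[0]))
    # Rotate by the measured position angle; the sign may need flipping
    # depending on the array's y-axis orientation.
    return ndimage.rotate(img, angle, reshape=False, order=1)
```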
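
On item 17: a sketch of texture features using the Haralick implementation in mahotas. The quantization to 64 grey levels is an arbitrary choice here, not necessarily what Ntwaetsile 21 used.

```python
import numpy as np
import mahotas

def haralick_texture(img, n_levels=64):
    """Direction-averaged Haralick texture features from a grey-level co-occurrence matrix."""
    q = np.clip(img, 0, None)
    q = (q / (q.max() + 1e-12) * (n_levels - 1)).astype(np.uint8)   # quantize to integer grey levels
    feats = mahotas.features.haralick(q)    # shape (4, 13): 13 features for 4 directions
    return feats.mean(axis=0)               # direction-averaged 13-element descriptor
```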
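
On item 18: a hypothetical fine-tuning sketch for cross-survey transfer (FIRST to NVSS). It reuses the SimpleRadioCNN class from the CNN sketch above; the checkpoint path and learning rate are placeholders, and Tang 19's actual setup may differ.

```python
import torch

# Reuses the SimpleRadioCNN class from the CNN sketch above; the checkpoint path is a placeholder.
model = SimpleRadioCNN()
model.load_state_dict(torch.load("cnn_trained_on_first.pt"))   # weights learned on FIRST cutouts

for p in model.features.parameters():
    p.requires_grad = False                 # freeze the convolutional feature extractor

# Retrain only the dense head on NVSS cutouts.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
```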
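
On item 19: a minimal squeeze-and-excitation (SE) channel-attention block in PyTorch, the building block behind SE-Net. The reduction ratio of 16 is the commonly used default, not necessarily Zhang 22's choice.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (the building block of SE-Net)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))            # squeeze: global average pool per channel
        w = self.fc(w).view(b, c, 1, 1)   # excitation: per-channel weights in (0, 1)
        return x * w                      # reweight feature maps by learned channel importance

out = SEBlock(channels=32)(torch.randn(2, 32, 16, 16))   # output has the same shape as the input
```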