
Hi!

I am an applied machine learning researcher at NASA Ames, specializing in the application of machine learning to astronomy and astrophysics.

My research primarily focuses on enhancing the scientific output of exoplanet survey missions such as Kepler and TESS. I develop machine learning models that analyze large, high-quality datasets of transit signals, aiming to accelerate the vetting of planet candidates and the validation of exoplanets. I am also interested in uncertainty quantification, the characterization of deep learning models, and model explainability (XAI).

From more general topics...
  • Uncertainty quantification and characterization of deep learning models.
  • Model explainability (XAI).

...to more concrete applications:
  • Denoising mid-infrared images from the James Webb Space Telescope (JWST) by developing algorithms for cosmic ray detection, which facilitate the creation of high-quality data products from NASA's flagship observatory (a generic sketch of one such approach follows this list).
  • Performing transit detection for TESS and Kepler using machine learning methods.
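
Cosmic-ray hits appear as sharp, single-exposure outliers, so one common family of detection methods is robust sigma clipping across a stack of aligned exposures of the same field. The sketch below is a minimal, generic illustration of that idea, not the algorithm used for the JWST data products; the function name and threshold are assumptions.

```python
import numpy as np

def flag_cosmic_rays(stack: np.ndarray, nsigma: float = 5.0) -> np.ndarray:
    """Flag cosmic-ray hits in a stack of aligned exposures of the same field.

    stack: array of shape (n_exposures, height, width).
    Returns a boolean mask of the same shape; True marks suspected hits.
    Uses a robust per-pixel baseline (median) and scale (MAD), so a single
    exposure's transient spike stands out against the repeated sky signal.
    """
    median = np.median(stack, axis=0)                # per-pixel baseline
    mad = np.median(np.abs(stack - median), axis=0)  # robust per-pixel scatter
    sigma = 1.4826 * mad + 1e-12                     # MAD -> stddev, avoid /0
    return (stack - median) / sigma > nsigma         # flag positive outliers only

# Example: 8 synthetic exposures with one injected hit.
rng = np.random.default_rng(0)
stack = rng.normal(100.0, 2.0, size=(8, 64, 64))
stack[3, 10, 20] += 500.0                            # a cosmic-ray spike
mask = flag_cosmic_rays(stack)
print(mask[3, 10, 20], mask.sum())                   # True; typically just the injected pixel
```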

Research

    This list gives an overview of my current and previous research. For a comprehensive list of my published work, see my Google Scholar page or find me on NASA ADS.

    Artist's depiction of Kepler. Credit: NASA/ESA/CSA/STScI.

    Finding Exoplanets in Kepler

    I am the main developer of ExoMiner (check the NASA GitHub repository!), a deep learning method that sifted through Kepler data to validate 301 new exoplanets. Valizadegan & Martinho et al. (2022)

    ExoMiner is a convolutional neural network designed with several branches to process multi-modal data. Each branch mimics a type of diagnostic test that subject matter experts (SMEs) conduct to reject transit signals as false positives. We statistically validated an additional 69 Kepler planet candidates by combining ExoMiner with multiplicity boost information in a logistic regression framework. Valizadegan & Martinho et al. (2023)
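
To make the branch idea concrete, here is a minimal PyTorch sketch of a multi-branch classifier for multi-modal transit data. This is an illustration, not the actual ExoMiner code: the modalities, input lengths, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """1D convolutional branch for one input modality (e.g., a flux time series)."""
    def __init__(self, in_channels: int, out_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
            nn.Flatten(),
            nn.Linear(32, out_features), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MultiBranchClassifier(nn.Module):
    """Each branch loosely mirrors one diagnostic view of a transit signal;
    their embeddings are concatenated and classified jointly."""
    def __init__(self):
        super().__init__()
        # Hypothetical modalities: global/local flux views and a centroid series.
        self.global_flux = ConvBranch(1, 32)
        self.local_flux = ConvBranch(1, 32)
        self.centroid = ConvBranch(1, 32)
        self.scalar_fc = nn.Sequential(nn.Linear(6, 16), nn.ReLU())  # stellar/transit parameters
        self.head = nn.Sequential(
            nn.Linear(32 * 3 + 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit: planet vs. false positive
        )

    def forward(self, g, l, c, scalars):
        z = torch.cat([self.global_flux(g), self.local_flux(l),
                       self.centroid(c), self.scalar_fc(scalars)], dim=1)
        return self.head(z)

model = MultiBranchClassifier()
score = torch.sigmoid(model(torch.randn(4, 1, 2001), torch.randn(4, 1, 201),
                            torch.randn(4, 1, 2001), torch.randn(4, 6)))
```

Each branch compresses one view of the signal into an embedding, and the head classifies the concatenated embeddings, so the network weighs the different diagnostic views jointly.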

    Other studies using Kepler data include:
  • Probing ExoMiner for Completeness and Effectiveness on Kepler Data | Presentation
  • Detecting Label Noise in Kepler Confirmed Planet Catalog using Machine Learning | Presentation

    Artist's depiction of TESS. Credit: NASA/MIT.

    Vetting Transit Signals from the TESS Mission

    We adapted ExoMiner to TESS data to search through the hundreds of thousands of transit signals generated by the TESS Primary Mission and its first and second Extended Missions.

    For each sector run, and for both 2-min cadence and full-frame image data, we generated vetting catalogs of Threshold Crossing Events (TCEs) and Community TESS Objects of Interest (CTOIs). These catalogs aim to provide the community with a smaller, curated set of interesting transit signals, making manual vetting more targeted and less time-consuming.
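
As a hypothetical example of how such a catalog can make vetting more targeted, one might shortlist signals by model score. The column names and values below are invented for illustration; they are not the actual catalog schema.

```python
import pandas as pd

# Hypothetical slice of a per-sector vetting catalog; the columns
# ("tce_id", "period_days", "exominer_score") are assumptions.
catalog = pd.DataFrame({
    "tce_id": ["TIC 123-01", "TIC 456-01", "TIC 789-02"],
    "period_days": [3.21, 12.80, 0.94],
    "exominer_score": [0.97, 0.42, 0.88],
})

# Keep high-scoring signals so manual vetting targets the most promising ones.
shortlist = catalog[catalog["exominer_score"] > 0.8]
print(shortlist.sort_values("exominer_score", ascending=False))
```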

    Other studies using TESS data include:
  • Deep Learning Vetting of TESS FFI Data [Ongoing Work - Soon!] | Presentation

    Example of a vascular image from the DRIVE dataset.

    Vascular Tissue Segmentation for VESGEN

    I developed convolutional models such as U-Nets to segment vascular networks and trees for VESGEN.

    VESGEN is a NASA software tool created to map vascular patterns in tissues such as the human retina and to quantify processes such as angiogenesis. Its goal is to enable a better understanding of terrestrial diseases and of the effects of microgravity and radiation exposure on astronauts and other organisms in space environments.

    This project started as a proof of concept: automatically binarizing vascular patterning in the human retina frees researchers and clinicians from the time-consuming task (3-20 hours) of producing a manual segmentation map. Using a public dataset of human retina images (DRIVE), we extended our private dataset of vascular images from patients with different degrees of diabetic retinopathy to train and evaluate our models. We also explored an additional dataset of images from astronauts taken before and after their missions to the ISS. Lagatuz et al. (2021)
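
For reference, here is a minimal PyTorch sketch of a small U-Net for binary vessel segmentation. It is an illustration under assumed settings (two encoder levels, small channel widths, 256x256 RGB inputs), not the configuration used in this project.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Two-level U-Net: the encoder downsamples, the decoder upsamples, and
    skip connections carry fine spatial detail to the decoder."""
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(3, 16)     # RGB retina image in
        self.enc2 = double_conv(16, 32)
        self.bottleneck = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = double_conv(64, 32)    # 32 skip channels + 32 upsampled
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)
        self.out = nn.Conv2d(16, 1, 1)     # one logit per pixel: vessel vs. background

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# Vessel probability map for a batch of 256x256 crops.
probs = torch.sigmoid(SmallUNet()(torch.randn(2, 3, 256, 256)))
```

The skip connections are what make U-Nets well suited to this task: they pass fine spatial detail, such as thin vessels, directly from encoder to decoder instead of forcing it through the low-resolution bottleneck.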