Make Your Own Reduction Script (photometry)#

Work in Progress

ASTROPOP is not a reduction script by itself, but a library containing (almost) everything you need to create your own reduction script.

In this guide we will follow a standard reduction procedure using the ASTROPOP modules; you can follow it to perform your own reduction.

If you want to use this notebook directly in Jupyter, you can download it from this link.

Organize Your Data#

The first step is to organize your data. Do not forget to back up all the raw data before starting your reduction!

Here we will perform a reduction using just bias and flatfield corrections, in a single band. No dark frame subtraction will be used, so we do not need to set one up.

First, we need to create the lists containing the image names. You can do this manually, with the glob package, or with the built-in file organizer of ASTROPOP. This last option allows filtering by header keywords, but it may be slow and memory consuming for very large sets (with thousands of images). For didactic reasons, we will do it with the ASTROPOP file organizer.
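If you prefer the glob route, a minimal name-based sketch might look like the following. Note that the `bias*`/`flat*` file-name patterns are an assumption for illustration; adapt them to your own naming convention.

```python
import glob
import os

def list_fits(directory, pattern='*.fits'):
    """Return a sorted list of FITS file paths matching `pattern`."""
    return sorted(glob.glob(os.path.join(directory, pattern)))

# hypothetical usage, assuming file names encode the frame type:
# all_files = list_fits(raw_dir)
# bias_list = list_fits(raw_dir, 'bias*.fits')
# flat_list = list_fits(raw_dir, 'flat*.fits')
```

Unlike the ASTROPOP file organizer, this only matches file names, so it cannot filter by header keywords.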

It is convenient to set some directory names now for raw, reduced, and temporary files, as in the code below.

[ ]:
import os
import numpy as np
from astropy import units as u

# ASTROPOP modules used in this tutorial
from astropop.image import imcombine, processing, imarith
from astropop.framedata import read_framedata
from astropop.image.register import register_framedata_list
from astropop.photometry import background, starfind
from astropop.logger import logger
logger.setLevel('INFO')

# Directories where we will store files
base_dir = os.path.expanduser('~/astropop-tutorial/photometry-example')
raw_dir = os.path.join(base_dir, 'raw')  # Directory containing the raw data
reduced_dir = os.path.join(base_dir, 'reduced')  # Directory to store processed data
tmp_dir = os.path.join(base_dir, 'tmp')  # Directory to store temporary files

# create the directories if they do not exist
os.makedirs(raw_dir, exist_ok=True)
os.makedirs(reduced_dir, exist_ok=True)
os.makedirs(tmp_dir, exist_ok=True)

from matplotlib import pyplot

In this tutorial, we will use a set of images of the star HD5980 taken at the Observatório do Pico dos Dias (OPD/LNA) and available in the sparc4-dev/astropop-data GitHub repository.

To download the data into the raw folder, use the script below.

[ ]:
# Downloading the data. If you already downloaded it, you can skip this cell
import urllib.request
import os

with urllib.request.urlopen('https://raw.githubusercontent.com/sparc4-dev/astropop-data/main/raw_images/opd_ixon_bc_hd5980/filelist.txt') as f:
    for line in f.readlines():
        url = line.decode('UTF-8')
        filename = os.path.join(raw_dir, url.split('/')[-1].split('?')[0])
        urllib.request.urlretrieve(url, filename)

With astropop.file_collection.FitsFileGroup we will create a dynamic library of FITS files. This library allows iterating over its files, filtering by header keywords, accessing file names, or directly accessing a given index. In this case, we will access only the file names via the FitsFileGroup.files property.

For the filtered method, the argument is a dictionary containing all the header keywords you want to filter by. Only equality is allowed. In this example, we will filter our files by the obstype key, which can have the values BIAS, FLAT, and SCIENCE. For science frames, if you have multiple stars, you can use a second key, object, to filter the desired images. This is the standard of our observations, but your observatory may use a different convention.

[ ]:
from astropop.file_collection import FitsFileGroup

main_fg = FitsFileGroup(location=raw_dir, fits_ext=['.fits'], ext=0)

print(f'Total files: {len(main_fg)}')

# Filter files by header keywords
bias_fg = main_fg.filtered({'obstype': 'BIAS'})  # OBSTYPE='BIAS'
print(f'Bias files: {len(bias_fg)}')
print(bias_fg.files)
flat_fg = main_fg.filtered({'obstype': 'FLAT'})  # OBSTYPE='FLAT'
print(f'Flat files: {len(flat_fg)}')
print(flat_fg.files)
star_fg = main_fg.filtered({'obstype': 'SCIENCE', 'object': 'HD5980'})  # OBSTYPE='SCIENCE' and OBJECT='HD5980'
print(f'Star files: {len(star_fg)}')
print(star_fg.files)

Master Calibration Frames#

Before processing the science images, we need to create the master_bias and master_flat calibration frames.

Master Bias#

Bias files will be gain corrected and cleaned of cosmic rays, and then median combined to generate the master_bias file.

The first step is to read the FITS files into FrameData instances. The use_memmap_backend feature will be used to save RAM. FitsFileGroup already provides a method to read and iterate over the created FrameData instances.

[ ]:
# if you already have the master bias and the master flat, you can skip the next two cells
bias_frames = list(bias_fg.framedata(unit='adu', use_memmap_backend=True))

# let's extract the gain from the first image
gain = float(bias_frames[0].header['GAIN'])*u.electron/u.adu  # using quantities is better for safety
print('gain:', gain)

# Perform calibrations
for i, frame in enumerate(bias_frames):
    print(f'processing frame {i+1} of {len(bias_frames)}')
    processing.cosmics_lacosmic(frame, inplace=True)
    processing.gain_correct(frame, gain, inplace=True)

# combine
master_bias = imcombine(bias_frames, method='median')

# save to disk
mbias_name = os.path.join(reduced_dir, 'master_bias.fits')
master_bias.meta['astropop master_type'] = 'bias'  # set metadata before writing
master_bias.write(mbias_name, overwrite=True, no_fits_standard_units=True)
print('master_bias statistics', master_bias.statistics())

del bias_frames  # remove tmp frames from memory and disk

Master Flat#

In addition to the cosmic-ray/gain corrections, flat frames must have the master bias subtracted and must be normalized after combination.

[ ]:
flat_frames = list(flat_fg.framedata(unit='adu', use_memmap_backend=True))

for i, frame in enumerate(flat_frames):
    print(f'processing frame {i+1} of {len(flat_frames)}')
    processing.cosmics_lacosmic(frame, inplace=True)
    processing.gain_correct(frame, gain, inplace=True)
    processing.subtract_bias(frame, master_bias, inplace=True)

# combine and normalize
master_flat = imcombine(flat_frames, method='median')
mean_value = master_flat.mean()
print('flat mean value:', mean_value)
master_flat = imarith(master_flat, mean_value, '/')
master_flat.meta['astropop master_type'] = 'flat'

# save to disk
mflat_name = os.path.join(reduced_dir, 'master_flat.fits')
master_flat.write(mflat_name, overwrite=True)
print('master_flat statistics', master_flat.statistics())

del flat_frames  # remove tmp frames from memory and disk

Basic CCD Image Processing#

Now, with our master images created, it's time to perform the processing of our science images.

The steps are:

- cosmic ray removal
- gain correction
- master_bias subtraction
- division by the normalized flat

We already have the master images loaded into memory. But, assuming you had the master frames already created, let's load them from disk using the read_framedata function.

[ ]:
# list of framedata of our science frames
star_frames = list(star_fg.framedata(unit='adu', use_memmap_backend=True))

# needed parameters for calibration
gain = float(star_frames[0].header['GAIN'])*u.electron/u.adu  # using quantities is better for safety
print('gain:', gain)
mbias_name = os.path.join(reduced_dir, 'master_bias.fits')
mflat_name = os.path.join(reduced_dir, 'master_flat.fits')
master_bias = read_framedata(mbias_name)
print('bias: ', master_bias.statistics())
master_flat = read_framedata(mflat_name)
print('flat:', master_flat.statistics())
[ ]:
# process the science frames and save the individual calibrated files to disk
for i, frame in enumerate(star_frames):
    print(f'processing frame {i+1} of {len(star_frames)}')
    processing.cosmics_lacosmic(frame, inplace=True)
    processing.gain_correct(frame, gain, inplace=True)
    processing.subtract_bias(frame, master_bias, inplace=True)
    processing.flat_correct(frame, master_flat, inplace=True)

for frame, name in zip(star_frames, star_fg.files):
    basename = os.path.basename(name)
    frame.write(os.path.join(reduced_dir, basename), overwrite=True, no_fits_standard_units=True)

Align and Stack#

To get better photometry, it is good to have several images of the field, align them, and stack them to reduce the noise. This process of aligning images is called image registration.

ASTROPOP has two image registration algorithms:

- cross-correlation: computes the phase difference in Fourier space using the cross-correlation of the images. This method can register any pair of images, but it may be less precise and is sensitive to artifacts like bad rows/columns in the image.
- asterism-matching: uses groups of three detected stars in each image to estimate the transform between them by similarity. You can only apply this method when you have good stars in the field. It supports translation and rotation, and its errors depend on the errors of the centroid detection.
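To illustrate the idea behind the cross-correlation method, here is a minimal NumPy sketch of integer-pixel phase correlation. This is not ASTROPOP's implementation (which handles subpixel shifts and masking); it only shows why the Fourier phase difference encodes the shift between two images.

```python
import numpy as np

def phase_shift(img, ref):
    """Estimate the integer (dy, dx) shift that maps `ref` onto `img`
    using FFT phase correlation (a sketch of the cross-correlation idea)."""
    # normalized cross-power spectrum: its phase encodes the shift
    f = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # wrap shifts larger than half the image size to negative values
    ny, nx = img.shape
    dy = dy - ny if dy > ny // 2 else dy
    dx = dx - nx if dx > nx // 2 else dx
    return int(dy), int(dx)
```

The peak of the inverse transform of the normalized cross-power spectrum sits at the displacement between the two images, which is then applied to align them.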

As we are working with star field images, let's use asterism matching. ASTROPOP can perform registration directly on lists of FrameData containers!

[ ]:
registered_frames = register_framedata_list(star_frames, algorithm='asterism-matching', inplace=True)

To combine the images we will use ASTROPOP's imcombine function. In our case we will sum all the images, excluding masked pixels.

[ ]:
combined = imcombine(registered_frames, method='sum')
pyplot.figure()
pyplot.imshow(combined.data, vmin=combined.median().value, vmax=np.percentile(combined.data, 99.5), origin='lower')
pyplot.show()

Source Detection and Photometry#

We will perform a very standard aperture photometry reduction using common ASTROPOP default routines. Other routines are available, and reading the documentation is recommended for customizations.

To estimate the background, we will use the background function of ASTROPOP. As our image has a large nebula that covers a significant part of the field, we will use a space-varying background.
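Conceptually, a space-varying background assigns each region of the image its own local sky level instead of a single global value. The toy sketch below estimates the median in coarse boxes; ASTROPOP's background function implements a more careful (sigma-clipped, smoothed) version of this idea, so treat this only as an illustration.

```python
import numpy as np

def grid_background(data, box=64):
    """Toy space-varying background: use the median of each coarse
    box as the local background level (illustration only)."""
    bkg = np.empty(data.shape, dtype=float)
    ny, nx = data.shape
    for y0 in range(0, ny, box):
        for x0 in range(0, nx, box):
            cell = data[y0:y0 + box, x0:x0 + box]
            bkg[y0:y0 + box, x0:x0 + box] = np.median(cell)
    return bkg
```

Because the median is computed per box, large-scale structures like a nebula raise the local background only where they actually sit in the image.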

[ ]:
data = np.array(combined.data)
bkg, rms = background(data, global_bkg=False)
vmin=np.median(combined.data)
vmax=np.percentile(combined.data, 99.5)

fig, ax = pyplot.subplots(1, 3, sharex=True, sharey=True, figsize=(8, 4))
ax[0].imshow(combined.data, vmin=vmin, vmax=vmax, origin='lower')
ax[0].set_title('Data')
ax[1].imshow(bkg, vmin=vmin, vmax=vmax, origin='lower')
ax[1].set_title('Background')
ax[2].imshow(data-bkg, vmin=0, vmax=vmax, origin='lower')
ax[2].set_title('Data-Background')
fig.tight_layout()

Now, let's identify the sources in the image. For this we will use the starfind function, which identifies point sources only. It has an advantage over daofind because it operates blindly, without needing to measure the FWHM beforehand.

[ ]:
sources = starfind(data, threshold=10, background=bkg, noise=rms)
print('Total found stars:', len(sources))
pyplot.figure()
pyplot.imshow(data-bkg, vmin=0, vmax=np.percentile(combined.data, 95), origin='lower')
pyplot.plot(sources['x'], sources['y'], 'w+', ms=5)
pyplot.show()

Photometry Calibration with Online Catalogs#